Why use sklearn normalising functions rather than the raw method?

Question:

After looking at the source code for MinMaxScaler and MaxAbsScaler, I don't understand why I should use them when I can produce the same output with no overhead.

import sklearn.preprocessing
import numpy as np

x = np.arange(25).reshape(5, 5)

# MaxAbsScaler: divide each column by its maximum absolute value
sklearn.preprocessing.MaxAbsScaler().fit_transform(x)
x / np.max(np.abs(x), axis=0)

# MinMaxScaler: rescale each column to the [0, 1] range
sklearn.preprocessing.MinMaxScaler().fit_transform(x)
(x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

With pandas it's even easier, as the axis=0 argument is not needed: DataFrame reductions are column-wise by default.
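For instance, a minimal sketch of the same min-max scaling on a small DataFrame (the data here is just the same arange array as above, used for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(25).reshape(5, 5))

# DataFrame.min() / DataFrame.max() reduce column-wise by default,
# so no axis argument is needed
scaled = (df - df.min()) / (df.max() - df.min())
print(scaled.iloc[:, 0].tolist())  # → [0.0, 0.25, 0.5, 0.75, 1.0]
```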

Asked By: Plod


Answers:

You can of course use the raw method, but you lose readability as well as the methods and parameters that come with sklearn's transformers. Additionally, if you later want to try a different normalization, it is far easier to swap in another scaler than to rewrite the raw computation.
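One concrete benefit the raw one-liner misses: a fitted scaler remembers the training statistics, so the exact same transformation can be applied to new data and inverted later. A minimal sketch (the train/test arrays are made up for illustration):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.arange(25).reshape(5, 5).astype(float)
X_test = np.array([[2.0, 7.0, 12.0, 17.0, 22.0]])

scaler = MinMaxScaler()
scaler.fit(X_train)  # learn per-column min/max from training data only
X_test_scaled = scaler.transform(X_test)  # reuse those statistics on new data
X_back = scaler.inverse_transform(X_test_scaled)  # recover the original values
```

With the raw expression you would have to store x.min(axis=0) and x.max(axis=0) yourself to transform test data consistently; the scaler object does that bookkeeping for you (and plugs straight into a Pipeline).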

If you are going to share your code with other data scientists, using sklearn is also a kindness to them, since they will recognise the scalers immediately. Don't reinvent the wheel.
