How can I unnormalize MinMaxScaler?
Question:
I use MinMaxScaler for my data, but I want the predicted output unnormalized — that is, with the original values, not the normalized ones.
Can anyone help?
My code for normalizing the data:
from sklearn.preprocessing import MinMaxScaler

scaler_X = MinMaxScaler()
scaler_Y = MinMaxScaler()
# fit_transform on the training data:
X_train = scaler_X.fit_transform(train.values[:, 1:])
y_train = scaler_Y.fit_transform(train.values[:, :1])
# only transform on the test (unseen) data:
X_test = scaler_X.transform(test.values[:, 1:])
y_test = scaler_Y.transform(test.values[:, :1])
Answers:
Do you really need to normalize the target Y values as well? The best approach is probably to normalize only your X values and let the model learn to predict the unnormalized label from the normalized inputs. That way you don't have to do any conversion at prediction time. Of course, X_test has to be normalized as well.
Normalization helps because it addresses problems like exploding gradients and scale differences between features. Normalizing the target values is usually not necessary.
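A minimal sketch of this suggestion, using synthetic data and LinearRegression purely as illustrative placeholders: scale only the features, fit on the unnormalized target, and the predictions come out on the original scale with no inverse transform needed.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1000, size=(100, 3))   # features on a large scale
y_train = X_train @ np.array([1.0, 2.0, 3.0])   # target left unnormalized

scaler_X = MinMaxScaler()
X_train_scaled = scaler_X.fit_transform(X_train)

model = LinearRegression()
model.fit(X_train_scaled, y_train)

# Predictions are already on the original target scale --
# only the new inputs need to be scaled.
X_new = rng.uniform(0, 1000, size=(5, 3))
preds = model.predict(scaler_X.transform(X_new))
```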
You can use inverse_transform with the corresponding MinMaxScaler object:
scaler_Y.inverse_transform(data)
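A hedged sketch of that call in context — here `y_pred_scaled` stands in for your model's normalized predictions, and the fitted target range is made up for illustration. MinMaxScaler maps the fitted min to 0 and max to 1, so inverse_transform undoes exactly that affine map:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler_Y = MinMaxScaler()
y_train = np.array([[10.0], [20.0], [50.0]])  # illustrative target values
scaler_Y.fit(y_train)

y_pred_scaled = np.array([[0.0], [0.25], [1.0]])  # normalized predictions
y_pred = scaler_Y.inverse_transform(y_pred_scaled)
# 0.0 -> 10 (the fitted min), 1.0 -> 50 (the fitted max),
# 0.25 -> 10 + 0.25 * (50 - 10) = 20
```

Note that inverse_transform expects a 2-D array; if your predictions come out 1-D, reshape them first with `y_pred_scaled.reshape(-1, 1)`.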
All the scalers in sklearn.preprocessing have an inverse_transform method designed just for that.
For example, to scale and un-scale your DataFrame with MinMaxScaler you could do:
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
scaled = scaler.fit_transform(df)
unscaled = scaler.inverse_transform(scaled)
Just note that transform (and fit_transform as well) return a numpy.ndarray, not a pandas.DataFrame.
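If you want a DataFrame back, one common pattern (sketched here with a made-up DataFrame) is to rewrap the returned array with the original index and columns:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [10.0, 20.0, 30.0]})

scaler = MinMaxScaler()
# transform returns a NumPy array; rewrap to keep labels
scaled = pd.DataFrame(scaler.fit_transform(df),
                      index=df.index, columns=df.columns)
unscaled = pd.DataFrame(scaler.inverse_transform(scaled),
                        index=df.index, columns=df.columns)
```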
Good luck