How to use sklearn fit_transform with pandas and return dataframe instead of numpy array?

Question:

I want to apply scaling (using StandardScaler() from sklearn.preprocessing) to a pandas dataframe. The following code returns a numpy array, so I lose all the column names and indices. This is not what I want.

features = df[["col1", "col2", "col3", "col4"]]
autoscaler = StandardScaler()
features = autoscaler.fit_transform(features)

A “solution” I found online is:

features = features.apply(lambda x: autoscaler.fit_transform(x))

It appears to work, but it leads to a DeprecationWarning:

/usr/lib/python3.5/site-packages/sklearn/preprocessing/data.py:583:
DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17
and will raise ValueError in 0.19. Reshape your data either using
X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1)
if it contains a single sample.

I therefore tried:

features = features.apply(lambda x: autoscaler.fit_transform(x.reshape(-1, 1)))

But this gives:

Traceback (most recent call last):
  File "./analyse.py", line 91, in <module>
    features = features.apply(lambda x: autoscaler.fit_transform(x.reshape(-1, 1)))
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 3972, in apply
    return self._apply_standard(f, axis, reduce=reduce)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 4081, in _apply_standard
    result = self._constructor(data=results, index=index)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 226, in __init__
    mgr = self._init_dict(data, index, columns, dtype=dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 363, in _init_dict
    dtype=dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 5163, in _arrays_to_mgr
    arrays = _homogenize(arrays, index, dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 5477, in _homogenize
    raise_cast_failure=False)
  File "/usr/lib/python3.5/site-packages/pandas/core/series.py", line 2885, in _sanitize_array
    raise Exception('Data must be 1-dimensional')
Exception: Data must be 1-dimensional

How do I apply scaling to the pandas dataframe while leaving the dataframe intact? Without copying the data, if possible.

Asked By: Louic


Answers:

You could convert the DataFrame to a numpy array using as_matrix(). Example on a random dataset:

Edit:
Changed as_matrix() to values (it doesn't change the result), per the last sentence of the as_matrix() docs:

Generally, it is recommended to use '.values'.

import pandas as pd
import numpy as np #for the random integer example
df = pd.DataFrame(np.random.randint(0, 100, size=(10, 4)),
              index=range(10,20),
              columns=['col1','col2','col3','col4'],
              dtype='float64')

Note, indices are 10-19:

In [14]: df.head(3)
Out[14]:
    col1  col2  col3  col4
10   3.0  38.0  86.0  65.0
11  98.0   3.0  66.0  68.0
12  88.0  46.0  35.0  68.0

Now fit_transform the DataFrame to get the scaled_features array:

from sklearn.preprocessing import StandardScaler
scaled_features = StandardScaler().fit_transform(df.values)

In [15]: scaled_features[:3,:] #lost the indices
Out[15]:
array([[-1.89007341,  0.05636005,  1.74514417,  0.46669562],
       [ 1.26558518, -1.35264122,  0.82178747,  0.59282958],
       [ 0.93341059,  0.37841748, -0.60941542,  0.59282958]])

Assign the scaled data to a DataFrame (note: use the index and columns keyword arguments to keep your original indices and column names):

scaled_features_df = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)

In [17]:  scaled_features_df.head(3)
Out[17]:
        col1      col2      col3      col4
10 -1.890073  0.056360  1.745144  0.466696
11  1.265585 -1.352641  0.821787  0.592830
12  0.933411  0.378417 -0.609415  0.592830

Edit 2:

Came across the sklearn-pandas package. It's focused on making scikit-learn easier to use with pandas. sklearn-pandas is especially useful when you need to apply more than one type of transformation to column subsets of the DataFrame, which is a more common scenario. It's documented, but this is how you'd achieve the transformation we just performed.

from sklearn_pandas import DataFrameMapper

mapper = DataFrameMapper([(df.columns, StandardScaler())])
scaled_features = mapper.fit_transform(df.copy())  # copy() leaves the original df untouched
scaled_features_df = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)
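
Recent versions of sklearn-pandas can also build the DataFrame for you through the df_out flag, which makes the manual pd.DataFrame wrapping optional. A minimal sketch (assuming a recent sklearn-pandas version; depending on the version, you may still need to restore the index yourself):

from sklearn_pandas import DataFrameMapper
from sklearn.preprocessing import StandardScaler

# one ([column], transformer) pair per column keeps the original column names
mapper = DataFrameMapper(
    [([c], StandardScaler()) for c in df.columns],
    df_out=True,  # return a DataFrame instead of a numpy array
)
scaled_features_df = mapper.fit_transform(df.copy())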
Answered By: Kevin
import pandas as pd    
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('your file here')
ss = StandardScaler()
df_scaled = pd.DataFrame(ss.fit_transform(df), index=df.index, columns=df.columns)

df_scaled will be the 'same' dataframe, only now with the scaled values.

Answered By: Joe

You can try this code; it will give you a DataFrame with the indexes preserved:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_boston  # boston housing dataset (removed in scikit-learn 1.2)

dt = load_boston().data
col = load_boston().feature_names

# Make a dataframe
df = pd.DataFrame(data=dt, columns=col)

# define a method to scale data, looping through the columns and passing a scaler
def scale_data(data, columns, scaler):
    for col in columns:
        data[col] = scaler.fit_transform(data[col].values.reshape(-1, 1))
    return data

# specify a scaler, and call the method on boston data
scaler = StandardScaler()
df_scaled = scale_data(df, col, scaler)

# view first 10 rows of the scaled dataframe
df_scaled[0:10]
Answered By: Hassan K

You can mix multiple data types in scikit-learn using Neuraxle:

Option 1: discard the row names and column names

from sklearn.preprocessing import StandardScaler
from neuraxle.pipeline import Pipeline
from neuraxle.base import NonFittableMixin, BaseStep

class PandasToNumpy(NonFittableMixin, BaseStep):
    def transform(self, data_inputs, expected_outputs): 
        return data_inputs.values

pipeline = Pipeline([
    PandasToNumpy(),
    StandardScaler(),
])

Then, you proceed as you intended:

features = df[["col1", "col2", "col3", "col4"]]  # ... your df data
pipeline, scaled_features = pipeline.fit_transform(features)

Option 2: keep the original column names and row names

You could even do this with a wrapper, as follows:

import pandas as pd

from neuraxle.pipeline import Pipeline
from neuraxle.base import MetaStepMixin, BaseStep

class PandasValuesChangerOf(MetaStepMixin, BaseStep):
    def transform(self, data_inputs, expected_outputs): 
        new_data_inputs = self.wrapped.transform(data_inputs.values)
        new_data_inputs = self._merge(data_inputs, new_data_inputs)
        return new_data_inputs

    def fit_transform(self, data_inputs, expected_outputs): 
        self.wrapped, new_data_inputs = self.wrapped.fit_transform(data_inputs.values)
        new_data_inputs = self._merge(data_inputs, new_data_inputs)
        return self, new_data_inputs

    def _merge(self, data_inputs, new_data_inputs): 
        new_data_inputs = pd.DataFrame(
            new_data_inputs,
            index=data_inputs.index,
            columns=data_inputs.columns
        )
        return new_data_inputs

df_scaler = PandasValuesChangerOf(StandardScaler())

Then, you proceed as you intended:

features = df[["col1", "col2", "col3", "col4"]]  # ... your df data
df_scaler, scaled_features = df_scaler.fit_transform(features)
Answered By: Guillaume Chevalier
features = ["col1", "col2", "col3", "col4"]
autoscaler = StandardScaler()
df[features] = autoscaler.fit_transform(df[features])
Answered By: zzHQzz

You can assign a numpy array directly back into the DataFrame by using slicing.

from sklearn.preprocessing import StandardScaler
features = df[["col1", "col2", "col3", "col4"]]
autoscaler = StandardScaler()
features[:] = autoscaler.fit_transform(features.values)  # in-place assignment keeps the index and column names
Answered By: abysslover

This is what I did:

X.Column1 = StandardScaler().fit_transform(X.Column1.values.reshape(-1, 1))
Answered By: Fredrik

Reassigning back into df.values preserves both the index and the columns. Note that this relies on df.values being a view of the underlying data, which is only the case when all columns share a single numeric dtype.

df.values[:] = StandardScaler().fit_transform(df)
Answered By: Jim

This worked with MinMaxScaler for getting the array values back into the original dataframe. It should work with StandardScaler as well.

data_scaled = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)

where data_scaled is the new dataframe, scaled_features is the array after normalization, and df is the original dataframe whose index and columns we need back.
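
Put together, a minimal sketch (assuming df is an all-numeric dataframe; the example data here is made up):

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({"col1": [1.0, 2.0, 3.0], "col2": [4.0, 5.0, 6.0]})  # example data
scaled_features = MinMaxScaler().fit_transform(df)  # plain numpy array
data_scaled = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)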

Answered By: user15590289

Works for me:

from sklearn.preprocessing import StandardScaler

cols = list(train_df_x_num.columns)
scaler = StandardScaler()
train_df_x_num[cols] = scaler.fit_transform(train_df_x_num[cols])

Since scikit-learn version 1.2, estimators can return a DataFrame that keeps the column names.
The output format can be configured per estimator by calling the set_output method, or globally with set_config(transform_output="pandas").

See Release Highlights for scikit-learn 1.2 – Pandas output with set_output API

Example for set_output():

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().set_output(transform="pandas")

Example for set_config():

from sklearn import set_config
set_config(transform_output="pandas")
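
With either configuration in place, fit_transform on a DataFrame returns a DataFrame that keeps the original column names and index. A quick self-contained example (requires scikit-learn >= 1.2; the example data is made up):

import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({"col1": [1.0, 2.0, 3.0], "col2": [10.0, 20.0, 30.0]})
scaled = StandardScaler().set_output(transform="pandas").fit_transform(df)
print(type(scaled))          # <class 'pandas.core.frame.DataFrame'>
print(list(scaled.columns))  # ['col1', 'col2']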
Answered By: Jan
import pandas as pd
from sklearn.preprocessing import StandardScaler

class StandardScalerDF:

    def __init__(self, with_mean: bool = True, with_std: bool = True):
        self.with_mean = with_mean
        self.with_std = with_std
        
    def fit(self, data):
        self.scaler = StandardScaler(copy=True, with_mean=self.with_mean,
                                     with_std=self.with_std).fit(data)
        return self  # follow the sklearn convention of returning self from fit
    
    def transform(self, data):
        return pd.DataFrame(self.scaler.transform(data), columns=data.columns,
                            index=data.index)

#example: 
obj = StandardScalerDF()
obj.fit(data_as_df)
scaled_data_df = obj.transform(data_as_df) #data type: DataFrame
Answered By: Rick Sanchez C-666