Pandas : compute mean or std (standard deviation) over entire dataframe

Question:

Here is my problem: I have a dataframe like this:

    Depr_1  Depr_2  Depr_3
S3       0       5       9
S2       4      11       8
S1       6      11      12
S5       0       4      11
S4       4       8       8

and I just want to calculate the mean over the full dataframe, but the following doesn't do that (it returns one mean per column rather than a single value):

df.mean()

Then I came up with :

df.mean().mean()

But this trick won’t work for computing the standard deviation. My final attempts were :

df.get_values().mean()
df.get_values().std()

Except that in the latter case, the mean() and std() functions come from numpy. That's not a problem for the mean, but it is for the std: pandas uses ddof=1 by default, whereas numpy uses ddof=0.
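To make the ddof difference concrete, here is a small sketch using the question's dataframe, reconstructed from the table above. The two defaults disagree on any single column, and passing ddof explicitly reconciles them:

```python
import numpy as np
import pandas as pd

# The question's dataframe, rebuilt here for illustration.
df = pd.DataFrame(
    {"Depr_1": [0, 4, 6, 0, 4],
     "Depr_2": [5, 11, 11, 4, 8],
     "Depr_3": [9, 8, 12, 11, 8]},
    index=["S3", "S2", "S1", "S5", "S4"],
)

pandas_std = df["Depr_1"].std()        # sample std, ddof=1 by default
numpy_std = df["Depr_1"].values.std()  # population std, ddof=0 by default

# The defaults disagree; an explicit ddof makes them match.
assert not np.isclose(pandas_std, numpy_std)
assert np.isclose(pandas_std, df["Depr_1"].values.std(ddof=1))
```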

Asked By: jrjc


Answers:

You could convert the dataframe to be a single column with stack (this changes the shape from 5×3 to 15×1) and then take the standard deviation:

df.stack().std()         # pandas default degrees of freedom is one

Alternatively, you can use values to convert from a pandas dataframe to a numpy array before taking the standard deviation:

df.values.std(ddof=1)    # numpy default degrees of freedom is zero

Unlike pandas, numpy will give the standard deviation of the entire array by default, so there is no need to reshape before taking the standard deviation.
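As a quick sanity check (using the question's data, reconstructed here), both routes give the same pooled standard deviation once the degrees of freedom match:

```python
import numpy as np
import pandas as pd

# The question's dataframe, reconstructed for the check.
df = pd.DataFrame(
    {"Depr_1": [0, 4, 6, 0, 4],
     "Depr_2": [5, 11, 11, 4, 8],
     "Depr_3": [9, 8, 12, 11, 8]},
    index=["S3", "S2", "S1", "S5", "S4"],
)

pooled_pd = df.stack().std()       # pandas: reshape 5x3 -> 15x1, ddof=1 default
pooled_np = df.values.std(ddof=1)  # numpy: whole array, ddof=1 passed explicitly

assert np.isclose(pooled_pd, pooled_np)
```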

A couple of additional notes:

  • The numpy approach here is a bit faster than the pandas one, which is generally true when you have the option to accomplish the same thing with either numpy or pandas. The speed difference will depend on the size of your data, but numpy was roughly 10x faster when I tested a few different sized dataframes on my laptop (numpy version 1.15.4 and pandas version 0.23.4).

  • The numpy and pandas approaches here will not give exactly the same answers, but will be extremely close (identical at several digits of precision). The discrepancy is due to slight differences in implementation behind the scenes that affect how the floating point values get rounded.

Answered By: JohnE

If there are NaN values that are causing problems, and stack() is too slow for you, numpy has built-in functions that deal with them: prefix each standard function with nan.

np.nanmean(df.values)   # mean with NaN ignored
np.nanstd(df.values)    # stdev with NaN ignored
np.nanmedian(df.values) # median with NaN ignored

Another approach is to just filter NaN values out:

df.values[~np.isnan(df.values)].mean()     # mean
df.values[~np.isnan(df.values)].std()      # stdev
np.median(df.values[~np.isnan(df.values)]) # median
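A small sanity check (with made-up data) that the nan-aware functions and explicit filtering agree:

```python
import numpy as np
import pandas as pd

# Made-up frame with one missing value, just for illustration.
df = pd.DataFrame({"a": [1.0, 2.0, np.nan], "b": [4.0, 5.0, 6.0]})

vals = df.values
kept = vals[~np.isnan(vals)]  # drop the NaN entries

# Both approaches ignore the NaN and give the same results.
assert np.isclose(np.nanmean(vals), kept.mean())
assert np.isclose(np.nanstd(vals), kept.std())
assert np.isclose(np.nanmedian(vals), np.median(kept))
```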
Answered By: not a robot

It's very simple; we can do it as:

df1 = df.mean()
df2 = df.std()
df3 = pd.concat([df1, df2], axis=1)

It will take the mean and std of all the columns and then combine both into one dataframe. (pd.concat is used here because pd.merge raises on unnamed Series; also note that these are per-column statistics, not a single value over the whole dataframe.)

Answered By: Zubair Shah