Why does Pandas iterate over DataFrame columns by default?

Question:

Trying to understand the design rationale behind some of Pandas’ features.

If I have a DataFrame with 3560 rows and 18 columns, then

len(frame)

is 3560, but

len([a for a in frame])

is 18.

Maybe this feels natural to someone coming from R; to me it doesn’t feel very ‘Pythonic’. Is there an introduction to the underlying design rationales for Pandas somewhere?
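
For reference, the same behaviour can be reproduced with a tiny, made-up frame (column names here are just placeholders):

import pandas as pd

# Small stand-in for the real 3560 x 18 DataFrame
frame = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})

print(len(frame))                    # 3  -- number of rows
print([col for col in frame])        # ['a', 'b']  -- iteration yields column labels
print(len([col for col in frame]))   # 2  -- number of columns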

Asked By: trvrm


Answers:

There’s a decent explanation in the docs – iteration for Pandas DataFrames is meant to be "dict-like," so the iteration is over the keys (the columns).

Arguably it’s a little confusing that iteration for Series is over the values, but as the docs note, that’s because they are more "array-like".
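
A minimal sketch (made-up data) contrasting the two iteration behaviours:

import pandas as pd

df = pd.DataFrame({"x": [1, 2], "y": [3, 4]})
s = pd.Series([10, 20, 30], name="s")

# DataFrame iteration is dict-like: it yields the keys, i.e. the column labels
print(list(df))         # ['x', 'y']
print(list(df.keys()))  # ['x', 'y'] -- same thing, mirroring dict.keys()

# Series iteration is array-like: it yields the values
print(list(s))          # [10, 20, 30]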

Answered By: chrisb

A DataFrame is primarily a column-based data structure.
Under the hood, the data inside the DataFrame is stored in blocks; roughly speaking, there is one block per dtype.
Each column has a single dtype, so accessing a column amounts to selecting the appropriate column from a single block. Selecting a single row, in contrast, requires pulling the appropriate row out of every block, forming a new Series, and copying each block’s values into it.
Thus, iterating through rows of a DataFrame is (under the hood) not as natural a process as iterating through columns.
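
A small, hypothetical illustration of the consequence (the exact block layout is an internal detail, but the dtype of a row reveals the cross-block copy):

import pandas as pd

# Two dtypes, so (roughly) two internal blocks: one int block, one float block
df = pd.DataFrame({"i": [1, 2, 3], "f": [1.5, 2.5, 3.5]})

col = df["i"]      # one column: comes straight from the single int block
print(col.dtype)   # int64

row = df.iloc[0]   # one row: values gathered from *both* blocks into a new Series
print(row.dtype)   # float64 -- the int value was upcast to a common dtype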

If you need to iterate through the rows, you still can, by calling df.iterrows(). Avoid it where possible, though, for the same reason the operation is unnatural: each row has to be assembled and copied, which makes it much slower than iterating through columns.
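
A minimal usage sketch (made-up data; df.itertuples, not mentioned above, is usually the faster row-iteration option because it avoids building a Series per row):

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})

# iterrows yields (index, Series) pairs; each Series is built by copying row data
for idx, row in df.iterrows():
    print(idx, row["a"], row["b"])

# itertuples yields lightweight namedtuples instead
for tup in df.itertuples():
    print(tup.Index, tup.a, tup.b)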

Answered By: unutbu