PySpark DataFrames: when to use .select() vs. .withColumn()?

Question:

I’m new to PySpark and I see there are two ways to select columns, either with .select() or .withColumn().

From what I’ve heard, .withColumn() is worse for performance, but other than that I’m confused as to why there are two ways to do the same thing.

So when am I supposed to use ".select()" instead of ".withColumn()"?

I’ve googled this question but I haven’t found a clear explanation.

Asked By: JTD2021


Answers:

.withColumn() is not for selecting columns; instead, it returns a new DataFrame with a new or replaced column (docs).
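For illustration, a minimal sketch of both behaviors (the DataFrame and column names below are made up for the example, not taken from the question):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 'a'), (2, 'b')], ['id', 'label'])

# adds a new column derived from an existing one
with_new = df.withColumn('id_plus_one', F.col('id') + 1)

# reusing an existing column name replaces that column
with_replaced = df.withColumn('label', F.upper(F.col('label')))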

Answered By: Robert Kossendey

Using:

df.withColumn('new', func('old'))

where func is your spark processing code, is equivalent to:

df.select('*', func('old').alias('new'))  # '*' selects all existing columns

As you can see, withColumn() is very convenient to use (probably why it is available); however, as you noted, there are performance implications. See this post for details: Spark DAG differs with 'withColumn' vs 'select'
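For example, the following two expressions produce the same result DataFrame (the toy data here is assumed for the example):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1,), (2,)], ['old'])

# both yield the columns ['old', 'new']
via_with_column = df.withColumn('new', F.col('old') * 2)
via_select = df.select('*', (F.col('old') * 2).alias('new'))

via_with_column.show()
via_select.show()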

Answered By: bzu

@Robert Kossendey You can use a single select() in place of a chain of withColumn() calls, without suffering the performance implications of repeated withColumn. Likewise, there are cases where you may want or need to parameterize the columns created: you could set variables for windows, conditions, values, etc., and use them to build your select statement.
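As a sketch of that idea (the columns, window, and threshold below are assumed for illustration, not taken from the question):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [('a', 1, 10.0), ('a', 2, 20.0), ('b', 3, 5.0)],
    ['group', 'id', 'amount'],
)

# parameters that drive the select, instead of hard-coding each withColumn()
threshold = 15.0
win = Window.partitionBy('group')

# one select() instead of several chained withColumn() calls
result = df.select(
    '*',
    (F.col('amount') > threshold).alias('over_threshold'),
    F.sum('amount').over(win).alias('group_total'),
)
result.show()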

Answered By: David Finch