Sort in descending order in PySpark

Question:

I’m using PySpark (Python 2.7.9/Spark 1.3.1) and have a DataFrame GroupObject that I need to filter and sort in descending order. I’m trying to achieve this with the following piece of code:

group_by_dataframe.count().filter("`count` >= 10").sort('count', ascending=False)

But it throws the following error.

sort() got an unexpected keyword argument 'ascending'
Asked By: rclakmal


Answers:

In PySpark 1.3 the sort method doesn’t take an ascending parameter. You can use the desc method instead:

from pyspark.sql.functions import col

(group_by_dataframe
    .count()
    .filter("`count` >= 10")
    .sort(col("count").desc()))

or the desc function:

from pyspark.sql.functions import desc

(group_by_dataframe
    .count()
    .filter("`count` >= 10")
    .sort(desc("count")))

Both methods can be used with Spark >= 1.3 (including Spark 2.x).
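A minimal, self-contained sketch of both spellings (the toy data and SparkSession setup are invented for illustration; assumes a Spark 2.x-style SparkSession):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, desc

spark = SparkSession.builder.getOrCreate()

# Toy frame standing in for group_by_dataframe.count()
df = spark.createDataFrame([("a", 12), ("b", 7), ("c", 25)], ["key", "count"])

# Both produce the same descending order on `count`
df.filter("`count` >= 10").sort(col("count").desc()).show()
df.filter("`count` >= 10").sort(desc("count")).show()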

Answered By: zero323

Use orderBy:

df.orderBy('column_name', ascending=False)

Complete answer:

group_by_dataframe.count().filter("`count` >= 10").orderBy('count', ascending=False)
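When sorting on several columns, ascending also accepts a list with one flag per column. A minimal sketch (the DataFrame and column names are invented):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("x", 3, 1), ("y", 3, 2), ("z", 1, 5)], ["name", "a", "b"])

# Descending by `a`, then ascending by `b` to break ties
df.orderBy(["a", "b"], ascending=[False, True]).show()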

http://spark.apache.org/docs/2.0.0/api/python/pyspark.sql.html

Answered By: Henrique Florencio

You can also use groupBy and orderBy as follows:

from pyspark.sql.functions import desc

dataFrameWay = df.groupBy("firstName").count().withColumnRenamed("count", "distinct_name").sort(desc("distinct_name"))
Answered By: Narendra Maru

By far the most convenient way is this:

df.orderBy(df.column_name.desc())

It doesn’t require any special imports.
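A quick self-contained sketch of that pattern (the names are invented; assumes a Spark 2.x SparkSession):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, "a"), (3, "b"), (2, "c")], ["score", "label"])

# Attribute access returns a Column; .desc() flips the sort order,
# so nothing from pyspark.sql.functions is needed
df.orderBy(df.score.desc()).show()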

In PySpark 2.4.4:

1) group_by_dataframe.count().filter("`count` >= 10").orderBy('count', ascending=False)

2) from pyspark.sql.functions import desc
   group_by_dataframe.count().filter("`count` >= 10").sort(desc('count'))

Option 1) needs no import and is short and easy to read, so I prefer 1) over 2).

Answered By: Prabhath Kota

RDD.sortBy(keyfunc, ascending=True, numPartitions=None)

An example:

# rdd2 is assumed to be an existing RDD of text lines,
# e.g. rdd2 = sc.textFile("some_file.txt")
words = rdd2.flatMap(lambda line: line.split(" "))
counter = words.map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)

# Sort the (word, count) pairs by count in descending order
print(counter.sortBy(lambda a: a[1], ascending=False).take(10))
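For completeness, a self-contained variant that exercises the full signature, including numPartitions (assumes an active SparkContext sc; the data is invented):

pairs = sc.parallelize([("a", 3), ("b", 1), ("c", 7)])

# keyfunc selects the sort key; numPartitions sets the output partitioning
print(pairs.sortBy(lambda kv: kv[1], ascending=False, numPartitions=1).collect())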
Answered By: Aramis NSR

PySpark added a Pandas-style sort operator with the ascending keyword argument in version 1.4.0. You can now use:

df.sort('<col_name>', ascending=False)

Or you can use the orderBy method:

df.orderBy(df['<col_name>'].desc())
Answered By: Mr RK

You can use pyspark.sql.functions.desc instead.

from pyspark.sql.functions import desc

# g is assumed to be an existing DataFrame with a `dst` column
g.groupBy('dst').count().sort(desc('count')).show()
Answered By: Wria Mohammed