LEFT and RIGHT function in PySpark SQL

Question:

I am new to PySpark. I pulled a CSV file using pandas and created a temp table using the registerTempTable function.

from pyspark.sql import SQLContext
from pyspark.sql import Row
import pandas as pd
sqlc = SQLContext(sc)

aa1 = pd.read_csv("D:\mck1.csv")

aa2 = sqlc.createDataFrame(aa1)

aa2.show()

+--------+-------+----------+------------+---------+------------+-------------------+
|    City|     id|First_Name|Phone_Number| new_date|    new_code|           New_date|
+--------+-------+----------+------------+---------+------------+-------------------+
|KOLKATTA|9000007|       AAA|  1111119411| 20080714|          13|2016-08-16 00:00:00|
|KOLKATTA|9000007|       BBB|  1111119421| 20080714|          13|2016-08-06 00:00:00|
|KOLKATTA|9000007|       CCC|  1111119461| 20080714|          13|2016-08-13 00:00:00|
|KOLKATTA|9000007|       DDD|  1111119471| 20080714|          13|2016-08-27 00:00:00|
|KOLKATTA|9000007|       EEE|  1111119491| 20080714|          13|2016-08-15 00:00:00|
|KOLKATTA|9111147|       FFF|  1111119401| 20080714|          13|2016-08-24 00:00:00|
|KOLKATTA|9585458|   FORMULA|  1111110112| 19990930|          13|2016-08-16 00:00:00|
|KOLKATTA|9569878|   APPLEII|  1111110132| 19990930|          13|2016-08-06 00:00:00|
+--------+-------+----------+------------+---------+------------+-------------------+

aa3 = aa2.registerTempTable("mytable1")

sqlc.sql(""" select right(phone_number,4) from mytable1 """).show()

Now I try to pull the last four characters of the phone number using right(phone_number, 4) and am facing the following error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-18-07f08e3d0a8f> in <module>()
----> 1 sqlc.sql(""" select right(Phone_number,4) from mytable1 """).show()

C:\spark-1.4.1-bin-hadoop2.6\python\pyspark\sql\context.pyc in sql(self, sqlQuery)
    500         [Row(f1=1, f2=u'row1'), Row(f1=2, f2=u'row2'), Row(f1=3, f2=u'row3')]
    501         """
--> 502         return DataFrame(self._ssql_ctx.sql(sqlQuery), self)
    503 
    504     @since(1.0)

C:\spark-1.4.1-bin-hadoop2.6\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py in __call__(self, *args)
    536         answer = self.gateway_client.send_command(command)
    537         return_value = get_return_value(answer, self.gateway_client,
--> 538                 self.target_id, self.name)
    539 
    540         for temp_arg in temp_args:

C:\spark-1.4.1-bin-hadoop2.6\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
    298                 raise Py4JJavaError(
    299                     'An error occurred while calling {0}{1}{2}.\n'.
--> 300                     format(target_id, '.', name), value)
    301             else:
    302                 raise Py4JError(

Py4JJavaError: An error occurred while calling o55.sql.
: java.lang.RuntimeException: [1.9] failure: ``union'' expected but `right' found

 select right(Phone_number,4) from mytable1 
        ^
    at scala.sys.package$.error(package.scala:27)
    at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:36)
    at org.apache.spark.sql.catalyst.DefaultParserDialect.parse(ParserDialect.scala:67)
    at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:145)

Why does PySpark not support the RIGHT and LEFT functions?
How can I take the rightmost four characters of a column?

Asked By: Green


Answers:

Looking at the documentation, have you tried the substring function?

pyspark.sql.functions.substring(str, pos, len)

EDIT

Per your comment, you can get the last four characters like this:

from pyspark.sql.functions import substring

df = sqlContext.createDataFrame([('abcdefg',)], ['s',])
df.select(substring(df.s, -4, 4).alias('s')).collect()
Answered By: flyingmeatball

Instead of right, try with rpad:

sqlc.sql(""" select rpad(phone_number, 4, phone_number) from mytable1 """).show()
Answered By: Carlos Vilchez

I know this is an old question, but this can also be done using the "expr" function directly on the "aa2" PySpark dataframe:

from pyspark.sql.functions import expr

aa2.select(expr('RIGHT(phone_number, 4)')).show()

|right(phone_number, 4)|
|----------------------|
|                  9411|
|                  9421|
|                  9461|
|                  9471|
|                  9491|
|                  9401|
|                  0112|
|                  0132|

Answered By: gcollar

Adding onto the answer from FlyingMeatball: if you wanted to add the column to the dataframe instead of just outputting it, you could use

from pyspark.sql.functions import substring

df = sqlContext.createDataFrame([('abcdefg',)], ['s',])
df.withColumn('s', substring(df.s, -4, 4)).collect()

I found this useful for my personal needs so I wanted to share.

Answered By: Andrew Shade