Including null inside PySpark isin

Question:

This is my dataframe:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

dCols = ['c1', 'c2']
dData = [('a', 'b'),
         ('c', 'd'),
         ('e', None)]
df = spark.createDataFrame(dData, dCols)

Is there a syntax to include null inside .isin()?
Something like

df.withColumn(
    'newCol',
    F.when(F.col('c2').isin({'d', None}), 'true')  # <=====?
    .otherwise('false')
).show()

After executing the code I get

+---+----+------+
| c1|  c2|newCol|
+---+----+------+
|  a|   b| false|
|  c|   d|  true|
|  e|null| false|
+---+----+------+

instead of

+---+----+------+
| c1|  c2|newCol|
+---+----+------+
|  a|   b| false|
|  c|   d|  true|
|  e|null|  true|
+---+----+------+

I would like to find a solution where I do not need to reference the same column twice, as is currently required:

(F.col('c2') == 'd') | F.col('c2').isNull()
Asked By: ZygD


Answers:

NULL is not a value but represents the absence of a value, so you can't compare it to None or NULL: the comparison evaluates to NULL rather than true or false, and when() treats NULL as a non-match. You need to use isNull to check:

df.withColumn(
    'newCol',
    F.when(F.col('c2').isin({'d'}) | F.col('c2').isNull(), 'true')
    .otherwise('false')
).show()

#+---+----+------+
#| c1|  c2|newCol|
#+---+----+------+
#|  a|   b| false|
#|  c|   d|  true|
#|  e|null|  true|
#+---+----+------+
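For reference, a minimal sketch (using the df from the question; the alias names are just illustrative) that makes the three-valued logic visible:

df.select(
    F.col('c2'),
    (F.col('c2') == F.lit(None)).alias('eq_null'),   # NULL on every row: x = NULL is never true
    F.col('c2').isin('d', None).alias('isin_demo'),  # TRUE for 'd'; NULL (not FALSE) for the other rows
    F.col('c2').isNull().alias('is_null'),           # TRUE exactly where c2 is null
).show()

In the isin_demo column even 'b' yields NULL rather than FALSE, because FALSE OR NULL is NULL; when() then sends every NULL to the otherwise branch, which is why the question's output shows 'false' for the null row.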
Answered By: blackbishop

One reference to the column is not enough in this case; to check for nulls you need a separate isNull call.

Also, if you want a column of true/false, you can cast the result to boolean directly without using when:

import pyspark.sql.functions as F

df2 = df.withColumn(
    'newCol',
    (F.col('c2').isin(['d']) | F.col('c2').isNull()).cast('boolean')
)

df2.show()
+---+----+------+
| c1|  c2|newCol|
+---+----+------+
|  a|   b| false|
|  c|   d|  true|
|  e|null|  true|
+---+----+------+
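As a side note on the OP's wish to reference the column only once: a possible sketch (an alternative not part of this answer; df3 is just an illustrative name) uses coalesce to map the NULL that isin produces for null rows back to True:

# isin('d') evaluates to NULL (not FALSE) when c2 is null;
# coalesce replaces that NULL with True, so null rows count as matches.
df3 = df.withColumn('newCol', F.coalesce(F.col('c2').isin('d'), F.lit(True)))
df3.show()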
Answered By: mck

Try this: use the 'or' operation (|) to test for nulls:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

dCols = ['c1', 'c2']
dData = [('a', 'b'),
         ('c', 'd'),
         ('e', None)]
df = spark.createDataFrame(dData, dCols)

df.withColumn(
    'newCol',
    F.when(F.col('c2').isNull() | (F.col('c2') == 'd'), 'true')
    .otherwise('false')
).show()
Answered By: MEdwin

Would the following work? I realize it is a bit confusing, but I think using the null-safe equal operator eqNullSafe may solve the OP's concern of calling F.col('c2') more than once.

~F.col('c2').contains('d').eqNullSafe(False)

https://spark.apache.org/docs/3.1.1/api/python/reference/api/pyspark.sql.Column.eqNullSafe.html
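For reference, a sketch of how that expression evaluates row by row. One caveat (not raised in the answer itself): contains does substring matching, so a value like 'dd' would also match; (F.col('c2') == 'd') could be substituted for exact equality.

# contains('d'):      'b' -> FALSE,  'd' -> TRUE,    null -> NULL
# .eqNullSafe(False): FALSE -> TRUE, TRUE -> FALSE,  NULL -> FALSE
# ~ (negation):       TRUE -> FALSE, FALSE -> TRUE,  FALSE -> TRUE
df.withColumn('newCol', ~F.col('c2').contains('d').eqNullSafe(False)).show()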

Answered By: Titus Merriam