Data frames are read in with a varying number of columns — how do I dynamically change the data type of only the Boolean columns to String?

Question:

In my notebook, I have data frames being read in that will have a variable number of columns every time the notebook is run. How do I dynamically change the data types of only the columns that are Boolean to String?

This is a problem I faced, so I am posting the answer in case it helps someone else.

The name of the data frame is "df".

Here we dynamically convert every column in the incoming dataset that has a Boolean data type to a String data type:

from pyspark.sql import functions as F
from pyspark.sql.types import StringType

def bool_col_DataTypes(data_frame):
    """Accepts a Spark DataFrame and returns a list of the names of all Boolean columns."""
    dtypes = dict(data_frame.dtypes)
    return [name for name, dtype in dtypes.items() if dtype == 'boolean']


list_of_bool_columns = bool_col_DataTypes(df)

for i in list_of_bool_columns:
    df = df.withColumn(i, F.col(i).cast(StringType()))

new_df = df
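The filtering step itself needs no running Spark session to verify: `df.dtypes` in Spark is just a list of `(column_name, dtype_string)` pairs. A minimal pure-Python sketch of the same filter, using a mocked `dtypes` list (the column names here are hypothetical):

```python
# df.dtypes in Spark returns a list of (column_name, dtype_string) pairs.
# Mocked here so the filtering logic can be checked without a Spark session.
mock_dtypes = [('Answer', 'boolean'), ('Entity', 'string'),
               ('ID', 'int'), ('Active', 'boolean')]

def bool_columns(dtypes):
    """Return the names of all columns whose dtype string is 'boolean'."""
    return [name for name, dtype in dtypes if dtype == 'boolean']

print(bool_columns(mock_dtypes))  # ['Answer', 'Active']
```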
Asked By: JTD2021


Answers:

from pyspark.sql.types import StructType, StructField, BooleanType, StringType, IntegerType

data = [(True, 'Lion', 1),
        (False, 'fridge', 2),
        (True, 'Bat', 23)]

schema = StructType([StructField('Answer', BooleanType(), True),
                     StructField('Entity', StringType(), True),
                     StructField('ID', IntegerType(), True)])

df = spark.createDataFrame(data, schema)
df.printSchema()

Schema

root
 |-- Answer: boolean (nullable = true)
 |-- Entity: string (nullable = true)
 |-- ID: integer (nullable = true)

Transformation

from pyspark.sql.functions import col

df1 = df.select(*[col(x).cast('string').alias(x) if y == 'boolean' else col(x)
                  for x, y in df.dtypes])

df1.printSchema()

root
 |-- Answer: string (nullable = true)
 |-- Entity: string (nullable = true)
 |-- ID: integer (nullable = true)
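The comprehension above casts the Boolean columns and passes every other column through unchanged in a single `select`, rather than calling `withColumn` once per column. The same conditional mapping can be sketched in plain Python against a mocked `df.dtypes` list (column names are illustrative):

```python
# Mock of df.dtypes: a list of (column_name, dtype_string) pairs.
mock_dtypes = [('Answer', 'boolean'), ('Entity', 'string'), ('ID', 'int')]

# Build one SQL-style expression per column: cast booleans, pass others through.
exprs = [f"CAST({name} AS STRING) AS {name}" if dtype == 'boolean' else name
         for name, dtype in mock_dtypes]

print(exprs)  # ['CAST(Answer AS STRING) AS Answer', 'Entity', 'ID']
```

In Spark, a list of expression strings like this could be handed to `df.selectExpr(*exprs)` as an alternative to building `Column` objects with `col(...).cast(...)`.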
Answered By: wwnde