Writing a CSV with column names and reading a CSV file generated from a Spark SQL DataFrame in PySpark

Question:

I have started the shell with the Databricks CSV package:

#../spark-1.6.1-bin-hadoop2.6/bin/pyspark --packages com.databricks:spark-csv_2.11:1.3.0

Then I read a CSV file, did some groupby operations, and dumped the result to a CSV.

from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('path.csv')   # it has columns and df.columns works fine
type(df)   # <class 'pyspark.sql.dataframe.DataFrame'>
# now trying to dump a csv
df.write.format('com.databricks.spark.csv').save('path+my.csv')
# it creates a directory my.csv with 2 partitions
### To create a single file I followed the below line of code
# df.rdd.map(lambda x: ",".join(map(str, x))).coalesce(1).saveAsTextFile("path+file_satya.csv")  # this creates one partition in a directory of the csv name
# but in both cases there is no column information (How to add column names to that csv file???)
# again I am trying to read that csv by
df_new = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").load("the file i just created.csv")
# I am not getting the right columns in that: the 1st data row becomes the column names

Please don't answer with suggestions like adding a schema to the DataFrame after reading, or specifying the column names while reading.

Question 1: while dumping the CSV, is there any way I can add the column names to it?

Question 2: is there a way to create a single CSV file (not a directory again) which can be opened by MS Office or Notepad++?

Note: I am currently not using a cluster, as it is too complex for a Spark beginner like me. If anyone can provide a link on how to deal with to_csv into a single file in a clustered environment, that would be a great help.

Asked By: Satya


Answers:

Got the answer for the 1st question: it was a matter of passing one extra parameter, header='true', along with the CSV write statement:

df.write.format('com.databricks.spark.csv').save('path+my.csv',header = 'true')

Alternative for the 2nd question:

Use df.toPandas().to_csv(...). But again, I don't want to use pandas here, so please suggest if there is any other way around.
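For reference, a minimal sketch of the pandas-based route (the output path below is a placeholder; df is the DataFrame loaded in the question):

# Collect the whole DataFrame to the driver as a pandas DataFrame and write
# one ordinary CSV file with a header row. Only safe when the data fits in
# driver memory.
df.toPandas().to_csv('/tmp/file_satya_single.csv', index=False, header=True)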

Answered By: Satya

Try

df.coalesce(1).write.format('com.databricks.spark.csv').save('path+my.csv',header = 'true')

Note that this may not be an issue on your current setup, but with extremely large datasets you can run into memory problems on the driver. This will also take longer (in a cluster scenario), as everything has to be pushed back to a single location.
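One way to end up with a true single file (question 2) when writing to a local path is to let coalesce(1) produce one part file inside the output directory and then copy it out. A rough sketch, assuming local-mode Spark and placeholder paths:

import glob
import shutil

# Write everything into one partition; the result is still a directory.
df.coalesce(1).write.format('com.databricks.spark.csv') \
    .save('/tmp/my_output_dir', header='true')

# The directory now holds a single part-* file; copy it to a plain .csv
# that MS Office or Notepad++ can open directly.
part_file = glob.glob('/tmp/my_output_dir/part-*')[0]
shutil.copy(part_file, '/tmp/file_satya.csv')

On a cluster the part file ends up on HDFS or object storage rather than local disk, so a tool such as hadoop fs -getmerge would be needed instead.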

Answered By: Mike Metzger

With Spark >= 2.0, we can do something like:

df = spark.read.csv('path+filename.csv', sep=',', header='true')   # pass sep only if the file is not comma-separated
df.write.csv('path_filename of csv', header=True)           # yes, still written as a directory of partitions
df.toPandas().to_csv('path_filename of csv', index=False)   # single csv (pandas style)
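To confirm that the header actually round-trips (the original problem in the question), the directory written by df.write.csv(...) can be read back with header=True; the path below is the same placeholder used above:

# Read the directory back; with header=True the header row is used for the
# column names rather than being treated as data.
df_back = spark.read.csv('path_filename of csv', header=True, inferSchema=True)
df_back.printSchema()   # column names should now match df.columns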
Answered By: Satya

Just in case, on Spark 2.1 you can create a single CSV file with the following lines:

dataframe.coalesce(1)  // so just a single part-file will be created
  .write.mode(SaveMode.Overwrite)
  .option("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")  // avoid creating the _SUCCESS file
  .option("header", "true")  // write the header
  .csv("csvFullPath")
Answered By: FrancescoM

The following should do the trick:

df \
  .write \
  .mode('overwrite') \
  .option('header', 'true') \
  .csv('output.csv')

Alternatively, if you want the results to be in a single partition, you can use coalesce(1):

df \
  .coalesce(1) \
  .write \
  .mode('overwrite') \
  .option('header', 'true') \
  .csv('output.csv')

Note however that this is an expensive operation and might not be feasible with extremely large datasets.
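If the single-task coalesce(1) write becomes the bottleneck, repartition(1) is sometimes used instead: it adds a full shuffle, but the upstream stages keep their original parallelism rather than being collapsed into one task. A sketch with the same placeholder path:

df \
  .repartition(1) \
  .write \
  .mode('overwrite') \
  .option('header', 'true') \
  .csv('output.csv')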

Answered By: Giorgos Myrianthous