How to create a sample single-column Spark DataFrame in Python?

Question:

I want to create a sample single-column DataFrame, but the following code is not working:

df = spark.createDataFrame(["10","11","13"], ("age"))

## ValueError
## ...
## ValueError: Could not parse datatype: age

The expected result:

age
10
11
13
Asked By: Ajish Kb


Answers:

I just used spark.read to create a DataFrame in Python, as stated in the documentation: save your data as a JSON file, for example, and load it like this:

df = spark.read.json("examples/src/main/resources/people.json")
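For the single-column case in the question, a minimal sketch of that approach (the file name ages.json and its contents are just an illustration, assuming a local session so the relative path is visible):

# write one JSON object per line (JSON Lines), then read it back
with open("ages.json", "w") as f:
    f.write('{"age": "10"}\n{"age": "11"}\n{"age": "13"}\n')

df = spark.read.json("ages.json")
df.show()
+---+
|age|
+---+
| 10|
| 11|
| 13|
+---+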
Answered By: dnhyde

the following code is not working

("age") is just the parenthesized string "age", not a tuple, so Spark tries to parse it as a datatype and fails. With single elements you need to give the schema as a type string:

spark.createDataFrame(["10","11","13"], "string").toDF("age")

or as a DataType:

from pyspark.sql.types import StringType

spark.createDataFrame(["10","11","13"], StringType()).toDF("age")

To get the column name directly, the elements should be tuples and the schema a sequence of names:

spark.createDataFrame([("10",), ("11",), ("13",)], ["age"])
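For example, the tuple form gives exactly the output asked for in the question:

df = spark.createDataFrame([("10",), ("11",), ("13",)], ["age"])
df.show()
+---+
|age|
+---+
| 10|
| 11|
| 13|
+---+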
Answered By: Alper t. Turker

Well, there is a pretty easy way to create a sample DataFrame in PySpark:

>>> df = sc.parallelize([[1,2,3], [2,3,4]]).toDF()
>>> df.show()
+---+---+---+
| _1| _2| _3|
+---+---+---+
|  1|  2|  3|
|  2|  3|  4|
+---+---+---+

To create it with column names:

>>> df1 = sc.parallelize([[1,2,3], [2,3,4]]).toDF(("a", "b", "c"))
>>> df1.show()
+---+---+---+
|  a|  b|  c|
+---+---+---+
|  1|  2|  3|
|  2|  3|  4|
+---+---+---+

This way, there is no need to define a schema either. Hope this is the simplest way.

Answered By: Sarath Chandra Vema

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([{"a": "x", "b": "y", "c": "3"}])

Output: (no need to define schema)

+---+---+---+
|  a|  b|  c|
+---+---+---+
|  x|  y|  3|
+---+---+---+
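Note that some PySpark versions warn that inferring the schema from dicts is deprecated and recommend pyspark.sql.Row instead; a sketch of the equivalent Row-based construction:

from pyspark.sql import Row

# same DataFrame, built from Row objects instead of dicts
df = spark.createDataFrame([Row(a="x", b="y", c="3")])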
Answered By: LN_P

For pandas + PySpark users: if you already have pandas installed on the cluster, you can simply do this:

import pandas as pd

# create a pandas DataFrame
df = pd.DataFrame({'col1': [1, 2, 3], 'col2': ['a', 'b', 'c']})

# convert it to a Spark DataFrame
df = spark.createDataFrame(df)
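The conversion also works in the other direction, which is handy for inspecting small results:

# collects all rows to the driver, so only use on small DataFrames
pdf = df.toPandas()
print(pdf)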

Local Spark Setup

import findspark
findspark.init()
import pyspark

spark = (pyspark
         .sql
         .SparkSession
         .builder
         .master("local")
         .getOrCreate())
Answered By: YOLO

You can also try something like this:

from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)  # sc is the Spark context
sample = sqlContext.createDataFrame(
    [
        ('qwe', 23),  # enter your data here
        ('rty', 34),
        ('yui', 56),
    ],
    ['abc', 'def']  # the column labels go here
)
sample.show()
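On Spark 2.0+, the same thing works without SQLContext, since SparkSession covers its functionality:

sample = spark.createDataFrame(
    [('qwe', 23), ('rty', 34), ('yui', 56)],
    ['abc', 'def'])
sample.show()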
Answered By: Nidhi

There are several ways to create a DataFrame; it is one of the first things you learn while working with PySpark.

I assume you already have data, columns, and an RDD.

1) df = rdd.toDF()
2) df = rdd.toDF(columns)  # assigns column names
3) df = spark.createDataFrame(rdd).toDF(*columns)
4) df = spark.createDataFrame(data).toDF(*columns)
5) df = spark.createDataFrame(rowData, columns)

Besides these, you can find many more examples of creating a PySpark DataFrame online; a self-contained sketch of the variables these snippets assume follows.
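A rough sketch of the inputs the snippets above expect (data, columns, rdd, and rowData are illustrative names, filled in with the question's data):

from pyspark.sql import Row

columns = ["age"]
data = [("10",), ("11",), ("13",)]
rdd = spark.sparkContext.parallelize(data)
rowData = [Row(age="10"), Row(age="11"), Row(age="13")]

df = spark.createDataFrame(data).toDF(*columns)  # e.g. method 4 above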

Answered By: Kumar

See my farsante lib for creating a DataFrame with fake data:

import farsante

df = farsante.quick_pyspark_df(['first_name', 'last_name'], 7)
df.show()
+----------+---------+
|first_name|last_name|
+----------+---------+
|     Tommy|     Hess|
|    Arthur| Melendez|
|  Clemente|    Blair|
|    Wesley|   Conrad|
|    Willis|   Dunlap|
|     Bruna|  Sellers|
|     Tonda| Schwartz|
+----------+---------+

Here’s how to explicitly specify the schema when creating the PySpark DataFrame:

from pyspark.sql.types import StructType, StructField, IntegerType

df = spark.createDataFrame(
    [(10,), (11,), (13,)],
    StructType([StructField("some_int", IntegerType(), True)]))

df.show()
+--------+
|some_int|
+--------+
|      10|
|      11|
|      13|
+--------+
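A lighter-weight way to write the same schema, assuming Spark 2.3+ where DDL-formatted schema strings are accepted:

df = spark.createDataFrame([(10,), (11,), (13,)], "some_int int")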
Answered By: Powers