How do I run graphx with Python / pyspark?


I am attempting to run Spark GraphX with Python using pyspark. My installation appears correct, as I am able to run the pyspark tutorials and the (Java) GraphX tutorials just fine. Presumably, since GraphX is part of Spark, pyspark should be able to interface with it, correct?

Here are the tutorials for pyspark:

Here are the ones for GraphX:

Can anyone convert the GraphX tutorial to be in Python?

Asked By: Glenn Strycker



GraphX 0.9.0 doesn't have a Python API yet. It's expected in upcoming releases.

Answered By: Wildfire

It looks like the Python bindings to GraphX have been delayed at least to Spark 1.4, then 1.5, then indefinitely. They are waiting behind the Java API.

You can track the status at SPARK-3789 (GRAPHX: Python bindings for GraphX) on the ASF JIRA.

Answered By: Misty Nodine

You should look at GraphFrames, which wraps GraphX algorithms under the DataFrames API and provides a Python interface.

Here is a quick example, with slight modifications so that it works.

First, start pyspark with the graphframes package loaded:

pyspark --packages graphframes:graphframes:0.1.0-spark1.6

Then the Python code:

from graphframes import *

# Create a Vertex DataFrame with unique ID column "id"
v = sqlContext.createDataFrame([
  ("a", "Alice", 34),
  ("b", "Bob", 36),
  ("c", "Charlie", 30),
], ["id", "name", "age"])

# Create an Edge DataFrame with "src" and "dst" columns
e = sqlContext.createDataFrame([
  ("a", "b", "friend"),
  ("b", "c", "follow"),
  ("c", "b", "follow"),
], ["src", "dst", "relationship"])
# Create a GraphFrame
g = GraphFrame(v, e)

# Query: Get in-degree of each vertex.
g.inDegrees.show()

# Query: Count the number of "follow" connections in the graph.
g.edges.filter("relationship = 'follow'").count()

# Run PageRank algorithm, and show results.
results = g.pageRank(resetProbability=0.01, maxIter=20)
results.vertices.select("id", "pagerank").show()
Answered By: zhibo
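If you just want to see what `pageRank` is computing on that toy graph without a Spark cluster, here is a minimal pure-Python power-iteration sketch over the same three vertices and edges. It assumes the common damping convention `rank = reset/N + (1 - reset) * sum(shares)`; GraphFrames normalizes its scores differently, so treat this as an illustration of the algorithm, not a reproduction of its exact output values.

```python
# Toy graph from the GraphFrames example above: a->b, b->c, c->b.
edges = [("a", "b"), ("b", "c"), ("c", "b")]
nodes = ["a", "b", "c"]

reset = 0.15  # plays the same role as GraphFrames' resetProbability
out_links = {n: [d for s, d in edges if s == n] for n in nodes}
rank = {n: 1.0 / len(nodes) for n in nodes}  # start uniform

for _ in range(20):  # analogous to maxIter
    new = {n: reset / len(nodes) for n in nodes}
    for n in nodes:
        targets = out_links[n]
        if targets:
            # each out-neighbor gets an equal share of n's damped rank
            share = (1 - reset) * rank[n] / len(targets)
            for t in targets:
                new[t] += share
        else:
            # dangling vertex: redistribute its mass evenly
            for t in nodes:
                new[t] += (1 - reset) * rank[n] / len(nodes)
    rank = new

print(sorted(rank, key=rank.get, reverse=True))  # 'b' ranks highest
```

Vertex `b` comes out on top because both `a` and `c` link to it, which matches the intuition you would check against the GraphFrames result.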