I created a Spark DataFrame in a Python paragraph in Zeppelin:
sqlCtx = SQLContext(sc)
spDf = sqlCtx.createDataFrame(df)
How can I access it from a Scala paragraph?
You can put the internal Java object, not the Python wrapper:
%pyspark
df = sc.parallelize([(1, "foo"), (2, "bar")]).toDF(["k", "v"])
z.put("df", df._jdf)
and then, on the Scala side, make sure you cast it to the correct type:
val df = z.get("df").asInstanceOf[org.apache.spark.sql.DataFrame]
// df: org.apache.spark.sql.DataFrame = [k: bigint, v: string]
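As a quick sanity check (the expected output follows from the data above), show() confirms the handoff; note that the same cast also works in Spark 2.x, where DataFrame is just an alias for Dataset[Row]:
df.show()
// +---+---+
// |  k|  v|
// +---+---+
// |  1|foo|
// |  2|bar|
// +---+---+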
but it is better to register a temporary table:
%pyspark
# registerTempTable in Spark 1.x
df.createTempView("df")
and use SparkSession.table (SQLContext.table in Spark 1.x) to read it back in Scala:
// sqlContext.table in Spark 1.x
val df = spark.table("df")
// df: org.apache.spark.sql.DataFrame = [k: bigint, v: string]
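This works because the %pyspark and Scala paragraphs in the same interpreter group share one SparkSession, so the view is visible to both. For example (a minimal sketch, output inferred from the data above):
spark.sql("SELECT k, v FROM df WHERE k > 1").show()
// +---+---+
// |  k|  v|
// +---+---+
// |  2|bar|
// +---+---+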
To convert in the opposite direction, see Zeppelin: Scala Dataframe to python.
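In short, the opposite handoff mirrors the pattern above (a minimal sketch; scalaDf is a hypothetical variable name): put the DataFrame from a Scala paragraph, then wrap the shared Java object in a Python DataFrame:
// Scala paragraph: a Scala DataFrame already is the Java object, so no unwrapping is needed
z.put("scalaDf", df)

%pyspark
from pyspark.sql import DataFrame
# wrap the shared Java object in a Python DataFrame
pyDf = DataFrame(z.get("scalaDf"), sqlContext)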