Question
I have a table loaded in BigQuery, and I want to import it into my Spark cluster via a PySpark .py file.
I saw in "Dataproc + BigQuery examples - any available?" that there is a way to load a BigQuery table into a Spark cluster with Scala, but is there a way to do it in a PySpark script?
Answer 1:
This comes from @MattJ in the question referenced above. Here's an example that connects to BigQuery from Spark and performs a word count.
import json
import pyspark

sc = pyspark.SparkContext()

# "fs.gs.system.bucket" holds the Dataproc cluster's staging bucket.
hadoopConf = sc._jsc.hadoopConfiguration()
hadoopConf.get("fs.gs.system.bucket")

# BigQuery connector settings: the export project/bucket and the input table.
conf = {
    "mapred.bq.project.id": "<project_id>",
    "mapred.bq.gcs.bucket": "<bucket>",
    "mapred.bq.input.project.id": "publicdata",
    "mapred.bq.input.dataset.id": "samples",
    "mapred.bq.input.table.id": "shakespeare",
}

# Each record arrives as (row key, JSON string): parse the JSON, project out
# (word, word_count) pairs, and sum the counts per word.
tableData = sc.newAPIHadoopRDD(
    "com.google.cloud.hadoop.io.bigquery.JsonTextBigQueryInputFormat",
    "org.apache.hadoop.io.LongWritable",
    "com.google.gson.JsonObject",
    conf=conf).map(lambda k: json.loads(k[1])) \
              .map(lambda x: (x["word"], int(x["word_count"]))) \
              .reduceByKey(lambda x, y: x + y)

print(tableData.take(10))
You will need to change <project_id> and <bucket> to match the settings for your project.
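If you would rather not hardcode <project_id> and <bucket>, the Hadoop configuration on a Dataproc cluster typically already carries both values, and you can point the connector at an explicit temporary GCS path so the export files it stages can be removed when the job finishes. The following is a minimal sketch, not part of the original answer: it assumes the fs.gs.project.id and mapred.bq.temp.gcs.path configuration keys and the gs:// scratch path shown here.

import pyspark

sc = pyspark.SparkContext()
hadoopConf = sc._jsc.hadoopConfiguration()

# fs.gs.system.bucket appears in the answer above; fs.gs.project.id is an
# assumed companion key exposed by the GCS connector on Dataproc.
bucket = hadoopConf.get("fs.gs.system.bucket")
project = hadoopConf.get("fs.gs.project.id")

# Hypothetical scratch directory for the connector's temporary BigQuery export.
input_directory = "gs://{}/hadoop/tmp/bigquery/pyspark_input".format(bucket)

conf = {
    "mapred.bq.project.id": project,
    "mapred.bq.gcs.bucket": bucket,
    # Assumed key that controls where the export files are staged.
    "mapred.bq.temp.gcs.path": input_directory,
    "mapred.bq.input.project.id": "publicdata",
    "mapred.bq.input.dataset.id": "samples",
    "mapred.bq.input.table.id": "shakespeare",
}

# ... run the newAPIHadoopRDD word count from the answer above with this conf ...

# The export files are not deleted automatically, so remove the scratch
# directory once the job is done.
input_path = sc._jvm.org.apache.hadoop.fs.Path(input_directory)
input_path.getFileSystem(hadoopConf).delete(input_path, True)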
Source: https://stackoverflow.com/questions/33359963/load-table-from-bigquery-to-spark-cluster-with-pyspark-script