load table from bigquery to spark cluster with pyspark script

若如初见 · Submitted on 2019-12-07 07:54:45

Question


I have a data table loaded in BigQuery, and I want to import it into my Spark cluster via a PySpark .py file.

I saw in "Dataproc + BigQuery examples - any available?" that there is a way to load a BigQuery table into a Spark cluster with Scala, but is there a way to do it in a PySpark script?


Answer 1:


This comes from @MattJ in this question. Here is an example that connects to BigQuery from Spark and performs a word count.

import json
import pyspark

sc = pyspark.SparkContext()

# The GCS system bucket configured for the cluster; useful as a default
# value for "mapred.bq.gcs.bucket" below.
hadoopConf = sc._jsc.hadoopConfiguration()
hadoopConf.get("fs.gs.system.bucket")

# BigQuery connector settings: the project and GCS bucket used for the
# temporary export, and the public shakespeare sample table as input.
conf = {
    "mapred.bq.project.id": "<project_id>",
    "mapred.bq.gcs.bucket": "<bucket>",
    "mapred.bq.input.project.id": "publicdata",
    "mapred.bq.input.dataset.id": "samples",
    "mapred.bq.input.table.id": "shakespeare",
}

# Read the table as (row id, JSON string) pairs, parse each row, and sum
# the word counts per word.
tableData = sc.newAPIHadoopRDD(
    "com.google.cloud.hadoop.io.bigquery.JsonTextBigQueryInputFormat",
    "org.apache.hadoop.io.LongWritable",
    "com.google.gson.JsonObject",
    conf=conf) \
    .map(lambda k: json.loads(k[1])) \
    .map(lambda x: (x["word"], int(x["word_count"]))) \
    .reduceByKey(lambda x, y: x + y)

print(tableData.take(10))

You will need to change <project_id> and <bucket> to match the settings for your project.
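
If you want to keep working with the result as structured data, a minimal continuation sketch (assuming the tableData RDD from above and a Spark 1.x-era SQLContext) might look like this:

from pyspark.sql import SQLContext

# Build a DataFrame from the (word, count) pairs and show the most
# frequent words first.
sqlContext = SQLContext(sc)
wordCounts = sqlContext.createDataFrame(tableData, ["word", "word_count"])
wordCounts.orderBy(wordCounts.word_count.desc()).show(10)

To run the script on a Dataproc cluster, you can submit the .py file with gcloud, e.g. gcloud dataproc jobs submit pyspark your_script.py --cluster=<cluster> (your_script.py being a placeholder for your file).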



Source: https://stackoverflow.com/questions/33359963/load-table-from-bigquery-to-spark-cluster-with-pyspark-script
