Save a pandas DataFrame as CSV to a Google Cloud Storage bucket

Submitted by 試著忘記壹切 on 2019-11-27 07:23:07

Question


from pyspark import SparkContext
from pyspark.sql import SparkSession
import pandas as pd
import numpy as np


APP_NAME = "DataFrameToCSV"

spark = SparkSession\
    .builder\
    .appName(APP_NAME)\
    .config("spark.sql.crossJoin.enabled", "true")\
    .getOrCreate()

group_ids = [1,1,1,1,1,1,1,2,2,2,2,2,2,2]

dates = ["2016-04-01","2016-04-01","2016-04-01","2016-04-20","2016-04-20","2016-04-28","2016-04-28","2016-04-05","2016-04-05","2016-04-05","2016-04-05","2016-04-20","2016-04-20","2016-04-29"]

event = [0,1,1,0,1,0,1,0,0,1,0,0,0,0]

# np.column_stack coerces the mixed columns to a single (string) dtype
dataFrameArr = np.column_stack((group_ids, dates, event))

df = pd.DataFrame(dataFrameArr, columns=["group_ids", "dates", "event"])

The above Python code runs on a Spark cluster on Google Cloud Dataproc. I would like to save the pandas DataFrame as a CSV file in a Cloud Storage bucket at gs://mybucket/csv_data/

How do I do this?


Answer 1:


You can also do this with Dask: convert the pandas DataFrame to a Dask DataFrame, which can write CSV files directly to Cloud Storage.

import dask.dataframe as dd

# df is the pandas DataFrame built above
ddf = dd.from_pandas(df, npartitions=1, sort=True)

# 'gcs' here is an authenticated gcsfs.GCSFileSystem instance; the
# storage_options argument can be omitted when default Google
# credentials are available (e.g. on Dataproc).
ddf.to_csv('gs://YOUR_BUCKET/ddf-*.csv', index=False, sep=',', header=False,
           storage_options={'token': gcs.session.credentials})

The storage_options argument is optional.
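As a side note (not part of the original answer): recent pandas versions delegate gs:// paths to gcsfs, so if gcsfs is installed you can skip Dask entirely. A minimal sketch, assuming default Google credentials are configured and the bucket name is a placeholder:

# Minimal sketch: pandas hands gs:// URLs to gcsfs, so this writes
# straight to Cloud Storage. Assumes gcsfs is installed and default
# credentials are available; 'YOUR_BUCKET' is a placeholder.
df.to_csv('gs://YOUR_BUCKET/csv_data/df.csv', index=False)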




Answer 2:


So, I figured out how to do this. Continuing on from the above code, here is the solution:

sc = SparkContext.getOrCreate()

from pyspark.sql import SQLContext
sqlCtx = SQLContext(sc)

# Convert the pandas DataFrame to a Spark DataFrame, then write it
# to the bucket; coalesce(1) merges the output into a single part
# file inside gs://mybucket/csv_data.
sparkDf = sqlCtx.createDataFrame(df)
sparkDf.coalesce(1).write.option("header", "true").csv('gs://mybucket/csv_data')
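Note that Spark writes a directory of part files, so the result is a single part-*.csv inside gs://mybucket/csv_data rather than a file literally named csv_data. If you would rather not go through Spark at all, the google-cloud-storage client can upload the CSV in one call. A minimal sketch, assuming the google-cloud-storage package is installed and credentials are configured; the object name below is a placeholder:

import io
from google.cloud import storage

# Serialize the pandas DataFrame to an in-memory CSV buffer.
buf = io.StringIO()
df.to_csv(buf, index=False)

# Upload the buffer; 'csv_data/df.csv' is a placeholder object name.
client = storage.Client()
blob = client.bucket('mybucket').blob('csv_data/df.csv')
blob.upload_from_string(buf.getvalue(), content_type='text/csv')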


Source: https://stackoverflow.com/questions/45495108/save-pandas-data-frame-as-csv-on-to-gcloud-storage-bucket
