Write a Pandas DataFrame to Google Cloud Storage or BigQuery

予麋鹿 2020-12-02 13:16

Hello, and thanks for your time and consideration. I am developing a Jupyter Notebook in Google Cloud Platform / Datalab. I have created a Pandas DataFrame and would like to write it to Google Cloud Storage and/or BigQuery.

8 Answers
  • 2020-12-02 13:51

    I spent a lot of time finding the easiest way to solve this:

    import pandas as pd
    
    df = pd.DataFrame(...)
    
    df.to_csv('gs://bucket/path')
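
    Note that this appears to rely on pandas' built-in support for gs:// URLs, which needs the gcsfs package installed in the notebook environment. A minimal round-trip sketch, with a placeholder bucket and object path:

    import pandas as pd

    df = pd.DataFrame({'a': [1, 4], 'b': [2, 5], 'c': [3, 6]})

    # Write to a full object path (not just the bucket), then read it back.
    df.to_csv('gs://my-bucket/my-folder/data.csv', index=False)
    df_roundtrip = pd.read_csv('gs://my-bucket/my-folder/data.csv')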
    
  • 2020-12-02 13:52

    Upload to Google Cloud Storage without writing a temporary file, using only the standard google-cloud-storage client library:

    from google.cloud import storage
    import os
    import pandas as pd
    
    # Only need this if you're running this code locally.
    os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = r'/your_GCP_creds/credentials.json'
    
    df = pd.DataFrame(data=[[1, 2, 3], [4, 5, 6]], columns=['a', 'b', 'c'])
    
    client = storage.Client()
    bucket = client.get_bucket('my-bucket-name')
        
    bucket.blob('upload_test/test.csv').upload_from_string(df.to_csv(), 'text/csv')
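
    For completeness, the same pattern works in reverse without a temporary file. A hedged sketch that downloads the object back into a DataFrame, reusing the bucket and blob names above; it assumes a recent google-cloud-storage release where the method is named download_as_bytes (older releases call it download_as_string):

    import io

    blob = bucket.blob('upload_test/test.csv')
    df_downloaded = pd.read_csv(io.BytesIO(blob.download_as_bytes()), index_col=0)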
    
  • 2020-12-02 13:54

    I think you need to load it into a plain bytes variable and use %%storage write --variable $sample_bucketpath (see the doc) in a separate cell. I'm still figuring it out, but that is roughly the inverse of what I needed to do to read a CSV file in. I don't know if it makes a difference on write, but I had to use BytesIO to read the buffer created by the %%storage read command. Hope it helps, let me know!
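
    A rough sketch of the read direction described above, pieced together from this answer and the storage magic used in the accepted answer further down; run the cell magic in its own cell and parse the bytes in a second cell, and treat the exact flags as assumptions based on the Datalab docs:

    %%storage read --object gs://my-bucket/data.csv --variable csv_bytes

    # (in the next cell) wrap the downloaded bytes in a buffer so pandas can parse them
    import pandas as pd
    from io import BytesIO

    df = pd.read_csv(BytesIO(csv_bytes))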

  • 2020-12-02 13:57

    Since 2017, pandas has had a DataFrame-to-BigQuery function, pandas.DataFrame.to_gbq

    The documentation has an example:

    import pandas_gbq as gbq

    gbq.to_gbq(df, 'my_dataset.my_table', projectid, if_exists='fail')

    The if_exists parameter can be set to 'fail', 'replace' or 'append'.

    See also this example.
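
    The same call is also exposed as a DataFrame method. A minimal sketch (the dataset, table, and project id are placeholders, and the pandas-gbq package must be installed):

    import pandas as pd

    df = pd.DataFrame({'a': [1, 4], 'b': [2, 5], 'c': [3, 6]})

    # Method form of the module-level call shown above
    df.to_gbq('my_dataset.my_table', project_id='my-project-id', if_exists='fail')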

  • 2020-12-02 14:04

    Using the Google Cloud Datalab documentation:

    import datalab.storage as gcs

    # simple_dataframe is the pandas DataFrame defined in the accepted answer below
    gcs.Bucket('bucket-name').item('to/data.csv').write_to(simple_dataframe.to_csv(), 'text/csv')
    
  • 2020-12-02 14:09

    Try the following working example:

    from datalab.context import Context
    import google.datalab.storage as storage
    import google.datalab.bigquery as bq
    import pandas as pd
    
    # Dataframe to write
    simple_dataframe = pd.DataFrame(data=[[1, 2, 3], [4, 5, 6]], columns=['a', 'b', 'c'])
    
    sample_bucket_name = Context.default().project_id + '-datalab-example'
    sample_bucket_path = 'gs://' + sample_bucket_name
    sample_bucket_object = sample_bucket_path + '/Hello.txt'
    bigquery_dataset_name = 'TestDataSet'
    bigquery_table_name = 'TestTable'
    
    # Define storage bucket
    sample_bucket = storage.Bucket(sample_bucket_name)
    
    # Create storage bucket if it does not exist
    if not sample_bucket.exists():
        sample_bucket.create()
    
    # Define BigQuery dataset and table
    dataset = bq.Dataset(bigquery_dataset_name)
    table = bq.Table(bigquery_dataset_name + '.' + bigquery_table_name)
    
    # Create BigQuery dataset
    if not dataset.exists():
        dataset.create()
    
    # Create or overwrite the existing table if it exists
    table_schema = bq.Schema.from_data(simple_dataframe)
    table.create(schema = table_schema, overwrite = True)
    
    # Write the DataFrame to GCS (Google Cloud Storage)
    %storage write --variable simple_dataframe --object $sample_bucket_object
    
    # Write the DataFrame to a BigQuery table
    table.insert(simple_dataframe)
    

    I used this example and the _table.py file from the datalab GitHub repository as a reference. You can find other datalab source code files at this link.
