Saving a CSV file to S3 using boto3

上瘾入骨i 2021-01-13 16:41

I am trying to write and save a CSV file to a specific (existing) folder in S3. This is my code:

from io import BytesIO
import pandas as pd
import boto3
s3 =
3 Answers
  • 2021-01-13 17:11

    This should work:

    from io import StringIO
    import boto3

    s3 = boto3.client('s3')
    bucket = bucketName                  # bucket name only, no slashes allowed
    key = f"{folder}/{filename}"         # the "virtual folder" path goes in the key
    csv_buffer = StringIO()              # CSV is text, so StringIO rather than BytesIO
    df.to_csv(csv_buffer)
    content = csv_buffer.getvalue()
    s3.put_object(Bucket=bucket, Body=content, Key=key)
    

    AWS bucket names are not allowed to contain slashes ("/"); the folder path belongs in the Key instead. AWS uses slashes in the key to display "virtual" folders in the console. Since CSV is a text format, I'm using StringIO instead of BytesIO.
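
    For illustration, assuming a hypothetical bucket my-bucket and folder reports, the split looks like this, and the object then shows up in the console as s3://my-bucket/reports/output.csv:

    # hypothetical names, just to show where the slashes go
    bucket = "my-bucket"             # bucket name: no slashes
    key = "reports/output.csv"       # slashes live in the key and render as "folders"
    s3.put_object(Bucket=bucket, Body=content, Key=key)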

  • 2021-01-13 17:17

    This should work:

    import boto3

    def to_s3(bucket, filename, content):
        client = boto3.client('s3')
        key = "folder/subfolder/" + filename   # note the separator before the file name
        client.put_object(Bucket=bucket, Key=key, Body=content)
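
    To pass the DataFrame from the question to this helper, one way is to serialize it to a CSV string first (a sketch; the bucket name is a placeholder):

    from io import StringIO
    import pandas as pd

    df = pd.DataFrame({"col1": [1, 2], "col2": [3, 4]})   # any DataFrame
    buf = StringIO()
    df.to_csv(buf, index=False)
    to_s3("my-bucket", "data.csv", buf.getvalue())        # placeholder bucket name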
    
  • 2021-01-13 17:19

    Saving to S3 buckets can also be done with upload_file when the .csv file already exists on local disk:

    import boto3

    s3 = boto3.resource('s3')

    bucket = 'bucket_name'
    filename = 'file_name.csv'
    # the object keeps the same name as the local file
    s3.meta.client.upload_file(Filename=filename, Bucket=bucket, Key=filename)
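
    For the DataFrame in the question, a minimal sketch (file, folder and bucket names are placeholders) is to write the CSV to local disk first and then upload it:

    import boto3
    import pandas as pd

    df = pd.DataFrame({"col1": [1, 2], "col2": [3, 4]})   # any DataFrame
    df.to_csv("file_name.csv", index=False)               # write locally first

    s3 = boto3.resource('s3')
    s3.meta.client.upload_file(Filename="file_name.csv",
                               Bucket="bucket_name",          # placeholder bucket
                               Key="folder/file_name.csv")    # key puts it in a "folder"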
    