I am trying to write and save a CSV file to a specific folder in S3 (the folder already exists). This is my code:
from io import BytesIO
import pandas as pd
import boto3

s3 = boto3.client('s3')
This should work:
from io import StringIO

bucket = bucketName
key = f"{folder}/{file_name}"
csv_buffer = StringIO()
df.to_csv(csv_buffer)
content = csv_buffer.getvalue()
s3.put_object(Bucket=bucket, Key=key, Body=content)
AWS bucket names are not allowed to contain slashes ("/"); the slashes belong in the Key instead. AWS uses slashes in keys to display "virtual" folders in the console. Since CSV is a text format, I'm using StringIO instead of BytesIO.
This should work
def to_s3(bucket, filename, content):
    client = boto3.client('s3')
    # note the trailing slash: without it the filename is glued onto "subfolder"
    k = "folder/subfolder/" + filename
    client.put_object(Bucket=bucket, Key=k, Body=content)
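Since the missing slash is easy to overlook, a small key-building helper (the name is hypothetical) makes the intent explicit and also guards against doubled slashes:

```python
def build_key(folder: str, filename: str) -> str:
    # normalize stray slashes so "folder/" + "/file.csv" -> "folder/file.csv"
    return f"{folder.strip('/')}/{filename.lstrip('/')}"
```

For example, `build_key("folder/subfolder", "file.csv")` returns `"folder/subfolder/file.csv"`, ready to pass as `Key` to `put_object`.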
Saving into S3 buckets can also be done with upload_file, given an existing .csv file:
import boto3
s3 = boto3.resource('s3')
bucket = 'bucket_name'
filename = 'file_name.csv'
s3.meta.client.upload_file(Filename=filename, Bucket=bucket, Key=filename)