Question
I'm trying to deploy a training script on Google Cloud ML. Of course, I've uploaded my datasets (CSV files) to a bucket on GCS.
I used to import my data with read_csv from pandas, but it doesn't seem to work with a GCS path.
How should I proceed (I would like to keep using pandas)?
import pandas as pd
data = pd.read_csv("gs://bucket/folder/file.csv")
Output:
ERROR 2018-02-01 18:43:34 +0100 master-replica-0 IOError: File gs://bucket/folder/file.csv does not exist
Answer 1:
You will need to use file_io from tensorflow.python.lib.io to do that, as shown below:
from tensorflow.python.lib.io import file_io
from io import StringIO  # pandas.compat.StringIO was removed in newer pandas releases
import pandas as pd

# read the input data from a GCS path into a pandas DataFrame
def read_data(gcs_path):
    print('downloading csv file from', gcs_path)
    file_stream = file_io.FileIO(gcs_path, mode='r')
    data = pd.read_csv(StringIO(file_stream.read()))
    return data
Now call the above function:
df = read_data('gs://bucket/folder/file.csv')
# print(df.head()) # display top 5 rows including headers
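Since pd.read_csv also accepts a file-like object directly, you can skip buffering the whole file into a string first. A minimal sketch under the same imports (the bucket path is a placeholder):
# FileIO works as a context manager; pd.read_csv reads from any file-like object
with file_io.FileIO('gs://bucket/folder/file.csv', mode='r') as f:
    df = pd.read_csv(f)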
Answer 2:
Pandas does not have native GCS support. There are two alternatives:
1. Copy the file to the VM using the gsutil CLI (a sketch follows this list).
2. Use the TensorFlow file_io library to open the file, and pass the file object to pd.read_csv().
Please refer to the detailed answer here.
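For option 1, a minimal sketch (bucket and file paths are placeholders): copy the file to local disk, then read it as usual.
gsutil cp gs://bucket/folder/file.csv /tmp/file.csv #copy from GCS to the local disk
import pandas as pd
data = pd.read_csv('/tmp/file.csv') #read the local copy
Note that newer pandas releases (0.24 and later) can also read gs:// paths directly when the optional gcsfs package is installed, in which case the original pd.read_csv("gs://bucket/folder/file.csv") call works as written.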
Answer 3:
You could also use Dask to extract and then load the data into, let's say, a Jupyter Notebook running on GCP.
Make sure you have Dask installed.
conda install dask #conda
pip install dask[complete] #pip
import dask.dataframe as dd #Import the Dask DataFrame API
dataframe = dd.read_csv('gs://bucket/datafile.csv') #Read a single CSV file
dataframe2 = dd.read_csv('gs://bucket/path/*.csv') #Read multiple CSV files with a glob pattern
This is all you need to load the data.
You can now filter and manipulate the data with pandas syntax:
dataframe['z'] = dataframe.x + dataframe.y
dataframe_pd = dataframe.compute()
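These operations are lazy: nothing is actually read or computed until .compute() is called, and the result of .compute() is a regular pandas DataFrame. A quick check (x and y are hypothetical column names, as above):
print(type(dataframe_pd)) #<class 'pandas.core.frame.DataFrame'>
print(dataframe_pd['z'].head()) #first rows of the derived column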
Source: https://stackoverflow.com/questions/48569618/how-do-i-use-pandas-read-csv-on-google-cloud-ml