I have an 8 GB CSV file, and when I try to read it the code fails with a MemoryError.
import pandas as pd

file = "./data.csv"
df = pd.read_csv(file, sep="/", header=0, dtype=str)
If you don't need all columns, you may also use the usecols parameter:
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
usecols : array-like or callable, default None
Return a subset of the columns. [...]
Using this parameter results in much faster parsing time and lower memory usage.