I have a huge dataset and I am trying to read it line by line. For now, I am reading the dataset using pandas:
df = pd.read_csv("mydata.csv", sep=',', n
One way could be to read your file part by part and store each part separately, for example:
df1 = pd.read_csv("mydata.csv", nrows=10000)
Then skip the first 10000 data rows that you have already read and stored in df1 (keeping the header line so the column names are preserved) and store the next 10000 rows in df2:
df2 = pd.read_csv("mydata.csv", skiprows=range(1, 10001), nrows=10000)
dfn = pd.read_csv("mydata.csv", skiprows=range(1, (n-1)*10000 + 1), nrows=10000)
This idea can also be put into a for or while loop.
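Here is a minimal sketch of such a loop, assuming the file has a header row; chunk_size and chunks are names chosen here just for illustration:

import pandas as pd

chunk_size = 10000
chunks = []
i = 0
while True:
    # skiprows=range(1, ...) skips only the data rows already read,
    # so the header line is kept for every chunk
    chunk = pd.read_csv(
        "mydata.csv",
        skiprows=range(1, i * chunk_size + 1),
        nrows=chunk_size,
    )
    if chunk.empty:
        # no data rows left to read
        break
    chunks.append(chunk)
    i += 1

Note that pandas also supports this pattern directly: passing chunksize=10000 to read_csv returns an iterator that yields the file as successive DataFrames, so you can write for chunk in pd.read_csv("mydata.csv", chunksize=10000): and process each chunk inside the loop without managing skiprows yourself.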