For example, if my text file is:
blue
green
yellow
black
Here there are four lines, and I want to get the result as four. How can I do this?
This one also gives the number of lines in a file:
with open('filename.txt', 'r') as a:
    contents = a.read()        # read the whole file into one string
count = contents.splitlines()  # list of lines, without the newline characters
print(len(count))
You can use sum() with a generator expression:
with open('data.txt') as f:
    print(sum(1 for _ in f))
Note that you cannot use len(f), since f is an iterator. _ is a special variable name for throwaway variables; see What is the purpose of the single underscore "_" variable in Python?.
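To make the distinction concrete, here is a minimal snippet (data.txt is a placeholder filename):

with open('data.txt') as f:
    # len(f) would raise a TypeError (object of type '_io.TextIOWrapper' has no len())
    print(sum(1 for _ in f))  # iterating consumes the file line by line and counts as it goes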
You can use len(f.readlines()), but this will create an additional list in memory, which won't even work on huge files that don't fit in memory.
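For comparison, the readlines() version is a one-liner, but it builds the full list before counting (again assuming a placeholder data.txt):

with open('data.txt') as f:
    print(len(f.readlines()))  # materializes every line in memory first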
This link (How to get line count cheaply in Python?) has lots of potential solutions, but they all ignore one way to make this run considerably faster, namely by using the unbuffered (raw) interface, using bytearrays, and doing your own buffering.
Using a modified version of the timing tool, I believe the following code is faster (and marginally more pythonic) than any of the solutions offered:
def _make_gen(reader):
    # yield successive 1 MB chunks until read() returns an empty bytes object
    b = reader(1024 * 1024)
    while b:
        yield b
        b = reader(1024 * 1024)

def rawpycount(filename):
    f = open(filename, 'rb')
    f_gen = _make_gen(f.raw.read)  # read from the raw, unbuffered layer
    return sum(buf.count(b'\n') for buf in f_gen)
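A hypothetical call, with somefile.txt standing in for your own file:

print(rawpycount('somefile.txt'))

One caveat: this counts newline bytes, so a final line without a trailing newline is not counted.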
Here are my timings; the final column is the ratio relative to the fastest:
rawpycount   0.0048  0.0046  1.00
bufcount     0.0074  0.0066  1.43
wccount      0.01    0.01    2.17
itercount    0.014   0.014   3.04
opcount      0.021   0.02    4.43
kylecount    0.023   0.021   4.58
simplecount  0.022   0.022   4.81
mapcount     0.038   0.032   6.82
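For anyone who wants to sanity-check numbers like these, here is a minimal sketch using the standard library; it assumes rawpycount from above is in scope, big.txt is a placeholder, and itercount is my stand-in for the plain generator approach:

import timeit

def itercount(filename):
    # the plain sum(1 for _ in f) approach, for comparison
    with open(filename) as f:
        return sum(1 for _ in f)

for fn in (rawpycount, itercount):
    elapsed = timeit.timeit(lambda: fn('big.txt'), number=10)  # 'big.txt' is a placeholder
    print(fn.__name__, elapsed / 10)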
I would post it there, but I'm a relatively new user to Stack Exchange and don't have the requisite reputation.
EDIT:
This can be done completely with generator expressions in-line using itertools, but it gets pretty weird looking:
from itertools import takewhile, repeat

def rawbigcount(filename):
    f = open(filename, 'rb')
    # keep producing 1 MB chunks until read() returns an empty bytes object
    bufgen = takewhile(lambda x: x, (f.raw.read(1024 * 1024) for _ in repeat(None)))
    return sum(buf.count(b'\n') for buf in bufgen if buf)
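For what it's worth, the two-argument form of iter() with functools.partial expresses the same read-until-empty loop a bit more directly; this variant is my own sketch, not part of the timed code:

from functools import partial

def rawbigcount2(filename):
    with open(filename, 'rb') as f:
        bufgen = iter(partial(f.raw.read, 1024 * 1024), b'')  # stop when read() returns b''
        return sum(buf.count(b'\n') for buf in bufgen)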
If you import pandas, you can use the shape attribute to determine this. I'm not sure how it performs. The code is as follows:
import pandas as pd

data = pd.read_csv("yourfile")  # read the file into a DataFrame
num_records = data.shape        # shape is a (rows, columns) tuple
n_records = num_records[0]      # number of data rows
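One caveat: read_csv() treats the first row as a header by default, so shape[0] is the number of data rows, not the number of lines in the file. Passing header=None should make the two match for a plain CSV:

import pandas as pd

n_lines = pd.read_csv("yourfile", header=None).shape[0]  # every row counts, including the first
print(n_lines)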
I am not new to Stack Overflow; I just never had an account and usually came here for answers. I can't comment or upvote an answer yet, but I wanted to say that the code from Michael Bacon above works really well. I am new to Python but not to programming. I have been reading Python Crash Course, and I wanted a few side projects to break up the cover-to-cover reading approach. One utility with uses from an ETL or even a data quality perspective is capturing the row count of a file independently of any ETL: the file has X number of rows, you import it into SQL or Hadoop, and you end up with X number of rows. That lets you validate the row count of a raw data file at the lowest level.
I have been playing with his code and doing some testing, and so far it is very efficient. I created several CSV files of various sizes and row counts; my code is below, and the comments give the times and details. The code Michael Bacon provided above runs about six times faster than the normal Python method of just looping over the lines.
Hope this helps someone.
import time
from itertools import takewhile, repeat

def readfilesimple(myfile):
    # watch me whip
    linecounter = 0
    with open(myfile, 'r') as file_object:
        # watch me nae nae
        for line in file_object:
            linecounter += 1
    return linecounter

def readfileadvanced(myfile):
    # watch me whip
    f = open(myfile, 'rb')
    # watch me nae nae
    bufgen = takewhile(lambda x: x, (f.raw.read(1024 * 1024) for _ in repeat(None)))
    return sum(buf.count(b'\n') for buf in bufgen if buf)
# ************************************
# Main
# ************************************
#start the clock
start_time = time.time()
# 6.7 seconds to read a 475MB file that has 24 million rows and 3 columns
#mycount = readfilesimple("c:/junk/book1.csv")
# 0.67 seconds to read a 475MB file that has 24 million rows and 3 columns
#mycount = readfileadvanced("c:/junk/book1.csv")
# 25.9 seconds to read a 3.9Gb file that has 3.25 million rows and 104 columns
#mycount = readfilesimple("c:/junk/WideCsvExample/ReallyWideReallyBig1.csv")
# 5.7 seconds to read a 3.9Gb file that has 3.25 million rows and 104 columns
#mycount = readfileadvanced("c:/junk/WideCsvExample/ReallyWideReallyBig1.csv")
# 292.92 seconds to read a 43Gb file that has 35.7 million rows and 104 columns
mycount = readfilesimple("c:/junk/WideCsvExample/ReallyWideReallyBig.csv")
# 57 seconds to read a 43Gb file that has 35.7 million rows and 104 columns
#mycount = readfileadvanced("c:/junk/WideCsvExample/ReallyWideReallyBig.csv")
#stop the clock
elapsed_time = time.time() - start_time
print("\nCode Execution: " + str(elapsed_time) + " seconds\n")
print("File contains: " + str(mycount) + " lines of text.")