I don't want to use OS commands as that makes the solution OS dependent.
The tarfile module provides tarfile.is_tarfile(filename) to check whether a file is a valid tar archive.
If you want to check whether a file is a valid Gzip file, you can open it and read one byte from it. If it succeeds, the file is quite probably a gzip file, with one caveat: an empty file also succeeds this test.
Thus we get:

import gzip

def is_gz_file(name):
    with gzip.open(name, 'rb') as f:
        try:
            f.read(1)
            return True
        except OSError:
            return False
However, as I stated earlier, an empty file (0 bytes) still succeeds this test, so you'd perhaps want to ensure that the file is not empty:

import os
import gzip

def is_gz_file(name):
    if os.stat(name).st_size == 0:
        return False
    with gzip.open(name, 'rb') as f:
        try:
            f.read(1)
            return True
        except OSError:
            return False
EDIT:
As the question has now been changed to "a gzip file that doesn't have empty contents":
import gzip

def is_nonempty_gz_file(name):
    with gzip.open(name, 'rb') as f:
        try:
            return len(f.read(1)) > 0
        except OSError:
            return False
Unfortunately, the gzip module does not expose any functionality equivalent to the -l list option of the gzip program. But in Python 3 you can easily get the size of the uncompressed data by calling the .seek method with a whence argument of 2, which signifies positioning relative to the end of the (uncompressed) data stream. .seek returns the new byte position, so .seek(0, 2) returns the byte offset of the end of the uncompressed file, i.e., the file size. Thus, if the uncompressed file is empty, the .seek call will return 0.
import gzip

def gz_size(fname):
    with gzip.open(fname, 'rb') as f:
        return f.seek(0, whence=2)
Here's a function that will work on Python 2, tested on Python 2.6.6.
def gz_size(fname):
    f = gzip.open(fname, 'rb')
    data = f.read()
    f.close()
    return len(data)
You can read about .seek and other methods of the GzipFile class using the pydoc program. Just run pydoc gzip in the shell.
Alternatively, if you wish to avoid decompressing the file you can (sort of) read the uncompressed data size directly from the .gz file. The size is stored in the last 4 bytes of the file as a little-endian unsigned long, so it's actually the size modulo 2**32; therefore it will not be the true size if the uncompressed data size is >= 4GB.
This code works on both Python 2 and Python 3.
import gzip
import struct

def gz_size(fname):
    with open(fname, 'rb') as f:
        f.seek(-4, 2)
        data = f.read(4)
        size = struct.unpack('<L', data)[0]
        return size
However, this method is not reliable, as Mark Adler (gzip co-author) mentions in the comments:
There are other reasons that the length at the end of the gzip file would not represent the length of the uncompressed data. (Concatenated gzip streams, padding at the end of the gzip file.) It should not be used for this purpose. It's only there as an integrity check on the data.
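The concatenated-streams caveat is easy to demonstrate: joining two gzip members yields one valid gzip stream, but the trailing ISIZE field only describes the last member (a sketch using gzip.compress/gzip.decompress from Python 3):

```python
import gzip
import struct

# Two gzip members concatenated: still one valid gzip stream.
data = gzip.compress(b'hello, ') + gzip.compress(b'world')

# The trailing 4-byte ISIZE field covers only the LAST member (5 bytes)...
footer_size = struct.unpack('<I', data[-4:])[0]

# ...while actually decompressing yields all 12 bytes.
real_size = len(gzip.decompress(data))
```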
Here is another solution. It does not decompress the whole file. It returns True if the uncompressed data in the input file is of zero length, but it also returns True if the input file itself is of zero length. If the input file is not of zero length and is not a gzip file, then OSError is raised.
import gzip

def gz_is_empty(fname):
    '''Test if gzip file fname is empty.

    Return True if the uncompressed data in fname has zero length
    or if fname itself has zero length.
    Raises OSError if fname has non-zero length and is not a gzip file.
    '''
    with gzip.open(fname, 'rb') as f:
        data = f.read(1)
    return len(data) == 0
Try something like this:
def is_empty(gzfile):
    data = gzfile.read(1)
    if len(data) > 0:
        gzfile.seek(0)  # rewind so the caller can still read from the start
        return False
    else:
        return True
import gzip

with gzip.open("pCSV.csv.gz", 'r') as f:
    f.seek(3)
    counterA = f.tell()
    f.seek(2, 0)
    counterB = f.tell()

if counterA > counterB:
    print "NOT EMPTY"
else:
    print "EMPTY"
This should do it without reading the whole file (note that seeking forward in a gzip stream still decompresses the data up to the target offset, so it is not entirely free).
Looking through the source code for the Python 2.7 version of the gzip module, it seems to immediately return EOF not only when the uncompressed contents are zero bytes, but also when the .gz file itself is zero bytes, which is arguably a bug.
However, for your particular use-case, we can do a little better, by also confirming the gzipped file is a valid CSV file.
This code...
import csv
import gzip

# Returns True if the specified filename is a valid gzip'd CSV file.
# If the optional 'columns' parameter is specified, also check that
# the first row has that many columns.
def is_valid(filename, columns=None):
    try:
        # Chain a CSV reader onto a gzip reader
        csv_file = csv.reader(gzip.open(filename))
        # This will try to read the first line;
        # if it's not a valid gzip, this will raise IOError
        for row in csv_file:
            # We got at least one row.
            # Bail out here if we don't care how many columns we have.
            if columns is None:
                return True
            # Check it has the right number of columns
            return len(row) == columns
        else:
            # There were no rows
            return False
    except IOError:
        # This is not a valid gzip file
        return False
# Example to check whether File.txt.gz is valid
result = is_valid('File.txt.gz')
# Example to check whether File.txt.gz is valid, and has three columns
result = is_valid('File.txt.gz', columns=3)
...should correctly handle the following error cases...
UPDATE:
I would strongly recommend upgrading to pandas 0.18.1 (currently the latest version), as each new version of pandas introduces nice new features and fixes tons of old bugs. The current version (0.18.1) will process your empty files out of the box (see the demo below).
If you can't upgrade to a newer version, then follow @MartijnPieters' recommendation - catch the exception instead of checking first (the "Easier to ask forgiveness than permission" paradigm).
OLD answer: a small demonstration (using pandas 0.18.1) which tolerates empty files, different numbers of columns, etc.
I tried to reproduce your error (trying an empty CSV.gz, different numbers of columns, etc.), but I didn't manage to reproduce your exception using pandas v0.18.1:
import os
import glob
import gzip
import pandas as pd

fmask = 'd:/temp/.data/37874936/*.csv.gz'
files = glob.glob(fmask)
cols = ['a','b','c']

for f in files:
    # actually there is no need to use `compression='gzip'` - pandas will guess it itself;
    # I left it in order to be sure that we are using the same parameters ...
    df = pd.read_csv(f, header=None, names=cols, compression='gzip', sep=',')
    print('\nFILE: [{:^40}]'.format(f))
    print('{:-^60}'.format(' ORIGINAL contents '))
    print(gzip.open(f, 'rt').read())
    print('{:-^60}'.format(' parsed DF '))
    print(df)
Output:
FILE: [ d:/temp/.data/37874936\1.csv.gz ]
-------------------- ORIGINAL contents ---------------------
11,12,13
14,15,16
------------------------ parsed DF -------------------------
a b c
0 11 12 13
1 14 15 16
FILE: [ d:/temp/.data/37874936\empty.csv.gz ]
-------------------- ORIGINAL contents ---------------------
------------------------ parsed DF -------------------------
Empty DataFrame
Columns: [a, b, c]
Index: []
FILE: [d:/temp/.data/37874936\zz_5_columns.csv.gz]
-------------------- ORIGINAL contents ---------------------
1,2,3,4,5
11,22,33,44,55
------------------------ parsed DF -------------------------
a b c
1 2 3 4 5
11 22 33 44 55
FILE: [d:/temp/.data/37874936\z_bad_CSV.csv.gz ]
-------------------- ORIGINAL contents ---------------------
1
5,6,7
1,2
8,9,10,5,6
------------------------ parsed DF -------------------------
a b c
0 1 NaN NaN
1 5 6.0 7.0
2 1 2.0 NaN
3 8 9.0 10.0
FILE: [d:/temp/.data/37874936\z_single_column.csv.gz]
-------------------- ORIGINAL contents ---------------------
1
2
3
------------------------ parsed DF -------------------------
a b c
0 1 NaN NaN
1 2 NaN NaN
2 3 NaN NaN
Can you post a sample CSV that causes this error, or upload it somewhere and post a link here?