I'm currently reading a file and importing the data in it with the line:
# Read data from file.
data = np.loadtxt(join(mypath, 'file.data'), unpack=True)
Also check out fnmatch:
>>> import fnmatch
>>> import os
>>>
>>> fnmatch.filter(os.listdir('.'), 'file_*.data')
['file_3453453.data']
>>>
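Tying this back to the question: if exactly one file matches, the filtered name can be joined to the directory and passed straight to np.loadtxt. Here's a minimal sketch; `find_data_file` is a hypothetical helper name, and the `file_*.data` pattern is taken from the example above:

```python
import fnmatch
import os

def find_data_file(mypath, pattern='file_*.data'):
    """Return the single file in mypath whose name matches pattern (hypothetical helper)."""
    matches = fnmatch.filter(os.listdir(mypath), pattern)
    if len(matches) != 1:
        raise FileNotFoundError('expected exactly one match, got %r' % matches)
    return os.path.join(mypath, matches[0])

# Then, as in the question:
# data = np.loadtxt(find_data_file(mypath), unpack=True)
```

Raising when the match isn't unique makes the failure explicit instead of silently loading the wrong file.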
You can use the glob
module. It allows pattern matching on filenames and does exactly what you're asking for:
import glob
for fpath in glob.glob(mypath):  # mypath should be a glob pattern, e.g. 'file_*.data'
    print(fpath)
E.g. I have a directory with files named google.xml, google.json and google.csv.
I can use glob like this:
>>> import glob
>>> glob.glob('g*gle*')
['google.json', 'google.xml', 'google.csv']
Note that glob
uses the fnmatch
module internally, but it has a simpler interface and matches paths rather than just filenames.
You can search relative paths and don't have to use os.path.join
. In the example above if I change to the parent directory and try to match file names, it returns the relative paths:
>>> import os
>>> import glob
>>> os.chdir('..')
>>> glob.glob('foo/google*')
['foo/google.json', 'foo/google.xml', 'foo/google.csv']
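To combine this with the question's np.loadtxt call, you can join the directory and the pattern and iterate over every match. A sketch, assuming `mypath` is a directory as in the question; `matching_files` is a hypothetical helper name:

```python
import glob
import os

def matching_files(mypath, pattern='file_*.data'):
    """Sorted list of full paths under mypath that match pattern (hypothetical helper)."""
    # glob matches whole paths, so the directory and pattern are joined first
    return sorted(glob.glob(os.path.join(mypath, pattern)))

# Then each matching file can be loaded in turn:
# for fpath in matching_files(mypath):
#     data = np.loadtxt(fpath, unpack=True)
```

Sorting gives a stable processing order, since glob makes no ordering guarantee across platforms.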
Try
import os
[os.path.join(root, f) for root, _, files in os.walk(mypath)
for f in files
if f.startswith('file') and f.endswith('.data')]
It'll return a list of all files matching file*.data
, in case there is more than one. You can just iterate through them. If there is only one file, just put [0]
at the end of the list comprehension. (Note that os.walk also descends into subdirectories.)
A simple solution would be to use the Python modules "os" and "re":
import os
import re
for file in os.listdir(mypath):
    if re.match(r"file_\d+\.data", file):
        ...
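A fleshed-out version of this idea, for illustration: the regex can also capture the digits, so you get the number embedded in each filename along with its path. `numbered_data_files` is a hypothetical helper name; re.fullmatch is used instead of re.match so that names like file_7.data.bak are not accidentally accepted:

```python
import os
import re

def numbered_data_files(mypath):
    """Map the integer in each file_<digits>.data name to its full path (hypothetical helper)."""
    pat = re.compile(r'file_(\d+)\.data')
    result = {}
    for name in os.listdir(mypath):
        # fullmatch requires the whole name to match, not just a prefix
        m = pat.fullmatch(name)
        if m:
            result[int(m.group(1))] = os.path.join(mypath, name)
    return result
```

With a directory containing file_3453453.data, this returns {3453453: '<mypath>/file_3453453.data'}, which you can then feed to np.loadtxt.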