I'm currently reading a file and importing the data in it with:

import numpy as np
from os.path import join

# Read data from file.
data = np.loadtxt(join(mypath, 'file.data'), unpack=True)
where the variable mypath is known. The issue is that the file file.data will change over time, taking on names like:
file_3453453.data
file_12324.data
file_987667.data
...
So I need a way to tell the code to open whichever file in that path has a name of the form file*.data, assuming there will always be exactly one such file in the path. Is there a way to do this in Python?
You can use the glob module. It allows pattern matching on filenames and does exactly what you're asking:
import glob
import os

# glob expands the wildcard pattern into a list of matching paths.
for fpath in glob.glob(os.path.join(mypath, 'file*.data')):
    print(fpath)
e.g. I have a directory with files named google.xml, google.json and google.csv. I can use glob like this:
>>> import glob
>>> glob.glob('g*gle*')
['google.json', 'google.xml', 'google.csv']
Note that glob uses the fnmatch module, but it has a simpler interface and matches whole paths instead of filenames only. You can search relative paths and don't have to use os.path.join. In the example above, if I change to the parent directory and try to match file names, it returns the relative paths (the directory here is named foo):
>>> import os
>>> import glob
>>> os.chdir('..')
>>> glob.glob('foo/google*')
['foo/google.json', 'foo/google.xml', 'foo/google.csv']
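Applied to the question, a minimal sketch (assuming mypath is set and, as the question states, exactly one file matches):

import glob
import os
import numpy as np

# Expand the wildcard inside the known directory.
matches = glob.glob(os.path.join(mypath, 'file*.data'))

# The question guarantees a single match; fail loudly if that ever changes.
assert len(matches) == 1, matches
data = np.loadtxt(matches[0], unpack=True)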
Also check out fnmatch:

>>> import fnmatch
>>> import os
>>> fnmatch.filter(os.listdir('.'), 'file_*.data')
['file_3453453.data']
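The same filter works against the question's directory; a sketch, assuming mypath holds the directory path (fnmatch.filter returns bare filenames, so they need to be rejoined with the directory):

import fnmatch
import os

# Keep only the names matching the shell-style pattern.
matches = fnmatch.filter(os.listdir(mypath), 'file_*.data')
fpath = os.path.join(mypath, matches[0])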
A simple solution would be to use the os and re modules:
import os
import re

for fname in os.listdir(mypath):
    # Raw string keeps the regex backslashes intact.
    if re.match(r"file_\d+\.data", fname):
        ...
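A filled-in version of that loop, as one possible sketch: re.fullmatch (rather than re.match, which only anchors at the start) makes sure the entire filename fits the pattern, and the full path is rebuilt for loading:

import os
import re

fpath = None
for fname in os.listdir(mypath):
    # fullmatch requires the whole name to match, not just a prefix.
    if re.fullmatch(r"file_\d+\.data", fname):
        fpath = os.path.join(mypath, fname)
        break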
Try:

import os

paths = [os.path.join(root, f)
         for root, _, files in os.walk(mypath)
         for f in files
         if f.startswith('file') and f.endswith('.data')]
It'll return a list of all file*.data files, in case there is more than one. You can just iterate through them. If there is only one file, put [0] at the end of the list comprehension.
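Note that os.walk also recurses into subdirectories. If the file sits directly in mypath, pathlib offers a flatter alternative (a sketch, not part of the original answer):

from pathlib import Path

import numpy as np

# Path.glob matches the pattern in this directory only, without recursing.
matches = list(Path(mypath).glob('file*.data'))
data = np.loadtxt(matches[0], unpack=True)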
Source: https://stackoverflow.com/questions/18433177/open-file-knowing-only-a-part-of-its-name