Question
I'm trying to wait until some text is written to a live logfile in Python.
fdpexpect would seem to be the right tool for this, but it isn't waiting: as soon as it hits the end of the file, it terminates.
I'm wondering if fdpexpect just doesn't support this and I'll need to work around it?
The code I have is basically this:
Creating the spawn object:
# we're not using pexpect.spawn because we want
# all the output to be written to the logfile in real time,
# which spawn doesn't seem to support.
p = subprocess.Popen(command,
                     shell=shell,
                     stdout=spawnedLog.getFileObj(),
                     stderr=subprocess.STDOUT)
# give fdspawn the same file object we gave Popen
return (p, pexpect.fdpexpect.fdspawn(spawnedLog.getFileObj()))
Waiting for something:
pexpectObj.expect('something')
This quits almost immediately with an EOF error, before 'something' is ever written to the log.
Answer 1:
fdpexpect isn't designed to work on regular files. pexpect will always read from a file object until it hits EOF. For pipes and sockets, that doesn't happen until the connection is actually closed, but for regular files it happens as soon as the entire current contents have been read. pexpect has no way of knowing that the file is actively being written to by another process.
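To see that failure mode in isolation, here is a minimal sketch (the filename log.txt and the pattern are assumptions for illustration) that attaches fdspawn to an ordinary file that another process is still appending to:

import pexpect
import pexpect.fdpexpect

# Attach fdspawn to a regular file, not a pipe or socket.
with open("log.txt", "rb") as f:
    child = pexpect.fdpexpect.fdspawn(f)
    try:
        # This returns only if the pattern is already in the file;
        # otherwise fdspawn raises EOF once the current contents run out.
        child.expect("pattern not yet written")
    except pexpect.EOF:
        print("EOF: fdspawn does not wait for new data on a regular file")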
You could work around this by creating a pipe using os.pipe, and then implementing your own tee functionality to write the stdout of your process to that pipe in addition to the log file. Here's a little toy example that seems to work:
from subprocess import Popen, PIPE, STDOUT
from threading import Thread
import os
import pexpect.fdpexpect

# tee and teed_call are based on http://stackoverflow.com/a/4985080/2073595
def tee(infile, *files):
    """Copy each line of `infile` to every file in `files`, in a thread."""
    def fanout(infile, *files):
        # p.stdout is a binary stream, so the EOF sentinel is b''
        for line in iter(infile.readline, b''):
            for f in files:
                f.write(line)
        infile.close()
    t = Thread(target=fanout, args=(infile,) + files)
    t.daemon = True
    t.start()
    return t

def teed_call(cmd_args, files, **kwargs):
    p = Popen(cmd_args,
              stdout=PIPE,
              stderr=STDOUT,
              **kwargs)
    threads = []
    threads.append(tee(p.stdout, *files))
    return (threads, p)

# Open everything in binary mode so unbuffered I/O is allowed
with open("log.txt", 'wb') as logf:
    # Create pipes for unbuffered reading and writing
    rpipe, wpipe = os.pipe()
    rpipe = os.fdopen(rpipe, 'rb', 0)
    wpipe = os.fdopen(wpipe, 'wb', 0)
    # Have pexpect read from the readable end of the pipe
    pobj = pexpect.fdpexpect.fdspawn(rpipe)
    # Call some script, and tee output to our log file and
    # the writable end of the pipe.
    threads, p = teed_call(["./myscript.sh"], [wpipe, logf])
    # myscript.sh will print 'hey'
    pobj.expect("hey")
    # orderly shutdown/cleanup
    for t in threads:
        t.join()
    p.wait()
    rpipe.close()
    wpipe.close()
Answer 2:
An alternative to dano's approach is to just bite the bullet and use 'tail -f'.
It's a bit hokey and depends on 'tail' being available.
p = subprocess.Popen(command,
                     shell=shell,
                     stdout=spawnedLog.getFileObj(),
                     stderr=subprocess.STDOUT)
# this seems really dumb, but in order to follow the log
# file and not have fdpexpect quit because we encountered EOF,
# we're going to spawn *another* process to tail the log file
tailCommand = "tail -f %s" % spawnedLog.getPath()
# this is read-only; we're just watching the output logfile
# that's being created
return (p, pexpect.spawn(tailCommand))
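For completeness, a hedged usage sketch of the above (the wrapper name startAndTail and the pattern 'something' are assumptions standing in for the snippet): because tail -f never closes its stdout, expect() blocks until the pattern arrives instead of raising EOF, and you then have to stop the tail process yourself:

# hypothetical wrapper around the snippet above; names are illustrative
p, tailer = startAndTail(command)
# tail -f keeps the stream open, so this waits for the pattern
# (or raises pexpect.TIMEOUT) rather than hitting EOF
tailer.expect('something', timeout=60)
# once we've seen the pattern, kill the tail process ourselves,
# since it would otherwise follow the file forever
tailer.terminate()
tailer.close()
p.wait()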
Source: https://stackoverflow.com/questions/25769522/can-i-use-fdpexpect-on-a-file-thats-currently-being-written-to