Question
I am trying to run multiple instances of a console-based game (dungeon crawl stone soup -- for research purposes naturally) using a multiprocessing pool to evaluate each run.
In the past, when I've used a pool to evaluate similar code (genetic algorithms), I've used subprocess.call to split off each process. However, with dcss being quite interactive, a shared subshell seems to be problematic.
I have the code I normally use for this kind of thing, with crawl replacing other applications I've thrown a GA at. Is there a better way to handle highly interactive shells than this? I'd considered kicking off a screen for each instance, but thought there was a cleaner way. My understanding was that shell=True should spawn a subshell, but I guess it is spawning one in a way that is shared between each call.
I should mention I have a bot running the game, so I don't want any actual interaction from the user's end to occur.
import os
from subprocess import call

# Kick off the GA execution
pool_args = zip(trial_ids, run_types, self.__population)
pool.map(self._GAExecute, pool_args)
---
# called by pool.map
def _GAExecute(self, pool_args):
    trial_id = pool_args[0]
    run_type = pool_args[1]
    genome = pool_args[2]
    self._RunSimulation(trial_id)

# Call the actual binary
def _RunSimulation(self, trial_id):
    command = "./%s" % self.__crawl_binary
    name = "-name %s" % trial_id
    rc = "-rc %s" % os.path.join(self.__output_dir, 'qw-%s' % trial_id, "qw -%s.rc" % trial_id)
    seed = "-seed %d" % self.__seed
    cdir = "-dir %s" % os.path.join(self.__output_dir, 'qw-%s' % trial_id)
    shell_command = "%s %s %s %s %s" % (command, name, rc, seed, cdir)
    call(shell_command, shell=True)
Answer 1:
You can indeed associate stdin and stdout with files, as in the answer from @napuzba:
import subprocess

fout = open('stdout.txt', 'w')
ferr = open('stderr.txt', 'w')
subprocess.call(cmd, stdout=fout, stderr=ferr)
Another option would be to use Popen instead of call. The difference is that call waits for completion (it is blocking) while Popen does not; see What's the difference between subprocess Popen and call (how can I use them)?
Using Popen, you can then keep stdout and stderr inside your object, and then use them later, without having to rely on a file:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.wait()
stderr = p.stderr.read()
stdout = p.stdout.read()
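One caveat worth adding to this answer: with stdout=subprocess.PIPE, waiting before reading can deadlock if the child writes more output than the OS pipe buffer holds. The standard library's communicate() avoids this by reading both streams while it waits:

p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()  # drains both pipes, then waits for exit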
Another potential advantage of this method is that, since Popen does not wait for completion, you can run multiple instances of it concurrently instead of having a thread pool:
processes = [
    subprocess.Popen(cmd1, stdout=subprocess.PIPE, stderr=subprocess.PIPE),
    subprocess.Popen(cmd2, stdout=subprocess.PIPE, stderr=subprocess.PIPE),
    subprocess.Popen(cmd3, stdout=subprocess.PIPE, stderr=subprocess.PIPE),
]
for p in processes:
    if p.poll() is not None:  # poll() returns the exit code, or None while still running
        pass  # process completed
    else:
        pass  # no completion yet
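Whichever branch you take, remember to eventually wait on every process, both so the OS can reap it and, as noted above, via communicate() so the pipes get drained; for example:

results = []
for p in processes:
    out, err = p.communicate()  # blocks until this process exits
    results.append((p.returncode, out, err))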
On a side note, you should avoid shell=True if you can. Without it, Popen expects a list as the command instead of a string. Do not build this list by hand; use shlex, which takes care of all the corner cases for you, e.g.:
import shlex

subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
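For instance, shlex.split handles quoted arguments that a naive str.split would break (the command string below is just an illustration):

import shlex

print(shlex.split('./crawl -name "trial 1" -dir "my output dir"'))
# ['./crawl', '-name', 'trial 1', '-dir', 'my output dir']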
Answer 2:
Specify the standard input, standard output and standard error with unique file handles for each call:
import subprocess

cmd = ""
fout = open('stdout.txt', 'w')
fin = open('stdin.txt', 'r')
ferr = open('stderr.txt', 'w')
subprocess.call(cmd, stdout=fout, stdin=fin, stderr=ferr)
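Applied to the question's _RunSimulation, a minimal sketch along these lines, assuming the same class attributes as the question (the per-trial log file names are my own invention, and shell=True is dropped in favor of an argument list):

import os
import subprocess

def _RunSimulation(self, trial_id):
    out_dir = os.path.join(self.__output_dir, 'qw-%s' % trial_id)
    cmd = [
        "./%s" % self.__crawl_binary,
        "-name", str(trial_id),
        "-rc", os.path.join(out_dir, "qw -%s.rc" % trial_id),
        "-seed", str(self.__seed),
        "-dir", out_dir,
    ]
    # Unique handles per trial, so parallel runs never share a console
    with open(os.path.join(out_dir, 'stdout-%s.txt' % trial_id), 'w') as fout, \
         open(os.path.join(out_dir, 'stderr-%s.txt' % trial_id), 'w') as ferr:
        subprocess.call(cmd, stdout=fout, stderr=ferr)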
Source: https://stackoverflow.com/questions/43554548/handling-interactive-shells-with-python-subprocess