Question
I'm trying out a code snippet from the standard Python documentation to learn how to use the multiprocessing module. The code is pasted at the end of this message. I'm using Python 2.7.1 on Ubuntu 11.04 on a quad-core machine (which, according to the system monitor, gives me eight cores due to hyper-threading).
Problem: All the workload seems to be scheduled onto just one core, which gets close to 100% utilization, despite the fact that several processes are started. Occasionally all the workload migrates to another core, but it is never distributed among them.
Any ideas why this is so?
Best regards,
Paul
#
# Simple example which uses a pool of workers to carry out some tasks.
#
# Notice that the results will probably not come out of the output
# queue in the same order as the corresponding tasks were
# put on the input queue. If it is important to get the results back
# in the original order then consider using `Pool.map()` or
# `Pool.imap()` (which will save on the amount of code needed anyway).
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
import time
import random
from multiprocessing import Process, Queue, current_process, freeze_support
#
# Function run by worker processes
#
def worker(input, output):
    for func, args in iter(input.get, 'STOP'):
        result = calculate(func, args)
        output.put(result)
#
# Function used to calculate result
#
def calculate(func, args):
    result = func(*args)
    return '%s says that %s%s = %s' % \
        (current_process().name, func.__name__, args, result)
#
# Functions referenced by tasks
#
def mul(a, b):
    time.sleep(0.5*random.random())
    return a * b

def plus(a, b):
    time.sleep(0.5*random.random())
    return a + b
def test():
    NUMBER_OF_PROCESSES = 4
    TASKS1 = [(mul, (i, 7)) for i in range(500)]
    TASKS2 = [(plus, (i, 8)) for i in range(250)]

    # Create queues
    task_queue = Queue()
    done_queue = Queue()

    # Submit tasks
    for task in TASKS1:
        task_queue.put(task)

    # Start worker processes
    for i in range(NUMBER_OF_PROCESSES):
        Process(target=worker, args=(task_queue, done_queue)).start()

    # Get and print results
    print 'Unordered results:'
    for i in range(len(TASKS1)):
        print '\t', done_queue.get()

    # Add more tasks using `put()`
    for task in TASKS2:
        task_queue.put(task)

    # Get and print some more results
    for i in range(len(TASKS2)):
        print '\t', done_queue.get()

    # Tell child processes to stop
    for i in range(NUMBER_OF_PROCESSES):
        task_queue.put('STOP')

if __name__ == '__main__':
    freeze_support()
    test()
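(For reference, the ordered alternative that the header comment mentions would use Pool.map(); a minimal sketch of that variant, reusing the mul() function above and not part of the original snippet, might be:

from multiprocessing import Pool

def mul7(i):
    # Top-level helper so it can be pickled and sent to worker processes
    return mul(i, 7)

if __name__ == '__main__':
    pool = Pool(processes=4)
    # Unlike the queue version, results come back in input order
    for result in pool.map(mul7, range(500)):
        print result
)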
Answer 1:
Somehow the CPU affinity has been changed. I had this problem with numpy before. I found the solution here: http://bugs.python.org/issue17038#msg180663
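The workaround described in that thread is to reset the affinity mask after the offending import. A minimal sketch of that fix (Linux only; it assumes at most eight logical cores, hence the 0xff mask):

import os

# Reset this process's CPU affinity to all eight logical cores
# (0xff = binary 11111111). Worker processes forked afterwards
# inherit the restored mask. Requires the taskset utility
# from util-linux.
os.system("taskset -p 0xff %d" % os.getpid())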
Answer 2:
Try replacing the time.sleep with something that actually requires CPU time and you will see that multiprocessing works just fine! For example:
def mul(a, b):
    for i in xrange(100000):
        j = i**2
    return a * b

def plus(a, b):
    for i in xrange(100000):
        j = i**2
    return a + b
Answer 3:
multiprocessing does not guarantee that you'll use all the cores of a processor; you just get multiple processes, not multi-core processes. How they are scheduled across cores is handled by the OS and is uncertain. The question @Devraj posted in the comments has answers to accomplish what you desire.
Answer 4:
I have found a workaround using Parallel Python. I know this is not a solution that uses only the basic Python libraries, but the code is simple and works like a charm.
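For illustration, a minimal sketch of how the tasks from the question could be submitted with Parallel Python's pp module (based on its documented Server.submit() interface, reusing the question's mul() function):

import pp

def mul(a, b):
    return a * b

# pp.Server() autodetects the number of available cores by default
job_server = pp.Server()
jobs = [job_server.submit(mul, (i, 7)) for i in range(500)]

# Calling a job object blocks until its result is ready
for job in jobs:
    print job()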
Source: https://stackoverflow.com/questions/6905264/python-multiprocessing-utilizes-only-one-core