Question
I have the following scenario: I have a single thread that is supposed to fill a container with pairs of integers (in essence, task descriptions), and I have a large number of worker threads (8-16) that should take elements from this container and perform some work.
I thought the problem could be easily solved by a blocking queue -- e.g. on item-removal, threads synchronize access to the queue, and sleep if there is no data available.
I (perhaps wrongly) assumed that something like this should exist in the STL or in boost, but I was unable to find anything.
Do I actually have to implement that thing myself? It seems like such a common scenario...
Answer 1:
If you do implement it yourself, the implementation should be a fairly straightforward combination of a semaphore, a mutex, and a queue object.
Here's some pseudo-code:
Produce {
    pthread_mutex_lock(&mutex);
    queue.push_back(someObjectReference);   // add the work item while holding the lock
    pthread_mutex_unlock(&mutex);
    sem_post(&availabilitySem);             // signal that one more item is available
}

Consume {
    sem_wait(&availabilitySem);             // block until at least one item is available
    pthread_mutex_lock(&mutex);
    someObjectReference = queue.front();    // take the item while holding the lock
    queue.pop_front();
    pthread_mutex_unlock(&mutex);
}
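If C++11 is available, the same idea can be expressed with just the standard library, using std::condition_variable instead of a semaphore. This is only a minimal sketch; the BlockingQueue name and the pair-of-ints Task type are made up for illustration:

#include <condition_variable>
#include <deque>
#include <mutex>
#include <utility>

using Task = std::pair<int, int>;   // the "pair of integers" task description

class BlockingQueue {
public:
    void push(Task t) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push_back(std::move(t));
        }
        cv_.notify_one();           // wake one sleeping worker
    }

    Task pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });   // sleep while the queue is empty
        Task t = queue_.front();
        queue_.pop_front();
        return t;
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::deque<Task> queue_;
};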
Answer 2:
If you are on Windows, take a look at the Agents Library in VS2010; this is a core scenario for it.
http://msdn.microsoft.com/en-us/library/dd492627(VS.100).aspx
For example:
#include <agents.h>
using namespace Concurrency;
// an unbounded_buffer is like a queue
unbounded_buffer<int> buf;
// you can send messages into it with send or asend
send(buf, 1);
// receive will block and wait for data
int result = receive(buf);
You can use threads, 'agents', or 'tasks' to get the data out, or you can link buffers together and convert your blocking producer/consumer problem into a data-flow network.
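As an illustration of the 'agents' option, a worker can derive from Concurrency::agent and pull items out of the buffer in its run method. This is only a sketch; the worker class name and the -1 sentinel used to stop it are inventions for the example:

#include <agents.h>
#include <iostream>

// Hypothetical consumer agent: pulls ints out of an unbounded_buffer until
// it sees a sentinel value (-1, chosen only for this sketch).
class worker : public Concurrency::agent {
public:
    explicit worker(Concurrency::unbounded_buffer<int>& buf) : buf_(buf) {}
protected:
    void run() {
        for (;;) {
            int task = Concurrency::receive(buf_);   // blocks until data arrives
            if (task == -1) break;                   // sentinel: stop the agent
            std::cout << "processing " << task << '\n';
        }
        done();                                      // mark the agent as finished
    }
private:
    Concurrency::unbounded_buffer<int>& buf_;
};

int main() {
    Concurrency::unbounded_buffer<int> buf;
    worker w(buf);
    w.start();                                       // schedule the agent
    for (int i = 0; i < 8; ++i)
        Concurrency::send(buf, i);                   // produce some work
    Concurrency::send(buf, -1);                      // tell the worker to stop
    Concurrency::agent::wait(&w);                    // wait for it to finish
    return 0;
}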
Answer 3:
If you are on Windows and want a queue that is efficient in how it manages the threads that are allowed to run to process items from it, then take a look at I/O Completion Ports (see here). My free server framework includes a task queue implementation that's based on IOCPs, which may also be of interest if you intend to go down this route, though it's possibly too specialised for what you want.
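For reference, an I/O completion port can act as a thread-aware work queue even without any actual I/O, by posting completion packets manually. This is a minimal sketch of that idea; the worker count and the use of the completion key to carry the task value (with 0 as a shutdown signal) are arbitrary choices for the example:

#include <windows.h>
#include <cstdio>

// Worker thread: blocks on the completion port until a packet is posted.
DWORD WINAPI worker(LPVOID param) {
    HANDLE iocp = static_cast<HANDLE>(param);
    DWORD bytes = 0;
    ULONG_PTR key = 0;
    LPOVERLAPPED ov = NULL;
    while (GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE)) {
        if (key == 0) break;                 // 0 used as a shutdown signal in this sketch
        printf("processing task %lu\n", static_cast<unsigned long>(key));
    }
    return 0;
}

int main() {
    // A completion port not associated with any file handle; the last argument
    // limits how many threads run concurrently (0 = number of CPUs).
    HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

    const int kWorkers = 4;                  // arbitrary worker count for the sketch
    HANDLE threads[kWorkers];
    for (int i = 0; i < kWorkers; ++i)
        threads[i] = CreateThread(NULL, 0, worker, iocp, 0, NULL);

    // Producer: post "tasks" as completion packets (the key carries the task id here).
    for (ULONG_PTR task = 1; task <= 16; ++task)
        PostQueuedCompletionStatus(iocp, 0, task, NULL);

    // Post one shutdown packet per worker, then wait for them to drain the queue.
    for (int i = 0; i < kWorkers; ++i)
        PostQueuedCompletionStatus(iocp, 0, 0, NULL);

    WaitForMultipleObjects(kWorkers, threads, TRUE, INFINITE);
    for (int i = 0; i < kWorkers; ++i) CloseHandle(threads[i]);
    CloseHandle(iocp);
    return 0;
}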
Answer 4:
I think message_queue from boost::interprocess is what you want; the Boost.Interprocess documentation has a usage example.
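For the record, the message_queue API looks roughly like this; the queue name, capacity, and message size are arbitrary for the sketch, and bear in mind that message_queue is designed for cross-process use and copies raw bytes, which may be more machinery than an in-process work queue needs:

#include <boost/interprocess/ipc/message_queue.hpp>

namespace bip = boost::interprocess;

int main() {
    // Remove any leftover queue with the same name, then create a fresh one.
    // "task_queue", the capacity (100) and the message size are arbitrary.
    bip::message_queue::remove("task_queue");
    bip::message_queue mq(bip::create_only, "task_queue",
                          100,           // max number of messages
                          sizeof(int));  // max message size in bytes

    // Producer side: send an int.
    int task = 42;
    mq.send(&task, sizeof(task), 0);     // 0 = priority

    // Consumer side: receive blocks until a message is available.
    int received = 0;
    bip::message_queue::size_type recvd_size = 0;
    unsigned int priority = 0;
    mq.receive(&received, sizeof(received), recvd_size, priority);

    bip::message_queue::remove("task_queue");
    return received == 42 ? 0 : 1;
}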
Answer 5:
You should take a look at ACE (the Adaptive Communication Environment) and its ACE_Message_Queue. There's always Boost's message_queue, but ACE is where it's at in terms of high-performance concurrency.
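Roughly, usage looks like the sketch below, though this is from memory and details differ between ACE versions; ACE_Message_Queue with the ACE_MT_SYNCH traits gives you a blocking, mutex-protected queue of ACE_Message_Block objects:

#include <ace/Message_Queue.h>
#include <ace/Message_Block.h>

int main() {
    // ACE_MT_SYNCH makes the queue thread-safe and blocking.
    ACE_Message_Queue<ACE_MT_SYNCH> queue;

    // Producer: wrap the payload (the pair of ints) in a message block and enqueue it.
    int task[2] = {1, 2};
    ACE_Message_Block *mb = new ACE_Message_Block(sizeof(task));
    mb->copy(reinterpret_cast<const char *>(task), sizeof(task));
    queue.enqueue_tail(mb);

    // Consumer: dequeue_head blocks until a block is available.
    ACE_Message_Block *item = 0;
    if (queue.dequeue_head(item) != -1) {
        const int *data = reinterpret_cast<const int *>(item->rd_ptr());
        // ... hand data[0] and data[1] to the worker ...
        item->release();                 // free the block
    }
    return 0;
}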
Answer 6:
If you're on OSX Snow Leopard, you might want to look at Grand Central Dispatch.
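Grand Central Dispatch manages the worker threads for you; you just submit tasks to a queue. Below is a minimal sketch using the function-pointer API (dispatch_group_async_f) so it compiles as plain C++ without blocks; the Task struct is invented for the example:

#include <dispatch/dispatch.h>
#include <cstdio>

// Invented task type for the sketch: the pair of ints from the question.
struct Task { int a, b; };

// Work function executed on one of GCD's worker threads.
static void process_task(void *context) {
    Task *t = static_cast<Task *>(context);
    std::printf("processing (%d, %d)\n", t->a, t->b);
    delete t;
}

int main() {
    // A concurrent global queue; GCD decides how many worker threads to run.
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    // Producer: submit 16 tasks.
    for (int i = 0; i < 16; ++i) {
        Task *t = new Task;
        t->a = i;
        t->b = i * 2;
        dispatch_group_async_f(group, queue, t, process_task);
    }

    // Wait until all submitted tasks have completed.
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    dispatch_release(group);
    return 0;
}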
Source: https://stackoverflow.com/questions/1826228/building-a-multithreaded-work-queue-consumer-producer-in-c