Suppose we have a shared queue (implemented using an array) that two threads can access, one for reading data from it and the other for writing data to it. This gives me a synchronization problem. I'm implementing this using the Win32 API (EnterCriticalSection etc.).
But my question is: what should the critical section code in the queue's enqueue and dequeue operations be?
Is a critical section needed just because two threads use a shared resource? Here is why I can't see any problem: front and rear are maintained separately, so when the ReaderThread reads, it reads from the front end, and when the WriterThread writes, it writes to the rear end.
What potential problems can occur?
For a single-producer/single-consumer circular queue, locks are actually not required. Simply enforce the condition that the producer cannot write into the queue when it is full and the consumer cannot read from the queue when it is empty. The producer always writes through a tail pointer that points to the first available empty slot in the queue, and the consumer reads through a head pointer that points to the first unread slot.
Your code can look like the following example (note: I'm assuming that in an initialized queue tail == head, and that both pointers are declared volatile so that an optimizing compiler does not reorder the sequence of operations within a given thread. On x86 no memory barriers are required, due to the strong memory-consistency model of the architecture, but this would change on other architectures with weaker memory-consistency models, where memory barriers would be required):
// next_slot is a member of queue_type; it advances a pointer to the
// following array slot, wrapping around at the end of the buffer.

bool queue_type::enqueue(const my_type& obj)
{
    if (next_slot(tail) == head)   // queue full: refuse to overwrite unread data
        return false;
    *tail = obj;                   // write the data first...
    tail = next_slot(tail);        // ...then publish it by advancing tail
    return true;
}

bool queue_type::dequeue(my_type& obj)
{
    if (head == tail)              // queue empty: nothing to read
        return false;
    obj = *head;                   // read the data first...
    head = next_slot(head);        // ...then release the slot by advancing head
    return true;
}
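For reference, the surrounding class declaration might look like the following sketch. The element type, capacity, and member layout are my assumptions; the original answer only shows the two member functions:

```cpp
#include <cstddef>

struct my_type { int value; };                   // assumed element type

class queue_type {
public:
    typedef my_type* pointer;

    queue_type() : head(buffer), tail(buffer) {} // empty when head == tail

    bool enqueue(const my_type& obj);            // called by the producer only
    bool dequeue(my_type& obj);                  // called by the consumer only

private:
    pointer next_slot(pointer ptr);              // advance one slot, with wrap-around

    static const std::size_t capacity = 16;      // one slot always stays unused
    my_type buffer[capacity];

    // volatile, as the answer assumes, so the compiler does not reorder
    // the pointer updates within each thread
    my_type* volatile head;                      // first unread slot
    my_type* volatile tail;                      // first free slot
};
```

Note that with this full/empty test, one slot of the array is always left unused: a queue with 16 slots holds at most 15 elements, because tail stopping one slot behind head is what distinguishes "full" from "empty".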
The function next_slot simply increments the head or tail pointer so that it returns a pointer to the next slot in the array, and accounts for any array wrap-around.
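The wrap-around logic can be sketched as a self-contained free function (the `buffer` and `capacity` parameters stand in for the class members the real `next_slot` would use):

```cpp
#include <cstddef>

// Advance a slot pointer by one position within buffer[0..capacity),
// wrapping back to the first slot when it runs off the end.
int* next_slot(int* buffer, std::size_t capacity, int* ptr)
{
    ++ptr;
    if (ptr == buffer + capacity)
        ptr = buffer;   // wrap around to the start of the array
    return ptr;
}
```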
Finally, we guarantee synchronization in the single-producer/single-consumer model because we do not advance the tail pointer until the data has been written into the slot it was pointing to, and we do not advance the head pointer until the data has been read from the slot it was pointing to. Therefore a call to dequeue will not succeed until at least one call to enqueue has completed, and the tail pointer will never overwrite the head pointer because of the check in enqueue. Additionally, only one thread ever increments the tail pointer, and only one thread ever increments the head pointer, so there is no shared write to the same pointer that would create synchronization problems necessitating a lock or some type of atomic operation.
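In modern C++ the same design is usually expressed with std::atomic and explicit memory ordering instead of volatile, which makes it correct on weakly ordered architectures as well. This is a swapped-in modernization, not the original answer's code; all names here are illustrative:

```cpp
#include <atomic>
#include <cstddef>

// Single-producer/single-consumer ring buffer using index-based head/tail.
// release/acquire pairs make the data write visible before the index update,
// so no lock is needed as long as exactly one thread calls each side.
template <typename T, std::size_t Capacity>
class spsc_queue {
public:
    spsc_queue() : head(0), tail(0) {}

    bool enqueue(const T& obj)                           // producer thread only
    {
        std::size_t t = tail.load(std::memory_order_relaxed);
        std::size_t next = (t + 1) % Capacity;
        if (next == head.load(std::memory_order_acquire))
            return false;                                // queue full
        buffer[t] = obj;                                 // write the data first...
        tail.store(next, std::memory_order_release);     // ...then publish the slot
        return true;
    }

    bool dequeue(T& obj)                                 // consumer thread only
    {
        std::size_t h = head.load(std::memory_order_relaxed);
        if (h == tail.load(std::memory_order_acquire))
            return false;                                // queue empty
        obj = buffer[h];                                 // read the data first...
        head.store((h + 1) % Capacity, std::memory_order_release); // ...then free the slot
        return true;
    }

private:
    T buffer[Capacity];
    std::atomic<std::size_t> head;                       // first unread slot
    std::atomic<std::size_t> tail;                       // first free slot
};
```

As in the pointer version, one slot is sacrificed to distinguish full from empty, so `spsc_queue<int, 4>` holds at most three elements at a time.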
Source: https://stackoverflow.com/questions/6839425/what-will-be-the-critical-section-code-for-a-shared-queue-accessed-by-two-thread