Question
I am working on an RPC framework. I want to use a multi-io_service design to decouple the io_objects
that perform the IO (the front-end) from the threads that perform the RPC work (the back-end).
The front-end should be single-threaded and the back-end should have a thread pool. I was considering a design that synchronises the front-end and back-end using condition variables. However, it seems boost::thread and boost::asio do not commingle, i.e., it seems condition-variable async_wait support is not available. I have a question open on this matter here.
It occurred to me that io_service::post() might be used to synchronise the two io_service objects. I have attached a diagram below; I just want to know if I understand the post mechanism correctly, and whether this is a sensible implementation.
Answer 1:
I assume that you use "a single io_service and a thread pool calling io_service::run()".
Also I assume that your front-end is single-threaded just to avoid a race condition when writing from multiple threads to the same socket.
The same goal can be achieved using io_service::strand (tutorial). Your front-end can be made thread-safe by an io_service::strand. All posts from the back-end to the front-end (and handlers from the front-end to the front-end, like handle_connect etc.) should be wrapped by the strand, something like this:
back-end -> front-end:
io_service.post(front_end.strand.wrap(
boost::bind(&Front_end::send_response, front_end_ptr)));
or front-end -> front-end:
socket.async_connect(endpoint, strand.wrap(
boost::bind(&Front_end::handle_connect, shared_from_this(),
boost::asio::placeholders::error)));
And all posts from the front-end to the back-end shouldn't be wrapped by the strand.
Answer 2:
If your back-end is a thread pool calling any of the io_service::run(), io_service::run_one(), io_service::poll(), or io_service::poll_one() functions, and your handler(s) require access to shared resources, then you still have to take care to lock those shared resources somehow in the handlers themselves.
Given the limited amount of information posted in the question, I would assume this would work fine given the caveat above.
However, when posting there is some measurable overhead for setting up the necessary completion ports and waiting -- overhead you could avoid using a different implementation of your back end "queue".
Without knowing the exact details of what you need to accomplish, I would suggest that you look into Threading Building Blocks, for pipelines or perhaps more simply for a concurrent queue.
Source: https://stackoverflow.com/questions/6776779/boost-asio-multi-io-service-rpc-framework-design-rfc