I'm using Boost.Asio for a server application that I'm writing.
async_send requires the caller to keep ownership of the data being sent until the completion handler is called.
I solved a similar problem by passing a shared_ptr to my data to the handler function. Since asio holds on to the handler functor until it is called, and the handler functor keeps the shared_ptr reference, the data stays allocated as long as there is an open request on it.
edit - here's some code:
Here the connection object holds on to the current data buffer being written, so the shared_ptr is to the connection object; the bind call attaches the method functor to the object reference, and the asio call keeps the object alive.
The key is that each handler must start a new async operation with another reference, or the connection will be closed. Once the connection is done, or an error occurs, we simply stop generating new read/write requests. One caveat is that you need to make sure you check the error object in all your callbacks.
// Start an asynchronous write; the strand serializes handler execution, and
// shared_from_this() keeps this connection object alive until the handler runs.
boost::asio::async_write(
    mSocket,
    buffers,
    mHandlerStrand.wrap(
        boost::bind(
            &TCPConnection::InternalHandleAsyncWrite,
            shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred)));
void TCPConnection::InternalHandleAsyncWrite(
    const boost::system::error_code& e,
    std::size_t bytes_transferred)
{
    // ...
}
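To make the "check the error, then start the next async operation" pattern above concrete, here is a minimal sketch of what such a handler body could look like; StartNextWrite() is a hypothetical helper, not part of the original code:

void TCPConnection::InternalHandleAsyncWrite(
    const boost::system::error_code& e,
    std::size_t bytes_transferred)
{
    if (e)
    {
        // Don't start a new operation: once this handler returns, the last
        // shared_ptr reference held by asio is released and the connection
        // object is destroyed.
        return;
    }

    // Hypothetical helper that issues the next async_write, again binding
    // shared_from_this() so the connection stays alive until it completes.
    StartNextWrite();
}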
You will need one write buffer per connection. Others have suggested using a vector per connection, as in your original idea, but for simplicity I would recommend using a vector of strings with your new approach.
Boost.Asio has some special cases built around using strings with its buffers for writes, which makes them easier to work with.
Just a thought.
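As a rough sketch of that idea (mOutbox, StartWrite and HandleWrite are hypothetical names, not from the question): keep a std::vector<std::string> of pending messages per connection and hand the front string to async_write; boost::asio::buffer accepts a const std::string directly.

// Hypothetical per-connection members:
//   std::vector<std::string> mOutbox;       // pending outgoing messages
//   boost::asio::ip::tcp::socket mSocket;

void TCPConnection::StartWrite()
{
    if (mOutbox.empty())
        return;

    // The front string stays alive until the handler runs because it
    // remains stored in mOutbox.
    boost::asio::async_write(
        mSocket,
        boost::asio::buffer(mOutbox.front()),
        boost::bind(&TCPConnection::HandleWrite, shared_from_this(),
            boost::asio::placeholders::error));
}

void TCPConnection::HandleWrite(const boost::system::error_code& e)
{
    if (e)
        return;

    mOutbox.erase(mOutbox.begin());  // this message has been written
    StartWrite();                    // send the next queued string, if any
}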
Krit explained the data corruption, so I'll give you an implementation suggestion instead.
I would suggest that you use a separate vector for each send operation that is currently being executed. You probably don't want one for each connection since you might want to send several messages on the same connection sequentially without waiting for completion of the previous ones.
A possible fix would be to use a shared_ptr to hold your local vector, and change the handler's signature to receive a shared_ptr so as to prolong the life of the data until the sending is complete (thanks to Tim for pointing that out to me):
void handler(boost::shared_ptr<std::vector<char> > data)
{
    // The shared_ptr copy bound into this handler keeps the vector
    // alive until the send has completed.
}

void func()
{
    boost::shared_ptr<std::vector<char> > data(new std::vector<char>);
    // ...
    // fill data with stuff
    // ...
    socket.async_send(boost::asio::buffer(*data), boost::bind(handler, data));
}
But now I'm wondering: if I have multiple clients, will I need to create a separate vector for each connection?
Yes, though each vector does not need to be in global scope. The typical solution to this problem is to keep the buffer as a member of an object, and to bind a member function of that object as the completion handler passed to async_write. This way the buffer stays in scope for the lifetime of the asynchronous write. The asio examples are littered with this usage of binding member functions using this and shared_from_this. In general it is preferable to use shared_from_this to simplify object lifetime, especially in the face of io_service::stop() and ~io_service(). Though for simple examples, this scaffolding is often unnecessary.
The destruction sequence described above permits programs to simplify their resource management by using shared_ptr<>. Where an object's lifetime is tied to the lifetime of a connection (or some other sequence of asynchronous operations), a shared_ptr to the object would be bound into the handlers for all asynchronous operations associated with it.
A good place to start is the async echo server due to its simplicity.
// From the echo server example: write back what was just read, binding the
// completion handler to 'this'.
boost::asio::async_write(
    socket,
    boost::asio::buffer(data, bytes_transferred),
    boost::bind(
        &session::handle_write,
        this,
        boost::asio::placeholders::error));
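For comparison, here is a rough sketch of the same write in the shared_from_this style recommended above; the enable_shared_from_this base, constructor, and member names are assumptions for illustration, not quoted from the echo example:

#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>

class session : public boost::enable_shared_from_this<session>
{
public:
    explicit session(boost::asio::io_service& io_service)
        : socket_(io_service)
    {
    }

    void handle_read(const boost::system::error_code& error,
        std::size_t bytes_transferred)
    {
        if (!error)
        {
            // Binding shared_from_this() instead of 'this' keeps the session
            // (and therefore data_) alive until handle_write is invoked.
            boost::asio::async_write(
                socket_,
                boost::asio::buffer(data_, bytes_transferred),
                boost::bind(&session::handle_write, shared_from_this(),
                    boost::asio::placeholders::error));
        }
    }

    void handle_write(const boost::system::error_code& /*error*/)
    {
        // Start the next read here, or simply return and let the last
        // shared_ptr reference expire to close the connection.
    }

private:
    boost::asio::ip::tcp::socket socket_;
    boost::array<char, 1024> data_;
};

Note that for shared_from_this() to be valid, the session object itself must already be managed by a boost::shared_ptr.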
You can't use a single vector unless you send the same, constant data to all the clients (like a prompt message). This is caused by the nature of async I/O.

If you are sending, the system keeps a pointer to your buffer in its queue, along with some AIO packet struct. As soon as it's done with the previously queued send operations and there's free space in its own buffer, the system starts forming packets for your data and copies chunks of your buffer into the corresponding places in the TCP frames. So if you modify the content of your buffer along the way, you'll corrupt the data sent to the client.

If you're receiving, the system may optimize even further and feed your buffer to the NIC as a target for a DMA operation. In that case a significant number of CPU cycles can be saved on data copying, because it's done by the DMA controller. This optimization probably works only if the NIC supports hardware TCP offload, though.
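To make the corruption scenario concrete, here is a deliberately broken sketch (the fill_message_* helpers are made up for the example): one shared buffer is refilled while the previous send may still be reading from it.

#include <vector>
#include <boost/asio.hpp>

// Hypothetical helpers that build a per-client message in 'buf'.
void fill_message_for_a(std::vector<char>& buf);
void fill_message_for_b(std::vector<char>& buf);

void on_sent(const boost::system::error_code&, std::size_t) {}

std::vector<char> buffer;  // one buffer shared by every send -- the problem

void broadcast(boost::asio::ip::tcp::socket& a,
    boost::asio::ip::tcp::socket& b)
{
    fill_message_for_a(buffer);
    a.async_send(boost::asio::buffer(buffer), &on_sent);

    // BUG: the first send may still be reading from 'buffer' when the next
    // line overwrites it, so client 'a' can receive a mix of both messages.
    fill_message_for_b(buffer);
    b.async_send(boost::asio::buffer(buffer), &on_sent);
}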
UPDATE: On Windows, Boost.Asio uses overlapped WSA IO with completion notifications via IOCP.