I am developing a client-server application (TCP) on Linux using C++. I want to send more than 65,000 bytes at once, but in TCP the maximum packet size is 65,535 bytes. How can I send a larger amount of data in a single transfer?
It is possible that your problem is related to kernel socket buffer sizes. Try adding the following to your code:
int buffsize = 1024*1024;
setsockopt(s, SOL_SOCKET, SO_RCVBUF, &buffsize, sizeof(buffsize));
You might need to increase some sysctl variables too:
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.wmem_max=8388608
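Put together, a minimal sketch of this could look as follows (the helper names are mine, and setting SO_SNDBUF as well is an assumption — the snippet above only touched SO_RCVBUF):

```cpp
#include <sys/socket.h>
#include <netinet/in.h>

// Ask the kernel for larger send and receive buffers on socket s.
// Returns 0 on success, -1 on the first failing setsockopt().
int set_socket_buffers(int s, int bytes)
{
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) < 0)
        return -1;
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) < 0)
        return -1;
    return 0;
}

// Read back what was actually granted; the kernel caps the request at
// net.core.rmem_max, so this may be smaller than what was asked for.
int granted_rcvbuf(int s)
{
    int actual = 0;
    socklen_t len = sizeof(actual);
    getsockopt(s, SOL_SOCKET, SO_RCVBUF, &actual, &len);
    return actual;
}
```

Checking the granted size with getsockopt() is worthwhile, because setsockopt() succeeds even when the request was silently clamped to the sysctl limit.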
Note, however, that relying on TCP to fill your whole buffer is generally a bad idea; you should call recv() multiple times instead. The only good reason to receive more than 64K in one call is performance, and Linux already has auto-tuning that progressively increases the buffer sizes as required.
I would suggest exploring kqueue or something similar. With event notification there is no need to loop on recv
. Just call a simple read function upon an EV_READ
event and use a single call to the recv
function on the socket that triggered the event. Your buffer can be 10 bytes or however large you want; it doesn't matter, because if you did not read the entire message the first time around, you'll simply get another EV_READ event on the socket and your read function is called again. When the peer closes the connection you'll get an EOF notification (the EV_EOF flag). No need to hassle with loops that may or may not block other incoming connections.
Judging from the comments above, it seems you don't understand how recv
works, or how it is supposed to be used.
You really want to call recv
in a loop, until either you know that the expected amount of data has been received or until you get a "zero bytes read" result, which means the other end has closed the connection. Always, no exceptions.
If you need to do other things concurrently (likely, with a server process!) then you will probably want to check descriptor readiness with poll
or epoll
first. That lets you multiplex sockets as they become ready.
The reason why you want to do it that way, and never any differently, is that you don't know how the data will be packetized and how (or when) packets will arrive. Plus, recv
gives no guarantee about the amount of data read at a time. It will offer what it has in its buffers at the time you call it, no more and no less (it may block if there's nothing, but then you still don't have a guarantee that any particular amount of data will be returned when it resumes, it may still return e.g. 50 bytes!).
Even if you only send, say, 5,000 bytes total, it is perfectly valid behaviour for TCP to break this into 5 (or 10, or 20) packets, and for recv
to return 500 (or 100, or 20, or 1) bytes at a time, every time you call it. That's just how it works.
TCP guarantees that anything you send will eventually arrive at the other end or produce an error. And, it guarantees that whatever you send arrives in order. It does not guarantee much else. Above all, it does not guarantee that any particular amount of data is ready at any given time.
You must be prepared for that, and the only way to do it is calling recv
repeatedly. Otherwise you will always lose data under some circumstances.
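When the expected length is known, the "call recv repeatedly" rule boils down to a small helper like this (a sketch; the function name is mine):

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <cstddef>

// Loop until exactly n bytes have arrived. Returns the byte count
// actually read (less than n if the peer closed early), or -1 on
// error. It makes no assumption about how the stream was chopped up.
ssize_t recv_exact(int fd, char *buf, size_t n)
{
    size_t got = 0;
    while (got < n) {
        ssize_t r = recv(fd, buf + got, n - got, 0);
        if (r < 0)  return -1;   // error
        if (r == 0) break;       // peer closed the connection
        got += (size_t)r;
    }
    return (ssize_t)got;
}
```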
MSG_WAITALL
should in principle make it work the way you expect, but that is bad behaviour, and it is not guaranteed to work. If the socket (or some other structure in the network stack) runs against a soft or hard limit, it may not, and probably will not fulfill your request. Some limits are obscure, too. For example, the number for SO_RCVBUF
must be twice as large as what you expect to receive under Linux, because of implementation details.
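That doubling is easy to observe (a sketch of Linux-specific behaviour: the kernel doubles the requested value to allow for bookkeeping overhead, and getsockopt() reports the doubled figure):

```cpp
#include <sys/socket.h>
#include <unistd.h>

// Request a receive buffer of `requested` bytes on a throwaway socket
// and return what the kernel reports back. On Linux, getsockopt()
// returns twice the value passed to setsockopt().
int effective_rcvbuf(int requested)
{
    int s = socket(AF_UNIX, SOCK_STREAM, 0);
    setsockopt(s, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));
    int actual = 0;
    socklen_t len = sizeof(actual);
    getsockopt(s, SOL_SOCKET, SO_RCVBUF, &actual, &len);
    close(s);
    return actual;
}
```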
Correct behaviour of a server application should never depend on assumptions such as "it fits into the receive buffer". Your application needs to be prepared, in principle, to receive terabytes of data using a 1 kilobyte receive buffer, and in chunks of 1 byte at a time, if need be. A larger receive buffer will make it more efficient, but that's it... it still has to work either way.
The fact that you only see failures upwards of some "huge" limit is just luck (or rather, bad luck). The fact that it apparently "works fine" up to that limit suggests that what you do is correct, but it isn't. It's an unlucky coincidence that it works.
EDIT:
As requested in the comment below, here is what this could look like (the code is obviously untested, caveat emptor):
std::vector<char> result;
ssize_t size;   // recv() returns ssize_t, not int
char recv_buf[250];
for(;;)
{
    if((size = recv(fd, recv_buf, sizeof(recv_buf), 0)) > 0)
    {
        // Append whatever arrived this time, however little it is.
        result.insert(result.end(), recv_buf, recv_buf + size);
    }
    else if(size == 0)   // orderly shutdown by the peer
    {
        if(result.size() < expected_size)
        {
            printf("premature close, expected %zu, only got %zu\n",
                   (size_t)expected_size, result.size());
        }
        else
        {
            do_something_with(result);
        }
        break;
    }
    else   // size < 0: an error occurred
    {
        perror("recv");
        exit(1);
    }
}
That will receive any amount of data you want (or until operator new
throws bad_alloc
after allocating a vector several hundred MiB in size, but that's a different story...).
If you want to handle several connections, you need to add poll
or epoll
or kqueue
or a similar functionality (or... fork
), which I'll leave as an exercise for the reader.
In TCP the max packet size is 65,535 bytes.
No it isn't. TCP is a byte-stream protocol, layered over segments carried in IP packets, and it imposes no limit on the amount of data transmitted over a single connection. Look at all those 100MB downloads: how do you think they work?
Just send and receive the data. You'll get it.
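One caveat on the sending side: send() has the same property as recv() and may accept fewer bytes than asked, so the write side needs the same loop. A sketch (the helper name is mine):

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <cstddef>

// Hand the whole buffer to the kernel, looping over partial sends.
// Returns n on success, -1 on error.
ssize_t send_all(int fd, const char *buf, size_t n)
{
    size_t sent = 0;
    while (sent < n) {
        ssize_t r = send(fd, buf + sent, n - sent, 0);
        if (r < 0) return -1;
        sent += (size_t)r;
    }
    return (ssize_t)sent;
}
```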