When working with sockets in Java, how can you tell whether the client has finished sending all of the (binary) data before you start processing it? Consider for example:
As some people have already said, you can't avoid some kind of protocol for the communication. It could look like this, for example:
On the server side you have:
void sendMSG(PrintWriter out) {
    try {
        // just for example..
        Process p = Runtime.getRuntime().exec("cmd /c dir C:");
        BufferedReader br = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
        // and then send all of this to the client, line by line
        String s;
        while ((s = br.readLine()) != null) {
            out.println("MSG");
            out.println(s);
        }
    } catch (Exception e) {
        System.out.println("Command incorrect!");
    }
    out.println("END"); // tells the client there is nothing more to read
}
//You are not supposed to close the stream or the socket here, because you might want to send something else later.
On the client side you have:
void receiveMSG(BufferedReader in) {
    try {
        String line;
        // keep reading as long as the server announces another "MSG";
        // the loop stops on "END" (or if the stream ends)
        while ((line = in.readLine()) != null && line.equals("MSG")) {
            System.out.println(in.readLine());
        }
    } catch (IOException e) {
        System.out.println("Connection closed!");
    }
}
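For completeness, here is a rough sketch of how the two methods above might be wired up with plain sockets. The port number (5000) and "localhost" are placeholders, and the try-with-resources blocks assume Java 7 or later:

import java.io.*;
import java.net.*;

// Server side: accept one connection and reuse the same writer for all messages.
try (ServerSocket server = new ServerSocket(5000);
     Socket client = server.accept();
     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
    sendMSG(out); // the method shown above
}

// Client side: connect and hand the reader to receiveMSG.
try (Socket socket = new Socket("localhost", 5000);
     BufferedReader in = new BufferedReader(
             new InputStreamReader(socket.getInputStream()))) {
    receiveMSG(in); // the method shown above
}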
Yes, you're right - using available() like this is unreliable. Personally I very rarely use available(). If you want to read until you reach the end of the stream (as per the question title), keep calling read() until it returns -1. That's the easy bit. The hard bit is if you don't want the end of the stream, but the end of "what the server wants to send you at the moment."
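For reference, a minimal sketch of that "read until -1" loop, assuming the sender closes (or shuts down) its side of the connection when it is done, so the reader actually sees end of stream:

import java.io.*;

byte[] readAll(InputStream in) throws IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    byte[] chunk = new byte[8192];
    int n;
    while ((n = in.read(chunk)) != -1) { // -1 means the stream has ended
        buffer.write(chunk, 0, n);
    }
    return buffer.toByteArray();
}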
As the others have said, if you need to have a conversation over a socket, you must make the protocol explain where the data finishes. Personally I prefer the "length prefix" solution to the "end of message token" solution where it's possible - it generally makes the reading code a lot simpler. However, it can make the writing code harder, as you need to work out the length before you send anything. This is a pain if you could be sending a lot of data.
Of course, you can mix and match solutions - in particular, if your protocol deals with both text and binary data, I would strongly recommend length-prefixing strings rather than null-terminating them (or anything similar). Decoding string data tends to be a lot easier if you can pass the decoder a complete array of bytes and just get a string back - you don't need to worry about reading to half way through a character, for example. You could use this as part of your protocol but still have overall "records" (or whatever you're transmitting) with an "end of data" record to let the reader process the data and respond.
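As an illustration of length-prefixing strings, here is a rough sketch using DataOutputStream / DataInputStream; the method names and the 4-byte length prefix are just one possible choice, not a fixed API:

import java.io.*;
import java.nio.charset.StandardCharsets;

void writeString(DataOutputStream out, String s) throws IOException {
    byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
    out.writeInt(bytes.length); // length prefix
    out.write(bytes);           // payload
}

String readString(DataInputStream in) throws IOException {
    int length = in.readInt();  // read the prefix first...
    byte[] bytes = new byte[length];
    in.readFully(bytes);        // ...then exactly that many bytes
    return new String(bytes, StandardCharsets.UTF_8);
}

Because the reader always knows exactly how many bytes to expect, it never has to guess whether a multi-byte character was split across reads.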
Of course, all of this protocol design stuff is moot if you're not in control of the protocol :(
I think this is more a task for a protocol, assuming that you are the one writing both the transmitting and receiving sides of the application. For example, you could implement a simple logical protocol and divide your data into packets, then divide each packet into two parts: the head and the body. The head could consist of a predefined starting sequence and contain the number of bytes in the body. Or forget about the starting sequence and simply transfer the number of bytes in the body as the first bytes of the packet. Then you could solve your problem.
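A rough sketch of that head/body framing, assuming a 4-byte length header (a single byte would limit the body to 255 bytes); the zero-length packet used as an end marker is just one possible convention, not part of the answer above:

import java.io.*;

void sendPacket(DataOutputStream out, byte[] body) throws IOException {
    out.writeInt(body.length); // head: number of bytes in the body
    out.write(body);           // body
}

void receivePackets(DataInputStream in) throws IOException {
    int bodyLength;
    while ((bodyLength = in.readInt()) > 0) { // 0 signals "no more packets"
        byte[] body = new byte[bodyLength];
        in.readFully(body);                   // read exactly bodyLength bytes
        // process the packet here
    }
}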
As Nikita said, this is more a task for the protocol. Either you can go with the header-and-body approach, or you can send a special character or symbol for end of stream to break the processing loop, for example sending '[[END]]' on the socket to denote the end of the stream.