I have the following code, tested with Python 2.7 and Python 3.5, on Linux/Mac, on remote and local machines:
import socket
import time

s = socket.socket()
s.connect(('127.0.0.1', 8080))  # a netcat listener is running on this port
time.sleep(1)                   # the listener is killed during this pause
s.sendall(b'hello')             # completes without raising an exception
Because the client side represented by your python code does not yet know that the server's socket is closed and gone. Think of the TCP connection as two separate channels: one in each direction. The TCP protocol allows each side to close its sending side of the connection at a time of its own choosing. And TCP has no way of indicating to the other side that future sends in the other direction will not be accepted.
In more detail: ordinary termination of a TCP session involves each side sending a packet with the FIN flag set. That flag indicates that the peer sending the FIN will not send any more data. It does not indicate that the receiver of the FIN packet may not send any more data.
So, what happens here is that when you kill netcat, a FIN packet is sent from the server side to the client (and acknowledged by the network stack on behalf of the client). That closes the server=>client direction of the connection. However, as far as the client side knows, the client=>server direction is still usable. Later, when your client attempts to send data, a packet containing the data is sent, and the network stack on the server side immediately responds with a RST packet, telling the client that the server is no longer there. However, your sendall call has long since returned by the time that RST is received.
So, if your client were to sleep another second (or actually just a small fraction of a second) and then attempt another send, that subsequent send would raise an exception.
It might be instructive to create a packet capture of the entire session and study it with Wireshark. Run this in another window before invoking your Python code (then kill it with Ctrl-C afterward):
sudo tcpdump -i lo -w /tmp/f.pcap port 8080
You can also use Wireshark to capture the traffic, specifying the same port 8080 as a capture filter, or tcp.port == 8080 as a display filter.