boost::asio::ip::tcp::socket is connected?

野趣味 2020-12-23 22:16

I want to verify the connection status before performing read/write operations.

Is there a way to make an isConnect() method?

I saw this, but it seems "ugly".

4 Answers
  • 2020-12-23 22:52

    TCP promises to watch for dropped packets -- retrying as appropriate -- to give you a reliable connection, for some definition of reliable. Of course, TCP can't handle cases where the server crashes, your Ethernet cable falls out, or something similar occurs. Additionally, knowing that your TCP connection is up doesn't necessarily mean that the protocol running over it is ready (e.g., your HTTP web server or your FTP server may be in some broken state).

    If you know the protocol being sent over TCP, then there is probably a way within that protocol to tell you whether things are in good shape (for HTTP it would be a HEAD request).
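
    For instance, a minimal sketch of such a probe over a raw boost::asio socket, assuming an already connected tcp::socket named sock and an illustrative host name (a real client would parse the full response rather than just the status line):

    #include <boost/asio.hpp>
    #include <iostream>
    #include <string>
    
    namespace asio = boost::asio;
    using tcp = asio::ip::tcp;
    
    // Probe an already-connected socket with an HTTP HEAD request.
    // Returns true if the server answers with a status line.
    bool http_head_probe(tcp::socket& sock, const std::string& host)
    {
      boost::system::error_code ec;
    
      std::string request = "HEAD / HTTP/1.1\r\n"
                            "Host: " + host + "\r\n"
                            "Connection: keep-alive\r\n\r\n";
    
      asio::write(sock, asio::buffer(request), ec);
      if (ec) return false;                        // writing already failed
    
      asio::streambuf response;
      asio::read_until(sock, response, "\r\n\r\n", ec);  // whole header block
      if (ec) return false;
    
      std::istream is(&response);
      std::string status_line;
      std::getline(is, status_line);               // e.g. "HTTP/1.1 200 OK"
      std::cerr << "Server replied: " << status_line << std::endl;
      return true;
    }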

  • 2020-12-23 23:00

    If you are sure that the remote socket has not sent anything (e.g. because you haven't sent a request to it yet), then you can set your local socket to non-blocking mode and try to read one or more bytes from it.

    Given that the server hasn't sent anything, you'll either get an asio::error::would_block or some other error. In the former case, your local socket has not yet detected a disconnection; in the latter, the socket has been closed.

    Here is some example code:

    #include <iostream>
    #include <boost/asio.hpp>
    #include <boost/asio/spawn.hpp>
    #include <boost/asio/steady_timer.hpp>
    
    using namespace std;
    using namespace boost;
    using tcp = asio::ip::tcp;
    
    template<class Duration>
    void async_sleep(asio::io_service& ios, Duration d, asio::yield_context yield)
    {
      auto timer = asio::steady_timer(ios);
      timer.expires_from_now(d);
      timer.async_wait(yield);
    }
    
    int main()
    {
      asio::io_service ios;
      tcp::acceptor acceptor(ios, tcp::endpoint(tcp::v4(), 0));
    
      boost::asio::spawn(ios, [&](boost::asio::yield_context yield) {
        tcp::socket s(ios);
        acceptor.async_accept(s, yield);
        // Keep the socket from going out of scope for 5 seconds.
        async_sleep(ios, chrono::seconds(5), yield);
      });
    
      boost::asio::spawn(ios, [&](boost::asio::yield_context yield) {
        tcp::socket s(ios);
        s.async_connect(acceptor.local_endpoint(), yield);
    
        // This is essential to make the `read_some` function not block.
        s.non_blocking(true);
    
        while (true) {
          system::error_code ec;
          char c;
          // Unfortunately, this only works when the buffer has non
          // zero size (tested on Ubuntu 16.04).
          s.read_some(asio::mutable_buffer(&c, 1), ec);
          if (ec && ec != asio::error::would_block) break;
          cerr << "Socket is still connected" << endl;
          async_sleep(ios, chrono::seconds(1), yield);
        }
    
        cerr << "Socket is closed" << endl;
      });
    
      ios.run();
    }
    

    And the output:

    Socket is still connected
    Socket is still connected
    Socket is still connected
    Socket is still connected
    Socket is still connected
    Socket is closed
    

    Tested on:

    Ubuntu: 16.04
    Kernel: 4.15.0-36-generic
    Boost: 1.67

    I don't know, though, whether this behavior depends on any of those versions.

  • 2020-12-23 23:05

    You can send a dummy byte on the socket and see whether it returns an error.
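
    A minimal sketch of that idea, assuming an already connected tcp::socket named sock; keep in mind that the byte really does go onto the stream (so the peer's protocol has to tolerate it), and that a successful write only means the data reached local buffers, not that the peer is still there:

    #include <boost/asio.hpp>
    
    namespace asio = boost::asio;
    using tcp = asio::ip::tcp;
    
    // Returns false if writing a single byte fails immediately, i.e. the
    // connection is already known to be broken on this side.
    bool probe_with_dummy_byte(tcp::socket& sock)
    {
      boost::system::error_code ec;
      const char dummy = '\0';              // assumption: the peer ignores it
      asio::write(sock, asio::buffer(&dummy, 1), ec);
      return !ec;
    }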

  • 2020-12-23 23:14

    TCP is meant to be robust in the face of a harsh network; even though TCP provides what looks like a persistent end-to-end connection, it's all just a lie; each packet is really just a separate, unreliable datagram.

    The connections are really just virtual conduits created with a little state tracked at each end of the connection (source and destination ports and addresses, and the local socket). The network stack uses this state to know which process to give each incoming packet to and what state to put in the header of each outgoing packet.

    (Figure: Virtual TCP Conduit)

    Because of the underlying nature of the network (inherently connectionless and unreliable), the stack will only report a severed connection when the remote end sends a FIN packet to close the connection, or when it doesn't receive an ACK response to a sent packet (after a timeout and a couple of retries).

    Because of the asynchronous nature of asio, the easiest way to be notified of a graceful disconnection is to have an outstanding async_read, which will return error::eof immediately when the connection is closed. But this alone still leaves problems like half-open connections and network failures going undetected.
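
    A minimal sketch of that pattern, with illustrative names, assuming a connected tcp::socket and a buffer that both outlive the outstanding operation:

    #include <boost/asio.hpp>
    #include <array>
    #include <cstddef>
    #include <iostream>
    
    namespace asio = boost::asio;
    using tcp = asio::ip::tcp;
    
    // Keep one read outstanding purely to learn about disconnection.
    void watch_for_disconnect(tcp::socket& sock, std::array<char, 512>& buf)
    {
      sock.async_read_some(asio::buffer(buf),
        [&sock, &buf](const boost::system::error_code& ec, std::size_t n) {
          if (ec == asio::error::eof) {
            std::cerr << "Peer closed the connection gracefully" << std::endl;
          } else if (ec) {
            std::cerr << "Connection error: " << ec.message() << std::endl;
          } else {
            // Real data arrived; hand off `n` bytes, then keep watching.
            watch_for_disconnect(sock, buf);
          }
        });
    }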

    The most effective way to work around unexpected connection interruption is to use some sort of keep-alive or ping. This occasional attempt to transfer data over the connection allows prompt detection of an unintentionally severed connection.

    The TCP protocol actually has a built-in keep-alive mechanism, which can be enabled in asio via asio::socket_base::keep_alive. The nice thing about TCP keep-alive is that it's transparent to the user-mode application, and only the peer interested in keep-alive needs to configure it. The downside is that the timeout parameters are not exposed as portable asio socket options; configuring them requires OS-level or platform-specific settings, and the default idle timeout is quite large (7200 seconds on Linux).
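
    As an illustration, a sketch of enabling it from asio on a connected tcp::socket named sock; the portable part is the socket option, while the idle/interval/count timeouts shown via setsockopt are Linux-specific and sit outside asio's interface:

    #include <boost/asio.hpp>
    #ifdef __linux__
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>   // TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT
    #endif
    
    namespace asio = boost::asio;
    using tcp = asio::ip::tcp;
    
    void enable_keep_alive(tcp::socket& sock)
    {
      // Portable part: turn on SO_KEEPALIVE.
      sock.set_option(asio::socket_base::keep_alive(true));
    
    #ifdef __linux__
      // Non-portable part: shrink the default timers (Linux defaults to a
      // 7200 s idle time). This goes through the native handle, not asio.
      int fd = sock.native_handle();
      int idle = 60, interval = 10, count = 3;
      ::setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));
      ::setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
      ::setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));
    #endif
    }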

    Probably the most common method of keep-alive is to implement it at the application layer, where the application defines a special noop or ping message that the peer does nothing with except respond to it. This method gives you the most flexibility in implementing a keep-alive strategy.
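
    For illustration only, a sketch of a timer-driven application-level ping, assuming the peer understands (and ignores or answers) a hypothetical one-byte noop message:

    #include <boost/asio.hpp>
    #include <boost/asio/steady_timer.hpp>
    #include <chrono>
    #include <iostream>
    
    namespace asio = boost::asio;
    using tcp = asio::ip::tcp;
    
    // Periodically write a one-byte ping; the first failed write tells us
    // the connection is gone. Both `sock` and `timer` must outlive the loop.
    void start_ping(tcp::socket& sock, asio::steady_timer& timer)
    {
      timer.expires_from_now(std::chrono::seconds(15));
      timer.async_wait([&sock, &timer](const boost::system::error_code& ec) {
        if (ec) return;                      // timer was cancelled
        static const char ping = 'p';        // hypothetical noop message
        boost::system::error_code write_ec;
        asio::write(sock, asio::buffer(&ping, 1), write_ec);
        if (write_ec) {
          std::cerr << "Keep-alive failed: " << write_ec.message() << std::endl;
          return;
        }
        start_ping(sock, timer);             // schedule the next ping
      });
    }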
