vmsplice() and TCP

Asked 2021-02-05 07:26

In the original vmsplice() implementation, it was suggested that if you had a user-land buffer 2x the maximum number of pages that could fit in a pipe, a successful vmsplice() on the second half of the buffer would guarantee that the kernel was done using the first half, so an application could safely double-buffer between the two halves. Does that guarantee still hold when the pipe is spliced onward to a TCP socket?
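
A minimal sketch of that double-buffering scheme (the function and parameter names here are illustrative, not from the original example code): the buffer is split into two halves, each no larger than the pipe capacity, and the sender alternates halves on the assumption that a successful vmsplice() of one half implies the kernel has released the other.

    #include <fcntl.h>    // ::vmsplice(), ::splice(); needs _GNU_SOURCE (on by default with g++)
    #include <sys/uio.h>  // struct iovec
    #include <unistd.h>

    // Push one half of the double buffer into the pipe, then on to the socket.
    static bool send_half(int pipe_w, int pipe_r, int sock_fd, char* half, size_t len) {
        struct iovec iov = { half, len };
        ssize_t n = ::vmsplice(pipe_w, &iov, 1, 0);   // map the user pages into the pipe
        while (n > 0) {
            ssize_t m = ::splice(pipe_r, NULL, sock_fd, NULL, n, 0);  // pipe -> TCP socket
            if (m <= 0)
                return false;
            n -= m;
        }
        return n == 0;                                // false if vmsplice() itself failed
    }

    // The caller alternates: send_half(A), send_half(B), then reuses A, on the
    // assumption that the successful vmsplice() of B proves A is no longer
    // referenced -- the assumption the answer below shows does not hold for TCP.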

1 Answer
  • Answered 2021-02-05 08:19

    Yes: because the TCP socket holds on to the pages for an indeterminate time, you cannot use the double-buffering scheme mentioned in the example code. Also, in my use case the pages come from a circular buffer, so I cannot gift the pages to the kernel and allocate fresh ones. I can verify that I am seeing data corruption in the received data.
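
    For reference, a minimal sketch of the gift-and-reallocate approach ruled out above (page, pageLen, and allocate_fresh_buffer() are hypothetical names of mine): the pages are handed to the kernel with SPLICE_F_GIFT and never touched again, which only works if you can afford a fresh, page-aligned buffer for every send.

    struct iovec iov = { page, pageLen };             // page-aligned, length a multiple of the page size
    // SPLICE_F_GIFT hands ownership of the pages to the kernel; we must not reuse them.
    ssize_t n = ::vmsplice(mVmsplicePipe.fd.w, &iov, 1, SPLICE_F_GIFT);
    if (n >= 0)
        page = allocate_fresh_buffer(pageLen);        // hypothetical replacement allocation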

    I resorted to polling the level of the TCP socket's send queue until it drains to 0. This fixes the data corruption, but it is suboptimal because waiting for the send queue to drain completely hurts throughput.

    n = ::vmsplice(mVmsplicePipe.fd.w, &iov, 1, 0);   // n, m are ssize_t
    while (n > 0) {
        // splice pipe contents to the TCP socket
        m = ::splice(mVmsplicePipe.fd.r, NULL, mFd, NULL, n, 0);
        if (m < 0)
            break;                                    // splice error; bail out instead of underflowing n
        n -= m;
    }
    
    while (1) {
        int outsize = 0;
        int result;
    
        usleep(20000);
    
        // SIOCOUTQ reports how many bytes are still sitting in the socket's send queue
        result = ::ioctl(mFd, SIOCOUTQ, &outsize);
        if (result == 0) {
            LOG_NOISE("outsize %d", outsize);
        } else {
            LOG_ERR_PERROR("SIOCOUTQ");
            break;
        }
        //if (outsize <= (bufLen >> 1)) {
        if (outsize == 0) {
            LOG("send queue drained, outsize %d", outsize);
            break;
        }
    }
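
    For completeness, here is a self-contained version of the same workaround (the function and variable names are mine, not from the production code above): splice the user buffer to the socket, then poll SIOCOUTQ until the send queue is empty before letting the caller reuse the buffer.

    #include <fcntl.h>          // ::vmsplice(), ::splice(); needs _GNU_SOURCE (on by default with g++)
    #include <sys/uio.h>        // struct iovec
    #include <sys/ioctl.h>      // ::ioctl()
    #include <linux/sockios.h>  // SIOCOUTQ
    #include <unistd.h>         // ::usleep()

    // Send len bytes at buf over sock_fd via the pipe, then wait for the TCP
    // send queue to drain so the caller may safely reuse or overwrite buf.
    // Returns 0 on success, -1 on error (errno is set by the failing call).
    static int send_and_drain(int sock_fd, int pipe_r, int pipe_w, void* buf, size_t len) {
        struct iovec iov = { buf, len };

        ssize_t n = ::vmsplice(pipe_w, &iov, 1, 0);
        if (n < 0)
            return -1;
        while (n > 0) {
            ssize_t m = ::splice(pipe_r, NULL, sock_fd, NULL, n, 0);
            if (m <= 0)
                return -1;
            n -= m;
        }

        // The socket may still reference the pages backing buf; poll the
        // queued-byte count until it reaches zero before returning.
        for (;;) {
            int outq = 0;
            if (::ioctl(sock_fd, SIOCOUTQ, &outq) < 0)
                return -1;
            if (outq == 0)
                return 0;
            ::usleep(20000);    // same 20 ms poll interval as the loop above
        }
    }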
    