Thread-safety when accessing data from N threads in the context of an async TCP server


Question


As the title says, I have a question concerning the following scenario (simplified example):

Assume that I have an object of the Generator class below, which continuously updates its dataChunk member (running in the main thread).

class Generator
{
  void generateData();      // continuously refreshes dataChunk (main thread)
  uint8_t dataChunk[999];
};

Furthermore, I have an async acceptor of TCP connections to which 1..N clients can connect (running in a second thread). The acceptor starts a new thread for each new client connection, in which an object of the Connection class below receives a request message from the client and provides a fraction of the dataChunk (belonging to the Generator) as an answer, then waits for the next request, and so on...

class Connection
{
  void setDataChunk(uint8_t* dataChunk); // pointer to the Generator's current chunk
  void handleRequest();                  // answers one client request from that chunk
  uint8_t* dataChunk;
};

Finally, the actual question: the desired behaviour is that the Generator object generates a new dataChunk and then waits until all 1..N Connection objects have dealt with their client requests before it generates the next one.

How do I lock the dataChunk against write access by the Generator object while the Connection objects deal with their requests, while still allowing all Connection objects in their respective threads to read it concurrently during their request-handling phase?

On the other hand, the Connection objects are supposed to wait for a new dataChunk after dealing with their respective request, without dropping any new client requests.

--> I think a single mutex won't do the trick here.

My first idea was to share a struct between the objects, with a semaphore for the Generator and a vector of semaphores for the connections. With these, every object could "understand" the state of the full system and work accordingly.
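Roughly, that idea could look like the following C++20 sketch: one "chunk ready" semaphore per connection and one shared "done" counter for the Generator. It assumes a fixed number of connections, and all names are invented for illustration.

#include <cstddef>
#include <memory>
#include <semaphore>
#include <vector>

// Hypothetical shared coordination object between one Generator and a fixed
// number N of Connections (names invented for this sketch).
struct ChunkBarrier {
    explicit ChunkBarrier(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            ready.push_back(std::make_unique<std::binary_semaphore>(0));
    }

    // Generator: call right after generateData() has filled dataChunk.
    void chunkPublished() {
        for (auto& s : ready) s->release();          // wake every Connection
        for (std::size_t i = 0; i < ready.size(); ++i)
            done.acquire();                          // wait until each one has replied
        // only now is it safe to overwrite dataChunk again
    }

    // Connection i: block until a fresh chunk is available ...
    void waitForChunk(std::size_t i) { ready[i]->acquire(); }
    // ... and signal that the reply for the current chunk has been sent.
    void finishedReading() { done.release(); }

    std::vector<std::unique_ptr<std::binary_semaphore>> ready; // one per Connection
    std::counting_semaphore<> done{0};               // "requests served" counter
};

Each Connection would wrap its read of dataChunk in waitForChunk / finishedReading; since no lock is held while reading, all connections can read concurrently, and the Generator only writes again once every done.release() has arrived.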

What do you guys think, what is best practice in cases like this?

Thanks in advance!


Answer 1:


There are several ways to solve it.

You can use std::shared_mutex.

#include <shared_mutex>

void Connection::handleRequest()
{
    while (true)
    {
        // many Connections may hold the shared (read) lock at the same time
        std::shared_lock<std::shared_mutex> lock(GeneratorObj.shared_mutex);
        if (GeneratorObj.DataIsAvailable()) // we need to know that data is available
        {
            // Send to client
            break;
        }
    }
}

void Generator::generateData()
{
    // exclusive (write) lock on the Generator's own mutex:
    // blocks until all readers have released their shared locks
    std::unique_lock<std::shared_mutex> lock(this->shared_mutex);

    // Generate data
}
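To avoid the busy-wait in the while(true) loop, the same std::shared_mutex can be paired with a std::condition_variable_any and a chunk counter. A minimal sketch, where the member names mtx, cv and chunkId are assumptions and not part of the original classes:

#include <condition_variable>
#include <cstdint>
#include <shared_mutex>

struct Generator {
    void generateData();
    uint8_t dataChunk[999];

    std::shared_mutex mtx;              // exclusive for the writer, shared for readers
    std::condition_variable_any cv;     // works with shared_lock, unlike condition_variable
    uint64_t chunkId = 0;               // incremented for every fresh chunk
};

void Generator::generateData() {
    std::unique_lock<std::shared_mutex> lock(mtx);  // blocks until all readers are gone
    // ... fill dataChunk ...
    ++chunkId;
    cv.notify_all();                                // announce the new chunk
}

// Connection side: sleep instead of spinning until a chunk newer than the one
// last sent becomes available, then read it under a shared lock.
void handleRequest(Generator& gen, uint64_t& lastSent) {
    std::shared_lock<std::shared_mutex> lock(gen.mtx);
    gen.cv.wait(lock, [&] { return gen.chunkId != lastSent; });
    // ... send gen.dataChunk to the client ...
    lastSent = gen.chunkId;
}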

Or you can use a boost::lockfree::queue, but the data structures will be different.
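If the queue route is taken, one possible (hypothetical) shape is to hand heap-allocated chunks from the Generator thread to the Connection threads by pointer; note that this distributes each chunk to exactly one consumer rather than broadcasting it to all of them:

#include <boost/lockfree/queue.hpp>
#include <array>
#include <cstdint>

using Chunk = std::array<uint8_t, 999>;

// boost::lockfree::queue requires trivially copyable elements with a trivial
// destructor, so raw pointers are queued and ownership is passed explicitly.
boost::lockfree::queue<Chunk*> chunkQueue(128);

// Generator thread: publish one freshly generated chunk.
void publishChunk() {
    auto* chunk = new Chunk{};
    // ... fill *chunk ...
    while (!chunkQueue.push(chunk)) { /* queue full: retry (or drop the chunk) */ }
}

// Connection thread: try to take the next chunk; returns false if none is ready.
bool consumeChunk() {
    Chunk* chunk = nullptr;
    if (!chunkQueue.pop(chunk))
        return false;
    // ... send *chunk to the client ...
    delete chunk;    // the consumer owns the chunk after a successful pop()
    return true;
}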




Answer 2:


How do I lock the dataChunk against write access by the Generator object while the Connection objects deal with their requests, while still allowing all Connection objects in their respective threads to read it concurrently during their request-handling phase?

I'd make a logical chain of operations that includes the generation.

Here's a sample:

  • it is completely single-threaded
  • it accepts unbounded connections and deals with dropped connections
  • it uses a deadline_timer object to signal a barrier while waiting for the send of a chunk to (many) connections to complete. This makes it convenient to put the generateData call in an async call chain (a stripped-down illustration of this idiom follows, just before the full sample).
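The barrier trick in isolation, as a small self-contained toy (not the answer's code; the three posted handlers stand in for completed async_write calls):

#include <boost/asio.hpp>
#include <iostream>

namespace ba = boost::asio;
using boost::system::error_code;

int main() {
    ba::io_service svc;
    ba::deadline_timer barrier{svc};
    int pending = 3;                                   // pretend 3 sends are in flight

    barrier.expires_at(boost::posix_time::pos_infin);  // arm: never expires by itself
    barrier.async_wait([](error_code) {                // completes with operation_aborted
        std::cout << "barrier released, generate next chunk\n";
    });

    for (int i = 0; i < 3; ++i)
        svc.post([&] {                                 // stand-in for a write-completion handler
            if (--pending == 0)
                barrier.expires_at(boost::posix_time::neg_infin); // release: aborts the wait
        });

    svc.run();
}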

Live On Coliru

#include <boost/asio.hpp>
#include <cassert>
#include <list>
#include <iostream>

namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;

using Clock = std::chrono::high_resolution_clock;
using Duration = Clock::duration;
using namespace std::chrono_literals;

struct Generator {
    void generateData();
    uint8_t dataChunk[999];
};

struct Server {
    Server(unsigned short port) : _port(port) {
        _barrier.expires_at(boost::posix_time::neg_infin);

        _acc.set_option(tcp::acceptor::reuse_address());
        accept_loop();
    }

    void generate_loop() {
        assert(n_sending == 0);

        garbage_collect(); // remove dead connections, don't interfere with sending

        if (_socks.empty()) {
            std::clog << "No more connections; pausing Generator\n";
        } else {
            _gen.generateData();
            _barrier.expires_at(boost::posix_time::pos_infin);

            for (auto& s : _socks) {
                ++n_sending;
                ba::async_write(s, ba::buffer(_gen.dataChunk), [this,&s](error_code ec, size_t written) {
                    assert(n_sending);
                    --n_sending; // even if failed, decreases pending operation
                    if (ec) {
                        std::cerr << "Write: " << ec.message() << "\n";
                        s.close();
                    }
                    std::clog << "Written: " << written << ", " << n_sending << " to go\n";

                    if (!n_sending) {
                        // green light to generate next chunk
                        _barrier.expires_at(boost::posix_time::neg_infin);
                    }
                });
            }

            _barrier.async_wait([this](error_code ec) {
                if (ec && ec != ba::error::operation_aborted)
                    std::cerr << "Client activity: " << ec.message() << "\n";
                else generate_loop();
            });
        }
    }

    void accept_loop() {
        _acc.async_accept(_accepting, [this](error_code ec) {
                if (ec) {
                    std::cerr << "Accept fail: " << ec.message() << "\n";
                } else {
                    std::clog << "Accepted: " << _accepting.remote_endpoint() << "\n";
                    _socks.push_back(std::move(_accepting));

                    if (_socks.size() == 1) // first connection?
                        generate_loop();    // start generator

                    accept_loop();
                }
            });
    }

    void run_for(Duration d) {
        _svc.run_for(d);
    }

    void garbage_collect() {
        _socks.remove_if([](tcp::socket& s) { return !s.is_open(); });
    }
  private:
    ba::io_service _svc;
    unsigned short _port;
    tcp::acceptor _acc { _svc, { {}, _port } };
    tcp::socket _accepting {_svc};

    std::list<tcp::socket> _socks;

    Generator _gen;
    size_t n_sending = 0;
    ba::deadline_timer _barrier {_svc};
};

int main() {
    Server s(6767);
    s.run_for(3s); // COLIRU
}

#include <fstream>
// synchronously generate random data chunks
void Generator::generateData() {
    std::ifstream ifs("/dev/urandom", std::ios::binary);
    ifs.read(reinterpret_cast<char*>(dataChunk), sizeof(dataChunk));
    std::clog << "Generated chunk: " << ifs.gcount() << "\n";
}

Prints (for just the 1 client):

Accepted: 127.0.0.1:60870
Generated chunk: 999
Written: 999, 0 to go
Generated chunk: 999
   [... snip ~4000 lines ...]
Written: 999, 0 to go
Generated chunk: 999
Write: Broken pipe
Written: 0, 0 to go
No more connections; pausing Generator
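
Not part of the original answer, but for completeness: a minimal synchronous client to exercise the server could look like this (it assumes the server above is listening on localhost:6767 and sends back-to-back 999-byte chunks):

#include <boost/asio.hpp>
#include <array>
#include <cstdint>
#include <iostream>

namespace ba = boost::asio;
using ba::ip::tcp;

int main() {
    ba::io_service svc;
    tcp::socket sock(svc);
    tcp::endpoint server(ba::ip::address_v4::loopback(), 6767);
    sock.connect(server);

    std::array<uint8_t, 999> chunk;
    boost::system::error_code ec;
    std::size_t received = 0;

    // Read complete 999-byte chunks until the server closes the connection.
    while (ba::read(sock, ba::buffer(chunk), ec) == chunk.size())
        ++received;

    std::cout << "Received " << received << " chunks (" << ec.message() << ")\n";
}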


Source: https://stackoverflow.com/questions/50304403/thread-safety-when-accessing-data-from-n-theads-in-context-of-an-async-tcp-serve
