How to use the boost io_service with a priority queue?

Asked by 房东的猫 on 2021-02-10 15:41:58

Question


I have a program with two functions: one is a cyclic timer, and the other receives data from sockets.

I found that if more than one packet arrives before the timer fires, Boost runs all of the socket handlers first and only then runs the timer handler.

I wrote a small program to simulate this timing, shown below:

#include <cstdio>
#include <ctime>
#include <iostream>
#include <memory>

#include <boost/asio.hpp>
#include <boost/thread/thread.hpp>
#include <boost/asio/io_service.hpp>
#include <boost/asio/steady_timer.hpp>

std::string get_time()
{
    struct timespec time_spec;
    clock_gettime(CLOCK_REALTIME, &time_spec);
    int h  = (int)(time_spec.tv_sec / 60 / 60 % 24);
    int m  = (int)(time_spec.tv_sec / 60 % 60);
    int s  = (int)(time_spec.tv_sec % 60);
    int ms = (int)(time_spec.tv_nsec / 1000); // note: this is actually microseconds
    char st[50];
    snprintf(st, 50, "[%02d:%02d:%02d:%06d]", h, m, s, ms);

    return std::string(st);
}

void fA()
{
  std::cout << get_time() << " : fA()" << std::endl;
  boost::this_thread::sleep(boost::posix_time::milliseconds(40));
}

void fB()
{
  std::cout << get_time() << " : fB()" << std::endl;
  boost::this_thread::sleep(boost::posix_time::milliseconds(20));
}

int main(int argc, char *argv[])
{
    boost::asio::io_service io;
    std::shared_ptr<boost::asio::io_service::work> work = std::make_shared<boost::asio::io_service::work>(io);

    std::shared_ptr<boost::asio::steady_timer> t100ms = std::make_shared<boost::asio::steady_timer>(io);
    std::shared_ptr<boost::asio::steady_timer> t80ms = std::make_shared<boost::asio::steady_timer>(io);

    std::cout << get_time() << " : start" << std::endl;

    t100ms->expires_from_now(std::chrono::milliseconds(100));
    t80ms->expires_from_now(std::chrono::milliseconds(80));

    t100ms->async_wait([&](const boost::system::error_code &_error) {
        if(_error.value() == boost::system::errc::errc_t::success) {
            std::cout << get_time() << " : t100ms" << std::endl;
        }
    });
    t80ms->async_wait([&](const boost::system::error_code &_error) {
        if(_error.value() == boost::system::errc::errc_t::success) {
            std::cout << get_time() << " : t80ms" << std::endl;
            io.post(fA);
            io.post(fB);
        }
    });

    io.run();

    return 0;
}

The result of this code is:

[08:15:40:482721] : start
[08:15:40:562867] : t80ms
[08:15:40:562925] : fA()
[08:15:40:603037] : fB()
[08:15:40:623186] : t100ms

But the result I want is:

[08:15:40:482721] : start
[08:15:40:562867] : t80ms
[08:15:40:562925] : fA()
[08:15:40:603037] : t100ms
[08:15:40:604037] : fB()

Here, the t100ms handler runs between fA and fB, at a time much closer to its intended firing time of [08:15:40:582721], i.e. 100ms after the start.

I found the Invocation example, which demonstrates a priority queue for handlers.

I tried to modify it by adding my code to that example:

    ...

    timer.async_wait(pri_queue.wrap(42, middle_priority_handler));


    std::shared_ptr<boost::asio::steady_timer> t100ms = std::make_shared<boost::asio::steady_timer>(io_service);
    std::shared_ptr<boost::asio::steady_timer> t80ms = std::make_shared<boost::asio::steady_timer>(io_service);

    std::cout << get_time() << " : start" << std::endl;

    t100ms->expires_from_now(std::chrono::milliseconds(100));
    t80ms->expires_from_now(std::chrono::milliseconds(80));

    t100ms->async_wait(pri_queue.wrap(100, [&](const boost::system::error_code &_error) {
        if(_error.value() == boost::system::errc::errc_t::success) {
            std::cout << get_time() << " : t100ms" << std::endl;
        }
    }));
    t80ms->async_wait(pri_queue.wrap(100, [&](const boost::system::error_code &_error) {
        if(_error.value() == boost::system::errc::errc_t::success) {
            std::cout << get_time() << " : t80ms" << std::endl;
            io_service.post(pri_queue.wrap(0, fA));
            io_service.post(pri_queue.wrap(0, fB));
        }
    }));

    while (io_service.run_one())

    ...
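For context, the "..." above stands for the rest of Boost's prioritised-handlers Invocation example; its drain loop looks roughly like the sketch below (quoted from memory, so treat it as an approximation rather than the verbatim example code):

    while (io_service.run_one())
    {
        // The custom invocation hook adds handlers to the priority queue
        // instead of executing them inside poll_one(), so first drain the
        // io_service, then execute the queued handlers in priority order.
        while (io_service.poll_one())
            ;

        pri_queue.execute_all();
    }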

But the result is still not what I had in mind. It looks like this:

[08:30:13:868299] : start
High priority handler
Middle priority handler
Low priority handler
[08:30:13:948437] : t80ms
[08:30:13:948496] : fA()
[08:30:13:988606] : fB()
[08:30:14:008774] : t100ms

Where did I go wrong?


Answer 1:


Handlers are run in the order in which they are posted.

When the 80ms timer expires, you immediately post both fA() and fB(). Of course they run first, because t100ms is still pending.

Here's your example but much simplified:

Live On Coliru

#include <iostream>
#include <boost/asio.hpp>
#include <thread>
using boost::asio::io_context;
using boost::asio::steady_timer;
using namespace std::chrono_literals;

namespace {
    static auto now = std::chrono::system_clock::now;
    static auto get_time = [start = now()]{
        return "at " + std::to_string((now() - start)/1ms) + "ms:\t";
    };

    void message(std::string msg) {
        std::cout << (get_time() + msg + "\n") << std::flush; // minimize mixing output from threads
    }

    auto make_task = [](auto name, auto duration) {
        return [=] {
            message(name);
            std::this_thread::sleep_for(duration);
        };
    };
}

int main() {
    io_context io;

    message("start");

    steady_timer t100ms(io, 100ms);
    t100ms.async_wait([&](auto ec) {
        message("t100ms " + ec.message());
    });

    steady_timer t80ms(io, 80ms);
    t80ms.async_wait([&](auto ec) {
        message("t80ms " + ec.message());
        post(io, make_task("task A", 40ms));
        post(io, make_task("task B", 20ms));
    });

    io.run();
}

Prints

at 0ms: start
at 80ms:        t80ms Success
at 80ms:        task A
at 120ms:       task B
at 140ms:       t100ms Success

One Approach

Assuming you're really trying to time the operation, consider running multiple threads. With this three-word change the output is:

at 1ms: start
at 81ms:    t80ms Success
at 81ms:    task A
at 82ms:    task B
at 101ms:   t100ms Success

To still serialize A and B, post them on a strand by changing:

post(io, make_task("task A", 40ms));
post(io, make_task("task B", 20ms));

To

auto s = make_strand(io);
post(s, make_task("task A", 40ms));
post(s, make_task("task B", 20ms));

Now prints

at 0ms: start
at 80ms:        t80ms Success
at 80ms:        task A
at 100ms:       t100ms Success
at 120ms:       task B

(full listing below).
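The full listing referenced here is not reproduced on this mirror. The sketch below is a reconstruction, assuming the "three-word change" mentioned above means swapping io_context io;/io.run() for thread_pool io;/io.join() (consistent with the listing in the next section) and combining it with the strand change just shown; it is an approximation, not the verbatim Coliru listing.

#include <iostream>
#include <boost/asio.hpp>
#include <thread>
using boost::asio::steady_timer;
using boost::asio::thread_pool;
using namespace std::chrono_literals;

namespace {
    static auto now = std::chrono::system_clock::now;
    static auto get_time = [start = now()]{
        return "at " + std::to_string((now() - start)/1ms) + "ms:\t";
    };

    void message(std::string msg) {
        std::cout << (get_time() + msg + "\n") << std::flush; // minimize mixing output from threads
    }

    auto make_task = [](auto name, auto duration) {
        return [=] {
            message(name);
            std::this_thread::sleep_for(duration);
        };
    };
}

int main() {
    thread_pool io; // assumed "three-word change": was io_context io;

    message("start");

    steady_timer t100ms(io, 100ms);
    t100ms.async_wait([&](auto ec) {
        message("t100ms " + ec.message());
    });

    steady_timer t80ms(io, 80ms);
    t80ms.async_wait([&](auto ec) {
        message("t80ms " + ec.message());
        auto s = make_strand(io); // serialize task A and task B relative to each other
        post(s, make_task("task A", 40ms));
        post(s, make_task("task B", 20ms));
    });

    io.join(); // was: io.run();
}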

No Thread Please

The other approach, when you do not wish to use threads (e.g. for simplicity or safety), is to use a queue after all. I'd consider writing it out simply as:

// Task is not defined in this excerpt; std::function<void()> is assumed here
// (which also requires <functional>; the deque below requires <deque>).
using Task = std::function<void()>;

struct Queue {
    template <typename Ctx>
    Queue(Ctx context) : strand(make_strand(context)) {}

    void add(Task f) {
        post(strand, [this, f=std::move(f)] {
            if (tasks.empty())
                run();
            tasks.push_back(std::move(f));
        });
    }

  private:
    boost::asio::any_io_executor strand;
    std::deque<Task> tasks;

    void run() {
        post(strand, [this] { drain_loop(); });
    }

    void drain_loop() {
        if (tasks.empty()) {
            message("queue empty");
        } else {
            tasks.front()(); // invoke task
            tasks.pop_front();
            run();
        }
    }
};

Now we can safely choose whether or not to run it in a threaded context, because all queue operations go through a strand.

int main() {
    thread_pool io; // or io_context io;
    Queue tasks(io.get_executor());

    message("start");

    steady_timer t100ms(io, 100ms);
    t100ms.async_wait([&](auto ec) {
        message("t100ms " + ec.message());
    });

    steady_timer t80ms(io, 80ms);
    t80ms.async_wait([&](auto ec) {
        message("t80ms " + ec.message());
        tasks.add(make_task("task A", 40ms));
        tasks.add(make_task("task B", 40ms));
    });

    io.join(); // or io.run()
}

Using thread_pool io;:

at 0ms: start
at 80ms:        t80ms Success
at 80ms:        task A
at 100ms:       t100ms Success
at 120ms:       task B
at 160ms:       queue empty

Using io_context io; (or thread_pool io(1); of course):

at 0ms: start
at 80ms:        t80ms Success
at 80ms:        task A
at 120ms:       task B
at 160ms:       t100ms Success
at 160ms:       queue empty


Source: https://stackoverflow.com/questions/64854938/how-to-use-the-boost-io-service-with-a-priority-queue
