Can I use std::async without waiting for the future limitation?

2020-11-27 14:47

High level
I want to call some functions with no return value in async mode, without waiting for them to finish. If I use std::async, the future object it returns blocks in its destructor until the asynchronous task has finished, which makes the call effectively synchronous in my case.
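
A minimal sketch of the limitation (sendMail stands in for any slow fire-and-forget function):

    #include <future>
    #include <string>

    void sendMail(std::string address, std::string message); // slow, result not needed

    void processRequest(std::string address, std::string message)
    {
        // The temporary std::future returned by std::async is destroyed at the end
        // of this full expression. Because it came from std::async, its destructor
        // blocks until sendMail has finished, so the "async" call is effectively
        // synchronous.
        std::async(std::launch::async, sendMail, address, message);
    }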

4 Answers
  • 2020-11-27 15:26

    You can move the future into a global object, so when the local future's destructor runs it doesn't have to wait for the asynchronous thread to complete.

    std::vector<std::future<void>> pending_futures;
    
    myResponseType processRequest(args...)
    {
        //Do some processing and evaluate the address and the message...
    
        //Sending the e-mail async
        auto f = std::async(std::launch::async, sendMail, address, message);
    
        // transfer the future's shared state to a longer-lived future
        pending_futures.push_back(std::move(f));
    
        //returning the response ASAP to the client
        return response; // the myResponseType value built above
    
    }
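
    If you go this route, the vector grows for as long as the process runs. A minimal housekeeping sketch (the helper name prune_finished_futures is made up; it assumes it runs on the same thread as processRequest, or that pending_futures is otherwise protected by a mutex):

    #include <algorithm>
    #include <chrono>

    void prune_finished_futures()
    {
        pending_futures.erase(
            std::remove_if(pending_futures.begin(), pending_futures.end(),
                           [](std::future<void>& f) {
                               // wait_for(0) only polls; "ready" means sendMail finished
                               return f.wait_for(std::chrono::seconds(0))
                                      == std::future_status::ready;
                           }),
            pending_futures.end());
    }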
    

    N.B. This is not safe if the asynchronous thread refers to any local variables in the processRequest function.
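
    Passing address and message to std::async by value, as above, is safe because std::async decay-copies its arguments into the task; the danger is a task that refers to locals by reference. A hypothetical variation showing the hazard (buildMessage is made up for the example):

    myResponseType processRequest(args...)
    {
        std::string message = buildMessage(); // local

        // BAD: the lambda captures address and message by reference.
        // processRequest returns immediately, so the still-running task
        // may read objects that have already been destroyed.
        pending_futures.push_back(std::async(std::launch::async,
            [&] { sendMail(address, message); }));

        return response;
    }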

    Note that std::async (at least on MSVC) uses an internal thread pool.

    That's actually non-conforming: the standard explicitly says tasks run with std::launch::async must run as if in a new thread, so thread-local variables must not persist from one task to the next. It usually doesn't matter in practice, though.

  • 2020-11-27 15:31

    Why not just start a thread and detach it, if you do not care about joining?

    std::thread{sendMail, address, message}.detach();
    

    std::async is bound to the lifetime of the std::future it returns, and there is no way around that.

    Putting the std::future in a waiting queue read by another thread would require the same safety mechanisms as a pool receiving new tasks, such as a mutex around the container.

    Your best option, then, is a thread pool consuming tasks pushed directly into a thread-safe queue. And it will not depend on a specific std::async implementation.

    Below is a thread pool implementation taking any callable and its arguments. The threads poll the queue; a better implementation would use condition variables, as sketched after the code (coliru):

    #include <iostream>
    #include <queue>
    #include <memory>
    #include <thread>
    #include <mutex>
    #include <functional>
    #include <string>
    
    struct ThreadPool {
        struct Task {
            virtual void Run() const = 0;
            virtual ~Task() {};
        };   
    
        template < typename task_, typename... args_ >
        struct RealTask : public Task {
            RealTask( task_&& task, args_&&... args ) : fun_( std::bind( std::forward<task_>(task), std::forward<args_>(args)... ) ) {}
            void Run() const override {
                fun_();
            }
        private:
            decltype( std::bind(std::declval<task_>(), std::declval<args_>()... ) ) fun_;
        };
    
        template < typename task_, typename... args_ >
        void AddTask( task_&& task, args_&&... args ) {
            auto lock = std::unique_lock<std::mutex>{mtx_};
            using FinalTask = RealTask<task_, args_... >;
            q_.push( std::unique_ptr<Task>( new FinalTask( std::forward<task_>(task), std::forward<args_>(args)... ) ) );
        }
    
        ThreadPool() {
            for( auto & t : pool_ )
                t = std::thread( [=] {
                    while ( true ) {
                        std::unique_ptr<Task> task;
                        {
                            auto lock = std::unique_lock<std::mutex>{mtx_};
                            if ( q_.empty() && stop_ ) 
                                break;
                            if ( q_.empty() )
                                continue;
                            task = std::move(q_.front());
                            q_.pop();
                        }
                        if (task)
                            task->Run();
                    }
                } );
        }
        ~ThreadPool() {
            {
                auto lock = std::unique_lock<std::mutex>{mtx_};
                stop_ = true;
            }
            for( auto & t : pool_ )
                t.join();
        }
    private:
        std::queue<std::unique_ptr<Task>> q_;
        std::thread pool_[8]; 
        std::mutex mtx_;
        bool stop_ {}; // guarded by mtx_; volatile is neither needed nor sufficient here
    };
    
    void foo( int a, int b ) {
        std::cout << a << "." << b;
    }
    void bar( std::string const & s) {
        std::cout << s;
    }
    
    int main() {
        ThreadPool pool;
        for( int i{}; i!=42; ++i ) {
            pool.AddTask( foo, 3, 14 );    
            pool.AddTask( bar, " - " );    
        }
    }
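
    As noted above, the polling can be replaced by a condition variable. A sketch of only the changed parts, reusing the members of the ThreadPool above plus an added std::condition_variable cv_; AddTask would call cv_.notify_one() after pushing, and the destructor would set stop_ under the lock and call cv_.notify_all() before joining. The worker loop then blocks instead of spinning:

    while ( true ) {
        std::unique_ptr<Task> task;
        {
            auto lock = std::unique_lock<std::mutex>{mtx_};
            // Sleep until there is work or we are shutting down
            cv_.wait( lock, [&]{ return stop_ || !q_.empty(); } );
            if ( q_.empty() && stop_ )
                break;                      // drained and stopping
            task = std::move(q_.front());
            q_.pop();
        }
        task->Run();                        // run outside the lock
    }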
    
  • 2020-11-27 15:42

    Rather than moving the future into a global object (and manually managing deletion of unused futures), you can actually move it into the local scope of the asynchronously called function.

    "Let the async function take its own future", so to speak.

    I have come up with this template wrapper which works for me (tested on Windows):

    #include <future>
    
    template<class Function, class... Args>
    void async_wrapper(Function&& f, Args&&... args, std::future<void>& future,
                       std::future<void>&& is_valid, std::promise<void>&& is_moved) {
        is_valid.wait(); // Wait until the return value of std::async is written to "future"
        auto our_future = std::move(future); // Move "future" to a local variable
        is_moved.set_value(); // Only now can we leave void_async in the main thread
    
        // This is also used by std::async so that member function pointers work transparently
        auto functor = std::bind(f, std::forward<Args>(args)...);
        functor();
    }
    
    template<class Function, class... Args> // This is what you call instead of std::async
    void void_async(Function&& f, Args&&... args) {
        std::future<void> future; // This is for std::async return value
        // This is for our synchronization of moving "future" between threads
        std::promise<void> valid;
        std::promise<void> is_moved;
        auto valid_future = valid.get_future();
        auto moved_future = is_moved.get_future();
    
        // Here we pass "future" as a reference, so that async_wrapper
        // can later work with std::async's return value
        future = std::async(
            async_wrapper<Function, Args...>,
            std::forward<Function>(f), std::forward<Args>(args)...,
            std::ref(future), std::move(valid_future), std::move(is_moved)
        );
        valid.set_value(); // Unblock async_wrapper waiting for "future" to become valid
        moved_future.wait(); // Wait for "future" to actually be moved
    }
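
    A call site then looks just like the original std::async call, minus the future (sketch; sendMail, address and message as in the question):

    void_async(sendMail, address, message); // returns as soon as async_wrapper has taken ownership of its own future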
    

    I am a little surprised this works, because I thought the moved future's destructor would block until we leave async_wrapper. It would have to wait for async_wrapper to return, yet it is being destroyed inside that very function. Logically that should be a deadlock, but it isn't.

    I also tried to add a line at the end of async_wrapper to manually empty the future object:

    our_future = std::future<void>();
    

    This does not block either.

  • 2020-11-27 15:47

    I have no idea what I'm doing, but this seems to work:

    // :( http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3451.pdf
    #include <chrono>
    #include <mutex>
    #include <thread>
    #include <type_traits>
    #include <utility>
    #include <vector>

    template<typename T>
    void noget(T&& in)
    {
        static std::mutex vmut;
        static std::vector<typename std::decay<T>::type> vec; // decay so lvalue futures work too
        static std::thread getter;
        static std::mutex single_getter;
        if (single_getter.try_lock())
        {
            getter = std::thread([&]()->void
            {
                size_t size;
                for(;;)
                {
                    do
                    {
                        vmut.lock();
                        size=vec.size();
                        if(size>0)
                        {
                            auto target=std::move(vec[size-1]);
                            vec.pop_back();
                            vmut.unlock();
                            // cerr << "getting!" << endl;
                            target.get();
                        }
                        else
                        {
                            vmut.unlock();
                        }
                    }while(size>0);
                    // ¯\_(ツ)_/¯
                    std::this_thread::sleep_for(std::chrono::milliseconds(100));
                }
            });
            getter.detach();
        }
        vmut.lock();
        vec.push_back(std::move(in));
        vmut.unlock();
    }
    

    It creates a dedicated getter thread for each type of future you throw at it (e.g. if you give it a std::future<void> and a std::future<int>, you'll have 2 getter threads; if you then give it 100 more std::future<void>s, you'll still only have 2 threads). When there's a future you don't want to deal with, just do noget(fut); noget(std::async([]()->void{...})); also works just fine, with no blocking, it seems. Warning: do not try to get the value from a future after using noget() on it. That's probably UB and asking for trouble.
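
    For example (sketch, reusing sendMail, address and message from the question):

    noget(std::async(std::launch::async, sendMail, address, message)); // returns immediately; the getter thread will consume the future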
