Efficiently waiting for all tasks in a threadpool to finish

说谎 asked on 2021-02-05 18:59

I currently have a program with x workers in my threadpool. During the main loop y tasks are assigned to the workers to complete, but after the …

1 Answer

    夕颜 · 2021-02-05 19:17

    This is one way to do what you're trying to do. Using two condition variables on the same mutex is not for the faint-hearted unless you understand what is going on internally. The atomic processed member isn't strictly needed; it is only there to demonstrate how many items were finished between each run.

    The sample workload function in this generates one hundred thousand random int values, then sorts them (gotta heat my office one way or another). waitFinished will not return until the queue is empty and no threads are busy.

    #include <iostream>
    #include <vector>
    #include <deque>
    #include <functional>
    #include <iterator>
    #include <thread>
    #include <mutex>
    #include <condition_variable>
    #include <atomic>
    #include <random>
    #include <algorithm>
    #include <cstdlib>
    
    //thread pool
    class ThreadPool
    {
    public:
        ThreadPool(unsigned int n = std::thread::hardware_concurrency());
    
        template<typename F> void enqueue(F&& f);
        void waitFinished();
        ~ThreadPool();
    
        unsigned int getProcessed() const { return processed; }
    
    private:
        std::vector< std::thread > workers;
        std::deque< std::function<void()> > tasks;
        std::mutex queue_mutex;
        std::condition_variable cv_task;
        std::condition_variable cv_finished;
        std::atomic_uint processed;
        unsigned int busy;
        bool stop;
    
        void thread_proc();
    };
    
    ThreadPool::ThreadPool(unsigned int n)
        : busy()
        , processed()
        , stop()
    {
        for (unsigned int i=0; i<n; ++i)
            workers.emplace_back(std::bind(&ThreadPool::thread_proc, this));
    }
    
    // the destructor stops and joins all worker threads
    ThreadPool::~ThreadPool()
    {
        // set stop-condition
        std::unique_lock<std::mutex> latch(queue_mutex);
        stop = true;
        cv_task.notify_all();
        latch.unlock();
    
        // all threads terminate, then we're done.
        for (auto& t : workers)
            t.join();
    }
    
    void ThreadPool::thread_proc()
    {
        while (true)
        {
            std::unique_lock<std::mutex> latch(queue_mutex);
            cv_task.wait(latch, [this](){ return stop || !tasks.empty(); });
            if (!tasks.empty())
            {
                // got work. set busy.
                ++busy;
    
                // pull from queue
                auto fn = tasks.front();
                tasks.pop_front();
    
                // release lock. run async
                latch.unlock();
    
                // run function outside context
                fn();
                ++processed;
    
                latch.lock();
                --busy;
                cv_finished.notify_one();
            }
            else if (stop)
            {
                break;
            }
        }
    }
    
    // generic function push
    template<typename F>
    void ThreadPool::enqueue(F&& f)
    {
        std::unique_lock<std::mutex> lock(queue_mutex);
        tasks.emplace_back(std::forward<F>(f));
        cv_task.notify_one();
    }
    
    // waits until the queue is empty.
    void ThreadPool::waitFinished()
    {
        std::unique_lock<std::mutex> lock(queue_mutex);
        cv_finished.wait(lock, [this](){ return tasks.empty() && (busy == 0); });
    }
    
    // a cpu-busy task.
    void work_proc()
    {
        std::random_device rd;
        std::mt19937 rng(rd());
    
        // build a vector of random numbers
        std::vector<int> data;
        data.reserve(100000);
        std::generate_n(std::back_inserter(data), data.capacity(), [&](){ return rng(); });
        std::sort(data.begin(), data.end(), std::greater<int>());
    }
    
    int main()
    {
        ThreadPool tp;
    
        // run five batches of 100 items
        for (int x=0; x<5; ++x)
        {
            // queue 100 work tasks
            for (int i=0; i<100; ++i)
                tp.enqueue(work_proc);
    
            tp.waitFinished();
            std::cout << tp.getProcessed() << '\n';
        }
    
        // destructor will close down thread pool
        return EXIT_SUCCESS;
    }
    

    Output

    100
    200
    300
    400
    500
    
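    As an aside: if C++20 is available and the number of tasks in a batch is known up front, a std::latch can express the wait-for-all step directly. This is just a sketch of that alternative pattern, not part of the pool above:

    ```cpp
    #include <iostream>
    #include <latch>
    #include <thread>
    #include <vector>

    int main()
    {
        const int n_tasks = 8;
        std::latch done(n_tasks);              // one count per task

        std::vector<std::thread> workers;
        std::vector<int> results(n_tasks, 0);

        for (int i = 0; i < n_tasks; ++i)
        {
            workers.emplace_back([i, &results, &done]()
            {
                results[i] = i * i;            // stand-in for real work
                done.count_down();             // signal this task is finished
            });
        }

        done.wait();                           // blocks until every task has counted down

        int sum = 0;
        for (int r : results)
            sum += r;
        std::cout << sum << '\n';              // sum of squares 0..7 -> 140

        for (auto& t : workers)
            t.join();
        return 0;
    }
    ```

    Unlike waitFinished above, a latch is single-use and cannot be reset for another batch, so the condition-variable approach is still the right tool for a long-lived pool.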

    Best of luck.
