std::condition_variable wait_until surprising behaviour

Submitted by 狂风中的少年 on 2021-01-27 20:42:42

Question


Building with VS2013, passing time_point::max() as the deadline to a condition variable's wait_until results in an immediate timeout.

This seems unintuitive - I would naively expect time_point::max() to wait indefinitely (or at least a very long time). Can anyone confirm if this is documented, expected behaviour or something specific to MSVC?

Sample program below; note that replacing time_point::max() with now + std::chrono::hours(1) gives the expected behaviour (wait_until returns once the cv is notified, with no timeout).


#include <condition_variable>
#include <mutex>
#include <chrono>
#include <future>
#include <functional>
#include <cstdio>   // for printf

// Runs on the async task: it can only acquire the mutex once the main thread
// has released it inside wait_until, so it cannot fire before the main thread
// starts waiting.
void fire_cv( std::mutex *mx, std::condition_variable *cv )
{
    std::unique_lock<std::mutex> lock(*mx);
    printf("firing cv\n");
    cv->notify_one();
}

int main(int argc, char *argv[])
{
    std::chrono::steady_clock::time_point now = std::chrono::steady_clock::now();

    std::condition_variable test_cv;
    std::mutex test_mutex;

    std::future<void> s;
    {
        std::unique_lock<std::mutex> lock(test_mutex);
        s = std::async(std::launch::async, std::bind(fire_cv, &test_mutex, &test_cv));
        printf("blocking on cv\n");
        std::cv_status result = test_cv.wait_until( lock, std::chrono::steady_clock::time_point::max() );

        //std::cv_status result = test_cv.wait_until( lock, now + std::chrono::hours(1) ); // <--- this works as expected!
        printf("%s\n", (result==std::cv_status::timeout) ? "timeout" : "no timeout");
    }
    s.wait();

    return 0;
}

Answer 1:


I debugged MSVC 2015's implementation: wait_until calls wait_for internally, which is implemented like this:

template<class _Rep,
        class _Period>
        _Cv_status wait_for(
            unique_lock<mutex>& _Lck,
            const chrono::duration<_Rep, _Period>& _Rel_time)
        {   // wait for duration
        _STDEXT threads::xtime _Tgt = _To_xtime(_Rel_time); // Bug!
        return (wait_until(_Lck, &_Tgt));
        }

The bug here is that the addition inside _To_xtime overflows. Signed overflow is undefined behaviour, and in practice the result is a negative time value, i.e. a deadline in the past:

template<class _Rep,
    class _Period> inline
    xtime _To_xtime(const chrono::duration<_Rep, _Period>& _Rel_time)
    {   // convert duration to xtime
    xtime _Xt;
    if (_Rel_time <= chrono::duration<_Rep, _Period>::zero())
        {   // negative or zero relative time, return zero
        _Xt.sec = 0;
        _Xt.nsec = 0;
        }
    else
        {   // positive relative time, convert
        chrono::nanoseconds _T0 =
            chrono::system_clock::now().time_since_epoch();
        _T0 += chrono::duration_cast<chrono::nanoseconds>(_Rel_time); //Overflow!
        _Xt.sec = chrono::duration_cast<chrono::seconds>(_T0).count();
        _T0 -= chrono::seconds(_Xt.sec);
        _Xt.nsec = (long)_T0.count();
        }
    return (_Xt);
    }

std::chrono::nanoseconds stores its count in a signed 64-bit integer (long long on this implementation). At the point where it is initialized, _T0 holds the current system time in nanoseconds, roughly 1'471'618'263'082'939'000 (the exact value obviously varies). Adding _Rel_time, which is 9'223'244'955'544'505'510 for time_point::max(), exceeds what a signed 64-bit integer can represent, so the addition overflows.
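To make the arithmetic concrete, here is a small self-contained check using the two values observed in the debugger (they are illustrative and change from run to run); it verifies, without actually performing the overflowing addition, that their sum cannot be represented in a signed 64-bit nanosecond count:

#include <cstdint>
#include <cstdio>
#include <limits>

int main()
{
    const std::int64_t now_ns = 1'471'618'263'082'939'000;  // system_clock::now() in ns (example value)
    const std::int64_t rel_ns = 9'223'244'955'544'505'510;  // _Rel_time derived from time_point::max()

    // INT64_MAX is 9'223'372'036'854'775'807, so now_ns + rel_ns does not fit.
    const bool overflows = rel_ns > std::numeric_limits<std::int64_t>::max() - now_ns;

    std::printf( "now_ns + rel_ns overflows int64: %s\n", overflows ? "yes" : "no" );
    return 0;
}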

The overflowed result is negative, i.e. a deadline far in the past that has already been passed, so wait_until times out immediately.
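A practical way to avoid the problem, suggested by the questioner's observation that now + std::chrono::hours(1) behaves correctly, is to keep the deadline bounded. A minimal sketch (the helper name safe_wait_until and the one-year cap are illustrative choices, not library features):

#include <chrono>
#include <condition_variable>
#include <mutex>

// Clamp "effectively infinite" deadlines to a large but safely representable
// offset before calling wait_until, so the internal conversion cannot overflow.
template <class Clock, class Duration>
std::cv_status safe_wait_until( std::condition_variable &cv,
                                std::unique_lock<std::mutex> &lock,
                                const std::chrono::time_point<Clock, Duration> &deadline )
{
    const auto cap = Clock::now() + std::chrono::hours( 24 * 365 );  // ~one year ahead
    if (deadline > cap)
        return cv.wait_until( lock, cap );
    return cv.wait_until( lock, deadline );
}

Alternatively, if the intent really is to wait indefinitely, plain cv.wait(lock) avoids the timed code path altogether.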



Source: https://stackoverflow.com/questions/39041450/stdcondition-variable-wait-until-surprising-behaviour
