Achieving real-time, 1-millisecond-accurate events without suffering from thread scheduling

醉梦人生 2021-02-06 12:29

Problem

I am creating a Windows 7 based C# WPF application using .NET 4.5, and one of its major features is to …

2 Answers
  •  温柔的废话
    2021-02-06 13:11

    I suspect nothing you do in user mode to a thread's priority or affinity will guarantee the behavior you seek, so I think you may need something like your options 3 or 4, which means writing a kernel-mode driver.

    In kernel mode, there is the notion of IRQL, where code triggered to run at a higher level preempts code running at lower levels. User-mode code runs at IRQL 0, so all kernel-mode code at any higher level takes precedence. The thread scheduler itself runs at an elevated level, IRQL 2 I believe (which is called DISPATCH_LEVEL), so it can preempt any scheduled user-mode code of any priority, including, I believe, REALTIME_PRIORITY_CLASS. Hardware interrupts, including timer interrupts, run at even higher levels.

    A hardware timer will invoke its interrupt handler with roughly the accuracy of the timer resolution, provided a CPU/core is available at a lower IRQL (that is, not already executing a higher-level interrupt handler).

    If there is much work to do, one shouldn't do it in the interrupt handler (which runs at IRQL > DISPATCH_LEVEL); instead, use the interrupt handler to schedule the larger body of work to run "soon" at DISPATCH_LEVEL via a Deferred Procedure Call (DPC). That still prevents the thread scheduler from interfering, but doesn't prevent other interrupt handlers from servicing their hardware interrupts.

    A likely problem with your option 3 is that firing an event to wake a thread that runs user-mode code at IRQL 0 again allows the thread scheduler to decide when that user-mode code will execute. You may need to do your time-sensitive work in kernel mode at DISPATCH_LEVEL.

    Another issue is that interrupts fire without regard to the process context the CPU core was running in. So when the timer fires, the handler likely runs in the context of a process unrelated to yours. You may therefore need to do your time-sensitive work in a kernel-mode driver, using kernel-space memory, independently of your process, and then feed any results back to your app later, when it resumes running and can interact with the driver. (Apps can interact with drivers by passing buffers down via the DeviceIoControl API, as sketched below.)
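
    As an illustration of that last point, here is a minimal user-mode sketch of retrieving buffered results from such a driver via DeviceIoControl. The device name and IOCTL code are invented for the example; a real driver would define its own symbolic link and control codes.

        #include <windows.h>
        #include <winioctl.h>
        #include <stdio.h>

        /* Hypothetical control code; must match whatever the driver defines. */
        #define IOCTL_HRT_GET_SAMPLES \
            CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

        int main(void)
        {
            /* Hypothetical symbolic link exposed by the driver. */
            HANDLE hDevice = CreateFileW(L"\\\\.\\HiResTimerSample",
                                         GENERIC_READ | GENERIC_WRITE,
                                         0, NULL, OPEN_EXISTING, 0, NULL);
            if (hDevice == INVALID_HANDLE_VALUE) {
                printf("CreateFile failed: %lu\n", GetLastError());
                return 1;
            }

            ULONGLONG samples[128];   /* buffer the driver fills with its results */
            DWORD bytesReturned = 0;

            if (DeviceIoControl(hDevice, IOCTL_HRT_GET_SAMPLES,
                                NULL, 0,                    /* no input buffer */
                                samples, sizeof(samples),   /* output buffer   */
                                &bytesReturned, NULL)) {
                printf("Driver returned %lu bytes of timing data\n", bytesReturned);
            } else {
                printf("DeviceIoControl failed: %lu\n", GetLastError());
            }

            CloseHandle(hDevice);
            return 0;
        }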

    I am not suggesting you implement a hardware timer interrupt handler; the OS already does that. Rather, use the kernel timer services to invoke your code based on the OS handling of the timer interrupt. See KeSetTimer and ExSetTimer. Both of these can call back to your code at DISPATCH_LEVEL after the timer fires.
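
    A rough sketch of that pattern, assuming a WDM-style driver (the context structure and routine names are invented for illustration). The DPC routine is called back at DISPATCH_LEVEL each time the kernel timer expires:

        #include <ntddk.h>

        typedef struct _TIMER_CONTEXT {
            KTIMER Timer;
            KDPC   Dpc;
        } TIMER_CONTEXT, *PTIMER_CONTEXT;

        /* Runs at DISPATCH_LEVEL on each timer expiration; the thread scheduler
           cannot preempt it, so keep the work short and non-paged. */
        VOID EvtTimerDpc(PKDPC Dpc, PVOID DeferredContext,
                         PVOID SystemArgument1, PVOID SystemArgument2)
        {
            UNREFERENCED_PARAMETER(Dpc);
            UNREFERENCED_PARAMETER(SystemArgument1);
            UNREFERENCED_PARAMETER(SystemArgument2);
            PTIMER_CONTEXT ctx = (PTIMER_CONTEXT)DeferredContext;
            UNREFERENCED_PARAMETER(ctx);
            /* ... do the 1 ms time-sensitive work here, using ctx ... */
        }

        VOID StartPeriodicTimer(PTIMER_CONTEXT ctx)
        {
            LARGE_INTEGER dueTime;
            dueTime.QuadPart = -10000LL;  /* first fire in 1 ms (negative = relative, 100 ns units) */

            KeInitializeTimerEx(&ctx->Timer, SynchronizationTimer);
            KeInitializeDpc(&ctx->Dpc, EvtTimerDpc, ctx);
            KeSetTimerEx(&ctx->Timer, dueTime, 1 /* period in ms */, &ctx->Dpc);
        }

    Note that KeSetTimerEx takes its period in milliseconds while the relative due time is in negative 100-nanosecond units, and that a 1 ms period only helps if the system clock resolution is fine enough, which is the next point.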

    And (even in kernel-mode) the system timer resolution may, by default, be too coarse for your 1 ms requirement.

    https://msdn.microsoft.com/en-us/library/windows/hardware/dn265247(v=vs.85).aspx

    For example, for Windows running on an x86 processor, the default interval between system clock ticks is typically about 15 milliseconds

    For higher resolution, you may:

    1. change the system clock resolution

    Starting with Windows 2000, a driver can call the ExSetTimerResolution routine to change the time interval between successive system clock interrupts. For example, a driver can call this routine to change the system clock from its default rate to its maximum rate to improve timer accuracy. However, using ExSetTimerResolution has several disadvantages compared to using high-resolution timers created by ExAllocateTimer.

    ...

    2. use newer kernel-mode APIs for high-resolution timers that manage the clock resolution automatically (see the sketch after the quoted documentation below).

    Starting with Windows 8.1, drivers can use the ExXxxTimer routines to manage high-resolution timers. The accuracy of a high-resolution timer is limited only by the maximum supported resolution of the system clock. In contrast, timers that are limited to the default system clock resolution are significantly less accurate.

    However, high-resolution timers require system clock interrupts to—at least, temporarily—occur at a higher rate, which tends to increase power consumption. Thus, drivers should use high-resolution timers only when timer accuracy is essential, and use default-resolution timers in all other cases.
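
    A rough sketch of option 2, assuming a Windows 8.1 or later kernel-mode driver (the routine names and the global timer pointer are invented for illustration). On earlier systems, a driver could instead call ExSetTimerResolution(10000, TRUE) to request a roughly 1 ms system clock and ExSetTimerResolution(0, FALSE) to restore the default when finished.

        #include <ntddk.h>

        /* Invoked (via a DPC, at DISPATCH_LEVEL) each time the high-resolution
           timer expires. Signature matches EXT_CALLBACK. */
        VOID HighResTimerCallback(PEX_TIMER Timer, PVOID Context)
        {
            UNREFERENCED_PARAMETER(Timer);
            UNREFERENCED_PARAMETER(Context);
            /* ... 1 ms time-sensitive work ... */
        }

        PEX_TIMER g_Timer;   /* illustrative global; real drivers keep this in a device context */

        NTSTATUS StartHighResTimer(VOID)
        {
            /* EX_TIMER_HIGH_RESOLUTION asks the OS to raise the clock interrupt
               rate only while this timer is active. */
            g_Timer = ExAllocateTimer(HighResTimerCallback, NULL,
                                      EX_TIMER_HIGH_RESOLUTION);
            if (g_Timer == NULL) {
                return STATUS_INSUFFICIENT_RESOURCES;
            }

            /* Due time and period are in 100-nanosecond units (unlike KeSetTimerEx):
               first expiration in 1 ms, then every 1 ms. */
            ExSetTimer(g_Timer, -10000LL, 10000LL, NULL);
            return STATUS_SUCCESS;
        }

        VOID StopHighResTimer(VOID)
        {
            if (g_Timer != NULL) {
                ExDeleteTimer(g_Timer, TRUE /* cancel */, TRUE /* wait */, NULL);
                g_Timer = NULL;
            }
        }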
