Question
I'm attempting to measure the time spent in a thread for progress-reporting purposes, but I'm getting very strange results from the GetThreadTimes system call. Given the following program (compiled in VS 2013, targeting .NET 4.5):
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;

namespace ThreadTimingTest
{
    class Program
    {
        static Stopwatch _wallClockTimer;
        static System.Timers.Timer _timer = new System.Timers.Timer();
        private static Thread _thread;
        private static IntPtr _threadHandle;

        static void Main(string[] args)
        {
            _timer = new System.Timers.Timer();
            _timer.Elapsed += (s, e) =>
            {
                System.Runtime.InteropServices.ComTypes.FILETIME start, end, rawKernelTime, rawUserTime;
                GetThreadTimes(_threadHandle, out start, out end, out rawKernelTime, out rawUserTime);

                //ref: http://stackoverflow.com/a/6083846
                ulong uLow = (ulong)rawKernelTime.dwLowDateTime;
                ulong uHigh = (uint)rawKernelTime.dwHighDateTime;
                uHigh = uHigh << 32;
                long kernelTime = (long)(uHigh | uLow);

                uLow = (ulong)rawUserTime.dwLowDateTime;
                uHigh = (uint)rawUserTime.dwHighDateTime;
                uHigh = uHigh << 32;
                long userTime = (long)(uHigh | uLow);

                Debug.WriteLine("Kernel time: " + kernelTime);
                Debug.WriteLine("User time: " + userTime);
                Debug.WriteLine("Combined raw execution time: " + (kernelTime + userTime));

                long elapsedMilliseconds = (kernelTime + userTime) / 10000; //convert to milliseconds: raw timing unit is 100 nanoseconds
                Debug.WriteLine("Elapsed thread time: " + elapsedMilliseconds + " milliseconds");
                Debug.WriteLine("Wall Clock Time: " + _wallClockTimer.ElapsedMilliseconds + " milliseconds");
            };
            _timer.Interval = 1000;
            _wallClockTimer = new Stopwatch();

            Debug.WriteLine("Starting...");
            RunTest();
            Debug.WriteLine("Ended.");
        }

        public static void RunTest()
        {
            _thread =
                new Thread
                (
                    () =>
                    {
                        _threadHandle = GetCurrentThread();
                        Stopwatch sw = Stopwatch.StartNew();
                        while (sw.ElapsedMilliseconds < 3000)
                        {
                            int i = 1 + 2;
                        } //do busy-work for 3 seconds
                        sw.Stop();
                    }
                );

            _timer.Start();
            _thread.Start();
            _wallClockTimer.Start();
            _thread.Join();
        }

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool GetThreadTimes(IntPtr hThread,
            out System.Runtime.InteropServices.ComTypes.FILETIME lpCreationTime, out System.Runtime.InteropServices.ComTypes.FILETIME lpExitTime,
            out System.Runtime.InteropServices.ComTypes.FILETIME lpKernelTime, out System.Runtime.InteropServices.ComTypes.FILETIME lpUserTime);

        [DllImport("kernel32.dll")]
        private static extern IntPtr GetCurrentThread();
    }
}
I get the following output:
Starting...
Kernel time: 0
User time: 0
Combined raw execution time: 0
Elapsed thread time: 0 milliseconds
Wall Clock Time: 1036 milliseconds
Kernel time: 0
User time: 0
Combined raw execution time: 0
Elapsed thread time: 0 milliseconds
Wall Clock Time: 2036 milliseconds
The thread '<No Name>' (0x191c) has exited with code 0 (0x0).
Ended.
I would expect GetThreadTimes to report something other than zero for the thread times: why is zero reported?
Answer 1:
After making a couple of simple modifications to your code, based on the link provided by Hans, valid times are displayed. The root cause is that GetCurrentThread returns a pseudo-handle: a sentinel value that always means "the currently executing thread" in whichever thread uses it. So when the timer callback passed it to GetThreadTimes, it was querying the timer's own thread, not your worker. DuplicateHandle converts the pseudo-handle into a real handle that identifies the worker thread from any context.
Adding a few interop declarations:
[DllImport("kernel32.dll", SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
static extern bool DuplicateHandle(IntPtr hSourceProcessHandle,
    IntPtr hSourceHandle, IntPtr hTargetProcessHandle, out IntPtr lpTargetHandle,
    uint dwDesiredAccess, [MarshalAs(UnmanagedType.Bool)] bool bInheritHandle, uint dwOptions);

[Flags]
public enum DuplicateOptions : uint
{
    DUPLICATE_CLOSE_SOURCE = (0x00000001), // Closes the source handle. This occurs regardless of any error status returned.
    DUPLICATE_SAME_ACCESS = (0x00000002), // Ignores the dwDesiredAccess parameter. The duplicate handle has the same access as the source handle.
}

[DllImport("kernel32.dll")]
static extern IntPtr GetCurrentProcess();
then modifying how the handle is assigned:
//_threadHandle = GetCurrentThread(); <-- previous assignment
IntPtr processHandle = GetCurrentProcess();
bool result = DuplicateHandle(processHandle, GetCurrentThread(), processHandle, out _threadHandle,
    0, false, (uint)DuplicateOptions.DUPLICATE_SAME_ACCESS);
produces the following result:
Starting...
Kernel time: 0
User time: 10000000
Combined raw execution time: 10000000
Elapsed thread time: 1000 milliseconds
Wall Clock Time: 1006 milliseconds
Kernel time: 0
User time: 20000000
Combined raw execution time: 20000000
Elapsed thread time: 2000 milliseconds
Wall Clock Time: 2004 milliseconds
Kernel time: 0
User time: 30000000
Combined raw execution time: 30000000
Ended.
Elapsed thread time: 3000 milliseconds
Wall Clock Time: 3045 milliseconds
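The FILETIME-to-milliseconds arithmetic in the code above is language-agnostic: combine the unsigned low and high 32-bit halves into a 64-bit count of 100-nanosecond ticks, then divide by 10,000. A minimal sketch of the same arithmetic in Python (the parameter names mirror the Win32 FILETIME fields; the sample value is illustrative, not from a real run):

```python
def filetime_to_ms(dw_low_date_time: int, dw_high_date_time: int) -> int:
    """Combine a FILETIME's 32-bit halves into 100-ns ticks, then to ms."""
    # Mask to 32 bits first: in languages with signed integers the halves can
    # arrive negative, which is what the (ulong)/(uint) casts guard against in C#.
    ticks = ((dw_high_date_time & 0xFFFFFFFF) << 32) | (dw_low_date_time & 0xFFFFFFFF)
    return ticks // 10_000  # 100-ns units -> milliseconds

# 10,000,000 ticks of 100 ns = 1 second = 1000 ms,
# matching the "User time: 10000000" line in the first sample above.
print(filetime_to_ms(10_000_000, 0))  # -> 1000
```

Shifting before masking would be wrong for negative high words, which is why the mask (or an unsigned cast, as in the C# code) comes first.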
EDIT:
Recently a great deal of effort has gone into managing situations where more threads are created than a given system can usefully run. Say you have a quad-core processor and 20+ threads all want to run. Threads carry a fairly large cost in startup, kernel management, memory (each has its own stack), and so on, and the system may actually run slower (juggling contexts and scheduling) than it would with fewer threads. So in .NET, libraries like the TPL were created, letting the developer manage tasks rather than threads and allowing the CLR to balance the thread count against the target system. But in your case, where you explicitly create a managed thread, there is nearly always a 1-to-1 relationship with native threads.
Hope this helps.
Source: https://stackoverflow.com/questions/26472936/why-does-getthreadtimes-return