Timing a remote call in a multithreaded java program


Question


I am writing a stress test that will issue many calls to a remote server. I want to collect the following statistics after the test:

  1. Latency (in milliseconds) of the remote call.
  2. Number of operations per second that the remote server can handle.

I can successfully get (2), but I am having problems with (1). My current implementation is very similar to the one shown in this other SO question. And I have the same problem described in that question: latency reported by using System.currentTimeMillis() is longer than expected when the test is run with multiple threads.

I analyzed the problem and I am positive the problem comes from the thread interleaving (see my answer to the other question that I linked above for details), and that System.currentTimeMillis() is not the way to solve this problem.

It seems that I should be able to do it using java.lang.management, which has some interesting methods such as the ones below (a brief sketch of how they are invoked follows the list):

  1. ThreadMXBean.getCurrentThreadCpuTime()
  2. ThreadMXBean.getCurrentThreadUserTime()
  3. ThreadInfo.getWaitedTime()
  4. ThreadInfo.getBlockedTime()
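
For reference, here is a minimal sketch of how these methods are called (the class name is just for illustration). Note that the CPU/user times are in nanoseconds, while the waited/blocked times come from ThreadInfo in milliseconds and stay at -1 until thread contention monitoring is switched on:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadTimesProbe {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();

        // CPU and user time of the current thread, in nanoseconds (if supported by the JVM).
        if (bean.isCurrentThreadCpuTimeSupported()) {
            System.out.println("cpu=" + bean.getCurrentThreadCpuTime() + "ns"
                    + ", user=" + bean.getCurrentThreadUserTime() + "ns");
        }

        // Waited/blocked times are reported in milliseconds via ThreadInfo and are -1
        // unless thread contention monitoring is enabled first.
        if (bean.isThreadContentionMonitoringSupported()) {
            bean.setThreadContentionMonitoringEnabled(true);
            ThreadInfo info = bean.getThreadInfo(Thread.currentThread().getId());
            System.out.println("waited=" + info.getWaitedTime() + "ms"
                    + ", blocked=" + info.getBlockedTime() + "ms");
        }
    }
}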

My problem is that even though I have read the API, it is still unclear to me which of these methods will give me what I want. In the context of the other SO question that I linked, this is what I need:

long start_time = rightMethodToCall();
result = restTemplate.getForObject("Some URL", String.class);
long difference = (rightMethodToCall() - start_time);

So that the difference gives me a very good approximation of the time that the remote call took, even in a multi-threaded environment.

Restriction: I'd like to avoid protecting that block of code with a synchronized block because my program has other threads that I would like to allow to continue executing.

EDIT: Providing more info:

The issue is this: I want to time the remote call, and just the remote call. If I use System.currentTimeMillis() or System.nanoTime(), and if I have more threads than cores, then it is possible to get this thread interleaving:

  1. Thread1: long start_time ...
  2. Thread1: result = ...
  3. Thread2: long start_time ...
  4. Thread2: result = ...
  5. Thread2: long difference ...
  6. Thread1: long difference ...

If that happens, then the difference calculated by Thread2 is correct, but the one calculated by Thread1 is incorrect (it would be greater than it should be). In other words, for the measurement of the difference in Thread1, I would like to exclude the time taken by steps 4 and 5. Is this the time that the thread was WAITING?

Summarizing the question in a different way in case it helps other people understand it better (this is how @jason-c put it in his comment):

[I am] attempting to time the remote call, but running the test with multiple threads just to increase testing volume.


Answer 1:


Use System.nanoTime() (but see updates at end of this answer).

You definitely don't want to use the current thread's CPU or user time, as user-perceived latency is wall-clock time, not thread CPU time. You also don't want to use the current thread's blocked or waited time, as those measure per-thread contention and likewise don't represent what you are trying to measure.

System.nanoTime() will return relatively accurate results from a high-resolution clock with a fixed reference point, and it measures exactly what you are trying to measure. (Its granularity is only guaranteed to be as good as or better than currentTimeMillis(), but in practice it tends to be much better, as it is generally backed by hardware clocks or other performance timers, e.g. QueryPerformanceCounter on Windows or clock_gettime on Linux.)

long start_time = System.nanoTime();   // nanoTime() values are only meaningful as differences
result = restTemplate.getForObject("Some URL",String.class);
long difference = (System.nanoTime() - start_time);
long milliseconds = difference / 1000000;   // convert nanoseconds to milliseconds

System.nanoTime() does have its own set of issues, but be careful not to get whipped up in paranoia; for most applications it is more than adequate. You just wouldn't want to use it for, say, precise timing when sending audio samples to hardware or something (which you wouldn't do directly in Java anyway).
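
For illustration, here is a rough sketch of a harness along these lines (the class and names are hypothetical, and a sleeping Callable stands in for the actual restTemplate.getForObject call): each worker thread times its own request with System.nanoTime(), and latency and throughput are aggregated afterwards.

import java.util.Queue;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class StressHarness {
    public static void main(String[] args) throws InterruptedException {
        int threads = Runtime.getRuntime().availableProcessors();
        int requests = 1000;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        Queue<Long> latencies = new ConcurrentLinkedQueue<>();

        // Stand-in for restTemplate.getForObject("Some URL", String.class)
        Callable<String> remoteCall = () -> { Thread.sleep(50); return "ok"; };

        long testStart = System.nanoTime();
        for (int i = 0; i < requests; i++) {
            pool.submit(() -> {
                long start = System.nanoTime();           // wall-clock start for this request
                try { remoteCall.call(); } catch (Exception e) { /* count failures separately */ }
                latencies.add(System.nanoTime() - start); // elapsed wall-clock time for this request
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        double avgMs = latencies.stream().mapToLong(Long::longValue).average().orElse(0) / 1_000_000.0;
        double opsPerSec = requests * 1_000_000_000.0 / (System.nanoTime() - testStart);
        System.out.printf("avg latency = %.2f ms, throughput = %.1f ops/s%n", avgMs, opsPerSec);
    }
}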

Update 1:

More importantly, how do you know the measured values are longer than expected? If your measurements are showing true wall clock time, and some threads are taking longer than others, that is still an accurate representation of user-perceived latency, as some users will experience those longer delay times.

Update 2 (based on clarification in comments):

Much of my above answer is still valid then, but for different reasons.

Using per-thread time does not give you an accurate representation because a thread could be idle/inactive while the remote request is still processing, and you would therefore exclude that time from your measurement even though it is part of perceived latency.
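
To make this concrete, here is a small sketch (with Thread.sleep standing in for a thread parked waiting on a remote response): the thread's CPU time barely advances during the wait, while the wall clock shows the full delay, so per-thread CPU time would undercount exactly the latency you want.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuTimeVsWallTime {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();

        long wallStart = System.nanoTime();
        long cpuStart = bean.getCurrentThreadCpuTime();

        Thread.sleep(200);  // stand-in for waiting on a remote response

        long wallMs = (System.nanoTime() - wallStart) / 1_000_000;
        long cpuMs = (bean.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;

        // Typically prints something like "wall=200ms, cpu=0ms": the wait is
        // invisible to per-thread CPU time but is exactly the latency being measured.
        System.out.println("wall=" + wallMs + "ms, cpu=" + cpuMs + "ms");
    }
}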

Further inaccuracies are introduced by the remote server taking longer to process the simultaneous requests you are making - this is an extra variable that you are adding (although it may be acceptable as representative of the remote server being busy).

Wall time is also not completely accurate because, as you have seen, variances in local thread overhead may add extra latency that isn't typically present in single-request client applications (although this may still be acceptable as representative of a multi-threaded client application, it is a variable you cannot control).

Of those two, wall time will still get you closer to the actual result than per-thread time, which is why I left the previous answer above. You have a few options:

  • You could do your tests on a single thread, serially -- this is ultimately the most accurate way to achieve your stated requirements.
  • You could avoid creating more threads than cores, e.g. a fixed-size thread pool with affinities bound (tricky: Java thread affinity) to each core and the measurements running as tasks on each. Of course, this still adds variables from underlying synchronization mechanisms that are beyond your control. It may reduce the risk of interleaving (especially if you set the affinities), but you still do not have full control over e.g. other threads the JVM is running or other unrelated processes on the system.
  • You could measure the request handling time on the remote server; of course this does not take network latency into account.
  • You could continue using your current approach and do some statistical analysis on the results to remove outliers.
  • You could not measure this at all, and simply do user tests and wait for a comment on it before attempting to optimize (i.e. measure it with people, who are whom you're developing for anyway). If the only reason to optimize this is for UX, it could very well be the case that users have a pleasant experience and the wait time is totally acceptable.

Also, none of this makes any guarantees that other unrelated threads on the system won't affect your timings, but that is why it is important to both a) run your test multiple times and average the results (obviously) and b) decide on a timing error you are willing to accept (do you really need to know this to, e.g., 0.1 ms accuracy?).

Personally, I would either do the first, single-threaded approach and let it run overnight or over a weekend, or use your existing approach and remove outliers from the result and accept a margin of error in the timings. Your goal is to find a realistic estimate within a satisfactory margin of error. You will also want to consider what you are going to ultimately do with this information when deciding what is acceptable.
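
As a rough sketch of the outlier-trimming route (the trim fraction and percentile are arbitrary choices, and a reasonably large sample of latencies is assumed):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LatencyStats {
    // Reports a trimmed mean and 95th percentile from a list of per-request latencies in nanoseconds.
    static void report(List<Long> latenciesNanos, double trimFraction) {
        List<Long> sorted = new ArrayList<>(latenciesNanos);
        Collections.sort(sorted);

        int drop = (int) (sorted.size() * trimFraction);   // e.g. 0.05 drops the extreme 5% at each end
        List<Long> trimmed = sorted.subList(drop, sorted.size() - drop);

        double meanMs = trimmed.stream().mapToLong(Long::longValue).average().orElse(0) / 1_000_000.0;
        long p95Ms = sorted.get((int) (sorted.size() * 0.95)) / 1_000_000;

        System.out.printf("trimmed mean = %.2f ms, p95 = %d ms, n = %d%n", meanMs, p95Ms, sorted.size());
    }
}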



Source: https://stackoverflow.com/questions/19800573/timing-a-remote-call-in-a-multithreaded-java-program
