This sonar page basically lists the various methods employed by different code coverage analysis tools:
- Source code instrumentation (used by Clover)
- Offline byte-code instrumentation (used by Cobertura)
- On-the-fly byte-code instrumentation (used by JaCoCo)
What are these three methods, which one is the most efficient, and why? If the answer to the question of efficiency is "it depends", then please explain why.
Source code instrumentation consists of adding instructions to the source code before compiling it. These instructions are used to trace which parts of the code have been executed.
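Purely as an illustration, here is roughly what that looks like; the probe ids and the Coverage.hit helper below are made up for the sketch, not Clover's actual output:

```java
// Original method:
class MathUtil {
    static int max(int a, int b) {
        if (a > b) {
            return a;
        }
        return b;
    }
}

// Roughly what a source instrumenter might emit before compilation:
class MathUtilInstrumented {
    static int max(int a, int b) {
        Coverage.hit(1);            // method entered
        if (a > b) {
            Coverage.hit(2);        // then-branch taken
            return a;
        }
        Coverage.hit(3);            // fall-through path taken
        return b;
    }
}

// Hypothetical runtime that records which probes fired:
class Coverage {
    static final boolean[] HITS = new boolean[16];
    static void hit(int id) { HITS[id] = true; }
}
```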
Offline byte-code instrumentation consists of adding those same instructions, but after compilation, directly into the byte-code.
On-the-fly byte-code instrumentation consists of adding those same instructions to the byte-code as well, but dynamically, at runtime, when the byte-code is loaded by the JVM.
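On the JVM, the on-the-fly approach typically builds on the java.lang.instrument API: a Java agent (attached with -javaagent, which is how JaCoCo's agent is wired in) registers a ClassFileTransformer that can rewrite each class as it is loaded. A minimal sketch of that mechanism, with the actual byte-code rewriting left as a placeholder:

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Hypothetical agent class; a real agent jar also needs a
// "Premain-Class" entry in its manifest.
public class CoverageAgent {

    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // A real tool rewrites the byte-code here (typically with a
                // library such as ASM), inserting probes before returning
                // the modified class bytes.
                return insertProbes(classfileBuffer);
            }
        });
    }

    private static byte[] insertProbes(byte[] original) {
        // Placeholder: returning the bytes unchanged keeps the sketch runnable.
        return original;
    }
}
```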
This page has a comparison between the methods. It might be biased, since it's part of the Clover documentation.
Depending on your definition of "efficient", choose the one you like the most. I don't think you'll see enormous differences. They all do the job, and the big picture will be the same whatever method is used.
In general the effect on coverage is the same.
Source code instrumentation can give superior reporting results, simply because byte-code instrumentation cannot distinguish structure within a source line: at the byte-code level, block granularity is recorded only in terms of source lines.
Imagine I have two nested if statements (or, equivalently, if (a && b) ...) on a single line. A source code instrumenter can see these and provide coverage information for the multiple arms within the if on that source line; it can report blocks based on lines and columns. A byte-code instrumenter only sees one line wrapped around the conditions. Does it report the line as "covered" if condition a executes but is false?
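A concrete sketch of that situation (the method and helper names are invented for illustration):

```java
class LineGranularity {
    // All of the interesting structure sits on the single source line below.
    static void process(boolean enabled, boolean authorized) {
        if (enabled && authorized) { doWork(); }
        // A source-level tool can report three distinct outcomes for that line:
        //   1. enabled was false (short-circuit: authorized never evaluated)
        //   2. enabled was true but authorized was false
        //   3. both were true, so doWork() actually ran
        // A purely line-based report can only say the line was "hit".
    }

    static void doWork() {
        // Hypothetical work; its contents don't matter for the example.
    }
}
```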
You may argue this is a rare circumstance (and it probably is) and that the distinction is therefore not very useful. When you get bogus coverage on such a line followed by a field failure, you may change your mind about its utility.
There's a nice example and explanation of how byte-code coverage makes it extremely difficult to get coverage of switch statements right.
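To see the flavor of the problem, consider a deliberately compact switch where several cases and a fall-through share very few source lines (a made-up example):

```java
class SwitchCoverage {
    static String describe(int code) {
        switch (code) {
            case 1: case 2: return "low";   // two case labels share one source line
            case 3:                         // no statement: falls through to case 4
            case 4: return "mid";
            default: return "other";
        }
    }
}
// A line-based report cannot say whether both case 1 and case 2 were
// exercised, or whether the case 3 fall-through path was ever taken,
// because those distinct paths collapse onto shared source lines.
```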
A source code instrumenter may also achieve faster test execution, because it has the compiler helping optimize the instrumented code. In particular, a probe inserted inside a loop by a binary instrumenter may get compiled inside the loop by a JIT compiler, whereas a good Java compiler will see that the source-level instrumentation produces a loop-invariant result and lift it out of the loop. (A JIT compiler can arguably do this too; the question is whether it actually does.)
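A sketch of the loop argument, with a hypothetical probe array standing in for whatever a coverage runtime actually maintains; the hoisted form is roughly the transformation the answer has in mind:

```java
class LoopProbes {
    // Hypothetical probe storage for this sketch.
    static final boolean[] PROBES = new boolean[16];

    // Probe left inside the loop: it fires on every iteration,
    // even though it records the same fact each time.
    static long sum(int[] values) {
        long total = 0;
        for (int v : values) {
            PROBES[8] = true;
            total += v;
        }
        return total;
    }

    // Hoisted form: the probe is recorded once per call (if the loop
    // body runs at all) rather than once per iteration.
    static long sumHoisted(int[] values) {
        long total = 0;
        if (values.length > 0) {
            PROBES[8] = true;
        }
        for (int v : values) {
            total += v;
        }
        return total;
    }
}
```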
Source: https://stackoverflow.com/questions/15255798/what-are-the-differences-between-the-three-methods-of-code-coverage-analysis