Question
The current implementation of the built-in benchmarking tool appears to run the code inside the iter call multiple times for each run of the setup code outside iter. When the code being benchmarked modifies the setup data, subsequent iterations of the benchmarked code are no longer benchmarking the same thing.
As a concrete example, I am benchmarking how long it takes to remove values from a Vec:
#![feature(test)]
extern crate test;

use test::Bencher;

#[bench]
fn clearing_a_vector(b: &mut Bencher) {
    let mut things = vec![1];

    b.iter(|| {
        assert!(!things.is_empty());
        things.clear();
    });
}
This will fail:
test clearing_a_vector ... thread 'main' panicked at 'assertion failed: !things.is_empty()', src/lib.rs:11
Performing a similar benchmark of pushing an element onto the vector shows that the iter closure was executed nearly 980 million times (depending on how fast the closure is). The results could be very misleading if there's a single run that does what I expect and millions more that don't.
Tests were run with Rust nightly 1.19.0 (f89d8d184 2017-05-30).
Answer 1:
Check out pew, a recently published crate for benchmarking Rust code. It allows you to do one-time setup that is cloned into every benchmark, or to run setup manually by pausing and resuming the benchmark.
This library is in its very early phases, but it might be what you're looking for. Contributions are always welcome.
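pew's own API isn't shown in the answer, so no pew example is given here. As a rough sketch of the clone-per-iteration idea using only the built-in Bencher from the question (the function name and structure are assumptions, not pew's interface):

#![feature(test)]
extern crate test;

use test::Bencher;

#[bench]
fn clearing_a_cloned_vector(b: &mut Bencher) {
    let things = vec![1];

    b.iter(|| {
        // Clone the setup data on every iteration so the closure
        // always starts from the same state.
        let mut local = things.clone();
        assert!(!local.is_empty());
        local.clear();
        // Return the value so the optimizer can't discard the work.
        local
    });
}

Note that this folds the cost of the clone into every measurement, which is exactly the kind of distortion a one-time-setup or pause/resume API is meant to avoid.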
Source: https://stackoverflow.com/questions/44344832/how-can-i-benchmark-code-that-mutates-the-setup-data