benchmarking

Comparing times with sub-second accuracy

北城余情 submitted on 2019-12-20 16:17:24
Question: How can I get the number of milliseconds since the epoch? Note that I want the actual milliseconds, not seconds multiplied by 1000. I am comparing times for operations that take less than a second, so I need millisecond accuracy. (I have looked at lots of answers and they all seem to have a *1000.) I am comparing a time that I get in a POST request to the end time on the server; I just need the two times to be in the same format, whatever that is. I figured Unix time would work, since JavaScript has a…
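The usual fix is to take the sub-second part of the clock directly rather than multiplying whole seconds. A minimal sketch in Python (chosen here for illustration; JavaScript's `Date.now()` already returns integer milliseconds, so the two sides of the POST comparison line up):

```python
import time

def epoch_millis() -> int:
    # time.time_ns() is nanoseconds since the Unix epoch; integer
    # division keeps true millisecond precision with no float rounding.
    return time.time_ns() // 1_000_000

start = epoch_millis()
time.sleep(0.05)
elapsed = epoch_millis() - start
print(elapsed)  # roughly 50
```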

Is `if` faster than `ifelse`?

跟風遠走 submitted on 2019-12-20 10:30:36
Question: When I was re-reading Hadley's Advanced R recently, I noticed that he says in Chapter 6 that `if` can be used as a function, like `` `if`(i == 1, print("yes"), print("no")) `` (if you have the physical book in hand, it's on page 80). We know that `ifelse` is slow (Does ifelse really calculate both of its vectors every time? Is it slow?), as it evaluates all of its arguments. Would `if` be a good alternative, since `if` seems to evaluate only the TRUE branch (this is just my assumption)? Update: Based on the…

How to do “performance-based” (benchmark) unit testing in Python

匆匆过客 submitted on 2019-12-20 10:13:23
Question: Let's say that I've got my code base to as high a degree of unit-test coverage as makes sense. (Beyond a certain point, increasing coverage doesn't have a good ROI.) Next I want to test performance: to benchmark the code to make sure that new commits aren't slowing things down needlessly. I was very intrigued by Safari's zero-tolerance policy for slowdowns from commits. I'm not sure that level of commitment to speed has a good ROI for most projects, but I'd at least like to be alerted that a…
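One lightweight way to get that alert, sketched with only the standard library (the function and the time budget here are made-up placeholders you would tune per CI machine; dedicated tools such as pytest-benchmark handle warmup and statistics more robustly):

```python
import timeit

def build_squares(n):
    """Stand-in for the function whose performance we want to guard."""
    return [i * i for i in range(n)]

# Hypothetical budget: generous enough to avoid flaky CI failures,
# tight enough to catch a large regression introduced by a commit.
BUDGET_SECONDS = 1.0

elapsed = timeit.timeit(lambda: build_squares(10_000), number=50)
assert elapsed < BUDGET_SECONDS, f"perf regression: {elapsed:.3f}s > {BUDGET_SECONDS}s"
print(f"50 runs in {elapsed:.3f}s (within budget)")
```

Run as an ordinary test so a slow commit fails the build instead of slipping through silently.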

Why is C++ initial allocation so much larger than C's?

对着背影说爱祢 submitted on 2019-12-20 08:22:55
Question: When using the same code, simply changing the compiler (from a C compiler to a C++ compiler) changes how much memory is allocated. I'm not quite sure why this is and would like to understand it better. So far the best response I've gotten is "probably the I/O streams", which isn't very descriptive and makes me wonder about the "you don't pay for what you don't use" aspect of C++. I'm using the Clang and GCC compilers, versions 7.0.1-8 and 8.3.0-6 respectively. My system is running on Debian…

Rust benchmark optimized out

送分小仙女□ submitted on 2019-12-20 04:49:49
Question: I am trying to benchmark getting keys from a Rust hash map. I have the following benchmark:

```rust
#[bench]
fn rust_get(b: &mut Bencher) {
    let (hash, keys) =
        get_random_hash::<HashMap<String, usize>>(&HashMap::with_capacity, &rust_insert_fn);
    let mut keys = test::black_box(keys);
    b.iter(|| {
        for k in keys.drain(..) {
            hash.get(&k);
        }
    });
}
```

where `get_random_hash` is defined as:

```rust
fn get_random_hash<T>(
    new: &Fn(usize) -> T,
    insert: &Fn(&mut T, String, usize) -> (),
) -> (T, Vec<String>) {
    let mut keys = …
```

How can I run a query multiple times in phpmyadmin?

你说的曾经没有我的故事 submitted on 2019-12-20 03:15:14
Question: I want a way to benchmark a query by running it, say, 1,000,000 times. What's the easiest way to do this? So far I've searched for a way to issue a query multiple times, but nothing has come up. I've also come across the BENCHMARK() function that can be run at the MySQL command line, but it seems to have some limitations and I can't get it to work. Answer 1: This isn't really the job of phpMyAdmin, a GUI for MySQL beginners. Put the query in a script, in a loop that runs 1,000,000 times. Though that…
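The loop-in-a-script approach from the answer can be sketched as follows. SQLite in memory is used as a stand-in so the example is self-contained; swapping the connection for a MySQL driver keeps the structure identical:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # stand-in for the real MySQL connection
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO t (v) VALUES (?)",
                 [(f"row{i}",) for i in range(1000)])

runs = 10_000  # scale toward 1,000,000 once the per-run cost is known
start = time.perf_counter()
for _ in range(runs):
    conn.execute("SELECT v FROM t WHERE id = ?", (500,)).fetchone()
elapsed = time.perf_counter() - start
print(f"{runs} runs in {elapsed:.3f}s ({elapsed / runs * 1e6:.1f} µs/query)")
```

Note that against a real server a client-side loop also measures network round-trips, which MySQL's server-side BENCHMARK() does not.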

How can I prevent the Rust benchmark library from optimizing away my code?

霸气de小男生 submitted on 2019-12-20 02:27:13
Question: I have a simple idea I'm trying to benchmark in Rust. However, when I go to measure it using `test::Bencher`, the base case that I'm trying to compare against:

```rust
#![feature(test)]
extern crate test;

#[cfg(test)]
mod tests {
    use test::black_box;
    use test::Bencher;

    const ITERATIONS: usize = 100_000;

    struct CompoundValue {
        pub a: u64,
        pub b: u64,
        pub c: u64,
        pub d: u64,
        pub e: u64,
    }

    #[bench]
    fn bench_in_place(b: &mut Bencher) {
        let mut compound_value = CompoundValue { a: 0, b: 2, c: 0, d: 5, e: 0, …
```

PyPy 17x faster than Python. Can CPython be sped up?

自古美人都是妖i submitted on 2019-12-20 02:25:32
Question: Solving a recent Advent of Code problem, I found my default Python was ~40x slower than PyPy. I was able to get that down to about 17x with this code, by limiting calls to len and limiting global lookups by running it in a function. Right now, e.py runs in 5.162 seconds on Python 3.6.3 and 0.297 seconds on PyPy on my machine. My question is: is this the irreducible speedup of the JIT, or is there some way to speed up the CPython answer? (Short of extreme means: I could go to Cython/Numba or…
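The "limiting lookups" trick the question mentions is worth seeing concretely. A small sketch of one such CPython micro-optimization, hoisting an attribute lookup out of a hot loop (function names here are illustrative, not from the question's code):

```python
import timeit

def double_slow(items):
    out = []
    for x in items:
        out.append(x * 2)   # out.append is looked up on every iteration
    return out

def double_fast(items):
    out = []
    append = out.append     # hoist the attribute lookup into a local
    for x in items:
        append(x * 2)
    return out

data = list(range(100_000))
t_slow = timeit.timeit(lambda: double_slow(data), number=20)
t_fast = timeit.timeit(lambda: double_fast(data), number=20)
print(f"slow: {t_slow:.3f}s  fast: {t_fast:.3f}s")
```

PyPy's JIT makes exactly this class of lookup essentially free, which accounts for much of the remaining gap; such tricks shave constants off CPython but rarely close it entirely.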

Memory benchmark plot: understanding cache behaviour

会有一股神秘感。 submitted on 2019-12-19 19:12:25
Question: I've tried every kind of reasoning I can possibly come up with, but I don't really understand this plot. It basically shows the performance of reading and writing from arrays of different sizes with different strides. I understand that for a small stride like 4 bytes I read every cell in the cache, and consequently I get good performance. But what happens when I have a 2 MB array and a 4 KB stride? Or a 4 MB array and a 4 KB stride? Why is the performance so bad? Finally, why, when I have a 1 MB array and the…
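For reference, the measurement behind plots like this keeps the number of accesses fixed and varies only the stride, so any timing difference comes from the access pattern rather than the amount of work. A Python sketch of that methodology (Python's list indirection blunts real cache effects, so treat this as the shape of the experiment, not a faithful reproduction; in C the same loop exposes the cache-level plateaus the plot shows):

```python
import time

def stride_read(buf, stride, accesses):
    # Fixed number of reads; only the distance between consecutive
    # reads changes, so per-run work is identical across strides.
    n = len(buf)
    total = 0
    idx = 0
    for _ in range(accesses):
        total += buf[idx]
        idx = (idx + stride) % n
    return total

buf = [1] * (1 << 20)  # ~1M elements
for stride in (1, 16, 256, 4096):
    start = time.perf_counter()
    stride_read(buf, stride, 200_000)
    print(f"stride {stride:5d}: {time.perf_counter() - start:.4f}s")
```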

MySQL BIGINT(20) vs VARCHAR(31) performance

青春壹個敷衍的年華 submitted on 2019-12-19 11:37:07
Question: I have read that a BIGINT key like 23423423423423423637 is better for a primary unique key than a VARCHAR like 961637593864109_412954765521130, but how big is the difference when there are, say, 1 million rows and I will never sort, only select/update one row at a time? It would be much more comfortable for me to use VARCHAR, and I will stay with that if the performance difference is under 30% or so. I can't find any benchmark for that. Answer 1: This would really have to be measured; we can make some…
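The answer is right that this has to be measured. A self-contained sketch of such a measurement, using SQLite as a stand-in (absolute numbers and the relative gap will differ on MySQL/InnoDB, so rerun the idea against the real schema):

```python
import sqlite3
import timeit

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE by_int (id INTEGER PRIMARY KEY, v TEXT)")
conn.execute("CREATE TABLE by_str (id TEXT PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO by_int VALUES (?, ?)",
                 [(i, "v") for i in range(10_000)])
conn.executemany("INSERT INTO by_str VALUES (?, ?)",
                 [(f"961637593864109_{i}", "v") for i in range(10_000)])

# Time single-row lookups by integer key vs. by long text key.
t_int = timeit.timeit(
    lambda: conn.execute("SELECT v FROM by_int WHERE id = ?", (5000,)).fetchone(),
    number=5000)
t_str = timeit.timeit(
    lambda: conn.execute("SELECT v FROM by_str WHERE id = ?",
                         ("961637593864109_5000",)).fetchone(),
    number=5000)
print(f"integer key: {t_int:.3f}s  text key: {t_str:.3f}s")
```

Comparing the two timings against the 30% threshold from the question turns the comfort-vs-speed decision into data rather than guesswork.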