benchmarking

Changing POST data used by Apache Bench per iteration

岁酱吖の submitted on 2019-12-09 08:49:27
Question: I'm using ab to do some load testing, and it's important that the supplied query-string (or POST) parameters change between requests, i.e. I need to make requests to URLs like:

    http://127.0.0.1:9080/meth?param=0
    http://127.0.0.1:9080/meth?param=1
    http://127.0.0.1:9080/meth?param=2
    ...

to properly exercise the application. ab seems to read the supplied POST data file only once, at startup, so changing its contents during the test run is not an option. Any suggestions?

Answer 1: You're going to need …
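Since ab reads its payload only once, one common workaround is to drive the requests from a small script instead of ab. A minimal Python sketch (the endpoint is the one from the question; the actual request call is left commented out so the sketch runs without a live server):

```python
import urllib.request

def build_urls(base, n):
    """Build the per-iteration URLs that ab cannot vary on its own."""
    return [f"{base}?param={i}" for i in range(n)]

def hammer(base, n):
    """Fire the requests sequentially (requires the server to be running)."""
    for url in build_urls(base, n):
        urllib.request.urlopen(url).read()

urls = build_urls("http://127.0.0.1:9080/meth", 3)
print(urls[0])  # http://127.0.0.1:9080/meth?param=0
# hammer("http://127.0.0.1:9080/meth", 1000)  # uncomment against a live server
```

This loses ab's concurrency and report output; a scriptable load tool (e.g. JMeter, or wrk with a Lua script) can vary parameters per request while keeping both.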

Benchmarks used to test a C and C++ allocator?

青春壹個敷衍的年華 submitted on 2019-12-09 05:41:01
Question: Can anyone please advise on benchmarks used to test a C or C++ allocator? Benchmarks addressing any of the following aspects are of interest: speed, fragmentation, concurrency. Thanks!

Answer 1: If you are asking about a general-purpose allocator for a C/C++ program, then I have found the paper "Hoard: A Scalable Memory Allocator for Multithreaded Applications", which considers this question. To quote from that document: "There is as yet no standard suite of benchmarks for evaluating multithreaded allocators. We know …"

How fast is Berkeley DB SQL compared to SQLite?

房东的猫 submitted on 2019-12-08 22:40:10
Question: Oracle recently released a Berkeley DB back end for SQLite. I happen to have a hundreds-of-megabytes SQLite database that could very well benefit from the promised "improved performance, concurrency, scalability, and reliability", but Oracle's site appears to lack any measurements of the improvements. Has anyone here done some benchmarking?

Answer 1: I participated in the beta evaluation of the BDB SQLite code, and one of the things I tried to get a handle on was the performance difference. At this point, I …
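Short of published numbers, the most reliable answer is to measure your own workload. A minimal harness sketch using Python's built-in sqlite3 module, with the insert loop wrapped in one transaction (it uses an in-memory database, so it only shows the shape of the harness; to compare back ends you would point the same script at file-backed databases built with each library):

```python
import sqlite3
import time

def bench_inserts(n):
    """Time n single-row inserts inside one transaction; return (row count, seconds)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
    t0 = time.perf_counter()
    with con:  # one transaction for all n inserts
        con.executemany("INSERT INTO t (val) VALUES (?)",
                        (("x",) for _ in range(n)))
    elapsed = time.perf_counter() - t0
    count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    con.close()
    return count, elapsed

rows, secs = bench_inserts(1000)
print(rows, secs)
```

Transaction batching matters enormously in SQLite, so any fair comparison should hold the commit strategy constant across back ends.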

Performance benchmark of native android map vs webview map, what parameters can be included in the benchmark

落爺英雄遲暮 submitted on 2019-12-08 19:52:28
I am trying to compare the native Google Maps (v2) against the embeddable HTML version encapsulated in a WebView on Android. While it's pretty evident that the native maps are smoother and faster, I must prove that somehow. I have been searching the internet for quite some time and did not find any existing benchmarks. Does anybody know of someone who has actually done something similar? I am already thinking of creating such a benchmark of my own, but how can the performance actually be measured? My ideas so far:

    Measure rendering of different numbers of markers, polylines, etc.
    Measure map …

String Pool: “Te”+“st” faster than “Test”?

只谈情不闲聊 submitted on 2019-12-08 17:38:20
Question: I am running some performance benchmarks involving the string pool; however, the outcome is not what I expected. I made three static methods:

    perform0() ... creates a new object every time, via new String(..)
    perform1() ... uses the string literal "Test"
    perform2() ... uses the string constant expression "Te" + "st"

My expectation was (1 = fastest, 3 = slowest):

    1. "Test", because of string pooling
    2. "Te" + "st", because of string pooling, but a bit slower than 1 because of the + operator
    3. new String(..), because of no string pooling

But the …
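The question is about Java, but the effect it probes, compile-time folding of a constant string expression, is easy to see in CPython too (a sketch of the general idea only, not of the JVM's behaviour; in Java, "Te" + "st" is likewise folded to the pooled constant "Test" at compile time, so methods 1 and 2 should execute identical bytecode):

```python
a = "Te" + "st"   # folded to the constant "Test" by the compiler
b = "Test"
print(a == b)  # True
print(a is b)  # typically True in CPython: both become the same pooled constant
```

Because the folding happens before the program runs, a benchmark of perform1() vs perform2() measures the same code, which is why no + operator cost shows up.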

C++ code execution time varies with small source change that shouldn't introduce any extra work

烂漫一生 submitted on 2019-12-08 17:32:05
Question: While benchmarking some code, I found that its execution time would vary with even the most innocuous code changes. I have attempted to boil the code below down to the most minimal test case, but it is still rather lengthy (for which I apologize). Changing virtually anything largely affects the benchmark results.

    #include <string>
    #include <vector>
    #include <iostream>
    #include <random>
    #include <chrono>
    #include <functional>

    constexpr double usec_to_sec = 1000000.0;

    // Simple …

Benchmarking with googletest?

﹥>﹥吖頭↗ submitted on 2019-12-08 14:52:11
Question: Background (skip to the question below if not interested). I have a simulator that runs through three states:

    1. Single-threaded startup (I/O OK)
    2. Multi-threaded, in-memory, CPU-bound simulation stage (I/O not OK)
    3. Post-simulation, post-join single-threaded stage (I/O OK)

What the heck! During standard testing, CPU usage dropped from 100% down to 20%, and the total run took about 30 times longer than normal (130 s vs. 4.2 s). When Callgrind revealed nothing suspicious, my head buzzed as I was on the …

Why are python's for loops so non-linear for large inputs?

做~自己de王妃 submitted on 2019-12-08 14:43:35
Question: While benchmarking some Python code, I noticed something strange. I used the following function to measure how long it takes to iterate through an empty for loop:

    def f(n):
        t1 = time.time()
        for i in range(n):
            pass
        print(time.time() - t1)

f(10**6) prints about 0.035, f(10**7) about 0.35, f(10**8) about 3.5, and f(10**9) about 35. But f(10**10)? Well over 2000. That's certainly unexpected. Why would it take over 60 times as long to iterate through 10 times as many elements? What's with …
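A more controlled way to take these measurements is the standard-library timeit module, which keeps the print call and setup out of the timed region; checking that the small sizes scale linearly isolates where the anomaly starts. A sketch with sizes kept small so it finishes quickly (the question used 10**6 through 10**10):

```python
import timeit

def loop(n):
    """The empty loop under test."""
    for i in range(n):
        pass

# Time each size once and print it; linear scaling means roughly 10x per row.
for n in (10**4, 10**5, 10**6):
    t = timeit.timeit(lambda: loop(n), number=1)
    print(n, t)
```

If the small sizes are linear but one large size suddenly is not, the cause is usually outside the interpreter loop itself (e.g. memory pressure on the machine), so it is worth watching memory use while the big run executes.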

Step by step guide for benchmarking PHP project

牧云@^-^@ submitted on 2019-12-08 10:12:06
Question: Can anyone guide me on how to load-test/benchmark a project written in plain procedural PHP (no framework) and MySQL to identify the bottleneck? The project uses SESSION to store some values. I have the latest version of WAMP! [On SO I found that JMeter could do the job, but there was no step-by-step guide, nor did I find one on JMeter's site. Looking for help from you.]

Answer 1: If you want to profile your code to find out which part of it takes all the time, you're looking for a profiler. With WAMP, I …
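A profiler finds the slow code; a load generator finds the slow endpoints. The latter half is easy to sketch in any scripting language; here is a minimal Python version (the URL and request count are placeholders, and the percentile helper is separated out so it can be sanity-checked without a running server):

```python
import math
import time
import urllib.request

def percentile(samples, p):
    """Nearest-rank percentile of a list of latencies, p in (0, 100]."""
    s = sorted(samples)
    k = math.ceil(p / 100 * len(s)) - 1
    return s[k]

def load_test(url, n=50):
    """Hit a (hypothetical) endpoint n times; return median and p95 latency."""
    latencies = []
    for _ in range(n):
        t0 = time.perf_counter()
        urllib.request.urlopen(url).read()
        latencies.append(time.perf_counter() - t0)
    return percentile(latencies, 50), percentile(latencies, 95)

print(percentile([5, 1, 3, 2, 4], 50))  # 3
# load_test("http://localhost/index.php")  # run against your local WAMP server
```

This only tells you which pages are slow; to see why, combine it with a PHP profiler run on the slow page while the load is applied.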

Importing multiple versions of the same Module/Package for Benchmarking

99封情书 submitted on 2019-12-08 09:04:31
I am working on a package; this package uses BinDeps to pull in some C source code and compile some binaries. My Julia module then mostly just exposes ccalls to those functions. Now there are about five different options for how the C source can be compiled, with various optimisations turned on, and to go with them, various other changes triggered by a constant that is written to the deps.jl file output by BinDeps. So I would like to import each of the different builds of my package as a different module, so I can benchmark them using BenchmarkTools.jl. Currently, my …