benchmarking

How to accurately measure clock cycles used by a C++ function?

Submitted by 不羁的心 on 2019-12-04 13:17:25
I know that I have to use rdtsc. The measured function is deterministic, but the result is far from repeatable (I get 5% oscillation from run to run). Possible causes are context switching and cache misses. Do you know any other causes? How can I eliminate them? TSCs (what rdtsc reads) are often not synchronized on multi-processor systems; it may help to set the CPU affinity in order to bind the process to a single CPU. You could also get timestamps from HPET timers if available, which aren't prone to the same problem. As for repeatability, those variances are real. You could disable caching,
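A minimal sketch of this kind of cycle measurement (assuming Linux on x86-64 with GCC or Clang; work() is a placeholder for the function under test, and taking the minimum over many runs is one common way to dampen the run-to-run variance described above):

    // Pin the process to one CPU, then read the TSC around the function under test.
    #include <x86intrin.h>   // __rdtsc
    #include <sched.h>       // cpu_set_t, CPU_ZERO, CPU_SET, sched_setaffinity
    #include <cstdint>
    #include <cstdio>

    static void work() { /* placeholder for the function being measured */ }

    int main() {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                        // bind to CPU 0 so one core's TSC is read
        sched_setaffinity(0, sizeof(set), &set);

        uint64_t best = UINT64_MAX;
        for (int i = 0; i < 1000; ++i) {         // keep the minimum over many runs
            uint64_t t0 = __rdtsc();
            work();
            uint64_t t1 = __rdtsc();
            if (t1 - t0 < best) best = t1 - t0;
        }
        std::printf("min cycles: %llu\n", (unsigned long long)best);
    }

On recent CPUs a serializing read such as __rdtscp, or a fence around __rdtsc, is often added so that out-of-order execution does not blur the measured interval.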

How do I compile a single source file within an MSVC project from the command line?

Submitted by 落花浮王杯 on 2019-12-04 13:13:35
I'm about to start doing some benchmarking/testing of our builds, and I'd like to drive the whole thing from the command line. I am aware of DevEnv but am not convinced it can do what I want. If I could have a single file built within a single project, I'd be happy. Can this be done? The magical incantation is as follows. Note that this has only been tested with VS 2010; I have heard this is the first version of Visual Studio with this capability. The incantation: <msbuild> <project> <settings> <file>, where msbuild is the path to MSBuild.exe. Usually this should be set up for you by the VS2010
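Filled in with concrete placeholder names, the incantation usually takes a form like the following (MyProject.vcxproj and src\widget.cpp are hypothetical; the SelectedFiles property is the mechanism commonly cited for restricting the ClCompile target to a single file, and its behaviour may vary between Visual Studio versions):

    MSBuild.exe MyProject.vcxproj /t:ClCompile /p:Configuration=Debug;Platform=Win32 /p:SelectedFiles="src\widget.cpp"

Running this from a Visual Studio command prompt keeps MSBuild.exe on the PATH without spelling out its full location.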

Go HTTP server testing: ab vs wrk show a huge difference in results

Submitted by 佐手、 on 2019-12-04 12:34:05
Question: I am trying to see how many requests the Go HTTP server can handle on my machine, so I ran some tests, but the difference between tools is so large that I am confused. First I benchmarked with ab, running this command to issue 1000 concurrent requests:

    $ ab -n 100000 -c 1000 http://127.0.0.1/

The result is as follows:

    Concurrency Level:      1000
    Time taken for tests:   12.055 seconds
    Complete requests:      100000
    Failed requests:        0
    Write errors:           0
    Total transferred:      12800000 bytes
    HTML transferred:       1100000 bytes
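For comparison, a roughly equivalent wrk invocation would be the following (the thread count and duration are illustrative, not taken from the original post):

    $ wrk -t8 -c1000 -d12s http://127.0.0.1/

One common source of the large gap between the two tools is connection handling: wrk reuses persistent connections, whereas ab does not use HTTP keep-alive unless it is given the -k flag, so ab pays the cost of a new TCP connection per request.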

Suggestions for optimizing passing expressions as method parameters

Submitted by 北慕城南 on 2019-12-04 12:21:10
I'm a great fan of the relatively recent trend of using lambda expressions instead of strings for indicating properties in, for instance, ORM mapping. Strongly typed >>>> stringly typed. To be clear, this is what I'm talking about:

    builder.Entity<WebserviceAccount>()
        .HasTableName( "webservice_accounts" )
        .HasPrimaryKey( _ => _.Id )
        .Property( _ => _.Id ).HasColumnName( "id" )
        .Property( _ => _.Username ).HasColumnName( "Username" ).HasLength( 255 )
        .Property( _ => _.Password ).HasColumnName( "Password" ).HasLength( 255 )
        .Property( _ => _.Active ).HasColumnName( "Active" );

In some recent

Poor performance

Submitted by 五迷三道 on 2019-12-04 12:15:11
I am doing performance tests for my master's thesis and I'm getting very poor performance from a simple Symfony2 application. It's a simple app: one query and some math. Test results for the command ab -c10 -t60 http://sf2.cities.localhost/app.php:

    Server Software:        Apache/2.2.20
    Server Hostname:        sf2.cities.localhost
    Server Port:            80
    Document Path:          /app.php
    Document Length:        2035 bytes
    Concurrency Level:      10
    Time taken for tests:   60.162 seconds
    Complete requests:      217
    Failed requests:        68 (Connect: 0, Receive: 0, Length: 68, Exceptions: 0)
    Write errors:           0
    Non-2xx responses:      68
    Total transferred:      393876 bytes

How to do good benchmarking of complex functions?

Submitted by 懵懂的女人 on 2019-12-04 12:12:23
I am about to embark on very detailed benchmarking of a set of complex functions in C. This is "science level" detail. I'm wondering, what would be the best way to do serious benchmarking? I was thinking about running them, say, 10 times each, averaging the timing results and giving the standard deviation, for instance just using <time.h>. What would you do to obtain good benchmarks? Reporting an average and standard deviation gives a good description of a distribution when the distribution in question is approximately normal. However, this is rarely true of computational performance
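A minimal harness in that spirit (a sketch, assuming a POSIX system with clock_gettime; function_under_test is a placeholder, and it is written as C++ only for the sorting convenience, the same structure works in plain C with qsort) that reports order statistics, which describe a skewed timing distribution better than mean and standard deviation:

    // Repeat the function many times and report min, median, and p90.
    #include <algorithm>
    #include <cstdio>
    #include <ctime>
    #include <vector>

    static void function_under_test() { /* placeholder */ }

    static double now_sec() {
        timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    int main() {
        const int runs = 101;                 // odd count -> well-defined median
        std::vector<double> samples(runs);
        for (int i = 0; i < runs; ++i) {
            double t0 = now_sec();
            function_under_test();
            samples[i] = now_sec() - t0;
        }
        std::sort(samples.begin(), samples.end());
        std::printf("min    %.9f s\n", samples.front());
        std::printf("median %.9f s\n", samples[runs / 2]);
        std::printf("p90    %.9f s\n", samples[(runs * 9) / 10]);
    }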

Writing a time function in Haskell

Submitted by 寵の児 on 2019-12-04 09:08:29
Question: I'm new to Haskell and I'd like to be able to time the runtime of a given function call or snippet of code. In Clojure I can use time:

    user=> (time (apply * (range 2 10000)))
    "Elapsed time: 289.795 msecs"
    2846259680917054518906413212119868890148051...

In Scala, I can define the function myself:

    scala> def time[T](code : => T) = {
         |   val t0 = System.nanoTime : Double
         |   val res = code
         |   val t1 = System.nanoTime : Double
         |   println("Elapsed time " + (t1 - t0) / 1000000.0 + " msecs")
         |   res
         | }

POSTing multipart/form-data with Apache Bench (ab)

Submitted by 泄露秘密 on 2019-12-04 08:59:17
Question: I'm trying to benchmark our upload server by simulating several concurrent requests using Apache Bench (ab). I've read this post that details the necessary steps and also this Stack Overflow question, but I'm still unable to create a valid benchmark. This is the command I'm using with Apache Bench:

    ab -n 10 -c 6 -p post_data.txt -T "multipart/form-data; boundary=1234567890" http://myuploadserver.com/upload

These are the contents of my post_data.txt file. I apologize for the length. -
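For reference, a minimal multipart/form-data body matching the boundary in that command would be shaped roughly like this (the field name and filename are illustrative; every line must end with CRLF, boundary lines start with two extra dashes, and the closing boundary carries two more at the end):

    --1234567890
    Content-Disposition: form-data; name="file"; filename="test.jpg"
    Content-Type: image/jpeg

    <raw bytes of test.jpg>
    --1234567890--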

What harm can a C/asm program do to Linux when run by an unprivileged user?

Submitted by 被刻印的时光 ゝ on 2019-12-04 08:55:59
I have been thinking about a scenario where one lets users (who can be anyone, possibly with bad intentions) submit code which is run on a Linux PC (let's call it the benchmark node). The goal is to make a kind of automated benchmarking environment for single-threaded routines. Let's say that a website posts some code to a proxy. This proxy hands the code to the benchmark node, and the benchmark node only has an Ethernet connection to the proxy, not to the internet itself. If one lets arbitrary users post C/asm code to be run on the benchmark node, what security challenges will one face? The following
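As a sketch of only the most basic containment layer for such a setup (a dedicated unprivileged account plus shell resource limits; the user name, limit values, and binary name are placeholders, and this does nothing against kernel-level exploits or hardware side channels):

    # Create a throwaway account with no login shell, then run the submission
    # under it with CPU-time, memory, process-count, and file-size limits.
    sudo useradd --no-create-home --shell /usr/sbin/nologin sandbox
    sudo -u sandbox bash -c 'ulimit -t 10; ulimit -v 524288; ulimit -u 32; ulimit -f 1024; ./submitted_binary'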

Why is reading one byte 20x slower than reading 2, 3, 4, … bytes from a file?

Submitted by 99封情书 on 2019-12-04 08:49:40
Question: I have been trying to understand the tradeoff between read and seek. For small "jumps", reading unneeded data is faster than skipping it with seek. While timing different read/seek chunk sizes to find the tipping point, I came across an odd phenomenon: read(1) is about 20 times slower than read(2), read(3), etc. This effect is the same for different read methods, e.g. read() and readinto(). Why is this the case? Search in the timing results for the following line 2/3 of the way through: 2