benchmarking

CodeIgniter benchmarking: where are these ms coming from?

Submitted by 两盒软妹~` on 2019-12-11 01:34:30

Question: I'm in the process of benchmarking my website.

    class Home extends Controller {
        function Home() {
            parent::Controller();
            $this->benchmark->mark('Constructor_start');
            $this->output->enable_profiler(TRUE);
            $this->load->library('MasterPage');
            $this->benchmark->mark('Constructor_end');
        }
        function index() {
            $this->benchmark->mark('Index_start');
            $this->masterpage->setMasterPage('master/home');
            $this->masterpage->addContent('home/index', 'page');
            $this->masterpage->show();
            $this->benchmark->mark( …
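CodeIgniter's Benchmark class records named points in time and reports the elapsed milliseconds between `*_start`/`*_end` mark pairs; those differences are what the profiler prints. A minimal Python sketch of the same mark-based pattern (the class and names here are an illustrative analogy, not CodeIgniter's actual implementation):

```python
import time

class Benchmark:
    """Minimal mark-based timer, loosely modeled on CodeIgniter's Benchmark class."""
    def __init__(self):
        self.marks = {}

    def mark(self, name):
        # Record a named point in time (CodeIgniter uses *_start / *_end pairs).
        self.marks[name] = time.perf_counter()

    def elapsed_time(self, start, end):
        # Milliseconds between two previously recorded marks.
        return (self.marks[end] - self.marks[start]) * 1000.0

bench = Benchmark()
bench.mark('Constructor_start')
sum(range(100_000))          # stand-in for the real constructor work
bench.mark('Constructor_end')
ms = bench.elapsed_time('Constructor_start', 'Constructor_end')
print(f"{ms:.3f} ms")
```

The milliseconds in the profiler output are simply these mark-to-mark differences, so they include everything executed between the two marks, library loading included.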

How to load test Server Sent Events?

Submitted by 拈花ヽ惹草 on 2019-12-11 01:09:38

Question: I have a small app that sends Server-Sent Events. I would like to load test my app so I can benchmark the latency from the time a message is pushed to the time the message is received, so I know when/where the performance breaks down. What tools are available to do this?

Answer 1: Since Server-Sent Events are just HTTP, you can use the siege utility. Here is an example:

    siege -b -t 1m -c45 http://127.0.0.1:9292/streaming

Where:

    -b   benchmark mode, i.e. don't wait between connections
    -t   …

How to get consistent results when comparing the speed of numpy.save and h5py?

Submitted by 让人想犯罪 __ on 2019-12-11 01:03:58

Question: I'm trying to compare the speed efficiency of two tools that can save a 2 GB numpy array to disk as a file: numpy.save and h5py.create_dataset. (Note: this is just a first test; the real case I have to deal with is several thousand numpy arrays between 1 and 2 MB each, i.e. several GB in total.) Here is the code I use for the benchmark. The problem is that the results are really inconsistent:

    import numpy as np
    import h5py
    import time

    def writemem():
        myarray = …

How to Configure and Sample Intel Performance Counters In-Process

Submitted by 蓝咒 on 2019-12-11 00:53:16

Question: In a nutshell, I'm trying to achieve the following inside a userland benchmark process (pseudo-code, assuming x86_64 and a UNIX system):

    results[] = ...
    for (iteration = 0; iteration < num_iterations; iteration++) {
        pctr_start = sample_pctr();
        the_benchmark();
        pctr_stop = sample_pctr();
        results[iteration] = pctr_stop - pctr_start;
    }

FWIW, the performance counter I am thinking of using is CPU_CLK_UNHALTED.THREAD_ALL, to read the number of core cycles independent of clock frequency changes (In…

Can 32-bit SPARC V8 application run on 64-bit SPARC V9?

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-10 23:39:28

Question: I have a few benchmark applications compiled for the 32-bit SPARC V8 architecture. I used them for performance evaluation of a 32-bit SPARC processor; however, a few applications fall short in performance. I want to test the performance on a 64-bit SPARC V9 architecture (like OpenSPARC T1/T2). My question is: will binaries compiled for the 32-bit SPARC V8 architecture run on a SPARC V9 architecture without any modifications? Are the binaries of the two architectures compatible?

Answer 1: Presuming that…

When benchmarking, what causes a lag between CPU time and “elapsed real time”?

Submitted by 半腔热情 on 2019-12-10 21:58:21

Question: I'm using a built-in benchmarking module for some quick and dirty tests. It gives me:

- CPU time
- system CPU time (actually I never get any result for this with the code I'm running)
- the sum of the user and system CPU times (always the same as the CPU time in my case)
- the elapsed real time

I didn't even know I needed all that information. I just want to compare two pieces of code and see which one takes longer. I know that one piece of code probably does more garbage collection than the other…

Python Versions Performance

Submitted by a 夏天 on 2019-12-10 20:47:52

Question: Where can I find a comparative speed benchmark between Python versions? For example, the performance of versions 2.6, 2.7, 3.0, 3.1 and 3.2.

Answer 1: Pystone benchmark on 2.6, 2.7 and 3.2: http://www.levigross.com/post/2340736877/pystone-benchmark-on-2-6-2-7-3-2 (3.0 and 3.1 are probably slower than 3.2)

Answer 2: There is a Python module with various real-world performance tasks for measuring different builds/versions of Python: performance. You can install it with the following command:

    pip install…

Benchmark code - dividing by the number of iterations or not?

Submitted by ∥☆過路亽.° on 2019-12-10 18:36:16

Question: I had an interesting discussion with my friend about benchmarking C/C++ code (or code in general). We wrote a simple function which uses getrusage to measure the CPU time for a given piece of code (i.e. how much CPU time it took to run a specific function). Let me give you an example:

    const int iterations = 409600;
    double s = measureCPU();
    for( j = 0; j < iterations; j++ )
        function(args);
    double e = measureCPU();
    std::cout << (e-s)/iterations << " s \n";

We argued: should we divide…

Fastest/Proper way of ordering if/else if statements

Submitted by 点点圈 on 2019-12-10 17:05:07

Question: In PHP, is there a fastest/proper way of ordering if/else if statements? For some reason I like to think that the first if statement should test the anticipated "most popular" condition, followed by the 2nd, etc. But does it really matter? Is there a speed or processing-time cost if the 2nd condition is the most popular one (meaning the system must always evaluate the first condition)? For example:

    if ("This is the most chosen condition" == $conditions) {
    } else if ("This is the…

Reducing memory footprint with multiprocessing?

Submitted by 邮差的信 on 2019-12-10 16:53:15

Question: One of my applications runs about 100 workers. It started out as a threading application, but performance (latency) issues were hit, so I converted those workers to multiprocessing.Process instances. The benchmark below shows that the reduction in load was achieved at the cost of more memory usage (a factor of 6). So where precisely does the memory usage come from, if Linux uses copy-on-write (COW) and the workers do not share any data? How can I reduce the memory footprint? (Alternative question: How can I reduce the…