benchmarking

Measuring and benchmarking the processing power of a JavaScript engine in a browser

随声附和 submitted on 2019-12-19 09:44:21
Question: What is an accurate way to measure the performance of a JavaScript engine such as V8 or SpiderMonkey? It should at least show low variance from one run to the next, and ideally allow ranking different engines across operating systems and hardware configurations. My first attempt was to run the code in an otherwise empty web page, which I loaded in several browsers. Then I tried executing the code in Google Chrome's JavaScript console, and it came out
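The question targets JavaScript engines, but the methodology it is really asking about is language-agnostic: use a monotonic high-resolution clock, repeat the workload many times, and report the median and spread rather than a single run. Below is a minimal sketch of such a harness, written in Python purely for illustration; the workload and repeat count are arbitrary placeholders, not part of the original question.

import statistics
import time

def bench(workload, repeats=30):
    # Time `workload` repeatedly and summarize the samples.
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()  # monotonic, high-resolution clock
        workload()
        samples.append(time.perf_counter() - start)
    # The median is robust against outliers such as GC pauses or JIT warmup.
    return statistics.median(samples), statistics.stdev(samples)

median, spread = bench(lambda: sum(i * i for i in range(100_000)))
print(f"median {median * 1e3:.2f} ms, stdev {spread * 1e3:.2f} ms")

In a browser the equivalent clock is performance.now(); the repeat-and-summarize structure stays the same.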

How can I measure the performance and TCP RTT of my server code?

跟風遠走 submitted on 2019-12-19 07:34:55
Question: I created a basic TCP server that reads incoming binary data in protocol buffer format and writes a binary message as a response. I would like to benchmark the round-trip time. I tried iperf, but could not make it send the same input file multiple times. Is there another benchmark tool that can send a binary input file repeatedly? Answer 1: If you have access to a Linux or Unix machine, you should use tcptrace. All you need to do is loop through your binary traffic test while capturing with
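If no ready-made tool fits, a small script can replay the same binary payload and time each round trip directly. The sketch below is in Python; the host, port, file name, repeat count, and the assumption that the server answers each request with a single response are all placeholders for the setup described in the question.

import socket
import time

HOST, PORT = "127.0.0.1", 9000               # hypothetical server address
with open("request.bin", "rb") as f:         # the binary protobuf message
    payload = f.read()

rtts = []
with socket.create_connection((HOST, PORT)) as sock:
    for _ in range(100):                     # send the same payload repeatedly
        start = time.perf_counter()
        sock.sendall(payload)
        sock.recv(65536)                     # assumes the reply fits one recv
        rtts.append(time.perf_counter() - start)

print(f"min {min(rtts) * 1e3:.3f} ms, mean {sum(rtts) / len(rtts) * 1e3:.3f} ms")

Note that protobuf messages over TCP usually need explicit framing (for example a length prefix), so a real client would loop on recv until a whole message has arrived.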

Graph representation benchmarking

馋奶兔 submitted on 2019-12-19 06:45:07
Question: I am currently developing a program that solves (if possible) any given labyrinth with dimensions from 3x4 to 26x30. I represent the graph using both an adjacency matrix (sparse) and an adjacency list. I would like to know how to output the total time taken by the DFS to find the solution using one representation and then the other. Programmatically, how could I produce such a benchmark? Answer 1: A useful table for comparing the costs of various graph implementations:

OPERATION    EDGE LIST    ADJ LIST    ADJ MATRIX
degree(v)    O(m)         O(d(v))     O(n)
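To produce the benchmark itself, run the same DFS against both representations and time each run with a monotonic clock. Here is a minimal sketch in Python; the four-node toy graph stands in for the labyrinth, and the repeat loop exists only to make the timing measurable.

import time

adj_list = {0: [1, 2], 1: [3], 2: [3], 3: []}    # toy placeholder graph
n = 4
adj_matrix = [[0] * n for _ in range(n)]
for u, nbrs in adj_list.items():
    for v in nbrs:
        adj_matrix[u][v] = 1

def dfs_list(start):
    seen, stack = set(), [start]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj_list[u])            # O(d(v)) per vertex
    return seen

def dfs_matrix(start):
    seen, stack = set(), [start]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(v for v in range(n) if adj_matrix[u][v])  # O(n) per vertex
    return seen

for name, dfs in (("adj list", dfs_list), ("adj matrix", dfs_matrix)):
    start = time.perf_counter()
    for _ in range(10_000):                      # repeat so the time is measurable
        dfs(0)
    print(f"{name}: {time.perf_counter() - start:.4f} s")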

gettimeofday() C++ Inconsistency

独自空忆成欢 submitted on 2019-12-19 04:09:11
Question: I'm doing a project that involves comparing programming languages. I'm computing the Ackermann function. I tested Java, Python, and Ruby, and got times between 10 and 30 milliseconds. But C++ seems to take 125 milliseconds. Is this normal, or is it a problem with gettimeofday()? gettimeofday() is declared in sys/time.h. I'm testing on a (virtual) Ubuntu Natty Narwhal 32-bit machine. I'm not short of processing power (quad-core 2.13 GHz Intel Xeon). My code is here:

#include <iostream>
#include <sys/time.h>
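Whatever the cause in this particular program (the listing above is truncated), one generic pitfall is worth ruling out: gettimeofday() reads the wall clock, which can be stepped by NTP, whereas interval measurements should use a monotonic source such as clock_gettime(CLOCK_MONOTONIC). A small Python sketch of the same distinction, offered only as an illustration of the principle:

import time

start_wall = time.time()          # wall clock: can jump if system time is adjusted
start_mono = time.perf_counter()  # monotonic clock: safe for measuring intervals

total = 0
for i in range(1_000_000):        # arbitrary stand-in workload
    total += i

print(f"wall clock: {time.time() - start_wall:.6f} s")
print(f"monotonic:  {time.perf_counter() - start_mono:.6f} s")

For cross-language comparisons it is also worth confirming that the C++ binary was built with optimizations enabled (for example -O2), since an unoptimized build can be much slower and skew the comparison.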

How to Disable Dynamic Frequency Scaling?

萝らか妹 submitted on 2019-12-19 04:05:12
Question: I would like to do some microbenchmarks, and try to do them right. Unfortunately, dynamic frequency scaling makes benchmarking highly unreliable. Is there a way to programmatically (C++, Windows) find out whether dynamic frequency scaling is enabled? If so, can it be disabled from within a program? I've tried just using a warm-up phase that loads the CPU at 100% for a second before the actual benchmark takes place, but this turned out not to be reliable either. UPDATE: Even when I disable SpeedStep in the BIOS, cpu
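As a quick way to observe scaling in action (a different technique from the Win32 power APIs the question implies, and not a C++ answer), the cross-platform third-party psutil package can report current versus maximum CPU frequency under idle and load. A minimal sketch, assuming psutil is installed:

import time
import psutil  # third-party: pip install psutil

idle = psutil.cpu_freq()                 # may be None on some platforms
end = time.perf_counter() + 1.0
while time.perf_counter() < end:         # crude one-second busy loop
    pass
busy = psutil.cpu_freq()

print(f"idle: {idle.current:.0f} MHz, busy: {busy.current:.0f} MHz, max: {busy.max:.0f} MHz")
# If the reported frequency moves with load, dynamic scaling is active.

On Windows, pinning the frequency is normally done through the power plan (for example setting both minimum and maximum processor state to 100%) rather than from inside the benchmarked process.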

Is there a way to count the number of IL instructions executed?

梦想的初衷 submitted on 2019-12-19 03:37:07
Question: I want to do some benchmarking of a C# process, but I don't want to use time as my metric - I want to count the number of IL instructions that get executed in a particular method call. Is this possible? Edit: I don't mean static analysis of a method body - I'm referring to the actual number of instructions that are executed. So if, for example, the method body includes a loop, the count would be increased by however many instructions make up the loop, multiplied by the number of times the loop is iterated
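As far as the CLR itself goes, there is no ready-made executed-instruction counter; instrumentation through the profiling API is the usual route. Purely to illustrate the idea in the language used for sketches here, CPython (3.7+) can count actually-executed bytecode instructions through its tracing hooks. This is the Python analogue of the requested IL count, not a C# solution:

import sys

def count_ops(func, *args):
    # Count bytecode instructions executed while running func(*args).
    count = 0
    def tracer(frame, event, arg):
        nonlocal count
        frame.f_trace_opcodes = True     # request per-opcode trace events
        if event == "opcode":
            count += 1
        return tracer
    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, count

def loop(n):
    total = 0
    for i in range(n):
        total += i
    return total

for n in (10, 100):
    result, ops = count_ops(loop, n)
    print(f"loop({n}) = {result}: {ops} bytecode instructions executed")

As the question anticipates, the count grows with the number of loop iterations, because it reflects dynamic execution rather than the static method body.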

Apache benchmark multipart/form-data

落爺英雄遲暮 submitted on 2019-12-18 18:24:10
Question: I'm facing a strange problem with an Apache Bench (ab) post file. I need to stress-test a feature that handles file uploads, so I googled around and found a post describing how to build a post file properly. Its contents look like:

--1234567
Content-Disposition: form-data; name="user_id"

3
--1234567
Content-Disposition: form-data; name="file"; filename="cf_login.png"
Content-Type: image/png

[base64 encoded file content]
--1234567--

The ab command line is this:

$ ab -c 1 -n 5 -v 4 -T 'multipart/form-data;
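A frequent cause of trouble with hand-built post files is the multipart framing itself: every line must end in CRLF, each part needs a blank line between its headers and its body, and the boundary passed via -T must match the one in the file. Rather than assembling the file by hand, a short script can generate it; the sketch below is in Python, with the boundary, field names, and file name mirroring the question's example (raw bytes are written instead of base64, since multipart bodies carry binary content directly).

CRLF = "\r\n"
BOUNDARY = "1234567"

with open("cf_login.png", "rb") as f:
    png = f.read()

head = (
    f"--{BOUNDARY}{CRLF}"
    f'Content-Disposition: form-data; name="user_id"{CRLF}{CRLF}'
    f"3{CRLF}"
    f"--{BOUNDARY}{CRLF}"
    f'Content-Disposition: form-data; name="file"; filename="cf_login.png"{CRLF}'
    f"Content-Type: image/png{CRLF}{CRLF}"
).encode()
tail = f"{CRLF}--{BOUNDARY}--{CRLF}".encode()

with open("postfile.bin", "wb") as out:
    out.write(head + png + tail)

# Then, for example:
#   ab -c 1 -n 5 -p postfile.bin -T 'multipart/form-data; boundary=1234567' http://host/upload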

Vectorized string operations in Numpy: why are they rather slow?

自古美人都是妖i submitted on 2019-12-18 17:25:47
Question: This is one of those "mostly asked out of pure curiosity (in the possibly futile hope I will learn something)" questions. I was investigating ways of saving memory in operations on massive numbers of strings, and for some scenarios it seems like the string operations in numpy could be useful. However, I got somewhat surprising results:

import random
import string

import numpy as np  # used below; missing from the original excerpt

milstr = [''.join(random.choices(string.ascii_letters, k=10)) for _ in range(1000000)]
npmstr = np.array(milstr, dtype=np.dtype(np.unicode_,
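The excerpt cuts off before the measurements, but the usual finding behind this question is that numpy's np.char functions, despite operating on whole arrays, still execute a string operation per element and so are often no faster than a plain list comprehension. A small self-contained comparison (array shrunk so it runs quickly; the timing pattern, not the exact numbers, is the point):

import random
import string
import time

import numpy as np

words = [''.join(random.choices(string.ascii_letters, k=10)) for _ in range(100_000)]
arr = np.array(words)                  # fixed-width unicode dtype such as '<U10'

start = time.perf_counter()
upper_np = np.char.upper(arr)          # "vectorized" numpy string operation
t_np = time.perf_counter() - start

start = time.perf_counter()
upper_py = [w.upper() for w in words]  # plain Python list comprehension
t_py = time.perf_counter() - start

print(f"np.char.upper: {t_np:.3f} s, list comprehension: {t_py:.3f} s")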

Tools to benchmark web-services

≡放荡痞女 submitted on 2019-12-18 16:59:06
Question: What tools are best for measuring web-service performance? It would be nice to get a report covering total data transferred, total POSTs, requests per second, time per request, transfer rate, and response time per request. Answer 1: Not quite specific to web services, but a very simple command-line tool for benchmarking HTTP performance is distributed with Apache: it is called ApacheBench and can be found in the bin directory as ab.exe (see ApacheBench's documentation). Answer 2: I have used JMeter in the past. Check it out.

Slow performance for deeply nested subquery factoring (CTE)

大兔子大兔子 submitted on 2019-12-18 15:54:11
Question: This query consists of 16 identical steps. Every step performs the same calculation on the same dataset (a single row), yet the last steps take far longer than the first ones.

with t0 as (select 0 as k from dual)
,t1 as (select k from t0 where k >= (select avg(k) from t0))
,t2 as (select k from t1 where k >= (select avg(k) from t1))
,t3 as (select k from t2 where k >= (select avg(k) from t2))
,t4 as (select k from t3 where k >= (select avg(k) from t3))
,t5 as (select k from t4 where k >= (select avg(k) from t4
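A plausible mechanism for the slowdown, worth checking against the execution plan: each level references the previous CTE twice (once in the FROM clause and once in the scalar subquery), so if the optimizer inlines every CTE instead of materializing it, the amount of work doubles at each of the 16 levels. The same blow-up in miniature, sketched in Python:

import time

calls = 0

def level(n):
    # Each level "references" the previous one twice, mimicking an inlined CTE.
    global calls
    calls += 1
    if n == 0:
        return 0
    return max(level(n - 1), level(n - 1))

for depth in (8, 16):
    calls = 0
    start = time.perf_counter()
    level(depth)
    print(f"depth {depth}: {calls} evaluations in {time.perf_counter() - start:.4f} s")

If this is the cause, forcing materialization (in Oracle, the /*+ materialize */ hint is commonly used) makes each step compute only once.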