benchmarking

How could this Java code be sped up?

半城伤御伤魂 · Posted on 2020-01-02 07:58:32
Question: I am trying to benchmark how fast Java can do a simple task: read a huge file into memory and then perform some meaningless calculations on the data. All types of optimization count, whether it's rewriting the code differently, using a different JVM, or tricking the JIT. The input file is a list of 500 million 32-bit integer pairs separated by commas, like this: 44439,5023 33140,22257 ... This file takes 5.5 GB on my machine. The program can't use more than 8 GB of RAM and can use only a single…
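A common approach for this kind of task is to skip String.split/Integer.parseInt entirely and parse digits straight from a buffered byte stream, avoiding per-line String allocations. A minimal sketch of that idea under the stated file format (ASCII digits, comma- and newline-separated); the class and method names are illustrative, not the asker's final solution:

```java
import java.io.*;
import java.nio.file.*;

public class FastPairParser {
    // Parse "a,b\n" pairs by scanning bytes directly, accumulating each
    // number digit by digit instead of splitting strings.
    static long[] parsePairs(InputStream raw) throws IOException {
        BufferedInputStream in = new BufferedInputStream(raw, 1 << 16);
        long count = 0, sum = 0;
        int value = 0, b;
        while ((b = in.read()) != -1) {
            if (b >= '0' && b <= '9') {
                value = value * 10 + (b - '0');
            } else {               // ',' or '\n' terminates a number
                sum += value;
                value = 0;
                if (b == '\n') count++;   // one pair per line
            }
        }
        return new long[] { count, sum };
    }

    public static void main(String[] args) throws IOException {
        // Tiny stand-in for the 5.5 GB input file.
        Path tmp = Files.createTempFile("pairs", ".txt");
        Files.write(tmp, "44439,5023\n33140,22257\n".getBytes());
        long[] r = parsePairs(Files.newInputStream(tmp));
        System.out.println(r[0] + " pairs, value sum " + r[1]);
        Files.delete(tmp);
    }
}
```

For the real file, wrapping a FileChannel or memory-mapped buffer instead of an InputStream is the usual next step; the digit-accumulation loop stays the same.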

PHP: in_array() vs array_intersect() performance

痴心易碎 · Posted on 2020-01-02 03:50:30
Question: Which is faster, and by how much — manually iterating over an array with foreach and checking for the needle with in_array(), or using array_intersect()? Answer 1: Benchmark test script: <?php $numbers = range(32, 127); $numbersLetters = array_map('chr', $numbers); for (;;) { $numbersLetters = array_merge($numbersLetters, $numbersLetters); if (count($numbersLetters) > 10000) { break; } } $numbers = range(1, count($numbersLetters)); printf("Sample size: %d elements in 2 arrays (%d total) \n",…
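The heart of the difference is algorithmic: a linear scan like in_array() is O(n) per lookup, while hashing one side first makes each lookup expected O(1). A minimal sketch of that tradeoff, written in Java since the effect is language-independent; the data sizes and names are illustrative assumptions, not the answer's PHP script:

```java
import java.util.*;

public class MembershipBench {
    // Linear scan: O(n) per lookup, analogous to in_array() on an array.
    static boolean linearContains(List<Integer> haystack, int needle) {
        for (int x : haystack) if (x == needle) return true;
        return false;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) data.add(i);
        Set<Integer> set = new HashSet<>(data);   // expected O(1) lookups

        long t0 = System.nanoTime();
        boolean a = linearContains(data, 99_999); // worst case: needle is last
        long t1 = System.nanoTime();
        boolean b = set.contains(99_999);
        long t2 = System.nanoTime();

        System.out.printf("linear: %d ns, hash: %d ns, both found: %b%n",
                t1 - t0, t2 - t1, a && b);
    }
}
```

The same reasoning applies to PHP: array_flip() plus isset() turns repeated in_array() scans into hash lookups.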

PHP vs MySQL performance (if, functions) in a query

旧街凉风 · Posted on 2020-01-02 02:37:07
Question: I just saw this article and need to know which performs best in these cases. An if statement in the query: SELECT *,if( status = 1 , "active" ,"unactive") as status_val FROM comments vs. <?php $x = mysql_query("SELECT * FROM comments"); while( $res = mysql_fetch_assoc( $x ) ){ if( $x['status'] == 1 ){ $status_val = 'active'; }else{ $status_val = 'unactive'; } } ?> Cutting 10 characters from a string: SELECT * , SUBSTR(comment, 0, 10) as min_comment FROM comments vs. <?php $x = mysql_query("SELECT * FROM comments");…

Performance of row vs column operations in NumPy

假如想象 · Posted on 2020-01-02 01:07:51
Question: A few articles show that MATLAB prefers column operations over row operations, and that performance can vary significantly depending on how you lay out your data. This is apparently because MATLAB uses column-major order for representing arrays. I remember reading that Python (NumPy) uses row-major order. With this, my questions are: Can one expect a similar difference in performance when working with NumPy? If the answer to the above is yes, what would be some examples…
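NumPy's default row-major (C) order means traversal order matters for the same cache-locality reason as in MATLAB, just with rows and columns swapped. The effect can be demonstrated in any language with a fixed layout; a hedged sketch in Java, whose arrays-of-arrays are also stored row by row (the matrix size is an arbitrary assumption, chosen to exceed typical cache sizes):

```java
public class TraversalOrder {
    // Inner loop over the column index walks memory sequentially
    // (cache friendly), because each row is a contiguous int[].
    static long sumRowMajor(int[][] m) {
        long s = 0;
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[i].length; j++)
                s += m[i][j];
        return s;
    }

    // Swapped loop order strides across rows: same result, worse locality.
    static long sumColMajor(int[][] m) {
        long s = 0;
        for (int j = 0; j < m[0].length; j++)
            for (int i = 0; i < m.length; i++)
                s += m[i][j];
        return s;
    }

    public static void main(String[] args) {
        int n = 2000;                      // illustrative size
        int[][] m = new int[n][n];
        for (int[] row : m) java.util.Arrays.fill(row, 1);

        long t0 = System.nanoTime();
        long r = sumRowMajor(m);
        long t1 = System.nanoTime();
        long c = sumColMajor(m);
        long t2 = System.nanoTime();
        System.out.printf("row-major %d ns, col-major %d ns, sums equal: %b%n",
                t1 - t0, t2 - t1, r == c);
    }
}
```

In NumPy terms, iterating over the last axis of a default (C-ordered) array is the fast direction, and `order='F'` flips it.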

Running Nokogiri in JRuby vs. just Ruby

只谈情不闲聊 · Posted on 2020-01-01 22:29:54
Question: I found a startling difference in CPU and memory consumption. It seems garbage collection is not happening when I run the following Nokogiri script: require 'rubygems' require 'nokogiri' require 'open-uri' def getHeader() doz = Nokogiri::HTML(open('http://losangeles.craigslist.org/wst/reb/1484772751.html')) puts doz.xpath("html[1]\/body[1]\/h2[1]") end (1..10000).each do |a| getHeader() end When run in JRuby, CPU consumption is over 10, and memory consumption rises over time (starts from…

What harm can a C/asm program do to Linux when run by an unprivileged user?

拟墨画扇 · Posted on 2020-01-01 10:57:12
Question: I have been thinking about a scenario where users (anyone, possibly with bad intentions) submit code which is run on a Linux PC (call it the benchmark node). The goal is to make a kind of automated benchmarking environment for single-threaded routines. Say a website posts some code to a proxy. The proxy hands this code to the benchmark node, and the benchmark node has only an ethernet connection to the proxy, not to the internet itself. If one lets whatever user…

Haskell calculate time of function performing

霸气de小男生 · Posted on 2020-01-01 10:19:11
Question: I tried to write code that measures how long a function takes: list <- buildlist 10000 10000 starttime <- getClockTime let sortedlist = quicksort list endtime <- getClockTime let difftime = diffClockTimes endtime starttime The buildlist function: buildlist :: Int -> Int -> IO [Int] buildlist n m = do seed <- getStdGen let l = randomRs (0, m) seed let list = take n l return list The quicksort function: quicksort [] = [] quicksort (x:xs) = let head = [a|a<-xs,a<=x] tail = [a|a<-xs,a>x] in quicksort head ++ [x…

Looking for an accurate way to micro benchmark small code paths written in C++ and running on Linux/OSX

戏子无情 · Posted on 2020-01-01 10:16:48
Question: I'm looking to do some very basic micro-benchmarking of small code paths, such as tight loops, that I've written in C++. I'm running on Linux and OS X, using GCC. What facilities are there for sub-millisecond accuracy? I'm thinking a simple test of running the code path many times (several tens of millions?) will give me enough consistency to get a good reading. If anyone knows of preferable methods, please feel free to suggest them. Answer 1: You can use the "rdtsc" processor instruction on x86…
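A portable alternative to rdtsc is a monotonic high-resolution clock plus the repeat-many-times approach the asker describes, keeping the minimum of several runs. A sketch of that idea, written in Java rather than C++ (System.nanoTime standing in for clock_gettime); the workload and repetition counts are placeholder assumptions:

```java
public class MicroBench {
    // Time one run of a task in nanoseconds.
    static long timeOnce(Runnable task) {
        long t0 = System.nanoTime();
        task.run();
        return System.nanoTime() - t0;
    }

    // Repeat and keep the minimum: the fastest run has the least
    // interference from the scheduler, interrupts, and other noise.
    static long bestOf(int reps, Runnable task) {
        long best = Long.MAX_VALUE;
        for (int i = 0; i < reps; i++)
            best = Math.min(best, timeOnce(task));
        return best;
    }

    static volatile long sink;   // consuming the result prevents dead-code elimination

    public static void main(String[] args) {
        long best = bestOf(1000, () -> {
            long s = 0;
            for (int i = 0; i < 10_000; i++) s += i;   // the tight loop under test
            sink = s;
        });
        System.out.println("best run: " + best + " ns");
    }
}
```

In C++ the same structure works with std::chrono::steady_clock and a volatile sink, without any of rdtsc's frequency-scaling and core-migration caveats.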

C++ and C# speed compared

妖精的绣舞 · Posted on 2020-01-01 09:47:13
Question: I was worried about C#'s speed when it deals with heavy calculations, when you need raw CPU power. I always thought that C++ is much faster than C# for calculations, so I did some quick tests. The first test computes the prime numbers below an integer n; the second computes some pandigital numbers. The idea for the second test comes from here: Pandigital Numbers. C# prime computation: using System; using System.Diagnostics; class Program { static int primes(int n) { uint i, j; int…
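One common pitfall in this kind of C++ vs. C# comparison is timing the very first run on a JIT-compiled runtime, which charges compilation cost to the benchmark. A hedged sketch of warming up before measuring, written in Java as a stand-in for the C# harness; countPrimes is an illustrative trial-division routine, not the asker's code:

```java
public class PrimesBench {
    // Simple trial-division prime count below n: not the fastest method,
    // just a stable, CPU-bound workload for timing.
    static int countPrimes(int n) {
        int count = 0;
        for (int i = 2; i < n; i++) {
            boolean prime = true;
            for (int j = 2; (long) j * j <= i; j++)
                if (i % j == 0) { prime = false; break; }
            if (prime) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Warm up so the JIT compiles countPrimes before we measure.
        for (int i = 0; i < 5; i++) countPrimes(100_000);

        long t0 = System.nanoTime();
        int primes = countPrimes(100_000);
        long t1 = System.nanoTime();
        System.out.println(primes + " primes below 100000 in "
                + (t1 - t0) / 1_000_000 + " ms");
    }
}
```

The C# equivalent is to run the method a few times before starting the Stopwatch (or use NGen/ReadyToRun); without a warm-up, the comparison measures the JIT as much as the arithmetic.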

Techniques for causing consistent GC Churn

三世轮回 · Posted on 2020-01-01 08:59:10
Question: I'm looking to benchmark how something performs while contending with a high amount of ongoing garbage collection. I've previously benchmarked how it behaves in a stable, single-threaded run, and I'd now like to do the same tests in a more stressed JVM; essentially I'd like background threads creating and destroying objects at a reasonably consistent pace. I'm looking for suggestions on how to implement a stable yet GC-intensive operation. It needs to accomplish several goals: spend a…
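One way to get reasonably consistent background churn is a daemon thread that allocates short-lived objects in a loop, keeping only a rolling reference so nearly every allocation dies young and feeds the collector. A minimal sketch under that assumption; the allocation size and run duration are placeholders to tune against the target GC pressure:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

public class GcChurn {
    static final AtomicBoolean running = new AtomicBoolean(true);
    static final AtomicLong allocations = new AtomicLong();

    // Allocate arrays in a loop; holding only the latest reference
    // means almost everything becomes garbage immediately.
    static Runnable churner(int allocBytes) {
        return () -> {
            byte[] keep = null;
            while (running.get()) {
                keep = new byte[allocBytes];
                keep[0] = 1;                  // touch it so the allocation isn't elided
                allocations.incrementAndGet();
            }
        };
    }

    public static void main(String[] args) throws InterruptedException {
        Thread churn = new Thread(churner(64 * 1024));
        churn.setDaemon(true);
        churn.start();

        Thread.sleep(200);                    // stand-in for the benchmark under test

        running.set(false);
        churn.join();
        System.out.println("allocated " + allocations.get()
                + " arrays while benchmarking");
    }
}
```

Varying allocBytes mixes young-generation and (above the region/TLAB thresholds) old-generation pressure; adding a small array that is retained for a while per N allocations also promotes some objects, which stresses the old generation as well.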