benchmarking

How to benchmark unit tests in Python without adding any code

十年热恋 submitted on 2019-12-05 12:40:55
I have a Python project with a bunch of tests that have already been implemented, and I'd like to begin benchmarking them so I can compare the performance of the code, servers, etc. over time. Locating the files in a manner similar to Nose was no problem, because I have "test" in the names of all my test files anyway. However, I'm running into some trouble in attempting to dynamically execute these tests. As of right now, I'm able to run a script that takes a directory path as an argument and returns a list of filepaths like this:

    def getTestFiles(directory):
        fileList = []
        print "Searching for 'test
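A minimal sketch of one way to time tests that unittest can already discover, without modifying them (Python 3 shown for brevity, while the question's own snippet is Python 2; the directory layout and "test*.py" pattern are assumptions):

    import time
    import unittest

    def iter_tests(suite):
        """Flatten a nested TestSuite into individual test cases."""
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                yield from iter_tests(item)
            else:
                yield item

    def benchmark_tests(directory, pattern="test*.py"):
        """Discover tests the way unittest does and time each one."""
        suite = unittest.TestLoader().discover(directory, pattern=pattern)
        runner = unittest.TextTestRunner(verbosity=0)
        timings = {}
        for test in iter_tests(suite):
            start = time.perf_counter()
            runner.run(test)
            timings[test.id()] = time.perf_counter() - start
        return timings

Each value is wall-clock time, so comparing servers only makes sense across repeated runs under similar load.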

How do I run Guava's benchmark suite?

落爺英雄遲暮 submitted on 2019-12-05 12:20:10
Guava has a guava-tests subdirectory that contains a directory subtree called benchmark. It appears that executing mvn test (or mvn install) runs the full suite of unit tests in the test subtree, but nothing is run in the benchmark suite. My question is: how do you actually run the benchmark suite? In other words, if I download the Guava source from git (say, in a Linux environment), what are the steps I need to take to build Guava and run its benchmark suite locally? There is surprisingly little information about this online. I stumbled across this old Google Groups post, as well as a git
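For context (not stated in the question itself): the classes under guava-tests/benchmark are Caliper benchmarks, which Maven's test phase ignores, so they have to be launched explicitly. A hypothetical launcher is sketched below; the entry point is Caliper 1.x's, and SomeBenchmark is a placeholder, not a real class name:

    // Sketch only: assumes Caliper 1.x is on the classpath (guava-tests pulls it in)
    // and that the benchmark classes have been compiled; substitute the real
    // fully qualified name of a class under guava-tests/benchmark.
    import com.google.caliper.runner.CaliperMain;

    public class BenchmarkLauncher {
        public static void main(String[] args) {
            // CaliperMain's main() takes the benchmark class name plus Caliper options.
            CaliperMain.main(new String[] {"com.google.common.collect.SomeBenchmark"});
        }
    }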

What advice can you give me for writing a meaningful benchmark?

烂漫一生 submitted on 2019-12-05 11:04:42
I have developed a framework that is used by several teams in our organisation. The "modules" developed on top of this framework can behave quite differently, but they are all fairly resource-consuming, some more than others. They all receive data as input, analyse and/or transform it, and send it further on. We are planning to buy new hardware, and my boss asked me to define and implement a benchmark based on the modules in order to compare the different offers we have received. My idea is to simply start each module sequentially with a well-chosen batch of data as input. Do you have any
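A minimal sketch of that idea, assuming each module can be reduced to a (name, entry point, input data) triple; this interface is hypothetical, not part of the framework described above:

    import time

    def benchmark_modules(modules, repetitions=3):
        """Run each (name, run, data) module sequentially and keep the best
        wall-clock time, so a single slow run does not skew the comparison."""
        results = {}
        for name, run, data in modules:
            timings = []
            for _ in range(repetitions):
                start = time.perf_counter()
                run(data)
                timings.append(time.perf_counter() - start)
            results[name] = min(timings)
        return results

    # Hypothetical usage; module_a.run / sample_data_a stand in for real entry points:
    # results = benchmark_modules([
    #     ("module_a", module_a.run, sample_data_a),
    #     ("module_b", module_b.run, sample_data_b),
    # ])

Taking the minimum of several repetitions is a common way to damp noise from caches and other processes; for an I/O-heavy module the median may be more representative.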

What are good test cases for benchmarking & stress testing substring search algorithms?

。_饼干妹妹 submitted on 2019-12-05 10:26:25
I'm trying to evaluate different substring search (à la strstr) algorithms and implementations, and I'm looking for some well-crafted needle and haystack strings that will catch worst-case performance and possible corner-case bugs. I suppose I could work them out myself, but I figure someone has to have a good collection of test cases sitting around somewhere... Some thoughts and a partial answer to myself (a generator for the first case is sketched after this list):

- Worst case for the brute-force algorithm: needle a^(n+1) b in haystack (a^n b)^m, e.g. aaab in aabaabaabaabaabaabaab.
- Worst case for SMOA: something like yxyxyxxyxyxyxx in (yxyxyxxyxyxyxy)^n. Needs further refinement.
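A small generator for the brute-force worst case above; n and m are whatever sizes you want to stress:

    def brute_force_worst_case(n, m):
        """Needle a^(n+1) b never occurs in haystack (a^n b)^m, but every
        alignment inside a run of a's matches up to n characters before
        failing, so a naive search does work roughly proportional to
        n * len(haystack)."""
        return "a" * (n + 1) + "b", ("a" * n + "b") * m

    needle, haystack = brute_force_worst_case(2, 7)
    assert needle == "aaab"
    assert haystack == "aabaabaabaabaabaabaab"
    assert needle not in haystack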

Are there benchmarks comparing the respective memory usage of django, rails and PHP frameworks?

百般思念 submitted on 2019-12-05 10:04:12
I have to run a web server with many services on an embedded server with limited RAM (1 GB, no swap). There will be a maximum of 100 users. I will have services such as a forum, small games (JavaScript or Flash), etc. My team knows Ruby on Rails very well, but I am a bit worried about Rails' memory usage. I really do not want to start a flame war here, but I am wondering if there are any serious (i.e. documented) benchmarks comparing Rails, Django, CakePHP or any other PHP framework. Could you please point me to such benchmarks, or give me your opinion about Rails' memory usage? Please please please no

Can we get away with replacing existing JS templating solutions with ES6 templates?

孤者浪人 submitted on 2019-12-05 08:35:41
One very attractive feature of ES6 is its built-in template strings. At this point in time, since transpiling to ES5 is a must for cross-browser compatibility, I am curious what the performance differences are between transpiled ES6 templates and existing solutions such as Mustache, Handlebars, Jade, etc. Obviously, if you need advanced features from a templating language, ES6 templates may not fulfill all of your needs, but if you are performing basic templating, is it fair to say that ES6 template strings could replace your current templating engine? Template strings in ES6 aren't really
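For the basic case, the comparison looks roughly like the sketch below (Mustache's render call is from its documented API; the timing loop is purely illustrative):

    // An ES6 template string: a transpiler turns this into plain string
    // concatenation, so the ES5 output carries no runtime library.
    const greetTemplate = (user) =>
        `Hello, ${user.name}! You have ${user.count} messages.`;

    // The same template via Mustache (assumes the mustache npm package).
    const Mustache = require("mustache");
    const greetMustache = (user) =>
        Mustache.render("Hello, {{name}}! You have {{count}} messages.", user);

    // Both produce identical output; time each the crude way.
    const user = { name: "Ada", count: 3 };
    console.time("template string");
    for (let i = 0; i < 100000; i++) greetTemplate(user);
    console.timeEnd("template string");

    console.time("mustache");
    for (let i = 0; i < 100000; i++) greetMustache(user);
    console.timeEnd("mustache");

The structural difference is that the template string is compiled once by the JavaScript engine, while an engine like Mustache parses or interprets the template text at runtime (caching aside).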

Performance Benchmark CouchDB x Relational Databases

笑着哭i submitted on 2019-12-05 08:28:46
Does anyone know of a link to a good performance benchmark of CouchDB vs. any relational database?

Not a performance benchmark, but significantly more "real world": http://johnpwood.net/2009/08/18/couchdb-the-last-mile/

Someone tried: http://jayant7k.blogspot.com/2009/08/document-oriented-data-stores.html

Source: https://stackoverflow.com/questions/1296741/performance-benchmark-couchdb-x-relational-databases

Fortran's performance

戏子无情 submitted on 2019-12-05 08:19:22
Fortran's performance on the Computer Language Benchmarks Game is surprisingly bad. Today's results put Fortran 14th and 11th on the two quad-core tests, and 7th and 10th on the single-core ones. Now, I know benchmarks are never perfect, but still, Fortran was (is?) often considered THE language for high-performance computing, and it seems like the type of problems used in this benchmark should be to Fortran's advantage. In a recent article on computational physics, Landau (2008) wrote: However, [Java] is not as efficient or as well supported for HPC and parallel processing as are FORTRAN and C, the

Count metrics with JMH

旧时模样 submitted on 2019-12-05 07:51:48
How can I measure CPU time and memory usage with JMH? For example, I have:

Code:

    @State(Scope.Thread)
    @BenchmarkMode(Mode.All)
    public class JMHSample_My {
        int x = 1;
        int y = 2;

        @GenerateMicroBenchmark
        public int measureAdd() {
            return (x + y);
        }

        @GenerateMicroBenchmark
        public int measureMul() {
            return (x * y);
        }

        public static void main(String[] args) throws RunnerException {
            Options opt = new OptionsBuilder()
                    .include(".*" + JMHSample_My.class.getSimpleName() + ".*")
                    .warmupIterations(5)
                    .measurementIterations(5)
                    .forks(1)
                    .build();
            new Runner(opt).run();
        }
    }

Result: Benchmark
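One option worth noting (an assumption about the JMH version in use, not something given in the question): JMH can attach profilers through the options builder, and the GC profiler reports allocation rates, which is the closest built-in answer to "memory used per operation". A sketch against the code above:

    // Sketch: same options as above, plus JMH's GC profiler; assumes a JMH
    // version that ships org.openjdk.jmh.profile.GCProfiler.
    Options opt = new OptionsBuilder()
            .include(".*" + JMHSample_My.class.getSimpleName() + ".*")
            .addProfiler(org.openjdk.jmh.profile.GCProfiler.class)
            .warmupIterations(5)
            .measurementIterations(5)
            .forks(1)
            .build();
    new Runner(opt).run();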

How can I benchmark code that mutates the setup data?

痞子三分冷 submitted on 2019-12-05 07:43:22
The current implementation of the built-in benchmarking tool appears to run the code inside the iter call multiple times for each time the setup code outside iter is run. When the code being benchmarked modifies the setup data, subsequent iterations of the benchmarked code are no longer benchmarking the same thing. As a concrete example, I am benchmarking how long it takes to remove values from a Vec:

    #![feature(test)]
    extern crate test;

    use test::Bencher;

    #[bench]
    fn clearing_a_vector(b: &mut Bencher) {
        let mut things = vec![1];
        b.iter(|| {
            assert!(!things.is_empty());
            things.clear();
        })
    }
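The built-in Bencher offers no per-iteration setup hook, so one common way out is to swap in the third-party Criterion crate and rebuild the data for every measured run. A sketch (iter_batched and BatchSize are from Criterion's documented API; this replaces the built-in tool rather than fixing it):

    use criterion::{criterion_group, criterion_main, BatchSize, Criterion};

    fn clearing_a_vector(c: &mut Criterion) {
        c.bench_function("clearing_a_vector", |b| {
            b.iter_batched(
                || vec![1],                // fresh setup data for every run
                |mut things| {
                    assert!(!things.is_empty());
                    things.clear();        // only this closure is timed
                },
                BatchSize::SmallInput,
            )
        });
    }

    criterion_group!(benches, clearing_a_vector);
    criterion_main!(benches);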