benchmarking

Performance of row vs column operations in NumPy

Posted by ▌冷眼眸甩不掉的悲伤 on 2019-12-05 00:45:17
There are a few articles showing that MATLAB prefers column operations over row operations, and that depending on how you lay out your data the performance can vary significantly. This is apparently because MATLAB uses column-major order for representing arrays. I remember reading that Python (NumPy) uses row-major order. With this, my questions are: Can one expect a similar difference in performance when working with NumPy? If the answer to the above is yes, what would be some examples that highlight this difference? Like many benchmarks, this really depends on the particulars of the…
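The excerpt cuts off, but the underlying effect is easy to demonstrate: with NumPy's default C (row-major) layout, a row slice is contiguous in memory while a column slice is strided. Below is a minimal timing sketch, not taken from the original answer; the array shape and indices are arbitrary:

    # Row access on a C-ordered (row-major) array touches contiguous memory,
    # while column access strides through it. Exact numbers vary by machine.
    import timeit
    import numpy as np

    a = np.random.rand(5_000, 5_000)    # NumPy arrays are C-ordered by default

    row_time = timeit.timeit(lambda: a[2500, :].sum(), number=1000)  # contiguous slice
    col_time = timeit.timeit(lambda: a[:, 2500].sum(), number=1000)  # strided slice
    print(f"row slice sum:    {row_time:.3f} s")
    print(f"column slice sum: {col_time:.3f} s")

Converting the array with np.asfortranarray flips which slice is contiguous, mirroring MATLAB's column-major behaviour.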

.NET benchmarking frameworks

Posted by ╄→гoц情女王★ on 2019-12-05 00:02:14
Question: Are there any .NET frameworks for writing micro-benchmarks, like Japex or this (both are for Java)?

Answer 1: Jon Skeet wrote one: http://msmvps.com/blogs/jonskeet/archive/2009/01/26/benchmarking-made-easy.aspx It also lives on Google Code. Unfortunately, it is not as rich as Japex.

Answer 2: Check this out, it is a really cool library and VERY easy to use: http://blogs.msdn.com/vancem/archive/2009/02/06/measureit-update-tool-for-doing-microbenchmarks.aspx The best feature I like in it is the normalization feature.

Is there any performance benchmark for Thrift on HBase?

Posted by 孤人 on 2019-12-04 23:49:22
Question: I have a system that may write huge amounts of data to HBase. The system is written in C++, and I found out that HBase has a Thrift interface for other languages. My question is: is there any performance benchmark for Thrift on HBase? What are its biggest disadvantages compared with the Java native API?

Answer 1: I recommend these two recent blog posts on the topic: HBase + Thrift performance part 1 and HBase + Thrift performance part 2. The two posts give detailed performance measurements of using Thrift with HBase.
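Neither the question nor the answer excerpt includes code, so for reference here is a small sketch of the kind of write benchmark one could run against HBase's Thrift gateway from Python using the happybase client. The host, port, table name and column family are placeholders, and it assumes the Thrift server is running and the table already exists:

    # Time a batch of puts through the HBase Thrift gateway via happybase.
    # Host, table name and column family are placeholders.
    import time
    import happybase

    connection = happybase.Connection("hbase-thrift-host", port=9090)
    table = connection.table("bench_table")

    n = 10_000
    t0 = time.time()
    with table.batch(batch_size=1000) as batch:   # buffer mutations client-side
        for i in range(n):
            batch.put(f"row-{i:08d}".encode(), {b"cf:payload": b"x" * 100})
    elapsed = time.time() - t0
    print(f"{n} puts in {elapsed:.2f} s ({n / elapsed:.0f} rows/s)")

Running the same workload through the Java native client gives a rough sense of the per-request overhead the extra Thrift hop adds.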

Svnserve VS mod_dav_svn

Posted by ☆樱花仙子☆ on 2019-12-04 21:20:13
Question: We plan to install a Subversion repository in an environment where the network is quite slow to begin with. The previous VCS used there was VSS, and it was a nightmare to use (not only because of its "features"). So my question is about the choice between svnserve and the Apache module. I know that the Apache module will be slower due to its stateless protocol, but I have no idea how much extra time that implies. Are there benchmarks anywhere, or rules of thumb, that indicate the average…

How much do check constraints cost in Postgres 9.x?

Posted by 最后都变了- on 2019-12-04 19:45:54
I'd like to know if there are any benchmarks comparing the cost of adding check constraints to a table of 60 columns, where I'd like to add a NotEmpty constraint on 20 columns and a NotNull constraint on 6 columns. My situation is that the table contains both empty values and NULL values (which in my case both mean "no data"), and I'd like to unify them into a single representation. That's why I'm thinking of adding NotEmpty constraints on those columns: from what I've read, NULL values are not as heavy (in byte size) as empty values (and they better respect the real meaning). But on the other hand, NotNull constraints are more deep…
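No benchmark appears in the excerpt; the sketch below shows one way to measure the insert overhead yourself with psycopg2. The database name, table names and column are all made up, and it assumes a local PostgreSQL instance where you can create throwaway tables:

    # Compare insert cost into a plain column vs. one with NOT NULL plus a
    # 'not empty' CHECK constraint. All names here are placeholders.
    import time
    import psycopg2

    def time_inserts(conn, table, rows=100_000):
        cur = conn.cursor()
        t0 = time.time()
        cur.executemany(f"INSERT INTO {table} (name) VALUES (%s)",
                        [("some value",)] * rows)
        conn.commit()
        return time.time() - t0

    with psycopg2.connect("dbname=benchdb") as conn:
        cur = conn.cursor()
        cur.execute("CREATE TABLE t_plain (name text)")
        cur.execute("CREATE TABLE t_checked (name text NOT NULL CHECK (name <> ''))")
        conn.commit()
        print("plain  :", time_inserts(conn, "t_plain"), "s")
        print("checked:", time_inserts(conn, "t_checked"), "s")

In practice the per-row cost of a simple CHECK expression is usually small next to the I/O of the insert itself, but measuring it on your own schema is the only way to know for sure.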

Subprocess memory usage in Python

Posted by 帅比萌擦擦* on 2019-12-04 19:34:14
How can one measure/benchmark the maximum memory usage of a subprocess executed from Python? I made a little utility class that demonstrates how to do this with the psutil library:

    import time
    import psutil
    import subprocess

    class ProcessTimer:
        def __init__(self, command):
            self.command = command
            self.execution_state = False

        def execute(self):
            self.max_vms_memory = 0
            self.max_rss_memory = 0
            self.t1 = None
            self.t0 = time.time()
            self.p = subprocess.Popen(self.command, shell=False)
            self.execution_state = True

        def poll(self):
            if not self.check_execution_state():
                return False
            self.t1 = time.time()
            try:
                pp = psutil…
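The class above is cut off mid-method, so here is a separate, self-contained sketch of the same idea: poll the child's resident set size with psutil until it exits. The launched command is only a placeholder:

    # Poll a child process's RSS with psutil and report the peak value.
    # The command being launched is just a placeholder example.
    import time
    import subprocess
    import psutil

    def max_rss_of(command, interval=0.05):
        proc = subprocess.Popen(command, shell=False)
        ps_proc = psutil.Process(proc.pid)
        peak_rss = 0
        while proc.poll() is None:               # child still running
            try:
                peak_rss = max(peak_rss, ps_proc.memory_info().rss)
            except psutil.NoSuchProcess:         # exited between checks
                break
            time.sleep(interval)
        return peak_rss

    if __name__ == "__main__":
        cmd = ["python", "-c", "x = [0] * 10_000_000"]
        print("peak RSS (bytes):", max_rss_of(cmd))

Polling can miss short-lived spikes between samples; lowering the interval trades accuracy against sampling overhead.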

Haskell measuring function performance

Posted by 拥有回忆 on 2019-12-04 18:29:47
Question: In Haskell, how can I 'simply' measure a function's performance, for example how long it takes to run or how much memory it uses? I am aware of profiling; however, is there a simpler way that will not require me to change my code too much?

Answer 1: Measuring how long it takes to run and how much memory it takes are two separate problems, namely benchmarking and profiling. Haskell has a well-defined set of tools for both. Solving neither of the problems requires you to make any changes to…

Trial-division code runs 2x faster as 32-bit on Windows than 64-bit on Linux

Posted by 社会主义新天地 on 2019-12-04 17:39:19
Question: I have a piece of code that runs 2x faster on Windows than on Linux. Here are the times I measured:

    g++ -Ofast -march=native -m64      29.1123
    g++ -Ofast -march=native           29.0497
    clang++ -Ofast -march=native       28.9192
    Visual Studio 2013 Debug 32b       13.8802
    Visual Studio 2013 Release 32b     12.5569

It really seems to be too huge a difference. Here is the code:

    #include <iostream>
    #include <map>
    #include <chrono>

    static std::size_t Count = 1000;
    static std::size_t MaxNum = 50000000;

    bool IsPrime(std::size_t…

Cassandra Reading Benchmark with Spark

Posted by 独自空忆成欢 on 2019-12-04 17:08:25
I'm doing a benchmark of Cassandra's read performance. In the test-setup step I created clusters with 1 / 2 / 4 EC2 instances and data nodes. I wrote one table with 100 million entries (~3 GB CSV file). Then I launch a Spark application which reads the data into an RDD using the spark-cassandra-connector. I expected the behavior to be: the more instances Cassandra uses (with the same number of instances on the Spark side), the faster the reads! With writes everything seems correct (~2 times faster if the cluster is 2 times larger). But: in my benchmark the read is always faster with…
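The excerpt does not include the benchmark code; the sketch below shows roughly how such a timed read looks from PySpark with the connector's DataFrame API. The connection host, keyspace and table names are placeholders, and it assumes the spark-cassandra-connector package is on the Spark classpath (e.g. via --packages):

    # Time a full-table read from Cassandra through the spark-cassandra-connector.
    # Host, keyspace and table names are placeholders.
    import time
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("cassandra-read-benchmark")
             .config("spark.cassandra.connection.host", "10.0.0.1")
             .getOrCreate())

    t0 = time.time()
    df = (spark.read
          .format("org.apache.spark.sql.cassandra")
          .options(keyspace="bench", table="entries")
          .load())
    rows = df.count()                 # forces the data to actually be read
    print(f"read {rows} rows in {time.time() - t0:.1f} s")

Whether a larger Cassandra cluster speeds up such a scan also depends on the Spark side: if the Spark executors rather than Cassandra are the bottleneck, adding Cassandra nodes changes little.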

Executable runs faster on Wine than Windows — why?

Posted by 随声附和 on 2019-12-04 16:56:10
Question: Solution: Apparently the culprit was the use of floor(), whose performance turns out to be OS-dependent in glibc. This is a follow-up to an earlier question: Same program faster on Linux than Windows -- why? I have a small C++ program that, when compiled with nuwen gcc 4.6.1, runs much faster on Wine than on Windows XP (on the same computer). The question: why does this happen? The timings are ~15.8 and 25.9 seconds for Wine and Windows respectively. Note that I'm talking about the…