My python program executes faster than my java version of the same program. What gives?


Question:

Update: 2009-05-29

Thanks for all the suggestions and advice. I used your suggestions to make my production code execute 2.5 times faster on average than my best result a couple of days ago. In the end I was able to make the java code the fastest.

Lessons:

  • My example code below shows the insertion of primitive ints but the production code is actually storing strings (my bad). When I corrected that the python execution time went from 2.8 seconds to 9.6. So right off the bat, the java was actually faster when storing objects.

  • But it doesn't stop there. I had been executing the java program as follows:

    java -Xmx1024m SpeedTest

But if you set the initial heap size as follows you get a huge improvement:

java -Xms1024m -Xmx1024m SpeedTest 

This simple change reduced the execution time by more than 50%. So the final result for my SpeedTest is Python 9.6 seconds, Java 6.5 seconds.

Original Question:

I had the following python code:

import time
import sys

def main(args):
    iterations = 10000000
    counts = set()
    startTime = time.time()
    for i in range(0, iterations):
        counts.add(i)
    totalTime = time.time() - startTime
    print 'total time =', totalTime
    print len(counts)

if __name__ == "__main__":
    main(sys.argv)

And it executed in about 3.3 seconds on my machine but I wanted to make it faster so I decided to program it in java. I assumed that because java is compiled and is generally considered to be faster than python I would see some big paybacks.

Here is the java code:

import java.util.*;

class SpeedTest {
    public static void main(String[] args)
    {
        long startTime;
        long totalTime;
        int iterations = 10000000;
        HashSet counts = new HashSet((2*iterations), 0.75f);

        startTime = System.currentTimeMillis();
        for(int i=0; i<iterations; i++)
        {
            counts.add(i);
        }
        totalTime = System.currentTimeMillis() - startTime;
        System.out.println("TOTAL TIME = "+( totalTime/1000f) );
        System.out.println(counts.size());
    }
}

So this Java code does basically the same thing as the Python code, but it executed in 8.3 seconds instead of 3.3.

I have extracted this simple example from a real-world program to simplify things. The critical element is that I have a collection (a Python set or a Java HashSet) that ends up with a lot of members, much like the example.

Here are my questions:

  1. How come my python implementation is faster than my java implementation?

  2. Is there a better data structure to use than the hashSet (java) to hold a unique collection?

  3. What would make the python implementation faster?

  4. What would make the java implementation faster?

UPDATE:

Thanks to all who have contributed so far. Please allow me to add some details.

I have not included my production code because it is quite complex and would generate a lot of distraction. The case I present above is as simplified as possible. By that I mean that the Java HashSet's add() call seems to be much slower than the Python set's add().

The java implementation of the production code is also about 2.5 - 3 times slower than the python version -- just like the above.

I am not concerned about vm warmup or startup overhead. I just want to compare the code from my startTime to my totalTime. Please do not concern yourselves with other matters.

I initialized the hashset with more than enough buckets so that it should never have to rehash. (I will always know ahead of time how many elements the collection will ultimately contain.) I suppose one could argue that I should have initialized it to iterations/0.75. But if you try it you will see that execution time is not significantly impacted.
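For reference, here is a minimal sketch of that sizing rule (not the exact production code; the class and variable names are just illustrative). The idea is to pick an initial capacity whose resize threshold, capacity * loadFactor, is at least the number of elements you expect to insert:

    import java.util.HashSet;

    class SizingSketch {
        public static void main(String[] args) {
            int expectedElements = 10000000;
            float loadFactor = 0.75f;
            // The resize threshold is capacity * loadFactor, so a capacity of at
            // least expectedElements / loadFactor means the set never needs to
            // rehash while it fills up.
            int initialCapacity = (int) Math.ceil(expectedElements / loadFactor);
            HashSet<Integer> counts = new HashSet<Integer>(initialCapacity, loadFactor);
            for (int i = 0; i < expectedElements; i++) {
                counts.add(i);
            }
            System.out.println(counts.size());
        }
    }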

I set -Xmx1024m for those who were curious (my machine has 4 GB of RAM).

I am using java version: Java(TM) SE Runtime Environment (build 1.6.0_13-b03).

In the production version I am storing strings (2-15 chars) in the HashSet, so I cannot use primitives, although that is an interesting case.

I have run the code many, many times. I have very high confidence that the python code is between 2.5 and 3 times faster than the java code.

Answer 1:

You're not really testing Java vs. Python, you're testing java.util.HashSet using autoboxed Integers vs. Python's native set and integer handling.

Apparently, the Python side in this particular microbenchmark is indeed faster.

I tried replacing HashSet with TIntHashSet from GNU trove and achieved a speedup factor between 3 and 4, bringing Java slightly ahead of Python.
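For reference, a sketch of that Trove variant (assuming Trove 2.x, where the class is gnu.trove.TIntHashSet; in Trove 3 it lives under gnu.trove.set.hash):

    import gnu.trove.TIntHashSet;   // gnu.trove.set.hash.TIntHashSet in Trove 3

    class TroveSpeedTest {
        public static void main(String[] args) {
            int iterations = 10000000;
            // TIntHashSet stores primitive ints directly, so no Integer boxing occurs.
            TIntHashSet counts = new TIntHashSet(2 * iterations);
            long startTime = System.currentTimeMillis();
            for (int i = 0; i < iterations; i++) {
                counts.add(i);
            }
            long totalTime = System.currentTimeMillis() - startTime;
            System.out.println("TOTAL TIME = " + (totalTime / 1000f));
            System.out.println(counts.size());
        }
    }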

The real question is whether your example code is really as representative of your application code as you think. Have you run a profiler and determined that most of the CPU time is spent in putting a huge number of ints into a HashSet? If not, the example is irrelevant. Even if the only difference is that your production code stores other objects than ints, their creation and the computation of their hashcode could easily dominate the set insertion (and totally destroy Python's advantage in handling ints specially), making this whole question pointless.



Answer 2:

I suspect the reason is that Python uses the integer value itself as its hash, and the hash-table-based implementation of set uses that value directly. From the comments in the source:

This isn't necessarily bad! To the contrary, in a table of size 2**i, taking the low-order i bits as the initial table index is extremely fast, and there are no collisions at all for dicts indexed by a contiguous range of ints. The same is approximately true when keys are "consecutive" strings. So this gives better-than-random behavior in common cases, and that's very desirable.

This microbenchmark is somewhat of a best case for Python because it results in exactly zero hash collisions, whereas if Java's HashSet is rehashing the keys it has to perform that additional work and also gets much worse behavior from collisions.

If you store range(iterations) in a temporary variable and do a random.shuffle on it before the loop, the runtime is more than 2x slower, even though the shuffle and list creation are done outside the timed loop.



Answer 3:

Another possible explanation is that sets in Python are implemented natively in C code, while HashSets in Java are implemented in Java itself. So sets in Python should be inherently much faster.



Answer 4:

It has generally been my experience that Python programs run faster than Java programs, despite the fact that Java is a somewhat "lower level" language. Incidentally, both languages are compiled into byte code (that's what those .pyc files are -- you can think of them as roughly analogous to .class files). Both languages are byte-code interpreted on a virtual stack machine.

You would expect python to be slower at things like, for example, a.b. In java, that a.b will resolve into a dereference. Python, on the other hand, has to do one or more hash table lookups: check the local scope, check the module scope, check global scope, check builtins.

On the other hand, java is notoriously bad at certain operations such as object creation (which is probably the culprit in your example) and serialization.

In summary, there's no simple answer. I wouldn't expect either language to be faster for all code examples.

Correction: several people have pointed out that Java isn't so bad at object creation any more. So, in your example, it's something else. Perhaps it's the autoboxing that's expensive, or perhaps Python's default hashing algorithm is better in this case. In my practical experience, when I rewrite Java code in Python, I always see a performance increase, but that could be as much due to the fact that rewriting code in general tends to improve performance as due to the language itself.



Answer 5:

I'd like to dispel a couple of myths I saw in the answers:

Java is compiled, yes, to bytecode, but ultimately to native code in most runtime environments. People who say C is inherently faster aren't telling the entire story; I could make a case that byte-compiled languages are inherently faster because the JIT compiler can make machine-specific optimizations that are unavailable to ahead-of-time compilers.

A number of things that could make the differences are:

  • Python's hash tables and sets are among the most heavily optimized objects in Python, and Python's hash function is designed to return similar results for similar inputs: hashing an integer just returns the integer, guaranteeing that you will NEVER see a collision in a hash table of consecutive integers in Python (see the sketch after this list).
  • A secondary effect of the above is that the Python code will have high locality of reference as you'll be accessing the hash table in sequence.
  • Java does some fancy boxing and unboxing of integers when you add them to collections. On the bonus side, this makes arithmetic way faster in Java than Python (as long as you stay away from bignums) but on the downside it means more allocations than you're used to.
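As a point of comparison, Java's Integer.hashCode() is also just the wrapped int value itself; the extra cost on the Java side comes from boxing and from the additional bit-spreading step HashMap applies to the hash code before picking a bucket. A quick sketch:

    class IntegerHashSketch {
        public static void main(String[] args) {
            // Integer.hashCode() returns the wrapped int value itself,
            // much like Python's hash() on an int.
            System.out.println(Integer.valueOf(12345).hashCode());   // prints 12345
            System.out.println(Integer.valueOf(-7).hashCode());      // prints -7
        }
    }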


Answer 6:

Edit: A TreeSet might be faster for the real use case, depending on allocation patterns. My comments below deal only with this simplified scenario. However, I do not believe that it would make a very significant difference; the real issue lies elsewhere.

Several people here have recommended replacing the HashSet with a TreeSet. This sounds like very strange advice to me, since there's no way that a data structure with O(log n) insertion time is faster than an O(1) structure that preallocates enough buckets to store all the elements.

Here's some code to benchmark this:

import java.util.*;

class SpeedTest {
    public static void main(String[] args)
    {
        long startTime;
        long totalTime;
        int iterations = 10000000;
        Set counts;

        System.out.println("HashSet:");
        counts = new HashSet((2*iterations), 0.75f);
        startTime = System.currentTimeMillis();
        for(int i=0; i<iterations; i++) {
            counts.add(i);
        }
        totalTime = System.currentTimeMillis() - startTime;
        System.out.println("TOTAL TIME = "+( totalTime/1000f) );
        System.out.println(counts.size());

        counts.clear();

        System.out.println("TreeSet:");
        counts = new TreeSet();
        startTime = System.currentTimeMillis();
        for(int i=0; i<iterations; i++) {
            counts.add(i);
        }
        totalTime = System.currentTimeMillis() - startTime;
        System.out.println("TOTAL TIME = "+( totalTime/1000f) );
        System.out.println(counts.size());
    }
}

And here's the result on my machine:

$ java -Xmx1024M SpeedTest
HashSet:
TOTAL TIME = 4.436
10000000
TreeSet:
TOTAL TIME = 8.163
10000000

Several people also argued that boxing isn't a performance issue and that object creation is inexpensive. While it's true that object creation is fast, it's definitely not as fast as primitives:

import java.util.*;

class SpeedTest2 {
    public static void main(String[] args)
    {
        long startTime;
        long totalTime;
        int iterations = 10000000;

        System.out.println("primitives:");
        startTime = System.currentTimeMillis();
        int[] primitive = new int[iterations];
        for (int i = 0; i < iterations; i++) {
            primitive[i] = i;
        }
        totalTime = System.currentTimeMillis() - startTime;
        System.out.println("TOTAL TIME = "+( totalTime/1000f) );

        System.out.println("boxed:");
        startTime = System.currentTimeMillis();
        Integer[] boxed = new Integer[iterations];
        for (int i = 0; i < iterations; i++) {
            boxed[i] = i;
        }
        totalTime = System.currentTimeMillis() - startTime;
        System.out.println("TOTAL TIME = "+( totalTime/1000f) );
    }
}

Result:

$ java -Xmx1024M SpeedTest2
primitives:
TOTAL TIME = 0.058
boxed:
TOTAL TIME = 1.402

Moreover, creating a lot of objects results in additional overhead from the garbage collector. This becomes significant when you start keeping tens of millions of live objects in memory.



回答7:

I find benchmarks like this to be meaningless. I don't solve problems that look like the test case. It's not terribly interesting.

I'd much rather see a comparison on a meaningful linear algebra problem using NumPy and JAMA. Maybe I'll try it and report back with results.



Answer 8:

I'm not too familiar with python, but I do know HashSet can't contain primitives, so when you say counts.add(i) the i there is getting autoboxed into a new Integer(i) call. That's probably your problem.

If for some reason you really needed a 'set' of integers between 0 and some large n, it's probably best declared as boolean[] set = new boolean[n]. Then you could go through the array and mark items that are in the set as true without incurring the overhead of creating n Integer wrapper objects. If you wanted to go further than that, you could use a byte[] of size n/8 and use the individual bits directly. Or perhaps BigInteger.
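A rough sketch of that boolean[] idea, assuming the values really are ints in the range 0..n-1 (the timing scaffolding just mirrors the benchmark above):

    class BooleanArraySketch {
        public static void main(String[] args) {
            int n = 10000000;
            boolean[] set = new boolean[n];   // one flag per possible value, no Integer objects
            int size = 0;
            long startTime = System.currentTimeMillis();
            for (int i = 0; i < n; i++) {
                if (!set[i]) {                // count a value only the first time it is added
                    set[i] = true;
                    size++;
                }
            }
            long totalTime = System.currentTimeMillis() - startTime;
            System.out.println("TOTAL TIME = " + (totalTime / 1000f));
            System.out.println(size);
        }
    }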

EDIT

Stop voting my answer up. It's wrong.

EDIT

No really, it's wrong. I get comparable performance if I do what the question suggests and populate the set with N Integers. If I replace the contents of the for loop with this:

    Integer[] ints = new Integer[N];
    for (int i = 0; i < N; ++i) {
        ints[i] = i;
    }

Then it only takes 2 seconds. If you don't store the Integer at all then it takes less than 200 millis. Forcing the allocation of 10000000 Integer objects does take some time, but it looks like most of the time is spent inside the HashSet put operation.



Answer 9:

There are a number of issues here which I'd like to bring together.

First, if it's a program that you are only going to run once, does it matter that it takes an extra few seconds?

Secondly, this is just one microbenchmark, and microbenchmarks are pointless for comparing overall performance.

Startup has a number of issues.

The Java runtime is much bigger than Python so takes longer to load from disk and takes up more memory which may be important if you are swapping.

If you haven't set -Xms you may be running the GC only to resize the heap. Might as well have the heap properly sized at the start.

It is true that Java starts off interpreting and then compiles -- around 1,500 iterations for the Sun client [C1] HotSpot and 10,000 for the server [C2]. Server HotSpot will give you better performance eventually, but takes more memory. We may eventually see client HotSpot use the server compiler for very frequently executed code, for the best of both worlds. However, this should not usually be a question of seconds.

Most importantly, you may be creating two objects per iteration. For most code, you wouldn't be creating these tiny objects for such a large proportion of the execution. TreeSet may do better on the number-of-objects score, with 6u14 and Harmony doing better still.

Python may possibly be winning by storing small integer objects in references instead of actually having an object. That is undoubtedly a good optimisation.

A problem with a lot of benchmarks is that you are mixing a lot of different code up in one method. You wouldn't write code you cared about like that, would you? So why are you performance-testing code which is unlike the code you would actually like to run fast?

Better data structure: something like BitSet would seem to make sense (although that has synchronisation on it, which may or may not impact performance).



Answer 10:

You need to run it multiple times to get a real idea of "how fast" each runs. The JVM startup time [for one] is adding to the single running time of the Java version.

You're also creating a HashSet with a large initial capacity, which means the backing HashMap will be created with that many available slots, unlike the Python where you create a basic Set. Hard to tell if that would hinder though, as when your HashSet grows it will have to reallocate the stored objects.



Answer 11:

Are you using the -server flag with the jvm? You can't test for performance without it. (You also have to warm up the jvm before doing the test.)

Also, you probably want to use TreeSet<Integer>. HashSet will be slower in the long run.

And which jvm are you using? The newest I hope.

EDIT

When I say use TreeSet, I mean in general, not for this benchmark. TreeSet handles the real-world issue of uneven hashing of objects. If you get too many objects in the same bin in a HashSet, performance degrades to about O(n).



Answer 12:

If you really want to store primitive types in a set, and do heavy work on it, roll your own set in Java. The generic classes are not fast enough for scientific computing.

As Ants Aasma mentions, Python bypasses hashing and uses the integer directly. Java creates an Integer object (autoboxing), and then casts it to an Object (in your implementation). This object must be hashed, as well, for use in a hash set.

For a fun comparison, try this:

Java

import java.util.HashSet;

class SpeedTest {
    public static class Element {
        private int m_i;
        public Element(int i) {
            m_i = i;
        }
    }

    public static void main(String[] args)
    {
        long startTime;
        long totalTime;
        int iterations = 1000000;
        HashSet<Element> counts = new HashSet<Element>((int)(2*iterations), 0.75f);

        startTime = System.currentTimeMillis();
        for(int i=0; i<iterations; ++i)
        {
            counts.add(new Element(i));
        }
        totalTime = System.currentTimeMillis() - startTime;
        System.out.println("TOTAL TIME = "+( totalTime/1000f) );
        System.out.println(counts.size());
    }
}

Results:

$ java SpeedTest
TOTAL TIME = 3.028
1000000

$ java -Xmx1G -Xms1G SpeedTest
TOTAL TIME = 0.578
1000000

Python

#!/usr/bin/python
import time
import sys

class Element(object):
    def __init__(self, i):
        self.num = i

def main(args):
    iterations = 1000000
    counts = set()
    startTime = time.time()
    for i in range(0, iterations):
        counts.add(Element(i))
    totalTime = time.time() - startTime
    print 'total time =', totalTime
    print len(counts)

if __name__ == "__main__":
    main(sys.argv)

Results:

$ ./speedtest.py
total time = 20.6943161488
1000000

How's that for 'python is faster than java'?



Answer 13:

How much memory did you start the JVM with? It makes a difference. When I run your program on the JVM with a 1 GB heap:

$ java -Xmx1024M -Xms1024M -classpath . SpeedTest
TOTAL TIME = 5.682
10000000
$ python speedtest.py
total time = 4.48310899734
10000000

If I run the JVM with less memory, it takes longer ... considerably longer:

$ java -Xmx768M -Xms768M -classpath . SpeedTest
TOTAL TIME = 6.706
10000000
$ java -Xmx600M -Xms600M -classpath . SpeedTest
TOTAL TIME = 14.086
10000000

I think the HashSet is the performance bottleneck in this particular instance. If I replace the HashSet with a LinkedList, the program gets substantially faster.

Finally -- note that Java programs are initially interpreted and only those methods that are called many times are compiled. Thus, you're probably comparing Python to Java's interpreter, not the compiler.



Answer 14:

Just a stab in the dark here, but some optimizations that Python is making that Java probably isn't:

  • The range() call in Python is creating all 10000000 integer objects at once, in optimized C code. Java must create an Integer object each iteration, which may be slower.
  • In Python, ints are immutable, so you can just store a reference to a global "42", for example, rather than allocating a slot for the object. I'm not sure how Java's boxed Integer objects compare (see the sketch after this list).
  • Many of the built-in Python algorithms and data structures are rather heavily optimized for special cases. For instance, the hash function for integers is simply the identity function. If Java is using a more "clever" hash function, this could slow things down quite a bit. If most of your time is spent in data structure code, I wouldn't be surprised at all to see Python beat Java, given the amount of effort that has been spent over the years hand-tuning the Python C implementation.
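To the second point, Java does cache boxed Integers in a small range (-128 to 127 by default), but values outside that range get a fresh object on each boxing. A quick sketch:

    class IntegerCacheSketch {
        public static void main(String[] args) {
            // Small values come from a shared cache, so boxing allocates nothing new.
            Integer a = Integer.valueOf(100);
            Integer b = Integer.valueOf(100);
            System.out.println(a == b);       // true: same cached object

            // Values outside the cache range produce distinct objects each time.
            Integer c = Integer.valueOf(100000);
            Integer d = Integer.valueOf(100000);
            System.out.println(c == d);       // false: two separate allocations
        }
    }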


Answer 15:

A few changes for faster Python.

#!/usr/bin/python
import time
import sys

import psyco                 #<<<<
psyco.full()

class Element(object):
    __slots__ = ["num"]      #<<<<
    def __init__(self, i):
        self.num = i

def main(args):
    iterations = 1000000
    counts = set()
    startTime = time.time()
    for i in xrange(0, iterations):
        counts.add(Element(i))
    totalTime = time.time() - startTime
    print 'total time =', totalTime
    print len(counts)

if __name__ == "__main__":
    main(sys.argv)

Before

(env)~$ python speedTest.py
total time = 8.82906794548
1000000

After

(env)~$ python speedTest.py
total time = 2.44039201736
1000000

Now some good old cheating and ...

#!/usr/bin/python
import time
import sys
import psyco

psyco.full()

class Element(object):
    __slots__ = ["num"]
    def __init__(self, i):
        self.num = i

def main(args):
    iterations = 1000000
    counts = set()
    elements = [Element(i) for i in range(0, iterations)]
    startTime = time.time()
    for e in elements:
        counts.add(e)
    totalTime = time.time() - startTime
    print 'total time =', totalTime
    print len(counts)

if __name__ == "__main__":
    main(sys.argv)

(env)~$ python speedTest.py
total time = 0.526521921158
1000000


Answer 16:

I agree with Gandalf about the startup time. Also, you are allocating a huge HashSet, which is not at all similar to your Python code. I imagine if you put this under a profiler, a good chunk of time would be spent there. Also, inserting new elements is really going to be slow with this size. I would look into TreeSet as suggested.



Answer 17:

The biggest issue is probably that the given code measures wall time -- what your watch measures -- but what should be measured to compare code runtime is process time -- the amount of time the CPU spends executing that particular code and not other tasks.



Answer 18:

You can make the Java microbenchmark much faster by adding just one simple little extra.

    HashSet counts = new HashSet((2*iterations), 0.75f); 

becomes

    HashSet counts = new HashSet((2*iterations), 0.75f) {
        @Override public boolean add(Object element) { return false; }
    };

Simple, faster and gets the same result.



Answer 19:

You might want to see if you can "prime" the JIT compiler into compiling the section of code you're interested in, perhaps by running it as a function once beforehand and sleeping briefly afterwards. This might allow the JVM to compile the function down to native code.
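A minimal warm-up sketch along those lines (the iteration count and sleep duration are guesses; HotSpot's actual compile thresholds differ between the client and server VMs):

    import java.util.HashSet;

    class WarmUpSketch {
        // Exercise the same code path before timing it so HotSpot has a chance
        // to JIT-compile it to native code.
        static void fill(HashSet<Integer> set, int n) {
            for (int i = 0; i < n; i++) {
                set.add(i);
            }
        }

        public static void main(String[] args) throws InterruptedException {
            fill(new HashSet<Integer>(), 20000);  // untimed warm-up run
            Thread.sleep(1000);                   // give the background compiler a moment

            HashSet<Integer> counts = new HashSet<Integer>();
            long startTime = System.currentTimeMillis();
            fill(counts, 10000000);               // the timed run
            long totalTime = System.currentTimeMillis() - startTime;
            System.out.println("TOTAL TIME = " + (totalTime / 1000f));
            System.out.println(counts.size());
        }
    }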



Answer 20:

Well, if you're going to tune the Java program, you might as well tune the Python program too.

>>> import timeit
>>> timeit.Timer('x = set()\nfor i in range(10000000):\n    x.add(i)').repeat(3, 1)
[2.1174559593200684, 2.0019571781158447, 1.9973630905151367]
>>> timeit.Timer('x = set()\nfor i in xrange(10000000):\n    x.add(i)').repeat(3, 1)
[1.8742368221282959, 1.8714439868927002, 1.869229793548584]
>>> timeit.Timer('x = set(xrange(10000000))').repeat(3, 1)
[0.74582195281982422, 0.73061800003051758, 0.73396801948547363]

Just using xrange makes it about 8% faster on my machine. And the expression set(xrange(10000000)) builds exactly the same set, but 2.5x faster (from 1.87 seconds to 0.74).

I like how tuning a Python program makes it shorter. :) But Java can do the same trick. As everyone knows, if you want a dense set of smallish integers in Java, you don't use a hash table. You use a java.util.BitSet!

BitSet bits = new BitSet(iterations);

startTime = System.currentTimeMillis();
bits.set(0, iterations, true);
totalTime = System.currentTimeMillis() - startTime;
System.out.println("TOTAL TIME = "+( totalTime/1000f) );
System.out.println(bits.cardinality());

That should be fairly quick. Unfortunately I don't have the time to test it right now.


