The Java (not really) Faster than C++ Benchmark

Last update: Mon Apr 11 22:32:56 CEST 2011


When I first read "Java faster than C++ benchmark", I was sure that there was something wrong with it. After all, Java couldn't be faster than C++, right? What would be next? C++ faster than C? C faster than Assembler?

After quite a while I've found some time to update the results using brand new JVMs and GCC versions.

Benchmark environment

All new tests were performed on Linux 2.6, Intel® Core™ Quad Q6600 2.40GHz. Older tests (marked as such) were done on an AMD Athlon™ XP 2500+ (Barton). No other load was placed on the machine during the tests; run level 1 (single-user mode) was used. The GNU C Library was built with i686 optimizations (the standard Debian package), which should benefit both Java and C++.

When a tested GCC version wasn't available in my Linux distro, I compiled GCC myself from the standard release. Please note that the result of this benchmark (I am talking about the "C++ faster than Java" result) holds for GCC 3.4.4, 4.0.x, 4.1.x, 4.2.x and 4.3.x as well, but the GCC 4.5 series is faster than earlier GCCs. Truth be told, each new major GCC release performs better than the previous one - as should be expected, given the amount of work the GCC guys put into their project.

Testing conditions

I've kept the original number of repetitions in all tests.

Time was measured in the same way as in the original results, with one exception: the time used as the benchmark result was the elapsed real (wall-clock) time of the process. If we stick to the original method of measurement, the results will be incorrect and biased towards Java - because of the threads used by the JVM. Wall-clock time is better and more accurate when the machine operates under no load - as was the case here.
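As a minimal sketch of this measurement approach (the class and method names below are mine, not from the original harness), wall-clock time is simply taken around the whole workload:

```java
public class WallClock {
    // Hypothetical stand-in for one benchmark workload.
    static long workload() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Elapsed real time: unlike per-thread CPU time, this also
        // charges the process for work done by JVM helper threads.
        long start = System.nanoTime();
        long result = workload();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("result=" + result + ", elapsed=" + elapsedMs + " ms");
    }
}
```

On an otherwise idle machine this is essentially what timing the process externally would report.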


Since the Java compiler does not have any settings that could affect performance, the Java code was compiled with standard settings, using Sun's Java compiler in the same version as the JVM later used to run the actual benchmark.

To run the hash, heapsort and strcat tests I had to increase the heap size. That's pretty much standard practice for enterprise software, so I do not see it as a drawback of Java.
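To verify that an enlarged heap actually took effect, the configured limit can be read back from the runtime (the class name below is mine, for illustration only):

```java
public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the heap ceiling the JVM is running with,
        // i.e. roughly what was requested via -Xmx.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB");
    }
}
```

Running it as, say, `java -Xmx512m HeapCheck` should print a value reflecting the flag (modulo JVM rounding).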


When looking at the C++ code, I noticed many performance problems, which may go unnoticed by people with a strong Java background. These problems were not present in the Java code.

In other words, to make this benchmark fair, I had to make some modifications to the original code. In doing so, I tried to make the code as close to the original as possible, even if my personal coding style is completely different. If anyone feels that some further modifications could/should be made to Java or C++ code, do not hesitate to contact me.

For C++ compilation, I've used the following options:

-O2 -fomit-frame-pointer -finline-functions -std=c++0x -march=core2
Note: older results were produced with -march=athlon-xp architecture.

I consider the first two options standard for compilation (all major Linux distributions, the Linux kernel and many others use these two).

-finline-functions is used to tell the compiler to guess which functions should be inlined (Java also does that).

-std=c++0x is used to tell the compiler to use the C++0x standard (it's like a JDK level in Java).

Core2 - well, that's my machine. I could use pentium or i686 instead (which could be considered more standard), but it would not change the overall result - that is, C++ being the clear winner. So, let's get down to business, shall we?

Changes to the original benchmark

Here's the list of changes that I've made to the C++ programs. I did not list the trivial changes needed for porting to the version of g++ I used (like adding an additional header file). All of them should be irrelevant for performance. You can download the modified code here.

Results / And the winner is...

On Intel, all tests but two ended with C++ being better. Overall, C++ was 1.7x faster than Java.

Results for recent tests performed on Intel Quad Core:

Old tests performed on Athlon:

Old tests on Intel:


The most important conclusion is obvious: for this set of benchmarks, C++ is clearly the winner.

Second conclusion - don't use Client VM in older Java versions.

Unfortunately, there's also a third conclusion. It seems that it's much, much easier to create a well-performing program in Java. So, please consider that for a moment before you start recoding your Java program in C++ just to make it faster... Also, let's face it: for most applications, Java being less than twice as slow as C++ means nothing, and the development time is significantly shorter with Java.

Java 6 vs Java 5

Client vs Server VM

On older processors and Java versions, it seems that the client VM in Java 6 is tuned a little better than in Java 5. Still, the server VM performs significantly better than the client VM. On newer processors and Java versions, the client VM performed better. It does not make sense to draw any conclusions from that for such small programs, though.

Java progress

It seems that on new hardware the gap between Java and C++ has narrowed. Going from 3x slower to 1.7x slower is quite an impressive feat on Java's side. And keep in mind that C++ was also getting faster with every compiler release.

Performance issues

If we don't take the Ackermann test into account, the Java 6 Server VM performs better than the Java 5 Server VM. The most likely reason for the problems in Ackermann is more limited inlining in Java 6 - at least in comparison to Java 5. If not for this problem, Java 6 would actually come out significantly ahead of Java 5 in the overall results (by ~15 seconds).

Please note that I had to increase the stack size in the Ackermann test for Java 6 - otherwise neither of the JVM versions would finish the test. This further supports the theory about changes in the inlining heuristics.
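For reference, the Ackermann test boils down to the textbook doubly recursive function - a sketch (class and method names are mine), whose explosive recursion depth is exactly why it is so sensitive to both inlining and stack size:

```java
public class Ackermann {
    // Classic Ackermann function: Ack(m, n) recursion depth grows
    // enormously with m and n, stressing the call stack.
    static int ack(int m, int n) {
        if (m == 0) return n + 1;
        if (n == 0) return ack(m - 1, 1);
        return ack(m - 1, ack(m, n - 1));
    }

    public static void main(String[] args) {
        int n = args.length > 0 ? Integer.parseInt(args[0]) : 7;
        System.out.println("Ack(3," + n + ") = " + ack(3, n));
    }
}
```

For larger n the thread stack can be enlarged with the -Xss flag (e.g. `java -Xss64m Ackermann 10`), on VMs where -Xss applies to the main thread.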

Back to my homepage

Przemyslaw Bruski, SCJD - mail me at [pbruskispam "at" op "dot" pl]