Another Reason Java is Faster than C (maybe)

Paul S. R. Chisholm points out a new reason that virtual machine-based languages such as Java may sometimes outperform statically optimized languages such as C:

Portability depends on architecture (for example, x86 vs. PowerPC), but high performance depends on microarchitecture (for example, Pentium M vs. Athlon 64 X2). Today’s Core 2 chips have many high-performance features missing from the original 1993 Pentiums. A good compiler like gcc can take advantage of those additional features. This is bad news if you’re using a binary Linux distribution, compiled to a lowest common denominator. It’s good news if you’re building and installing Linux from source, with something like Linux From Scratch or Gentoo/Portage. It’s also good news for just-in-time compilers (think Java, .NET, and Mono); they’re compiling on the “target” machine, so they can generate code tailored for the machine’s exact microarchitecture.

This sounds plausible in theory. What I don’t know is whether Java takes advantage of this in practice. Has anyone looked at the JIT source code lately? Can anyone say whether it makes any microarchitecture-specific optimizations?
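
One way to check, at least for Sun’s HotSpot VM: on recent HotSpot releases, running java -XX:+PrintFlagsFinal -version dumps the flags the VM set after probing the host CPU, including CPU-derived entries such as UseSSE (exact flag names vary by JVM version and platform). The sketch below only prints the coarser platform facts exposed to Java code; it’s for orientation, not proof of any microarchitecture-specific code generation.

public class CpuProbe {
    // Coarse, portable facts the JVM reports to any Java program.
    // The interesting microarchitecture detail lives inside the VM itself;
    // on HotSpot, "java -XX:+PrintFlagsFinal -version" shows CPU-derived
    // flags such as UseSSE (flag names vary by version and platform).
    public static void main(String[] args) {
        System.out.println("os.arch:    " + System.getProperty("os.arch"));
        System.out.println("vm.name:    " + System.getProperty("java.vm.name"));
        System.out.println("vm.version: " + System.getProperty("java.vm.version"));
    }
}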

8 Responses to “Another Reason Java is Faster than C (maybe)”

  1. bob Says:

    C is only hobbled in this regard when a compiled program is limited to a single architecture.

    Fat Mach-O executables can contain any number of architectures or microarchitectures.

    In other words, this isn’t a C vs. Java problem, it’s an object-file problem.

  2. Xan Gregg Says:

    When I did some Java vs. C benchmarks a couple of years ago (http://www.forthgo.com/blog/java-vs-c-vs-c-at-finding-primes/), I noticed in passing that moving from 64-bit integers to 32-bit integers gave C a big improvement but hardly affected the Java times (see the sketch below). I assumed that the JVM noticed I was running on a 64-bit machine and customized the code for it, while the C compiler had to lock in 32-bit instructions early and couldn’t use 64-bit instructions directly. Thus 64-bit integers were costly for C even on a 64-bit machine.

    Now, if only they’d make the range-check elimination optimization genuinely useful (by adding static analysis), they’d be able to really compete on speed.
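
    A minimal sketch of the kind of int-vs-long comparison Xan describes (not his actual benchmark, and deliberately crude: one run, no warm-up handling, no statistics):

    public class IntVsLong {
        // Sum 1..n twice, once in 32-bit and once in 64-bit arithmetic.
        // Overflow is harmless here; only the relative timings matter.
        static int sumInt(int n) {
            int total = 0;
            for (int i = 1; i <= n; i++) total += i;
            return total;
        }

        static long sumLong(long n) {
            long total = 0;
            for (long i = 1; i <= n; i++) total += i;
            return total;
        }

        public static void main(String[] args) {
            int n = 100000000;
            long t0 = System.nanoTime();
            int a = sumInt(n);
            long t1 = System.nanoTime();
            long b = sumLong(n);
            long t2 = System.nanoTime();
            System.out.println("int:  " + (t1 - t0) / 1000000 + " ms (sum " + a + ")");
            System.out.println("long: " + (t2 - t1) / 1000000 + " ms (sum " + b + ")");
        }
    }

    The idea being tested: a C compiler targeting 32-bit x86 must emulate 64-bit arithmetic with pairs of 32-bit operations, while a JIT running on a 64-bit machine can use native 64-bit instructions for both versions.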

  3. Nicolás Lichtmaier Says:

    Indeed… don’t some media players include optimized versions of some parts of their code for different x86 variants?

  4. Lawrence Says:

    Well, wait. Wouldn’t a Java program be compiled only a little (by the JIT) and mostly interpreted by the JVM?
    On Linux, wouldn’t that JVM probably be compiled for a 386 to support all platforms?

  5. Jonathan Says:

    IBM’s JITC is supposed to have been doing that since at least 1999. E.g., “Simple code scheduling within a basic block is applied to reorder the instructions so that they best fit the requirements of the underlying machine. The JIT compiler has a potential advantage over the traditional compilation technique in that it can identify the type of machine it is running on, and we make use of this information in both code generation and code scheduling.” and
    “Different machine instructions are selected, depending on the underlying processor type for some bytecode instructions.” (http://www.research.ibm.com/journal/sj/391/suganuma.html)

    I’ve always presumed that Sun’s HotSpot works similar magic, emphasizing SPARC architectures until they started getting serious about selling x86-based machines. They do have at least one related patent, anyway (http://www.patentstorm.us/patents/6139199-description.html).

    Given the awareness a JVM can have of the characteristics of the actual hardware it is running on, I’ve always been surprised that Java hasn’t beaten C more frequently.

  6. Frank Berger Says:

    @Lawrence: “Wouldn’t a Java program be compiled only a little (by the JIT) and mostly interpreted by the JVM?”
    How did you get that idea? HotSpot compiles frequently used code and interprets the rest (see the sketch below). What percentage that is depends on the program.

    “On Linux, wouldn’t that JVM probably be compiled for a 386 to support all platforms?” Why should a JIT compiler currently running on, say, an x64 machine stay compatible with a 386? The code it generates is compiled at runtime and thrown away at the end of the program, so it never has to run on a 386. The JIT will generate code for a genuine 386 only if it is actually running on one.
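
    A small way to watch that hot/cold split, assuming a HotSpot VM (the -XX:+PrintCompilation flag is HotSpot-specific, and its output format varies by version):

    public class HotVsCold {
        // Run with: java -XX:+PrintCompilation HotVsCold
        // hot() should appear in the compilation log after enough calls;
        // coldSetup(), called only once, stays interpreted.
        static long coldSetup() {
            return 12345L;
        }

        static long hot(long seed) {
            return seed * 6364136223846793005L + 1442695040888963407L;
        }

        public static void main(String[] args) {
            long x = coldSetup();
            for (int i = 0; i < 10000000; i++) {
                x = hot(x);
            }
            System.out.println(x); // use the result so the loop isn't dead code
        }
    }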

  7. Robert Dinse Says:

    Just-in-time compilers have the overhead of compiling prior to executing the code; a binary object doesn’t have that overhead. For many applications, the compilation overhead is going to outweigh any microarchitecture tuning advantage. More often than not where x86 is concerned, you’re talking about specialized instructions for video decoding or 3D graphics, not the general-purpose operations used by platform-independent code like Java. With respect to Linux: grab the source and compile it yourself, optimized for the platform you’re compiling on. Yes, most distributions compile for the lowest common denominator, but truth be told, in most areas that won’t matter. For the kernel and critical code, though (anything floating-point or graphics intensive), compile it yourself for the best performance.

  8. Elliotte Rusty Harold Says:

    For many applications, yes. However, for any long-running application the JIT overhead is negligible. JIT compilation time is fixed and finite, and a long-running app like a server amortizes it over a long enough period that you won’t notice it.
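
    A rough sketch of that amortization (illustrative only; a real measurement needs far more care):

    public class WarmUp {
        // Repeatedly time the same batch of work. The early rounds pay for
        // interpretation and JIT compilation; later rounds run compiled code.
        static double work(double x) {
            for (int i = 0; i < 1000; i++) x = Math.sqrt(x + i);
            return x;
        }

        public static void main(String[] args) {
            for (int round = 0; round < 5; round++) {
                long t0 = System.nanoTime();
                double r = 0;
                for (int i = 0; i < 10000; i++) r += work(i);
                long t1 = System.nanoTime();
                System.out.println("round " + round + ": "
                        + (t1 - t0) / 1000000 + " ms (" + r + ")");
            }
        }
    }

    Typically the first round is noticeably slower; once compilation settles, the later rounds, which are all a long-running server ever sees, run at full speed.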
