Monday, May 31, 2010

Kicking JRuby Performance Up a Notch

We've often talked about the benefit of running on the JVM, and while our performance numbers have been *good*, they've never been as incredibly awesome as a lot of people expected. After all, regardless of how well we performed when compared to other Ruby implementations, we weren't as fast as statically-typed JVM languages.

It's time for that to change.

I've started experimenting with optimizations based on runtime information in JRuby. You may know that JRuby has always had a JIT (just-in-time compiler), which lazily compiles the Ruby AST into JVM bytecode. What you may not know is that unlike most other JIT-based systems, we never gathered any information at runtime that might help the eventual compilation produce a better result. All we applied were the same static optimizations we could safely do for AOT compilation (ahead-of-time, as with jrubyc); the JIT mode simply deferred compilation to avoid the startup cost of compiling everything before it runs.

This was a pragmatic decision; the JVM itself does a lot to boost performance, even with our very naïve compiler and our limited static optimizations. Because of that, our performance has been perfectly fine for most users; we haven't even made a concerted effort to improve execution speed for almost 18 months, since the JRuby 1.1.6 release. We've spent far more time on the issues users actually wanted us to work on: better Java integration, specific Ruby incompatibilities, memory reduction (and occasional leaks), peripheral libraries like jruby-rack and activerecord-jdbc, and general system stability. When we asked users what we should focus on, performance always came up as a "nice to have" but rarely as something people had a problem with. We were already performing well enough.

These days, however, we're recognizing a few immutable truths:

  • You can never be fast enough, and if your performance doesn't steadily improve people will feel like you're moving backward.
  • People eventually *do* want Ruby code to run as fast as C or Java, and if we can make it happen we should.
  • JRuby users like to write in Ruby, obviously, so we should make an effort to allow writing more of JRuby in Ruby, which requires that we suffer no performance penalty in the process.
  • Working on performance can be a dreadful time sink, but succeeding in improving it can be a tremendous (albeit shallow) ego boost.

So along with other work for JRuby 1.6, I'm back in the compiler.

I'm just starting to play with this stuff, so take all my results as highly preliminary.

A JRuby Call Site Primer

At each location in Ruby code where a dynamic call happens, JRuby installs what's called a call site. Other VMs may simply refer to the location of the call as the "call site", but in JRuby, each call site has an implementation of org.jruby.runtime.CallSite associated with it. So given the following code:

def fib(a)
  if a < 2
    a
  else
    fib(a - 1) + fib(a - 2)
  end
end

There are six call sites: the "<" call for "a < 2", the two "-" calls for "a - 1" and "a - 2", the two "fib" calls, and the "+" call for "fib(a - 1) + fib(a - 2)". The naïve approach to doing a dynamic call would be to query the object for the named method every time and then invoke it, like the dumbest possible reflection code might do. In JRuby (like most dynamic language VMs) we instead have a call site cache at each call site to blunt that lookup cost. And in implementation terms, that means each caching CallSite is an instance of org.jruby.runtime.callsite.CachingCallSite. Easy, right?

The simplest way to take advantage of runtime information is to use these caching call sites as hints for what method we've been calling at a given call site. Let's take the example of "+". The "+" method here is being called against a Fixnum object every time it's encountered, since our particular run of "fib" never overflows into Bignum and never works with Float objects. In JRuby, Fixnum is implemented with org.jruby.RubyFixnum, and the "+" method is bound to RubyFixnum.op_plus. Here's the implementation of op_plus in JRuby:
    @JRubyMethod(name = "+")
    public IRubyObject op_plus(ThreadContext context, IRubyObject other) {
        if (other instanceof RubyFixnum) {
            return addFixnum(context, (RubyFixnum)other);
        }
        return addOther(context, other);
    }

The details of the actual addition in addFixnum are left as an exercise for the reader.

Notice that we have an annotation on the method: @JRubyMethod(name = "+"). All the Ruby core classes implemented in JRuby are tagged with a similar annotation, where we can specify the minimum and maximum numbers of arguments, the Ruby-visible method name (e.g. "+" here, which is not a valid Java method name), and other details of how the method should be called. Notice also that the method takes two arguments: the current ThreadContext (thread-local Ruby-specific runtime structures), and the "other" argument being added.
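
For illustration, here's what an annotated method with more interesting arity metadata might look like. This is a hypothetical example, not a method from JRuby's core classes, though required and optional are real attributes of the annotation:

    import org.jruby.anno.JRubyMethod;
    import org.jruby.runtime.ThreadContext;
    import org.jruby.runtime.builtin.IRubyObject;

    public class ExampleMethods {
        // One required argument plus one optional argument, exposed to Ruby
        // under a name that wouldn't be a legal Java identifier.
        @JRubyMethod(name = "round_to!", required = 1, optional = 1)
        public IRubyObject roundTo(ThreadContext context, IRubyObject[] args) {
            // With a variadic signature, the generated invoker checks the
            // 1..2 arity before this body ever runs.
            return args[0];
        }
    }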

When we build JRuby, we scan the JRuby codebase for JRubyMethod annotations and generate what we call invokers, one for every unique class+name combination in the core classes. These invokers are all instances of org.jruby.internal.runtime.methods.DynamicMethod, and they carry various informational details about the method as well as implementing various "call" signatures. The reason we generate an invoker class per method is simple: most JVMs will not inline through code used for many different code paths, so if we only had a single invoker for all methods in the system...nothing would ever inline.
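
If you've never seen one, a generated invoker boils down to something like this hand-written approximation (simplified stand-in types again; the real ones subclass DynamicMethod and implement several call arities):

    // Stand-in for RubyFixnum, just to make the sketch self-contained.
    class FixnumStandIn {
        final long value;
        FixnumStandIn(long value) { this.value = value; }
        FixnumStandIn op_plus(FixnumStandIn other) {
            return new FixnumStandIn(value + other.value);
        }
    }

    // One invoker per class+name pair. Because the body mentions exactly one
    // concrete target, the JVM sees a monomorphic call it is happy to inline;
    // a single shared reflective invoker would be megamorphic and never inline.
    class FixnumOpPlusInvoker {
        Object call(Object self, Object arg) {
            return ((FixnumStandIn) self).op_plus((FixnumStandIn) arg);
        }
    }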

So, putting it all together: at runtime, we have a CachingCallSite instance for every method call in the code. The first time we call "+", for example, we go out and fetch the method object and cache it in place for future calls. That cached method object is an instance of a DynamicMethod subclass, which knows how to do the eventual invocation of RubyFixnum.op_plus and return the desired result. Simple!

First Steps Toward Runtime Optimization

The changes I'm working on locally will finally take advantage of the fact that after running for a while, we have a darn good idea what methods are being called. And if we know what methods are being called, those invocations no longer have to be dynamic, provided our assumptions hold (assumptions like "we're always calling against Fixnums" or "it's always the core implementation of '+'"). Put differently: because JRuby defers its eventual compilation to JVM bytecode, we can potentially turn dynamic calls into static calls.

The process is actually very simple, and since I just started playing with it two days ago, I'll describe the steps I took.

Step One: Turn Dynamic Into Static

First, I needed the generated methods to bring along enough information for me to do a direct call. Specifically, I needed to know what target Java class they pointed at (RubyFixnum in our "+" example), what the Java name of the method was (op_plus), what arguments the method signature expected to receive (returning IRubyObject and receiving ThreadContext, IRubyObject), and whether the implementation was static (as are most module-based methods...but our "op_plus" is not). So I bundled this information into what I called a "NativeCall" data structure, populated on any invokers for which a native call might be possible.
    public static class NativeCall {
        private final Class nativeTarget;
        private final String nativeName;
        private final Class nativeReturn;
        private final Class[] nativeSignature;
        private final boolean statik;
        ...

In the compiler, I added a similar bit of logic. When compiling a dynamic call, it will now check whether the call site has already cached some method. If so, it checks whether that method has a corresponding NativeCall data structure associated with it. If it does, then instead of emitting the usual dynamic call logic, it compiles a normal, direct JVM invocation. So where we used to have a call like this:
INVOKEVIRTUAL org/jruby/runtime/CallSite.call

We now have a call like this:
INVOKEVIRTUAL org/jruby/RubyFixnum.op_plus

This call to "+" is now essentially equivalent to writing the same code in Java against our RubyFixnum class.

This comprises the first step: get the compiler to recognize and "make static" dynamic calls we've seen before. My experimental code is able to do this for most core class methods right now, and it will not be difficult to extend it to any call.
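
In compiler terms, the new logic amounts to something like the following sketch. None of these names are JRuby's actual compiler API; they're hypothetical stand-ins for the shape of the decision:

    // The compiler's view of what a call site has cached so far.
    class NativeCallInfo {
        String owner;      // e.g. "org/jruby/RubyFixnum"
        String javaName;   // e.g. "op_plus"
        String descriptor; // the JVM method descriptor to emit
        boolean isStatic;  // static methods compile to INVOKESTATIC
    }

    class SiteInfo {
        NativeCallInfo cachedNativeCall; // null if nothing usable was cached
    }

    interface Emitter {
        void invokevirtual(String owner, String name, String desc);
        void invokestatic(String owner, String name, String desc);
        void emitDynamicCall(); // the normal CallSite.call pipeline
    }

    class CallCompilerSketch {
        // If the site already resolved to a native method, emit a direct JVM
        // invocation; otherwise fall back to the dynamic call logic.
        void compileCall(SiteInfo site, Emitter asm) {
            NativeCallInfo nc = site.cachedNativeCall;
            if (nc == null) {
                asm.emitDynamicCall();
            } else if (nc.isStatic) {
                asm.invokestatic(nc.owner, nc.javaName, nc.descriptor);
            } else {
                asm.invokevirtual(nc.owner, nc.javaName, nc.descriptor);
            }
        }
    }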

(The astute VM implementer will notice I made no mention of inserting a guard or test before the direct call, in case we later need to call a different method. This is, for the moment, intentional. I'll come back to it.)

Step Two: Reduce Some Fixnum Use

The second step I took was to allow some call paths to use primitive values rather than boxed RubyFixnum objects. RubyFixnum in JRuby always represents a 64-bit long value, but since our call path only supports invocation with IRubyObject, we generally have to construct a new RubyFixnum for all dynamic calls. These objects are very small and usually very short-lived, so they don't impact GC times much. But they do impact allocation rates; we still have to grab the memory space for every RubyFixnum, and so we're constantly chewing up memory bandwidth to do so.

There are various ways in which VMs can eliminate or reduce object allocations like this: stack allocation, value types, true fixnums, escape analysis, and more. But of these, only escape analysis is available on current JVMs, and it's a very fragile optimization: all paths that would consume an object must be completely inlined, or else the object can't be elided.
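
A tiny illustration (hypothetical code, not JRuby's) of why it's fragile: the two Box allocations below can be scalar-replaced only if every use of them inlines into the caller. One failed inline anywhere along the path, and the allocations stay:

    // The boxes never escape add(), so escape analysis *may* eliminate them,
    // but only when the whole consuming path has been inlined.
    class Box {
        final long v;
        Box(long v) { this.v = v; }
    }

    class EscapeAnalysisExample {
        static long add(long a, long b) {
            Box x = new Box(a); // elidable only while x never escapes
            Box y = new Box(b);
            return x.v + y.v;
        }
    }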

So to help reduce the burden on the JVM, we have a couple call paths that can receive a single long or double argument where it's possible for us to prove that we're passing a boxed long or double value (i.e. a RubyFixnum or RubyFloat). The second step I took was to make the compiler aware of several of these methods on RubyFixnum, such as the long version of op_plus:
    public IRubyObject op_plus(ThreadContext context, long other) {
        return addFixnum(context, other);
    }

Since we have not yet implemented more advanced means of proving a given object is always a Fixnum (like doing local type propagation or per-variable type profiling at runtime), the compiler currently can only see that we're making a call with a literal value like "100" or "12.5". As luck would have it, there are three such cases in the "fib" implementation above: one for the "<" call and two for the "-" calls. By combining the compiler's knowledge of which actual method is being called in each case with its knowledge of how to make a call without standing up a RubyFixnum object, our actual call in the bytecode gets simplified further:
INVOKEVIRTUAL org/jruby/RubyFixnum.op_minus (Lorg/jruby/runtime/ThreadContext;J)

(For the uninitiated, that "J" at the end of the signature means this call is passing a primitive long for the second argument to op_minus.)
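
The compiler-side check is again simple; roughly the following (hypothetical names, as before):

    // AST node for a literal Fixnum the compiler can see statically.
    class FixnumLiteralNode {
        final long value;
        FixnumLiteralNode(long value) { this.value = value; }
    }

    interface PrimitiveEmitter {
        void pushLong(long v); // loads a primitive long on the operand stack
        void invokevirtual(String owner, String name, String desc);
    }

    class PrimitiveArgCompiler {
        void compileMinusCall(Object argNode, PrimitiveEmitter asm) {
            if (argNode instanceof FixnumLiteralNode) {
                // Pass the raw long; no RubyFixnum is allocated for the argument.
                asm.pushLong(((FixnumLiteralNode) argNode).value);
                asm.invokevirtual("org/jruby/RubyFixnum", "op_minus",
                        "(Lorg/jruby/runtime/ThreadContext;J)Lorg/jruby/runtime/builtin/IRubyObject;");
            } else {
                // General case: evaluate the argument as an object and use the
                // IRubyObject-receiving path (elided here).
            }
        }
    }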

We now have the potential to insert additional compiler smarts about how to make a call more efficiently. This is a simple case, of course, but consider the potential for another case: calling from Ruby to arbitrary Java code. Instead of encumbering that call with all our own multi-method dispatch logic *plus* the multiple layers of Java's reflection API, we can instead make the call *directly*, going straight from Ruby code to Java code with no intervening code. And with no intervening code, the JVM will be able to inline arbitrary Java code straight into Ruby code, optimize it as a whole. Are we getting excited yet?

Step Three: Steal a Micro-optimization

There are two pesky calls remaining in "fib": the recursive invocations of "fib" itself. Now the smart thing to do would be to add logic to the compiler that's aware of calls from currently-jitting Ruby code into not-yet-jitted Ruby code and handles them accordingly. For example, we might force those methods to compile, and the methods they call, and so on, allowing us to optimize from a "hot" method outward to potentially "colder" methods within some threshold distance. And of course, that's probably what we'll do; it's trivial to trigger any Ruby method in JRuby to JIT at any time, so adding this to the compiler will be a few minutes' work. But since I've only been playing with this since Thursday, I figured I'd do an even smarter thing: cheat.

Instead of adding the additional compiler smarts, I threw in a dirt-simple, single-optimization subset of those smarts: detect self-recursion and optimize that case alone. In JRuby terms, this meant simply seeing that the "fib" CachingCallSite objects pointed back to the same "fib" method we were currently compiling. Instead of plumbing them through the dynamic pipeline, we make them direct recursive calls, similar to the core method calls to "<" and "-" and "+". So our fib calls turn into the following bytecode-level invocation:
INVOKESTATIC ruby/jit/fib_ruby_B52DB2843EB6D56226F26C559F5510210E9E330D.__file__

Where "B52DB28..." is the SHA1 hash of the "fib" method, and __file__ is the default entry point for jitted calls.

Everything Else I'm Not Telling You

As with any experiment, I'm bending compatibility a bit here to show how far we can move JRuby's "upper bound" on performance. So these optimizations currently include some caveats.

They currently damage Ruby backtraces, since by skipping the invokers we're no longer tracking Ruby-specific backtrace information (current method, file, line, etc) in a heap-based structure. This is one area where JRuby loses some performance, since the JVM is actually double-tracking both the Java backtrace information and the Ruby backtrace information. We will need to do a bit more work toward making the Java backtrace actually show the relevant Ruby information, or else generate our Ruby backtraces by mining the Java backtrace (and the existing "--fast" flag actually does this already, matching normal Ruby traces pretty well).

The changes also don't track the data necessary for managing "backref" and "lastline" data (the $~ and $_ pseudo-globals), which means that regular expression matches or line reads might have reduced functionality in the presence of these optimizations (though you might never notice; those variables are usually discouraged). This behavior can be restored by clever use of thread-local stacks (only pushing down the stack when in the presence of a method that might use those pseudo-globals), or by simply easing back the optimizations if those variables are likely to be used. Ideally we'll be able to use runtime profiling to make a smart decision, since we'll know whether a given call site has ever encountered a method that reads or writes $~ or $_.
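
The thread-local-stack idea would look something like this sketch (hypothetical code, kept Java 6-friendly): frames get pushed only around calls into methods known to touch $~ or $_:

    import java.util.ArrayDeque;

    class PseudoGlobalFrame {
        Object backref;  // $~
        Object lastline; // $_
    }

    class PseudoGlobals {
        private static final ThreadLocal<ArrayDeque<PseudoGlobalFrame>> STACK =
                new ThreadLocal<ArrayDeque<PseudoGlobalFrame>>() {
                    @Override
                    protected ArrayDeque<PseudoGlobalFrame> initialValue() {
                        return new ArrayDeque<PseudoGlobalFrame>();
                    }
                };

        // A caller brackets only the calls that need frame data:
        //   PseudoGlobals.push();
        //   try { /* call a method that reads or writes $~ or $_ */ }
        //   finally { PseudoGlobals.pop(); }
        static void push() { STACK.get().push(new PseudoGlobalFrame()); }
        static void pop()  { STACK.get().pop(); }
        static PseudoGlobalFrame current() { return STACK.get().peek(); }
    }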

Finally, I have not inserted the necessary guards before these direct invocations that would branch to a normal dynamic call. This was again a pragmatic decision to speed the progress of my experiment...but it also raises an interesting question. What if we knew, without a doubt, that our code had seen all the types and methods it would ever see? Wouldn't it be nice to say "optimize yourself to death" and know it's doing so? While I doubt we'll ship a JRuby without some guard in place (ideally a simple "method ID" comparison, which I wouldn't expect to impact performance much), I think it's almost assured that we'll allow users to "opt in" to completely "statickifying" specific sections of code. And it's likely that if we started moving more of JRuby's implementation into Ruby code, we would take advantage of "fully" optimizing as well.
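
The guard I have in mind is about this cheap (a sketch with hypothetical types): one long comparison on the fast path, with the dynamic pipeline as the fallback:

    interface GuardedReceiver {
        long methodSerial(String name);    // cheap "method ID" for the guard
        Object op_plus_direct(Object arg); // the compiled-in direct target
    }

    class GuardedCallSketch {
        final long expectedSerial; // serial of the method we compiled against

        GuardedCallSketch(long expectedSerial) {
            this.expectedSerial = expectedSerial;
        }

        Object call(GuardedReceiver self, Object arg) {
            if (self.methodSerial("+") == expectedSerial) {
                return self.op_plus_direct(arg); // direct, inlinable fast path
            }
            return dynamicCall(self, "+", arg);  // guard failed: full dispatch
        }

        Object dynamicCall(GuardedReceiver self, String name, Object arg) {
            // normal CachingCallSite path (elided in this sketch)
            throw new UnsupportedOperationException("dynamic path elided");
        }
    }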

With all these caveats in mind, let's take a look at some numbers.

And Now, Numbers Meaningless to Anyone Not Using JRuby To Calculate Fibonacci Numbers

The "fib" method is dreadfully abused in benchmarking. It's largely a method call benchmark, since most calls spawn at least two more recursive calls and potentially several others if numeric operations are calls as well. In JRuby, it doubles as an allocation-rate or memory-bandwidth benchmark, since the bulk of our time is spent constructing Fixnum objects or updating per-call runtime data structures.

But since it's a simple result to show, I'm going to abuse it again. Please keep in mind these numbers are meaningless except in comparison to each other; JRuby is still cranking through a lot more objects than implementations with true Fixnums or value types, and the JVM is almost certainly over-optimizing portions of this benchmark. But of course, that's part of the point; we're making it possible for the JVM to accelerate and potentially eliminate unnecessary computation, and it's all becoming possible because we're using runtime data to improve runtime compilation.

I'll be using JRuby 1.6.dev (master plus my hacks) on OS X Java 6 (1.6.0_20) 64-bit Server VM on a MacBook Pro Core 2 Duo at 2.66GHz.

First, the basic JRuby "fib" numbers with full Ruby backtraces and dynamic calls:
  0.357000   0.000000   0.357000 (  0.290000)
  0.176000   0.000000   0.176000 (  0.176000)
  0.174000   0.000000   0.174000 (  0.174000)
  0.175000   0.000000   0.175000 (  0.175000)
  0.174000   0.000000   0.174000 (  0.174000)

Now, with backtraces calculated from Java backtraces, dynamic calls restructured slightly to be more inlinable (but still dynamic and via invokers), and limited use of primitive call paths. Essentially the fastest we can get in JRuby 1.5 without totally breaking compatibility (and with some minor breakages, in fact):
  0.383000   0.000000   0.383000 (  0.330000)
  0.109000   0.000000   0.109000 (  0.108000)
  0.111000   0.000000   0.111000 (  0.112000)
  0.101000   0.000000   0.101000 (  0.101000)
  0.101000   0.000000   0.101000 (  0.101000)

Now, using all the runtime optimizations described above. Keep in mind these numbers will probably drop a bit with guards in place (but they're still pretty fantastic):
  0.443000   0.000000   0.443000 (  0.376000)
  0.035000   0.000000   0.035000 (  0.035000)
  0.036000   0.000000   0.036000 (  0.036000)
  0.036000   0.000000   0.036000 (  0.036000)
  0.036000   0.000000   0.036000 (  0.036000)


And a couple comparisons of another mostly-recursive, largely-abused benchmark target: the tak function...

Static optimizations only, as fast as we can make it:
  1.750000   0.000000   1.750000 (  1.695000)
  0.779000   0.000000   0.779000 (  0.779000)
  0.764000   0.000000   0.764000 (  0.764000)
  0.775000   0.000000   0.775000 (  0.775000)
  0.763000   0.000000   0.763000 (  0.763000)

And with runtime optimizations:
  0.899000   0.000000   0.899000 (  0.832000)
  0.331000   0.000000   0.331000 (  0.331000)
  0.332000   0.000000   0.332000 (  0.332000)
  0.329000   0.000000   0.329000 (  0.329000)
  0.331000   0.000000   0.331000 (  0.332000)

So for these trivial benchmarks, a couple hours of hacking in JRuby's compiler has produced numbers 2-3x faster than the fastest JRuby performance we've achieved thus far. Understanding why requires one last section.

Hooray for the JVM

There's a missing trick here I haven't explained: the JVM is still doing most of the work for us.

In the plain old dynamic-calling, static-optimized case, the JVM is dutifully optimizing everything we throw at it. It's taking our Ruby calls, inlining the invokers and their eventual calls, inlining core class methods (yes, this means we've always been able to inline Ruby into Ruby or Java into Ruby or Ruby into Java, given the appropriate tweaks), and optimizing things very well. But here's the problem: the JVM's default settings are tuned for optimizing *Java*, not for optimizing Ruby. In Java, a call from one method to another has exactly one hop. In JRuby's dynamic calls, it may take three or four hops, bouncing through CallSites and DynamicMethods to eventually get to the target. Since Hotspot only inlines up to 9 levels of calls by default, you can see we're eating up that budget very quickly. Add to that the fact that Java to Java calls have no intervening code to eat up bytecode size thresholds, and we're essentially only getting a fraction of the optimization potential out of Hotspot that a "simpler" language like Java does.

Now consider the dynamically-optimized case. We've eliminated both the CallSite and the DynamicMethod from most calls, even if we have to insert a bit of guard code to do it. That means nine levels of Ruby calls can inline, or nine levels of logic from Ruby to core methods or Ruby to Java. We've now given Hotspot a much better picture of the system to optimize. We've also eliminated some of the background noise of doing a Ruby call, like updating rarely-used data structures. We'll need to ensure backtraces come out in some usable form, but at least we're not double-tracking them.

The bottom line here is that instead of doing all the much harder work of adding inlining to our own compiler, all we need to do is let the JVM do it, and we benefit from the years of work that have gone into current JVMs' optimizing compilers. The best way to solve a hard problem is to get someone else to solve it for you, and the JVM does an excellent job of solving this particular hard problem. Instead of giving the JVM a fish *or* teaching it how to fish, we're just giving it a map of the lake; it's already an expert fisherman.

There's also another side to these optimizations: our continued maintenance of an interpreter is starting to pay off. Other JVM languages that don't have an interpreted mode have a much more difficult time doing any runtime optimization; simply put, it's really hard to swap out bytecode you've already loaded unless you're explicitly abstracting how that bytecode gets generated in the first place. In JRuby, where methods have always had to switch from interpreted to jitted at runtime, we can hop back and forth much more freely, optimizing along the way.

That's all for now. Don't expect to download JRuby 1.6 in a few months and see everything be three times (or ten times! or 100 times!) faster. There are a lot of details we need to work out about when these optimizations will be safe, how users might be able to opt in to "full" optimization if necessary, and whether we'll get things fast enough to start replacing core code with Ruby. But early results are very promising, and it's assured we'll ship some of this very soon.

9 comments:

Anonymous said...

Hey Charles.

You are doing heroic work here but...

Maybe instead of trying to wring the last bit of performance out of ruby you'd be better off making duby viable and building a tool that could convert ruby code to duby code.

Even if it only worked 80% it would be great especially if you could rewrite sections of slow ruby in duby.

Just sayin.

Piyush said...

Go JRuby!!!

If ruby goes 100 times faster than what it is right now, it is freaking going to change this freaking world! Those who say performance is good to have are probably correct from their POV but not from a language POV. If ruby is much faster than what it is right now it will open up vistas which are not even there for Ruby right now.

Waiting eagerly for more insights into jruby optimizations

Unknown said...

Wow!

I'm moving from Java to JRuby and I'm amazed by your work!

It's so good to know JRuby is growing and becoming a more and more viable business solution every day!

Please keep up the good work, Charles!

Unknown said...

Charles,

I saw you twitter about how hard it is to write about jruby internals. Rest assured you did a fantastic job. I followed everything that you wrote. It's fascinating to follow along for the ride. I wish I had time to be a language hacker. Alas business responsibilities pull me away.

Mark

Roger Pack said...

Interesting work.
I like the part that was "at user request" could be optimized.

Reminds me of a discussion I had recently with the Rubinius guys

http://groups.google.com/group/rubinius-dev/browse_thread/thread/de4472f7b4ec7778

the gist of it is basically "maybe the user can give simplifying hints to the VM, like don't worry about integer overflows or 'I am done doing any meta programming'" or (perhaps, for jruby) something like "I am not worried about Integer overflow" or "please convert this all to duby" or what not.

Also of interest might be some of the optimizations that ruby2cext does:

http://github.com/rdp/crystalizer/tree/master/lib/ruby2cext/plugins

(inlining case statements when they're all integers etc. etc.)

Anyway, keep up the good work.

Charles Oliver Nutter said...

Anonymous: Duby is a great option, but it's certainly not Ruby. You can't do runtime metaprogramming with it, and it requires libraries to be very statically available and loaded. It would be tricky to write something as complex as Rails for Duby, even if we do get more compile-time metaprogramming elements into the language.

That said, I think we'll see JRuby and Duby approach each other. Duby has recently gained dynamic invocation (via Java 7 invokedynamic), and now JRuby is gaining dynamic optimizations that can approach static performance. Both approaches are valid and both are interesting to me :)

Roger Pack: Yes, the optimizations we're starting to do now are similar to what ruby2cext can do and similar to what Rubinius has started to do. We should be able to do almost all the same optimizations both of those do while also leveraging the JVM's own optimizations. For your loop example, using these dynamic optimizations improves the time from 9s to around 3.5s. That's very impressive considering that we still construct Fixnum objects (Rubinius uses tagged pointers) and we're not eliminating the loop (which does nothing). For real math algorithms, these preliminary results perform comparably to the best new Ruby implementations. So I think we're definitely on the right track, and there's a lot more we can do.

Ulven said...

Thanks for your brilliant work!!
We're happily doing webapps in jruby on rails nowadays. Easy integration with enterprisey java systems in the back, makes us more productive and gives us more fun :)

Keep on doing your magic :)

Anonymous said...

Ok, JVM will help and you can help the JVM. Now for a real performance improvement.

What if I want to virtualize a Ruby-Rails application transparently over 1000 AWS LINUX instances all running LINUX VM's. Putting it another way, how about 1000's of VM's distributed over both local and cloud servers.

Of course I'd like to do this without having to change my Ruby-Rails application which I might develop on a single LINUX server. No pressure here.

Starting at the JVM level, what parts of the software stack will have to change to accommodate this kind of transparent architecture?

johnswolter@wolterworks.com

Roger Pack said...

Is this stuff in trunk yet? :)
-r