Wednesday, June 16, 2010

My Short List of Key Missing JVM Features

I mused today on Twitter that there's just a few small things that the JVM/JDK need to become a truly awesome platform for all sorts of development. Since so many people asked for more details, I'm posting a quick list here. There's obviously other things, but these are the ones on my mind today.


Cold Performance

Current JVMs start up pretty fast, and there's changes coming in Hotspot in Java 7 that will make them even better. Usually this comes from combinations of pre-verifying bytecode (or providing verification hints), sharing class data across processes, and run-of-the-mill tweaks to make the loading and linking processes more efficient. But for many apps, this doesn't do anything to solve the biggest startup hit of all: cold execution performance. Because the JVM doesn't save off the jitted products of each run, it must start "cold" every time, running everything in the bytecode interpreter until it gets hot enough to compile. Even on JVMs that don't have an interpreter, the initial cost of compiling everything to not-particularly-optimized assembly also causes a major startup hit (try running command-line stuff on JRockit or J9).

There's a few things people have suggested, and they're all hard:
  • Tiered compilation - compile earlier using the fastest-possible, least-optimizing compiler, but decorate the compiled code with appropriate profiling logic to do a better job later. Hotspot in Java 7 may ship a tiered compiler, but there have been some resource setbacks that delayed its development.
  • Save off compilation or optimization artifacts - this is theoretically possible, but the deeper you go the harder it is to save it. Usually the in-memory results of optimization and compilation depend on the layout of things...in memory. Saving them to disk means you need to scrub out anything that might be different in a new process like memory addresses and class identities. But .NET can do this, though it largely *just* does static compilation. Happy medium?
  • Keep a JVM process running and toss it new work. We do this in JRuby with the Nailgun library, but it has some problems. First off, it can leave various aspects of the JVM in a dirty state, like system properties and memory footprint. Second, it can't kill off rogue threads that don't terminate, so they can collect over time. And third...it's not actually running at the console, so a lot of console things you'd do normally don't work.
This is probably the biggest unsolvable problem for JRuby right now, and the one we most often have to apologize for. JRuby is fast...at times, very fast...and getting faster every day. But not during the first 5 seconds, and so everyone gets the same bad impression.

Better Console/Terminal Support

There's endless blogs out there complaining about how the standard IO streams you get from the JVM are crippled in various ways. You can't select on them, for example, which is the source of a few unfixable bugs in JRuby. You can't pass them along to subprocesses, which is perhaps more a failing of the process-launching APIs in the JDK than standard IO itself. There's no direct terminal support in any of Java's APIs, so people end up yanking in libraries like jline just to support line editing. If the JDK shipped with some nice terminal and process APIs, a lot of the hassles developers have writing command-line tools in Java would melt away.

There's some light at the end of the tunnel. NIO2, scheduled to be part of Java 7, will bring better process launching APIs (with inherited standard IO streams, if you desire), a broader range of selectable channels, and much more. Hopefully it will be enough.
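
As a sketch of what that could look like from Java, here's a tiny launcher written against the process-launching improvements proposed for Java 7 (inheritIO being the piece command-line tools care most about). Treat it as an assumption about the final API rather than a promise:

import java.io.IOException;

public class InheritedIoDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Launch a subprocess whose stdin/stdout/stderr are the parent JVM's own
        // terminal streams, so interactive tools (pagers, editors, REPLs) behave normally.
        Process p = new ProcessBuilder("ls", "-l")
                .inheritIO()   // proposed Java 7 API: wire all three standard streams through
                .start();
        System.exit(p.waitFor());
    }
}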

Fix the Busted APIs

JDBC is broken. Why? Because you have to register your driver in a global hard-referencing hash, and have to unregister it from the same classloader or it will leak. That means that if you're loading JDBC drivers from within a webapp or EE application, *your entire application remains in memory* because the driver references it and that map references the driver. This is the primary reason why most Java web application servers leak memory on undeploy, and it's another "unfixable" from JRuby's perspective.
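
For completeness, here's the sort of workaround webapps are forced into today: a hypothetical ServletContextListener (not tied to any particular framework) that deregisters whatever drivers the webapp's own classloader registered, so DriverManager's global map stops pinning the application in memory.

import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class JdbcDriverCleanupListener implements ServletContextListener {
    public void contextInitialized(ServletContextEvent sce) { }

    public void contextDestroyed(ServletContextEvent sce) {
        ClassLoader webappLoader = Thread.currentThread().getContextClassLoader();
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            // Only touch drivers our classloader loaded; leaving them registered
            // keeps a hard reference from DriverManager back to this webapp.
            if (driver.getClass().getClassLoader() == webappLoader) {
                try {
                    DriverManager.deregisterDriver(driver);
                } catch (SQLException ignored) {
                }
            }
        }
    }
}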

Object serialization is broken. Why? Because it plays all sorts of tricks to get your classloader, reflectively access fields (if you're going to reflectively access them anyway, why not just break encapsulation if security allows it), and construct object instances without allowing you the opportunity to initialize them appropriately yourself. You have to provide no-arg constructors, have to un-final fields so they can be set up outside of construction, and heaven forbid you use default serialization: it's dead slow.
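
To make the complaint concrete, here's a minimal sketch (illustrative names only) of the hoops default serialization makes you jump through: the constructor never runs during deserialization, so transient state has to be rebuilt by hand in readObject.

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;

public class Session implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String user;
    private transient StringBuilder log;  // not worth serializing

    public Session(String user) {
        this.user = user;
        this.log = new StringBuilder();
    }

    // Deserialization skips this constructor entirely, so anything it would
    // normally set up must be recreated here.
    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();          // 'user' is restored reflectively
        this.log = new StringBuilder();  // transient state rebuilt by hand
    }
}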

Reflection is too slow and there's no way around it. Not only do you end up calling through many extra levels of logic for reflective invocation, you have to box your argument lists, box your numerics, and wrap everything in exception-handling. And it doesn't have to be this way. The invokedynamic work brings along with it method handles, which are fast, direct pointers to methods. This should have been added long ago, but thankfully it's on the way in Java 7. Until then, projects like JRuby will have to continue eating the cost of reflection...or generate method handles by hand. We do both.
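
For comparison, here's a minimal sketch of the two call paths, assuming the JSR-292 method handle API lands in Java 7 with roughly the java.lang.invoke shape it has been converging on (exact package and method names are still in flux):

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.reflect.Method;

public class HandleVsReflection {
    public static void main(String[] args) throws Throwable {
        // Classic reflection: boxed arguments, boxed return values, wrapper exceptions.
        Method m = String.class.getMethod("length");
        Object boxed = m.invoke("hello");  // returns an Integer

        // A method handle: a direct, typed pointer to the same method.
        MethodHandle h = MethodHandles.lookup()
                .findVirtual(String.class, "length", MethodType.methodType(int.class));
        int len = (int) h.invokeExact("hello");  // no boxing, no wrapper exceptions

        System.out.println(boxed + " " + len);
    }
}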

Regular expressions are broken. Why? Because simple alternations can blow the Java stack when fed especially large input. The current Sun-created regex implementation recurses for things like alternation, making it easy for it to fail to match on large input. The problem is so bad that we've actually switched regular expression engines in JRuby *four times*, including two implementations we wrote ourselves. Nobody can say we haven't bled for our users.
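
Here's a tiny, self-contained demonstration of the failure mode; the exact input size needed to blow the stack depends on the JDK version and thread stack size, so treat the numbers as illustrative:

import java.util.regex.Pattern;

public class RegexStackDemo {
    public static void main(String[] args) {
        // A repeated alternation compiles to recursive matching logic in the
        // JDK's regex engine, so large enough input overflows the Java stack.
        Pattern p = Pattern.compile("(a|b)*c");
        StringBuilder input = new StringBuilder();
        for (int i = 0; i < 100000; i++) input.append('a');
        input.append('c');

        try {
            System.out.println(p.matcher(input).matches());
        } catch (StackOverflowError e) {
            System.out.println("regex blew the stack on " + input.length() + " characters");
        }
    }
}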

And there's numerous other examples. Some are relics of Java 1.0 that never got corrected (because old APIs don't die, they just get deprecated...or ignored). Some are relics of the idea that gigantic monolithic servers hosting dozens of apps (and leaking memory when they undeploy, or else contending for basic resources that separate processes would not) are a good idea, when in actuality running multiple JVMs that each only host one or a few apps works far better. Making a real effort to smooth these bad APIs would go a long way.

Better Support for Native Libraries and POSIX Features

As of Java 6, there's still no support for working with symlinks and only limited support for setting file permissions. Process launching is absolutely terrible. You can't select on all channels...only on sockets. If you want to use a native library, you have to write JNI code to do it, even though there are libraries like JNA and JFFI in the wild that do an outstanding job of dynamically loading and binding those libraries.

Missing the POSIXy features is basically inexcusable today. Most of the system-level APIs in the JDK are still based on a lowest common denominator somewhere near Windows 95, even though all modern operating systems provide at least a substantial subset of those APIs. NIO2 will bring many improvements, but it's almost certain that some parts of POSIX won't be exposed, either because there's not enough resources to spec out the Java APIs for them or because they end up being too system-specific.
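
As a sketch of what filesystem sanity could look like, here's the kind of code the proposed NIO2 file APIs should allow, assuming Files, Paths, and the POSIX permission helpers ship in Java 7 roughly as currently drafted:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class PosixDemo {
    public static void main(String[] args) throws IOException {
        Path target = Paths.get("app.log");
        Path link = Paths.get("current.log");

        if (Files.notExists(target)) Files.createFile(target);

        // Symlinks: impossible through java.io.File, a single call here.
        Files.createSymbolicLink(link, target);

        // chmod-style permissions, expressed in familiar rwx notation.
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rw-r--r--");
        Files.setPosixFilePermissions(target, perms);
    }
}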

As for loading native libraries...this is again something that should have been rolled into the JDK a long time ago. Many people will cry foul..."pure Java!" they'll shout...and I agree. But there are times when some functionality simply doesn't exist in a Java library, or doesn't scale well as an out-of-process call. For these times, you just have to use the native library...and the barrier to entry for doing that on the Java platform is just too high. Roll JNA or JFFI or something similar into the JDK, so we grown-ups can choose when we want to use native code.
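
To show how low the barrier could be, here's a hypothetical JNA binding of a couple of libc functions: no JNI stubs, no C compiler, just a Java interface and a loadLibrary call. (This is the style JNA supports today; the point is that something of this shape belongs in the JDK.)

import com.sun.jna.Library;
import com.sun.jna.Native;

public class NativeDemo {
    // Map just the parts of libc we care about onto a plain Java interface.
    public interface CLib extends Library {
        CLib INSTANCE = (CLib) Native.loadLibrary("c", CLib.class);

        int getpid();                      // pid_t getpid(void)
        int chmod(String path, int mode);  // int chmod(const char *path, mode_t mode)
    }

    public static void main(String[] args) {
        System.out.println("my pid is " + CLib.INSTANCE.getpid());
    }
}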

The Punchline

The punchline is that for most of these things, we've solved or worked around them in JRuby...at *great expense*. I'd go so far as to say we've done more to work around the JVM's and the JDK's shortcomings than any other project. We've gone out of our way to improve startup by any means possible. We ship beautiful, solidly-performing libraries for binding native libs without writing a line of C code. We generate a large amount of code at compile time and runtime to avoid using reflection as much as possible. We maintain and ship native POSIX layers for a dozen platforms. One of our champions, Marcin Mielzynski, implemented our own regular expression engine (a port of Oniguruma). We've pulled every trick in the book to get process launching to work nicely and behave like Ruby users expect. And so on and so forth.

But it's not sustainable. We can't continue to patch around the JVM and JDK forever, even if we've done a great job so far. Hopefully this will serve as a wake-up call for JVM and JDK implementers around the world: If you don't want the Java platform to be a server-only, large-app-only, long-running-only, headless-only world...it's time to fix these things. I'm standing by to help coordinate those efforts :)

Tuesday, June 01, 2010

Restful Services in Ruby using JRuby and Jersey

There's lots of ways to present RESTful web services these days, and REST has obviously become the new "it's IPC no it's not" hotness. And of course Rubyists have been helping to lead the way, building restfulness into just about everything they write. Rails itself is built around REST, with most controllers doubling as RESTful interfaces, and it even provides extra tools to help you transparently make RESTful calls from your application. If you're doing RESTful services for a typical Ruby application, Rails is the way to go (and even if you're not using Ruby in general...you should be considering JRuby + Rails).

In the Java world the options aren't quite as clear, but one API has at least attempted to standardize the idea of doing RESTful services on the Java platform: JSR-311, otherwise known as JAX-RS.

JAX-RS in theory makes it easy for you to simply mark up a piece of Java code with annotations and have it automatically be presented as a RESTful service. Of the available options, it may be the simplest, quickest way to get a Java-based service published and running.

So I figured I'd try to use it from JRuby, and it turns out the Ruby version is surprisingly clean, even compared to the pure-Ruby options.

The Service

I followed the Jersey Getting Started tutorial using Ruby for everything (and not using Maven in this case).

My version of their HelloWorldResource looks like this in Ruby:

require 'java'

java_import 'javax.ws.rs.Path'
java_import 'javax.ws.rs.GET'
java_import 'javax.ws.rs.Produces'

java_package 'com.headius.demo.jersey'

java_annotation 'Path("/helloworld")'
class HelloWorld
  java_annotation 'GET'
  java_annotation 'Produces("text/plain")'
  def cliched_message
    "Hello World"
  end
end
Notice that we're using the new features in JRuby 1.5 for producing "real" Java classes: java_package to specify a target package for the Java class, java_annotation to specify class and method annotations.

We compile it using jrubyc from JRuby 1.5 like this:
~/projects/jruby ➔ jrubyc -c ../jersey-archive-1.2/lib/jsr311-api-1.1.1.jar --javac restful_service.rb
Generating Java class HelloWorld to /Users/headius/projects/jruby/com/headius/demo/jersey/HelloWorld.java
javac -d /Users/headius/projects/jruby -cp /Users/headius/projects/jruby/lib/jruby.jar:../jersey-archive-1.2/lib/jsr311-api-1.1.1.jar /Users/headius/projects/jruby/com/headius/demo/jersey/HelloWorld.java
The new --java(c) flags in jrubyc examine your source for any classes, spitting out .java source for each one in turn. Along the way, if you have marked it up with signatures, annotations, imports, and so on, it will emit those into the Java source as well. If you specify --javac (as opposed to --java), it will also compile the resulting sources for you with jruby and your user-specified jars in the classpath, as I've done here.

The result looks like this to Java:
~/projects/jruby ➔ javap com.headius.demo.jersey.HelloWorld
Compiled from "HelloWorld.java"
public class com.headius.demo.jersey.HelloWorld extends org.jruby.RubyObject{
public static org.jruby.runtime.builtin.IRubyObject __allocate__(org.jruby.Ruby, org.jruby.RubyClass);
public com.headius.demo.jersey.HelloWorld();
public java.lang.Object cliched_message();
static {};
}
Under the covers, this class will load in the source of our restful_service.rb file and wire up all the Ruby and Java pieces so that both sides see HelloWorld as the in-memory representation of the Ruby HelloWorld class. Method calls are dispatched to the Ruby code, constructors dispatch to initialize, and so on. It's truly living in both worlds.
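
To make the round trip concrete, here's a hypothetical Java-side caller for the generated class. It assumes jruby.jar and the compiled classes are on the classpath, and that restful_service.rb is still reachable at runtime, since the generated stub loads it to bootstrap the Ruby side.

public class CallFromJava {
    public static void main(String[] args) {
        // Instantiating the generated class spins up the Ruby side behind the scenes.
        com.headius.demo.jersey.HelloWorld hw = new com.headius.demo.jersey.HelloWorld();
        System.out.println(hw.cliched_message());  // dispatches into the Ruby method
    }
}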

With the service in hand, we now need a server script to start it up.

The Server Script

Continuing with the tutorial, I've taken their simple Java-based server script and ported it directly to Ruby:
require 'java'
java_import com.sun.jersey.api.container.grizzly.GrizzlyWebContainerFactory

base_uri = "http://localhost:9998/"
init_params = {
  "com.sun.jersey.config.property.packages" => "com.headius.demo.jersey"
}

puts "Starting grizzly"
thread_selector = GrizzlyWebContainerFactory.create(
  base_uri, init_params.to_java)

puts <<EOS
Jersey app started with WADL available at #{base_uri}application.wadl
Try out #{base_uri}helloworld
Hit enter to stop it...
EOS

gets

thread_selector.stop_endpoint

exit(0)
It's somewhat cleaner and shorter, but it wasn't particularly large to begin with. At any rate, it shows how simple it is to launch a Grizzly server and how nice annotation-based APIs can be for auto-configuring our JAX-RS service.

The CLASSPATH

Ahh CLASSPATH. You are so maligned when all you hope to do is make it explicit where libraries are coming from. The world should learn from your successes and your failures.

There's five jars required for this Jersey example to run. I've tossed them into my CLASSPATH env var, but you're free to do it however you like:
  • jersey-core-1.2.jar
  • jersey-server-1.2.jar
  • jsr311-api-1.1.1.jar
  • asm-3.1.jar
  • grizzly-servlet-webserver-1.9.9.jar
The first four are available in the jersey-archive download, and you can fetch the Grizzly jar from Maven or other places.

Testing It Out

The lovely bit about this is that it's essentially a four-step process to do the entire thing: write and compile the service, write the server script, set up CLASSPATH, and run the server. Here's the server output, finding my HelloWorld service right where it should be:
~/projects/jruby ➔ jruby main.rb
Starting grizzly
Jersey app started with WADL available at http://localhost:9998/application.wadl
Try out http://localhost:9998/helloworld
Hit enter to stop it...
Jun 1, 2010 6:36:55 PM com.sun.jersey.api.core.PackagesResourceConfig init
INFO: Scanning for root resource and provider classes in the packages:
com.headius.demo.jersey
Jun 1, 2010 6:36:55 PM com.sun.jersey.api.core.ScanningResourceConfig logClasses
INFO: Root resource classes found:
class com.headius.demo.jersey.HelloWorld
Jun 1, 2010 6:36:55 PM com.sun.jersey.api.core.ScanningResourceConfig init
INFO: No provider classes found.
Jun 1, 2010 6:36:55 PM com.sun.jersey.server.impl.application.WebApplicationImpl initiate
INFO: Initiating Jersey application, version 'Jersey: 1.2 05/07/2010 02:04 PM'
And perhaps the most anti-climactic climax ever, curling our newly-deployed service:
~ ➔ curl http://localhost:9998/helloworld
Hello World
It works!

What We've Learned

This was a simple example, but we demonstrated several things:
  • JRuby's new jrubyc --java(c) support for generating "real" Java classes
  • Tagging a Ruby class with Java annotations, so it can be seen by a Java framework
  • Booting a Grizzly server from Ruby code
  • Implementing a JAX-RS service with Jersey in Ruby code
Do you have any examples of other nice annotation-based APIs we could test out with JRuby?

Monday, May 31, 2010

Kicking JRuby Performance Up a Notch

We've often talked about the benefit of running on the JVM, and while our performance numbers have been *good*, they've never been as incredibly awesome as a lot of people expected. After all, regardless of how well we performed when compared to other Ruby implementations, we weren't as fast as statically-typed JVM languages.

It's time for that to change.

I've started playing with performing more optimizations based on runtime information in JRuby. You may know that JRuby has always had a JIT (just-in-time compiler), which lazily compiles Ruby AST into JVM bytecode. What you may not know is that unlike most other JIT-based systems, we did not gather any information at runtime that might help the eventual compilation produce a better result. All we applied were the same static optimizations we could safely do for AOT compilation (ahead-of-time, like jrubyc), and ultimately the JIT mode just deferred compilation to reduce the startup cost of compiling everything before it runs.

This was a pragmatic decision; the JVM itself does a lot to boost performance, even with our very naïve compiler and our limited static optimizations. Because of that, our performance has been perfectly fine for most users; we haven't even made a concentrated effort to improve execution speed for almost 18 months, since the JRuby 1.1.6 release. We've spent a lot more time handling the issues users actually wanted us to work on: better Java integration, specific Ruby incompatibilities, memory reduction (and occasional leaks), peripheral libraries like jruby-rack and activerecord-jdbc, and general system stability. When we asked users what we should focus on, performance always came up as a "nice to have" but rarely as something people had a problem with. We performed very nicely.

These days, however, we're recognizing a few immutable truths:

  • You can never be fast enough, and if your performance doesn't steadily improve people will feel like you're moving backward.
  • People eventually *do* want Ruby code to run as fast as C or Java, and if we can make it happen we should.
  • JRuby users like to write in Ruby, obviously, so we should make an effort to allow writing more of JRuby in Ruby, which requires that we suffer no performance penalty in the process.
  • Working on performance can be a dreadful time sink, but succeeding in improving it can be a tremendous (albeit shallow) ego boost.
So along with other work for JRuby 1.6, I'm back in the compiler.

I'm just starting to play with this stuff, so take all my results as highly preliminary.

A JRuby Call Site Primer

At each location in Ruby code where a dynamic call happens, JRuby installs what's called a call site. Other VMs may simply refer to the location of the call as the "call site", but in JRuby, each call site has an implementation of org.jruby.runtime.CallSite associated with it. So given the following code:

def fib(a)
  if a < 2
    a
  else
    fib(a - 1) + fib(a - 2)
  end
end

There are six call sites: the "<" call for "a < 2", the two "-" calls for "a - 1" and "a - 2", the two "fib" calls, and the "+" call for "fib(a - 1) + fib(a - 2)". The naïve approach to doing a dynamic call would be to query the object for the named method every time and then invoke it, like the dumbest possible reflection code might do. In JRuby (like most dynamic language VMs) we instead have a call site cache at each call site to blunt that lookup cost. And in implementation terms, that means each caching CallSite is an instance of org.jruby.runtime.callsite.CachingCallSite. Easy, right?
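
To make the idea concrete, here's a drastically simplified, plain-Java sketch of a monomorphic inline cache. None of these are JRuby's real classes (RubyClassish and the tiny DynamicMethod interface here are invented stand-ins for the real metaclass and method objects); it only shows the check-the-cache-then-call pattern that CachingCallSite implements.

import java.util.HashMap;
import java.util.Map;

public class InlineCacheSketch {
    interface DynamicMethod { Object call(Object self, Object[] args); }

    static class RubyClassish {
        final Map<String, DynamicMethod> methods = new HashMap<String, DynamicMethod>();
        DynamicMethod searchMethod(String name) { return methods.get(name); }
    }

    static class CachingCallSite {
        private final String name;
        private RubyClassish cachedType;
        private DynamicMethod cachedMethod;

        CachingCallSite(String name) { this.name = name; }

        Object call(RubyClassish type, Object self, Object[] args) {
            if (type != cachedType) {               // cache miss: slow lookup, then remember
                cachedMethod = type.searchMethod(name);
                cachedType = type;
            }
            return cachedMethod.call(self, args);   // cache hit: straight to the method
        }
    }

    public static void main(String[] args) {
        RubyClassish fixnum = new RubyClassish();
        fixnum.methods.put("+", new DynamicMethod() {
            public Object call(Object self, Object[] a) { return (Long) self + (Long) a[0]; }
        });
        CachingCallSite plusSite = new CachingCallSite("+");
        System.out.println(plusSite.call(fixnum, 5L, new Object[] { 2L }));  // 7
    }
}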

The simplest way to take advantage of runtime information is to use these caching call sites as hints for what method we've been calling at a given call site. Let's take the example of "+". The "+" method here is being called against a Fixnum object every time it's encountered, since our particular run of "fib" never overflows into Bignum and never works with Float objects. In JRuby, Fixnum is implemented with org.jruby.RubyFixnum, and the "+" method is bound to RubyFixnum.op_plus. Here's the implementation of op_plus in JRuby:
@JRubyMethod(name = "+")
public IRubyObject op_plus(ThreadContext context, IRubyObject other) {
    if (other instanceof RubyFixnum) {
        return addFixnum(context, (RubyFixnum)other);
    }
    return addOther(context, other);
}

The details of the actual addition in addFixnum are left as an exercise for the reader.

Notice that we have an annotation on the method: @JRubyMethod(name = "+"). All the Ruby core classes implemented in JRuby are tagged with a similar annotation, where we can specify the minimum and maximum numbers of arguments, the Ruby-visible method name (e.g. "+" here, which is not a valid Java method name), and other details of how the method should be called. Notice also that the method takes two arguments: the current ThreadContext (thread-local Ruby-specific runtime structures), and the "other" argument being added.

When we build JRuby, we scan the JRuby codebase for JRubyMethod annotations and generate what we call invokers, one for every unique class+name combination in the core classes. These invokers are all instances of org.jruby.internal.runtime.methods.DynamicMethod, and they carry various informational details about the method as well as implementing various "call" signatures. The reason we generate an invoker class per method is simple: most JVMs will not inline through code used for many different code paths, so if we only had a single invoker for all methods in the system...nothing would ever inline.
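
And here's roughly the shape of one generated invoker, again with invented names: nothing but a small, statically typed trampoline to exactly one target method, so each call path stays monomorphic and the JVM has a chance to inline it.

public class InvokerSketch {
    static class RubyFixnumish {
        final long value;
        RubyFixnumish(long value) { this.value = value; }
        RubyFixnumish op_plus(RubyFixnumish other) { return new RubyFixnumish(value + other.value); }
        public String toString() { return Long.toString(value); }
    }

    // The "generated" class: one per bound method, doing a direct, typed call.
    static class FixnumOpPlusInvoker {
        Object call(Object self, Object arg) {
            return ((RubyFixnumish) self).op_plus((RubyFixnumish) arg);
        }
    }

    public static void main(String[] args) {
        System.out.println(new FixnumOpPlusInvoker().call(new RubyFixnumish(40), new RubyFixnumish(2)));
    }
}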

So, putting it all together. At runtime, we have a CachingCallSite instance for every method call in the code. The first time we call "+" for example, we go out and fetch the method object and cache it in place for future calls. That cached method object is a subclass of DynamicMethod, which knows how to do the eventual invocation of RubyFixnum.op_plus and return the desired result. Simple!

First Steps Toward Runtime Optimization

The changes I'm working on locally will finally take advantage of the fact that after running for a while, we have a darn good idea what methods are being called. And if we know what methods are being called, those invocations no longer have to be dynamic, provided our assumptions hold (assumptions being things like "we're always calling against Fixnums" or "it's always the core implementation of "+"). Put differently: because JRuby defers its eventual compilation to JVM bytecode, we can potentially turn dynamic calls into static calls.

The process is actually very simple, and since I just started playing with it two days ago, I'll describe the steps I took.

Step One: Turn Dynamic Into Static

First, I needed the generated methods to bring along enough information for me to do a direct call. Specifically, I needed to know what target Java class they pointed at (RubyFixnum in our "+" example), what the Java name of the method was (op_plus), what arguments the method signature expected to receive (returning IRubyObject and receiving ThreadContext, IRubyObject), and whether the implementation was static (as are most module-based methods...but our "op_plus" is not). So I bundled this information into what I called a "NativeCall" data structure, populated on any invokers for which a native call might be possible.
public static class NativeCall {
    private final Class nativeTarget;
    private final String nativeName;
    private final Class nativeReturn;
    private final Class[] nativeSignature;
    private final boolean statik;
    ...

In the compiler, I added a similar bit of logic. When compiling a dynamic call, it now will look to see whether the call site has already cached some method. If so, it checks whether that method has a corresponding NativeCall data structure associated with it. If it does, then instead of emitting the dynamic call logic it would normally emit, it compiles a normal, direct JVM invocation. So where we used to have a call like this:
INVOKEVIRTUAL org/jruby/runtime/CallSite.call

We now have a call like this:
INVOKEVIRTUAL org/jruby/RubyFixnum.op_plus

This call to "+" is now essentially equivalent to writing the same code in Java against our RubyFixnum class.

This comprises the first step: get the compiler to recognize and "make static" dynamic calls we've seen before. My experimental code is able to do this for most core class methods right now, and it will not be difficult to extend it to any call.

(The astute VM implementer will notice I made no mention of inserting a guard or test before the direct call, in case we later need to actually call a different method. This is, for the moment, intentional. I'll come back to it).

Step Two: Reduce Some Fixnum Use

The second step I took was to allow some call paths to use primitive values rather than boxed RubyFixnum objects. RubyFixnum in JRuby always represents a 64-bit long value, but since our call path only supports invocation with IRubyObject, we generally have to construct a new RubyFixnum for all dynamic calls. These objects are very small and usually very short-lived, so they don't impact GC times much. But they do impact allocation rates; we still have to grab the memory space for every RubyFixnum, and so we're constantly chewing up memory bandwidth to do so.

There are various ways in which VMs can eliminate or reduce object allocations like this: stack allocation, value types, true fixnums, escape analysis, and more. But of these, only escape analysis is available on current JVMs, and it's a very fragile optimization: all paths that would consume an object must be completely inlined, or else the object can't be elided.

So to help reduce the burden on the JVM, we have a couple call paths that can receive a single long or double argument where it's possible for us to prove that we're passing a boxed long or double value (i.e. a RubyFixnum or RubyFloat). The second step I took was to make the compiler aware of several of these methods on RubyFixnum, such as the long version of op_plus:
public IRubyObject op_plus(ThreadContext context, long other) {
    return addFixnum(context, other);
}

Since we have not yet implemented more advanced means of proving a given object is always a Fixnum (like doing local type propagation or per-variable type profiling at runtime), the compiler currently can only see that we're making a call with a literal value like "100" or "12.5". As luck would have it, there's three such cases in the "fib" implementation above: one for the "<" call and two for the "-" calls. By combining the compiler's knowledge of which actual method is being called in each case and how to make a call without standing up a RubyFixnum object, our actual call in the bytecode gets simplified further:
INVOKEVIRTUAL org/jruby/RubyFixnum.op_minus (Lorg/jruby/runtime/ThreadContext;J)

(For the uninitiated, that "J" at the end of the signature means this call is passing a primitive long for the second argument to op_minus.)

We now have the potential to insert additional compiler smarts about how to make a call more efficiently. This is a simple case, of course, but consider the potential for another case: calling from Ruby to arbitrary Java code. Instead of encumbering that call with all our own multi-method dispatch logic *plus* the multiple layers of Java's reflection API, we can make the call *directly*, going straight from Ruby code to Java code with no intervening code. And with no intervening code, the JVM will be able to inline arbitrary Java code straight into Ruby code, optimizing it as a whole. Are we getting excited yet?

Step Three: Steal a Micro-optimization

There's two pesky calls remaining in "fib": the recursive invocations of "fib" itself. Now the smart thing to do would be to proceed with adding logic to the compiler to be aware of calls from currently-jitting Ruby code to not-yet-jitted Ruby code and handle that accordingly. For example, we might also force those methods to compile, and the methods they call to compile, and so on, allowing us to optimize from a "hot" method outward to potentially "colder" methods within some threshold distance. And of course, that's probably what we'll do; it's trivial to trigger any Ruby method in JRuby to JIT at any time, so adding this to the compiler will be a few minutes' work. But since I've only been playing with this since Thursday, I figured I'd do an even smarter thing: cheat.

Instead of adding the additional compiler smarts, I instead threw in a dirt-simple single-optimization subset of those smarts: detect self-recursion and optimize that case alone. In JRuby terms, this meant simply seeing that the "fib" CachingCallSite objects pointed back to the same "fib" method we were currently compiling. Instead of plumbing them through the dynamic pipeline, we make them be direct recursive calls, similar to the core method calls to "<" and "-" and "+". So our fib calls turn into the following bytecode-level invocation:
INVOKESTATIC ruby/jit/fib_ruby_B52DB2843EB6D56226F26C559F5510210E9E330D.__file__

Where "B52DB28..." is the SHA1 hash of the "fib" method, and __file__ is the default entry point for jitted calls.

Everything Else I'm Not Telling You

As with any experiment, I'm bending compatibility a bit here to show how far we can move JRuby's "upper bound" on performance. So these optimizations currently include some caveats.

They currently damage Ruby backtraces, since by skipping the invokers we're no longer tracking Ruby-specific backtrace information (current method, file, line, etc.) in a heap-based structure. This is one area where JRuby normally loses some performance, since we're effectively tracking backtraces twice: the JVM tracks the Java backtrace while we track the Ruby backtrace on the side. We will need to do a bit more work toward making the Java backtrace actually show the relevant Ruby information, or else generate our Ruby backtraces by mining the Java backtrace (and the existing "--fast" flag actually does this already, matching normal Ruby traces pretty well).

The changes also don't track the data necessary for managing "backref" and "lastline" data (the $~ and $_ pseudo-globals), which means that regular expression matches or line reads might have reduced functionality in the presence of these optimizations (though you might never notice; those variables are usually discouraged). This behavior can be restored by clever use of thread-local stacks (only pushing down the stack when in the presence of a method that might use those pseudo-globals), or by simply easing back the optimizations if those variables are likely to be used. Ideally we'll be able to use runtime profiling to make a smart decision, since we'll know whether a given call site has ever encountered a method that reads or writes $~ or $_.

Finally, I have not inserted the necessary guards before these direct invocations that would branch to doing a normal dynamic call. This was again a pragmatic decision to speed the progress of my experiment...but it also raises an interesting question. What if we knew, without a doubt, that our code had seen all the types and methods it would ever see? Wouldn't it be nice to say "optimize yourself to death" and know it's doing so? While I doubt we'll see a JRuby ship without some guard in place (ideally a simple "method ID" comparison, which I wouldn't expect to impact performance much), I think it's almost assured that we'll allow users to "opt in" to completely "statickifying" specific sections of code. And it's likely that if we started moving more of JRuby's implementation into Ruby code, we would take advantage of "fully" optimizing as well.

With all these caveats in mind, let's take a look at some numbers.

And Now, Numbers Meaningless to Anyone Not Using JRuby To Calculate Fibonacci Numbers

The "fib" method is dreadfully abused in benchmarking. It's largely a method call benchmark, since most calls spawn at least two more recursive calls and potentially several others if numeric operations are dynamic calls as well. In JRuby, it doubles as an allocation-rate or memory-bandwidth benchmark, since the bulk of our time is spent constructing Fixnum objects or updating per-call runtime data structures.

But since it's a simple result to show, I'm going to abuse it again. Please keep in mind these numbers are meaningless except in comparison to each other; JRuby is still cranking through a lot more objects than implementations with true Fixnums or value types, and the JVM is almost certainly over-optimizing portions of this benchmark. But of course, that's part of the point; we're making it possible for the JVM to accelerate and potentially eliminate unnecessary computation, and it's all becoming possible because we're using runtime data to improve runtime compilation.

I'll be using JRuby 1.6.dev (master plus my hacks) on OS X Java 6 (1.6.0_20) 64-bit Server VM on a MacBook Pro Core 2 Duo at 2.66GHz.

First, the basic JRuby "fib" numbers with full Ruby backtraces and dynamic calls:
  0.357000   0.000000   0.357000 (  0.290000)
  0.176000   0.000000   0.176000 (  0.176000)
  0.174000   0.000000   0.174000 (  0.174000)
  0.175000   0.000000   0.175000 (  0.175000)
  0.174000   0.000000   0.174000 (  0.174000)

Now, with backtraces calculated from Java backtraces, dynamic calls restructured slightly to be more inlinable (but still dynamic and via invokers), and limited use of primitive call paths. Essentially the fastest we can get in JRuby 1.5 without totally breaking compatibility (and with some minor breakages, in fact):
  0.383000   0.000000   0.383000 (  0.330000)
  0.109000   0.000000   0.109000 (  0.108000)
  0.111000   0.000000   0.111000 (  0.112000)
  0.101000   0.000000   0.101000 (  0.101000)
  0.101000   0.000000   0.101000 (  0.101000)

Now, using all the runtime optimizations described above. Keep in mind these numbers will probably drop a bit with guards in place (but they're still pretty fantastic):
  0.443000   0.000000   0.443000 (  0.376000)
  0.035000   0.000000   0.035000 (  0.035000)
  0.036000   0.000000   0.036000 (  0.036000)
  0.036000   0.000000   0.036000 (  0.036000)
  0.036000   0.000000   0.036000 (  0.036000)


And a couple comparisons of another mostly-recursive, largely-abused benchmark target: the tak function...

Static optimizations only, as fast as we can make it:
  1.750000   0.000000   1.750000 (  1.695000)
  0.779000   0.000000   0.779000 (  0.779000)
  0.764000   0.000000   0.764000 (  0.764000)
  0.775000   0.000000   0.775000 (  0.775000)
  0.763000   0.000000   0.763000 (  0.763000)

And with runtime optimizations:
  0.899000   0.000000   0.899000 (  0.832000)
  0.331000   0.000000   0.331000 (  0.331000)
  0.332000   0.000000   0.332000 (  0.332000)
  0.329000   0.000000   0.329000 (  0.329000)
  0.331000   0.000000   0.331000 (  0.332000)

So for these trivial benchmarks, a couple hours of hacking in JRuby's compiler has produced numbers 2-3x faster than the fastest JRuby performance we've achieved thus far. Understanding why requires one last section.

Hooray for the JVM

There's a missing trick here I haven't explained: the JVM is still doing most of the work for us.

In the plain old dynamic-calling, static-optimized case, the JVM is dutifully optimizing everything we throw at it. It's taking our Ruby calls, inlining the invokers and their eventual calls, inlining core class methods (yes, this means we've always been able to inline Ruby into Ruby or Java into Ruby or Ruby into Java, given the appropriate tweaks), and optimizing things very well. But here's the problem: the JVM's default settings are tuned for optimizing *Java*, not for optimizing Ruby. In Java, a call from one method to another has exactly one hop. In JRuby's dynamic calls, it may take three or four hops, bouncing through CallSites and DynamicMethods to eventually get to the target. Since Hotspot only inlines up to 9 levels of calls by default, you can see we're eating up that budget very quickly. Add to that the fact that Java to Java calls have no intervening code to eat up bytecode size thresholds, and we're essentially only getting a fraction of the optimization potential out of Hotspot that a "simpler" language like Java does.
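
If you want to experiment with this yourself, the inlining depth is controlled by a non-standard HotSpot flag, and JRuby's -J prefix passes options straight through to the underlying JVM. The script name here is just a placeholder for whatever benchmark you're running:

~/projects/jruby ➔ jruby -J-XX:MaxInlineLevel=12 bench_fib.rb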

Now consider the dynamically-optimized case. We've eliminated both the CallSite and the DynamicMethod from most calls, even if we have to insert a bit of guard code to do it. That means nine levels of Ruby calls can inline, or nine levels of logic from Ruby to core methods or Ruby to Java. We've now given Hotspot a much better picture of the system to optimize. We've also eliminated some of the background noise of doing a Ruby call, like updating rarely-used data structures. We'll need to ensure backtraces come out in some usable form, but at least we're not double-tracking them.

The bottom line here is that instead of doing all the much harder work of adding inlining to our own compiler, all we need to do is let the JVM do it, and we benefit from the years of work that have gone into current JVMs' optimizing compilers. The best way to solve a hard problem is to get someone else to solve it for you, and the JVM does an excellent job of solving this particular hard problem. Instead of giving the JVM a fish *or* teaching it how to fish, we're just giving it a map of the lake; it's already an expert fisherman.

There's also another side to these optimizations: our continued maintenance of an interpreter is starting to pay off. Other JVM languages that don't have an interpreted mode have a much more difficult time doing any runtime optimization; simply put, it's really hard to swap out bytecode you've already loaded unless you're explicitly abstracting how that bytecode gets generated in the first place. In JRuby, where methods have always had to switch from interpreted to jitted at runtime, we can hop back and forth much more freely, optimizing along the way.

That's all for now. Don't expect to download JRuby 1.6 in a few months and see everything be three times (or ten times! or 100 times!) faster. There's a lot of details we need to work out about when these optimizations will be safe, how users might be able to opt-in to "full" optimization if necessary, and whether we'll get things fast enough to start replacing core code with Ruby. But early results are very promising, and it's assured we'll ship some of this very soon.

Friday, April 30, 2010

Building Ruboto: Precompiling Ruby for Android

I originally started to send this to the JRuby dev list and to the Ruboto list, but realized quickly that it might make a good blog post. Since I don't blog enough lately, here it is.

I've been looking into better ways to precompile Ruby code to classes for deploy on Android devices. Normally, JRuby users can just jar up their .rb files and load and require them as though they were on the filesystem; JRuby finds, loads, and runs them just fine. This works well enough on Android, but since there's no way to generate bytecode at runtime, JRuby code that isn't precompiled must run interpreted forever...and run a bit slower than we'd like because of it. In order for Ruby to be a first-class language for Android development, we must make it possible to precompile Ruby code *completely* and bundle it up with the application. So this evening I spent some time making that possible.

I have some good news, some bad news, and some good news. First, a bit of background into JRuby's compiler.

What JRuby's Compiler Produces

JRuby's ahead-of-time compiler produces a single .class file per .rb file, to ease deployment and lookup of those files (and because I think it's ugly to always vomit out an unidentifiable class for every method body). This produces a nice 1:1 mapping between .rb and .class, but it comes at a cost: since those .class files are just "bags of methods" we need to bind those methods somehow. This usually happens at runtime, with JRuby generating a small "handle" class for every method as it is bound. So for a script like this:

# foo.rb
class Foo
  def bar; end
end

def hello; end

You will get one top-level class file when you AOT compile, and then two more class files are generated at runtime for the "handles" for methods "bar" and "hello". This provides the best possible performance for invocation, plus a nice 1:1 on-disk format...but it means we're still generating a lot of code at runtime.

The other complication is that jrubyc normally outputs a .class file of the same name as the .rb file, to ease lookup of that .class file at runtime. So the main .class for the above script would be called "foo.class". The problem with this is that "foo.rb" may not always be loaded as "foo.rb". A user might load '../yum/../foo.rb' or some other peculiar path. As a result, the base name of the file is not enough to determine what class name to load. To solve this, I've introduced an alternate naming scheme that uses the SHA1 hash of the *actual* content of the file as the class name. So, for the above script, the resulting class would be named:

ruby.jit.FILE_351347C9126659D4479558A2706DBC35E45D16D2

While this isn't a pretty name, it does provide a way to locate the compiled version of a script universally, regardless of what path is used to load it.
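
For the curious, here's a guess at how that name can be reproduced outside JRuby: hash the script's bytes with SHA-1 and upper-case the hex digest. This is illustrative only; JRuby's exact hashing input may differ slightly.

import java.io.FileInputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ScriptClassName {
    public static void main(String[] args) throws IOException, NoSuchAlgorithmException {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        FileInputStream in = new FileInputStream(args[0]);
        byte[] buf = new byte[8192];
        for (int n; (n = in.read(buf)) != -1; ) sha1.update(buf, 0, n);
        in.close();

        StringBuilder name = new StringBuilder("ruby.jit.FILE_");
        for (byte b : sha1.digest()) name.append(String.format("%02X", b));
        System.out.println(name);  // e.g. ruby.jit.FILE_351347C9...
    }
}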

The Good News

I've modified jrubyc (on master only...we need to talk about whether this should be a late addition to 1.5) to have a new --sha1 flag. As you might guess, this flag alters the compile process to generate the sha1-named class for each compiled file.

~/projects/jruby ➔ jrubyc foo.rb 
Compiling foo.rb to class foo

~/projects/jruby ➔ jrubyc --sha1 foo.rb
Compiling foo.rb to class ruby.jit.FILE_351347C9126659D4479558A2706DBC35E45D16D2

~/projects/jruby ➔ jruby -X+C -J-Djruby.jit.debug=true -e "require 'foo'"
...
found jitted code for ./foo.rb at class: ruby.jit.FILE_351347C9126659D4479558A2706DBC35E45D16D2
...

This is actually finding the foo.rb file, calculating its SHA1 hash, and then loading the .class file instead. So if you had a bunch of .rb code for an Android application and wanted to precompile it, you'd run this command to get the sha1 classes, and then include both the .rb file and the .class file in your application (the .rb file must be there because...you guessed it...we need to calculate the sha1 hash from its contents).

To test this out, I actually ran jrubyc against the Ruby stdlib to produce a sha1 class for every .rb file:

~/projects/jruby ➔ jrubyc -t /tmp --sha1 lib/ruby/1.8/
Compiling all in '/Users/headius/projects/jruby/lib/ruby/1.8'...
Compiling lib/ruby/1.8//abbrev.rb to class ruby.jit.FILE_4F30363F88066CC74555ABA5BE4B73FDE323BE1A
Compiling lib/ruby/1.8//base64.rb to class ruby.jit.FILE_DD42170B797E34D082C952B92A19474E3FDF3FA2
Compiling lib/ruby/1.8//benchmark.rb to class ruby.jit.FILE_0C42EBD7F248AF396DE7A70C0FBC31E9E8D233DE
...
Compiling lib/ruby/1.8//xsd/xmlparser/rexmlparser.rb to class ruby.jit.FILE_8B106B9E9F2F1768470A7A4E6BD1A36FC0859862
Compiling lib/ruby/1.8//xsd/xmlparser/xmlparser.rb to class ruby.jit.FILE_AF51477EA5467822D8ADED37EEB5AB5D841E07D9
Compiling lib/ruby/1.8//xsd/xmlparser/xmlscanner.rb to class ruby.jit.FILE_3203482AEE794F4B9D5448BF51935879B026092C

This produces 524 class files for 524 .rb files, just as it should, and running with forced compilation (-X+C) and jruby.jit.debug=true shows that it finds each class when loading anything from stdlib. That's a good start!

What About the Handles?

I mentioned above that we also generate, at runtime, a small handle class for every bound method in a given script. And again, since we can't generate bytecode on-device, we need a way to pregenerate all those handles.

An hour's worth of work later, and jrubyc has a --handles flag that will additionally spit out all method handles for each script compiled. Here's our foo script compiled with --sha1 and --handles, along with the resulting .class files:

~/projects/jruby ➔ jrubyc --sha1 --handles foo.rb
Compiling foo.rb to class ruby.jit.FILE_351347C9126659D4479558A2706DBC35E45D16D2
Generating direct handles for foo.rb

~/projects/jruby ➔ ls ruby/jit/*351347*
ruby/jit/FILE_351347C9126659D4479558A2706DBC35E45D16D2.class

~/projects/jruby ➔ ls *351347*
ruby_jit_FILE_351347C9126659D4479558A2706DBC35E45D16D2Invokermethod__1$RUBY$barFixed0.class
ruby_jit_FILE_351347C9126659D4479558A2706DBC35E45D16D2Invokermethod__2$RUBY$helloFixed0.class

And sure enough, we can also see that these handles are being loaded instead of generated at runtime. So it's possible with these two options to *completely* precompile JRuby sources into .class files. Hooray!

The Bad News

My next step was obviously to try to precompile and dex the entire Ruby standard library. That's 524 files, but how many method bodies? We'd need to generate a handle for each one of them.

~/projects/jruby ➔ mkdir stdlib-compiled

~/projects/jruby ➔ jrubyc --sha1 --handles -t stdlib-compiled/ lib/ruby/1.8/
Compiling all in '/Users/headius/projects/jruby/lib/ruby/1.8'...
Compiling lib/ruby/1.8//abbrev.rb to class ruby.jit.FILE_4F30363F88066CC74555ABA5BE4B73FDE323BE1A
Generating direct handles for lib/ruby/1.8//abbrev.rb
Compiling lib/ruby/1.8//base64.rb to class ruby.jit.FILE_DD42170B797E34D082C952B92A19474E3FDF3FA2
Generating direct handles for lib/ruby/1.8//base64.rb
...
Compiling lib/ruby/1.8//xsd/xmlparser/xmlparser.rb to class ruby.jit.FILE_AF51477EA5467822D8ADED37EEB5AB5D841E07D9
Generating direct handles for lib/ruby/1.8//xsd/xmlparser/xmlparser.rb
Compiling lib/ruby/1.8//xsd/xmlparser/xmlscanner.rb to class ruby.jit.FILE_3203482AEE794F4B9D5448BF51935879B026092C
Generating direct handles for lib/ruby/1.8//xsd/xmlparser/xmlscanner.rb

~/projects/jruby ➔ find stdlib-compiled/ -name \*.class | wc -l
8212

Wowsers, that's a lot of method bodies...over 7500 of them. But of course this is the entire Ruby standard library, with code for network protocols, templating, xml parsing, soap, and so on. Now for the more frightening numbers: keeping in mind that .class is a pretty verbose file format, how big are all these class files?

~/projects/jruby ➔ du -ks stdlib-compiled/ruby
14008 stdlib-compiled/ruby

~/projects/jruby ➔ du -ks stdlib-compiled/
44784 stdlib-compiled/

Yeeow! The standard library alone (without handles) produces 14MB of .class files, and with handles it goes up to a whopping 44MB of .class files! That seems a bit high, doesn't it? Especially considering that the .rb files add up to around 4.5MB?

Well there's a few explanations for this. First off, the generated handles are rather small, around 2k each, but they each are probably 50% the exact same code. They're generated as separate handles primarily because the JVM will not inline the same loaded body of code through two different call paths, so we have to duplicate that logic repeatedly. Java 7 fixes some of this, but for now we're stuck. The handle classes also share almost identical constant pools, or in-file tables of strings. Many of the same characteristics apply to the compiled Ruby scripts, so the 44MB number is a bit larger than it needs to be.

We can show a more realistic estimate of on-disk size by compressing the lot, first with normal "jar", and then with the "pack200" utility, which takes greater advantage of the .class format's intra-file redundancy:

~/projects/jruby ➔ cd stdlib-compiled/

~/projects/jruby/stdlib-compiled ➔ jar cf stdlib-compiled.jar .

~/projects/jruby/stdlib-compiled ➔ pack200 stdlib-compiled.pack.gz stdlib-compiled.jar

~/projects/jruby/stdlib-compiled ➔ ls -l stdlib-compiled.*
-rw-r--r-- 1 headius staff 13424221 Apr 30 01:43 stdlib-compiled.jar
-rw-r--r-- 1 headius staff 4051355 Apr 30 01:44 stdlib-compiled.pack.gz

Now we're seeing more reasonable numbers. A 13MB jar file is still pretty large, but it's not bad considering we started with 44MB of .class files. The packed size is even better: only 4MB for a completely-compiled Ruby standard library, and ultimately *smaller* than the original sources.

So what's the bad news? It obviously wasn't the size, since I just showed that was a red herring. The bad news is when we try to dex this jar.

The "dx" Tool

The Android SDK ships with a tool called "dx" which gets used at build time to translate Java bytecode (in .class files, .jar files, etc.) to Dalvik bytecode (resulting in a .dex file). Along the way it optimizes the code, tidies up redundancies, and basically makes it as clean and compact as possible for distribution to an Android device. Once on the device, the Dalvik bytecode gets immediately compiled into whatever the native processor runs, so the dex data needs to be as clean and optimized as possible.

Every Java application shipped for Android must pass through dx in some way, so my next step was to try to "dex" the compiled standard library:

~/projects/jruby/out ➔ ../../android-sdk-mac_86/platforms/android-7/tools/dx --dex --verbose --positions=none --no-locals --output=stdlib-compiled.dex stdlib-compiled.jar 
processing archive stdlib-compiled.jar...
ignored resource META-INF/MANIFEST.MF
processing ruby/jit/FILE_003796EE1C0C24540DF7239B8197C183BC7017BB.class...
processing ruby/jit/FILE_00499F5FE29ED8EDB63965B0F65B19CFE994D120.class...
...
processing ruby_jit_FILE_FEF23DE8CDA5B9BD9D880CBC08D3249158379E58Invokermethod__5$RUBY$run_suiteFixed0.class...
processing ruby_jit_FILE_FEF23DE8CDA5B9BD9D880CBC08D3249158379E58Invokermethod__6$RUBY$create_resultFixed0.class...

trouble writing output: format == null

Uh-oh, that doesn't look good. What happened?

Well it turns out that the Ruby standard library *plus* all the handles needed to bind it is too much for the current dex file format. It's a known issue that similarly bit Maciek Makowski (reporter of the linked bug) when he tried to dex both the Scala compiler and the Scala base set of libraries in one go. And similar to his case, I was able to successfully dex *either* the precompiled stdlib *or* the generated handles...but not both at the same time.

What Can We Do?

It appears that for the moment, it's not going to be possible to completely precompile the entire Ruby standard library. But there's ways around that.

First off, probably no application on the planet needs the entire standard library, so we can easily just include the files needed for a given app. That may be enough to cut the size down tremendously. It's also perfectly possible to build a very complicated Ruby application for Android that will easily fit into the current dex format; I doubt most mobile applications would result in 4.5MB of uncompressed .rb source. So the added --sha1 and --handle features will be immediately useful for Android development.

Secondly, I've been planning on adding a different way to bind methods that doesn't require a class file per method. I would probably generate a large switch for each .rb file and then bind the methods numerically, so only a single additional class (and only a few methods in that class) would be needed to bind an entire compiled .rb script. This issue with dex will force me to finally do that.

And lastly, there's a bit more good news. Remember that the packed size of the entire standard library plus handles was around 4MB? Here's the sizes of the dex'ed standard library and handles:

~/projects/jruby ➔ ls -l *.dex
-rw-r--r-- 1 headius staff 3718340 Apr 30 00:57 stdlib-compiled-solo.dex
-rw-r--r-- 1 headius staff 8656300 Apr 30 00:52 stdlib-compiled-handles.dex

~/projects/jruby ➔ jar cf blah.apk *.dex

~/projects/jruby ➔ ls -l blah.apk
-rw-r--r-- 1 headius staff 3179625 Apr 30 02:01 blah.apk

Once dex has worked its magic against our sources, we're now down to 3.1MB of compressed code...a pretty good size for the entire Ruby standard library plus 7500+ noisy, repetitive handles. We're definitely within reach of making full Ruby development for Android a reality.

Sunday, April 11, 2010

Nokogiri Java Port: Help Us Finish It!

One of the most commonly used native extensions for Ruby is the Nokogiri XML API. Nokogiri wraps libxml and has a fair amount of C code that links directly against the Ruby C extension API...an API we don't support in JRuby (and won't, without a lot of community help).

A bit over a year ago, the Nokogiri folks did us a big favor by creating an FFI version of Nokogiri that works surprisingly well; it's probably the most widely-used FFI-based Ruby library around. But the endgame for Nokogiri on JRuby has always been to get a pure-Java version. Not everyone is allowed to link native libraries on their Java platform of choice, and those that are often have trouble getting the right libxml versions installed. The Java version needs to happen.

That day is very close.

I spent a bit of time this weekend getting the Nokogiri "java" port running on my system, and the folks working on it have brought it almost to 100% passing. It's time to push it over the edge.

Building and Testing

Here's my process for getting it building. Let me know if this needs to be edited.

Update: Added rake-compiler and hoe to the gems you need to install, and modified the git command line for versions that don't automatically create a local tracking branch.

1. Clone the Nokogiri repository and switch to the "java" branch

~/projects ➔ git clone git://github.com/tenderlove/nokogiri.git
Initialized empty Git repository in /Users/headius/projects/nokogiri/.git/
remote: Counting objects: 14767, done.
remote: Compressing objects: 100% (3882/3882), done.
remote: Total 14767 (delta 10482), reused 13969 (delta 9945)
Receiving objects: 100% (14767/14767), 3.73 MiB | 742 KiB/s, done.
Resolving deltas: 100% (10482/10482), done.

~/projects ➔ cd nokogiri/

~/projects/nokogiri ➔ git checkout -b java origin/java
Branch java set up to track remote branch java from origin.
Switched to a new branch 'java'

2. Install racc, rexical, rake-compiler, and hoe into Ruby (C Ruby, that is, since they also have extensions)
~/projects/nokogiri ➔ sudo gem install racc rexical rake-compiler hoe
Building native extensions. This could take a while...
Successfully installed racc-1.4.6
Successfully installed rexical-1.0.4
Successfully installed rake-compiler-0.7.0
Successfully installed hoe-2.6.0
4 gems installed

3. Build the lexer and parser using C Ruby
~/projects/nokogiri ➔ rake gem:dev:spec
(in /Users/headius/projects/nokogiri)
warning: couldn't activate the debugging plugin, skipping
rake-compiler must be configured first to enable cross-compilation
/usr/bin/racc -l -o lib/nokogiri/css/generated_parser.rb lib/nokogiri/css/parser.y
rex --independent -o lib/nokogiri/css/generated_tokenizer.rb lib/nokogiri/css/tokenizer.rex

4. Build the Java bits (using rake in JRuby)
~/projects/nokogiri ➔ jruby -S rake java:build
(in /Users/headius/projects/nokogiri)
warning: couldn't activate the debugging plugin, skipping
javac -g -cp /Users/headius/projects/jruby/lib/jruby.jar:../../lib/nekohtml.jar:../../lib/nekodtd.jar:../../lib/xercesImpl.jar:../../lib/isorelax.jar:../../lib/jing.jar nokogiri/*.java nokogiri/internals/*.java
jar cf ../../lib/nokogiri/nokogiri.jar nokogiri/*.class nokogiri/internals/*.class

5. Run the tests (again with rake on JRuby)
~/projects/nokogiri ➔ jruby -S rake test
(in /Users/headius/projects/nokogiri)
...full output...

On my system, I get about 8 failures and 19 errors, out of 785 tests and 1657 assertions. We're very close!

A few other useful tasks:
  • jruby -S rake java:clean_all wipes out the built Java stuff
  • jruby -S rake java:gem builds the Java gem, if you want to try installing it
Helping Out

If you'd like to help fix these bugs, there's a few ways to approach it.
  • Join the nokogiri-talk Google Group so you can communicate with others working on the port. The key folks right now are Yoko Harada and Sergio Arbeo (who did the bulk of the original work for GSoC 2009). I'm also poking at it a bit in my spare time.
  • Post to the group to let folks know you want to help. This will help avoid duplicated effort.
  • Pick tests that appear to fail due to missing or incorrect Ruby logic, like "not implemented" errors, nil results ("method blah not found for nil"), or arity errors ("3 arguments for 2" kinds of things). These are often the simplest ones to fix.
  • Don't give up! We're almost there!
It would be great if we could have a 100% working Nokogiri Java port for JRuby 1.5 final this month. I hope to see you on the nokogiri-talk list! Feel free to comment here if you have questions about getting bootstrapped.

Saturday, April 03, 2010

Getting Started with Duby

Hello again!

As you may know, I've been working part-time on a new language called Duby. Duby looks like Ruby, since it co-opts the JRuby parser, and includes some of the features of the Ruby language like optional arguments and closures. But Duby is not Ruby; it's statically typed, compiles to "native" code (JVM bytecode, for example) before running, and does not have any built-in library of its own (preferring to just use what's available on a given runtime). Here's a quick sample of Duby code:

class Foo
  def initialize(hello:String)
    puts 'constructor'
    @hello = hello
  end

  def hello(name:String)
    puts "#{@hello}, #{name}"
  end
end

Foo.new('Hiya').hello('Duby')

This post is not going to be an overview of the Duby language; I'll get that together soon, once I take stock of where the language stands as far as features go. Instead, this "getting started" post will show how you can grab the Duby repository and start playing with it right now.

First you need to pull down three resources: Duby itself, BiteScript (the Ruby DSL I use to generate JVM bytecode), and a JRuby 1.5 snapshot:
~/projects/tmp ➔ git clone git://github.com/headius/duby.git
Initialized empty Git repository in /Users/headius/projects/tmp/duby/.git/
remote: Counting objects: 2810, done.
remote: Compressing objects: 100% (1291/1291), done.
remote: Total 2810 (delta 1690), reused 2509 (delta 1447)
Receiving objects: 100% (2810/2810), 10.64 MiB | 722 KiB/s, done.
Resolving deltas: 100% (1690/1690), done.

~/projects/tmp ➔ git clone git://github.com/headius/bitescript.git
Initialized empty Git repository in /Users/headius/projects/tmp/bitescript/.git/
remote: Counting objects: 470, done.
remote: Compressing objects: 100% (404/404), done.
remote: Total 470 (delta 166), reused 313 (delta 57)
Receiving objects: 100% (470/470), 93.56 KiB, done.
Resolving deltas: 100% (166/166), done.

~/projects/tmp ➔ curl http://ci.jruby.org/snapshots/jruby-bin-1.5.0.dev.tar.gz | tar xz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11.3M  100 11.3M    0     0   353k      0  0:00:32  0:00:32 --:--:--  262k

~/projects/tmp ➔ ls
bitescript duby jruby-1.5.0.dev

~/projects/tmp ➔ mv jruby-1.5.0.dev/ jruby

Once you have these three pieces in place, Duby can now be run. It's easiest to put the JRuby snapshot in PATH, but you can just run it directly too:
~/projects/tmp ➔ cd duby

~/projects/tmp/duby ➔ ../jruby/bin/jruby bin/duby -e "puts 'hello'"
hello

~/projects/tmp/duby ➔ ../jruby/bin/jruby bin/dubyc -e "puts 'hello'"

~/projects/tmp/duby ➔ java DashE
hello

Finally, you may want to create a "complete" Duby jar that includes Duby, BiteScript, JRuby, and Java classes for command-line or Ant task usage. Using JRuby 1.5's Ant integration, the Duby Rakefile can produce that for you:
~/projects/tmp/duby ➔ ../jruby/bin/jruby -S rake jar:complete
(in /Users/headius/projects/tmp/duby)
mkdir -p build
Compiling Ruby sources
Generating Java class DubyCommand to DubyCommand.java
javac -d build -cp ../jruby/lib/jruby.jar:. DubyCommand.java
Compiling Duby sources
mkdir -p dist
Building jar: /Users/headius/projects/tmp/duby/dist/duby.jar
mkdir -p dist
Building jar: /Users/headius/projects/tmp/duby/dist/duby-complete.jar

~/projects/tmp/duby ➔ java -jar dist/duby-complete.jar run -e 'puts "Duby is Awesome!"'
Duby is Awesome!


Hopefully we'll soon have duby.jar, duby-complete.jar, and a new Duby gem released, but this is a quick way to get involved.

I'll get back to you with a post on the Duby language itself Real Soon Now!

Update: I have also uploaded a snapshot duby-complete.jar (which includes both the Main-Class for jar execution and the simple Ant task org.jruby.duby.ant.Compiler) on the Duby Github downloads page. Have fun!

Using Ivy with JRuby 1.5's Ant Integration

JRuby 1.5 will be released soon, and one of the coolest new features is the integration of Ant support into Rake, the Ruby build tool. Tom Enebo wrote an article on the Rake/Ant integration for the Engine Yard blog, which has lots of examples of how to start migrating to Rake without leaving Ant behind. I'm not going to cover all that here.

I've been using the Rake/Ant stuff for a few weeks now, first for my "weakling" RubyGem which adds a queue-supporting WeakRef to JRuby, and now for cleaning up Duby's build process. Along the way, I've realized I really never want to write Ant scripts again; they're so much nicer in Rake, and I have all of Ruby and Ant available to me.

One thing Ant still needs help with is dependency resolution. Many people make the leap to Maven, and let it handle all the nuts and bolts. But that only works if you really buy into the Maven way of life...a problem if you're like me and you live in a lot of hybrid worlds where the Maven way doesn't necessarily fit. So many folks are turning to Apache Ivy to get dependency management in their builds without using Maven.

Today I thought I'd translate the simple "no-install" Ivy example build (warning, XML) to Rake, to see how easy it would be. The results are pretty slick.

First we need to construct the equivalent to the "download-ivy" and "install-ivy" ant tasks. I chose to put that in a Rake namespace, like this:

namespace :ivy do
  ivy_install_version = '2.0.0-beta1'
  ivy_jar_dir = './ivy'
  ivy_jar_file = "#{ivy_jar_dir}/ivy.jar"

  task :download do
    mkdir_p ivy_jar_dir
    ant.get :src => "http://repo1.maven.org/maven2/org/apache/ivy/ivy/#{ivy_install_version}/ivy-#{ivy_install_version}.jar",
            :dest => ivy_jar_file,
            :usetimestamp => true
  end

  task :install => :download do
    ant.path :id => 'ivy.lib.path' do
      fileset :dir => ivy_jar_dir, :includes => '*.jar'
    end

    ant.taskdef :resource => "org/apache/ivy/ant/antlib.xml",
                #:uri => "antlib:org.apache.ivy.ant",
                :classpathref => "ivy.lib.path"
  end
end

Notice that instead of using Ant properties, I've just used Ruby variables for the Ivy install version, dir, and file. I've also removed the "uri" element to ant.taskdef because I'm not sure if we have an equivalent for that in Rake yet (note to self: figure out if we have an equivalent for that).

With these two tasks, we can now fetch ivy and install it for the remainder of the build. Here's running the download task from the command line:
~/projects/duby ➔ rake ivy:download
(in /Users/headius/projects/duby)
mkdir -p ./ivy
Getting: http://repo1.maven.org/maven2/org/apache/ivy/ivy/2.0.0-beta1/ivy-2.0.0-beta1.jar
To: /Users/headius/projects/duby/ivy/ivy.jar

Now we want a simple task that uses ivy:install to fetch resources and make them available for the build. Here's the example from Apache, using the cachepath task, written in Rake:
task :go => "ivy:install" do
  ant.cachepath :organisation => "commons-lang",
                :module => "commons-lang",
                :revision => "2.1",
                :pathid => "lib.path.id",
                :inline => "true"
end

Pretty clean and simple, and it fits nicely into the flow of the Rakefile. I can also switch this to using the "retrieve" task, which just pulls the jars down and puts them where I want them:
task :go => "ivy:install" do
  ant.retrieve :organisation => 'commons-lang',
               :module => 'commons-lang',
               :revision => '2.1',
               :pattern => 'javalib/[conf]/[artifact].[ext]',
               :inline => true
end

This fetches the Apache Commons Lang package along with all dependencies into javalib, separated by what build configuration they are associated with (runtime, test, etc). Here it is in action:
~/projects/duby ➔ rake go
(in /Users/headius/projects/duby)
mkdir -p ./ivy
Getting: http://repo1.maven.org/maven2/org/apache/ivy/ivy/2.0.0-beta1/ivy-2.0.0-beta1.jar
To: /Users/headius/projects/duby/ivy/ivy.jar
Not modified - so not downloaded
Trying to override old definition of task buildnumber
:: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
:: loading settings :: url = jar:file:/Users/headius/.ant/lib/ivy.jar!/org/apache/ivy/core/settings/ivysettings.xml
:: resolving dependencies :: commons-lang#commons-lang-caller;working
confs: [default, master, compile, provided, runtime, system, sources, javadoc, optional]
found commons-lang#commons-lang;2.1 in public
:: resolution report :: resolve 64ms :: artifacts dl 3ms
---------------------------------------------------------------------
|                  |            modules            ||   artifacts   |
|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
|      default     |   1   |   0   |   0   |   0   ||   1   |   0   |
|      master      |   1   |   0   |   0   |   0   ||   1   |   0   |
|      compile     |   1   |   0   |   0   |   0   ||   0   |   0   |
|      provided    |   1   |   0   |   0   |   0   ||   0   |   0   |
|      runtime     |   1   |   0   |   0   |   0   ||   0   |   0   |
|      system      |   1   |   0   |   0   |   0   ||   0   |   0   |
|      sources     |   1   |   0   |   0   |   0   ||   1   |   0   |
|      javadoc     |   1   |   0   |   0   |   0   ||   1   |   0   |
|      optional    |   1   |   0   |   0   |   0   ||   0   |   0   |
---------------------------------------------------------------------
:: retrieving :: commons-lang#commons-lang-caller
confs: [default, master, compile, provided, runtime, system, sources, javadoc, optional]
4 artifacts copied, 0 already retrieved (1180kB/30ms)

But if I have multiple artifacts, this could be pretty cumbersome. Since this is Ruby, I can just put this in a method and call it repeatedly:
def ivy_retrieve(org, mod, rev)
  ant.retrieve :organisation => org,
               :module => mod,
               :revision => rev,
               :pattern => 'javalib/[conf]/[artifact].[ext]',
               :inline => true
end

artifacts = %w[
  commons-lang commons-lang 2.1
  org.jruby jruby 1.4.0
]

task :go => "ivy:install" do
  artifacts.each_slice(3) do |artifact|
    ivy_retrieve(*artifact)
  end
end

Look for JRuby 1.5 release candidates soon, and let us know what you think of the new Ant integration!

Sunday, March 28, 2010

Ruby Summer of Code 2010

This year, no major Ruby organization got accepted to Google's Summer of Code (half a dozen Python projects got in, but I won't rant here). What do we as Rubyists do? Take it sitting down? NO! We make our own Summer of Code!

Thanks to Engine Yard, Ruby Central, and the Rails team, Ruby Summer of Code has raised $100k in just three days, allowing us to run 20 student projects! Hooray!

Ruby Summer of Code

Now of course we really would love to have some JRuby projects involved. There's so much exciting stuff going on with JRuby, and I believe it's the most promising platform for really growing the Ruby community. So we've set up a page for JRuby Ruby Summer of Code 2010 ideas. Here's a few to get you started:

  • JRuby on Android work, including command-line tooling, performance work, and all the little bits and pieces needed to make Ruby a first-class Android language.
  • Porting key C extensions to JRuby, so there's an alternative for people migrating.
  • A super-fast lightweight server similar to the GlassFish gem.
  • A full Hibernate and/or JPA backend for DataMapper or DataObjects, so that all databases Hibernate supports "just work" with JRuby.
  • Work on JRuby's nascent support for Ruby C extensions by building out the API.
  • Help get JRuby's early optimizing compiler wired up, to take JRuby's performance to the next level.
  • Duby-related projects, like IDE support, better tooling, codebase cleanup, features, documentation.
And there's dozens of other projects out there just waiting for you! Add yourself as a student on the RubySOC page, add some ideas to the JRuby ideas page, and let's get hacking!

Monday, March 01, 2010

JRuby Startup Time Tips

JRuby has become notorious among Ruby implementations for having a slow startup time. Some of this is the JVM's fault, since it doesn't save JIT products and often just takes a long time to boot and get going. Some of this is JRuby's fault, though we work very hard to eliminate startup bottlenecks once they're reported and diagnosed. But a large number of startup time problems stem from simple configuration issues. Here's a few tips to help you improve JRuby's startup time.


(Note: JRuby actually does pretty well on base startup time compared to other JVM languages; but MRI, while generally not the fastest Ruby impl, starts up incredibly fast.)

Make sure you're running the client JVM

This is by far the easiest way to improve startup. On HotSpot, the JVM behind OpenJDK and Sun's JDK, there are two different JIT-compiling backends: "client" and "server". The "client" backend does only superficial optimizations to JVM code as it compiles, but compiles things earlier and more quickly. The "server" backend performs larger-scale, more-global optimizations as it compiles, but takes a lot longer to get there and uses more resources when it compiles.

Up until recently, most JVM installs preferred the "client" backend, which meant JVM startup was about as fast as it could get. Unfortunately, many newer JVM releases on many operating systems are either defaulting to "server" or defaulting to a 64-bit JVM (which only has "server"). This means that without you or JRuby doing anything wrong, startup time takes a major hit.

Here's an example session showing a typical jruby -v line for a "server" JVM and the speed of doing some RubyGems requires (which is where most time is spent booting a RubyGems-heavy application):
~/projects/jruby ➔ jruby -v
jruby 1.5.0.dev (ruby 1.8.7 patchlevel 174) (2010-03-01 6857a4e) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_17) [x86_64-java]

~/projects/jruby ➔ time jruby -e "require 'rubygems'; require 'active_support'"

real 0m5.174s
user 0m7.643s
sys 0m0.422s

~/projects/jruby ➔ time jruby -e "require 'rubygems'; require 'active_support'"

real 0m5.068s
user 0m7.662s
sys 0m0.449s
Ouch, over 5 seconds just to boot RubyGems and load ActiveSupport. You can see in this case that the JVM is preferring the "64-Bit Server VM", which will give you great runtime performance but pretty dismal startup time. If you see this sort of -v output, you can usually force the JVM to run in 32-bit "client" mode by specifying "-d32" as a JVM option. Here's an example using an environment variable:
~/projects/jruby ➔ export JAVA_OPTS="-d32"

~/projects/jruby ➔ time jruby -e "require 'rubygems'; require 'active_support'"

real 0m2.320s
user 0m2.583s
sys 0m0.207s

~/projects/jruby ➔ time jruby -e "require 'rubygems'; require 'active_support'"

real 0m2.275s
user 0m2.580s
sys 0m0.207s
On my system, a MacBook Pro running OS X 10.6, switching to the "client" VM improves this scenario by well over 50%. You may also want to try the "-client" option alone or in combination with -d32, since some 32-bit systems will still default to "server". Play around with it and see what works for you, and then let us know your platform, -v string, and how much improvement you see.
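For reference, the whole experiment is just a couple of commands (no output shown here; if the switch takes effect, the -v banner should report a Client VM rather than the 64-Bit Server VM above). JRuby's -J option hands arguments straight to the JVM, so you can also set the flags per-invocation:
~/projects/jruby ➔ export JAVA_OPTS="-d32 -client"

~/projects/jruby ➔ jruby -J-d32 -J-client -v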

Regenerate the JVM's shared archive

Starting with Java 5, the HotSpot JVM has included a feature known as Class Data Sharing (CDS). Originally created by Apple for their OS X version of HotSpot, this feature loads all the common JDK classes as a single archive into a shared memory location. Subsequent JVM startups then simply reuse this read-only shared memory rather than reloading the same data again. It's a large reason why startup times on Windows and OS X have been so much better in recent years, and users of those systems may be able to ignore this tip.

On Linux, however, the shared archive is often *never* generated, since installers mostly just unpack the JVM into its final location and never run it. In order to force your system to generate the shared archive, run some equivalent of this command line:
headius@headius-desktop:~/jruby$ sudo java -Xshare:dump
Loading classes to share ... done.
Rewriting and unlinking classes ... done.
Calculating hash values for String objects .. done.
Calculating fingerprints ... done.
Removing unshareable information ... done.
Moving pre-ordered read-only objects to shared space at 0x94030000 ... done.
Moving read-only objects to shared space at 0x9444bef8 ... done.
Moving common symbols to shared space at 0x9444d870 ... done.
Moving remaining symbols to shared space at 0x945154e0 ... done.
Moving string char arrays to shared space at 0x94516118 ... done.
Moving additional symbols to shared space at 0x945ac5b0 ... done.
Read-only space ends at 0x94612560, 6169952 bytes.
Moving pre-ordered read-write objects to shared space at 0x94830000 ... done.
Moving read-write objects to shared space at 0x94ea64a0 ... done.
Moving String objects to shared space at 0x94ee3708 ... done.
Read-write space ends at 0x94f27448, 7304264 bytes.
Updating references to shared objects ... done.
For many users, this can make a tremendous difference in startup time, since all that extra class data remains available in memory. You may need to specify the -d32 or -client options along with the -Xshare option. Play with those and the other -Xshare modes to see if it helps your situation.
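For completeness, the relevant commands look something like this (the dump output is the same sort of thing shown above, so it's omitted). -Xshare:on insists on using the archive and fails if it can't, -Xshare:off disables it, and -Xshare:auto uses it when available:
headius@headius-desktop:~/jruby$ sudo java -d32 -client -Xshare:dump
headius@headius-desktop:~/jruby$ java -d32 -client -Xshare:on -version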

Delay or disable JRuby's JIT

An interesting side effect of JRuby's JIT is that it sometimes actually slows execution for really short runs. The compiler isn't free, obviously, nor is the cost of loading, verifying, and linking the resulting JVM bytecode. If you have a very short command that touches a lot of code, you might want to try disabling or delaying the JIT.

Disabling is easy: pass the -X-C flag or set the jruby.compile.mode property to "OFF":
~/projects/jruby ➔ time jruby -S gem install rake
Successfully installed rake-0.8.7
1 gem installed
Installing ri documentation for rake-0.8.7...
Installing RDoc documentation for rake-0.8.7...

real 0m13.188s
user 0m12.342s
sys 0m0.685s

~/projects/jruby ➔ time jruby -X-C -S gem install rake
Successfully installed rake-0.8.7
1 gem installed
Installing ri documentation for rake-0.8.7...
Installing RDoc documentation for rake-0.8.7...

real 0m12.590s
user 0m12.342s
sys 0m0.583s
This doesn't generally give you a huge boost, but it can be enough to keep you from going mad.
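If you'd rather delay the JIT than turn it off, the jruby.jit.threshold property controls how many times a method runs in the interpreter before it gets compiled; raising it well above the default pushes compilation past most short-lived commands. Property names below are as of the JRuby 1.4/1.5 line, so double-check them against your build:
~/projects/jruby ➔ jruby -J-Djruby.compile.mode=OFF -S gem list

~/projects/jruby ➔ jruby -J-Djruby.jit.threshold=500 -S gem list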

Avoid spawning "sub-rubies"

It's a fairly common idiom for Rubyists to spawn a Ruby subprocess using Kernel#system, Kernel#exec, or backquotes. For example, you may want to prepare a clean environment for a test run. That sort of scenario is perfectly understandable, but spawning many sub-rubies can take a tremendous toll on overall runtime.

When JRuby sees a #system, #exec, or backquote starting with "ruby", we will attempt to run it in the same JVM using a new JRuby instance. Because we have always supported "multi-VM" execution (where multiple isolated Ruby environments share a single process), this can make spawning sub-Rubies considerably faster. This is, in fact, how JRuby's Nailgun support (more on that later) keeps a single JVM "clean" for multiple JRuby command executions. But even though this can improve performance, there's still a cost for starting up those JRuby instances, since they need to have fresh, clean core classes and a clean runtime.

The worst-case scenario is when we detect that we can't spin up a JRuby instance in the same process, such as if you have shell redirection characters in the command line (e.g. system 'ruby -e blah > /dev/null'). In those cases, we have no choice but to launch an entirely new JRuby process, complete with a new JVM, and you'll be paying the full zero-to-running cost.
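As a rough sketch of the difference (assuming the in-process launching described above is active):
# A plain "ruby ..." command can run as a fresh JRuby instance inside
# the current JVM, so you only pay the Ruby-level boot cost:
system "ruby -e 'puts :hello'"

# Shell redirection (or other shell syntax) forces a real subprocess,
# which means a brand new JVM and the full cold-start hit:
system "ruby -e 'puts :hello' > hello.txt"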

If you're able, try to limit how often you spawn "sub-rubies" or use tools like Nailgun or spec-server to reuse a given process for multiple hits.

Do less at startup

This is a difficult tip to follow, since often it's not your code doing so much at startup (and usually it's RubyGems itself). One of the sad truths of JRuby is that because we're based on the JVM, and the JVM takes a while to warm up, code executed early in a process runs a lot slower than code executed later. Add to this the fact that JRuby doesn't JIT Ruby code into JVM bytecode until it's been executed a few times, and you can see why cold performance is not one of JRuby's strong areas.

It may seem like delaying the inevitable, but doing less at startup can have surprisingly good results for your application. If you are able to eliminate most of the heavy processing until an application window starts up or a server starts listening, you may avoid (or spread out) the cold performance hit. Smart use of on-disk caches and better boot-time algorithms can help a lot, like saving a cache of mostly-read-only data rather than reloading and reprocessing it on every boot.
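As a trivial sketch of the on-disk cache idea (the file name and the slow step here are made up for illustration):
require 'fileutils'

CACHE = 'tmp/boot_data.cache'

def load_boot_data
  # Reuse previously processed data if we already have it on disk...
  return Marshal.load(File.read(CACHE)) if File.exist?(CACHE)

  # ...otherwise do the expensive work once and save the result.
  data = parse_all_config_files   # stand-in for your slow boot-time step
  FileUtils.mkdir_p(File.dirname(CACHE))
  File.open(CACHE, 'wb') { |f| f.write(Marshal.dump(data)) }
  data
end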

Try using Nailgun

In JRuby 1.3, we officially shipped support for Nailgun. Nailgun is a small library and client-side tool that reuses a single JVM for multiple invocations. With Nailgun, small JRuby command-line invocations can be orders of magnitude faster. Have a look at my article on Nailgun in JRuby 1.3.0 for details.

Nailgun seems like a magic bullet, but unfortunately it does little to help certain common cases like booting RubyGems or starting up Rails (such as when running tests). It also can't help cases where you are causing lots of sub-rubies to be launched. Your best bet is to give it a try and let us know if it helps.
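If you do want to experiment, the basic flow is to start the server once and then prefix commands with --ng; see the linked article for details and caveats:
~/projects/jruby ➔ jruby --ng-server &

~/projects/jruby ➔ jruby --ng -e "puts 'warm JVM'"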

Play around with JVM flags

There's scads of JVM flags that can improve (or degrade) startup time. For a good list of flags you can play with, see my article on my favorite JVM flags, paying special attention to the heap-sizing and GC flags. If you find combinations that really help your application start up faster, let us know!
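As one starting point, pinning the heap to a fixed, modest size keeps the JVM from spending startup time growing it (the numbers here are purely illustrative; tune them for your own apps):
~/projects/jruby ➔ export JAVA_OPTS="-d32 -client -Xms256m -Xmx256m"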

Help us find bottlenecks

The biggest advances in startup-time performance have come from users like you investigating the load process to see where all that time is going. If you do a little poking around and find that particular libraries take unreasonably long to start (or just do too much at startup), or that startup time seems to be limited by something other than CPU (say, your hard drive starts thrashing madly or your memory bus is saturated), there may be improvements possible in JRuby or in the libraries and code you're loading. Dig a little...you may be surprised what you find.

Here's a few JRuby flags that might help you investigate:
  • --sample turns on the JVM's sampling profiler. It's not super accurate, but if there's some egregious bottleneck it should rise to the top.
  • -J-Xrunhprof:cpu=times turns on the JVM's instrumented profiler, saving profile results to java.hprof.txt. This slows down execution tremendously, but can give you more accurate low-level timings for JRuby and JDK code.
  • -J-Djruby.debug.loadService.timing=true turns on timing of all requires, showing fairly accurately where boot-time load costs are heaviest.
  • On Windows, where you may not have a "time" command, pass -b to JRuby (as in 'jruby -b ...') to print out a timing of your command's runtime execution (excluding JVM boot time).
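For example, the loadService timing flag from the list above can be pointed at the RubyGems boot we timed earlier (output omitted here):
~/projects/jruby ➔ jruby -J-Djruby.debug.loadService.timing=true -e "require 'rubygems'; require 'active_support'"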

Let us help you!

Sometimes, there's an obvious misconfiguration or bug in JRuby that causes an application to start up unreasonably slowly. If startup time seems drastically different than what you've seen here, there's a good chance something's wrong with your setup. Post your situation to the JRuby mailing list or find us on IRC, and we (along with other JRuby users) will do our best to help you.

Thursday, January 28, 2010

Five Reasons to Wait on the iPad

Like most geeks, I watched with great anticipation as the iPad was unveiled yesterday. In my case, it was from liveblogs monitored on spotty 3G and WiFi access at the Jfokus after-conference event. I'm reasonably impressed with the device, especially the $500 price point on the low end. But after giving it some thought, I have a few good reasons to wait on purchasing one. Here's five I came up with:

  1. Wait for the next generation. Never, ever buy the first generation of a device this new and this complex. Doubly so for anything the Apple hype machine is pimping. First-gen devices are always at least a little bit tweaky, and Apple has proven repeatedly that their first-gen devices are overpriced, underpowered, and replaced by something better in 4-6 months.
  2. Wait for competitors to answer. Sure, there have been other tablets and "slates" announced over the past year, but the iPad has moved the bar. Given the number of competing vendors, the availability of viable tablet options like Windows 7 and Android (or Chrome OS), and the ever-present iControl and iLock-in associated with all things Apple, you can bet there's going to be a bunch of competitive options during 2010.
  3. Wait for everyone else to buy it. Yeah, this one is painful, but think about all the suckers that bought the iPhone right when it came out. You'll spend a couple months as a temporary luddite, ridiculed by your peers. And then you'll get a better device for cheaper and have the last laugh. I mean, isn't half the joy of the iPad in having the bigger iPenis? You can hold off for a while.
  4. Wait for crackers to bust it wide open. Nobody's happy that the iPhone is a closed platform, nor are they happy that the App Store is so sketchy at approving applications. So why not wait and see what the busy iPhone hackers can do with an iPad before diving in? Chances are it will be a far better laptop replacement once they get ahold of it.
  5. Wait because you don't actually need it. It can't replace your phone. It can't replace your laptop. It can't replace your 50" LCD TV. Seriously now...what do you need it for?
Me, I'm on the fence. I can afford it, but then I probably wouldn't be able to afford something else. And I'm a programmer...I want to be able to put my own apps on the device (or give apps to friends) without dealing with the App Store gatekeeper. In my ideal world, it would be Apple's hardware and design sensibility combined with Android's open platform and familiar runtime. Anything even close to that would outshine the iPad for me.

Update: One last bit of anecdotal evidence. Before the iPhone, I had held off buying anything other than the crappiest, cheapest phones, the lowest-end music devices (yep, I had the "pack of gum" Shuffle), the most basic digital cameras, and no PDA. I was waiting for something that would allow me to get rid of all devices at once. iPhone obviously did that, as a music/media player, internet device, PDA, phone, and camera all in one. iPad takes two of those features away (phone and camera) and only adds a larger screen with the potential for large-form apps.

Friday, January 08, 2010

Busy Week: JRuby with Android, Maven, Rake, C exts, and More!

(How long has it been since I last blogged? Answer: a long time. I'll try to blog more frequently now that the (TOTALLY INSANE) fall conference season is over. Remind me to never have three conferences in a month ever again.)

Hello friends! It's been a busy week! I thought I'd show some of what's happening with JRuby, so you know we're still here plugging away.

Android Update

Earlier this week I helped Jan Berkel get the Ruboto IRB application working on a more recent JRuby build. There were a few minor tweaks needed, and Jan was interested in helping out, so I made the tweaks and added him as a committer.

The Ruboto IRB project repository is here: http://github.com/headius/ruboto-irb

Lately my interest in JRuby on Android has increased. I realized very recently that JRuby is just about the only mainstream JVM language that can create *new* code while running on the device, which opens up a whole host of possibilities. It is not possible to implement an interactive shell in Groovy or Scala or Clojure, for example, since they all must first compile code to JVM bytecode, and JVM bytecode can't run on the Dalvik VM directly.

As I played with Ruboto IRB this week, I discovered something even more exciting: almost all the bugs that prevented JRuby from working well in early Android releases have been solved! Specifically, the inability to reflect any android.* classes seems to be fixed in both Android 1.6 and Android 2.0.x. Why is this so cool? It's cool because with Ruboto IRB you can interactively play with almost any Android API:



This example accesses the Activity (the IRB class in Ruboto IRB), constructs a new WebView, loads some HTML into it, and (not shown) replaces the content WebView. Interactively. On the device. Awesome.
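Since the screenshots don't reproduce well here, the gist of that session looks roughly like the following sketch; exactly how Ruboto IRB exposes the current Activity (called simply activity below) is the detail to double-check:
# a sketch; 'activity' stands in for the Ruboto IRB Activity instance
import 'android.webkit.WebView'

wv = WebView.new(activity)
wv.load_data("<h1>Hello from Ruby on Android!</h1>", "text/html", "UTF-8")
activity.set_content_view(wv)   # the "not shown" step: make it the content view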



I am trusting you to not go mad with power, and to use Ruboto only for good.

RubyGems/Maven Integration

JRuby has always been a Ruby implementation first, and as a result we've often neglected Java platform integration. But with Ruby 1.8.7 compatibility very solid and Ruby 1.9 work under way, we've started to turn our attention toward Java again.

One of the key areas for language integration is tool support. And for Java developers, tool support invariably involves Maven.

About a year ago, I started a little project to turn Maven artifacts into RubyGems. The mapping was straightforward: both have dependencies, a name, a description, a unique identifier, version numbers, and a standard file format for describing a given package. The maven_gem project was my proof-of-concept that it's possible to merge the two worlds.

The maven_gem repository is here: http://github.com/jruby/maven_gem
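To make the mapping concrete, an artifact like commons-lang 2.1 would come out looking roughly like the gemspec below; the naming scheme and fields are illustrative rather than the final word:
# an illustrative gemspec for a wrapped Maven artifact
Gem::Specification.new do |s|
  s.name     = 'commons-lang.commons-lang'   # groupId.artifactId
  s.version  = '2.1'                         # the Maven version
  s.platform = 'java'
  s.summary  = 'Apache Commons Lang, wrapped as a gem'
  s.files    = ['lib/commons-lang.rb', 'lib/commons-lang-2.1.jar']
  # each Maven dependency becomes a gem dependency on another wrapped artifact
  s.add_dependency 'some.group.some-artifact', '>= 1.0'
end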

The project sat mostly dormant until I circled back to it this fall. But once I got the guys from Sonatype involved (purveyors of the Nexus Maven server), things really got interesting.

Thanks to Tamas Cservenak from Sonatype, we now have something once thought impossible: full RubyGems integration of all the Maven artifacts in the world!

The Nexus RubyGems support repository is here: http://github.com/cstamas/nexus-ruby-support/

~/projects/jruby ➔ gem install com.lowagie.itext-rtf
Successfully installed bouncycastle.bcmail-jdk14-138-java
Successfully installed bouncycastle.bcprov-jdk14-138-java
Successfully installed bouncycastle.bctsp-jdk14-138-java
Successfully installed com.lowagie.itext-rtf-2.1.7-java
4 gems installed
Installing ri documentation for bouncycastle.bcmail-jdk14-138-java...
Installing ri documentation for bouncycastle.bcprov-jdk14-138-java...
Installing ri documentation for bouncycastle.bctsp-jdk14-138-java...
Installing ri documentation for com.lowagie.itext-rtf-2.1.7-java...
Installing RDoc documentation for bouncycastle.bcmail-jdk14-138-java...
Installing RDoc documentation for bouncycastle.bcprov-jdk14-138-java...
Installing RDoc documentation for bouncycastle.bctsp-jdk14-138-java...
Installing RDoc documentation for com.lowagie.itext-rtf-2.1.7-java...


Here's an example of a full session, where I additionally install Rhino and then use it from IRB: http://gist.github.com/271764

I should reiterate what this means for JRuby users: as of JRuby 1.5, you'll be able to install (or depend on) any Java library ever published to the public Maven repositories. In short, you now have an additional 60000-some libraries at your fingertips. Awesome, no?

There are some remaining issues to work through, like the fact that RubyGems itself chokes on that many gems when generating indexes, but there's a test server up and working. We'll get all the issues resolved by the time we release JRuby 1.5 RC1. Jump on the JRuby mailing list if you'd like to discuss this new capability.

Rake? Ant? Why not Both?

Another item on the integration front is Tom Enebo's work on providing seamless two-way integration of Rake (Ruby's build tool) and Ant. There's three aspects to Rake/Ant integration:
  • Using Rake tasks from Ant and Ant tasks from Rake
  • Calling Rake targets from Ant and Ant targets from Rake
  • Mixed build systems with part in Rake and part in Ant
Tom's work so far has focused on the first bullet, but the other two will come along as well. You'll be able to translate your Ant script to Rake and have it work without modification, call out to Rake from Ant, include a Rakefile in Ant and use its targets, and so on.

Here's an example of pulling in a build.xml file, depending on its targets, and calling Ant tasks from Rake:
require 'ant'

ant.load 'build.xml' # defines tasks :mkdir + :setup in ant!

task :compile => [:mkdir, :setup] do
  ant.javac(:srcdir => src_dir, :destdir => "./build/classes") do
    classpath :refid => "project.class.path"
  end
end

Ideally we'll cover all possible integration scenarios, and finally blur the lines between Rake and Ant. And we'll be able to move JRuby's build into Rake, which will make all of us very happy. Look forward to this in JRuby 1.5 as well.

The C Extension Question

One aspect of Ruby itself that we've punted on is support for Ruby's C extension API. We haven't done that to spite C extension users or developers--far from it...we'd love to flip a switch and have C extensions magically work. The problem is that Ruby's C API provides too-invasive access to the internals of objects, and there's just no way we can support that sort of access without incurring a massive overhead (because we'd have to copy stuff back and forth for every operation).

But there's another possibility we've started to explore: supporting only a "safe" subset of the Ruby C API, and providing a few additional functions to replace the "unsafe" invasive bits. To that end, we (as in Wayne Meissner, creator of the FFI implementations for JRuby and Ruby) have cleaned up and published a C extension API shim library to Github.

The JRuby C extension shim library repository is here: http://github.com/wmeissner/jruby-cext

Last night I spent some time getting this building and working on OS X, and to my surprise I was able to get a (very) trivial C extension to work!

Here's the part in C. Note that this is identical to what you'd write if you were implementing the extension for Ruby:
#include <stdio.h>
#include <ruby.h>

VALUE HelloModule = Qnil;
VALUE HelloClass = Qnil;

VALUE say_hello(VALUE self, VALUE hello);
VALUE get_hello(VALUE self);

void
Init_defineclass()
{
    HelloClass = rb_define_class("Hello", rb_cObject);
    rb_define_method(HelloClass, "get_hello", get_hello, 0);
    rb_define_method(HelloClass, "say_hello", say_hello, 1);
}

VALUE
say_hello(VALUE self, VALUE hello)
{
    return Qnil;
}

VALUE
get_hello(VALUE self)
{
    return rb_str_new2("Hello, World");
}

Here's the little snippet of Ruby code that loads and calls it. Note that the ModuleLoader logic would be hidden behind require/load in a final version of the C extension support.
require 'java'

m = org.jruby.cext.ModuleLoader.new
m.load(self, "defineclass")

h = Hello.new
puts "Hello.new returned #{h.inspect}"
puts "h.get_hello returns #{h.get_hello}"

Among the C API pieces we probably won't ever support are things like RSTRING, RARRAY, RHASH that give direct access to string, array, and hash internals, anything dealing with Ruby's threads or runtime, and so on. Basically the pieces that don't fit well into JNI (the Java VM C API) would not be supported.

It's also worth mentioning that this is really a user-driven venture. If you are interested in a C extension API for JRuby, then you'll need to help us get there. Not only are we plenty busy with Java and Ruby support, we are also not extension authors ourselves. Have a look at the repository, hop on the JRuby dev list, and we'll start collaborating.

JRuby in 2010

There's a lot more coming for JRuby in 2010. We're going to finally let you create "real" Java classes from Ruby code that you can compile against, serialize, annotate, and specify in configuration files. We're going to offer JRuby development support to help prioritize and accelerate bug fixes for commercial users that really need them. We're going to have a beautiful, simple deployment option in Engine Yard Cloud, with fire-and-forget for your JRuby-based applications (including any non-Ruby libraries and code you might use like Java, Scala or Clojure). And we're going to finally publish a JRuby book, with all the walkthroughs, reference material, and troubleshooting tips you'll need to adopt JRuby today.

It's going to be a great year...and a lot of fun.