Monday, January 28, 2008

Lang.NET 2008: Day 1 Thoughts

Yes friends, I'm at Microsoft's Lang.NET symposium this week. Does this strike you as a bit peculiar?

Lang.NET is Microsoft's event for folks interested in using and implementing "alternative" languages on the CLR. From the Lang.NET site itself:

Lang .NET 2008 Symposium is a forum for discussion on programming languages, managed execution environments, compilers, multi-language libraries, and integrated development environments. This conference provides an excellent opportunity for Programming Language Implementers and Researchers from both industry and academia to meet and share their knowledge, experience, and suggestions for future research and development in the area of programming languages.

Of course .NET and the CLR aren't mentioned explicitly here, but being a Microsoft event I think it's reasonable to expect it to be .NET heavy. And that's certainly ok, since at least one of the key professed design goals of the CLR is to support multiple languages, and by multiple we'd like to think they mean more than VB or C#. Microsoft would understandably like to cultivate a culture of language implementation and experimentation on the CLR, since history has shown repeatedly that programming language diversity is a must for any platform, system, or framework to be long-lived. This fact is not new to many of us, although there are certain companies that until recently refused to believe it. But I digress.

What is a long-time Java user and short-time JVM language implementer doing here? Why would I subject myself to the slings and arrows of a potentially unfriendly .NET crowd, or at least to the scalding apathy of a roomful of engineers with no interest in the JVM or my work on JRuby? Well, that's an excellent question.

Perhaps it's more entertaining to describe *what* I'm doing here before explaining *why* I'm doing it. The credit for the idea goes to Brian Goetz, a coworker of mine at Sun and currently an engineer working on JavaFX (though you probably know him better from his talks on Java performance or his book and talks on Java concurrency). Given that we're both working on JVM language-related projects, and we both would like to see the JVM and the Java platform continue to grow, evolve, and expand, he thought it might be useful for the two of us and John Rose to attend the conference to present and discuss the many enhancements we're considering to support non-Java languages. You may know John Rose from his posts on adding various wild features to the JVM...features that you, dear reader, and others like you may never have expected to enter the JVM in any form. That's how I know him, and from my many discussions with him about how to make JRuby perform well on current JVM versions. But we must look to the future, and attending Lang.NET is, at least in part, a way to validate that this is an appropriate future to pursue.

So today John Rose and I presented his Da Vinci Machine project to a largely .NET crowd, with John covering the theoretical aspects of what features we'd like to add to the JVM and how he plans to add them, and me covering the practical reasons why we'd like to add those features using JRuby as an example. Those of you who saw my presentation at RubyConf would have recognized my slides; I largely presented the same "JRuby design internals" content in more depth and with language implementers and enthusiasts in mind. Specifically I presented details about JRuby's parser, interpreter, core class implementations, extensions and POSIX support, and of course its compiler and related optimizations. The last area was obviously the most interesting for this crowd, but I'll explain that later.

So there's the primary "what" of the trip. And it was the primary "what" we three shared in common. But there were a few other "what"s I had in mind entering into this, and they also explain the "why":
  1. I hoped to understand better how the DLR (and to a lesser extent the CLR) improves the language-implementing ecosystem on .NET.
  2. I wanted to meet and talk with .NET language implementers about their strategies for various problems facing JRuby and similar JVM languages.
  3. I wanted to meet my counterparts on the .NET side of the world, which is increasingly becoming a parallel universe to the JVM and the Java platform...at least as far as problem-solving strategies and future directions go.
  4. I enjoy meeting and talking with smart people. There are a few of them here.
So in order to "yeggefy" my post a bit, I want to elaborate on the first and primary point in more detail, rather than give another yawner of a conference play-by-play.

Pain

It's worth prefacing this with a disclaimer: I am not a Microsoft fan. Perhaps it's like being burned too many times by a hot iron. My work on LiteStep, my enthusiasm for open source, my interest in collaborative, communal development processes, and my dark days learning Win32, MFC, and COM mean I'll probably never be particularly warm to Microsoft projects, technology, or ideals, regardless of how they may evolve over time. That's not to say Microsoft can't change or isn't changing...but as you probably know it's hard to make yourself touch a cold iron. Pain rewires our circuitry, and lingers on in our altered behavior forever. I have felt pain.

But I endeavor to rise above my own wiring and bigotry, which is why I force myself to help other JVM language implementations, force myself to help other Ruby implementations, and force myself to share freely and openly all I can even with members of otherwise competing worlds. In this sense, the pain that comes from cooperating with projects I might otherwise want to undermine, derail, or discount is antithetical to my belief in open doors and open processes. I must cure myself of this affliction, rewrite myself to not feel that pain.

And so I am trying to study the DLR, which brings me to point #1 above. I will appreciate any and all corrections to everything I state here.

Eyeing the DLR

The DLR is Microsoft's Dynamic Language Runtime, a set of libraries and tools designed to make it easier to implement dynamic languages atop the CLR. The DLR provides facilities for compiler and interpreter generation (via language-agnostic expression trees), fast dynamic invocation (via self-updating dynamic call sites), and cross-language method dispatch and type system support. It is, to my eyes, intended to be the "everytool" needed to implement dynamic languages for .NET.

The DLR has largely grown out of Jim Hugunin's work on IronPython, and aims to tackle all the same issues we've dealt with while working on JRuby. But it also provides more than that in its self-optimizing capabilities and its expression tree logic, two features that undoubtedly make it easier to produce CLR dynamic language implementations with acceptable to excellent performance characteristics.

Expression trees in the DLR are used as an intermediate representation of language semantics. Where the typical language implementation process has you parse to an AST and then either interpret that or do the additional work of compiling the AST to a target "assembly" language, expression trees are meant to be (mostly) the last step in your language implementation process (outside of implementing language-specific types and dealing with language-specific nuances). Up to the point of producing a parsed AST the two paths are largely the same. But on the DLR, instead of proceeding to compiler work, you further transform the AST into an expression tree that represents the language's behavior in DLR terms. So if your language provides a mechanism to conditionally run a given expression (statement modifiers, in Ruby terms), you might turn your one conditional-expression AST node into equivalent test and execute DLR expression tree nodes. The translation appears to be almost always a widening one, where the DLR hopes to provide a broad set of "microsemantics" that can be composed into a wider range of language features. And of course the idea is that the curve flattens quickly: each new language should require fewer and fewer new expression tree node types. Whether that holds remains to be seen.
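
To make the lowering idea concrete, here's a minimal sketch in Java. None of these class names are DLR (or JRuby) API; they're hypothetical node types invented purely for illustration, showing how a single language-specific node (a Ruby statement modifier) might widen into a small cluster of generic test/branch nodes.

```java
// Hypothetical generic node types a shared runtime might understand.
interface Node {}
record Constant(Object value) implements Node {}
record Test(Node condition) implements Node {}                       // coerce condition to a boolean
record Branch(Node test, Node ifTrue, Node ifFalse) implements Node {}
record Call(Node receiver, String methodName) implements Node {}
record Local(String name) implements Node {}

// A language-specific AST node: Ruby's statement modifier, e.g. "do_thing if cond".
record IfModifier(Node body, Node condition) {}

class Lowering {
    // One source-language node widens into several generic nodes.
    static Node lower(IfModifier mod) {
        return new Branch(new Test(mod.condition()),
                          mod.body(),
                          new Constant(null));                        // Ruby's "else" result is nil
    }

    public static void main(String[] args) {
        IfModifier rubyNode =
            new IfModifier(new Call(new Local("self"), "do_thing"), new Local("cond"));
        System.out.println(lower(rubyNode));
    }
}
```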

I must admit, I've pooh-poohed the expression tree idea in the past, mostly because I did not understand it and had not found a way to see beyond my personal biases. While a language-generic expression tree is certainly a very appealing goal, one that lowers the entry cost of language implementation considerably, it has always seemed like too lofty an ideal. It's impossible to predict what features future languages might decide to add, so how can we reasonably expect to provide a set of expression nodes that can represent those languages? Won't we be endlessly forced to extend that set of node types for each new language that comes along? And if, to alleviate that pain, we are given hooks to call back into custom language-specific code for cases where the DLR falls down, doesn't that eliminate the advantage we hoped to gain in the first place?

I'll make no secret of my skepticism here, but I've seen a couple of things today that make me far more optimistic about the utility of language-agnostic expression trees.

The first was during Anders Hejlsberg's talk about C# 3.0. C# 3.0 adds a considerable slate of features, some of which I can appreciate (type inference, for example, would take 99% of the pain out of using generics; Anders called that out as a key reason for adding inference to C#) and others I'm still dubious about (LINQ's additions to C#'s already large set of keywords). But his talk made one thing abundantly clear: these changes are little more than compiler magic, smoke and mirrors to produce syntactic sugar. And he admitted as much, showing that behind the scenes LINQ and inferred types and their ilk compile down to essentially the same IL that would be produced if you wrote the desugared code by hand. And that's particularly heartening to me, since it means the vast majority of these features could be added to Java or supported by JVM languages without modifying the JVM itself. Whether they're features we want to add is a separate discussion.

The second detail that made me better appreciate expression trees stems directly from the first: they're just objects, man. Although expression tree support is being wired directly into C# and VB.NET for at least the small subset of expressions LINQ encompasses, in general we're talking about nothing more than a graph of objects and a set of libraries and conventions that can utilize them. If it were reasonable for us to expose JRuby's AST as a "one true AST" to represent all languages, we'd be able to support a similar set of features, adding syntax to Java to generate Ruby ASTs at runtime along with capabilities for manipulating, executing, and compiling those ASTs. Naturally, we'd never argue that the JRuby AST comes close to approximating the set of low-level features "all languages" might need to support, but hopefully you see the parallel. So again, we're working at a level well above the VM itself. That means the power of expression trees could definitely be put into the hands of JVM language developers without any modifications to existing JVMs. And in the DLR's case, they're even considering the implications of allowing new language implementations to provide extensions to the set of expression tree nodes, so at least some of them recognize that it's all just objects too...and are perhaps starting to recognize that nobody will ever build the perfect mousetrap.
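
To underline the "just objects" point, here's another hypothetical Java sketch (again, these names are invented, not anyone's real API): a tree built at runtime is an ordinary object graph, and "executing" it needs nothing from the VM beyond a library that walks, or eventually compiles, that graph.

```java
import java.util.Map;

// A tiny evaluable expression tree: plain objects plus a library convention (eval).
interface Expr {
    Object eval(Map<String, Object> env);
}

record Const(Object value) implements Expr {
    public Object eval(Map<String, Object> env) { return value; }
}

record Var(String name) implements Expr {
    public Object eval(Map<String, Object> env) { return env.get(name); }
}

record If(Expr test, Expr ifTrue, Expr ifFalse) implements Expr {
    public Object eval(Map<String, Object> env) {
        Object t = test.eval(env);
        boolean truthy = t != null && !Boolean.FALSE.equals(t);   // Ruby-ish truthiness
        return (truthy ? ifTrue : ifFalse).eval(env);
    }
}

class JustObjects {
    public static void main(String[] args) {
        // Built at runtime, inspected, passed around, and evaluated -- all above the VM.
        Expr tree = new If(new Var("cond"), new Const("ran it"), new Const(null));
        System.out.println(tree.eval(Map.of("cond", true)));      // => ran it
    }
}
```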

So that brings me to my other key point of interest in the DLR: its mechanisms for optimizing performance at runtime.

The DLR DynamicSite is basically an invokable object that represents a given call at a given position in a program's source code. When you perform a dynamic invocation, the DynamicSite runs through a list of rules, checking each in turn until it finds a match. If it finds a match, it invokes the associated target code, which presumably does things like coerce types, look up methods, and perform the actual work of the invocation. As more calls pass through the same call site, the DynamicSite will (if necessary) repeatedly replace itself with new code (new IL, in fact) that includes some subset of the previous rules and targets (potentially all of them if they're all still valid, or none of them if they're all now invalid) along with at least one new rule for the new set of circumstances brought about by this call. I've prototyped this exact same behavior in JRuby's call sites (on a more limited scale), but have not had the time to include it in JRuby proper just yet (and might never have to...more on that later). The idea is that as the program runs, new information becomes available that will either make old assumptions incorrect or make new optimizations possible. So the DynamicSite continually evolves, presumably toward some state of harmony or toward some upper limit (i.e. "I give up").
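
Here's a rough Java sketch of the rule/target idea, with hypothetical names rather than the DLR's DynamicSite or JRuby's actual call site classes. A real DLR site regenerates itself as new IL; this sketch just keeps a growing list of guarded targets, which is enough to show the shape of the approach.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Predicate;

class DynamicCallSite {
    // A rule guards a cached target, e.g. "the receiver is still an Integer".
    record Entry(Predicate<Object> rule, BiFunction<Object, Object[], Object> target) {}

    private final String methodName;
    private final List<Entry> entries = new ArrayList<>();

    DynamicCallSite(String methodName) { this.methodName = methodName; }

    Object call(Object receiver, Object... args) {
        for (Entry e : entries) {
            if (e.rule().test(receiver)) {
                return e.target().apply(receiver, args);   // fast path: a guard matched
            }
        }
        return fallback(receiver, args);                   // slow path: learn something new
    }

    private Object fallback(Object receiver, Object[] args) {
        // Full dynamic lookup (method tables, coercions, etc.) would happen here.
        Class<?> seenClass = receiver.getClass();
        BiFunction<Object, Object[], Object> target =
            (self, a) -> "invoked " + methodName + " on " + self.getClass().getSimpleName();
        // Install a guard so the next call with the same receiver class skips the lookup.
        entries.add(new Entry(self -> self.getClass() == seenClass, target));
        return target.apply(receiver, args);
    }

    public static void main(String[] args) {
        DynamicCallSite site = new DynamicCallSite("to_s");
        System.out.println(site.call(42));        // slow path: installs a rule for Integer
        System.out.println(site.call(7));         // fast path: the Integer guard matches
        System.out.println(site.call("hello"));   // slow path again: adds a rule for String
    }
}
```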

And there's a simple reason why the DLR must do this to get acceptable performance out of languages like Python and Ruby:

Because the CLR doesn't.

A Solid Base

In the JVM world, we've long been told about and only recently realized the benefits of dynamic optimization. The JVM has over time gained varying capabilities to dynamically optimize and reoptimize code...and perhaps most importantly, to dynamically *deoptimize* code when necessary. Deoptimization is very exciting when dealing with performance concerns, since it means you can make much more aggressive optimizations--optimizations that may not hold for the entire uncertain future of an application--knowing you'll be able to fall back on a tried-and-true safe path later on. So you can do things like inlining entire call paths once you've seen the same path a few times. You can do things like omitting synchronization guards until it becomes obvious they're needed. And you can change the set of optimizations applied after the fact...in essence, you can safely be "wrong" and learn from your mistakes at runtime. This is the key reason why Java now surpasses C and C++ in specific handcrafted benchmarks and why it should eventually be able to exceed C and C++ in almost all benchmarks. And it's a key reason why we're able to get acceptable performance out of JRuby having done far less work than Microsoft has done on IronPython and the DLR.
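
To illustrate the kind of speculation involved, here's a small Java example. The interesting part isn't in the source at all: while only one Greeter implementation has ever been loaded, the JIT can treat the interface call as a direct call and inline it; loading a second implementation later invalidates that assumption, so the VM deoptimizes back to the safe path and recompiles. (This is an illustration of typical HotSpot behavior under stated assumptions, not code from JRuby or the MLVM.)

```java
interface Greeter { String greet(); }

class English implements Greeter {
    public String greet() { return "hello"; }
}

class DeoptDemo {
    static String greetFrom(Greeter g) {
        return g.greet();   // monomorphic so far: a candidate for devirtualization and inlining
    }

    public static void main(String[] args) {
        Greeter en = new English();
        for (int i = 0; i < 1_000_000; i++) {
            greetFrom(en);  // warm up: the JIT compiles under the "only English exists" assumption
        }
        // A second implementation breaks that assumption; the VM deoptimizes the compiled
        // code, falls back to the interpreter/safe path, and can recompile with both types.
        Greeter fr = new Greeter() { public String greet() { return "bonjour"; } };
        System.out.println(greetFrom(en) + " / " + greetFrom(fr));
    }
}
```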

The CLR, on the other hand, does not (yet) have the same level of dynamic optimization that the JVM does. Specifically, it does not currently support deoptimizing code that has already been JITed, and it does not always (occasionally? rarely?) factor runtime type information into its decisions about how to optimize code. (Those of you more familiar with CLR internals, please update or correct me on how much optimization exists today and how deep those optimizations go.) So for dynamic languages to perform well, you simply have to do additional work. You don't have a static call path you can JIT early and trust never to change. You can't bind a call site to a specific method body, knowing verifiably that it will forever be the same code called from that site. And ultimately, that means you must implement those capabilities yourself, with profiling, self-updating call sites that build up rule and target sets based on gathered information.

So I think there's a reasonably simple answer now for folks asking me if I believe we need a "DLR" for JVM language implementers to target. The answer is that certain parts of the DLR are definitely attractive, and they may be worth implementing to ease the pain of JVM language implementation or at least reduce the barrier to entry. But there's a large set of DLR features we simply don't need to create if we can find a way to open up the JVM's existing ability to optimize dynamically...if we can punch a hole in the bulky, dynamic-unfriendly Java exterior. And that's exactly what we're doing (really, John is doing) with JSR-292 (dynamic invocation for the JVM) and the MLVM.

Where then do we go from here? If the correct path only goes partway down DLR street, what else is missing? Well, some of my Sun brethren may not like to hear these (and others may love to hear them), but here's a short list of what I believe needs to happen to keep the JVM relevant into the future (at least if we take it as a given that multi-language support is a must):
  1. We need clear and consistent proof that a multi-language JVM is a priority, and not just at Sun Microsystems but also in the wider JVM and Java platform communities. The libraries and runtimes and support code necessary will grow out of that commitment. It's obviously not worth the effort to make all this happen if nobody cares and nobody will want to use it. But I don't think that's the case, and currently there's not a strong enough indication from any of the major community players that this cause is worth fighting for. At least in Sun's case, we've thrown down the gauntlet by open-sourcing Java and initiating projects like the JVM Languages mailing list, the JVM Language Runtime project, and the Multi-Language VM project. But we need you, and that brings me to my second point.
  2. We're fighting an uphill battle for talented, excited resources in this domain, and without more money and time spent by the major players on multi-language support for the JVM (in collaboration, I might add), it's not going to happen. Again, whether that's a bad thing depends on whether you believe in a multi-language world. I do, and I'm willing to spend as much of my time as necessary to make this happen. But I'm no patsy for political interests that might route us down the wrong path, and I'm definitely not going to be able to do this alone. Where do you stand?
  3. We need the freedom to make the JVM "reborn". Java is suffering from middle age (or old age, depending on who you ask), so making nontrivial changes to the JVM specification is generally met with stiff resistance. But I believe this resistance comes either from folks who don't realize the stakes involved, or from folks with their own bigotry and biases, or perhaps simply from rank-and-file pragmatists who don't want to or can't invest the resources necessary to make their implementation of the "JVM" more useful to the relative few of us currently interested in next-generation JVM language support. And of course, there were lots of folks who believed without a doubt that the Titanic could never sink, and that any additional resources spent to ensure it wouldn't would be wasted. It's time to learn from such mistakes. Yes, we can and will use the MLVM as a proving ground for these features, and yes, I'm sure Sun and other players will continue to take careful, measured steps to evolve the JVM and the Java language. These are both appropriate directions. But we must never shut the door on the future by claiming that either the JVM or the Java language is "done". There are three words for an entity that can no longer change: "it's dead, Jim". Java is not dead...and I'll be damned if I'm going to let it or the JVM die.
As always my opinions are my own, and may or may not reflect the positions of Sun Microsystems or its partners. But I know for a fact there are many other like-minded individuals that feel the same way.

I'm looking forward to day two of Lang.NET 2008, and to hearing from all of you out there. Comments, questions, flames and praise are welcome. But actions speak louder than words, and the time for action is now.

Update: Here's a link to Tom's and my RubyConf slides. I'll add a link to the Lang.NET slides when they're posted.

Thursday, January 03, 2008

Jython's Back, Baby!

Well, it's been a long, hard slog for the Jython team. Once thought dead, they've picked up more and more steam over the past year. They got out a long-awaited 2.2 release and started work on many missing 2.3, 2.4, and 2.5 features. They started tackling new parsers and compilers. They spoke with me and other folks at Sun about the future of dynamic languages on the JVM. And above all, they've been working their asses off, having sprints and codefests and hacking away on every corner of Jython.

It looks like their hard work is paying off. Jim Baker reports that they have successfully run Django on Jython. They're using bleeding edge revisions of both Jython and Django, and there's a bit more work to be done, but hey, lots of folks thought it would be impossible. Haven't we heard that somewhere before?

Hats off to the whole Jython team and their obviously excellent community for making this a reality. This is just the beginning!