A while ago, I was catching up on my feeds and noticed that Neal Gafter had announced the first prototype of Java closures. I've been a fan of the BGGA proposal, so I thought I'd catch up on its current status and try applying it to a pain point in the JRuby source: the compiler.
The current compiler is made up of two halves: the AST walker and the bytecode emitter. The AST walker recursively walks the AST, calling appropriate methods on a set of interfaces into the bytecode emitter. The bytecode emitter, in turn, spits out appropriate bytecodes and calls back to the AST walker. Back and forth, the AST is traversed and all nested structures are assembled appropriately into a functional Java method.
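For orientation, here's a rough sketch of the callback contract those two halves share. The method names come from the examples below, but this is a simplified, hypothetical slice of the real JRuby interfaces, not their full definitions:

// Simplified sketch of the contract between the AST walker and the bytecode
// emitter. Only the operations used in the first example below are shown; the
// real JRuby interfaces declare many more.
public interface ClosureCallback {
    // the emitter calls back here when it needs more AST compiled "in place"
    void compile(MethodCompiler context);
}

public interface MethodCompiler {
    void lineNumber(ISourcePosition position);   // emit line number info
    void loadNil();                              // push Ruby nil on the stack
    void pollThreadEvents();                     // emit a thread-event check
    void issueNextEvent(ClosureCallback value);  // emit a Ruby "next"
}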
This back and forth is key to the structure and relative simplicity of the compiler. Take for example the following method in ASTCompiler.java, which compiles a Ruby "next" keyword (similar to Java's "continue"):

public static void compileNext(Node node, MethodCompiler context) {
    context.lineNumber(node.getPosition());
    final NextNode nextNode = (NextNode) node;

    ClosureCallback valueCallback = new ClosureCallback() {
        public void compile(MethodCompiler context) {
            if (nextNode.getValueNode() != null) {
                ASTCompiler.compile(nextNode.getValueNode(), context);
            } else {
                context.loadNil();
            }
        }
    };

    context.pollThreadEvents();
    context.issueNextEvent(valueCallback);
}
First, the "lineNumber" operation is called on the MethodCompiler, my interface to the primary bytecode emitter. This emits line number information into the bytecode based on the parsed position in the Ruby AST.
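JRuby's compiler writes its bytecode with the ASM library, so an implementation of that operation boils down to something like the sketch below. This is only illustrative: the "method" field (an org.objectweb.asm.MethodVisitor) and the use of getStartLine() are my assumptions, not the real JRuby code.

// Hypothetical sketch of lineNumber on top of ASM's MethodVisitor,
// where "method" is the visitor this MethodCompiler is writing into.
public void lineNumber(ISourcePosition position) {
    Label here = new Label();                // org.objectweb.asm.Label
    method.visitLabel(here);
    // attach the Ruby source line to the bytecode that follows
    method.visitLineNumber(position.getStartLine(), here);
}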
Then we get a reference to the NextNode passed in.
Now here's where it gets a little tricky. The "next" operation can be compiled in one of two ways. If it occurs within a normal loop, and the compiler has an appropriate jump location, it will compile as a normal Java GOTO operation. If, on the other hand, the "next" occurs within a closure (and not within an immediately-enclosing loop), we must initiate a non-local branch operation. In short, we must throw a NextJump.
In Ruby, unlike in Java, "next" can take an optional value. In the simple case, where "next" is within a normal loop, this value is ignored. When a "next" occurs within a closure, the given value becomes the local return from that invocation of the closure. The idea is that you might write code like this, where you want to do an explicit local return from a closure rather than let the return value "fall off the end":

def foo
  puts "still going" while yield
end

a = 0
foo {next false if a > 4; a += 4; true}
...which prints "still going" twice; on the third call into the block, a has passed 4, so "next false" returns false from that invocation of the closure and the loop ends.
The straightforward way to compile this non-local "next" would be to evaluate the argument, construct a NextJump object, swap the two values on the stack so we can call the NextJump(IRubyObject value) constructor with the given value, and then throw the exception. But that requires us to juggle values around all the time. This simple case doesn't seem like such a problem, but imagine the hundreds or thousands of nodes the compiler will handle for a given method, all spending at least part of their time juggling stack values around. It would be a miserable waste.
So the compiler constructs a poor man's closure: an anonymous inner class. The inner class implements our "ClosureCallback" interface, which has a single method, "compile", accepting a single MethodCompiler parameter, "context". This allows the non-local "next" bytecode emitter to first construct the NextJump, then ask the AST compiler to continue processing AST nodes. The compiler walks the "value" node for the "next" operation, again causing the appropriate bytecode emitter calls to be made, and finally we have our value on the stack, exactly where we want it. We finish constructing the NextJump and happily toss it into the ether.
The final line of the compileNext method initiates this process.
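To make that division of labor concrete, here's a hedged sketch of what the emitter's side of issueNextEvent might look like in raw ASM terms. The real JRuby implementation is more involved (it also handles the in-loop GOTO case), and the exception class name and the "method" field here are only illustrative:

// Hypothetical sketch of the non-local "next" case against ASM's MethodVisitor.
// The callback fills in the value while the NextJump construction is already
// "in flight" on the operand stack, so nothing needs to be juggled afterward.
public void issueNextEvent(ClosureCallback valueCallback) {
    String jumpType = "org/jruby/exceptions/JumpException$NextJump"; // illustrative name
    method.visitTypeInsn(Opcodes.NEW, jumpType);    // [jump]
    method.visitInsn(Opcodes.DUP);                  // [jump, jump]

    // hand control back to the AST walker; it leaves the value on the stack
    valueCallback.compile(this);                    // [jump, jump, value]

    method.visitMethodInsn(Opcodes.INVOKESPECIAL, jumpType, "<init>",
            "(Lorg/jruby/runtime/builtin/IRubyObject;)V", false);  // [jump]
    method.visitInsn(Opcodes.ATHROW);               // toss it into the ether
}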
So what would this look like with the closure specification in play? We'll simplify it with a function object.

public static void compileNext(Node node, MethodCompiler context) {
    context.lineNumber(node.getPosition());
    final NextNode nextNode = (NextNode) node;

    ClosureCallback valueCallback = { MethodCompiler context =>
        if (nextNode.getValueNode() != null) {
            ASTCompiler.compile(nextNode.getValueNode(), context);
        } else {
            context.loadNil();
        }
    };

    context.pollThreadEvents();
    context.issueNextEvent(valueCallback);
}
That's starting to look a little cleaner. Gone is the explicit "new"ing of a ClosureCallback anonymous class, along with the superfluous "compile" method declaration. We're also seeing a bit of magic outside the function type: closure conversion. Our little closure that accepts a MethodCompiler parameter is being coerced into the appropriate interface type for the "valueCallback" variable.
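Closure conversion isn't specific to JRuby's callback interfaces; under the prototype, any single-method interface can be the target. Here are a couple of hypothetical one-liners in the prototype's syntax using familiar JDK interfaces (my own examples, not from the JRuby source):

// Hedged examples of closure conversion with the BGGA prototype syntax.
// A no-argument function literal converted to java.lang.Runnable...
Runnable task = { =>
    System.out.println("closure conversion in action");
};
task.run();

// ...and a two-argument literal whose last expression becomes its value,
// converted to java.util.Comparator<String>.
Comparator<String> byLength = { String a, String b => a.length() - b.length() };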
How about a more complicated example? Here's a much longer method from JRuby that handles "operator assignment", or any code that looks like a += b:

public static void compileOpAsgn(Node node, MethodCompiler context) {
    context.lineNumber(node.getPosition());

    // FIXME: This is a little more complicated than it needs to be;
    // do we see now why closures would be nice in Java?
    final OpAsgnNode opAsgnNode = (OpAsgnNode) node;

    final ClosureCallback receiverCallback = new ClosureCallback() {
        public void compile(MethodCompiler context) {
            ASTCompiler.compile(opAsgnNode.getReceiverNode(), context); // [recv]
            context.duplicateCurrentValue(); // [recv, recv]
        }
    };

    BranchCallback doneBranch = new BranchCallback() {
        public void branch(MethodCompiler context) {
            // get rid of extra receiver, leave the variable result present
            context.swapValues();
            context.consumeCurrentValue();
        }
    };

    // Just evaluate the value and stuff it in an argument array
    final ArrayCallback justEvalValue = new ArrayCallback() {
        public void nextValue(MethodCompiler context, Object sourceArray,
                int index) {
            compile(((Node[]) sourceArray)[index], context);
        }
    };

    BranchCallback assignBranch = new BranchCallback() {
        public void branch(MethodCompiler context) {
            // eliminate extra value, eval new one and assign
            context.consumeCurrentValue();
            context.createObjectArray(new Node[]{opAsgnNode.getValueNode()}, justEvalValue);
            context.getInvocationCompiler().invokeAttrAssign(opAsgnNode.getVariableNameAsgn());
        }
    };

    ClosureCallback receiver2Callback = new ClosureCallback() {
        public void compile(MethodCompiler context) {
            context.getInvocationCompiler().invokeDynamic(
                    opAsgnNode.getVariableName(), receiverCallback, null,
                    CallType.FUNCTIONAL, null, false);
        }
    };

    if (opAsgnNode.getOperatorName() == "||") {
        // if lhs is true, don't eval rhs and assign
        receiver2Callback.compile(context);
        context.duplicateCurrentValue();
        context.performBooleanBranch(doneBranch, assignBranch);
    } else if (opAsgnNode.getOperatorName() == "&&") {
        // if lhs is true, eval rhs and assign
        receiver2Callback.compile(context);
        context.duplicateCurrentValue();
        context.performBooleanBranch(assignBranch, doneBranch);
    } else {
        // eval new value, call operator on old value, and assign
        ClosureCallback argsCallback = new ClosureCallback() {
            public void compile(MethodCompiler context) {
                context.createObjectArray(new Node[]{opAsgnNode.getValueNode()}, justEvalValue);
            }
        };

        context.getInvocationCompiler().invokeDynamic(
                opAsgnNode.getOperatorName(), receiver2Callback, argsCallback,
                CallType.FUNCTIONAL, null, false);
        context.createObjectArray(1);
        context.getInvocationCompiler().invokeAttrAssign(opAsgnNode.getVariableNameAsgn());
    }

    context.pollThreadEvents();
}
Gods, what a monster. And notice my snarky comment at the top about how nice closures would be (it's really there in the source, see for yourself). This method obviously needs to be refactored, but there's a key goal here that isn't addressed easily by currently-available Java syntax: the caller and the callee must cooperate to produce the final result. And in this case that means numerous closures.
I will spare you the walkthrough on this, and I will also spare you the one or two other methods in the ASTCompiler class that are even worse. Instead, we'll jump to the endgame:

public static void compileOpAsgn(Node node, MethodCompiler context) {
    context.lineNumber(node.getPosition());

    // FIXME: This is a little more complicated than it needs to be;
    // do we see now why closures would be nice in Java?
    final OpAsgnNode opAsgnNode = (OpAsgnNode) node;

    ClosureCallback receiverCallback = { MethodCompiler context =>
        ASTCompiler.compile(opAsgnNode.getReceiverNode(), context); // [recv]
        context.duplicateCurrentValue(); // [recv, recv]
    };

    BranchCallback doneBranch = { MethodCompiler context =>
        // get rid of extra receiver, leave the variable result present
        context.swapValues();
        context.consumeCurrentValue();
    };

    // Just evaluate the value and stuff it in an argument array
    ArrayCallback justEvalValue = { MethodCompiler context, Object sourceArray, int index =>
        compile(((Node[]) sourceArray)[index], context);
    };

    BranchCallback assignBranch = { MethodCompiler context =>
        // eliminate extra value, eval new one and assign
        context.consumeCurrentValue();
        context.createObjectArray(new Node[]{opAsgnNode.getValueNode()}, justEvalValue);
        context.getInvocationCompiler().invokeAttrAssign(opAsgnNode.getVariableNameAsgn());
    };

    ClosureCallback receiver2Callback = { MethodCompiler context =>
        context.getInvocationCompiler().invokeDynamic(
                opAsgnNode.getVariableName(), receiverCallback, null,
                CallType.FUNCTIONAL, null, false);
    };

    // eval new value, call operator on old value, and assign
    ClosureCallback argsCallback = { MethodCompiler context =>
        context.createObjectArray(new Node[]{opAsgnNode.getValueNode()}, justEvalValue);
    };

    if (opAsgnNode.getOperatorName() == "||") {
        // if lhs is true, don't eval rhs and assign
        receiver2Callback.compile(context);
        context.duplicateCurrentValue();
        context.performBooleanBranch(doneBranch, assignBranch);
    } else if (opAsgnNode.getOperatorName() == "&&") {
        // if lhs is true, eval rhs and assign
        receiver2Callback.compile(context);
        context.duplicateCurrentValue();
        context.performBooleanBranch(assignBranch, doneBranch);
    } else {
        context.getInvocationCompiler().invokeDynamic(
                opAsgnNode.getOperatorName(), receiver2Callback, argsCallback,
                CallType.FUNCTIONAL, null, false);
        context.createObjectArray(1);
        context.getInvocationCompiler().invokeAttrAssign(
                opAsgnNode.getVariableNameAsgn());
    }

    context.pollThreadEvents();
}
There are two things I'd like you to notice here. First, it's a bit shorter as a result of the literal function objects and closure conversion. It's also a bit DRYer, which naturally plays into the code reduction. Second, there's far less noise to contend with. Rather than needing a minimum of five verbose lines to define a one-line closure (for example), we now need three terse ones. We've managed to tighten the focus to the lines of code we're actually interested in: the bodies of the closures.
Of course this quick tour doesn't get into the much wider range of features the closures proposal contains, such as non-local returns. It also doesn't show the closures being invoked: thanks to closure conversion, many existing interfaces can be satisfied by function objects automatically, so the calling code doesn't change at all.
I'll be looking at the closure proposal a bit more closely, and time permitting I'll try to get a simple JRuby prototype compiler wired up using the techniques above. I'd recommend you give it a try too, and offer Neal your feedback.