Thursday, November 03, 2011

The Java Way, simple Type Inference and Flow Sensitive Typing

In "Groovy static type checker: status update" Cédric gave one of his favorite type checking examples. Even though the example made some things clear that I was not so clear about in my last blog post, I still think we need to look at this example a bit more in detail.

Perfectly fine Groovy code like this:

class A {
    void foo() {}
}
def name = new A()
name.foo()
name = 1
name = '123'
name.toInteger()

is a problem for a static type checker and I want to explain once more in a bit more detail why. There are currently 3 ways to approach this code...

The Way of Java
In this version we try to stay as close to Java as possible, but obviously we have to give def a meaning. In standard Groovy it is basically equal to Object. As I wrote in my last blog post, Groovy's view on types is a tiny bit different from Java's. Anyway, if def is just an alias for Object and we compile this code with Java's type rules, then the code will not compile:

class A {
    void foo() {}
}
def name = new A() // name is of type Object
name.foo() // error: foo() does not exist on Object
name = 1 // assigning Integer to Object typed name is allowed
name = '123' // assigning String to Object typed name is allowed
name.toInteger() // error: toInteger() does not exist on Object

While the assignments would all work just fine, the method calls will not pass, since those methods are not available on Object.

Simple Type Inference
This seems to be the way groovypp goes. If I am wrong about it, feel free to correct me. Again we have to give def a meaning, and this time we use the right-hand side of the assignment to do so. For the remaining code we stay more or less with the Java rules. The result then looks like this:

class A {
    void foo() {}
}
def name = new A() // name is of type A
name.foo() // no problem, foo() exists on A
name = 1 // error: assigning Integer to A typed name
name = '123' // error: assigning String to A typed name
name.toInteger() // error: toInteger() does not exist on A

Instead of the two problems from before we now have 3, 2 of them at a different position in our code. If we want that piece of code to compile, then those two approaches won't do it.

Flow Sensitive Typing
In this third version we still have to give def a meaning, but this time the meaning is not fixed:

class A {
    void foo() {}
}
def name = new A() // name is of flow type A
name.foo() // no problem, foo() exists on A
name = 1 // name becomes of flow type Integer
name = '123' // name becomes of flow type String
name.toInteger() // no problem, toInteger() does exist on String

With this we reach our goal - who would have guessed that ;)

The difficult thing for a Java programmer here is probably that name is not of a fixed type. In fact, looking at many papers in the area of formal semantics, most type systems out there use the flow type only for type checks, not to actually give a variable a type. On the other hand, if you look at the Java way, you could say that simple type inference is just an enhancement: instead of letting the user write the type, the compiler sets the type. Many older languages support that kind of logic, so this is really nothing new. Still, if we see that as an enhancement, then letting the compiler set the type of a variable automatically at more than one place can be considered just the next step.

But I am getting sidetracked... I only wanted to show why exactly this example is causing a problem or why not. That is all. [UPDATE: I had to reformat the article a bit, because I had problems with my JavaScript based syntax highlighter and with the line length of some code examples]

Wednesday, October 26, 2011

Flow Sensitive Typing?

While we (Guillaume, Cédric and myself) had a meeting in Paris, we talked a bit about the type system of Grumpy.

Coming from a dynamic language and going static often feels quite limiting. For me the main point of a static type system is to ensure the code I just wrote is not totally stupid. Actually many would say the purpose is to ensure the correctness of the code, but maybe I am more of a dynamic guy, because I think this is impossible for a poor compiler to achieve. So a static compiler usually checks

  • method calls, to ensure the method I want to call really exists
  • fields/properties, to ensure they exist
  • assignments, to ensure right-hand side and left-hand side have compatible types
  • type usage, to ensure it complies with the type system (including generics, class declarations and so on)
  • and others...
So usually, if a compiler detects a violation it will cause a compilation error, and if it cannot check things, the code will probably include runtime checks.

Optional Typing

Now Groovy has what we call optional typing. In Groovy the compiler won't check fields, properties or methods for their existence, since the compiler cannot be totally sure we really mean some entity that exists at compile time. Groovy allows you to create/remove/add/replace methods at runtime and attach them to classes via their MetaClass. A program that would not compile statically may run just fine with those additions. What the Groovy compiler does do though is to check type usage to some extent. So you can for example not extend an interface. The Groovy compiler has to do this, because the JVM is also typed to some extent and doesn't directly allow for arbitrary type systems. Sure, there are ways around that, but they always mean reducing the tight integration with Java, and we don't want that.

Another aspect is assignments. If you assign a value to a typed variable in Groovy, then the compiler won't ensure this works at runtime, but it will add a cast that may fail. Effectively this means for a typed variable in Groovy: we guarantee you that if the assignment works, the value assigned to the variable will be at least of the declared type.

This implies, for example, that if you assign a non-String to a String typed variable, its toString() method is called. We call that way of casting a Groovy cast, and the rules for it are actually quite complex.
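
To illustrate the toString() case mentioned above, a minimal sketch (the full Groovy cast rules cover much more):

String s = 123    // right-hand side is an Integer, not a String
assert s == "123" // the value's toString() result is what ends up in s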

Still, there are enough cases we could actually check, if we knew the type of the right-hand side. In general we don't know that type, because for example a method call may be done there and then we cannot ensure the type. By the time we actually reach that point in the code, the method might have been replaced.

Strong Typing
If you follow the discussions about typing, you will most probably see very fast that dynamic and static typing might be kind of defined, but beyond that there are often conflicting definitions for other terms. For example some say that Groovy is not strongly typed, while Java is. In my definition strong typing means that a value cannot change its type at runtime without first creating a new value. In Java we have this situation for example with boxing. You can assign an int to an Object, but not without the value being boxed, thus a new value being created. Now in Groovy this is just the same. An int cannot become an Integer or a String just like that. We depend on the type system enforced on us by the JVM, and the JVM is strongly typed... well, that may change in the future, but for now it is strongly typed. In Groovy you can for example add methods and with that change the interface a value provides, but there is no way for a value of a certain class to become the same value with a totally different class, without a new value being created and that one used instead.

Flow Sensitive Typing
Flow sensitive typing is not unknown in the world. It is normally used, for example, to find the type of a complex expression and then check that against the allowed type in an assignment. In Groovy we want to go a bit of a different way. Basically we don't want one fixed static type; instead each assignment can specify a new one. If you define a variable using "def", then in normal Groovy all assignments to it are allowed. Basically we see "def" as Object in Groovy. But if you want static method checks, you still want something like "def str = "my string"; println str.toUpperCase()" to work. This case can so far also be solved by simple type inference. But in Groovy you can also do this: "def v = 1; v = v.toString(); println v.toUpperCase()". Even though we start out with an int, we later assign a String to v. If we work only with a simple inference system, this will not compile. But since it is of course our goal to make a wide range of Groovy programs available even in a statically checked version, we would of course like to allow this. And a simple flow analysis can indeed give us the information that the flow type of v changes to String and thus the toUpperCase() method exists. In other words, this would compile.

Taking into consideration that "def" in Groovy doesn't mean much more than Object, we don't want this to be limited to "def" only. We also want to allow this: "Object v = 1; v = v.toString(); println v.toUpperCase()". Java would not allow this. Sure, you can assign the 1, you can even call toString() and assign the result to v, but because v is declared as Object, the compiler would start barking at you for the toUpperCase() call. Our thinking is that there is not really a need to limit the type system like this. As in Groovy, we would again guarantee that v is at least an Object. But the specific type depends in Groovy on the runtime type, in Grumpy on the flow type. Something Grumpy would for example still not allow is "int i = new Object()".
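
Written out as a small script, with the flow types the checker would assume as comments:

Object v = 1            // declared type Object, flow type Integer
v = v.toString()        // flow type becomes String
println v.toUpperCase() // fine for the flow type String; plain Java rules would reject this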

But so far this flow sensitive type system has not been approved by the community.


Feeling Grumpy?

Grumpy might have been mentioned a few times here and there already, but what is it?

It is a little project for Groovy we are currently working on, and its working title is Grumpy. Grumpy is no final name; we use it really just until we find a final and more serious name.

The goal of Grumpy is to have a static type checker for Groovy, driven by an annotation. This will be optional and not cause any changes to normal Groovy. We see this as part of the Java migration story, in which people may not want the dynamic type system of Groovy everywhere. Grumpy is also not a static compiler; it will be normal Groovy under the hood. I will not exclude a future static compiler based on Grumpy, but that is not the primary purpose of Grumpy. Also, we don't want to compete with Scala here. For my taste their type system is too complex. We may go beyond Java's system though, if we think it makes sense.

Basically there is already a static type checker in Groovy++ (and a static compiler), but integrating that into Groovy just for type checking would mean either integrating huge parallel structures that no one but the Groovy++ people understand, or investing a lot of time to transfer everything over, probably more than it takes to write a new type checker. Thus we decided to write a new type checker and try to involve the community as much as possible at every step.


How will it work?
So far the plan is that you will be able to annotate methods or classes, and an AST transform will then perform type checking on that code. We don't want any new syntax constructs for Grumpy. This means only a subset of Groovy will be accepted by Grumpy.
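
Just to give a rough idea of how usage could look - the @Grumpy annotation name below is purely a placeholder for illustration, since we don't even have a final project name yet:

@Grumpy // placeholder name, not a real annotation
class Calculator {
    int twice(int i) {
        return i * 2 // the AST transform would type check this method at compile time
    }
}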

Can I mix dynamic typed code and Grumpy?
Yes you can. If you use the per-method annotation, you can simply put the dynamic code and the Grumpy code in different methods. So far we have no annotation to turn dynamic typing back on in case you used the class level annotation, but that may follow in the future.

When will it be available?
Grumpy will be part of Groovy 1.9; the next beta will already include a pretty advanced version of Grumpy.

What will Grumpy do about the GDK?
For those that don't know: the GDK is a set of methods we use to enhance standard Java classes. For example we have an "each" method on Collection, to iterate over the collection using a Groovy Closure. Grumpy will support all GDK methods, but to enable type checking inside those Closure blocks, there will have to be much more work.
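
For example:

// the GDK each method on a Collection, iterating with a Groovy Closure
[1, 2, 3].each { println it }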

Much more work?
Take for example "int i=1; [1,2].each {i += it}": here you want to ensure nothing is added to i that is not compatible with int. Since we cannot possibly let the compiler know in code how each of those GDK methods works, we will have to find a way to do that more automatically. The Java type system with generics is not really able to express our needs here. For example, if you iterate a Map using each, there is one variant that takes a Map.Entry and another one that takes key and value. If you just declare everything as Object, you won't gain all that much in terms of type checking. Most probably we will have a second annotation for this kind of thing, in which we store a String with extra type information. The goal here is to let the compiler add this annotation on its own for Groovy code, but for predefined Java code we will of course have to depend on the user doing that, or not having the type information.
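
The two cases mentioned above look like this:

// the checker should ensure nothing incompatible with int is ever added to i
int i = 1
[1, 2].each { i += it }

// iterating a Map: one variant takes a Map.Entry, the other takes key and value
[a: 1, b: 2].each { entry -> println entry.key }
[a: 1, b: 2].each { k, v -> println "$k = $v" }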

Anyway, I am sure Grumpy will be an interesting project. Feel free to suggest names!

Wednesday, September 07, 2011

About primitive optimizations - Come into being

New idea?
The idea behind this kind of optimization is by far not new in Groovy. In the very early days the compiler had two modes of operation, where one did a more or less direct compilation as a static language. But this compiler mode was not paid attention to for years and was finally removed by me. The normal compiler evolved far beyond that one, and the language itself had already gone through a long history of semantic and syntactic changes back then. As some may remember, it took Groovy 5 years to get to a 1.0 version.

Even if we ignore those first failed attempts, my optimizations are still not really new. Many other languages know the concept of slow and fast paths, ways of switching between them and so on. Going to the JVM Language Summit for several years is going to influence you.

So what is the idea?
The idea is to have some kind of guard which decides if we take the fast path or the slow path. Of course we prefer the fast path, but it is not doable if, for example, we want to do 1+1 and the Integer meta class defines a new add method, making the fast path produce a wrong value compared with the slow path.
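
Written as hand-made Groovy instead of generated bytecode, the shape of it is roughly this - the two guard methods are placeholders, the real checks live elsewhere and the real code is emitted directly as bytecode:

import org.codehaus.groovy.runtime.InvokerHelper

// placeholder guards standing in for the real flag checks
boolean integerMetaClassIsOriginal() { true }
boolean categoryActive() { false }

int add(int a, int b) {
    if (integerMetaClassIsOriginal() && !categoryActive()) {
        return a + b // fast path: becomes a plain primitive addition in the generated code
    }
    // slow path: normal dynamic dispatch through the meta class
    return (int) InvokerHelper.invokeMethod(a, "plus", b)
}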

Synchronization is bad
The first big problem was thus to find a way to recognize meta class changes and react to them. This guard has to be as simple as possible. If we compare ourselves with Java here, then every little extra costs you badly. The classic way, as call site caching does it, is far from ideal: to ensure no category is active we have to test - in the end - a volatile int. That may not sound like much, but this volatile int represents a big barrier for inlining and internal compilation by HotSpot. Volatiles are something we have to avoid.

But if our guard uses no volatiles and no kind of synchronization, lock or even fence, then this means meta class changes done in one thread may not be seen in another thread. It is not that they would be invisible; there are normally enough synchronization mechanisms involved during execution to give them a piggyback ride (see piggybacking on synchronization in Java Concurrency in Practice). If the user wants better controlled synchronization, the user has to add synchronization code of his own that enforces such a happens-before relationship... for example waiting with a BarrierLock in the thread that is supposed to see the change and doing the change in the other thread, to then later reactivate the waiting thread. The change will become visible to the other thread, even though we didn't synchronize the guard directly. In most cases though, I dare to say, this is not really needed.

Simple boolean flags
The next problem with checking whether a meta class is unchanged is to actually get the meta class. Since we are talking about, for example, int here, we would have to go the ugly way and request the current meta class from the registry. This again involves a lot of synchronization and many, many method calls on top of that. If our guard is supposed to be as cheap as possible, then this is bad. Maybe even so bad that the guard eats up all the performance gain of the fast path. So I decided on a different way.

I plug a listener into the MetaClassRegistry, which reacts to meta classes being set for Integer and then sets a boolean accordingly. The default of the flag indicates that no custom meta class has been set, and since MetaClassImpl does not allow for modification later on, we are safe in our assumptions for the fast path. The first guard is therefore a boolean flag in DefaultMetaClassInfo and some simple method calls to check this flag. I implemented a logic that allows enabling the flag again after a custom meta class was used. Another way to get custom meta classes is the meta class creation handle. Setting that one will thus also disable all the fast paths. For this I use a different boolean. This boolean then represents: no active category && no meta class creation handle set. This normally means a globally enabled ExpandoMetaClass will not allow the fast path changes to play out.
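
A much simplified sketch of that listener idea - the flag holder class here is made up, the real code lives in DefaultMetaClassInfo and also handles re-enabling, the creation handle and more:

import groovy.lang.GroovySystem
import groovy.lang.MetaClassRegistryChangeEvent
import groovy.lang.MetaClassRegistryChangeEventListener

class IntFastPathFlags {
    // plain, non-volatile flag: true as long as Integer still uses its original meta class
    static boolean origIntegerMetaClass = true
}

GroovySystem.metaClassRegistry.addMetaClassRegistryChangeEventListener(
    { MetaClassRegistryChangeEvent event ->
        if (event.classToUpdate == Integer) {
            IntFastPathFlags.origIntegerMetaClass = false
        }
    } as MetaClassRegistryChangeEventListener
)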

My branch point between slow and fast path for int+int is then guarded by two boolean checks, one for the standard meta class usage in general, the other for Integer specifically.

Piggybacking
For the reader this solution may look easy and simple, but I must confess coming up with it took quite some time on my side. I was facing the volatile problem and looked into fences for a while, before I found that they are nothing for Groovy. Only then I started wondering what may happen if I don't synchronize at all, and stumbled upon piggybacking, which fits my needs well enough.

Now that we know when to take the fast path, it is time to actually implement one. The theory is that int+int in Java is quite fast compared to Integer+Integer, because the JVM has special instructions for operations on primitives. That means we need to use the IADD instruction.

Object operands
People knowing the internals of Groovy may realize at this point that Groovy has primitives in fields and in method signatures, but not for local variables. Even worse, every primitive gets boxed right away. This has partially historical reasons, but in the end it boils down to having less trouble with emitting bytecode. Not only is there a big amount of additional instructions for primitives, some of them also take 2 slots (long and double), making basic operations like swapping the last two operands on the stack a little problematic. Of course, if I want to benefit from the fast operations on primitives, I have to keep them primitive - without losing the ability to handle them as objects if needed.

So I had to rewrite the compiler to trace the operand stack in a way that allows me to see if I need to do boxing or not. From the first tests to actually having that in Groovy 1.8.0, this took a big part of my time. I used the opportunity to refactor the gigantic AsmClassGenerator into many smaller parts and establish some logic allowing everything to be visited twice (one time slow path, one time fast path), with some more abstraction about what to use here and there.

Type extraction
Another problem to be faced was: when do I actually have an int? If I have a local variable or field declared as int, well, then I know I have one. If it is an int constant, I know as well, but beyond that? My first tests also showed another problem: if I check the guards for each expression, then checking the guards consumes too much time.

So the decision was to guard statements, or if possible blocks of statements. For each statement there are then two versions, the fast and the slow one, where each expression in the fast version may be handled as part of the fast path. If I now have i = a+b+c+d+e, I still have only one int guard - assuming i,a,b,c,d,e are all ints.

In this there is yet another part we have to look at: is int+int an int? Only then can we guard using an int and safely do a+b+c. Luckily Groovy does not have type promotion, so there is no danger of a+b giving a long as result. a+b will stay int, even in the overflow case. Minus and multiplication stay in the same group; division does not, because int/int results in a BigDecimal. Other allowed operators are of course the shifts, the binary operations and modulo. The last mathematical operation, the power operator **, is not allowed, since it has type promotion, so I cannot safely assume it works on int.
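
Two small examples of what that means in practice:

// no type promotion: int + int stays int, even when it overflows
int a = Integer.MAX_VALUE
assert a + 1 == Integer.MIN_VALUE

// division leaves the int group: int/int gives a BigDecimal in Groovy
assert (1 / 2) instanceof BigDecimal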

Stupid Fibonacci
A typical test I used, which concentrates on only a few operations, is a Fibonacci function:

int fib(int n){
    if (n<2) return 1
    return fib(n-1)+fib(n-2)
}

But if I only implement n-1 and n-2 as fast path elements, I will not gain much performance. The function is dominated by one compare, 2 minus operations, 1 plus and two method calls. The two minus operations are therefore only a small part of this. To also optimize the plus I need to know the return types of the fib method calls. In ordinary Groovy I don't know the return types.

This led me to want to optimize the method call in this case as well. In general this is a problem not really easily solvable in Groovy, but if I limit the optimization to calls on "this" only, then I have a chance, because then I only need to check the meta class of the current class, and since the current class is a Groovy class I have some freedom. I implemented another flag that indicates that the current class is using the default meta class.

The result was that I could now emit optimized code for fib(n-1)+fib(n-2), guarded by three guards. Testing the result was not encouraging though. I gained maybe a third in performance. Compared to Java this was by far too slow.

Groovy standard compare is slow
By using a profiler I found that a lot of time is actually burned in the compare. The n<2 branches off into a gigantic internal function for all kinds of cases. I imagine HotSpot simply goes on strike for this one. That means I have to emit specialized code for the compare as well. Actually the principle is the same as before. int+int normally results in int.plus(int), and n<2 in n.compareTo(2). Of course I don't want to call the compareTo, just as I didn't want to call the plus. There are special VM instructions for this kind of thing and I should use them. And I did. I also did a small optimization in this code. Since the first statement is now guarded by 2 flags and the second statement by 3, of which 2 are identical, I made all 3 flags be checked for both statements. In the AST they are in a BlockStatement, so I kind of guarded that one instead of the single statements. That the BlockStatement has no representation in the bytecode should not bother us; in the end it is only some checks and some jumps. The shortened bytecode then looks like this:
  public fib(I)I
L0
INVOKESTATIC test.$getCallSiteArray ()[Lorg/codehaus/groovy/runtime/callsite/CallSite;
ASTORE 2
INVOKESTATIC org/.../BytecodeInterface8.isOrigInt ()Z
IFEQ L1
GETSTATIC test.__$stMC : Z
IFNE L1
INVOKESTATIC org/.../BytecodeInterface8.disabledStandardMetaClass ()Z
IFNE L1
GOTO L2
L1
/* slow path ... */
L2
/* fast path ... */
ILOAD 1
LDC 2
IF_ICMPGE L5
ICONST_1
GOTO L6
L5
ICONST_0
L6
IFEQ L7
LDC 1
IRETURN
GOTO L7
L7
ALOAD 0
ILOAD 1
LDC 1
ISUB
INVOKEVIRTUAL test.fib (I)I
ALOAD 0
ILOAD 1
LDC 2
ISUB
INVOKEVIRTUAL test.fib (I)I
IADD
IRETURN

Quite visible are the BytecodeInterface8 calls that check for categories and the original int meta class, as well as __$stMC, the guard for the default meta class. Compare this to the code normal javac produces:

   public fib(I)I
L0
ILOAD 1
ICONST_2
IF_ICMPGE L1
ICONST_1
IRETURN
L1
ALOAD 0
ILOAD 1
ICONST_1
ISUB
INVOKEVIRTUAL X.fib (I)I
ALOAD 0
ILOAD 1
ICONST_2
ISUB
INVOKEVIRTUAL X.fib (I)I
IADD
IRETURN
We can see that those two versions are not all that different. The compare looks a bit different, some LDC should be ICONST, but all in all quite similar. The result for fib(38) on my machine is 940ms in the Groovy version and 350ms in the Java version. In Groovy 1.7 we would have had something over 14s and most probably a stack overflow. And I think that is a pretty good result. If I had only one guard, I would be almost at Java speed.

Invokedynamic and others
I mentioned earlier that this technique is not really new, but I have actually never seen it being used in this way. Normally you have some kind of interpreter representing the slow path. At runtime you find the parameters for the path and then emit a specialized fast path for those. You have to check the parameters afterwards of course, but it is similar to what I do with the guards. Only in the interpreter you are working with actual runtime information and you can go far beyond what I did. I optimized for example only method calls on "this". I have this limitation because I know of no good way to check the meta class of the receiver in general. In the interpreter this is most probably not much of a problem, and you get that call optimized as well. But Groovy has no interpreter that could be used for this, so that was no option. And while the approach here avoids the generation of classes at runtime, it also bloats the bytecode quite a bit. This means methods can now contain less Groovy code, a drawback I have not yet encountered in real life, but it probably is one.

Then of course a word on invokedynamic. This work was done for JVMs of the pre-Java 7 era. Java 5 and 6 will be around for at least another 2 years I imagine, and thus we wanted to have something for those cases. Also, I thought back then that the problem with the guards is one for invokedynamic as well. Now I know that in the cases I would normally guard, I will simply invalidate all call sites on demand and be done with it. What stays is the static code analysis for the types. If I know I will get only ints, then I don't have to check for other types. If I have for example a+b and know nothing more about a and b, then they might be Integer in one case and String in another. If I make a call site target for the Integer case, I will have to check a and b for each invocation to ensure they really are Integers. If it then turns out they are suddenly Strings, I will have to make a new call site target and check for Strings.

Saturday, March 19, 2011

Conferences I am going to this year

I decided I could at least update my blog with the conferences I am planning to attend this year.

So first is JAX 2011, May 2 till May 6 2011 in Mainz. JAX is a pretty big German conference about Java and other things, including Groovy. I will speak on May 3 at 11:45.

Following closely is GR8Conf Europe 2011, May 18 to May 19 2011 in Copenhagen. This conference is surely not as big as JAX, but in exchange it is about all the great (gr+8) technologies around Groovy. I am especially looking forward to the Hackergarten there again. I will speak there on May 19 at 12:10. It is my third time at this conference.

The next and last one I am planning to attend is the JVM Language Summit. Well, at the moment the link still leads to the 2010 page, and a final date is not up yet. I strongly hope there will be another one and that I can visit it for the third... or was it fourth? time. For me as a language implementer on the JVM, this conference has always been extremely interesting. Also, being able to bug the different JVM engineers about the dark corners of the JVM is just plain fun.

Besides that I have not planned anything more yet, but who knows ;)