This is a continuation of highlights from ICFP. I won’t cover all days, and it’s not in chronological order. On Sunday, I attended the HOPE workshop, Higher-Order Programming with Effects; I would very much have liked to attend LFMTP and WGP as well, but you can only be in one place… Here are a few highlights of the talks.

Handlers in Action, by Ohad Kammar

(Joint work with Sam Lindley and Nicolas Oury)

Ohad presented their work on the “effect handlers” approach to programming with effects, introduced by Andrej Bauer and Matija Pretnar’s Eff language, and also related to Conor McBride’s Frank experiment.

A rough idea of an “effect handler”, for the working OCamler, is that it looks like a try..with block, but instead of handling only exceptions in the “with” part, it handles all sorts of effects that may be “raised” by the inner code: asking to read a reference, asking for a random number, etc. The handler’s task is to describe what each effect should actually do, given a (delimited) continuation that allows the program to resume at the point where the effect was raised.

The programmer is therefore free to define their own effects locally, override the behavior of surrounding effects, and decide in which order effects are performed by layering different handlers.
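To make this concrete, here is a small sketch in today’s OCaml, whose Effect module (added in OCaml 5, long after this talk) implements essentially this idea; the Ask effect, the client and the handler are my own toy example, not code from the talk:

    (* Declare a new effect: asking the context for an integer. *)
    open Effect
    open Effect.Deep

    type _ Effect.t += Ask : int Effect.t

    (* The inner code “raises” the effect without knowing what it does. *)
    let client () = perform Ask + perform Ask

    (* The handler decides what Ask means: here, answer 21 and resume
       the (delimited) continuation [k] at the point of the perform. *)
    let result =
      match_with client ()
        { retc = (fun v -> v);
          exnc = raise;
          effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Ask -> Some (fun (k : (a, _) continuation) -> continue k 21)
            | _ -> None) }

    (* result is 42 *)

Layering a second handler around this one could override what Ask means, just as a nested try..with can intercept an exception first.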

Their talk gave an operational semantics for this effect-handler system. An operational semantics defines the execution of a program as a sequence of atomic reduction steps; having one helps a lot when thinking about the language and when writing programs.
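To give the flavor, here are the two key rules in the usual presentation of (deep) handlers, in my own notation rather than the paper’s: a handler H has a return clause (return x ↦ N_ret) and one clause per operation (op x k ↦ N_op), and E is an evaluation context with no inner handler for op. The first rule fires when the handled expression has finished; the second captures the continuation up to the handler and passes it to the handler clause as k:

    \[
    \begin{array}{lcl}
      \mathtt{handle}\ v\ \mathtt{with}\ H
        & \longrightarrow & N_{\mathit{ret}}[v/x] \\[0.3em]
      \mathtt{handle}\ E[\mathtt{perform}\ \mathit{op}\ v]\ \mathtt{with}\ H
        & \longrightarrow & N_{\mathit{op}}[v/x,\ (\lambda y.\ \mathtt{handle}\ E[y]\ \mathtt{with}\ H)/k]
    \end{array}
    \]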

This approach competes with monads as the main tool for writing effectful programs in an otherwise effect-free base language. They were careful to note that there is no consensus yet on which way is more convenient; they have developed libraries for writing code in an effect-handler style in several languages (Haskell, OCaml, SML, Racket…) and encouraged the audience to try them.

Unfortunately, I haven’t found the slides on the web yet (they were quite good), but you can have a look at this 12-page draft covering this research.

Koka, by Daan Leijen

I already knew of Daan Leijen, as he has worked on a lot of different topics that are highly related to what we’ve been doing in Gallium: extensible records with row types, MLF-inspired type inference, and more recently “simple enough” effect systems. I was eager to see his presentation (and to meet him), and it did not disappoint.

Daan’s goal is to define an effect system that is not too complex to use. He aims for effect inference that is as transparent as possible: the user should be able to program as in any ML, while the types still track the possibility of raising exceptions, accessing mutable state, or even not terminating.

After a few tries, he settled on a system based on row polymorphism (also the theoretical underpinning of the OCaml object system), and he showed examples that were both expressive and easy to infer.
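For readers who haven’t met row polymorphism, OCaml’s object types give its flavor: a row variable stands for “whatever other fields there may be”, and Koka applies the same idea to rows of effects instead of rows of methods. A minimal illustration (my example, not Daan’s):

    (* [get_width] accepts any object with at least a [width] method;
       the [..] is a row variable standing for the other, unknown
       methods.  Koka uses such rows to type effects: a function can be
       polymorphic in “whatever other effects the context allows”. *)
    let get_width (o : < width : int; .. >) = o#width

    let w1 = get_width (object method width = 3 end)
    let w2 = get_width (object method width = 5 method height = 2 end)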

Daan is extremely humble, and he kept presenting his work as simple, almost naive. He actually knows how to use advanced type-system ideas quite well: he has a very nice technique, relying on qualified types (the theoretical underpinning of type classes), to prevent non-termination from going unnoticed when mixing higher-order functions and mutable state. His system is also able to check that local uses of mutation are observationally effect-free, à la ST monad, but without any explicit annotation from the programmer.
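The pattern covered is a familiar one: a function that allocates a reference, mutates it, and never lets it escape. In OCaml the types say nothing about this; Koka infers that such a function is pure, with no annotation. A small example of the pattern (mine, not from the talk):

    (* Internally stateful, observationally pure: the reference [acc]
       never escapes, so no caller can distinguish this function from a
       mutation-free one.  This is what Koka detects automatically. *)
    let sum (xs : int list) : int =
      let acc = ref 0 in
      List.iter (fun x -> acc := !acc + x) xs;
      !acc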

He has a website with compilation to JavaScript that lets you experiment live. Try it!

Logical Relations for Fine-Grained Concurrency, by Aaron Turon

(Joint work with Jacob Thamsborg, Amal Ahmed, Lars Birkedal and Derek Dreyer)

I had been impressed at OPLSS by the excellent lectures Amal Ahmed gave on logical relations, and I was hoping to get more of this goodness. This talk, however, explicitly did not focus on that aspect of the work; it rather discussed the concurrency problems they studied and the techniques they developed.

The rough idea of the work is to specify a complex implementation of a concurrent algorithm (using fine-grained locking, compare-and-swap or what have you) not by giving a (possibly brittle) logical specification of its behavior, but by comparing it to another, simpler implementation. If you prove that the complex, efficient implementation simulates the simple one (which may, for example, just take a single lock on the whole structure), you get a simple idea of what it does.
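As a toy illustration (mine, not an example from the paper), here are two OCaml counters, using the Atomic module of recent OCaml versions: the coarse-grained one serves as the specification, and the refinement claim would be that no client can tell the fine-grained one apart from it:

    (* Reference implementation: one big lock around the whole structure. *)
    module Coarse = struct
      let lock = Mutex.create ()
      let count = ref 0
      let increment () =
        Mutex.lock lock;
        count := !count + 1;
        Mutex.unlock lock
    end

    (* Fine-grained implementation: a retry loop around compare-and-swap.
       Proving that [Fine] simulates [Coarse] gives a simple account of
       what the optimized code does. *)
    module Fine = struct
      let count = Atomic.make 0
      let rec increment () =
        let v = Atomic.get count in
        if not (Atomic.compare_and_set count v (v + 1)) then increment ()
    end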

This idea of specifying programs by other, simpler programs (reference implementations) pops up in a lot of different places, and I like it very much. One advantage of the approach is that such a specification can easily be “executed”, to check on a concrete example that it really specifies what you meant. With formally verified software, the bugs come not from the code itself, but from a specification that may not say what you think it says; a readable and testable specification is therefore the most important ingredient for trusting the software.

Aaron was an excellent speaker, and he clearly explained the issues with the fine-grained algorithms they were working with. He presented some of the proof techniques they used to check algorithms from the literature; some of those algorithms go through quite simply, while others require all of their different proof techniques at the same time.

Again, I didn’t find the slides online, but there is a 12-page paper giving a more formal view of the topic.