## Formally verifying the complexity of OCaml programs with CFML -- part 1

- July 18, 2015
- Last updated on 2015/08/03

In a recent paper, Arthur Charguéraud and François Pottier present a formal proof of an OCaml implementation of Tarjan's union-find algorithm. The proof covers two aspects: functional correctness ("the algorithm is correct"), but also asymptotic complexity. For example, one of the results of the paper is that the `link` function runs in \(O(α(n))\) elementary steps, \(α\) being the inverse of the Ackermann function.

Notably, the complexity results of the paper are not stated using the "big-O" notation commonly used in asymptotic complexity proofs: the various constants are explicit and manipulated throughout the proof.

In these blog posts (current, interlude and final), I'll describe a tentative extension of CFML, the Coq library François and Arthur used for their proof. Its goal is to enable asymptotic reasoning (big-O notation and the like) to be used when proving the complexity of programs.

### CFML: verify OCaml programs in Coq

CFML is a Coq library for proving properties of OCaml programs. Its typical usage is as follows: (1) write some OCaml code; (2) use CFML's generator to parse the OCaml code and produce a "characteristic formula", expressed as a Coq axiom, that describes the semantics of the program; (3) import this formula and write a specification in Coq for the program; (4) prove the specification using CFML tactics. The specifications are expressed using separation logic, a logical framework convenient for talking about mutable data.

Let's illustrate this by proving the correctness of the `incr` function (a function that increments a reference).

OCaml code:

```
let incr r =
  r := !r + 1
```

After calling CFML's generator, step (2) produces a Coq source file containing the characteristic formula for `incr`. This formula has the same shape and structure as the OCaml program, which makes it easier to read the goals produced by the CFML tactics.

```
Parameter incr_cf :
  tag tag_top_fun Label_default
  Body incr r =>
    (LetApp _x4 := ml_get r in
     App ml_set r (_x4 + 1);)
```

The user doesn't need to read the produced file nor the characteristic formula, though. We'll just need to `Require` it at the beginning of our proof.

Step (3) is writing a specification: a pair of a precondition, which must hold for the function to run, and a postcondition, describing the returned value and the state of memory after executing the function. In our case, before running `incr`, `r` must be a reference containing some integer value `i`; after running `incr`, `r` must contain `i+1`.

```
Lemma incr_spec :
  Spec incr r |R>> forall (i: int),
    R (r ~~> i) (# r ~~> (i+1)).
```

Some hairy notations are involved, but basically, `(r ~~> i)` is the precondition, and `(# r ~~> (i+1))` the postcondition. The `~~>` notation denotes a "points-to" relation: here, `r` is a pointer to a memory cell containing the value `i`. The `#` means that `incr` returns `unit`. Let us prove this specification interactively. To prove a goal of the form `Spec foo ...`, we first use the `xcf` tactic to introduce the characteristic formula of `foo`.

```
Lemma incr_spec :
  Spec incr r |R>> forall (i: int),
    R (r ~~> i) (# r ~~> (i+1)).
Proof.
  xcf. intros.
```

The resulting goal is of the form `formula pre post`, where `formula` is a part of the characteristic formula, and `pre` and `post` are the precondition and postcondition for the corresponding piece of code.

```
r : loc
i : int
============================
(LetApp _x4 := ml_get r in
 App ml_set r (_x4 + 1);) (r ~~> i \* \[]) (# r ~~> (i + 1))
```

We advance the proof step by step, traversing the shape of the program, by applying tactics that match the head constructor of `formula`: here, it is a `LetApp`, so we use the `xapps` tactic. If the goal started with an `If`, we would have used `xif`, and so on. We need another `xapp` after that, for the `App` constructor. Finally, what remains to be proven is a "heap implication", stating that the final memory matches the postcondition.

```
============================
# r ~~> (i + 1) \* \[] ===> # r ~~> (i + 1) \* Hexists H', H'
```

The `hsimpl` tactic does the job.

```
Lemma incr_spec :
  Spec incr r |R>> forall (i: int),
    R (r ~~> i) (# r ~~> (i+1)).
Proof.
  xcf. intros.
  xapps. xapp. hsimpl.
Qed.
```

### CFML + time credits: prove algorithmic complexity

We rely on (and admit) the following property of the OCaml compiler: if one ignores the cost of garbage collection, counting the number of function calls and for/while loop iterations performed by the source program is an accurate measure, up to a constant factor, of the number of machine instructions executed by the compiled program.

We thus consider a new kind of heap resource (something that can be put in pre/postconditions): time credits. The program is now required to consume a time credit at each step of computation.

If we produce the characteristic formula for `incr` with the `-credits` option, its specification and proof become:

```
Lemma incr_spec :
  Spec incr r |R>> forall (i: int),
    R (\$ 1 \* r ~~> i) (# r ~~> (i+1)).
Proof.
  xcf. intros.
  xpay.
  xapps. xapp. hsimpl.
Qed.
```

What changed? We now have to provide one credit (`\$ 1`) in the precondition of `incr`, as it performs one step of computation. The credit does not appear in the postcondition: it is consumed by the function. In the proof, we use the `xpay` tactic to justify that we are indeed able to pay for the computation step.

The complexity of a function is then represented as the number of credits contained in its precondition: that is, how much you have to pay for it to run. Interestingly, credits can also be stored in the heap for later consumption: this enables amortization, a standard reasoning technique in the literature (e.g. in Introduction to Algorithms by Cormen et al.).
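To see how credit counts compose across calls, consider a hypothetical `incr_twice` function, `let incr_twice r = incr r; incr r`. This function and its specification are my own illustration, not part of the paper; under the cost model above, a plausible spec would require three credits: one for the call to `incr_twice` itself, plus one for each call to `incr`.

```
(* Hypothetical: one credit is paid by incr_twice itself (xpay),
   and one credit is passed along to each of the two calls to incr. *)
Lemma incr_twice_spec :
  Spec incr_twice r |R>> forall (i: int),
    R (\$ 3 \* r ~~> i) (# r ~~> (i+2)).
```

Note how the credits of the callee appear in the caller's precondition: this is what makes cost accounting compositional.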

### Asymptotic reasoning on time credits

The proof of Union-Find states and tracks explicitly how many credits are needed at every step of the proof. For example, the precondition of `find` requires "\(α(N) + 2\) credits". We would like to be able to write "\(O(α(N))\) credits" instead: first to match usual informal algorithmic reasoning, but also to enable asymptotic reasoning, that is, proofs of properties true "for a big enough \(N\)".

Reasoning with big-Os is also more modular: the exact number of credits an auxiliary function needs may change without impacting proofs using it, as long as the asymptotic bound stays the same.

The prototype extension of CFML I developed tries to address this. Its features are as follows:

- Given an explicit cost function (a "number of credits"), allow the user to prove an asymptotic "big-O" bound for it. The cost function may itself be defined in terms of other functions that are only bounded asymptotically by big-Os.
- Allow various manipulations on big-O bounds: composition, parameter transformation, etc.

For the moment, this does not include proofs of Swiss-army-knife theorems like the master theorem; but such theorems can now at least be stated, using the new asymptotic notions.

This may seem to be only a matter of writing down the right definitions and lemmas. That is partly true, but it turns out that the "big-O" notation is often used in a quite informal way, so formalizing it requires extra attention on some aspects.

For example, the "\(O\)" notation implicitly requires some notion of "going towards infinity". When manipulating cost functions with multiple parameters (e.g., "\(f(m,n)\) is \(O(g(m,n))\)"), which notion of "going to infinity" to choose is not obvious, and not all choices are equivalent: it depends on later use.

In our example, we could require both \(m\) and \(n\) to go to infinity. But maybe we will later want to fix \(m\), and still have \(f(m,n)\) be a \(O(g(m,n))\); in this case we need another notion of "going to infinity": \(m \times n + n\) is indeed a \(O(m \times n)\) when both \(m\) and \(n\) grow to infinity, but this is clearly not the case when fixing \(m = 0\).

### New notions introduced in CFML + asymptotic credits

To formalize these notions, and be able to talk more clearly about this little dilemma, we reuse the notion of *filter* from the literature.

The generic notion of filter describes a way of "going to infinity" in a set (\(\mathbb{Z}, \mathbb{Z}^2, \ldots\)). On \(\mathbb{Z}\), there is one obvious filter that works in every situation, so the benefits of the method are not apparent there. However, as illustrated above, on \(\mathbb{Z}^2\) (i.e. for functions with two parameters), which notion of "going to infinity" should be used is not clear, and moreover depends on the later usage of the specification. Filters allow us to describe this situation formally: one can define various filters on \(\mathbb{Z}^2\), and state clearly which one is used in a given \(O()\).
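As a sketch of what this looks like, one could define two distinct filters on \(\mathbb{Z}^2\), one for each reading discussed above. The definitions and names below are my own illustration, not CFML's:

```
(* "Both m and n grow": P must hold on some product neighborhood. *)
Definition product_filter (P: Z * Z -> Prop) : Prop :=
  exists m0 n0, forall m n, m0 <= m -> n0 <= n -> P (m, n).

(* "m stays fixed at a given value, n grows": one filter per m0. *)
Definition fixed_fst (m0: Z) (P: Z * Z -> Prop) : Prop :=
  exists n0, forall n, n0 <= n -> P (m0, n).
```

With these, \(m \times n + n\) is a \(O(m \times n)\) with respect to `product_filter`, but not with respect to `fixed_fst 0`.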

Below are the new predicates introduced in CFML, starting with `filter`. They are all needed to understand what is going on during the proofs; however, in simple scripts, most of them need not be manipulated directly by the user.

```
Definition filter A := (A -> Prop) -> Prop.
Class Filter {A} (ultimately: filter A).
```

A filter is a set of sets (a set of elements of type `A` being of type `A -> Prop`) that represents a "set of neighborhoods"; in our case, neighborhoods of infinity. The `Filter` class bundles additional properties that must be satisfied, e.g. stability under intersection. This definition allows us to write `ultimately P`, where `ultimately` is a `filter`, meaning that `P` is true at some point "when going towards infinity".

```
Definition dominated (ultimately: filter A) (f g: A -> B).
```

The textbook definition of "\(f\) is \(O(g)\)", using filters: more or less, `exists k, ultimately (fun x => |f x| ≤ k |g x|)`.

```
Definition monotonic (leA: A -> A -> Prop) (leB: B -> B -> Prop) (f: A -> B).
Definition monotonic_after leA leB (f: A -> B) (a0: A).
```

Monotonicity of `f` w.r.t. the relation `leA` on the domain and `leB` on the codomain. `monotonic_after` means "monotonic after `a0`"; it is usually combined with a filter to obtain a notion of asymptotic monotonicity.

```
Definition idominated (ultimately: filter A) leA (f g: A -> B).
```

A variant of `dominated` that we will use in practice: it allows more lemmas to be automatic (without side conditions to prove). `idominated _ _ f g` basically means that either both `f` and `g` are \(O(1)\), or `dominated _ f g` holds (\(f\) is \(O(g)\)) and `g` is asymptotically monotonic.

```
Definition SpecO (ultimately: filter A) leA (g: A -> Z)
(spec: (A -> Z) -> Prop) :=
exists (f: A -> Z),
(forall x, 0 <= f x) /\
monotonic _ _ f /\
idominated _ _ f g /\
spec f.
```

A wrapper useful for stating specifications using big-Os: it basically means "there exists some function `f`, which is a \(O(g)\), such that `spec f` is true" (plus some necessary additional facts). Usually, `spec` is of the form `Spec .. |R>> ...`, i.e. a standard CFML specification.

As an appetizer for the second part, I'll leave you with two specifications that use big-Os. The first is for our `incr` function; the second is for a recursive `mktree` function that builds a complete tree of depth \(n\) in \(O(2^n)\) time.

```
Lemma incr_spec :
  SpecO1 (fun F =>
    Spec incr r |R>> forall (i: int),
      R (\$ F tt \* r ~~> i) (# r ~~> (i+1))).
```

```
Lemma mktree_spec :
  SpecO (fun n => 2 ^ n) (fun F =>
    Spec mktree (depth: int) (x: a) |R>>
      0 <= depth ->
      R (\$ F depth) (fun (t: tree a) => \[])).
```