## Formally verifying the complexity of OCaml programs with CFML -- part 3

- August 3, 2015

In two previous blog posts (part 1 and interlude), I presented a set of notions to prove asymptotic complexity of OCaml programs, then an OCaml implementation of Okasaki’s “binary random access lists”. In this last blog post, I combine the two ideas, and present a formalization of binary random access lists, using CFML.

### A too simple example

As an appetizer, let’s try to prove the “with big-O” specification for the `incr` function from the first blog post:

```
Lemma incr_spec :
  SpecO1 (fun F =>
    Spec incr r |R>> forall (i: int),
      R (\$ F tt \* r ~~> i) (# r ~~> (i+1))).
```

The goal starts with `SpecO1`, which is a specialized form of `SpecO` for \(O(1)\) cost functions. Just like we used the `xcf` tactic for goals starting with `Spec`, we use here the `xcfO` tactic. It takes an argument, though: the expression of the cost function. Note that the domain of a \(O(1)\) cost function is `unit`: its value does not depend on any parameter of the program.

```
Proof.
xcfO (fun (_:unit) => 1).
```

This turns our goal into:

```
============================
Spec incr r |R>> forall (i: int),
R (\$ 1 \* r ~~> i) (# r ~~> (i+1)).
```

Oh well. This example is quite trivial, so the `xcfO` tactic automatically proved all the \(O\)-related subgoals, giving us the same spec as before, with a precise credit count.

### A more interesting case study: binary random access lists

The OCaml implementation of binary random access lists I presented earlier has been proved correct in CFML, with big-O bounds, and is a more interesting example. In the rest of this post, I’ll present some key points of the proof, and explain how they use the new definitions, lemmas and tactics of our extension of CFML.

The complete Coq script (quite raw at the moment) for the proof can be found here.

#### Invariants of the structure

As mentioned in the previous post, the OCaml implementation carries a number of implicit invariants; in the Coq formalization, we are going to express these invariants explicitly.

Knowing that a random access list structure satisfies these invariants is key for the complexity proof: it gives us information about its size, and the size of its subtrees. Then, to know that our structures actually satisfy these invariants, we need to prove functional correctness of the OCaml code, i.e. prove that the functions do not break the invariants of the structure.

Consequently, our Coq proof is twofold: it proves both functional correctness, and algorithmic complexity.

##### Predicates

CFML automatically generates the Coq counterparts of the OCaml datatypes, `tree a` and `rlist a`. We start the proof by defining three predicates, `btree`, `inv` and `Rlist`, which make the invariants of the structure explicit.

First, a `btree` predicate: `btree p t L` means that `t` is a complete (binary) tree of depth `p` which contains the sequence of elements in `L`.

```
Inductive btree : int -> tree a -> list a -> Prop :=
| btree_nil : forall x,
    btree 0 (Leaf x) (x::nil)
| btree_cons : forall p p' n t1 t2 L1 L2 L',
    btree p t1 L1 ->
    btree p t2 L2 ->
    p' =' p+1 ->
    n =' 2^p' ->
    L' =' L1 ++ L2 ->
    btree p' (Node n t1 t2) L'.
```

Then, an `inv` predicate: the invariant for the whole structure. `inv p ts L` means that `ts` is a `rlist` of complete trees of increasing depth, starting with depth `p`; `L` is the sequence of elements represented by `ts`. `ts` being a well-formed binary random access list corresponds to the case where `p` is equal to `0`. It is useful to consider the cases where `p` is non-zero, though: reasoning by induction on `ts` leads to such cases.

```
Inductive inv : int -> rlist a -> list a -> Prop :=
| inv_nil : forall p,
    p >= 0 ->
    inv p nil nil
| inv_cons : forall p (t: tree a) ts d L L' T,
    inv (p+1) ts L ->
    L' <> nil ->
    p >= 0 ->
    (match d with
     | Zero => L = L'
     | One t => btree p t T /\ L' = T ++ L
     end) ->
    inv p (d :: ts) L'.
```

Finally, the `Rlist` predicate corresponds to the `p = 0` case: it describes a complete, well-formed binary random access list.

```
Definition Rlist (s: rlist a) (L: list a) := inv 0 s L.
```

##### Bounds

Given structures satisfying these invariants, we can deduce additional properties, in particular:

```
Lemma length_correct : forall t p L,
  btree p t L -> length L = 2^p.
```

```
Lemma ts_bound_log : forall ts p L,
  inv p ts L -> length ts <= Z.log2 (2 * (length L) + 1).
```

These lemmas will be key for proving our \(O(\log)\) complexity bounds, and in fact constitute the only mathematical analysis needed for this library.
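Both facts are easy to test on a small executable model. The following OCaml sketch is mine (hypothetical names; the `tree` type mirrors the one from the earlier post): it builds complete trees to check the \(2^p\) length, and compares digit counts against a `Z.log2`-style bound.

```ocaml
(* Hypothetical executable check of the two lemmas above; the tree
   type mirrors the one from the earlier post. *)
type 'a tree = Leaf of 'a | Node of int * 'a tree * 'a tree

(* Build a complete tree of depth p, numbering leaves from x on;
   returns the tree and the next unused number. *)
let rec complete p x =
  if p = 0 then (Leaf x, x + 1)
  else
    let t1, x1 = complete (p - 1) x in
    let t2, x2 = complete (p - 1) x1 in
    (Node (1 lsl p, t1, t2), x2)

(* The sequence of elements stored in a tree, left to right. *)
let rec elems = function
  | Leaf x -> [x]
  | Node (_, t1, t2) -> elems t1 @ elems t2

(* Floor of the base-2 logarithm, like Coq's Z.log2. *)
let log2 n =
  let rec go n acc = if n <= 1 then acc else go (n / 2) (acc + 1) in
  go n 0

(* length_correct: a complete tree of depth p stores 2^p elements. *)
let length_correct_holds p =
  let t, _ = complete p 0 in
  List.length (elems t) = 1 lsl p

(* ts_bound_log, numerically: the number of binary digits of n (the
   digit count of a dense rlist storing n elements) is bounded by
   log2 (2 * n + 1). *)
let digits n =
  let rec go n acc = if n = 0 then acc else go (n / 2) (acc + 1) in
  go n 0

let bound_holds n = digits n <= log2 (2 * n + 1)
```

For instance, `length_correct_holds 3` checks that a depth-3 tree holds `8` elements, and `bound_holds` can be checked for any range of sizes.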

#### `cons_tree`: a first proof

Let us jump directly to the proof of the (internal) `cons_tree` function.

```
let rec cons_tree (t: 'a tree) = function
  | [] -> [One t]
  | Zero :: ts -> One t :: ts
  | One t' :: ts -> Zero :: cons_tree (link t t') ts
and link t1 t2 = Node (size t1 + size t2, t1, t2)
```

`cons_tree t ts` adds a new tree `t` to the `rlist` `ts`. It may recursively walk through the list, calling `link` (the process is very similar to incrementing an integer represented as a list of bits).
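The analogy can be made concrete. Here is a hypothetical sketch (mine, not part of the library) of that increment on little-endian lists of bits; its recursion has exactly the shape of `cons_tree`, with a set bit playing the role of a `One` digit and the carry playing the role of `link`:

```ocaml
(* Hypothetical counterpart of cons_tree on binary numbers: increment
   an integer represented as a little-endian list of bits. *)
let rec incr_bits = function
  | [] -> [1]                        (* like cons_tree on []       *)
  | 0 :: bs -> 1 :: bs               (* like the Zero :: ts case   *)
  | _ :: bs -> 0 :: incr_bits bs     (* like the One t' :: ts case *)
```

For instance, `incr_bits [1; 1; 0]` (the number three) evaluates to `[0; 0; 1]` (four): the carry propagates through the initial run of set bits, just as `cons_tree` links through the initial run of `One` digits.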

As `link` runs in constant time, `cons_tree` performs \(O(|ts|)\) operations. Moreover, we showed earlier that \(|ts| = O(\log(|L|))\), where \(L\) is the list of elements contained in `ts`. Therefore, `cons_tree` runs “in \(O(\log(n))\)” (we eventually want to express the complexities in terms of the number of elements in the structure, here “\(n\)”).

Our formal proof follows this two-step informal reasoning: first, we prove a \(O(|ts|)\) complexity bound, reasoning by induction on `ts` to follow the flow of the OCaml program; then, we use our `ts_bound_log` lemma to deduce a logarithmic bound depending on the number of elements stored in `ts`.

##### First step: an auxiliary spec

We therefore prove an auxiliary specification, as our first step. Let’s walk through the proof.

```
Lemma cons_tree_spec_aux :
  SpecO (fun n => n) (fun F =>
    Spec cons_tree (t: tree a) (ts: rlist a) |R>>
      forall p T L, btree p t T -> inv p ts L ->
      R (\$ F (length ts)) (fun ts' => \[inv p ts' (T++L)])).
```
```

To prove a `SpecO` goal, one must start by providing an explicit cost function. In this case however, we do not provide one right away:

- As `cons_tree` calls the `link` function, its cost function depends on `link`’s: we need to unpack `link`’s spec in order to access its (abstract) cost function;
- The domain of our cost function (and of our bounding function `fun n => n`) is `Z`. However, we are only interested in non-negative values, as is `length ts`. We can apply a lemma allowing us to prove our spec for values greater than some bound, with a different cost function that equals ours on this subdomain (and is basically equal to zero elsewhere).

```
Proof.
destruct link_spec as (link_cost & link_cost_nonneg & ? & ?).
applys @SpecO_of_SpecO_after 0.
specialize (link_cost_nonneg tt). (* Help the automated tactics. *)
```

The cost function can now be provided, using `xcfO`. As things stand, in order to “guess” the exact expression of the cost function, the usual method is to look at the OCaml program, and remember that a function consumes a credit each time it is called. Here, we give the exact number of credits needed, but we could give more, as long as the asymptotic bound still holds.

```
xcfO (fun n => 1 + (1 + (link_cost tt)) * n).
```

Our cost function is still relatively simple, so the additional goals (monotonicity, domination, …) are automatically proven by `xcfO`.

The last quirk is due to `SpecO_of_SpecO_after`. The goal uses an abstract cost function `F'`, and we are given a proof that `forall x, 0 <= x -> F' x = 1 + (1 + (link_cost tt)) * x`. In practice, we just add this proof to our context, and rewrite with it when needed.

```
intros F' FeqF'.
```

The rest of the proof uses standard CFML tactics, plus:

- `xpay`, when presented with a `Pay; ...` goal;
- `csimpl`, when presented with a heap implication involving credits: it turns the goal into an (in)equality between the quantities of credits.

```
xinduction (fun (t:tree a) (ts:rlist a) => LibList.length ts).
xcf. intros ? ts. introv IH Rt Rts. rewrites~ FeqF'.
inverts Rts.
- xpay. csimpl. xgo; hsimpl; constructors~.
- { xpay. csimpl. simpl_nonneg~.
xmatch.
- xret; hsimpl; constructors~; subst; splits~.
- unpack; subst. xapps~.
{ csimpl. rew_length. math_nia. }
intros. xapps~.
{ rewrites~ FeqF'. csimpl; rew_length; math_nia. }
intros. xret.
{ hsimpl. constructors~. rew_list~. } }
Qed.
```

##### Second step: the main specification

The main specification can then use a cost function in \(O(log(|L|))\), \(L\) being the list of elements in the structure.

```
Lemma cons_tree_spec :
  SpecO Z.log2 (fun F =>
    Spec cons_tree (t: tree a) (ts: rlist a) |R>>
      forall p T L, btree p t T -> inv p ts L ->
      R (\$ F (length L)) (fun ts' => \[inv p ts' (T++L)])).
```
```

The proof is simple: we first reuse the cost function of the previous lemma `cons_tree_spec_aux`, feeding it with a sufficient number of credits, as indicated by the `ts_bound_log` lemma (“\(|ts| \leq \log(2 \times |L| + 1)\)”).

```
Proof.
destruct cons_tree_spec_aux
as (cons_tree_cost & cost_pos & cost_mon & cost_dom & cons_tree_spec).
xcfO (fun n => cons_tree_cost (Z.log2 (2 * n + 1))).
```

This time, we have to prove by hand some additional goals produced by `xcfO`.

Monotonicity:

`- applys~ @monotonic_comp. monotonic_Z_auto~.`

We first apply `monotonic_comp`: our cost function is monotonic as the composition of two monotonic functions. `applys~` includes a bit of automation, so the monotonicity of `cons_tree_cost` (present in the context) is automatically used. It remains to prove that `fun n => Z.log2 (2 * n + 1)` is monotonic: the `monotonic_Z_auto~` tactic is able to prove it automatically (and more generally solves monotonicity goals for most simple functions composed of `+`, `*`, `Z.log` and `Z.pow`).

Domination:

`- applys @idominated_transitive. applys~ @idominated_comp cost_dom. monotonic_Z_auto. monotonic_Z_auto~. simpl. idominated_Z_auto~.`

Our initial goal is `idominated _ _ (fun n => cons_tree_cost (Z.log2 (2 * n + 1))) Z.log2`. We cannot directly apply a composition lemma; however, we know that `cons_tree_cost` is a \(O(n)\): we first invoke transitivity of `idominated`, then apply a composition lemma. The remaining goals are proved automatically, either by `monotonic_Z_auto`, which handles simple monotonicity goals, or by `idominated_Z_auto`, which handles simple `idominated` goals (here, `idominated_Z_auto~` proves `idominated _ _ (fun n => Z.log2 (2 * n + 1)) Z.log2`).

The specification:

`- xweaken~. do 4 intro. intro spec. intros. xgc; [xapply~ spec |]; csimpl~. { apply~ cost_mon. apply~ ts_bound_log. } Qed.`

The specification itself is proved by weakening `cons_tree_spec_aux`, plus the following arguments:

- `ts_bound_log`: \(|ts| \leq \log(2 \times |L| + 1)\);
- `cons_tree_cost` is monotonic: needed to apply the `ts_bound_log` inequality under `cons_tree_cost`.

#### `lookup`: how to deal with multiple parameters

As explained in the first post, things can get tricky when the cost function depends on multiple parameters. More precisely, the user has to specify which notion of “going to infinity” she intends, by choosing the right filter for the domain (e.g. `Z*Z` for a cost function with two parameters).

Proving a specification for the `lookup` function involves precisely this kind of difficulty.

```
let rec lookup i = function
  | [] -> raise (Invalid_argument "lookup")
  | Zero :: ts -> lookup i ts
  | One t :: ts ->
    if i < size t
    then lookup_tree i t
    else lookup (i - size t) ts
```
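To run this snippet on its own, one has to supply the pieces it depends on: the datatypes from the earlier post, `size`, and `lookup_tree`. The following is my reconstruction of those pieces (not necessarily the library’s exact code):

```ocaml
(* Reconstruction (mine, not necessarily the library's exact code) of
   the definitions lookup depends on. *)
type 'a tree = Leaf of 'a | Node of int * 'a tree * 'a tree
type 'a digit = Zero | One of 'a tree
type 'a rlist = 'a digit list

let size = function Leaf _ -> 1 | Node (n, _, _) -> n

(* Index into a complete tree: each Node caches its size, so every
   step halves the search space. *)
let rec lookup_tree i = function
  | Leaf x -> if i = 0 then x else raise (Invalid_argument "lookup_tree")
  | Node (n, t1, t2) ->
      if i < n / 2 then lookup_tree i t1
      else lookup_tree (i - n / 2) t2

let rec lookup i = function
  | [] -> raise (Invalid_argument "lookup")
  | Zero :: ts -> lookup i ts
  | One t :: ts ->
      if i < size t then lookup_tree i t
      else lookup (i - size t) ts
```

For instance, on the three-element list represented as `[One (Leaf 0); One (Node (2, Leaf 1, Leaf 2))]`, `lookup 2` skips the first (one-element) tree and then descends into the right child of the second.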

We prove a `lookup i ts` specification by induction on `ts`. During the induction we have two parameters: \(|ts|\), and the depth \(p\) of `ts`’s first tree (matching an `inv p ts L` invariant).

The status of these two parameters differs, though. Once the proof by induction is done, we will want, as for `cons_tree`, to express the cost function in terms of \(|L|\). \(|ts|\) tends to infinity with \(|L|\), but \(p\) stays fixed to \(0\), as `lookup` is only supposed to be called on well-formed random access lists from the user’s point of view.

When proving `cons_tree`, we did not have to provide any filter: the standard filter for `Z` was inferred. Here, we proceed as follows:

- We establish on paper a first asymptotic bound of \(O(p + |ts|)\);
- We provide a filter `towards_infinity_xZ p` on (a subset type of) `Z*Z`, which makes its second component tend to infinity, while the first is fixed to `p` (`p` is a parameter of the filter);
- We prove an intermediate specification using this filter, for any fixed `p`. Note that, sadly, we cannot use `SpecO` to state our specification: to get a provable and useful specification, we need to quantify over `p` “in the middle of `SpecO`”. The result is unfortunately a quite ugly intermediate specification: the result of unfolding `SpecO` and quantifying over `p` in the middle.

```
Lemma lookup_spec_ind :
  exists (F: Z * Z -> Z),
    (forall m n, 0 <= m -> 0 <= n -> 0 <= F (m, n)) /\
    (forall (p: Z),
       0 <= p ->
       monotonic (fixed_fst_le le p) le (fun p => F (proj1_sig p)) /\
       idominated (FO := fo_towards_infinity_xZ p) _ _
         (fun p => F (proj1_sig p))
         (fun p => let '(m, n) := proj1_sig p in m + n)) /\
    Spec lookup (i:int) (ts: rlist a) |R>>
      forall p L, inv p ts L -> ZInbound i L ->
      R (\$F (p, length ts)) (fun x => \[ZNth i L x]).
```

The proof has the same spirit as those shown before, basically applying `monotonic_*` and `idominated_*` lemmas, just more involved.

Finally, we can write a nicer top-level specification to wrap it:

```
Lemma lookup_spec :
  SpecO Z.log2 (fun F =>
    Spec lookup (i: int) (ts: rlist a) |R>>
      forall L, Rlist ts L -> ZInbound i L ->
      R (\$F (length L)) (fun x => \[ZNth i L x])).
```

After instantiating the cost function of our intermediate spec as `lookup_spec_cost`, and projecting the monotonicity and domination hypotheses with \(p = 0\), we provide the following cost function:

```
xcfO (fun n => lookup_spec_cost (0, Z.log2 (2 * n + 1))).
```

As `fun (m, n) => lookup_spec_cost (m, n)` is a \(O(m + n)\), by composition our cost function is indeed a \(O(\log(n))\).
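Spelled out, writing \(F\) for `lookup_spec_cost`, the composition argument is:

\[ F(m, n) = O(m + n) \;\Longrightarrow\; F(0, \log_2(2n + 1)) = O(\log_2(2n + 1)) = O(\log(n)). \]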

### Epilogue

The set of lemmas and tactics I just presented is, at the moment, only a unfinished tentative: it definitely has some rough edges, and can be unpleasant to use. However, I think an interesting aspect about it is that, trying to formalize reasoning on big-Os, it shows how much these tend to be informal when performed on paper. Thus, our bet is that the additional complexity is in most part not artificial, but reflects actual subtleties in the proofs.