# Using SPARK to prove 255-bit Integer Arithmetic from Curve25519

## by Joffrey Huguet – Apr 30, 2019

In 2014, Adam Langley, a well-known cryptographer at Google, wrote a post on his personal blog in which he tried to prove functions from curve25519-donna, one of his projects, using various verification tools: SPARK, Frama-C, Isabelle, and others. He described this attempt as "disappointing", because he could not manage to prove "simple" properties, such as the absence of runtime errors. I will show in this blog post that today it is possible to prove what he wanted to prove, and even more.

Algorithms in elliptic-curve cryptography compute numbers that are too large to fit into the registers of typical CPUs. In this case, curve25519 uses integers ranging from 0 to $2^{255} - 19$, which can't be represented as native 32- or 64-bit integers. In the original implementation from Daniel J. Bernstein and in curve25519-donna, these integers are represented by arrays of 10 smaller, 32-bit integers. Each of these integers is called a "limb". Limbs alternate between 26-bit and 25-bit lengths. This forms a little-endian representation of the bigger integer, with low-weight limbs first. In section 4 of this paper, Bernstein explains both the motivation and the details of his representation. The formula to convert an array A of 32-bit integers into the big integer it represents is: $\sum_{i=0}^{9} A (i) \times 2^{\lceil 25.5i \rceil}$ where the half square brackets represent the ceiling function. I won't focus on implementation details here, but you can read about them in the paper above. To me, the most interesting part of this project is its proof.
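To get a feel for this representation, here is a small Python model (illustrative only, not part of the SPARK sources; the names limb_exponent and to_big_integer are mine) that computes the weight of each limb and applies the conversion formula:

```python
from math import ceil

def limb_exponent(i: int) -> int:
    # Weight of limb i in the little-endian representation: 2**ceil(25.5 * i).
    return ceil(25.5 * i)

def to_big_integer(limbs):
    # The conversion formula: sum of A(i) * 2**ceil(25.5 * i).
    return sum(a * 2 ** limb_exponent(i) for i, a in enumerate(limbs))

# Limb widths are the gaps between consecutive exponents.
widths = [limb_exponent(i + 1) - limb_exponent(i) for i in range(9)]
assert widths == [26, 25, 26, 25, 26, 25, 26, 25, 26]
```

Running it confirms the alternating 26-bit and 25-bit limb widths mentioned above.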

# First steps

### Types

The first step was to define the different types used in the implementation and proof.

I'll use two index types: Index_Type, which is range 0 .. 9 and is used to index the arrays that represent 255-bit integers, and Product_Index_Type, which is range 0 .. 18 and is used to index the output array of the Multiply function.

Integer_Curve25519 is the type of the arrays used in the implementation: 64-bit integer arrays with variable bounds within Product_Index_Type. Product_Integer, the result type of Multiply, is a 64-bit integer array over the whole Product_Index_Type range. Integer_255 is the type of the arrays that represent 255-bit integers: arrays of 32-bit integers indexed by Index_Type.

### Big integers library

Proving Add and Multiply in SPARK requires reasoning about the big integers represented by the arrays. In SPARK, we don't have mathematical integers (i.e. integers with infinite precision), only bounded integers, which are not sufficient for our use case. If I had tried to prove the correctness of Add and Multiply in SPARK with bounded integers, the tool would have generated overflow checks every time an array is converted to the integer it represents.

To overcome this problem, I wrote the specification of a minimal big integers library, that defines the type, relational operators and basic operations. It is a specification only, so it is not executable.

SPARK is based on Why3, which has support for mathematical integers, and there is a way to link a Why3 definition directly to a SPARK entity. This is called external axiomatization; you can find more details about how to do it here.

Using this feature, I could easily provide a big integers library with basic functions like "+", "*" or To_Big_Integer (X : Integer). As previously mentioned, this library is usable for proof but not executable (its subprograms are marked as Import). To avoid issues during binding, I used a SPARK feature called ghost code. It takes the form of an aspect: Ghost indicates to the compiler that this code cannot affect the values computed by ordinary code and can therefore be safely erased. It's very useful for us, since this aspect can be used to write non-executable functions which are only called in annotations.

### Conversion function

One of the most important functions used in the proof is a function that converts the array of 10 integers to the big integer it represents, so when X is an Integer_255 then +X is its corresponding big integer.

function "+" (X : Integer_Curve25519) return Big_Integer
  renames To_Big_Integer;

After this, I can define To_Big_Integer recursively, with the aid of an auxiliary function, Partial_Conversion.

function Partial_Conversion
  (X : Integer_Curve25519;
   L : Product_Index_Type)
   return Big_Integer
is
  (if L = 0
   then (+X (0)) * Conversion_Array (0)
   else Partial_Conversion (X, L - 1) + (+X (L)) * Conversion_Array (L))
with
  Ghost,
  Pre => L in X'Range;

function To_Big_Integer (X : Integer_Curve25519) return Big_Integer
is
  (Partial_Conversion (X, X'Last))
with
  Ghost,
  Pre => X'Length > 0;
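The recursion can be mirrored in plain Python as a sanity check (illustrative only; I assume here that Conversion_Array (I) is 2^⌈25.5·I⌉, per the formula given earlier):

```python
from math import ceil

# Assumed definition of Conversion_Array, following the paper's formula.
CONVERSION = [2 ** ceil(25.5 * i) for i in range(19)]

def partial_conversion(x, l):
    # Mirrors Partial_Conversion: base case at index 0, otherwise recurse.
    if l == 0:
        return x[0] * CONVERSION[0]
    return partial_conversion(x, l - 1) + x[l] * CONVERSION[l]

def to_big_integer(x):
    # Mirrors To_Big_Integer: convert the whole array.
    return partial_conversion(x, len(x) - 1)

limbs = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
# The recursion agrees with the closed-form sum.
assert to_big_integer(limbs) == sum(a * CONVERSION[i] for i, a in enumerate(limbs))
```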

### Specification of Add and Multiply

With these functions defined, it was much easier to write specifications of Add and Multiply. All_In_Range is used in preconditions to bound the parameters of Add and Multiply in order to avoid overflow.

function All_In_Range
  (X, Y     : Integer_255;
   Min, Max : Long_Long_Integer)
   return Boolean
is
  (for all J in X'Range =>
     X (J) in Min .. Max
     and then Y (J) in Min .. Max);

function Add (X, Y : Integer_255) return Integer_255 with
  Pre  => All_In_Range (X, Y, -2**30 + 1, 2**30 - 1),
  Post => +Add'Result = (+X) + (+Y);

function Multiply (X, Y : Integer_255) return Product_Integer with
  Pre  => All_In_Range (X, Y, -2**27 + 1, 2**27 - 1),
  Post => +Multiply'Result = (+X) * (+Y);

# Proof

Proving the absence of runtime errors was relatively simple: no annotation was needed for Add, and only a few very classical loop invariants for Multiply. Multiply is the function where Adam Langley stopped, because he couldn't prove the absence of overflow. I will focus on the proof of functional correctness, which was a much more difficult step. In SPARK, Adam Langley couldn't prove it for Add because of the overflow checks generated when converting the arrays to the big integers they represent. This is exactly where the big integers library is useful: it makes it possible to manipulate big integers in proof without overflow checks.

The method I used to prove both functions is the following:

1. Create a function that allows tracking the content of the returned array.
2. Actually track the content of the returned array through loop invariants that ensure equality with the function from point 1.
3. Prove, in a ghost procedure, the equivalence between the loop invariants at the end of the loop and the postcondition.
4. Call this procedure right before the return statement.

I will illustrate this method with Add. The final implementation of Add is the following:

function Add (X, Y : Integer_255) return Integer_255 is
   Sum : Integer_255 := (others => 0);
begin
   for J in Sum'Range loop
      Sum (J) := X (J) + Y (J);

      pragma Loop_Invariant
        (for all K in 0 .. J => Sum (K) = X (K) + Y (K));
   end loop;
   return Sum;
end Add;

Points 1 and 2 are handled by the loop invariant; no new function was needed, since the content of the array is simple (just an addition). At the end of the loop, the information we have is "for all K in 0 .. 9 => Sum (K) = X (K) + Y (K)", which is information about the whole array. Points 3 and 4 are handled by Prove_Add, whose specification is:

procedure Prove_Add (X, Y, Sum : Integer_255) with
  Ghost,
  Pre  => (for all J in Sum'Range => Sum (J) = X (J) + Y (J)),
  Post => To_Big_Integer (Sum) = To_Big_Integer (X) + To_Big_Integer (Y);

This is what we call a lemma in SPARK. Lemmas are ghost procedures whose preconditions are the hypotheses and whose postconditions are the conclusions. Some lemmas are proved automatically, while others require the solvers to be guided. You can do this by adding a non-null body to the procedure, guiding the solvers differently depending on the conclusion you want to prove.

I will talk about the body of Prove_Add in a few paragraphs.
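Before moving on, the postcondition of Add can be cross-checked numerically with a small Python model (illustrative only; to_big and add are my own stand-ins for the conversion and for Add, and I again assume Conversion_Array (I) = 2^⌈25.5·I⌉):

```python
from math import ceil

CONVERSION = [2 ** ceil(25.5 * i) for i in range(10)]

def to_big(x):
    # The "+" conversion: array of limbs to the big integer it represents.
    return sum(a * c for a, c in zip(x, CONVERSION))

def add(x, y):
    # Limb-wise addition, as in the body of Add: Sum (J) := X (J) + Y (J).
    return [a + b for a, b in zip(x, y)]

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
# The conversion is linear, so +Add'Result = (+X) + (+Y) holds.
assert to_big(add(x, y)) == to_big(x) + to_big(y)
```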

### Multiply

Multiply was much harder than Add, but I had the most fun proving it. I refer to the implementation in curve25519-donna as the "inlined" one, because it is fully explicit and doesn't go through loops. This causes a problem for the first point of my method: there is no way to track what the routine does, except by adding assertions at every line of code, which is not a very interesting approach, proof-wise. So I decided to change the implementation to make it easier to understand. The most natural approach to multiplication, in my opinion, is to use the distributivity of product over addition. The resulting implementation is similar to TweetNaCl's implementation of curve25519 (see the M function):

function Multiply (X, Y : Integer_255) return Product_Integer is
   Product : Product_Integer := (others => 0);
begin
   for J in Index_Type loop
      for K in Index_Type loop
         Product (J + K) :=
           Product (J + K)
             + X (J) * Y (K)
             * (if J mod 2 = 1 and then K mod 2 = 1 then 2 else 1);
      end loop;
   end loop;
   return Product;
end Multiply;
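The factor of 2 applied to odd/odd limb pairs comes from the 2^⌈25.5·I⌉ weights: for J and K both odd, Conversion_Array (J) * Conversion_Array (K) = 2 * Conversion_Array (J + K). A Python model of this algorithm (illustrative only; multiply and to_big are my own names, with the assumed conversion array) confirms that the result represents the product:

```python
from math import ceil

# Assumed conversion array, extended to the 19 limbs of the product.
CONVERSION = [2 ** ceil(25.5 * i) for i in range(19)]

def to_big(x):
    return sum(a * c for a, c in zip(x, CONVERSION))

def multiply(x, y):
    # Schoolbook multiplication over limbs, with the odd/odd factor of 2.
    product = [0] * 19
    for j in range(10):
        for k in range(10):
            factor = 2 if j % 2 == 1 and k % 2 == 1 else 1
            product[j + k] += x[j] * y[k] * factor
    return product

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
assert to_big(multiply(x, y)) == to_big(x) * to_big(y)
```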

Inside the outer loop, J is fixed and we iterate over all values of K, so over the entire content of Y. This is repeated over the full range of J, so over the entire content of X. With this implementation, it is possible to track the content of the array in loop invariants through an auxiliary function, Partial_Product, which gives the value of a certain index in the array at each iteration. We add the following loop invariant at the end of the inner loop:

pragma Loop_Invariant
  (for all L in 0 .. K =>
     Product (J + L) = Partial_Product (X, Y, J, L));

The function Partial_Product is defined recursively, because it is a sum of several factors.

function Partial_Product
  (X, Y : Integer_255;
   J, K : Index_Type)
   return Long_Long_Integer
is
  (if K = 9 or else J = 0
   then (if J mod 2 = 1 and then K mod 2 = 1 then 2 else 1) * X (J) * Y (K)
   else Partial_Product (X, Y, J - 1, K + 1)
        + (if J mod 2 = 1 and then K mod 2 = 1 then 2 else 1) * X (J) * Y (K));

As seen in the loop invariant above, the function gives the value of Product (J + K); this cell is updated at every pair of indexes (L, M) such that L + M = J + K. The recursive call can be made on the pair (J + 1, K - 1) or on (J - 1, K + 1). Given how the implementation iterates, the choice (J - 1, K + 1) is preferable, because it follows the order in which Product (J + K) accumulates its terms. The base case is when J = 0 or K = 9.
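This bookkeeping can be checked in Python (illustrative only, with my own hypothetical names): replaying the nested loops and comparing each updated cell against the recursive definition of Partial_Product confirms the loop invariant numerically.

```python
def partial_product(x, y, j, k):
    # Mirrors Partial_Product: sum along the diagonal j + k, ending at j = 0 or k = 9.
    factor = 2 if j % 2 == 1 and k % 2 == 1 else 1
    term = factor * x[j] * y[k]
    if k == 9 or j == 0:
        return term
    return partial_product(x, y, j - 1, k + 1) + term

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
product = [0] * 19
for j in range(10):
    for k in range(10):
        factor = 2 if j % 2 == 1 and k % 2 == 1 else 1
        product[j + k] += x[j] * y[k] * factor
        # The invariant: the cell just updated equals Partial_Product (X, Y, J, K).
        assert product[j + k] == partial_product(x, y, j, k)
```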

### Problems and techniques

Defining functions to track the content was rather easy for both functions. Proving that the content is actually equal to the function was a bit more difficult for Multiply, but not as difficult as proving the equivalence between this invariant and the postcondition. With these two functions, we challenge the provers in several ways:

• We use recursive functions in contracts, which means provers have to reason inductively, something they struggle with. That's why we need to *guide* them in order to prove properties that involve recursive functions.
• The context size is quite big, especially for Multiply. The context comprises all the variables, axioms and theories that solvers are given to prove a check. It grows with the size of the code, and sometimes it becomes too big: the solvers get lost and fail to prove the check. Reducing the context makes their job easier and may let them prove previously unproved checks.
• The Multiply contracts require reasoning about non-linear integer arithmetic, a theory provers have a lot of trouble with. This problem is specific to Multiply, but it explains why this function took some time to prove. The solvers need to be guided quite a bit in order to prove certain properties.

What follows is a collection of techniques I found interesting and might be useful when trying to prove problems, even very different from this one.

#### Axiom instantiation

When I tried to prove the loop invariants in Multiply that track the value of Product (J + K), the solvers were unable to use the definition of Partial_Product. Even an assertion restating the exact definition failed to be proved. This is mainly due to context size: the solvers are not able to find the axiom in the search space, and the proof fails. The workaround I found is to create a ghost procedure that has the definition as its postcondition and a null body, like this:

procedure Partial_Product_Def
  (X, Y : Integer_255;
   J, K : Index_Type)
with
  Pre  => All_In_Range (X, Y, Min_Multiply, Max_Multiply),
  Post =>
    (if K = Index_Type'Min (9, J + K)
     then Partial_Product (X, Y, J, K)
          = (if J mod 2 = 1 and then K mod 2 = 1 then 2 else 1)
            * X (J) * Y (K)
     else Partial_Product (X, Y, J, K)
          = Partial_Product (X, Y, J - 1, K + 1)
            + X (J) * Y (K)
            * (if J mod 2 = 1 and then K mod 2 = 1 then 2 else 1));

procedure Partial_Product_Def
  (X, Y : Integer_255;
   J, K : Index_Type)
is null;

In this case, the context size is reduced considerably, and the provers are able to prove the postcondition even with a null body. During the proof process, I had to create other recursive functions that needed such a *_Def procedure in order to make their definition available. It has to be instantiated manually, but it is a way to "remind" the solvers of the axiom. A simple but very useful technique.

#### Manual induction

When trying to prove Prove_Add, I encountered one of the cases where provers have to reason inductively: a finite sum defined recursively. To help them prove the postcondition, the conversion is computed incrementally in a loop, and loop invariants track its evolution.

procedure Prove_Add (X, Y, Sum : Integer_255) with
  Ghost,
  Pre  => (for all J in Sum'Range => Sum (J) = X (J) + Y (J)),
  Post => To_Big_Integer (Sum) = To_Big_Integer (X) + To_Big_Integer (Y);
--  Specification repeated as a reminder

procedure Prove_Add (X, Y, Sum : Integer_255) with
  Ghost
is
   X_255, Y_255, Sum_255 : Big_Integer := Zero;
begin
   for J in Sum'Range loop
      X_255   := X_255 + (+X (J)) * Conversion_Array (J);
      Y_255   := Y_255 + (+Y (J)) * Conversion_Array (J);
      Sum_255 := Sum_255 + (+Sum (J)) * Conversion_Array (J);

      pragma Loop_Invariant (X_255 = Partial_Conversion (X, J));
      pragma Loop_Invariant (Y_255 = Partial_Conversion (Y, J));
      pragma Loop_Invariant (Sum_255 = Partial_Conversion (Sum, J));
      pragma Loop_Invariant
        (Partial_Conversion (Sum, J) =
           Partial_Conversion (X, J) + Partial_Conversion (Y, J));
   end loop;
end Prove_Add;

What makes this proof inductive is the way SPARK handles loop invariants: it first tries to prove the invariant for the first iteration, as an initialization, and then tries to prove it for iteration N knowing that it holds for iteration N - 1.

Here, the final loop invariant is what we want to prove: when J = 9, it is equivalent to the postcondition. The other loop invariants and variables are used to compute the partial conversions incrementally and facilitate the proof.

Fortunately, in this case, provers do not need more guidance to prove the postcondition.  Prove_Multiply also follows this proof scheme, but is much more difficult. You can access its proof following this link.
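The incremental scheme of Prove_Add can itself be replayed numerically in Python (illustrative only, with an assumed conversion array and my own names), checking the loop invariants at every iteration:

```python
from math import ceil

CONVERSION = [2 ** ceil(25.5 * i) for i in range(10)]

def partial_conversion(v, l):
    # Closed form of Partial_Conversion (V, L).
    return sum(v[i] * CONVERSION[i] for i in range(l + 1))

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
s = [a + b for a, b in zip(x, y)]  # plays the role of Sum

x_255 = y_255 = sum_255 = 0
for j in range(10):
    # Same incremental updates as in the body of Prove_Add.
    x_255 += x[j] * CONVERSION[j]
    y_255 += y[j] * CONVERSION[j]
    sum_255 += s[j] * CONVERSION[j]
    # The loop invariants, checked at each step.
    assert x_255 == partial_conversion(x, j)
    assert y_255 == partial_conversion(y, j)
    assert sum_255 == partial_conversion(x, j) + partial_conversion(y, j)
```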

Another case of inductive reasoning in my project is a lemma whose proof presents a very useful technique for proving algorithms that use recursive ghost functions in contracts.

procedure Equal_To_Conversion
  (A, B : Integer_Curve25519;
   L    : Product_Index_Type)
with
  Pre  =>
    A'Length > 0
    and then A'First = 0
    and then B'First = 0
    and then B'Last <= A'Last
    and then L in B'Range
    and then (for all J in 0 .. L => A (J) = B (J)),
  Post => Partial_Conversion (A, L) = Partial_Conversion (B, L);

It states that given two Integer_Curve25519 arrays, A and B, and a Product_Index_Type, L, if A (0 .. L) = B (0 .. L), then Partial_Conversion (A, L) = Partial_Conversion (B, L). To us, the proof is evident: it goes by induction over L. But we have to help SPARK a bit, even though the guidance is usually simple.

procedure Equal_To_Conversion
  (A, B : Integer_Curve25519;
   L    : Product_Index_Type)
is
begin
   if L = 0 then
      return;                         --  Initialization of lemma
   end if;
   Equal_To_Conversion (A, B, L - 1); --  Calling lemma for L - 1
end Equal_To_Conversion;

The body of a procedure proved by induction actually looks like induction: the first thing to add is the initialization, an if statement ending with a return. The code that follows handles the general case: we call the same lemma for L - 1, and we add assertions to prove the postcondition if necessary. In this case, calling the lemma at L - 1 was sufficient to prove the postcondition.

#### Guide with assertions

Even though the solvers know the relation between Conversion_Array (J + K) and Conversion_Array (J) * Conversion_Array (K), it is hard for them to prove properties requiring non-linear arithmetic reasoning. The following procedure is a nice example:

procedure Split_Product
  (Old_Product, Old_X, Product_Conversion : Big_Integer;
   X, Y                                   : Integer_255;
   J, K                                   : Index_Type)
with
  Ghost,
  Pre  =>
    Old_Product
      = Old_X * (+Y)
        + (+X (J))
          * (if K = 0
             then Zero
             else Conversion_Array (J) * Partial_Conversion (Y, K - 1))
    and then
      Product_Conversion
        = Old_Product
          + (+X (J)) * (+Y (K))
            * (+(if J mod 2 = 1 and then K mod 2 = 1 then 2 else 1))
            * Conversion_Array (J + K),
  Post =>
    Product_Conversion
      = Old_X * (+Y)
        + (+X (J)) * Conversion_Array (J)
          * Partial_Conversion (Y, K);
The preconditions imply the postcondition by arithmetic reasoning, yet with a null body the procedure is not proved. We can guide the provers with assertions, splitting the proof into two cases:

procedure Split_Product
  (Old_Product, Old_X, Product_Conversion : Big_Integer;
   X, Y                                   : Integer_255;
   J, K                                   : Index_Type)
is
begin
   if J mod 2 = 1 and then K mod 2 = 1 then
      pragma Assert
        (Product_Conversion
           = Old_Product
             + (+X (J)) * (+Y (K))
               * Conversion_Array (J + K) * (+2));
      pragma Assert
        ((+2) * Conversion_Array (J + K)
           = Conversion_Array (J) * Conversion_Array (K));
      --  Case where Conversion_Array (J + K) * 2
      --    = Conversion_Array (J) * Conversion_Array (K).
   else
      pragma Assert
        (Conversion_Array (J + K)
           = Conversion_Array (J) * Conversion_Array (K));
      --  Other case
   end if;
   if K > 0 then
      pragma Assert
        (Partial_Conversion (Y, K)
           = Partial_Conversion (Y, K - 1)
             + (+Y (K)) * Conversion_Array (K));
      --  Definition of Partial_Conversion, needed for the proof
   end if;
end Split_Product;

What is interesting about this technique is that it shows that to prove something with auto-active proof, you first need to understand it yourself. I proved this lemma by hand on paper, which was not difficult, and then rewrote my manual proof as assertions. If my manual proof is easier when I split it into two cases, the same should hold for the provers: it lets them choose good first steps towards the proof.
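The case split rests on an identity about the conversion array; under the assumption Conversion_Array (I) = 2^⌈25.5·I⌉, it can be verified exhaustively in Python (illustrative only):

```python
from math import ceil

CONVERSION = [2 ** ceil(25.5 * i) for i in range(19)]

for j in range(10):
    for k in range(10):
        if j % 2 == 1 and k % 2 == 1:
            # Odd/odd case: the factor of 2 appears.
            assert 2 * CONVERSION[j + k] == CONVERSION[j] * CONVERSION[k]
        else:
            # Other case: the weights multiply exactly.
            assert CONVERSION[j + k] == CONVERSION[j] * CONVERSION[k]
```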

#### Split into subprograms

Another thing I noticed is that the provers were quickly overwhelmed by the context size in my project: I had a lot of recursive functions in my contracts, but also quantifiers. I did not hesitate to split proofs into ghost procedures in order to reduce the context size, and also to wrap some expressions into functions.

I have 7 subprograms that enable me to prove Prove_Multiply, which is not a lot in my opinion. It also increases readability, which is important if you want other people to read your code.

There is another method I want to share to overcome this problem. When there is only one assertion to prove, but it requires a lot of guidance, it is possible to put all the assertions, including the last one, in a begin ... end block, where the last one uses the Assert_And_Cut pragma, like this:

code ...
--  It is possible to add the keyword declare to declare variables
--  before the block, if it helps the proof.
begin
   pragma Assert (one assertion needed to prove the last one);
   pragma Assert (another one);
   pragma Assert_And_Cut (assertion to prove and remember);
end;

Assert_And_Cut asks the provers to prove the property inside it, but also removes all the context created inside the begin ... end block. Only this assertion is kept, which helps reduce the context.

Sadly, this workaround didn't work for my project, because the context was already too big to prove the first assertions. But adding a lot of subprograms also has drawbacks: you have to write preconditions for the procedures, and this may be more difficult than just writing the block with an Assert_And_Cut. Both approaches are useful in different projects, and I think it's good to keep both methods in mind.

# Takeaways

For statistics only: Add has an 8-line implementation and around 25 lines of ghost code to prove it. Multiply has 12 lines of implementation for more than 500 lines of proof. And it's done with auto-active proof, which means that the verification itself was carried out automatically! In comparison, the verification of TweetNaCl's curve25519 needs more than 700 lines of Coq to prove Multiply for a 5-line implementation, although they also have a carry operation to prove on top of the product.

I would say this challenge is at an intermediate level, because the proof of Multiply is not difficult to understand once you write it down. But it exercises several techniques that apply to many other problems, and that, to me, is the main interest of the project.

As in Adam Langley's blog post, I tried to prove my project with Alt-Ergo only, because it was the only prover available for SPARK back in 2014. Even today, Alt-Ergo alone is unable to prove all the code. This doesn't make Alt-Ergo a bad prover: in fact, none of the provers available in SPARK (CVC4, Z3 and Alt-Ergo) can prove the entire project on their own. I think it shows that having multiple provers available greatly increases the chances of code being proved.

At the beginning, working on this project was just a way to use my big integers library in proof. But in the end, I believe it is an interesting take on the challenge of verifying elliptic-curve functions, especially as similar projects appear, for example fiat-crypto or the verification of TweetNaCl's curve25519, and I had a lot of fun experimenting with SPARK to prove properties that are usually badly handled by provers. You can access the full proof of my project in this repository.