GNATprove Tips and Tricks: What’s Provable for Real?

by Yannick Moy

Since its first release in 2014, GNATprove has supported both floating-point arithmetic (a.k.a. IEEE-754 arithmetic) and fixed-point arithmetic. There are limitations to the kinds of properties that can be proved automatically though, and they differ slightly depending on whether you consider floating-point or fixed-point arithmetic. In this post, I take as an example the computation of Pi using the well-known Leibniz formula and its improvement using the Shanks transformation. Pi is approximated using the first ten terms of the Leibniz formula, or the corresponding eighth term of the Shanks transformation of the Leibniz formula, using straightforward computations on either floating-point or fixed-point numbers.
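
For reference, the Leibniz formula is the alternating series

   Pi / 4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + ...

of which only the first ten terms are used below.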

Let's start with the computation of Pi using the Leibniz formula on fixed-point numbers. We'll use a 32-bit fixed-point type with a "small" (the smallest representable positive quantity) of 2^-20, defined as follows:

type Fixed is delta 2.0**(-20) range - (2.0**11) .. 2.0**11 - 1.0
     with Size => 32;
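
As a quick sanity check (my addition, not part of the original example), the language-defined attributes can be used to confirm what this declaration gives us with GNAT: a "small" of 2^-20, a 32-bit representation, and a range from -2048.0 to 2047.0.

procedure Check_Fixed_Representation
   is
   begin
      pragma Assert (Fixed'Small = 2.0**(-20));     --  one "small" is 2.0**(-20)
      pragma Assert (Fixed'Size  = 32);             --  32-bit representation
      pragma Assert (Fixed'First = -(2.0**11));     --  -2048.0
      pragma Assert (Fixed'Last  = 2.0**11 - 1.0);  --  2047.0
      null;
   end Check_Fixed_Representation;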

Function Leibniz_Fixed computes the desired quantity. Note that I've purposely defined the terms Ti as local variables (and not constants), so that GNATprove does not pre-compute the result, as it would do for literals or constants (just as the GNAT compiler does).

function Leibniz_Fixed return Fixed
     with Post => Leibniz_Fixed'Result = 3.0418358
   is
      T1  : Fixed := 1.0;
      T2  : Fixed := 1.0 / 3.0;
      T3  : Fixed := 1.0 / 5.0;
      T4  : Fixed := 1.0 / 7.0;
      T5  : Fixed := 1.0 / 9.0;
      T6  : Fixed := 1.0 / 11.0;
      T7  : Fixed := 1.0 / 13.0;
      T8  : Fixed := 1.0 / 15.0;
      T9  : Fixed := 1.0 / 17.0;
      T10 : Fixed := 1.0 / 19.0;
   begin
      return 4 * (T1 - T2 + T3 - T4 + T5 - T6 + T7 - T8 + T9 - T10);
   end Leibniz_Fixed;
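
For contrast (my addition, not in the original post), here is what the constant-based variant alluded to above would look like: the initial values are then static, so both the GNAT compiler and GNATprove pre-compute them instead of analyzing the computation, and nothing interesting is left for the provers to do.

      C1 : constant Fixed := 1.0;
      C2 : constant Fixed := 1.0 / 3.0;   --  evaluated up front rather than analyzed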

The result of this computation is exactly 3.0418358, as expressed in the postcondition of Leibniz_Fixed, which can be checked at runtime by enabling assertions when compiling. GNATprove is able to prove this postcondition with its default configuration (where only the CVC4 prover is run, with a timeout of 1 second). What makes it so easily provable is that fixed-point numbers are represented internally (both in the GNAT compiler and in the GNATprove analyzer) as integer multiples of the "small" of the type. Hence, adding and subtracting fixed-point numbers amounts to simply adding and subtracting integers in GNATprove, which automatic provers handle easily. Note that the multiplication by 4 on the last line is really seen by provers as an addition (4 * X = X + X + X + X).
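
Here is a minimal sketch (my addition) of that integer view: a value of type Fixed is just an integer count of "smalls", additions and subtractions on such values are exact integer operations, and an integer factor like 4 unfolds into repeated additions.

procedure Integer_View
   is
      X : Fixed := 1.0 / 3.0;   --  stored as some integer number N of smalls
      Y : Fixed := X + X;       --  exact: 2 * N smalls, no rounding involved
   begin
      pragma Assert (Y - X = X);              --  plain integer addition/subtraction
      pragma Assert (4 * X = X + X + X + X);  --  how the provers see "4 * X"
      null;
   end Integer_View;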

Let's turn to the same computation on floating-point numbers, using the standard 32-bit single-precision floating-point type Float. Function Leibniz_Float computes the desired quantity.

function Leibniz_Float return Float
--   with Post => Leibniz_Float'Result = 3.041839599609375
     with Post => Leibniz_Float'Result in 3.0418 .. 3.0419
   is
      T1  : Float := 1.0;
      T2  : Float := 1.0 / 3.0;
      T3  : Float := 1.0 / 5.0;
      T4  : Float := 1.0 / 7.0;
      T5  : Float := 1.0 / 9.0;
      T6  : Float := 1.0 / 11.0;
      T7  : Float := 1.0 / 13.0;
      T8  : Float := 1.0 / 15.0;
      T9  : Float := 1.0 / 17.0;
      T10 : Float := 1.0 / 19.0;
   begin
      return 4.0 * (T1 - T2 + T3 - T4 + T5 - T6 + T7 - T8 + T9 - T10);
   end Leibniz_Float;

The result of this computation is exactly 3.041839599609375, but the postcondition that expresses this property is commented out, as it cannot be proved automatically. At level 4 (thus using all provers and a timeout of 1 minute), GNATprove is able to prove automatically that the result lies in the interval 3.0418 .. 3.0419, but not that it lies in the more precise interval 3.04183 .. 3.04184. What makes floating-point computations harder to analyze for GNATprove is that floating-point numbers are represented internally in GNATprove as real numbers, and arithmetic operations on floating-point numbers are represented as the corresponding arithmetic operations on real numbers followed by rounding. The rounding is defined by an axiom which bounds the error it introduces, depending on the magnitude of its argument. Thus, provers are given an imprecise representation of the actual computation, which does not allow them to prove the most precise postcondition. Still, the proved bounding interval may be precise enough for the user's objective.
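
As a minimal illustration of that model (my example, not actual GNATprove output), even the very first division 1.0 / 3.0 is only known to the provers as "the real value 1/3 followed by a rounding step", so an enclosing interval is the kind of property one can realistically state about it:

procedure Float_Rounding_View
   is
      T2 : Float := 1.0 / 3.0;
      --  1/3 has no exact binary floating-point representation; the stored
      --  value is a nearby representable number, about 0.33333334 for Float
   begin
      pragma Assert (T2 in 0.333333 .. 0.333334);   --  an enclosing interval
      null;
   end Float_Rounding_View;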

Let's consider now a refinement of Leibniz formula known as Shanks transformation. From the initial terms Ti of the series, partial sums Ai are computed, and the Shanks transformation generates a new series that converges faster than the original. Here, we only need to compute the last term of this series, based on partial sums A8, A9 and A10. Function Shanks_Fixed computes the desired quantity using fixed-point numbers.
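
For reference, here is the Shanks transform written out in my notation: given the partial sums A_n, the transformed term is

   S(A_n) = (A_(n+1) * A_(n-1) - A_n^2) / (A_(n+1) - 2 * A_n + A_(n-1))

and the function below returns 4 * S(A_9), computed from A8, A9 and A10, the factor 4 accounting for the fact that the Leibniz series converges to Pi/4.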

function Shanks_Fixed return Fixed
--   with Post => Shanks_Fixed'Result = 3.1412525177001953125
     with Post => Shanks_Fixed'Result in 0.0 .. 100.0
   is
      T1  : Fixed := 1.0;
      T2  : Fixed := 1.0 / 3.0;
      T3  : Fixed := 1.0 / 5.0;
      T4  : Fixed := 1.0 / 7.0;
      T5  : Fixed := 1.0 / 9.0;
      T6  : Fixed := 1.0 / 11.0;
      T7  : Fixed := 1.0 / 13.0;
      T8  : Fixed := 1.0 / 15.0;
      T9  : Fixed := 1.0 / 17.0;
      T10 : Fixed := 1.0 / 19.0;

      A1  : Fixed := T1;
      A2  : Fixed := A1 - T2;
      A3  : Fixed := A2 + T3;
      A4  : Fixed := A3 - T4;
      A5  : Fixed := A4 + T5;
      A6  : Fixed := A5 - T6;
      A7  : Fixed := A6 + T7;
      A8  : Fixed := A7 - T8;
      A9  : Fixed := A8 + T9;
      A10 : Fixed := A9 - T10;

      Num : Fixed := 4 * (A10 * A8 - A9 * A9);
      Den : Fixed := A10 - 2 * A9 + A8;
   begin
      pragma Assert (Num in -0.4 .. -0.3);
--    pragma Assert (Num = -0.35010528564453125);
      pragma Assert (Den = -0.111454010009765625);
      return Num / Den;
   end Shanks_Fixed;

The exact values of the numerator Num, the denominator Den, and the result of their division are respectively -0.35010528564453125, -0.111454010009765625 and 3.1412525177001953125 with GNAT (results could be slightly different with another compiler, as multiplication and division between fixed-point numbers in Ada can be rounded to either of the two closest representable numbers). One can see that the Shanks transformation indeed gives a better approximation of Pi here than the original Leibniz formula on fixed-point numbers. This time, GNATprove can only prove automatically the exact value of Den, a coarse interval on Num, and an even coarser interval on the function result, as expressed in the assertions and postcondition that are not commented out. That's quite different from the results on Leibniz_Fixed, for which GNATprove proved the exact value of the result. The reason is that the computation of Num involves multiplications and the computation of the function result involves a division, while the computation of Den involves only additions and subtractions. Multiplication and division of fixed-point numbers both involve multiplications and divisions between integers, which are handled less precisely by automatic provers than addition and subtraction.
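
To make the source of those integer multiplications and divisions concrete, here is a small sketch (my addition): seen as integer counts of smalls, a fixed-point product has to be rescaled by the small, which amounts to an integer multiplication followed by an integer division, with a possible rounding step when that division is not exact.

procedure Fixed_Mult_View
   is
      X : Fixed := 0.5;    --  2**19 smalls
      Y : Fixed := 0.25;   --  2**18 smalls
      --  The real product is (Nx * Ny) * 2.0**(-40); expressing it as a
      --  multiple of 2.0**(-20) divides the integer product Nx * Ny by 2**20
      Z : Fixed := X * Y;  --  exact in this particular case: 2**17 smalls
   begin
      pragma Assert (Z = 0.125);
      null;
   end Fixed_Mult_View;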

Finally, let's consider function Shanks_Float which applies the Shanks transformation to the version of Leibniz formula on floating-point numbers.

function Shanks_Float return Float
--   with Post => Shanks_Float'Result = 3.14125514030456543
   is
      T1  : Float := 1.0;
      T2  : Float := 1.0 / 3.0;
      T3  : Float := 1.0 / 5.0;
      T4  : Float := 1.0 / 7.0;
      T5  : Float := 1.0 / 9.0;
      T6  : Float := 1.0 / 11.0;
      T7  : Float := 1.0 / 13.0;
      T8  : Float := 1.0 / 15.0;
      T9  : Float := 1.0 / 17.0;
      T10 : Float := 1.0 / 19.0;

      A1  : Float := T1;
      A2  : Float := A1 - T2;
      A3  : Float := A2 + T3;
      A4  : Float := A3 - T4;
      A5  : Float := A4 + T5;
      A6  : Float := A5 - T6;
      A7  : Float := A6 + T7;
      A8  : Float := A7 - T8;
      A9  : Float := A8 + T9;
      A10 : Float := A9 - T10;

      Num : Float := 4.0 * (A10 * A8 - A9 * A9);
      Den : Float := A10 - 2.0 * A9 + A8;
   begin
--    pragma Assert (Num = -0.350108861923217773);
--    pragma Assert (Num <= 0.0);
--    pragma Assert (Num in -10000.0 .. 10000.0);
--    pragma Assert (Den = -0.111455082893371582);
      pragma Assert (Den in -0.11146 .. -0.11145);
      return Num / Den;
   end Shanks_Float;

The exact values of the numerator Num, the denominator Den, and the result of their division are respectively -0.350108861923217773, -0.111455082893371582 and 3.14125514030456543 (with GNAT or any other compiler respecting the IEEE-754 standard). On this version of the function, GNATprove can only prove a precise interval for Den; it can prove neither the sign of Num nor even a very wide interval for it. That's much less precise than the results obtained on Shanks_Fixed. The reason is that multiplication and division on floating-point numbers are modeled for provers as multiplication and division on real numbers followed by rounding, which is what provers handle with the least precision. Thus, while a precise interval can be proved for Den, whose computation involves only additions and subtractions, no reasonable bounds can be proved on Num, whose computation involves multiplications.
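
One classic way to try to help the provers in such a situation (my suggestion, not something the post claims is sufficient here) is to insert intermediate assertions as stepping stones, for instance bounding the partial sums that feed the multiplications, right before Num is declared:

      pragma Assert (A8  in 0.7 .. 0.8);   --  holds at runtime: A8  is about 0.7543
      pragma Assert (A9  in 0.8 .. 0.9);   --  holds at runtime: A9  is about 0.8131
      pragma Assert (A10 in 0.7 .. 0.8);   --  holds at runtime: A10 is about 0.7605

Whether GNATprove can prove these bounds and then exploit them to bound Num would need to be checked; the results above suggest that the multiplications would remain the hard part.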

The analysis results on these four versions of the approximation of Pi are representative of what's provable with GNATprove, ranging from exact results on fixed-point computations involving only additions and subtractions to unprovable results on floating-point computations involving multiplications and divisions. Analysis of fixed-point computations and of floating-point computations relies on two different capabilities of automatic provers: support for integer arithmetic on the one hand, and for real arithmetic on the other. Improvements in provers are expected on both sides. We are currently investigating the use of prover support for floating-point arithmetic to improve the precision of GNATprove on floating-point computations.

Posted in #Formal Verification #SPARK

About Yannick Moy

Yannick Moy is Head of the Static Analysis Unit at AdaCore. Yannick contributes to the development of SPARK, a software source code analyzer aiming at verifying safety/security properties of programs. He frequently talks about SPARK in articles, conferences, classes and blogs (in particular blog.adacore.com). Yannick previously worked on source code analyzers for PolySpace (now The MathWorks) and at Université Paris-Sud.