AdaCore Blog

How Our Compiler Learnt From Our Analyzers

by Yannick Moy

Program analyzers interpret the source code of a program to compute some information. Hopefully, the way they interpret the program is consistent with the way the compiler interprets it to generate an executable; otherwise, the information computed is irrelevant at best and misleading at worst. For example, if the analyzer says that there are no possible run-time errors in a program, and you rely on this information to compile with dynamic checking turned off, it is crucial that no run-time error can occur because of a divergence of opinion between the analyzer and the compiler on the meaning of an instruction.

At AdaCore, we're developing both the GNAT compiler and two source code analyzers, CodePeer and GNATprove, so we are very much aware of the need for a consistent interpretation of programs. In fact, we even try as much as possible to ensure that our analyzers are consistent with how other compilers besides GNAT interpret programs. In general, this means that we adapt our analyzers to how compilers behave, because it's easier to change our analyzers than to change all the existing Ada compilers.

But recently, we encountered a case which led to a modification in the GNAT compiler. The issue was that GNAT interpreted the value of the exponentiation expression A**B differently depending on whether B was statically known. To see this, take any version of GNAT prior to April 20th, 2015, and compile the following program with assertions enabled (using -gnata):

procedure Exptest is
   Static_B : constant := 3;

   function Ident_Int (X : Integer) return Integer is (X);

   procedure Test (A : Float) is
      Dynamic_B : constant Integer := Ident_Int (Static_B);

      V1 : constant Float := A ** Static_B;
      V2 : constant Float := A ** Dynamic_B;
      pragma Assert (V1 = V2);
   end Test;

   F : Float := 0.5;
begin
   while F < 1.0 loop
      Test (F);
      F := Float'Succ (F);
   end loop;
end Exptest;

When running this program, you will get an assertion failure. This shows that for some value of A between 0.5 and 1.0, A**3 is different from A**B where B = 3.

The reason for this difference is that GNAT optimizes the cases A**2, A**3 and A**4 to perform direct multiplications instead of calling the run-time exponentiation function, because direct multiplication is much faster. By contrast, when B is not statically known, the generated code always calls the run-time exponentiation function. In that case, the compiler converts the argument A to the most precise floating-point type on the machine, and inserts a call to the run-time exponentiation function defined in file s-exnllf.adb, which performs exponentiation by squaring. This was known to possibly differ from the direct multiplication done for small static exponents, but it was considered acceptable because both results are within the expected precision of exponentiation in Ada.
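To make the two strategies concrete, here is a minimal Python sketch (not the actual GNAT run-time code, and the function names are illustrative) of exponentiation by squaring next to the direct repeated multiplication used for small static exponents. On integers the two agree exactly, but on floating-point values each intermediate multiplication rounds, so the two can produce results that differ by a last-bit rounding:

```python
def pow_by_squaring(a, n):
    """Compute a**n with O(log n) multiplications by squaring the base."""
    result = 1
    while n > 0:
        if n % 2 == 1:      # odd exponent: fold the current base in
            result *= a
        a *= a              # square the base
        n //= 2
    return result

def pow_direct(a, n):
    """Compute a**n by n-1 left-to-right multiplications, as the
    compiler expands small static exponents (n >= 1)."""
    result = a
    for _ in range(n - 1):
        result *= a
    return result

# For n = 4, squaring effectively computes (a*a)*(a*a) while direct
# expansion computes ((a*a)*a)*a; with floating-point rounding at each
# step, these two expressions need not be bit-identical.
```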

Our analyzers (both CodePeer and GNATprove) do not distinguish between static and non-static exponents, as efficiency is not an issue for them. Hence both CodePeer and GNATprove can prove that the assertion in the code of Exptest can never fail! This was a very clear case of the kind of inconsistency between compiler and analyzers that we strive to avoid. And here we concluded that the behavior of the compiler was misleading not only to analyzers, but possibly to humans too, so we chose to modify the run-time exponentiation function to optimize the cases of exponents 2, 3 and 4 like the compiler does for static values. [To be complete, this also required generating intermediate code for which the GCC backend would prevent the use of extended precision floating-point representation.]
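The shape of the fix can be sketched as follows (a hedged Python sketch under our reading of the change, not the actual s-exnllf.adb implementation): exponents 2, 3 and 4 now use the same direct multiplications as the compiler's static expansion, and only larger exponents fall back to exponentiation by squaring:

```python
def runtime_exp(a, n):
    """Illustrative sketch of a run-time exponentiation routine that
    special-cases small exponents to match the compiler's static
    expansion (names and structure are hypothetical)."""
    if n == 2:
        return a * a
    if n == 3:
        return a * a * a
    if n == 4:
        return a * a * a * a
    # General case: exponentiation by squaring, as before.
    result = 1.0
    while n > 0:
        if n % 2 == 1:
            result *= a
        a *= a
        n //= 2
    return result
```

With this change, A**3 computes the same multiplications whether the exponent is static or dynamic, so the assertion in Exptest holds at run time just as the analyzers predict.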

If you now compile and run the above program with a recent version of GNAT, you won't get any assertion failure anymore!

Posted in #Compilation    #Formal Verification    #SPARK   

About Yannick Moy


Yannick Moy is Head of the Static Analysis Unit at AdaCore. Yannick contributes to the development of SPARK, a software source code analyzer aiming at verifying safety/security properties of programs. He frequently talks about SPARK in articles, conferences, classes and blogs. Yannick previously worked on source code analyzers for PolySpace (now The MathWorks) and at Université Paris-Sud.