Research Corner - Focused Certification of SPARK in Coq
by Yannick Moy
The SPARK toolset aims at giving guarantees to its users about the properties of the software analyzed, whether absence of run-time errors or more complex properties. But the SPARK toolset is itself a complex tool, so it is not free of errors. How, then, do we get confidence in the results of the analysis? The established means for getting confidence in tools in industry is through a process sometimes called tool certification, sometimes tool qualification. It requires describing, at various levels of detail (depending on the criticality of the tool usage), the intended functionality of the tool, and demonstrating (usually through testing) that the tool correctly implements this functionality.
The academic way of obtaining confidence is also called "certification", but it covers a completely different reality. It requires providing mathematical evidence, through mechanized proof, that the tool indeed performs a formally specified functionality. Examples of that level of certification are the CompCert compiler and the seL4 operating system. This level of assurance is very costly to achieve, and as a result it is not suitable for the majority of tools.
For SPARK, we have worked with our academic partners from Kansas State University and the Conservatoire National des Arts et Métiers to reach a middle ground: establishing mathematical evidence of the correctness of a critical part of the SPARK toolset. The part we focused on is the tagging, by the frontend of the SPARK technology, of nodes that require run-time checks. This frontend is a critical and complex component, shared between the formal verification tool GNATprove and the GNAT compiler. It is responsible for generating semantically annotated Abstract Syntax Trees of the source code, with special tags on nodes that require run-time checks. GNATprove then relies on these tags to generate the formulas to prove, so a missing tag means a missing verification. Our interest in getting better assurance on this part of the technology is not theoretical: this is a part where we have repeatedly had errors in the past, leading to missing verifications.
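As a small illustration of what this tagging means (a minimal sketch, not taken from the certified formalization), consider the following Ada subprogram. The GNAT frontend marks AST nodes with flags such as Do_Range_Check and Do_Overflow_Check at the points indicated in the comments, and GNATprove turns each such mark into a formula to prove:

```ada
procedure Bump is
   type Index is range 1 .. 10;
   type Table is array (Index) of Integer;

   T : Table := (others => 0);
   I : Integer := 3;
begin
   --  The conversion of I to type Index needs a range check: the
   --  frontend tags the conversion node (Do_Range_Check), and
   --  GNATprove must prove that I lies within 1 .. 10.
   --
   --  The addition also needs an overflow check (Do_Overflow_Check):
   --  GNATprove must prove that T (Index (I)) + 1 fits in Integer.
   T (Index (I)) := T (Index (I)) + 1;
end Bump;
```

If the frontend failed to place one of these tags, GNATprove would silently skip the corresponding proof obligation, which is exactly the kind of missing verification the Coq formalization is meant to rule out.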
Interested in knowing more? See the attached paper, which has been accepted at the SEFM 2017 conference, or look at the initial formalization work presented at the HILT conference in 2013.