Larger than it looks (storage pools)
This post shows how to implement a special storage pool that allocates an extra header each time it allocates memory. This header can be used to store type-specific information outside of the type itself.
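As a teaser, here is a minimal sketch of such a pool. It assumes GNAT's System.Memory as the underlying allocator, and the names Header_Pool and Header are illustrative rather than taken from the post: the pool over-allocates by the (alignment-rounded) size of the header and hands back the address just past it.

```ada
with System;                  use System;
with System.Memory;
with System.Storage_Elements; use System.Storage_Elements;
with System.Storage_Pools;    use System.Storage_Pools;

package Header_Pools is

   --  Per-allocation data, stored in front of each allocated object
   type Header is record
      Info : Integer;  --  type-specific information goes here
   end record;

   type Header_Pool is new Root_Storage_Pool with null record;

   overriding procedure Allocate
     (Pool                     : in out Header_Pool;
      Storage_Address          : out Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count);

   overriding procedure Deallocate
     (Pool                     : in out Header_Pool;
      Storage_Address          : Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count);

   overriding function Storage_Size
     (Pool : Header_Pool) return Storage_Count
   is (Storage_Count'Last);

end Header_Pools;

package body Header_Pools is

   --  Room reserved in front of the object, rounded up so that the
   --  object itself keeps the requested alignment
   function Extra (Alignment : Storage_Count) return Storage_Count is
     (((Header'Max_Size_In_Storage_Elements + Alignment - 1)
       / Alignment) * Alignment);

   overriding procedure Allocate
     (Pool                     : in out Header_Pool;
      Storage_Address          : out Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count)
   is
      Raw : constant Address :=
        System.Memory.Alloc
          (System.Memory.size_t
             (Size_In_Storage_Elements + Extra (Alignment)));
   begin
      --  Hand back the address just past the header
      Storage_Address := Raw + Extra (Alignment);
   end Allocate;

   overriding procedure Deallocate
     (Pool                     : in out Header_Pool;
      Storage_Address          : Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count)
   is
   begin
      --  Free from the start of the header, not the object
      System.Memory.Free (Storage_Address - Extra (Alignment));
   end Deallocate;

end Header_Pools;
```

An access type is then tied to the pool with `for T'Storage_Pool use My_Pool;`, after which every allocator for that type goes through Allocate above.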
One of the main challenges in certifying Ada projects is achieving 100% code coverage, yet in most projects more than 95% structural coverage is hard to reach. What can you do about the last 5% of code that can't be covered? DO-178C, for example, provides a framework for integrating various techniques into the development process to solve this problem. In this webinar you will learn how static analysis and dynamic testing can help complete the analysis for pieces of code that are not covered.
In SPARK, as in most programming languages, there is a variety of bounded integer types. Why3, on the other hand, only has mathematical integers and a library for bitvectors. Since bitwise operations can only be done on modular types in Ada, we currently translate arithmetic operations on signed integer types as operations on mathematical integers, and arithmetic operations on modular types as operations on bitvectors. The remaining question is: how do we encode the specific bounds of the Ada types in our Why3 translation? In this post, I will present three different ways we tried to do this, and explain which one we currently use and why.
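For a concrete picture of the two kinds of types involved (the declarations are illustrative, not from the post):

```ada
--  Signed integer type: arithmetic is translated to operations on
--  mathematical integers, plus the bounds encoding discussed in the post
type Temperature is range -50 .. 150;

--  Modular type: arithmetic is translated to operations on 8-bit
--  bitvectors, where the bounds come for free from the bit width
type Byte is mod 2 ** 8;
```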
I recently joined AdaCore as a Technical Account Manager with an initial focus on the UK and Scandinavian regions, but for the last 12 months I've been busy working on AdaCore University. Its most recent addition is a course on Mixed Language Programming with Ada, which includes lectures on the integration of Ada with C, C++ and Java. The course covers some advanced topics like mixed language object orientation, techniques for using Ada's strong typing to combat known issues with C pointers, and the pitfalls encountered when mixing native Ada code with Java at runtime. The course clearly demonstrates that Ada has strong support for integration with C, C++ and Java, and shows there are no technical barriers to its adoption in modern mixed language software systems.
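As a taste of the Ada-with-C material, here is a minimal sketch of binding a C function from Ada; the C function `add` is hypothetical:

```ada
with Interfaces.C; use Interfaces.C;

procedure Call_C is
   --  Binds to a C function assumed to be declared as:
   --    int add (int a, int b);
   function Add (A, B : int) return int
     with Import, Convention => C, External_Name => "add";

   Result : int;
begin
   Result := Add (2, 3);
end Call_C;
```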
As automatic proof is time consuming, it is important that the rework following a change in source code be minimized. GNATprove uses a combination of techniques to ensure this, both for a single user and when working in a team.
The ProofInUse joint laboratory is currently improving the way SPARK deals with modular types and bitwise operators. Until now, the SPARK tool did its best to translate those into equivalent operations on integers. It now uses the native bitvector theories of SMT solvers when available, resulting in much better support and guaranteeing state-of-the-art handling of bitwise operations. We present some examples in this post.
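Here is a small sketch of the kind of code that benefits: with the bitvector encoding, assertions mixing bitwise and arithmetic views of the same modular value become easy for the solvers (the example is mine, not one from the post):

```ada
with Interfaces; use Interfaces;

procedure Bits with SPARK_Mode is
   X : constant Unsigned_32 := 16#F0F0#;
begin
   --  Pure bitwise fact
   pragma Assert ((X and 16#00F0#) = 16#00F0#);

   --  Mixed fact: shifting left by one equals multiplying by two
   --  (no wrap-around here, since X < 2**31)
   pragma Assert (Shift_Left (X, 1) = X * 2);
end Bits;
```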
I've recently written an article (in two parts) over at Electronic Design about applying different methods of verification to the same small piece of code. The code in question is an implementation of binary search, and I applied Testing, Static Analysis (using the AdaCore tool CodePeer) and Formal Verification (using the AdaCore tool SPARK 2014).
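To give the flavor of the SPARK part, here is a condensed sketch of a binary search with contracts; the code and contracts in the article itself may differ, and the "not found" direction of the specification is omitted for brevity:

```ada
package Search_Pkg with SPARK_Mode is

   subtype Index is Positive range 1 .. 1_000;
   type Int_Array is array (Index range <>) of Integer;

   --  Ghost helper: the array is sorted in ascending order
   function Sorted (A : Int_Array) return Boolean is
     (for all I in A'Range =>
        (for all J in A'Range => (if I < J then A (I) <= A (J))))
   with Ghost;

   --  Returns an index where V occurs, or 0 if V is not found
   function Search (A : Int_Array; V : Integer) return Natural
   with
     Pre  => Sorted (A),
     Post => (if Search'Result /= 0
              then Search'Result in A'Range
                   and then A (Search'Result) = V);

end Search_Pkg;

package body Search_Pkg with SPARK_Mode is

   function Search (A : Int_Array; V : Integer) return Natural is
      Lo : Positive := A'First;
      Hi : Natural  := A'Last;
   begin
      while Lo <= Hi loop
         pragma Loop_Invariant (Lo >= A'First and Hi <= A'Last);
         pragma Loop_Variant (Decreases => Hi - Lo);
         declare
            Mid : constant Index := Lo + (Hi - Lo) / 2;
         begin
            if A (Mid) = V then
               return Mid;
            elsif A (Mid) < V then
               Lo := Mid + 1;
            else
               Hi := Mid - 1;
            end if;
         end;
      end loop;
      return 0;  --  not found
   end Search;

end Search_Pkg;
```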
Contracts may be quite complex, as complex as code in fact, so it is not surprising that they sometimes contain errors. GNATprove can help by pinpointing suspicious constructs that, although legal, do not make much sense. These constructs are likely caused by mistakes made by the programmer when writing the contract. In this post, I show examples of incorrect constructs that are signaled by GNATprove.
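One illustrative example (mine, not necessarily from the post): a postcondition on a function that never mentions 'Result is legal but constrains nothing about the returned value, and the toolchain can warn about it:

```ada
function Max (X, Y : Integer) return Integer
with Post => X <= Integer'Last;
--  Always true, and never mentions Max'Result: the function could
--  return anything and still satisfy its contract
```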
I was in Brussels on January 31st to present the components of GNAT GPL 2015: SPARK 2014 and GNAT GPL for ARM bare-board. This is not unrelated to a previous blog entry on Tetris in SPARK on ARM Cortex M4; in particular, I presented that Tetris demo (I brought some boards with me, and despite the simple packaging, none were broken!). The slides contain technical details on the Ravenscar profile (main principles), how to build a program for the stm32f4-discovery board, and how to port the runtime. There are also less technical slides, such as why we chose the stm32f4 board, along with photos of some graphical demos. As this could be useful to anyone interested in Ravenscar or in porting the runtime to other boards or platforms, we've made the slides available here.
While attribute Old allows postconditions to express the value objects had at subprogram entry, this is in general not enough to conveniently express how record and array objects are modified by a procedure. A special attribute Update is defined in SPARK to make it easy to express such properties.
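A minimal sketch of the idiom, with an illustrative record type: the postcondition states that P is its value on entry, except that component X is now zero.

```ada
type Point is record
   X, Y : Integer;
end record;

procedure Reset_X (P : in out Point)
with Post => P = P'Old'Update (X => 0);
--  Reads as: the new P equals the old P, updated so that X = 0;
--  in particular, Y is unchanged
```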
Object Oriented Programming is known for making programs particularly difficult to analyze, because the subprograms being called are not always known statically. The standard for civil avionics certification has recognized this specific problem, and defines a specific verification objective called Local Type Consistency that should be met with one of three strategies. SPARK allows using one of these strategies, by defining the behavior of an overridden subprogram using a special class-wide contract and checking that the behavior of each overriding subprogram is a suitable substitution, following the Liskov Substitution Principle.
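A minimal sketch of such a class-wide contract, with illustrative types:

```ada
package Shapes is

   type Shape is tagged record
      Width, Height : Natural;
   end record;

   --  Class-wide contract: inherited by Area on every descendant
   function Area (S : Shape) return Natural
   with
     Pre'Class  => S.Width <= 10_000 and S.Height <= 10_000,
     Post'Class => Area'Result <= 100_000_000;

   type Square is new Shape with null record;

   --  SPARK checks this overriding against the inherited contract: it
   --  may weaken the precondition or strengthen the postcondition, but
   --  not the reverse (Liskov substitution)
   overriding function Area (S : Square) return Natural;

end Shapes;
```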
Tetris is a well-known game from the '80s, which has since been ported in many versions to every game platform. There are even versions of Tetris written in Ada. But there was no version of Tetris written in SPARK, so we've repaired that injustice. Also, there was no version of Tetris for the Atmel SAM4S ARM processor, another injustice we've repaired.
A common situation when proving properties about a program is that you end up writing additional code whose only purpose is to help proving the original program. If you're careful or lucky enough, the additional code you write will not impact the program being verified, and it will be removed during compilation, so that it does not inflate binary size or waste execution cycles. SPARK provides a way to get these benefits automatically, by marking the corresponding code as ghost code, using the new Ghost aspect.
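A minimal sketch of the aspect, with illustrative names: the ghost variable below exists only for the contract and is stripped by the compiler when assertions are disabled.

```ada
--  In a package spec:
Max_Seen : Natural := 0 with Ghost;

procedure Observe (V : Natural)
with Post => Max_Seen >= V;

--  In the package body:
procedure Observe (V : Natural) is
begin
   if V > Max_Seen then
      Max_Seen := V;  --  ghost assignment: removed from the binary
   end if;
end Observe;
```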
There are cases where expressing the full specification of a package in SPARK is either impossible (for example, if you need to link it to elements of the mathematical world, like trigonometry functions), cumbersome (especially if it requires concepts that cannot easily be described using contracts, like transitivity, counting, or summation), or simply inefficient (for big and complex data structures, like containers). In these cases, a user can directly provide a manually written Why3 translation for an Ada package, using a feature named external axiomatizations. Coming up with this manual translation requires both knowledge of the WhyML language and a minimal understanding of GNATprove's mechanisms, and is therefore reserved for advanced users.
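In the toolset of that era, a package was flagged for this treatment roughly as sketched below (the package itself is illustrative; check the GNATprove documentation for the exact form), with the hand-written WhyML module shipped alongside:

```ada
package Trig is
   pragma Annotate (GNATprove, External_Axiomatization);

   --  No Why3 translation is generated from this spec: the model of
   --  Sin is supplied by a manually written WhyML file instead
   function Sin (X : Float) return Float;
end Trig;
```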
Guiding automatic solvers by adding intermediate assertions is a commonly used technique. We can go further in this direction by adding complete pieces of code that do nothing, generally called ghost code, to guide the automated reasoning. This is an advanced feature, for people willing to manually guide proofs. Still, it is all expressed in SPARK 2014 and thus does not require the user to learn a new language. We explain here how we can achieve inductive proofs on a permutation function.
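The basic building block is the "lemma as ghost procedure" pattern sketched below (the property here is illustrative and far simpler than the permutation proof in the post): calling the procedure at the right spot makes its postcondition available to the provers.

```ada
--  In a package spec:
procedure Lemma_Trans (A, B, C : Integer)
with
  Ghost,
  Pre  => A <= B and B <= C,
  Post => A <= C;

--  In the package body; harder lemmas need real ghost code here,
--  e.g. a loop or a recursive call that provides the induction step:
procedure Lemma_Trans (A, B, C : Integer) is null;
```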
In SPARK 2014, we can write a function F and annotate it with a precondition and a postcondition. In this post, we explain how the SPARK 2014 proof tool handles a call to such a function.
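In short, at each call site the tool checks that the precondition holds and then assumes the postcondition about the result. A minimal sketch (the names are mine):

```ada
function F (X : Integer) return Integer
with
  Pre  => X < Integer'Last,
  Post => F'Result = X + 1;

procedure Caller (Y : in out Integer) is
begin
   if Y < 100 then
      Y := F (Y);               --  check: Y < Integer'Last (from Y < 100)
      pragma Assert (Y <= 100); --  proved by assuming F's postcondition
   end if;
end Caller;
```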
In a previous blog post we described how aspect Global can be used to designate the specific global variables that a subprogram reads and writes. So, by reading the specification of a subprogram annotated with aspect Global, we can see exactly which variables, both local and global, are read and/or written each time the subprogram is called. But this pretty much summarizes the full extent of what the Global aspect alone tells us about the flow of information in a subprogram. To be more precise, at this point we know nothing about the interplay between the inputs and outputs of the subprogram: for all we know, all outputs could be randomly generated and the inputs might not contribute to the calculation of any of the outputs. To improve this situation, SPARK 2014 uses aspect Depends to capture the dependencies between a subprogram's outputs and inputs. This blog post demonstrates through some examples how aspect Depends can be used to facilitate correct flow of information through a subprogram.
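A minimal sketch of the aspect, with illustrative names; flow analysis checks that the stated dependencies are exact, neither more nor less:

```ada
--  In a package spec:
Total : Natural := 0;

procedure Add_To_Total (V : Natural)
with
  Global  => (In_Out => Total),
  Depends => (Total => (Total, V)),  --  or the shorthand: Total =>+ V
  Pre     => Total <= Natural'Last - V;

--  In the package body:
procedure Add_To_Total (V : Natural) is
begin
   Total := Total + V;
end Add_To_Total;
```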
The article explains how we improved the performance of the SPARK 2014 toolset when multiple CPU cores are available for proof.
I recently looked at the string manipulation functions in a library and tried to prove absence of run-time errors in one of them. Although the function and its proof are quite simple in the end, the process of obtaining the correct code and proof was interesting enough to warrant this blog post.
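As a flavor of the exercise, here is a much simpler cousin of such a function, with a contract under which it proves free of run-time errors (the function in the post differs):

```ada
function Index_Of (S : String; C : Character) return Natural
with
  Post => (if Index_Of'Result /= 0
           then Index_Of'Result in S'Range
                and then S (Index_Of'Result) = C);

function Index_Of (S : String; C : Character) return Natural is
begin
   for I in S'Range loop
      if S (I) = C then
         return I;
      end if;
   end loop;
   return 0;  --  C does not occur in S
end Index_Of;
```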
Until now, SPARK only supported array initialization using aggregates, as initializing an array inside a loop raised a false alarm in flow analysis. Read on to learn how the situation has been improved in SPARK 2014.
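The pattern in question is simply this (the types are illustrative); flow analysis used to report that A might not be fully initialized, even though the loop clearly covers every index:

```ada
type Buffer is array (1 .. 10) of Integer;

procedure Init (A : out Buffer) is
begin
   for I in A'Range loop
      A (I) := 0;  --  every component is assigned exactly once
   end loop;
end Init;
```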