The AdaCore Blog: An insight into the AdaCore ecosystem

Coroutines in Ada, a Clean but Heavy Implementation Tue, 06 Dec 2022 12:30:00 -0500 Fabien Chouteau

A few months ago I was reading this article about coroutines in game development and how they are great tools for writing scripts (as in movie scripts) in the same language as the game engine. I had heard about coroutines before then but never really given them a thought, so this piqued my curiosity.

A coroutine is a kind of subprogram that can be paused and then resumed later on.

Let’s take a look at an example in Python:

>>> def my_coroutine():
...     print("step 1")
...     yield
...     print("step 2")
...     yield
...     print("step 3")
>>> coro = my_coroutine()
>>> next(coro)
step 1
>>> next(coro)
step 2
>>> next(coro)
step 3

As you can see the execution of the function “my_coroutine” is paused and resumed at “yield”.

Lightweight, fast, easy-to-use coroutines usually require some kind of language, compiler, or run-time support, because a second call stack is needed, as well as context switching at the right places.

In the article, Emil Ernerfeldt presents an implementation of C++ coroutines based only on cooperative threads. The coroutine is executed in a separate thread (“inner”) which communicates with the “outer” thread so that only one is executing at a time. For instance, the equivalent of Python’s “yield” is the inner thread resuming the outer thread and suspending itself.
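The same trick can be sketched in Python, with two threading.Event objects serving as the baton passed between the inner and outer threads. This is only a minimal illustration of the technique, not the article's C++ code; all names here are mine:

```python
import threading

class ThreadCoroutine:
    """Coroutine built on a cooperative thread: the inner (coroutine) thread
    and the outer (calling) thread pass a baton so only one runs at a time."""

    def __init__(self, body):
        self._inner_turn = threading.Event()  # set => inner thread may run
        self._outer_turn = threading.Event()  # set => outer thread may run
        self._done = False
        self._thread = threading.Thread(
            target=self._run, args=(body,), daemon=True)
        self._thread.start()

    def _run(self, body):
        self._inner_turn.wait()   # suspended until the first poll()
        body(self._yield)         # the body calls its argument to suspend
        self._done = True
        self._outer_turn.set()    # hand control back one last time

    def _yield(self):
        # Equivalent of Python's "yield": resume the outer thread and
        # suspend the inner one.
        self._inner_turn.clear()
        self._outer_turn.set()
        self._inner_turn.wait()

    def poll(self):
        # Resume the inner thread and wait until it yields or finishes.
        # Must not be called again once it has returned False.
        self._inner_turn.set()
        self._outer_turn.wait()
        self._outer_turn.clear()
        return not self._done

# The three-step example from above, driven to completion:
log = []

def body(yield_):
    log.append("Step 1"); yield_()
    log.append("Step 2"); yield_()
    log.append("Step 3")

coro = ThreadCoroutine(body)
while coro.poll():
    pass
```

Both threads exist at the same time, but the two events guarantee that exactly one of them is runnable at any moment, which is the whole point of the scheme.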

This way of seeing coroutines as collaborative threads is actually very close to the original definition provided by Donald Knuth in The Art Of Computer Programming (Volume 1):

Subroutines are special cases of more general program components, called coroutines. In contrast to the unsymmetric relationship between a main routine and a subroutine, there is complete symmetry between coroutines, which call on each other.

My immediate thought was that it should be fairly easy to do the same with Ada tasks, and I was quite right about that. But it was only after I implemented a somewhat over-engineered solution that my colleague Ben Brosgol made me realize there is an even cleaner way to implement coroutines in Ada.

So I will first cover the simplest and cleanest solution, and then the over-engineered one.

Coroutines with Rendezvous

The easiest way to implement coroutines in Ada is to use a form of task synchronization called rendezvous. If you first want to know more about rendezvous, you can have a look at the dedicated chapter of the Ada learning resources.

Let’s define a coroutine “inner” task with an entry:

task type My_Coroutine is
   entry Continue;
end My_Coroutine;

task body My_Coroutine is
begin
   accept Continue;

   Ada.Text_IO.Put_Line ("Step 1");

   accept Continue;

   Ada.Text_IO.Put_Line ("Step 2");

   accept Continue;

   Ada.Text_IO.Put_Line ("Step 3");
end My_Coroutine;

The task will run until it reaches an accept statement and then waits for some other task to synchronize with it, and so on.

On the “outer” side we instantiate the coroutine and call the "Continue" entry three times:

declare
   Coro : My_Coroutine;
begin
   Coro.Continue;
   Coro.Continue;
   Coro.Continue;
end;

This code will produce the following output:

Step 1
Step 2
Step 3

As you can see it is quite simple.

Now we can even go further and call the entry in a loop using the “'Callable” attribute:

declare
   Coro : My_Coroutine;
begin
   while Coro'Callable loop
      Ada.Text_IO.Put_Line ("Continue...");
      Coro.Continue;
   end loop;
end;

This code will produce output along these lines (the exact interleaving depends on task scheduling):

Continue...
Step 1
Continue...
Continue...
Step 2
Continue...
Step 3

This is not exactly what we expected, right? Why are there two consecutive “Continue” lines? Unlike in the thread-switching scheme described earlier, the outer task is not instantly suspended when the coroutine task is released, so it loops and prints a second “Continue” line before being suspended again.

Going One Step Further with Task Interfaces

Ada provides task polymorphism through task interfaces. We can define what it means for a task to be a coroutine task like so:

type Coroutine is task interface;
procedure Continue (This : Coroutine) is abstract;

And a task that implements this interface:

task type My_Coroutine is new Coroutine with
   entry Continue;
end My_Coroutine;

We can also define an access type for any task that implements the interface:

type Any_Coroutine_Access is access all Coroutine'Class;

The point here is that we can now manipulate a pool of different coroutines in a unified way. Going back to the video game example from the beginning, we might have several coroutine scripts running at the same time in a game engine.

We need a container to hold our accesses to coroutines:

package Coroutine_List
is new Ada.Containers.Doubly_Linked_Lists (Any_Coroutine_Access);

And a couple of subprograms to manipulate the container:

procedure Insert (This : in out Coroutine_List.List;
                  C    : not null Any_Coroutine_Access)
is
begin
   if not This.Contains (C) then
      This.Append (C);
   end if;
end Insert;

procedure Remove (This : in out Coroutine_List.List;
                  C    : not null Any_Coroutine_Access)
is
   Cur : Coroutine_List.Cursor;
begin
   if This.Contains (C) then
      Cur := This.Find (C);
      This.Delete (Cur);
   end if;
end Remove;

procedure Poll (This : in out Coroutine_List.List) is
begin
   for Elt of This loop
      if Elt.all'Callable then
         Elt.Continue;
      end if;
   end loop;
end Poll;

Note that in the Poll procedure here, all the coroutines might run in parallel as the outer task is not waiting for a previously released coroutine to suspend before releasing the next one.
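The same pool bookkeeping can be mimicked with plain Python generators standing in for tasks. Unlike the Ada version, everything here runs sequentially in one thread; the names are illustrative, not part of any library:

```python
def insert(pool, coro):
    # Add a coroutine to the pool unless it is already there.
    if coro not in pool:
        pool.append(coro)

def remove(pool, coro):
    if coro in pool:
        pool.remove(coro)

def poll(pool):
    # Advance every live coroutine by one step; drop the finished ones.
    for coro in list(pool):  # iterate over a copy: remove() mutates pool
        try:
            next(coro)
        except StopIteration:
            remove(pool, coro)

# Two "scripts" of different lengths sharing the same pool:
def script(log, name, steps):
    for n in range(steps):
        log.append(f"{name} step {n + 1}")
        yield

log = []
pool = []
insert(pool, script(log, "A", 1))
insert(pool, script(log, "B", 2))
while pool:
    poll(pool)
```

Each call to poll gives every remaining script one step, so the pool drains as scripts of different lengths finish at different times.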

What About Generators?

Generators are a special kind of coroutine that can produce values. We can make generators using the same idea of collaborating Ada tasks and entries. Let’s see another little example:

task type My_Generator (From, To : Integer) is
   entry Next (Value : out Integer);
end My_Generator;

task body My_Generator is
begin
   for X in From .. To loop
      accept Next (Value : out Integer) do
         Value := X;
      end Next;
   end loop;
end My_Generator;

And then use it like so:

declare
   Gen : My_Generator (1, 10);
   Val : Integer;
begin
   while Gen'Callable loop
      Gen.Next (Val);
      Ada.Text_IO.Put_Line ("Task Gen:" & Val'Img);
   end loop;
end;
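For comparison, a minimal Python analogue of the same generator is just a function containing yield:

```python
def my_generator(first, last):
    # Produce the integers first .. last, one value per resumption.
    for x in range(first, last + 1):
        yield x

values = list(my_generator(1, 10))
```

Each resumption of the generator plays the role of one accepted Next rendezvous in the Ada task above.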

The (Maybe) Over-Engineered Task Based Coroutines

As I said in the introduction, my first approach was not based on task rendezvous. I implemented an Ada library that provides coroutine features using tasks and suspension objects. It is available as the “task_coroutines” crate in the Alire ecosystem.

The implementation is more involved than the rendezvous-based one presented above. However, this is all under the hood, and using the library is quite simple. The behavior is also closer to what one would expect, with only one of the collaborating tasks executing at a given time. Let's see how to use it.

First we define our Coroutine procedure:

procedure My_Coroutine (Ctrl : in out Coroutine.Inner_Control'Class) is
begin
   Ada.Text_IO.Put_Line ("Step 1");
   Ctrl.Yield;
   Ada.Text_IO.Put_Line ("Step 2");
   Ctrl.Yield;
   Ada.Text_IO.Put_Line ("Step 3");
end My_Coroutine;

As you can see, it takes an "Inner_Control" object as an argument. This object provides the interfaces to synchronize with the outer task. "Ctrl.Yield" is the equivalent of the "yield" keyword in Python. When the coroutine calls "Ctrl.Yield", its execution is suspended and the outer task execution resumed.

Speaking of the outer task, here is how to create and run the coroutine:

declare
   C : aliased Coroutine.Instance; -- Create a coroutine object
begin
   Ada.Text_IO.Put_Line ("Start the Coroutine");
   C.Start (My_Coroutine'Access); -- Start the coroutine

   --  At this point the coroutine has executed up to the first call to
   --  Ctrl.Yield
   loop
      Ada.Text_IO.Put_Line ("Poll");
      C.Poll; -- Let the coroutine execute again, like next(coro) in Python
      exit when C.Done; -- Stop when the coroutine has run to completion
   end loop;
end;

This code will produce the following output:

Start the Coroutine
Step 1
Poll
Step 2
Poll
Step 3


I also implemented generators in the same way. The package is generic; it has to be instantiated with the type of the values we want to generate:

package Int_Generator is new Generator (Integer);

The definition of the generator subprogram goes like this:

procedure Gen_Positive (Ctrl : in out Int_Generator.Inner_Control'Class) is
begin
   for X in 1 .. 5 loop
      Ctrl.Yield (X); -- Yield a value
   end loop;
end Gen_Positive;

We have the same concept with the "Inner_Control" type and its "Yield" subprogram, except in this case it takes an argument. This is the value that is returned to the outer task.

The "Int_Generator.Instance" type implements the iterator interface, so we can use it in a “for of” loop like so:

declare
   G : Int_Generator.Instance;
begin
   G.Start (Gen_Positive'Access);
   for Elt of G loop
      Ada.Text_IO.Put_Line ("Gen returned: " & Elt'Img);
   end loop;
end;

This code will produce the following output:

Gen returned: 1
Gen returned: 2
Gen returned: 3
Gen returned: 4
Gen returned: 5

Pros and Cons

I presented several solutions for coroutines and generators in this post. The common advantage across all of them is that they rely on standard Ada features, and therefore should be portable on any platform with Ada tasking available. On the other hand, spawning a task is a relatively costly operation both in terms of time and memory usage, so these coroutines are not lightweight.


A few years ago, my colleague Pierre-Marie de Rodat made a prototype implementation of coroutines and generators based on the Portable Coroutine Library (PCL). It might provide better performance, at the cost of relying on an external dependency. The project's sources are publicly available.

What about Ravenscar and embedded?

The rendezvous based approach is not compatible with the restrictions of the Ravenscar profile, primarily because task entries are not allowed. And the current implementation of the “task_coroutines” library relies on features that are not available in the Ravenscar profile (entry select), so it is also incompatible as is. I think it could be doable with a different API, but then there is the question of the usefulness of this kind of coroutines when tasks cannot be created dynamically at run-time.

At some point I would like to investigate another approach to coroutines for embedded systems, like the one presented here for the GameBoy Advance. Dealing with the GNAT secondary stack might be a problem though, we’ll see.

New Learn Course: Introduction To Embedded Systems Programming Mon, 28 Nov 2022 00:00:00 -0500 Pat Rogers

A new online Learn course has been published offering an Introduction To Embedded Systems Programming.

The course is based directly on decades of embedded systems development experience using Ada. Although an introduction, it is both in-depth and extensive, with numerous code examples and real-world issues addressed.

In the course, we dedicate a lot of time to low-level programming, such as how to specify the layout of types, how to map variables of those types to specific addresses, when and how to do unchecked programming (and how not to), and how to determine the validity of incoming data. Ada has considerable support for this activity so there is much to explore.

Likewise, we cover development using Ada in combination with other languages, a not uncommon approach today. Specifically, we show how to interface with code and data written in other languages, and how (and why) to work with assembly language. Development in just one language is becoming less common over time so these are important aspects to know.

One of the more distinctive activities of embedded programming involves interacting with the outside world via embedded devices, such as A/D converters, timers, actuators, sensors, and so forth. (This can be one of the more entertaining activities as well.) We cover how to interact with these memory-mapped devices using specifications for their representation, data structures that simplify the functional code, and time-honored aspects of software engineering, including abstract data types.

Finally, we explore how to handle interrupts in Ada, another distinctive part of embedded systems programming. We explain the canonical interrupt handling model, that model's correspondence to the model described by the Ada standard, and how Ada provides direct support for the resulting functionality. As we explain, Ada has extensive support for handling interrupts, using the same building blocks used in concurrent programming. These constructs provide a way to handle interrupts that is as portable as possible, in what is otherwise a very hardware-specific endeavor. Two primary idioms, compatible with Ravenscar, are included.

Tis the Season to be Giving falalalala lalalala Thu, 24 Nov 2022 05:27:00 -0500 Fabien Chouteau

Every year since 2015, a team of dedicated individuals led by Eric Wastl organizes an online programming challenge called: Advent of Code. The concept is simple yet brilliant: from December 1st to 25th, every day a new small programming exercise is published on the website. Participants get points for each completed exercise.

This event is great for starting to learn a new language, as the exercises are small and simple, at least for the first few days, and can be completed using any programming language. It is gaining traction every year, with more and more people implementing the challenges in Ada/SPARK.

This year we want to join the fun, and bring a little bit of extra motivation for a good cause. For each person completing one of the Advent of Code challenges using the Ada programming language, AdaCore will donate $10 to the Ada Developers Academy, up to a total of $5,000. And for those willing to go the extra mile, AdaCore will donate $20 if the solution is implemented in SPARK with at least a proof of absence of run-time errors (a.k.a. Silver level).

How to participate?

A special thread on the forum was created for anyone to register their solutions. Once you have completed an exercise, head over to that thread and post a message with a link to the sources of your solution using the following format:

[<pseudonym>][<day>][<Ada or SPARK>]<link to solution source code>

For instance, if I solve the 3rd day problem using Ada:


And the 5th day using SPARK:


If you prefer, you can just have one post and edit it every time you solve a new problem.

  • You don’t have to solve all the 25 problems

  • You don’t have to solve the problems on the day they are submitted

  • You can switch back and forth between Ada and SPARK

  • You don’t have to be a beginner in Ada/SPARK. Advent of Code is great for learning new languages but it’s also fun for experienced programmers

In early January, we will count all the submissions and donate the corresponding amount to the Ada Developers Academy.

Happy hacking season!

NVIDIA Security Team: “What if we just stopped using C?” Mon, 07 Nov 2022 05:17:00 -0500 Fabien Chouteau

Today I want to share a great story about why many NVIDIA products are now running formally verified SPARK code. This blog post is in part a teaser for the case study that NVIDIA and AdaCore published today.

Our journey begins with the NVIDIA Security Team. Like many other security-oriented teams in our industry today, they were looking for a measurable answer to the increasingly hostile cybersecurity environment and started questioning their software development and verification strategies.

“Testing security is pretty much impossible. It’s hard to know if you’re ever done,” said Daniel Rohrer, VP of Software Security at NVIDIA.

In my opinion, this is the most important point of the case study - that test-oriented software verification simply doesn’t work for security. Once you come out of the costly process of thoroughly testing your software, you can have a metric on the quality of the features that you provide to the users, but there’s not much you can say about security.

Rohrer continues, “We wanted to emphasize provability over testing as a preferred verification method.” Fortunately, it is possible to prove mathematically that your code behaves in precise accordance with its specification. This process is known as formal verification, and it is the fundamental paradigm shift that made NVIDIA investigate SPARK, the industry-ready solution for software formal verification.

Back in 2018, a Proof-of-Concept (POC) exercise was conducted. Two low-level security-sensitive applications were converted from C to SPARK in only three months. After an evaluation of the return on investment, the team concluded that even with the new technology ramp-up (training, experimentation, discovery of new tools, etc.), the gains in application security and verification efficiency offered an attractive trade-off. They realized major improvements in the security robustness of both applications (see NVIDIA's Offensive Security Research DEF CON talk for more information on the results of the evaluation).

As the results of the POC validated the initial strategy, the use of SPARK spread rapidly within NVIDIA. There are now over fifty developers trained and numerous components implemented in SPARK, and many NVIDIA products are now shipping with SPARK components.

I encourage everyone to read the full case study, which covers some important topics that should be very interesting for others questioning their own cybersecurity strategies, such as:

  • Performance compared to C: “I did not see any performance difference at all.”

  • Overcoming skepticism: “others who ... were initially detractors but have subsequently become champions”

  • Impact on audits: “Hey, look, we’ve got this tool. We were able to prove these properties, let’s focus on other areas of security.”

  • Customer relationships: “we didn’t just run some bug-checking hunting tool, we formally verified it—that’s huge”

Avoiding Vulnerabilities in Crypto Code with SPARK Fri, 04 Nov 2022 08:52:00 -0400 Daniel King

Writing secure software in C is hard. It takes just one missed edge case to lead to a serious security vulnerability, and finding such edge cases is difficult. This blog post discusses a recent vulnerability in a popular SHA-3 library and how the same problems were avoided in my own SHA-3 library written in SPARK.

The Vulnerability

The vulnerability is a buffer overflow in the eXtended Keccak Code Package (XKCP), recently discovered by Nicky Mouha and assigned CVE-2022-37454. The XKCP is a reference implementation of SHA-3 by its designers, written in C with some assembly for several platforms. It is used by several other projects such as the Python and PHP scripting languages, which are also impacted by the vulnerability.

Let's take a look into how the vulnerability occurs. Nicky's announcement provides a simple example of triggering the buffer overflow from some Python code:

import hashlib
h = hashlib.sha3_224()
h.update(b"\x00" * 1)
h.update(b"\x00" * 4294967295)
The overflow happens in XKCP's block-absorbing code, which computes the size of the next partial block in 32-bit unsigned arithmetic:

partialBlock = (unsigned int)(dataByteLen - i);
if (partialBlock + instance->byteIOIndex > rateInBytes)
    partialBlock = rateInBytes - instance->byteIOIndex;
SnP_AddBytes(instance->state, curData, instance->byteIOIndex, partialBlock);
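The root cause, as described in Nicky Mouha's analysis, is 32-bit unsigned overflow in that bound check: after one byte has been absorbed, an update of 2**32 - 1 bytes makes partialBlock + byteIOIndex wrap around to zero, the clamp is skipped, and the oversized partialBlock reaches SnP_AddBytes. The arithmetic is easy to reproduce in Python:

```python
MOD = 2**32                 # unsigned int arithmetic is modulo 2**32

byte_io_index = 1           # one byte absorbed by update(b"\x00" * 1)
data_byte_len = 4294967295  # the second update is 2**32 - 1 bytes
i = 0
rate_in_bytes = 144         # SHA3-224 rate (1152 bits), for illustration

partial_block = (data_byte_len - i) % MOD       # 4294967295
check = (partial_block + byte_io_index) % MOD   # wraps around to 0

# The guard "partialBlock + instance->byteIOIndex > rateInBytes" sees 0,
# which is not greater than 144, so partial_block is NOT clamped and the
# oversized length is handed to SnP_AddBytes.
assert check == 0
assert not (check > rate_in_bytes)
```

In a language with wrap-on-overflow unsigned arithmetic this is a silent logic error; in SPARK the equivalent computation would have to be proved free of overflow before the code could pass verification.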
In my SPARK library, the corresponding absorbing loop looks like this, with loop invariants and a loop variant that let the prover check the index computations:

--  Process complete blocks
while Remaining_Bits >= Initial_Bit_Rate loop
   pragma Loop_Invariant (Offset + Remaining_Bytes = Initial_Data_Len
                           and Remaining_Bytes <= Data'Length
                           and Remaining_Bytes = (Remaining_Bits + 7) / 8
                           and (Bit_Length mod 8) = (Remaining_Bits mod 8)
                           and Ctx.Curr_State = Absorbing
                           and Rate_Of (Ctx) = Initial_Bit_Rate);
   pragma Loop_Variant (Decreases => Remaining_Bytes,
                        Decreases => Remaining_Bits,
                        Increases => Offset);

   XOR_Bits_Into_State (Ctx.State,
                        Data (Data'First + Offset ..
                              Data'First + Offset + (Ctx.Rate - 1)));
   Permute (Ctx.State);

   Offset          := Offset          + Ctx.Rate;
   Remaining_Bytes := Remaining_Bytes - Ctx.Rate;
   Remaining_Bits  := Remaining_Bits  - Initial_Bit_Rate;
end loop;

--  No more complete blocks. Store the leftovers
if Remaining_Bits > 0 then
   Ctx.Block (0 .. Remaining_Bytes - 1)
      := Data (Data'First + Offset ..
                  Data'First + (Offset + (Remaining_Bytes - 1)));
end if;
GNATprove's analysis summary for the library shows no unproved checks:

SPARK Analysis results        Total          Flow   CodePeer                                           Provers   Justified   Unproved
Data Dependencies               899           899          .                                                 .           .          .
Flow Dependencies               371           371          .                                                 .           .          .
Initialization                 5069          4991          .                                                 .          78          .
Non-Aliasing                    232           232          .                                                 .           .          .
Run-time Checks               11423             .          .               11423 (CVC4 99%, Z3 1%, altergo 0%)           .          .
Assertions                     1405             .          .    1405 (CVC4 96%, Trivial 1%, Z3 2%, altergo 0%)           .          .
Functional Contracts           2176             .          .                           2176 (CVC4 100%, Z3 0%)           .          .
LSP Verification                  .             .          .                                                 .           .          .
Termination                     114             .          .                                        114 (CVC4)           .          .
Concurrency                       .             .          .                                                 .           .          .
Total                         21689    6493 (30%)          .                                       15118 (70%)     78 (0%)          .
Adding Ada to Rust Tue, 25 Oct 2022 07:29:00 -0400 Johannes Kliemann

In a build script, Cargo can be told to link against a GNAT-built dynamic library:

println!("cargo:rustc-link-lib=dylib=mycustomadalib");

and gprbuild can be invoked with its usual switches to build that library:

    .args(["-j0", "-p", "-P", "mycustomadalib.gpr"])
The gpr crate can load a GNAT project file from Rust:

use gpr;

let project = gpr::Project::load(Path::new("/path/to/project.gpr")).unwrap();


The example's Cargo.toml declares the package and its dependency on the gpr crate:

[package]
name = "ada_hello"
version = "0.1.0"
edition = "2021"

[dependencies]
gpr = "0.1.0"
The GNAT project file builds the Ada sources as an encapsulated standalone library:

project Ada_Hello is

   for Source_Dirs use ("src");
   for Object_Dir use "obj";
   for Create_Missing_Dirs use "True";
   for Library_Name use "adahello";
   for Library_Kind use "dynamic";
   for Library_Standalone use "encapsulated";
   for Library_Interface use ("ada_hello");
   for Library_Dir use "lib";

end Ada_Hello;

The Ada package exports a Hello procedure with C convention under the name "ada_hello":

package Ada_Hello is

   procedure Hello with
      Convention => C,
      External_Name => "ada_hello";

end Ada_Hello;

with Ada.Text_IO;

package body Ada_Hello is

   procedure Hello is
   begin
      Ada.Text_IO.Put_Line ("Hello from Ada!");
   end Hello;

end Ada_Hello;
On the Rust side, a build script loads the GNAT project (the build and link-directive steps are elided here):

use gpr::Project;
use std::{path::Path, process::Command};

fn main() {
    let ada_hello = Project::load(Path::new("ada_hello/ada_hello.gpr")).unwrap();
    //  ... build the project and emit the cargo link directives ...
}

The Rust main program then declares the exported Ada symbol and calls it:

extern "C" {
    fn ada_hello();
}

fn main() {
    println!("Hello from Rust!");
    unsafe {
        ada_hello();
    }
}
Running the example, with the Ada library's directory on the loader path:

LD_LIBRARY_PATH=ada_hello/lib cargo run
   Compiling gpr v0.1.0 (/.../gpr-rust)
   Compiling ada_hello v0.1.0 (/.../gpr-rust/examples/ada_hello)
    Finished dev [unoptimized + debuginfo] target(s) in 6.02s
     Running `target/debug/ada_hello`
Hello from Rust!
Hello from Ada!
When Formal Verification with SPARK is the Strongest Link Wed, 12 Oct 2022 10:06:00 -0400 Yannick Moy

Security is only as strong as its weakest link. That's important to keep in mind for software security, with its long chain of links, from design to development to deployment. Last year, members of NVIDIA's Offensive Security Research team (aka "red team") presented at DEF CON 29 their results on the evaluation of the security of a firmware written in SPARK and running on RISC-V. They ended up not finding vulnerabilities in the code but in the RISC-V ISA instead. This year, the same team presented at DEF CON 30 a retrospective on the security evaluation of 17 high-impact projects since 2020. TL;DR: using SPARK makes a big difference for security, compared to using C/C++.

The security researchers start by stating a well-known fact (which they support with reports on 12 years of security bugs at Microsoft and 5 years of security bugs from the Google Chrome team): 70% of all security bugs are related to memory safety. That's the kind of bug you won't see in SPARK code, as they are either prevented by the design of SPARK or detected by formal verification.

Then, at 15:39 in the recording, Zabrocki asks the most interesting question: do they see benefits in using SPARK, compared to similar developments in C/C++? He then goes on to compare the results on 3 projects in SPARK and 6 projects in C/C++, and the results are clear. As he says: there is a "huge difference". Not only do they detect fewer security bugs in SPARK projects, but they detect "deeper" bugs related to software design and hardware-software interfaces, as they do not need to spend time looking for "simple" bugs like memory safety ones.

If you want to peek at what are the "deeper" bugs that they detected in SPARK projects, jump to 22:38 of the recording. I personally enjoyed it! And I'm both amazed and grateful that NVIDIA shows such openness in disclosing the kind of bugs that escaped all their regular development and verification activities.

If you have only 2 minutes, just jump to the conclusion at 39:00. There should be some inspiration for you if you're facing similar security issues in your software products.

New features for string literals and comments in GNAT Studio Fri, 23 Sep 2022 09:05:00 -0400 Andry Ogorodnik
We have added several new convenience features that help work with comments and string literals in Ada code.

You can now use Shift-Enter to continue writing a multi-line comment, without having to type '-- ' at the start of the next line. The same key combination can be used to continue a string literal on the following line.

Sometimes you may need to insert a string variable inside a string literal. You can use the Shift-Space shortcut to split the string literal at the cursor position, and then simply type the variable name. And if you have copied the variable name into the clipboard, you can use the Shift-Control-V shortcut to insert it into the string literal at the cursor position; the splitting and quoting of the string literal is automated for you.
These new features will be available in GNAT Studio 23.0. We hope that these will help save time in your day-to-day work.
Fuzzing Out Bugs in Safety-Critical Embedded Software Fri, 02 Sep 2022 09:28:00 -0400 Paul Butcher

Software testing is inherently multifaceted. However, the recommended approach is not to pick and choose a single tool. Instead, modern-day safety and security critical verification testing guidelines propose that campaigns should incorporate multiple strategies. The icing on the cake is to leverage the results of one tool to inject as an input into another. The cherry on top is to construct an automated cyclic toolchain where multiple tools complement and feed into one another. In addition, by adding an automated test case generation aspect into the mix we can help ensure the campaigns remain dynamic which encourages growth in the test suite across the life of a program, from development to deployment and eventual decommissioning.

Unit and Fuzz testing are complementary technologies that very much fit the bill.

I spoke with Brandon Lewis from Embedded Computing Design about fuzz testing and the added assurance benefits of chaining Unit and Fuzz testing campaigns.

GNAT DAS: GNATcoverage | GNATtest | GNATfuzz

To learn more about the Embedded Toolbox series of podcasts, see here.

Embedded Ada/SPARK, There's a Shortcut Thu, 25 Aug 2022 04:42:00 -0400 Fabien Chouteau

For years in this blog my colleagues and I have published examples, demos, and how-to’s on Ada/SPARK embedded (as in bare-metal) development.

Most of the time, if not always, we focused on one way of doing things: to start from scratch and write everything in Ada/SPARK, from the low level drivers to the application.

There are good reasons for that. First, we are passionate about the Ada language and we truly believe that it is the best solution for embedded systems, from low level drivers to high level functional code. Second, this is a common situation for our customers in the safety critical domain where the entire software stack has to be reliable, trusted, and certified.

That being said, while this way of doing Ada/SPARK embedded will yield the best results in terms of software quality, it might not be the most efficient in all cases. In this blog post I want to present an alternative method to introduce Ada/SPARK into your embedded development projects.

Low level vendor drivers

The alternative is to introduce Ada/SPARK on the high level functional part of the application first, and rely on the drivers provided by the hardware vendors to quickly get the project rolling.

Virtually every piece of hardware you can get your hands on comes with a Software Development Kit (SDK) of low level drivers written in C. The quality of the SDK may vary from vendor to vendor. It’s not uncommon even for C projects to re-write those drivers from scratch, but most of the time SDKs will get you started easily and quickly.

Establish a good Hardware Abstraction Layer (HAL)

The most important thing here is to define the boundary between the low-level drivers from the SDK and the application code in Ada/SPARK. This is usually called the Hardware Abstraction Layer (HAL). Doing so is actually good practice in general, whether or not you are mixing Ada and C. Having a good hardware abstraction will also limit the cost of migrating your project to a different hardware platform.

This task is not trivial at all, so let's take an example. If I want my application to log data on an SD card, where should the abstraction go?

  1. A subprogram that takes a piece of data and the HAL will do all the work of opening, writing, and closing the file?

  2. A POSIX-like file system API that the application will use to open, write, and close the file?

  3. An API to read/write the SD card memory blocks and use an Ada/SPARK file system implementation?

There is no right or wrong answer here, but since we are exploring the combination of vendor SDK and Ada I would recommend starting by using as many features as possible from the SDK. It’s always possible to “lower” the abstraction level later, and replace SDK features with Ada/SPARK implementations. So for the example above I would say either option 1 if there is only one kind of access to the SD card file system, or option 2 if there are more uses of the file system throughout the application.
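To make the trade-off concrete, here is a small language-neutral sketch in Python of options 1 and 2, and of how option 1 can later be re-implemented on top of option 2. All names are hypothetical, purely for illustration:

```python
from abc import ABC, abstractmethod

class LogHAL(ABC):
    """Option 1: the HAL does all the work of opening, writing, closing."""
    @abstractmethod
    def log(self, data: str) -> None: ...

class FileHAL(ABC):
    """Option 2: a POSIX-like file API used directly by the application."""
    @abstractmethod
    def open(self, path: str): ...
    @abstractmethod
    def write(self, handle, data: str) -> None: ...
    @abstractmethod
    def close(self, handle) -> None: ...

class LogOnFiles(LogHAL):
    """'Lowering' the abstraction later: option 1 expressed on top of
    option 2, so the underlying file system can be swapped out."""
    def __init__(self, fs: FileHAL, path: str):
        self._fs, self._path = fs, path

    def log(self, data: str) -> None:
        handle = self._fs.open(self._path)
        self._fs.write(handle, data)
        self._fs.close(handle)

# An in-memory stand-in for the SDK file system, to exercise the layering:
class MemFS(FileHAL):
    def __init__(self):
        self.files = {}
    def open(self, path):
        self.files.setdefault(path, [])
        return path
    def write(self, handle, data):
        self.files[handle].append(data)
    def close(self, handle):
        pass

fs = MemFS()
LogOnFiles(fs, "log.txt").log("hello")
```

The application only ever sees LogHAL; whether its implementation sits directly on SDK calls or on a lower file-system abstraction can change later without touching functional code.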

Who is in charge here?

Another important design choice is to decide which part of the system will be the main unit.

You can have a main application loop in C that calls Ada/SPARK code from time to time to provide specific features. Or the other way around: an Ada/SPARK main loop that calls the C SDK from time to time.

If you want to use an RTOS written in C, it might be easier to have the cyclic tasks implemented in C and call Ada/SPARK functions from those tasks. Again there is no right or wrong, the answer will depend on your application and the features available in the SDK or RTOS.

Note that in case of a multitask/multithreaded application you have to be careful with the handling of the GNAT secondary stack. You will have to provide an implementation of the __gnat_get_secondary_stack function that returns a different stack pointer for each thread.

Integrating Ada/SPARK and C

In this part I will not go over the details of interfacing Ada/SPARK and C. I recommend having a look at the section on this topic: Interfacing With C.

Instead we will look at the integration on the toolchain/compilation side of things. The easiest way to go here is to bundle all the Ada/SPARK code into one or several static libraries that will then be linked to the C application built with the SDK. The Ada run-time has to be linked as well.

To make an Ada/SPARK static library only a couple of attributes are required in the GPR file.

First we specify the library name and kind:

for Library_Name use "my_library";
for Library_Kind use "static";

Then a library interface:

for Library_Interface use ("ada_code");

This attribute is very important, as the binder will use it to work out the elaboration order and produce a function to run the elaboration code. The value "ada_code" is the name of an Ada package, discussed below.

The elaboration function will be called "<Library_Name>init", and the C code must execute this function before calling any Ada/SPARK code.
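That call order can be sketched on the C side as follows. With Library_Name set to "my_library" the routine is my_libraryinit, per the convention above; ada_compute is a hypothetical exported Ada function, and both are stubbed in C here only so the sketch runs standalone.

```c
/* In the real build, my_libraryinit comes from the binder and
   ada_compute from the Ada static library; these stubs stand in. */
static int elaborated = 0;

void my_libraryinit(void)            /* binder-generated in reality */
{
    elaborated = 1;
}

int ada_compute(int x)               /* hypothetical Ada export */
{
    return elaborated ? x + 1 : -1;  /* calling before elaboration is a bug */
}

int start_ada_side(void)
{
    my_libraryinit();                /* elaboration first...          */
    return ada_compute(41);          /* ...then Ada code may be called */
}
```

Calling an Ada subprogram before elaboration means package-level initialization has not run, so the stub's error path mirrors a real, and hard to debug, failure mode.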

Finally, linking the Ada/SPARK static library with the C application will of course depend on the build system used in the SDK, whether it’s Makefiles, CMake, or anything else. The SDK documentation likely provides instructions for how to do this.

A Concrete Example

If you are a regular reader of our blog you know that we like to show examples of the solutions we present. So far this blog has been very theoretical, so let’s dive into actual code.

For this example we will use an ESP32-C3 dev-kit. The ESP32-C3 is a 32-bit RISC-V microcontroller with a Wi-Fi and Bluetooth 5 controller. The application will connect to Wi-Fi and run an https server. When a page is requested, the Ada code will provide the HTML content and call a C function to change the color of the on-board LED.

Ada Project Setup

We will use Alire to set up the Ada project. It is not the only way, but the tool will generate a few files for us. We first run the init command with the --lib switch to create a library project:

$ alr init --lib ada_code

There is a GPR scenario variable to control the kind of library this will produce, the default is set to static so we don’t have to touch anything here.

We do have to set the Target, Runtime, and Library_Interface attributes:

for Target use "riscv64-elf";
for Runtime ("Ada") use "zfp-rv32imac";
for Library_Interface use ("ada_code");

C Project Setup

For the C side we just follow the getting started instructions from the ESP32-C3 documentation.

Instead of using the hello_world example, we copy the https_server one: $IDF_PATH/examples/protocols/https_server

The Hardware Abstraction Layer

For this example we define the HAL as such:

  • The Ada code provides a function that fills a memory buffer with the content of an HTML page.

  • The C code provides a function to set the color of the on-board LED.

Here are the specifications for the Ada side:

with System;
with Interfaces;

package Ada_Code is

   -- API exported to the C code --

   function Generate_Page (Buffer     : System.Address;
                           Buffer_Len : Interfaces.Unsigned_32)
                           return Interfaces.Unsigned_32;
   pragma Export (C, Generate_Page, "generate_page");

   -- API imported from C code --

   type LED_Color is (Off, Red, Green, Blue);
   for LED_Color use (Off => 0, Red => 1, Green => 2, Blue => 3);

   procedure Set_LED_Color (Color : LED_Color);
   pragma Import (C, Set_LED_Color, "set_led_color");

end Ada_Code;

And for the C side:


#include <stdint.h>

/* API exported to Ada/SPARK */

enum LEDColor {OFF = 0, RED = 1, GREEN = 2, BLUE = 3};

void set_led_color(enum LEDColor color);

void __gnat_last_chance_handler(void);

/* API imported from Ada/SPARK */

extern uint32_t generate_page(void *page_buffer, uint32_t buffer_size);

extern void Ada_Codeinit(void);


There are a couple of things I need to explain here:

  • __gnat_last_chance_handler: a function that will be called when an (unhandled) Ada exception is raised. It is implemented in C to use the event logging features of the ESP SDK.

  • Ada_Codeinit: the Ada elaboration function I mentioned above. It is provided in the Ada static library and must be called from the C code.

  • set_led_color: Here I could have made a reusable binding for the ESP SDK LED strip library. But I really only care about setting the color of one LED, so, as explained above, I keep the HAL as simple as possible.

Who is in charge here?

The application is relatively simple; we just want to serve one HTML page. The C code will be mostly in charge and just sporadically call the Ada code to fill a buffer with the content of the page.
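That division of labor can be sketched like this. In the real project, generate_page is the Ada function declared in the spec above; here it is stubbed in C (with made-up HTML) purely so the example runs standalone, and serve_root is an illustrative stand-in for the ESP-IDF request handler.

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for the Ada-exported generate_page: fills the buffer the
   C side owns and returns the number of bytes written. */
uint32_t generate_page(void *page_buffer, uint32_t buffer_size)
{
    const char *html = "<html><body>Hello from Ada</body></html>";
    uint32_t len = (uint32_t)strlen(html);
    if (len > buffer_size)
        len = buffer_size;          /* never overrun the C buffer */
    memcpy(page_buffer, html, len);
    return len;
}

/* Sketch of a GET handler in the style of the https_server example:
   C owns the buffer and the server loop, Ada only provides content. */
uint32_t serve_root(char *buf, uint32_t cap)
{
    return generate_page(buf, cap);
}
```

Note the shape of the interface: C passes an address and a capacity, Ada returns a length. Keeping ownership of the memory on the C side avoids any question of who allocates and frees across the language boundary.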

Integrating Ada/SPARK and C

The ESP SDK is based on CMake which is fairly flexible if you put in the time to learn how things work. I will just show you the code that we have to add to the CMakeLists.txt file in the root directory.

What is going on here is:

  • Call Alire to build the Ada code

  • Link the Ada code static library

  • Find the location of the Ada run-time library

  • Link the Ada run-time library

# Build the Ada code using Alire
execute_process(COMMAND alr -n build
                WORKING_DIRECTORY "${PROJECT_SOURCE_DIR}/../ada_code/")

# Add the Ada code static library
add_library(ada_code STATIC IMPORTED GLOBAL)
set_property(TARGET ada_code PROPERTY IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/../ada_code/lib/libAda_Code.a")
target_link_libraries(https_server.elf PUBLIC ada_code)

# Get path of Ada run-time library (libgnat.a)
execute_process(COMMAND bash -c "alr exec -- riscv64-elf-gnatls --RTS=zfp-rv32imac -v 2>&1 | grep adalib"
                WORKING_DIRECTORY "${PROJECT_SOURCE_DIR}/../ada_code/"
                RESULT_VARIABLE gnatls_result
                OUTPUT_VARIABLE gnatls_output)

string(STRIP "${gnatls_output}" ada_runtime_dir)

# Add the Ada run-time static library
message(STATUS "Link Ada run-time ${ada_runtime_dir}/libgnat.a")
add_library(libgnat STATIC IMPORTED GLOBAL)
set_property(TARGET libgnat PROPERTY IMPORTED_LOCATION "${ada_runtime_dir}/libgnat.a")
target_link_libraries(https_server.elf PUBLIC libgnat)

I am sure there are better ways to do this with CMake, don’t hesitate to comment below if you have some suggestions.

Filling the blanks

The only thing left to do is to implement the HAL functions on both sides and call the Ada function to generate HTML from the root_get_handler function. All the code is available in this repository if you want to see the full picture:


Once the application is built and flashed to the ESP32-C3, we have our little device serving HTML pages generated from Ada code over https.


With this simple example we can see how easily one can integrate Ada/SPARK code into an embedded project. The most difficult part really was to figure out the right way to link static libraries with CMake.

Join us at the High Integrity Software (HIS) Conference 2022! Tue, 09 Aug 2022 00:00:00 -0400 Paul Butcher

After two years of virtual events, we are very happy to report that the High Integrity Software Conference (HIS) will be making a physical comeback on Tuesday 11th October 2022 at the Bristol Marriott Hotel City Centre, Bristol, UK. Since 2014, AdaCore has been co-organising the event with Capgemini Engineering (previously known as Altran Technologies, SA). The success and growth of the conference have ensured it remains a regular fixture for returning delegates, and the exciting lineup for this year's event will ensure HIS 2022 is no exception!

The core themes of the conference this year are software techniques and methods that impact modern-day national infrastructure programs; moreover, large-scale projects that need to remain operational for the long term, often over several decades. The aim is to understand how early decisions around cyber-physical system architectures and adopting effective software development lifecycles and verification techniques will later impact defect rates and maintainability. Since the challenge for such large-scale and long-term systems is multi-faceted, the conference will also consider broader issues within the software development ecosystem, such as sustainable supply chains and talent streams.

There will be a keynote by Jan Bosch, Professor of Software Engineering at Chalmers University of Technology, multiple programme tracks throughout the day, networking opportunities, as well as an exhibition.

This year we are particularly excited that our partner, Ferrous Systems’ CEO Florian Gilcher, will speak about Rust and the coming age of high-integrity languages. During his talk, Florian will argue that the successful adoption of Rust across multiple industries is partly due to a resurging interest in software development techniques where safety-critical practices are being applied to non-safety-related, mission-critical environments. Florian will also talk about upcoming changes and opportunities not only for Rust but also for other languages, like Ada.

We are also looking forward to welcoming Professor John Goodacre to HIS and learning more about Digital Security by Design, an initiative supported by the UK government to transform digital technology and create a more resilient and secure foundation for a safer future.

About HIS

The HIS mission is to share challenges, best practices, and experiences between software engineering practitioners. The conference features talks from industrial and academic specialists which disseminate experience and knowledge of essential techniques and methods that are applicable across industry sectors.

We'd love to meet you at the show! For the full agenda and to register visit:

Announcing Publication of the Draft Ferrocene Language Specification Tue, 26 Jul 2022 00:00:00 -0400 Quentin Ochem

Since AdaCore first announced our partnership with Ferrous Systems back in February, we have been working diligently to further develop their Ferrocene Rust toolchain with the goal of qualifying it under relevant industry software safety standards for Rust users in high integrity markets, such as automotive, avionics, space and rail.

At this stage, our qualification work is primarily focused on documentation. And we are pleased to announce the publication of the initial draft of the Ferrocene Language Specification (FLS) - a qualification-oriented document that details the Rust language as it specifically relates to Ferrocene.

The FLS effort leverages existing Rust language documentation, Ferrous Systems’ Rust technical expertise, and AdaCore’s experience in programming language standardization and software safety certification. Our longstanding active involvement with the evolution of the Ada language standard and its defining Ada Reference Manual inspired the structure and the level of detail that we are using to write the FLS.

While initial development of the FLS is primarily a joint effort between Ferrous Systems and AdaCore, the document is now publicly available on GitHub and has been published under Rust’s standard open source licenses. Our team will continue to improve the FLS in the open, with a plan of finalizing it by the end of the year. We have no intention to replace Rust’s decision-making process. Our documentation will be responsive to ongoing Rust project changes, decisions, and Request for Comments (RFCs), and we will do our best to consider contributions from the community. Check out the contribution guidelines for more information.

If you are interested in more information about Ferrous Systems, AdaCore, or Ferrocene, please contact us.

Announcing The 2022 Ada/SPARK Crate Of The Year Award Tue, 28 Jun 2022 05:00:00 -0400 Fabien Chouteau

We're happy to announce the second edition of our programming competition, the Ada/SPARK Crate Of The Year Award! We believe the Alire package manager is a game changer for Ada/SPARK, so we want to use this competition to reward the people contributing to the ecosystem.

Why “Crate”? This is the name the Alire project uses to designate a software project, library or executable written using the Ada and/or SPARK programming languages and contributed to the Alire ecosystem. The word comes from the Cargo package manager.


The competition is starting today and ends on Friday December 31st 2022 at 23:59 CEST. We'll announce the results in January 2023. As mentioned before, you can submit projects you started before the competition, months or even years ago. The only thing that matters is that your crate has to be available in the Alire community index by the end of the competition.

How to enter?

The competition is hosted on GitHub. To enter, participants must open an "issue" on the competition repository using the provided template. Read the terms and conditions for more details.


This competition has 3 prizes of $2,000 each, for:

  • The Ada Crate Of The Year Prize, for best overall Ada crate;
  • The SPARK Crate Of The Year Prize, for the best crate written in SPARK and/or contributing to the SPARK ecosystem;
  • The Embedded Crate Of The Year Prize, for the best Ada or SPARK crate for embedded software.

Getting started with Alire and Ada/SPARK

You can have a look at the Alire documentation to start your first crate. If you don’t know Ada/SPARK programming, we recommend starting with our interactive online courses here.

We also recommend getting in touch with the Ada/SPARK and Alire community. Here are some links that you may find useful:

Of course, you should also have a look at the existing Alire ecosystem to see if your awesome project idea already exists or to see which existing crates might help in your endeavor.

Have fun, and happy hacking!

I can’t believe that I can prove that it can sort Thu, 23 Jun 2022 08:00:00 -0400 Yannick Moy

Sorting algorithms are to computer science what “Hello World!” is to programming. A way for beginners to get their hands dirty. Which also means that most programmers don’t write “Hello World!” programs past their studies, and computer scientists don’t look at sorting algorithms past their PhD.

Which made it surprising that a new "interesting" sorting algorithm was published at the end of 2021, whose appeal drew attention from both computer scientists and programmers. Here is the algorithm in full detail:

As an expert user of Ada, and an anecdotal user of SPARK, one of us (Lionel) took it as a challenge to try some functional proof with SPARK. Despite a hopeful start, this ended up badly. As a SPARK expert thus contacted to help with the challenge, one of us (Yannick) took it as a way to show how functional proof should be approached in SPARK. Including some false starts, this ended up well (and under an hour). This is the story of this challenge, and the tips we think are important to share with those who aim at functional proof with SPARK.

I am Lionel, I’ll start.

I recently stumbled upon a tweet about a paper on an “unbelievable” sorting algorithm:

I read the paper's intro and couldn't instinctively see what was wrong with it, but reading the replies to the tweet quickly brought back memories of learning programming, writing cool (at the time) programs for years, and then later getting some formal computer science education.

I dug up my first ever sorting algorithm. I’d used it in a silly 3D renderer that I’d implemented in 2000 on a Pentium 75, all implemented from scratch from what some might call nowadays “first principles” (i.e. I had absolutely no clue what I was doing). In a 3D renderer, once you’ve got your list of triangles to paint, and the direction of the camera set up, you want to z-sort the triangles so you can run the Painter algorithm (Wikipedia here is very generous, and calls it “depth-sort algorithm”). I needed a sort algorithm and I had no books, just my C compiler, SDL headers and some old French magazines about setting up VESA…

So I built one. I mean I built this one, and to me it looked like it worked. If you look at this video, it makes sense, somehow, especially for small array sizes, for a beginner who has no notion of smart sorting, no big-O notation, and no internet. My "unit tests" (generous term again) from the time show how little I understood about testing, because they seem to have been chosen to fit this algorithm (to validate this implementation rather than sorting in general).

I’d put it together without videos and all the fancy visualizations one can use when learning sorting algorithms, and I didn’t think about this code for years afterwards.

When I later learned Ada, in the first year of my formal studies, I needed a sort function for a project, so I just ported that algorithm and went with it (and I lived with it until, some months later, I had to sort 2 million entries; after a night at 100% CPU with almost no progress, I caved, cracked open my copy of Sedgewick, and started learning the science and not just the hacking-stuff-up).

Paraphrasing the old code, it looked very simple:

procedure Stupid_Sort is
   type A_Type is array (Natural range 1 .. 5) of Natural;

   procedure Sort (A : in out A_Type) is
   begin
      for I in A'Range loop
         for J in A'Range loop
            if A (I) < A (J) then
               declare
                  Tmp : constant Natural := A (I);
               begin
                  A (I) := A (J);
                  A (J) := Tmp;
               end;
            end if;
         end loop;
      end loop;
   end Sort;

   A : A_Type;
begin
   for I in A'Range loop
      A (I) := A'Last - I + 1;
   end loop;

   Sort (A);
end Stupid_Sort;

Tip: Start small (simple small data types, a single subprogram)

When I read the tweet and all the replies, I admit I felt compassion for old me (well, young me), even though I saw other people admitting they’d come up with this sorting algorithm once upon a time. I wondered whether applying modern tech (other than having a sorting algorithm in the standard library and knowing about it…) blindly would comfort such a developer in his or her “wrong” version. I decided to launch myself a challenge: proving the algorithm using SPARK, without looking at the proof in the paper. Should be easy enough, right (famous last words)?

I fired up the ultimate Ada IDE (vim) and just copy-pasted the old code, compiled it (gnatmake -gnatA stupid_sort.adb) and ran it (through gdb to get the content of A after Sort returns). It works, for that input.

Tip: Start with a passing test

Then I wanted to prove that Sort sorts all possible arrays so I added the SPARK pixie dust:

procedure Stupid_Sort with SPARK_Mode => On

and the necessary post-condition for Sort:

procedure Sort (A : in out A_Type)
   with Post => (for all J in A'First .. A'Last - 1 => A (J) <= A (J + 1))

Which reads:

  • when the Sort procedure returns

  • for every element A(J) of the array (except the last one)
    • the element A(J) is smaller or equal to the next element A(J+1)

Then I created a GPR project to run GNATprove:

project Stupid_Sort is

   for Main use ("stupid_sort.adb");
   for Source_Files use ("stupid_sort.adb");

   package Compiler is
      for Switches ("Ada") use
           ("-gnata",     -- enable assertions and runtime checks
            "-gnat2022",  -- for newest forms of expressions in Ada 2022
            "-g", "-O0"); -- for debugging
    end Compiler;

end Stupid_Sort;

And on I went:

> gnatprove -Pstupid_sort.gpr -j0
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ...

stupid_sort.adb:6:58: medium: postcondition might fail, cannot prove A (J) <= A (J + 1)
    6 |     with Post => (for all J in A'First .. A'Last - 1 => A (J) <= A (J + 1))
      |                                                         ^~~~~~~~~~~~~~~~~

Which was kind of expected, but there’s a --level knob, if you’re lazy like me you’ll just try it whenever you find something GNATprove balks at (sadly it only goes to 4, you can’t turn it to eleven… yet):

> gnatprove -Pstupid_sort.gpr -j0 --level=2
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ...

And… GNATprove managed to prove the functional correctness of that postcondition!

Tip: Use proof automation (and turn up the knob)

Let’s check the synthesis of what GNATprove did (the gnatprove.out file):

Summary of SPARK analysis

SPARK Analysis results        Total       Flow   CodePeer    Provers   Justified   Unproved
Data Dependencies                 .          .          .          .           .          .
Flow Dependencies                 .          .          .          .           .          .
Initialization                    1          1          .          .           .          .
Non-Aliasing                      .          .          .          .           .          .
Run-time Checks                   .          .          .          .           .          .
Assertions                        .          .          .          .           .          .
Functional Contracts              1          .          .     1 (Z3)           .          .
LSP Verification                  .          .          .          .           .          .
Termination                       .          .          .          .           .          .
Concurrency                       .          .          .          .           .          .
Total                             2    1 (50%)          .    1 (50%)           .          .

max steps used for successful proof: 11967

Analyzed 1 unit
in unit stupid_sort, 2 subprograms and packages out of 2 analyzed
  Stupid_Sort at stupid_sort.adb:1 flow analyzed (0 errors, 0 checks, 0 warnings and 0 pragma Assume statements) and proved (0 checks)
  Stupid_Sort.Sort at stupid_sort.adb:5 flow analyzed (0 errors, 0 checks, 0 warnings and 0 pragma Assume statements) and proved (1 checks)

Here GNATprove tells us it managed to prove our postcondition (a functional contract) with Z3. So it works! SPARK can prove the algorithm! Victory? Not so fast.

Let’s try to go for larger array sizes, e.g. 1 .. 10. Now after 16 seconds, GNATprove goes back to:

stupid_sort.adb:6:58: medium: postcondition might fail, cannot prove A (J) <= A (J + 1)
    6 |     with Post => (for all J in A'First .. A'Last - 1 => A (J) <= A (J + 1))
      |                                                         ^~~~~~~~~~~~~~~~~

Turning the level up to the max (--level=4) doesn’t get us better results, but takes more than 3 minutes, for the same result. So back to square one.

The first reflex I had (wrongly) ingrained was to try and state obvious things through assertions. Keep in mind I didn’t want to read the paper, with the many juicy proofs and insights it might contain. So there I went:

First I put the swap code in its own procedure:

procedure Swap (A : in out A_Type; I : A_Index_Type; J : A_Index_Type)
  with Post => A (J) = A'Old (I) and A (I) = A'Old (J)
is
   Tmp : constant Natural := A (I);
begin
   A (I) := A (J);
   A (J) := Tmp;
end Swap;

… which (spoiler) wasn’t a very good idea (see “framing conditions”, later). I didn’t get better results, so I went on adding assertions.

At some point I ended up with lots of tautological asserts, and it felt more and more like I really didn’t understand what the problem was.

if A (I) < A (J) then
   pragma Assert (A (I) < A (J)); -- *that* should always be true, right?
   Swap (A, I, J);
   pragma Assert (A (I) >= A (J)); -- *that* too, no?
end if;
pragma Assert (A (I) >= A (J)); -- doubting everything…

And still, no progress on the proof of the postcondition.

Tip: Avoid the Assertocalypse

The message from GNATprove was hinting at a Loop_Invariant:

stupid_sort.adb:22:14: medium: postcondition might fail
   22 |     Post => Sorted (A, A'First, A'Last)
      |               ^~~~~~~~~~~~~~~~~~~~~~~~~~
  possible fix: loop at line 25 should mention A in a loop invariant
   25 |      for I in A'range loop
      |                       ^ here

So I looked at the videos of the sort again, and came up with two (very wrong) loop invariants:

pragma Loop_Invariant (if I > A'First then A (I) >= A (A'First));
pragma Loop_Invariant (for all K in A'First .. I - 1 => A(I) >= A(K));

This made GNATprove very mad:

stupid_sort.adb:16:59: medium: postcondition might fail, cannot prove A (J) <= A (J + 1)
   16 |      with Post => (for all J in A'First .. A'Last - 1 => A (J) <= A (J + 1))
      |                                                          ^~~~~~~~~~~~~~~~~

stupid_sort.adb:28:53: medium: loop invariant might not be preserved by an arbitrary iteration, cannot prove A (I) >= A (A'first)
   28 |         pragma Loop_Invariant (if I > A'First then A (I) >= A (A'First));
      |                                                    ^~~~~~~~~~~~~~~~~~~

stupid_sort.adb:29:66: medium: loop invariant might fail in first iteration, cannot prove A(I) >= A(K)
   29 |         pragma Loop_Invariant (for all K in A'First .. I - 1 => A(I) >= A(K));
      |                                                                 ^~~~~~~~~~~

stupid_sort.adb:29:66: medium: loop invariant might not be preserved by an arbitrary iteration, cannot prove A(I) >= A(K)
   29 |         pragma Loop_Invariant (for all K in A'First .. I - 1 => A(I) >= A(K));

Tip: Understand tool messages

That’s when I decided to call up my local SPARK friend Yannick, to teach me about loop invariants, and how you prove such an algorithm with SPARK.

Tip: Have an expert on call

I am Yannick, I’ll jump in.

Let’s return first to the addition of the Swap procedure in Lionel’s code. Remember the postcondition he wrote for Swap:

procedure Swap (A : in out A_Type; I : A_Index_Type; J : A_Index_Type)
   with Post => A (J) = A'Old (I) and A (I) = A'Old (J)

That’s true, but not sufficient. Indeed, all the provers know about variable A after a call to Swap (since this parameter is modified in Swap) is what the postcondition of Swap says about it. And… it says nothing about all the values of A outside of indexes I and J! So there is no chance that GNATprove will be able to prove our sorting code.

This need to identify what has changed during a call with enough precision is known as the frame condition, and it’s a typical beginner’s mistake to forget it. Here, a suitable postcondition would be:

procedure Swap (A : in out A_Type; I : A_Index_Type; J : A_Index_Type)
   with Post => A = (A'Old with delta I => A (J)'Old, J => A (I)'Old)

which states exactly the content of A after a call to Swap.

Tip: Beware the frame condition

Ironically, you get the same result if you don't specify a postcondition at all on Swap, because GNATprove will inline the call in that case! Inlining of calls and unrolling of loops are powerful techniques for automating the proof of programs, without the need for the user to specify contracts and loop invariants. But as usual with automation, the risk is that, when the situation gets more complex, automation fails and the user is left with a complex situation that she does not understand (a.k.a. the curse of automation). Loop unrolling explains why Lionel was initially "feeling lucky" with Z3 proving the postcondition of Sort for small array sizes, without having to write loop invariants. But the curse of automation struck back when the array size increased, as the loops are no longer unrolled, or the unrolling leads to unmanageable formulas for the automatic provers.

One way to remain aware of the choices in terms of automation made by GNATprove is to use the switch --info which outputs such information:

stupid_sort.adb:11:14: info: local subprogram "Sort" only analyzed in the context of calls
  add a contract to analyze it separately from calling contexts
stupid_sort.adb:33:21: info: unrolling loop
stupid_sort.adb:39:04: info: analyzing call to "Sort" in context
stupid_sort.adb:39:04: info: in inlined body at line 18
  unrolling loop
stupid_sort.adb:38:04: info: in inlined body at line 17
  unrolling loop

Tip: Use the right tool configuration (and switches!)

Contrary to the heroic (masochistic?) Lionel, I did not aim at rediscovering how the algorithm worked by looking at the code. Instead, I took the time to read the short article, and to convince myself that I understood why it worked.

Tip: Understand the code that you want to prove

Well, at least I thought I understood the algorithm. More on that later.

I started by defining suitable types for the index of the array, one being slightly larger than the other, in order to accommodate empty ranges of values (when we start the iteration):

type I_Type_Base is new Integer range 0 .. 5;
subtype I_Type is I_Type_Base range 1 .. I_Type_Base'Last;
type A_Type is array (I_Type) of Natural;

Tip: Define suitable types with the tightest constraints

Types are the best specifications, because you get their properties everywhere a value of the type is used, without having to repeat the properties in assertions, preconditions, etc.

Then, I defined expression functions for the important properties used in the article: that the array over a given range is sorted, and that the maximum of the array over a given range is at a given index.

function Sorted (A : A_Type; From : I_Type; To : I_Type_Base) return Boolean is
  (for all I in From .. To =>
     (for all J in I .. To =>
        A (I) <= A (J)));

function Is_Max (M : I_Type; A : A_Type; From, To : I_Type) return Boolean is
  (for all I in From .. To => A (I) <= A (M))
with Pre => M in From .. To;

Tip: Define suitable expression functions for important properties

In general, it’s better to give a name to important properties that will be used multiple times in specifications, assertions and ghost code, because it makes the specification and proof more readable, and because it can help automatic provers. Here, it’s all the more helpful for automatic provers because it isolates quantifiers “for all”.

The form of these properties is also quite important. Instead of using the natural expression of sortedness as “every element is less or equal to the next”, I have expressed it as the equivalent transitive closure of this property, that is, “every element is less or equal to every element that follows”. That makes a big difference for automatic provers, as establishing the latter from the former requires inductive reasoning, which automatic provers are poor at, while the former is immediately deduced from the latter.
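The two formulations agree on every array; only their usefulness to a prover differs. As a small runnable illustration (in C, with hypothetical helper names), here are both definitions side by side:

```c
#include <stdbool.h>
#include <stddef.h>

/* "Every element is less or equal to the next": the natural,
   adjacent-pairs formulation. */
static bool sorted_adjacent(const int *a, size_t n)
{
    for (size_t i = 0; i + 1 < n; i++)
        if (a[i] > a[i + 1])
            return false;
    return true;
}

/* "Every element is less or equal to every element that follows":
   the transitive-closure formulation used for the SPARK proof. */
static bool sorted_closure(const int *a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = i; j < n; j++)
            if (a[i] > a[j])
                return false;
    return true;
}
```

At run time they return the same answer; the difference only shows up in proof, where deriving the closure form from the adjacent form needs induction while the reverse direction is a one-step deduction.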

Similarly, I could have defined Is_Max as a function that takes as input the maximum Max of the array over a given range, instead of the index M of the maximum, and computes the conjunction:

(for all I in From .. To => A (I) <= Max)
   and then
(for some M in From .. To => Max = A (M))

But this uses an existential quantification “for some” which is hard to establish for automatic provers, as this requires exhibiting the “witness” index M here. So I went for the definition that did not require an existential quantification, by passing the index M as parameter instead.

Tip: Use idiomatic definitions of properties that help automatic provers

With these definitions, the loop invariant of the external loop can be expressed very easily, based on the properties of that loop described in the article:

pragma Loop_Invariant (Sorted (A, A'First, I));
pragma Loop_Invariant (Is_Max (I, A, A'First, A'Last));

It gets a bit trickier for the internal loop, as the explanations in the article in terms of values of K before and after I cannot be directly translated into loop invariants. Plus, the article talks of going from index I to index I+1 while the code goes from index I-1 to index I (when expressing the loop invariant). That's where I tried out various loop invariants in my head, with the support of pen-and-paper, to understand how the algorithm really worked. And it went 💡 the array stayed sorted at every iteration of the internal loop for all indexes lower than I!

pragma Loop_Invariant (Sorted (A, A'First, I));

And during the internal loop, the maximum value over the whole array was either located at index I-1 (at the beginning of the iteration) or at index I (after getting to iteration J = I-1):

pragma Loop_Invariant
  (declare
      M : constant I_Type := (if J < I then I - 1 else I);
   begin
      Is_Max (M, A, A'First, A'Last));

Tip: Use pen-and-paper to really understand the code that you want to prove

It’s all too easy to “understand” that something works by going through the steps of an explanation/demonstration (like in the article), without being able to really understand why it works. Proof requires us to understand why it works.

As GNATprove could not prove the loop invariant of the inner loop, even at level 4 (all provers get called at that level, with a substantial timeout of 60 seconds per check), I tried running the program through its test with assertions enabled, and… the loop invariant failed at runtime! No wonder GNATprove could not prove it.

Tip: Execute assertions during tests to help debug them

Just looking at the failing loop invariant, I realized that it could not be true during the first iteration of the loop, where we have not yet identified the maximum value of the array. So I added a special case for I=A’First:

pragma Loop_Invariant
  (if I = A'First then
     Is_Max (I, A, A'First, J)
   else
     (declare
        M : constant I_Type := (if J < I then I-1 else I);
      begin
        Is_Max (M, A, A'First, A'Last)));

But the test was still failing at runtime! This time, I ran the test in the debugger (gdb inside GNAT Studio), to display values of all variables when hitting the failing loop invariant. That was a case of off-by-one error, the test “J < I” in the definition of constant M should be “J < I-1”. With that, the test was running without error.

Tip: Debug failing assertions by running tests with assertions in the debugger

I reran GNATprove on the code. It reported cases of runtime errors when calling functions Sorted/Is_Max, which I fixed. But GNATprove could still not prove the loop invariant of the internal loop stating that A remained sorted from A’First to I-1. I read again the explanations in the article, which confirmed that it was true. Yet it was not proved. So I looked at the code, to see how the loop invariant at iteration J can be deduced from the loop invariant at iteration J-1 and the execution of the current iteration. And it was not provable! Because we were missing the information that, up to value J=I-1, the value at index I is greater than all values seen so far:

pragma Loop_Invariant
   (if J < I then
      (for all K in A'First .. J => A(K) <= A(I)));

Now, GNATprove proves the code easily (at level 2).
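To see the final invariants in action, here is a Python transcription of the algorithm (a sketch with 0-based indices, so A'First becomes 0 and I-1 becomes i-1) in which every loop invariant from the article is executed as a runtime assertion:

```python
def sorted_upto(a, frm, to):
    # Sorted (A, From, To), as a transitive closure
    return all(a[i] <= a[j] for i in range(frm, to + 1)
                            for j in range(i, to + 1))

def is_max(m, a, frm, to):
    # Is_Max (M, A, From, To)
    return all(a[i] <= a[m] for i in range(frm, to + 1))

def stupid_sort(a):
    n = len(a)
    for i in range(n):
        for j in range(n):
            if a[i] < a[j]:
                a[i], a[j] = a[j], a[i]
            # inner-loop invariants
            assert (not j < i) or a[j] <= a[i]
            assert (not j < i) or all(a[k] <= a[i] for k in range(j + 1))
            assert sorted_upto(a, 0, i - 1)
            if i == 0:
                assert is_max(i, a, 0, j)
            else:
                m = i - 1 if j < i - 1 else i
                assert is_max(m, a, 0, n - 1)
        # outer-loop invariants
        assert sorted_upto(a, 0, i)
        assert is_max(i, a, 0, n - 1)
    return a

assert stupid_sort([5, 4, 3, 2, 1]) == [1, 2, 3, 4, 5]
```

None of the assertions fire, which mirrors what the runtime-checked tests showed before the proof went through.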

Tip: Debug failing loop invariants by reasoning inductively, from one iteration to the next

All that remained was to generalize the array index type to range over all positive integers instead of the range 1..5, and to allow unconstrained arrays whose length is not known statically:

type I_Type_Base is new Integer range 0 .. Integer'Last;
subtype I_Type is I_Type_Base range 1 .. I_Type_Base'Last;
type A_Type is array (I_Type range <>) of Natural;

Because we have so far used the relative attributes A’First/A’Last instead of the equivalent magic numbers 1/5, the adjustments needed are minimal.

Tip: Use language features to facilitate the generalization of assertions and ghost code

By reviewing the final code, I realized that the last loop invariant could be written without quantification, as we already get that the array is sorted up to index I-1:

pragma Loop_Invariant (if J < I then A(J) <= A(I));

Tip: Simplify ghost code once the program is proved

That concludes the functional proof of this program.

Or does it? We have not proved here that the result of sorting is a shuffling of the input. This is doable with SPARK, but just… not easy. In 99.9% of cases in practice, you’d stop here because:

  • It is obvious from looking at the code that the result is a shuffling of the input, as A is only modified by swapping two of its elements. So this can easily be verified by review.

  • Doing it by proof brings little more assurance than the review, but comes at a high cost, both for the initial proof and for the maintenance of the contracts and lemmas as the code or the tools evolve.
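For completeness, the permutation property that we choose not to prove is easy to check at test time by comparing multisets, for instance in Python:

```python
from collections import Counter

def is_permutation(before, after):
    # Two sequences are permutations of each other iff their
    # multisets of elements coincide.
    return Counter(before) == Counter(after)

data = [2, 9, 2, 5, 1]
result = sorted(data)      # stand-in for the proved Sort procedure
assert is_permutation(data, result)
```

This is exactly the kind of property that is cheap to test and expensive to prove.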

Tip: Don’t prove what you don’t need to prove

To recap, we saw 18 tips that could greatly facilitate your use of proof with SPARK (or any similar program proof environment):

Tip: Start small (simple small data types, a single subprogram)
Tip: Start with a passing test
Tip: Use proof automation (and turn up the knob)
Tip: Avoid the Assertocalypse
Tip: Understand tool messages
Tip: Have an expert on call
Tip: Beware the frame condition
Tip: Use the right tool configuration (and switches!)
Tip: Understand the code that you want to prove
Tip: Define suitable types with the tightest constraints
Tip: Define suitable expression functions for important properties
Tip: Use pen-and-paper to really understand the code that you want to prove
Tip: Execute assertions during tests to help debug them
Tip: Debug failing assertions by running tests with assertions in the debugger
Tip: Debug failing loop invariants by reasoning inductively, from one iteration to the next
Tip: Use language features to facilitate the generalization of assertions and ghost code
Tip: Simplify ghost code once the program is proved
Tip: Don’t prove what you don’t need to prove

Here is the final code for our version of the Stupid Sort algorithm:

pragma Ada_2022;

procedure Stupid_Sort with SPARK_Mode => On is
   type I_Type_Base is new Integer range 0 .. Integer'Last;
   subtype I_Type is I_Type_Base range 1 .. I_Type_Base'Last;
   type A_Type is array (I_Type range <>) of Natural;

   function Sorted (A : A_Type; From, To : I_Type_Base) return Boolean is
      (for all I in From .. To =>
         (for all J in I .. To =>
            A(I) <= A(J)))
   with
     Pre => From in A'Range
       and then To <= A'Last;

   function Is_Max (M : I_Type; A : A_Type; From, To : I_Type_Base) return Boolean is
     (for all I in From .. To => A (I) <= A (M))
   with
     Pre => M in From .. To
       and then From in A'Range
       and then To in A'Range;

   procedure Sort (A : in out A_Type) with
     Post => (if A'Length > 0 then Sorted (A, A'First, A'Last))
   is
   begin
      for I in A'Range loop
         for J in A'Range loop
            if A(I) < A(J) then
               declare
                  Tmp : constant Natural := A(I);
               begin
                  A (I) := A (J);
                  A (J) := Tmp;
               end;
            end if;

            pragma Loop_Invariant (if J < I then A(J) <= A(I));
            pragma Loop_Invariant (Sorted (A, A'First, I-1));
            pragma Loop_Invariant
              (if I = A'First then
                 Is_Max (I, A, A'First, J)
               else
                 (declare
                    M : constant I_Type := (if J < I-1 then I-1 else I);
                  begin
                    Is_Max (M, A, A'First, A'Last)));
         end loop;

         pragma Loop_Invariant (Sorted (A, A'First, I));
         pragma Loop_Invariant (Is_Max (I, A, A'First, A'Last));
      end loop;
   end Sort;

   A : A_Type (1 .. 1000);
begin
   for I in A'Range loop
      A (I) := Integer (A'Last - I + 1);
   end loop;

   Sort (A);
end Stupid_Sort;
A New Era For Ada/SPARK Open Source Community Thu, 02 Jun 2022 00:00:00 -0400 Fabien Chouteau

Today we have two exciting announcements for the future of the Ada/SPARK ecosystem.

Policy for new software libraries

Until now, the default license of choice for new software libraries at AdaCore was GPLv3 as complemented by the GCC run-time exception. The reason for this choice dates back to the origin of the company. This is the license used for the Ada run-time library that is part of the GCC/GNAT compiler, just as it is for the C run-time library. As we were familiar with this license, we then reused it for software libraries that were used outside of the compilation context.

We believe that the competitive domain of programming languages where open-source is now the standard requires us to take actions which will increase adoption. We aim to convince application developers of all kinds to use Ada/SPARK and our libraries. Therefore, we need to make that easy by licensing the libraries under a license that is more commonly used in the open-source world and one that is more permissive. For those reasons we announce that, as of today, new libraries developed at AdaCore will use the Apache License 2.0 by default.

In addition, in the coming months we will also change the license of some of our libraries hosted on GitHub to Apache 2.0:

  • ada-traits-containers

  • gpr

  • langkit

  • libadalang

  • platinum_reusable_stack

  • spawn

  • VSS

We chose the Apache License 2.0 because we consider it to be the best option for our customers and the Ada/SPARK ecosystem at large.

For existing users of these libraries, this shouldn’t change much. People using these libraries either through GitHub or through our commercial services will still be able to create proprietary software. This will not impact the license of the tools or compilers that we distribute either.

Modernizing the ecosystem

Since 2005, AdaCore has released every year a special version of our GNAT toolchain for free software developers, hobbyists, and students. This release is called GNAT Community, formerly GNAT GPL.

Two years ago we started thinking about modernizing the ecosystem: a cleaner and more familiar ecosystem with two variants. A GNAT provided and supported by AdaCore for commercial/industrial projects, GNAT Pro, and a GNAT provided by the community for open source projects, with familiar licensing and without pure-GPL run-times, GNAT FSF. This resulted in a decision by AdaCore to stop further releases of GNAT Community and have the community handle its successor.

We organized a survey to get the opinion of the community regarding the future of GNAT Community. We learned that our idea was mostly approved by the people who responded. We also confirmed our understanding that people not in favor of this idea were concerned about ease of use and quality. To address those concerns, we thought about solutions and ways to replace, and even improve on, what GNAT Community offered.

Those thoughts led to our sponsorship and contribution to the Alire package manager created by Alejandro Mosteo from the Cen­tro Uni­ver­si­tario de la Defen­sa de Zaragoza. I said in a previous blog post that Alire is a game changer for the Ada/SPARK ecosystem and it really is. After two years of such collaboration, we feel like the project is now ready to become the main tool for the Ada/SPARK open source programming community.

Alire is a source-based package manager for the Ada and SPARK programming languages. It is a way for developers to easily build upon projects (libraries or programs) shared by the community, but also to easily share their projects for others to build upon. On top of that, everything you could do with GNAT Community you can now do with Alire: GNAT Studio, SPARK, native, cross ARM, and cross RISC-V compilers.

So today we announce the end of the GNAT Community releases with 2021 being the last one. We encourage all GNAT Community users to transition to Alire going forward. Full details and explanations on how to transition to Alire can be found at

Feel free to ask questions in the comment section of this blog post, or on the Ada forums such as Reddit r/ada, Telegram, Gitter, comp.lang.ada, etc. We will do our best to answer all the questions.

We are excited to embark on this new era for the Ada/SPARK community.

Happy hacking!

Announcing Updates to Tue, 22 Mar 2022 00:00:00 -0400 Gustavo A. Hoffmann

Ada Crate of the Year: Interactive code search Thu, 17 Mar 2022 00:00:00 -0400 Paul Jarrett

-- trendy_terminal.gpr
Platform : Platform_Type := external ("Trendy_Terminal_Platform");
case Platform is
   when "windows" => Trendy_Terminal_Sources := Trendy_Terminal_Sources & "src/windows";
   when "linux"   => Trendy_Terminal_Sources := Trendy_Terminal_Sources & "src/linux";
   when "macos"   => Trendy_Terminal_Sources := Trendy_Terminal_Sources & "src/mac";
end case;
# alire.toml
windows = { Trendy_Terminal_Platform = "windows" }
linux = { Trendy_Terminal_Platform = "linux" }
macos = { Trendy_Terminal_Platform = "macos" }
type c_lflag_t is (ISIG, ICANON, XCASE, ECHO, ECHOE, ECHOK, ECHONL,
                   NOFLSH, TOSTOP, ECHOCTL, ECHOPRT, ECHOKE, FLUSHO,
                   PENDIN);

for c_lflag_t use
   (ISIG    => 16#0000001#,
    ICANON  => 16#0000002#,
    XCASE   => 16#0000004#,
    ECHO    => 16#0000010#,
    ECHOE   => 16#0000020#,
    ECHOK   => 16#0000040#,
    ECHONL  => 16#0000100#,
    NOFLSH  => 16#0000200#,
    TOSTOP  => 16#0000400#,
    ECHOCTL => 16#0001000#,
    ECHOPRT => 16#0002000#,
    ECHOKE  => 16#0004000#,
    FLUSHO  => 16#0010000#,
    PENDIN  => 16#0040000#);

type Local_Flags is array (c_lflag_t) of Boolean
    with Pack, Size => 32;
Std_Input.Settings.c_lflag (Linux.ISIG) := not Enabled;
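To make the flag manipulation concrete, here is a Python sketch using the bit values from the representation clause above as plain integer masks (the Ada code instead indexes a packed Boolean array, which amounts to the same bit twiddling):

```python
# Bit values taken from the representation clause above.
ISIG   = 0x0000001
ICANON = 0x0000002
ECHO   = 0x0000010

lflag = ISIG | ICANON | ECHO   # some initial local-mode flags

# Equivalent of clearing the ISIG component of the Boolean array:
lflag &= ~ISIG

assert lflag & ISIG == 0                 # ISIG is now off
assert lflag & ICANON and lflag & ECHO   # the others are untouched
```

The Boolean-array view lets the Ada code write `Settings.c_lflag (Linux.ISIG) := False` instead of masking by hand.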
Handling Aliasing through Pointers in SPARK Mon, 14 Mar 2022 00:00:00 -0400 Claire Dross

As I explained in a blog post a couple of years ago, pointers are subject to a strict ownership policy in SPARK. It prevents aliasing and allows for an efficient formal verification model. Of course, it comes at the cost of restrictions which might not fit all usages. In particular, while ownership makes it possible to represent certain recursive data structures, those involving cycles or sharing are de facto forbidden. This is a choice, and not every proof tool made the same one. For example, the WP plug-in of Frama-C supports pointers with arbitrary aliasing. If some information about the separation of memory cells is necessary to verify a program, then the user must supply the annotation explicitly. I have investigated modeling pointers with aliasing in SPARK as indices into a big memory array. I will present the results of my experiments in this blog post. We will see that, while such a representation is indeed possible modulo some hiding in SPARK, it can quickly become rather heavy in practice.

First of all, as pointers in SPARK are always subject to ownership, we need to hide the access type from the analysis tool. For that, we can use a private type whose full view is annotated with SPARK_Mode => Off. Since we might want to declare pointers designating different types, I have declared this private type inside a generic package:

generic
   type Object (<>) is private;
package Pointers_With_Aliasing with SPARK_Mode is
   type Pointer is private;

   procedure Create (O : Object; P : out Pointer);
   function Deref (P : Pointer) return Object;
   procedure Assign (P : Pointer; O : Object);
   procedure Dealloc (P : in out Pointer);

private
   pragma SPARK_Mode (Off);
   type Pointer is access Object;
end Pointers_With_Aliasing;

The functionalities are the same as for basic pointers: they can be allocated, dereferenced, assigned, and deallocated. Deallocation takes a parameter of mode IN OUT, because it resets the pointer to null, just like normal deallocation in Ada. Note that the Create subprogram, used for allocation, is a procedure and not a function. Indeed, functions cannot have side-effects in SPARK, and we want allocations to have a global effect on the memory. To model the effects of these subprograms, I have declared a Memory object that I can use inside global contracts. I could have used an abstract state instead, but an actual object is more appropriate, as it makes it possible to copy a previous value of the memory into a ghost constant to refer to it later if necessary:

Memory : Memory_Type;
procedure Create (O : Object; P : out Pointer) with
   Global => (In_Out => Memory);
function Deref (P : Pointer) return Object with
   Global => Memory;

Now let's consider some functional specifications. We represent the memory as a big map from addresses to objects: a pointer is valid if and only if its address, computed through an Address function, is associated with an object in the map, in which case it designates that object. This model can be used to annotate our primitives in a mostly straightforward way. However, we can already get a taste at this stage of the annotation burden. For all procedures which update the memory, we must not only describe the effect on the modified cells, but also state that all the other cells are preserved. To make this easier, I have defined three helper functions, Writes, Allocates, and Deallocates. They take as a parameter a representation of the memory footprint of the subprogram. For example, here is the contract of Assign. It states that no cells are either allocated or deallocated by Assign, and that all the previous mappings are preserved except for the address of P:

procedure Assign (P : Pointer; O : Object) with
   Global => (In_Out => Memory),
   Pre  => Valid (Memory, Address (P)),
   Post => Get (Memory, Address (P)) = O
     and then Allocates (Memory'Old, Memory, None)
     and then Deallocates (Memory'Old, Memory, None)
     and then Writes (Memory'Old, Memory, Only (Address (P)));

There is no magic behind the helper functions, they are simply defined using quantified formulas on the memory. The function Writes for example states that the mappings of all valid cells which are not in the memory footprint are preserved. Footprints are represented here as a big array from addresses to boolean values. An address is included in the footprint if the array associates it with the value True:

function Writes (M1, M2 : Memory_Type; Target : Addresses) return Boolean is
   (for all A in Address_Type =>
      (if not Target (A) and Valid (M1, A) and Valid (M2, A)
       then Get (M1, A) = Get (M2, A)))
with Ghost;

Note that this representation of pointers has some drawbacks with respect to normal (ownership-based) pointers, some of which can be alleviated. First, there is no way with these pointers to use SPARK to check the absence of memory leaks. As a work-around, it is probably possible to use some sort of ref-counting mechanism in the implementation of the library, but this mechanism will not be verified by SPARK. I have not tried it. Another disadvantage is that there is no way to update a value in place (calling Deref and then Assign will copy the designated value twice). To alleviate this concern, I have introduced Constant_Reference and Reference functions which turn one of our pointers into an ownership pointer while observing or borrowing the whole memory. It still remains a bit more cumbersome to use than a direct update through a visible pointer though. Finally, using a big memory object as a representation of the designated data might prevent SPARK from verifying that the program is thread-safe. Indeed, if all pointers to a type conceptually designate the same memory object, then all threads using these pointers are considered to access this object, potentially leading to a race condition. We would need some higher-level memory model to handle this case, maybe based on separation logic.

Now that we have defined a model for our new pointers with aliasing, let's try to write and verify a few programs to assess their usability. We start with a very simple example, a Swap procedure. It is well known that it is not necessary to have two distinct pointers for swap to work as expected. Let's check it. First, we need to choose a type of objects and instantiate our generic to declare the pointer type. I have chosen a small record type with two fields. Swap can be defined straightforwardly from there:

type Object is record
   F : Integer;
   G : Integer;
end record;
package Pointers_To_Obj is new Pointers_With_Aliasing (Object);

procedure Swap (X, Y : Pointer) with
   Pre => Valid (Memory, Address (X)) and Valid (Memory, Address (Y)),
   Post => Deref (X) = Deref (Y)'Old and Deref (Y) = Deref (X)'Old
     and Allocates (Memory'Old, Memory, None)
     and Deallocates (Memory'Old, Memory, None)
     and Writes (Memory'Old, Memory, [for A in Address_Type => A in Address (X) | Address (Y)])
is
   Tmp : constant Object := Deref (X);
begin
   Assign (X, Deref (Y));
   Assign (Y, Tmp);
end Swap;

We can see that I have annotated Swap with a fair amount of specification already, especially considering its size and complexity. The precondition states that X and Y are valid pointers in the memory. It is necessary, since aliasing may result in one of the parameters referencing deallocated data. In the first line of the postcondition, I use Deref to state that the values designated by the pointers have indeed been swapped. The following three lines express the frame condition of Swap: all memory cells other than the ones taken as parameters by Swap are preserved. These three lines would not have been needed if we were using ownership pointers, as the separation is handled in a built-in way by the tool. Still, this is not horrendous, and the SPARK analysis tool can check Swap without any issues. We can call it with the same memory cell for both parameters, which would not have been possible if we were using ownership pointers.
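The whole setup can be mimicked in a few lines of Python to see why aliasing is harmless here. This is a sketch only: the dict plays the role of Memory, and the lowercase function names are transliterations of the article's primitives, not its actual code.

```python
memory = {}        # address -> object; plays the role of Memory
next_address = 1

def create(obj):
    # Procedure Create: allocate a fresh address for obj.
    global next_address
    p = next_address
    next_address += 1
    memory[p] = obj
    return p

def deref(p):
    assert p in memory      # precondition: Valid (Memory, Address (P))
    return memory[p]

def assign(p, obj):
    assert p in memory
    memory[p] = obj

def swap(x, y):
    # Same body as the article's Swap.
    tmp = deref(x)
    assign(x, deref(y))
    assign(y, tmp)

x = create((1, 2))
y = create((3, 4))
swap(x, y)
assert deref(x) == (3, 4) and deref(y) == (1, 2)
swap(x, x)                  # aliasing: allowed, and a no-op
assert deref(x) == (3, 4)
```

Calling swap(x, x) exercises exactly the aliased case that ownership pointers would reject.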

Let's try our model on a (slightly) more complex example: a simply linked list. Linked lists can be defined using pointers with ownership too, but let's assume that we want to allow sharing between linked lists. The first issue we encounter is technical: we cannot instantiate our generic package with an incomplete type, so we cannot construct our recursive data structure directly. This means that we need to use a class-wide type for our designated element, so that we can declare a pointer before the actual designated object:

type Object is tagged null record;
package Pointers_To_Obj is new Pointers_With_Aliasing (Object'Class);
type L_Cell is new Object with record
   V : Natural;
   N : Pointers_To_Obj.Pointer;
end record;

In addition, as our memory model uses the equality operator on the object type, we'd better know what it is. To this effect, we introduce a Valid_Memory predicate which states that all the objects in the memory are list cells, and not some other type derived from Object for which the "=" operator could behave unexpectedly:

function Valid_Memory (M : Memory_Type) return Boolean is
   (for all A in M => Get (M, A) in L_Cell);

As our memory could theoretically contain cycles, it seems safer to store the length of the list on the side. It makes it easy to define what it means for an address to designate a valid (acyclic) list of a given length as follows:

function Valid_List (L : Address_Type; N : Natural; M : Memory_Type) return Boolean is
   --  if L is a null pointer, the list is empty
   (if L = 0 then N = 0
   --  otherwise, L designates a valid pointer in M and L.N is a list of length N - 1
    else N /= 0
      and then Valid (M, L)
      and then Valid_List (Address (L_Cell (Get (M, L)).N), N - 1, M))
with Pre => Valid_Memory (M);

type List is record
   Length : Natural;
   Values : Pointer;
end record;
--  Type for lists. We store the length together with the pointer.

function Valid_List (L : List) return Boolean is
  (Valid_List (Address (L.Values), L.Length, Memory))
with Pre => Valid_Memory (Memory);

Now that we have defined our linked list objects, let's try to write a utility program for them. To keep it simple, I have chosen to consider an Append procedure which takes a list and inserts it at the end of another list. We don't consider the actual elements stored in the list and focus on the memory safety only. As an example, let's assume that we have three valid lists L1, L2, L3 and that we want to be able to prove that it is safe to use Append to concatenate L2 to both L1 and L3 so we get two valid lists which share a sublist:

  L1 : List;
  L2 : List;
  L3 : List;
  --  Create L1, L2, and L3
  pragma Assert (Valid_List (L1));
  pragma Assert (Valid_List (L2));
  pragma Assert (Valid_List (L3));

  Append (L1, L2);
  Append (L3, L2);
  pragma Assert (Valid_List (L1));
  pragma Assert (Valid_List (L2));
  pragma Assert (Valid_List (L3));

Let's try to come up with a minimal contract for Append that would allow us to prove this kind of code. The first thing we need is to be able to say that two lists are disjoint (they do not share any memory cell). This is necessary as Append should only be called on disjoint lists if we do not want to create a cycle. To express it, we define a reachability predicate which returns True when there is a path from an address to another in the list structure. Using this predicate, we can say that two lists are disjoint if there is no cell reachable from both:

function Reachable (L : Address_Type; N : Natural; A : Address_Type; M : Memory_Type) return Boolean is
  --  A is reachable in the acyclic list starting at L in M iff:
  --  L is not null,
  (L /= 0 and then
  --  and either L is A or A is reachable from L.N
     (L = A
      or else Reachable (Address (L_Cell (Get (M, L)).N), N - 1, A, M)))
with Pre => Valid_Memory (M) and then Valid_List (L, N, M);

function Disjoint (L1, L2 : List) return Boolean is
  (for all A in Address_Type =>
     (if Reachable (Address (L1.Values), L1.Length, A, Memory)
      then not Reachable (Address (L2.Values), L2.Length, A, Memory)))
with Pre => Valid_Memory (Memory) and then Valid_List (L1) and then Valid_List (L2);

Using the above definitions, I have written the following specification for Append, which I have tried to make as simple as possible. In the precondition, I state that L1 and L2 are disjoint valid lists. In the postcondition, I need to express that L1 is still a valid list, the fact that other, disjoint lists are preserved is expressed through the frame condition: we only update memory cells reachable from L1 before the call. Finally, to be able to continue tracking the partitioning of memory after the call, I need to describe the cells which are reachable from L1 after the call. Note that I could have been more precise here, and constrain the list structure in a stronger way:

procedure Append (L1 : in out List; L2 : List) with
  Global => (In_Out => Memory),
  Pre => Valid_Memory (Memory)
     --  L1 and L2 are valid lists
     and then Valid_List (L1) and then Valid_List (L2)
     --  L1 and L2 are disjoint
     and then Disjoint (L1, L2)
     --  the sum of their lengths is a natural
     and then Natural'Last - L1.Length >= L2.Length,

  Post => Valid_Memory (Memory)
     --  L1 is a valid list
     and then Valid_List (L1)
     --  Its length is the sum of the lengths of L1 and L2
     and then L1.Length = L1.Length'Old + L2.Length'Old
     --  The new list contains the same pointers as the 2 input lists
     and then (for all A in Address_Type => Reachable (Address (L1.Values), L1.Length, A, Memory) =
                 (Reachable (Address (L1.Values)'Old, L1.Length'Old, A, Memory'Old)
                  or Reachable (Address (L2.Values), L2.Length, A, Memory'Old)))
     --  Nothing has been allocated or deallocated
     and then Allocates (Memory'Old, Memory, None)
     and then Deallocates (Memory'Old, Memory, None)
     --  Only cells reachable from L1 before the call have been modified
     and then Writes (Memory'Old, Memory, Reachable_Locations (L1)'Old);

To prove the natural implementation of Append, I had to write 6 lemmas, all involving proofs by induction, which are currently out of reach of the automatic solvers at the back-end of SPARK. Mostly, they state what happens to the validity of lists and reachability on preserved parts of the memory. Applications of the same lemmas were also necessary to prove the memory safety of the two consecutive applications of Append above. Here again, the complexity turned out to be tractable, even if definitely non-trivial. Note that an Append function on simply linked lists using regular SPARK pointers does not require any contract or ghost code to be proven correct, so the advantages of using regular pointers are obvious here. Whether they come from ownership or from the fact that the pointers are supported in a built-in way by the tool is less clear. The need for a class-wide type to construct the list would definitely disappear with built-in support, but the reachability predicate, the frame conditions, as well as some auxiliary lemmas would most probably remain, as can be seen on list examples in the Frama-C framework.
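The sharing scenario can also be replayed concretely in a Python sketch (hypothetical names, not the SPARK code): the memory is a dict of cells, and the ghost functions Valid_List, Reachable, and Disjoint become executable checks.

```python
memory = {}    # address -> (value, next_address); 0 plays the role of null

def valid_list(l, n):
    # Valid_List: l designates an acyclic list of exactly n cells.
    if l == 0:
        return n == 0
    if n == 0 or l not in memory:
        return False
    return valid_list(memory[l][1], n - 1)

def reachable(l, n, a):
    # Reachable: address a occurs in the list of length n starting at l.
    if l == 0:
        return False
    return l == a or reachable(memory[l][1], n - 1, a)

def disjoint(l1, n1, l2, n2):
    return not any(reachable(l1, n1, a) and reachable(l2, n2, a)
                   for a in memory)

def append(l1, n1, l2):
    # Destructive append: the last cell of l1 now points to l2's head.
    if l1 == 0:
        return l2
    p = l1
    for _ in range(n1 - 1):
        p = memory[p][1]
    memory[p] = (memory[p][0], l2)
    return l1

# Build L1 = [1], L2 = [2], L3 = [3] at addresses 10, 20, 30.
memory = {10: (1, 0), 20: (2, 0), 30: (3, 0)}
assert valid_list(10, 1) and valid_list(20, 1) and valid_list(30, 1)
assert disjoint(10, 1, 20, 1)

append(10, 1, 20)    # L1 = [1, 2]
append(30, 1, 20)    # L3 = [3, 2]; L2 is now shared
assert valid_list(10, 2) and valid_list(20, 1) and valid_list(30, 2)
```

The final assertions mirror what the SPARK version proves statically: after both calls, all three lists are still valid even though L2 is shared.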

We have reached the end of this post. If you are still with me, I think that we have demonstrated that, while it is possible to define and use pointers with aliasing in SPARK, it definitely does not come for free. If you are interested, you can find the complete example in the spark testsuite.

Quite Proved Image Format Thu, 10 Mar 2022 04:05:00 -0500 Fabien Chouteau

A few weeks ago a piece of code went viral in the online dev community. The “Quite OK Image Format” (QOI) is a fast, lossless image compression designed to have a very simple implementation (about 300 lines of C).

Shortly after, a few alternative implementations popped up here and there, and in this kind of situation we are eager to show what Ada/SPARK can bring to the table.

So we started by writing a translation from the C implementation to Ada. The code was already quite different thanks to Ada features such as nested subprograms or range tests. We also used representation clauses to easily read/write QOI “chunks” and avoid bit shifts and masks that can be difficult to understand:

The next step was to use SPARK to prove the absence of run-time errors. With this algorithm, which mostly translates data from one buffer to another, the main benefit is to guarantee that the code does not make out-of-bounds accesses when encoding the image.

Like in the C implementation, we decided to leave the responsibility of allocating the output buffer to the caller, and to provide a function to compute the worst case output size. Having an output buffer of at least this worst-case size is one of the preconditions of the Encode subprogram. With SPARK, users can therefore prove that images will always be successfully encoded.
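As a sketch of what such a worst-case function computes, based on the QOI specification (14-byte header, 8-byte end marker, and at most channels + 1 bytes per pixel for a full QOI_OP_RGB/RGBA chunk; the SPARK code may express it differently):

```python
def worst_case_size(width, height, channels):
    # Worst case: every pixel is encoded as a full RGB/RGBA chunk,
    # which takes one tag byte plus one byte per channel.
    header, end_marker = 14, 8
    return width * height * (channels + 1) + header + end_marker

# A 1x1 RGBA image needs at most 5 payload bytes plus 22 bytes of framing.
assert worst_case_size(1, 1, 4) == 5 + 14 + 8
```

The caller allocates a buffer of this size; the actual encoded size returned by Encode is usually much smaller.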

Passing the proof required adding pre- and postconditions to specify what each function expects and guarantees, and loop invariants in the main loop to help the prover analyze the behavior of the loop. The main encoding loop iterates over every pixel of the image, and some iterations will "push" more data to the output buffer than others. But we don't overflow, because pushing more data in one iteration means that we did not push anything during the previous iteration.

SPARK is also quite picky about initialization, so by default it requires that the output buffer is fully initialized on return. That’s not so convenient here, as the encoded data size will most of the time be smaller than the worst case buffer size provided by the caller. To have more flexibility, we added the aspect Relaxed_Initialization on the output buffer to specify that it may not be fully initialized, and we specify in the postcondition which part of the array is initialized, and thus can be read by the user after the procedure call.

A possible next step for the user interface would be to add contracts on Encode and Decode that would enable users to prove the precondition of one from the postcondition of the other, since chaining them is likely to happen in practice.
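Such a round-trip property is easy to state as an executable check. Here is the idea on a toy run-length codec in Python (not QOI itself): decoding what was encoded gives back the original data.

```python
def rle_encode(data):
    # Group consecutive equal values into (count, value) chunks.
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        out.append((j - i, data[i]))
        i = j
    return out

def rle_decode(chunks):
    out = []
    for count, value in chunks:
        out.extend([value] * count)
    return out

pixels = [7, 7, 7, 1, 2, 2]
assert rle_decode(rle_encode(pixels)) == pixels
```

A contract linking Encode and Decode would express this same property statically instead of per-input.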

This SPARK QOI encoder/decoder code is hosted on GitHub and available in the Alire ecosystem in the “qoi” crate. You can get it with:

$ alr get qoi

And if you go into the tests directory you can build a simple PNG/QOI converter:

$ cd qoi_*/tests/
$ alr build
$ ./bin/tests test.png test.qoi

To use the library in your own projects, simply run

$ alr with qoi

The code is compatible with ZFP run-times, so you can use it in embedded projects.

Ada GameDev Part 2: Making 2D maps with Tiled Tue, 08 Mar 2022 03:42:00 -0500 Fabien Chouteau

In part one of this series we saw how the GESTE library brings 8-bit era graphics rendering to modern microcontrollers. In this second post we will see how to create your game maps and export them to a format that is compatible with GESTE.

The journey begins with graphic assets. To create our levels we need a tileset (see part one). Of course you can draw your own tileset with a drawing program like gimp or photoshop, but if (like me) you are not a great artist, the best way is to use assets from For this post I will use the “EverCrazy 8x8 Tile Palette” by EverCrazy.

Next we need a tool to design the levels. We are going to use an open source map editor Tiled (

This is not going to be a tutorial on how to use Tiled, but we will go through the basic steps.

Make a tileset in Tiled

The first step when starting a Tiled project is to create a tileset. Click on File -> New -> New Tileset…

A dialog box will open:

Click on “Browse...” to select the tileset picture downloaded earlier. The important parameters are:

  • The transparent color. You can click on the colored box to select the transparent color directly in the image.

  • Tile width and height. Here we have 8x8 pixel tiles.

Click on “Save As...” to create a “.tsx” file that contains the tileset information.

Now you should have the tileset available in Tiled:

You can have more than one tileset in Tiled and use them in your maps.

Make a map in Tiled

Now that we have a tileset we can create a map. Click on File -> New -> New map…

A dialog box will open:

The following parameters should not be changed, otherwise the map will not be compatible with GESTE:

  • Orientation

  • Tile layer format

  • Tile render order

The tile size should be 8x8 pixels as this is what our tileset uses. The map size doesn’t matter much; it can be changed later if need be.

Click on “Save As...” to create a “.tmx” file that contains the map information.

Now you should have a map drawing window with the map grid in the middle and tileset at the bottom right.

From there you just have to select a tile in the tileset and click in the map grid to place it. And that’s how you create your maps:

Generate code for GESTE

Now that we have a map and tileset from Tiled, the next step is to convert them to a format compatible with GESTE.

I developed a command line tool that takes Tiled files (.tmx, .tsx) and generates GESTE data inside Ada packages. The tool is called “tiled_code_gen” and is available in the Alire package manager.

tiled_code_gen reads maps and tilesets from Tiled and creates both a common color palette and a common tileset. So only the colors and tiles actually used in your maps are in the generated output. This will optimize memory usage, which is often a scarce resource on microcontrollers.

Here’s what the command line looks like:

$ tiled_code_gen -f RGB565_Swap --geste --root-package-name=game_assets *.tmx

  • -f: defines the output pixel format. Here we use a 16-bit RGB that is byte swapped for compatibility with the screen of the PyGamer board.

  • --geste: enables the GESTE output in the form of Ada packages

  • --root-package-name=game_assets: defines the name of the root package for the generated Ada code

  • *.tmx: the Tiled map(s)

The output will contain:

  • a package specification that provides constants for GESTE configuration (e.g. tile size, number of tiles, color format)

  • a package specification that contains the color palette

  • a package specification that contains the tileset

  • game_assets-*.ads: one package specification for each Tiled map that contains the map data

If you want a more complete example, have a look at the assets from my game “Shoot'n Loot”. The Tiled files are here and the corresponding generated code here.


Combined, GESTE and tiled_code_gen make game asset development a breeze.

In a follow-up post I will try to cover the extra features of tiled_code_gen not shown here, for example the handling of tile collision boxes.

In the meantime you can try to modify “Shoot'n Loot” levels or even add new levels; contributions are welcome.

SPARK Crate of the Year: Unbounded containers in SPARK Wed, 02 Mar 2022 00:00:00 -0500 Manuel Hatzl

function Alloc return Name is
  pragma SPARK_Mode (Off); -- Only the allocation has to be in SPARK_Mode Off
begin
  return new Object;
exception
  when Storage_Error => return null;
end Alloc;
with Spark_Unbound.Safe_Alloc;

procedure Test is
  type Alloc_Record is record
    V1 : Integer;
    V2 : Natural;
    V3 : Positive;
  end record;

  type Record_Acc is access Alloc_Record;
  package Record_Alloc is new Spark_Unbound.Safe_Alloc.Definite (T => Alloc_Record, T_Acc => Record_Acc);
  Rec_Acc : Record_Acc;
begin
  Rec_Acc := Record_Alloc.Alloc; -- Note: no `new` before the call

  -- Check that Rec_Acc is NOT null and then do something

end Test;
with Spark_Unbound.Arrays;

procedure Test is
  package UA_Integer is new Spark_Unbound.Arrays (Element_Type => Integer, Index_Type => Positive);
  Test_UA : UA_Integer.Unbound_Array := UA_Integer.To_Unbound_Array (Initial_Capacity => 3);
  Success : Boolean;
begin
  -- Fill the array
  UA_Integer.Append (Test_UA, 1, Success);
  UA_Integer.Append (Test_UA, 2, Success);
  UA_Integer.Append (Test_UA, 3, Success);

  -- Now Append needs to resize
  UA_Integer.Append (Test_UA, 4, Success);
end Test;
package Documentation is
  for Doc_Pattern use "^-";
  -- This considers comments beginning with "---" to be documentation
  -- Needed to ignore commented Ghost functions that would break GNATdoc
end Documentation;
Ada GameDev Part 1: GEneric Sprite and Tile Engine (GESTE) Mon, 28 Feb 2022 04:14:00 -0500 Fabien Chouteau

Today I am starting a series of blog posts about video game development with Ada.

Video games are what first drew me to programming and computer science. As far as I remember, my first ever programming experience was scripting my own maps in the first Medal Of Honor PC game. But except for a couple of "rush" projects during my studies, I actually did not make a lot of games. So a couple of years ago I started a journey back to my early programming experience and began to develop some games and game development frameworks in Ada. In this series I will present some of these projects and how you can use them to make your own games in Ada.

Although this is the first post in the series, an earlier project of mine could be considered my first video game in Ada. In 2015, I wrote a post about my interactive Apollo 11 moon landing simulator. This is effectively a video game in which only the most talented pilots will be able to land safely on the moon. This game is made with GtkAda, and you can now try it from the Alire package manager:

$ alr get eagle_lander
$ cd eagle_lander*
$ alr run


In this first entry of the series, I want to present my GEneric Sprite and Tile Engine (GESTE) project. The goal of GESTE is to bring the rendering and aesthetic of 8-bit era game consoles to modern microcontrollers.

Picture Processing Unit

First, we have to understand how the consoles of that time (NES, Game Boy, Mega Drive/Genesis, etc.) managed to render complex images on the screen despite their weak computing capabilities (a couple of megahertz from an 8-bit CPU). The solution was to offload all the heavy work to a graphics coprocessor, sometimes called a Picture Processing Unit (PPU). PPUs worked around four main features: the color palette, the tile set, tile maps, and sprites.

  • The color palette is a simple concept: an indexed collection of colors that can be displayed on the screen. For some consoles the palette was fixed, others supported multiple palettes that could be used at the same time. One color of the palette is usually reserved for transparency.

  • The tile set is a collection of small graphics (tiles), usually of fixed size like 8 by 8 pixels, that are effectively matrices of indexes in a color palette. The same tile could sometimes be used with different palettes. For example, Mario and Luigi can be drawn from the same tile but using two different palettes (red and green).

  • Tile maps are matrices of indexes in the tile set that form a large background image composed of tiles. Depending on the console, you can have multiple tile maps on the screen.

  • The sprites are graphics objects, composed of one or a few tiles, that can be placed freely on the screen.
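To make these four concepts concrete, here is a sketch of how they can be modeled as Ada types (names and sizes are illustrative, not taken from GESTE or a real PPU):

```ada
type Color_Index is range 0 .. 15;          --  Index in the color palette
Transparent : constant Color_Index := 0;    --  One color reserved for transparency

type RGB_Color is record
   R, G, B : Natural range 0 .. 255;
end record;

type Palette is array (Color_Index) of RGB_Color;

--  A tile is a small matrix of indexes in the palette
type Tile is array (0 .. 7, 0 .. 7) of Color_Index;   --  8x8 pixels
type Tile_Index is range 0 .. 255;
type Tile_Set is array (Tile_Index) of Tile;

--  A tile map is a matrix of indexes in the tile set
type Tile_Map is array (Natural range <>, Natural range <>) of Tile_Index;

--  A sprite is a tile placed freely on the screen
type Sprite is record
   Id   : Tile_Index;
   X, Y : Integer;
end record;
```

Note how little memory this indirection costs: a whole background is stored as one-byte tile indexes, and each tile as 4-bit color indexes, instead of full RGB pixels.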

The picture below shows an example of palette, tiles, tile maps and sprites combined to compose a complex image on screen:

The PPU was a piece of hardware, so rendering complex scenes was fast (60 frames per second) and didn’t involve the CPU at all.

Simulating the PPU

As I said above, the goal of GESTE is to bring this kind of picture rendering to modern microcontrollers. Of course today’s general-purpose microcontrollers don’t have PPUs, so everything has to be rendered by the CPU. And even if microcontroller CPUs are significantly more powerful than they used to be, it is a challenge to render multiple maps and sprites. Let’s see how we can achieve that.


A picture in GESTE is made of layers that roughly correspond to the tile maps and sprites of PPUs. There are three kinds of layers built-in, but you can implement your own:

  • Sprite layers to display a single tile at a given position

  • Grid layers to display a grid of tiles at a given position, similar to PPU tile maps

  • Text layers to display text at a given position

The Sprite and Grid layers are made of tiles, themselves made of colors in a palette, just like in a PPU.

Layers implement a function that returns a color for the given coordinates:

function Pix (This : Layer_Type;
              X, Y : Integer)
              return Output_Color;


The rendering algorithm is somewhat similar to ray casting. Instead of taking each object of the scene and drawing it on the screen, the engine takes each pixel and tries to find its color from the different objects of the scene.

For each pixel of the area being drawn, GESTE goes through the list of layers and checks whether the corresponding pixel inside each layer is transparent or not. When a non-transparent pixel is found, it is pushed to the screen and the procedure starts again for the next pixel. If all the layers have a transparent pixel, a background color is used instead.
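In pseudo-Ada, that per-pixel loop looks roughly like this (Area, Layers, Background and Push_Pixel are hypothetical names, not GESTE's actual API):

```ada
for Y in Area.Y_First .. Area.Y_Last loop
   for X in Area.X_First .. Area.X_Last loop
      declare
         C : Output_Color := Transparent;
      begin
         --  Find the topmost layer with a non-transparent pixel here
         for L of Layers loop
            C := Pix (L.all, X, Y);
            exit when C /= Transparent;
         end loop;

         --  If all layers are transparent at (X, Y), use the background
         if C = Transparent then
            C := Background;
         end if;

         Push_Pixel (C);
      end;
   end loop;
end loop;
```

The early exit is what makes layer ordering matter: the first layer in the list effectively sits on top of the others.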

Depending on the performance of the CPU, the time it takes to update pixels on the screen or the complexity of the scene, it might not be possible to render the full screen for every frame.

For those reasons, GESTE is capable of rendering only the objects that have changed or moved since the previous frame. And for those objects, it will try to update the smallest area possible.

Pushing pixels to the screen

Most of the time when using a microcontroller and a screen, the screen will be connected through some kind of serial protocol (e.g. SPI or I2C) and it might not be possible to send a full frame 60 times per second. It is also possible that there is not enough RAM available to have a full frame buffer in memory all the time.

This is why GESTE rendering does not draw directly on the screen: it pushes pixels into a buffer (provided by the user) and then calls a function (provided by the user) to send that buffer to the screen.

The best performance is achieved when sending the pixel buffer is offloaded to a Direct Memory Access (DMA) controller and the CPU can start to render the next pixel buffer while the first one is being sent to the screen.
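A sketch of this double-buffering scheme (all the names here are illustrative, not GESTE's actual API):

```ada
--  Two user-provided buffers: while one is being sent to the screen
--  by the DMA controller, the CPU renders the next area into the other.
Buffer_A, Buffer_B : aliased Pixel_Buffer;
Drawing_On_A       : Boolean := True;

procedure Render_Frame is
begin
   for Area of Dirty_Areas loop
      if Drawing_On_A then
         Render (Area, Buffer_A);                --  CPU does the pixel work
         Wait_For_DMA;                           --  Previous transfer finished?
         Start_DMA_Transfer (Buffer_A, Screen);  --  Offload the copy to DMA
      else
         Render (Area, Buffer_B);
         Wait_For_DMA;
         Start_DMA_Transfer (Buffer_B, Screen);
      end if;
      Drawing_On_A := not Drawing_On_A;
   end loop;
end Render_Frame;
```

With this ping-pong arrangement the CPU only blocks when rendering is faster than the transfer, which is the best one can do on a serial-attached screen.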


With all those techniques combined, GESTE is capable of rendering complex scenes with multiple background layers, sprites and transparency on a 120MHz Cortex-M microcontroller with enough cycles left for game logic, physics engine and sound.

Here is one of the games that I developed with GESTE, “Shoot’n’loot”. It runs on an AdaFruit PyGamer board with a Microchip samd51 microcontroller:

If you own a PyGamer board, you can download the latest release of Shoot’n’loot here.

Otherwise, a PC version is available in the Alire package manager and should work fine on Windows and Linux (at least):

$ alr get shoot_n_loot
$ cd shoot_n_loot*
$ alr run


That’s it for the first part of this series. In the next one we will see how to create levels and graphics for GESTE using a tool called Tiled.

Proving the Correctness of GNAT Light Runtime Library Thu, 10 Feb 2022 08:32:00 -0500 Yannick Moy

As a programming language, Ada offers a number of features that require runtime support, e.g. exception propagation or concurrency (tasks, protected objects). The GNAT compiler implements this support in its runtime library, which comes in a number of different flavors with more or less capability. The GNAT light runtime library is a version of the runtime library targeted at embedded platforms and certification, with or without an Operating System (bare metal). It contains around 180 units focused mostly on I/O, numerics, text manipulation, and memory operations.

Variants of the GNAT light runtime library have been certified for use at the highest levels of criticality in several industrial domains: avionics (DO-178), space (ECSS-E-ST40C), railway (EN 50128), automotive (ISO-26262). Details vary across certification regimes, but the common approach to certification used today is based on written requirements traced to corresponding tests, supported by test coverage analysis. Despite this strict certification process, some bugs were found in the past in the code. An ongoing project at AdaCore is applying formal proof with SPARK to the light runtime units, in order to prove their correctness: that the code is free of runtime errors, and that it satisfies its functional specifications. So far, 40 units (out of 180) have been proved, and a few bugs fixed along the way (including a buffer overflow).

But first, let’s consider a motivating example of why one may need formal proof to get confidence in the correctness of runtime units. Back in 2012, the late great programmer (and co-founder of AdaCore) Robert Dewar implemented runtime support for big integers in the GNAT compiler, in order to allow intermediate arithmetic computations without overflows (say, if you compute (A * B) / C but (A * B) might overflow, this allows you to tell the compiler to compute (A * B) / C with big integers, so that only the final result has to fit in a machine integer). The most complex function was the division between big integers, for which he implemented algorithm D by Donald Knuth from The Art of Computer Programming Vol 2, 2nd Edition - 1981, section 4.3.1. One of the code reviewers reported a possible integer overflow in a test, when computing the quantity ((u (j) & u (j + 1)) - DD (qhat) * DD (v1)) * b. Robert was initially not worried, given that this closely followed Knuth’s published algorithm, but got concerned when it was shown that the overflow could be exercised: the computation of (A * B) / C with A = 18446744069414584318, B = 4294967296 and C = 18446744069414584319 gave the result 2147483648 instead of the correct 4294967295.

Thankfully, we were not the first to spot the bug, which had already been corrected in 1995. Here is the relevant section of errata of TAOCP Vol 2, 2nd Edition, replacing the buggy test with new code (to the right of the strange arrow):

Errata of TAOCP Vol 2, 2nd Edition

In fact, with this patch, the rewritten test might still lead to an overflow! This was detected a decade later, in 2005. Here is the relevant section of errata of TAOCP Vol 2, 3rd Edition, changing the comparison operation:

Errata of TAOCP Vol 2, 3rd Edition

After careful code reviews, we convinced ourselves that the new version was correct, but, already at the time, we wondered whether this could be proved using SPARK tools (after all, the GNAT compiler is written itself in Ada, so we could hope to prove part of it). That was not possible at the time, but we kept it as a future challenge.

Of course, the same algorithm may get implemented numerous times in a given application, and GNAT was no exception. There were two other implementations of algorithm D in GNAT, one in uintp.adb for arbitrary-precision computation at compile time, and one in s-arit64.adb for runtime support of fixed-point arithmetic. In the specific context of these two other implementations, we found no clear bug: the fixes were propagated to uintp.adb which was using a similar test, while s-arit64.adb used a different comparison which could not overflow. But given that the 1st Edition of Vol 2 was published in 1969, there must be hundreds of implementations of this algorithm out there that did not apply later fixes and are still incorrect!

Five years later, in 2019, our interest in the implementation of algorithm D in s-arit64.adb was raised by a remark of an external auditor, as part of the certification of this runtime unit for use in space. The auditor noted the high complexity of this function and asked for the addition of more comments in the code to be able to assess its correctness. Prompted by this request, we reviewed again this implementation and discovered that the code failed to raise an exception in a case where it should have done so (because the result of the division was too large), and that the code of another function in that unit contained two possible integer overflows when converting between signed and unsigned values. Thankfully, none was critical, because the former concerned a case of incorrect inputs, and because the overflows in the latter were silent in the runtime at that time (the runtime was compiled without runtime checking). Still, that was a close-enough call for us to wish that we could increase our confidence in the correctness of this code through proof.

And this is what we did in the summer of 2021! Our intern Pierre-Alexandre Bazin used SPARK to prove that s-arit64.adb was correctly implementing all its functions: there were no possible runtime errors in the code, and all the functions implemented their specification faithfully. This required expressing the specification as contracts in SPARK, that is, preconditions and postconditions, like here for the function Scaled_Divide implementing algorithm D:

Contract of Scaled_Divide

The postcondition uses big numbers to express that the resulting quotient Q is the mathematical operation (X * Y / Z) and the resulting remainder R is the rounded value of the mathematical remainder. The precondition states that these values for Q and R should fit in the machine integer type Double_Int. See the code for the definition of the ghost functions Round_Quotient and Same_Sign which are used to define this contract.
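To give an idea of the shape of such a contract, here is a heavily simplified sketch; it is NOT the actual contract from the GNAT sources (In_Double_Int_Range, Big and the rounding handling are hypothetical simplifications):

```ada
--  Simplified sketch of a Scaled_Divide-style contract: Q and R are
--  specified against exact big-integer arithmetic, and the
--  precondition requires the mathematical quotient to fit.
procedure Scaled_Divide
  (X, Y, Z : Double_Int;
   Q, R    : out Double_Int;
   Round   : Boolean)
with
  Pre  => Z /= 0
    and then In_Double_Int_Range (Big (X) * Big (Y) / Big (Z)),
  Post => Big (Q) = Big (X) * Big (Y) / Big (Z)
    and then Big (R) = Big (X) * Big (Y) rem Big (Z);
--  The real contract also covers the Round = True case, where Q is
--  the rounded (not truncated) quotient, via a ghost function.
```

The key idea is that the contract talks about unbounded mathematical integers, so the prover must show that the 128-bit-style machine arithmetic in the body computes the same values.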

The implementation of Scaled_Divide was slightly modified to make it provable, but more critically, Pierre-Alexandre had to use quite a lot of ghost code to guide automatic provers, including basic arithmetic lemmas to enunciate and prove mathematical properties, as well as a number of more complex lemmas to isolate parts of the proof, and a few intermediate assertions to simplify and share the proofs between provers.

Encouraged by this initial success, we have added contracts expressing the full functional specification of many other units in the GNAT light runtime, and proved with SPARK that the code correctly implemented these contracts. This includes units for character and string handling, units supporting the language attributes ‘Width, ‘Value and ‘Image, and support for exponentiation. We have so far proven 40 such units, and, along the way, we have discovered and fixed a few cases of overflow check and range check failures, one of which could lead to a buffer overflow on a runtime built without runtime checks. As you can see from the source files, that required adding many specifications (around 400 preconditions and 500 postconditions) and ghost code (around 150 loop invariants, 400 assertions, 300 ghost entities), and the daily proof run takes 1.5 hours on a Linux server with 36 cores.

Most remaining units remain out of reach for SPARK today, either because they rely on an untyped memory model (converting between raw Address values and typed pointers) or because they require precise reasoning on bitwise floating-point representation. Most units that use Address-to-pointer conversions use very simple algorithms, and those that manipulate floating-point values are direct translations in Ada of either reference C implementations or textbook algorithms, which increases confidence in their correctness. Our vision for the future is both to maintain the automatic proof of the 40 units proved so far as the analysis tool and provers get updated, so that we can benefit from the associated assurance in certification, and to grow the set of proved units as the SPARK language allows more constructs and the tooling improves.

The fact that this effort has not led to the discovery of serious bugs is a testament to the quality of the GNAT light runtime code, which has been subjected to a very high level of scrutiny in the past 20 years as it has been certified to the highest levels of multiple certification standards for avionics, railway, space, etc. Proof with SPARK is a new way to achieve this high level of assurance, with stronger guarantees about the absence of whole classes of errors, and about the faithfulness of all code paths to the specification.

This work was presented in the Ada devroom at FOSDEM 2022.

AdaCore and Ferrous Systems Joining Forces to Support Rust Wed, 02 Feb 2022 00:00:00 -0500 Quentin Ochem

For over 25 years, AdaCore has been committed to supporting the needs of safety- and mission-critical industries. This started with an emphasis on the Ada programming language and its toolchain, and over the years has been extended to many other technologies. AdaCore’s product offerings today include support for the Ada language and its formally verifiable SPARK subset, C and C++, and Simulink and Stateflow models. We have accomplished this while addressing the requirements of various safety standards such as DO-178B/C, EN 50128, ECSS-E-ST-40C / ECSS-Q-ST-80C, IEC 61508 and ISO 26262.

About a decade ago, a new programming language named Rust emerged with the goal of improving industry-wide programming practices towards higher reliability. Over the years, the technology evolved and is now appealing to the high-integrity embedded markets such as automotive. It is on these premises that a new company, Ferrous Systems, was formed in Germany by a number of Rust community members, aiming at providing safety-critical and certified toolchains for Rust users through its Ferrocene technology.

The story could have stayed there, with two separate tracks for two separate approaches to solving the same problem. Instead, AdaCore and Ferrous Systems started to realize that the two companies shared a fundamental understanding and approach both from a technical and business standpoint. Ferrous Systems and AdaCore have the same desire to support programmers with better languages, the same commitment to open-source software, the same drive towards facilitating software certification, all with very similar technologies. And both companies came to the same conclusion: by working together, they could more quickly bring a safety-certified Rust toolchain to the high integrity market.

Ferrous Systems and AdaCore are announcing today that they’re joining forces to develop Ferrocene - a safety-qualified Rust toolchain, which is aimed at supporting the needs of various regulated markets, such as automotive, avionics, space, and railway.

For Ferrous Systems, Ferrocene is an opportunity to leverage their Rust technical expertise and their relationship with the Rust community to turn the language into a first-class citizen for mission- and safety-critical embedded software development.

For AdaCore, this effort complements our long-standing Ada commitment and offers an opportunity to extend to the Rust community the expertise that we developed around safety-certified Ada toolchains.

Together we believe that there is a need for both Ada and Rust in the safety- and security-critical arena, and we intend to support both.

Concretely, that means qualifying the Ferrocene Rust compiler according to various safety standards, an effort that will eventually include the development and qualification of the necessary dynamic and static analysis tools. Ferrous Systems and AdaCore are also looking at safety-certified libraries, including language support (libcore) or additional user libraries. We are aiming at targeting various architectures and operating systems relevant to these markets. This vision will take time to come to fruition, and Ferrous Systems and AdaCore are poised to start by focusing on some specific aspects. Eventually, our objective is to support Rust as comprehensively as any other programming language relevant for high integrity application development.

While our initial work will be focused on pure Rust applications, our long-term commitment to Rust and Ada extends to developers who will be using both languages at the same time. We are looking at interoperability between them - including, in particular, the idea of developing bi-directional binding generators. We are also looking at using that interoperability ourselves, perhaps by developing formally proven and certified libraries in SPARK to be used by both Ada and Rust users.

Above and beyond, we are excited to open this new chapter of collaboration between the Rust and Ada communities. You can expect more news in the years to come!

If you are interested in more information about Ferrocene, please fill out this form.

AdaCore at FOSDEM 2022 Tue, 01 Feb 2022 03:55:00 -0500 Fabien Chouteau

Like in previous years, AdaCore will participate in FOSDEM. Once again the event will be online only, but this won’t prevent us from celebrating Open Source software, and it is an opportunity for even more people around the world to participate.

AdaCore engineers will give three talks in the Ada devroom on Sunday the 6th of February.

Hope to see you virtually at FOSDEM this weekend!

Ada/SPARK Crate Of The Year 2021 Winners Announced! Thu, 27 Jan 2022 05:04:00 -0500 Fabien Chouteau In June of 2021 we announced the launch of a new programming competition called Ada/SPARK Crate Of The Year Awards. We believe the Alire source package manager is a game changer for Ada/SPARK, so we want to use this competition to reward the people contributing to the ecosystem. Today we are pleased to announce the results.

But first, we want to congratulate all the participants, and the Alire community at large, for reaching 200 crates in the ecosystem in January of this year. We truly believe in a bright future for the Ada/SPARK open-source ecosystem with Alire at the forefront. Reaching this milestone is a great sign, inside and outside the Ada/SPARK community, of the evolution and the energy of the ecosystem.

Without further ado, the winners of the 2021 Ada/SPARK Crate of the Year Awards are:

The Ada Crate of the Year Prize is awarded to Septum by Paul Jarrett.

Septum is a context-based code search tool, in the author's words:

“Septum is like grep, but searches and returns matching contexts of contiguous lines, rather than just single lines or a multi-line search mode.”

It is an “end user” application that is not only useful for Ada/SPARK programmers but to the developer community at large. Septum is based on a couple of other crates that Paul also contributed (dir_iterators, trendy_terminal, trendy_test) showing a good use of the source package manager’s modularity.

Last but not least, Paul used GitHub Actions and his own unit testing framework (trendy_test) to perform automatic quality assurance on the project. A great showcase of the best open-source software practices for Ada development.

The Embedded Crate of the Year Prize is awarded to rp2040_hal by Jeremy Grosser.

The RP2040 is a new ARM microcontroller developed by the Raspberry Pi foundation, and HAL stands for Hardware Abstraction Layer. rp2040_hal is a crate providing drivers for the peripherals of this trendy new embedded computer.

The RP2040 is getting very popular in the open-source/open-hardware community, in part because it is not as heavily impacted by the worldwide semiconductor shortage, but mostly because of its great design and low price. Therefore, having first-class support for the RP2040 in Ada is great for the adoption of the language and opens a lot of possibilities for the community.

Along with the rp2040_hal crates, Jeremy and other contributors added a few other crates providing a Board Support Package (BSP) and examples for the official development board of the Raspberry Pi, the Raspberry Pi Pico. Once again a nice use of package management to improve code re-use, as other Ada developers can now develop BSPs for new boards based on the RP2040.

And a special mention goes to the excellent documentation provided by Jeremy, and to the low-level driver unit testing.

The SPARK Crate of the Year Prize is awarded to spark_unbound by Manuel Hatzl.

spark_unbound provides unbounded data structures, such as unbound arrays, verified with SPARK. Rather than expecting every allocation to succeed, users of this library have to explicitly handle cases where fresh memory is not available. For instance, resizing an array may not be possible in case of memory exhaustion.

The advantage of SPARK here is twofold. On one hand, Manuel used SPARK and GNATprove to ensure that his implementation of an unbound array is safe. On the other hand, users of this library programming in SPARK will be able to prove that their applications are protected against memory exhaustion.

We are eager to see the evolution of this crate, with maybe an option to provide a custom memory allocator for embedded applications.


Thanks again to all the participants and stay tuned for more news on the Ada/SPARK Crate Of The Year Awards.

SPARKNaCl - Two Years of Optimizing Crypto Code in SPARK (and counting) Wed, 15 Dec 2021 06:59:00 -0500 Yannick Moy

SPARKNaCl is a SPARK version of the TweetNaCl cryptographic library, developed by formal methods and security expert Rod Chapman. For two years now, Rod has been developing and optimizing this open-source cryptographic library while preserving the automatic type-safety proof across code changes and tool updates. He has recently given a talk about this experience that I highly recommend.

In case you'd like to know more, but you're not yet ready to jump into the code on GitHub, you can check out the blog series that Rod has posted over the last two years about the project:

  1. Proving properties of constant-time crypto code in SPARKNaCl
  2. Performance analysis and tuning of SPARKNaCl
  3. Doubling the Performance of SPARKNaCl (again...)
  4. SPARKNaCl with GNAT and SPARK Community 2021: Port, Proof and Performance

Makes for nice reading over Christmas!

Fuzz Testing in International Aerospace Guidelines Mon, 06 Dec 2021 08:36:00 -0500 Paul Butcher

For obvious reasons, civilian aerospace is steeped in safety regulation. Long-standing international governing bodies mandate and oversee the specification, design, and implementation of civil avionics such that failure conditions that could lead to safety hazards are identified, assessed, and mitigated.

During FuzzCon Europe 2021, Paul Butcher talked about considerations over why international aerospace regulatory bodies felt additional guidelines that combine aviation safety and security were needed in the form of an "Airworthiness Security Process".

Through the HICLASS UK research group, AdaCore has been developing security-focused software development tools that are aligned with the objectives stated within the avionics security standards. In addition, they have been developing further guidelines that describe how vulnerability identification and security assurance activities can be described within a Plan for Security Aspects of Certification.

What Do We Mean by Airworthiness?

The number one priority with civilian air travel is human safety. Everything else is secondary and, while other factors, including security, are important, none of them will ever be placed before human safety. In this context, human safety is focused on any persons involved in the operation of the air vehicle (i.e., passengers, the flight crew, the ground crew, etc.).

Military Aviation Authority, Master Glossary (MAA 02 Glossary)

This regulatory-enforced approach to air travel is set out in international legislation and can be summarised by the term "Airworthiness". Operators wanting to fly their air vehicles need to gain airworthiness approval from the regulatory authority responsible for the airspace they want to travel within; more specifically, from organisations such as the Federal Aviation Administration (FAA) (part of the U.S. Department of Transportation), the European Union Aviation Safety Agency (EASA) (for air corridors across Europe), and others.

Ensuring our air vehicles are airworthy, and therefore safe for flight, is a challenge. However, a bigger challenge is convincing the regulatory authorities that the vehicles are safe! Here, approaches like safety cases are used to document clear, concise, and convincing safety arguments. The goal of these arguments is to convince the certification authority that the risk of an air-vehicle system failure (that could lead to a safety hazard) is as low as reasonably practical.

Fortunately, the regulatory authorities provide help in the form of "Advisory Circulars" (ACs) that stipulate that certain standards and guidelines are deemed as an acceptable means of compliance (AMC) with specific aspects of airworthiness. DO-178C, titled "Software Considerations in Airborne Systems and Equipment Certification", is a prime example and is often used to gain approval of the safe usage of commercial software-based aerospace systems.

It is also fair to say that, even considering the complex, thorough, and mandated regulatory safety processes, the industry is very good at achieving airworthiness certifications. This is good, and we should all sleep better knowing that these safeguards are in place!

Security Trends in Modern Civil Avionics

So, if civilian air travel is already very safe, why do we need an "Airworthiness Security Process"? This can be partially answered by a keynote address made by Robert Hickey during the 2017 CyberSat Summit:

“We got the airplane on Sept. 19, 2016. Two days later, I was successful in accomplishing a remote, non-cooperative, penetration.” (Ref: Aviation Today)

In order to understand the full context of that statement, I would encourage you to read the full article. However, what was fascinating to me about this hack was that Robert Hickey stated that this was not conducted in a laboratory but on a civilian aircraft parked at the airport in Atlantic City.

[Which] means I didn’t have anybody touching the airplane, I didn’t have an insider threat. I stood off using typical stuff that could get through security and we were able to establish a presence on the systems of the aircraft.

What is perhaps more worrying is that the report goes on to imply that the Avionics Original Equipment Manufacturers (OEMs) involved later declared that they were aware of this exploit path, as well as many others.

Another interesting statement made by Robert Hickey during that keynote address concerned the staggering estimated cost of patching software in a deployed avionics system:

The cost to change one line of code on a piece of avionics equipment is $1 million, and it takes a year to implement.

Clearly, this emphasises the secondary need for the aircraft industry to construct safe and secure aircraft systems.

Why the Need for Aviation Security Standards?

One reason this situation could have arisen is the terminology used within the existing safety guidelines. "Failure Conditions" are widely understood within the industry to be resulting scenarios that directly affect the vehicle and/or its occupants. These conditions are caused by internal failures, system errors, environmental operating conditions, extreme external events such as atmospheric conditions, and other scenarios such as bird strikes and baggage fires.

In order to gain airworthiness, all aircraft Failure Conditions need to be identified, analysed, and understood such that the resulting effect can be categorized, associated with any known safety hazard, and mitigated if appropriate.

Failure Conditions are then sorted into the following categories:

  • Catastrophic - Failure may cause deaths, usually with loss of the airplane.
  • Hazardous - Failure has a large negative impact on safety or performance, or reduces the ability of the crew to operate the aircraft due to physical distress or a higher workload, or causes serious or fatal injuries among the passengers.
  • Major - Failure significantly reduces the safety margin or significantly increases crew workload. May result in passenger discomfort (or even minor injuries).
  • Minor - Failure slightly reduces the safety margin or slightly increases crew workload. Examples might include causing passenger inconvenience or a routine flight plan change.
  • No Effect - Failure has no impact on safety, aircraft operation, or crew workload.

The problem, however, is that there is no explicit consideration of cyber-threats acting as events that can lead to a failure condition. In order to address this shortfall, two new working groups within the RTCA and EUROCAE were formed and tasked with producing an Airworthiness Security Process.

This led to the birth of a set of standards and guidelines widely known as the ED-202A/DO-326A set, and an early action of this joint committee was to bring a new term to the table, namely a "Threat Condition".

"A condition having an effect on the aeroplane and/or its occupants, either direct or consequential, which is caused or contributed to by one or more acts of intentional unauthorised electronic interaction, involving cyber threats, considering flight phase and relevant adverse operational or environmental conditions. Also see failure condition." (Ref: ED-202A/DO-326A)

Here the terminology is deliberately succinct; a Threat Condition focuses on the effect of a cyberattack on the air vehicle. This also makes it very clear that the primary purpose of the Airworthiness Security Process is to ensure the safety of flight.

Airworthiness Security Process (AWSP)

The process comprises seven main stages broken down into sub-stages with identified stage inputs and stage outputs. The initial phase is known as the "Plan for Security Aspects of Certification" (PsecAC), and it is here that we set our security goals and how we intend to security test our application. Much like a "Plan for Safety Aspects of Certification", we need to ensure our regulatory authority accepts our plan before we commence with our development and test phases.

However, the narrative of the process should not be considered in any way linear. Instead, it tends to jump between sub-stages and loops around groups of stages as security assurance is reassessed, risk mitigation readdressed, and security development reworked. The process details are too complex to address in any detail within this blog post. However, one area of particular interest, where fuzz testing can play a crucial role, is within the "Security Effectiveness Assurance" stage.

Security Effectiveness Assurance

This phase aims to show compliance with security requirements and to evaluate the effectiveness of implemented security measures. More specifically, we need to verify that we have satisfied any explicit security requirements, demonstrate the effectiveness of any security measures to protect our identified security assets, and provide evidence to argue that our system is free of vulnerabilities. Note that in the context of ED-202A/DO-326A, the definition of vulnerability states that it has to be demonstrably exploitable.

Fuzz testing is one such means of meeting objectives within the Security Effectiveness Assurance phase due to three primary reasons:

  1. Fuzz testing can assess the effectiveness of a security measure.
  2. Fuzz testing can identify vulnerabilities in the form of exploitable software bugs.
  3. Fuzz testing can therefore help identify security assets.

DO-356A / ED-203A and the Introduction of Security Refutation

The ED-202A/DO-326A Airworthiness Security Process is supported by a set of guidelines stated within ED-203A/DO-356A, titled "Airworthiness Security Methods and Considerations". Here, the reader is introduced to the term "Refutation". The aim of the Refutation phase is to assess the security assurance of the system under test.

Refutation is all about refuting that the system is secure, and this negative take on standard verification-based testing (positive testing) is very deliberate. The intention is to direct the focus of the activity towards the mindset of an attacker. It is advised that multiple activities should be adopted to make up the Refutation testing phase, and the guidelines suggest that the following should be considered:

  • Security penetration testing
  • Fuzzing
  • Static code analysis
  • Dynamic code analysis
  • Formal proofs

Fuzz testing is traditionally considered a negative testing capability and is therefore particularly well suited to refutation testing. Unfortunately, the guidelines around how to include a fuzzing campaign within a PsecAC are lacking. An industrial working group within HICLASS highlighted this gap and made a clear appeal for a better understanding of the technology.

Guidelines and Considerations Around ED-203A / DO-356A Security Refutation Objectives

In order to meet this industry need, AdaCore produced a technical paper to provide additional considerations and guidelines over how to include a security refutation activity (including fuzz testing) within a PsecAC.

The paper is freely available via AdaCore's tech papers website, and we gratefully accept any feedback that experts in the field of fuzz testing want to provide. One area of particular interest within this paper is the recommendation that a fuzzing campaign plan includes both "starting criteria" and "stopping criteria".

The starting criteria focus on the quality of a particular fuzz test's starting corpus and argue that a good aim is to achieve 100% statement coverage. Once the starting criteria have been satisfied, we can run the fuzzing campaign until the stopping criteria are met.

The stopping criteria guidelines state that a formula should be derived that determines the campaign duration. The formula's goal is to argue that the resulting duration is complementary to the level of security assurance the test is trying to achieve. Factors should include (but not be limited to):

  • the average achievable test execution speed;
  • the security assurance level of the targeted security measures;
  • the cyclomatic complexity of the control flow of the application under test;
  • the measured complexity of the test input data structure.

GNATfuzz for Airworthiness Security Assurance

Within HICLASS, AdaCore has researched and developed a fuzz testing capability for applications written in Ada and SPARK. GNATfuzz is being developed with security effectiveness assurance and security refutation objectives at the forefront of its high-level requirements. In addition, a secondary aim is to ensure the complexity of the setup, build, and execution of the fuzzing campaign is encapsulated away from the user, and this is achieved through a high level of test harness code automation.

How to fuzz software applications with GNATfuzz

More information about GNATfuzz and why Ada's rich runtime constraint checking capability makes it an excellent language of choice for fuzz testing can be found within the following AdaCore blog. In addition, if you would like to hear further thoughts from AdaCore about fuzz testing, please listen to our interview with Philip Winston on the IEEE Software Engineering Radio Podcast.

For a demonstration of the capabilities of this tool and the entire Fuzz Con Europe 2021 talk, please follow this link. To learn more about the HICLASS initiative, please look here.

Final Thoughts...

Fuzz testing is not a traditionally used technique within aerospace. However, the emergence of security guidelines such as ED-202A/DO-326A forces the industry to think again. Mature software testing approaches now need to adapt to new regulatory requirements around cyber threats.

In addition, avionics software development life-cycle plans now need to include considerations around security assurance testing and identifying exploitable software bugs. Where other industries, such as automotive and IoT, have been early adopters of fuzzing, aerospace is now playing catch up.

An Embedded USB Device stack in Ada Wed, 03 Nov 2021 10:28:00 -0400 Fabien Chouteau

Since the early days of my embedded Ada/SPARK adventures, starting the Ada Drivers Library project, making demos on various micro-controllers or publishing projects on this blog, my goal has always been to develop pure Ada/SPARK embedded software, drivers and support libraries.

It is not necessarily the most effective way of integrating Ada/SPARK in a project; writing a binding for a tried and tested C library will often be a smarter choice. But my aim is not always a short-term solution: I want to show the capabilities of Ada/SPARK for embedded systems and build an ecosystem.

This is why, a couple of years ago, I started to tackle what was probably my most daunting project at the time: an embedded USB Device stack written 100% in Ada.

USB is a complex protocol stack without a lot of beginner-friendly documentation available online. And up until a few years ago, USB stacks provided by hardware vendors were the only option.

Things are changing now with, for instance, the TinyUSB project. The turning point for me was the discovery of the “USB in a nutshell” website. Reading this guide to the internals of the USB protocol gave me the confidence I needed to start this project, and I highly recommend it to anyone interested in USB.

I had a first working version of the USB stack for the STM32F405 micro-controller about four years ago, but I couldn’t find the time to make this implementation clean and micro-controller agnostic until last year. I resumed my work, this time on the Microchip SAMD51, with the goal of making the stack reusable and available in the Alire ecosystem.

Scope and State of the Project

I call the project presented here an “embedded USB Device stack”.

“Embedded” because it can run on resource-limited systems (micro-controllers) and it is compatible with limited Ada run-times, including a Zero-Footprint (ZFP) run-time.

“Device” because there are two sides to the USB protocol: devices (keyboards, webcams, external drives, printers, etc.) and hosts (computers to plug devices into). The stack presented here implements the device side, meaning it can be used to implement a USB device (keyboard, webcam, etc.). Some micro-controllers are also capable of acting as USB hosts, therefore the stack could be improved in the future to also support this mode.

About the state of the project, I want to say that it is mostly a prototype at this stage and therefore not recommended for production use.


The overall design of my USB stack is inspired by both the libopencm3 and TinyUSB implementations, which are written in C. It is of course adapted to the strengths and weaknesses of the Ada language. The main difference probably lies in the heavy use of the C preprocessor in many USB stacks, which is very convenient for assembling USB descriptors or enabling/disabling features at compile time.

Even if, in theory, preprocessing is possible in Ada, I decided to stay away from it and embrace the difference. The stack will, for instance, build device descriptors at run-time, which will use more CPU, but that means it can be reconfigured to provide different USB device classes depending on the run-time context.

Part of the design decisions also originate from my desire to move this implementation to SPARK at some point. Right now I am far from it, but I did avoid some Ada features that are not available in SPARK.

Control Transfer State Machine

The main part of the USB Stack is the handling of control transfers. It is based on a state machine that handles Setup Requests and the data payload associated with them. The payload can be either sent by the Host with the Setup Packet (Host to Device), or sent by the Device as an answer to the Setup Packet (Device to Host). A zero-length-packet (ZLP) is sometimes used for acknowledgement of the Setup Request. If the Device doesn't support a given Setup Packet sent by the Host, the control end-points are stalled to indicate an error.

Some standard Setup Requests are handled directly by the stack, while others are dispatched to the USB classes.
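
As an illustration, the control transfer handling described above could be modeled along these lines. The package, type, and state names below are hypothetical sketches, not the stack's actual declarations:

```ada
--  Hypothetical sketch of a control transfer state machine; the
--  names here are illustrative, not the stack's actual declarations.

package Control_Sketch is

   type Control_State is
     (Idle,        --  Waiting for a Setup Packet
      Data_Out,    --  Receiving the payload from the Host
      Data_In,     --  Sending the payload to the Host
      Status_Out,  --  Expecting a zero-length-packet from the Host
      Status_In,   --  Sending a zero-length-packet acknowledgement
      Stalled);    --  Unsupported Setup Packet: end-points stalled

   type Control_Machine is record
      State : Control_State := Idle;
   end record;

end Control_Sketch;
```

On each event from the bus, the stack advances a machine of this kind, moving to a data phase or status phase depending on the Setup Packet, and stalling the control end-points when the request is not supported.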

USB Classes

In the USB protocol, a class is a standardized service that a USB device can provide to the host. In a way, they are comparable to HTTP or FTP for the TCP/IP stack.

Some of the most common classes are:

  • Human Interface Device (HID): for mice, keyboards, joysticks, game pads, etc.

  • Mass Storage Class (MSC): for memory sticks or external hard drives

  • Video: for webcams

  • Audio: for sound cards and microphones, but also MIDI controllers

A USB device can implement one or more classes; this is how your webcam can do both video and audio.

As long as a device implements standard classes, it can be used on any host with the corresponding class support. This is a key part of USB’s success in my opinion, devices are portable across operating systems (Windows, Linux, macOS) as long as standard class drivers are available.

The standard also allows for vendor specific classes. One should stay away from implementing vendor classes because they require specific drivers on the host and therefore lose the portability of USB.

In my USB stack, classes are implementations of a limited Ada interface called "USB_Device_Class". To implement a new class, a set of primitives must be overridden:

  • Initialize: to request resources from the USB stack such as end-points and transfer buffers

  • Fill_Config_Descriptor: to provide the class specific descriptor during enumeration

  • Setup_Read_Request and Setup_Write_Request: to handle class specific USB setup requests

  • Transfer_Complete: to handle the completion of a transfer on one of the class end-points

  • etc.

Users of the stack can then dynamically register classes to compose the services of their device. For instance, combining HID and MSC.
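
Based on the primitives listed above, the class interface can be sketched roughly as follows. The parameter profiles here are simplified guesses for illustration, not the stack's exact declarations:

```ada
--  Rough sketch of the class interface; parameter profiles are
--  simplified for illustration and do not match the stack exactly.

package Class_Sketch is

   type Byte_Array is array (Positive range <>) of Natural range 0 .. 255;

   type USB_Device_Class is limited interface;

   procedure Initialize
     (This : in out USB_Device_Class) is abstract;
   --  Request resources (end-points, transfer buffers) from the stack

   procedure Fill_Config_Descriptor
     (This : in out USB_Device_Class;
      Data :    out Byte_Array) is abstract;
   --  Provide the class specific descriptor during enumeration

   procedure Setup_Read_Request
     (This : in out USB_Device_Class) is abstract;
   --  Handle a class specific setup request (Device to Host)

   procedure Setup_Write_Request
     (This : in out USB_Device_Class) is abstract;
   --  Handle a class specific setup request (Host to Device)

   procedure Transfer_Complete
     (This : in out USB_Device_Class) is abstract;
   --  Handle completion of a transfer on one of the class end-points

end Class_Sketch;
```

A concrete class such as HID or MSC then derives from the interface and overrides these primitives.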

A couple of common off-the-shelf classes are already implemented in the stack and can be used as is:

  • HID (keyboard, mouse, and gamepad)

  • Serial over USB

  • MIDI (instrument).

The plan is to provide more off-the-shelf classes in the future.

Porting the USB Device stack

The stack is based on a Hardware Abstraction Layer (HAL) that defines an Ada interface called USB.HAL.Device.USB_Device_Controller. To use the USB Device stack on a micro-controller, one must provide an implementation of this interface.

Here are some of the primitives to be overridden:

  • Poll: this function returns a record describing an event from the USB bus (e.g. transfer complete, setup request)

  • EP_Write_Packet: this procedure configures an End-Point (EP) to send data to the host

  • EP_Ready_For_Data: this procedure configures an End-Point to be ready to receive data from the host

  • Set_Address: this procedure sets the address of the device on the bus

  • etc.
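
A simplified sketch of such a controller interface might look like this. The event record and parameter profiles are illustrative assumptions, not the exact declarations from USB.HAL.Device:

```ada
--  Illustrative sketch; not the exact USB.HAL.Device declarations.

package HAL_Sketch is

   type UDC_Event_Kind is
     (None, Reset, Setup_Request, Transfer_Complete);

   type UDC_Event is record
      Kind : UDC_Event_Kind := None;
      --  The real record also carries end-point numbers, setup
      --  packet data, etc.
   end record;

   type USB_Device_Controller is limited interface;

   function Poll
     (This : in out USB_Device_Controller) return UDC_Event is abstract;
   --  Report the next event from the USB bus, if any

   procedure Set_Address
     (This : in out USB_Device_Controller;
      Addr : Natural) is abstract;
   --  Set the address of the device on the bus

end HAL_Sketch;
```

A port to a new micro-controller boils down to implementing this interface on top of the vendor's USB peripheral registers.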

Testing the USB Device stack

In theory, testing USB involves at least two machines, with kernel code on one side for host drivers. That is difficult to put in place and maintain, and it is also not very compatible with Continuous Integration setups. Instead, I decided to create a framework to simulate USB exchanges.

The framework plays predefined scenarios (descriptor requests, set address, enumeration) and checks that the stack behaves as expected. It can also test the stack's behavior when the host requests unknown descriptors. More test scenarios can be added in the future as problems are detected.

This framework is very good for regression testing and is easy to run; on the other hand, writing scenarios is not easy. Therefore I am looking at other options for testing the stack against a real host, like using the Linux Raw_Gadget interface to simulate a USB device from Linux userland, or the QEMU emulator.


The stack is now available in Alire as the usb_embedded crate.

A USB_Device_Controller implementation is available for the Microchip SAMD51 in the samd51_hal Alire crate. And if you own one of the AdaFruit PyGamer boards, you can try the example project I made, which makes the PyGamer act as a USB gamepad.

Starting micro-controller Ada drivers in the Alire ecosystem Mon, 18 Oct 2021 04:12:00 -0400 Fabien Chouteau

$ alr init --lib samd21_hal
$ cd samd21_hal
$ alr with gnat_arm_elf  # Add a dependency on the arm-elf compiler
for Target use "arm-elf";
for Runtime ("Ada") use "light-cortex-m0p";
$ mv src/ src/
$ sed -i 's/Samd21_Hal/SAM/g' src/
$ alr build
$ wget
$ unzip Atmel.SAMD21_DFP.1.3.395.atpack -d samd21_atpack
$ svd2ada samd21_atpack/samd21a/svd/ATSAMD21G18A.svd --boolean -o src -p SAM_SVD --base-types-package HAL --gen-uint-always
$ alr with hal
$ alr build
$ cd ..
$ alr init --bin metro_m0_example
$ cd metro_m0_example
$ alr with samd21_hal --use=../samd21_hal
for Target use "arm-elf";
for Runtime ("Ada") use "light-cortex-m0p";
$ alr build
warning: cannot find entry symbol _start; defaulting to 0000000000008000
package Device_Configuration is
   for CPU_Name use "ARM Cortex-M0P";
   for Float_Handling use "soft";

   for Number_Of_Interrupts use "42";

   for Memories use ("RAM", "FLASH");

   --  Specify from which memory bank the program will load
   for Boot_Memory use "FLASH";

   --  Specification of the RAM
   for Mem_Kind ("RAM") use "ram";
   for Address ("RAM") use "0x20000000";
   for Size ("RAM") use "0x8000";

   --  Specification of the FLASH
   for Mem_Kind ("FLASH") use "rom";
   for Address ("FLASH") use "0x08000000";
   for Size ("FLASH") use "0x40000";
end Device_Configuration;
$ cd ..
$ alr get --build startup_gen
$ cd metro_m0_example
$ eval `alr printenv`
$ ../startup_gen_21.0.0_75bdb097/startup-gen -P metro_m0_example.gpr -l src/link.ld -s src/crt0.S
CPU: ARM Cortex-M0P
Float_Handling: SOFT
Name    : RAM
Address : 0x20000000
Size    : 0x8000
Kind    : RAM
Name    : FLASH
Address : 0x08000000
Size    : 0x40000
Kind    : ROM
for Languages use ("Ada", "ASM_CPP");

package Linker is
   for Switches ("Ada") use ("-T", Project'Project_Dir & "/src/link.ld",
                             "-Wl,--print-memory-usage");
end Linker;
$ alr build
Memory region         Used Size  Region Size  %age Used
           FLASH:         756 B       256 KB      0.29%
             RAM:        4120 B        32 KB     12.57%
$ cd ..
$ alr init --lib metro_m0_bsp
$ cd metro_m0_bsp
$ alr with samd21_hal --use=../samd21_hal
$ cd ../metro_m0_example/ 
$ alr with metro_m0_bsp --use=../metro_m0_bsp
$ alr build
Enhancing the Security of a TCP Stack with SPARK Tue, 12 Oct 2021 04:21:00 -0400 Yannick Moy

You've probably never heard of CycloneTCP, an open source dual IPv4/IPv6 stack dedicated to embedded applications. That may be because people don't find and publish vulnerabilities for this stack. The quality of CycloneTCP is even acknowledged by the AMNESIA:33 report, which classifies it as one of the most resilient TCP/IP stacks.

To go beyond the usual best development practices and use of industrial testsuites, the developers of CycloneTCP at Oryx Embedded partnered with AdaCore. We worked together to replace the TCP part of the C codebase with SPARK code, and used the SPARK tools to prove both that the code is not vulnerable to the usual runtime errors (like buffer overflow) and that it correctly implements the TCP automaton specified in RFC 793. As part of this work, we found two subtle bugs related to memory management and concurrency.

For more details, see our article or watch our online presentation on October 20th at IEEE SecDev 2021.

Task Suspension with a Timeout in Ravenscar/Jorvik Tue, 05 Oct 2021 06:25:00 -0400 Pat Rogers

This blog entry shows how to define an abstract data type that allows tasks to block on objects of the type, waiting for resumption signals from other components, for at most a specified amount of time per object. This "timeout" capability has been available in Ada from the beginning, via select statements containing timed entry calls. But what about developers working within the Ravenscar and Jorvik tasking subsets? Select statements and timed calls are not included within either profile. This new abstraction will provide some of the functionality of timed entry calls, with an implementation consistent with the Ravenscar and Jorvik subsets.

In a previous blog entry we showed how to have a set of "conditions" that tasks can await, suspended, eventually to be awakened when "signaled" by some other task. Callers could await one of several conditions at the same time. However, these waiting callers blocked indefinitely, without a timeout option. That's often appropriate, but not in all cases. We will now take a different approach, defining a simpler version of "conditions" more like a condition variable or semaphore. Tasks still call Wait, now for a specific condition object passed as a parameter, and also specify how long they are willing to be suspended, waiting for some other task or interrupt handler to call Signal for that same object.

The reason these blog entries focus on "conditions" is that "condition synchronization" is one of the two forms of synchronization required for concurrent programming. (Mutual exclusion is the other form.) For example, a consumer task must wait until a shared buffer is not empty before it can remove a value from the buffer. Likewise, a producer task inserting items into the shared buffer must wait until the buffer is not full. Protected entry barriers exist for the sake of expressing these sorts of Boolean conditions. However, as mentioned, for Ravenscar and Jorvik use we need an alternative mechanism.

You should understand that this mechanism does not provide the full capabilities of timed entry calls. Condition objects are not entries, they are just flags, or events, and as such do not include an entry body that can provide application-specific functionality. Unlike a timed entry call, a call to Wait is not a request for a service to be provided (strictly, started to be provided) within a given time. Instead, a call to Wait requests notification that a condition has been satisfied, or if you like, an event has occurred, within the specified time. The analogue in full Ada would be a select statement containing a timed call to a protected entry with a null body. Any application-specific functionality corresponding with the Wait call's return -- that which a protected entry body would otherwise provide -- must be programmed separately from the call itself.
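
To make that analogue concrete, here is roughly what the full-Ada version would look like, with illustrative names (and again, this form is not legal under the Ravenscar/Jorvik subsets):

```ada
--  Full-Ada analogue: a select statement containing a timed call to
--  a protected entry. Names are illustrative.

protected Button is
   entry Wait_Pressed;
   procedure Signal_Pressed;
private
   Pressed : Boolean := False;
end Button;

protected body Button is
   entry Wait_Pressed when Pressed is
   begin
      Pressed := False;  --  consume the event (otherwise a null body)
   end Wait_Pressed;

   procedure Signal_Pressed is
   begin
      Pressed := True;
   end Signal_Pressed;
end Button;

task body Waiter is
begin
   select
      Button.Wait_Pressed;
      --  the condition was signaled within 2 seconds
   or
      delay 2.0;
      --  timed out
   end select;
end Waiter;
```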

Ada defines some standard lower-level facilities that can be used to define synchronization mechanisms, as well as used directly by applications. The most important of these are within the subsets defined by Ravenscar and Jorvik. We will use some of them to define the new capability.

Having shown how to implement the facility within the Ravenscar and Jorvik subsets, we then provide a demonstration on bare-metal hardware.


As usual, the new mechanism is designed as an abstract data type (ADT), hence a private type in Ada. As a synchronization mechanism, clients of the type have no business doing assignment between objects of this type, and language-defined equality on such objects makes no sense. Therefore, the type is limited as well as private. (As you will see, there is another good reason for the type to be limited.) The enclosing package, type declaration, and primitive subprogram declarations are as follows:

package Timed_Conditions is

   type Timed_Condition is limited private;

   procedure Wait
     (This      : in out Timed_Condition;
      Deadline  : Time;
      Timed_Out : out Boolean);

   procedure Wait
     (This      : in out Timed_Condition;
      Interval  : Time_Span;
      Timed_Out : out Boolean);

   procedure Signal (This : in out Timed_Condition);

private
   --  See the full type declaration in the Implementation section

end Timed_Conditions;

With this API, clients can declare objects of type Timed_Condition and pass them in calls to Wait and Signal. Procedure Wait is overloaded to allow expression of the timeout value either in terms of an absolute time, i.e., a point on the timeline, or a time interval. With the latter, the actual timeout is the sum of the time when the call takes place and the interval specified. Tasks calling Wait for a given Timed_Condition object suspend until either the time is reached or a call to Signal takes place for the same object. In both cases the Timed_Out parameter indicates, on return, whether or not the call returned due to the expiration of the time specified.

For example, we could declare an object of this type like so:

with Timed_Conditions;  use Timed_Conditions;

package User_Button is

   Pressed : Timed_Condition;


end User_Button;

Let's say, arbitrarily, that we want to wait at most 2 seconds for Pressed to be signaled. The task in the code below does so:

with Ada.Real_Time;    use Ada.Real_Time;
with Timed_Conditions; use Timed_Conditions;
with User_Button;

task body Waiter is
   Time_Expired : Boolean;
   Timeout      : constant Time_Span := Milliseconds (2_000); -- arbitrary
begin
   Wait (User_Button.Pressed, Timeout, Time_Expired);
   if Time_Expired then
      null;  --  respond to the timeout
   else
      null;  --  respond to the button press
   end if;
end Waiter;

The Implementation

As hinted earlier, Ada defines standard lower-level mechanisms useful for building new kinds of concurrency constructs. We will use two: "timing events" and "suspension objects," both appearing in the full definition of the ADT in the private part of the package.

The type Timing_Event is language-defined in the Ada.Real_Time.Timing_Events package. Objects of this type allow clients to specify a time when an "event" should occur. When that time is reached a user-defined protected procedure "handler" is invoked by the runtime library, performing whatever functional steps are required to implement the event. Clients may also cancel the future event, such that the handler will not be triggered. As you can imagine, this type will provide much of our timeout implementation. The pertinent parts of the API are as follows:

package Ada.Real_Time.Timing_Events is

   type Timing_Event is tagged limited private;

   type Timing_Event_Handler
     is access protected procedure (Event : in out Timing_Event);

   procedure Set_Handler
     (Event   : in out Timing_Event;
      At_Time : Time;
      Handler : Timing_Event_Handler);

   procedure Cancel_Handler
     (Event     : in out Timing_Event;
      Cancelled : out Boolean);


end Ada.Real_Time.Timing_Events;

Procedure Set_Handler allows clients to set a time when the given Timing_Event object is to be triggered, and, as well, to specify a pointer to the protected procedure to be invoked when the time is reached. Procedure Set_Handler is overloaded for convenience, the difference being a parameter of type Time_Span instead of type Time.

Note the formal parameter defined for the protected procedure handler, designated by the Timing_Event_Handler access type. Any handler must be a protected procedure with a conforming formal parameter profile.

Procedure Cancel_Handler cancels the timeout trigger for the given Timing_Event object. On return from the call the parameter Cancelled is True if the object was set prior to it being cancelled; otherwise, on return it is False. An object being "set" means that a timeout was pending and a pointer to a handler was currently assigned.

The other required lower-level mechanism, "suspension objects," is provided by the type Suspension_Object declared in the Ada.Synchronous_Task_Control package. The pertinent parts of that package are as follows:

package Ada.Synchronous_Task_Control is

   type Suspension_Object is limited private;

   procedure Set_True (S : in out Suspension_Object);

   procedure Set_False (S : in out Suspension_Object);

   procedure Suspend_Until_True (S : in out Suspension_Object);

end Ada.Synchronous_Task_Control;

A Suspension_Object variable amounts to a thread-safe Boolean flag. Clients can call Set_True and Set_False to assign the values.

Most significantly, via procedure Suspend_Until_True a client task can suspend itself until the specified flag becomes True. However, at most one task can be suspended on a given Suspension_Object variable at any given moment. Violations of that constraint raise Program_Error.

Suspension_Object variables are initially False, automatically, and are set back to False automatically on return from a call to Suspend_Until_True. As a result, in typical code Set_False is not used.

The operations Set_True and Set_False are atomic with respect to each other and with respect to Suspend_Until_True.
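For readers more familiar with mainstream threading libraries, the Suspension_Object semantics described above can be sketched in Python. This is purely an illustrative model, not part of the Ada facility: threading.Event plays the role of the flag, with the single-waiter restriction and the automatic reset on wake-up added by hand.

```python
import threading

class SuspensionObject:
    """Toy model of Ada.Synchronous_Task_Control.Suspension_Object:
    a thread-safe Boolean flag, initially False, on which at most one
    thread may wait, and which resets to False automatically when the
    waiter resumes."""

    def __init__(self):
        self._event = threading.Event()   # starts out False
        self._lock = threading.Lock()
        self._waiter_present = False

    def set_true(self):
        self._event.set()

    def set_false(self):
        self._event.clear()

    def suspend_until_true(self):
        with self._lock:
            if self._waiter_present:
                # Ada raises Program_Error for a second waiter
                raise RuntimeError("at most one task may suspend on this object")
            self._waiter_present = True
        self._event.wait()
        self._event.clear()               # automatic reset, as in Ada
        with self._lock:
            self._waiter_present = False
```

Note that the real Ada operations are atomic with respect to one another by language rule; here the lock only guards the single-waiter bookkeeping.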

The full declaration of our ADT using these two facilities is as follows:

with Ada.Real_Time;                use Ada.Real_Time;
with Ada.Real_Time.Timing_Events;  use Ada.Real_Time.Timing_Events;
with Ada.Synchronous_Task_Control; use Ada.Synchronous_Task_Control;

package Timed_Conditions is

   type Timed_Condition is limited private;
   procedure Wait 
     (This      : in out Timed_Condition; 
      Deadline  : Time;
      Timed_Out : out Boolean);

   procedure Wait 
     (This      : in out Timed_Condition; 
      Interval  : Time_Span;
      Timed_Out : out Boolean);

   procedure Signal (This : in out Timed_Condition);

private

   type Timed_Condition is new Timing_Event with record
      Timed_Out        : Boolean := False;
      Caller_Unblocked : Suspension_Object;
   end record;

   protected Timeout_Handler is 
      pragma Interrupt_Priority;      
      procedure Signal_Timeout (Event : in out Timing_Event);
   end Timeout_Handler;
   --  A shared, global PO defining the timing event handler procedure. All
   --  objects of type Timed_Condition use this one handler. Each execution of
   --  the procedure will necessarily execute at Interrupt_Priority'Last, so
   --  there's no reason to have a handler per-object.

end Timed_Conditions;

Our type Timed_Condition is visible to clients as a limited private type, so they must use it accordingly. The full view of the type in the package private part, however, indicates that much more is possible.

In particular, the full type declaration in the private part extends type Timing_Event to define the Timed_Condition type. As a result, the new type inherits all the Timing_Event capabilities, and is a tagged type because Timing_Event is tagged.

However, by design, neither the inherited operations nor the tagged nature are made part of the client API. We only want Timed_Condition clients to have the Wait and Signal operations. Completing the type declaration via inheritance in the private part of the package, rather than the public part, achieves that effect. Clients only have compile-time visibility to the partial view defined before the package private part. In contrast, the private part and package body have the full view, so the inherited operations are available there and will provide most of our timeout semantics.

Furthermore, our extended type includes a Boolean component indicating whether a timeout occurred, and a Suspension_Object component used to block and unblock caller tasks.

We made Timed_Condition a limited type in the visible part of the package (the client's partial view) for the reasons stated initially. In fact, the language requires us to do so, because the full type declaration in the private part (the full view) is itself limited. That required correspondence between the partial and full view makes sense because the client's view must be realistic with regard to the operations possible. If the type really is limited, as defined by the full view, then assignment really isn't possible. It wouldn't make sense for the client's view to indicate that assignment is possible if it really isn't. (By the same token, if the full view is not limited, the partial view is not required to be limited, but can be. If the partial view is limited but the full view is not, clients simply cannot do something that the full view allows within the package, i.e., assignment.)

So, why is the full view of type Timed_Condition limited, even though the reserved word doesn't appear in our full view? It's because we are extending a limited type. Our new package is a client of Ada.Real_Time.Timing_Events so we have the partial view of type Timing_Event. That partial view is tagged and limited. Therefore any extension is also tagged and limited.

In addition to the completion for Timed_Condition, the private part of the package also declares a single protected object, the Timeout_Handler. This protected object declares the protected procedure that will be invoked whenever any Timed_Condition object has timed out. (Note the required formal parameter's type. More on that in a moment.)

When the Ceiling_Locking protocol is applied, as it is in both Ravenscar and Jorvik, the language requires Timing_Event handlers to execute at priority System.Interrupt_Priority'Last. The pragma Interrupt_Priority achieves that effect. (The expectation is that timeout handlers are executed directly by the clock interrupt handler.)

It may seem surprising to have a single handler routine shared amongst all Timed_Condition objects. This approach works for a few reasons. First, the formal parameter to the handler gives us the specific object that has been triggered. Second, and most important, under these two profiles all handlers for Timing_Events must execute at a priority of System.Interrupt_Priority'Last, so all handlers will execute atomically, not concurrently. Therefore there is no benefit to having a dedicated protected object per Timed_Condition object.

Given that full definition, here is the corresponding package body:

package body Timed_Conditions is

   -- Wait --

   procedure Wait 
     (This      : in out Timed_Condition; 
      Deadline  : Time;
      Timed_Out : out Boolean) 
   is
   begin
      This.Set_Handler (Deadline, Timeout_Handler.Signal_Timeout'Access);
      Suspend_Until_True (This.Caller_Unblocked);
      Wait.Timed_Out := This.Timed_Out;
   end Wait;

   -- Wait --

   procedure Wait 
     (This      : in out Timed_Condition; 
      Interval  : Time_Span;
      Timed_Out : out Boolean) 
   is
   begin
      Wait (This, Clock + Interval, Timed_Out);
   end Wait;

   -- Signal --

   procedure Signal (This : in out Timed_Condition) is
      Handler_Was_Set : Boolean;
   begin
      This.Cancel_Handler (Handler_Was_Set);
      if Handler_Was_Set then
         --  a caller to Wait is suspended
         This.Timed_Out := False;
         Set_True (This.Caller_Unblocked);
      end if;      
   end Signal;

   -- Timeout_Handler --

   protected body Timeout_Handler is

      -- Signal_Timeout --

      procedure Signal_Timeout (Event : in out Timing_Event) is
         This : Timed_Condition renames Timed_Condition (Timing_Event'Class (Event));
      begin
         This.Timed_Out := True;
         Set_True (This.Caller_Unblocked);
         --  note: Event's pointer to a handler becomes null automatically
      end Signal_Timeout;

   end Timeout_Handler;

end Timed_Conditions;

When called, procedure Wait sets a timeout deadline for the specified Timed_Condition, along with a pointer to the shared Signal_Timeout handler, and then suspends the caller. If the time expires, Signal_Timeout sets the Boolean Timed_Out flag to True and then unblocks the suspended caller in Wait. If, on the other hand, procedure Signal is called prior to the timeout, the timeout is canceled, Timed_Out is set to False, and again the caller in Wait is unblocked. In either case the Wait caller is unblocked and the Caller_Unblocked variable goes back to False automatically. (It is False initially, automatically.) At that point the internal Timed_Out Boolean flag can be assigned to the Timed_Out formal parameter. Wait then exits.

Note that Signal could be called before a call to Wait has occurred for the same Timed_Condition object. And of course, it might be called after a timeout has expired. Therefore, the body of procedure Signal checks to see if Cancel_Handler actually cancelled an event timeout. It does this check via the Boolean parameter passed to Cancel_Handler, named Handler_Was_Set. If True, the timeout was pending, which means there was a caller suspended in Wait for this Timed_Condition object. In that case we set the Timed_Out Flag to False and unblock the suspended caller. If Handler_Was_Set is False, there was no pending timeout, hence no caller suspended in Wait, so nothing further is done.

An important aspect of the Timing_Event operations is that they are free of race conditions, per language rules, when operating on any given Timing_Event object. In addition, execution of Set_Handler is atomic with respect to the execution of the handler for that same object. Therefore, execution of these operations' internal statements will not be interleaved.

However, calls to them might be interleaved. For example, let's assume one task will call Wait and another task will call Signal, for the same Timed_Condition object. Wait could be about to make the call to Set_Handler, and then be preempted by the other task calling Cancel_Handler (via Signal). We know that Set_Handler and Cancel_Handler will be executed atomically, so either Set_Handler or Cancel_Handler will execute first, followed by the other. The if-statement in Signal ensures that either order works. If Set_Handler executes first, followed immediately by Cancel_Handler, the Boolean parameter Handler_Was_Set will come back True and hence Caller_Unblocked will be set to True. When Wait resumes execution it will call Suspend_Until_True but will find Caller_Unblocked already True, so it will return immediately and then finish Wait's execution. Alternatively, if Cancel_Handler executes first, Handler_Was_Set will be False and nothing further will be done in Signal. The call to Wait will then continue as usual, waiting for Signal to be called. If that should happen prior to the timeout, the application must be structured such that another call to Signal eventually occurs; these are not persistent signals.
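To see the whole Wait/Signal/timeout protocol in action outside an Ada runtime, here is a rough Python model. It is purely illustrative: the TimedCondition class name is mine, threading.Timer stands in for the Timing_Event mechanism, and an explicit lock stands in for the atomicity that the Ada language rules provide.

```python
import threading

class TimedCondition:
    """Toy model of the Timed_Condition ADT: wait blocks until either
    signal is called or the interval expires, and reports which of
    the two occurred."""

    def __init__(self):
        self._unblocked = threading.Event()   # models Caller_Unblocked
        self._lock = threading.Lock()         # models Set/Cancel atomicity
        self._timer = None                    # models the handler pointer
        self._timed_out = False

    def _on_timeout(self):                    # models Signal_Timeout
        with self._lock:
            self._timer = None                # handler pointer goes null
            self._timed_out = True
            self._unblocked.set()

    def wait(self, interval):
        with self._lock:                      # models Set_Handler
            self._timer = threading.Timer(interval, self._on_timeout)
            self._timer.start()
        self._unblocked.wait()
        self._unblocked.clear()               # auto-reset, as in Ada
        return self._timed_out                # True iff the timeout fired

    def signal(self):
        with self._lock:                      # models Cancel_Handler
            if self._timer is not None:       # a caller is suspended in wait
                self._timer.cancel()
                self._timer = None
                self._timed_out = False
                self._unblocked.set()
```

As in the Ada version, signal is a no-op when no timeout is pending, and either ordering of the cancellation and the wait setup leaves the protocol consistent.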

Finally, recall we said there was something to mention about the formal parameter profile for Signal_Timeout. Specifically, the type of the formal parameter must always be Timing_Event; otherwise the protected procedure would not be compatible with the access type. The runtime system will automatically call Signal_Timeout for us if/when the timeout expires, and will pass the specific Timed_Condition object to the handler. But although it is a Timed_Condition object, the view is of a Timing_Event object, because that is the type of the formal parameter. Therefore, we have to convert the view inside the procedure from type Timing_Event to type Timed_Condition. Without the conversion, the view as a Timing_Event parameter would not allow the handler body to reference the extension components Timed_Out and Caller_Unblocked. The view conversion appears in the renaming declaration. It's a bit ugly, but it is always the same approach: convert "up" to the "base" type, i.e., the root class-wide type, and then "down" to the specific derived type. The compiler may emit code to check that the right target type is actually involved, or it might recognize that, in this case, the view conversion is always correct.

The Example

Now that we have the facility in place, let's look at an example. We'll use one of the STM32 Discovery Kit boards, which has a user button and some LEDs on it. A task will call Wait on a Timed_Condition variable, and an interrupt handler for the user button will Signal that same Timed_Condition variable. If the user doesn't press the button prior to the timeout deadline, the waiting task will turn on the orange LED and turn off the green LED. If the user does press the button in time, the waiting task will turn on the green LED and turn off the orange LED. This processing continues until power is pulled.

First, here's the declaration for the library package containing the waiter task. Ravenscar and Jorvik require all tasks to be declared at the library level:

package LED_Controller is
   task Control;
end LED_Controller;

We'll take the default task priority and stack size for Control.

Next, the package body, which I promise is more interesting:

with Ada.Real_Time;  use Ada.Real_Time;
with STM32.Board;    use STM32.Board;
with User_Button;
with Timed_Conditions; use Timed_Conditions;

package body LED_Controller is
   -- Control --

   task body Control is
      Time_Expired : Boolean;
      Timeout      : constant Time_Span := Milliseconds (2_000); -- arbitrary
   begin
      loop
         Wait (User_Button.Pressed, Timeout, Time_Expired);
         if Time_Expired then
            --  no press before the deadline: orange on, green off
            Orange_LED.Set;
            Green_LED.Clear;
         else
            --  button pressed in time: green on, orange off
            Green_LED.Set;
            Orange_LED.Clear;
         end if;
      end loop;
   end Control;
end LED_Controller;

As the comment indicates, the timeout of two seconds is entirely arbitrary.

Package STM32.Board defines the devices on the STM32F407 Discovery board. In this code we use the two LEDs and the blue user button. Package User_Button is defined here to declare the Timed_Condition variable Pressed, the button hardware initialization routine, and the button interrupt handler. Here's the package declaration:

with Timed_Conditions; use Timed_Conditions;

package User_Button is

   procedure Initialize (Use_Rising_Edge : Boolean := True);
   Pressed : Timed_Condition;

end User_Button;

There we see the variable and the hardware initialization procedure. The interrupt handler is declared within the package body:

with STM32.Board;   use STM32.Board;
with STM32.Device;  use STM32.Device;
with STM32.GPIO;    use STM32.GPIO;
with STM32.EXTI;    use STM32.EXTI;
with System;

package body User_Button is

   Button_High : Boolean := True;

   EXTI_Line : constant External_Line_Number := User_Button_Point.Interrupt_Line_Number;
   -- Button --

   protected Button with
     Interrupt_Priority => System.Interrupt_Priority'Last
   is
      procedure Interrupt with
        Attach_Handler => User_Button_Interrupt;
   end Button;

   -- Button --

   protected body Button is

      -- Interrupt --

      procedure Interrupt is
      begin
         Clear_External_Interrupt (EXTI_Line);
         if (Button_High and then User_Button_Point.Set)
           or else (not Button_High and then not User_Button_Point.Set)
         then
            --  we would de-bounce the button, but no need for this demo
            Timed_Conditions.Signal (Pressed);
         end if;
      end Interrupt;

   end Button;

   -- Initialize --

   procedure Initialize (Use_Rising_Edge : Boolean := True) is
   begin
      Enable_Clock (User_Button_Point);

      User_Button_Point.Configure_IO
        ((Mode      => Mode_In,
          Resistors => (if Use_Rising_Edge then Pull_Down else Pull_Up)));

      --  Connect the button's pin to the External Interrupt Handler
      User_Button_Point.Configure_Trigger
        (if Use_Rising_Edge then Interrupt_Rising_Edge else Interrupt_Falling_Edge);

      Button_High := Use_Rising_Edge;
   end Initialize;

end User_Button;

The protected procedure Button.Interrupt is the handler for the interrupt, not surprisingly. When the hardware interrupt occurs, if the physical button has been pressed Signal is called. The details of setting up the interrupt are not particularly pertinent. It is worth mentioning, for the sake of clarity, that User_Button_Point is a GPIO port/pin pair that is defined by package STM32.Board.

Finally, the main procedure. The task and interrupt handler do all the work, but the main procedure first initializes the hardware, including the two LEDs.

with Ada.Real_Time; use Ada.Real_Time;
with Last_Chance_Handler;  pragma Unreferenced (Last_Chance_Handler);
--  The "last chance handler" is the user-defined routine that is called when
--  an exception is propagated. We want it in the executable, therefore it
--  must be somewhere in the closure of the context clauses.
with STM32.Board;   
with System;
with LED_Controller; pragma Unreferenced (LED_Controller);
with User_Button;

procedure Test is
   pragma Priority (System.Priority'Last);
begin
   --  initialize the LEDs and the user button hardware first
   STM32.Board.Initialize_LEDs;
   User_Button.Initialize;
   loop
      delay until Time_Last;
   end loop;
end Test;

The main procedure specifies the highest non-interrupt priority for the environment task so all the hardware initialization occurs first. The task in package LED_Controller is activated automatically, and eventually calls Wait. At some point someone will press the physical button on the board, generating the interrupt. The LEDs will be lit accordingly.

The use of pragma Unreferenced prevents the compiler from issuing warnings about the fact that the main procedure doesn't do anything with certain packages. Ordinarily we'd want such warnings. But if they are not referenced, why mention them? For the packages to appear in the executable, they must appear somewhere in the transitive closure of the with-clauses. There's no reference to them elsewhere in the example code, so we pull them in here, in the main, and tell the compiler that it's OK.

After setting up the hardware, the main procedure goes into an infinite loop. That's required because tasks in Ravenscar and Jorvik should never complete and terminate -- including the implicit environment task that calls the main procedure. My personal preference is to have an extremely long delay inside the loop so that the main doesn't consume CPU cycles. But a null loop would work too, as long as the environment task is given a priority lower than any tasks that will be doing actual application processing. In this example we wanted the main procedure to do something before the Control task, without actually synchronizing with the task, so we gave it the highest priority. With that priority something that actually suspended the environment task was required, rather than a null loop. An elegant alternative to the long delay would be to suspend on another Suspension_Object variable that will never be set to True. (Thanks to Bob Duff for that suggestion.)

The code we used for the STM32 board and drivers is part of the Ada Drivers Library (ADL), provided by AdaCore and the Ada community. The ADL is available on GitHub for both non-proprietary and commercial use.

As always, questions and comments are welcome!

A design pattern for OOP in Ada Mon, 13 Sep 2021 11:04:00 -0400 Fabien Chouteau

When I do Object Oriented Programming with Ada, I tend to follow a design pattern that makes it easier for me and hopefully also for people reading my code.

For this post I will use the all time classic example of OOP, a Graphic User Interface framework. Here’s what a widget specification looks like:

package Widget.Button is

   subtype Parent is Widget.Instance;

   type Instance is new Parent
   with private;
   subtype Class is Instance'Class;

   type Acc is access all Instance;
   type Any_Acc is access all Class;

   procedure Event (This : in out Instance; Evt : Event_Kind);

   procedure Draw (This : in out Instance);

private

   subtype Dispatch is Instance'Class;

   type Instance is new Parent
   with record
      C : Boolean := False;
   end record;

end Widget.Button;

Now let’s have a look at each element.

Type Instance is [...]

package Widget.Button is

   type Instance is [...]
   with private;

I define one tagged type (object) per package and the name of this type is always “Instance”.

As we all know, naming is the hardest thing in programming, so having to find meaningful names for both the package and type is annoying at best.

Another solution is to use plural for package names and singular for the types:

package Widgets.Buttons is

   type Button is [...]
   with private;

But there is one other benefit to using the same type name in every package: easier refactoring.

We know that Ada is an amazing language when it comes to safe refactoring, thanks to its strong typing and powerful specifications. One might say that Ada provides “fearless refactoring”. The drawback is that changing the signature of a method, for instance, means a lot of code to edit.

With this design pattern, an inherited method looks exactly the same for all types:

procedure Event (This : in out Instance; Evt : Event_Kind);

So we can just copy/paste it everywhere it is needed when the signature changes.

Using the other plural/singular naming convention, the type of “This” changes every time.

procedure Event (This : in out Button; Evt : Event_Kind);

procedure Event (This : in out Checkbox; Evt : Event_Kind);

This may look like a detail but it makes sense to me, and I like the consistency of this convention.

The declaration of widgets looks like this:

with Widget.Button;
with Widget.Checkbox;
use Widget;
   B : Button.Instance;
   C : Checkbox.Instance;

Class, Acc and Any_Acc

subtype Class is Instance'Class;

type Acc is access all Instance;
type Any_Acc is access all Class;

The next type and subtype declarations follow the same idea: they are always the same for every object.

The subtype “Class” is useful when writing class-wide subprograms, e.g.:

procedure Something (B : Button.Class);

The type “Acc” is a general access type for the instance:

B : Button.Acc := new Button.Instance;

The type “Any_Acc” is an access type for any object in the hierarchy:

procedure Something (B : not null Button.Any_Acc);

You can even define more access types like:

type Const_Acc is access constant Instance;
type Any_Const_Acc is access constant Class;

And if you don’t like “Acc” you can use “Reference” or “Ref”; just stay consistent across your hierarchy:

type Reference is access all Instance;
type Any_Reference is access all Class;

type Ref is access all Instance;
type Any_Ref is access all Class;

subtype Parent is Widget.Instance;

subtype Parent is Widget.Instance;

type Instance is new Parent
with private;

Emmanuel Briot already wrote a blog post on this pattern. Always defining a subtype that names the parent type of the object makes inherited subprogram calls easy, readable, and safe:

procedure Event (This : in out Instance; Evt : Event_Kind) is
begin
   Parent (This).Event (Evt);
end Event;

I will let you read Emmanuel’s post to see the pitfalls of other approaches.
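As a purely illustrative aside (mine, not the author's), the two idioms — a type always named Instance, and a Parent alias used for statically bound parent calls — can even be mimicked in Python, with nested classes standing in for child packages:

```python
class Widget:
    # stands in for the root package "Widget"
    class Instance:
        def event(self, evt):
            return f"widget handles {evt}"

# Each Ada child package declares "subtype Parent is Widget.Instance";
# the analogue here is a module-level alias, so every overriding body
# is textually identical no matter which type it belongs to.
Parent = Widget.Instance

class Button:
    # stands in for the child package "Widget.Button"
    class Instance(Parent):
        def event(self, evt):
            # the Ada idiom "Parent (This).Event (Evt)": a statically
            # bound call to the parent's version, then local behavior
            return "button: " + Parent.event(self, evt)
```

In Ada, the view conversion Parent (This) guarantees a non-dispatching call to the parent's version; the explicit Parent.event(self, evt) call plays the same role here.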

subtype Dispatch is Instance'Class;

   subtype Dispatch is Instance'Class;

In Ada, dynamic dispatching of subprogram calls only occurs through class-wide types. This is unsettling for many, and I was myself caught out by it when I started OOP in Ada.

By defining a “Dispatch” subtype, we can make dispatching call explicit, easier to spot and use:

procedure Draw (This : in out Instance) is
begin
   Dispatch (This).Event (Draw_Event);
end Draw;


I hope this pattern will be useful to some of you. Let me know in the comments what your opinion is on this, and what other patterns you are using in Ada.

When the RISC-V ISA is the Weakest Link Thu, 02 Sep 2021 04:26:00 -0400 Yannick Moy

NVIDIA has been using SPARK for some time now to develop safety- and security-critical firmware applications. At the recent DEF CON 29, hackers Zabrocki and Matrosov presented how they went about attacking NVIDIA firmware written in SPARK but ended up attacking the RISC-V ISA instead!

Zabrocki starts by explaining the context for their red teaming exercise at NVIDIA, followed by a description of SPARK and their evaluation of the language from a security attack perspective. He shows how they used an extension of Ghidra to decompile the binary code generated by GNAT and describes the vulnerability they identified in the RISC-V ISA thanks to that decompilation. Matrosov goes on to explain how they glitched the NVIDIA chip to exploit this vulnerability. Finally, Zabrocki talks about projects used to harden RISC-V platforms.

What I found amazing about this presentation is that because of the protection provided by the NVIDIA team’s developing the software in SPARK and proving it free of runtime errors, the hackers had to turn to something other than the software to find vulnerabilities - which led them to find one in the RISC-V ISA itself!

Zabrocki correctly pointed out that memory exhaustion issues are not detected by the SPARK tool, GNATprove. Instead, you should use, for example, GNATstack to detect (some of) them. This is a perfect example of non-functional requirements that are not checked by the SPARK tool. Other similar requirements include timing constraints for real-time software and robustness against cosmic rays for satellite software. Finally, note that SPARK supports safe pointers (an enhancement added in SPARK Pro 2020), and that the classes of problems detected by the tool are clearly defined in the tool documentation.

Security-Hardening Software Libraries with Ada and SPARK Thu, 08 Jul 2021 03:34:00 -0400 Kyriakos Georgiou

Part of AdaCore's ongoing efforts under the HICLASS project is to demonstrate how the SPARK technology can play an integral part in the security-hardening of existing software libraries written in other non-security-oriented programming languages such as C. This blog post presents the first white paper under this work-stream, “Security-Hardening Software Libraries with Ada and SPARK”.

In this paper, we have taken a quantitative approach, where SPARK is put under test to demonstrate that it can be relatively easily adopted and that the massive benefits of its adoption do not come with a significant negative impact on the performance of a program. To showcase this, a set of benchmarks from a modern C embedded benchmark suite, namely the EMBENCH suite, are converted to SPARK to guarantee the Absence of RunTime Errors (AoRTE). Runtime errors are a well-known source of security-related vulnerabilities, with dozens of system security breaches related to them. Thus, such runtime errors should be eliminated in the context of high assurance systems to avoid potential exploitations that can lead to safety issues. The work not only achieves the above goals but in the process, a plethora of valuable guidelines and best practices are offered to enable others to rapidly adopt the technology for hardening software libraries.

SPARK Levels of Software Assurance

Adopting SPARK for a new project can seem like an intimidating task, especially when having no prior experience with the technology. The difficulty of adopting SPARK depends on the scope of the analysis, such as a whole project or units of a project, and the software assurance level required. Fortunately, there are well-defined guidelines on the adoption of the SPARK technology that simplify the process. These guidelines are offered in the form of five levels of SPARK assurance, which are incremental in both the effort required and the amount of software assurance they provide. The five levels are:

  1. Stone level - valid SPARK
  2. Bronze level - initialization and correct data flow
  3. Silver level - Absence of RunTime Errors (AoRTE)
  4. Gold level - proof of key integrity properties
  5. Platinum level - full functional proof of requirements

For this work, we aimed to reach the third level of SPARK assurance. At this level, programs are guaranteed AoRTE, eliminating a major source of security vulnerabilities; it also removes the need for runtime checks. Moving to the higher levels of SPARK requires the precise intent and specifications of the application to be known, something not always feasible for code that is not written from scratch or well documented, as is the case with the EMBENCH benchmark conversion.

The EMBENCH Benchmark Suite

EMBENCH is a recently formed free and open-source embedded benchmark suite, intending to provide benchmarks that reflect today’s embedded Internet of Things (IoT) applications’ needs. The initiative for establishing the benchmark suite came from Prof. David A. Patterson, one of the principal figures behind the RISC-V processor. The main idea is to move away from outdated, artificial benchmarks, such as the Dhrystone and CoreMark, which are no longer representative of modern embedded applications, and introduce a benchmark suite that will continuously evolve to keep up with the new trends in embedded systems.

Currently, EMBENCH includes 19 benchmarks carefully selected to cover a representative range of the application space typically found in today’s IoT. The suite’s build system supports both native (x86, Linux) and cross-compilation of the benchmarks, currently for RISC-V and Arm Cortex-M4 based development boards.

Selection of Benchmarks

For a fair performance comparison between C and SPARK, the underlying algorithmic logic and the main program structure of the original EMBENCH benchmarks had to be retained in their corresponding SPARK versions. This limitation can significantly restrict the effective use of features and capabilities of the Ada/SPARK languages. For example, casting from one scalar type to another of a smaller range, or shifting signed integers, are things that you typically do not expect to be part of an Ada or SPARK program. Furthermore, there is a limitation on which level of SPARK assurance can be achieved for each benchmark, stemming from the direct translation of the C code, first to equivalent Ada code and then to SPARK code; the Ada intermediate stage is needed to facilitate a step-wise process and reduce the complexity of the translation.

The original code was not developed with SPARK's formal verification technology in mind. This can make it challenging to preserve the original code's logic and structure while enabling the verification tools to perform at their optimum level. Finally, in many cases the lack of sufficient design documentation, particularly in the form of comments within the benchmarks' code, makes it challenging to apply SPARK contracts that capture the semantics of the code. Thus, we can achieve at best the Silver level of SPARK, AoRTE, which is the level typically targeted for high-assurance software.

Under the above considerations, eleven benchmarks, shown in the table below, were deemed applicable for this work. The table also shows the level of SPARK assurance achieved for each benchmark. If a benchmark is not at Silver, the number of its sub-modules that reached the Silver level is also given.

Benchmark     Level of SPARK   Submodules at Silver level
aha-mont64    Silver           all
crc32         Silver           all
edn           Silver           all
huffbench     Silver           all
matmult-int   Silver           all
nettle-aes    Silver           all
nsichneu      Silver           all
st            Bronze           3/4
ud            Bronze           0/1
minver        Bronze           1/2
nbody         Bronze           1/2

Table 1. EMBENCH benchmarks converted to SPARK and the level of SPARK assurance achieved for each.

Achieving the Different SPARK Levels

The Implementation Guidance for the Adoption of SPARK manual offers excellent advice on how to achieve SPARK's several levels of software assurance, and these guidelines form the basis for the white paper's work. The paper's purpose is not to be a comprehensive manual, as that role is already filled by the adoption manual, but rather to give enough context for the reader to understand the process and to share practical experiences of hardening software libraries with Ada and SPARK. Any significant steps or findings not covered by the adoption manual, such as a new SPARK feature, are highlighted.

The detailed stepwise process followed, along with the challenges and findings at each step of converting the benchmarks to Ada and SPARK and then achieving the different levels of SPARK software assurance, can be found in Section 4.2 of the white paper.

One thing I would like to highlight regarding how SPARK can be effectively applied to gain software assurance is the bottom-up modular approach that SPARK technology offers. The modularity is mainly two-fold:

  1. Being able to apply the three modes (check, flow analysis, proof) of GNATprove at the line, region, subprogram, and module (file) level.
  2. Each level of SPARK assurance achieved ensures the conformance of all of its lower SPARK assurance levels.

It is highly recommended to take a bottom-up modular approach when hardening software libraries with SPARK, such that all of the lower levels are achieved before moving to a higher level of SPARK. In addition, the smaller parts of the code with the least interaction, such as leaf-node subprograms in a program's call tree, should be targeted first, with a gradual move towards the file level. This significantly reduces the effort of achieving the different levels of SPARK assurance.

Although SPARK Pro and GNAT Pro version 21.0w were used for this work, GNAT Community 2021 is also applicable. GNAT Studio 21.0w was the Integrated Development Environment (IDE) used to convert the C benchmarks into SPARK and to invoke GNATprove, the SPARK static analyser.


The white paper includes a thorough discussion on the performance evaluation, issues captured by SPARK on the original benchmarks, assessment of the time needed for the completion of the SPARK-related tasks, and some issues found and improved within the SPARK technology. We will cover only the effort/time needed and the performance evaluation for this post.

Effort/Time Evaluation

One of the aims of this work was to evaluate the effort needed to achieve the absence of runtime errors with the SPARK technology. The work took around 35 working days to complete. The engineer who carried it out (myself) had around four months of experience with the Ada and SPARK technologies, although about 15 years of overall programming experience. Given that the Silver level of SPARK was achieved within this time frame for the majority of the benchmarks (7 out of 11), and for most of the functions of the remaining benchmarks, it is fair to say that the SPARK technology is easily accessible and that its adoption can yield significant benefits for hardening software libraries for security in a short time.

Performance Evaluation

The performance evaluation was done on an STM32F4DISCOVERY development board. The board is equipped with a 32-bit ARM Cortex-M4 with FPU core, 1-Mbyte Flash memory, 192-Kbyte RAM, and it can run at a maximum frequency of 168MHz. The ARM Cortex-M4 and the RISC-V 32-bit processors are the two embedded processors currently supported in the EMBENCH benchmark suite due to their popularity in the embedded industry.

For both C and SPARK, the benchmarks were compiled with the GNAT Pro 21.0w compiler using the -O2 optimization flag and with link-time optimizations enabled. The link-time optimizations were essential to allow a fairer comparison between the C and the SPARK versions: the SPARK code for each benchmark is separated from its test-harness source file, while the C benchmarks' code lives in the same file as their test harness. Thus, inlining of the benchmark code was feasible in the C versions but not in the SPARK versions; enabling link-time optimizations made inlining possible for the SPARK code as well. Furthermore, Ada's runtime checks were disabled, using the -gnatp flag, to make the performance comparison with C fair. Note that subprograms proven at the SPARK Silver level come with an AoRTE guarantee, so runtime checks can be safely disabled. This is essential towards certifying at the highest levels of software assurance of safety standards, such as DO-178C for avionics or CENELEC EN 50128 for rail systems. It is equally applicable when certifying against security-critical standards and guidelines such as ED-202A/ED-203A and DO-326A/DO-356A for aviation.
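A build invocation along these lines (the project name is hypothetical) reflects that setup, with -cargs passing the optimization, check-suppression, and LTO flags to the compiler, and -largs passing the LTO flag to the linker:

   gprbuild -P bench.gpr -cargs -O2 -gnatp -flto -largs -flto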

Benchmark     Level of SPARK   SPARK vs C (performance)
aha-mont64    Silver           -24.07%
crc32         Silver            0.00%
edn           Silver            8.56%
huffbench     Silver            2.54%
matmult-int   Silver            4.27%
nettle-aes    Silver           10.91%
nsichneu      Silver            7.55%
st            Bronze          -16.70%
ud            Bronze            9.49%
minver        Bronze           -0.66%
nbody         Bronze           38.83%

Table 2. Performance comparison of the C and SPARK versions of each benchmark. A positive percentage represents the percentage increase in execution time of the SPARK version of a benchmark relative to its C version; a negative percentage represents the corresponding decrease.

Table 2 and Figure 1 show the performance comparison for the C and SPARK versions of the benchmarks. Note that even though the original "minver" benchmark was found to be functionally incorrect, the SPARK implementation matches that behaviour, so a performance comparison is still valid. With the exception of the "nbody" benchmark, there is no significant sacrifice in performance when moving from C to SPARK. This is because no significantly intrusive modifications to the code were needed to support the SPARK proofs.

Figure 1. Performance comparison of the C and SPARK versions of each benchmark.


The hardening of existing code-bases is expected to become a commonplace security exercise due to the rise of industry-mandated cyber-security standards and guidelines. This is especially anticipated within aerospace with the recent adoption by the European Union Aviation Safety Agency (EASA) of the ED-202A/ED-203A security set as currently the only "Acceptable Means of Compliance" for aviation cyber-security airworthiness certification. The same is expected of the Federal Aviation Administration (FAA) and the technically equivalent DO-326A/DO-356A set.

This is where SPARK can play an integral role in designing a new security architecture (or when security hardening an existing architecture). The white paper summarized by this post shows that existing systems can benefit from the application of a SPARK hardening approach. Elevation of a security component to SPARK Silver level or higher provides strong evidence to support a security effectiveness assurance argument. This is particularly true when arguing over the effectiveness of security measures against threat scenarios relating to application security vulnerabilities.

By design, SPARK aims to eradicate all security bugs, flaws, errors, faults, holes, or weaknesses in a software architecture, regardless of whether threat actors can exploit them. When code is proven to have achieved the Silver level or higher, the guaranteed AoRTE is a powerful countermeasure against cyber-attacks. This work demonstrated that the effort of adopting SPARK is not as great as often perceived; in an arguably short amount of time, a set of benchmarks from the EMBENCH benchmark suite was converted from C to SPARK, and AoRTE was achieved in most cases (see Table 1). In addition, had complete functional specifications been available for the remaining, not fully proven benchmarks, the complete set of benchmarks could have achieved the Silver level of SPARK assurance. The SPARK technology was also able to identify a faulty benchmark, namely "minver", whose implementation was incomplete and produced wrong results.

Furthermore, as demonstrated, the adoption of the SPARK technology did not significantly compromise the C-enabled performance. This and the significant security benefit of proving AoRTE make SPARK a default choice when it comes to the hardening of software libraries. Finally, the work demonstrates the steps and best practices for adopting SPARK and highlights the use of new features. These, and the provided references to external documentation, can be used as up-to-date guidelines for the easy adoption of the SPARK technology.


The "High-Integrity Complex Large Software and Electronic Systems" (HICLASS) working group was created to ensure that the UK stays a leading force within civilian aerospace manufacturing. HICLASS is made up of UK academia, tier-one aerospace manufacturers, and associated software development tool providers. This group has been assembled to action the technology strategy set out by the Aerospace Technology Institute (ATI). AdaCore's involvement within this group is to evolve SPARK and associated technologies to ensure they align with the objectives set out within the ever-emerging cyber-security guidelines.

HICLASS is supported by the Aerospace Technology Institute (ATI) Programme, a joint Government and industry investment to maintain and grow the UK’s competitive position in civil aerospace design and manufacture, under grant agreement No. 113213. The programme, delivered through a partnership between the ATI, Department for Business, Energy & Industrial Strategy (BEIS), and Innovate UK, addresses technology, capability, and supply chain challenges. More information about the HICLASS project can be found here.

Announcing The First Ada/SPARK Crate Of The Year Award Mon, 28 Jun 2021 06:05:00 -0400 Fabien Chouteau

We're happy to announce our new programming competition, the Ada/SPARK Crate Of The Year Award! We believe the Alire package manager is a game changer for Ada/SPARK, so we want to use this competition to reward the people contributing to the ecosystem.

Why “Crate”? This is the name the Alire project uses to designate a software project, library or executable written using the Ada and/or SPARK programming languages and contributed to the Alire ecosystem. The word comes from the Cargo package manager.

How does it differ from our previous competition, Make With Ada?

  • Make with Ada was only for embedded projects: this competition has a prize dedicated to embedded, but the other two prizes are open for any kind of software project. So you can go wild with the topic of your submission.
  • You can submit a project that you started years ago: it doesn’t have to be developed from scratch during the competition.


The competition starts today and ends on Friday, December 31st, 2021 at 23:59 CEST. We'll announce the results in January 2022. As mentioned before, you can submit projects you started before the competition, months or even years ago. The only requirement is that your crate must be available in the Alire community index by the end of the competition.

How to enter?

The competition is hosted on GitHub. To enter, participants must open an "issue" on the competition repository using the provided template. Read the terms and conditions for more details.


This competition has 3 prizes of $2,000 each, for:

  • The Ada Crate Of The Year Prize, for best overall Ada crate;
  • The SPARK Crate Of The Year Prize, for the best crate written in SPARK and/or contributing to the SPARK ecosystem;
  • The Embedded Crate Of The Year Prize, for the best Ada or SPARK crate for embedded software.

Getting started with Alire and Ada/SPARK

You can have a look at the Alire documentation to start your first crate. If you don’t know Ada/SPARK programming, we recommend starting with our interactive online courses here.

We also recommend getting in touch with the Ada/SPARK and Alire community. Here are some links that you may find useful:

Of course, you should also have a look at the existing Alire ecosystem to see if your awesome project idea already exists or to see which existing crates might help in your endeavor.

Have fun, and happy hacking!

SPARKNaCl with GNAT and SPARK Community 2021: Port, Proof and Performance Fri, 25 Jun 2021 03:54:00 -0400 Roderick Chapman

   for I in Index_16 loop Do_X; end loop;
   for I in Index_16 loop Do_Y; end loop;

   for I in Index_16 loop
      Do_X;
      Do_Y;
   end loop;

   T  : GF64_PA;
   LT : GF64_Normal_Limb;

   T := (others => 0);

   for I in Index_16 loop
      LT := I64 (Left (I));
      T (I) := T (I) + (LT * I64 (Right (0)));
      --  and so on for T (I + 1), T (I + 2) ...
   end loop;

   subtype U32_Normal_Limb is U32 range 0 .. LMM1;

   T  : GF64_PA;
   LT : U32_Normal_Limb;

   T := (others => 0);

   for I in Index_16 loop
      LT := U32_Normal_Limb (Left (I));
      T (I) := T (I) + I64 (LT * U32_Normal_Limb (Right (0)));
      --  and so on...
   end loop;

   LT := U32_Normal_Limb (Left (0));

   T := GF64_PA'(0  => I64 (LT * U32_Normal_Limb (Right (0))),
                 1  => I64 (LT * U32_Normal_Limb (Right (1))),
                 2  => I64 (LT * U32_Normal_Limb (Right (2))),
                 3  => I64 (LT * U32_Normal_Limb (Right (3))),
                 4  => I64 (LT * U32_Normal_Limb (Right (4))),
                 5  => I64 (LT * U32_Normal_Limb (Right (5))),
                 6  => I64 (LT * U32_Normal_Limb (Right (6))),
                 7  => I64 (LT * U32_Normal_Limb (Right (7))),
                 8  => I64 (LT * U32_Normal_Limb (Right (8))),
                 9  => I64 (LT * U32_Normal_Limb (Right (9))),
                 10 => I64 (LT * U32_Normal_Limb (Right (10))),
                 11 => I64 (LT * U32_Normal_Limb (Right (11))),
                 12 => I64 (LT * U32_Normal_Limb (Right (12))),
                 13 => I64 (LT * U32_Normal_Limb (Right (13))),
                 14 => I64 (LT * U32_Normal_Limb (Right (14))),
                 15 => I64 (LT * U32_Normal_Limb (Right (15))),
                 others => 0);

   --  Iteration "0" is done, so only loop over 1 .. 15 now...
   for I in Index_16 range 1 .. 15 loop
      --  and so on as before...
   end loop;
Celebrating Women Engineering Heroes - International Women in Engineering Day 2021 Wed, 23 Jun 2021 03:36:00 -0400 Jessie Glockner

Women make up roughly 38% of the global workforce, yet they constitute only 10–20% of the engineering workforce. In the U.S., numbers suggest that 40% of women who graduate with engineering degrees never enter the profession or eventually leave it. Why? The reasons vary but primarily involve socio-economic constraints on women in general, workplace inequities, and lack of support for work-life balance. Sadly, history itself has often failed to properly acknowledge the instrumental contributions of women inventors, scientists, and mathematicians who have helped solve some of our world's toughest challenges. How can young women emulate their successes if they don't even know about them?

On this International Women in Engineering Day (INWED), we'd like to take the opportunity to celebrate several remarkable women who not only overcame insurmountable challenges to share their exceptional talents but also actively developed some of the most important technologies of humankind. We hope their stories will inspire young women to pursue engineering studies and encourage women engineers to remain in the profession. The world needs you!

Starting off our list is a woman who is particularly near and dear to our hearts here at AdaCore, Lady Ada Lovelace. Enjoy!

Ada Lovelace (1815-1852)

Lady Ada Lovelace was an English mathematician and writer and is considered the world's first programmer. A woman of many talents, she demonstrated a particular leaning towards mathematics and science early on. Such challenging subjects were not standard fare for women at the time. Still, throughout her childhood she received instruction from private tutors and family friends, including Mary Somerville, a Scottish astronomer and mathematician who became one of the first women to be admitted into the Royal Astronomical Society, and Charles Babbage, a mathematician and inventor. Through Babbage, Lovelace began studying advanced mathematics with University of London professor Augustus de Morgan. Lovelace wrote the first algorithm used by Charles Babbage on his Analytical Engine, a computing machine designed to perform complex mathematical calculations. In 1843, she translated an article on Babbage's analytical engine written by an Italian engineer, Luigi Menabrea. In addition to the translation, Lovelace added extensive notes of her own, including visionary statements that expressed the potential for computers beyond mathematics, leading others to deem her a 'prophet of the computer age.' Lovelace's contributions to the field of computer science were not acknowledged until her notes were reintroduced to the world by B.V. Bowden, who republished them in Faster Than Thought: A Symposium on Digital Computing Machines in 1953. Since then, Ada has received many posthumous honors for her work. In 1980, The U.S. Department of Defense named a newly developed computer programming language, "Ada," after Lovelace. The Ada language continues to be used to create reliable, safe, and secure software.

Grace Hopper (1906-1992)

Grace Hopper was a computer scientist, programmer, and a rear admiral in the U.S. Navy. She earned a Ph.D. in mathematics from Yale University and was a professor of mathematics at Vassar College. Hopper began her computing career in 1944 as a member of the Harvard Mark I team to develop one of the first computers made for commercial use in the United States. In 1949, she joined the Eckert–Mauchly Computer Corporation, where she was part of the team that developed the UNIVAC I - a commercial data-processing computer designed to replace punched-card accounting machines of the day. She also managed the development of the first COBOL compiler and the COBOL language that is still in use today. Hopper served in the Navy Reserves from 1943 to 1966, but she was recalled to active duty the following year to help standardize the Navy's computer languages. When she retired again in 1986, at the age of 79, she was the oldest active-duty commissioned officer in the United States Navy. After she retired from the Navy, she worked as a consultant for Digital Equipment Corporation, sharing her computing experience until her death at age 85. Hopper was elected a fellow of the Institute of Electrical and Electronics Engineers in 1962, named the first computer science Woman of the Year by the Data Processing Management Association in 1969, received the National Medal of Technology in 1991, and received the Presidential Medal of Freedom in 2016 (posthumously).

Joan Clarke (1917-1996)

Joan Clarke was a cryptanalyst and numismatist and is known as one of the greatest code-breakers in history. In 1940, while attending Cambridge University, Joan was recruited to join Alan Turing's Hut 8 team at Bletchley Park, best known for breaking the Nazi's Enigma Code and helping end World War II. She was initially placed in an all-women group, referred to as "The Girls," who mainly did routine clerical work. She quickly became the only female practitioner of Banburismus (the cryptanalytic process developed by Alan Turing to decipher German encrypted messages) and deputy head of Hut 8 in 1944. Clarke's work won her many awards and citations, including an appointment as a Member of the Order of the British Empire (MBE) in 1946. After the War, Clarke worked for Government Communications Headquarters (GCHQ). Clarke also developed an interest in numismatics history. She established the sequence of the complex series of gold unicorn and heavy groat coins that were in circulation in Scotland during the reigns of James III and James IV. In 1986, her research was recognized by the British Numismatic Society when she received the Sanford Saltus Gold Medal. Issue No. 405 of the Numismatic Circular described her paper on the topic as "magisterial." Keira Knightley portrayed Clarke in the film The Imitation Game (2014).

Katherine Johnson (1918-2020)

Katherine Johnson was an American mathematician and the first African-American woman to work as a NASA scientist. Her calculations of orbital mechanics as a NASA employee were critical to the success of the first and subsequent American-crewed spaceflights. Johnson's work included calculating trajectories, launch windows, and emergency return paths for Project Mercury space flights, including those for astronauts Alan Shepard, the first American in space, and John Glenn, the first American in orbit. Her calculations helped synch Project Apollo's Lunar Module with the lunar-orbiting Command and Service Module. She also worked on the Space Shuttle and the Earth Resources Technology Satellite (ERTS, later renamed Landsat), authored or co-authored 26 research reports, and worked on plans for a mission to Mars. She retired in 1986, after 33 years at Langley. In 2015, President Barack Obama awarded Johnson the Presidential Medal of Freedom, America's highest civilian honor. In 2016, she was presented with the Silver Snoopy Award by NASA astronaut Leland D. Melvin and a NASA Group Achievement Award. She was portrayed as a lead character in the 2016 film Hidden Figures. In 2019, Johnson was awarded the Congressional Gold Medal by the United States Congress, and in 2021, she was inducted into the National Women's Hall of Fame.

Frances V. Spence (1922 - 2012)

Frances V. Spence is considered one of the first computer programmers in history. Spence was one of eighty women programmers originally hired by the University of Pennsylvania's Moore School of Engineering to develop the ENIAC project - a classified U.S. Army project designed to construct the first all-electronic digital computer to compute ballistics trajectories during World War II. In addition to her larger programming duties, Spence was also assigned to a smaller computational development team of six women programmers (called "Computers") to operate an analog computing machine known as a Differential Analyzer, used to calculate ballistics equations. When the War ended, Spence continued working with the ENIAC team and collaborated with other leading mathematicians. In 1997, Spence and the other original ENIAC programmers were inducted into the Women in Technology International Hall of Fame. Their work paved the way for the electronic computers of the future, and their innovation kick-started the rise of electronic computing and computer programming in the Post-World War II era.

Annie J. Easley (1933-2011)

Annie J. Easley was an American computer scientist, mathematician, rocket scientist, and one of the first African-Americans to work as a computer scientist at NASA. Easley began her career doing computations for researchers, analyzing problems, and doing calculations by hand. Her earliest work involved running simulations for the newly planned Plum Brook Reactor Facility. She became an adept computer programmer, using languages like the Formula Translating System (Fortran) and the Symbolic Optimal Assembly Programming (SOAP) to support several of NASA's programs. She developed and implemented code used in researching energy-conversion systems by analyzing alternative power technology, including the battery technology for early hybrid vehicles and the Centaur upper-stage rocket. Later in her career, she became NASA's equal employment opportunity (EEO) counselor. In this role, she helped supervisors address gender, race, and age issues in discrimination complaints at the lowest level and in the most cooperative way possible. Easley retired from NASA in 1989, but she remained an active participant in the Speaker's Bureau and the Business & Professional Women's association. She has inspired many through her enthusiastic participation in outreach programs, breaking down barriers for women and people of color in STEM.

Margaret Hamilton (born 1936)

Margaret Hamilton is an American computer scientist, systems engineer, and business owner. She was director of the Software Engineering Division of the MIT Instrumentation Laboratory, which developed on-board flight software for NASA's Apollo program, which successfully landed the first humans on the Moon. On July 20, 1969, as the lunar module, Eagle, approached the Moon's surface, its computers began flashing warning messages. Fortunately, the software developed by Hamilton and her team was not only informing everyone that there was a hardware-related problem, but the software was compensating for it. And with only enough fuel for 30 more seconds of flight, Neil Armstrong reported, "The Eagle has landed." The achievement was a monumental task, given that computer technology was still in its infancy. The astronauts had access to only 72 kilobytes of computer memory (a 64-gigabyte cell phone today carries almost a million times more storage space), and programmers had to use paper punch cards to feed information into room-sized computers with no screen interface. Hamilton's work guided the remaining Apollo missions that landed on the Moon and benefitted Skylab, the first U.S. space station. In 1972, Hamilton left MIT and started her own company, Higher Order Software. Fourteen years later, she launched another company, Hamilton Technologies, Inc., where she created the Universal Systems Language to make the process of designing systems more dependable. Hamilton has published more than 130 papers, proceedings, and reports about sixty projects and six major programs. She is one of the people credited with coining the term "software engineering." NASA honored Hamilton with the NASA Exceptional Space Act Award in 2003. And in 2016, Hamilton received the Presidential Medal of Freedom from President Barack Obama for her work leading to the development of on-board flight software for NASA's Apollo Moon missions.

Sally K. Ride (1951-2012)

Dr. Sally K. Ride was an American astronaut and physicist and the first American woman to travel into space. On June 18, 1983, she served as a mission specialist aboard the space shuttle Challenger. She also became the first American woman to travel to space a second time when she participated in another Challenger mission on Oct. 5, 1984. Ride served on the accident investigation boards set up in response to the two space shuttle tragedies (Challenger in 1986 and Columbia in 2003). And in 2009, she participated in the Augustine committee that helped define NASA's spaceflight goals. Ride stopped working for NASA in 1987 and joined Stanford University Center for International Security and Arms Control. She later became a professor of physics at the University of California, San Diego. She also served as president of Space.com from 1999 to 2000. Until her death in 2012, Ride was a champion for science education and a role model for women, and especially girls. She wrote books for students and teachers and worked with science programs and festivals around the United States. She also came up with the idea for NASA's EarthKAM project, which lets middle school students take pictures of Earth using a camera on the International Space Station. In 2003, Ride was added to the Astronaut Hall of Fame.

Mae Jemison (born 1956)

Mae Carol Jemison is an American engineer, physician, and former NASA astronaut. Jemison graduated from Stanford University in 1977 (one of the only African American students in her class) with a Bachelor of Science degree in Chemical Engineering and a Bachelor of Arts degree in African and African-American studies. She later attended Cornell and received her Doctorate in Medicine in 1981. Shortly after her graduation, she became an intern at the Los Angeles County Medical Center and then practiced general medicine. Fluent in Russian, Japanese, and Swahili, Jemison joined the Peace Corps in 1983 and served as a medical officer for two years in Africa. After working with the Peace Corps, Jemison opened a private practice as a doctor. However, once Sally Ride became the first American woman in space in 1983, Jemison decided to apply to the astronaut program at NASA. In 1987 she was one of 15 people chosen out of over 2,000 applications for NASA's Astronaut Group 12, the first group selected after the Challenger explosion. When she served as a mission specialist aboard the Space Shuttle Endeavour, she became the first African-American woman to travel into space. Jemison left NASA in 1993 and founded The Jemison Group, a technology research company that encourages science, technology, and social change. She also began teaching environmental studies at Dartmouth College and directed the Jemison Institute for Advancing Technology in Developing Countries. She later formed a non-profit educational foundation and, through the foundation, became the principal of the 100 Year Starship project funded by DARPA. Jemison has written several books for children and appeared on television several times, including in a 1993 episode of Star Trek: The Next Generation. She has received multiple awards and honorary doctorates and serves on the Board of Directors for many organizations. She has been inducted into the National Women's Hall of Fame, the National Medical Association Hall of Fame, the Texas Science Hall of Fame, and the International Space Hall of Fame.

Donna Auguste (born 1958)

Donna Auguste is an African-American engineer, entrepreneur, and philanthropist. Auguste received a Bachelor of Science degree in electrical engineering and computer sciences (EECS) from the University of California at Berkeley, a Master of Science from Regis University, and was the first African-American woman to enter the doctoral program in computer science at Carnegie Mellon. Early in her career, Auguste worked at Xerox and was part of the engineering team at IntelliCorp that introduced some of the world's first commercial artificial intelligence knowledge systems. She also spent several years at Apple Computer. She was awarded four patents for her innovative engineering work on the Apple Newton Personal Digital Assistant, a forerunner to the Palm Pilot. After receiving her Ph.D., she became the founder and CEO of Auguste Research Group, LLC, involved in research around sensors and actionable data science for IoT. In 1996 she founded Freshwater Software, Inc. to provide companies with tools that would help them monitor and enhance their presence on the Internet. She served as CEO of Freshwater until she sold it in 2000 for $147 million. Auguste also founded the Leave a Little Room Foundation, LLC - a philanthropic organization that helps provide housing, electricity, and vaccinations, and improve education and infrastructure, to poor communities worldwide. Her current research, DataTip, involves using smartphone sensors to engage non-technical youth and adults in STEM learning to create content relevant to daily living. She was recognized as one of "25 Women Who Are Making It Big in Small Business" by Fortune Magazine. She also won the 2001 Golden Torch Award for Outstanding Women in Technology.

Limor Fried (born 1980)

Fried is an American electrical engineer and businesswoman. Fried studied at MIT, earning a B.S. in Electrical Engineering and Computer Science (EECS) in 2003 and a Master of Engineering in EECS in 2005. From her dorm room, she created Adafruit Industries (@adafruit). The company designs and resells open-source electronic kits, components, and tools, mainly for the hobbyist market. Adafruit currently ranks #11 among the top 20 USA manufacturing companies and #1 among the "fastest-growing private companies in New York City" by Inc. 5000. Fried has been influential in the open-source hardware community. She participated in the first Open Source Hardware Summit and drafted the Open Source Hardware definition. In 2009, she received the Pioneer Award from the Electronic Frontier Foundation for her participation. Fried's accolades are numerous. She was awarded the Most Influential Women in Technology award in 2011 by Fast Company magazine and became the first female engineer featured on the cover of Wired magazine. In 2012 she was the only female on a list of 15 finalists for Entrepreneur magazine's "Entrepreneur of the Year" award. She was named a White House Champion of Change in 2016, became one of Forbes' America's Top 50 Women In Tech in 2018, and received a Women in Open Source Award (Community) by Red Hat in 2019. Known by her moniker ladyada, a homage to Lady Ada Lovelace, she continues to motivate countless girls, young women, and others toward DIY frontiers and in science, technology, engineering, and mathematics.

If you'd like to learn about more amazing women, check out INWED's article highlighting five women leaders who have made a considerable impact in the tech world and serve as an inspiration to many.

"Women belong in all places where decisions are being made."

- Ruth Bader Ginsburg

Going beyond Ada 2022 Thu, 03 Jun 2021 10:27:00 -0400 Arnaud Charlet

As we've seen previously in Ada 2022 support in GNAT, the support for Ada 2022 is now mostly there for everyone to take advantage of. We're now crossing fingers for this new revision to be officially stamped by ISO in 2022.

In practice, making new ISO revisions of the language is a long process, which so far happens roughly every ten years. On our side, following the general evolution of the language design culture, and some feedback we've received from our community of users, we're looking to have a shorter and more lively feedback loop.

This is why, in 2019, we started a new initiative, centered around the Ada/SPARK RFCs platform, which allows us to experiment with new language features for Ada.

With this platform, we want to give anyone an opportunity to propose language evolutions through RFCs ("requests for comments"), discuss the merits of RFCs publicly with the community, select those that are the most promising for prototyping, prototype the features in GNAT (and/or SPARK as appropriate), and gather feedback from users of the feature. Depending on that feedback, we either abandon the feature, modify it, or keep it. Finally, when relevant, we propose the features we kept for inclusion in the next version of Ada to the Ada Rapporteur Group, the international body in charge of Ada standardization.

In order to assess the needs of current and future users of the Ada programming language, we asked people inside and outside AdaCore what they wish for the future of Ada and SPARK. We got a lot of insights from these answers, coming from programmers with different backgrounds. One of the most common requests is to get more compile-time guarantees, in areas such as data initialization before use, access to discriminated fields, dereference of possibly null pointers, and dynamic memory management. Note that SPARK already offers such guarantees, at the cost of constraining the language and requiring an analysis which is much more costly than compilation. Here, our goal will be to provide the guarantees above for Ada programs through simpler compilation. Other common requests were: more powerful generics with richer specifications and implicit instantiation, better string handling that properly supports Unicode, a more universally available mechanism for data finalization, as well as some frequently requested syntax additions. The peculiar object model in Ada turned out to be a contentious issue, with some strong supporters and strong opponents, the latter advocating rebuilding it to be more like the dominant model of Java/C++.

Thanks to the suggestions received so far from both external contributors and from the language design team at AdaCore, we have started adding some of these new experimental features, with a first implementation available in the latest GNAT Community 2021 release as well as the latest GNAT Pro 22 Continuous Release, under the -gnatX switch and detailed below.

Most Wanted Features

Let's first start with two "most wanted" features that many Ada users have been asking for years:

Fixed Lower Bound

Detailed in RFC#38, you can now specify a lower bound for unconstrained arrays that is fixed to a certain value.

Use of this feature increases safety by simplifying code, and can also improve the efficiency of indexing operations.

For example, a matrix type with fixed lower bounds of zero for each dimension can be declared by the following:

type Matrix is array (Natural range 0 .. <>, Natural range 0 .. <>) of Integer;

Objects of type Matrix declared with an index constraint must have index ranges starting at zero:

M1 : Matrix (0 .. 9, 0 .. 19);
M2 : Matrix (2 .. 11, 3 .. 22);  -- Compile-time warning about bounds; raises Constraint_Error at run time

Similarly, a subtype of String can be declared that specifies the lower bound of objects of that subtype to be 1:

subtype String_1 is String (1 .. <>);

If a string slice is passed to a formal of subtype String_1 in a call to a subprogram S, the slice’s bounds will “slide” so that the lower bound is 1. Within S, the lower bound of the formal is known to be 1, so, unlike a normal unconstrained String formal, there is no need to worry about accounting for other possible lower-bound values:

with Ada.Text_IO; use Ada.Text_IO;

procedure Str1 is
   subtype String_1 is String (1 .. <>);

   procedure Proc (S : String_1) is
   begin
      --  S'First = 1
      Put_Line (S);
   end Proc;

   S : String_1 := "hello world";
begin
   Proc (S (7 .. S'Last));
   --  sliding on S (7 .. S'Last) occurs automatically when calling Proc,
   --  so this will pass a String_1 (1 .. 5) whose content is "world"
end Str1;

Generalized Object.Op Notation

Detailed in RFC#34, the so-called prefixed-view notation for calls is extended to also allow such syntax for calls to primitive subprograms of untagged types. The primitives of an untagged type T that have a prefixed view are those where the first formal parameter of the subprogram either is of type T or is an anonymous access parameter whose designated type is T. This has been another "most wanted" feature ever since the notation was introduced in Ada 2005! For example:

generic
   type Elem_Type is private;
package Vectors is
   type Vector is private;
   procedure Add_Element (V : in out Vector; Elem : Elem_Type);
   function Nth_Element (V : Vector; N : Positive) return Elem_Type;
   function Length (V : Vector) return Natural;
private
   ...  --  full view of Vector elided
end Vectors;

package Int_Vecs is new Vectors (Integer);
V : Int_Vecs.Vector;
...
V.Add_Element (42);   --  equivalent to Int_Vecs.Add_Element (V, 42)
V.Add_Element (43);
pragma Assert (V.Length = 2);
pragma Assert (V.Nth_Element (1) = 42);

Additional "when" Constructs

This smaller syntactic addition discussed in RFC#73 adds the ability to use the "when" keyword to "return", "goto" and "raise" statements, in addition to the existing "exit when" control structure.

For example:

procedure Do_All (Element : access Rec; Success : out Boolean) is
begin
   raise Constraint_Error with "Element is null" when Element = null;

   Do_1 (Success);
   return when not Success;

   Do_2 (Success);
   return when not Success;

   Do_3 (Success);
   return when not Success;

   Do_4 (Success);
end Do_All;

Pattern Matching

This feature, on the other hand (detailed in RFC#50), is a large one and is still being worked on. It extends case statements to cover records and arrays, as well as finer-grained casing on scalar types, and in the future will in particular provide more compile-time guarantees when accessing discriminated fields.

For example, you can match on several scalar values:

type Sign is (Neg, Zero, Pos);

function Multiply (S1, S2 : Sign) return Sign is
  (case (S1, S2) is
     when (Neg, Neg) | (Pos, Pos) => Pos,
     when (Zero, <>) | (<>, Zero) => Zero,
     when (Neg, Pos) | (Pos, Neg) => Neg);

Matching composite types is currently only supported on records with no discriminants. Support for discriminants and arrays will come later.

The selector for a case statement may be of a composite type. Aggregate syntax is used for choices of such a case statement; however, in cases where a “normal” aggregate would require a discrete value, a discrete subtype may be used instead; box notation can also be used to match all values.

Consider this example:

type Rec is record
   F1, F2 : Integer;
end record;

procedure Match_Record (X : Rec) is
begin
   case X is
      when (F1 => Positive, F2 => Positive) => Do_This;
      when (F1 => Natural, F2 => <>) | (F1 => <>, F2 => Natural) => Do_That;
      when others => Do_The_Other_Thing;
   end case;
end Match_Record;

If Match_Record is called and both components of X are Positive, then Do_This will be called; otherwise, if either component is nonnegative (Natural) then Do_That will be called; otherwise, Do_The_Other_Thing will be called.

If the set of values that match the choice(s) of an earlier alternative overlaps the corresponding set of a later alternative, then the first set shall be a proper subset of the second (and the later alternative will not be executed if the earlier alternative “matches”). All possible values of the composite type shall be covered.

In addition, pattern bindings are supported. This is a mechanism for binding a name to a component of a matching value for use within an alternative of a case statement. For a component association that occurs within a case choice, the expression may be followed by “is <identifier>”. In the special case of a “box” component association, the identifier may instead be provided within the box. Either of these indicates that the given identifier denotes (a constant view of) the matching subcomponent of the case selector.

Consider this example (which uses type Rec from the previous example):

procedure Match_Record2 (X : Rec) is
begin
   case X is
      when (F1 => Positive is Abc, F2 => Positive) => Do_This (Abc);
      when (F1 => Natural is N1, F2 => <N2>) |
           (F1 => <N2>, F2 => Natural is N1) => Do_That (Param_1 => N1, Param_2 => N2);
      when others => Do_The_Other_Thing;
   end case;
end Match_Record2;

This example is the same as the previous one with respect to determining whether Do_This, Do_That, or Do_The_Other_Thing will be called. But for this version, Do_This takes a parameter and Do_That takes two parameters. If Do_This is called, the actual parameter in the call will be X.F1.

If Do_That is called, the situation is more complex because there are two choices for that alternative. If Do_That is called because the first choice matched (i.e., because X.F1 is nonnegative and either X.F1 or X.F2 is zero or negative), then the actual parameters of the call will be (in order) X.F1 and X.F2. If Do_That is called because the second choice matched (and the first one did not), then the actual parameters will be reversed.

Simpler Accessibility Rules

This one (see RFC#47) is still a moving target and would definitely welcome some user experimentation and feedback! It starts with the observation that, over the years, the rules governing accessibility in Ada (that is, which operations on pointers are allowed) have grown to the point where they are barely understood by implementers, and even less so by users. By introducing a new restrictions pragma, we want both to simplify the rules and to propose a model where the runtime accessibility checks related to the use of anonymous access types are suppressed and replaced by compile-time checks. This matters in particular because the runtime accessibility checks are either impossible to implement fully or, worse, may produce false alarms (raising an exception in cases where no dangling access actually occurs).

So if you add as part of your configuration pragmas the following:

pragma Restrictions (No_Dynamic_Accessibility_Checks);

This will enable this new mode. Currently two variants are implemented:

Designated type model

This is the default model when using No_Dynamic_Accessibility_Checks in GNAT Community Edition 2021. In the more recent GNAT Pro development version, we've swapped this model with the other one, so there it is enabled via the -gnatd_b debug switch, in addition to the restriction.

In this model, anonymous access types are implicitly declared at the point where the designated type is declared.

Point of declaration model

In this model, anonymous access types are implicitly declared at the point where the anonymous access is used (as part of a subprogram parameter, an object declaration, etc.). It is available via the additional use of the -gnatd_b switch in GNAT CE 2021, and is the default when using No_Dynamic_Accessibility_Checks in more recent GNAT Pro versions.
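As a rough sketch of the difference between the two models (all names here are hypothetical, and the exact legality rules are still evolving), consider taking 'Access of a local variable through an anonymous access object:

```ada
package Lib is
   type T is record
      Value : Integer := 0;
   end record;
end Lib;

with Lib;
procedure Demo is
   X : aliased Lib.T;
   P : access Lib.T := X'Access;
   --  Designated-type model: the anonymous type behaves as if declared
   --  next to Lib.T, at library level, so this declaration would be
   --  rejected at compile time (X is too deeply nested to be designated).
   --  Point-of-declaration model: the anonymous type is declared here,
   --  at the same level as X, so the declaration would be accepted.
begin
   P.Value := 1;
end Demo;
</antml;
```

Either way, the decision is made entirely at compile time: no dynamic accessibility check is emitted under the restriction.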

Both models may be refined further based on the feedback received on actual code and what users would find most useful and practical, so do not hesitate to give it a try and let us know!

Next Steps

Do you find some of these features useful? Do you want to give them a try and tell us what you think? Do you have some ideas for other new Ada features or other changes to existing features?

We encourage you to give it a try, give your feedback, and make new suggestions in the Ada/SPARK RFC platform! On our side we'll continue prototyping other RFCs and refine existing ones, so stay tuned.

GNAT Community 2021 is here! Tue, 01 Jun 2021 04:20:00 -0400 Fabien Chouteau

We are happy to announce that the GNAT Community 2021 release is now available. Here are some release highlights:

GNAT Compiler toolchain

The 2021 GNAT Community compiler includes tightening and enforcing of Ada rules, performance enhancements, and support for many Ada 2022 features:

  • Jorvik real-time tasking profile
  • Support for infinite precision numbers
  • Declare expressions
  • Contracts on Access-to-Subprogram
  • Static expression functions
  • Iterator Filters
  • Renames with type inference
  • Container aggregates
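As a flavor of one of the listed features, here is a minimal sketch (over hypothetical data) of an iterator filter, which restricts a "for ... of" loop to the elements satisfying a condition:

```ada
--  Minimal sketch of an Ada 2022 iterator filter (hypothetical data).
procedure Features_Demo is
   Values : constant array (1 .. 5) of Integer := (3, -1, 4, -1, 5);
   Count  : Natural := 0;
begin
   --  Iterate only over the positive elements.
   for X of Values when X > 0 loop
      Count := Count + 1;
   end loop;
   pragma Assert (Count = 3);
end Features_Demo;
```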

There are also some future language features - watch this space for further news on this.

The compiler back-end has been upgraded to GCC 10 on all platforms.

GNAT Studio

This release includes GNAT Studio, our multi-language IDE for Ada, SPARK, C, C++ and Python. Notable features are:

  • Integration of a new engine - clangd - for C/C++ navigation
  • Various improvements on Ada/SPARK navigation (handling of dispatching calls, dependency browsers)
  • Improved Search view (highlighted search area and new preferences)
  • Various UI improvements (better code folding, a new "toggle comments" action, and more)
  • Many bug fixes and performance improvements


Libadalang

Libadalang, a library for parsing and semantic analysis of Ada code, has made a lot of progress in the past year. In this GNAT Community release, you'll find:

  • Improved name resolution
  • Improved memory footprint
  • Many new features accessible through the public APIs (details here)


SPARK

The possibility to prove that your Ada programs are correct with SPARK now extends to more programs with pointers and to programs using the latest features of Ada.

GNATprove messages have been enhanced to be more helpful both on the command-line and inside IDEs. You can also now visualize the generated data flow contracts inside GNAT Studio and verify the termination of recursive functions.

SPARK support for pointers was enhanced to:

  • allow dynamic memory (de)allocation in regular functions
  • use allocators more liberally inside expressions
  • support named access-to-constant types
  • support taking the address of a variable with 'Access
  • support access-to-subprogram types
  • provide read-only and read-write access to elements inside formal containers without copy

SPARK supports the following features of Ada 2022: declare expressions, delta aggregates, contracts on access-to-subprogram types, the @ symbol, iterated component associations.
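To illustrate three of those features, here is a small sketch (hypothetical types and values) combining a delta aggregate, a declare expression, and the @ symbol:

```ada
--  Sketch of three of the listed Ada 2022 features (hypothetical types).
procedure Sketch is
   type Point is record
      X, Y : Integer;
   end record;

   P : Point := (X => 1, Y => 2);

   --  Delta aggregate: a copy of P with only Y changed.
   Q : constant Point := (P with delta Y => 5);

   --  Declare expression: a local constant inside an expression.
   Sum : constant Integer :=
     (declare
         Total : constant Integer := Q.X + Q.Y;
      begin
         Total * 2);
begin
   P.X := @ + 1;   --  @ denotes the assignment target, here P.X
   pragma Assert (P.X = 2 and Sum = 12);
end Sketch;
```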

An Introduction to Jorvik, the New Tasking Profile in Ada 2022 Wed, 26 May 2021 03:11:00 -0400 Pat Rogers

In 2016, AdaCore developed and deployed a new tasking profile based directly on the standard Ravenscar profile, but with some restrictions relaxed or replaced. We presented the new profile in 2017 at Ada Europe [1], providing the justification, additional capabilities, execution-time costs, and resulting schedulability analysis supported. That same year, AdaCore extended SPARK to include the new profile, thus supporting both tasking subsets.

The new profile is included in the Ada 2022 draft with some refinements and an official name: Jorvik (pronounced “Yourvick”). Jorvik was the Viking name for a Roman fort/settlement that eventually became known as York, in northern England. Said to be England's "Second City," York has given its name to some well-known cities around the world. The cover picture shows the city today, with the 13th-century Gothic cathedral and its Rose window in the background. York is not far from the village of Ravenscar, where that profile was introduced.

Similarly, Jorvik is not far, technically speaking, from Ravenscar. The differences can be summarized by the small handful of restrictions that are removed or replaced from the Ravenscar list. Everything else in Jorvik remains as it is in Ravenscar. Although the changes are few in number, they are nonetheless significant. We will explore the details below.

Before we do, you should understand that Jorvik is not a replacement for Ravenscar. The Ada community can benefit from both profiles. To understand why, let's start with a little background.

Ravenscar is intended for applications using tasking in four distinct application domains:

  • safety-critical systems requiring stringent, exhaustive certification analyses,
  • high-integrity applications requiring formal static analysis and verification,
  • hard real-time applications requiring predictability and schedulability analysis,
  • embedded applications requiring a small memory footprint, high performance, or both.

Note the changing requirements. They begin with very expensive, comprehensive and rigorous analyses, shift to less costly forms of analysis and non-functional properties (predictability), and end with only non-functional properties (space and speed).

Applications can be in more than one of these domains. A hard-real-time application might also be a high-integrity application, for example, and any of them might also be embedded. The requirements for such applications are then the union of all the applicable domains' requirements.

In response to that set of requirements, the Ravenscar profile restrictions remove complexity, both in the application source and in the run-time library (RTL), where Ada tasking is largely implemented. The results address the domains' requirements in three ways:

  1. A simplified RTL, and the correspondingly simpler application code, are less costly to analyze for certification and safety. A subset of any modern language is essential to make these analyses feasible, both technically and economically.
  2. At the application level, the tasking subset facilitates the various forms of analysis. The structure of the tasks, especially their possible control and data interactions, enables at the task level the sort of safety analysis previously applied to entire (sequential) programs. That same set of restricted interactions also simplifies schedulability analysis for the application code.
  3. A simplified RTL can be far more efficient in both object-code space and speed. For example, abort statements impose distributed costs, i.e., object code size and execution performance penalties whether or not they are actually used in an application. Some other constructs do not impose distributed costs but do require complicated run-time support. Removing support for these constructs reduces object code size and improves speed, dramatically.

Ravenscar is designed to maximize simplicity to the degree necessary to meet all of the domains' requirements. Therefore, when certification or formal (e.g., safety) analysis is required, Ravenscar is clearly the right choice. When the smallest possible object code size and the absolute utmost performance are required, Ravenscar is again the best choice.

The cost of a simplified RTL, however, is reduced expressive power at the language level. The most expressive language constructs require RTL support and cannot be made available without it. The requeue statement is a good example, as are task entries and accept statements. A rendezvous is effectively an atomic action with two participants, a very potent facility that requires integrated run-time support for both tasking and exceptions.

Both profiles trade away expressive power, but to different extents. The controlling factor is the analyses: the high-integrity and safety-critical domains necessitate stringent analyses, whereas the real-time and embedded domains do not. It follows that they do not, in isolation, require the same degree of simplicity. Consequently, Jorvik is designed to enhance expressive power for applications that are only in the real-time and/or embedded domains. As we said earlier, Ravenscar can be used in this case, and sometimes should be used, but the additional expressive power of Jorvik makes it an attractive alternative.

With that understood, let’s explore the Jorvik facilities.

The differences between the two profiles can be summarized by the restrictions that are removed or replaced from the Ravenscar restrictions list. The pertinent Ravenscar list is as follows:

  • Max_Entry_Queue_Length => 1
  • Max_Protected_Entries => 1
  • Simple_Barriers
  • No_Relative_Delay
  • No_Dependence => Ada.Calendar
  • No_Dependence => Ada.Synchronous_Barriers
  • No_Implicit_Heap_Allocations

The first two restrictions are the most significant. Removing the first allows multiple callers to be queued simultaneously on a protected entry in Jorvik, rather than at most one. Removing the second restriction allows multiple protected entries per protected object (PO). Note that task schedulability analysis remains possible, and the new restriction Max_Entry_Queue_Length can be used to good effect when performing that analysis.
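In GNAT, selecting the profile and bounding the queues for analysis might look like the following configuration sketch (the bound of 4 is an arbitrary example):

```ada
--  Configuration pragmas (sketch): select Jorvik, then bound every
--  entry queue to support schedulability analysis. A bound of 1 is
--  what Ravenscar imposes; Jorvik lets you choose a larger one.
pragma Profile (Jorvik);
pragma Restrictions (Max_Entry_Queue_Length => 4);
```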

Given those two protected entry relaxations, classic protected type idioms are much more likely allowed in Jorvik. Our example is a concurrent bounded buffer, with two entries and as many queued callers at runtime as necessary:

generic
   type Element is private;
package Concurrent_Bounded_Buffers is

   type Content is array (Positive range <>) of Element;

   protected type Bounded_Buffer (Capacity : Positive) is
      entry Put (Item : in Element);
      entry Get (Item : out Element);
      function Is_Empty return Boolean;
      function Is_Full return Boolean;
   private
      Values   : Content (1 .. Capacity);
      Next_In  : Positive := 1;
      Next_Out : Positive := 1;
      Count    : Natural  := 0;
   end Bounded_Buffer;

end Concurrent_Bounded_Buffers;

The other major difference between the two profiles is the content of entry barrier expressions. Ravenscar applies the Simple_Barriers restriction that requires these expressions to consist of either a static expression or a name that statically denotes a Boolean component of the enclosing protected object. Loosely speaking, that means either Boolean literals (e.g., “when True”), or single Boolean components (e.g., “when Some_Boolean”). Jorvik replaces Simple_Barriers with a new restriction named Pure_Barriers. The new restriction allows more complex Boolean expressions, within limits.

Loosely speaking, Pure_Barriers allows the following content for scalar expressions comprising protected entry barriers. You should assume that there are restrictions in the details that I am glossing over. (See RM clauses D.7 and 4.9 for the full definitions.)

  • a static expression (numeric literals, named numbers, static constants, static calls to static functions, certain attributes, etc.);
  • a name for a scalar (i.e., a discrete or real type) component of the enclosing protected unit;
  • a Count attribute reference for an entry in the enclosing protected unit;
  • a call to a predefined relational operator or Boolean logical operator;
  • a membership test;
  • a short-circuit control form;
  • a conditional_expression; or
  • an allowed expression that is enclosed in parentheses.

No other language entities are allowed in the barrier expressions. Content is restricted so that side effects, exceptions, and recursion are impossible. Precluding them is important because the language does not specify the number of times a given barrier is evaluated. With these restrictions in place the number of evaluations won’t matter.
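The list above can be illustrated with a hypothetical protected unit (not from the article) whose barriers combine several pure-barrier-eligible forms:

```ada
--  Hypothetical sketch: the barriers below combine scalar components,
--  a 'Count attribute reference, a membership test, short-circuit
--  forms, and a conditional_expression -- all allowed by Pure_Barriers.
protected Controller is
   entry Go;
   entry Stop;
private
   Mode  : Natural := 0;
   Armed : Boolean := False;
end Controller;

protected body Controller is
   entry Go when Armed and then (Mode in 1 .. 3 or else Stop'Count = 0) is
   begin
      Mode := Mode + 1;
   end Go;

   entry Stop when (if Armed then Mode > 0 else True) is
   begin
      Mode  := 0;
      Armed := False;
   end Stop;
end Controller;
```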

Given this relaxed content, the typical implementations for our Bounded_Buffer entry bodies are allowed without changes, including especially the entry barriers:

protected body Bounded_Buffer is

   entry Put (Item : in Element) when Count /= Capacity is
   begin
      Values (Next_In) := Item;
      Next_In := (Next_In mod Capacity) + 1;
      Count := Count + 1;
   end Put;

   entry Get (Item : out Element) when Count > 0 is
   begin
      Item := Values (Next_Out);
      Next_Out := (Next_Out mod Capacity) + 1;
      Count := Count - 1;
   end Get;

   function Is_Empty return Boolean is
     (Count = 0);

   function Is_Full return Boolean is
     (Count = Capacity);

end Bounded_Buffer;

For the sake of comparison, imagine we have some other protected object and are using the Ravenscar profile. There will be only one entry, but let’s reuse entry Get above just for illustration. In Ravenscar, we would have a new Boolean component used solely for the entry barrier. It could be named Not_Empty, would be initialized to False, and then updated in the entry body:

entry Get (Item : out Element) when Not_Empty is
begin
   Item := Values (Next_Out);
   Next_Out := (Next_Out mod Capacity) + 1;
   Count := Count - 1;
   Not_Empty := Count > 0;
end Get;

We must include the negation in the name and value because Simple_Barriers requires static expressions. We could not say “not Empty” in the barrier. But note that the assignment to Not_Empty in the entry body has no barrier-oriented restrictions, so the expression comparing Count to zero is allowed, as would much more complex, potentially non-static references. This approach certainly works, but it isn’t the way one would write an entry body and barrier normally, and we’d like to use implementations without requiring code changes when possible. That won’t always be possible, though, because Pure_Barriers does restrict barrier content. The Ravenscar approach might be used occasionally in Jorvik applications too, in combination with the content Jorvik allows.

Note that, in the list of content allowed by Jorvik’s Pure_Barriers, “a name that statically names a scalar subcomponent of the immediately enclosing protected unit” has a specific meaning you need to understand. Remember that the Pure_Barriers restriction doesn’t allow anything that can raise exceptions; therefore, any part of an expression that has to be checked at run time is not allowed. For example, suppose you have a discriminated record type with dependent components, a default for the discriminant, and a PO component of this type that takes the default. The dependent record components don’t exist except for specific values of the discriminant, which, thanks to the default, can vary as the program executes. Ada checks that references to those components are consistent with the current value of the discriminant, raising an exception if the check fails. Such a reference would therefore be rejected by the compiler in an expression required to be consistent with the Pure_Barriers restriction; it is not “pure-barrier-eligible,” to use the technical term. Likewise, if you have an object of some array type, the correctness of a variable used as the index must be checked (in certain cases), so that usage would not be allowed. In other cases, though, the index need not be checked, because the index value is static, i.e., determinable at compile time, and so can be checked then.

To make this discussion concrete, let’s change the private part and body of the Bounded_Buffer type so that we use a record object. Currently, the private part is as shown earlier:

private
   Values   : Content (1 .. Capacity);
   Next_In  : Positive := 1;
   Next_Out : Positive := 1;
   Count    : Natural  := 0;
end Bounded_Buffer;

Let’s say that Next_In, Next_Out, and Count are to be components of a record object instead of direct PO components. In this case that wouldn’t really be worth doing, but we’ll use it to illustrate what’s allowed. In real code, though, especially an application specific PO, you might very well compose the PO from various abstract data types declared in their own packages (that being good software engineering, after all).

type Management is record
   Next_In  : Positive := 1;
   Next_Out : Positive := 1;
   Count    : Natural  := 0;
end record;

protected type Bounded_Buffer (Capacity : Positive) is
   … as before
private
   Values : Content (1 .. Capacity);
   State  : Management;
end Bounded_Buffer;

The entry barriers then become:

entry Put (Item : in Element) when State.Count /= Capacity is

entry Get (Item : out Element) when State.Count > 0 is

The references are slightly more verbose, but the point is that the barriers can reference those record components because they are scalar components, because State is declared immediately within the PO, and because the called functions (the relational operators) are static functions statically called. Your real, application-specific PO may very well contain objects of composite types, and you can reference their components in the barriers as long as they follow the rules.

Similarly, and perhaps abandoning realistic code altogether, we could use an array of three Integers in place of the three distinct variables. We could say that Next_In will now be State (1), Next_Out will now be State (2), and Count will be State (3).

type Management is array (1 .. 3) of Natural;

protected type Bounded_Buffer (Capacity : Positive) is
   … as before
private
   Values : Content (1 .. Capacity);
   State  : Management := (1, 1, 0);  --  Next_In, Next_Out, Count
end Bounded_Buffer;

The entry barriers would be like so:

entry Put (Item : in Element) when State (3) /= Capacity is

entry Get (Item : out Element) when State (3) > 0 is

That approach would not be an improvement, all other things being equal. But it serves to show that array indexing is allowed when the indexes are static.

What would be an improvement, however, is using the two existing functions, Is_Full and Is_Empty, in the barriers. They directly express what the reader must otherwise deduce:

entry Put (Item : in Element) when not Is_Full is

entry Get (Item : out Element) when not Is_Empty is

Sadly, those barrier expressions are not allowed by Pure_Barriers because Is_Full and Is_Empty are not static functions. They cannot be made to be static, either.

The other restrictions removed in Jorvik are mostly a matter of application developer convenience.

We fully expect Jorvik applications to delay periodic tasks with absolute delay statements, just as in Ravenscar applications. Elsewhere, though, a relative delay statement can be appropriate, and relative delays are now allowed. For example, an electro-mechanical relay may have a requirement that it not be actuated more than N times per second in order to prevent burn-out. The semantics of a relative delay match that requirement nicely.

In a way, relative delay statements were already allowed in Ravenscar, via the ugly “hack” of using an absolute delay statement to delay until the value of Clock + some-time-span. Jorvik isn’t really adding much here, but the direct expression is cleaner and simpler.
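Side by side, the two forms look like this (a sketch with a hypothetical null Actuate procedure standing in for the real relay operation):

```ada
with Ada.Real_Time; use Ada.Real_Time;

procedure Actuate_Relay_Demo is
   procedure Actuate is null;   --  stand-in for the real actuation
begin
   Actuate;
   --  Ravenscar workaround: an absolute delay on Clock + a time span.
   delay until Clock + Milliseconds (100);

   Actuate;
   --  Jorvik: the same intent, expressed directly as a relative delay.
   delay 0.1;
end Actuate_Relay_Demo;
```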

Jorvik removes the restriction prohibiting use of the Ada.Calendar package. This restriction is present in Ravenscar because the Ada.Real_Time package has more appropriate semantics for real-time/embedded applications. However, not all usage of Ada.Calendar is unreasonable, for example time-stamping log messages. That said, Ada.Real_Time will surely remain the primary facility.

Jorvik removes the restriction prohibiting use of the Ada.Synchronous_Barriers package. A “barrier” in this case is an abstract data type, not a Boolean expression controlling a protected entry. The semantics are much like visiting a restaurant that requires all members of the dinner party to be present before any are seated. An object of type Synchronous_Barrier has a discriminant that specifies how many tasks are in the “party.” Once that many tasks “arrive” by calling the entry for that object, the entire set of caller tasks is allowed to continue. The obvious implementation of type Synchronous_Barrier is as a protected type that, by definition, must allow multiple callers to queue on a single entry. Jorvik allows that implementation so there was no need to restrict the package.
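The restaurant analogy can be sketched with the standard package directly (the task and object names here are illustrative):

```ada
with Ada.Synchronous_Barriers; use Ada.Synchronous_Barriers;

procedure Dinner_Party is
   --  All three "party members" must arrive before any may proceed.
   Table : Synchronous_Barrier (Release_Threshold => 3);

   task type Guest;
   task body Guest is
      Notified : Boolean;
   begin
      --  Each task blocks here until three tasks have arrived.
      Wait_For_Release (Table, Notified);
      --  Notified is True for exactly one task in the released group.
   end Guest;

   Guests : array (1 .. 3) of Guest;
begin
   null;
end Dinner_Party;
```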

Finally, the restriction No_Implicit_Heap_Allocations is removed. That restriction is most pertinent to the domains requiring certification and/or safety analyses, but Jorvik is not targeted to those domains. Some implicit allocations would not be a problem for Jorvik applications. Nevertheless, related restrictions are needed in this regard. Recall that in both Ravenscar and Jorvik no protected objects or tasks are ever allocated. They are always declared. There are cases, however, in which GNAT would allocate a task or protected object dynamically, transparently, even though an allocation is not visible in the source code. The restriction No_Implicit_Heap_Allocations would catch that, but we’ve removed it in Jorvik.

For example, consider a composite object declared in a library package, say a String object. If the bounds of the object are not known at compile-time GNAT will allocate the object, implicitly.

with Max_Size;  -- a function
package P is
   Name : String (1 .. Max_Size);
end P;

Now, instead of being a library object, imagine Name is a component of a protected type or protected object:

with Max_Size;  -- a function
package P is

   protected PO is
      procedure Q;
      Name : String (1 .. Max_Size);
   end PO;

end P;

It’s the same problem, except now the compiler will be allocating the enclosing protected object, thus violating the restriction. The same behavior is possible for task objects. Therefore, GNAT adds two new restrictions to Jorvik to prevent these specific cases: No_Implicit_Task_Allocations and No_Implicit_Protected_Object_Allocations.

With those two restrictions in place, the above code causes this error message from GNAT: violation of restriction "No_Implicit_Protected_Object_Allocations"

These two restrictions are not part of the Jorvik profile in the Ada 2022 standard. They are specific to the GNAT implementation. However, we intend to argue for them in a subsequent update to the standard.

In conclusion, if you are unsure when to use one of the profiles, or any subset, there is an applicable maxim, originally expressed for Ravenscar: “If an application cannot be reasonably expressed within the Ravenscar subset, it isn’t a Ravenscar application.” In other words, the application code in these domains, particularly those undergoing rigorous analyses, must be very simple, and, consequently, will be expressible in the subset. Otherwise, the project lead should review whether adhering to Ravenscar is appropriate. That maxim is true for the Jorvik profile as well. If an application “genuinely requires” requeue statements, for example, maybe a larger subset is appropriate.

Of course, “genuinely requires” is difficult to define precisely, especially because one can work around some of the two profiles’ restrictions via additional application source code. For example, multiple entry queues in a single protected object can be simulated via multiple protected objects, each with a single entry. This additional application code, in effect, implements in a bespoke manner that which the run-time library would have implemented more generally, had the corresponding restriction not been in place. However, that additional source code injects complexity back into the system under analysis. In a very real sense, the complexity has been “moved” from the run-time library up to the application level. At some point, additional application code complexity argues against use of the profiles. That said, Ravenscar is widely used, and justly so. We think Jorvik will be as well.
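To make that workaround concrete, here is a hypothetical sketch (not from the paper) of the technique mentioned above: each protected object provides the single entry that Ravenscar permits, and two such objects together simulate the two entry queues that one richer protected object would otherwise have provided.

```ada
--  Hypothetical sketch: one single-entry protected object per simulated
--  entry queue, within Ravenscar's one-entry-per-object restriction.
protected type Gate is
   entry Wait;        --  block until signaled
   procedure Signal;  --  open the gate for one waiter
private
   Open : Boolean := False;
end Gate;

protected body Gate is
   entry Wait when Open is
   begin
      Open := False;  --  consume the signal
   end Wait;

   procedure Signal is
   begin
      Open := True;
   end Signal;
end Gate;

Gate_A, Gate_B : Gate;  --  two objects stand in for two entry queues
```

The dispatching logic that a single object with two entries would have centralized must now be written at the application level, around the calls to Gate_A and Gate_B, which is precisely the complexity relocation described above.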

[1] P. Rogers, J. Ruiz, T. Gingold, and P. Bernardi, A New Ravenscar-Based Profile in Reliable Software Technologies Ada-Europe 2017, Johann Blieberger and Marcus Bader (eds) (2017), LNCS 10300, Springer-Verlag, pp. 169-183.

From Rust to SPARK: Formally Proven Bip-Buffers Wed, 05 May 2021 00:00:00 -0400 Fabien Chouteau

function In_Writable_Area (This : Buffer; Offset : Buffer_Offset)
  return Boolean
is (if Is_Inverted (This) then
       --  Already inverted
       --  |---W==========R----|
       --  Inverted (R > W):
       --  We can write between W .. R - 1
       Offset in This.Write .. This.Read - 1
    else (
       --  |====R---------W=====|
       --  Not Inverted (R <= W):
       --  We can write between W .. Size - 1, or 0 .. R - 1 if we invert
       (Offset in This.Write .. This.Size - 1)
       or else
       (Offset in 0 .. This.Read - 1)));
function Valid_Write_Slice (This : Buffer; Slice : Slice_Rec) return Boolean
is (Valid_Slice (This, Slice)
    and then In_Writable_Area (This, Slice.From)
    and then In_Writable_Area (This, Slice.To));

procedure Grant (This : in out Buffer;
                 G    : in out Write_Grant;
                 Size : Count)
  with Post =>
    (if Size = 0 then State (G) = Empty)
    and then
    (if State (G) = Valid
     then Write_Grant_In_Progress (This)
          and then Slice (G).Length = Size
          and then Valid_Slice (This, Slice (G))
          and then Valid_Write_Slice (This, Slice (G)));
--  Request indexes of a contiguous writeable slice of exactly Size elements
   Left  : Sample_Array (1 .. 64) := (others => 0);
   Right : Sample_Array (Left'Range) := (others => 0);
   Q     : aliased Offsets_Only (Left'Length);
   WG    : Write_Grant := BBqueue.Empty;
   S     : Slice_Rec;
   Grant (Q, WG, 8);
   if State (WG) = Valid then
      S := Slice (WG);
      Left (Left'First + S.From .. Left'First + S.To) := (others => 42);
      Right (Right'First + S.From .. Right'First + S.To) := (others => -42);
      Commit (Q, WG);
   end if;
   type My_Data is record
      A, B, C : Integer;
   end record;
   type My_Data_Array is array (Natural range <>) of My_Data;

   Buf : My_Data_Array (1 .. 64);
   Q   : aliased Offsets_Only (Buf'Length);
   WG  : Write_Grant := BBqueue.Empty;
   S   : Slice_Rec;
   Grant (Q, WG, 8);
   if State (WG) = Valid then
      S := Slice (WG);
      Buf (Buf'First + S.From .. Buf'First + S.To) := (others => (1, 2, 3));
      Commit (Q, WG);
   end if;
   Q   : aliased Buffer (64);
   WG  : Write_Grant := Empty;
   S   : Slice_Rec;
   Grant (Q, WG, 8);
   if State (WG) = Valid then
      declare
         B : Storage_Array (1 .. Slice (WG).Length)
           with Address => Slice (WG).Addr;
      begin
         B := (others => 42);
      end;
      Commit (Q, WG);
   end if;
   Q   : aliased Framed_Buffer (64);
   WG  : Write_Grant := Empty;
   RG  : Read_Grant := Empty;
   S   : Slice_Rec;
   Grant (Q, WG, 8); -- Get a grant of 8
   Commit (Q, WG, 4); -- Only commit 4
   Grant (Q, WG, 8); -- Get a grant of 8
   Commit (Q, WG, 5); -- Only commit 5
   Read (Q, RG); -- Returns a grant of size 4
generic
   type T is mod <>;
package Atomic.Generic8
with Preelaborate, Spark_Mode
is
   type Instance is limited private;
   --  This type is limited and private, it can only be manipulated using the
   --  primitives below.

   procedure Add_Fetch (This   : aliased in out Instance;
                        Val    : T;
                        Result : out T;
                        Order  : Mem_Order := Seq_Cst)
     with Post => Result = (Value (This)'Old + Val)
                   and then
                  Value (This) = Result;
On the Benefits of Families ... (Entry Families) Wed, 28 Apr 2021 05:04:00 -0400 Pat Rogers

Ada has a concurrency construct known as “entry families” that, in some cases, is just what we need to express a concise, clear solution.

For example, let’s say we want to have a notion of “conditions” that application tasks can await, suspending until the specified condition is "signaled." At some point, other tasks will signal that these conditions are ready to be handled by the waiting tasks. Understand that conditions don't have any state of their own; they are more like "events" that either have happened or have not, and may happen more than once.

For the sake of discussion let’s generalize this idea to an enumeration type representing four possible conditions:

type Condition is (A, B, C, D);

These condition names are not very meaningful but they are just placeholders for those that applications would actually define. Perhaps a submersible's code would have conditions named Hatch_Open, Hatch_Closed, Umbilical_Detached, and so on.

Responding tasks can suspend, waiting for an arbitrary condition to be signaled, and other tasks can signal the occurrence of conditions, using a “condition manager” that the two sets of tasks call.

Here’s the declaration of the condition manager type:

type Manager is limited private;

The type is limited because it doesn’t make sense to assign one manager to another, nor to compare them via predefined equality. There’s another reason you’ll see shortly. The type is private because that’s the default for good software engineering, and there’s no reason to override that default to make the implementation visible to clients. Our API will provide everything clients require, when combined with the capabilities provided by any limited type (e.g., declaring objects, and passing them as parameters).

Tasks can wait for a single condition to be signaled, or they can wait for one of several. Similarly, tasks can signal one or more conditions at a time. Such groups of conditions are easily represented by an unconstrained array type:

type Condition_List is array (Positive range <>) of Condition;

We chose Positive as the index subtype because it allows a very large number of components, far more than is likely ever required. An aggregate value of the array type can then represent multiple conditions, for example:

Condition_List'(A, C)

Given these three types we can define a useful API:

procedure Wait
  (This         : in out Manager;
   Any_Of_These :        Condition_List;
   Enabler      :    out Condition);
--  Block until/unless any one of the conditions in Any_Of_These has been
--  Signaled. The one enabling condition chosen will be returned in the Enabler
--  parameter, and is cleared internally as Wait exits. Any other signaled
--  conditions remain signaled.

procedure Wait
  (This     : in out Manager;
   This_One : Condition);
--  Block until/unless the specified condition has been Signaled. This
--  procedure is a convenience routine that can be used instead of an
--  aggregate with only one condition component.

procedure Signal
  (This         : in out Manager;
   All_Of_These : Condition_List);
--  Indicate that all of the conditions in All_Of_These are now set. The
--  conditions remain set until cleared by Wait.

procedure Signal
  (This     : in out Manager;
   This_One : Condition);
--  Indicate that This_One condition is now set. The condition remains set
--  until cleared by Wait. This procedure is a convenience routine that can
--  be used instead of an aggregate with only one condition component.

Here’s a task that waits for either condition A or B, using a global Controller variable of the Manager type:

task body A_or_B_Processor is
   Active : Condition;
begin
   loop
      Wait (Controller, Any_Of_These => Condition_List'(A, B), Enabler => Active);
      Put_Line ("A_or_B_Processor responding to condition " & Active'Image);
   end loop;
end A_or_B_Processor;

When the call to Wait returns, at least one of either A or B was signaled. One of those signaled conditions is then selected and returned in the Enabler parameter. That selected condition is no longer signaled when the call returns, and will stay that way until another call to procedure Signal changes it. The other condition is not affected, whether or not it was also signaled.

A signaling task could use the API to signal one condition:

Signal (Controller, This_One => B);

or to signal multiple conditions:

Signal (Controller, All_Of_These => Condition_List'(A, C, D));

Now let’s consider the Manager implementation. As this is a concurrent program, we need it to be thread-safe. We’ve declared the Manager type as limited, so either a task type or a protected type would be allowed as the type’s completion. (That’s the other reason the type is limited.) There’s no need for this manager to do anything active, it just suspends some tasks and resumes others when called. Therefore, a protected type will suffice, rather than an active thread of control.

Clearly, tasks that await conditions must suspend until a requested condition has been signaled, assuming it was not already signaled when the call occurred, so a protected procedure won’t suffice. Protected procedures only provide mutual exclusion. Hence we'll use a protected entry for the waiters to call. As you will see later, there is another reason to use protected entries here.

Inside the Manager protected type we need a way to represent whether conditions have been signaled. We can use an array of Boolean components for this purpose, with the conditions as the indexes. For any given condition, if the corresponding array component is True the condition has been signaled, otherwise it has not.

type Condition_States is array (Condition) of Boolean;

Signaled : Condition_States := (others => False);

Thus, for example, if Signaled (B) is True, a task that calls Wait for B will be able to return at once. Otherwise, that task will be blocked and cannot return from the call. Later another task will set Signaled (B) to True, and then the waiting task can be unblocked.

Since an aggregate can also contain only one component if desired, we can use a single set of protected routines for waiting and signaling in the Manager protected type. We don't need one set of routines for waiting and signaling a single condition, and another set of routines for waiting and signaling multiple conditions. Here then is the visible part:

protected type Manager is
   entry Wait
     (Any_Of_These : Condition_List;
      Enabler      : out Condition);
   procedure Signal (All_Of_These : Condition_List);
end Manager;

Both the entry and the procedure take an argument of the array type, indicating one or more conditions. The entry, called by waiting tasks, also has an output argument, Enabler, indicating which specific condition enabled the task to resume, i.e., which condition was found signaled and was selected to unblock the task. We need that parameter because the task may have specified that any one of several conditions would suffice, and more than one could have been signaled.

The bodies of our API routines are then just calls into the protected Manager argument. For example, here are two of the four:

procedure Wait
  (This         : in out Manager;
   Any_Of_These :        Condition_List;
   Enabler      :    out Condition)
is
begin
   This.Wait (Any_Of_These, Enabler);
end Wait;

procedure Signal
  (This     : in out Manager;
   This_One : Condition)
is
begin
   This.Signal (Condition_List'(1 => This_One));
end Signal;

Now let’s examine the implementation of the protected type. It gets slightly complicated, but only a little.

Our entry Wait allows a task to request suspension until one of the indicated conditions is signaled, as specified by the entry argument. Normally we’d expect to use the entry barrier to express this so-called “condition synchronization” by querying the conditions’ state array. If one of the requested conditions is True the barrier would allow the call to execute and complete. However, barriers do not have compile-time visibility to the entry parameters, so they cannot be referenced in the barriers. That's also true for the Boolean guards controlling task entry accept statements within select statements.

Why not? Ada synchronization constructs are based on “avoidance synchronization,” meaning that 1) the user-written controls that enable/disable the execution of task entry accept statements and protected entry bodies are intended to enable them only when they can actually provide the requested service, and 2) that runtime determination is based on information known prior to the execution of the accept statement or entry body. For example, at runtime, if a bounded buffer is full, that fact can be determined from the buffer's state: is the count of contained items equal to the capacity of the backing array? If so, the controls disallow the operation to insert another value. Likewise, if the buffer is empty, the removal operation is disallowed. When we write the buffer implementation we know beforehand what the operations will try to do, so we can write the controls to disallow them at runtime until they can succeed. Most of the time that’s sufficient, but not always. When we can't know precisely what the operations will do when we write the code, avoidance synchronization doesn't fit the bill. That's the case with the condition manager: we don’t know beforehand which conditions the Wait caller will specify, and we can't refer to the parameters in the barrier, therefore we cannot use the barrier to enable or disable execution of the Wait entry body.
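To illustrate the case where avoidance synchronization does fit, here is a sketch (not from the article) of the classic bounded buffer mentioned above; the element type and capacity are arbitrary choices for the example. Both barriers refer only to state the object already holds before either entry body runs:

```ada
type Item_Array is array (1 .. 10) of Integer;

protected type Bounded_Buffer is
   entry Insert (Item : Integer);
   entry Remove (Item : out Integer);
private
   Values   : Item_Array;
   Count    : Natural  := 0;
   Next_In  : Positive := 1;
   Next_Out : Positive := 1;
end Bounded_Buffer;

protected body Bounded_Buffer is

   --  Insertion is disallowed until the buffer is not full: the barrier
   --  is decidable from the count alone, before the body executes.
   entry Insert (Item : Integer) when Count < Values'Length is
   begin
      Values (Next_In) := Item;
      Next_In := (Next_In mod Values'Length) + 1;
      Count := Count + 1;
   end Insert;

   --  Likewise, removal is disallowed until the buffer is not empty.
   entry Remove (Item : out Integer) when Count > 0 is
   begin
      Item := Values (Next_Out);
      Next_Out := (Next_Out mod Values'Length) + 1;
      Count := Count - 1;
   end Remove;

end Bounded_Buffer;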

To handle cases in which avoidance synchronization is insufficient Ada defines the “requeue” statement. Calling an entry that uses a requeue statement is much like calling a large company on the telephone. Calling the main number connects you to a receptionist (if you're lucky and don't get an annoying menu). If the receptionist can answer your question, they do so and then you both hang up. Otherwise, the receptionist forwards ("requeues") the call to the person you need to speak with. After doing so, the receptionist hangs up, because from their point of view the call is complete. The call is not complete from your point of view, though, until you finish your conversation with the new receiver. And of course you may have to wait to speak to that person.

In this metaphor, a task calling entry Wait is the person calling the large corporation. Like the receptionist, Wait must take (execute) the call without knowing the requested conditions, because the entry barrier cannot reference the entry arguments. The specified conditions and their states are only known once the entry body executes. Therefore, Wait may or may not be able to allow the caller to return from the call immediately. If not, it requeues the call and finishes, leaving the call still pending on the requeue target. Because Wait always takes a call, the entry barrier is just hard-coded to True. (That’s always a strong indication that requeue is involved.) Even though this barrier always allows a call, much like a protected procedure, we must use an entry because only protected entries can requeue callers.

Inside the entry body the specified conditions’ states are checked, looking for one that is True. If one is found, the entry body completes and the caller returns to continue further, responding to the found condition. If no requested condition is True, though, we cannot let the caller continue. We block it by requeueing the caller on to another entry. Eventually that other entry will allow the caller to return, when an awaited condition finally becomes True via Signal.

Here then is the full declaration for the protected type Manager:

type Condition_States is array (Condition) of Boolean;

protected type Manager is
   entry Wait
     (Any_Of_These : Condition_List;
      Enabler      : out Condition);
   procedure Signal (All_Of_These : Condition_List);
private
   Signaled          : Condition_States := (others => False);
   Prior_Retry_Calls : Natural := 0;
   entry Retry
     (Any_Of_These : Condition_List;
      Enabler      : out Condition);
end Manager;

The private part contains the condition states, a management variable, and the other entry, Retry, onto which we will requeue when necessary. Note that this other entry is only meant to be called by a requeue from the visible entry Wait, so we declare it in the private part to ensure there are no other calls to it.

Here’s the body of the entry Wait:

entry Wait
  (Any_Of_These : Condition_List;
   Enabler      : out Condition)
  when True
is
   Found_Awaited_Condition : Boolean;
begin
   Check_Conditions (Any_Of_These, Enabler, Found_Awaited_Condition);
   if not Found_Awaited_Condition then
      requeue Retry;
   end if;
end Wait;

The hard-coded entry barrier ("when True") always allows a caller to execute, subject to the mutual exclusion requirement. In the body, we call an internal procedure to check the state of the requested condition(s). If we don’t find any of the specified conditions True, we requeue the caller onto the Retry entry. Because Retry has the same parameter profile, the caller’s arguments go along to Retry automatically. On the other hand, if we did find a specified condition True, we just exit the call, Enabler having already been set.

Eventually, presumably, an awaited False condition will become True. That happens when Signal is called:

procedure Signal (All_Of_These : Condition_List) is
begin
   for C of All_Of_These loop
      Signaled (C) := True;
   end loop;
   Prior_Retry_Calls := Retry'Count;
end Signal;

After setting the specified conditions' states to True, Signal captures the number of queued callers waiting on Retry. (The variable Prior_Retry_Calls is an internal component declared in the protected type. The value is never presented to callers, but is, instead, used only to manage callers.)

At long last, here’s the body of Retry:

entry Retry
  (Any_Of_These : Condition_List;
   Enabler      : out Condition)
  when Prior_Retry_Calls > 0
is
   Found_Enabling_Condition : Boolean;
begin
   Prior_Retry_Calls := Prior_Retry_Calls - 1;
   Check_Conditions (Any_Of_These, Enabler, Found_Enabling_Condition);
   if not Found_Enabling_Condition then
      requeue Retry;
   end if;
end Retry;

Recall that when a protected procedure or entry exits a protected object, the run-time system re-evaluates all the object’s entry barriers, looking for an open (True) barrier with a caller queued, waiting. If one is found, that entry body is allowed to execute on behalf of that caller. On exit, the evaluation/execution process repeats. This process is known as a “protected action” and is one reason protected objects are so expressive and powerful. The protected action continues iterating, executing enabled entry bodies on behalf of queued callers, until either no barriers are open or no open barriers have callers waiting.

Therefore, when procedure Signal sets Prior_Retry_Calls to a value greater than zero and then exits, the resulting protected action allows Retry to execute. Furthermore, Retry continues to execute, attempting to service all the prior callers in the protected action, because its barrier is False only when all those prior callers have been serviced.

For each caller, Retry attempts the same thing Wait did: if a requested condition is True the caller is allowed to return from the call. Otherwise, the caller is requeued onto Retry. So yes, Retry requeues the caller onto itself! Doing so is not necessarily a problem, but in this case a caller would continue to be requeued indefinitely when the requested condition is False, unless something prevents that from happening. That’s the purpose of the count of prior callers. Only that number of callers are executed by the body of Retry in the protected action. After that the barrier is closed by Prior_Retry_Calls becoming zero, the protected action ceases when the entry body exits, and any unsatisfied callers remain queued.

All well and good, this works, but have you noticed the underlying assumption? The code assumes that unsatisfied callers are placed onto the entry queue at the end of the queue, i.e., in FIFO order. Consequently, they are not included in the value of the Prior_Retry_Calls count and so do not get executed again until Signal is called again. But suppose we have requested that entry queues (among other things) are ordered by caller priority? We’d want that for a real-time system. But then a requeued caller would not go to the back of the entry queue and would, instead, execute all over again, repeatedly. The prior caller count wouldn’t solve that problem.

If priority queuing might be used, we must change the internal implementation so that the queuing policy is irrelevant. We’ll still have Wait do a requeue when necessary, but no requeue will ever go to the same entry executing the requeue statement. Therefore, the entry queuing order won't make a difference. The entry family facilitates that change, and rather elegantly, too.

An entry family is much like an array of entries, each one identical to the others. To work with any one of the entries we specify an index, as with an array. For example, here’s a requeue to Retry as a member of an entry family, with Active_Retry as the index:

requeue Retry (Active_Retry);

In the above, the caller uses the value of Active_Retry as an index to select a specific entry in the array/family.

The resulting changes to the Manager type are as follows:

type Retry_Entry_Id is mod 2;
type Retry_Barriers is array (Retry_Entry_Id) of Boolean;

protected type Manager is
   … as before
private
   Signaled      : Condition_States := (others => False);
   Retry_Enabled : Retry_Barriers := (others => False);
   Active_Retry  : Retry_Entry_Id := Retry_Entry_Id'First;
   entry Retry (Retry_Entry_Id)
     (Any_Of_These : Condition_List;
      Enabler      : out Condition);
end Manager;

Our entry family index type is Retry_Entry_Id. We happen to need two entries in this implementation, so a modular type with two values will suffice. Modular arithmetic will also express the logic nicely, as you’ll see. The variable Active_Retry is of this type, initialized to zero.

The entry Retry is now a family, as indicated by the entry declaration syntax specifying the index type Retry_Entry_Id within parentheses. Each entry has the same parameters as any others in the family, in this case the same parameters as in the previous implementation.

We thus have two Retry entries so that, at any given time, one of the entries can requeue onto the other one, instead of onto itself. An entry family makes that simple to express.

At runtime, one of the Retry entries will field requeue calls from Wait, and will also be the entry enabled by Signal. That entry is designated the “active” retry target, via the index held in the local variable Active_Retry.

Here’s the updated body of Wait accordingly:

entry Wait
  (Any_Of_These : Condition_List;
   Enabler      : out Condition)
  when True
is
   Found_Enabling_Condition : Boolean;
begin
   Check_Conditions (Any_Of_These, Enabler, Found_Enabling_Condition);
   if not Found_Enabling_Condition then
      requeue Retry (Active_Retry) with abort;
   end if;
end Wait;

The body is as before, except that the requeue target depends on the value of Active_Retry. (We'll discuss "with abort" shortly.)

When Signal executes, it now enables the “active retry” entry barrier:

procedure Signal (All_Of_These : Condition_List) is
begin
   for C of All_Of_These loop
      Signaled (C) := True;
   end loop;
   Retry_Enabled (Active_Retry) := True;
end Signal;

The barrier variable Retry_Enabled is now an array, using the same index type as the entry family.

The really interesting part of the implementation is the body of Retry, showing the expressive power of the language. The entry family member enabled by Signal goes through all its pending callers, attempting to satisfy them and requeuing those that it cannot. But instead of requeuing onto itself, it requeues them onto the other entry in the family. As a result, the order of the queues is immaterial. Again, the entry family makes this easy to express:

entry Retry (for K in Retry_Entry_Id)
  (Any_Of_These : Condition_List;
   Enabler      : out Condition)
  when Retry_Enabled (K)
is
   Found_Enabling_Condition : Boolean;
begin
   Check_Conditions (Any_Of_These, Enabler, Found_Enabling_Condition);
   if Found_Enabling_Condition then
      return;
   end if;
   --  otherwise...
   if Retry (K)'Count = 0 then -- current caller is the last one present
      Retry_Enabled (K) := False;
      Active_Retry := Active_Retry + 1;
   end if;
   requeue Retry (K + 1) with abort;
end Retry;

Note the first line:

entry Retry (for K in Retry_Entry_Id)

as well as the entry barrier (before the reserved word “is”):

when Retry_Enabled (K)

“K” is the entry family index, in this case iterating over all the values of Retry_Entry_Id (i.e., 0 .. 1).

We don’t have to write a loop checking each family member’s barrier; that happens automatically, via K. When a barrier at index K is found to be True, that corresponding entry can execute a prior caller. Slick, isn’t it? Ada is a very powerful language.

Note the last statement, the one performing the requeue:

requeue Retry (K + 1) with abort;

Like the Active_Retry variable, the index K is of the modular type with two possible values, so K + 1 is always the “other” entry of the two. The addition wraps around, conveniently. As a result, the requeue is always onto the other entry, never itself, so the entry queue ordering makes no difference.

The “with abort” is important but is not a controlling design factor. In a nutshell, it means that task abort is enabled for the requeued task. The significance is that an aborted task that is suspended on an entry queue is removed from that queue. That’s allowable in this case because we are not using the count of prior callers to control the number of iterations in the protected action, unlike the FIFO implementation. In that other implementation we could not allow requeued tasks to be aborted because the count of prior callers would no longer match the number of queued callers actually present. The protected action would await a caller that would never execute. In this implementation that cannot happen so it is safe to allow aborted tasks to be removed from the queue.

Note that we do still check the count of pending queued callers, we just don't capture it and use it to control the number of iterations in the protected action. If we’ve processed the last caller for member K, we close member K’s barrier immediately, and then set the active retry index to the other entry member so that Wait will requeue to the new “active retry” entry and Signal will, eventually, enable it.

Because we did not make the implementation visible to the package’s clients, our internal changes will not require users to change any of their code.

Note that both the Ravenscar and Jorvik profiles allow entry families, but Ravenscar only allows one member per family because only one entry is allowed per protected object. Such an entry family wouldn't be much use. Jorvik allows multiple entry family members because it allows multiple entries per protected object. However, neither profile allows requeue statements, for the sake of simplifying the underlying run-time library implementation.

For more on tasking and topics like this, see the book by Burns and Wellings, Concurrent and Real-Time Programming In Ada, Cambridge University Press, 2007. Yes, 2007, but it is still excellent and directly applicable today. Indeed, this solution to the condition manager is based on their Resource_Controller example and was supplied to a customer this year.

Thanks to Andrei Gritsenko for suggesting a nice simplification of the FIFO version of the facility. This blog entry has been updated accordingly.

The full code for the entry-family approach follows. Note that we have used a generic package so that we can factor out the specific kind of conditions involved, via the generic formal type. As long as the generic actual type is a discrete type the compiler will be happy. That’s essential because we use the condition type as an index for an array type.

--  This package provides a means for blocking a calling task until/unless any
--  one of an arbitrary set of "conditions" is signaled.

--  NOTE: this implementation allows either priority-ordered or FIFO-ordered
--  queuing.

generic
   type Condition is (<>);
package Condition_Management is

   type Manager is limited private;

   type Condition_List is array (Positive range <>) of Condition;

   procedure Wait
     (This         : in out Manager;
      Any_Of_These :        Condition_List;
      Enabler      :    out Condition);
   --  Block until/unless any one of the conditions in Any_Of_These has been
   --  Signaled. The one enabling condition will be returned in the Enabler
   --  parameter, and is cleared internally as Wait exits. Any other signaled
   --  conditions remain signaled.

   procedure Wait
     (This     : in out Manager;
      This_One : Condition);
   --  Block until/unless the specified condition has been Signaled. This
   --  procedure is a convenience routine that can be used instead of an
   --  aggregate with only one condition component.

   procedure Signal
     (This         : in out Manager;
      All_Of_These : Condition_List);
   --  Indicate that all of the conditions in All_Of_These are now set. The
   --  conditions remain set until cleared by Wait.

   procedure Signal
     (This     : in out Manager;
      This_One : Condition);
   --  Indicate that This_One condition is now set. The condition remains set
   --  until cleared by Wait. This procedure is a convenience routine that can
   --  be used instead of an aggregate with only one condition component.


private

   type Condition_States is array (Condition) of Boolean;

   type Retry_Entry_Id is mod 2;

   type Retry_Barriers is array (Retry_Entry_Id) of Boolean;

   protected type Manager is
      entry Wait
        (Any_Of_These : Condition_List;
         Enabler      : out Condition);
      procedure Signal (All_Of_These : Condition_List);
   private
      Signaled      : Condition_States := (others => False);
      Retry_Enabled : Retry_Barriers := (others => False);
      Active_Retry  : Retry_Entry_Id := Retry_Entry_Id'First;
      entry Retry (Retry_Entry_Id)
        (Any_Of_These : Condition_List;
         Enabler      : out Condition);
   end Manager;

end Condition_Management;

package body Condition_Management is

   -- Wait --

   procedure Wait
     (This         : in out Manager;
      Any_Of_These :        Condition_List;
      Enabler      :    out Condition)
   is
   begin
      This.Wait (Any_Of_These, Enabler);
   end Wait;

   -- Wait --

   procedure Wait
     (This     : in out Manager;
      This_One : Condition)
   is
      Unused : Condition;
   begin
      This.Wait (Condition_List'(1 => This_One), Unused);
   end Wait;

   -- Signal --

   procedure Signal
     (This         : in out Manager;
      All_Of_These : Condition_List)
   is
   begin
      This.Signal (All_Of_These);
   end Signal;

   -- Signal --

   procedure Signal
     (This     : in out Manager;
      This_One : Condition)
   is
   begin
      This.Signal (Condition_List'(1 => This_One));
   end Signal;

   -- Manager --

   protected body Manager is

      procedure Check_Conditions
        (These   : Condition_List;
         Enabler : out Condition;
         Found   : out Boolean);

      -- Wait --

      entry Wait
        (Any_Of_These : Condition_List;
         Enabler      : out Condition)
        when True
      is
         Found_Enabling_Condition : Boolean;
      begin
         Check_Conditions (Any_Of_These, Enabler, Found_Enabling_Condition);
         if not Found_Enabling_Condition then
            requeue Retry (Active_Retry) with abort;
         end if;
      end Wait;

      -- Signal --

      procedure Signal (All_Of_These : Condition_List) is
      begin
         for C of All_Of_These loop
            Signaled (C) := True;
         end loop;
         Retry_Enabled (Active_Retry) := True;
      end Signal;

      -- Retry --

      entry Retry (for K in Retry_Entry_Id)
        (Any_Of_These : Condition_List;
         Enabler      : out Condition)
        when Retry_Enabled (K)
      is
         Found_Enabling_Condition : Boolean;
      begin
         Check_Conditions (Any_Of_These, Enabler, Found_Enabling_Condition);
         if Found_Enabling_Condition then
            return;
         end if;
         --  otherwise...
         if Retry (K)'Count = 0 then -- current caller is the last one present
            Retry_Enabled (K) := False;
            Active_Retry := Active_Retry + 1;
         end if;
         requeue Retry (K + 1) with abort;
      end Retry;

      -- Check_Conditions --

      procedure Check_Conditions
        (These   : Condition_List;
         Enabler : out Condition;
         Found   : out Boolean)
      is
      begin
         Found := False;
         for C of These loop
            if Signaled (C) then
               Signaled (C) := False;
               Enabler := C;
               Found   := True;
               return;
            end if;
         end loop;
      end Check_Conditions;

   end Manager;

end Condition_Management;
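For readers more familiar with mainstream thread libraries, the Wait/Signal semantics above can be approximated with a single condition variable. This is a rough Python sketch under the same "any-of" rule (the enabling condition is cleared as the wait exits, while other signaled conditions remain set); it is an illustration, not the Ada implementation above, and the name ConditionManager is invented here.

```python
import threading

class ConditionManager:
    """Sketch of the Wait/Signal semantics: a waiter blocks until any
    one of its conditions is signaled, consumes (clears) that one, and
    leaves the other signaled conditions set."""

    def __init__(self, n_conditions):
        self._cv = threading.Condition()
        self._signaled = [False] * n_conditions

    def signal(self, all_of_these):
        with self._cv:
            for c in all_of_these:
                self._signaled[c] = True
            self._cv.notify_all()

    def wait(self, any_of_these):
        with self._cv:
            while True:
                for c in any_of_these:
                    if self._signaled[c]:
                        self._signaled[c] = False  # cleared on exit
                        return c                   # the "Enabler"
                self._cv.wait()
```

A waiter calling wait([0, 1]) sleeps until some other thread calls, say, signal([1]); it then returns 1 with condition 1 cleared and any other signaled condition left intact.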
Showing Global contracts with GNAT Studio Fri, 26 Feb 2021 00:30:00 -0500 Simon Buist

In SPARK, data-flow analysis performs two steps: (1) verifying the user-defined data-flow contracts (e.g. Global / Depends) and (2) generating them when they are missing.

Accurate data-flow analysis is a necessary prerequisite for proof of absence of run-time errors (AoRTE).

SPARK 2005 only did the first step (assuming null global data in the absence of user-defined global contracts). The second step, generation, is useful for users that want to get the benefits of flow-analysis, including proven AoRTE, without bothering to annotate their code with contracts.

However, the global contracts that get generated may not meet expectations or requirements. For example, you may write to a variable that should only be read from. In this case, flow analysis and AoRTE proof could pass, but your code would not meet its requirements.

So there is a need to see what global contracts SPARK generates.

The generated global contracts were previously hidden from the user. They could be exposed with the switch --flow-show-gg, but then SPARK would output the generated global contracts to the console, which made them hard to utilise effectively.

In the integrated development environment, GNAT Studio, there is now a plugin that inserts the generated global contracts inline with the code.

The GNAT Studio contextual menu

A package with two Global variables: G1 and G2.

Here, we have a screenshot of some Ada source code being edited in GNAT Studio. The code uses two globals, G1 and G2. Function Potato reads both. Function Kitty reads G1.

The user has right-clicked to select the new plugin, “Show generated Global contracts”.

Global contracts displayed, after right-clicking "SPARK --> Globals --> Show generated Global contracts "

Once the user clicks “Show generated Global contracts”, the generated global contracts get inserted into the editor window, so that the user can see what SPARK data-flow analysis detects as globals.

The user can then inspect the globals, and if they wish, copy-paste from this into their code, to add contracts. From this point, they can check whether the system fulfills global data-flow requirements.

We see this as a great learning tool for beginners to SPARK.


Let’s test the plugin on the Tokeneer project - a codebase that makes extensive use of Globals. Tokeneer has been fully verified in SPARK, and has a comprehensive set of user-written Global contracts, so we need to hide these contracts from our plugin by adding the following directives to the file we want to test:

pragma Ignore_Pragma (Global);
pragma Ignore_Pragma (Refined_Global);
pragma Ignore_Pragma (Depends);
pragma Ignore_Pragma (Refined_Depends);

This will allow us to compare the user-written Global contracts with the ones displayed by our plugin.

The results:

File enclave.adb has a procedure named ValidateAdminToken:

procedure ValidateAdminToken, with user-supplied Global contracts shown above the generated Global contracts

We can see that the user-supplied Global contract (white background) closely matches the generated one (grey background). GNATprove has resolved Output => Status conservatively as In_Out => Enclave.Status. The plugin correctly summarises the effect on individual variables by the aggregated effect on the corresponding abstract state, when the variables belong to one.

We chose to prefix Global variable names with their full path, even when they are inside the current program unit. We decided to do this to disambiguate situations where two different variables share the same name, for example where we have Global variable X in package Outer, and Global variable X in package Inner, with Inner nested inside Outer:

Global contracts generated for nested packages

Coming back to the Tokeneer code, we also ran the plugin on file keystore.adb, which has a function named GetBlock:

function GetBlock has no Global state, and the plugin has generated Global => null.

The plugin has correctly identified that there is no Global state. It’s useful to annotate this function with Global => null, so that we know it is not modifying any Global state.

We’d love to know how you use this feature and if you see any useful enhancements. One feature we could add is the ability to automatically insert a generated contract in the code if the user wishes so.

Doubling the Performance of SPARKNaCl (again...) Thu, 18 Feb 2021 00:00:00 -0500 Roderick Chapman

function Product_To_Seminormal (X : in Product_GF)
  return Seminormal_GF
  with Pure_Function,
       Global => null;
--  "LM"   = "Limb Modulus"
--  "LMM1" = "Limb Modulus Minus 1"
LM   : constant := 65536;
LMM1 : constant := 65535;

--  "R2256" = "Remainder of 2**256 (modulo 2**255-19)"
R2256 : constant := 38;

--  "Maximum GF Limb Coefficient"
MGFLC : constant := (R2256 * 15) + 1;

--  "Maximum GF Limb Product"
MGFLP : constant := LMM1 * LMM1;

subtype Product_GF is GF
  with Dynamic_Predicate =>
    (for all I in Index_16 =>
      Product_GF (I) >= 0 and
      Product_GF (I) <=
        (MGFLC - 37 * GF_Any_Limb (I)) * MGFLP);

--  A "Seminormal GF" is the result of applying a single
--  normalization step to a Product_GF
--  Least Significant Limb ("LSL") of a Seminormal GF.
--  LSL is initially normalized to 0 .. 65535, but gets
--  R2256 * Carry added to it, where Carry is (Limb 15 / 65536)
--  The upper-bound on Limb 15 is given by substituting I = 14
--  into the Dynamic_Predicate above, so
--    (MGFLC - 37 * 14) * MGFLP = 53 * MGFLP
--  See the body of Product_To_Seminormal for the full
--  proof of this upper-bound
subtype Seminormal_GF_LSL is I64
  range 0 .. (LMM1 + R2256 * ((53 * MGFLP) / LM));

--  Limbs 1 through 15 are in 0 .. 65535, but the
--  Least Significant Limb 0 is in Seminormal_GF_LSL
subtype Seminormal_GF is GF
  with Dynamic_Predicate =>
    (Seminormal_GF (0) in Seminormal_GF_LSL and
      (for all I in Index_16 range 1 .. 15 =>
        Seminormal_GF (I) in GF_Normal_Limb));
Performance analysis and tuning of SPARKNaCl Tue, 09 Feb 2021 05:29:00 -0500 Roderick Chapman

#define FOR(i,n) for (i = 0;i < n;++i)
#define sv static void
typedef long long i64;
typedef i64 gf[16];

sv car25519(gf o)
{
  int i;
  i64 c;
  FOR(i,16) {
    o[i]+=(1LL<<16);
    c=o[i]>>16;
    o[(i+1)*(i<15)]+=c-1+37*(c-1)*(i==15);
    o[i]-=c<<16;
  }
}
if (i == 15)
   o[0] += c-1+37*(c-1);
else
   o[i+1] += c-1;
--  returns equivalent of X >> 16 in C, doing an arithmetic
--  shift right when X is negative, assuming 2's complement
--  representation
function ASR_16 (X : in I64) return I64
is (To_I64 (Shift_Right_Arithmetic (To_U64 (X), 16)))
  with Post => (if X >= 0 then ASR_16'Result = X / LM else
                               ASR_16'Result = ((X + 1) / LM) - 1);
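The postcondition on ASR_16 relates an arithmetic shift to Ada's truncating division. As a sanity check, here is a small Python sketch: Python's `>>` on integers is already an arithmetic (sign-preserving) shift, and a helper emulates Ada's truncate-toward-zero `/`.

```python
LM = 65536  # the "Limb Modulus" from the post

def trunc_div(a, b):
    # Ada's "/" truncates toward zero; Python's // floors,
    # so truncation is emulated explicitly.
    q = abs(a) // abs(b)
    return -q if (a < 0) != (b < 0) else q

def asr_16(x):
    # Python's >> on ints is an arithmetic shift.
    return x >> 16

# Check the ASR_16 postcondition on a few values.
for x in [-(2 ** 40), -100000, -1, 0, 1, 70000]:
    if x >= 0:
        assert asr_16(x) == trunc_div(x, LM)
    else:
        assert asr_16(x) == trunc_div(x + 1, LM) - 1
```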
function Car (X : in GF) return GF
is
   Carry : I64;
   R     : GF;
begin
   R := X;
   for I in Index_16 range 0 .. 14 loop
      Carry := ASR_16 (R (I));
      R (I + 1) := R (I + 1) + Carry;
      R (I) := R (I) mod LM;
   end loop;

   Carry := ASR_16 (R (15));
   R (0) := R (0) + R2256 * Carry;
   R (15) := R (15) mod LM;

   return R;
end Car;
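The reason Car folds the top carry back into limb 0 scaled by R2256 = 38 is that 2**256 is congruent to 38 modulo 2**255 - 19. A small Python model of the same carry pass (a sketch for non-negative limbs, not the SPARK code) checks that the represented value is preserved modulo the field prime:

```python
LM, R2256 = 1 << 16, 38
P = (1 << 255) - 19  # the field prime

def value(limbs):
    # The integer represented by 16 limbs in radix 2**16.
    return sum(l << (16 * i) for i, l in enumerate(limbs))

def car(x):
    r = list(x)
    # Propagate carries through limbs 0 .. 14.
    for i in range(15):
        carry = r[i] >> 16
        r[i + 1] += carry
        r[i] %= LM
    # Fold the carry out of limb 15 back into limb 0, scaled by 38,
    # since 2**256 is congruent to 38 modulo 2**255 - 19.
    carry = r[15] >> 16
    r[0] += R2256 * carry
    r[15] %= LM
    return r

x = [70000] * 16
assert value(car(x)) % P == value(x) % P
```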
   Carry : I64;
   R     : GF with Relaxed_Initialization;
begin
   Carry := ASR_16 (X (0));
   R (0) := X (0) mod LM;
   R (1) := X (1) + Carry;

   pragma Assert
     (R (0)'Initialized and R (1)'Initialized);

   for I in Index_16 range 1 .. 14 loop
      Carry := ASR_16 (R (I));
      R (I) := R (I) mod LM;
      R (I + 1) := X (I + 1) + Carry;
      pragma Loop_Invariant
        (for all K in Index_16 range 0 .. I => R (K)'Initialized);
      pragma Loop_Invariant (R (I + 1)'Initialized);
   end loop;

   pragma Assert (R'Initialized);

   -- as before...
function Scalarmult (Q : in GF_Vector_4;
                     S : in Bytes_32) return GF_Vector_4
  with Global => null;
function Scalarmult (Q : in GF_Vector_4;
                     S : in Bytes_32) return GF_Vector_4
is
   CB     : Byte;
   Swap   : Boolean;
   LP, LQ : GF_Vector_4;
begin
   LP := (0 => GF_0,
          1 => GF_1,
          2 => GF_1,
          3 => GF_0);
   LQ := Q;

   for I in reverse U32 range 0 .. 255 loop
      CB   := S (I32 (Shift_Right (I, 3)));
      Swap := Boolean'Val (Shift_Right (CB, Natural (I and 7)) mod 2);

      CSwap (LP, LQ, Swap);
      --  Note user-defined "+" for GF_Vector_4 called here
      LQ := LQ + LP;
      LP := LP + LP;
      CSwap (LP, LQ, Swap);
   end loop;

   return LP;
end Scalarmult;
--  For each byte of S, starting at the MSB
for I in reverse Index_32 loop
   --  For each bit, starting with bit 7 (the MSB)
   for J in reverse Natural range 0 .. 7 loop
      CB := S (I);
      Swap := Boolean'Val (Shift_Right (CB, J) mod 2);
      CSwap (LP, LQ, Swap);
      LQ := LQ + LP;
      LP := LP + LP;
      CSwap (LP, LQ, Swap);
   end loop;
end loop;
--  For each byte of S, starting at the MSB
for I in reverse Index_32 loop
   CB := S (I);
   --  For each bit of CB, starting with bit 7 (the MSB)
   for J in reverse Natural range 0 .. 7 loop
      Swap := Boolean'Val (Shift_Right (CB, J) mod 2);
      CSwap (LP, LQ, Swap);
      LQ := LQ + LP;
      LP := LP + LP;
      CSwap (LP, LQ, Swap);
   end loop;
end loop;
function "*" (Left, Right : in Normal_GF) return Normal_GF
is
   T  : GF_PA; -- 31 digits
begin
   T := (others => 0);
   --  "Textbook" ladder multiplication
   for I in Index_16 loop
      for J in Index_16 loop
         T (I + J) := T (I + J) + (Left (I) * Right (J));
      end loop;
   end loop;
T (I)      := T (I)      + (Left (I) * Right (0));
T (I + 1)  := T (I + 1)  + (Left (I) * Right (1));
T (I + 2)  := T (I + 2)  + (Left (I) * Right (2));
--  and so on...
LT         := Left (I);
T (I)      := T (I)      + (LT * Right (0));
T (I + 1)  := T (I + 1)  + (LT * Right (1));
T (I + 2)  := T (I + 2)  + (LT * Right (2));
--  and so on...
function DL64 (X : in Bytes_8) return U64
is
   U : U64 := 0;
begin
   for I in X'Range loop
      U := Shift_Left (U, 8) or U64 (X (I));
   end loop;
   return U;
end DL64;
W := (0  => DL64 (Bytes_8 (M (CB + 0 .. CB + 7))),
      1  => DL64 (Bytes_8 (M (CB + 8 .. CB + 15))),
      2  => DL64 (Bytes_8 (M (CB + 16 .. CB + 23))),
      3  => DL64 (Bytes_8 (M (CB + 24 .. CB + 31))),
      -- and so on...   
      15 => DL64 (Bytes_8 (M (CB + 120 .. CB + 127))));
function DL64 (X : in Byte_Seq;
               I : in N32) return U64
  with Global => null,
       Pre => X'Length >= 8 and then
              I >= X'First and then
              I <= X'Last - 7;
function DL64 (X : in Byte_Seq;
               I : in N32) return U64
is
   LSW, MSW : U32;
begin
   --  Doing this in two 32-bit groups avoids the need
   --  for 64-bit shifts on 32-bit machines.
   MSW := Shift_Left (U32 (X (I)),     24) or
     Shift_Left (U32 (X (I + 1)), 16) or
     Shift_Left (U32 (X (I + 2)), 8) or
     U32 (X (I + 3));
   LSW := Shift_Left (U32 (X (I + 4)), 24) or
     Shift_Left (U32 (X (I + 5)), 16) or
     Shift_Left (U32 (X (I + 6)), 8) or
     U32 (X (I + 7));

   --  Just one 64-bit shift and an "or" is now required
   return Shift_Left (U64 (MSW), 32) or U64 (LSW);
end DL64;
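The two-halves trick can be cross-checked outside Ada. This Python sketch assembles the same big-endian 64-bit value from two 32-bit groups and compares it against a direct byte-by-byte read:

```python
def dl64(x, i):
    # Assemble the big-endian 64-bit value from two 32-bit groups.
    msw = (x[i] << 24) | (x[i + 1] << 16) | (x[i + 2] << 8) | x[i + 3]
    lsw = (x[i + 4] << 24) | (x[i + 5] << 16) | (x[i + 6] << 8) | x[i + 7]
    # Just one 64-bit shift and an "or", as in the Ada version.
    return (msw << 32) | lsw

data = bytes(range(1, 9))
assert dl64(data, 0) == int.from_bytes(data, "big")
```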
W := (0  => DL64 (M, CB),
      1  => DL64 (M, CB + 8),
      2  => DL64 (M, CB + 16),
      --  and so on...
      15 => DL64 (M, CB + 120));
A := F (B, C);
function "+" (Left, Right : in GF_Vector_4) return GF_Vector_4
is
   A, B, C, D, E, F, G, H : Normal_GF;
   function Double (X : in Normal_GF) return Normal_GF
     is (X + X)
     with Global => null;
begin
   A := (Left (1) - Left (0)) * (Right (1) - Right (0));
   B := (Left (0) + Left (1)) * (Right (0) + Right (1));
   C := (Left (3) * Right (3)) * GF_D2;
   D := Double (Left (2) * Right (2));

   E := D - C;
   F := D + C;
   G := B - A;
   H := B + A;

   return GF_Vector_4'(0 => G * E,
                       1 => H * F,
                       2 => F * E,
                       3 => G * H);
end "+";
e = sparknacl."-" (&d, &c);
f = sparknacl."+" (&d, &c);
g = sparknacl."-" (&b, &a);
h = sparknacl."+" (&b, &a);

R.6 = sparknacl."*" (&g, &e); [return slot optimization]
<retval>[0] = R.6;

R.7 = sparknacl."*" (&h, &f); [return slot optimization]
<retval>[1] = R.7;

R.8 = sparknacl."*" (&f, &e); [return slot optimization]
<retval>[2] = R.8;

R.9 = sparknacl."*" (&g, &h); [return slot optimization]
<retval>[3] = R.9;

return <retval>;
e = sparknacl."-" (&d, &c); [return slot optimization]
f = sparknacl."+" (&d, &c); [return slot optimization]
g = sparknacl."-" (&b, &a); [return slot optimization]
h = sparknacl."+" (&b, &a); [return slot optimization]
<retval>[0] = sparknacl."*" (&g, &e); [return slot optimization]
<retval>[1] = sparknacl."*" (&h, &f); [return slot optimization]
<retval>[2] = sparknacl."*" (&f, &e); [return slot optimization]
<retval>[3] = sparknacl."*" (&g, &h); [return slot optimization]
return <retval>;
AdaCore at FOSDEM 2021 Thu, 04 Feb 2021 03:55:00 -0500 Fabien Chouteau

Like previous years, AdaCore will participate in FOSDEM. This time the event will be online only, but this won’t prevent us from celebrating Open Source software.

AdaCore engineers will give two talks in the Safety and Open Source devroom, a topic at the heart of AdaCore since its inception:

Hope to see you (virtually) at FOSDEM this week-end!

Mini SAM M4 Ada BSP Tue, 02 Feb 2021 03:40:00 -0500 Fabien Chouteau
$ alr get minisamd51_example
$ cd minisamd51_example*
$ alr build
How To: GNAT Pro with Docker Fri, 22 Jan 2021 00:00:00 -0500 Léo Germond

Using GNAT Pro with containerization technologies, such as Docker, is so easy, a whale could do it!

In this article I will show you how to get started setting up a Docker image with the GNAT Pro tools.

But first...

What is Docker?

Maybe a better question:

What isn’t Docker these days… Am I right?

Over the last decade, DevOps methodology has revolutionized software engineering as we know it. A fundamental concept, Infrastructure as Code, has helped us build software that matters in a reliable, repeatable, and performant way, providing an actionable solution to one of the hardest IT problems:

How can I ensure my build process is stable and repeatable?

Docker is one such tool (one we really like here at AdaCore) that allows us to define our IT infrastructure as code in a highly portable way.

It uses an exhaustive approach to configuration using layers, which means it is easier than ever to set up build environments that share dependencies, a common occurrence for build systems.

Furthermore, its decentralized architecture will work independently from any external infrastructure, an argument that has gained weight as fast as a whale recently.

Lastly, a small piece of advice: keep in mind that Docker instances are containers. While powerful, some things aren’t set up out of the box, such as sharing a USB port or a host system directory. In order to do these things you'll need to do some host configuration work.

Rule of thumb: if you need more than an SSH session to the tools inside the container, e.g. to run a JTAG debugger through Docker, it will require some legwork.

Getting Started

As a prerequisite, you must have a recent version of Docker, a GNAT Pro release package for Linux x86-64, and a Python 3 install.

The procedure and script work with the current stable native compiler version 20.2, as well as with the in-stabilization 21.1 and wavefront 22.0 versions. Older versions or cross toolchains may require some additional work.

First, let’s start with a sample Dockerfile. An example can be found in the AdaCore GitHub's GNAT Docker repository.

The repository contains two directories:

  • gnatpro-deps/ Resources for building the GNAT Pro base image.
  • gnatpro/ Resources for building the full GNAT Pro toolsuite image.

For technical reasons, the actual build is performed in two steps: we set up a build environment in gnatpro-deps/, then use it to build the GNAT Pro release, which we install in gnatpro/.

All of this is done by the create_image script.

Creating a Docker Image Using create_image

The create_image Python script starts by creating a gnatpro:deps image containing the necessary tooling to build GNAT Pro from a release package. Then, using the provided GNAT Pro release archive, it builds a second image with GNAT Pro actually installed. Finally, it tags this image as gnatpro:NN.N, with NN.N being the GNAT Pro version number, so that images of several GNAT Pro versions can coexist side by side.

If we try to run the create_image Python script, we can see that the GNAT Pro release package file must be provided, along with an optional GNAT Pro version number.

When provided, the version number will be used to tag the GNAT Pro image. Otherwise, it will be gathered from the filename, when possible.

usage: create_image [-h] [--verbose] [--gnat_version GNAT_VERSION] gnatpro_release

positional arguments:
  gnat_release      	GNAT Pro release package file

optional arguments:
  -h, --help        	show this help message and exit
  --verbose, -v     	Display commands as they are run
  --gnat_version GNAT_VERSION
                    	GNAT Pro version number for automatic tagging and archive
                    	search. Leave empty for the script to infer it.
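As noted above, when --gnat_version is omitted the script tries to infer the version from the release package filename. A hypothetical sketch of such an inference (the actual create_image implementation may well differ):

```python
import re

def infer_gnat_version(filename):
    # Hypothetical: match the NN.N version number in names like
    # "gnatpro-20.2-x86_64-linux-bin.tar.gz".
    m = re.search(r"gnatpro-(\d+\.\d+)", filename)
    return m.group(1) if m else None

assert infer_gnat_version("gnatpro-20.2-x86_64-linux-bin.tar.gz") == "20.2"
```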

In our example we're using a 20.2 stable GNAT Pro version:

  • The --gnat_version argument should therefore be 20.2, and we provide the path to gnatpro-20.2-x86_64-linux-bin.tar.gz which contains the GNAT Pro installer.
  • If we set the --verbose flag, we can see in real time the commands as they are called by the script.

From the gnatpro-deps/ directory, it builds the gnatpro:deps image, which takes no arguments.

Then, it builds the full GNAT Pro image. This is done in two steps:

First, it copies the release package into the gnatpro/ directory (for technical reasons related to Docker’s “copy” command).

It then calls docker build. This command does the heavy lifting by running the building steps described in the file gnatpro/Dockerfile.

The format of this file is defined in the Docker builder doc. It accepts arguments, and in our case one is mandatory: gnat_release, which should point to the GNAT Pro release package so the create_image script can use it as an argument to the docker build command.

Once this step is performed, the image is complete, and is tagged as gnatpro:20.2.

./create_image --verbose --gnat_version=20.2 gnatpro/gnatpro-20.2-x86_64-linux-bin.tar.gz
Docker for build dependencies: image gnatpro:deps
 > docker build -t gnatpro:deps gnatpro-deps
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM ubuntu:18.04
 ---> 56def654ec22
Step 2/2 : RUN set -xe     && DEBIAN_FRONTEND=noninteractive [...]
 Get:1 bionic InRelease [242 kB]
  ---> 89afd2c07618
Successfully built 89afd2c07618
Successfully tagged gnatpro:20.2
GNAT Pro image built successfully
You can open a shell on it with the command
docker run --entrypoint bash -it gnatpro:20.2

Do you want to build and run the GNAT example ? [yN]

The script finishes by asking to test the newly built image. Input Y to start the test.

The test builds all the GNAT Pro examples and checks that they compile and run without error.

Do you want to build and run the GNAT example ? [yN] y
 > docker run --entrypoint make -t gnatpro:20.2 -C /usr/gnat/share [...]
make: Entering directory '/usr/gnat/share/examples/gnat'
make -C starter
make[1]: Entering directory '/usr/gnat/share/examples/gnat/starter'
gnatmake -g hello
gcc -c -g hello.adb
gnatbind -x hello.ali
gnatlink hello.ali -g
Hello World. Welcome to GNAT
gnatmake -g demo1
   [gprbind]      use_of_import.bexch
   [Ada]          use_of_import.ali
   [archive]      libimport_from_c.a
   [index]        libimport_from_c.a
   [link]         use_of_import.adb

I am now in the imported function

make[1]: Leaving directory '/usr/gnat/share/examples/gnat/other_languages'
make: Leaving directory '/usr/gnat/share/examples/gnat'
gcc -c -g demo1.adb
gcc -c -g gen_list.adb
gcc -c -g instr.adb
gnatbind -x demo1.ali
gnatlink demo1.ali -g

Notice how the examples are running on the container. At this point, we have a Docker image complete with GNAT Pro!

Using the Docker CLI, we can see the gnatpro:deps and gnatpro:20.2 images.

$ docker image ls
REPOSITORY   TAG       IMAGE ID       CREATED             SIZE
gnatpro      20.2      89afd2c07618   4 minutes ago       1.19GB
gnatpro      deps      9f1bdb3dbef0   15 minutes ago      87.9MB

Only the gnatpro:20.2 image is necessary for using the GNAT Pro toolset. The gnatpro:deps image can be removed if you don't need to build images for other GNAT Pro versions.

How to Use This Example

Using these resources as a template, you'll be able to quickly get a working GNAT Pro toolchain running in a Docker container.

On top of that foundation, it is possible to build on-demand CI matrices, scalable compilation jobs, and on-demand analysis services with tools like CodePeer, GNATcoverage and SPARK Pro.

Your imagination is the limit.

Ada on any ARM Cortex-M device, in just a couple minutes Mon, 11 Jan 2021 00:00:00 -0500 Fabien Chouteau

<device Dname="STM32F407VG">
  <memory id="IROM1" start="0x08000000" size="0x00100000" startup="1" default="1"/>
  <memory id="IRAM1" start="0x20000000" size="0x00020000" init="0" default="1"/>
  <memory id="IRAM2" start="0x10000000" size="0x00010000" init="0" default="0"/>
project Hello is

   for Languages use ("Ada", "ASM_CPP"); -- ASM_CPP to compile the startup code
   for Source_Dirs use ("src");
   for Object_Dir use "obj";
   for Main use ("hello.adb");

   for Target use "arm-eabi";

   --  generic ZFP run-time compatible with our MCU
   for Runtime ("Ada") use "zfp-cortex-m4f";

   package Linker is
      --  Linker script generated by startup-gen
      for Switches ("Ada") use ("-T", Project'Project_Dir & "/src/link.ld");
   end Linker;

     package Device_Configuration is

      --  Name of the CPU core on the STM32F407
      for CPU_Name use "ARM Cortex-M4F";

      for Float_Handling use "hard";

      --  Number of interrupt lines on the STM32F407
      for Number_Of_Interrupts use "82";

      --  List of memory banks on the STM32F407
      for Memories use ("SRAM", "FLASH", "CCM");

      --  Specify from which memory bank the program will load
      for Boot_Memory use "FLASH";

      --  Specification of the SRAM
      for Mem_Kind ("SRAM") use "ram";
      for Address ("SRAM") use "0x20000000";
      for Size ("SRAM") use "128K";

      --  Specification of the FLASH
      for Mem_Kind ("FLASH") use "rom";
      for Address ("FLASH") use "0x08000000";
      for Size ("FLASH") use "1024K";

      --  Specification of the CCM RAM
      for Mem_Kind ("CCM") use "ram";
      for Address ("CCM") use "0x10000000";
      for Size ("CCM") use "64K";

   end Device_Configuration;
end Hello;
with Ada.Text_IO;

procedure Hello is
   Ada.Text_IO.Put_Line ("Hello world!");
end Hello;
project Prj is

   type Boot_Mem is ("flash", "sram");
   Boot : Boot_Mem := external ("BOOT_MEM", "flash");

   package Device_Configuration is

      for Memories use ("flash", "sram");

      for Boot_Memory use Boot;

      --  [...]
   end Device_Configuration;
end Prj;
project Prj is

   type Board_Kind is ("dev_board", "production_board");
   Board : Board_Kind := external ("BOARD", "dev_board");

   package Device_Configuration is

      for Memories use ("flash", "sram");

      case Board is
         when "dev_board" =>
            for Size ("sram")     use "256K";
         when "production_board" =>
            for Size ("sram")     use "128K";
      end case;

      --  [...]
   end Device_Configuration;
end Prj;
Finding Vulnerabilities using Advanced Fuzz testing and AFLplusplus v3.0 Thu, 17 Dec 2020 00:00:00 -0500 Paul Butcher

Some of you may recall an AdaCore blog post written in 2017 by Thales engineer Lionel Matias titled "Leveraging Ada Run-Time Checks with Fuzz Testing in AFL". This insightful post took us on a journey of discovery as Lionel demonstrated how Ada programs, compiled using GNAT Pro and an adapted assembler pass, can be subjected to advanced fuzz testing. To achieve this, Lionel showed how the generated assembly code could be instrumented around jump and label instructions and then subjected to grey-box (path-aware) fuzz testing (using the original AFL v2.52b as the fuzz engine). Lionel explained how applying the comprehensive spectrum of Ada runtime checks, in conjunction with Ada's strong typing and contract-based programming, enhances the capabilities of fuzz testing beyond those of other languages. Ada's advanced runtime checking, for exceptions like overflow, and the scrutiny of Ada's design-by-contract assertions allow corner-case bugs to be found whilst also utilising fuzz testing to verify functional correctness.

The success of Thales' initial research work and the obvious potential for this technology (see the blog post for real world examples of the bugs Lionel found), coupled with the evolution of fuzzing tools from the wider community and the impressive bug finding success of AFL, led to an ongoing AdaCore research and development campaign around advanced security hardening through vulnerability testing.

This blog post provides the reader with an update into the science of vulnerability testing and describes some of the projects AdaCore has undertaken around fuzzing technologies as we work towards an industrial grade fuzz testing solution for Ada applications.

In addition this blog post celebrates the recent inclusion of AdaCore's GCC plugin into the publicly available latest release of AFLplusplus (version 3.00c).

A Brief Introduction to Fuzzing

A modern-day definition of fuzzing would be a testing capability designed to automatically create and inject inputs into a software process under test, while monitoring and recording detected faults. When considered as part of a software development strategy, fuzzing is widely regarded as ‘negative testing’ and tends to fall under the remit of ‘software robustness’. More recently, however, it has been widely associated with ‘cyber-security’ and proven to be a suitable mechanism for finding code vulnerabilities that could otherwise lead to malicious exploitation. In this context, it is often known as ‘vulnerability testing’. In addition, the term ‘refutation testing’, used in the context of aerospace security airworthiness to describe the act of ‘refuting that our system is not secure’, can also be applied to fuzz testing.

Fuzzing, as a robustness strategy, has evolved exponentially since its initial inception back in the 1980s. Although the goal of fuzzing is always the same, tool development has forked in many directions such that different fuzzing tools use different test strategies and vary significantly in sophistication and complexity. Simplistic ‘black box’ fuzzers rely on brute force and speed of test case generation and execution, whilst others instrument the code under test before, or during, compilation to allow the fuzzing engine to understand execution flow around decision points and to help guide the mutation engine onto diverging paths.

One of the main features of fuzz testing is rapid and automated test case generation and, as is the case within AFLplusplus, this is often achieved by a series of mutation phases over a starting corpus of test case files. The mutations tend to be 'non-structure aware' and instead alter the test case at the binary level by performing bit flips. That being said, AFLplusplus does support an impressive custom mutation algorithm API that we will talk about later. Mutated test cases are then injected into the system under test while the fuzzer simultaneously monitors the application to detect a hung process or core dump. Test cases that resulted in a software fault are then placed in a separate directory for later manual scrutiny. This approach provides flexibility as the engine doesn't need to understand the format of the input data.
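The loop just described (mutate a corpus entry, inject it, watch for faults, and save crashing inputs for later scrutiny) can be sketched in a few lines of Python. This is a toy illustration of the idea, not AFLplusplus; an exception here stands in for a hung process or core dump.

```python
import random

def mutate(data, rng):
    # Non-structure-aware mutation: flip a few random bits.
    out = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        i = rng.randrange(len(out) * 8)
        out[i // 8] ^= 1 << (i % 8)
    return bytes(out)

def fuzz(target, corpus, iterations=1000, seed=0):
    # Inject mutated test cases into the target, recording any input
    # that raises, for later manual scrutiny.
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(rng.choice(corpus), rng)
        try:
            target(case)
        except Exception:
            crashes.append(case)
    return crashes
```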

Instrumentation Guided Genetic Algorithms

AFLplusplus understands, by using test instrumentation applied during code compilation, when a test case has found a new path (increased coverage) and places that test case onto a queue for further mutation, injection and analysis. The fuzzing driver sets up a small shared memory area for the tested program to store execution path signatures. This structure is a basic array of counters incremented (or set) as execution reaches instrumentation points. Each instrumentation point is assigned, at compile time, a pseudo-random identifier. The array element to be incremented at run time is determined by hashing ordered pairs of identifiers of previous and current instrumentation points, to count the traversal of edges.
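As a rough illustration of that counter array, the following Python sketch assigns pseudo-random identifiers to instrumentation points and derives the counter index from the previous and current identifiers, so that each slot counts an edge rather than a single block. AFL uses a hash of this shape (`cur ^ prev`, then `prev = cur >> 1`), but this is a simplification, not its actual code.

```python
import random

MAP_SIZE = 1 << 16

def make_ids(n_points, seed=1234):
    # Each instrumentation point is assigned a pseudo-random
    # identifier at "compile time".
    rng = random.Random(seed)
    return [rng.randrange(MAP_SIZE) for _ in range(n_points)]

class EdgeMap:
    """The shared-memory area: a basic array of counters, indexed by
    hashing the (previous, current) instrumentation-point pair."""

    def __init__(self):
        self.counters = [0] * MAP_SIZE
        self.prev = 0

    def hit(self, cur_id):
        # Combine previous and current ids; the shift keeps the pair
        # ordered, so A->B and B->A land in different slots.
        self.counters[(cur_id ^ self.prev) % MAP_SIZE] += 1
        self.prev = cur_id >> 1
```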

Though the driver can spawn (fork and exec) a new test process for each execution, instrumentation of the test program introduces a fork server mode that maps the shared memory and forks a new test process when requested by the driver. This avoids the overhead of exec and of running global constructors.

The aim of the instrumentation is to focus on non-intrusive manipulation; the fuzzer does not rely on complex code analysis or other forms of constraint solving. This form of 'path awareness' fuzzing, using an instrumentation-guided genetic algorithm, focuses the mutations on the test cases that discover path divergence and allows the testing to explore deeper into the control flow.

Figure 1 presents a basic representation of a Grey Box Fuzz Testing Architecture.

Figure 1 - Grey Box Fuzz Testing Basic Architecture

Mutation Strategies - "One size doesn't fit all"

Mutators applied to test inputs can be deterministic or nondeterministic, a flexibility that covers changes suitable for both text and binary file formats. Mutators include flipping single or multiple contiguous bits; arithmetic add and subtract at various word widths and endiannesses; and overwrites, inserts and deletes. Deterministic mutators iterate over all bits or bytes of an input file, generating one or more new potential test inputs at each iteration point. Nondeterministic approaches may perform multiple random mutations at random points with random operands. In addition, chunks from another test may be spliced into the current one before another round of random mutations. Some arithmetic mutators use a selection of "interesting" values, while string mutators use blocks of constant values: copies of chunks from elsewhere in the test input, fuzzer-identified tokens, and user-supplied keywords.

Figure 2 - Deterministic Vs Non-deterministic Mutations

Figure 2 shows a basic example of each strategy. The deterministic strategy flips each bit within the test case in turn to produce a new test case, whilst the non-deterministic approach swaps random sequences of bits around while flipping other sequences at the same time. Note, however, that in most cases the mutation engines actually use pseudo-random number generators that are seeded, so even the 'random' patterns are repeatable.
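A toy version of the two strategies in figure 2 might look like this in Python (illustrative only; real engines such as AFLplusplus implement far more operators than these two):

```python
import random

def walking_bitflip(data: bytes):
    """Deterministic: yield one candidate per bit position, that bit flipped."""
    for i in range(len(data) * 8):
        mutated = bytearray(data)
        mutated[i // 8] ^= 1 << (i % 8)
        yield bytes(mutated)

def havoc(data: bytes, rounds: int, seed: int) -> bytes:
    """Nondeterministic: flip bits at random positions.

    Seeding the PRNG makes the 'random' mutation repeatable, as noted above.
    """
    rng = random.Random(seed)
    mutated = bytearray(data)
    for _ in range(rounds):
        i = rng.randrange(len(mutated) * 8)
        mutated[i // 8] ^= 1 << (i % 8)
    return bytes(mutated)

# A 2-byte input yields exactly 16 deterministic single-bit-flip candidates
candidates = list(walking_bitflip(b"\x00\x00"))
```

Running `havoc` twice with the same seed produces the same output, which is what makes a fuzzing session reproducible despite the randomness.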

An Enhanced GCC Instrumentation Approach

American Fuzzy Lop (AFL) was a revolutionary game changer for fuzzing technologies. For many security engineers and researchers it quickly became the de facto standard for how to implement a generic grey box fuzzing engine. AFL paved the way for the genetic algorithm approach to test data mutation through execution path awareness, without compromising test execution frame rates. That, coupled with an easy to understand interface, high configurability, and the fact that AFL has always been 'free software' (in that users have the freedom to run, copy, distribute, study, change and improve the software), led to the discovery and patching of multiple vulnerabilities within released, and widely used, commercial software applications. AFL can also support full black box testing by utilising modern emulator technology, which allows vulnerabilities to be discovered in binary executables when the source code and/or build environment is not available.

Although AFL v2.52b offered a post-processor to instrument assembly output from the compiler, it was target architecture specific and introduced significant run-time overhead. AFL v2.52b also shipped a plugin for LLVM with much lower-overhead instrumentation, but GCC users had to tolerate the higher instrumentation overheads. To ensure that GCC-compiled applications could benefit from low-overhead, optimizable AFL instrumentation, AdaCore took inspiration from an existing plugin, originally developed for instrumenting programs written in C, to create a new feature-rich and modern GCC plugin for the AFL fuzzing engine.

Building the program with instrumentation for AFL amounts to inserting an additional stage of processing whereby the compiler's assembly code output is modified, before running the actual assembler. This additional build stage introduces global variables, a fork-server and edge counting infrastructure in the translation unit that defines the main function and calls to an edge counting function after labels and conditional branches. In order to not disturb the normal execution of the program, each such call must preserve all registers modified by the edge counter, and by the call sequence, e.g. to pass the identifier of the current instrumentation point as an argument to the edge counter. The edge counter loads the previous instrumentation point identifier from a global variable and updates it, in addition to incrementing the signature array element associated with the edge between them.

Integrating the additional instrumentation stages into the compiler enables several improvements, as demonstrated by the pre-existing plugin for LLVM, and a newly introduced one for GCC. Besides portability to multiple target architectures, this move enables the instrumentation to be integrated more smoothly within the code stream. For example, while the assembler post-processor, with its limited knowledge, has to conservatively assume several registers need to be preserved across the edge counter calls, the compiler can take the calls into account when performing register allocation or, even better, avoid the call overhead entirely and perform the edge counting inline.

Using compiler infrastructure also makes it easier to use a thread-local variable to hold the previous instrumentation point identifier, so that programs with multiple threads do not confuse the fuzzer with cross-thread edges. The sooner in the compiler pipeline instrumentation is introduced, the more closely it will resemble the source code structure, and the more opportunities to optimize the instrumentation there will be. Conversely, the later it is introduced, the more closely it will resemble the executable code blocks after optimization, so duplicated blocks, e.g. resulting from loop unrolling, get different identifiers, and merged blocks, e.g. after jump threading, may end up with fewer instrumentation points.


AFLplusplus takes up the mantle where AFL v2.52b left off, adding various enhancements to the capability whilst also offering a highly customisable, chainable and extensible approach through its modular architecture.

One feature of AFLplusplus of particular interest is the ability to add custom mutator algorithms, which can extend, sanitise or completely replace the internal AFL mutation engine.

This, along with other features and a strong industry interest led by Thales, guided AdaCore towards making our GCC plugin compatible with AFLplusplus and ensuring that new features developed in the existing AFLplusplus GCC plugin were reimplemented in the AdaCore-developed plugin.

This work has now been completed and, in partnership with Thales and with support from the AFLplusplus maintainer team, AdaCore has since upstreamed it into the publicly available AFLplusplus repository.


Fuzz Testing Ada Programs

Programming languages come with runtime libraries that abstract away the fundamental machinery required to execute program instructions. By design, each runtime is matched to the predicted role of the programming language, and Ada, which is aimed at safety and security systems, is no exception. Runtimes like Ada's, which support extensive constraint checking, are particularly well suited to fuzz testing. Languages like C, whose runtimes allow some constraints to go unchecked, are less well suited.

The following examples explain this better:

C program that receives a password as a command line argument and displays secret information if the password = "A*^%bd0eK":

#include <string.h>
#include <stdio.h>

void Expose_Secrets_On_Correct_Password (char *Password)
{
   int Privileged = 0;
   char Password_Buff[10];
   memcpy (Password_Buff, Password, strlen(Password) + 1);

   if (strcmp (Password_Buff, "A*^%bd0eK") == 0) {
      Privileged = 1;
   }

   if (Privileged != 0) {
      printf ("Shhhh... the answer to life the universe and everything is 42!\n");
   } else {
      printf ("Pah. There is no getting passed this system's hardened security layer...\n");
   }
}

int main(int argc, char **argv)
{
   Expose_Secrets_On_Correct_Password (argv[1]);
   return 0;
}

Ada equivalent version (disclaimer: this is implemented to closely mirror the C code above, and I'm not in any way advocating that this is the correct way to write Ada, because it clearly isn't!):

with Ada.Text_IO; use Ada.Text_IO;
with Ada.Command_Line;

pragma Warnings (Off);
procedure API_Fuzzing_Example is

   procedure Expose_Secrets_On_Correct_Password (Password : access String);

   procedure Expose_Secrets_On_Correct_Password (Password : access String) is
      Privileged : Integer := 0;
      Password_Buff : String (1 .. 9);
   begin
      Password_Buff := Password.all;

      if Password_Buff = "A*^%bd0eK" then
         Privileged := 1;
      end if;

      if Privileged /= 0 then
         Put_Line ("Shhhh... the answer to life the universe and everything is 42!");
      else
         Put_Line ("Pah. There is no getting passed this system's hardened security layer...");
      end if;

   end Expose_Secrets_On_Correct_Password;

   Entered_Password : aliased String := Ada.Command_Line.Argument (1);

begin
   Expose_Secrets_On_Correct_Password (Password => Entered_Password'Access);
end API_Fuzzing_Example;

Both code examples above contain an array overflow bug. In particular, the assignment to the variable 'Password_Buff' will overflow if the command line argument contains a string longer than 9 characters.

For the examples above let's assume that the encompassing applications have system security requirements around the protection of the identified security asset: "the answer to life the universe and everything".

Let's now consider what happens when both of these programs execute with the following command line arguments:

1. "A*^%bd0eK"

This is the correct password. Both programs correctly expose the security asset by displaying: "Shhhh... the answer to life the universe and everything is 42!"

2. "123456789"

This is an incorrect password. Both programs correctly protect the security asset by displaying: "Pah. There is no getting passed this system's hardened security layer..."

3. "Password"

This is an incorrect password. The Ada runtime raises a constraint error on line 14 "Password_Buff := Password.all;" stating "length check failed". The exception is unhandled so the program terminates - the security asset is protected.

The C program displays: "Pah. There is no getting passed this system's hardened security layer..." - the security asset is also protected.

4. "A*^%bd0eKKKK"

This is an incorrect password. The Ada runtime raises a constraint error on line 14 "Password_Buff := Password.all;" stating "length check failed". The exception is unhandled so the program terminates - the security asset is protected.

The C program displays: "Shhhh... the answer to life the universe and everything is 42!". The security asset is now exposed and can be exploited by the attacker.

But why?

You will likely have spotted the reason the C program failed to protect the asset: the memcpy overflow writes data into the neighbouring 'int' variable 'Privileged'. This type of stack overflow exploitation is a widely recognised security issue in C programs, and various countermeasures exist to prevent this class of vulnerability. For example, randomising the stack memory layout can help ensure the overflow leads to a core dump rather than continued execution with corrupted state.

The issue is exacerbated when you consider that fuzz testing the C program may or may not reveal the bug: the overflow could eventually cause a segmentation fault, but this is not guaranteed. Fuzz testing the Ada program, however, would quickly identify the problem and allow the developer to fix the bug with some basic defensive programming, removing the vulnerability altogether.

Fuzzing into the Future

Within the UK, AdaCore is actively researching and developing a fuzz testing solution for Ada-based software as part of the “High-Integrity, Complex, Large, Software and Electronic Systems” (HICLASS) Aerospace Technology Institute (ATI) supported research and development programme. HICLASS was created to enable the delivery of the most complex, software-intensive, safe and cyber-secure systems in the world, and is best described in an earlier AdaCore blog post titled "AdaCore for HICLASS - Enabling the Development of Complex and Secure Aerospace Systems".

Stage one of this work involved the development of an Ada library to abstract away the complexity involved in fuzz testing Ada programs at the unit level. Unit level fuzz testing treats test case files as in-memory representations of identified test data. The test data is whatever software variables the test engineer would like the fuzzer to drive: typically the 'in mode' parameters of the function under test, global state, stub return values, and even private state manipulated through test injection points or memory overlays. Once the test data has been identified, a series of generic packages are instantiated with the Ada types associated with the test data. These packages are responsible for marshalling and unmarshalling the starting corpus and mutated test cases between binary files and Ada variables; we call this collection of test data information an "AFL Map". A simple test harness is then constructed which receives the test case from the fuzzer as a file path in a command line parameter. The unmarshalled data is then used to set up and execute the test. In addition, a top level exception handler is placed in the test harness to capture unexpected exceptions raised by the runtime; these are converted into core dumps to signal a fault to the fuzzer.
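The harness pattern just described (unmarshal a binary test case into typed values, run the unit, and turn any unexpected exception into a crash the fuzzer can see) can be sketched in Python. The fixed little-endian record layout below is a made-up stand-in for an "AFL Map"; the real Ada library derives the layout from generic package instantiations over the unit's actual types:

```python
import os
import struct

# Hypothetical test-data layout: one 32-bit integer, one boolean,
# one 32-bit float.
RECORD = struct.Struct("<i?f")

def run_one_case(path, unit_under_test):
    """Unmarshal one binary test case and execute the unit under test."""
    with open(path, "rb") as f:
        count, flag, ratio = RECORD.unpack(f.read(RECORD.size))
    try:
        unit_under_test(count, flag, ratio)
    except Exception:
        # Unexpected exception: abort so the fuzzer records a crash.
        os.abort()
```

The top-level handler is the key trick: a language-level constraint violation, invisible to an external observer, becomes a core dump that grey box fuzzers already know how to detect.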

Figure 3 shows a high level overview of the architecture of unit level fuzz testing for Ada.

Figure 3 - Unit Level Fuzz Testing for Ada Programs Architecture

In addition, this work also involved researching and developing a prototype 'Starting Corpus Generator' to create the initial set of test cases that the fuzzer will mutate and a 'Test Case Printer' to convert binary test cases into a human readable syntax. Figure 4 below shows the content of a binary test case created using an AFL Map made up of test case data with the types: Integer, Boolean, Character, Float and String and the human readable version created using the Test Case Printer.

Figure 4 - Raw View and a 'Test Case Printer' View of a Binary Unit Test Case Suitable for Fuzz Testing

Stage two of this work required the construction of a separate Ada application to automate the building and executing of Ada fuzz tests. In addition this application allows the test engineer to build up a set of stop criteria rules that, when satisfied, will terminate the fuzzing session. The rules are based around the exposed state of AFLplusplus which the fuzzing engine broadcasts every thirty seconds. This tool also features an incorporated coverage capability through GNATcoverage allowing for dynamic full MCDC coverage analysis that can also be included within the stop criteria rules. Finally this tool allows for additional AFLplusplus companion applications to easily be utilised by Ada fuzzing sessions - including starting corpus minimisation and utilising maximum available processor cores to run multiple afl-fuzz instances simultaneously.
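Conceptually, a stop-criteria rule set is just a collection of thresholds checked against the statistics the fuzzing engine periodically reports. The sketch below is illustrative Python; the field names are invented for the example and are not AFLplusplus's actual stats keys:

```python
# Hypothetical stop-criteria evaluation over periodically reported
# fuzzer statistics (field names invented for illustration).
def should_stop(stats, rules):
    """Return True once any configured rule threshold is met."""
    return any(stats.get(field, 0) >= threshold
               for field, threshold in rules.items())

# e.g. stop on the first crash, after 8 hours, or at 95% MC/DC coverage
rules = {"saved_crashes": 1,
         "run_time_s": 8 * 3600,
         "mcdc_coverage_pct": 95}
```

Re-evaluating such rules each time the engine broadcasts its state (every thirty seconds, as described above) is enough to terminate a session automatically.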

Figure 5 below shows this tool in action with an active AFLplusplus fuzzing session.

Figure 5 - Extending AFLplusplus with Stop Criteria and Dynamic Code Coverage

Stage 3 involved the building of a fuzzer benchmark testing suite used internally to perform comparisons between versions of the capability as new prototype enhancements are developed.

Stage 4 is where we research and develop a prototype structure aware custom mutation engine for Ada fuzzing sessions. This application is being developed as a standalone dynamic library that implements the AFLplusplus custom mutation API. The library can then be registered with an active AFLplusplus session, to unmarshall the received byte array representations of the test case data into their associated Ada types before being sent to type specific custom mutation packages for manipulation.

Stage 5 is looking into random automated test case generation. This capability will form the basis of the starting corpus of test cases by automatically generating wide ranging sets of test case data. In addition it will be utilised by the custom mutation library to mutate the data whilst ensuring the data remains correctly constrained.

Stage 6 involves further automation in the form of automated test harness creation and test data identification.

Final Note...

Fuzz testing is seen by AdaCore as a key tool in the software development process for asset protection within security-related and security-critical applications, and continued effort will be applied to ensure the capability matches the needs of our customers. In addition, when fuzzing is combined with other security-focused tools, such as static code analysers like CodePeer and formal verification tools like SPARK Pro, and particularly when those tools come with extensive documentation showing mitigation against the top CWEs, the result can be a cyber-secure software development environment fit for the modern cyber-warfare battleground.

To conclude: the future may be fuzzy but we're ok with that! ;-)

Make with Ada 2020: ADArrose Mon, 07 Dec 2020 00:00:00 -0500 Juliana Silva

Charles Villard, Cyril Etourneau, Thomas Delecroix and Louise Flick worked together on the ADArrose project, which won the student prize in the Make with Ada 2019/20 competition. This project was originally posted here. For those interested in participating in the 2020/21 competition, registration is now open and project submissions will be accepted until Jan 31st 2021; register here.




This project intends to make an automated sprinkler based on an STM32F429 board, using the Ada language and the SPARK verification system. We all know the trouble of keeping a plant alive, especially in Paris, where the light is low and our lives go at 100 km/h. We don’t have time to water plants, and if we do manage to find the time, we will most likely give the plant too much water or not enough, and by the time we find out, the plant will be dead. Using the secure Ada language and contract verification, we can create an automatic sprinkler to keep our plants alive and have a beautiful green environment without worrying about it.


Our automatic sprinkler comes with numerous features. First of all it can be set in different activity modes to best respond to your needs.

• Continuous mode:
The automatic sprinkler is always active. Once every hour, the soil humidity around the plant is checked with a sensor. If the humidity percentage isn’t high enough, the sprinkler will water the plant with the right amount of water to keep it well hydrated. Perfect for fragile plants.

• Economic mode:
The automatic sprinkler is always active. When the luminosity is low, the soil humidity around the plant is checked once every hour with a sensor. If the humidity percentage isn’t high enough, the sprinkler will water the plant with the right amount of water to keep it well hydrated. This is done to avoid losing water due to evaporation. Perfect to minimize the water consumption. Use on plants without special needs.

• Planned mode:
The automatic sprinkler is always active, but can only water the plant during the periods specified by the user. The humidity of the soil is checked at the start of the period and then once every hour. If the humidity percentage isn’t high enough, the sprinkler will water the plant with the right amount of water to keep it well hydrated. A last check is done at the end of the period to ensure the well-being of the plant. Useful for a tailored experience.

• Punctual mode:
The automatic sprinkler does one humidity check when starting. If the humidity percentage isn’t high enough, the sprinkler will water the plant with the right amount of water to keep it well hydrated. When the process is finished, the system signals to the user that it can be shut down. For those who want to keep some contact with their green friends. Use to reduce the electricity bill.

Numerous data sets about the plant are collected when the sprinkler is in the first three modes. The soil humidity and surrounding luminosity are recorded once every hour, and the user can access this data over the last 24 hours. Whenever something abnormal is detected in the plant’s consumption or in the environment, messages will warn the user about the peculiar situation. If a problem arises in the system, the user will also be notified.

Full view of the system
Current light and humidity values
Graph over the last hours
Pump activating
  • Access the project schematics here.
  • Access the project code here.
Make with Ada 2020: The autonomous firetruck Fri, 20 Nov 2020 00:00:00 -0500 Juliana Silva

Laboratorio Gluon's project won a finalist prize in the Make with Ada 2019/20 competition. This project was originally posted here. For those interested in participating in the 2020/21 competition, registration is now open and project submissions will be accepted until Jan 31st 2021; register here.



In this project I will show you the why and the how of my idea for the MakeWithAda contest. I also made a video explaining the truck's structure and construction; it is in Spanish, but I subtitled it in English so everyone can understand.

Disclaimer for MakeWithAda contest:

All the code, images, designs and videos for this contest were made by myself, except the Console library, which I made with a friend for the previous MakeWithAda contest in this project. Everything else is a new creation for this contest.

End Disclaimer;

The problem

With the unstoppable increase in world temperatures, mainly due to global warming, news of fires devastating vast areas of rainforest and populated areas will become the norm. We can already see them on a daily basis, and they will increase in the following years if we do not stop them.

These fires are, worldwide, one of the greatest dangers for many species, including the human race. And as the most dangerous species on the planet, we need to do everything we can to stop this devastation.

We are also experiencing a remarkable development of unmanned systems, from UAVs (Unmanned Aerial Vehicles) to UGVs (Unmanned Ground Vehicles). Using this technology we can develop systems that increase the effectiveness of fire extinguishing methods.

The solution

In this project I would like to introduce you to AFT (the Autonomous FireTruck), which is my prototype of a solution that could be implemented at large scale, and for which the technology is already available. This project consists of two main parts:

  • The guidance system, which detects the position of the AFT and analyzes the fire's propagation and position.
  • The vehicle that puts out fires, which is in constant communication with the guidance system, so it knows exactly its own position and the fire's.

The guidance system can be anything from a mesh of antennas and sensors around the mountains, or a UAV flying over the fires, to a space satellite locating the fire and the vehicles. On the other hand, the vehicles can be anything from an autonomous firetruck to an autonomous airplane, or even small robots for smaller fires.

This solution would save the lives of many people and animals, directly and indirectly: directly by avoiding sending people to dangerous fires, and indirectly through a faster response and tireless systems that can work 24/7, increasing the efficiency of current fire extinguishing systems.


Since the scope of the project can be anywhere from a simple simulation to a fully working system, I am developing something in between. That is:

  • A small guidance system which will be an image processing software running in a computer, that sends the position of the fire and the vehicle information.
  • The vehicle will be a small truck that carries a small water deposit and tower to aim the waterjet.
Operation summary of the system

The main reasons behind this prototype are technology testing and readiness for a real world implementation. The guidance system (a camera detecting fire and trucks) can be mounted on poles in the forest, on a UAV or on satellites. Also, the technology in unmanned vehicles' navigation systems allows for safe autonomous guidance that can drive them to the objective marked by the guidance system.

To sum up, this project implements a small scale prototype of an image-based guidance system, plus a small truck that actually tries to put out fires. The same approach could later be evolved towards real world vehicles and technologies.

The AFT (Autonomous FireTruck) in the garage

The implementation


In order to achieve our goal of a fully functional prototype we need modern hardware and software libraries. In this section I will cover the main ones I am using in this project.


The first one I will talk about is OpenCV, a computer vision and machine learning software library. It has been ported to many languages, so one can use it with whichever one feels most comfortable. In my case I am using OpenCV with Python for a basic system. As I said earlier, this is not the main part of the project, so the software will be as simple as possible.


As the main brain on the AFT I will use the STM32F4DISCOVERY. I used this board in previous contests, and I could use some of the code that I implemented for it. Furthermore, I will keep expanding the library of drivers for this board, which will make the next projects easier to implement.

For this prototype we will be using the following interfaces of the board:

  • SPI: For communication with the NRF24
  • Digital I/O: For the water pump activation
  • PWM: For the motors and servos.


The embedded software that runs on the STM32 board is programmed in Ada. I thought this project would be a nice fit, since Ada is one of the main languages in the development of critical systems, where human lives are at risk. So, since this is a prototype of the software for a "real truck" or "real UAV", the use of Ada is very appropriate.

Also, using Ada as the programming language helps during development by detecting the most common errors and problems early. This makes every release of the software reliable even with limited testing. We will be using the Ada Library, so most of the interfaces to the STM32's modules are already available. However, we will be implementing libraries for:

  • Servo control: a servo library that allows one to initialize and configure the PWM port, adjust the calibration and enforce a limited range of movement.
  • CarControl: a library that can be used to talk to an L298N (or equivalent H-bridge) and control the speed and direction.
  • RF24 library: to initialize the SPI port, configure the pins, and allow for simple nRF24 startup and use as a receiver.

The nRF24L01 is a cheap 2.4 GHz wireless communication module that can be used to send and receive data. It is connected through an SPI port, and while the configuration and initialization are a bit complex, once it is configured it is easy to use. Therefore, once this library is implemented and included in a project, the nRF24L01 module can be used seamlessly in any other project.

The communication testbench with Arduinos

Guidance System

The guidance system is in charge of detecting the fire and its position relative to the vehicle, and sending this information to the vehicle. Since this project is aimed at the Make with Ada contest, I will not focus too much on this section, as it is out of the scope of the contest. However, it is fully functional, even if it could be greatly improved.

This system consists of two parts:

  • The PC software that analyzes the image from a camera.
  • The Arduino, connected to the PC, with the communication module attached.

The implementation of the guidance software was done in multiple iterations. I chose OpenCV + Python as the base, and since it was my first time with OpenCV I needed several steps to get to a functional prototype. Basically the workflow has been:

  • Learn how to calibrate the camera with openCV.
  • Learn how to detect simple things (in my case, colors): In this step I started detecting a color pattern in the back of the vehicle.
  • Calculate the position of the vehicle from the Camera (with solvePnP function).
  • Implementation of mask filtering in order to improve segmentation of colors.
  • Detect the background and filter the constant background from the image. I have written a small tutorial on this, but it is in Spanish.
  • Detect the fire/objective.
  • Send all this information to the vehicle.

The Arduino software is a basic interface between the PC and the NRF24: it simply forwards the data received over USB to the NRF24.

Flow chart of the openCV software


The AFT (Autonomous FireTruck) is the main part of the project; it is the one implemented in Ada, and where I put most of my efforts. The truck is built around a cardboard box to which all the components are attached. The system contains:

  • A 3S-LiPo battery pack
  • Two main motors with wheels
  • An L298N module to control the motors.
  • The STM32 as the main board of the AFT.
  • A deposit with a Water Pump.
  • An adaptation board for the Water Pump activation.
  • Two servos for a 2 degrees of freedom control of a Hose.
  • The Hose who connects the water deposit and the control tower.
Internal structure

Part 1 : Communications

The first part was establishing the communication between the computer and the STM32 board, so I could advance from there. With the communication link working, the next steps would be much easier. I did not find any nRF24L01 library for STM32 written in Ada, so I started working on my own version, loosely following the interface of the Arduino version, so it can be easily followed and upgraded. This part was a big headache: making the nRF24L01 start receiving data took me a lot of time. The nRF24L01 has a lot of configuration options and registers, which makes the device hard to bring up. However, once it has been initialized it is really easy to use and integrate in any other project.

nRF24L01 with the STM32 and Ada

For the data in the communication I use a record to store the command type and the command data; I parse each binary packet received from the nRF24L01 and convert it to a Command. Then, in the main loop, I update all the systems according to the new data received.

The Command types I have defined for this project were:

  • TEST_LED: For testing purposes, it toggles a LED in the board so I can check that the code it is still running.
  • SET_DIRECTION: Configure the CarController to set the direction of movement.
  • SET_SPEED: Set the speed for the wheel of the CarController. The CarController has already the direction from the previous command.
  • SET_SERVO: Sets the angle of the servos that control the hose.
  • SET_PUMP: De-/Activate the water pump.
  • SET_MAIN_STATUS: Currently not used, but was created in order to command different states to the truck, i.e.: STOP, MOVING_TO_TARGET, ...
  • INFO_TARGET: Is used to receive the data with the position and distance of the objective.

As you would expect, the TEST_LED command was the first one implemented; with it, I started testing the communications and the full message parsing functions. Since this prototype only uses one-way commands, with no responses, the nRF24L01 library does not implement the send functions.
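The parse-then-dispatch idea can be sketched in a few lines of Python. The packet layout below (a 1-byte command id followed by two little-endian 16-bit data words) is invented for illustration; the post does not give the real record layout used on the truck:

```python
import struct
from enum import IntEnum

class Command(IntEnum):
    TEST_LED = 0
    SET_DIRECTION = 1
    SET_SPEED = 2
    SET_SERVO = 3
    SET_PUMP = 4
    SET_MAIN_STATUS = 5
    INFO_TARGET = 6

def parse_packet(packet: bytes):
    """Split a raw radio payload into (command, data).

    Assumed layout: 1-byte command id, then two little-endian
    16-bit data words (hypothetical, for illustration only).
    """
    cmd_id, d0, d1 = struct.unpack("<BHH", packet[:5])
    return Command(cmd_id), (d0, d1)
```

In the real firmware the equivalent step is an Ada unchecked conversion from the received byte array into the command record, after which the main loop dispatches on the command type.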

In this part I learned a lot about the nRF24L01 and all its internal states and configuration, and improved my ability to handle big and complex datasheets. Also, since the communication with the nRF24L01 uses the SPI interface, I learned how to use SPI for the first time.

The nRF24L01 library is not complete, as there are many configuration options that were out of scope for this project. However, I tried to make it as scalable as possible, with clean and structured code.

For example to add a new register to the processing one has to:

  • Define the new record and its sizes
  • Create the FROM/TO Unchecked_Conversion
  • Use the register!
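As a rough Python analogue of those three steps (the real library does this in Ada with a record, a representation clause and Unchecked_Conversion; the field layout below follows the nRF24L01 CONFIG register from the datasheet):

```python
import ctypes

# Step 1: define the record and its sizes (here as ctypes bit fields).
class NRF24Config(ctypes.LittleEndianStructure):
    _fields_ = [
        ("PRIM_RX",     ctypes.c_uint8, 1),  # bit 0: RX/TX control
        ("PWR_UP",      ctypes.c_uint8, 1),  # bit 1: power up
        ("CRCO",        ctypes.c_uint8, 1),  # bit 2: CRC encoding scheme
        ("EN_CRC",      ctypes.c_uint8, 1),  # bit 3: enable CRC
        ("MASK_MAX_RT", ctypes.c_uint8, 1),  # bit 4: mask MAX_RT interrupt
        ("MASK_TX_DS",  ctypes.c_uint8, 1),  # bit 5: mask TX_DS interrupt
        ("MASK_RX_DR",  ctypes.c_uint8, 1),  # bit 6: mask RX_DR interrupt
        ("reserved",    ctypes.c_uint8, 1),  # bit 7: only '0' allowed
    ]

# Step 2: the TO/FROM conversions between the record and the raw byte.
def to_byte(reg: NRF24Config) -> int:
    return bytes(reg)[0]

def from_byte(value: int) -> NRF24Config:
    return NRF24Config.from_buffer_copy(bytes([value]))

# Step 3: use the register!
cfg = from_byte(0b00001011)   # PRIM_RX, PWR_UP and EN_CRC set
```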

Part 2: Movements

The next natural step is making it move, with the L298N and two motors. In this step I implemented the SET_DIRECTION command, which makes the car move in the desired direction. I implemented the CarController library that allows one to control an L298N board, using the digital pins to set the direction of rotation of the wheels and a generated PWM signal to set the speed of rotation.

This library is pretty simple: it just sets the values of the GPIOs that are connected to the L298N. In this part I learned about PWM signal generation, which I will use later in the servo controller.

Part 3: Pumping water!

The programming of the water pump is pretty straightforward, it's just ONE or ZERO! Almost everything is done in 2-3 lines of code :D. However, the interesting part of this section was the electronic design to activate the water pump, which works at 7V while the STM32 works at 3.3V. The first iteration was a two-stage circuit: the first stage raises the 3.3V from the GPIO to 7V at low power; this 7V then drives a MOSFET that controls the actual water pump.

First iteration of the circuit

However, this solution was really complex for the problem it was addressing. The water pump only draws 50mA of current at full power, so it can be controlled with a single BJT transistor, leaving us with a pretty simple circuit.
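A quick back-of-the-envelope check of that claim (the current gain and base-emitter drop below are typical assumed values for a small-signal NPN transistor, not measured ones):

```python
pump_current_a = 0.050   # the pump draws about 50 mA at full power
beta = 100               # assumed current gain of a small-signal NPN BJT
v_gpio = 3.3             # STM32 GPIO high level
v_be = 0.7               # assumed base-emitter drop

# Base current needed, with a x2 margin to drive the transistor into saturation.
i_base = 2 * pump_current_a / beta

# Base resistor that lets that current flow from the GPIO.
r_base = (v_gpio - v_be) / i_base
print(round(r_base))  # 2600, so a standard value around 2.2k works
```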

Photo of the process

Part 4: Aim and ... shoot!

The last part of the truck is the tower that controls the direction of the water stream. The tower is built around two servos: one for vertical aiming and a second one for horizontal aiming. There are two main problems here: the servo calibration and the aiming.

For the servo calibration, we have to take into account that, since these are cheap servos, they do not follow the standard protocol exactly; that is why my servo library allows for a simple calibration. The calibration methods allow setting the PWM values for 0, 90 and 180 degrees.

So, once the PWM signal was configured for the 20ms period, I built a test bench in order to get the pulse widths in microseconds that make the servos go from 0 to 180 degrees.
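The calibration idea can be sketched like this (the pulse widths are made-up example values standing in for the ones measured on the test bench):

```python
def degrees_to_us(angle, cal=(500.0, 1500.0, 2500.0)):
    """Map an angle in [0, 180] degrees to a pulse width in microseconds,
    interpolating linearly between the calibrated 0/90/180 degree points."""
    us_0, us_90, us_180 = cal
    angle = max(0.0, min(180.0, angle))  # stop at the configured limits
    if angle <= 90.0:
        return us_0 + (us_90 - us_0) * angle / 90.0
    return us_90 + (us_180 - us_90) * (angle - 90.0) / 90.0
```

Within each 20ms PWM period, the generated pulse is then held high for that many microseconds.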

The servo tower, ready to put up fires.

Then, with these values, the servo library has two methods to move them: one taking degrees as input, and another taking microseconds. If the servos are calibrated, the first one is recommended. Limits for the movement of the servos can also be configured, so if a servo is commanded to move beyond its limit, it will stop at the limit.

At this point I could aim the tower, but the aiming was not yet related to the information received from the "guidance system". To fix that, I first had to run some experiments to find the relation between the angle of the vertical servo and the distance at which the water hits the ground.

The results were as follows:

Distance-Angle relation table

With these values we can now interpolate, with a simple linear interpolation, the angle of the vertical servo. So finally, we can calculate the orientation of the vertical and horizontal servos based on the distance and angle to the target!
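The interpolation step can be sketched in Python (the calibration pairs below are hypothetical placeholders for the measured distance-angle table):

```python
import bisect

# Hypothetical (distance_cm, vertical_angle_deg) pairs; the real values
# come from the experiments summarized in the table above.
CALIBRATION = [(20, 70), (40, 55), (60, 45), (80, 40)]

def angle_for_distance(d):
    """Linearly interpolate the vertical servo angle for a target distance."""
    xs = [x for x, _ in CALIBRATION]
    ys = [y for _, y in CALIBRATION]
    if d <= xs[0]:
        return float(ys[0])
    if d >= xs[-1]:
        return float(ys[-1])
    i = bisect.bisect_right(xs, d)
    x0, x1 = xs[i - 1], xs[i]
    y0, y1 = ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (d - x0) / (x1 - x0)
```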

All together

We have seen each part separately, but the fun part comes when they are connected together. The next image shows how all the items are connected, the voltage of each one, and all the interfaces needed.

Full system diagram

The Embedded code

I tried to make the code as self-explanatory as possible, but the big picture is:

  • First we call the initialization for each component. Each one has an Init/Config function in order to configure the GPIOs and internal registers (timers, ...)
  • Then, in the main loop, I get all the new commands from the nRF24L01 radio and update the command state. Only new commands are executed, each one in its component. For example, if a "SET_SERVO" command arrives, in that loop the software will set the servo position and mark the command as "old" so it is not processed again. However, every command is stored until the next one arrives.

In the following picture we can see the full software flow. In the left column we have the entry point of the software, while the other four columns are the components of the project and what they do in each step of the main program.

Full software diagram

The OpenCV Code

The 'openCV' folder of the project contains the Python and OpenCV code. There we have 5 files:

  • A playground used to learn how to start detecting things with OpenCV; it is not needed for the project, and hence it is neither clean nor optimized
  • The file that handles the calibration process of the camera; it only has to be run once
  • The main file of the software; it initializes the camera and the detection algorithm and runs it in the main loop
  • Contains all the code to initialize the serial port, compose the data packets and send them to the Arduino with the nRF24
  • Has all the logic of the detection algorithm. It can work in three modes: simple (the simplest one, which tries to go to the target using only 2 elements), tester (used to tune the HSV filter in real time) and normal (the one used in the AFT)

How to use it

In order to replicate this project, follow these steps:

  • Assemble the hardware following the previous notes
  • Flash the embedded Ada code onto the STM32F407
  • Flash the Arduino Code to send data over the WiFi
  • Adapt the openCV Comms script to open the correct Serial port.
  • Set up the webcam and connect it to the PC.
  • Launch the "ApagaFuegos" in the openCV folder
  • Once the scenario is clear, press "b" to take a picture of the background
  • Add the truck and the fire to the scenario
  • Press "m" to activate the movement in the Truck
  • Wait and enjoy! :D

Conclusion and future

As we can see in the video, the truck works as expected: it actually moves to the target and aims at it. However, as this is a prototype, it has strong constraints on the environment due to the OpenCV algorithm, which is where this project could most be improved.

  • Access the project schematics here.
  • Access the project code here.
Make with Ada 2020: Ada Robot Car With Neural Network Thu, 05 Nov 2020 00:00:00 -0500 Juliana Silva

Guillermo Perez's project won a finalist prize in the Make with Ada 2019/20 competition. This project was originally posted here. For those interested in participating in the 2020/21 competition, registration is now open and project submissions will be accepted until Jan 31st 2021; register here.

With this smart robot, we will use a neural network to follow the light and stop when it reaches its goal.



If robots had to have everything pre-programmed, it would be a problem: they simply could not adapt to changes, and a "programmer" would have to change the code every time the environment changed. The way to solve this is by allowing the robot to learn, and artificial intelligence is a necessary alternative if we want robots not only to do what we need them to do, but to help us find even better solutions. The perceptron is an algorithm based on the functioning of a neuron. It was first developed in 1957 by Frank Rosenblatt, and here you can check the whole story:

Mark I Perceptron machine

The main goal of this project is to develop a robot car that follows the light emitted by a lamp through the use of neural networks.

Particular goals:

  • We will use three inputs (including the BIAS), four hidden neurons and four outputs (each one connected to the motor pins). It's important to clarify that some programmers prefer that the robot calculate the weights of the neurons and execute the actions at the same time, which causes a small time delay. We prefer to calculate the weights beforehand in software to avoid these errors.
  • We also include an input to stop the robot when it reaches its target, based on a light threshold, since many robots continue moving without stopping.
Autonomous Robot Car

Next, you will find all the information in the following order: Hardware, Neural Network, GPS Project, Assembling the Chassis, Test and Conclusion.


(Timing: 2 hrs.)

The electrical diagram of this project is shown in the figure below.

Note: All parts are commercial and easy to get. I recommend that when you assemble the motors you do it carefully, since you can connect them in the opposite direction. So make the connections first, and then do the tests.


This is the board that I will use for my project, which has many useful tools and a great AdaCore library support.

  • STM32F429ZIT6 microcontroller featuring 2 Mbytes of Flash memory, 256 Kbytes of RAM in an LQFP144 package
  • USB functions: Debug port, Virtual COM port, and Mass storage.
  • Board power supply: through the USB bus or from an external 3 V or 5 V supply voltage
  • 2.4" QVGA TFT LCD
  • 64-Mbit SDRAM
  • L3GD20, ST-MEMS motion sensor 3-axis digital output gyroscope
  • Six LEDs: LD1 (red/green) for USB communication, LD2 (red) for 3.3 V power-on, two user LEDs: LD3 (green), LD4 (red) and two USB OTG LEDs: LD5 (green) VBUS and LD6 (red) OC (over-current)
  • Two push-buttons (user and reset)

Tools, software, and resources on:

STM32F429I board

LDR (Light Dependent Resistor)

Two cadmium sulphide (CdS) photoconductive cells with spectral responses similar to that of the human eye. The cell resistance falls with increasing light intensity. Applications include smoke detection, automatic lighting control, batch counting and burglar alarm systems.



(Timing: 8 hrs.)

A neural network is a circuit of neurons, or an artificial neural network composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, for solving artificial intelligence problems. The connections of the biological neuron are modeled as weights. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed. This activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be -1 and 1.
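That description (weighted sum of the inputs followed by an activation function) boils down to very little code; a minimal single-neuron sketch:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted linear combination of the
    inputs plus a bias, passed through a tanh activation, so the output
    stays in the range (-1, 1)."""
    total = bias + sum(x * w for x, w in zip(inputs, weights))
    return math.tanh(total)
```

A positive weight then models an excitatory connection and a negative one an inhibitory connection, exactly as described above.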

Bibliographic reference:

You can find the theoretical information on several websites; in my case I based my work on the following publication in Spanish:

Here the author shows us the theory and an example of calculating the weights of a neural network with Python, applied to an Arduino board. In my case I will give a brief review of how I used this information to do my project with AdaCore. For example, I modified the Python code according to my needs, and I'm the author of the GPS code.

Let's start

In our robot, what we want to achieve is that it moves, using a light sensor, in a directed way towards the light. The robot will only have two types of movement: moving forward and rotating to the left. We want the car to learn to move forward when its movement brings it closer to the light, and to turn for 100 milliseconds when it moves away from the light. The reinforcement will be as follows:

  • If the movement improves the light with respect to its previous position, then it is rewarded: move forward
  • And if the movement results in a position with less light, then the movement is penalized so it is not repeated: turn to the left

The figure below shows the neural network that I designed, with three inputs, four hidden neurons and four outputs. We can see all the connections; each one represents one of the hidden weights that we will calculate later using software.

Neural network

We want our robot to learn to follow the light with only one light sensor. We are going to build a system that takes two inputs: 1) the received light (a value between 0 and 4095) before the movement, and 2) the received light after the movement. In the figure below we can see the table of values describing how our neural network should behave.

Table of values of inputs, outputs and actions of the neural network.

We can appreciate that there are only four options:

  • If the current light is greater than or equal to the previous light and the light reading is less than or equal to 3750, the robot moves forward for 100 milliseconds.
  • If the current light is less than the previous light and the light reading is less than or equal to 3750, the robot turns to the left for 100 milliseconds.
  • In the other two remaining options the robot stops, because the light reading is greater than 3750. This means that it's very close to the light.

The complete code of the neural network with backpropagation is as follows:

import numpy as np

# We create the class 
class NeuralNetwork:

    def __init__(self, layers, activation='tanh'):
        if activation == 'sigmoid':
            self.activation = sigmoid
            self.activation_prime = sigmoid_derivada
        elif activation == 'tanh':
            self.activation = tanh
            self.activation_prime = tanh_derivada

        # Initialize the weights
        self.weights = []
        self.deltas = []
        # Assign random values to the input layer and hidden layer
        for i in range(1, len(layers) - 1):
            r = 2*np.random.random((layers[i-1] + 1, layers[i] + 1)) - 1
            self.weights.append(r)
        # Assign random values to the output layer
        r = 2*np.random.random((layers[i] + 1, layers[i+1])) - 1
        self.weights.append(r)

    def fit(self, X, y, learning_rate=0.2, epochs=100000):
        # Add a column of ones to the X inputs; this adds the bias unit to the input layer
        ones = np.atleast_2d(np.ones(X.shape[0]))
        X = np.concatenate((ones.T, X), axis=1)
        for k in range(epochs):
            i = np.random.randint(X.shape[0])
            a = [X[i]]

            # Forward pass: propagate the chosen sample through the layers
            for l in range(len(self.weights)):
                dot_value = np.dot(a[l], self.weights[l])
                activation = self.activation(dot_value)
                a.append(activation)
            # Calculate the difference between the target and the value obtained at the output layer
            error = y[i] - a[-1]
            deltas = [error * self.activation_prime(a[-1])]
            # We start at the second-to-last layer and go backwards (a layer before the output one)
            for l in range(len(a) - 2, 0, -1):
                deltas.append(deltas[-1].dot(self.weights[l].T) * self.activation_prime(a[l]))
            self.deltas.append(deltas)

            # Reverse, so the deltas match the layer order
            deltas.reverse()

            # Backpropagation
            # 1. Multiply the output delta with the input activations to obtain the weight gradient.
            # 2. Update the weight by adding a percentage of the gradient
            for i in range(len(self.weights)):
                layer = np.atleast_2d(a[i])
                delta = np.atleast_2d(deltas[i])
                self.weights[i] += learning_rate *

            if k % 10000 == 0: print('epochs:', k)

    def predict(self, x): 
        ones = np.atleast_2d(np.ones(x.shape[0]))
        a = np.concatenate((np.ones(1).T, np.array(x)), axis=0)
        for l in range(0, len(self.weights)):
            a = self.activation(, self.weights[l]))
        return a

    def print_weights(self):
        for i in range(len(self.weights)):
            print(self.weights[i])

    def get_weights(self):
        return self.weights
    def get_deltas(self):
        return self.deltas

# When creating the network, we can choose between using the sigmoid or tanh function
def sigmoid(x):
    return 1.0/(1.0 + np.exp(-x))

def sigmoid_derivada(x):
    return sigmoid(x)*(1.0-sigmoid(x))

def tanh(x):
    return np.tanh(x)

def tanh_derivada(x):
    return 1.0 - x**2

########## CAR NETWORK

nn = NeuralNetwork([2,2,4],activation ='tanh') # do not include the bias here; it is already added in the calculations

X = np.array([[1,1],    # light_Current >= light_Before & light_Current <= 3750
              [-1,1],   # light_Current < light_Before & light_Current <= 3750
              [1,-1],   # light_Current >= light_Before & light_Current > 3750
              [-1,-1]]) # light_Current < light_Before & light_Current > 3750
# the outputs correspond to starting (or not) the motors
y = np.array([[1,0,1,0], # go forward
              [0,1,1,0], # turn to the left
              [0,0,0,0], # stop
              [0,0,0,0], # stop
             ]), y, learning_rate=0.03, epochs=15001)
def valNN(x):
    return (int)(abs(round(x)))
for index, e in enumerate(X):
    prediccion = nn.predict(e)
    print("X:",e,"expected:",y[index],"obtained:", valNN(prediccion[0]),valNN(prediccion[1]),valNN(prediccion[2]),valNN(prediccion[3]))

You can run this code with Python 3.7.3 or with Jupyter Notebook, and the result is as follows:

Expected and obtained values with this code

Fifteen thousand epochs were enough to get minimal errors. We have to add the following code to see the cost graph:

import matplotlib.pyplot as plt

deltas = nn.get_deltas()
valores = []
for arreglo in deltas:
    valores.append(arreglo[1][0] + arreglo[1][1])

plt.plot(range(len(valores)), valores, color='b')
plt.ylim([0, 1])
Cost graphic

Finally we can see the hidden weights and output weights obtained for the connections; these values are the ones we will use in the final network in our AdaCore code:

def to_str(name, W):
    s = str(W.tolist()).replace('[', '{').replace(']', '}')
    return 'float '+name+'['+str(W.shape[0])+']['+str(W.shape[1])+'] = ' + s + ';'

# We get the weights trained to be able to use them in the GPS code
pesos = nn.get_weights();

print('// Replace these lines in your arduino code:')
print('// float HiddenWeights ...')
print('// float OutputWeights ...')
print('// With trained weights.')
print(to_str('HiddenWeights', pesos[0]))
print(to_str('OutputWeights', pesos[1]))
Hidden weights and output weights of neural network

I repeat: I modified these codes and ran them with Python, and you can get them in the download section. Now we are going to implement these coefficients in our AdaCore code.


(Timing: 8 hrs.)

GNAT Programming Studio (GPS) is a free multi-language integrated development environment (IDE) by AdaCore. GPS uses compilers from the GNU Compiler Collection, taking its name from GNAT, the GNU compiler for the Ada programming language. Released under the GNU General Public License, GPS is free software. The download link is as follows:

In my opinion, this is powerful software with mathematical tools that allow us to develop high-precision projects, as I explain below. First step: we have to download the following library:

I'm the author of the following code; to develop this GPS project, I used the following two examples: demo_gpio_direct_leds and demo_adc_polling.

How does it work?

To start my project I used the sample code demo_adc_gpio_polling.adb; in my project I'm going to measure the analog signal generated by the LDR sensor through the analog port PA5, getting 4096 values (0 to 4095) from its 12-bit ADC.

Converter     : Analog_To_Digital_Converter renames ADC_1;
Input_Channel : constant Analog_Input_Channel := 5;
Input         : constant GPIO_Point := PA5;

By default, this code has already configured and initialized the digital output ports of the green and red LEDs, i.e. PG13 and PG14. I have only added ports PD12 and PD13.

LED1       : GPIO_Point renames PG13; -- GREEN LED
LED2       : GPIO_Point renames PG14; -- RED LED
LED3       : GPIO_Point renames PD12;
LED4       : GPIO_Point renames PD13;

After the digital and analog ports have been configured, in the declaration of variables, I've loaded the weights of the neural network calculated in the previous section:

HiddenWeights_1_1 := -0.8330212122953609;
HiddenWeights_1_2 := 1.0912649297996564;
HiddenWeights_1_3 := -0.6179969826549335;
HiddenWeights_1_4 := -1.0762413280914194;
HiddenWeights_2_1 := -0.7221015071612642;
HiddenWeights_2_2 := -0.3040531641938827;
HiddenWeights_2_3 := 1.424273154914373;
HiddenWeights_2_4 := 0.5226012446435597;
HiddenWeights_3_1 := -1.3873042452980089;
HiddenWeights_3_2 := 0.8796185107005765;
HiddenWeights_3_3 := 0.6115239126364166;
HiddenWeights_3_4 := -0.6941384010920131;
OutputWeights_1_1 := 0.4890791000431967;
OutputWeights_1_2 := -1.2072393706400335;
OutputWeights_1_3 := -1.1170823069750404;
OutputWeights_1_4 := 0.08254392928517773;
OutputWeights_2_1 := 1.2585395954445326;
OutputWeights_2_2 := 0.7259701403757809;
OutputWeights_2_3 := 0.05232196665284013;
OutputWeights_2_4 := 0.5379573853597585;
OutputWeights_3_1 := 1.3028834913318572;
OutputWeights_3_2 := -1.3073304956402805;
OutputWeights_3_3 := 0.1681659447995217;
OutputWeights_3_4 := -0.016766185238717802;
OutputWeights_4_1 := -0.38086087439361543;
OutputWeights_4_2 := 0.8415209522385925;
OutputWeights_4_3 := -1.527573567687556;
OutputWeights_4_4 := 0.476559350936026;

Once the program starts working, we calculate and load the input values, using the variables 'eval' and 'lux':

if light_Current >= light_Before then -- If light_Current is greater than or equal to light_Before, move forward
   eval := 1.0;
else -- else, it turns counterclockwise
   eval := -1.0;
end if;
if light_Current <= 3750 then -- If light_Current is less than or equal to 3750, move forward or turn to the left
   lux := 1.0;
else -- else, "Stop"
   lux := -1.0;
end if;

I experimentally calibrated the lux threshold of 3750 with several models of LDR sensors. With this value, I found a satisfactory response: the robot positions itself close to the light source without hitting the lamp or losing it while moving.

Calibration of the lux value

We multiply the input matrix by the matrix of hidden weights, and for each accumulated result we compute the hyperbolic tangent to obtain values between -1 and 1:

-- Input * HiddenWeights
-- We use the Tanh to get values between 1 and -1
Hidden_Layer_1_1 := Tanh(1.0*HiddenWeights_1_1 + eval*HiddenWeights_2_1 + lux*HiddenWeights_3_1);
Hidden_Layer_1_2 := Tanh(1.0*HiddenWeights_1_2 + eval*HiddenWeights_2_2 + lux*HiddenWeights_3_2);
Hidden_Layer_1_3 := Tanh(1.0*HiddenWeights_1_3 + eval*HiddenWeights_2_3 + lux*HiddenWeights_3_3);
Hidden_Layer_1_4 := Tanh(1.0*HiddenWeights_1_4 + eval*HiddenWeights_2_4 + lux*HiddenWeights_3_4);

Then we multiply the matrix of hidden layer outputs by the matrix of output weights, and for each accumulated result we also compute the hyperbolic tangent.

-- Hidden_Layers * OutputWeights
-- We use the Tanh to get values between 1 and -1
Output_Layer_1_1 := Tanh(Hidden_Layer_1_1*OutputWeights_1_1 + Hidden_Layer_1_2*OutputWeights_2_1 + Hidden_Layer_1_3*OutputWeights_3_1 + Hidden_Layer_1_4*OutputWeights_4_1);
Output_Layer_1_2 := Tanh(Hidden_Layer_1_1*OutputWeights_1_2 + Hidden_Layer_1_2*OutputWeights_2_2 + Hidden_Layer_1_3*OutputWeights_3_2 + Hidden_Layer_1_4*OutputWeights_4_2);
Output_Layer_1_3 := Tanh(Hidden_Layer_1_1*OutputWeights_1_3 + Hidden_Layer_1_2*OutputWeights_2_3 + Hidden_Layer_1_3*OutputWeights_3_3 + Hidden_Layer_1_4*OutputWeights_4_3);
Output_Layer_1_4 := Tanh(Hidden_Layer_1_1*OutputWeights_1_4 + Hidden_Layer_1_2*OutputWeights_2_4 + Hidden_Layer_1_3*OutputWeights_3_4 + Hidden_Layer_1_4*OutputWeights_4_4);

For each value of the output matrix we take the absolute value, to handle positive values, and round the result to an integer:

-- We load absolute, rounded integer values into the outputs
Output_1 := Integer (abs (Output_Layer_1_1));
Output_2 := Integer (abs (Output_Layer_1_2));
Output_3 := Integer (abs (Output_Layer_1_3));
Output_4 := Integer (abs (Output_Layer_1_4));

Finally we use these output values to activate and deactivate the digital output pins, which in turn feed the L298N driver (IN1, IN2, IN3 and IN4). This process is repeated every 100 milliseconds.

-- Activate the outputs according to the calculations of the neural network
-- (each pin drives one of the L298N inputs IN1 .. IN4)
if Output_1 = 1 then LED1.Set; else LED1.Clear; end if;
if Output_2 = 1 then LED2.Set; else LED2.Clear; end if;
if Output_3 = 1 then LED3.Set; else LED3.Clear; end if;
if Output_4 = 1 then LED4.Set; else LED4.Clear; end if;

We can see the output status of these values on our LCD screen:

-- Print the outputs of the neural network
Print (0, 25, Output_1'Img);
Print (0, 50, Output_2'Img);
Print (0, 75, Output_3'Img);
Print (0, 100, Output_4'Img);

You must successfully compile and flash the code on your STM32F429I board. You can get the complete code in the download section.

Note: The code works well on the STM32F429I board when it's powered by the PC. However, when the board is powered independently, the code doesn't run and the screen turns white. To correct this error you must update the firmware of your board. How? Simply download and update the firmware with the "STM32 ST-LINK Utility" application, which you can download at:


(Timing: 4 hrs.)

The two pieces that I printed with a 3D printer are the following: the chassis, where we are going to mount the two gear motors, the L298N driver, the battery, and the LDR sensor module.


You can download the STL files in the downloads section. The upper part of the chassis is where we're going to mount the STM32F429I board.


In the figures below we can see several views of the assembly of our autonomous robot.


(Timing: 2 hrs.)

Below you can see a demonstration of how this prototype works.


In this project I learned and confirmed some theoretical concepts, and I'm contributing an application of neural networks to the AdaCore contest, with an autonomous robot that showed its effectiveness in carrying out its task. I didn't have to program all the situations or combinations that the robot could possibly face; only 3 bits of information in the inputs were enough. This leads me to deduce that it's possible to add more bits of information to develop more complex tasks. The robot did find its target and also stopped when necessary. I also demonstrated that this smart device can be guided with a moving light, and that GPS libraries can make precision mathematical calculations for real-life problems. I recommend you do this kind of thing very carefully and verify every step.

  • Access the project schematics here.
  • Access the project code here.
Ada for micro:bit Part 8: Music to my ears Tue, 03 Nov 2020 00:00:00 -0500 Fabien Chouteau

Welcome to the Ada for micro:bit series where we look at simple examples to learn how to program the BBC micro:bit with Ada.

If you haven't already, please follow the instructions in Part 1 to setup your development environment.

In this eighth part we will see how to play some simple music with the micro:bit and a piezo buzzer.

In the analog output example we said that the analog output is actually a Pulse Width Modulation (PWM) signal. When a piezo buzzer is connected to a PWM output, a note is produced; changing the frequency of the PWM signal changes the note that the buzzer plays.
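The relation between a note and the PWM signal is just the standard equal-temperament formula; as a small sketch (MIDI note numbers are used here as a convenient index, while the MicroBit.Music package itself simply provides named pitch constants such as C4):

```python
def note_frequency(midi_note):
    """Equal-temperament frequency in Hz, with A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

def pwm_period_us(midi_note):
    """PWM period in microseconds that produces the given note."""
    return 1_000_000.0 / note_frequency(midi_note)

print(round(note_frequency(60), 1))  # C4: 261.6 Hz
```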

Wiring Diagram

For this example we will need a couple of extra parts:

  • A breadboard
  • A piezo buzzer
  • A couple of wires to connect them


The package MicroBit.Music provides different ways to play music. We are going to use the procedure that plays a melody. A Melody is an array of Note, where each Note is made of a pitch and a duration in milliseconds. This package also provides declarations of pitches for the chromatic scale (e.g. C4, GS5).

So we declare our little melody like so:

My_Little_Melody : constant MicroBit.Music.Melody :=
     ((C4,   400),
      (G3,   800),
      (B3,   400),
      (Rest, 400),
      (A3,   400),
      (G3,   400));

Then we simply have to call the procedure Play of the package MicroBit.Music.

procedure Play (Pin : Pin_Id; M : Melody)
     with Pre => Supports (Pin, Analog);


  • Pin : The id of the pin that the melody will play on
  • M : The melody that we want to play


  • The procedure Play has a precondition that the pin must support analog IO.

Here is the full code of the example:

with MicroBit.Music; use MicroBit.Music;

procedure Main is

   My_Little_Melody : constant MicroBit.Music.Melody :=
     ((C4,   400),
      (G3,   800),
      (B3,   400),
      (Rest, 400),
      (A3,   400),
      (G3,   400));
begin
   --  Loop forever
   loop
      --  Play the little melody on pin 0
      MicroBit.Music.Play (0, My_Little_Melody);
   end loop;
end Main;

Following the instructions of Part 1 you can open this example (Ada_Drivers_Library-master\examples\MicroBit\music\music.gpr), compile and program it on your micro:bit.

Don't miss out on the opportunity to use Ada in action by taking part in the fifth annual Make with Ada competition! We're calling on developers across the globe to build cool embedded applications using the Ada and SPARK programming languages and are offering over $9,000 in total prizes. Find out more and register today!

First beta release of Alire, the package manager for Ada/SPARK Fri, 30 Oct 2020 00:00:00 -0400 Fabien Chouteau Ada 202x support in GNAT Thu, 29 Oct 2020 04:37:00 -0400 Arnaud Charlet

News from the Ada front

The next revision of the Ada standard is now almost ready, so it's time for a status update on where GNAT and AdaCore stand on this front!

This new standard, called Ada 202x for now, is currently getting the final touches at the ARG (Ada Rapporteur Group) before official standardization by the relevant ISO bodies (WG9, SC22 and JTC1). If you want to know more about these groups, you can visit this page. In all likelihood, Ada 202x will become the new official version of Ada by the end of 2021 or early 2022, so it may become Ada 2022.

In any event, we'll call it Ada 202x here, and GNAT Pro 21 will provide support for many of the new features under the -gnat2020 and -gnatX switches as detailed below. The 21.0 preview has just been released to our customers, and the official 21.1 release will be available in February 2021.

Ada 202x contains many useful features that nicely complement the current Ada standard, in particular features related to the expressiveness of the language, with a focus on the programming-by-contract approach introduced with Ada 2012. We'll detail some of these in this blog post.

Assessing Ada 202x and making some tough choices

In the past year or so, we have been working hard assessing and implementing most of these Ada 202x changes (called AIs: Ada Issues in ARG terms). The implementation work and feedback from first users allowed us to identify that a few of these features would need additional time and attention. This led us to make a difficult decision: in order to allow for more investigation, and to avoid users starting to rely on constructs that may need to change or be replaced, we decided to put the implementation of some of the language changes on hold. Of course, we’re currently engaged with the ARG to discuss these.

The main set of features that AdaCore and GNAT are putting on hold are related to the support for parallel constructs. While the overall vision is an exciting and promising one, we realized when looking at the state of the art and gathering user requirements that there were a lot more aspects to consider on top of those currently addressed by the AIs. Some of these are related to GPGPU (General Purpose GPU) support as well as their future CPU counterparts, and include topics such as control of memory transfer, precise allocation of tasks and memory on the hardware layout, target-aware fine tuning options as well as various other parametrization needs. These capabilities happen to be fundamental to obtain actual performance benefits from parallel programming, and providing them may require profound changes in the language interface. Consequently, we’re putting all parallel AIs on hold, including support for the Global and Nonblocking aspects beyond the current support in SPARK.

Note also as a reminder that GNAT Pro already takes full advantage of multicore environments on all its supported targets using Ada tasking, including on bare metal platforms via its Ravenscar and now Jorvik (see below) runtimes.

Ada 202x features already supported in GNAT Pro 21

So back to the Ada 202x support offered in GNAT Pro 21... We have already implemented over 200 AIs, including the following new features:

Jorvik profile

Jorvik is a subset of the Ada tasking capabilities, similar to Ravenscar but imposing fewer restrictions. Compared to the Ravenscar profile, it removes the following ones:

  • No_Implicit_Heap_Allocations
  • No_Relative_Delay
  • Simple_Barriers
  • Max_Entry_Queue_Length => 1
  • Max_Protected_Entries => 1
  • No_Dependence => Ada.Calendar
  • No_Dependence => Ada.Synchronous_Barriers

The configuration pragma Profile now supports Jorvik as a possible value to enforce these restrictions, and the profile is available as part of the ravenscar-full runtimes on bare metal platforms.
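
As a sketch, selecting the profile is a one-line configuration pragma; the protected object below (spec only, names illustrative) shows the kind of construct that Jorvik permits but Ravenscar forbids:

pragma Profile (Jorvik);
--  Typically placed in a configuration pragmas file such as gnat.adc

--  Under Jorvik (unlike Ravenscar) a protected object may declare
--  several entries, and each entry queue may hold more than one caller:
protected Exchange is
   entry Put (V : Integer);
   entry Get (V : out Integer);
private
   Value : Integer := 0;
   Full  : Boolean := False;
end Exchange;
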

Improvements to the 'Image attribute

A number of improvements have been made to the way the 'Image attribute works. In particular, the attribute can now be used directly on objects, and it applies to any type, not just scalar types.

A new attribute and aspect, Put_Image, has been introduced, allowing a custom implementation for any type as a replacement for the default-supplied one. The exact form of the user-supplied Put_Image procedure is still being finalized by the ARG; GNAT Pro 21 provides it in an intermediate form that will likely change in release 22.
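
For example, with the extended attribute the default image of a record object can be printed directly; the exact text of the default image may differ between compiler versions, so no particular output is shown here:

with Ada.Text_IO; use Ada.Text_IO;

procedure Show_Image is
   type Point is record
      X, Y : Integer;
   end record;

   P : constant Point := (X => 1, Y => 2);
begin
   --  Before this change, 'Image was limited to scalar types and
   --  printing a record required hand-written formatting code.
   Put_Line (P'Image);
end Show_Image;
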

Atomic Operations

Four new packages, System.Atomic_Operations.Exchange, System.Atomic_Operations.Test_And_Set, System.Atomic_Operations.Integer_Arithmetic and System.Atomic_Operations.Modular_Arithmetic, now provide access to processor-specific atomic operations, allowing users to write thread-safe concurrent code without system locks. Support for volatile and atomic objects is also further refined via the Full_Access_Only aspect, which ensures that such objects are always read and written in their entirety.
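
A minimal sketch of a lock-free counter built on the Integer_Arithmetic generic (the package and operation names follow the Ada 202x definition quoted above; the wrapper package here is illustrative):

with System.Atomic_Operations.Integer_Arithmetic;

package Counters is
   --  The generic requires an atomic integer type
   type Atomic_Int is new Integer with Atomic;

   package Ops is
     new System.Atomic_Operations.Integer_Arithmetic (Atomic_Int);

   Counter : aliased Atomic_Int := 0;

   --  Safe to call from multiple tasks without a lock
   procedure Increment;
end Counters;

package body Counters is
   procedure Increment is
   begin
      Ops.Atomic_Add (Counter, 1);
   end Increment;
end Counters;
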

Support for infinite precision numbers

Two new packages Ada.Numerics.Big_Numbers.Big_Integers and Ada.Numerics.Big_Numbers.Big_Reals provide support for unbounded integer and real numbers with arithmetic operations implemented in software.
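
For instance, a value far beyond the range of any machine integer can be computed and printed directly (a sketch using the package's To_Big_Integer and To_String conversions):

with Ada.Text_IO; use Ada.Text_IO;
with Ada.Numerics.Big_Numbers.Big_Integers;
use  Ada.Numerics.Big_Numbers.Big_Integers;

procedure Big_Demo is
   --  2**200 does not fit in any machine integer type
   N : constant Big_Integer := To_Big_Integer (2) ** 200;
begin
   Put_Line (To_String (N));
end Big_Demo;
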

User-Defined Literals

Custom literals (integer, real, and string) can now be specified for user-defined types; they are supported by default for the infinite precision number types.
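
The mechanism behind this is a set of aspects (Integer_Literal, Real_Literal, String_Literal) that name a conversion function taking the literal's text. A sketch with an illustrative Meters type:

package Lengths is
   --  A private type that accepts plain integer literals; the aspect
   --  names the function that interprets the literal's text.
   type Meters is private
     with Integer_Literal => From_String;

   function From_String (Text : String) return Meters;
private
   type Meters is record
      Value : Integer;
   end record;
end Lengths;

package body Lengths is
   function From_String (Text : String) return Meters is
     (Value => Integer'Value (Text));
end Lengths;

--  Usage:  D : Lengths.Meters := 42;   --  calls From_String ("42")
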

Variadic C function import

Importing variadic C functions was not portable and, in practice, not easily done without resorting to C wrappers. It is now supported via a new set of conventions, C_Variadic_N, where N is the number of fixed parameters in the C profile:

procedure printf (format : String; optional_param1 : int)
  with Import, Convention => C_Variadic_1;

printf ("value is %d" & LF & NUL, 20);

Improved expression and contract expressiveness

Declare expressions

Ada 202x now allows declaring constants and renamings inside a declare expression, which facilitates writing more complex preconditions and postconditions:

Val : Integer := (declare X : constant Integer := Random_Value; begin X + X);

Delta aggregates

This Ada feature replaces the SPARK 'Update attribute and allows partially modifying a copy of an object:

Record_Object := (Record_Object with delta Field1 => X, Field2 => Y);

Contracts on Access-to-Subprogram

Aspects Pre and Post can now be specified on access-to-subprogram types. As a consequence, when a call is made through a value of such a type, the contract of the type is checked, together with the specific contracts of the called subprogram, if any.
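
A small sketch (the type and function names are illustrative):

with Ada.Numerics.Elementary_Functions;

package Contracts_Demo is
   --  The type's precondition applies to every call made through
   --  values of the type, in addition to the designated
   --  subprogram's own contract.
   type Checked_Sqrt is access function (X : Float) return Float
     with Pre => X >= 0.0;

   function My_Sqrt (X : Float) return Float is
     (Ada.Numerics.Elementary_Functions.Sqrt (X));

   F : constant Checked_Sqrt := My_Sqrt'Access;

   --  F (4.0)  checks the type's Pre at the point of the indirect call;
   --  F (-1.0) raises Assertion_Error when assertion checks are enabled.
end Contracts_Demo;
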

Static expression functions

Ada 202x defines a new aspect Static that can be specified on expression functions. Such an expression function can be called in contexts requiring static expressions when the actual parameters are all static, allowing for greater abstraction in complex static expressions. For example:

function Half_Size (S : Integer) return Integer is (S / 2) with Static;

type T is range 0 .. 10 with Size => Half_Size (Integer'Size);

Iterator Filters

Iterators can now be provided with an optional filter. This can be used in loops or in the new container aggregates and reduce expressions. For example:

for E of Some_Array when E /= 0 loop
   Put_Line (E'Image);
end loop;

Indexes in array aggregate

Array aggregates now support a for-loop like syntax:

(for Index in 1 .. Count => Function_Returning_Limited (Index))

Assignment target name @

Ada 202x provides a convenient shortcut to refer to the left-hand side of an assignment statement, as in:

Some_Very_Long.And_Complex (Expression) := @ + 1;
Another_Very_Long.And_Complex (Expression) := Function_Call (@);

Renames with type inference

The type information in a renames declaration is now optional, as in:

X renames Y;

This also means that named numbers can now be renamed:

PI : constant := 3.1415926;
PI2 renames PI;


Reduction expressions

A new attribute 'Reduce is available for experimentation under the -gnatX switch. It supports performing a map/reduce operation over the values of an array or container. For example:

X : Integer := (1, 2, 3)'Reduce ("+", 0);

This will add 1, 2 and 3, and store the result (6) in X.

Container aggregates

You can now initialize a container object via the aggregate notation, e.g:

V : Vector := (1, 2, 3);

Next Steps

In GNAT Pro 22, we will complete the implementation of all the relevant AIs. At the same time, we have started a language prototyping and experimentation effort to prepare future Ada (and SPARK) revisions, including many exciting and much-requested features, such as a simplified model for accessibility checks and anonymous access types, generalized case statements on any type (aka pattern matching), simplified and universal storage pools, more static guarantees (e.g. on object initialization), improved string processing in the standard library, simplified finalization support, implicit generic instantiations, and more. If you are interested, you can follow this effort and give your input and ideas via the Ada & SPARK RFC open platform.

Make With Ada 2020: High Integrity Sumobot Thu, 29 Oct 2020 00:00:00 -0400 Juliana Silva

Blaine Osepchuk's project won a finalist prize in the Make with Ada 2019/20 competition. This project was originally posted here. For those interested in participating in the 2020/21 competition, registration is now open and project submissions will be accepted until Jan 31st 2021; register here.

This document contains all the instructions you’ll need to build your own High Integrity Sumobot. This fully functional mini-sumobot is an advanced-level project programmed in Ada/SPARK and Arduino (C++).


I created the High Integrity Sumobot using Ada/SPARK and high integrity software engineering techniques. I wanted to make it easy for people interested in Ada/SPARK to see how all the pieces fit together in a small embedded project.

I did my best to write simple, clean, and maintainable code throughout. It’s extensively commented and it’s open source so you can use it almost any way you want.

This document contains all the instructions you’ll need to build your own High Integrity Sumobot.

System overview

I started with a Zumo 32U4 sumobot. But instead of programming it directly as a microcontroller like most people do, I turned it into an I2C slave device, which I control with a microbit I programmed in Ada/SPARK.

The zumo continuously collects sensor data and stores it in a buffer. The microbit periodically requests sensor data from the zumo over I2C and the zumo sends the data it most recently collected. The microbit validates the sensor data, decides how the zumo should move, and sends motor commands back to the zumo (also over I2C). The zumo validates the motor commands and then executes them. And then the process repeats (at least 50 times per second).
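
In outline, the micro:bit side of this exchange could look like the following; all names here (Read_Sensors, Valid, Decide, Send_Motor_Command, Time.Sleep) are hypothetical stand-ins, not the project's actual API:

--  Hypothetical sketch of the control loop described above.
loop
   Read_Sensors (Data);              --  request the zumo's latest buffer
   if Valid (Data) then              --  reject out-of-range sensor values
      Decide (Data, Command);        --  fighting algorithm picks a move
      Send_Motor_Command (Command);  --  send it back over I2C
   end if;
   Time.Sleep (20);                  --  at least 50 iterations per second
end loop;
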

The main job of a sumobot is to fight in sumo competitions. So the bulk of the code on the microbit is related to the fighting algorithm. Here’s a simplified state table for it:

The 5 x 5 display on the microbit shows a character representing the current top-level state of the system (see the “Display” column of the state table above):

You can see the most important zumo parameters on the zumo’s LCD, which is helpful for development and debugging. It shows the following information:

My plan for achieving high integrity

To achieve high integrity, I:
  • followed a relaxed version of the Personal Software Process (PSP). I did most of the processes prescribed by PSP, but because I am new to Ada/SPARK it didn’t make sense to do the tracking and analysis steps
  • carefully implemented the smallest amount of functionality required to create the High Integrity Sumobot
  • chose to do as much processing as possible on the microbit
  • programmed the microbit in SPARK as much as possible, and fell back to Ada only where necessary
  • programmed the microbit using simple constructs, constrained types, contracts, extensive data validation, formal proofs, and unit tests. I even proved functional correctness for some simple subprograms
  • programmatically generated test cases for the fighting algorithm to help ensure I tested all the important paths through the code


  • 1 x microbit microcontroller
  • 1 x SparkFun micro:bit Breakout (with Headers)
  • 1 x Zumo 32u4 robot assembled with 75:1 motors. Note: the older, cheaper version of this robot (found here) will not work with this project
  • 1 x Breadboard – Self-Adhesive (White)
  • 1 x SparkFun Logic Level Converter – Bi-Directional (needed to convert the I2C signals back and forth from 3.3 to 5 volts because the microbit operates at 3.3 volts and the zumo operates at 5 volts)
  • 4 x AA NiMH batteries
  • 1 x Break Away Headers – Straight
  • 1 x Female Headers
  • 2 x Break Away Female Headers – Swiss Machine Pin (used for mounting the breadboard)
  • 2 x USB C cables
  • 1 x sumo ring (which you can buy or build yourself)
  • misc parts: scavenged bits of wire, hot glue, solder, electrical tape, etc.

Getting up and running

I tested all of the steps below on Windows 7 and Windows 10 and the 2019 versions of GNAT Community and ARM ELF. Other platforms and versions may work but I have not tested them.

The ring and training dummy

  • you’ll need some kind of ring to train your High Integrity Sumobot in. The specs for mini sumo rings can be found here. You can purchase a ring online but the shipping cost is probably going to be prohibitive. I didn’t have anything as big as the spec requires so I used an old piece of furniture and made the border with masking tape
  • you’ll also need a training dummy, unless you have another robot to compete against. I made mine out of Lego but you could just as easily build one out of cardboard and hot glue

Arduino development environment and Zumo 32U4 setup

  • install batteries in the zumo and turn it on. It comes pre-loaded with a sketch that demonstrates some of the features of the bot. Ensure your zumo works
  • download and install the Arduino IDE. Note: I chose the windows installer, not the windows app (I’m not saying there’s anything wrong with the app, I just have no experience with it)
  • start the Arduino IDE. You may be prompted to update your boards and libraries. Go ahead and do that
  • install the windows drivers for the zumo
  • configure the Arduino IDE to program the zumo 32U4. Make sure you can upload the blink demo to your zumo and confirm that it works
  • in tools -> manage libraries install the following Arduino libraries: Zumo32U4 and ArduinoUnit
  • normal uploading won’t work once the zumo and microbit are linked by I2C (because the I2C interrupts on the zumo interfere with uploading). So you’ll need to use an alternative uploading method. You have to edit a file called boards.txt on your PC to enable the “Pololu A-Star 32U4 (bootloader port)” option in the tools -> boards menu. Follow the directions here under “The bootloader-before-uploading method” to edit the boards.txt file. I found mine in “C:\Users\<username>\AppData\Local\Arduino15\packages\pololu-a-star\hardware\avr\4.0.2\boards.txt” on Windows 10 (your location might be different). Set your board to “Pololu A-Star 32U4 (bootloader port)”
  • turn on “verbose output during upload” so you can see why your upload failed (if it fails). Open file -> preferences and check the appropriate checkbox:
  • try to upload the “face towards opponent” sketch onto the zumo using “The reset” and “The bootloader-before-uploading method” instructions described here. If that works, you are good to go

Ada/SPARK development environment setup

  • setup your dev environment for Ada:
  • newer microbits are not recognized properly and will not flash. You can fix the problem with a one-line change by following the directions here
  • create a copy of the scrolling text demo project and flash it to your microbit as per the instructions here
  • “build all” and “flash to board” as shown in the screenshot below
  • you should see “Make with Ada!” scroll across the 5 x 5 display on your microbit
  • clone or download the Ada drivers library. Put the code wherever you want. You’ll need it for the next step

Create a copy of the project's code base

  • clone the High Integrity Sumobot’s code base from GitHub
  • edit the first line of the “microbit/high_integrity_sumobot.gpr” file to point to the location of your Ada drivers library code
  • open the project in GNAT Programming Studio (GPS)
  • set scenario in GPS: “ADL_BUILD: debug”, “ADL_BUILD_CHECKS: enabled”. These settings turn on the compiler switches you need for development and production (turning contract checking off doesn’t increase the frame rate so you might as well leave it on so it can catch errors)
  • build and flash the code to your microbit. The letter ‘U’ should show on the 5 x 5 display indicating an error (because the microbit is not in communication with the zumo yet)

Building your sumobot

Zumo headers

  • cut 3 female headers into groups of two. Use a box cutter to make a deep groove into the next channel on both sides of the header, break the header on the cut marks, then trim the sharp edges and discard the pin you cut free

  • trim the ends of the headers so it’s impossible for them to touch the batteries once they are soldered into place on the zumo (ruptured batteries are a safety risk)
  • cut 2 female swiss machine pin headers into groups of two
  • solder the headers to the zumo. Three of the headers are for wiring (see the wiring diagram) and the other two are the rear mount points for the bread board

Note: you can ignore the headers that are crossed out in the image above (they are not used in this project).

Level converter headers

  • cut 2 female headers into groups of six and solder them to the side of the level converter containing the electronics
  • set the level converter aside

LCD wiring harness

  • make a wiring harness to free the LCD from the zumo main board. You’ll need two sets of male headers on one side and two sets of female headers on the other side. Make your wires 10 cm long. Solder all that together
  • install it in the zumo and turn it on to ensure the LCD still works
  • remove the wiring harness from the zumo, hot glue all the solder joints so they can never short together, and then reinstall it between the zumo and the LCD

Zumo wires

  • cut the five wires that connect to the zumo (5V, 3.3V, ground, I2C clock, and I2C data). Each wire should be 15 cm long
  • strip both ends of each wire
  • cut 3 male headers into groups of two
  • solder the 3.3V and ground to a male header
  • solder the I2C lines to the next male header
  • and solder the 5V wire to the last male header (the other pin on that header isn’t soldered to anything). Note: a clothespin holds headers nicely while you solder them
  • wrap the individual wires soldered to the male headers in electrical tape to prevent shorts
  • insert the wires into the appropriate locations on the zumo
  • position the ends of each wire away from the rear of the zumo so they are out of the way for the next steps


  • remove the power rail from the breadboard closest to column A
  • hot glue the level converter to the breadboard. Ensure the high voltage side is facing the rear of the bot (towards column J). Also ensure it is all the way to the edge or it will conflict with the microbit breakout board when you install it
  • use female swiss machine pin headers and hot glue to make the rear mounting points for the breadboard. Trim the pins on the long header as shown below to make more room for your wires. You might have to experiment to get the breadboard to a sufficient height from the zumo to allow all your wiring to fit between the zumo and the breadboard, depending on the thickness and length of your wiring
  • bend the LCD wiring harness so that the LCD sits just above the breadboard
  • cut two female swiss machine pin headers into groups of 8 to make the front mount points for the breadboard
  • test mount the breadboard on the zumo. If you’re happy with that, glue one half of the front mount to the zumo
  • install (but do not glue) the breadboard headers on the zumo so you can simply place the breadboard on top of your mount points
  • if you’re happy with how your breadboard sits on the mount points remove the breadboard, put hot glue on top of the mounting headers, and then gently lower the breadboard onto the mount points. Note: do not glue all the mount points yet. You should be able to install and remove the breadboard easily with only gravity and friction holding it to the zumo.
  • put a row of hot glue on the rear mount point where you cut the pins off in a previous step to prevent the pins from rubbing your wiring. The bottom of your breadboard should look like this:

Populating the breadboard

  • hot glue a single female swiss machine pin header under the left and right sides of the breakout board to hold it level against the breadboard
  • insert the microbit breakout board into the breadboard from B9 to B30
  • insert the microbit into the breakout board with the display and buttons facing up
  • mount the breadboard to the zumo mount points
  • make the wires that connect from the breadboard to the level converter, leaving the wires longer than required and stripping the ends
  • connect all the wires as per the wiring diagram

Uploading code and testing your sumobot

  • configure the zumo portion of the project to run its unit tests (see instructions in the loop function of zumo.ino), upload the code to the zumo, and open the serial monitor to see the results
  • turn off the unit tests and upload the code to the zumo again
  • run “prove all” on the microbit code in SPARK. Command template: “gnatprove -P%PP -j0 %X --level=0 --ide-progress-bar --no-subprojects --mode=all” (if gnatprove complains about ‘uncompiled dependencies’ add “-f” to the command and try again)
  • configure the microbit portion of the project to run its unit tests (see instructions in main.adb) and flash them to the microbit. The number of passed assertions will scroll across the display if all the unit tests passed. A filename and line number will scroll across the display for the first failed test encountered
  • turn off the microbit unit tests and flash the code to the microbit
  • put your bot in your ring by itself and confirm it drives around searching for targets without leaving the ring
  • also confirm that it can find and push your training dummy out of the ring
  • you can also run the end-to-end tests if you like

Finishing up the build (optional)

If your High Integrity Sumobot is working but you want to make it a little more durable you can do these steps:

  • hot glue the breadboard mounting points more permanently
  • shorten the wires and hot glue them to the breadboard and level converter so they don’t come loose or snag during a fight
  • hot glue the LCD wiring harness to the side of the breadboard
  • give your bot another test alone in the ring and then with the training dummy to confirm it is operating correctly

Information for the Make with Ada judges

  • I created this project as a solo effort (no teammates)
  • all of the code in the GitHub repository for the High Integrity Sumobot was created by me for this contest. However, the algorithm for this bot was inspired by a demo in the zumo library, which you can find here. I modified the code extensively but it still retains some of the state machine, variable names, and comments from the original demo. And the body of the last chance handler is roughly based on some code I found online, the source of which is documented in that file
  • none of the code in this project has been submitted in a previous Make with Ada competition
  • I realize that you won’t be able to easily run the code for this project without actually building the High Integrity Sumobot. But you can run the microbit unit tests if you have a microbit. And you can also run the zumo unit tests by uploading the code to almost any Arduino compatible microcontroller. After that you have gnatprove, other kinds of static analysis, and code reading
  • I plan to continue working on this project after the contest close date so please make sure you check out the correct version of the code. I created release 0.1.0 just for you
  • I wanted to make sure you didn’t miss the written requirements and end-to-end tests in this 3,200-word document. I think they are important but have never seen them emphasized in a project write-up

Finally, I just wanted to make my case for why this project should score well for the “buzz effect” judging criteria. I spent a great deal of time brainstorming projects that might have “buzz effect” and reviewing all the other projects I could find that have used the Ada Drivers Library and this project is appealing on a number of fronts.

First, many universities, schools, and clubs already have mini-sumo competitions because fighting robots are a fun and easy way to learn embedded software development. And for an extra ~$30 over the cost of the sumobot they were already going to buy, people can build this project, improve upon the code, and actually use Ada/SPARK in a project they care about. Therefore, I believe a sumobot programmed in Ada/SPARK should be relatively easy to market and promote.

Secondly, this project’s code base (excluding tests) is actually pretty small and the objectives of the sumobot are easy to understand. So, diving into the code base and making changes shouldn’t be too intimidating as compared to the Certyflie project, for example. And yet, the opportunities for improvements are vast. I can imagine a class of students starting with this code base, having everyone make different improvements, and then competing to see who does the best.

Thirdly, this project is my first with Ada/SPARK and I couldn’t find any accessible examples of how to use the high dependability features of these languages or much guidance for how I should use them in the context of a full project. The book Building High Integrity Applications with SPARK was the best resource I found but even it was lacking in several areas. I’m sure I misused some features of Ada/SPARK and you guys are going to get a chuckle out of it but this is the kind of example I looked for and couldn’t find when I started this competition.

Fourthly, I tried to use not just Ada/SPARK but an entire suite of processes (within the time I had available) that would help ensure a high quality, low defect project. I think that’s where the benefits of Ada/SPARK really kick in and it reinforces the case for possibly using this project as a learning example.

Future improvements

  • get gnattest working and port the homemade unit tests to gnattest
  • replace the breadboard and wires with a custom-made circuit board (less likely to malfunction or suffer damage in competition)
  • 3D print a cover (to protect the hardware on top of the zumo)
  • write proofs for more of the code
  • use more of the zumo’s sensors and/or add additional sensors
  • use the motor encoders to compensate for the effect battery voltage has on the sumobot’s speed (speed differences mess with the optimal time limits in the fighting algorithm)
  • improve the fighting algorithm/add new algorithms (there is plenty of unused RAM and flash available on the microbit)


Final thoughts

I loved working on this project. It was interesting switching back and forth between C++ on the zumo and Ada/SPARK on the microbit. I’ve got about 10 years of experience building projects on the Arduino platform so I was able to jump right in on that side. The learning curve for SPARK and Ada, on the other hand, was pretty steep.

But, once I got into it, I started to resent the Arduino compiler for not catching all the stupid ways C++ allows you to write incorrect programs that the Ada compiler would have caught. I also came to appreciate how clearly the Ada compiler told me exactly what I did wrong when I did make a mistake. And, by the end of the project, I had a lot more confidence in the correctness of the Ada/SPARK code as compared to the Arduino code. I can totally see why the AdaCore people are so excited about Ada/SPARK.

If you decide to build your very own High Integrity Sumobot or have questions about it, I’d love to hear from you in the comments below. Finally, if you find any bugs in this project please report them here.

Thanks for reading all the way to the end. Cheers.

  • Access the project schematics here.
  • Access the project code here.
Ada for micro:bit Part 7: Accelerometer Tue, 27 Oct 2020 00:00:00 -0400 Fabien Chouteau

Welcome to the Ada for micro:bit series where we look at simple examples to learn how to program the BBC micro:bit with Ada.

If you haven't already, please follow the instructions in Part 1 to setup your development environment.

In this seventh part we will see how to use the accelerometer of the micro:bit. The accelerometer can, for instance, be used to know which way the micro:bit is oriented.


To get the acceleration values for all axes, we just call the function MicroBit.Accelerometer.Data. This function returns a record with X, Y and Z fields giving the value for each axis.

function Data return MMA8653.All_Axes_Data;
   --  Return the acceleration value for each of the three axes (X, Y, Z)

Return value:

  • A record with acceleration value for each axis

Note that the type used to store the values of the accelerometer is declared in the package MMA8653 (the driver), so we have to with and use this package to have access to the operations for this type.

We can use the value in the record to get some information about the orientation of the micro:bit. For example, if the Y value is below -200 the micro:bit is vertical.

Here is the full code of the example:

with MMA8653; use MMA8653;

with MicroBit.Display;
with MicroBit.Display.Symbols;
with MicroBit.Accelerometer;
with MicroBit.Console;
with MicroBit.Time;

use MicroBit;

procedure Main is

   Data : MMA8653.All_Axes_Data;

   Threshold : constant := 150;

begin
   loop
      --  Read the accelerometer data
      Data := Accelerometer.Data;

      --  Clear the LED matrix
      Display.Clear;

      --  Draw a symbol on the LED matrix depending on the orientation of the
      --  micro:bit.
      if Data.X > Threshold then
         Display.Symbols.Left_Arrow;
      elsif Data.X < -Threshold then
         Display.Symbols.Right_Arrow;
      elsif Data.Y > Threshold then
         Display.Symbols.Up_Arrow;
      elsif Data.Y < -Threshold then
         Display.Symbols.Down_Arrow;
      else
         Display.Display ('X');
      end if;

      --  Do nothing for 100 milliseconds
      Time.Sleep (100);
   end loop;
end Main;

Following the instructions of Part 1 you can open this example (Ada_Drivers_Library-master\examples\MicroBit\accelerometer\accelerometer.gpr), compile and program it on your micro:bit.

See you next week for the last Ada project of this series.

Don't miss out on the opportunity to use Ada in action by taking part in the fifth annual Make with Ada competition! We're calling on developers across the globe to build cool embedded applications using the Ada and SPARK programming languages and are offering over $9,000 in total prizes. Find out more and register today!

Make with Ada 2020: CHIP-8 Interpreter Wed, 21 Oct 2020 00:00:00 -0400 Juliana Silva

Laurent Zhu's and Damien Grisonnet's project won a finalist prize in the Make with Ada 2019/20 competition. This project was originally posted here. For those interested in participating in the 2020/21 competition, registration is now open and project submissions will be accepted until Jan 31st 2021; register here.

CHIP-8 language interpreter for STM32F429 Discovery board



This project was accomplished for the EPITA Ada courses and the Make With Ada contest.

Originally this project was supposed to be a GBA emulator. However, instead of implementing one from scratch in Ada, we wanted to port an existing one to the STM32F429 Discovery and write bindings in Ada. But while trying to port the emulator we noticed that there was not much to do in Ada, and that the project would be mostly written in C instead. We thought that was a pity, since we were supposed to do a project in Ada. So we decided to switch to another project that would allow us to write more Ada. We could have written our own GBA emulator in Ada, but it was too big a challenge to finish one in time for the contest. Thus, we decided to write this CHIP-8 emulator instead, which involves the same coding challenges as the GBA one but is much faster to implement.


The first step of the project was to understand how the emulator works:

Memory

Memory size of 4K with the first 512 bytes of the memory space reserved for the CHIP-8 interpreter. It is common to use this reserved space to store font data.

Registers

CHIP-8 has 16 8-bit data registers named V0 to VF.

Stack

The stack is only used to store return addresses when subroutines are called. In modern implementations stacks can store up to 16 elements.

Timers

CHIP-8 has two timers. They both count down at 60 hertz, until they reach 0.

  • Delay timer: This timer is intended to be used for timing the events of games. Its value can be set and read.
  • Sound timer: This timer is used for sound effects. When its value is nonzero, a beeping sound is made.

Input

Input is done with a hex keyboard that has 16 keys ranging from 0 to F.

This keyboard is displayed on the bottom part of the screen of the STM32F429 Discovery.

Display

Original CHIP-8 Display resolution is 64×32 pixels, and color is monochrome. Graphics are drawn to the screen solely by drawing sprites, which are 8 pixels wide and may be from 1 to 15 pixels in height. Sprite pixels are XOR'd with corresponding screen pixels. In other words, sprite pixels that are set flip the color of the corresponding screen pixel, while unset sprite pixels do nothing. The carry flag (VF) is set to 1 if any screen pixels are flipped from set to unset when a sprite is drawn and set to 0 otherwise. This is used for collision detection.
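
That XOR rule, together with the VF collision flag, can be sketched as follows (type and subprogram names are illustrative, not the project's actual code):

--  Illustrative sketch of the XOR drawing rule: a set sprite pixel
--  flips the corresponding screen pixel, and Collision (VF) records
--  any set-to-unset transition for collision detection.
type Screen_Array is array (0 .. 63, 0 .. 31) of Boolean;

procedure Flip_Pixel
  (Screen    : in out Screen_Array;
   X, Y      : Natural;
   Collision : in out Boolean) is
begin
   if Screen (X, Y) then
      Collision := True;   --  a set pixel is about to be unset
   end if;
   Screen (X, Y) := not Screen (X, Y);
end Flip_Pixel;
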

Since the STM32F429 Discovery screen resolution is 320x240, the display of the ROM was scaled up 5 times to improve the user experience and to match the platform.

Sound

A beeping sound is supposed to be played when the value of the sound timer is nonzero. However, since the STM32F429 Discovery does not have any audio module, no sound is played.

Opcode Table

CHIP-8 has 35 opcodes, which are all two bytes long and stored big-endian.

CHIP-8 Interpreter

The different steps of the interpreter:

  • The screen, the touch panel and the layers are initialized
  • We draw the keyboard on the bottom of screen with the first layer, by using the CHIP-8 sprites. In order to do that, we iterate through all the existing keys and from their position in the font set table, we can draw it easily
  • A ROM is loaded with the Load_Rom procedure. The ROMs are located in the file that we generate with a python script (scripts/ It generates all the Ada arrays from all the ROMs located in the roms/ directory.
  • Then, we have our main loop:

Main Loop

  • An opcode, consisting of 2 bytes, is fetched from the memory at the program counter position
  • We call the right function to execute by looking at the first 4 bits of our opcode. Some instructions will not increment the program counter, some will increment it, and some will skip the next instruction by incrementing it twice
  • At the end of the loop we read the touch screen inputs and we update the list of pressed keys accordingly
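
The fetch and dispatch steps above can be sketched as follows (a self-contained illustration, not the project's actual code; Memory, PC and the opcode handling are simplified stand-ins):

with Interfaces; use Interfaces;

procedure Decode_Sketch is
   type Mem_Array is array (Unsigned_16 range <>) of Unsigned_8;

   Memory : Mem_Array (0 .. 4095) := (others => 0);
   PC     : Unsigned_16 := 16#200#;  --  programs start after the 512
                                     --  reserved bytes
   Opcode : Unsigned_16;
begin
   --  Opcodes are two bytes, stored big-endian: high byte first
   Opcode := Shift_Left (Unsigned_16 (Memory (PC)), 8)
             or Unsigned_16 (Memory (PC + 1));

   case Shift_Right (Opcode, 12) is  --  top 4 bits select the family
      when 16#1# =>
         PC := Opcode and 16#0FFF#;  --  1NNN: jump to address NNN
      when others =>
         PC := PC + 2;               --  most instructions just advance
   end case;
end Decode_Sketch;
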

Setup the project

git clone
cd CHIP-8
git clone --recursive
python2 Ada_Drivers_Library/scripts/

Compile the project

gprbuild --target=arm-eabi -d -P chip8.gpr -XLCH=led -XRTS_Profile=ravenscar-sfp -XLOADER=ROM -XADL_BUILD_CHECKS=Disabled src/main.adb -largs -Wl,-Map=map.txt

Flash the project

arm-eabi-objcopy -O binary objrelease/main objrelease/main.bin
st-flash --reset write objrelease/main.bin 0x8000000

Add a ROM

cp $ROM roms/
./scripts/ roms/ src/

Change ROM

We encountered a few memory problems with the implementation of the menu. So, in order to choose the ROM, you need to change the argument of the call to Load_Rom in the main.adb file.


Simple ROM that does not require user interaction:


  • Access the project schematics here.
  • Access the project code here.
Ada for micro:bit Part 6: Analog Input Tue, 20 Oct 2020 06:06:00 -0400 Fabien Chouteau

Welcome to the Ada for micro:bit series where we look at simple examples to learn how to program the BBC micro:bit with Ada.

If you haven't already, please follow the instructions in Part 1 to setup your development environment.

In this sixth part we will see how to read the analog value of a pin. This means reading a value between 0 and 1023 that reflects the voltage applied to the pin: 0 means 0 volts, 1023 means 3.3 volts.

Wiring Diagram

For this example we will need a couple of extra parts:

  • A breadboard
  • An LED
  • A 470 ohm resistor
  • A potentiometer
  • A couple of wires to connect them all

For this example we start from the same circuit as the pin output example, and we add a potentiometer. The center pin of the potentiometer is connected to pin 1 of the micro:bit; the other two pins are connected to GND and 3V, respectively.


To read the analog value of the IO pin we are going to use the function Analog of the package MicroBit.IOs.

function Analog (Pin : Pin_Id) return Analog_Value
     with Pre => Supports (Pin, Analog);
   --  Read the voltage applied to the pin. 0 means 0V, 1023 means 3.3V


  • Pin : The id of the pin that we want to read the analog value from


  • The function Analog has a precondition that the pin must support analog IO.

In the code, we are going to write an infinite loop that reads the value of pin 1 and writes the same value to pin 0.

This means that you can control the brightness of the LED using the potentiometer.

Here is the full code of the example:

with MicroBit.IOs;

procedure Main is

   Value : MicroBit.IOs.Analog_Value;

begin
   --  Loop forever
   loop
      --  Read the analog value of pin 1
      Value := MicroBit.IOs.Analog (1);

      --  Write the analog value to pin 0
      MicroBit.IOs.Write (0, Value);
   end loop;
end Main;

Following the instructions of Part 1 you can open this example (Ada_Drivers_Library-master\examples\MicroBit\analog_in\analog_in.gpr), compile and program it on your micro:bit.

See you next week for another Ada project on the micro:bit.

Don't miss out on the opportunity to use Ada in action by taking part in the fifth annual Make with Ada competition! We're calling on developers across the globe to build cool embedded applications using the Ada and SPARK programming languages and are offering over $9,000 in total prizes. Find out more and register today!

Ada for micro:bit Part 5: Analog Output Tue, 13 Oct 2020 08:25:00 -0400 Fabien Chouteau

Welcome to the Ada for micro:bit series where we look at simple examples to learn how to program the BBC micro:bit with Ada.

If you haven't already, please follow the instructions in Part 1 to setup your development environment.

In this fifth part we will see how to write an analog value to a pin. The micro:bit doesn't have a real digital to analog converter, so the analog signal is actually a Pulse Width Modulation (PWM). This is good enough to control the speed of a motor or the brightness of an LED.

There is a limit of three analog (PWM) signals on the micro:bit, if you try to write an analog value to more than three pins an exception will be raised.

Wiring Diagram

For this example we use the same circuit as the pin output example.


To write an analog value to the IO pin we are going to use the procedure Write of the package MicroBit.IOs.

procedure Write (Pin : Pin_Id; Value : Analog_Value)
     with Pre => Supports (Pin, Analog);


  • Pin : The id of the pin that we want to control as analog output
  • Value : The analog value for the pin, between 0 and 1023


  • The procedure Write has a precondition that the pin must support analog IO.

In the code, we are going to write a loop with a value that goes from 0 to 128 and write this value to pin 0. We could go up to 1023, but since the LED doesn't get brighter after 128, there is no need to go beyond that value.

We also use the procedure Delay_Ms of the package MicroBit.Time to stop the program for a short amount of time.

Here is the full code of the example:

with MicroBit.IOs;
with MicroBit.Time;

procedure Main is

begin
   --  Loop forever
   loop

      --  Loop for value between 0 and 128
      for Value in MicroBit.IOs.Analog_Value range 0 .. 128 loop

         --  Write the analog value of pin 0
         MicroBit.IOs.Write (0, Value);

         --  Wait 20 milliseconds
         MicroBit.Time.Delay_Ms (20);
      end loop;
   end loop;
end Main;

Following the instructions of Part 1 you can open this example (Ada_Drivers_Library-master\examples\MicroBit\analog_out\analog_out.gpr), compile and program it on your micro:bit.

See you next week for another Ada project on the micro:bit.

Don't miss out on the opportunity to use Ada in action by taking part in the fifth annual Make with Ada competition! We're calling on developers across the globe to build cool embedded applications using the Ada and SPARK programming languages and are offering over $9,000 in total prizes. Find out more and register today!

Ada for micro:bit Part 4: Pin Input Tue, 06 Oct 2020 08:18:00 -0400 Fabien Chouteau

Welcome to the Ada for micro:bit series where we look at simple examples to learn how to program the BBC micro:bit with Ada.

If you haven't already, please follow the instructions in Part 1 to setup your development environment.

In this fourth part we will see how to read the digital state of a pin. This means reading if the pin is at 0 volts (low) or 3.3 volts (high).

Wiring Diagram

For this example we will need a couple of extra parts:

  • A breadboard
  • An LED
  • A 470 ohm resistor
  • A push button
  • A couple of wires to connect them all

We start from the same circuit as the part three example, and we add a push button.


To control the IO pin we are going to use the function Set of the package MicroBit.IOs.

function Set (Pin : Pin_Id) return Boolean
     with Pre => Supports (Pin, Digital);


  • Pin : The id of the pin that we want to read as digital input


  • The function Set has a precondition that the pin must support digital IO.

As you can see, the function Set used to read the pin has the same name as the procedure Set that we used to control the pin in the output example. This is called overloading: two subprograms with the same name that provide different services.
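The overloading mechanism can be sketched in isolation. This little package spec is illustrative only, not the actual MicroBit.IOs specification:

```ada
package IO_Sketch is

   type Pin_Id is range 0 .. 34;

   --  Reads the digital state of a pin
   function Set (Pin : Pin_Id) return Boolean;

   --  Drives a pin high (True) or low (False)
   procedure Set (Pin : Pin_Id; Value : Boolean);

   --  Both subprograms share the name Set; at each call site the
   --  compiler picks the right one from the parameter and result
   --  profile: "if Set (1) then" resolves to the function, while
   --  "Set (0, True);" resolves to the procedure.

end IO_Sketch;
```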

In the code, we are going to write an infinite loop that reads the state of pin 1. If it is high, the button is not pressed, so we turn off the LED on pin 0. If it is low, the button is pressed, so we turn on the LED on pin 0.

Here is the full code of the example:

with MicroBit.IOs;

procedure Main is

begin
   --  Loop forever
   loop

      --  Check if pin 1 is high
      if MicroBit.IOs.Set (1) then

         --  Turn off the LED connected to pin 0
         MicroBit.IOs.Set (0, False);
      else
         --  Turn on the LED connected to pin 0
         MicroBit.IOs.Set (0, True);
      end if;
   end loop;
end Main;

Following the instructions of Part 1 you can open this example (Ada_Drivers_Library-master\examples\MicroBit\digital_in\digital_in.gpr), compile and program it on your micro:bit.

See you next week for another Ada project on the micro:bit.

Don't miss out on the opportunity to use Ada in action by taking part in the fifth annual Make with Ada competition! We're calling on developers across the globe to build cool embedded applications using the Ada and SPARK programming languages and are offering over $9,000 in total prizes. Find out more and register today!

AdaCore Code of Conduct Mon, 05 Oct 2020 10:19:00 -0400 Fabien Chouteau

Starting today, AdaCore has put in place a Code of Conduct (CoC) to ensure a positive environment for everyone willing to interact with us. With the development of this blog, our Twitter accounts, and our GitHub corporate account, there is more and more communication between AdaCore and a number of communities. In this Code of Conduct we explain how we are going to moderate the AdaCore-maintained community spaces, with the goal of maintaining a welcoming, friendly environment.

The full Code of Conduct can be found on our website:

Here is the introduction of the document:

We expect this code of conduct to be followed by anyone who contributes to AdaCore-maintained community spaces such as Github repositories, public and private mailing lists, issue trackers, wikis, blogs, Twitter, and any other communication channel maintained by AdaCore, and by anyone who participates to an activity organised by AdaCore. It applies equally to users, moderators, administrators, AdaCore staff, partners, and community members. 

This code is not exhaustive or complete. It serves to distill our common understanding of a collaborative, shared environment and goals. We expect it to be followed in spirit as much as in the letter, so that it can enrich all of us and the technical communities in which we participate.

Diversity Statement

AdaCore welcomes and encourages participation by everyone. We are committed to being a community that everyone feels good about joining. Although we may not be able to satisfy everyone, we will always work to treat everyone well.

No matter how you identify yourself or how others perceive you: we welcome you.

Ada for micro:bit Part 3: Pin Output Tue, 29 Sep 2020 08:06:00 -0400 Fabien Chouteau

Welcome to the Ada for micro:bit series where we look at simple examples to learn how to program the BBC micro:bit with Ada.

If you haven't already, please follow the instructions in Part 1 to setup your development environment.

In this third part we will see how to control the output state of a micro:bit pin by lighting an LED.

Wiring Diagram

For this example we will need a couple of extra parts:
  • A breadboard
  • An LED
  • A 470 ohm resistor
  • A couple of wires to connect them all

Wiring the LED directly from the output pin to ground would burn it out, so we have to add a resistor to limit the current.


To control the IO pin we are going to use the procedure Set of the package MicroBit.IOs.

procedure Set (Pin : Pin_Id; Value : Boolean)
  with Pre => Supports (Pin, Digital);


  • Pin : The id of the pin that we want to control as digital output
  • Value : A Boolean that says if we want the pin to be high (True) or low (False)


  • The procedure Set has a precondition that the pin must support digital IO.

We also use the procedure Delay_Ms of the package MicroBit.Time to stop the program for a short amount of time.

Here is the full code of the example:

with MicroBit.IOs;
with MicroBit.Time;

procedure Main is

begin
   --  Loop forever
   loop
      --  Turn on the LED connected to pin 0
      MicroBit.IOs.Set (0, True);

      --  Wait 500 milliseconds
      MicroBit.Time.Delay_Ms (500);

      --  Turn off the LED connected to pin 0
      MicroBit.IOs.Set (0, False);

      --  Wait 500 milliseconds
      MicroBit.Time.Delay_Ms (500);
   end loop;
end Main;

Following the instructions of Part 1 you can open this example (Ada_Drivers_Library-master\examples\MicroBit\digital_out\digital_out.gpr), compile and program it on your micro:bit.

See you next week for another Ada project on the micro:bit.

Don't miss out on the opportunity to use Ada in action by taking part in the fifth annual Make with Ada competition! We're calling on developers across the globe to build cool embedded applications using the Ada and SPARK programming languages and are offering over $9,000 in total prizes. Find out more and register today!

Code Obfuscator for Ada using Libadalang and SPARK Mon, 28 Sep 2020 08:33:00 -0400 Michael Frank

In my current job and in my previous job, I was involved in customer support where a user of our product would basically say “your tool does not handle my code correctly.” Whether the problem really is in our tool, or actually with their code, we need to come up with a simplified version of the “incorrect” code. This is not always a trivial task. The customer may be dealing with a fairly large chunk of code, and they may not understand their codebase or our tool well enough to narrow down the problem. It is usually faster for the customer to send as much code as possible, and let the support engineer try to shrink it to a manageable size, as they likely have a better understanding of the tool and what is happening to the example code.

So now we are onto the next problem – how does the support engineer get a copy of the problematic code? In most instances, the customer is very protective of their code – proprietary information, algorithms, or data structures; sometimes even object names can convey information! For example, suppose you had a constant:

Constant_Max_Velocity : constant Velocity_T := 12.3;

That name and value tells you something important about the widget being built.

One could always use some search-and-replace mechanism to mask names, but that gets into complicated matching algorithms and consistency issues. In just the simple example above, how would you replace “constant” in one place and not the other? And do you replace “Velocity” with the same token everywhere or with different tokens? What we really want is a tool that understands the language and can intelligently replace tokens based on their context, not just their textual value.

So I took on the task of writing a tool to “obfuscate” a code base. One of the best ways to write a tool like this for Ada is using Libadalang. This library (available in Ada or Python) allows the user to parse Ada files and gather semantic information. I could then use the semantic information to track every name definition and replace the name wherever it was referenced. With Libadalang, I did not need to worry about the above problems with “constant” and “velocity”.

I chose to write this tool in Ada, because Ada gave me access to the GNATcoll bindings. With these bindings, I could parse GNAT Project Files to find all the source files I needed. This allowed me to obfuscate an entire codebase – once modified, the new codebase should compile correctly and even perform correctly, although none of the variable names would make any sense!

As a simple example, I wrote a small program that solves the “N-Queens” problem (Wikipedia). There is not much “proprietary” information in the code, but looking at the original code and the obfuscated code side by side shows how much information is conveyed just by the names.

Even in this simple example, we see the information “loss”: on the left, we know we are working with a Board made of Rows and Columns; on the right, all we know is that we are looping through some counters.
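Since the side-by-side screenshot is not reproduced here, the effect can be illustrated with a hypothetical fragment. This is not the tool's actual output; the names are made up for illustration:

```ada
--  Original: the names carry the meaning
for Row in Board'Range (1) loop
   for Column in Board'Range (2) loop
      Board (Row, Column) := Empty;
   end loop;
end loop;

--  Obfuscated: same structure, same behavior, all meaning gone
for Xx_1 in Xx_2'Range (1) loop
   for Xx_3 in Xx_2'Range (2) loop
      Xx_2 (Xx_1, Xx_3) := Xx_4;
   end loop;
end loop;
```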

When I started writing this application, I chose to use the SPARK subset of the Ada language rather than the full language. With this, I could use the SPARK provers to help ensure the robustness and correctness of the code. My plan was to get all of the code to SPARK “Silver” level (proving the absence of run-time errors), with some of the code reaching “Gold” level (proving functions actually do what they are supposed to).

The first iteration of coding was just writing the application to be SPARK-compliant (“Stone” level). Even this low level required some re-thinking of coding practices. For example: I have a subprogram that basically returns the next available obfuscated name. In Ada, I would write this as a function that increments a counter and returns the next item. In SPARK, functions cannot have side effects (modifying global data) so this function had to be re-written as a procedure. Not a difficult task, but a language constraint (that makes your code a little safer!)
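The function-to-procedure rework described above can be sketched as follows. The names here (Next_Name, Name_String, Name_Counter) are hypothetical, not the tool's actual code:

```ada
--  Ada habit: a function that increments a global counter as a
--  side effect. SPARK rejects this, since functions must not
--  modify global data:
--
--     function Next_Name return Name_String;

--  SPARK version: the state change is made explicit in a procedure,
--  with a Global contract documenting the data it modifies.
procedure Next_Name (Name : out Name_String) with
  Global => (In_Out => Name_Counter);
```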

Next, I moved onto flow analysis (“Bronze” level). This involved adding Global dependency contracts (showing which global data was read/modified by which subprogram) and turning off SPARK for some subprograms that had to deal with non-SPARK compliant run-time code. With SPARK, an interface can be in SPARK mode while its implementation is not in SPARK mode. I used this to wrap some non-SPARK code (like some run-time packages) and make my SPARK analysis happier.

In my mind, the most important step was making sure I didn’t have any of the typical overflow issues – an absence of run-time errors (“Silver”) level. This is not as easy as it seems, especially when dealing with strings. The simple act of concatenating two strings raises a lot of flags in proving there are no run-time errors.

procedure Do_Something (Y : String; Z : String) is
   X : constant String :=
     Y (Y'First + 1 .. Y'Last) & Z (Z'First .. Z'Last - 1);

The most obvious concern is what happens when concatenating the two strings creates a string that is too long for X to hold. You would need to add preconditions on the lengths of Y and Z. But, because the index range of a string is Integer, Y'First could be Integer'Last, and adding one to that can raise a constraint error (similarly for Z'Last and Integer'First). As a human, you “know” these indices will not happen, but for SPARK to accept it, you have to prove they cannot happen – either through preconditions, better type definitions, or redefining the concatenation operator.
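One possible set of preconditions discharging those checks might look like this. This is a sketch, not the blog's actual solution:

```ada
procedure Do_Something (Y : String; Z : String) with
  Pre => Y'Length >= 1 and then Z'Length >= 1
         --  Non-empty inputs, so the slices below make sense
         and then Y'First < Integer'Last   --  Y'First + 1 cannot overflow
         and then Z'Last > Integer'First   --  Z'Last - 1 cannot overflow
         and then Y'Length - 1 <= Natural'Last - (Z'Length - 1);
         --  The concatenated result fits in a String
```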

Finally, I started implementing contracts on subprogram behavior to prove that each subprogram does what I think it should do (“Gold” level). For some subprograms that is easy, some take a little more thought, and for some, the effort involved in proving correctness (SPARK) is more than the effort involved in showing correctness (testing). This process is still ongoing, and others who work with this application are welcome to contribute!

This application is available on GitHub for use or investigation. This code obfuscator is still a work in progress and will benefit from future Libadalang evolutions in order to support a more complete set of Ada features. But it is a good start to help those of us in the customer support world help our customers!

Ada for micro:bit Part 2: Push buttons Tue, 22 Sep 2020 06:04:45 -0400 Fabien Chouteau

Welcome to the Ada for micro:bit series where we look at simple examples to learn how to program the BBC micro:bit with Ada.

If you haven't already, please follow the instructions in Part 1 to setup your development environment.

In this second part we will see how to use the two buttons of the micro:bit.


To know if a button is pressed or not, we will use the function State of the MicroBit.Buttons package.

type Button_State is (Pressed, Released);

type Button_Id is (Button_A, Button_B);

function State (Button : Button_Id) return Button_State;


  • Button : The Id of the button that we want to check. There are two Ids: Button_A or Button_B.

Return value:

  • The function State returns the Button_State, which can be either Pressed or Released.

We will use this function to display the letter A if button A is pressed, or the letter B if button B is pressed.

Here is the full code of the example:

with MicroBit.Display;
with MicroBit.Buttons; use MicroBit.Buttons;
with MicroBit.Time;

procedure Main is
begin
   --  Loop forever
   loop
      if MicroBit.Buttons.State (Button_A) = Pressed then
         --  If button A is pressed

         --  Display the letter A
         MicroBit.Display.Display ('A');

      elsif MicroBit.Buttons.State (Button_B) = Pressed then
         --  If button B is pressed

         --  Display the letter B
         MicroBit.Display.Display ('B');
      end if;

      MicroBit.Time.Delay_Ms (200);
   end loop;
end Main;

Following the instructions of Part 1 you can open this example (Ada_Drivers_Library-master\examples\MicroBit\buttons.gpr), compile and program it on your micro:bit.

See you next week for another Ada project on the micro:bit.

Don't miss out on the opportunity to use Ada in action by taking part in the fifth annual Make with Ada competition! We're calling on developers across the globe to build cool embedded applications using the Ada and SPARK programming languages and are offering over $9,000 in total prizes. Find out more and register today!

Ada for micro:bit Part 1: Getting Started Tue, 15 Sep 2020 08:44:00 -0400 Fabien Chouteau

Welcome to the Ada for micro:bit series where we look at simple examples to learn how to program the BBC micro:bit with Ada.

In this first part we will see how to setup an Ada development environment for the micro:bit.

The micro:bit

The micro:bit is a very small ARM Cortex-M0 board designed by the BBC for computer education. It's fitted with a Nordic nRF51 Bluetooth-enabled microcontroller and an embedded programmer. You can get it at:

The projects in this series will also require basic electronic components (LEDs, resistors, potentiometer, buzzer). If you don't have those items, we recommend one of the micro:bit starter kits like the Kitronik Inventor's Kit:


To start programming in Ada, you first have to download and install GNAT Community from

You will need both the x86_64 and arm-elf packages.

Once you have installed the two packages, you can download the sources of the Ada_Drivers_Library project: here. Unzip the archive in your document folder for instance.

Linux only

On Linux, you might need privileges to access the USB programmer of the micro:bit, without which the flash program will say "No connected boards".

To do this on Ubuntu, you can create (as administrator) the file /etc/udev/rules.d/mbed.rules and add the line:

SUBSYSTEM=="usb", ATTR{idVendor}=="0d28", ATTR{idProduct}=="0204", MODE="0666"

then apply the new rule by running:

$ sudo udevadm trigger

First program

Start the GNATstudio development environment that you installed earlier, click on "Open Project" and select the file "Ada_Drivers_Library-master\examples\MicroBit\text_scrolling\text_scrolling.gpr" from the archive that you extracted earlier.

Click on the "Build all" icon in the toolbar to compile the project.

Plug your micro:bit using a USB micro cable.

And finally click on the "Flash to board" icon in the toolbar to run the program on the micro:bit.

You should see text scrolling on the LEDs of the micro:bit:

That's it for the setup of your Ada development environment for the micro:bit. See you next week for another Ada project on the micro:bit.

Don't miss out on the opportunity to use Ada in action by taking part in the fifth annual Make with Ada competition! We're calling on developers across the globe to build cool embedded applications using the Ada and SPARK programming languages and are offering over $9,000 in total prizes. Find out more and register today!

GNATcoverage: getting started with instrumentation Thu, 10 Sep 2020 08:34:33 -0400 Pierre-Marie de Rodat

This is the second post of a series about GNATcoverage and source code instrumentation. The previous post introduced how GNATcoverage worked originally and why we extended it to support source instrumentation-based code coverage computation. Let’s now see it in action in the most simple case: a basic program running on the host machine, i.e. the Linux/Windows machine that runs GNATcoverage itself.

Source traces handling

Here is a bit of context to fully understand the next section. In the original GNATcoverage scheme, coverage is inferred from low level execution trace files (“*.trace”) produced by the execution environment. These traces essentially contain a summary of program machine instructions that were executed. We call these “binary traces”, as the information they refer to is binary (machine) code.

With the new scheme, based on the instrumentation of source code, it is instead the goal of each instrumented program to create trace files. This time, the information in traces refers directly to source constructs (declarations, statements, IF conditions, …), so we call them “source traces” (“*.srctrace” files).

The data stored in these files is conceptually simple: some metadata to identify the sources to cover and a sequence of booleans that indicate whether each coverage obligation is satisfied. However, for efficiency reasons, instrumented programs must encode this information in source trace files using a compact format, which is not trivial to produce. To assist instrumented programs in this task, GNATcoverage provides a “runtime for instrumented programs” as a library project: gnatcov_rts_full.gpr, for native programs which have access to a full runtime (we will cover embedded targets in a future post).


First, the GNATcoverage instrumenter needs a project file that properly describes the closure of source files to instrument as well as the program main unit. This is similar to what a compiler needs: access to all the dependencies of a source file in order to compile it.

Next, this blog series assumes the use of a recent GPRbuild (release 20 or beyond), for the support of two switches specifically introduced to facilitate building instrumented sources without modifying project files. What the new options do is conceptually simple so it would be possible to build without this, just less convenient.

Then the source constructs added by the instrumentation expect an Ada 95 compiler. The instrumenter makes several compiler-specific assumptions (for instance when handling Pure/Preelaborate units), so for now we recommend using a recent GNAT compiler.

Finally, users need to build and install the “runtime for instrumented programs” described in the previous section. To make sure the library code can be linked with the program to analyze, the library first needs to be built with the same toolchain, then installed:

# Create a working copy of the runtime project.
# This assumes that GNATcoverage was installed
#  in the /install/path/ directory.
$ rm -rf /tmp/gnatcov_rts
$ cp -r /install/path/share/gnatcoverage/gnatcov_rts /tmp/gnatcov_rts
$ cd /tmp/gnatcov_rts

# Build the gnatcov_rts_full.gpr project and install it in
# gprinstall’s default prefix (most likely where the toolchain is installed).
$ gprbuild -Pgnatcov_rts_full
$ gprinstall -Pgnatcov_rts_full

Note that depending on your specific setup, the above may not work without special filesystem permissions, for instance if the toolchain/GPRbuild was installed by a superuser. In that case, you can install the runtime to a dedicated directory and update your environment so that GPRbuild can find it: add the --prefix=/dedicated/directory argument to the gprinstall command, and add that directory to the GPR_PROJECT_PATH environment variable.

A first example

Now that prerequisites are set up, we can now go ahead with our first example. Let’s create a very simple program:

--  example.adb
with Ada.Text_IO; use Ada.Text_IO;

procedure Example is
   function Fact (N : Natural) return Natural is
   begin
      if N <= 1 then
         return 1;
      else
         return N * Fact (N - 1);
      end if;
   end Fact;
begin
   Put_Line ("Fact (1) =" & Fact (1)'Image);
end Example;

-- example.gpr
project Example is
   for Main use ("example.adb");
   for Object_Dir use "obj";
end Example;

Before running gnatcov, let’s make sure that this project builds fine:

$ gprbuild -Pexample -p
$ obj/example

Great. So now, let’s instrument this program to compute its code coverage:

$ gnatcov instrument -Pexample --level=stmt --dump-trigger=atexit

As its name suggests, the “gnatcov instrument” command instruments the source code of the given project. The -Pexample and --level=stmt options should be familiar to current GNATcoverage users: the former requests the use of the “example.gpr” project, to compute the code coverage of all of its units, and --level=stmt tells gnatcov to analyze statement coverage.

The --dump-trigger=atexit option is interesting. As discussed earlier, instrumented programs need to dump their coverage state into a file (the trace file), that “gnatcov coverage” reads in order to produce a coverage report. But when should that dump happen? Since one generally wants reports to show all discharged obligations (fancy words meaning: executed statements, decision outcomes exercised, …), the goal is to create the trace file after all code has executed, right before the program exits. However some programs are designed to never stop, running an endless loop (Ravenscar profile), so this trace file creation moment needs to be configurable. --dump-trigger=atexit tells the instrumenter to use the libc’s atexit routine to trigger file creation when the process is about to exit. It’s suitable for most programs running on native platforms, and makes trace file creation automatic, which is very convenient.

Now is the time to build the instrumented program:

$ gprbuild -Pexample -p --src-subdirs=gnatcov-instr --implicit-with=gnatcov_rts_full

Even seasoned GPRbuild users will wonder about the two last options.

--src-subdirs=gnatcov-instr asks GPRbuild to consider, in addition to the regular source directories, all “gnatcov-instr” folders in object directories. Here that means that GPRbuild will first look for sources in “obj/gnatcov-instr” (as “obj” is example.gpr’s object directory), then for sources in “.” (example.gpr’s regular source directory).

But what is “obj/gnatcov-instr” anyway? When it instruments a project, gnatcov must not modify the original sources, so instead it stores instrumented sources in a new directory. The general rule of thumb for programs that deal with project files is to use projects’ object directory (Object_Dir attribute) to store artifacts; “gnatcov instrument” thus creates a “gnatcov-instr” subdirectory there and puts instrumented sources in it. Afterwards, passing --src-subdirs to GPRbuild is the way to tell it to build instrumented sources instead of the original ones.

The job of --implicit-with=gnatcov_rts_full is simple: make GPRbuild consider that all projects use the gnatcov_rts_full.gpr project, even though they don’t contain a “with “gnatcov_rts_full”;” clause. This allows instrumented sources (in obj/gnatcov-instr) to use features in the gnatcov_rts_full project even though “example.gpr” does not request it.

In other words, both --src-subdirs and --implicit-with options allow GPRbuild to build instrumented sources with their extra requirements without having to modify the project file of the project to test/cover (example.gpr).

We are getting closer to the coverage report. All we have to do now is run the instrumented program to create a source trace file:

$ obj/example
Fact (1) = 1
$ ls *.srctrace

So far, so good. By default, the instrumented program creates in the current directory a trace file called “XXX.srctrace” where XXX is the basename of the executed binary, but one can choose a different filename by setting the GNATCOV_TRACE_FILE environment variable to the name of the trace file to create. Now that we have a trace file, the rest will be familiar to GNATcoverage users:

$ gnatcov coverage -Pexample --level=stmt --annotate=xcov example.srctrace
$ cat obj/example.adb.xcov
75% of 4 lines covered
Coverage level: stmt
   1 .: with Ada.Text_IO; use Ada.Text_IO;
   2 .:
   3 .: procedure Example is
   4 .:    function Fact (N : Natural) return Natural is
   5 .:    begin
   6 +:       if N <= 1 then
   7 +:          return 1;
   8 .:       else
   9 -:          return N * Fact (N - 1);
  10 .:       end if;
  11 .:    end Fact;
  12 .: begin
  13 +:    Put_Line ("Fact (1) =" & Fact (1)'Image);
  14 .: end Example;

And voilà! That’s all for today. The next post will demonstrate how to handle programs running on embedded targets.

Introducing source code instrumentation in GNATcoverage Tue, 08 Sep 2020 08:41:37 -0400 Pierre-Marie de Rodat

This is the first post of a series about GNATcoverage and source code instrumentation.

In order to make GNATcoverage viable in more contexts, we planned several years ago to add instrumentation support in GNATcoverage for Ada sources. This feature reached maturity recently and is available in the latest Continuous Release, so it is a good time to present it with a blog series!

GNATcoverage background

GNATcoverage is the tool developed by AdaCore to compute the code coverage of Ada/C programs, available on several platforms, both native and embedded. It is able to assess DO-178 criteria, up to MC/DC, on C and Ada programs, from Ada 95 to Ada 2012.

Its coverage analysis capabilities are versatile. Several output formats are available: “xcov”, which looks like the text coverage reports from GCC’s gcov tool; a set of static HTML pages; a single modern dynamic HTML page; an XML report for machine processing; or a custom text report suitable for certification contexts. In addition, the tool features powerful consolidation capabilities, which allow combining the results of multiple executions into a single report in several fashions and let users specify which packages/sources are of actual relevance to an analysis.

The way it works so far is atypical for a code coverage tool working with programs compiled to machine code:

  1. The source code is compiled unchanged to machine code (processor instructions), with a few special compiler options to ease the mapping of machine code back to source constructs and to generate a list of source constructs to cover: the SCOs (Source Coverage Obligations).

  2. An “instrumented” execution environment runs the program and generates a “trace file”, which roughly contains the set of machine instructions executed. For native platforms, this execution environment could be Valgrind or DynamoRIO, while embedded programs would execute either in GNATemulator or on a physical board with a hardware probe attached.

  3. GNATcoverage computes the coverage report from trace files, compiled programs, source files and the SCOs.
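As a rough sketch, these three steps map onto commands like the following (the exact compiler options and "gnatcov run" arguments depend on the target and coverage level, so treat this as illustrative):

```shell
# 1. Build unchanged sources, with the coverage-oriented compiler options
$ gprbuild -Pexample -cargs -g -fdump-scos -fpreserve-control-flow

# 2. Run in an instrumented execution environment to produce a trace file
$ gnatcov run -Pexample --level=stmt obj/example

# 3. Compute the report from the trace, the program and the SCOs
$ gnatcov coverage -Pexample --level=stmt --annotate=xcov example.trace
```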

Unlike traditional code coverage tools, which generally inject code in the compiled program so that the program itself computes its coverage state, GNATcoverage works on unmodified programs. Instead, the execution environment builds the coverage state on behalf of the executed program. This has several advantages:

  • The main one is that executable code used for coverage analysis can also be used in production, or at least will be very close since the program itself is not modified to embed coverage measurement code and data structures.

  • It allows object coverage analysis (coverage of individual processor-level instructions), which is useful in source/object traceability studies.

So why do we need instrumentation?

Our original approach also comes with drawbacks. For instance, the execution environment on native platforms is an emulator (Valgrind, DynamoRIO), which incurs a non-trivial performance penalty. For embedded targets, there is sometimes no possibility to use GNATemulator and no hardware probe available to create execution traces, or setting up such a probe can prove tricky enough to be impractical. Hence, depending on the specific situation, instrumenting programs for code coverage can be a better fit than using unmodified programs.

One of the design goals for this instrumentation scheme was to be as close as possible to the original one. This facilitates transitions from one mode to the other, and makes most existing features, in particular in coverage analysis capabilities, applicable to both in a consistent manner.

The next post will present how the instrumentation scheme works in GNATcoverage with a simple example program.

The FACE™ open systems strategy gaining traction in the avionics industry Thu, 03 Sep 2020 09:10:00 -0400 Jessie Glockner

The Future Airborne Capability Environment™ (FACE) approach is a US government-industry initiative for reducing avionics system life cycle costs through software component reuse. The technical foundation is based on a portability-focused language-agnostic architecture and data model, common interfaces, and commercially available standards (IDL, POSIX®, and ARINC-653). The architecture comprises a number of segments -- Portable Components Segment, Transport Services Segment, Operating System Segment, Platform-Specific Services Segment, and I/O Services Segment -- that reflect a separation of concerns between portable and platform-dependent functionality.

Cost savings through reusable components is hardly a new idea, and it may look simple on the surface but has proved difficult to achieve in practice. Some hurdles are technical; designing a component for reusability requires experience and a broad set of skills, and programming languages differ in how effectively and efficiently they support reuse. And on the non-technical side, contracts for computer-based systems have historically offered little incentive, for either the agency procuring the software or the company developing it, to focus on reusability. Designing for reuse would add cost to the initial effort but realize savings only on later projects.

The FACE approach resolves this dilemma in a number of ways:

First, from the outset in 2010, it has involved all of the stakeholders: government agencies procuring avionics software, defense contractors developing the software, RTOS vendors and other suppliers of platform infrastructure, and software development and verification tool providers. The FACE Technical Standard has evolved as a consensus from these stakeholders in a “bottom-up” fashion, reflecting the state of the practice in industry.

Second, the FACE Technical Standard has taken a language agnostic approach, with support for the major languages used in airborne systems: C, C++, Ada, and (mostly for user interface components) Java.

Third, it does not attempt to address design goals other than portability. Although the FACE Technical Standard lays out operating system profiles and language subsets (“capability sets”) reflecting safety- and/or security-based restrictions, the FACE conformance tests do not check functionality, safety, or security properties. (Verification of these properties is obviously important but needs to be done separately, using appropriate analysis tools and the relevant certification standard. The SPARK Pro formal methods-based toolset is especially useful in this context, for example allowing proof that the code does not raise any exceptions.) The FACE conformance tests only check the software component and its associated data model -- officially known as a “Unit of Conformance” (UoC) -- to ensure that all interfaces are used correctly. A UoC that provides a FACE service, for example an RTOS in the Operating System Segment, has to implement all of the interfaces that the UoC’s profile requires. A UoC that functions as an application, for example flight management code in the Portable Components Segment, must only use interfaces that the UoC’s profile allows.

And fourth, reflecting the maturity of the FACE Technical Standard and the growing number of certified UoCs, DoD agencies are increasingly adding FACE conformance to the requirements for new avionics system procurements.  Indeed, the FACE approach not only benefits the avionics industry here in the US, but could have a larger impact on other industries in the US and abroad as companies recognize the benefits of reusing portable technologies to provide faster, more cost-effective deployment of new systems and software platforms. 

AdaCore is committed to the success of the FACE approach

Both the Ada programming language and the company’s product offerings directly support the FACE initiative’s objectives while helping developers meet the high assurance requirements that are typical in airborne software.

AdaCore has been an active contributor to both the technical and business sides of the FACE community since 2012 and is a Principal Member of the FACE Consortium. Company representatives have served as key members of the Conformance and Operating System Segment (OSS) Subcommittees, where they have reviewed the various versions of the FACE Technical Standard, helped formulate effective policies and procedures for conformance, and worked to incorporate support for Ada 2012 Safety capability sets in the FACE Technical Standard so that developers can take advantage of contract-based programming and other modern features. And just recently, Dr. Benjamin Brosgol, a member of AdaCore’s senior technical staff, was elected Vice Chair of the Technical Working Group by The Open Group FACE™ Consortium Steering Committee.

On September 21, 2020, The Open Group will host its annual FACE™ and SOSA™ (Sensor Open Systems Architecture) Consortia Technical Interchange Meeting. AdaCore is the Premier Sponsor of this year’s free, virtual event offering paper presentations by leading experts from government, industry, and academia on the use of the FACE Technical Standard and related business practices.

On March 23, 2021, The Open Group will host a live FACE™ and SOSA™ Consortia Exposition & Technical Interchange Meeting at the Holiday Inn Solomons Conference Center & Marina in Solomons, MD. This Technical Interchange Meeting will consist of keynote speakers, a panel discussion, and FACE 101 and SOSA 101 sessions. AdaCore will also be the Premier Sponsor of this event, and will be exhibiting alongside customers and partners such as Wind River, Lynx Software Technologies, Rapita, and Verocel.

Make with Ada 2020: LoRaDa := Ada + LoRa; Wed, 12 Aug 2020 09:06:47 -0400 Emma Adby

Hedley Rainnie's project won a finalist prize in the Make with Ada 2019/20 competition. This project was originally posted here. For those interested in participating in the 2020/21 competition, registration is now open and project submissions will be accepted until Jan 31st 2021; register here.



Last MakeWithAda I worked on getting BLE going with an STM32L476. This project is another communication protocol: LoRa. This came about as my wife and I were musing about how to detect and deter unwanted garden visitors (cats that come into the garden to use it as a toilet and leave, squirrels that climb the walls and scratch at the soffits). Step one for dealing with this was to ID the interlopers using sensors; step two would be some form of countermeasure. I am sort of at step 0.5: the basic sensor comm is up, but the detection and countermeasures have not been decided yet.


Early on, I realized that LoRa might make a good choice for long-distance sensor use. I did not need high bandwidth here, just: creature detected, issue countermeasure. On Twitter one day, Ronoth was retweeted regarding a Crowdsupply board, so I waited some time for that to arrive. It was my first use of Ada and LoRa. The history of how these boards came to be is outlined below.

First Ada + LoRa port: Ronoth S76S

This was a Crowdsupply board by Ronoth. The board is based on the AcSiP S76S module.

Ronoth S76S

The AcSiP S76S is a module with an STM32L073 + an SX1276 radio. It supports SWD debug and the usual port of Ada to an STM32L series. The board is expensive, $30+. Also, the STM32L0 is a Cortex-M0+ and thus ARMv6M, which complicated the port slightly and hurts code density. This was my first attempt at an SX1276 driver. It was initially based on the RadioHead Arduino lib but diverged rapidly.

Second Ada + LoRa port: Heltec LoRa Node 151

The second board is the Heltec LoRa Node 151. This is an STM32L151 + SX1276:

Heltec LoRa Node 151

After the Ronoth board, this one was a more modern design with more RAM. It was a smooth port, transmitting and receiving reliably. It's half the price of the Ronoth.

Third Ada + LoRa port: Blkbox 915MHz LoRa SX1276 module

Then there is the Blkbox 915MHz LoRa SX1276 module, here seen connected to a Bluepill (STM32F103).

Blkbox 915 MHZ LoRa SX1276 module + Bluepill

This module is my favorite. It's cheap (about $7), works fine, and can be moved around from target to target. Initially, as pictured, I had it ported to the Bluepill. Later I moved it to the complex STM32L552 and the equally challenging STM32WB55, where it acts as a server.

Fourth port: the new STM32L552

Blkbox 915MHz LoRa SX1276 module + STM32L552

The STM32L552 challenge

Whilst the other STM32 ports are not new ground for me wrt getting Ada going on ST platforms, the STM32L552 is exceptionally challenging. I have some experience with embedded platforms and the Cortex-M33 in its various forms from Nordic, NXP and now ST are truly some of the most complex controller designs I have ever worked with. The bugs you can get are really epic. I could fill pages here recounting all the really messy mind bending bugs. The bullet points below outline the work, but at each stage learning and bugs were involved.

1) It uses the ARMv8M architecture. There was no support in OpenOCD for this target, and no direct Ada support, as the library is ARMv7M.

2) It is based on a Cortex-M33 with TrustZone.

3) Ada_Drivers_Library is designed for a single view of the peripheral space not a peripheral space partitioned into Secure and Non-secure areas.

4) Debugging also needs to consider ARMv8M+TrustZone and how that affects register/memory/flash reading and writing.

5) To that end openocd was ported to this platform:

From there you can attach to the board:

openocd -f board/st_nucleo_l552.cfg

The usual gdb loading commands work for reading and writing flash and ram.

6) The methodology for the port was to bolt two separate Ada ELF32 binaries into the final image. One image is a Secure boot and API handler, plus a very small C glue layer that handles gcc ARMv8M details, since gnat2019's gcc libs cannot be linked with ARMv8M code yet. Also, some future pragmas would be needed in the Ada world to accommodate the special S <=> NS bindings + access to NS functions (the BLXNS and SG instructions, and cleaning up the unbanked register state before BLXNS to avoid S state leaks to the NS side). Finally, the NS image is the last piece. For S I am using the Ravenscar sfp runtime and for NS, Ravenscar full. Before NS can toggle an LED or touch any peripheral, S has to be told what the NS side is allowed to use: extra boilerplate unneeded in a non-secure environment.

7) The basic structure of the flash: the boot area (formerly at 0x08000000, now S at 0x0c000000 for a secure-boot Ada ELF32 binary) currently occupies 40-60k of flash. A watermark is created in the ST flash option regs to divide the flash into 2 regions; I chose S from 0x0c000000 to 0x0c01ffff and NS from 0x08020000 to the end.

The secure boot area also has the veneer code to allow NS to call back into S. You need a magic SG instruction anywhere you want NS's PC to touch down; any other opcode is an abort. Also, the region for the NS PC touch-down must be marked Non-Secure Callable (NSC), or you get another cryptic abort. The S & NS Ada programs are Cortex-M4F compiler builds; the veneer code is in C and is compiled as Cortex-M33.

Outbound S to NS call flow

Above we see the S_To_NS Ada call going to a C s_to_ns.

A magic function ptr with special attribute ensures that the function pointer produces the blxns shown plus a boatload of assembly to wipe out the CPU regs so that no leaks are present to the NS side. Note that the NS side is a full Cortex-M executable with vector table.

Then we have the NS side calling the LoRa radio IP's API. Let's look at Recv_SX1276 as an example API call

The call goes to the C wrapper recv_sx1276_from_ns. Notice that due to its attribute, its veneer entry point starts with the ARMv8M instruction sg. An sg is the only instruction NS can execute upon arrival in S. Any other instruction and you will be in the debugger, debugging a crash.

After the sg, a veneer branch is made to a special veneer that finally calls our Ada code via an export of its Recv_SX1276 as C callable.

Upon arrival, we see pointer types. How do we, on the secure side, know that those pointers are safe (i.e. NS)? We need to use another ARMv8M instruction to validate the pointer(s): the tt instruction, as shown:

If any of those pointers are S, we return. This is defensive coding in a S/NS environment.

A really challenging port. For example, with 2 Ravenscar runtimes on one SoC, when an exception crosses between secure and non-secure, how is the context switch decided? This was very challenging, as task starvation was a very real bug.


There was initially no software for this board, which I ordered in November and received in early December. So I had 2 months to get the whole show going.

Secure/Non-secure peripheral base handling is interesting: how does Ada set up the peripheral bases for the driver code that is shared between S and NS?

We see the magic in the declaration: the S base is always shifted 16#1000_0000# from the NS base. In this way, legacy Ada_Drivers_Library code that just refers to GPIO_A, for example, will 'do the right thing' based on the stance that the library was built with (the Secure_Code constant).
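As a minimal sketch of that idea (the names Secure_Code and Periph_Base below are illustrative, not the actual Ada_Drivers_Library declarations):

```ada
--  Sketch: select the peripheral base according to the build stance.
package Peripheral_Bases is
   Secure_Code : constant Boolean := True;  --  stance the library is built with

   --  On ARMv8M the secure alias of a peripheral sits 16#1000_0000#
   --  above its non-secure address
   Periph_Base : constant :=
     16#4000_0000# + (if Secure_Code then 16#1000_0000# else 0);
end Peripheral_Bases;
```

All driver addresses are then expressed relative to Periph_Base, so a single source tree serves both the S and NS builds.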

Fifth port: the STM32WB55 + LoRa, a LoRa server to BLE bridge

An Ada programmed STM32WB55 rounds out the show. Here is the server having its LoRa module traffic being analyzed by a Saleae.

WB55 Server LoRa debug

This was also a big effort, done after the last Make with Ada contest. Last contest was a SensorTile with Ada running the BLE stack. This WB55 is a collapsed SensorTile: the BLE radio is not an SPI peripheral here but has been absorbed into an SoC, with HW inter-process communication being used instead of SPI. There were lots of issues getting a port to this platform (no SVD file initially!). I had a good bug with LoRa SPI: the SPI flags were at the wrong bit offset, so no rx/tx notifications were received. This turned out to be an SVD file issue. It took a day to debug that one, as an SVD file has a lot of leverage in a port; it will be the last place you look.

For the STM32WB55 Nucleo board, there are 2 in the blister pack: a large board and a small dongle. I am using the large board and a Blkbox 915MHz LoRa SX1276 module.


Radio Freq

In the US, LoRa is restricted to 915 MHz. I thought this Ada code really elegantly solves the frequency-to-24-bit coding the SX1276 uses:

Ada data delta type

And its usage:

Thus abstracting Freq -> 24bit representation
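A hedged sketch of the idea (the names below are mine, not the project's actual declarations; I assume the standard SX1276 frequency step of Fxosc / 2**19 with a 32 MHz crystal, about 61.035 Hz per LSB):

```ada
--  Illustrative sketch: a fixed-point type whose delta is the radio's
--  frequency step, so converting to the 24-bit RegFrf value is a scaling
with Interfaces; use Interfaces;

procedure Freq_Sketch is
   Fstep : constant := 32_000_000.0 / 2.0 ** 19;  --  ~61.035 Hz per LSB

   type Frequency is delta Fstep range 0.0 .. 1_024_000_000.0;

   --  Scale a frequency down to the value written to the Frf registers
   function To_Frf (F : Frequency) return Unsigned_32 is
     (Unsigned_32 (F / Fstep));

   Frf_915 : constant Unsigned_32 := To_Frf (915_000_000.0);
begin
   null;  --  Frf_915 would be split across RegFrfMsb/Mid/Lsb
end Freq_Sketch;
```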

Radio message debug

The Saleae could capture Blkbox LoRa SPI radio traffic between the server on the STM32WB55 and the STM32L552 secure client. Very helpful. At that moment it was 2 Ada programs with a new protocol and issues with retries and list corruption. One bug was quite interesting. The server is in receive 99% of the time, so you might think that setting its FIFO pointer to 0 and reading the received packet would be solid. No: if the SX1276 state machine never leaves receive, it keeps an internal FIFO pointer. Since it's internal, it keeps monotonically increasing with every packet, even with an overt reset of the rx & tx pointers. Thus at each packet notification the code sees stale data at 0, as the internal pointer has moved. The 'fix' is to leave receive just long enough for that pointer to reload. That issue cost another day of debug. There were many issues of this type.


Putting it all together

OK, so we have 6 nodes of varying pedigree and 1 gateway/server. The design of the LoRa packet data is a to/from pair of bytes (à la RadioHead FW), but then we deviate, adding fields for commands and values for message retries and sequence numbers.

Message Protocol

There are 4 messages:

The four messages
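As a hypothetical sketch of what such a packet header can look like (all names below are mine, inferred from the description of the protocol, not the project's actual code):

```ada
--  Illustrative sketch inferred from the protocol description
with Interfaces; use Interfaces;

package LoRa_Messages is
   type Node_Id is new Unsigned_8;
   --  0 and 255 are not valid client node IDs

   --  The four message kinds (names assumed)
   type Command is (Ping, Ping_Reply, Notify8, Notify8_Reply);

   --  RadioHead-style to/from pair, then the fields added for
   --  commands, retries and sequence numbers
   type Packet_Header is record
      To       : Node_Id;
      From     : Node_Id;
      Cmd      : Command;
      Retries  : Unsigned_8;  --  message retired after 15 retries
      Sequence : Unsigned_8;  --  a reply carries the same seq#
      Notify   : Unsigned_8;  --  8-bit notification payload
   end record;
end LoRa_Messages;
```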

Client data structures are as so:

Client datastructures

Node discovery

Initial 'best effort' contact by the server to discover the network:

Every 5 seconds the server sends a broadcast ping; ping replies come in from:

1) Those nodes in airshot that are ready to receive.

2) Those nodes whose non-retried reply can get back to the server. A broadcast ping creates a lot of RF hash wrt the replies.

3) Once the server sees the reply, the node is added to a table:

The actives table

This is how the network gets populated and grows as new nodes are discovered or old ones drop off.

Note that 0 and 255 are not valid client node IDs.


Once a node has been recorded by the server as active, it can begin to send and receive notifications. Today I only support 8-bit notifications; 8 bits is more than enough for my target application, and ideally I don't want long messages being sent. This 8-bit notify has been set up so that bit0 is the LED on the board and bit1 is a user button. Only node #1, the STM32L552, has a usable user button. The plumbing of that interrupt was interesting, by the way, as its GPIO needs to be allowed to be read by the NS side; further, it can have its interrupt routed to NS. Again, all new ground, requiring deep study of the reference manual and board experimentation (no reference code; everything was the try, see, iterate method). Fortunately, I was using RAM for those tests or I think the flash would be worn out by now.

New Ada_Drivers_Library support for s/ns external interrupt routing:

The server's BLE stack is activated when the BLE phone application shows an LED icon press. It's a BLE notification. We then locally change the server's LED state to the requested value and then set a suspension object to let the LoRa task know that the LoRa network needs to be woken up with a notify8 message.

The server's notification task then walks the actives list and sends notify8's to all connected nodes. Upon receipt of a reply, the original message is removed from the queue. If however after a timeout, a node has failed to respond, then the original message with same seq# is resent. Finally, after 15 retries with no reply, the message is retired.

When the user button is pressed, the reverse happens. The button notification task on the client is activated via a suspension object, this then prepares a LoRa packet with a notify8 with bit2 set. Assuming it got over the air to the server after its own client retry count, then the server processes the notify8 by setting another suspension object that wakes up the same BLE task that handles the local button press. This then is transmitted to the BLE phone application where the event with timestamp is shown.


A demo has been coded up. The idea is that via ping packet replies, the server can build up an array of who is connected. Once that map is created, the server knows which nodes are alive on the LoRa network. From there, via the Android app, we make a connection to LoRaDa (the name I gave the Ada BLE server on the STM32WB55). From there you can turn on a light and see any alerts. At this time, I only have a user button on the STM32L552 Nucleo board, so it is the only board that sends the alert.

The nodes in the LoRa network are as so:

0: STM32WB55 (Ada coded LoRa server + Ada BLE central)

1: STM32L552 (Ada coded secureboot for radio, and non-secure LoRa client).

2: Heltec LoRa Node 151 #1 (Ada coded LoRa client)

3: Heltec LoRa Node 151 #2 (Ada coded LoRa client)

4: Ronoth LoDev S76S #1 (Ada coded LoRa client)

5: Ronoth LoDev S76S #2 (Ada coded LoRa client)

6: Bluepill STM32F103

So once nodes have arrived in the network, when the light button is toggled on the Android app, it cycles through all the connected nodes and sets each one's light to the requested state. All during this, node #1 can signal an alert that shows up as a red bell icon with a timestamp in the app.


The range is outstanding; I walked probably 500 m from our house and still had a signal! This tech is more than adequate for low-rate sensor data.


I doubt I could have hacked this without using Ada. Once I had the client code up on the STM32L552 and was migrating the changes to the Heltec, Ronoth and Bluepill boards, I don't think I spent more than 30 minutes getting the client changes up and working smoothly on those targets. I had already done a bare-bones LoRa port to each, but the client task version was not yet ported. So Ada made the job a pleasure, and at least in the doldrums of code bugs I could rely on the fact that the compiler was 100% there and solid despite my code problems.


I thought last year's Make with Ada was a challenge, but this one was seemingly straightforward up until the Cortex-M33 needed much attention, especially low-level work such as flashing S & NS images in OpenOCD, which pulled me away from the LoRa client and server. By the way, those flashing bugs and their fixes were quite fascinating, but frustrating too given the schedule. The server too was added late in the project! Originally I had a Dragino LG01 LoRa access point. That LoRa node is programmed in a C++ Arduino env; doable, but not Ada-esque. Finally a bulb went on in my head: why not use the BLE work on the STM32WB55 and make a LoRa to BLE bridge? How hard could that be? :) It was another challenge on top of an already challenging project. Much of the work was on the protocol, debugging radio code on both the client and server. The server has the most advanced radio work, as it has an async receive task. The server is 99% of the time in receive and drops out of that to transmit from time to time. Timing is really subtle with these tasks; if you mess it up, the BLE can drop the connection or the LoRa network starts too many retries. I spent quite a bit of time on this protocol and it still needs some work.

One combination that can make LoRa quite inexpensive for Ada users is a Bluepill (< $2) and a Blkbox LoRa module ($6-$7), so for < $10 you can have an Ada-controlled LoRa module. A good value, I feel.

Ultimately, ST will release a 48-pin 7x7 STM32L5 processor that will be pin-compatible with the Bluepill boards. That will be Cortex-M33 based and might make for another interesting secure IoT solution. I plan to do a chip swap when it's available, as I did for the STM32L443 before.

About 2 weeks ago ST announced the STM32WL, a single-chip LoRa controller. All the solutions I showed are dual-chip, a controller + radio; through some deal with Semtech, ST will have a single chip with the radio IP absorbed. Of course, the Ada code I worked on will need to run on this device when it's available.

Finally, whilst an Ada LoRa network and LoRa<->BLE gateway is novel, my feeling is that the most interesting part of the work is the progress on the previously Ada-untouched Cortex-M33. Ada is well known as a safe and secure language; the Cortex-M33 is a secure processor. So the marriage is a good one. Let's see if the Ada community can make some progress with this CPU; I have already prepared a path and shown it's quite possible to get good results from it.

  • Access the project code here.
Relaxing the Data Initialization Policy of SPARK Tue, 28 Jul 2020 08:36:53 -0400 Claire Dross

SPARK is always under development, and new language features make it into every release of the tool, be they previously unsupported Ada features (like access types) or SPARK-specific developments. However, new features generally take a while to make it into actual user code. The feature I am going to present here is, in my experience, an exception, as it was used both internally and by external users before it made it into any actual release. It was designed to enhance the verification of data initialization, whose limitations have been a long-standing issue in SPARK.

In the assurance ladder, data initialization is associated with the bronze level, that is, the easiest to reach through SPARK. Indeed, most of the time, the verification of correct data initialization is achieved automatically without much need for user annotations or code modifications. However, once in a while, users encounter cases where the tool cannot verify the correct initialization of some data in their program, even though it is correct. Until recently, there were no good solutions for this problem. No additional annotation efforts could help, and users had to either accept the check messages and verify proper initialization by other means, or perform unnecessary initialization to please the tool. This has changed in the most recent releases of SPARK (SPARK community 2020 and recent previews of SPARK Pro 21). In this post, I describe a new feature, called Relaxed_Initialization, designed to help in this situation.

First, let's get some insight into the problem. SPARK performs several analyses. Among them, flow analysis is used to follow the flow of information through variables in the program. It is fast and scales well, but it is not sensitive to values in the program. In other words, it follows variable names through the control flow, but does not try to track their values.
The other main analysis is formal proof. It translates the program into logical formulas that are then verified by an automated solver. It is precise, as it models values of variables at every program point, but it is potentially slow and requires user annotations to summarize the effect of subprograms in contracts. Verifications done by flow analysis are in general easier to complete, and so are associated with the bronze level in the assurance ladder, whereas verifications done by proof require more user inputs and are associated with levels silver or higher.

In SPARK, data initialization is in general handled by flow analysis. Indeed, most of the time, it is enough to look at the control flow graph to decide whether something has been initialized or not. However, using flow analysis for verifying data initialization induces some limitations. Most notably:

  • Arrays are handled as a whole, because flow analysis would need to track values to know which indexes have been written by a component assignment. As a result, SPARK is sometimes unable to verify code which initializes an array by part (using a loop for example, as opposed to a single assignment through an aggregate).
  •  As it does not require user annotations for checking data initialization, SPARK enforces a strict data initialization policy at subprogram boundary. In a nutshell, all inputs should be entirely initialized on subprogram entry, and all outputs should be entirely initialized on subprogram return.

In recent releases of SPARK, it is possible to use proof instead of flow analysis to verify the correct initialization of data. This has the effect of increasing the precision of the analysis, at the cost of a slower verification process and an increased annotation effort. Since this is a trade-off, SPARK allows users to choose if they want to use flow analysis or proof in a fine grained manner on a per variable basis. By default, the lighter approach is preferred, and initialization checks are handled by flow analysis. To use proof instead, users should annotate their variables with the Relaxed_Initialization aspect.

To demonstrate how this can be used to lift previous limitations, let us look at an example. As stated above, arrays are treated as a whole by flow analysis. Since initializing an array using a loop is a regular occurrence, flow analysis has some heuristics to recognize the most common cases. However, this falls short as soon as the loop does not cover the whole range of the array, elements are initialized more than one at a time, or the array is read during the initialization. In particular, this last case occurs if we try to describe the behavior of a loop using a loop invariant. As an example, Add computes the element-wise addition of two arrays of natural numbers:

type Nat_Array is array (Positive range 1 .. 100) of Natural;

   function Add (A, B : Nat_Array) return Nat_Array with
     Pre => (for all E of A => E < 10000)
       and then (for all E of B => E < 10000),
     Post => (for all K in A'Range => Add'Result (K) = A (K) + B (K))
   is
      Res : Nat_Array;
   begin
      for I in A'Range loop
         Res (I) := A (I) + B (I);
         pragma Loop_Invariant
           (for all K in 1 .. I => Res (K) = A (K) + B (K));
      end loop;
      return Res;
   end Add;

The correct initialization of Res cannot be verified by flow analysis, because it cannot make sure that the invariant only reads initialized values. If we remove the invariant, then the initialization is verified, but of course the postcondition is not... Until now, the only solution to work around this problem was to add a (useless) initial value to Res using an aggregate. This was less than satisfactory... In recent versions of SPARK, I can instead specify that I want the initialization of Res to be verified by proof using the Relaxed_Initialization aspect:

Res : Nat_Array with Relaxed_Initialization;

With this additional annotation, my program is entirely verified. Note that, when Relaxed_Initialization is used, the bronze level of the assurance ladder is no longer enough to ensure the correct initialization of data. We now need to reach the silver level, which may require adding more contracts and doing more code refactoring.

Let's now consider the second major limitation of the classical handling of initialization in SPARK: the data initialization policy. As I have mentioned earlier, it requires that inputs and outputs of subprograms be entirely initialized at subprogram boundaries. As an example, I can consider the following piece of code, which tries to read several natural numbers from a string using a Read_Natural procedure. It has an Error output which is used to signal errors occurring during the read:

type Error_Kind is (Empty_Input, Cannot_Read, No_Errors);

   subtype Size_Range is Natural range 0 .. 100;

   procedure Read_Natural
     (Input    : String;
      Result   : out Natural;
      Num_Read : out Natural)
  with Post => Num_Read <= Input'Length;
  --  Read a number from Input. Return in Num_Read the number of characters read.

   procedure Read
     (Input  : String;
      Buffer : out Nat_Array;
      Size   : out Size_Range;
      Error  : out Error_Kind)
   is
      Num_Read : Natural;
      Start    : Positive range Input'Range;
   begin

      --  If Input is empty, set the error code appropriately and return

      if Input'Length = 0 then
         Size := 0;
         Error := Empty_Input;
         return;
      end if;

      --  Otherwise, call Read_Natural until either Input is entirely read,
      --  or we have reached the end of Buffer.

      Start := Input'First;

      for I in Buffer'Range loop
         Read_Natural (Input (Start .. Input'Last), Buffer (I), Num_Read);

         --  If nothing can be read from Input, set the error mode and return

         if Num_Read = 0 then
            Size := 0;
            Error := Cannot_Read;
            return;
         end if;

         --  We have reached the end of Input

         if Start > Input'Last - Num_Read then
            Size := I;
            Error := No_Errors;
            return;
         end if;

         Start := Start + Num_Read;
      end loop;

      --  We have completely filled Buffer

      Size := 100;
      Error := No_Errors;
   end Read;

This example is not following the data initialization policy of SPARK, as I don't initialize Buffer when returning with an error. In addition, if Input contains less than 100 numbers, Buffer will only be initialized up to Size. If I launch SPARK on this example, flow analysis complains, stating that it cannot ensure that Buffer is initialized at the end of Read. To silence it, I can add a dummy initialization for Buffer at the beginning, for example setting every element to 0. However this is not what I want. Indeed, not only might this initialization be costly, but callers of Read may forget to check the error status and read Buffer, and SPARK won't detect it. Instead, I want SPARK to know which parts of Buffer are meaningful after the call, and to check that those only are accessed by callers.

Here again, I can use the Relaxed_Initialization aspect to exempt Buffer from the data initialization policy of SPARK. To annotate a formal parameter, I need to supply the aspect on the subprogram and mention the formal as a parameter:

procedure Read
     (Input  : String;
      Buffer : out Nat_Array;
      Size   : out Size_Range;
      Error  : out Error_Kind)
   with Relaxed_Initialization => Buffer;

Now my procedure is successfully verified by SPARK. Note that I have initialized Size even when the call completes with errors. Indeed, Ada says that copying an uninitialized scalar, for example when giving it as an actual parameter to a subprogram call, is a bounded error. So the Relaxed_Initialization aspect wouldn't help here, as I would still need to initialize Size on all paths before returning from Read.

Let's write some user code to see if everything works as expected. Use_Read reads up to 100 numbers from a string and prints them to the standard output:

procedure Use_Read (S : String) is
      Buffer : Nat_Array;
      Error  : Error_Kind;
      Size   : Natural;
   begin
      Read (S, Buffer, Size, Error);
      for N of Buffer loop
         Ada.Text_IO.Put_Line (N'Image);
      end loop;
   end Use_Read;

Here SPARK complains that Buffer might not be initialized on the call to Read. Indeed, as the local Buffer variable does not have the Relaxed_Initialization aspect set to True, SPARK attempts to verify that it is entirely initialized by the call. This is not what I want, so I annotate Buffer with Relaxed_Initialization:

Buffer : Nat_Array with Relaxed_Initialization;

Now, if I run SPARK again on my example, I have another failed initialization check, this time on the call to Put_Line inside my loop. This one is expected, as I do not check the error status after my call to Read. So I now fix my code so that it only accesses indices of Buffer which have been initialized by my read:

procedure Use_Read (S : String) is
      Buffer : Nat_Array with Relaxed_Initialization;
      Error  : Error_Kind;
      Size   : Natural;
   begin
      Read (S, Buffer, Size, Error);
      if Error = No_Errors then
         for N of Buffer (1 .. Size) loop
            Ada.Text_IO.Put_Line (N'Image);
         end loop;
      end if;
   end Use_Read;

Unfortunately, it does not help, and the failed initialization check on the call to Put_Line remains the same. This is because I have not given any information about the initialization of Buffer in the contract of Read. With the usual data initialization policy of SPARK, nothing is needed, because SPARK enforces that all outputs are initialized after the call. However, since I have opted out of this policy for Buffer, I now need to use a postcondition to describe its initialization status after the call. This can be done easily using the 'Initialized attribute:

procedure Read
     (Input  : String;
      Buffer : out Nat_Array;
      Size   : out Size_Range;
      Error  : out Error_Kind)
   with Relaxed_Initialization => Buffer,
     Post => (if Error = No_Errors then Buffer (1 .. Size)'Initialized
              else Size = 0);

The postcondition states that if no errors occurred, then Buffer has been initialized up to Size. For my code to be fully proved, I also need to supply a loop invariant at the end of the loop inside Read:

pragma Loop_Invariant (Buffer (1 .. I)'Initialized);

Now both Read and Use_Read are entirely proved, and if I tweak Use_Read to access a part of Buffer with no meaningful values, SPARK will produce a failed initialization check.
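
As a minimal sketch of such a tweak (the name Bad_Use_Read is hypothetical, and this assumes the final contracts above): reading an element of Buffer past Size is flagged by SPARK, since nothing guarantees it was initialized.

```ada
procedure Bad_Use_Read (S : String) is
   Buffer : Nat_Array with Relaxed_Initialization;
   Error  : Error_Kind;
   Size   : Natural;
begin
   Read (S, Buffer, Size, Error);
   if Error = No_Errors then
      --  Buffer (Size + 1 .. 100) may hold no meaningful values, so SPARK
      --  emits a failed initialization check on this read:
      Ada.Text_IO.Put_Line (Buffer (Buffer'Last)'Image);
   end if;
end Bad_Use_Read;
```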

The Relaxed_Initialization aspect provides a way to opt out of the strict data initialization policy of SPARK and work around the inherent imprecision of flow analysis on value sensitive checks. It enables the verification of valid programs which used to be out of the scope of the proof technology offered by SPARK. You can find more information in the user guide. Don't hesitate to try it in your project, and tell us if you think it is useful and how we can improve it!

Make with Ada 2020: Disaster Management with Smart Circuit Breaker Thu, 09 Jul 2020 10:18:03 -0400 Emma Adby

Shahariar's project won a finalist prize in the Make with Ada 2019/20 competition. This project was originally posted here.



A Miniature Circuit Breaker (MCB) interrupts mains power when a short circuit or over-current occurs. Its purpose is to protect the electrical system from fire hazards.

A smart circuit breaker will not only function as a regular MCB but also isolate the incoming AC mains supply during a disaster by sensing earthquakes, fire/smoke, gas leakage or flood water. By disconnecting the incoming power lines to equipment and power outlets inside a house/office/industry during a disaster, it can reduce the chance of electrical hazards, ensuring the safety of people's lives and assets.

This system is programmed in Ada, where safety and security are critical.


Hardware and Theory of Operation

Hardware modules

The following parts are connected together to assemble the hardware according to the schematic below :-

Protoboard, Microbit, RGY LEDs, Laser, Flame Sensor, LiPo charger, Buzzer, Relay, Gas Sensor
Water/Soil Moisture sensor, MCB-Circuit breaker, Servo Motor
  • Microbit: Runs safe firmware written in Ada for the system
  • MMA8653 Accelerometer: Earthquake sensing, onboard I2C sensor
  • 10 RGY LED Module: Fault Status indication, CC connection
  • Buzzer: Fault Alarm beeping and tone generation
  • TL431 External Reference: 2.5V reference for ADC measurement
  • Laser & Photo Transistor: Smoke Sensing with light interruption
  • MQ-5: Natural Gas (CnH2n+2) Leakage Sensor
  • Flood Sensor: Electrode that detects the presence of flood water
  • Infrared Flame Sensor: Detects fire break out nearby
  • TP4056 LiPo Charger Module: Charges up the backup battery
  • Boost Module: Converts 3.0-4.2 V from the LiPo to 5.0 V DC
  • Protoboards: Substrate and interconnection between modules
  • Power Supplies: LiPo Battery (backup) and 5V adapter (primary)
  • MCB / Relay Module***: Connect/Disconnect mains
  • Servo Motor: Trips MCB when smoke/fire/gas/vibration/water sensed
  • B & A button: Acknowledge fault and resume normal operation

*** Note: Relay not used but can be used instead of MCB

Hardware Pin Map

All the GPIO, ADC, I2C pins are utilized as follows:-

pin budget


Here is the schematic for Smart Circuit Breaker hardware prototype :-

schematic for Smart Circuit Breaker

Device Operation

The device operates according to the following flowchart :-

Device Operation Flowchart
  • In Ada code, all the I/Os associated with sensors, modules and indication LEDs are initialized first.
  • Next, Smoke, Flame, Natural Gas, Earthquake, Flood sensing happens sequentially until a fault condition is detected.
  • Immediately after any fault detection, the MCB will be tripped by the servo motor.
  • Then, the LED associated with that fault keeps blinking and the buzzer keeps sounding an alarm.
  • The user needs to press button B to acknowledge the fault after taking care of the situation/disaster that triggered it in the first place.
  • Finally, the user flips the MCB manually to the 'On' position and then presses A to resume sensing.
  • If a short circuit or over-current occurs, the MCB will simply trip like a regular MCB.
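
The flow above can be sketched as a main sensing loop. The subprogram names below (Check_Smoke, Trip_MCB, and so on) are hypothetical, introduced here only to illustrate the structure:

```ada
loop
   --  Sense each hazard in turn; any detection sets Fault and Fault_Flag
   Check_Smoke;
   Check_Flame;
   Check_Gas;
   Check_Earthquake;
   Check_Flood;

   if Fault then
      Trip_MCB;                  -- servo motor trips the breaker
      Alarm_Until_Acknowledged;  -- blink fault LED and beep until button B
      Wait_For_Resume;           -- user flips MCB to 'On', then presses A
      Fault := False;
   end if;
end loop;
```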


On a piece of protoboard the battery and charger modules are connected and secured with double sided tape and hot glue. This is the bottom layer circuit for powering the rest of the components. Header pins are soldered to carry power to the next layer.

battery and charger

On the second layer (i.e. the top one), the rest of the sensors, the modules and the Micro:bit are connected according to the schematic.

The servo motor is tied to the regular circuit breaker with a cable tie and connected to the top layer board to get power from the battery & control signal from the Micro:bit.

Servo motor attached to MCB

Preparation for Ada programming

Install all of these in the same directory/folder.

Where to start: GNAT Programming Studio

After downloading/installing the GNAT IDE and ARM drivers into the same directory, open the example code from:


Open one of the examples (e.g. digital_out.gpr) for the Micro:bit according to the following steps and edit the example project code as needed.

  • Step 1: Run GPS (GNAT Programming Studio)
  • Step 2: Select digital_out.gpr example project
Starter code

Step 3: Copy this project's code attached below and replace the example code in main.adb file

Building project on existing example

Programming in Ada

The following files are the most important when working with GNAT Studio :-

The .gpr file is the GNAT project file for a project

The .adb file is where the Ada code resides (src folder)

The .ads file is where definitions and declarations go

Code snippets below are taken from the attached code of this project to briefly explain essential Ada programming styles :-

Writing Comments in Ada

Comments (non-executable lines) in Ada start with " -- ", like this :-

----------------- edge connector pin mapping ----------------------
-- See here : -----------
--  pin(code)   pin (edge connector pads)    hardware connected
--   0         --  large pad 0       -- servo motor control  pin
--   1         --  large pad 1       -- Flame Sense IR module

Anything after -- on a single line is a comment, whereas regular statements end with a semicolon (;)

Including Packages in Ada

The "with" keyword is used to add package support to a program. When the 'use' keyword is also given for that package, its contents become directly visible/usable in the code

with MicroBit.IOs; use MicroBit.IOs;     -- includes microbit GPIO package
with MicroBit.Time;                      -- includes microbit time package
with MicroBit.Buttons; use MicroBit.Buttons; -- includes button package
with MMA8653;   use MMA8653;          -- includes hal for accelerometer
with MicroBit.Accelerometer;          -- includes accelerometer package

For example, "with MicroBit.IOs" includes Micro:bit GPIO control support in the main.adb code, while "use MicroBit.IOs" makes it possible to use variable types from the MicroBit.IOs package directly (see below: Variables in Ada for a detailed explanation)

Similarly, MMA8653 and MicroBit.Accelerometer enable support for the Micro:bit's onboard accelerometer chip

Variables/Constants in Ada

Variables are declared in Ada in the following formats:

  • Variable_Name : Type := Initial_Value;
  • Variable_Name : Type;

Connected is a variable name of Boolean type; its initial value is True.

Fault_Flag is a variable name of Integer type; its initial value is 0.

Connected   : Boolean := True;           -- boolean type variable 
Fault_Flag  : Integer := 0;              -- integer type variable
ADCVal      : MicroBit.IOs.Analog_Value; -- variable type for ADC reading
ADCtemp     : MicroBit.IOs.Analog_Value; -- ADC type temp variable
RedLED1_Smoke    : constant MicroBit.IOs.Pin_Id := 13;
RedLED2_Flame    : constant MicroBit.IOs.Pin_Id := 8;

Variable types are 'strict' in Ada.

For example: ADCVal is not of 'Integer' type but of 'MicroBit.IOs.Analog_Value' type, although it will hold integer numbers between 0 and 1023.

Similarly, RedLED1_Smoke has a constant value of 13, but it is not an 'Integer' constant; it is actually a 'MicroBit.IOs.Pin_Id' constant.

To use these package-specific types, like MicroBit.IOs.Analog_Value and MicroBit.IOs.Pin_Id, the coder must include the 'use MicroBit.IOs;' line of code before the variable declarations.

The 'use' keyword allows the programmer to use package-specific variable types.
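
As a small sketch of this strictness (ADCInt is a hypothetical variable introduced here for illustration): since Analog_Value is an integer-like type, an explicit conversion to Integer is allowed, while mixing the two types directly is rejected at compile time.

```ada
ADCInt : Integer;                    -- hypothetical plain Integer variable

ADCVal := MicroBit.IOs.Analog (0);   -- OK: both sides are Analog_Value
ADCInt := Integer (ADCVal);          -- OK: explicit type conversion
-- ADCInt := ADCVal;                 -- compile-time error: type mismatch
```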

Ada Main Procedure and Loop

The main procedure in Ada is the main function (the equivalent of main in C), which starts with the 'procedure Main is' syntax; then comes the variable declaration. After that, the 'begin' keyword begins the main procedure. Below 'begin' is the code which is usually initialization or single-run code. Next starts the infinite 'loop' (the equivalent of while(1) in C). Finally, 'end loop;' encloses the infinite loop and 'end Main;' ends the main procedure.

Here is the Ada code skeleton with comments showing what goes where :-

-- package inclusion goes here

procedure Main is

-- variable declaration goes here

begin

-- initialization or one time executable code goes here

loop

-- body of recurring or looping code goes here

end loop;
end Main;

A ; (semicolon) ends each statement, including the end of a loop or procedure. There is no use for curly braces {}

If/else in Ada

In Ada, if-else starts with the 'if' keyword, followed by a logical condition and the 'then' keyword; next is the code which will execute if the condition is true, otherwise the code below 'else' will execute. The 'if' statement ends with the 'end if;' keywords

if condition_is_true then
   -- do this
   -- do that
end if;

Example :-

if ADCVal >= ADCtemp then
   MicroBit.IOs.Set (RedLED1_Smoke, True); -- Write High to Disable LED
   Fault := True; Fault_Flag := 1;
   Connected := False;
end if;

For Loop in Ada

for tempval in 0 .. 9 loop
   -- body executes ten times, with tempval going from 0 to 9
end loop;
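
For a fuller sketch, the same loop form can drive one of the status LEDs declared earlier, using this project's own APIs. Following the comment above that writing High disables the LED, writing False turns it on:

```ada
--  Blink the smoke LED three times
for Count in 1 .. 3 loop
   MicroBit.IOs.Set (RedLED1_Smoke, False); -- LED on (writing Low enables it)
   MicroBit.Time.Delay_Ms (100);
   MicroBit.IOs.Set (RedLED1_Smoke, True);  -- LED off
   MicroBit.Time.Delay_Ms (100);
end loop;
```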

Case and null in Ada

A case statement in Ada starts with the 'case' keyword, followed by the variable to be checked and the 'is' keyword. It then matches with the 'when' keyword, followed by a possible value of the variable and the '=>' operator. Next comes the code which executes when the variable matches that value. The 'when others' choice handles the no-match condition. The case statement ends with the 'end case;' keywords

'null;' is for doing nothing when no match is found, and it needs to be explicitly written.

Nothing is left to guesswork in Ada !!!

case variable_name is
    when 1 =>
       -- do this
    when 2 =>
       -- do that
    when others =>
       null; -- do nothing
end case;

Example :-

case Fault_Flag is
            when 1 =>
            -- smoke fault blinkey
            MicroBit.IOs.Set (RedLED1_Smoke, False);
            MicroBit.Time.Delay_Ms (100);
            MicroBit.IOs.Set (RedLED1_Smoke, True);
            MicroBit.Time.Delay_Ms (100);
            when 2 =>
            -- fire fault blinkey
            MicroBit.IOs.Set (RedLED2_Flame, False);
            MicroBit.Time.Delay_Ms (100);
            MicroBit.IOs.Set (RedLED2_Flame, True);
            MicroBit.Time.Delay_Ms (100);
            when 3 =>
            -- gas fault blinkey
            MicroBit.IOs.Set (RedLED3_NGas, False);
            MicroBit.Time.Delay_Ms (100);
            MicroBit.IOs.Set (RedLED3_NGas, True);
            MicroBit.Time.Delay_Ms (100);
            when 4 =>
            -- earthquake fault blinkey
            MicroBit.IOs.Set (YellowLED1_Quake, False);
            MicroBit.Time.Delay_Ms (100);
            MicroBit.IOs.Set (YellowLED1_Quake, True);
            MicroBit.Time.Delay_Ms (100);
            when 5 =>
            -- flood water fault blinkey
            MicroBit.IOs.Set (YellowLED2_Flood, False);
            MicroBit.Time.Delay_Ms (100);
            MicroBit.IOs.Set (YellowLED2_Flood, True);
            MicroBit.Time.Delay_Ms (100);
            when others =>
               null; -- do nothing
         end case;

Microbit specific APIs

  • MicroBit.Time.Delay_Ms (integer) -- delays operation for a given number of milliseconds
  • MicroBit.IOs.Set (Pin_Number, boolean) -- drives a GPIO pin as output
  • MicroBit.IOs.Analog (Pin_Number) -- returns an ADC value from an analog pin
  • MicroBit.Buttons.State (Button_Name) = Pressed -- reads the A/B buttons

To use these Microbit specific APIs, following packages must be included first:

with MicroBit.IOs;     use MicroBit.IOs;     
with MicroBit.Time;                          
with MicroBit.Buttons; use MicroBit.Buttons;
  • use MicroBit.IOs enables the use of MicroBit.IOs.Analog_Value type
  • use MicroBit.Buttons enables the use of Pressed type

Examples :-

with MicroBit.IOs;     use MicroBit.IOs;     -- includes microbit GPIO   lib
with MicroBit.Time;                          -- includes microbit timer  lib
with MicroBit.Buttons; use MicroBit.Buttons; -- includes ubit button A/B lib

MicroBit.Time.Delay_Ms (500); -- 500 ms delay

MicroBit.IOs.Set (2, True);  -- sets pin 2 Logic-High
MicroBit.IOs.Set (1, False); -- sets pin 1 Logic-Low

ADCVal : MicroBit.IOs.Analog_Value; -- analog_value type variable, not an int
ADCVal := MicroBit.IOs.Analog (0);  -- returns a value between 0 and 1023

MicroBit.Buttons.State (Button_A) = Pressed -- returns True if A is pressed
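
Putting the button API to work, here is a small sketch matching the acknowledge step described earlier: busy-wait until button B is pressed.

```ada
--  Wait until the user presses button B to acknowledge the fault
while MicroBit.Buttons.State (Button_B) /= Pressed loop
   MicroBit.Time.Delay_Ms (50); -- poll every 50 ms
end loop;
```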

Uploading Code

Once the editing of the code is done, connect the Micro:bit to the computer with a USB cable (Windows will make a ding-dong sound).

Then click: Build > Bareboard > Flash to Board > main.adb to flash the code to the Micro:bit. The Messages window below will show the code size and upload percentage.

If an upload problem occurs, check the USB cable or reinstall pyOCD.

Flashing code

Ada Programming: Where does Ada shine?

Ada isn't just another programming language. It shines where safety, security and reliability matter. In systems where a hidden firmware/software bug could be fatal or life-threatening, or where damage to equipment might cause huge economic loss, those are the kinds of systems where Ada can make a huge difference.

For example, embedded systems used in :-

  • Pacemaker & ICU Medical Equipment
  • Self Driving Vehicles
  • Explosive Igniter
  • Missile Guidance & Parachute Launcher
  • Spaceship Life Support System
  • Lift Control
  • Fire Alarm & Safety
  • Automated Security
  • Enterprise Server Power Monitoring
  • Fail Safe Mechanism Monitoring
  • Power Plant Steam Generation
  • Radioactivity Monitoring
  • Chemical Process Control
  • Safety Critical Consumer Electronics (e.g. Induction Cooker)

How does Ada make a system safe and secure?

The Ada compiler is very strict; it will keep bashing the coder/programmer with errors, warnings and suggestions until clear, well-thought-out code is produced.

Well, compilers for other programming languages do that, too! But the difference is that things which are not even errors in other programming languages are errors in Ada. Someone coming from C or Arduino land will feel the punch. For example, when trying to add a float to an integer: in Ada, Apples do not add up with Bananas.

“think first, code later” - is the principle which Ada promotes !

The programmer must think clearly about the impact of each type/variable and code in a proper manner. There are other differences as well, such as writing style and operators.

Practical Design Considerations

This prototype is designed in a way so that all the functions can be demonstrated easily. But for practical use, the following actions are recommended:

  • Both the smoke and flame sensors are sensitive to strong light, therefore proper shielding from direct light is recommended
  • The flood sensor should be placed near the floor, where it can easily detect indoor flood water
  • The earthquake sensor is susceptible to vibrations, which is why the Smart MCB should be mounted on a rigid structure
  • The gas sensor requires a 24-hour break-in period for proper operation
  • A proper PCB and enclosure are necessary for hardware reliability



As I have already said, this is just prototype hardware which I made with my limited resources. But the Smart MCB is exactly the kind of application (safety-critical) for which the spirit of Ada programming is intended.

The MCB was invented and patented by Hugo Stotz almost 100 years ago. I hope someone out there can turn this project into a real product and upgrade the century-old MCB technology into a Smart MCB for improved safety of next-generation electrical distribution systems.

  • Access and download the project schematics here.
  • Access the project code here.
  • GNAT Community was used in this project, download it here.
Make with Ada 2020: CryptAda - (Nuclear) Crypto on Embedded Device Thu, 25 Jun 2020 07:17:34 -0400 Emma Adby

Team CryptAda's project won a finalist prize in the Make with Ada 2019/20 competition. This project was originally posted here.


The project sources

As junior DevOps/SREs, we're quite interested in cryptography and its usage. Therefore, when we were asked to participate in this contest for a uni project, we decided to go for a cryptography-related subject.

Given the quite small knowledge we had of the subject, we thought that having software on the embedded device that could generate RSA keys would be a good start.

The first steps - Bignums

In order to generate RSA private keys, we need a way to obtain big, very big prime numbers. The numbers we need are way too big for the classical number representation in Ada. We need a way to manipulate bignums, i.e. numbers that can have hundreds of digits. Typical RSA keys in 2020 contain 2048 or 4096 bits, so if we want to create such keys, we need bignums that can handle at least 4096 bits.

For the project, we first looked at an existing dependency-free bignum library, but it was too limited and too slow. Therefore, we created a bignum library fitting our needs (allocation on the stack, fast operations, base 256 for better performance, handling of negative numbers, ...). We spent quite some time optimizing it, since number computations account for most of the CPU time used to generate a key.

Pseudo-prime numbers

The naive algorithm to find a prime number is quite easy to understand, and leaves very little room for optimization. To assert that n is prime, we check that n mod x /= 0, with x going from 2 to sqrt(n).
This operation is slow, and not suitable for 2048-bit numbers. To have an efficient way to assert that a number is prime, we switch to checking that it is pseudo-prime, with a sufficient probability.
Calling a number pseudo-prime means that it has a probability p of being prime, and functions generating pseudo-prime numbers must allow us to choose this value p.
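
The naive check can be sketched as follows, using an ordinary Positive rather than the project's bignums (which is exactly why it does not scale to 2048-bit numbers):

```ada
--  Trial division: N is prime iff it has no divisor in 2 .. sqrt (N)
function Is_Prime (N : Positive) return Boolean is
begin
   if N < 2 then
      return False;
   end if;
   for X in 2 .. N loop
      exit when X * X > N;  -- only test divisors up to sqrt (N)
      if N mod X = 0 then
         return False;      -- found a divisor: N is composite
      end if;
   end loop;
   return True;
end Is_Prime;
```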

We tweaked things a bit, and found the best way to generate prime numbers with a probability > 99.999 % in a reasonable time. The algorithm is the following:
- Check for primality against all the prime numbers < 100 using a ~sieve of Eratosthenes
- Check for pseudo-primality using Fermat's primality test with {2, 3, 5, 7}
- Run a Miller-Rabin test with 4 iterations using randomly generated witnesses
- Run a Miller-Rabin test, with a number of iterations depending on the number being checked, using randomly generated witnesses
When a number passes all of these tests, it is safe (up to a defined probability) to consider it prime.

Prime number generator printing its state as it computes

Random number generator

Random numbers are required to run such algorithms. But the randomness must respect some conditions. We're not only looking for a Pseudo Random Number Generator (PRNG), but for a PRNG that either mixes in entropy or is Cryptographically Secure (CSPRNG). As we're on an embedded device, we have GPIO pins and sensors, so we chose the first option, since entropy could be generated easily.

We implemented a PRNG fed with entropy, inspired by the Linux PRNG (/dev/random). We have an entropy pool of 2048 bits that is constantly fed by the noise generated by the 3-axis accelerometer using a mixing function. We also have an extraction function that consumes some of the entropy available in the pool (accounted for by an estimation function that maintains a gauge every time an operation is performed on the entropy pool) to give a random number.

In our application, entropy is collected periodically in the background, since we created an Ada task to perform this operation.

The extraction function

The implementation uses chacha20 as a hash function for the extraction process, which can be summarized as:
- Hash the whole entropy pool
- Fold the hash into a 16-byte hash
- Mix the hash back into the pool to stir it and re-credit some entropy
- Re-extract 16 bytes of entropy from the pool
- Xor and fold the initial hash and the newly extracted entropy to create a 64-bit output value
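
The fold steps above can be sketched like this; it is a simplified illustration of what "folding" means here (halving a byte array by XORing its two halves), not the project's actual code:

```ada
type Byte is mod 2**8;
type Byte_Array is array (Positive range <>) of Byte;

--  XOR the two halves of H together, halving its length:
--  a 16-byte hash folds to 8 bytes (64 bits)
function Fold (H : Byte_Array) return Byte_Array is
   Half   : constant Natural := H'Length / 2;
   Result : Byte_Array (1 .. Half);
begin
   for I in 1 .. Half loop
      Result (I) := H (H'First + I - 1) xor H (H'First + Half + I - 1);
   end loop;
   return Result;
end Fold;
```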

The entropy pool can also be fed by any source of entropy, and we also looked into nuclear decay. Unfortunately, our Geiger-Müller counter was shipped with a long delay, and we lacked the time to interface it.

Our Geiger counter, used with a Raspberry Pi


With prime numbers and a random number generator, it is finally possible to create RSA keys. The project offers a graphical interface on the board's touch screen to choose the RSA key size between 256, 512 and 768 bits, the biggest keys that can be generated in a reasonable time on such a low-spec board. The touch screen also lets the user generate a key and print it on the USART connection. The project aims to be a good start for either an embedded crypto library, or for a device like a smartcard (YubiKey).

The interface with buttons

An RSA key is dumped on the USART using the ASN.1 format, but as a conf file. Here's an example of a very small RSA key generated by the board:



With openssl, the configuration above can easily be converted to the DER/PEM format, making the key suitable for most applications.

  • Access and download the project schematics here.
  • Access the project code here.
  • GNAT Community was used in this project, download it here.
Make with Ada 2020: The SmartBase - IoT Adjustable Bed Thu, 11 Jun 2020 07:34:00 -0400 Emma Adby

John Singleton's The SmartBase - IoT Adjustable Bed won both the first prize and a finalist prize in the Make with Ada 2019/20 competition. This project was originally posted here.


My wife suffers from a rare condition that causes her to be nauseous for many hours quite often. Through trial and error, she has learned that one thing that seems to alleviate the symptoms is sitting upright and perfectly still. I noticed that often, early in the morning, she would slip out of bed and move to the couch where she can sit upright while I slept in bed.

So, about a year ago, after too many nights of being uncomfortable with our bed, my wife and I decided to finally go bed shopping for a good mattress. In doing so we realized how common it is now that people purchase adjustable bases with their mattresses. It dawned on us that having an adjustable base would be excellent for her condition as she could sit up in bed whenever she needed and she wouldn’t have to go to the couch to do it. After trying one on a showroom floor we were excited to find that we loved it and we went out and bought one with our new mattress.

At first, it was amazing; being able to prop yourself up in bed. But then there were problems. When my wife wanted to raise the bed, if I wasn’t there she’d have to fumble around for the remote (which always seemed to be hard to find). Often, if she wasn’t feeling well, she’d give up and acquiesce to laying supine. Another problem we had was that we tended to fall asleep with the bed in the upright position. This gets quite uncomfortable for the entire night and the effort needed to find the remote and adjust the bed would wake us from our sleep, making it hard to get back to sleep.

One night, when faced with this problem it dawned on me: we already use our Alexa to control our lights and other devices, why not our bed? A quick Google search revealed that nothing like this had ever been done. To make it worse, adjustable beds are primitive affairs --- they don’t have APIs and they aren’t programmable. They rely on power control circuits to drive them.

This seemed like the perfect type of project to take on. At the time, I had recently completed my Ph.D. in Computer Science where I focused on formal methods and new techniques for specification inference. From this experience I was already familiar with Ada and SPARK, which uses a specification language similar to my advisor’s own JML. Although I knew about Ada through this experience (The Building High Integrity Applications with SPARK book was one of the first I stole from my advisor’s office!) I hadn’t made anything “real” in Ada, so I thought I’d do this also as a way of understanding what the state of Ada/SPARK would be for a real product.

Project Goals and Overview

The goals of this project were to create an IoT device that was:

  • Able to control the movement of a wired adjustable bed.
  • Able to be easily reconfigured to work with other wired adjustable beds.
  • Able to be controlled by both an Amazon Alexa device as well as the original remote.
  • Able to sense occupancy in the room and able to produce under bed safety lighting when walking around at night.
  • Written in Ada/SPARK 2012.
  • Nicely designed in terms of fit and finish (since I planned on actually using it!). That means that the entire system should use its own custom PCBs and be hosted inside of a custom-designed case that would look great under my bed.

To explain the product, I’ve prepared two videos that demonstrate the main features of the SmartBase as well as showcase its physical construction.

For the interested reader, you can read the remainder of this document to learn all about the different phases of development for this project. Roughly, the phases of development this project followed were:

1. Reverse engineering, in which I figure out how to control the bed.

2. Prototyping, in which I made some rough perf-board prototypes.

3. PCB design, in which, because no one wants a rat’s nest of wires under their bed, I designed a series of PCBs to support the function of the SmartBase.

4. Enclosure Design, in which I designed a case for my project.

5. Software Design, which essentially happened at all phases, but I’ve given it its own section here.

6. Verifying Things, in which I do a little bit of verification.

7. Moving to STM32, in which I describe porting the entire project over to an STM32F429ZIT6 and an ESP32 for the WiFi functionality.

8. Bitbanging a WS2812B on an STM32F4 in Ada. Neopixels are only easy to use if you are using an Arduino. Apparently no one had done this one before, so, armed with an oscilloscope and an ARM manual, I figured out how to make that happen.

9. Wrapup, in which I reflect on what I thought about using Ada to build the SmartBase.

For the complete code of the SmartBase running on the RPIZeroW, please visit:
For the complete code of the STM32 version of the SmartBase, please see this repository: -- This repo is essentially a fork of the one above that targets the STM32. I've published them separately so it's easy to see which repository represents which product.

However, before we jump into all that, here are some pictures of the finished project in action. I hope you enjoy reading about it as much as I enjoyed making it!

Phase 1: Reverse Engineering

The first problem I had to solve was: how do I control the adjustable base? As I’ve mentioned, I’m a software guy, but I’m dangerous with a multimeter. To figure this out, I simply started by sliding under my bed and looking at how it currently functioned.

The configuration of my bed was that of a wired remote, connected with a DIN5-style connector (the big round connector commonly found in MIDI applications). Again, being a software guy, I assumed that perhaps the people who built the bed had implemented some sort of protocol over serial. I connected it to my computer and fired up WireShark, but alas, nothing sensible came out of the remote.

So of course, my next step was to disassemble the remote. The wired remote, horrifically disassembled, is pictured below.

The remote has seen better days. Don't mind the packet of taco seasoning off to the left.

After some experimentation with a multimeter I was able to determine that the circuit was quite simple. Here’s a picture of a whiteboard from around the time I was figuring it all out:

A whiteboard of me reverse engineering the circuit.

The logic of the circuit is actually quite simple. One of the 5 pins functions as the power, which happens to be +30V. The other pins are connected to the power via switches and fed back into the bed. I tested the idea by taking hookup wire and touching it across the various pins in the configurations I had determined through my schematic analysis, and sure enough, it worked. With this information, I was ready to construct a rough physical prototype.

Phase 2: The Super Rough Prototype

So after learning the basics of how the control circuit worked, I wanted to test whether it could be driven by relays. After a few trips to my local electronics hacking store, SkyCraft (a real one-of-a-kind place!), and some Amazon purchases, I managed to put together a rough perfboard prototype. Sadly, I didn't take a picture of it when it was all together; however, I saved the board, which is pictured below connected to an off-the-shelf relay module:

Perfboard prototype which didn't do much other than validate that the control circuit worked.

Phase 3: Moving from Perfboard to PCB

One of my goals for this project was to have a tidy little box I could stick under my bed that my wife would tolerate. The early perfboard prototype sat on top of some roughly cut tile with a lot of duct tape. I don't have a picture, but it was quite a sight: horrifying, but a good one.

If I was going to have a permanent fixture, I had to fix the rat's-nest problem, starting with the circuit. So I began by designing my own custom PCB to control the bed. The number of wrong turns and PCB iterations I went through could frankly fill many pages, so I'll just present my finished schematics below. Before that, however, let me say that I used JLCPCB to do all my fabrication, including SMT part placement. I can't say enough good things about this company: they are fast, cost effective, and the boards are flawless. For my PCB layout I used Autodesk Eagle, which was great as well.

To gain a better understanding of the hardware required to run the SmartBase, I've prepared the following Block Definition Diagram which details the main systems the SmartBase relies on to provide its functionality.

Block Definition Diagram detailing main components of the Smartbase.

At the top level we have the Relay and Power Control Module and the LED Mainboard Module. These two components roughly outline the two boards that were created for this project. In the following paragraphs we discuss each board separately.

First, let's look at the board on the bottom, which was responsible for power and hosting the relays on the board that control the bed.

This schematic shows the daughter board, which is responsible for power and control.
Layout for the bottom board.

Power comes in through a barrel connector on this bottom board and is routed from there to all of the components within the SmartBase. This board also hosts a few other components.

In order to control the relays safely with GPIO signals, I used a series of FQP30N06L "logic level" MOSFETs connected as shown in the above schematic.

One cool thing I did was make the configuration of the relays jumpered, so if I ever encounter a different bed with a different control configuration, the relays can be easily rerouted. The basic idea is that the jumper positions control which pin gets the power signal when the relay is activated.

Next is the top board, which hosts the WS2812B LED array as well as the CPU, a Raspberry Pi Zero W. The schematic and the layout file for the board are pictured below:

The LED array and host connectors for the two boards.
I have a hard time believing I got this right on the first shot.

This board hosts two 40-pin connectors, which accept both the GPIO cable from the bottom board and the header of the Raspberry Pi itself. If you can believe it, I got this board right on the first try just by following the manufacturer spec sheets.

Phase 4: Enclosure Design

To build the enclosure I opted for a very simple design, which features several components:

  • Two DIN-5 ports on the back. One connects to the bed and the other allows you to reconnect the remote control to the bed if you wish to ever manually control the bed. In reality I never used this feature because the voice control was so good.
  • One power connector for 5VDC power.
  • The case actually splits into 3 different bodies. The bottom body, the top body, and the "lens" which I printed out of clear PETG. The rest of the case was printed in black PLA.
Eagle CAD model of the bottom of the case.
Back of the SmartBase.
View of the top of the case from the bottom view. The LED array is pictured.

One challenging aspect of this case design was coming up with appropriate screw sizes. I ended up using a combination of M2 and M3 screws. I found a bunch of assorted hex head sizes on Amazon and was able to design the case dimensions around those.

Phase 5: Software Design

In this section we will discuss the design of the Ada software that runs the SmartBase. The software described here drives the version of the SmartBase shown in the demo. Later, in the section on the STM32, we discuss how these components are handled on an STM32F4-family processor in Ada. However, before we start, a few notes:

  • SmartBase makes use of tasking. It is in fact mainly composed of 5 core tasks that handle relay control, command interfacing, MQTT command processing, LED status control, and motion detection.
  • Because I wanted to be able to verify things (and then run it on bare metal later), I enabled the Ravenscar profile.
  • Portions of the application are written in SPARK. Notably, the components that control the bed are written in SPARK with some lightweight specifications around the control sequences.

Concept of Operation

The SmartBase gets input from PIR sensors, which trigger fade-on/fade-off events. These events are processed along with MQTT events, which arrive via an AWS Lambda function connected to the Alexa voice service. The MQTT events then turn into motion of the bed via the relay control subsystem. The following diagram provides a high-level summary of how the SmartBase performs its main operations.

SmartBase HLA detailing the main high level software components developed.


The typical way one builds microcontroller applications is via a state machine pattern encoded into the main loop of the program running on the microcontroller.

For simple applications this is generally fine but for more complex applications it is common to use the multi-tasking capabilities found in an RTOS such as FreeRTOS. That said, one of the excellent aspects of programming a system like the SmartBase in Ada is Ada's excellent (and I mean excellent) tasking facilities that are built right into the language. Because of this, I opted to use Ada's tasking features to structure my application. If you'd like to learn more about this capability, I suggest you take a look at Fabien's article over on the AdaCore blog, here: There's a mini-RTOS in my language.
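To make this concrete, here is a minimal sketch of the pattern, with hypothetical names of my own choosing rather than the actual SmartBase sources: a periodic task whose body is an infinite loop, as Ravenscar requires.

```ada
--  Minimal sketch of a periodic Ada task; Heartbeat and Period are
--  hypothetical names, not taken from the SmartBase sources.
with Ada.Real_Time; use Ada.Real_Time;

package Heartbeat is
   task Heartbeat_Task;
end Heartbeat;

package body Heartbeat is
   task body Heartbeat_Task is
      Period : constant Time_Span := Milliseconds (100);
      Next   : Time := Clock;
   begin
      loop
         --  Poll a sensor, update an LED, etc.
         Next := Next + Period;
         delay until Next;  --  drift-free periodic release point
      end loop;
   end Heartbeat_Task;
end Heartbeat;
```

Each of the SmartBase tasks below follows this same shape: declare, loop forever, block on a `delay until` or an entry.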

The SmartBase uses 5 tasks for performing its core operations:

1. The Bed Task, which is responsible for controlling access to the relay control system.

2. The CLI Task, which provides a debugging command-line based interface to the SmartBase.

3. The MQTT Task, which listens for protocol events from Amazon IoT (from spoken voice commands) and talks to the bed task to execute protocol events.

4. The LED Task, which is responsible for providing a structured interface to controlling the LED ring. The LED ring defines states for connecting, connected, fading on, and fading off.

5. The Motion Detector Task, which really consists of several tasks and is easily the most complex of the 5. I describe the Motion Detector tasks in more detail later in this section.

The following diagram details the relationship of these five tasks in more detail. Note that in the diagram I use the method name loop to indicate the main loop of each task. One stipulation of Ravenscar is that tasks do not terminate, and this notation calls that restriction out.

Overview of the main tasking components used in SmartBase.

The design of the system is that all interaction with the LED and Bed components happens strictly through the Commands interface, with the exception of the Motion Detector tasks. This interface in turn interacts only with protected objects. For example, if a command arrives via the command line and the MQTT task at the same time (assuming we have more than one CPU), both will attempt to process their command through the Commands interface, which in turn ensures that access to the resources is serialized.
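A sketch of the idea, with names of my own choosing rather than the actual Commands package: a protected object gives mutual exclusion for free, so concurrent CLI and MQTT callers are serialized without any explicit locks.

```ada
--  Sketch of serializing commands through a protected object; all
--  names here are hypothetical, not the actual Commands interface.
package Command_Guard is
   type Command is (Head_Up, Head_Down, Foot_Up, Foot_Down, Stop_All);

   protected Serializer is
      procedure Execute (Cmd : Command);
   private
      Last_Command : Command := Stop_All;
   end Serializer;
end Command_Guard;

package body Command_Guard is
   protected body Serializer is
      procedure Execute (Cmd : Command) is
      begin
         --  Only one task at a time can be inside a protected
         --  procedure, so simultaneous CLI and MQTT requests queue
         --  here rather than race on the relay hardware.
         Last_Command := Cmd;
         --  ... drive the relay pins for Cmd ...
      end Execute;
   end Serializer;
end Command_Guard;
```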

One interesting task is the LED_Status_Task, which is responsible for processing changes to the LED status ring. There are two problems solved in this component: 1) how to provide serialized access to the underlying LED hardware, and 2) how to ensure that transitions to different LED states are valid. The first problem is solved through protected objects. The second problem is covered in more detail in the next section on verification.

Lastly, the most complicated use of tasking is easily in the way the SmartBase does motion detection. As can be seen in the above figure, the MotionDetector package is composed of two tasks and two protected objects. They function in the following fashion:

  • Interrupts arrive on the protected object Detector, which very quickly sets an interrupt flag.
  • The Motion_Detector_Trigger_Task monitors this flag by waiting on the entry to Triggered_Entry. Once the barrier releases, the Motion_Detector_Trigger_Task engages the Timer_Task via the MotionDetector_Control object's Start method.
  • The Timer_Task is then responsible for changing the status of the LED ring to OFF once the detection is finished.
  • Re-triggering is handled at the level of the Motion_Detector_Trigger_Task. If the LED hasn't already faded off, more time is simply added to the timeout. That way, if people keep moving the lights remain on.
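The interrupt-flag/barrier pattern in the first two bullets can be sketched as follows. This is a simplification of my own: Detector and Triggered_Entry are named in the text above, but the Signal procedure and the flag are hypothetical, and the real object attaches its handler to the PIR GPIO interrupt.

```ada
--  Sketch of the interrupt flag plus entry barrier described above.
--  Simplified: in the real code the handler is attached to the PIR
--  interrupt; Signal and Triggered are hypothetical names.
protected Detector is
   procedure Signal;        --  called at interrupt time: set the flag
   entry Triggered_Entry;   --  the trigger task blocks here
private
   Triggered : Boolean := False;
end Detector;

protected body Detector is
   procedure Signal is
   begin
      Triggered := True;    --  do as little as possible in the handler
   end Signal;

   entry Triggered_Entry when Triggered is
   begin
      Triggered := False;   --  consume the event and re-arm the barrier
   end Triggered_Entry;
end Detector;
```

The barrier `when Triggered` is what lets the trigger task sleep with no polling at all until motion actually occurs.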

Other Software Details

Some other items worth mentioning pertaining to the Pi Zero version of this implementation are the following:

  • For MQTT I wrote C bindings to the Eclipse Paho library. On the STM32 platform, I use the MQTT AT interface directly built into the ESP32 and program it over UART.
  • For LED control on the Pi I used a neopixel library and wrote C bindings to it. On STM32 there is no such library for Ada (or anything else, really), so I wrote my own hand-coded, bit-banged WS2812B control library. What is nice about my implementation is that it is optimized for my particular use case and uses constant RAM (whereas other implementations use RAM on the order of the number of pixels in the array). You can read about it in the section on STM32.
  • I did all my work in GPS, but I would really like to get my hands on the Eclipse version of the GNAT tool set and would happily accept any complimentary licenses!

Phase 6: Verifying Things

One of my goals was to check out the specification-related features of Ada with this project. To that end, I came up with two small verification tasks for my project.

1. First, rather than use a timer, part of the way the bed is controlled is through tasks and protected bodies. These tasks use a barrier to control when a task should start waiting to see whether it should stop moving the bed. One wrinkle in this protocol is that other commands may be received while the bed is moving (nullifying the current action), so the tasks controlling the timeout have to know that they have been cancelled, without introducing race conditions around the starting and stopping of the bed. I will show a little of what I did in this protocol in relation to specification.

2. Second, a critical element of this application is the LED status ring. In this application the LED status ring is used for indicating when the SmartBase is connecting to the internet, disconnected, and when motion is detected. Designing a system that can process all of these states at any time is trickier than it sounds and I discuss the model I used for managing the LED status ring.

Specifications on the Bed Controller

The first thing I wanted to write specifications for was the behavior surrounding the stopping and starting of the bed, as described earlier. To do this I wrote the following specifications, which I will explain after the listing.

procedure Do_Stop_At (Pin : in out Pin_Type;
                      Expiration_Slot : Time;
                      Actual_Expiration_Slot : Time)
  with Contract_Cases => (
      -- The slots match. This means
      -- we will be performing the action
      -- on the pin we expected.
      (Actual_Expiration_Slot = Expiration_Slot) => Pin'Old = Pin,
      -- The slots DON'T match, which means we missed our window.
      (Actual_Expiration_Slot /= Expiration_Slot) => Pin = Pin_None);

procedure Stop_At (Pin : in out Pin_Type; Expiration_Slot : Time);

procedure Stop
  with Global => (Input  => Device.GPIO,
                  Output => Bed_State_Ghost.Moving),
       Pre  => True,
       Post => Bed_State_Ghost.Moving = False;

In the first specification, there are two cases. The first case is when the time slot that the timer task used to cancel the task is the one currently executing. In this case, we expect that the pre-state value of the pin matches the post-state value of the pin; that is, we actually perform the stop on the pin we expected to. In the second case, we require that if the slots do not match, no pin is used to do anything. This is represented by the expression Pin = Pin_None.

The second set of specs are on the Stop and Start methods. These specs simply require that, in the case of Stop, we actually stop the bed, and in the case of Start, we assign a pin when the move was successful.

In the above listing, you might note that I am using an explicit ghost package to hold ghost state. Why am I doing this when Ada/SPARK 2012 has this feature built in? This is to get around an incompatibility I was having getting this to run on the Raspberry Pi, which only had an older version of GNAT available (one that didn't support ghost fields). I was able to replicate the behavior by encoding the ghost state into an actual package. Since ghost state is really just syntactic sugar for this, it works quite nicely.
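For illustration, such a package can be as small as this. This is a sketch built around the Bed_State_Ghost name used in the listing above; the rest is my own reconstruction, not the actual sources.

```ada
--  Sketch of ghost state encoded as an ordinary package, for
--  compilers without Ghost aspect support. Only the names
--  Bed_State_Ghost and Moving appear in the specs above.
package Bed_State_Ghost is
   Moving : Boolean := False;
   --  With a newer GNAT this would simply be:
   --     Moving : Boolean := False with Ghost;
end Bed_State_Ghost;
```

The bodies of Start and Stop then update Moving like any other variable; the only cost of the workaround is that the "ghost" state actually exists at run time.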

Design of LED Status Control Ring State Machine

Of course, verifying things doesn't always mean you write specifications. There are lots of ways to add more assurance around your design. The next thing I decided to analyze was the LED status ring, the states for which can be seen in the following diagram.

Diagram showing all the possible LED states.

In the above diagram we have all of the states that the LED ring can be in. One thing that was important to me was ensuring that the following properties would hold:

  • When the LED faded ON, the system would not attempt to fade it back on. It sounds like a simple thing, but I didn't want to create a disco effect.
  • I wanted to ensure that no matter what state the system was in, if the connection was ever lost the system would notify the user as soon as it could. This point is subtle. I didn't want the system to interrupt the visual effect of a fade; however, I did want the connection sequence to begin as soon as possible. Ensuring this sort of separation was critical to my design.
  • Once the system was trying to connect, the visual feedback that the system was connecting should not be interrupted, even if motion detection events are happening. In my design, one doesn't have to disable motion detection -- the state machine of the LED ring simply subsumes this logic and makes such interruptions impossible.
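One way to make such a model executable is to encode the states as an enumeration and the edge set as a predicate. The sketch below is my own encoding of the three properties, not the actual SmartBase code; the state names follow the diagram, and the precise edge set is the one pictured there.

```ada
--  Illustrative encoding of the LED state machine; the state names
--  follow the diagram above, everything else here is hypothetical.
type LED_State is (None, Fade_On, Fade_Off, Connecting, Connected);

function Valid_Transition (From, To : LED_State) return Boolean is
  (case To is
      when Fade_On    => From = None,          --  property 1: no "disco" re-fades
      when Connecting => True,                 --  property 2: always reachable
      when Connected  => From = Connecting,    --  property 3: sole exit from Connecting
      when Fade_Off   => From = Fade_On,
      when None       => From in Fade_Off | Connected);
```

A task that owns the ring can then assert Valid_Transition (Current, Next) before every state change, turning the diagram's properties into run-time checks.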

What's nice about having a model like the one pictured above is that we can simply look at it and see if the desired properties hold. So let's do that one by one.

The first property obviously holds. Specifically, the only edge going into Fade On comes from None, which is not reachable from any state after Fade On. So far so good.

The second property holds because every state in the state machine may make a transition to Connecting after completing its state transition.

The third property holds because there are no transitions out of the Connecting state except to Connected.

Phase 7: Moving to the STM32

As I mentioned in the introduction, after completing this project on the Raspberry Pi Zero W platform I immediately began work on a version that could run on bare metal on an STM32 processor.

For my development boards I selected:

I'm not totally done getting the SmartBase running on the STM32 platform, but here's a quick rundown of what is and isn't done so far:

  • The Bed Control is done and works perfectly with the CLI interface over a serial console. You can see that demonstrated in the video, above.
  • The Motion Detection is done and works perfectly with the LED array.
  • The LED Controls are done, thanks to a driver for the WS2812B I wrote which I describe in detail in Section 8.
  • The MQTT work is almost done. To do this I had to build a custom ESP32 firmware to enable the MQTT AT command set on my board. This works perfectly through the serial console and I'm able to get it to pull down MQTT messages. I'm currently writing the driver that will send the AT commands over UART to the ESP32.
  • PCB design hasn't been redone for the ESP32 + STM32 combo. However, I'm really excited about getting these guys onto my PCBs and I've already looked over the reference designs.

Phase 8: Bit Banging a WS2812B on a STM32F4 in Ada

One of the most interesting parts of this project was getting the LED array working on the STM platform in Ada. The timings on the WS2812B are relatively tight, and unlike platforms like Arduino, where there is a wealth of information available, on the STM32 basically nothing exists. What little does exist could never be used in an Ada program because it is hopelessly coupled to the hardware abstraction layers and drivers provided by STMicroelectronics. Therefore, I had to start from first principles and work my way forward.

From the manufacturer specification sheets, the WS2812B implements the following protocol:

To send a 0 or a 1, you hold the signal high and then low for the specified amounts of time.
The specific values of the WS2812B can be found in this table, obtained from the manufacturer.

The trick to producing colors with the WS2812B is to set the rightmost 24 bits of a number to the GRB value you want. For example, to set an LED to green, you would send a waveform corresponding to the following number:

0000 0000   1111 1111   0000 0000   0000 0000
 (white)     (green)      (red)       (blue)

To control multiple pixels you just repeat this for as many pixels as you have, making sure to stay within the Treset window, which in practice just means you do it as fast as possible.
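The packing step above can be sketched as a small Ada function. To_GRB is a hypothetical helper of my own, not from the SmartBase sources; it just places each byte in the frame layout shown above.

```ada
--  Sketch of packing a GRB color into the 24-bit WS2812B frame;
--  To_GRB is a hypothetical name, not from the SmartBase sources.
with Interfaces; use Interfaces;

function To_GRB (Green, Red, Blue : Unsigned_8) return Unsigned_32 is
begin
   return Shift_Left (Unsigned_32 (Green), 16) or
          Shift_Left (Unsigned_32 (Red),    8) or
          Unsigned_32 (Blue);
end To_GRB;
--  To_GRB (16#FF#, 0, 0) yields 16#00FF0000#, the all-green frame above.
```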

To get reliable timings, the code that drives the WS2812B must be written in assembly with interrupts disabled. As a proof of concept I decided to build up a basic loop to set a single LED to green. To do this I cross-referenced the cycle cost of each instruction, which can be found on the ARM website.

As for how many instructions are required, the calculations were performed as follows:

The STM32F429ZIT6 operates at 180MHz, i.e. 180,000,000 cycles per second, which means each cycle takes 1/180MHz ≈ 5.56 ns (0.00556 µs). To achieve 800 ns we need 800/5.56 ≈ 144 cycles of delay added to the pipeline. This works out to 144 × 5.56 ns ≈ 800 ns, which is well within the ±150 ns tolerance. Similarly, to achieve 450 ns we need 450/5.56 ≈ 81 cycles of delay, which gives 81 × 5.56 ns ≈ 450 ns, again well within ±150 ns.
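Written out as Ada named numbers (the constant names are my own, not from the SmartBase sources), the same arithmetic is:

```ada
--  The delay arithmetic above as Ada named numbers; the names are
--  hypothetical. Named numbers are exact universal-real constants.
Cycle_Ns   : constant := 1.0E9 / 180.0E6;   --  ~5.56 ns per cycle at 180 MHz
T1H_Cycles : constant := 800.0 / Cycle_Ns;  --  = 144.0 cycles for the 800 ns high time
T1L_Cycles : constant := 450.0 / Cycle_Ns;  --  = 81.0 cycles for the 450 ns low time
```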

The trick to implementing this is to work in the time to load and set the bits high (or low) and combine that with a loop that keeps that timing in check.

From reading other code (such as the official neopixel code), one way people have done this is by adding explicit chains of NOP instructions to the pipeline. This works well on a 16MHz processor like the Arduino's AVR. It doesn't work as well on a 180MHz processor. Because I didn't want pages and pages of NOPs pasted into my program, I decided instead to work out the code using a loop structure.

The following code works essentially according to the following procedure:

1. Load up a counter that dictates the number of pixels we want to do this for.

2. Load up a number that indicates which GRB value to load.

3. Loop over each bit of the color and, following the convention above, send the appropriate bit by introducing enough cycle delay to get the required timeouts.

Initially, I created this program by working out the math shown above for a single color value. This worked well for a small proof of concept, but it was difficult to scale to the entire GRB value (let alone multiple ones). To help me expand the program I used an ARM simulator that could count cycles, to ensure the loops I had written were correct. I used the VisUAL simulator for this purpose.

Finally, however, I wanted to be absolutely sure that my timings were correct. I actually didn't own an oscilloscope so I finally broke down and ordered one off of Amazon. I ended up getting a Hantek DSO5072P, which is a reasonably good scope for my purposes, if not a fancy one. The image below shows an example cursor session, measuring the delay of one bit.

Doing a little cursor measurement with my Hantek DSO5072P.

The following code shows the general principle of operation -- if you'd like to see how I implemented it in the actual Ada code, please see the end of this post for the full code. One interesting thing to note here is that most neopixel implementations consume RAM that grows with the number of pixels. Because I don't ever want any pixel to be a different color than another, I was able to make a simplification that allows my program to always require constant memory, regardless of the number of pixels being driven.

For now I'm just using this library in my own project, however, if others find it useful I'd consider contributing a library that makes STM32 control of neopixels accessible to all.

    cpsid   i                   ; disable interrupts for reliable timing
    mov     r3, #2              ; the number of pixels to do this for
    ldr     r1, =0xAAAAAA       ; load bits to be loaded
    ldr     r5, =0x40021418     ; set r5 to the GPIOF register + 0x18 offset for BSRR
    ldr     r6, =0x2000         ; pin 13 HIGH mask
    ldr     r9, =0x20000000     ; pin 13 LOW mask
    mov     r8, #1              ; we use this to test bits
send_pixel:
    cmp     r3, #0        ; test if we are done
    beq     done          ; if we are out of pixels, finish up
    mov     r4, #23       ; we are going to send 24 bits, prime it here.
    sub     r3, r3, #1    ; decrement this pixel
send_bit:
    lsl     r2, r8, r4    ; build the mask by shifting over the number of bits we have
    tst     r1, r2        ; check the mask against the bits we are loading.
    bne     send_one      ; send a one
    b       send_zero     ; otherwise, send a zero
send_one:
    str     r6, [r5]            ; set pin 13 HIGH
    ;; delay for ~ 800ns
    mov     r0, #36
delay_T1H:
    subs    r0, r0, #1
    bne     delay_T1H
    ;; end delay
    str     r9, [r5]            ; set pin 13 LOW
    ;; delay for ~ 450ns
    mov     r0, #20
delay_T1L:
    subs    r0, r0, #1
    bne     delay_T1L
    ;; end delay
    b       bit_sent
send_zero:
    str     r6, [r5]            ; set pin 13 HIGH
    ;; delay for ~ 400ns
    mov     r0, #17
delay_T0H:
    subs    r0, r0, #1
    bne     delay_T0H
    ;; end delay
    str     r9, [r5]            ; set pin 13 LOW
    ;; delay for ~ 850ns
    mov     r0, #38
delay_T0L:
    subs    r0, r0, #1
    bne     delay_T0L
    ;; end delay
    b       bit_sent
bit_sent:
    cmp     r4, #0       ; was that the last bit?
    sub     r4, r4, #1   ; otherwise, decrement our counter (sub leaves the flags alone)
    beq     send_pixel   ; if so, go to the next pixel
    b       send_bit     ; and send the next bit
done:
    cpsie   i            ; re-enable interrupts

Again, if you want to see what this all looks like when it makes it back to Ada, check out the Code Section of this project.


Phase 9: Wrapup

In this article I detailed the creation of a novel IoT device, the SmartBase: how it works, and each phase of its development.

So after all of this, what is my opinion of developing this product in Ada?

Frankly, my reaction is that I cannot even imagine doing it in another language. Even though I am quite familiar with so-called "exotic" languages like Haskell, the blunt, unapologetic efficiency of Ada, the pickiness of the compiler, and the robust, built-in support for tasking and specification were a pleasure to work with, and I'm confident these features helped me find many errors that would otherwise have caused problems in my system. Some other items, in no particular order:

  • Scenarios are brilliant. Using scenarios I was able to make multiple versions of the code for SmartBase for different boards and tie it all together quite easily.
  • Compared to Java and other languages, the package system of Ada, in general, makes structuring a complex application much better. I love how I can have package initialization, nest objects, and in general encapsulate functionality. I think that many people learn the OO facilities in a language like Java and walk away thinking "this is what objects are all about." The package system reminded me of the one in OCaml, which I also quite like.
  • I love love love the support for separate compilation. When you combine that with the package system, you have a powerful mechanism for build management.
  • Having a SPARK > Prove All menu in my IDE is a beautiful thing to see.

In the future I plan to use Ada on a few more projects; for example, I want to make some small, battery-powered WiFi LED strips. I can't think of a better language for the job.

  • Access and download the project schematics here.
  • Access the project code here.
  • GNAT Community was used in this project, download it here.

CuBit: A General-Purpose Operating System in SPARK/Ada Wed, 10 Jun 2020 06:10:00 -0400 Jon Andrew

pragma Suppress (Index_Check);
pragma Suppress (Range_Check);
pragma Suppress (Overflow_Check);
...
pragma Restrictions (No_Floating_Point);
package Compiler is
    for Default_Switches ("Ada") use
end Compiler;
-- Read from a model-specific register (MSR)
function rdmsr(msraddr : in MSR) return Unsigned_64 is
    low  : Unsigned_32;
    high : Unsigned_32;
begin
    Asm("rdmsr",
        Inputs  =>  Unsigned_32'Asm_Input("c", msraddr),
        Outputs => (Unsigned_32'Asm_Output("=a", low),
                    Unsigned_32'Asm_Output("=d", high)),
        Volatile => True);
    return (Shift_Left(Unsigned_64(high), 32) or Unsigned_64(low));
end rdmsr;
/* AP starting point */
AP_START        = 0x7000;

/* kernel load and link locations */
KERNEL_PHYS     = 0x00100000;
KERNEL_BASE     = 0xFFFFFFFF80000000;

    . = AP_START;

    .text_ap : AT(AP_START) {
        stext_ap = .;
        etext_ap = .;



    .text : AT(KERNEL_PHYS)
        stext = .;
        build/boot.o (.text .text.*)    /* need this at the front */
        *( EXCLUDE_FILE(build/init.o) .text .text.*)
    . = ALIGN(4K);
    etext = .;

    .rodata :
        srodata = .;
        *(.rodata .rodata.*)
.text_ap : AT(AP_START) {
    stext_ap = .;
    etext_ap = .;
    -- Symbol is a useless type, used to prevent us from forgetting to use
    -- 'Address when referring to one.
    type Symbol is (USELESS) with Size => System.Word_Size;


; we'll link this section down low, since it has to be in first
; 65535 bytes for real mode.
section .text_ap_entry
> readelf -a build/boot_ap.o
Section Headers:
  [Nr] Name              Type             Address           Offset
       Size              EntSize          Flags  Link  Info  Align
  [ 0]                   NULL             0000000000000000  00000000
       0000000000000000  0000000000000000           0     0     0
  [ 1] .shstrtab         STRTAB           0000000000000000  00000320
       000000000000009e  0000000000000000           0     0     0
  [ 2] .strtab           STRTAB           0000000000000000  000003c0
       000000000000009d  0000000000000000           0     0     0
  [ 3] .symtab           SYMTAB           0000000000000000  00000460
       0000000000000198  0000000000000018           2    15     8
  [ 4] .text             PROGBITS         0000000000000000  00000040
       0000000000000000  0000000000000000  AX       0     0     16
  [ 5] .text_ap_entry    PROGBITS         0000000000000000  00000040
       000000000000008e  0000000000000000   A       0     0     16
    subtype kernelTextPages is Virtmem.PFN range
        Virtmem.addrToPFN(Virtmem.K2P(To_Integer(Virtmem.stext'Address))) .. 
        Virtmem.addrToPFN(Virtmem.K2P(To_Integer(Virtmem.etext'Address) - 1));

    subtype kernelROPages is Virtmem.PFN range
        Virtmem.addrToPFN(Virtmem.K2P(To_Integer(Virtmem.srodata'Address))) .. 
        Virtmem.addrToPFN(Virtmem.K2P(To_Integer(Virtmem.erodata'Address) - 1));

    subtype kernelRWPages is Virtmem.PFN range
        Virtmem.addrToPFN(Virtmem.K2P(To_Integer(Virtmem.sdata'Address))) .. 
        Virtmem.addrToPFN(Virtmem.K2P(To_Integer(Virtmem.ebss'Address) - 1));
procedure determineFlagsAndMapFrame(frame : in Virtmem.PFN) is

        if frame in kernelTextPages then

            if not ok then raise RemapException; end if;

procedure unlink(ord : in Order; addr : in Virtmem.PhysAddress) with
        SPARK_Mode => On,
        Pre  => freeLists(ord).numFreeBlocks > 0,
        Post => freeLists(ord).numFreeBlocks =
                freeLists(ord).numFreeBlocks'Old - 1
    is
        block : aliased FreeBlock with
            Import, Address => To_Address(addr);

        prevAddr : constant System.Address := block.prevBlock;
        nextAddr : constant System.Address := block.nextBlock;

        procedure linkNeighbors is
            prevBlock : aliased FreeBlock with
                Import, Address => prevAddr;

            nextBlock : aliased FreeBlock with
                Import, Address => nextAddr;
        begin
            prevBlock.nextBlock := nextAddr;
            nextBlock.prevBlock := prevAddr;
        end linkNeighbors;
    begin
        linkNeighbors;

        -- decrement the free list count when we unlink somebody
        freeLists(ord).numFreeBlocks :=
            freeLists(ord).numFreeBlocks - 1;
    end unlink;
type FreeNode is record
        next : System.Address;
        prev : System.Address;
    end record with Size => 16 * 8;

    for FreeNode use record
        next at 0 range 0..63;
        prev at 8 range 0..63;
    end record;

    type Slab is limited record
        freeList    : FreeNode;

        numFree     : Integer := 0;
        capacity    : Integer := 0;

        blockOrder  : BuddyAllocator.Order;
        blockAddr   : Virtmem.PhysAddress;
        mutex       : aliased Spinlock.Spinlock;
        alignment   : System.Storage_Elements.Storage_Count;
        paddedSize  : System.Storage_Elements.Storage_Count;
        initialized : Boolean := False;
    end record;

    -- GNAT-specific pragma
    pragma Simple_Storage_Pool_Type(Slab);
    objSlab : SlabAllocator.Slab;
    type myObjPtr is access myObject;
    for myObjPtr'Simple_Storage_Pool use objSlab;

    procedure free is new Ada.Unchecked_Deallocation(myObject, myObjPtr);
    obj : myObjPtr;


begin
    SlabAllocator.setup(objSlab, myObject'Size);
    obj := new myObject;
    -- FADT - Fixed ACPI Description Table.
    type FADTRecord is record
        header              : SDTRecordHeader;
        firmwareControl     : Unsigned_32;  -- ignored if exFirmwareControl present
        dsdt                : Unsigned_32;  -- ignored if exDsdt present
        reserved1           : Unsigned_8;
        powerMgmtProfile    : PowerManagementProfile;
        sciInterrupt        : Unsigned_16;
        smiCommand          : Unsigned_32;
        acpiEnable          : Unsigned_8;
        acpiDisable         : Unsigned_8;
        S4BIOSReq           : Unsigned_8;
        pStateControl       : Unsigned_8;
        PM1AEventBlock      : Unsigned_32;
        PM1BEventBlock      : Unsigned_32;
        PM1AControlBlock    : Unsigned_32;
        PM1BControlBlock    : Unsigned_32;
        PM2ControlBlock     : Unsigned_32;
        PMTimerBlock        : Unsigned_32;
        GPE0Block           : Unsigned_32;
        GPE1Block           : Unsigned_32;
        PM1EventLength      : Unsigned_8;
        PM1ControlLength    : Unsigned_8;
        PM2ControlLength    : Unsigned_8;
        PMTimerLength       : Unsigned_8;
        GPE0BlockLength     : Unsigned_8;
        GPE1BlockLength     : Unsigned_8;
        GPE1Base            : Unsigned_8;
        cStateControl       : Unsigned_8;
        pLevel2Latency      : Unsigned_16;
        pLevel3Latency      : Unsigned_16;
        flushSize           : Unsigned_16;
        flushStride         : Unsigned_16;
        dutyOffset          : Unsigned_8;
        dutyWidth           : Unsigned_8;
        dayAlarm            : Unsigned_8;
        monthAlarm          : Unsigned_8;
        century             : Unsigned_8;   -- RTC index into RTC RAM if not 0
        intelBootArch       : Unsigned_16;  -- IA-PC boot architecture flags
        reserved2           : Unsigned_8;   -- always 0
        flags               : Unsigned_32;  -- fixed feature flags
        resetRegister       : GenericAddressStructure;
        resetValue          : Unsigned_8;
        armBootArch         : Unsigned_16;
        fadtMinorVersion    : Unsigned_8;
        exFirmwareControl   : Unsigned_64;
        exDsdt              : Unsigned_64;
        exPM1AEventBlock    : GenericAddressStructure;
        exPM1BEventBlock    : GenericAddressStructure;
        exPM1AControlBlock  : GenericAddressStructure;
        exPM1BControlBlock  : GenericAddressStructure;
        exPM2ControlBlock   : GenericAddressStructure;
        exPMTimerBlock      : GenericAddressStructure;
        exGPE0Block         : GenericAddressStructure;
        exGPE1Block         : GenericAddressStructure;

        -- ACPI 6 fields (not supported yet)
        --sleepControlReg     : GenericAddressStructure;
        --sleepStatusReg      : GenericAddressStructure;
        --hypervisorVendor    : Unsigned_64;
    end record with Size => 244*8;

    for FADTRecord use record
        header              at 0   range 0..287;
        firmwareControl     at 36  range 0..31;
        dsdt                at 40  range 0..31;
        reserved1           at 44  range 0..7;
        powerMgmtProfile    at 45  range 0..7;
        sciInterrupt        at 46  range 0..15;
        smiCommand          at 48  range 0..31;
        acpiEnable          at 52  range 0..7;
        acpiDisable         at 53  range 0..7;
        S4BIOSReq           at 54  range 0..7;
        pStateControl       at 55  range 0..7;
        PM1AEventBlock      at 56  range 0..31;
        PM1BEventBlock      at 60  range 0..31;
        PM1AControlBlock    at 64  range 0..31;
        PM1BControlBlock    at 68  range 0..31;
        PM2ControlBlock     at 72  range 0..31;
        PMTimerBlock        at 76  range 0..31;
        GPE0Block           at 80  range 0..31;
        GPE1Block           at 84  range 0..31;
        PM1EventLength      at 88  range 0..7;
        PM1ControlLength    at 89  range 0..7;
        PM2ControlLength    at 90  range 0..7;
        PMTimerLength       at 91  range 0..7;
        GPE0BlockLength     at 92  range 0..7;
        GPE1BlockLength     at 93  range 0..7;
        GPE1Base            at 94  range 0..7;
        cStateControl       at 95  range 0..7;
        pLevel2Latency      at 96  range 0..15;
        pLevel3Latency      at 98  range 0..15;
        flushSize           at 100 range 0..15;
        flushStride         at 102 range 0..15;
        dutyOffset          at 104 range 0..7;
        dutyWidth           at 105 range 0..7;
        dayAlarm            at 106 range 0..7;
        monthAlarm          at 107 range 0..7;
        century             at 108 range 0..7;
        intelBootArch       at 109 range 0..15;
        reserved2           at 111 range 0..7;
        flags               at 112 range 0..31;
        resetRegister       at 116 range 0..95;
        resetValue          at 128 range 0..7;
        armBootArch         at 129 range 0..15;
        fadtMinorVersion    at 131 range 0..7;
        exFirmwareControl   at 132 range 0..63;
        exDsdt              at 140 range 0..63;
        exPM1AEventBlock    at 148 range 0..95;
        exPM1BEventBlock    at 160 range 0..95;
        exPM1AControlBlock  at 172 range 0..95;
        exPM1BControlBlock  at 184 range 0..95;
        exPM2ControlBlock   at 196 range 0..95;
        exPMTimerBlock      at 208 range 0..95;
        exGPE0Block         at 220 range 0..95;
        exGPE1Block         at 232 range 0..95;

        -- ACPI 6 fields
        --sleepControlReg     at 244 range 0..95;
        --sleepStatusReg      at 256 range 0..95;
        --hypervisorVendor    at 268 range 0..63;
    end record;
    ; Setup our kernel stack.
    mov rsp, qword (STACK_TOP)

    ; Add a stack canary to bottom of primary stack for CPU #0
    mov rbx, 0xBAD_CA11_D37EC7ED
    mov [rax], rbx

    ; Save rdi, rsi so adainit doesn't clobber them
    push rdi
    push rsi

    ; Initialize with adainit for elaboration prior to entering Ada.
    mov rax, qword adainit
    call rax

    ; Restore arguments to kmain
    pop rsi
    pop rdi

    ; call into Ada code
    mov rax, qword kmain
    call rax
GNAT Community 2020 is here! Tue, 26 May 2020 09:59:00 -0400 Nicolas Setton

We are happy to announce that the GNAT Community 2020 release is now available for download. Here are some release highlights:

GNAT compiler toolchain

The 2020 compiler includes tightening and enforcing of Ada rules, performance enhancements, and support for some Ada 202x features - watch this space for further news on this.

The compiler back-end has been upgraded to GCC 9 on all platforms except Mac OS - see below for further information about this exception.

ASIS is no longer supported, and we encourage you to switch to Libadalang for all your code intelligence needs. GNAT Community 2019 remains available for legacy support of ASIS.

RISC-V 64-bits

This year we have added a toolchain for RISC-V 64-bits hosted on Linux - you can try it out on boards like the HiFive Unleashed - and we include the emulator for this platform as well.


GNAT Studio

This release includes GNAT Studio, the evolution of GPS, our multi-language IDE for Ada, SPARK, C, C++ and Python. Notable features are:

  • A completely new engine for Ada/SPARK navigation, implemented via a language server based on Libadalang. This means, in particular, that navigation works without requiring you to compile the codebase first.
  • Improved overall performance in the editors, the omnisearch, and the debugger.

  • Several UI enhancements, especially the contextual menus which have been reorganized.

Please also note that we no longer support GNAT Studio on Mac OS. 


Libadalang

Libadalang, a library for parsing and semantic analysis of Ada code, has made a lot of progress in the past year. In this GNAT Community release, you'll find:

  • A new app framework that allows you to scaffold your Libadalang project - see this blog post for more information.
  • The Python-facing API is now compatible with Python 3.

  • Support of Aggregate Projects has been added.


SPARK

For those looking to take their Ada programs to the next level, GNAT Community includes a complete SPARK toolchain, now including the Lemma library (doc).

Toolchain and development environment enhancements are:

  • New SPARK submenus and key shortcuts in GNAT Studio.

  • Parallel analysis of subprograms.

  • Automatic target configuration for GNAT runtimes.

Proving engine enhancements are:

  • Support for infinite precision arithmetic in Ada.Numerics.Big_Numbers.Big_Integers/Big_Reals (doc).

  • Support for partially initialized data in proof (doc).

  • Detection of memory leaks by proof.

  • Detection of dead code via proof warnings.

  • Improved floating-point support in Alt-Ergo prover.
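As a flavor of the new big-number support mentioned above, here is a minimal sketch (not taken from the release notes; the procedure name and values are hypothetical) using Ada.Numerics.Big_Numbers.Big_Integers, where arithmetic is unbounded and no overflow check can fail:

```ada
with Ada.Numerics.Big_Numbers.Big_Integers;
use  Ada.Numerics.Big_Numbers.Big_Integers;

procedure Show_Big_Integers is
   --  Start from the largest machine Integer and square it:
   --  Big_Integer has no bounds, so the multiplication cannot overflow.
   X : Big_Integer := To_Big_Integer (Integer'Last);
begin
   X := X * X;
   pragma Assert (X > To_Big_Integer (Integer'Last));
end Show_Big_Integers;
```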

SPARK language enhancements are:

  • Support for local borrowers as part of pointer support through ownership. 

  • Many fixes in the new pointer support based on ownership.

  • Detection of wrap-around on modular arithmetic with annotation No_Wrap_Around (doc).

  • Support for forward goto.

  • Support for raise expressions (doc).

  • Detection of unsafe use of Unchecked_Conversion (doc).

  • New annotation Might_Not_Return on procedures (doc).

  • Volatility refinement aspects supported for types (doc).

  • Allow SPARK_Mode Off inside subprograms.

  • Support for volatile variables to prevent compiler optimizations (doc).
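To illustrate the No_Wrap_Around annotation listed above, here is a hedged sketch (the type and procedure are hypothetical): with the annotation, GNATprove reports a possible overflow where modular arithmetic would otherwise silently wrap.

```ada
type Counter is mod 2**8 with
  Annotate => (GNATprove, No_Wrap_Around);

procedure Bump (C : in out Counter) is
begin
   --  Without the annotation this wraps from 255 back to 0; with it,
   --  GNATprove flags the addition unless it can prove C < 255 here.
   C := C + 1;
end Bump;
```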

Support for Visual Studio Code

If you are using Visual Studio Code, we have written a prototype extension for Ada and SPARK as part of our work on the Ada Language Server: you can find it on the Visual Studio Marketplace.

Notes on Mac OS

Mac OS is becoming harder to maintain, especially the latest versions, which require code signing for binaries. For the GNAT Community 2020 release we decided not to codesign and notarize the binaries, so you'll have to circumvent the protections: see the README for the specific instructions. We have also removed support for the ARM cross compiler hosted on this platform, as well as GNAT Studio.

From Ada to Platinum SPARK: A Case Study for Reusable Bounded Stacks Thu, 14 May 2020 10:17:00 -0400 Pat Rogers

1. Introduction

An effective approach to learning a new programming language is to implement data structures common to computer programming. This strategy works because the problem to be solved is well understood and several different forms of a given data structure are possible: bounded versus unbounded, sequential versus thread-safe, and so on. A clear understanding of the problem allows one to focus on the language details, and the multiple forms likely require a wide range of language features.

Fortunately, when learning SPARK, Ada programmers need not start from scratch. We can begin with an existing, production-ready Ada implementation for a common data structure and make the changes necessary to conform to SPARK. This approach is possible because the fundamental design, based on the principles of software engineering, is the same in both languages. We would have a package exporting a private type, with primitive operations manipulating that type; in other words, an abstract data type (ADT). The type might be limited, and might be tagged, using the same criteria in both languages to decide. Those primitive operations that change state would be procedures, with functions designed to be "pure" and side effects avoided. As a result, the changes need not be fundamental or extensive, although they are important and in some cases subtle.

The chosen Ada component is one that I have had for decades and have used in real-world applications. Specifically, this component defines a sequential, bounded stack ADT. The enclosing package is a generic so that the type of data contained in the stack objects need not be hard-coded. By "sequential" I mean that the code is not thread-safe. By "bounded" I mean that it is backed by an array, which as usual entails a discriminant on the private type to set the upper bound of the internal array component. Client misuse of the Push and Pop routines, e.g., pushing onto a full stack, raises exceptions. As Ada has evolved I have applied new features to make the code more robust, for example the Push and Pop routines use preconditions to prevent callers from misusing the abstraction, raising exceptions from within the preconditions instead of the procedure bodies.

This blog entry describes the transformation of that Ada stack ADT into a completely proven SPARK implementation that relies on static verification instead of run-time enforcement of the abstraction’s semantics. We will prove that there are no reads of unassigned variables, no array indexing errors, no range errors, no numeric overflow errors, no attempts to push onto a full stack, no attempts to pop from an empty stack, that subprogram bodies implement their functional requirements, and so on. As a result, we get a maximally robust implementation of a reusable stack abstraction providing all the facilities required for production use.

The transformation will occur in phases, following the adoption levels described in section 2. Each adoption level introduces more rigor and thus defines a simple, incremental transition approach.

Note that I assume familiarity with Ada, including preconditions and postconditions. Language details can be obtained from AdaCore's online learning facilities, an interactive site allowing one to enter, compile, and execute Ada programs in a web browser. We also assume a degree of familiarity with SPARK. That same web site provides a similar interactive environment and materials for learning SPARK, including formal proof.

2. SPARK Adoption Levels

In 2016, AdaCore collaborated with Thales in a series of experiments on the application of SPARK to existing software projects written in Ada. The resulting document presents a set of guidelines for adopting formal verification in existing projects. These guidelines are arranged in terms of five levels of software assurance, in increasing order of benefits and costs. The levels are named Stone, Bronze, Silver, Gold and Platinum. Successfully reaching a given level requires successfully achieving the goals of the previous levels as well.

The guidelines were developed jointly by AdaCore and Thales for the adoption of the SPARK language technology at Thales but are applicable across a wide range of application domains. The document is available online.

2.1 Stone Level

The goal at the Stone level is to identify as much code as possible that belongs to the SPARK subset. That subset provides a strong semantic coding standard that enforces safer use of Ada language features and forbids those features precluding analysis (e.g., exception handlers). The result is potentially more understandable, maintainable code.

2.2 Bronze Level

The goal at the Bronze level is to verify initialization and correct data flow, as indicated by the absence of GNATprove messages during SPARK flow analysis. Flow analysis detects programming errors such as reading uninitialized data, problematic aliasing between formal parameters, and data races between concurrent tasks. In addition, GNATprove checks unit specifications for the actual data read or written, and the flow of information from inputs to outputs. As one can see, this level provides significant benefits, and can be reached with comparatively low cost. There are no proofs attempted at this level, only data and flow analyses.

2.3 Silver Level

The goal at the Silver level is to statically prove absence of run-time errors (AoRTE), i.e., that there are no exceptions raised. Proof at this level detects programming errors such as divide by zero, array indexes that are out of bounds, and numeric overflow (integer, fixed-point and floating-point), among others. These errors are detected via the implicit language-defined checks that raise language-defined exceptions. The checks themselves preclude a number of significant situations, including, for example, buffer overflow, which is often exploited to inject malicious executable code.

Preconditions, among other additions, may be required to prove these checks. To illustrate the benefit and part of the cost of achieving the Silver level, consider the way the Ada version of the stack ADT uses preconditions for this purpose. (The complete Ada implementation is explored in section 4.1.) First, here is the full declaration for type Stack in the Ada package private part:

type Content is array (Positive range <>) of Element;

type Stack (Capacity : Positive) is record
   Values : Content (1 .. Capacity);
   Top    : Natural := 0;
end record;

The type Element represents the kind of individual values contained by stack objects. Top is used as the index into the array Values and can be zero. The Values array uses 1 for the lower index bound so when Top is zero the enclosing stack object is logically empty. The following function checks for that condition:

function Empty (This : Stack) return Boolean is
  (This.Top = 0);

Consider, then, a function using Empty as a precondition. The function takes a stack parameter as input and returns the Element value at the logical top of the stack:

19    function Top_Element (This : Stack) return Element with
20      Pre => not Empty (This);

Given the precondition on line 20, within the function completion we know that Top has a value that is a potentially valid array index. (We'll also have to be more precise about Top's upper bound, as explained later in section 4.4.) There is no need for defensive code so the body is simply as follows:

57    function Top_Element (This : Stack) return Element is
58      (This.Values (This.Top));

If we did not have the precondition specified, GNATprove would issue a message:

58:24: medium: array index check might fail, (e.g. when This = (…, Top => 0) and …)

The message shows an example situation in which the check could fail: Top is zero, i.e., the stack is empty. (We have elided some of the message content to highlight the part mentioning Top.)

GNATprove will attempt to prove, statically, that the preconditions hold at every call site, flagging those calls, if any, in which the preconditions might not hold. Those failures must be addressed at the Silver level because the preconditions are necessary to the proof of absence of run-time errors.
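For example, a call along the lines of the following sketch (not from the demo program) would be flagged, because GNATprove cannot show that the precondition of Top_Element holds on a freshly declared, empty stack:

```ada
declare
   S : Stack (Capacity => 10);
   V : Element;
begin
   --  Flagged by GNATprove: precondition "not Empty (This)" might fail,
   --  since S starts out with Top = 0 (empty).
   V := Top_Element (S);
end;
```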

As you can see, the Silver level provides highly significant benefits, but does require more contracts and potentially complex changes to the code. The effort required to achieve this level can be high. Arguably, however, this level should be the minimum target level, especially if the application executable is to be deployed with run-time checks disabled.

2.4 Gold Level

The goal at the Gold level is proof of key integrity properties. These properties are typically derived from software requirements but also include maintaining critical data invariants throughout execution. 

Working at this level assumes prior completion at the Silver level to ensure program integrity, such that control flow cannot be circumvented through run-time errors and data cannot be corrupted. Verification at this level is also expected to pass without any violations.

Key integrity properties are expressed as additional preconditions and postconditions beyond those used for defensive purposes.  In addition, the application may explicitly raise application-defined exceptions to signal violations of integrity properties. GNATprove will attempt to prove that the code raising an exception is never reached, and thus, that the property violation never occurs. This approach may also require further proof-oriented code.
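As a hedged illustration (the names here are hypothetical, not taken from the stack ADT), a guard that raises an application-defined exception can be proven unreachable when a precondition rules the violation out:

```ada
Insufficient_Funds : exception;

procedure Withdraw (Balance : in out Natural; Amount : Natural) with
  Pre => Amount <= Balance
is
begin
   if Amount > Balance then
      --  Given the precondition, GNATprove proves this raise unreachable.
      raise Insufficient_Funds;
   end if;
   Balance := Balance - Amount;
end Withdraw;
```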

The Gold level provides extremely significant benefits. In particular, it can be less expensive to prove at this level than to test to the same degree of confidence. However, the analysis may take a long time, may require adding more precise types (ranges), and may require adding more preconditions and postconditions. Even if a property is provable, automatic provers may fail to prove it due to limitations of the provers, requiring either manual proof or, alternatively, testing.

2.5 Platinum Level

The goal at the Platinum level is nothing less than full functional proof of the requirements: not only the unit-level functional requirements but also any abstract requirements such as, for example, safety and security.

As with the Gold level, the application code must pass SPARK analysis without any violations. Furthermore, at the Platinum level GNATprove must verify complete user specifications for type invariants, preconditions, postconditions, type predicates, loop variants, and loop termination.

The effort to achieve Platinum level is high, so high that this level is not recommended during initial adoption of SPARK.

3. Development Environment and Configuration

When we say we use SPARK, we mean that we develop the sources in the SPARK language, but also that we use the SPARK analysis tool to examine and verify those sources. We developed our sources in GNAT Studio (formerly GPS), a multi-lingual IDE supporting both Ada and SPARK, among others. The SPARK analysis tool is named GNATprove, a command-line tool integrated with GNAT Studio. GNAT Studio facilitates invocation of GNATprove with control over switches and source files, providing traversable results and even, if need be, interactive proof.

3.1 The Provers

A critical concept for using GNATprove is that it transparently invokes third-party “provers” to analyze the given source files. These provers are somewhat specialized in their ability to analyze specific semantics expressed by the source code. As a result, invocation of a series of provers may be required before some source code is successfully proven. In addition, we may need to ask the provers to “try harder” when attempting to analyze difficult situations. GNATprove can do both for us via the “level=n” switch, where “n” is a number from 0 to 4 indicating increasing strength of analysis and additional provers invoked. In proving our stack implementation we use level 4.

3.2 Language-Defined Run-time Checks

GNATprove is also integrated with the GNAT Ada compiler, including the analysis of language-defined run-time checks produced by the compiler. GNATprove attempts to verify that no exceptions are raised due to these checks. It will do so even if we suppress the checks with compiler switches or pragma Suppress, so we can interpret lack of corresponding messages as successful verification of those checks.
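For instance, in a sketch like the following (hypothetical function, not from the stack ADT), GNATprove still verifies the index check even though the pragma suppresses it at run time:

```ada
pragma Suppress (Index_Check);

function Nth (A : String; I : Positive) return Character is
begin
   --  The compiler generates no run-time index check here, but
   --  GNATprove still analyzes whether I is within A'Range.
   return A (I);
end Nth;
```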

Integer overflow checks are a special case, and as a result have a dedicated GNAT switch that affects whether that specific check is generated by the compiler. They are a special case because, in addition to the functional code, they may appear in the logical assertions about the functional code, including subprogram preconditions and postconditions. In these contexts, we might expect them to behave mathematically, without implementation bounds. For example, consider the following declaration for a procedure that enters a log entry into a file:

5    Entry_Num : Natural := 0;
7    procedure Log (This : String) with
8      Pre    => Entry_Num + 1 <= Integer'Last,
9      Global => (In_Out => Entry_Num);

The procedure body increments Entry_Num by one and then prepends the result to the string passed as the log entry. This addition in the body might overflow, but the issue under consideration is the addition in the precondition on line 8. If Entry_Num is Integer’Last at the point of the call, the addition on line 8 will overflow, as GNATprove indicates:

8:26: medium: overflow check might fail (e.g. when Entry_Num = Natural'Last)

We could revise the code so that the expression cannot overflow:

Pre => Entry_Num <= Integer'Last - 1,

although that is slightly less readable. Other alternatives within the code are possible as well. However, with regard to switches pertinent for check generation, GNAT provides the “-gnato” switch that allows us to control how integer overflow is treated. (There is a pragma as well, with the same effects.) We can use that switch to have the compiler implement integer arithmetic mathematically, without bounds, the way we might conceptually expect it to work within logical, non-functional assertions. As a result, there will be no integer overflow checks generated. The default effect for the switch, and the default if the switch is not present, is to enable overflow checks in both functional and assertion code so we just need to be aware of non-default usage when we want to determine whether integer overflow checks have been verified. (See the SPARK User Guide, section 5.7 “Overflow Modes” for the switch parameters.) In our GNAT project file, the switch is explicitly set to enable overflow checks in both the functional code and the assertion code.
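For reference, a project file fragment along these lines (a sketch; this is one possible configuration, not necessarily the one used in the article's project) selects STRICT overflow checking for both functional and assertion code:

```ada
--  In the .gpr project file: "-gnato11" requests STRICT overflow
--  checking (mode 1) in both functional code and assertions.
package Compiler is
   for Default_Switches ("Ada") use ("-gnato11");
end Compiler;
```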

3.3 Source Code File Organization

The main program declares objects of a type Stack able to contain character values. That Stack type is provided by the package Character_Stacks, which is an instantiation of a generic package defining a stack abstract data type. The instantiation is specified such that objects of the resulting Stack type can contain character values.

Logically, there are four source files in the application: two (declaration and body) for the generic package, one for the instantiation of that generic package, and one containing the demonstration main subprogram.  Operationally, however, there are multiple source files for the generic package. Rather than have one implementation that we alter as we progress through the SPARK adoption levels, we have chosen to have a distinct generic package for each level. Each generic package implements a common stack ADT in a manner consistent with an adoption level. The differences among them reflect the changes required for the different levels. This approach makes it easier to keep the differences straight when examining the code. Furthermore, we can apply the proof analyses to a conceptually common abstraction at arbitrary adoption levels without having to alter the code.

In addition to the content differences required by the adoption levels, each generic package name reflects the corresponding level. We have generic package Bounded_Stacks_Stone for the Stone level, Bounded_Stacks_Gold for the Gold level, and so on. Therefore, although the instantiation is always named Character_Stacks, we have multiple generic packages available to declare the one instantiation used.  

There are also multiple files for the instantiations. Each instantiation is located within a dedicated source file corresponding to a given adoption level (lines 2 and 3 below). For example, here is the content of the file providing the instance for the Stone level:

1 pragma Spark_Mode (On);
2 with Bounded_Stacks_Stone;
3 package Character_Stacks is new Bounded_Stacks_Stone
4   (…);

The file names for these instances must be unique but are otherwise arbitrary. For the above, the file name is “” because it is the instance of the Stone level generic.

Only one of these instances can be used when GNATprove analyzes the code (or when building the executable). To select among them we use a “scenario variable” defined in the GNAT project file that has scenario values matching the adoption level names. In the IDE this scenario variable is presented with a pull-down menu so all we must do to work at a given level is select the adoption level name in the pull-down list. The project file then selects the instantiation file corresponding to the level, e.g., “” when the Silver level is selected.

There are also multiple source files for the main program. Rather than have one file that must be edited as we prove the higher levels, we have two: one for all levels up to and including the Silver level, and one for all levels above that. The scenario variable also determines which of these two source files is active.

3.4 Verifying Generic Units

One of the current limitations of GNATprove is that it cannot verify generic units on their own. GNATprove must instead be provided an instantiation to verify. Therefore, whenever we say that we are verifying the generic package defining the stack ADT, we mean we are invoking GNATprove on an instantiation of that generic. As noted earlier in section 3.3, there are multiple source files containing these instantiations so we must select the file corresponding to the desired level when we want to verify the generic package alone. 

However, because there are only four total files required at any one time, we usually invoke the IDE action that has GNATprove analyze all the files in the closure of the application. The instantiation file corresponding to the scenario variable’s current selection will be analyzed; other instantiation files are ignored. This approach also verifies the main program’s calls to the stack routines, which is vital to the higher adoption levels.

4. Implementations Per Adoption Level

Our first main procedure, used for all adoption levels up through Silver, declares two stack objects (line 6 below) and manipulates them via the abstraction’s interface:

1 with Ada.Text_IO;       use Ada.Text_IO;
2 with Character_Stacks;  use Character_Stacks;
4 procedure Demo_AoRTE with SPARK_Mode is
6    S1, S2 : Stack (Capacity => 10);  -- arbitrary
8    X, Y : Character;
10 begin
11    pragma Assert (Empty (S1) and Empty (S2));
12    pragma Assert (S1 = S2);
13    Push (S1, 'a');
14    Push (S1, 'b');
15    Put_Line ("Top of S1 is '" & Top_Element (S1) & "'");
17    Pop (S1, X);
18    Put_Line ("Top of S1 is '" & Top_Element (S1) & "'");
19    Pop (S1, Y);
20    pragma Assert (Empty (S1) and Empty (S2));
21    Put_Line (X & Y);
23    Reset (S1);
24    Put_Line ("Extent of S1 is" & Extent (S1)'Image);
26    Put_Line ("Done");
27 end Demo_AoRTE;

This is the “demo_aorte.adb” file. The purpose of the code is to illustrate issues found at the initial levels, including proof in a caller context. It has no other functional purpose whatsoever. As we progress through the levels, we will add more assertions to highlight more issues, as will be seen in the other main procedure in the  “demo_gold.adb” file.

4.1 Initial Ada Implementation

The initial version defines a canonical representation of a sequential, bounded stack. As an abstract data type, the Stack type is declared as a private type with routines manipulating objects of the type. The type is declared within a generic package that has one generic formal parameter, a type representing the kind of elements contained by Stack objects. This approach is used in all the implementations.

Some routines have “defensive” preconditions to ensure correct functionality. They raise exceptions, declared within the package, when the preconditions do not hold.

The generic package in Ada is declared as follows:

1 generic
2    type Element is private;
3 package Bounded_Stacks_Magma is
5    type Stack (Capacity : Positive) is private;
7    procedure Push (This : in out Stack; Item : in Element) with
8      Pre => not Full (This) or else raise Overflow;
10    procedure Pop (This : in out Stack; Item : out Element) with
11      Pre => not Empty (This) or else raise Underflow;
13    function Top_Element (This : Stack) return Element with
14      Pre => not Empty (This) or else raise Underflow;
15    --  Returns the value of the Element at the "top" of This
16    --  stack, i.e., the most recent Element pushed. Does not
17    --  remove that Element or alter the state of This stack
18    --  in any way.
20    overriding function "=" (Left, Right : Stack) return Boolean;
22    procedure Copy (Destination : out Stack; Source : Stack) with
23      Pre => Destination.Capacity >= Extent (Source)
24               or else raise Overflow;
25    --  An alternative to predefined assignment that does not
26    --  copy all the values unless necessary. It only copies
27    --  the part "logically" contained, so is more efficient
28    --  when Source is not full.
30    function Extent (This : Stack) return Natural;
31    --  Returns the number of Element values currently
32    --  contained within This stack.
34    function Empty (This : Stack) return Boolean;
36    function Full (This : Stack) return Boolean;
38    procedure Reset (This : out Stack);
40    Overflow  : exception;
41    Underflow : exception;
43 private
45    type Content is array (Positive range <>) of Element;
47    type Stack (Capacity : Positive) is record
48       Values : Content (1 .. Capacity);
49       Top    : Natural := 0;
50    end record;
52 end Bounded_Stacks_Magma;

This version is below the Stone level because it is not within the SPARK subset, due to the raise expressions on lines 8, 11, 14, and 24. We will address those constructs in the Stone version.

The generic package body is shown below.

1 package body Bounded_Stacks_Magma is
3    procedure Reset (This : out Stack) is
4    begin
5       This.Top := 0;
6    end Reset;
8    function Extent (This : Stack) return Natural is
9       (This.Top);
11    function Empty (This : Stack) return Boolean is
12      (This.Top = 0);
14    function Full (This : Stack) return Boolean is
15      (This.Top = This.Capacity);
17    procedure Push (This : in out Stack; Item : in Element) is
18    begin
19       This.Top := This.Top + 1;
20       This.Values (This.Top) := Item;
21    end Push;
23    procedure Pop (This : in out Stack; Item : out Element) is
24    begin
25       Item := This.Values (This.Top);
26       This.Top := This.Top - 1;
27    end Pop;
29    function Top_Element (This : Stack) return Element is
30      (This.Values (This.Top));
32    function "=" (Left, Right : Stack) return Boolean is
33      (Left.Top = Right.Top and then
34       Left.Values (1 .. Left.Top) = Right.Values (1 .. Right.Top));
36    procedure Copy (Destination : out Stack; Source : Stack) is
37       subtype Contained is Integer range 1 .. Source.Top;
38    begin
39       Destination.Top := Source.Top;
40       Destination.Values (Contained) := Source.Values (Contained);   
41    end Copy;
43 end Bounded_Stacks_Magma;

Note that both procedure Copy and function “=” are defined for the sake of increased efficiency when the objects in question are not full. The procedure only copies the slice of Source.Values that represents the Element values logically contained at the time of the call. The language-defined assignment operation, in contrast, would copy the entire contents. Similarly, the overridden equality operator only compares the array slices, rather than the entire arrays, after first ensuring the stacks are the same logical size. 

However, the "=" function is required not only for efficiency but also for proper semantics. The comparison must not examine array components that are not currently contained in the stack objects (and perhaps never have been). The predefined equality would do so and must, therefore, be replaced.

The changes to the body made for the sake of SPARK will amount to moving certain bodies to the package declaration so we will not show the package body again. The full Platinum implementation, both declaration and body, is provided in section 6.

4.2 Stone Implementation

The Stone level version of the package cannot have the "raise expressions" in the preconditions because they are not in the SPARK subset. The rest of the preconditions are unchanged. Here are the updated declarations for Push and Pop, for example:

procedure Push (This : in out Stack; Item : in Element) with
     Pre => not Full (This);

   procedure Pop (This : in out Stack; Item : out Element) with
     Pre => not Empty (This);

When we get to the adoption levels involving proof, GNATprove will attempt to verify statically that the preconditions will hold at each call site. Either that verification will succeed, or we will know that we must change the calling code accordingly. Therefore, the prohibited “raise expressions” are not needed.

The exception declarations, although within the subset, are also removed because they are no longer needed. 

The remaining code is wholly within the SPARK subset so we have reached the Stone level.

4.3 Bronze Implementation

The Bronze level is about initialization and data flow. When we apply GNATprove to the Stone version in flow analysis mode, GNATprove issues messages on the declarations of procedures Copy and Reset in the generic package declaration:

medium: "Destination.Values" might not be initialized in "Copy"
high: "This.Values" is not initialized in "Reset"

The procedure declarations are repeated below for reference:

procedure Copy (Destination : out Stack; Source : Stack) with
     Pre => Destination.Capacity >= Extent (Source);
   procedure Reset (This : out Stack);

Both messages result from the fact that the updated formal stack parameters have mode “out” specified. That mode, in SPARK, means more than it does in Ada: it indicates that the actual parameters are fully assigned by the procedures. These two procedure bodies do not do so. Procedure Reset simply sets Top to zero because that is all that a stack requires, at run-time, to be fully reset; it does nothing at all to the Values array component. Likewise, procedure Copy may assign only part of the array, i.e., just those array components that are logically part of the Source object. (Of course, if Source is full, the entire array is copied.)

In both subprograms our notion of being fully assigned is less than SPARK requires, so we have two choices: either we assign values to all components of the record, or we change the modes to “in out.” These two procedures exist for the sake of efficiency, i.e., not writing any more data than logically necessary. Having Reset assign anything to the array component would defeat the purpose, and for the same reason, having Copy assign more than the partial slice (when the stack is not full) is clearly inappropriate. Therefore, we change the mode to “in out” for these two subprograms. In other cases we might instead change the implementations to fully assign the objects.
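The updated declarations for the two subprograms are then:

```ada
procedure Copy (Destination : in out Stack; Source : Stack) with
  Pre => Destination.Capacity >= Extent (Source);

procedure Reset (This : in out Stack);
```

With mode “in out,” SPARK no longer requires the procedures to fully assign their stack parameters, at the cost of implying that the prior value might be read.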

The other change required for initialization concerns the type Stack itself. In the main subprogram, GNATprove complains that the two objects of type Stack have not been initialized:

warning: "S1" may be referenced before it has a value
high: private part of "S1" is not initialized
warning: "S2" may be referenced before it has a value
high: private part of "S2" is not initialized
high: private part of "S1" is not initialized

Our full definition of the Stack type in the private part is such that default initialization (i.e., elaboration of object declarations without an explicit initial value) will assign the record components so that a stack will behave as if initially empty. Specifically, default initialization assigns zero to Top (line 5 below), and since function Empty examines only the Top component, such objects are empty.

1 type Content is array (Positive range <>) of Element;
3 type Stack (Capacity : Positive) is record
4    Values : Content (1 .. Capacity);
5    Top    : Natural := 0;
6 end record;

Proper run-time functionality of the Stack ADT does not require the Values array component to be assigned by default initialization. But just as with Reset and Copy, although this approach is sufficient at run-time, the resulting objects will not be fully initialized in SPARK, which analyzes the code prior to run-time. As a result, we need to assign an array aggregate to the Values component as well. Expressing the array aggregate is problematic because the array component type is the generic formal private type Element, with a private view within the package. Inside the generic package we don’t know how to construct a value of type Element so we cannot construct an aggregate containing such values. Therefore, we add the Default_Value generic formal object parameter and use it to initialize the array components.

This new generic formal parameter, shown below on line 5, is added from the Bronze version onward:

1 generic
2    type Element is private;
3    --  The type of values contained by objects of type Stack
5    Default_Value : Element;
6    --  The default value used for stack contents. Never
7    --  acquired as a value from the API, but required for
8    --  initialization in SPARK.
9 package Bounded_Stacks_Bronze is

The full definition for type Stack then uses that parameter to initialize Values (line 2):

1 type Stack (Capacity : Positive) is record
2    Values : Content (1 .. Capacity) := (others => Default_Value);
3    Top    : Natural := 0;
4 end record;

With those changes in place flow analysis completes without further complaint. The implementation has reached the Bronze level.

The need for that additional generic formal parameter is unfortunate because it becomes part of the user’s interface without any functional use. None of the API routines ever return it as such, and the actual value chosen is immaterial.

Note that SPARK will not allow the aggregate to contain default components (line 2):

1 type Stack (Capacity : Positive) is record
2    Values : Content (1 .. Capacity) := (others => <>);
3    Top    : Natural := 0;
4 end record;

as per SPARK RM 4.3(1).

Alternatively, we could omit this generic formal object parameter if we use an aspect to promise that the objects are initially empty, and then manually justify any resulting messages. We will in fact add that aspect for other reasons, but we prefer to have proof as automated as possible, for convenience and to avoid human error.
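For illustration, such a manual justification could use GNATprove's Annotate pragma, placed immediately after the flagged declaration. This is a sketch only, and not the approach we adopt: the aspect shown is introduced later for other reasons, and the message pattern and justification strings below are illustrative, not actual GNATprove output.

```ada
type Stack (Capacity : Positive) is private
  with Default_Initial_Condition => Empty (Stack);

--  Hypothetical justification of the remaining flow message;
--  the message pattern and reason strings are illustrative only.
procedure Reset (This : out Stack);
pragma Annotate
  (GNATprove, Intentional,
   """This.Values"" is not initialized",
   "Only Top is needed to make a stack logically empty");
```

Each such justification must be maintained by hand as the code and tool output evolve, which is one reason we prefer the fully automated route.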

Finally, although the data dependency contracts, i.e., the “Global” aspects, would be generated automatically, we add them explicitly, indicating that there are no intended accesses to any global objects. For example, on line 3 in the following:

1 procedure Push (This : in out Stack;  Item : Element) with
2   Pre    => not Full (This),
3   Global => null;

We do so because mismatches between reality and the generated contracts are not reported by GNATprove, but we prefer positive confirmation for our understanding of the dependencies.

The flow dependency contracts (the “Depends” aspects) also can be generated automatically. Unlike the data dependency contracts, however, usually these can be omitted from the code even though mismatches with the corresponding bodies are not reported. That lack of notification is not a problem because the generated contracts are safe: they express at least the dependencies that the code actually exhibits. Therefore, all actual dependencies are covered. For example, a generated flow dependency will state that all outputs depend on all inputs, which is possible but not necessarily the case. 

However, overly conservative contracts can lead to otherwise-avoidable proof failures, in which case the developer must add precise contracts explicitly. The other reason to express them explicitly is to verify data flow as part of the abstract properties, for example that data flows only between units at appropriate security levels. We are not doing so in this case.
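For illustration, an explicit flow dependency contract for Push would state that the resulting stack state depends on both the prior state and the pushed item. This is a sketch; we do not include Depends contracts in our implementation:

```ada
procedure Push (This : in out Stack; Item : Element) with
  Pre     => not Full (This),
  Global  => null,
  Depends => (This => (This, Item));
```

The generated contract would be the same here, since all outputs (This) really do depend on all inputs (This and Item).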

4.4 Silver Implementation

If we try to prove the Bronze level version of the generic package, GNATprove will complain about various run-time checks that cannot be proved in the generic package body. The Silver level requires these checks to be proven not to fail, i.e., not to raise exceptions. 

The check messages are as follows, preceded by the code fragments they reference, with some message content elided in order to emphasize parts that lead us to the solution:

37    procedure Push (This : in out Stack; Item : in Element) is
38    begin
39       This.Top := This.Top + 1;
40       This.Values (This.Top) := Item;
41    end Push;
bounded_stacks_silver.adb:39:28: medium: overflow check might fail, … (e.g. when This = (…, Top => Natural'Last) …

bounded_stacks_silver.adb:40:24: medium: array index check might fail, … (e.g. when This = (…, Top => 2) and This.Values'First = 1 and This.Values'Last = 1)
47    procedure Pop (This : in out Stack; Item : out Element) is
48    begin
49       Item := This.Values (This.Top);
50       This.Top := This.Top - 1;
51    end Pop;
bounded_stacks_silver.adb:49:32: medium: array index check might fail, … (e.g. when This = (…, Top => 2) and This.Values'First = 1 and This.Values'Last = 1)
57    function Top_Element (This : Stack) return Element is
58      (This.Values (This.Top));
bounded_stacks_silver.adb:58:24: medium: array index check might fail, … (e.g. when This = (…, Top => 2) and This.Values'First = 1 and This.Values'Last = 1)
64    function "=" (Left, Right : Stack) return Boolean is
65       (Left.Top = Right.Top and then
66        Left.Values (1 .. Left.Top) = Right.Values (1 .. Right.Top));
bounded_stacks_silver.adb:66:12: medium: range check might fail, … (e.g. when Left = (Capacity => 1, …, Top => 2) …

bounded_stacks_silver.adb:66:43: medium: range check might fail, … (e.g. when Right = (Capacity => 1, …, Top => 2) …
72    procedure Copy (Destination : in out Stack; Source : Stack) is
73       subtype Contained is Integer range 1 .. Source.Top;
74    begin
75       Destination.Top := Source.Top;
76       Destination.Values (Contained) := Source.Values (Contained);
77    end Copy;
bounded_stacks_silver.adb:76:47: medium: range check might fail, … (e.g. when Destination = (Capacity => 1, …) and Source = (Capacity => 1, …, Top => 2))

All of these messages indicate that the provers do not know that the Top component is always in the range 0 .. Capacity. The code has not said so, and indeed, there is no way to use a discriminant in a scalar record component declaration to constrain the component’s range.  This is what we would write for the record type implementing type Stack in the full view, if we could (line 3):

1 type Stack (Capacity : Positive) is record
2    Values : Content (1 .. Capacity) := (others => Default_Value);
3    Top    : Natural range 0 .. Capacity := 0;
4 end record;

but that range constraint on Top is not legal. The reason it is illegal is that the application can change the value of a discriminant at run-time, under controlled circumstances, but there is no way at run-time to change the range checks in the object code generated by the compiler. However, with Ada and SPARK there is now a way to express the constraint on Top, and the provers will recognize the meaning during analysis. Specifically, we apply a “subtype predicate” to the record type declaration (line 5):

1 type Stack (Capacity : Positive) is record
2    Values : Content (1 .. Capacity) := (others => Default_Value);
3    Top    : Natural := 0;
4 end record with
5   Predicate => Top in 0 .. Capacity;

This aspect informs the provers that the Top component for any object of type Stack is always in the range 0 .. Capacity. That addition successfully addresses all the messages about the generic package body. Note that the provers will verify the predicate too.

However, GNATprove also complains about the main program. The first two assertions in the main procedure are not verified:

10   begin
11      pragma Assert (Empty (S1) and Empty (S2));
12      pragma Assert (S1 = S2);

GNATprove emits:

11:19: medium: assertion might fail, cannot prove Empty (S1)
12:19: medium: assertion might fail, cannot prove S1 = S2

We can address the issue for function Empty, partly, by adding another aspect to the declaration of type Stack, this time to the visible declaration:

type Stack (Capacity : Positive) is private
      with Default_Initial_Condition => Empty (Stack);

The new aspect indicates that default initialization results in stack objects that are empty, making the intended initial object state explicit and, more to the point, verifiable. We will be notified if GNATprove determines that the aspect does not hold.

That new aspect will handle the first assertion in the main program on line 11 but GNATprove complains throughout the main procedure that the preconditions involving Empty and Full cannot be proven. For example:

13    Push (S1, 'a');
14    Push (S1, 'b');
15    Put_Line ("Top of S1 is '" & Top_Element (S1) & "'");

GNATprove emits:

13:06: medium: precondition might fail, cannot prove not Full (This)

14:06: medium: precondition might fail, cannot prove not Full (This) [possible explanation: call at line 13 should mention This (for argument S1) in a postcondition]

15:35: medium: precondition might fail, cannot prove not Empty (This) [possible explanation: call at line 14 should mention This (for argument S1) in a postcondition]

Note the “possible explanations” that GNATprove gives us. These are clear indications that we are not specifying sufficient postconditions. Remember that when analyzing code that includes a call to some procedure, the provers’ knowledge of the call’s effect is provided entirely by the procedure’s postcondition. That postcondition might be insufficient, especially if it is absent!

Therefore, we must tell the provers about the effects of calling Push and Pop, as well as the other routines that change state. We add a new postcondition on Push (line 3):

1 procedure Push (This : in out Stack;  Item : Element) with
2   Pre    => not Full (This),
3   Post   => Extent (This) = Extent (This)'Old + 1,
4   Global => null;

The new postcondition expresses the fact that the Stack contains one more Element value after the call. This is sufficient because the provers know that function Extent is simply the value of Top:

function Extent (This : Stack) return Natural is
  (This.Top);

Hence the provers know that Top is incremented by Push.

The same approach addresses the messages for Pop (line 3):

1 procedure Pop (This : in out Stack; Item : out Element) with
2   Pre    => not Empty (This),
3   Post   => Extent (This) = Extent (This)'Old - 1,
4   Global => null;

In the above we say that the provers know what the function Extent means. For that to be the case when verifying client calls, we must move the function completion from the generic package body to the generic package declaration. In addition, the function must be implemented as an “expression function,” which Extent already is (see above). With the completions visible as expression functions in the package spec, the provers know the semantics of those functions automatically, as if each were given a postcondition restating the corresponding expression explicitly. We also need functions Full and Empty to be known in this manner. Therefore, we move the Extent, Empty, and Full function completions, already expression functions, from the generic package body to the package declaration. We put them in the private part because these implementation details should not be exported to clients.
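After the move, the private part contains the three completions, unchanged from the package body:

```ada
private

   --  ... type declarations as before ...

   function Extent (This : Stack) return Natural is
     (This.Top);

   function Empty (This : Stack) return Boolean is
     (This.Top = 0);

   function Full (This : Stack) return Boolean is
     (This.Top = This.Capacity);
```

Because the completions are in the private part, clients still see only the function declarations, but the provers see the expressions.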

However, we have a potential overflow in the postcondition for Push, i.e., the increment of the number of elements contained after Push returns (line 3 below). The postcondition for procedure Pop, of course, does not have that problem.

1 procedure Push (This : in out Stack;  Item : Element) with
2   Pre    => not Full (This),
3   Post   => Extent (This) = Extent (This)'Old + 1,
4   Global => null;

The increment might overflow because Extent returns a value of subtype Natural, which could be the value Integer'Last. Hence the increment could raise Constraint_Error and the check cannot be verified. We must either apply the “-gnato” switch so that assertions can never overflow, or alternatively, declare a safe subrange so that the result of the addition cannot be greater than Integer'Last. 

Our choice is to declare a safe subrange because the effects are explicit in the code, as opposed to an external switch. Here are the added subtype declarations:

subtype Element_Count is 
      Integer range 0 .. Integer'Last - 1;
   --  The number of Element values currently contained
   --  within any given stack. The lower bound is zero
   --  because a stack can be empty. We limit the upper
   --  bound (minimally) to preclude overflow issues.

   subtype Physical_Capacity is
      Element_Count range 1 .. Element_Count'Last;
   --  The range of values that any given stack object can
   --  specify (via the discriminant) for the number of
   --  Element values the object can physically contain.
   --  Must be at least one.

We use the second subtype for the discriminant in the partial view for Stack (line 1):

1 type Stack (Capacity : Physical_Capacity) is private
2    with Default_Initial_Condition => Empty (Stack);

and both subtypes in the full declaration in the private part (lines 1, 3, and 5):

1 type Content is array (Physical_Capacity range <>) of Element;
3 type Stack (Capacity : Physical_Capacity) is record
4    Values : Content (1 .. Capacity) := (others => Default_Value);
5    Top    : Element_Count := 0;
6 end record with
7   Predicate => Top in 0 .. Capacity;

The function Extent is changed to return a value of the subtype Element_Count, so adding one in the postcondition cannot exceed Integer'Last. Overflow is precluded, but note that there will now be range checks for GNATprove to verify.
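The declaration of Extent accordingly becomes:

```ada
function Extent (This : Stack) return Element_Count;
--  Returns the number of Element values currently
--  contained within This stack.
```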

With these changes in place we have achieved the Silver level. There are no run-time check verification failures and the defensive preconditions are proven at their call sites.

4.5 Gold Implementation

We will now address the remaining changes needed to reach the Gold level. The process involves iteratively attempting to prove the main program that calls the stack routines and makes assertions about the conditions that follow. This process will result in changes to the generic package, especially postconditions, so it will require verification along with the main procedure. Those additional postconditions may require additional preconditions as well.

In general, a good way to identify postcondition candidates is to ask ourselves what conditions we, as the developers, know to be true after a call to the routine in question. Then we can add assertions after the calls to see if the provers can verify those conditions. If not, we extend the postcondition on the routine.

For example, we can say that after a call to Push, the corresponding stack cannot be empty. Likewise, after a call to Pop, the stack cannot be full. These additions are not required for the sake of assertions or other preconditions because the Extent function already tells the provers what they need to know in this regard. However, they are good documentation and may be required to prove additional conditions added later. (That is the case, in fact, as will be shown.)

To see what other postconditions are required, we now switch to the other main procedure, in the “demo_gold.adb” file. This version of the demo program includes a number of additional assertions:

1 with Ada.Text_IO;       use Ada.Text_IO;
2 with Character_Stacks;  use Character_Stacks;
4 procedure Demo_Gold with SPARK_Mode is
6    S1, S2 : Stack (Capacity => 10);  -- arbitrary
8    X, Y : Character;
10 begin
11    pragma Assert (Empty (S1) and Empty (S2));
12    pragma Assert (S1 = S2);
13    Push (S1, 'a');
14    pragma Assert (not Empty (S1));
15    pragma Assert (Top_Element (S1) = 'a');
16    Push (S1, 'b');
17    pragma Assert (S1 /= S2);
19    Put_Line ("Top of S1 is '" & Top_Element (S1) & "'");
21    Pop (S1, X);
22    Put_Line ("Top of S1 is '" & Top_Element (S1) & "'");
23    Pop (S1, Y);
24    pragma Assert (X = 'b');
25    pragma Assert (Y = 'a');
26    pragma Assert (S1 = S2);
27    Put_Line (X & Y);
29    Push (S1, 'a');
30    Copy (Source => S1, Destination => S2);
31    pragma Assert (S1 = S2);
32    pragma Assert (Top_Element (S1) = Top_Element (S2));
33    pragma Assert (Extent (S1) = Extent (S2));
35    Reset (S1);
36    pragma Assert (Empty (S1));
37    pragma Assert (S1 /= S2);
39    Put_Line ("Done");
40 end Demo_Gold;

For example, we have added assertions after the calls to Reset and Copy, on lines 31 through 33 and 36 through 37, respectively. GNATprove now emits the following (elided) messages for those assertions:

demo_gold.adb:31:19: medium: assertion might fail, cannot prove S1 = S2 (e.g. when S1 = (…, Top => 0) and S2 = (…, Top => 0)) [possible explanation: call at line 30 should mention Destination (for argument S2) in a postcondition]
demo_gold.adb:36:19: medium: assertion might fail, cannot prove Empty (S1) … [possible explanation: call at line 35 should mention This (for argument S1) in a postcondition]

Note again the “possible explanation” hints. For the first message we need to add a postcondition on Copy specifying that the value of the argument passed to Destination will be equal to that of the Source argument (line 3):

1 procedure Copy (Destination : in out Stack; Source : Stack) with
2   Pre    => Destination.Capacity >= Extent (Source),
3   Post   => Destination = Source,
4   Global => null;

We must move the “=” function implementation to the package spec so that the provers will know the meaning. The function was already completed as an expression function so moving it to the spec is all that is required.

For the second message, regarding the failure to prove that a stack is Empty after Reset, we add a postcondition to that effect (line 2):

1 procedure Reset (This : in out Stack) with
2   Post   => Empty (This),
3   Global => null;

The completion for function Empty was already moved to the package spec, earlier. 

The implementations of procedure Copy and function “=” might have required explicit loops, and with them loop invariants, but using array slicing we can express the loops implicitly. Here is function “=” again, for example:

1 function "=" (Left, Right : Stack) return Boolean is
2   (Left.Top = Right.Top and then
3    Left.Values (1 .. Left.Top) = Right.Values (1 .. Right.Top));

The slice comparison on line 3 expresses an implicit loop for us, as does the slice assignment in procedure Copy. 

The function could have been implemented as follows, with an explicit loop:

1 function "=" (Left, Right : Stack) return Boolean is
2 begin
3    if Left.Top /= Right.Top then
4       --  They hold a different number of element values so
5       --  cannot be equal.
6       return False;
7    end if;
8    --  The two Top values are the same, and the arrays
9    --  are 1-based, so the bounds are the same. Hence the
10    --  choice of Left.Top or Right.Top is arbitrary and
11    --  there is no need for index offsets.
12    for K in 1 .. Left.Top loop
13       if Left.Values (K) /= Right.Values (K) then
14          return False;
15       end if;
16       pragma Loop_Invariant 
17                (Left.Values (1 .. K) = Right.Values (1 .. K));
18    end loop;
19    --  We didn't find a difference
20    return True;
21 end "=";

Note the loop invariant on lines 16 and 17. In some circumstances GNATprove will handle the invariants for us but often it cannot. In practice, writing sufficient loop invariants is one of the more difficult facets of SPARK development so the chance to avoid them is welcome.

Continuing, we know that after the body of Push executes, the top element contained in the stack will be the value passed to Push as an argument. But the provers cannot verify an assertion to that effect (line 15 below):

13      Push (S1, 'a');
14      pragma Assert (not Empty (S1));
15      pragma Assert (Top_Element (S1) = 'a');

GNATprove emits this message:

demo_gold.adb:15:19: medium: assertion might fail, cannot prove Top_Element (S1) = 'a'

We must extend the postcondition for Push to state that Top_Element would return the value just pushed, as shown on line 4 below:

1 procedure Push (This : in out Stack;  Item : Element) with
2   Pre    => not Full (This),
3   Post   => not Empty (This)
4             and then Top_Element (This) = Item 
5             and then Extent (This) = Extent (This)'Old + 1,
6   Global => null;

Now the assertion on line 15 is verified successfully. 

Recall that the precondition for function Top_Element is that the stack is not empty. We already have that assertion in the postcondition (line 3) so the precondition for Top_Element is satisfied. We must use the short circuit form for the conjunction, though, to control the order of evaluation so that “not Empty” is verified before Top_Element. 

The short-circuit form on line 4 necessitates the same form on line 5, per Ada rules. That triggers a subtle issue flagged by GNATprove. The short-circuit form, by definition, means that the evaluation of line 5 might not occur. If it is not evaluated, we’ve told the compiler to call Extent and make a copy of the result (via ‘Old, on the right-hand side of “=”) that will not be needed. Moreover, the execution of Extent might raise an exception. Therefore, the language disallows applying ‘Old in any potentially unevaluated expression that might raise exceptions. As a consequence, in line 5 we cannot apply ‘Old to the result of calling Extent. GNATprove issues this error message:

prefix of attribute "Old" that is potentially unevaluated must denote an entity

We could address the error by changing line 5 to use Extent(This'Old) instead, but there is a potential performance difference between Extent(This)'Old and Extent(This'Old). With the former, only the result of the function call is copied, whereas with the latter, the value of the parameter is copied. Copying the parameter could take significant time and space if This is a large object. Of course, if the function returns a large value the copy will be large too, but in this case Extent only returns an integer. 
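The two forms look nearly identical in the contract; the difference is only in what gets copied on entry when the postcondition is executed at run-time:

```ada
--  Form used here: only the Integer result of the call to
--  Extent is copied on entry to Push.
Post => Extent (This) = Extent (This)'Old + 1

--  Alternative form: the entire Stack value bound to This is
--  copied on entry, just to apply Extent to the copy afterward.
Post => Extent (This) = Extent (This'Old) + 1
```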

In SPARK, unlike Ada, preconditions, postconditions, and assertions in general are verified statically, prior to execution, so there is no performance issue. Ultimately, though, the application will be executed. Having statically proven the preconditions and postconditions, we could safely deploy the final executable without them enabled, but not all projects follow that approach (at least, not on that basis). Therefore, to emphasize the idiom with typically better performance, we prefer applying ‘Old to the function call in our implementation.

We can tell GNATprove that this is a benign case, using a pragma in the package spec:

pragma Unevaluated_Use_of_Old (Allow);

GNATprove will then allow use of ‘Old on the call to function Extent and will ensure that no exceptions will be raised by the function.

As with procedure Push, we can also use Top_Element to strengthen the postcondition for procedure Pop (line 4 below):

1 procedure Pop (This : in out Stack;  Item : out Element) with
2   Pre    => not Empty (This),
3   Post   => not Full (This)
4             and Item = Top_Element (This)'Old 
5             and Extent (This) = Extent (This)'Old - 1,
6   Global => null;

Line 4 states that the Item returned in the parameter to Pop is the value that would be returned by Top_Element prior to the call to Pop. 

One last significant enhancement now remains to be made. Consider the assertions in the main procedure about the effects of Pop on lines 24 and 25, repeated below:

21    Pop (S1, X);
22    Put_Line ("Top of S1 is '" & Top_Element (S1) & "'");
23    Pop (S1, Y);
24    pragma Assert (X = 'b');
25    pragma Assert (Y = 'a');

Previous lines had pushed ‘a’ and then ‘b’ in that order onto S1. GNATprove emits this one message:

25:19: medium: assertion might fail, cannot prove Y = 'a' (e.g. when Y = 'b')

The message is about the assertion on line 25, alone. The assertion on line 24 was verified. Also, the message indicates that Y could be some arbitrary character. We can conclude that the provers do not know enough about the state of the stack after a call to Pop. The postcondition requires strengthening.

The necessary postcondition extension reflects a unit-level functional requirement for both Push and Pop. If one considers that postconditions correspond to the low-level unit functional requirements (if not more), one can see why the postconditions must be complete. Identifying and expressing complete functional requirements is difficult in itself, and indeed the need for this additional postcondition content is not obvious at first.

The unit-level requirement for both operations is that the prior array components within the stack are not altered, other than the one added or removed. We need to state that Push and Pop have not reordered them, for example. Specifically, for Push we need to say that the new stack state has exactly the same prior array slice contents, ignoring the newly pushed value. For Pop, we need to say that the new state has exactly the prior array slice contents without the old value at the top. 

A new function can be used to express these requirements for both Push and Pop:

function Unchanged (Invariant_Part, Within : Stack) return Boolean;

The Within parameter is a stack whose internal state will be compared against that of the Invariant_Part parameter. The name “Invariant_Part” is chosen to indicate the stack state that has not changed. The name "Within" is chosen for readability in named parameter associations on the calls. For example:

Unchanged (X, Within => Y)

means that the Element values of X should be equal to precisely the corresponding values within Y.

However, this function is not one that users would call directly. We only need it for proof. Therefore, we mark the Unchanged function as a "ghost" function so that the compiler will neither generate code for it nor allow the application code to call it. The function is declared with that aspect (on line 2) as follows:

1 function Unchanged (Invariant_Part, Within : Stack) return Boolean
2   with Ghost;

Key to the usage is the fact that by passing This'Old and This to the two parameters we can compare the before/after states of a single object. Viewing the function's implementation will help understand its use in the postconditions:

1 function Unchanged (Invariant_Part, Within : Stack) return Boolean is
2   (Invariant_Part.Top <= Within.Top and then
3    (for all K in 1 .. Invariant_Part.Top =>
4        Within.Values (K) = Invariant_Part.Values (K)));

This approach is based directly on a very clever one by Rod Chapman, as seen in some similar code. 

The function states that the array components logically contained in Invariant_Part must have the same values as those corresponding array components in Within. Note how we allow Invariant_Part to contain fewer values than the other stack (line 2 above). That is necessary because we use this function in the postconditions for both the Push and Pop operations, in which one more or one less Element value will be present, respectively.
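A quick way to internalize the predicate is to model it outside SPARK. A Python sketch (illustrative only, with lists standing in for the logically contained slice Values (1 .. Top)):

```python
def unchanged(invariant_part, within):
    # Invariant_Part must be a prefix of Within, value for value
    return (len(invariant_part) <= len(within) and
            all(within[k] == invariant_part[k]
                for k in range(len(invariant_part))))

# Push: the old (shorter) state is the invariant part of the new state
assert unchanged(['a', 'b'], within=['a', 'b', 'c'])

# Pop: the new (shorter) state is the invariant part of the old state,
# i.e., the same call with the roles of the two states exchanged
assert unchanged(['a', 'b'], within=['a', 'b', 'c'])

# Reordering the retained elements violates the property
assert not unchanged(['b', 'a'], within=['a', 'b', 'c'])
```

The same function serves both operations precisely because the shorter of the two states is always passed as the invariant part.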

For Push, we add a call to the function in the postcondition as line 6, below:

1 procedure Push (This : in out Stack;  Item : Element) with
2   Pre    => not Full (This),
3   Post   => not Empty (This)
4             and then Top_Element (This) = Item 
5             and then Extent (This) = Extent (This)'Old + 1  
6             and then Unchanged (This'Old, Within => This),
7   Global => null;

This'Old provides the value of the stack prior to the call of Push, without the new value included, whereas This represents the stack state after Push returns, with the new value in place. Thus, the prior values are compared to the corresponding values in the new state, with the newly included value ignored. 

Likewise, we add the function call to the postcondition for Pop, also line 6, below:

1 procedure Pop (This : in out Stack;  Item : out Element) with
2   Pre    => not Empty (This),
3   Post   => not Full (This)
4             and Item = Top_Element (This)'Old 
5             and Extent (This) = Extent (This)'Old - 1
6             and Unchanged (This, Within => This'Old),
7   Global => null;

In contrast with procedure Push, on line 6 the values This and This'Old are passed to the opposite parameters. In this case the new state of the stack, with one less array component logically present, is used as the invariant to compare against. Line 6 expresses the requirement that the new state's content is the same as the old state's content except for the one array component no longer present. Because the function only compares the number of array components within the Invariant_Part, the additional top element value within This'Old is ignored. 

Note that we must apply ‘Old to This in the calls to Unchanged in both procedures, rather than to some function result. That is unavoidable because we must refer to the prior state of the one stack object being compared.

With those additions to the postconditions, we get no further messages from GNATprove for the main procedure, including the assertions about the states resulting from a series of calls. We have achieved the Gold level. 

Some additional postconditions are possible, however, for completeness. We can also use function Unchanged in a new postcondition for the "=" function:

1 function "=" (Left, Right : Stack) return Boolean with
2    Post => "="'Result = (Extent (Left) = Extent (Right)
3                          and then Unchanged (Left, Right));

This postcondition expresses an implication: whenever the “=” function comparing the two stacks returns True, the Extent (i.e., Top) values will be the same and Unchanged will hold. In other words, they will have the same logical size and content. Whenever “=” returns False, the conjunction will not hold either. Note that on line 3, neither argument to function Unchanged has ‘Old applied because we are comparing two distinct stack objects, rather than different states for one object. The sizes will be the same (from line 2) so Unchanged will compare the entire slices logically contained by Left and Right.

We can use the same implication approach in a new postcondition for function Empty:

function Empty (This : Stack) return Boolean with
       Post => Empty'Result = (Extent (This) = 0);

Whenever Empty returns True, Top (i.e., Extent) will be zero, otherwise Top will not be zero.

4.6 Platinum Implementation

Our Gold level implementation also achieved the Platinum level because our postconditions fully covered the functional requirements and there were no abstract properties to be proven. Achieving the Platinum level is rare in itself, all the more so using the Gold level implementation. Doing so is possible in no small part because stacks are simple abstractions.

5. Concluding Remarks

We have shown how to transition an Ada implementation of a sequential, bounded stack abstract data type into a SPARK implementation supporting formal proof of the abstraction’s semantics. The full project, including sources for each level, is available on GitHub.

Overall, the changes were relatively simple and brief. The truly difficult part of the effort, of course, was determining what changes to make in order to satisfy the provers. That difficulty is somewhat understated in the text because we go directly from specific problems to their solutions, without indicating the time and effort required to identify those solutions. Similarly, we elided parts of the GNATprove messages to highlight the parts indicating the actual problem. Knowing how to interpret the messages, the counterexamples, and possible explanations is a skill that comes with experience. 

In addition, we must point out that stacks are simple, especially bounded stacks based on arrays. The relative ease in reaching the Gold or Platinum levels would likely not be possible for other data structures. In particular, a “model” of the abstraction’s state will often be required, resulting in complexity well beyond the Unchanged function that was sufficient for bounded stacks. See, for example, the formal containers shipped with GNAT.

Thanks are due to Yannick Moy and the entire SPARK team at AdaCore for their essential help. 

6. Gold/Platinum Implementation Listing

The following is the generic package declaration and body for the Platinum level implementation. As described earlier, the Platinum level implementation is the same as the Gold level implementation. We have kept the two versions in separate packages and files. 

Rather than using the "_Platinum" suffix in this unit name, we use the name shown below because this is the final, production-ready version and, as such, should include the indicator of whether it is thread-safe (it is not). 

The Platinum version, like the Gold version, did not include the Depends contracts. In the source directory we include a version with those contracts, for completeness.

generic

   type Element is private;
   --  The type of values contained by objects of type Stack

   Default_Value : Element;
   --  The default value used for stack contents. Never
   --  acquired as a value from the API, but required for
   --  initialization in SPARK.

package Sequential_Bounded_Stacks is

   pragma Unevaluated_Use_of_Old (Allow);

   subtype Element_Count is Integer range 0 .. Integer'Last - 1;
   --  The number of Element values currently contained
   --  within any given stack. The lower bound is zero
   --  because a stack can be empty. We limit the upper
   --  bound (minimally) to preclude overflow issues.

   subtype Physical_Capacity is
      Element_Count range 1 .. Element_Count'Last;
   --  The range of values that any given stack object can
   --  specify (via the discriminant) for the number of
   --  Element values the object can physically contain.
   --  Must be at least one.

   type Stack (Capacity : Physical_Capacity) is private
      with Default_Initial_Condition => Empty (Stack);

   procedure Push (This : in out Stack;  Item : Element) with
     Pre    => not Full (This),
     Post   => not Empty (This)
               and then Top_Element (This) = Item
               and then Extent (This) = Extent (This)'Old + 1
               and then Unchanged (This'Old, Within => This),
     Global => null;

   procedure Pop (This : in out Stack;  Item : out Element) with
     Pre    => not Empty (This),
     Post   => not Full (This)
               and Item = Top_Element (This)'Old
               and Extent (This) = Extent (This)'Old - 1
               and Unchanged (This, Within => This'Old),
     Global => null;

   function Top_Element (This : Stack) return Element with
     Pre    => not Empty (This),
     Global => null;
   --  Returns the value of the Element at the "top" of This
   --  stack, i.e., the most recent Element pushed. Does not
   --  remove that Element or alter the state of This stack
   --  in any way.

   overriding function "=" (Left, Right : Stack) return Boolean with
     Post   => "="'Result = (Extent (Left) = Extent (Right)
                             and then Unchanged (Left, Right)),
     Global => null;

   procedure Copy (Destination : in out Stack; Source : Stack) with
     Pre    => Destination.Capacity >= Extent (Source),
     Post   => Destination = Source,
     Global => null;
   --  An alternative to predefined assignment that does not
   --  copy all the values unless necessary. It only copies
   --  the part "logically" contained, so is more efficient
   --  when Source is not full.

   function Extent (This : Stack) return Element_Count with
     Global => null;
   --  Returns the number of Element values currently
   --  contained within This stack.

   function Empty (This : Stack) return Boolean with
     Post   => Empty'Result = (Extent (This) = 0),
     Global => null;

   function Full (This : Stack) return Boolean with
     Post   => Full'Result = (Extent (This) = This.Capacity),
     Global => null;

   procedure Reset (This : in out Stack) with
     Post   => Empty (This),
     Global => null;

   function Unchanged (Invariant_Part, Within : Stack) return Boolean
     with Ghost;
   --  Returns whether the Element values of Invariant_Part
   --  are unchanged in the stack Within, e.g., that inserting
   --  or removing an Element value does not change the other
   --  Element values held.


private

   type Content is array (Physical_Capacity range <>) of Element;

   type Stack (Capacity : Physical_Capacity) is record
      Values : Content (1 .. Capacity) := (others => Default_Value);
      Top    : Element_Count := 0;
   end record with
     Predicate => Top in 0 .. Capacity;

   -- Extent --

   function Extent (This : Stack) return Element_Count is
     (This.Top);

   -- Empty --

   function Empty (This : Stack) return Boolean is
     (This.Top = 0);

   -- Full --

   function Full (This : Stack) return Boolean is
     (This.Top = This.Capacity);

   -- Top_Element --

   function Top_Element (This : Stack) return Element is
     (This.Values (This.Top));

   -- "=" --

   function "=" (Left, Right : Stack) return Boolean is
     (Left.Top = Right.Top and then
      Left.Values (1 .. Left.Top) = Right.Values (1 .. Right.Top));

   -- Unchanged --

   function Unchanged (Invariant_Part, Within : Stack) return Boolean is
     (Invariant_Part.Top <= Within.Top and then
        (for all K in 1 .. Invariant_Part.Top =>
            Within.Values (K) = Invariant_Part.Values (K)));

end Sequential_Bounded_Stacks;

The package body:

package body Sequential_Bounded_Stacks is

   -- Reset --

   procedure Reset (This : in out Stack) is
   begin
      This.Top := 0;
   end Reset;

   -- Push --

   procedure Push (This : in out Stack; Item : in Element) is
   begin
      This.Top := This.Top + 1;
      This.Values (This.Top) := Item;
   end Push;

   -- Pop --

   procedure Pop (This : in out Stack; Item : out Element) is
   begin
      Item := This.Values (This.Top);
      This.Top := This.Top - 1;
   end Pop;

   -- Copy --

   procedure Copy (Destination : in out Stack; Source : Stack) is
      subtype Contained is Element_Count range 1 .. Source.Top;
   begin
      Destination.Top := Source.Top;
      Destination.Values (Contained) := Source.Values (Contained);
   end Copy;

end Sequential_Bounded_Stacks;
An Introduction to Contract-Based Programming in Ada Tue, 21 Apr 2020 08:26:00 -0400 Abe Cohen

One of the most powerful features of Ada 2012* is the ability to specify contracts on your code. Contracts describe conditions that must be satisfied upon entry (preconditions) and upon exit (postconditions) of your subprogram. Preconditions describe the context in which the subprogram must be called, and postconditions describe conditions that will be adhered to by the subprogram’s implementation. If you think about it, contracts are a natural evolution of Ada’s core design principle: to encourage developers to be as explicit as possible in their expressions, putting both the compiler/toolchain and other developers in the best position to help them develop better code.

The addition of contracts to a standard Ada application accomplishes several elusive objectives: contracts act as a static method of handling potential errors, serve as documentation that is updated and checked for consistency by the compiler alongside your code, and provide static analysis tools like SPARK and CodePeer with more application-specific detail they can use to produce higher-quality results. So let’s get started.

package Graph is
   type Graph_Record (Nodes : Positive) is record
      Adj_List : Adjacency_List (1 .. Nodes);
      Node_List : Node_List_Type (1 .. Nodes);
   end record;
   procedure Set_Source (Graph : in out Graph_Record; ID : Positive);
end Graph;

package body Graph is
   procedure Set_Source (Graph : in out Graph_Record; ID : Positive) is
      Graph.Node_List (ID).dist := 0;
   end Set_Source;
end Graph;

Here is a package with a simple subprogram that sets a property of a graph. One thing to notice about the graph from its definition is that its nodes are labelled with IDs from 1 to the number of nodes. In order to make sure that our subprogram doesn’t index into the graph’s list of nodes out of bounds, we might do a number of things. We can change Set_Source to a function that returns a boolean - True if the operation was successful, False if the supplied ID is out of range. Another option is to do nothing and make use of the default compiler-inserted array access check (I'll get into the drawbacks of this later), or we can even insert an explicit defensive check of our own if we want to raise a specific exception with a specific message. 

However, all of these approaches come with two fundamental issues: they require additional documentation to be effective, and they rely on checks and/or exception handlers at run-time to prevent errors which can hurt performance. By adding a simple precondition, we can mitigate both of these problems at the same time.

procedure Set_Source (Graph : in out Graph_Record; ID : Positive)
     with Pre => (ID <= Graph.Nodes);

The documentation issue is more obvious, so I’ll address that one first. Anyone using this API, even someone without access to the implementation, now knows that this subprogram expects to be called with the ID parameter in a specific range, yet no additional documentation is needed to express this. If we were using conventional methods, we would need another way to tell API users how to correctly use this subprogram. However, using contracts in this manner integrates the task of writing and updating documentation with the subprogram’s design process. On top of that, if the subprogram were to be redesigned, say if the Graph record type was broadened to accept characters as indices for Node_List, those new requirements would be reflected in the new preconditions, with no additional information needed.

In addition to helping other developers use your subprograms properly, contracts introduce a static methodology for dealing with errors. Conventionally, errors are dealt with via defensive checks and exception handlers at run-time. Particularly in an embedded context, where the final executable size in memory and computational demands need to be optimized, the reduction of run-time code is essential to dealing with hardware constraints. Accordingly, many programs have no choice but to trust that their testing infrastructure was sufficient and ship code with most run-time checks turned off. However, it’s not revolutionary to say that all programs wish their applications ran safely with less overhead. Using contracts provides an elegant way for developers higher in the call chain to take appropriate action to avoid violating known conditions that will cause program failure without adding run-time code at every level, as would happen with either explicit or compiler-inserted defensive checks or propagating exception handlers.

Sometimes though, as in the case of input validation, there’s no way to get around defensive code at run-time. Contracts provide the flexibility to add these checks both broadly and on a granular level. If you pass the ‘-gnata’ switch to the compiler, it will insert additional checks assuring your contracts are not violated alongside the standard Ada run-time checks, like range checks on types. However, if you just want to enable a single defensive check, you can do something like this:

pragma Assertion_Policy (Pre => Check, Post => Ignore);
procedure Set_Source (Graph : in out Graph_Record; ID : Positive) is
begin
   Graph.Node_List (ID).dist := 0;
end Set_Source;

The use of contracts can also increase organizational confidence that testing was in fact sufficient, and accounted for all the potential ways in which the application could fail. If you’re not at the level of statically verifying contracts to be unbreakable within the context of your application with SPARK, other static analysis tools, like CodePeer, can benefit from the extra information contracts provide about the intended use of your code. This is because in this context, contracts are language-level proxies of your application’s requirements, and CodePeer, like many other tools, only works on language-level constructs.

When CodePeer analyzes a subprogram, it generates implicit pre- and postconditions as part of the analysis. If one of those implicit contracts might be violated, you might get a message like this:

medium: precondition (array index check) might fail on call to graph.set_source: requires ID <= Graph.Nodes

However, when you supply CodePeer with your own contracts to compare against, it can identify situations in which user-supplied contracts contradict some of its own, leading to more specific, more actionable findings and fewer false positives. To learn more about contracts, check out the relevant chapter or section of the SPARK documentation.

*Contracts can also be used via pragma Precondition and pragma Postcondition with older versions of GNAT, or approximated with pragma Assert as defined in Ada 2005. Learn more about that here.

Ada on the ESP8266 Thu, 09 Apr 2020 07:31:00 -0400 Johannes Kliemann

$ llvm-gnatmake -c unit.adb -cargs --target=xtensa -mcpu=esp8266
$ cd /path/to/Arduino/libraries
$ git clone --recursive
$ cd esp8266-ada-example
$ make
$ screen /dev/ttyUSB0 115200
Make with Ada!
Make with Ada!
Make with Ada!
Make with Ada!
A Trivial File Transfer Protocol Server written in Ada Tue, 07 Apr 2020 07:50:00 -0400 Martyn Pike

For an upcoming project, I needed a simple way of transferring binary files over an Ethernet connection with minimal (if any at all) user interaction.

A protocol that's particularly appropriate for this kind of usage is the Trivial File Transfer Protocol (TFTP).  You can find a high level description on Wikipedia and a more detailed breakdown of the protocol here.  
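For a sense of the protocol's simplicity: a read request (RRQ) is a single datagram holding a 2-byte opcode followed by the NUL-terminated filename and transfer mode (per RFC 1350). A Python sketch (illustrative only, not part of the Ada server) that builds such a packet:

```python
import struct

OP_RRQ = 1  # TFTP read-request opcode (RFC 1350)

def build_rrq(filename, mode="octet"):
    # 2-byte big-endian opcode, then NUL-terminated strings
    return (struct.pack("!H", OP_RRQ)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

# A read-only server answers these with DATA packets of up to 512 bytes
pkt = build_rrq("payload.bin")
```

The entire transfer then proceeds as a lock-step exchange of DATA and ACK datagrams, which is what makes TFTP so easy to implement.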

My previous experience with this protocol has mostly been within test rig environments,  where a target computer accesses its operational software payload from a TFTP server at boot-time.  The beauty of this approach is that it allows different payloads to be used between reboots by switching the files being served.  That is exactly how this server will be used in my forthcoming project, also to be documented on

The Ada TFTP server will be hosted on Ubuntu Linux and support a subset of the transactions provided by the protocol.  For example, the ability for the client to write a file to the server will not be supported.

There are a number of TFTP servers available for Ubuntu Linux. However I wanted to implement my own in Ada, mainly to prove it could be done, but also to test a couple of different options for handling UDP/IP transactions from within Ada applications.

To do this, I needed something to mimic the capabilities of GNAT.Sockets.

To start with, I reviewed the current catalogue of available Ada software on Github and the repository from my good friends over at CodeLabs, who happen to be the developers of the Muen separation kernel which will also figure in my forthcoming project.  

The CodeLabs team had exactly what I was looking for: Anet.  I highly recommend that you review its code on the CodeLabs repository.   That repository is now my go-to for (publicly available and open source) high quality examples of Ada and SPARK software development.  Many of my future blog posts about applying the AdaCore tools and techniques will be oriented around examples based on CodeLabs source code.

Back to my TFTP server.  Since I had two options for the UDP/IP transaction functionality, I decided to set about creating one version of my server using GNAT.Sockets and another using Anet from CodeLabs.

If you want to get ahead of the game, the code is available on my GitHub repository for all three parts of this blog series.

The first step towards my objective was to obtain Anet, by cloning the repository, building the library and installing it in a location where the GNAT build tools can locate it.

All the code for this blog post can be built with GNAT Community 2019 as well as GNAT Pro.

The TFTP server code has been reviewed by CodePeer for the detection of run-time vulnerabilities that may lead to unexpected code execution paths and by GNATcheck against a suitable coding standard.

Both CodePeer and GNATcheck are available from AdaCore as professionally assured products and can be qualified as TQL-5 review tools for use by DO-178B/C projects.

The following command sequence installs the Anet libraries into ~/sw/adalibs and with a suitable GNAT compiler in my PATH,  I would execute the following commands:

git clone
cd anet
git checkout master
make all
make PREFIX=~/sw/adalibs install

After doing this, there will be a shared library stored in the ~/sw/adalibs/lib directory.

Before writing code that will use this library,  the GPR_PROJECT_PATH environment variable needs to identify the ~/sw/adalibs/lib/gnat directory.

This can be done by using the following:

export GPR_PROJECT_PATH=~/sw/adalibs/lib/gnat

I encountered a slight learning curve with the Anet API because its architecture differs from that of GNAT.Sockets.  However, the code documentation is very good and before I knew it I had a proof of concept working.

To build the code from Github (with the same GNAT compiler in the path which was used to build Anet), you can use the following:

export GPR_PROJECT_PATH=~/sw/adalibs/lib/gnat
git clone
cd adatftpd-anet
make all

The makefile and GNAT project file are as follows:

# Assumes GPR_PROJECT_PATH includes Anet installation
# Try to use the same GNAT to build adatftpd that was used to
# build Anet.
all:
	gprbuild -p -P adatftpd.gpr

check:
	gnatcheck -P adatftpd.gpr --show-rule -rules -from=gnatcheck.rules
	codepeer -P adatftpd.gpr -level 2 -output-msg

clean:
	gprclean -q -P adatftpd.gpr
with "anet.gpr";

project Adatftpd is

   for Languages use ("Ada");
   for Source_Dirs use ("src/**");
   for Object_Dir use "obj";
   for Exec_Dir use "test";
   for Main use ("main.adb");

   package Builder is
      for Executable ("main.adb") use "adatftpd-anet";
   end Builder;

   package Compiler is
      for Switches ("ada") use ("-gnata");
   end Compiler;

end Adatftpd;

Assuming no errors occurred during this sequence of commands, the 'test' sub-directory will contain the 'adatftpd-anet' executable.

You can also check out the verification program I wrote for my TFTP server, which is also available on my Github repository.

I'd welcome feedback and collaboration on either of these TFTP related projects.

Proving properties of constant-time crypto code in SPARKNaCl Thu, 02 Apr 2020 08:15:00 -0400 Roderick Chapman

#define FOR(i,n) for (i = 0; i < n; ++i)
#define sv static void
typedef unsigned char u8;
typedef long long i64;
typedef i64 gf[16];
sv pack25519(u8 *o, const gf n);
subtype I32 is Integer_32;
subtype N32 is I32 range 0 .. I32'Last;
subtype I64 is Integer_64;

subtype Index_32 is I32 range 0 .. 31;

type Byte_Seq is array (N32 range <>) of Byte;
subtype Bytes_32 is Byte_Seq (Index_32);

--  "LM"   = "Limb Modulus"
--  "LMM1" = "Limb Modulus Minus 1"
LM   : constant := 65536;
LMM1 : constant := 65535;
--  "R2256" = "Remainder of 2**256 (modulo 2**255-19)"
R2256 : constant := 38;

--  "Maximum GF Limb Coefficient"
MGFLC : constant := (R2256 * 15) + 1;

--  "Maximum GF Limb Product"
MGFLP : constant := LMM1 * LMM1;

subtype GF_Any_Limb is I64 range -LM .. (MGFLC * MGFLP);

type GF is array (Index_16) of GF_Any_Limb;

subtype GF_Normal_Limb is I64 range 0 .. LMM1;

subtype Normal_GF is GF
  with Dynamic_Predicate =>
     (for all I in Index_16 => Normal_GF (I) in GF_Normal_Limb);
--  Reduces N modulo (2**255 - 19) then packs the
--  value into 32 bytes little-endian.
function Pack_25519 (N : in Normal_GF) return Bytes_32
  with Global => null;
sv pack25519 (u8 *o, const gf n)
{
  int i, j, b;
  gf m, t;
  FOR(i, 16) t[i] = n[i];
  FOR(j, 2) {
    m[0] = t[0] - 0xffed;
    for(i=1;i<15;i++) {
      m[i] = t[i] - 0xffff - ((m[i-1]>>16) & 1);
      m[i-1] &= 0xffff;
    }
    m[15] = t[15] - 0x7fff - ((m[14]>>16) & 1);
    b = (m[15]>>16) & 1;
    m[14] &= 0xffff;
    sel25519 (t, m, 1-b);
  }
  FOR(i, 16) {
    o[2*i] = t[i] & 0xff;
    o[2*i+1] = t[i] >> 8;
  }
}
sv sel25519 (gf p, gf q, int b);
--  Constant time conditional swap of P and Q.
procedure CSwap (P    : in out GF;
                 Q    : in out GF;
                 Swap : in     Boolean)
  with Global => null,
       Contract_Cases =>
         (Swap     => (P = Q'Old and Q = P'Old),
          not Swap => (P = P'Old and Q = Q'Old));
if Swap then
   Temp := P;
   P := Q;
   Q := Temp;
end if;
sv sel25519 (gf p, gf q, int b)
{
  i64 t, i, c = ~(b-1);
  FOR(i, 16) {
    t = c & (p[i]^q[i]);
    p[i] ^= t;
    q[i] ^= t;
  }
}
type Bit_To_Swapmask_Table is array (Boolean) of U64;
Bit_To_Swapmask : constant Bit_To_Swapmask_Table :=
  (False => 16#0000_0000_0000_0000#,
   True  => 16#FFFF_FFFF_FFFF_FFFF#);
pragma Assume
  (for all K in I64 => To_I64 (To_U64 (K)) = K);
procedure CSwap (P    : in out GF;
                 Q    : in out GF;
                 Swap : in     Boolean)
is
   T : U64;
   C : U64 := Bit_To_Swapmask (Swap);
begin
   for I in Index_16 loop
      T := C and (To_U64 (P (I)) xor To_U64 (Q (I)));
      P (I) := To_I64 (To_U64 (P (I)) xor T);
      Q (I) := To_I64 (To_U64 (Q (I)) xor T);

      pragma Loop_Invariant
        (if Swap then
           (for all J in Index_16 range 0 .. I =>
                (P (J) = Q'Loop_Entry (J) and
                 Q (J) = P'Loop_Entry (J)))
         else
           (for all J in Index_16 range 0 .. I =>
                (P (J) = P'Loop_Entry (J) and
                 Q (J) = Q'Loop_Entry (J))));
   end loop;
end CSwap;
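The mask-and-XOR trick is straightforward to model outside Ada/SPARK. A Python sketch (illustrative only; small non-negative values, so the signed/unsigned conversions above don't arise) showing that the same instruction sequence either swaps both arrays or leaves them untouched:

```python
MASK = {False: 0x0000_0000_0000_0000, True: 0xFFFF_FFFF_FFFF_FFFF}

def cswap(p, q, swap):
    c = MASK[swap]
    for i in range(len(p)):
        t = c & (p[i] ^ q[i])  # p[i] ^ q[i] when swapping, else 0
        p[i] ^= t
        q[i] ^= t

p, q = [1, 2, 3], [4, 5, 6]
cswap(p, q, swap=True)
assert (p, q) == ([4, 5, 6], [1, 2, 3])
cswap(p, q, swap=False)  # same work performed, no visible change
assert (p, q) == ([4, 5, 6], [1, 2, 3])
```

Because the loop body is identical for both values of Swap, there is no data-dependent branch for an attacker to observe.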
--  Subtracting P twice from a Normal_GF might result
--  in a GF where limb 15 can be negative with lower bound -65536
subtype Temp_GF_MSL is I64 range -LM .. LMM1;
subtype Temp_GF is GF
  with Dynamic_Predicate =>
    (Temp_GF (15) in Temp_GF_MSL and
      (for all K in Index_16 range 0 .. 14 =>
         Temp_GF (K) in GF_Normal_Limb));

procedure Subtract_P (T         : in     Temp_GF;
                      Result    :    out Temp_GF;
                      Underflow :    out Boolean)
  with Global => null,
       Pre    => T (15) >= -16#8000#,
       Post   => (Result (15) >= T (15) - 16#8000#);
subtype I64_Bit is I64 range 0 .. 1;

procedure Subtract_P (T         : in     Temp_GF;
                      Result    :    out Temp_GF;
                      Underflow :    out Boolean)
is
   Carry : I64_Bit;
   R     : GF;
begin
   R := (others => 0);

   --  Limb 0 - subtract LSL of P, which is 16#FFED#
   R (0) := T (0) - 16#FFED#;

   --  Limbs 1 .. 14 - subtract FFFF with carry
   for I in Index_16 range 1 .. 14 loop
      Carry     := ASR_16 (R (I - 1)) mod 2;
      R (I)     := T (I) - 16#FFFF# - Carry;
      R (I - 1) := R (I - 1) mod LM;

      pragma Loop_Invariant
        (for all J in Index_16 range 0 .. I - 1 =>
           R (J) in GF_Normal_Limb);
      pragma Loop_Invariant (T in Temp_GF);
   end loop;

   --  Limb 15 - Subtract MSL (Most Significant Limb)
   --  of P (16#7FFF#) with carry.
   --  Note that Limb 15 might become negative on underflow
   Carry  := ASR_16 (R (14)) mod 2;
   R (15) := (T (15) - 16#7FFF#) - Carry;
   R (14) := R (14) mod LM;

   --  Note that R (15) is not normalized here, so that the
   --  result of the first subtraction is numerically correct
   --  as the input to the second.
   Underflow := R (15) < 0;
   Result    := R;
end Subtract_P;
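
The limb arithmetic above can be modeled with plain Python integers to see exactly what Subtract_P computes. This is an illustrative sketch, not the SPARK source (the helper names `subtract_p`, `to_limbs`, and `to_int` are made up here): p = 2^255 - 19 is held as 16 little-endian 16-bit limbs (16#FFED#, fourteen 16#FFFF#s, 16#7FFF#) and subtracted with borrow propagation, leaving limb 15 unnormalized on underflow just as the Ada code does.

```python
# Illustrative model (not the SPARK source) of the limb-wise subtraction.
# p = 2^255 - 19 as 16 little-endian 16-bit limbs.
P_LIMBS = [0xFFED] + [0xFFFF] * 14 + [0x7FFF]

def subtract_p(t):
    """Return (limbs, underflow) for t - p, with t given as 16 limbs."""
    r = [0] * 16
    borrow = 0
    for i in range(16):
        d = t[i] - P_LIMBS[i] - borrow  # like R (I) := T (I) - ... - Carry
        borrow = 1 if d < 0 else 0      # like Carry := ASR_16 (...) mod 2
        r[i] = d % 0x10000              # normalize limbs as we go
    if borrow:
        r[15] -= 0x10000                # limb 15 stays negative on underflow
    return r, borrow == 1

def to_limbs(n):
    return [(n >> (16 * i)) & 0xFFFF for i in range(16)]

def to_int(limbs):
    return sum(l << (16 * i) for i, l in enumerate(limbs))
```

Because limb 15 is left unnormalized, `to_int` of the result always equals t - p, whether or not the subtraction underflowed, which is what makes chaining two subtractions numerically correct.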
function Pack_25519 (N : in Normal_GF) return Bytes_32
is
   L      : GF;
   R1, R2 : Temp_GF;
   First_Underflow  : Boolean;
   Second_Underflow : Boolean;
begin
   L := N;
   Subtract_P (L,  R1, First_Underflow);
   Subtract_P (R1, R2, Second_Underflow);
   CSwap (R1, R2, Second_Underflow);
   CSwap (L,  R2, First_Underflow);
   return To_Bytes_32 (R2);
end Pack_25519;
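
The selection logic in Pack_25519 can be checked with a plain-integer sketch (illustrative only; the real code works on limbs and the real CSwap is branch-free and constant-time): subtract p twice, then use the two conditional swaps to keep whichever of the three candidates landed in the canonical range.

```python
P = 2**255 - 19  # the curve25519 field prime

def cswap(a, b, swap):
    """Model of CSwap: exchange a and b when swap is true.
    (The real code does this branch-free with masks.)"""
    return (b, a) if swap else (a, b)

def pack_25519(l):
    """Reduce 0 <= l < 2**256 into 0 .. P - 1 via two subtractions of P."""
    r1 = l - P
    first_underflow = r1 < 0
    r2 = r1 - P
    second_underflow = r2 < 0
    r1, r2 = cswap(r1, r2, second_underflow)  # undo the second subtraction
    l, r2 = cswap(l, r2, first_underflow)     # undo both if the first underflowed
    return r2
```

Two subtractions suffice because any input below 2^256 is below 3p, so after removing p at most twice the remainder is canonical.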
sparknacl-utils.adb:197:27: medium: predicate check might fail
--  Result := T - P;
--  if     Underflow, then Result is not a Normal_GF
--  if not Underflow, then Result is     a Normal_GF
procedure Subtract_P (T         : in     Temp_GF;
                      Result    :    out Temp_GF;
                      Underflow :    out Boolean)
  with Global => null,
       Pre    => T (15) >= -16#8000#,
       Post   => (Result (15) >= T (15) - 16#8000#) and then
                 (Underflow /= (Result in Normal_GF));
R (14) := R (14) mod LM;
R (15) := R (15) mod LM;
sparknacl-utils.adb:139:23: medium: predicate check might fail
Time travel debugging in GNAT Studio with GDB and RR Tue, 17 Mar 2020 09:18:07 -0400 Ghjuvan Lacambre
with Ada.Numerics.Discrete_Random;

procedure Main is

   package Rand_Positive is new Ada.Numerics.Discrete_Random(Positive);
   Generator : Rand_Positive.Generator;
   Error : exception;
   Bug : Boolean := False;

   procedure Make_Bug is
   begin
      Bug := True;
   end Make_Bug;

   procedure Do_Bug is
   begin
      Bug := True;
   end Do_Bug;


begin
   Rand_Positive.Reset(Generator);
   for I in 1..10 loop
      if Rand_Positive.Random(Generator) < (Positive'Last / 100) then
         if Rand_Positive.Random(Generator) < (Positive'Last / 2) then
            Make_Bug;
         else
            Do_Bug;
         end if;
      end if;
   end loop;

   if Bug then
      raise Error;
   end if;

end Main;
Android application with Ada and WebAssembly Thu, 12 Mar 2020 10:08:19 -0400 Maxim Reznik

Having previously shown how to create a Web application in Ada, it's not so difficult to create an Android application in Ada. Perhaps the simplest way is to install Android Studio. Then just create a new project and choose "Empty Activity". Open the layout, delete TextView and put WebView instead.

In onCreate function write the initialization code:

WebView webView = (WebView) findViewById(;  // id of the WebView in the layout
WebSettings settings = webView.getSettings();

To make WebView work offline, you need to provide content. One way to do this is just to put content in the asset folder and open it as a URL in WebView. When a user starts the application, WebView will load HTML and corresponding JavaScript. Then JavaScript loads WebAssembly and so, actually, launches Ada code. But it can't use a file:/// scheme to load JavaScript and WebAssembly files because of the default security settings. So we trick WebView by intercepting requests and also provide correct MIME types for them. We do this using the shouldInterceptRequest method of the WebViewClient class to intercept any request to HTML/WASM/JS/JPEG resources and load the corresponding file from the asset folder:

public WebResourceResponse shouldInterceptRequest(WebView view,
                                                  WebResourceRequest request) {
    String path = request.getUrl().getLastPathSegment();

    try {
        String mime;
        AssetManager assetManager = getAssets();

        if (path.endsWith(".html")) mime = "text/html";
        else if (path.endsWith(".wasm")) mime = "application/wasm";
        else if (path.endsWith(".mjs")) mime = "text/javascript";
        else if (path.endsWith(".jpg")) mime = "image/jpeg";
        else return super.shouldInterceptRequest(view, request);

        InputStream input ="www/" + path);

        return new WebResourceResponse(mime, "utf-8", input);
    } catch (IOException e) {
        ByteArrayInputStream result = new ByteArrayInputStream
                (("X:" + path + " E:" + e.toString()).getBytes());
        return new WebResourceResponse("text/plain", "utf-8", result);
    }
}

Now connect this code to WebView, like this:

webView.setWebViewClient(new WebViewClient() {
    public WebResourceResponse shouldInterceptRequest(WebView view,
                                                      WebResourceRequest request) {
        // ... body as shown above ...
    }
});

For debug purposes, let's connect the WebView console to the Android log by overriding the onConsoleMessage method of a WebChromeClient:

public boolean onConsoleMessage(ConsoleMessage cm) {
    Log.d("MyApplication", cm.message() + " -- From line "
            + cm.lineNumber() + " of "
            + cm.sourceId());
    return true;
}

Now we're able to build and run an Android Package. Here is how it looks on the Android Studio emulator (it's been tested on my phone too!):

If you need the complete code, there's a repository on GitHub!

PS: This article doesn't discuss how we produced WebAssembly from Ada code for running with WebGL integration. We will write a follow-up post about that soon!

Making an RC Car with Ada and SPARK Tue, 10 Mar 2020 09:52:00 -0400 Pat Rogers

As a demonstration for the use of Ada and SPARK in very small embedded targets, I created a remote-controlled (RC) car using Lego NXT Mindstorms motors and sensors but without using the Lego computer or Lego software. I used an ARM Cortex System-on-Chip board for the computer, and all the code -- the control program, the device drivers, everything -- is written in Ada. Over time, I’ve upgraded some of the code to be in SPARK. This blog post describes the hardware, the software, the SPARK upgrades, and the repositories that are used and created for this purpose.

Why use Lego NXT parts? The Lego NXT robotics kit was extremely popular. Many schools and individuals still have kits and third-party components. Even if the latest Lego kit is much more capable, the ubiquity and low cost of the NXT components make them an attractive basis for experiments and demonstrations.

In addition, there are many existing NXT projects upon which to base demonstrations using Ada. For example, my RC car is based on the third-party HiTechnic IR RC Car. The car turns extremely well because it has an Ackerman steering mechanism, so that the inside wheel turns sharper than the outside wheel, and a differential on the drive shaft so that the drive wheels can rotate at different speeds during a turn. The original car uses the HiTechnic IR (infra-red) receiver to communicate with a Lego remote control. This new car uses that same receiver and controller, but also supports another controller communicating over Bluetooth LE.

Replacing the NXT Brick

The NXT embedded computer controlling NXT robots is known as the “brick,” probably because of its appearance. (See Figure 1.) It consists of an older 48 MHz ARMv7, with 256 KB of FLASH and 64 KB of RAM, as well as an AVR co-processor. The brick enclosure provides an LCD screen, a speaker, Bluetooth, and four user-buttons, combined with the electronics required to interface to the external world. A battery pack is on the back.

Figure 1: NXT Brick (Source:

Our replacement computer is one of the “Discovery Kit” products from STMicroelectronics. The Discovery Kits have ARM Cortex processors and include many on-package devices for interfacing to the external world, including A/D and D/A converters, timers, UARTs, DMA controllers, I2C and SPI communication, and others. Sophisticated external components are also included, depending upon the specific kit.

Specifically, we use the STM32F4 Discovery Kit, which has a Cortex M4 MCU running at up to 168 MHz, a floating-point co-processor, a megabyte of FLASH and 192 KB of RAM. It also includes an accelerometer, MEMS microphone, audio codec, a user button, and four user LEDs. (See figure 2.) It is very inexpensive, approximately $15. Details are available here:

Figure 2 STM32F4 Discovery board with labels for some on-board devices

I made one change to the Discovery Kit board as received from the factory. Because the on-package devices, such as the serial ports, I2C devices, timers, etc. all share potentially overlapping groups of GPIO pins, and because not all pins are available on the headers, not all the pins required were exclusively available for all the devices needed for the RC car. Ultimately, I found a set of pin allocations that would almost work, but I needed pin PA0 to do it. However, pin PA0 is dedicated to the blue User button by a solder bridge on the underside of the board. I removed that solder bridge to make PA0 available. Of course, doing so disabled the blue User button but I didn’t need it for this project.

Replacing the NXT brick also removed the internal interface electronics for the motors and sensors. I used a combination of a third-party board and hand-made circuits to replace them. A brief examination of the motors will serve to explain why the additional board was chosen.

The Lego Mindstorms motors are 9-volt DC motors with a precise rotation sensor and significant gear reduction producing high torque. The motors rotate at a rate relative to the power applied and can rotate in either direction. The polarity of the power lines controls the rotation direction: positive rotates one way, negative rotates the other way.

Figure 3: NXT motor internals. (Source: LEGO)

Figure 3 illustrates the partial internals of the NXT motor, including the gear train in light blue, and the rotation sensor to the left in dark blue, next to the motor itself in dark orange. (The dark gray part at far left is the connector housing.)

I mentioned that the polarity of the applied power determines the rotation direction. Controlling that polarity requires an external circuit, specifically an “H-bridge” circuit.

Figure 4: H-bridge circuit showing power source, motor, and switches. (Source:, created by Cyril BUTTAY)

Figure 4 shows the functional layout of the H-bridge circuit, in particular the arrangement of the four switches S1 through S4 around the motor M. By selectively closing two switches and leaving the other two open we can control the direction of the current flow, and thereby control the direction of the motor rotation.

Figure 5: H-bridge circuit showing direction options. (Source:, created by Cyril BUTTAY)

Figure 5 illustrates two of the three useful switch configurations. The red line shows the current flow. Another option is to close two switches on the same side and end, in which case the rotor will “lock” in place. Opening all the switches removes all power and thus does not cause rotation. The fourth possible combination, in which all switches are closed, is not used.
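
The switch combinations just described can be summarized as a small truth table. This is an illustrative Python sketch, with the switch numbering assumed from the usual H-bridge convention (S1/S2 form the left leg, top and bottom; S3/S4 the right leg), not taken from the figures themselves:

```python
# Toy truth table for the H-bridge of Figures 4 and 5.
# Each argument is True when the corresponding switch is closed.
def bridge_state(s1, s2, s3, s4):
    if (s1 and s2) or (s3 and s4):
        return "short"    # both switches of one leg closed: shorts the supply
    if s1 and s4:
        return "forward"  # current flows one way through the motor
    if s2 and s3:
        return "reverse"  # current flows the other way
    if (s1 and s3) or (s2 and s4):
        return "brake"    # motor terminals tied together: rotor locks
    return "coast"        # no closed path: no power, motor freewheels
```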

Rather than build my own H-bridge circuit I used a low-cost product dedicated to interfacing with NXT motors and sensors. In addition to the H-bridge circuits, they also provide filters for the rotation sensor’s discrete inputs so that noise does not result in too many false rotation counts. There are a number of these products available.

One such is the “Arduino NXT Shield Version 2” by TKJ Electronics in Denmark. The product is described in their blog, here: and is available for sale here: for a reasonable price.

Figure 6: NXT Shield V2, top-down view (Source: TKJ Electronics)

The “NXT Shield” can control two NXT motors and one sensor requiring 9 volts input, including a Mindstorms NXT Ultrasonic Sensor. Figure 6 shows the NXT Shield with the two standard NXT connectors on the left for the two motors, and the sensor connector on the right.

The kit requires assembly but it is just through-board soldering. As long as you get the diodes oriented correctly everything is straightforward. Figure 7 (below) shows our build, already located in an enclosure and connected to the Discovery Kit, power, two NXT motors, and the ultrasonic sensor.

Figure 7: Completed NXT Shield inside final enclosure

The in-coming 9 volts is routed to a DC power jack on the back of the enclosure, visible on the bottom left with red and black wires connecting it to the board. The 5 volts for the on-board electronics comes via the Discovery Kit header and is bundled with the white and green wires coming in through the left side in the figure. The enclosure itself is one of the “Make with Ada” boxes. “Make with Ada” is a competition offering serious prize money for cool projects using embedded targets and Ada. See for more information.

The power supply replacing the battery pack on the back of the NXT brick is an external battery intended for charging cell phones and tablets.

This battery provides separate connections for +5 and +9 (or +12) volts, which is very convenient: the +5V is provided via USB connector, which is precisely what the STM32F4 card requires, and both the NXT motors and the NXT ultrasonic sensor require +9 volts. The battery isn't light but holds a charge for a very long time, especially with this relatively light load. Note that the battery can also provide +12 volts instead of +9, selected by a physical slider switch on the side of the battery. Using +12 volts will drive the motors considerably faster and is (evidently) tolerated by the NXT Shield sensor circuit and the NXT Ultrasonic Sensor itself.

Finally, I required a small circuit supporting the I2C communication with the HiTechnic IR Receiver. The circuit is as simple as one can imagine: power, ground, and a pull-up resistor for each of the two I2C communication lines. These components are housed in the traditional Altoids tin and take power and ground from the Discovery Kit header pins. The communication lines go to specific GPIO header pins.

Figure 8: I2C Circuit for IR Receiver

All of these replacements and the overall completed car (known as "Bob"), are shown in the following images:

Figure 9: Final Assembly Front View
Figure 10: Final Assembly Rear View

Figure 10 shows the rear enclosure containing the NXT Shield board, labeled “Make With Ada” on the outside, and the Altoids tin on the side containing the small circuit for the IR receiver.

Here is the car in action:

Replacing the NXT Software

The Ada Drivers Library (ADL) provided by AdaCore and the Ada community supplies the device drivers for the timers, I2C, A/D and D/A converters, and other devices required to replace those in the NXT brick. The ADL supports a variety of development platforms from various vendors, including the STM32 series boards. The ADL is available on GitHub for both non-proprietary and commercial use here:

Replacing the brick will also require drivers for the NXT sensors and motors, software that is not included in the ADL. However, we can base them on the ADL drivers for our target board. For example, the motor rotary encoder driver uses the STM32 timer driver internally because those timers directly support quadrature rotation encoders. All these abstractions, including some that are not hardware specific, are in the Robotics with Ada repository: This repo supports the NXT motors and all the basic sensors, as well as some third-party sensors. Abstract base types are used for the more complex sensors so that new sensors can be created easily using inheritance.

In addition, the repository contains some signal processing and control system software, e.g., a “recursive moving average” (RMA) noise filter type and a closed loop PID controller type. These require further packages, such as a bounded ring buffer abstraction.

For example, the analog sensors (e.g., the light and sound sensors), have an abstract base class controlling an ADC, and two abstract subclasses using DMA and polling to transfer the converted data. The concrete light and sound sensor types are derived from the DMA-based parent type (figure 11).

Figure 11: Class Diagram for Analog Sensor Base Type and Subclasses

The so-called NXT “digital” devices contain an embedded chip. These follow a similar design with an abstract base class and concrete subclass drivers for the more sophisticated, complex sensors. Lego refers to these sensors as “digital” sensors because they do not provide an analog signal to be sampled. Instead, the drivers both command and query the internal chips to operate the sensors.

The sensors’ chips use the NXT hardware cable connectors’ two discrete I/O lines to communicate. Therefore, a serial communications protocol based on two wires is applied. This communication protocol is usually, but not always, the “I2C” serial protocol. The Lego Ultrasonic Sonar sensor and the HiTechnic IR Receiver sensor both use I2C for communication. In contrast, version 2 of the Lego Color sensor uses the two discrete lines with an ad-hoc protocol.

The HiTechnic IR Receiver driver uses the I2C driver from the ADL for the on-package I2C hardware. That is a simple approach that also offloads the work from the MCU. The NXT Ultrasonic sensor, on the other hand, was a problem. I could send data to the Ultrasonic sensor successfully using the on-package I2C hardware (via the ADL driver) but could not get any data back. As discussed on the Internet, the problem is that the sensor does not follow the standard I2C protocol. It requires an extra communication line state change in the middle of the receiving steps. I could not find a way to make the on-package I2C hardware in the ARM package do this extra line change. The NXT Shield hardware even includes a GPIO “back door” connection to the I2C data line for this purpose, but I could not make that work with the STM32 hardware. Ultimately, I had to use a bit-banged approach in place of the I2C hardware and ADL driver. Fortunately, the vendor of the NXT Shield also provides the source code for an ultrasonic sensor driver in C++ using the Arduino “Wire” interface for I2C so I could see exactly what was required.

Bit-banging has system-wide implications. Since the software is doing the low-level communication instead of the on-package I2C hardware, interrupting the software execution in the middle of the protocol could be a problem. That would mean that the priority of the task handling the device must be sufficiently high relative to the other tasks in the system. Bit-banging also places additional load on the MCU that would otherwise be offloaded to a separate I2C hardware device. Our application is rather simple, so processor overload is not a problem. Care with the task priorities was required, though.

You cannot hear the ultrasonic sensor pings, as the sensor name indicates. However, I recorded the videos on my cellphone and its microphone detects the pings. They are very directional, necessarily, so they are only heard in the video when the car is pointing at the phone. Here is another short video of the car, stationary, with the camera immediately in front. The pings are quite noticeable:

System Architecture

The overall architecture of the control software is shown below in figure 12.

Figure 12: System Architecture Diagram

In the diagram, the parallelograms are periodic tasks (threads), running until power is removed. Each task is located inside a dedicated package. The Remote_Control package and the Vehicle package also provide functions that are callable by clients. Calls are indicated by dotted lines, with the arrowhead indicating the flow of data. For example, the Servo task in the Steering_Control package calls the Remote_Control package’s function to get the currently requested steering angle.

The Steering Motor and Propulsion Motor boxes represent the two NXT motors. Each motor has a dedicated rotary encoder inside the motor housing but the diagram depicts them as distinct in order to more clearly show their usage. The PID controller and vehicle noise filter are completely in software.

The PID (Proportional Integral Derivative) controller is a closed-loop control mechanism that uses feedback from the system under control to maintain a requested value. These mechanisms are ubiquitous, for example in your house's thermostat maintaining your requested heating and cooling temperatures. In our case, the PID controller maintains the requested steering angle using the steering motor's encoder data as the feedback signal.

The noise filter is a “recursive moving average” filter commonly used in digital signal processing to smooth sensor inputs. (Although the third-party interface board removed most encoder noise, some noise remained.) The PID controller did not require an encoder noise filter because the mechanical steering mechanism has enough “play” in it that encoder noise has no observable effect. The vehicle measured speed calculation, however, needed the filter because the values are used only within the software, not in a physical effector.
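
A minimal sketch of such a filter, in Python for illustration (the class name and interface here are assumed, not the repo's Ada API): rather than re-summing the whole window on every sample, the running sum is updated with the incoming and outgoing samples, which is what makes it "recursive".

```python
from collections import deque

class RecursiveMovingAverage:
    """N-sample moving average maintained incrementally over a ring buffer."""
    def __init__(self, n):
        self.n = n
        self.window = deque(maxlen=n)
        self.total = 0.0

    def update(self, sample):
        if len(self.window) == self.n:
            self.total -= self.window[0]  # drop the oldest sample's contribution
        self.window.append(sample)        # deque discards the oldest automatically
        self.total += sample
        return self.total / len(self.window)
```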

The collision detection logic determines whether a collision is imminent, using the NXT ultrasonic sensor data and the vehicle's current speed as inputs. If a sufficiently close object is detected ahead and the vehicle is moving forward, the engine Controller task stops the car immediately. Otherwise, such objects, if any, are ignored.

Application Source Code Example: Steering Servo

As the system diagram shows, the application consists of four primary packages, each containing a dedicated task. (There are other packages as well, but they do not contain tasks.) The task in the Steering_Control package is named “Servo” because it is acting as a servomechanism: it has a feedback control loop. In contrast, the task “Controller” in the Engine_Control package is not acting as a servo because it uses “open loop” control without any feedback. It simply sets the motor power to the requested percentage, with the resulting speed depending on the available battery power and the load on the wheels. I could also use a PID controller to maintain a requested speed, varying the power as required, but did not bother to do so in this version of the application.

The source code for the “Servo” task in the Steering_Control package is shown below, along with the declarations for two subprograms called by the task.

function Current_Motor_Angle (This : Basic_Motor) return Real with Inline;

procedure Convert_To_Motor_Values
  (Signed_Power : Real;
   Motor_Power  : out NXT.Motors.Power_Level;
   Direction    : out NXT.Motors.Directions)
with
  Pre => Within_Limits (Signed_Power, Power_Level_Limits);

task body Servo is
   Next_Release       : Time;
   Target_Angle       : Real;
   Current_Angle      : Real := 0.0;  -- zero for call to Steering_Computer.Enable
   Steering_Power     : Real := 0.0;  -- zero for call to Steering_Computer.Enable
   Motor_Power        : NXT.Motors.Power_Level;
   Rotation_Direction : NXT.Motors.Directions;
   Steering_Offset    : Real;
   Steering_Computer  : Closed_Loop.PID_Controller;
begin
   Steering_Computer.Configure
     (Proportional_Gain => Kp,
      Integral_Gain     => Ki,
      Derivative_Gain   => Kd,
      Period            => System_Configuration.Steering_Control_Period,
      Output_Limits     => Power_Level_Limits,
      Direction         => Closed_Loop.Direct);

   Initialize_Steering_Mechanism (Steering_Offset);

   Global_Initialization.Critical_Instant.Wait (Epoch => Next_Release);

   Steering_Computer.Enable (Current_Angle, Steering_Power);

   loop
      pragma Loop_Invariant (Steering_Computer.Current_Output_Limits = Power_Level_Limits);
      pragma Loop_Invariant (Within_Limits (Steering_Power, Power_Level_Limits));

      Current_Angle := Current_Motor_Angle (Steering_Motor) - Steering_Offset;

      Target_Angle := Real (Remote_Control.Requested_Steering_Angle);
      Limit (Target_Angle, -Steering_Offset, Steering_Offset);

      Steering_Computer.Compute_Output
        (Process_Variable => Current_Angle,
         Setpoint         => Target_Angle,
         Control_Variable => Steering_Power);

      Convert_To_Motor_Values (Steering_Power, Motor_Power, Rotation_Direction);

      Steering_Motor.Engage (Rotation_Direction, Motor_Power);

      Next_Release := Next_Release + Period;
      delay until Next_Release;
   end loop;
end Servo;

The PID controller object declared on line 19 is of a type declared in package Closed_Loop, an instantiation of a generic package. The package is a generic so that the specific floating-point input/output type is not hard-coded. The task first configures the PID controller object named Steering_Computer to specify the PID gain parameters, the interval at which the output routine is called, and the upper and lower limits for the output value (lines 21 through 27). The task then initializes the mechanical steering mechanism in order to get the steering offset (line 29). This offset is required because the steering angle requests from the user (via the remote control) are based on a frame of reference oriented on the major axis of the vehicle. Because I use the steering motor rotation angle to steer the vehicle, the code must translate the requests from the user's frame of reference (i.e., the vehicle's) into the frame of reference of the steering motor. The steering motor's frame of reference is defined by the steering mechanism's physical connection to the car’s frame and is not aligned with the car’s major axis. Therefore, to do the translation the code sets the motor encoder to zero at some known point relative to the vehicle's major axis (line 29) and then handles the difference (line 38) between that motor "zero" and the "zero" corresponding to the vehicle. The code thus orients the steering motor's frame of reference to that of the vehicle, and hence to the user.

Having completed these local initialization steps, the Servo task then waits for the “critical instant” in which all the tasks should begin their periodic execution (line 31). The critical instant is time T0 (usually), so the main procedure passes a common absolute time value to each task from the Epoch formal parameter to the Next_Release variable. Each task uses its local Next_Release variable to compute its next iteration release time (lines 52 and 53) using the same initial epoch time. Waiting for this critical instant release also allows each task to wait for any prior processing in the main procedure to occur.

The task then enables the PID controller and goes into the loop. In each iteration, the task determines the current steering angle from the steering motor’s rotary encoder and the computed offset (line 38), gets the requested angle from the remote control and ensures it is within the steering mechanism’s physical limits (lines 40 and 41), then feeds the current angle and target angle into the PID controller (lines 43 through 46). The resulting output value is the steering motor power value required to reach the target angle.

The signed steering power is then converted into an NXT motor power percentage and rotation direction (line 48). Those values are used to engage the steering motor on line 50.

Finally, the task computes the next time it should be released for execution and then suspends itself until that point in time arrives (lines 52 and 53). All the tasks in the system use this same periodic looping idiom, as is expected for time-driven tasks in a Ravenscar tasking profile. (We are actually using the Jorvik tasking profile, based on Ravenscar and defined in Ada 202x. See

The PID controller is based on the Arduino PID library, version 1.1.1. The primary difference between my design and the Arduino design is that this Ada version does not compute the next time to execute. Instead, because Ada has such good real-time support, barring a design error we can be sure that the periodic task will call the PID output calculation routine at a fixed rate. Therefore, the Configure routine specifies this period, which is then used internally in the output computation. In addition, the PID object does not retain pointers to the input, setpoint, and output objects, for the sake of SPARK compatibility. We pass them as parameters instead.
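
As a sketch of that design (illustrative Python; the names are assumed, not the repo's Ada API): the fixed period is folded into the integral and derivative gains once at configuration time, the integral term is clamped to the output limits to avoid windup, and the derivative acts on the measured input rather than the error, all mirroring the Arduino library cited above.

```python
class PID:
    """Minimal PID sketch. compute() must be called at a fixed period,
    so the period is folded into the gains once, as in the Arduino PID."""

    def __init__(self, kp, ki, kd, period, out_min, out_max):
        self.kp = kp
        self.ki_dt = ki * period        # integral gain scaled by the period
        self.kd_over_dt = kd / period   # derivative gain scaled by the period
        self.integral = 0.0
        self.prev_input = 0.0
        self.out_min, self.out_max = out_min, out_max

    def _clamp(self, x):
        return min(max(x, self.out_min), self.out_max)

    def compute(self, process_variable, setpoint):
        error = setpoint - process_variable
        # clamp the integral term itself so it cannot wind up
        self.integral = self._clamp(self.integral + self.ki_dt * error)
        # derivative on measurement avoids output kicks on setpoint jumps
        d_input = process_variable - self.prev_input
        self.prev_input = process_variable
        return self._clamp(self.kp * error + self.integral
                           - self.kd_over_dt * d_input)
```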

For a great explanation of the Arduino PID design and implementation, step-by-step, see this web page:

The PID controller abstract data type is declared within a generic package so that the input and output types need not be hard-coded. This specific implementation uses floating-point for the inputs and output, which gives us considerable dynamic range. The ARM MCU includes a floating-point unit so there is no performance penalty. However, if desired, a version using fixed-point types could be defined with the same API, trading some of the problems with floating point computations for problems with fixed-point computations. Neither is perfect.

SPARK Upgrade

One of my long-term goals for the RC Car was to upgrade as much of the code as possible to SPARK. That effort is currently underway and some of the packages and reusable components are now in SPARK. For example, the Steering_Control package, containing the Servo task and PID controller object, is now at the Silver level of SPARK, meaning that it is proven to have no run-time errors, including no overflows. That is the reason for the loop invariants in the Servo task (lines 35 and 36 above), and the precondition on procedure Convert_To_Motor_Values (line 9 above). In particular, the provers needed to be told that the output value limits for the PID controller remain unchanged in each iteration, and that the value of the PID controller output variable remains within those limits.

Other parts of the software are merely in the SPARK subset currently, but some are at the highest level. The recursive moving average (RMA) filter uses a bounded ring buffer type, for example, that is at Gold level, the level of functional proof of unit correctness.

I will continue to upgrade the code to the higher levels, at least the Silver level for proving absence of runtime errors. Ultimately, however, this process will require changes to the ADL drivers because they use access discriminants which are not compatible with SPARK. That is the remaining issue preventing clean proof for the Vehicle package and its Controller task, for instance.

Source Code Availability

The full project for the RC car, including some relevant documents, is here:

A Further Expedition into Libadalang: Save Time with Libadalang.Helpers.App Thu, 06 Feb 2020 09:24:27 -0500 Pierre-Marie de Rodat

Martyn’s recent blog post showed small programs based on Libadalang to find uses of access types in Ada sources. Albeit short, these programs need to take care of all the tedious logistics around processing Ada sources: find the files to work on, create a Libadalang analysis context, use it to read the source files, etc. Besides, they are not very convenient to run:

$ gprls -s -P test.gpr | ./ptrfinder1 | ./ptrfinder2

The gprls command (shipped with GNAT Pro) is used here in order to get the list of sources that belong to the test.gpr project file. Wouldn’t it be nice if our programs could use the GNATCOLL.Projects API in order to read this project file themselves and get the list of sources to process from there? It’s definitely doable, but also definitely cumbersome: first we need to get the appropriate info from the command line (project file name, potentially target and runtime information, or a *.cgpr configuration file), then call all the various APIs to load the project, and many more operations.

Such operations are so common for tools using Libadalang that we have decided to include helpers to factor this in the library itself, so that programs can focus on their real purpose. The 20.1 Libadalang release provides building blocks to save you this trouble: check the App generic package in Libadalang.Helpers. Note that you can see a tutorial and its API reference for it in our nightly documentation.

This package is intended to be used as a framework: you instantiate it with your settings at the top level of your program and call its Run procedure. App then takes over control of the program: it parses command-line options and invokes the callbacks you provided when appropriate. Let's update Martyn's programs to use App. The job of the first program (ptrfinder1) is to go through source files and report access type declarations and object declarations that have access types.

First, we declare some shortcuts for code brevity:

package Helpers renames Libadalang.Helpers;
package LAL     renames Libadalang.Analysis;
package Slocs   renames Langkit_Support.Slocs;

Next, we can instantiate App:

procedure Process_Unit
  (Job_Ctx : Helpers.App_Job_Context; Unit : LAL.Analysis_Unit);
--  Look for the use of access types in Unit

package App is new Helpers.App
  (Name         => "ptrfinder1",
   Description  => "Look for the use of access types in the input sources",
   Process_Unit => Process_Unit);

Naturally, the Process_Unit procedure will be called once for each file to process. The Name and Description formals allow the automatic generation of a “help” message on the command-line (see later). Implementing the Process_Unit procedure is as easy as making minor adjustments to Martyn’s original code:

procedure Report (Node : LAL.Ada_Node'Class);
--  Report the use of an access type at Filename/Line_Number on the standard
--  output.

-- Report --

procedure Report (Node : LAL.Ada_Node'Class) is
   Filename : constant String := Node.Unit.Get_Filename;
   Line     : constant Slocs.Line_Number := Node.Sloc_Range.Start_Line;
begin
   Put_Line (Filename & ":"
             & Ada.Strings.Fixed.Trim (Line'Image, Ada.Strings.Left));
end Report;

   -- Process_Unit --

procedure Process_Unit
  (Job_Ctx : Helpers.App_Job_Context; Unit : LAL.Analysis_Unit)
is
   pragma Unreferenced (Job_Ctx);

   function Process_Node (Node : Ada_Node'Class) return Visit_Status;
   --  Callback for LAL.Traverse

   -- Process_Node --

   function Process_Node (Node : Ada_Node'Class) return Visit_Status is
   begin
      case Node.Kind is
         when Ada_Base_Type_Decl =>
            if Node.As_Base_Type_Decl.P_Is_Access_Type then
               Report (Node);
            end if;

         when Ada_Object_Decl =>
            if Node.As_Object_Decl.F_Type_Expr
                 .P_Designated_Type_Decl.P_Is_Access_Type
            then
               Report (Node);
            end if;

         when others =>

            --  Nothing interesting was found in this Node so continue
            --  processing it for other violations.

            return Into;
      end case;

      --  A violation was detected, skip over any further processing of this
      --  node.

      return Over;
   end Process_Node;

begin
   if not Unit.Has_Diagnostics then
      Unit.Root.Traverse (Process_Node'Access);
   end if;
end Process_Unit;

We’re nearly done! All that’s left to do is to have the main program call App’s Run procedure:

begin
   App.Run;
end ptrfinder1;

That’s it. Build and run this program:

$ ./ptrfinder1
No source file to process

$ ./ptrfinder1 basic_pointers.adb

So far, so good.

$ ./ptrfinder1 --help
usage: ptrfinder1 [--help|-h] [--charset|-C CHARSET] [--project|-P PROJECT]
                 [--RTS RTS] [--config CONFIG] [--auto-dir|-A
                 AUTO-DIR[AUTO-DIR...]] [--no-traceback] [--symbolic-traceback]
                 files [files ...]

Look for the use of access types in the input sources

positional arguments:
   files                 Files to analyze
optional arguments:
   --help, -h            Show this help message

Wow, that’s a lot! As you can see, App takes care of parsing command-line arguments and provides a lot of built-in options. Most of them are for the various ways to communicate to the application the set of source files to process:

  • "ptrfinder1 source1.adb source2.adb …" will process all source files on the command-line, assuming that all source files belong to the current directory;
  • "ptrfinder1 -P my_project.gpr [-XKEY=VALUE] [--target=…] [--RTS=…] [--config=…]" will process all source files that belong to the my_project.gpr project file. If additional source files appear on the command-line, ptrfinder1 will process only them, but my_project.gpr will still be used to find the other source files.

  • "ptrfinder1 --auto-dir=src1 --auto-dir=src2" will process all Ada source files that can be found in the src1 and src2 directories. Likewise, additional source files on the command-line will restrict processing to them.

These three use cases should cover most needs, the most reliable one being the project file way: calling gprbuild on the project file (with the same arguments) is a cheap way to check using the compiler that the set of sources passed to the application/Libadalang is complete, consistent and valid Ada.

As it is a common gotcha, let’s take a moment to note that even though your application may process only one source file, Libadalang may need access to other source files. For instance, computing the type of a variable in source1.adb may require reading another source file: the one that defines the type of this variable. This is why passing a project file or --auto-dir options is useful even when you pass the list of source files to process explicitly on the command-line.

Martyn’s second program (ptrfinder2) doesn’t use Libadalang, so rewriting it to use App isn’t very interesting. Instead, let’s extend the previous program to run the text verification on the fly. We are going to add a command-line option to our application to optionally do the verification. Right after the App instantiation, add:

package Do_Verify is new GNATCOLL.Opt_Parse.Parse_Flag
  (Parser => App.Args.Parser,
   Long   => "--verify",
   Help   => "Verify detected ""access"" occurrences");

App’s command-line parser (App.Args.Parser) uses the GNATCOLL.Opt_Parse library, so adding support for new command-line options is very easy. Here, we add a flag, i.e. a switch with no argument: it’s either present or absent. Just doing this already extends the automatic help message:

$ ./ptrfinder1 --help
usage: ptrfinder1 […]
                 files [files ...] [--verify]

Look for the use of access types in the input sources

positional arguments:
   files                 Files to analyze
optional arguments:
   --verify              Verify detected "access" occurrences
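
Beyond simple flags, GNATCOLL.Opt_Parse can also declare switches that take a value, via its Parse_Option generic. Here is a minimal sketch, assuming the same App instantiation as above; the option name, its default value, and its purpose are hypothetical, so treat this as an illustration rather than part of the original program:

```ada
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;

--  Hypothetical "--substring" option: would let the user override the
--  " access " substring that the verification step looks for.
package Substring_Opt is new GNATCOLL.Opt_Parse.Parse_Option
  (Parser      => App.Args.Parser,
   Long        => "--substring",
   Help        => "Substring to look for during verification",
   Arg_Type    => Unbounded_String,
   Convert     => To_Unbounded_String,
   Default_Val => To_Unbounded_String (" access "));
```

Calling Substring_Opt.Get would then yield either the value passed on the command line or the default.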

Now we can modify the Report procedure to handle this option:

function Verify
  (Filename : String; Line : Slocs.Line_Number) return Boolean;
--  Return whether Filename can be read and whether its Line'th line contains
--  the " access " substring.

procedure Report (Node : LAL.Ada_Node'Class);
--  Report the use of an access type at Filename/Line_Number on the standard
--  output. If --verify is enabled, check that the first source line
--  corresponding to Node contains the " access " substring.

-- Verify --

function Verify
  (Filename : String; Line : Slocs.Line_Number) return Boolean
is
   --  Here, we could directly look for an "access" token in the list of
   --  tokens corresponding to Line in this unit. However, in the spirit of
   --  the original program, re-read the file with Ada.Text_IO.

   Found : Boolean := False;
   --  Whether we have found the substring on the expected line

   File : File_Type;
   --  File to read (Filename)
begin
   Open (File, In_File, Filename);
   for I in 1 .. Line loop
      declare
         use type Slocs.Line_Number;

         Line_Content : constant String := Get_Line (File);
      begin
         if I = Line
            and then Ada.Strings.Fixed.Index (Line_Content, " access ") > 0
         then
            Found := True;
         end if;
      end;
   end loop;
   Close (File);
   return Found;

exception
   when Use_Error | Name_Error | Device_Error =>
      if Is_Open (File) then
         Close (File);
      end if;
      return Found;
end Verify;

   -- Report --

procedure Report (Node : LAL.Ada_Node'Class) is
   Filename   : constant String := Node.Unit.Get_Filename;
   Line       : constant Slocs.Line_Number := Node.Sloc_Range.Start_Line;
   Line_Image : constant String :=
     Ada.Strings.Fixed.Trim (Line'Image, Ada.Strings.Left);
begin
   if Do_Verify.Get then
      if Verify (Filename, Line) then
         Put_Line ("Access Type Verified on line #"
                   & Line_Image & " of " & Filename);
      else
         Put_Line ("Suspected Access Type *NOT* Verified on line #"
                   & Line_Image & " of " & Filename);
      end if;
   else
      Put_Line (Filename & ":" & Line_Image);
   end if;
end Report;

And voilà! Let’s check how it works:

$ ./ptrfinder1 basic_pointers.adb --verify
Access Type Verified on line #3 of /tmp/access-type-detector/test/basic_pointers.adb
Access Type Verified on line #5 of /tmp/access-type-detector/test/basic_pointers.adb

When writing Libadalang-based tools, don’t waste time with trivialities such as command-line parsing: use Libadalang.Helpers.App and go directly to the interesting parts!

You can find the compilable project for this post on my GitHub fork. Just make sure you get Libadalang 20.1 or the next Continuous Release (coming in February 2020). As usual, please send us suggestions and bug reports on GNATtracker (if you are an AdaCore customer) or on Libadalang’s GitHub project.

Using GNAT-LLVM to target Ada to WebAssembly Tue, 04 Feb 2020 07:37:00 -0500 Vadim Godunko

The GNAT-LLVM project provides an opportunity to port Ada to new platforms, one of which is WebAssembly. We conducted an experiment to evaluate porting Ada and developing bindings that let Ada applications use the Web API provided by the browser directly.


As a result of the experiment, the standard language library and runtime library were partially ported. Together with a binding for the Web API, this allowed us to write a simple example showing the possibility of using Ada for developing applications compiled into WebAssembly and executed inside the browser. At the same time, there are some limitations both of WebAssembly and of the current GNAT-LLVM implementation:

  • the inability to use tasks and protected types
  • support for exceptions limited to local propagation and the last chance handler
  • the inability to use nested subprograms


Here is a small example of an Ada program that shows or hides text when a button is pressed, by manipulating attributes of document nodes.

with Web.DOM.Event_Listeners;
with Web.DOM.Events;
with Web.HTML.Buttons;
with Web.HTML.Elements;
with Web.Strings;
with Web.Window;

package body Demo is

   function "+" (Item : Wide_Wide_String) return Web.Strings.Web_String
     renames Web.Strings.To_Web_String;

   type Listener is
     limited new Web.DOM.Event_Listeners.Event_Listener with null record;

   overriding procedure Handle_Event
    (Self  : in out Listener;
     Event : in out Web.DOM.Events.Event'Class);

   L : aliased Listener;

   -- Handle_Event --

   overriding procedure Handle_Event
    (Self  : in out Listener;
     Event : in out Web.DOM.Events.Event'Class)
   is
      X : Web.HTML.Elements.HTML_Element
        := Web.Window.Document.Get_Element_By_Id (+"toggle_label");

   begin
      X.Set_Hidden (not X.Get_Hidden);
   end Handle_Event;

   -- Initialize_Demo --

   procedure Initialize_Demo is
      B : Web.HTML.Buttons.HTML_Button
        := Web.Window.Document.Get_Element_By_Id
             (+"toggle_button").As_HTML_Button;

   begin
      B.Add_Event_Listener (+"click", L'Access);
      B.Set_Disabled (False);
   end Initialize_Demo;

end Demo;

As you can see, it uses elaboration, tagged and interface types, and callbacks.

Live demo

Setup & Build

To compile the examples you need to set up GNAT-LLVM and the GNAT WASM run-time library, following the setup instructions provided with the project.

To compile a specific example, use gprbuild to build the application, then open index.html in the browser to run it.

Next steps

The source code is published in a repository on GitHub and we invite everyone to participate in the project.


AdaCore at FOSDEM 2020 Thu, 30 Jan 2020 11:03:04 -0500 Fabien Chouteau

Like last year and the year before, AdaCore will participate in the celebration of Open Source software at FOSDEM. It is always a key event for the Ada/SPARK community and we are looking forward to meeting Ada enthusiasts. You can check the program of the Ada/SPARK devroom here.

AdaCore engineers will give two talks in the Ada devroom:

We have a talk in the Hardware Enablement devroom:

And there is a related talk in the Security devroom on the use of SPARK for security:

Hope to see you at FOSDEM this week-end!

Ada on a Feather Thu, 23 Jan 2020 09:35:59 -0500 Fabien Chouteau

In the last couple of years, the maker community switched from AVR-based micro-controllers (popularized by Arduino) to the ARM Cortex-M architecture. AdaFruit was at the forefront of this migration, with boards like the Circuit Playground Express or some of the Feathers.

AdaFruit chose to adopt the Atmel (now Microchip) SAMD micro-controller family. Unfortunately for us it is not in the list of platforms with the most Ada support so far (stay tuned, this might change soon ;)). 

So I was quite happy to see AdaFruit release their first Feather format board including a micro-controller with plenty of Ada support, the STM32F4. I bought a board right away and implemented some support code for it.

The support for the Feather STM32F405 is now available in the Ada Drivers Library, along with two examples. The first just blinks the on-board LED and the second displays Make With Ada on a CharlieWing expansion board.
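
For a flavor of what the blinky example looks like, here is a minimal sketch. The Feather_STM32F405 board package, its LED object, and the Set_Mode/Toggle operations are assumptions based on Ada Drivers Library conventions; check the actual example in the repository for the exact names:

```ada
with Ada.Real_Time;     use Ada.Real_Time;
with HAL.GPIO;
with Feather_STM32F405; use Feather_STM32F405;  --  assumed board package

procedure Blinky is
   Next : Time := Clock;
begin
   --  Configure the on-board LED pin as an output (LED is assumed to be
   --  a GPIO point exposed by the board support package)
   LED.Set_Mode (HAL.GPIO.Output);

   loop
      LED.Toggle;

      --  Ravenscar-friendly periodic delay: half a second per toggle
      Next := Next + Milliseconds (500);
      delay until Next;
   end loop;
end Blinky;
```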


To compile the examples, you need to download and install a couple of things: the GNAT arm-elf package (I also recommend the native package, to get the GNAT Studio IDE) and the Ada Drivers Library code from GitHub (AdaCore/Ada_Drivers_Library).

You then have to run the script scripts/ to install the run-time BSPs.


To build the example, open one of the project files examples/feather_stm32f405/blinky/blinky.gpr or examples/feather_stm32f405/charlie_wing/charlie_wing.gpr with GNAT Studio (aka GPS) and click the “build all” icon.

Program the board

To program the example on the board, I recommend using the Black Magic Probe debugger (also available from AdaFruit). This neat little device provides a GDB remote server interface to the STM32F4, allowing you not only to program the micro-controller but also to debug it.

An alternative is to use the DFU mode of the STM32F4.

Happy hacking :)

Witnessing the Emergence of a New Ada Era Tue, 21 Jan 2020 09:17:00 -0500 Quentin Ochem

For nearly four decades the Ada language (in all versions of the standard) has been helping developers meet the most stringent reliability, safety and security requirements in the embedded market. As such, Ada has become an entrenched player in its historic A&D niche, where its technical advantages are recognized and well understood. Ada has also seen usage in other domains (such as medical and transportation) but its penetration has progressed at a somewhat slower pace. In these other markets Ada stands in particular contrast with the C language, which, although suffering from extremely well known and documented flaws, remains a strong and seldom questioned default choice. Or at least, when it’s not the choice, C is still the starting point (a gateway drug?) for alternatives such as C++ or Java, which in the end still lack the software engineering benefits that Ada embodies.

Throughout AdaCore’s twenty-five year history, we’ve seen underground activities of software engineers willing to question the status quo and embark on new technological grounds. But driving such a change is a tough sell. While the merits of the language are usually relatively easy to establish, overcoming the surrounding inertia often feels like an insurmountable obstacle. Other engineers have to be willing to change old habits. Management has to be willing to invest in new technology. All have to agree on the need for safer, more secure and more reliable software. Even if we’ve been able to report some successes over the years, we were falling short of the critical mass.

Or so it seemed.

The tide has turned. 2018 and 2019 have been exceptional vintages in terms of Ada and SPARK adoption, and all the signs show that 2020 will be at least as exciting. What’s more, the new adopters are coming from industries that were never part of the initial Ada and SPARK user base. What used to be inertia is now momentum. Let’s take a look at the information that can be gathered from the web over the past two years to demonstrate the new dynamic of Ada and SPARK usage.

The Established User Base

Before talking about new adopters, it’s important to step back and re-establish the basis of the Ada and SPARK usage, which is the root of its viability over the very long term. Ada and SPARK are used by a very large user base in the defense and avionics domains. A glance at AdaCore customer list - a subset of the actual user base - will give a good idea of the breadth of technology usage. A lot of the projects here have lifetimes over decades, some started in the early days of Ada in the mid 80’s carried all the way to the present, some have already planned lifetimes spanning over the next two decades. Projects range from massive air traffic management systems running on vast arrays of servers to embedded controllers running on aircraft engines, sensors, or satellite flight control systems with extremely stringent resource constraints. Some applications are still maintained today on hardware dating as far back as Motorola 68K or Intel i386 series, while others are deployed on the latest ARM Cortex or RISC-V cores. Most have some level of reliability constraints, up to the highest levels of the avionics DO-178B/C standard. 

Due to the nature of the domain, it is difficult to communicate specifically about these projects, and we only have scarce news. One measure of the increasing interest in Ada and SPARK can be inferred from defense-driven research projects which contain references to these language technologies. The most notable example is the recent UK-funded HICLASS project, focused on security, which involves a large portion of the UK defense industry. Some press releases are also available, in particular in the space domain (European Space Agency, AVIO and MDA). These data samples are representative of a very active and vibrant community which is committed to Ada and SPARK for decades to come - effectively guaranteeing their industrial future as far as we can reasonably guess.

The Emerging Adopters

The so-called “established user base” has fueled the Ada and SPARK community up until roughly the mid 2010s. At that point in time, a new trend started to emerge, from users and use cases that we had never seen before. While each case is a story in its own right, some common patterns have emerged. The starting point is almost always either an increase of safety or security requirements, or a wish to reduce the costs of developing an application with some kind of high reliability needs. This is connected to the acknowledgement that the programming language in use - almost exclusively C or C++ - may not be the optimal language to reach these goals. This is well documented in the industry; C and C++ flaws have been the subject of countless papers, and the source of catastrophic vulnerability exploits and tools to work around issues. The technical merits of Ada and its ability to prevent many of these issues is also well documented - we even have access to some measurements of the effects. The most recent one is an independent study developed by VDC, which measured up to 38% cost-savings on Ada vs C in the context of high-integrity markets that have adopted Ada for a long time.

We’re talking a lot about Ada here, but in fact new adopters are typically driven by a mix of SPARK and Ada. The promise that SPARK offers is automatic verification of software properties such as absence of buffer overflow, together with stringent mitigation of others - and this by design, early in the development process. This means that developers are able to self-check their code: not only is the code more reliable, it is reliable straight away as you write it, avoiding many mistakes that could otherwise slip through testing, integration or deployment phases.

Some of the SPARK adopters motivated by these benefits come from academia. Over the past 2 years, over 40 universities have joined the GNAT Academic Program (“GAP”), with a mix of teaching and research activities, including, for example, the FH Campus Wien train project, CubeSat and UPMSat-2.

Many adopters can also be found in industry. Some of the following references highlight teams at the research phase, some others represent projects already deployed. They all however contribute to this solid wave of new Ada and SPARK adopters. The publications referenced in the following paragraphs have been published between 2018 and 2019.

One obvious application for Ada and SPARK, where human lives are at risk, is the medical device domain. So it comes without surprise that this area is amongst those adopting the technology. Two interesting cases come to mind. The first one is RealHeart, a Scandinavian manufacturer that is developing an artificial heart with on-board software written in Ada and SPARK, who issued a press release and later made an in-depth presentation at SPARK & Frama-C days. The second reference comes from a large medical device corporation, Hillrom, who published a paper explaining the rationale for the selection of SPARK and Ada for development of ECG algorithms.

Another domain is everything that relates to security. The French security agency ANSSI studied various languages to implement a secure USB key and selected SPARK as the best choice. They published a research paper, presentation and source code. Another interesting new application has been implemented by the German company Componolit, which is developing proven communication protocols.

Of course, established markets are also at the party. The University of Colorado’s Laboratory for Atmospheric and Space Physics has recently adopted Ada to develop an application for the International Space Station. In the defense domain, the Air Force Research Labs is studying re-writing a drone framework from C++ to SPARK and doing functional proofs, with a public research paper and source code available.

While all of these domains provide interesting adopter stories, the one single domain that has demonstrated the most interest in the recent past is undoubtedly automotive. This is probably coming from the increasing complexity of electronics systems in cars, with applications such as Advanced Driver Assistance Systems (ADAS) and autonomous vehicles. References in this domain range from tier 1 suppliers such as Denso and JTEKT to OEMs and autonomous vehicle companies like Volvo’s subsidiary Zenuity.

And there’s NVIDIA.

In January of this year, we published with NVIDIA a press release and a blog post, followed up this November by a presentation at our annual Tech Days conference, and an on-line webex (also see the slides for the webex). In many respects, this is a unique tipping point in the history of Ada adoption in terms of impact in a non-A&D domain, touching considerations ranging from security to automotive safety, all under the tight constraints of firmware development. The webex in particular provides a unique dive into the reasons behind the adoption of SPARK and Ada by a company that didn’t have any particular ties to it initially. It also gives key insights on the challenges and costs of such an adoption, together with the benefits already observed. In many respects, this is almost an adoption guide to the technology from a business standpoint.

Wrapping Up

Keep in mind that the above references are only those that are publicly available, which we know about. There are many more projects under the hood, and even more that we’re not even aware of. Everything considered, this is a very exciting time for the Ada and SPARK languages. Stay tuned, we have an array of new stories coming up for the months and years to come!

AdaCore for HICLASS - Enabling the Development of Complex and Secure Aerospace Systems Wed, 11 Dec 2019 07:27:00 -0500 Paul Butcher

What's changed?

In 2019 AdaCore created a UK business unit and embarked on a new and collaborative venture researching and developing advanced UK aerospace systems. This blog introduces the reader to ‘HICLASS’, describes our involvement and explains how participation in this project is aligned with AdaCore’s core values.

Introducing HICLASS

The “High-Integrity, Complex, Large, Software and Electronic Systems” (HICLASS) project was created to enable the delivery of the most complex, software-intensive, safe and cyber-secure systems in the world. HICLASS is a strategic initiative to drive new technologies and best-practice throughout the UK aerospace supply chain, enabling the UK to affordably develop systems for the growing aircraft and avionics market expected over the coming decades. HICLASS includes key prime contractors, system suppliers, software tool vendors and Universities working together to meet the challenges of growing system complexity and size. HICLASS will allow the development of new, complex, intelligent and internet-connected electronic products that are safe and secure from cyber-attack and can be affordably certified.

The HICLASS project is supported by the Aerospace Technology Institute (ATI) Programme, a joint Government and industry investment to maintain and grow the UK’s competitive position in civil aerospace design and manufacture. The programme, delivered through a partnership between the ATI, Department for Business, Energy & Industrial Strategy (BEIS) and Innovate UK, addresses technology, capability and supply chain challenges.

The £32m investment program, led by Rolls-Royce Control Systems, focuses on the UK civil aerospace sector but also has direct engagement with the Defence, Science and Technology Laboratory (DSTL). The collaborative group, comprised of 16 funded partners and 2 unfunded partners, is made up of the following system developers, tool suppliers and academic institutions: AdaCore, Altran, BAE Systems, Callen-Lenz, Cobham, Cocotec, D-Risq, GE Aviation, General Dynamics UK, Leonardo, MBDA, University of Oxford, Rapita Systems, Rolls-Royce, University of Southampton, Thales, Ultra Electronics and University of York. As well as researching and developing advanced aerospace capabilities, the group aims to pool niche skills and build a highly collaborative community based around the enhanced understanding of shared problems. The project is split into 4 main work packages with 2 technology work packages focusing on integrated model based engineering, cyber-secure architectures and mechanisms, high integrity connectivity, networks and data distribution, advanced hardware platforms and smart sensors and advanced software verification capabilities. In addition, a work package will ensure domain exploitation and drive a cross-industry cyber-security regulatory approach for avionics. A final work package will see the development of integrated HICLASS technology demonstrators.

Introducing ASSET

HICLASS also aims to build, promote and manage the Aerospace Software Systems Engineering and Technology (ASSET) partnership. This community is open to all organisations undertaking technical work in aerospace software and systems engineering in the UK and operates in a manner designed to promote sharing, openness and accessibility. Unlike HICLASS, ASSET publications are made under a Creative Commons Licence, and the group operates without any non-disclosure or collaboration agreements.

AdaCore's R&D Work in the UK

Within HICLASS, AdaCore is working with partners across multiple work packages and is also leading a work package titled “SPARK for HICLASS”. This work package will develop and extend multiple SPARK-related technologies in order to satisfy industrial partners’ HICLASS requirements regarding safety and cyber-security.

SPARK is a globally recognised safety and security profile of Ada, the software programming language defined by ISO/IEC 8652:2012. Born out of a UK MOD sponsored research project, the first version of SPARK, based on Ada 83, was initially produced at the University of Southampton. Since then the technology has been progressively extended and refined, and the latest version, SPARK 2014, based on Ada 2012, is now maintained and developed by AdaCore and Altran in partnership. Due to its rich pedigree, earned at the forefront of high integrity software assurance, SPARK plays a big part in AdaCore’s safe and secure software development tool offerings. Through focused and collaborative research and development, AdaCore will guide the evolution of multiple SPARK-related technologies towards a level where they are suitable for building demonstrable, safe and secure cyber-physical systems that meet the software implementation and verification requirements of HICLASS developed by UK Plc.

New extensions to the SPARK language, specific to HICLASS systems, will be developed; these will include the verification of cyber-safe systems and auto-generated code. There is also a planned maturing of SPARK reusable code modules, where AdaCore will be driven by the needs of our partners in providing high-assurance reusable SPARK libraries, resulting in reduced development time and verification costs.

QGen, a qualifiable and tuneable code generation and model verification tool suite for a safe subset of Simulink® and Stateflow® models, is a game changer in Model Based Software Engineering (MBSE). For HICLASS, AdaCore will place an emphasis on the fusion of SPARK verification capabilities and HICLASS-related emerging MBSE tools, allowing code level verification to be achieved at the model level. The generation of SPARK code, from our QGen tool as well as various HICLASS partners’ MBSE technologies, will be researched and developed. Collaborative case studies will be performed to assess and measure success. Collaboration is a key critical success factor in meeting this objective; multiple HICLASS partners are developing MBSE tools and SPARK evolution will be achieved in close partnership with them.

The second, and complementary, objective of this work package is to research and develop cyber-secure counter measures and HICLASS verification strategies, namely in the form of compiler hardening and the development of a ‘fuzzing’ capability for Ada/SPARK. HICLASS case studies, produced within preceding work packages, will be observed to ensure our SPARK work package is aligned with HICLASS specific standards, guidelines and recommendations and to ensure the relevancy of the work package deliverables.

The third objective is for AdaCore, in collaboration with our HICLASS partners, to evaluate QGen, and associated formal approaches, for existing UK aerospace control systems and to make comparisons with existing Simulink code generation processes. In addition, AdaCore will promote processor emulation technology through a collaborative HICLASS case study.

The final objective is to demonstrate the work package technology through the creation of a software stack capable of executing SPARK software on a range of (physical and emulated) target processors suitable for use in HICLASS. The ability to execute code generated from MBSE environments will also be demonstrated.

Committing Investment into the UK

AdaCore has a long history of working with partners within the UK aerospace industry on safety-related, security-related and mission-critical software development projects. Participation in the HICLASS research and development group complements AdaCore’s commitment to invest in the UK. This four-year research project is also an excellent fit with AdaCore’s core values and its existing and future capabilities. In addition, the creation of a new UK business unit, ‘AdaCore Ltd’, created to rapidly grow into our UK Centre of Excellence, ensures that our existing and future UK aerospace customers will continue to receive the high level of technical expertise and quality products associated with AdaCore.

History has shown that the UK aerospace industry isn’t afraid to be ambitious and has the technological capability to stay at the forefront of this rapidly growing sector. With HICLASS, the sky really is the limit, and AdaCore welcomes the opportunity to be a part of the journey and further extend our partnerships within this technologically advanced and continually growing market.

Further information about the ATI, BEIS and IUK...

Aerospace Technology Institute (ATI)

The Aerospace Technology Institute (ATI) promotes transformative technology in air transport and supports the funding of world-class research and development through the multi-billion pound joint government-industry programme. The ATI stimulates industry-led R&D projects to secure jobs, maintain skills and deliver economic benefits across the UK.

Setting a technology strategy that builds on the UK’s strengths and responds to the challenges faced by the UK civil aerospace sector, the ATI provides a roadmap of the innovation necessary to keep the UK competitive in the global aerospace market, and complements the broader strategy for the sector created by the Aerospace Growth Partnership (AGP).

The ATI provides strategic oversight of the R&T pipeline and portfolio. It delivers the strategic assessment of project proposals and provides funding recommendations to BEIS.

Department for Business, Energy and Industrial Strategy (BEIS) 

The Department for Business, Energy and Industrial Strategy (BEIS) is the government department accountable for the ATI Programme. As the budget holder for the programme, BEIS is accountable for the final decision regarding which projects to progress and fund with Government resources, as well as for performing the Value for Money (VfM) assessment on all project proposals, one of the 3 ATI Programme assessment streams.

Innovate UK (IUK)

Innovate UK is the funding agency for the ATI Programme. It delivers the competitions process including independent assessment of project proposals, and provides funding recommendations to BEIS. Following funding award, Innovate UK manages the programme, from contracting projects, through to completion.

Innovate UK is part of UK Research and Innovation (UKRI), a non-departmental public body funded by a grant-in-aid from the UK government. Innovate UK drives productivity and economic growth by supporting businesses to develop and realise the potential of new ideas, including those from the UK’s world-class research base.

UKRI is the national funding agency investing in science and research in the UK. Operating across the whole of the UK with a combined budget of more than £6 billion, UKRI brings together the 7 Research Councils, Innovate UK and Research England.

An Expedition into Libadalang Thu, 07 Nov 2019 08:04:00 -0500 Martyn Pike

I’ve been telling Ada developers for a while now that Libadalang will open up the possibility of more easily writing Ada source code analysis tools.  (You can read more about Libadalang here and here and can also access the project on Github.)

Along these lines, I recently had a discussion with a customer about whether there were any tools for detecting uses of access types in their code, which got me thinking about possible ways to detect the use of access types in a set of Ada source code files.

GNATcheck doesn't currently have a rule that prohibits the use of access types.  Also, SPARK 2014 recently added support for access types, whereas previously they were banned.  So while earlier versions of GNATprove could detect them quite effectively, the latest and future versions may not.

I decided to architect a solution to this problem and determined there were several implementation options open to me:

  1. Use ‘grep’ on a set of Ada sources to find instances of the "access" Ada keyword
  2. Use gnat2xml and then use ‘grep’ on the resulting output to search for certain tags
  3. Use gnat2xml and then write an XML-aware search utility to search for certain tags
  4. Use Libadalang to write my own Ada static analysis program

Options 1 and 2 just feel too easy and would defeat the purpose of this blog post.

Option 3 is perhaps a good topic for another post related to using XML/Ada, however I decided to put my money where my mouth is and go with Option 4!

While I wrote this program in Ada,  I could have written it in Python.

So here is the program:

with Ada.Text_IO;         use Ada.Text_IO;
with Libadalang.Analysis; use Libadalang.Analysis;
with Libadalang.Common;   use Libadalang.Common;
with Ada.Strings.Fixed;
with Ada.Strings;

procedure ptrfinder1 is

   LAL_CTX  : constant Analysis_Context := Create_Context;


begin

   Read_Standard_Input :
   while not End_Of_File (Standard_Input) loop

      Process_Ada_Unit :
      declare

         Filename : constant String := Get_Line;

         Unit : constant Analysis_Unit := LAL_CTX.Get_From_File (Filename);

         function Process_Node (Node : Ada_Node'Class) return Visit_Status is
         begin
            if Node.Kind in Ada_Access_Def
                          | Ada_Access_To_Subp_Def_Range
                          | Ada_Base_Type_Access_Def
                          | Ada_Anonymous_Type_Access_Def_Range
                          | Ada_Type_Access_Def_Range
            then
               Put_Line
                 (Ada.Strings.Fixed.Trim
                    (Source => Filename & ":" & Node.Sloc_Range.Start_Line'Img,
                     Side   => Ada.Strings.Left));
            end if;

            return Into;

         end Process_Node;

      begin

         if not Unit.Has_Diagnostics then
            Unit.Root.Traverse (Process_Node'Access);
         end if;

      end Process_Ada_Unit;

   end loop Read_Standard_Input;

end ptrfinder1;

I designed the program to read a series of fully qualified absolute filenames from standard input and process each of them in turn.  This approach made the program much easier to write and test and,  as you'll see, allowed the program to be integrated effectively with other tools.

Let's deconstruct the code a little....

For each provided filename,  the program creates a Libadalang Analysis_Unit for that filename.

Read_Standard_Input :
while not End_Of_File (Standard_Input) loop

   Process_Ada_Unit :
   declare

      Filename : constant String := Get_Line;

      Unit : constant Analysis_Unit := LAL_CTX.Get_From_File (Filename);

As long as the unit has no issues, it is traversed and the Process_Node subprogram is executed for each node visited.

if not Unit.Has_Diagnostics then
   Unit.Root.Traverse (Process_Node'Access);
end if;

The Process_Node subprogram checks the Kind field of the detected Ada_Node'Class parameter to see if it is any of the access type related nodes.  If so,  the program outputs the fully qualified filename, a ":" delimiter, and the line number of the detected node.

function Process_Node (Node : Ada_Node'Class) return Visit_Status is
begin
   if Node.Kind in Ada_Access_Def
                 | Ada_Access_To_Subp_Def_Range
                 | Ada_Base_Type_Access_Def
                 | Ada_Anonymous_Type_Access_Def_Range
                 | Ada_Type_Access_Def_Range
   then
      Put_Line
        (Ada.Strings.Fixed.Trim
           (Source => Filename & ":" & Node.Sloc_Range.Start_Line'Img,
            Side   => Ada.Strings.Left));
   end if;

   return Into;

end Process_Node;

At the end of the Process_Node subprogram, returning the Into value allows the traversal to continue.
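The Into / Over / Stop traversal protocol generalizes beyond Libadalang. Here is a rough Python sketch of the idea; the dict-based node shape and function names are illustrative only, not the Libadalang API:

```python
from enum import Enum

class Visit(Enum):
    INTO = 1   # continue into this node's children
    OVER = 2   # skip this node's children
    STOP = 3   # abort the whole traversal

def traverse(node, process):
    """Depth-first walk honoring the Into/Over/Stop protocol."""
    status = process(node)
    if status is Visit.INTO:
        for child in node.get("children", []):
            if traverse(child, process) is Visit.STOP:
                return Visit.STOP
    return status
```

Returning Visit.INTO from every call, as Process_Node does, visits the entire tree.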

To make the program a more useful tool within a development environment based on GNAT Pro,  I integrated it with the piped output of the 'gprls' program.

gprls is a tool that outputs information about compiled sources. It gives the relationship between objects, unit names, and source files. It can also be used to check source dependencies as well as various other characteristics.

My program can then be invoked as part of a more complex command line:

$ gprls -s -P test.gpr | ./ptrfinder1

Given the following content of test.gpr:

project Test is

   for Languages use ("Ada");
   for Source_Dirs use (".");
   for Object_Dir use "obj";

end Test;

Plus an Ada source code file called inc_ptr1.adb (in the same directory as test.gpr) containing the following:

procedure Inc_Ptr1 is

   type Ptr is access all Integer;

begin

   null;

end Inc_Ptr1;

The resulting output from the integration of gprls and my program is:

/home/pike/Workspace/access-detector/test/inc_ptr1.adb: 3
This output correctly identified the access type usage on line 3 of inc_ptr1.adb.

But how do I know that my program or indeed Libadalang has functioned correctly?

I decided to stick in principle to the UNIX philosophy of "Do One Thing and Do it Well" and write a second program to verify the output of my first program using a simple algorithm.

This second program is given a filename and line number and verifies that the keyword "access" appears on the specified line number.
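That check is small enough to sketch in Python first, as a plain illustration of the algorithm (the helper name here is made up; the real program, shown afterwards, is in Ada):

```python
def verify_access_on_line(filename: str, line_number: int) -> bool:
    """Return True if the Ada keyword "access" (with surrounding
    spaces) occurs on the given 1-based line of the file."""
    with open(filename) as source:
        for number, line in enumerate(source, start=1):
            if number == line_number:
                return " access " in line
    return False  # file is shorter than line_number
```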

Of course,  I could also have embedded this verification into the first program,  but to illustrate a point about diversity I chose not to.

with Ada.Text_IO;       use Ada.Text_IO;
with Ada.Directories;   use Ada.Directories;
with Ada.Strings;       use Ada.Strings;
with Ada.Strings.Fixed; use Ada.Strings.Fixed;
with Ada.IO_Exceptions;

procedure ptrfinder2 is

begin

   Read_Standard_Input :
   while not End_Of_File (Standard_Input) loop

      Process_Standard_Input :
      declare

         Std_Input : constant String := Get_Line;
         Delimeter_Position : constant Natural := Index (Std_Input, ":");
         Line_Number_As_String : constant String :=
           Trim (Std_Input (Delimeter_Position + 1 .. Std_Input'Last), Left);
         Line_Number : constant Integer := Integer'Value (Line_Number_As_String);
         Filename : constant String :=
           Std_Input (Std_Input'First .. Delimeter_Position - 1);
         The_File : File_Type;
         Verified : Boolean := False;

      begin

         if Ada.Directories.Exists (Filename) and then Line_Number > 1 then

            Open (File => The_File, Mode => In_File, Name => Filename);

            Locate_Line :
            for I in 1 .. Line_Number loop

               Verified := Index (Get_Line (The_File), " access ") > 0;

               exit Locate_Line when Verified or else End_Of_File (The_File);

            end loop Locate_Line;

            Close (File => The_File);

         end if;

         if Verified then
            Put_Line ("Access Type Verified on line #"
                      & Line_Number_As_String & " of " & Filename);
         else
            Put_Line ("Suspected Access Type *NOT* Verified on line #"
                      & Line_Number_As_String & " of " & Filename);
         end if;

      end Process_Standard_Input;

   end loop Read_Standard_Input;

end ptrfinder2;

I can then string the first and second program together:

$ gprls -s -P test.gpr | ./ptrfinder1 | ./ptrfinder2

This produces the output:

Access Type Verified on line #3 of /home/pike/Workspace/access-detector/test/inc_ptr1.adb

It goes without saying that a set of Ada sources with no Access Type usage will result in no output from either the first or second program.

This expedition into Libadalang has reminded me how extremely effective Ada can be at writing software development tools.

The two programs described in this blog post were built and tested on 64-bit Ubuntu 19.10 using GNAT Pro and Libadalang.  They are also known to build successfully with the 64-bit Linux version of GNAT Community 2019.

The source code can be downloaded and built from GitHub.

RecordFlux: From Message Specifications to SPARK Code Thu, 17 Oct 2019 09:08:23 -0400 Alexander Senier

Software constantly needs to interact with its environment. It may read data from sensors, receive requests from other software components or control hardware devices based on the calculations performed. While this interaction is what makes software useful in the first place, processing messages from untrusted sources inevitably creates an attack vector an adversary may use to exploit software vulnerabilities. The infamous Heartbleed is only one example where security-critical software was attacked by specially crafted messages.

Implementing those interfaces to the outside world in SPARK and proving the absence of runtime errors is a way to prevent such attacks. Unfortunately, manually implementing and proving message parsers is a tedious task which needs to be redone for every new protocol. In this article we'll discuss the challenges that arise when creating provable message parsers and present RecordFlux, a framework which greatly simplifies this task.

Specifying Binary Messages

Ethernet: A seemingly simple example

At first glance, Ethernet has a simple structure: a 6-byte destination field, a 6-byte source field and a 2-byte type field followed by the payload:

Simplified Ethernet frame layout

We could try to model an Ethernet frame as a simple record in SPARK:

package Ethernet is

   type Byte is mod 2**8;
   type Address is mod 2**48;
   type Type_Length is mod 2**16;
   type Payload is array (1..1500) of Byte;

   type Ethernet_Frame is record
      Destination : Address;
      Source      : Address;
      EtherType   : Type_Length;
      Data        : Payload;
   end record;

end Ethernet;

When looking closer, we realize that this solution is a bit short-sighted. Firstly, defining the payload as a fixed-size array as above will either waste memory when handling a lot of small (say, 64-byte) frames or be too short when handling jumbo frames which exceed 1500 bytes. More importantly, the Ethernet header is not as simple as we pretended earlier. Looking at the standard, we realize that the EtherType field actually has more complicated semantics to allow different frame types to coexist on the same medium.

If the value of EtherType is greater than or equal to 1536, the frame is an Ethernet II frame and EtherType is treated as a type field which determines the protocol contained in Data. In that case, the Data field uses up the remainder of the Ethernet frame. If the value of EtherType is less than or equal to 1500, the frame is an IEEE 802.3 frame and the EtherType field represents the length of the Data field. Frames with an EtherType value between 1501 and 1535 are considered invalid.
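These boundary values are easy to get wrong; a small Python sketch makes them concrete (the function name is illustrative only):

```python
def classify_ether_type(value: int) -> str:
    """Interpret the 16-bit EtherType/length field of an Ethernet frame."""
    if value >= 1536:
        return "Ethernet II"   # field names the payload protocol
    if value <= 1500:
        return "IEEE 802.3"    # field is the payload length in bytes
    return "invalid"           # 1501 .. 1535 is reserved
```

For example, 0x0800 (the EtherType assigned to IPv4) is 2048 and therefore classifies as Ethernet II.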

To make things even worse, both variants may contain an optional IEEE 802.1Q tag identifying the frame's priority and VLAN. The tag is inserted after the source field and is itself composed of two 16-bit fields, TPID and TCI. It is present if the bytes that would contain the TPID field have a hexadecimal value of 8100. Otherwise these bytes contain the EtherType field.
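In code, detecting the tag amounts to peeking at the two bytes after the source address. A rough Python sketch of that layout (the helper name is hypothetical; field offsets follow the description above):

```python
import struct

def split_header(frame: bytes):
    """Return (destination, source, tci, ether_type, payload) for a raw
    Ethernet frame, handling the optional IEEE 802.1Q tag."""
    destination, source = frame[0:6], frame[6:12]
    (probe,) = struct.unpack_from("!H", frame, 12)   # would-be TPID bytes
    if probe == 0x8100:                              # 802.1Q tag present
        tci, ether_type = struct.unpack_from("!HH", frame, 14)
        return destination, source, tci, ether_type, frame[18:]
    return destination, source, None, probe, frame[14:]
```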

Lastly, the Data field will usually contain higher-level protocols. Which protocol is contained and how the payload is to be interpreted depends on the value of EtherType. With our naive approach above, we have to manually convert Data into the correct structured message format. Without tool support, this conversion will be another source of errors.

Formal Specification with RecordFlux

Next, we'll specify the Ethernet frame format using the RecordFlux domain specific language and demonstrate how the specification is used to create real-world parsers. RecordFlux deliberately has a syntax similar to SPARK, but deviates where more expressiveness was required to specify complex message formats.

Packages and Types

Just like SPARK, entities are grouped into packages. By convention, a package contains one protocol like IPv4, UDP or TLS. A protocol will typically define many types and different message formats. Range types and modular types are identical to those found in SPARK. Just like in SPARK, entities can be qualified using aspects, e.g. to specify the size of a type using the Size aspect:

package Ethernet is

   type Address is mod 2**48;
   type Type_Length is range 46 .. 2**16 - 1 with Size => 16;
   type TPID is range 16#8100# .. 16#8100# with Size => 16;
   type TCI is mod 2**16;

end Ethernet;

The first difference from SPARK is the message keyword, which introduces something similar to a record but with important differences to support the non-linear structure of messages. The equivalent of the naive Ethernet specification in RecordFlux syntax would be:

type Simplified_Frame is
   message
      Destination : Address;
      Source      : Address;
      Type_Length : Type_Length;
      Data        : Payload;
   end message;

Graph Structure

As argued above, such a simple specification is insufficient to express the complex corner-cases found in Ethernet. Luckily, RecordFlux messages allow for expressing conditional, non-linear field layouts. While SPARK records are linear sequences of fixed-size fields, messages should rather be thought of as a graph of fields where the next field, its start position, its length and constraints imposed by the message format can be specified in terms of other message fields. To ensure that the parser generated by RecordFlux is deadlock-free and able to parse messages sequentially, conditionals must only reference preceding fields.

We can extend our simple example above to express the relation of the value of Type_Length and length of the payload field:

   Type_Length : Type_Length
      then Data
         with Length => Type_Length * 8
         if Type_Length <= 1500,
      then Data
         with Length => Message’Last - Type_Length’Last
         if Type_Length >= 1536;

For a field, the optional then keyword defines the field to follow in the graph. If that keyword is missing, this defaults to the next field appearing in the message specification as in our Simplified_Frame example above. To have different fields follow under different conditions, an expression can be added using the if keyword. Furthermore, an aspect can be added using the with keyword, which can be used to conditionally alter properties like start or length of a field. If no succeeding field is specified for a condition, as for Type_Length in the range between 1501 and 1535, the message is considered invalid.

In the fragment above, we use the value of the field Type_Length as the length of the Data field if its value is less than or equal to 1500 (IEEE 802.3 case). If Type_Length is greater than or equal to 1536, we calculate the payload length by subtracting the end of the Type_Length field from the end of the message. The 'Last (and also 'First and 'Length) attributes are similar to the respective SPARK attributes, but refer to the bit position (or bit length) of a field within the message. The Message field is special and refers to the whole message being handled.
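To make the arithmetic concrete, here is the same computation in plain Python (bit offsets as RecordFlux sees them; the function and parameter names are illustrative):

```python
def data_length_bits(type_length: int,
                     message_last: int,
                     type_length_last: int) -> int:
    """Length of the Data field in bits, following the two branches above.

    message_last and type_length_last are the 'Last bit positions of the
    whole message and of the Type_Length field, respectively."""
    if type_length <= 1500:                      # IEEE 802.3: explicit length
        return type_length * 8
    if type_length >= 1536:                      # Ethernet II: rest of frame
        return message_last - type_length_last
    raise ValueError("Type_Length between 1501 and 1535 is invalid")
```

For a minimal 802.3 frame with Type_Length = 46, the Data field is 46 * 8 = 368 bits long; for an Ethernet II frame the payload simply extends to the end of the message.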

Optional Fields

The graph structure described above can also be used to handle optional fields, as for the IEEE 802.1Q tag in Ethernet. Let's have a look at the full Ethernet specification first:

package Ethernet is
   type Address is mod 2**48;
   type Type_Length is range 46 .. 2**16 - 1 with Size => 16;
   type TPID is range 16#8100# .. 16#8100# with Size => 16;
   type TCI is mod 2**16;

   type Frame is
      message
         Destination : Address;
         Source : Address;
         Type_Length_TPID : Type_Length
            then TPID
               with First => Type_Length_TPID’First
               if Type_Length_TPID = 16#8100#,
            then Type_Length
               with First => Type_Length_TPID’First
               if Type_Length_TPID /= 16#8100#;
         TPID : TPID;
         TCI : TCI;
         Type_Length : Type_Length
            then Data
               with Length => Type_Length * 8
               if Type_Length <= 1500,
            then Data
               with Length => Message’Last - Type_Length’Last
               if Type_Length >= 1536;
         Data : Payload
            then null
            if Data’Length / 8 >= 46
               and Data’Length / 8 <= 1500;
      end message;
end Ethernet;

Most concepts should look familiar by now. The null field used in the then expression of the Data field is just a way to state that the end of the message is expected. This way, we are able to express that the payload length must be between 46 and 1500. As there is only one then branch for payload (pointing to the end of the message), values outside this range will be considered invalid. This is the general pattern to express invariants that have to hold for a message.

How can this be used to model optional fields of a message? We just need to cleverly craft the conditions and overlay the following alternatives. The relevant section of the above Ethernet specification is the following:

   Source : Address;
   Type_Length_TPID : Type_Length
      then TPID
         with First => Type_Length_TPID’First
         if Type_Length_TPID = 16#8100#,
      then Type_Length
         with First => Type_Length_TPID’First
         if Type_Length_TPID /= 16#8100#;
   TPID : TPID;
   TCI : TCI;
   Type_Length : Type_Length

Remember that the optional IEEE 802.1Q tag consisting of the TPID and TCI fields is present after the Source only if the bytes that would contain the TPID field are equal to a hexadecimal value of 8100. We introduce a field Type_Length_TPID only for the purpose of checking whether this is the case. To avoid any confusion when using the parser later, we will overlay this field with properly named fields. If Type_Length_TPID equals 16#8100# (SPARK-style numerals are supported in RecordFlux), we define the next field to be TPID and set its first bit to the 'First attribute of the Type_Length_TPID field. If Type_Length_TPID does not equal 16#8100#, the next field is Type_Length, skipping TPID and TCI.

As stated above, the specification actually is a graph with conditions on its edges. Here is an equivalent graph representation of the full Ethernet specification:

Graph representation of Ethernet spec

Working with RecordFlux

RecordFlux comes with the command line tool rflx which parses specification files, transforms them into an internal representation and subsequently generates SPARK packages that can be used to parse the specified messages:

To validate specification files, which conventionally have the file ending .rflx,