AdaCore Blog

Efficient use of Simics for testing

by Jérôme Lambourg

As seen in the previous blog article, AdaCore relies heavily on virtualisation to test its GNAT Pro products for VxWorks.

This involves roughly 350,000 tests that are run each day, including the 60,000 tests that are run on Simics. A typical test involves the following steps:

  1. Compilation of an example, with switches and sources defined in the test;
  2. Start an emulator and transfer the compiled module or RTP onto it, potentially with some support files;
  3. Run the module or RTP;
  4. Retrieve the output, potentially with files created on the target;
  5. Exit from the emulator.

Compared to an equivalent test run on a native platform, these steps present two challenges:

  1. Feasibility: we need to add the proper support to enable those steps, in particular the file transfer between the host and the emulated target, and do so reliably.
  2. Efficiency: running the emulator and transferring the files back and forth can lead to a significant overhead, incompatible with the time constraints (24h maximum testing time). Some enhanced techniques are thus needed to speed up those critical steps.

Instrumentation overview

In order to accomplish this testing, we need instrumentation, both on the host side (i.e. in the simulator) and on the guest itself (the VxWorks kernel). The requirements are:

  • Be able to transfer files back and forth between guest and host;
  • Be able to retrieve the output;
  • Automatically execute the test scenario on the target.

File transfer

To support transferring files from/to the host, we used the Simics Builder module of Simics. This allowed us to create a dedicated peripheral with which we can interact to transfer files between the host and a RAM drive on the target.

We chose this solution as this can be very efficient (as opposed to transferring the files using a simulated network) while giving us a very high level of flexibility for the various tests.

The simulated peripheral takes the form of a small set of registers and a buffer. The implementation of such a peripheral as a Simics plugin is pretty straightforward. However, some care must be taken for:

  • The size of the individual registers (32-bit or 64-bit, depending on the target system);
  • The endianness of those registers.

To copy files back and forth, the device responds to syscalls, with the following functions available:

  • OPEN
  • READ

Any write operation on the first register of the device triggers the syscall. The three other registers contain the arguments; their expected content depends on the specific syscall.

Below is the system-call dispatch of the simulated device (only the OPEN case is shown), provided as an example of Simics plugin implementation:

static REG_TYPE do_syscall(hostfs_device_t *hfs)
{
  REG_TYPE  ID   = hfs->regs[SYSCALL_ID].value;
  REG_TYPE  arg1 = hfs->regs[ARG1].value;
  REG_TYPE  arg2 = hfs->regs[ARG2].value;
  REG_TYPE  arg3 = hfs->regs[ARG3].value;
  REG_TYPE  ret  = 0;
  char     *host_buf  = NULL;
  REG_TYPE  guest_buf;
  REG_TYPE  len;

  switch (ID) {
    case SYSCALL_OPEN:
      guest_buf = arg1;
      len = 1024; /* XXX: maximum length of filename string */
      arg2 = open_flags(arg2);

      host_buf = malloc(len);
      /* Copy the filename from the guest buffer to the host buffer */
      copy_from_target(hfs, guest_buf, -1, (uint8_t *)host_buf);
      ret = open(host_buf, arg2, arg3);
      free(host_buf);
      break;

    /* ... other syscalls (READ, etc.) elided ... */
  }

  return ret;
}

On the VxWorks side, this call is implemented in the kernel:

PHYS_ADDR _to_physical_addr(VIRT_ADDR virtualAddr) {
  PHYS_ADDR physicalAddr;

  vmTranslate(NULL, virtualAddr, &physicalAddr);
  return physicalAddr;
}

#define TO_PHY(addr) _to_physical_addr((VIRT_ADDR)(addr))

static uintptr_t
hfs_generic (uintptr_t syscall_id,
             uintptr_t arg1,
             uintptr_t arg2,
             uintptr_t arg3)
{
    uintptr_t *hostfs_register = (uintptr_t *)hostfs_addr();

    if (hostfs_register == 0) return -1;

    hostfs_register[1] = arg1;
    hostfs_register[2] = arg2;
    hostfs_register[3] = arg3;

    /* Writing the syscall id to the first register triggers the syscall */
    hostfs_register[0] = syscall_id;

    /* The result is read back from the first argument register */
    return hostfs_register[1];
}

uint32_t hfs_open (const char *pathname, uint32_t flags, uint32_t mode)
{
  VIRT_ADDR tmp = ensure_physical((VIRT_ADDR)pathname, strlen(pathname) + 1);
  VIRT_ADDR buf;

  if (tmp != (VIRT_ADDR)NULL) {
    memcpy ((void*)tmp, pathname, strlen(pathname) + 1);
    buf = tmp;
  } else {
    buf = (VIRT_ADDR)pathname;
  }

  return hfs_generic (HOSTFS_SYSCALL_OPEN, (uintptr_t) TO_PHY(buf), flags,
                      mode);
}

Performance considerations

On a typical server, our target is to run around 6,000 of these tests in less than 45 minutes, which means roughly 2 tests per second.

To achieve this goal, the first thing to do is to maximize the parallelism of the execution of the tests: each test can generally be run independently from the others, and also generally requires a single core. This means that on a server with 16 cores, we should be able to execute 16 tests in parallel. It also means that to fit 6,000 tests in the 45-minute budget on such a server, each test should execute in about 7 seconds (7.2 s * 6000 / 16 = 2700 seconds of total execution time for the testsuite, i.e. 45 minutes).

Our first experiments with Simics were pretty far from this target: depending on the simulated platform, it could take from 15 seconds to almost a minute just to boot VxWorks. When we tried to run several Simics instances in parallel, the numbers got even worse, as starting the simulation consumed a lot of server resources.

From what we could see, this was due to the highly configurable and scriptable nature of Simics, where the full simulated environment is built at startup. Such timings were incompatible with our target total execution time.

To address this issue, the Simics engineers pointed us to a very useful feature: Simics checkpoints. Basically, this is a mechanism that lets us save the state of a Simics simulation and restore it at will, and restoring a checkpoint is very fast.

So what we do now when we build a VxWorks kernel is to also create a generic Simics checkpoint at the point where the VxWorks kernel has just booted. In the Simics script we use this looks like:

script-branch {
  local $con = $system.console
  $con.wait-for-string "Run_shell"
  write-configuration "checkpoint" -z
}

And that’s it. To load the checkpoint in our tests:

read-configuration "checkpoint"

to restore the simulation with VxWorks already booted.

This mechanism pre-elaborates the simulation environment, and drastically reduces both the load on the server and the total startup time.


With this testing environment, we can successfully and efficiently test our compiler for VxWorks. As an example, a complete run of the ACATS test suite (the standard conformance test suite for the Ada language), containing ~3,700 tests, takes roughly 31 minutes on a fast Linux server, which meets our target performance of roughly 2 tests per second.

By using Simics, AdaCore can now quickly put in place an efficient quality assurance infrastructure when introducing a new VxWorks target to be supported by its GNAT Pro product, improving time-to-market and the overall quality of its products.

Posted in #Simics    #WindRiver    #GNAT Pro   

About Jérôme Lambourg


Jérôme Lambourg is a senior engineer at AdaCore. After graduating from the French engineering school Télécom ParisTech in 2000, he worked first for Canal+Technologies, and then as a consultant for General Electric Medical Systems, SAGEM Mobile, and Thales Naval. He joined AdaCore in 2005, where he has worked on various parts of the technology: GPS, GNAT Pro for .NET, AUnit, and certification tools (the Qualifying Machine). He is now involved in cross and bare-metal platforms, in particular as product manager of GNAT Pro for VxWorks.