Applying hardware verification techniques to software

October 01, 2008

Model checking is a powerful verification tool that has yielded significant improvements in hardware reliability; new static analysis tools are now bringing similar techniques to software.

Confirming design criteria

When designing digital circuits, hardware engineers routinely
use verification tools to prove that their designs meet the necessary
specifications. The most widely used technique, model checking, is considered
to be a highly effective verification technology, as evidenced by the
Association for Computing Machinery's conferral of the 2007 A.M. Turing Award
to Clarke, Emerson, and Sifakis for their pioneering work on the subject in the
early 1980s.

Model checking is a verification technique, meaning that it
is a mathematically sound method of proving that a given property holds for the
design. Roughly speaking, it works as follows: A description of the circuit is
taken as input. This is the model, which is expressed in a Hardware Description
Language (HDL) such as Verilog or VHDL. The user then specifies a property that
must hold for the circuit, such as, "The cruise-control state must never be
true if the brake signal input is true." The model checker then explores all
possible states of the model. If the property holds in every reachable state,
the result constitutes a proof that the property cannot be violated. If the
property is violated, the model checker produces a counterexample: the
sequence of state changes that leads to the violation. Figure 1 illustrates
this process.
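
In a temporal logic such as CTL (my rendering of the example, not a formula
from the article), that cruise-control requirement is a simple invariant:

$$\mathbf{AG}\,(\mathit{brake} \rightarrow \lnot\,\mathit{cruise})$$

read as: on all execution paths, at all times, if the brake signal is true
then the cruise-control state is false.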

 

[Figure 1: The model-checking process.]

Of course, model checking involves much more than this simple
description suggests. The main problem is state explosion – the number of
states is so huge that it is impossible to check all states concretely. Many
researchers have devised ingenious methods to deal with this issue, enabling
engineers to check models with up to about 10^100 states.

The most significant of these methods is the use of symbolic representations,
which avoid explicitly instantiating every state. Another important technique
is abstraction, in which a single abstract state stands for many concrete
states. The disadvantage of abstraction is that it can lead to false
positives: reported errors that do not really exist. Figure 2 shows an example
abstraction of a concrete state and a possible symbolic representation of the
state.
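
As a generic illustration of the symbolic idea (mine, not the contents of
Figure 2): a symbolic representation describes a set of states by a formula
over the state bits rather than by listing the states. For a 32-bit state
vector $s = (s_0, \ldots, s_{31})$, the single formula

$$s_{31} \land \lnot s_0$$

denotes all $2^{30}$ concrete states whose top bit is 1 and whose low bit is
0, without enumerating any of them.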

 

[Figure 2: An example abstraction of a concrete state and a possible symbolic
representation of the state.]
 

The properties to be checked are usually specified in a form of temporal
logic. This allows properties to be stated over sequences of states, such as,
"If state S0 holds, then eventually state S1 will become true," so quite
sophisticated behaviors can be checked.

Hardware vs. software verification

Model checking is so useful for hardware that researchers
have tried formulating ways to apply it to software systems as well. The
potential to prove programs correct is the holy grail of formal methods.
Although hardware descriptions and software programs share some
characteristics, key differences make it challenging to apply model-checking
techniques to software.

First, whereas hardware descriptions are finite state, most
software is unbounded state. In hardware, designers can't just materialize new
bits in which to store data. In software, developers do this all the time with
dynamically allocated memory and the program stack. Even when software is
finite state, the state space of a nontrivial program tends to be enormous, so
much more aggressive abstractions are needed to summarize it.
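
A contrived C fragment of mine illustrates the point: each loop iteration can
allocate a fresh node, so the number of reachable program states grows with
the input and has no fixed bound, unlike a circuit's fixed set of flip-flops.

    #include <stdio.h>
    #include <stdlib.h>

    struct node { int value; struct node *next; };

    /* Reads integers until EOF into a linked list. The list's length
       depends on the input, so no fixed number of state bits can
       describe every possible execution of this function. */
    struct node *read_list(void) {
        struct node *head = NULL;
        int v;
        while (scanf("%d", &v) == 1) {
            struct node *n = malloc(sizeof *n);  /* fresh state appears */
            if (n == NULL)
                break;
            n->value = v;
            n->next = head;
            head = n;
        }
        return head;
    }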

Second, hardware designs are generally closed. Everything
that needs to be known about the state of the hardware is provided in the
specification written in HDL. This is rarely the case with software,
considering that almost all programs use third-party libraries or make calls to
an operating system that can affect the state of the system.

Finally, with hardware, a high-fidelity model can be derived
directly from the HDL specification. With software, this task is not so simple.
Most modern programming languages are underspecified and riddled with
ambiguities that are usually left for the compiler to resolve. Thus, creating a
precise model requires knowing exactly how the compiler works, which is
typically not feasible. It is possible to derive a model automatically from a
program written in a mainstream programming language, but there are too many
unknowns to create a high-fidelity model for software in the same way as can be
done for hardware.
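
Two classic C examples (mine, not the article's) show the kind of ambiguity a
model builder must somehow resolve:

    int f(void);
    int g(void);

    int examples(int i) {
        /* Unspecified behavior: the C standard does not say whether f
           or g is called first, so a precise model must consider both
           orders. */
        int a = f() + g();

        /* Implementation-defined behavior: the width of int varies
           across compilers, as does the result of right-shifting a
           negative value. */
        int n = (int)sizeof(int);
        int s = -i >> 1;

        return a + n + s;
    }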

The key question with software is how to generate an accurate model at the
right level of abstraction. An abstraction is a simplification of the program,
chosen to model the properties of interest. For the abstraction to be sound,
it must over-approximate the program: every behavior of the original program
must also be a behavior of the abstraction, so that a property proved of the
abstraction is guaranteed to hold for the program. If an abstraction is very
coarse, the model checker has only a small number of states to consider and
the algorithms scale well, but the risk of false positives rises. If it is
very fine, that risk is reduced, but at the expense of efficiency and
scalability. Model-checking tools attempt to balance these competing concerns.
The abstractions that make sense for software typically model the control flow
of the program precisely but use fairly coarse abstractions for the values of
variables.

Models and abstractions that fail to capture important
program semantics are referred to as incomplete or unsound. A model checker
based on an unsound model might fail to find some errors.

Model-checking achievements in software

Despite these differences, model-checking techniques have been successfully
applied to software development, and their use is growing. One
approach is to use model-checking techniques to prove the properties of a
subset of the program's functionality. For example, the SPIN model checker was designed to find protocol errors
in distributed systems[1]. With SPIN, the input specification is written in a
specially designed language named PROMELA. One of the challenges involved in using model checkers is creating
the model. If possible, developers should automatically extract the model from
the code, which is much faster and less error-prone than deriving it manually.

Another approach is to use model checking for a specific
domain. At Microsoft, the Static Driver Verifier (SDV) uses model-checking
techniques to find bugs in device drivers. Device drivers are notoriously
difficult to write and debug and have a profound impact on system reliability.
In one 2003 study, researchers discovered that bugs in device drivers caused 85
percent of system crashes in Windows XP. SDV, which works by confirming that
the code uses the driver API correctly, has proven capable of finding subtle
errors (75-80 percent of which were genuine problems) in device drivers that
went through significant testing and were considered high quality[2].

SDV uses a model-checking technique called SLAM, which creates a model by
taking the original C code for the driver and constructing a finite-state
Boolean program from it. This is an abstraction in which the control flow is
preserved but all variables are modeled as Boolean values. The model checker
then operates on that model to determine whether it violates any of several
dozen API usage rules.
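
The following sketch uses hypothetical names to show the flavor of the
transformation (SLAM's real input is Windows driver code and its output format
differs): the branch structure survives, but the integer data that drives it
is reduced to a Boolean predicate or discarded.

    #include <stdbool.h>

    void lock(void);     /* hypothetical API: lock() and unlock()    */
    void unlock(void);   /* must be called in strict alternation     */
    void do_work(int n);
    bool nondet(void);   /* stands for a value the model can't track */

    /* Original driver fragment: */
    void worker(int n) {
        if (n > 0)
            lock();
        do_work(n);
        if (n > 0)
            unlock();
    }

    /* Its Boolean-program abstraction: the predicate "n > 0" becomes
       one Boolean b, and everything else about n is dropped. Both
       branches test the same b, so the checker can still prove that
       lock() and unlock() remain correctly paired. */
    void worker_abs(void) {
        bool b = nondet();
        if (b)
            lock();
        /* do_work() abstracted away: it has no effect on b */
        if (b)
            unlock();
    }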

SLAM uses an important technique called abstraction refinement. When the model
checker reports a violation, SLAM checks whether the path leading to it is
feasible in the original program; if it is not, it refines the model to make
it more precise along that path, eliminating the false positive, and checks
the improved model again. This check-analyze-refine process is repeated until
the violation is confirmed as a genuine error or there is a proof that it
cannot occur.
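
In outline, the loop looks like this; the C below is a paraphrase with
hypothetical types and helper functions, not SLAM's actual code.

    #include <stddef.h>

    typedef struct program  Program;   /* opaque: the driver's C code */
    typedef struct model    Model;     /* opaque: the Boolean program */
    typedef struct property Property;  /* opaque: an API usage rule   */
    typedef struct trace    Trace;     /* opaque: a counterexample    */

    Model *initial_abstraction(const Program *p, const Property *phi);
    int    check(const Model *m, const Property *phi, Trace **cex);
    int    feasible(const Program *p, const Trace *cex);
    Model *refine(Model *m, const Trace *cex);

    typedef enum { VERIFIED, BUG_FOUND } verdict;

    /* The check-analyze-refine loop used by SLAM-style tools. */
    verdict cegar(const Program *p, const Property *phi) {
        Model *m = initial_abstraction(p, phi);
        for (;;) {
            Trace *cex = NULL;
            if (check(m, phi, &cex))
                return VERIFIED;      /* proof: the rule cannot fail */
            if (feasible(p, cex))
                return BUG_FOUND;     /* the counterexample is real  */
            m = refine(m, cex);       /* spurious: sharpen the model
                                         along that path and retry   */
        }
    }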

Model checking is also used to verify properties of concurrent
programs, such as absence of deadlock. The Java PathFinder project exemplifies
this approach[3].
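
The classic target is a lock-ordering cycle. Here is a minimal version in C
with POSIX threads (Java PathFinder itself analyzes Java programs; this C
rendering is mine, for consistency with the other sketches):

    #include <pthread.h>

    static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

    /* Thread 1 takes a then b; thread 2 takes b then a. If each thread
       acquires its first lock before the other acquires its second,
       both block forever. Testing can easily miss this interleaving;
       an exhaustive search of thread schedules will find it. */
    static void *t1(void *arg) {
        pthread_mutex_lock(&a);
        pthread_mutex_lock(&b);
        pthread_mutex_unlock(&b);
        pthread_mutex_unlock(&a);
        return arg;
    }

    static void *t2(void *arg) {
        pthread_mutex_lock(&b);
        pthread_mutex_lock(&a);
        pthread_mutex_unlock(&a);
        pthread_mutex_unlock(&b);
        return arg;
    }

    int main(void) {
        pthread_t x, y;
        pthread_create(&x, NULL, t1, NULL);
        pthread_create(&y, NULL, t2, NULL);
        pthread_join(x, NULL);
        pthread_join(y, NULL);
        return 0;
    }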

New static analysis tools emerge

Despite the success of model checking in these limited domains,
general-purpose program verification remains impractical even for
medium-scale programs. Full verification, however, is a loftier ambition than
most projects require: many programmers would be satisfied with tools that
find some of the bugs in their code, even without a proof that the program is
completely bug-free.

This need has given rise to a new breed of advanced static
analysis tools, such as GrammaTech's CodeSonar. These tools use techniques
similar to model checkers by creating models that are abstractions of the
program and then exploring execution paths in the same way that model checkers
explore program states. However, unlike true model checking, the model
construction is incomplete, meaning that these tools might fail to detect some
flaws.

Although this compromises the principle of verification,
these tools have proven effective at finding real and serious problems. They
work directly on the code, are easy to adopt, require no changes to the build
system, scale to millions of lines of real-world code, and complete their
analysis in a relatively short amount of time. In addition to these benefits,
advanced static analysis tools typically populate a central database with
their results and provide collaboration features that let large teams work
together on resolving the problems found.

Model checking is a powerful tool that has yielded significant improvements
in hardware reliability. Though the nature of software makes it difficult to
use model checking in exactly the same way, it is well suited to certain
special circumstances[4]. For general-purpose software, new static analysis
tools are using similar techniques to find programming flaws.

References

[1] Holzmann, G.J., The SPIN Model Checker: Primer and Reference Manual.
Addison-Wesley Professional, Boston, MA, 2003.

[2] Ball, T., Bounimova, E., Cook, B., Levin, V., Lichtenberg, J., McGarvey,
C., Ondrusek, B., Rajamani, S., and Ustuner, A., "Thorough Static Analysis of
Device Drivers." In EuroSys 2006, Leuven, Belgium.

[3] Java PathFinder, http://javapathfinder.sourceforge.net/.

[4] Mansouri-Samani, M., Mehlitz, P.C., Pasareanu, C.S., Penix, J.J., Brat,
G.P., Markosian, L.Z., O'Malley, O., Pressburger, T.T., and Visser, W.C.,
Program Model Checking Practitioner's Guide,
http://sarpresults.ivv.nasa.gov/ViewResearch/9/59.jsp.

Paul Anderson is VP
of engineering at GrammaTech, an Ithaca, New York-based spin-off of Cornell
University. Paul manages GrammaTech's engineering team and is the architect of
the company's static analysis tools. He has worked in the software industry for
16 years, helping organizations including NASA, the FDA, the FAA, MITRE, Draper
Laboratory, GE, Lockheed Martin, and Boeing apply automated code analysis to
critical projects. His research on static analysis tools and techniques has
been reported in numerous articles, journal publications, book chapters, and international
conferences. Paul received his BSc from King's College, University of London,
and his PhD in Computer Science from City University London.

GrammaTech, Inc.
607-273-7340
[email protected]
www.grammatech.com

 

Paul Anderson (GrammaTech)