Integrating error-detection techniques to find more bugs in embedded C software

November 01, 2011

Integrating several automated verification techniques as a best practice in embedded C software testing.

 

Automated techniques such as pattern-based static code analysis, runtime memory monitoring, unit testing, and flow analysis can be used together to find bugs in an embedded C application. The following discussion will demonstrate these techniques using Parasoft C/C++test, an integrated solution for automating a broad range of best practices to improve C and C++ software development team productivity and software quality.

Sample sensor application

The recommended bug-finding strategies can be explored in the context of a simple sensor application running on an ARM Cortex-M3 board. An application is created and uploaded to the board, but when it’s run, it doesn’t render the expected output on the LCD screen.

It’s not working, and the reason is unclear. Debugging on the target board would be time-consuming and tedious, as debugger results would need to be analyzed manually to determine the real problems. Alternatively, automated tools and techniques could be applied to pinpoint errors.

At this point, there are two options: step through the application with the debugger, or apply an automated testing strategy to peel errors out of the code. If the application still does not work after applying the automated techniques, the debugger can be used as a last resort.

Pattern-based static code analysis

Instead of debugging, pattern-based static analysis is applied first; it is fast, easy to use, and can be run after almost every code change. Static analysis identifies one problem (see Figure 1).

 

Figure 1: Static code analysis identifies a MISRA coding standard violation.



This is a violation of a MISRA rule stating that assignment operators should not be used inside Boolean expressions, where an assignment is easily mistaken for a comparison. The intention here was a comparison operator, not an assignment. This problem is fixed and the program is rerun.
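
For reference, here is a minimal illustration of the flagged pattern (hypothetical code, not the sensor application's actual source):

#include <stdio.h>

int main(void)
{
    int status = 0;

    /* Violation: assigns 10 to status, so the condition is always true. */
    if (status = 10) {
        printf("always taken\n");
    }

    /* Intended: compares status with 10. */
    if (status == 10) {
        printf("taken only when status equals 10\n");
    }
    return 0;
}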

There is improvement: some output is displayed on the LCD. However, the application crashes with an access violation. Once again, there is a choice to make: use the debugger or continue applying automated error-detection techniques. Given that automated error detection is very effective at finding memory corruption such as this, runtime memory monitoring is the best option.

Runtime memory monitoring of the complete application

Runtime memory monitoring can be performed by applying lightweight instrumentation suitable for running on the target board. After uploading and running the instrumented application and downloading results, an error is reported (see Figure 2).
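
Conceptually, this kind of monitoring substitutes checked accesses for raw ones. The following is an illustrative sketch of the idea only, not Parasoft's actual instrumentation:

#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch: a bounds-checked read that an instrumentation pass
   could substitute for a raw array access such as messages[msgIndex]. */
static const char *checkedRead(const char **arr, size_t len, int idx,
                               const char *file, int line)
{
    if (idx < 0 || (size_t)idx >= len) {
        fprintf(stderr, "%s:%d: array read out of range (index %d of %zu)\n",
                file, line, idx, len);
        abort();
    }
    return arr[idx];
}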

 

Figure 2: Runtime memory monitoring reports reading an array out of range.



This indicates reading an array out of range at line 48: the msgIndex variable evidently held a value outside the bounds of the array. Walking up the stack trace reveals that the out-of-range value was passed because of an improper condition in the code that calls printMessage(). The fix is to relax the value range control inside the if statement and remove the unnecessary condition (value <= 20), so that every sensor value maps to a valid message index:

void handleSensorValue(int value)
{
    initialize();
    int index = -1;
    if (value >= 0 && value <= 10) {
        index = VALUE_LOW;
    } else if (value > 10) {   /* the unnecessary (value <= 20) check is removed */
        index = VALUE_HIGH;
    }
    printMessage(index, value);
}

Now, when rerunning the application, no memory errors are reported. After the application is uploaded to the board, it seems to work as expected. However, some concerns remain.

One instance of memory misuse (the out-of-range read) was found in the code paths that were exercised, but does that mean there are no such errors in the code that wasn’t exercised? Coverage analysis shows that some code has not been exercised at all: the reportSensorFailure() function is not covered, and neither is the branch inside the mainLoop() function that calls it (see again Figure 2). One way to test this code is to create a unit test for the mainLoop() function, combined with a user stub for the readSensor() function, to simulate conditions that are difficult to reproduce during functional testing.

Unit testing with runtime memory monitoring

A test case skeleton is created and then filled with test code. A stub is also added for the readSensor() function to simulate a reading error (a sketch of this setup follows). The test case is run, exercising just this one previously untested function, with runtime memory monitoring enabled. The results show that the function is now covered, but new errors are reported (see Figure 3).
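
With the caveat that readSensor()'s real signature and the test name are assumptions here, the stub and test skeleton might look like this:

/* User stub: simulates a sensor read failure so that mainLoop() takes the
   branch calling reportSensorFailure(). The int return type and the -1
   error sentinel are assumptions, not the application's actual API. */
int readSensor(void)
{
    return -1;
}

/* Test case skeleton exercising the previously uncovered function. */
void sensor_tests_test_mainLoop(void)
{
    mainLoop();
}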

 

Figure 3: Unit testing with runtime memory monitoring enabled exposes memory errors.



The test case uncovered more memory-related errors. There is a clear problem with memory initialization (null pointers) when the failure handler is called. Further analysis shows that the order of calls in reportSensorFailure() was mixed up: finalize() is called before printMessage(), but finalize() frees the memory that printMessage() uses.

void finalize()
{
    if (messages) {
        free(messages[0]);
        free(messages[1]);
        free(messages[2]);
    }
    free(messages);
}

void printMessage(int msgIndex, int value)
{
    const char* msg = messages[msgIndex];
    printf("Value: %d, State: %s\n", value, msg);
    fflush(stdout);
}

void reportSensorFailure()
{
    finalize();
    printMessage(ERROR_MSG, 0);
}

This order is fixed, and the test case is rerun one more time.

That resolves one of the reported errors. The next step is to address the second problem: an access violation in printMessage(). It occurs because the messages table is not initialized when the failure handler runs. To resolve this, the initialize() function is called before printing the message. The repaired function looks as follows:

void reportSensorFailure()
{
    initialize();
    printMessage(ERROR_MSG, 0);
    finalize();
}

When the test is rerun, only one task is reported: an unvalidated unit test case, which is not really an error. The test’s outcome simply needs to be verified to convert it into a regression test (see Figure 4).

 

Figure 4: The test must be configured for regression testing.



Next, the entire application is run again. Coverage analysis shows that almost all of the code was covered, and the results indicate that no memory errors occurred.

Even though the entire application was run and a unit test was created for an uncovered function, some paths are still not covered. More unit tests could be written, but it would take considerable time to cover every path in the application. Instead, those paths can be simulated with flow analysis.

Flow analysis

Flow analysis is run to simulate different paths through the system and check if there are potential problems in those paths. Several issues are reported (see Figure 5).

 

Figure 5: Flow analysis discovers several problems in the paths.



There is a potential path, one that was not covered at runtime, on which the finalize() function frees the same memory twice. The reportSensorFailure() function calls finalize(), which calls free(), and finalize() is called again in mainLoop(). This can be fixed by making finalize() more intelligent:

void finalize()
{
    if (messages) {
        free(messages[0]);
        free(messages[1]);
        free(messages[2]);
        free(messages);
        messages = 0;
    }
}
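
To picture the path that was flagged, here is a hypothetical reconstruction of mainLoop() (its actual source is not shown in this article):

/* Hypothetical reconstruction, for illustration only: the real mainLoop()
   is not listed in this article. */
void mainLoop(void)
{
    int value = readSensor();
    if (value < 0) {
        reportSensorFailure();   /* calls finalize(), which frees messages */
    } else {
        handleSensorValue(value);
    }
    finalize();                  /* second call: before the fix above, this
                                    freed the messages table a second time */
}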

Flow analysis is then run one more time. Only two problems are reported (see Figure 6).

 

Figure 6: Flow analysis detects two remaining problems.



An array is potentially being accessed with index -1. The index variable is initialized to -1, and there is a possible path through the if statement that does not set it to a valid value before printMessage() is called. Runtime analysis did not lead down this path, and it might never be taken in real life. That is the major weakness of flow analysis compared with actual runtime memory monitoring: flow analysis shows potential paths, not necessarily paths that will be taken during actual application execution. This potential error is easily fixed by removing the unnecessary condition (value >= 0):

void handleSensorValue(int value)
{
    initialize();
    int index = -1;
    if (value <= 10) {   /* (value >= 0) removed: index is now always set */
        index = VALUE_LOW;
    } else {
        index = VALUE_HIGH;
    }
    printMessage(index, value);
}

The final error reported is fixed in a similar way. Now, when rerunning flow analysis, no issues are reported.

Regression testing

To ensure that everything is still working, the entire analysis is rerun. First, the application is run with runtime memory monitoring and everything seems fine. Then unit testing is run with memory monitoring and a task is reported (see Figure 7).

 

Figure 7: Unit testing detects a regression failure.



The unit test detected a change in the behavior of the reportSensorFailure() function. It was caused by the modifications made in finalize() to correct one of the previously reported issues. The task draws attention to the change and indicates that the test case must be reviewed: either the code should be corrected or the test case should be updated to confirm that the new behavior is expected. After looking at the code, it is apparent that the latter is true, and the assertion’s condition is updated:

void sensor_tests_test_reportSensorFailure()
{
    {
        /* Pre-condition: start with no message table allocated. */
        messages = 0;
    }
    {
        reportSensorFailure();
        /* Verified outcome: the repaired handler frees the table and
           resets the pointer to null. */
        CPPTEST_ASSERT(0 == messages);
    }
}

As a final sanity check, the entire application is run on its own, building it in the integrated development environment without any runtime memory monitoring. The results confirm that it is working as expected.

Complementary tools

The testing methods applied here (pattern-based static code analysis, runtime memory monitoring, unit testing, flow analysis, and regression testing) do not compete with one another; they complement one another. Used together, they provide a remarkably powerful level of automated error detection for embedded C software.

Marek Kucharski, president of Parasoft SA and VP of development, directs operations, sales, and development at Parasoft Corporation’s Polish subsidiary. Marek has been developing and managing software systems since he graduated from Jagiellonian University in Krakow in 1994. His professional experience includes building a wide range of software, from retail client server systems to cutting-edge development tools.

Mirosław Zielinski, C++test product development manager, is responsible for developing embedded applications of Parasoft’s C/C++test embedded testing product. Mirosław has been developing and supporting embedded systems testing frameworks since he graduated from AGH University of Science and Technology in Krakow in 2002, where his studies of automation systems and applied robotics gave him key insight into embedded software industry quality challenges.

Parasoft
626-256-3680
[email protected]
Linkedin: www.linkedin.com/company/parasoft
Facebook: www.facebook.com/parasoftcorporation?ref=s
Twitter: @Parasoft
www.parasoft.com

 

 

 

 

Marek Kucharski (Parasoft) and Miroslaw Zielinski (Parasoft)