Hardware emulation to debug embedded system software
February 10, 2016
In today’s competitive landscape, getting complex electronic devices rich in embedded software to market faster while making them cheaper and more reliable is a very risky proposition.
Failing to test hardware designs thoroughly inexorably leads to respins, increasing design costs, delaying delivery of the netlist to the layout process, and ultimately pushing out the time-to-market target with devastating effects on the revenue stream. Testing embedded software late carries an even greater risk of missing the market window entirely.
It’s no surprise that the verification portion of the project cycle takes a disproportionately large amount of the schedule. That’s because tracking and eliminating bugs is no small feat, especially when the software content of a system on chip (SoC) is growing at a rate of approximately 200 percent per year. By contrast, the growth of the hardware portion of the design is only about 50 percent.
Hardware emulation as the foundation of system verification
While virtual prototyping and field-programmable gate array (FPGA) prototyping have gained attention for early embedded software testing, they cannot help with the integration of software and hardware. The former lacks the hardware accuracy required for tracking hardware bugs. The latter provides limited hardware debugging capabilities necessary to quickly zoom in on a bug.
As a result, development teams and project managers have turned to hardware emulation as the foundation of their verification strategy. Emulation is a versatile verification tool with many associated benefits, including hardware/software co-verification, or the ability to test the integration of hardware and software. Software developers have taken notice because it is the only verification tool able to ensure that the embedded system software works properly with the underlying hardware. It is noteworthy for hardware engineers debugging a complex SoC design as well, since it can trace a software bug into the hardware, or a hardware bug through the software’s behavior. Other benefits include fast compilation, another plus for software verification; thorough design debugging; and scalability to accommodate designs encompassing more than one billion application-specific integrated circuit (ASIC) gates. Additionally, it can process billions of verification cycles at the high speeds mandatory for validating embedded software and performing system validation (Figure 1).
In the past, hardware debug and testing was the sole focus of the verification portion of the project cycle, managed by logic simulation driven by hardware description language (HDL) testbenches. Traditional big-box emulation was employed solely for the largest designs. Formal verification has been adopted by many development teams to supplement simulation, increasing basic coverage and ensuring that corner cases aren’t missed. However, only hardware emulation can complete the entire verification task for SoC designs within a practical timeframe and alleviate the runtime problems associated with event-based simulation.
It’s all about the software content
An SoC’s software content makes co-verification an all-important part of the verification strategy because it verifies the hardware and software parts of an embedded SoC concurrently, confirming that they interact correctly before committing to silicon.
In the past, if there was a hardware problem once the design was taped out to silicon, the software developer had to work out how to code around it, if at all possible. By validating the software before the SoC is complete, the design team has the opportunity to fix hardware issues before they’re set in silicon. As already stated, emulation checks to make sure the embedded software is running on the supporting hardware according to the specification.
In the past, software debug was done using a variety of debug engines, one per core, that took advantage of hardware features providing visibility into and control over the inner workings of a processor. While these offered some debug capabilities, the ability to diagnose issues was limited by the kind of access the processor provided. Moreover, because traditional software debug typically happened on the actual system, software developers were executing real code on real hardware at target system speeds. This allowed them to work quickly through large volumes of code to find the errant routine.
These traditional techniques break down when debugging an SoC. Because there is no real hardware yet, the code cannot be executed at true system speed. In principle, the hardware can be simulated as the code executes, with the simulator providing full hardware visibility. The problem is speed: it’s a painfully slow way to debug code.
For example, if an SoC is designed to run programs on Linux, the software developer would have to complete the Linux boot, consuming billions of clock cycles, before the software of interest could begin executing. A rough estimate is that it would take more than 28 years at typical simulation speeds of about 10 hertz (Hz) to complete a Linux boot.
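The 28-year figure is easy to sanity-check. Assuming a Linux boot on the order of nine billion cycles (a hypothetical figure chosen to match the article’s estimate, not a measured one), the arithmetic works out as follows:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 seconds

boot_cycles = 9e9   # assumed: a Linux boot takes on the order of billions of cycles
sim_hz = 10         # typical event-driven simulator speed cited in the article

years = boot_cycles / sim_hz / SECONDS_PER_YEAR
print(f"Simulated Linux boot would take about {years:.1f} years")
```

At 10 cycles per second, nine billion cycles comes out to roughly 28.5 years, consistent with the estimate above.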
Regardless of whether hardware or software is being debugged, traditional hardware and software debug tools know nothing about each other. With large and complex SoC designs, it’s inefficient to run the two types of debug independently in an attempt to locate a problem.
Allowing the two to work together is the ideal scenario, and this is where emulation saves the day. The SoC design is mapped onto hardware, typically FPGAs or other programmable elements, which runs far faster than simulation. With this setup, the Linux boot can finish in as little as 15 minutes, depending on the actual clock rate achieved. Hardware emulation provides control and visibility similar to a hardware debugger’s, with both breakpoints and waveforms.
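The same back-of-the-envelope arithmetic shows where the 15-minute figure comes from. Assuming the same hypothetical boot length of nine billion cycles and an emulator clock around 10 MHz (a representative rate, not one stated in the article):

```python
boot_cycles = 9e9   # assumed: a Linux boot takes on the order of billions of cycles
emu_hz = 10e6       # assumed emulator clock of roughly 10 MHz

minutes = boot_cycles / emu_hz / 60
print(f"Emulated Linux boot finishes in about {minutes:.0f} minutes")
```

A six-order-of-magnitude speedup over a 10 Hz simulator turns decades into minutes, which is why emulation is the only practical vehicle for booting an OS on pre-silicon hardware.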
Confirming an SoC design will work as intended
Hardware emulation differentiates itself from other verification tools with its high performance, an increasingly important requirement driven by growing software content. It is able to confirm that an SoC design will work as planned, and it is suitable for handling complex designs that can be as large as one billion ASIC-equivalent gates and consume more than one trillion verification cycles per month. Even so, thorough and exhaustive functional verification using hardware emulation at this stage remains the most effective and economical debugging approach available (Figure 2).
The introduction of transaction-level modeling (TLM) and the availability of transactors can turn hardware emulation into a virtual platform test environment for a range of vertical markets. A transactor, part of a verification intellectual property (IP) portfolio, is a high-level abstraction model of a peripheral function or protocol. Transactors, often provided as off-the-shelf IP, are available for a variety of protocols. A typical catalog includes PCIe, USB, FireWire, Ethernet, Digital Video, RGB, HDMI, I2C, UART, and JTAG components.
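Conceptually, a transactor translates a single high-level transaction into the cycle-by-cycle pin activity the emulated design actually sees. The sketch below, a deliberately simplified illustration rather than any vendor’s implementation, shows the idea for a UART transmit transactor: one byte in, a bit-accurate serial frame out.

```python
def uart_tx_transactor(byte, baud_div=1):
    """Expand one byte into a 10-bit UART frame: start bit, 8 data
    bits (LSB first), stop bit. Returns the per-cycle pin values
    that would be driven into the emulated design's RX pin."""
    bits = [0]                                   # start bit (line pulled low)
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += [1]                                  # stop bit (line idles high)
    # Hold each bit on the pin for baud_div emulator clock cycles
    return [b for b in bits for _ in range(baud_div)]

frame = uart_tx_transactor(0x41)  # ASCII 'A' = 0b01000001
print(frame)
```

Real transactors are far richer (they handle handshaking, timing, and error injection), but the abstraction is the same: the testbench works in transactions while the design under emulation sees protocol-accurate signals.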
Better verification for more complex systems
Previously, hardware design was independent of the creation of the software to be executed on those chips. That’s no longer the case. As SoCs double their number of processors and incorporate twice the software content with each product generation, concerns about software become a priority for development teams and project managers. Now, an SoC isn’t complete until the development team has proven that the intended software works on the hardware platform.
An SoC is a full-fledged embedded system and needs hardware emulation to verify that it works correctly. With hardware emulation, development teams can plan more strategically and implement a debugging approach based on multiple abstraction levels. They can concurrently track a bug between the hardware and embedded software to identify where the problem lies. In the process, they save time in a cost-effective and efficient manner, dramatically reducing the risk of missing the market window.