Minimizing risk: embedded software and today's medical devices
April 01, 2011
Safety-critical software and SoCs – for the next generation of medical devices, what's the straight-and-narrow path?
Embedded developers face several decisions when developing medical embedded devices, from selecting the best system software for optimal application performance to understanding the interactions and limitations between the operating system and the target hardware. Should the software engineer use a small microkernel, a Real-Time Operating System (RTOS), or a General-Purpose OS (GPOS) such as Android or Linux? Other considerations include the physical size of the system for portability, and functionality requirements such as faster performance, power consumption, data protection, and display (user interface) technology. FDA certification and industry standards that affect embedded software selection come into the mix as well.
Modern medical devices are evolving at a record clip. From portable wireless units for patients to use at home to larger, more complex devices used by healthcare professionals at a facility, there’s no question we are at the forefront of developing new ways to empower patients and medical professionals alike. How do we make sure the system software that controls these devices does exactly as planned, with little to no risk of harming the patient?
At the heart of the matter: operating systems
Typically an Operating System (OS) manages a medical embedded device. An OS can vary from a simple “roll your own” built in-house by a few enterprising software coders to a more complex OS from an established vendor. A GPOS such as Linux or Android establishes a feature-rich platform for application development, but sometimes consumes more memory than necessary. An RTOS is also a good choice for modern medical devices, particularly when system requirements call for a deterministic preemptive kernel and a small memory footprint. Somewhere in the mix there is an ideal candidate for your application and hardware. One thing is certain: Before selecting your OS, know exactly what your application is intended to do and the hardware you plan to use.
How will the device be used?
One way to minimize risk when developing your embedded system is to first consider its use cases – not just how the end-user will interact with it, but also how it will be designed, developed, and tested. Will the device be used primarily by a healthcare provider, by the patient at home, or both?
Does the device have communication modes, or is it purely stand-alone? Depending on its communication needs, you may find that your preferred OS includes many of the modes you need, or you might prefer another OS, in which case you’ll have to port over the communications stack and/or the driver to attain the right mix of communications software.
Are there any identified real-time needs? For some devices there is no requirement for real-time behavior: if an interrupt is serviced 100 milliseconds late, the results are delayed by 100 milliseconds, but that won’t cause a failure. For a laser used in eye surgery, however, a late interrupt can be catastrophic: the laser must turn on and off at precisely the right time, and if it has eye-tracking guidance, it must move in lockstep with a predefined pattern even in the presence of eye movement.
Perhaps the device is a critical piece of equipment, so there is minimal sensitivity to cost. Conversely, a handheld device sold in the millions is highly cost-sensitive. These considerations directly affect the need to minimize the bill of materials (BOM), which in turn may mean minimizing the memory you’ll need to effectively build the complete application with some margin.
It’s all about the hardware
Once the use cases have been defined, it’s time to find the appropriate hardware. Medical systems can be extremely small, built around an 8-bit microcontroller clocked at less than 25 MHz and using only 8 KB of memory. More complex designs can include feature-rich SoCs clocked in the hundreds of MHz with megabytes of memory. The range extends from hybrid systems with special-purpose processors or DSPs to systems that include numerous multicore chips.
What’s best for your design comes out of the use cases and your expectations of how you want the system to behave.
Is multicore necessary?
Two main reasons come to mind for selecting multicore – pure processing performance and low-power operation – and a third could well be added: the combination of the two.
If you’re concerned with low power, you may want a multicore SoC simply because it can run all the available cores at a lower clock frequency rather than clocking a single main processor at a much higher frequency. When the extra cores are not needed, it can power them down to save energy.
While both power and performance are good reasons to use multicore, the question is more about finding the best way to allocate the CPUs. With symmetric hardware you can use a single operating system across all the available cores as a type of Symmetric Multicore Processing (SMP). Most GPOSes and some RTOSes have this capability. Using SMP could complicate scheduling across the cores, however: real-time hits due to cache misses in one core can cause a cache flush in another core, which invariably leads to delays in the system. Features such as spinlocks are common to all SMP-capable operating systems. If not employed correctly, a spinlock can hurt system performance, as one core stalls for an indeterminate time waiting for a resource held on another core to be freed.
The other way to build the system (even with symmetric hardware) is to apply Asymmetric Multicore Processing (AMP) techniques. This approach involves two or more separate operating systems (Figure 1) interacting through some type of communication channel, using hardware such as a series of FIFOs or shared memory. The Multicore Association’s Multicore Communications API (MCAPI) is a standard that makes this kind of application development portable.
When hardware and software worlds collide
Consider the case of a medical device with USB connectivity to a Windows host computer. It usually follows the USB specification, but when all the parts of the SoC are activated, the hardware intermittently signals outside the specification, and the host computer shuts down the port in the middle of a session, causing failure at the most inopportune time: during patient data collection. With the results lost, the patient, who made special preparations 24 hours in advance of the original procedure, must prepare for retesting.
Two fundamental causes lay behind the port shutdown. First, the software assumed that the USB controller would not fail. Second, the system architecture had not planned for the unit being unplugged in the middle of a session. Had either of these cases been taken into account, the system would have stored the data locally and transferred it after the session was re-established, minimizing the risk that the patient would need a retest.
The application relied on the USB controller’s flawless operation to prevent data loss. If the application had been broken down into several sections, data loss might have been avoided. With an architecture where data collection and data transmission are not interrelated, even if the link goes down, the data is still stored in the device, so when the link recovers the system can pick up where it left off without losing any data. Writing to a buffer before the transmission to the host, which would occur in the background, is one way to avoid lost data in this type of scenario.
If a software workaround to detect the USB bus suspension is used, the workaround can take the SoC pins out of USB mode and make them GPIO pins so that the host can detect a reset condition and force a re-enumeration of the device. The USB software would then resubmit the buffers, and the transmission would resume. The end result is that the data would not be lost, just delayed while the workaround took place.
An OS manages the system’s resources both in hardware and software. The most basic management is that of memory and time. But where does the responsibility of the OS stop and that of the application start? While an application can have a device driver built into it and talk directly to the hardware, porting to new hardware becomes a challenge as the device evolves and newer hardware is employed. Therefore, it is recommended that most, if not all, devices in the system be managed by the OS in order to ensure future portability.
Regulations and patient privacy
The need for portability extends to wireless interfaces such as a GSM radio or an 802.11 wireless interface, as well as Bluetooth and ZigBee; these links must also be secure and preserve patient privacy. Even on the device itself, it’s imperative that only doctors authorized to see a patient can actually see that patient’s data. Disallowing unauthorized access is also a critical requirement of any device. Are the records secure, even from the technician who services the equipment? Are there any modes in which the data is not secure? True Health Insurance Portability and Accountability Act (HIPAA) compliance means making information security paramount, including security inside the database of patient records.
Medical devices are a special breed that will touch all of us in some way. We need to take extra care when designing these systems to ensure that the device does what it is intended to do. Does it make sense to use an RTOS or a GPOS to meet the requirements of determinism, size, boot time, power optimization, and the breadth of middleware available? Finally, to minimize risk we need to make sure that all HIPAA and FDA regulations are followed.