Make your device unattractive to hackers: Design in security early on
March 30, 2017
Stuxnet. Black Hat insulin pump hack. Jeeps driving off the side of the road. Mirai botnet malware. It used to be difficult to argue that embedded devices need security, and even more difficult to convince designers to factor it into their designs. After these attacks, the argument hardly needs to be made: connected, embedded devices need security.
The sensors we deploy gather data that we use to make decisions or to feed important machine learning, and we need to be able to trust that root data. Likewise, distributed actuators need a way to trust the commands they receive. Yet time and time again we hear about attacks on Internet of Things (IoT) devices, simple attacks that exploit things like USB drives or default passwords, and the reports are coming faster and faster. Attacking “things” is the new fashion. With the number of “things” easily measured in billions, is it any wonder the IoT is such a rich target?
The technology to secure the IoT is available. In fact, good embedded security technology has been available since before the IoT existed. So why aren’t we secure yet? And how do we go forward?
How did we get here?
Let’s go back to a different era. The Jacquard loom (Figure 1) was one of the first programmable devices, using punch cards to program a mechanical loom’s weaving patterns. The era of “smart devices” was dawning. The idea of programmable weaving patterns is an historic moment in the development of the modern era. But why don’t we have stories of Jacquard looms being attacked? Were people just better then?
Security and hacking are an economic game: there is a risk and a reward side to every endeavor. In the case of Jacquard looms, the risk was relatively high: the devices were rare and generally inaccessible, leaving attackers little opportunity to execute an attack. On the other side, the value of an attack was low. What is the possible outcome of attacking a Jacquard loom? Perhaps sabotage by a competing textile manufacturer? The desire to see comical weaving patterns produced by a machine? With the attack hard to execute and the results unrewarding, the target simply wasn’t attractive to would-be nineteenth-century hackers.
[Figure 1 | Jacquard looms simplified textile manufacturing.]
Let’s move forward over 100 years. Room-sized computers were starting to serve government, military, and big business applications (Figure 2). These were certainly juicier targets for attackers. The reward side of the equation was increasing. However, access to these machines was still extremely limited – these machines were rare, and the skill to program and manage them was extremely specialized. Computing was becoming more important, but accessibility remained limited. From a risk-reward perspective, risk was still high even though reward was increasing.
[Figure 2 | The ENIAC computer is considered the first general-purpose digital computer.]
That isn’t to say that hacking didn’t exist in that era. Enigma provides an example of how the equation could go differently. Enigma machines were well protected, often accompanied by armed soldiers. But they were mobile, and mobility always increases the risk of capture. And the potential reward is difficult to overstate: a way to help end World War II and save countless lives. While the risk was high, the reward was much higher, so much so that the Allied forces were willing to throw substantial resources at breaking Enigma.
Jump to today. The proliferation of smart devices means we all carry products with intelligence and sensors. Phones are just the beginning, tracking where we are and what we do. Smart watches and fitness trackers log data about our bodies. Smart home devices know when we come home and what we like for entertainment. Our automobiles know not just our travelling patterns, but even our behavior – are we aggressive or passive drivers? Does that tell anything about our personalities? The proliferation of data-gathering sensors means that information is always being gathered around us, whether we consent or not. In our modern era, information is power and money, and all of the smart devices we surround ourselves with are also creating value by monitoring us.
Go back to the risk-reward equation: the risk side for many devices is now close to zero. Most of us carry at least one smart device, and the march of technology tells us we will own six or seven of them within the next decade. Even if you don’t believe the IoT numbers hype, the trend is clear: more of the things that surround us and sense or control our world are becoming connected. From a risk-reward perspective, this means the devices are accessible to any attacker; the cost of exploring and executing an attack has never been lower. The reward isn’t always spectacular. Take the example of programmable street-warning signs hacked to promote political points of view or display jokes. The value of such an attack is, at best, the satisfaction of giving passersby a laugh, and at worst it creates a public danger by replacing genuine warnings with comedic messages. The reward of the attack is low, but the risk of mounting it is close to zero.
Let’s look at a more serious risk-reward scenario. The Stuxnet attack on an Iranian nuclear processing facility had an obvious high reward: the perpetrators would be able to destroy important nuclear processing equipment of a political adversary. The attack was complex, and designed to take advantage of the connected nature of things to propagate without detection until it was too late and the centrifuges had already been reprogrammed to damage themselves. The reward was high, and the nature of our connected world lowered the risk side of the equation to a point where advanced actors could carry out a successful attack.
Forget six degrees of Kevin Bacon – the modern world makes us all connected with only one or two hops. Accessibility has never been greater, and in an era where data is value, the reward is always high.
So why aren’t we secure?
Connected devices are not a new idea. Connected toasters appeared as early as 2001. Connected cash machines showed up in 1972. We have had decades to think about the security of things. Why, then, are we seeing the problems we see today?
Let’s take a moment to understand what including security in a design means. First, the engineering team needs to consider and understand security issues to a certain extent, so there is a potential expertise gap on any team. Second, security incurs a cost: more stringent (less breakable) security based on dedicated hardware costs more money than simpler security tactics based on software measures.
With these challenges in mind, there have traditionally only been two reasons to design security into an embedded system:
- You are told that you have to.
- You are actively losing money.
In industries like financial terminals (including automated teller machines (ATMs)), the credit card brands set the security standards that the manufacturers of financial terminals have to meet. Card brands recognize their business depends on the trust of the financial transaction, and they can require equipment that participates in the transaction to meet a certain security level. Government and military applications also have strict security guidelines that equipment must meet to be considered for use. In these examples, the architects of the system wield enough control to demand a certain level of security be implemented by manufacturers.
Other industries have implemented security after discovering fraud or recognizing a loss of potential revenue. Printer cartridges employ ever-increasing levels of security so that printer manufacturers can protect the revenue streams that justify selling printers at cost. Medical consumable manufacturers (blood glucose strips, disposable sensors) implement the same measures. Data center hardware increasingly integrates security measures to prevent counterfeit devices from reaching customers. In these examples, there is no singular system architect (such as a government agency or a card brand like Visa) that can force manufacturers to implement security, so manufacturers tend to add it only after they see losses significant enough to justify the investment in time and bill of materials (BOM) cost.
Now we start to see why today’s IoT applications are generally lacking in security: the markets are so new that nothing requires security measures to be designed in before a product ships. There is little incentive for designers (especially at startups) to add even minimal BOM cost or spend extra time designing in a feature no one is demanding. Time to market is critical, and designers are usually under the gun to release.
Everyone agrees that security is an issue for connected devices (or at least that it will be an issue), but developers designing new products or working for startups generally report that they’ll design in security later, or that their volume won’t be big enough to attract hackers. And they reason that when their volume grows big enough, then they’ll add security in to the design. There are two issues with this approach:
- Security is hard to achieve effectively when it is bolted onto an existing product (the “band-aid” approach) rather than comprehended in the design from the start.
- In reality, engineers are never available to go back and fix things later. Instead, business development demands that they move on to their next projects and get them to market quickly. Security won’t become an issue until there is actual financial loss, and it becomes a priority for the entire company.
Mirai highlighted the threat of unsecured devices having network connections. The end effect was a large number of people having problems accessing popular Internet-based services like Netflix. IoT security suddenly became mainstream — something every person experienced. There have even been product recalls because of the attack. Given the threat of an expensive recall, will companies now begin to think about security at the start?
How do we go forward?
If we think security in smart, connected products is an important problem to solve, we must address the concerns discussed above. Startups and new projects are generally under the gun, facing both time constraints and cost constraints. How can engineers do the right thing and design in security features without breaking the BOM cost or missing deliverable deadlines?
Let’s take a step back for a moment and talk about security as an economic endeavor. No security system is unbreakable given infinite time and resources; the game of security is to build something that will practically keep people with bad intentions out. Attackers will always look for the weakest system that provides the most impact, so even moderate improvements in your system’s security can move your application into the realm of “too tough to hack.” It’s like the story of a group of people happening across a hungry lion: you don’t have to be the fastest, you just have to be faster than the slowest person in the group. With security, you don’t need to be the most secure system possible, just secure enough to be unattractive to an attacker with limited time and resources. In short: a moderate amount of security will probably solve most of your problems, and a thoughtful security design will likely keep you safe for a long time.
There are now tools like reference designs that ease the effort of designing in security. For example, Maxim’s MAXREFDES143# IoT embedded security reference design protects an industrial sensing node via authentication and notification to a web server. It uses SHA-256-based symmetric-key authentication and features an ARM mbed shield that represents a controller node responsible for monitoring one or more sensor nodes. Using this reference design doesn’t require deep expertise in security, sensor interfaces, or data-acquisition systems. IC companies, including Maxim, also offer components such as secure microcontrollers that provide a foundation for a protected, connected system.
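To make the idea concrete, here is a minimal sketch of the kind of SHA-256 challenge-response authentication such designs perform between a controller and a sensor node. This is an illustration of the general technique, not Maxim’s implementation; the key name and message sizes are assumptions, and it uses HMAC-SHA-256 from the Python standard library in place of a hardware authenticator:

```python
# Sketch: SHA-256-based challenge-response authentication between a
# controller node and a sensor node. Both sides hold a shared secret
# provisioned at manufacture (in a real design it lives in secure hardware).
import hashlib
import hmac
import os

SHARED_KEY = b"provisioned-at-manufacture"  # hypothetical shared secret

def sensor_respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Sensor node proves knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def controller_verify(challenge: bytes, response: bytes,
                      key: bytes = SHARED_KEY) -> bool:
    """Controller recomputes the MAC and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# The controller issues a fresh random challenge for each check,
# so a recorded response cannot simply be replayed later.
challenge = os.urandom(32)
response = sensor_respond(challenge)
print(controller_verify(challenge, response))
```

Because only a device holding the secret can produce the correct response, a counterfeit or tampered node fails verification, which is the property the reference design relies on to decide whether sensor data can be trusted.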
Given that good embedded security technology is readily available, there’s really no reason not to build security into your IoT design early on. You’ll be thankful when the next big hack happens and your product isn’t the one making headlines for all of the wrong reasons.