Common Pitfalls in IoT Security Implementations and How to Avoid Them - Part 2
April 10, 2019
This article focuses particularly on small wireless connected systems that are often battery powered and running a lightweight, low-bandwidth RF protocol.
Plaintext disclosure refers to secrets, such as keys or sensitive data, being delivered or stored “in the clear,” or unencrypted. Most of the transport vulnerabilities involve sensitive user data, such as login credentials, being delivered over an unencrypted channel. The storage vulnerabilities cover both sensitive user data and keys, and this is an area where disposable IoT devices need to be particularly concerned. Most of us exercise care when disposing of a laptop or smartphone to make sure sensitive information has been appropriately removed. Are you as careful when it comes to your connected light bulb? A “trashcan attack” is one in which sensitive information, such as Wi-Fi credentials, is extracted from a discarded device.
“Man-in-the-middle” (MITM) is an attack where the attacker secretly relays and possibly alters the communication between two parties who believe they are directly communicating with each other. A vulnerability to a MITM attack indicates a weakness in authentication. This type of exploit is often performed during commissioning when introducing a new device to the network. There are various methods that can be used to authenticate a device to allow it to securely join a network. Some of these require user intervention, such as entering a PIN code (BLE) or scanning a barcode (Z-Wave). Others can occur without user intervention, such as using device certificates combined with a certificate authority and/or a cloud service to provide authentication.
The most common implementation problems for MITM are either a failure to include authentication (for example, the “Just Works” pairing method in BLE), or a failure to properly validate certificates using a trusted third party, such as a Certificate Authority (CA).
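The certificate-validation failure can be made concrete with a short sketch using Python's standard `ssl` module (the hostname is a placeholder). The point is that proper validation is two checks, chain validation against a trusted CA and hostname verification, and that disabling either one is what opens the MITM door:

```python
import ssl

# A minimal sketch of proper certificate validation with Python's ssl module.
# create_default_context() enables both checks that MITM-vulnerable code skips:
# chain validation against trusted CAs and hostname verification.
ctx = ssl.create_default_context()

# These are already the defaults; they are shown explicitly because the classic
# mistake is turning them off (verify_mode = CERT_NONE, check_hostname = False).
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED

# Wrapping a socket would then look like this ("device.example.com" is a
# hypothetical endpoint):
# with socket.create_connection(("device.example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="device.example.com") as tls:
#         ...
```

The same principle applies whatever the TLS stack: a context that skips either check will happily accept an attacker's certificate.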
A “brute-force” attack attempts to recover a key or secret by exhaustively testing candidates, and it becomes practical when a weakness in the implementation of the cryptographic system makes the search feasible. When implemented correctly, the effort required to break a cryptographic system can be estimated theoretically and can easily exceed computational possibility. However, weaknesses in the implementation can drastically reduce the overall solution space, turning something that was impossible into something feasible or even easy. Examples include weak ciphers, improper use of cryptographic functions, hard-coded keys and insufficient entropy.
Many early cryptographic systems were rendered obsolete due to increasing accessibility of computational power combined with advances in finding crypto weaknesses. A brute-force attack on a cipher with a 40-bit key, for example, requires ~1.1 trillion tests. This may sound like a high number, but when coupled with the computational capacity of today’s graphics cards, FPGAs or cloud services, it is insufficient.
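The arithmetic behind that claim is worth writing out. The attacker rate below (one billion keys per second) is an assumed, modest figure for illustration; real GPU or FPGA rigs can do considerably better:

```python
# Back-of-the-envelope keyspace math for the 40-bit example above.
keyspace_40 = 2 ** 40            # ~1.1 trillion candidate keys
keyspace_128 = 2 ** 128          # for comparison: a modern key size

# Assumed attacker rate: 1 billion keys/second (a modest GPU/FPGA figure).
rate = 10 ** 9
seconds_40 = keyspace_40 / rate                       # ~18 minutes to exhaust
years_128 = keyspace_128 / rate / (365 * 24 * 3600)   # astronomically large

print(keyspace_40)        # 1099511627776
print(round(seconds_40))  # 1100
```

A 40-bit keyspace falls in minutes at that rate; a 128-bit keyspace does not fall at any rate achievable with foreseeable hardware.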
In 2017, researchers at KU Leuven University in Belgium were able to break DST40, a 40-bit cipher used in the early Tesla Model S key fobs. The “counterfeit fob” attack uses an RF receiver connected to a Raspberry Pi to “sniff” the car’s identifier RF beacon, requests a random challenge phrase from the car, and computes and transmits the correct response to the challenge, which can then be used to unlock the doors or start the car, all in about two seconds. The system uses a 5.4 TB data structure containing all the possible challenge phrases to look up the correct response. The brute-force work of breaking the cipher, which would have taken 777 days on the same Raspberry Pi, was pre-computed earlier using more powerful computing resources.
The countermeasure for this attack is to avoid weak ciphers, especially those that have already been broken. The DST40 cipher above was originally broken in 2005 by a team from Johns Hopkins University and RSA Laboratories and demonstrated in a similar way on a 2005 Ford Escape SUV. Other popular ciphers that have been proven weak include DES, 3DES, RC2 and RC4. In the case of a TLS connection, or any connection where a cipher or cipher suite is negotiated, it is important not to allow weak protocols (such as SSL) or weak cipher suites.
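In Python's `ssl` module, for example, refusing legacy protocol versions during negotiation is a one-line setting (requires Python 3.7+); the cipher-suite string shown is one reasonable choice, not the only correct one:

```python
import ssl

# Sketch: refuse legacy protocol versions during TLS negotiation.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # rejects SSLv3 and TLS 1.0/1.1

# Optionally restrict the cipher suites (OpenSSL keyword syntax). This keeps
# only ECDHE key exchange with AES-GCM authenticated encryption.
ctx.set_ciphers("ECDHE+AESGCM")
```

Other TLS stacks (mbedTLS, wolfSSL, etc.) expose equivalent knobs; the important habit is setting them explicitly rather than accepting whatever the peer offers.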
Improper Use of Cryptography Functions
Advanced Encryption Standard (AES) is a block cipher that operates on a data element that is a fixed size of 128 bits (16 bytes). When encrypting or decrypting a data stream that is more than 16 bytes long, multiple AES operations are required. Processing each block independently (called “AES_ECB” or “Electronic Code Book”) can reveal some patterns in the ciphertext data, which is undesirable for confidentiality, so the recommendation is to use a NIST-approved chained cipher mode such as AES_CBC (“Cipher Block Chaining”) or AES_CTR (“Counter”), or better yet, use an authenticated encryption mode such as AES_CCM (“Counter with CBC-MAC”) or AES_GCM (“Galois/Counter Mode”), which assure both confidentiality and authenticity of the data.
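The ECB weakness can be demonstrated without a crypto library. The sketch below uses a SHA-256-based keyed function as a stand-in for AES; it is not a real (invertible) cipher, but it shares the relevant property that each (key, block) pair always maps to the same output, which is exactly what lets ECB leak repetition:

```python
import hashlib

BLOCK = 16

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Stand-in for AES: a keyed pseudorandom function built from SHA-256.
    # NOT a real cipher (it is not invertible); it only mimics the deterministic
    # block-to-block mapping that makes ECB leak patterns.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"K" * 16
iv = b"\x01" * 16                      # fixed here only to keep the demo deterministic
plaintext = b"ATTACK AT DAWN!!" * 2    # two identical 16-byte blocks

blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]

# ECB: each block processed independently -> identical blocks leak.
ecb = [toy_encrypt_block(key, b) for b in blocks]

# CBC: each block XORed with the previous ciphertext before encryption.
cbc, prev = [], iv
for b in blocks:
    c = toy_encrypt_block(key, xor(b, prev))
    cbc.append(c)
    prev = c

print(ecb[0] == ecb[1])  # True  -- the repetition is visible in the ciphertext
print(cbc[0] == cbc[1])  # False -- chaining hides it
```

In production code, use real AES in an authenticated mode from a vetted library; the point here is only to make the pattern leakage visible.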
Beware that many of these modes require the use of an Initialization Vector (IV) that has security requirements that vary depending on the specific mode chosen. Conservative guidance is to use a strong random number, such as from an approved cryptographic random number generator, and to use the IV only once, making it a “nonce.” The most common mistake made with an IV is to use a hard-coded or constant IV.
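Generating a fresh IV per message is straightforward wherever an OS or hardware CSPRNG is available; in Python the `secrets` module wraps one:

```python
import secrets

# Generate a fresh, unpredictable 16-byte IV per message from the OS CSPRNG.
# The IV need not be secret (it is typically sent alongside the ciphertext),
# but for CBC it must be unpredictable, and for CTR/GCM the nonce must never
# repeat under the same key.
iv1 = secrets.token_bytes(16)
iv2 = secrets.token_bytes(16)

assert len(iv1) == 16
assert iv1 != iv2   # a repeated IV/nonce is the classic mistake
```

On an MCU, the equivalent is reading the IV from the hardware TRNG peripheral rather than reusing a constant from flash.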
A “hard-coded” key is a key that is embedded within the source code. Hard-coded keys are bad because they are difficult to change (replacing one requires recompiling the source) and because they are among the easiest keys to steal (via reverse engineering, reading the source code or other means). Ideally, keys are computed when they are needed or stored in encrypted form. NIST SP 800-57 recommends that keys be changed periodically, usually every one to three years or more frequently, depending on how the key is used. Furthermore, the system should also support a mechanism for revoking a key in the event it has been compromised.
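One common alternative to hard-coding is deriving working keys at runtime from a master secret held in secure storage. The sketch below uses HMAC-SHA256 as a simple key-derivation function; the master secret, context label, and version scheme are all illustrative assumptions, not a prescribed design:

```python
import hashlib
import hmac

# Illustrative master secret. In practice this would come from secure storage,
# a provisioning step, or a hardware security element -- never from source code.
master_secret = bytes(32)  # placeholder value for the demo

def derive_key(master: bytes, context: bytes, version: int) -> bytes:
    # HMAC-SHA256 as a simple KDF. Binding in a context label and a version
    # number means keys can be rotated (bump the version) or scoped per use
    # without touching the firmware image.
    return hmac.new(master, context + version.to_bytes(4, "big"),
                    hashlib.sha256).digest()

k_v1 = derive_key(master_secret, b"telemetry-encryption", 1)
k_v2 = derive_key(master_secret, b"telemetry-encryption", 2)
assert k_v1 != k_v2  # rotating the version yields a distinct key
```

A revocation mechanism then amounts to the device refusing versions the backend has marked as compromised.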
Cryptography relies on having a source of random numbers with high entropy. One of the common and seemingly innocuous cryptography implementation errors is choosing a bad random number source. This occurs when developers use the compiler-native “rand()” function instead of a cryptographically strong pseudo random number generator (PRNG) or they use a good PRNG with a bad seed value (such as a constant or a time reference).
Figure 2 shows a bitmap generated using “rand()” and a bitmap generated using a TRNG (True Random Number Generator). Note the moiré-like pattern in the “rand()” picture. A visible pattern means the output is not random, which makes “rand()” a poor choice for a source of entropy.
The strength of cryptography relies on the amount of entropy in the random numbers. Any pattern or bias in the random number source will reduce the number of options needed to test during a brute force attack. For illustration, let’s assume that an embedded system uses “system clocks since last reset” as a seed to its “rand()” function, and the “rand()” function is used to generate a key during system initialization. Because MCUs are largely deterministic, this system will tend to generate the same key or one of a small set of keys. If the system only generates eight unique keys, it doesn’t matter if the key is 128-bits or 256-bits long. The strength of that key is only three bits, because an attacker can determine the key in only eight attempts. Furthermore, the C standard specifies that the period of “rand()” be at least 2^32, which is well within brute force attack range, meaning if an attacker can discern the current position in the PRNG sequence, all future numbers will be known.
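The deterministic-seed failure is easy to reproduce. The sketch below stands in for the MCU scenario: a “ticks since reset” value that is effectively constant at every boot seeds a non-cryptographic PRNG, so every “random” key comes out identical, while an OS CSPRNG does not have this problem:

```python
import random
import secrets

def weak_key() -> bytes:
    # Anti-pattern: a deterministic "ticks since reset" seed. An MCU that
    # boots the same way every time lands on the same seed, and therefore
    # the same key. (1024 here is an illustrative constant.)
    ticks_since_reset = 1024
    rng = random.Random(ticks_since_reset)   # non-cryptographic PRNG
    return rng.getrandbits(128).to_bytes(16, "big")

print(weak_key() == weak_key())  # True -- every "random" 128-bit key is identical

strong1 = secrets.token_bytes(16)  # OS CSPRNG
strong2 = secrets.token_bytes(16)
print(strong1 == strong2)          # False
```

A 128-bit key produced by `weak_key()` has effectively zero bits of strength against an attacker who can guess the seed.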
Fortunately, many MCUs and wireless SoCs are equipped with hardware TRNG peripherals, which provide excellent sources of entropy. TRNGs are peripherals that harvest entropy from a physical source, such as thermal noise. NIST SP 800-90A/B/C and BSI AIS-31 specify the requirements an entropy source must meet to be suitable for cryptography. Alternatively, a cryptographic PRNG (such as CTR_DRBG) can be used if it is periodically seeded from a TRNG source.
If an MCU doesn’t have a TRNG peripheral, it may be possible to use another peripheral, such as a wireless RF receiver or an ADC, as an entropy source, but care must be taken with this approach. Specifically, the source must be characterized as an entropy source to determine if its mathematical properties are sufficient for cryptography per NIST requirements. The NIST standards also mandate the addition of health checks on the raw entropy source to ensure that it is maintaining proper functionality and the addition of a conditioning function (such as SHA-256) to remove any bias in the output.
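A conditioning step can be sketched in a few lines. Here the raw peripheral output is simulated as heavily biased bytes; hashing the pool through SHA-256 removes the visible bias, but note that conditioning cannot create entropy, so the output is only as strong as the entropy actually present in the input pool:

```python
import hashlib

# Simulated raw samples from a biased entropy source (e.g., ADC LSBs that
# are mostly zero). Real designs must characterize the source and run the
# NIST-mandated health checks on this raw data before trusting it.
raw_samples = bytes([0, 0, 1, 0, 0, 0, 1, 0] * 64)

# Conditioning: compress the pool through SHA-256 before using it as a seed.
seed = hashlib.sha256(raw_samples).digest()
print(len(seed))  # 32 bytes of uniformly distributed output
```

The 512 raw bytes here carry far less than 256 bits of entropy, so in practice the pool would be grown until its estimated entropy exceeds the digest size before the seed is used.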
This is the second blog in a series of three. Please check back for the conclusion.