Encryption 201: Static versus dynamic data

January 07, 2015

Moving data presents a whole new challenge when it comes to encryption.

In Part 1, “Choosing the right encryption scheme,” I looked at various aspects of AES encryption cores embedded inside an FPGA. Now we’ll take that a few steps further. When people talk about encryption, there are two different conditions to consider – data on the move and stored data (sometimes called data at rest).

Consider a case where AES-encrypted data is streaming along an Ethernet link at 1G, 10G, or faster. An attacker could intercept and store the passing messages for later analysis before attempting to break the encryption. He will rapidly accumulate a considerable number of data packets, and will have only the source and destination addresses with which to filter out messages of interest. These packets may all use the same encryption key, but in a well-organized crypto system the key will be changed frequently, limiting the scale of any security breach.

So what methods can attackers use to crack the encryption? To be honest, it's usually easier for an attacker to get at the secret data without cracking the encryption at all, by breaching physical or human security to reach the key or the plaintext. The data is only useful in plaintext form (e.g., you can't watch an encrypted video or read an encrypted document), and a key must likewise be held in unencrypted form to be used; the physical security protecting them is likely easier to breach than the encryption algorithm.

Alternatively, the attacker may have an insider supply the keys. Assuming the encryption itself must be broken, the method used to break the early DES system is called brute force: the attacker applies massive computing power to try every possible key. As computing power increases, this technique becomes feasible for well-funded organizations such as hostile governments. However, the sheer number of possible 128-bit keys, the minimum key size used by AES, is staggering (2^128, or roughly 3.4E+38).
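
To get a feel for that number, here is a quick back-of-the-envelope calculation in Python. The trial rate is purely an assumption chosen for round numbers, not a benchmark of any real hardware.

```python
# Rough brute-force estimate for a 128-bit AES key.
# The trial rate below is an illustrative assumption, not a real benchmark.
keyspace = 2 ** 128
print(f"{keyspace:.3e} possible keys")            # about 3.403e+38

trials_per_second = 1e9 * 1e9   # a billion machines, each testing a billion keys/s
seconds_per_year = 60 * 60 * 24 * 365
years = keyspace / trials_per_second / seconds_per_year
print(f"about {years:.1e} years to exhaust the keyspace")   # roughly 1e13 years
```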

Another common attack method relies on the attacker having sufficient access to the equipment containing the key to bring laboratory instruments to bear. Remember, I've already said that the embedded crypto core's signals should never go near the device's pins. In this type of attack, called Differential Power Analysis (DPA), the supply current is measured at the device's pins; because encryption operations on different data and keys consume slightly different amounts of energy, the resulting patterns of power consumption can be analyzed with signal processing to deduce the key.

Public key algorithms implemented on a microprocessor are more susceptible to this attack than AES. That's the theory; in practice, some academics have teased out keys from AES implementations under their full control, using DPA with a slow clock and a custom board designed to facilitate power measurements.
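
To make the idea concrete, here is a toy simulation of the statistical core of a DPA-style attack, written in Python with NumPy. Everything about it is a simplifying assumption: one secret key byte, a Hamming-weight leakage model, Gaussian noise, and a random substitution table standing in for the real AES S-box. It illustrates how a key can be pulled out of noisy power measurements, not how a real attack on AES is mounted.

```python
# Toy simulation of the statistics behind a DPA/CPA attack (assumptions only).
import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)            # stand-in for the AES S-box
SECRET_KEY = 0x3C                      # the byte the "attacker" tries to recover
N_TRACES = 3000                        # number of captured power measurements

def hw(x):
    return bin(int(x)).count("1")      # Hamming weight = number of set bits

# Simulate the device: each "power trace" leaks the Hamming weight of the
# S-box output, plus Gaussian measurement noise.
plaintexts = rng.integers(0, 256, N_TRACES)
leakage = np.array([hw(SBOX[p ^ SECRET_KEY]) for p in plaintexts])
traces = leakage + rng.normal(0.0, 2.0, N_TRACES)

# Attack: for each key guess, predict the leakage and correlate it with the
# measured traces; the correct guess shows the strongest correlation.
correlations = []
for guess in range(256):
    predicted = np.array([hw(SBOX[p ^ guess]) for p in plaintexts])
    correlations.append(abs(np.corrcoef(predicted, traces)[0, 1]))

best = int(np.argmax(correlations))
print(f"recovered key byte {best:#04x}, correlation {correlations[best]:.2f}")
```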

DPA is fundamentally about extracting a signal from noise, and it can be made more challenging by making the measurements difficult to obtain. This includes physical security measures such as zeroing the keys if tampering is detected. Factors such as a board layout using surface-mounted packages (where the pins are inaccessible) and devices with in-package decoupling capacitors will also frustrate attempts to measure the tiny power fluctuations.

Designs that use low power-supply voltages and higher clock frequencies, and in which other circuits on the chip generate additional noise, will be less vulnerable. In addition, design techniques in the encryptor itself can reduce susceptibility to DPA, and the attack can be made impractical by changing the key frequently enough that an assailant can't capture sufficient data to determine it before it's replaced. I'm not saying that it can't be done, but DPA is more of a threat to smartcards, public key algorithms, and microprocessor implementations than to typical applications using FPGAs to implement AES. That said, adding encryption to your system is a massive step forward compared with having none at all.

Data at rest presents different issues. The confidential data on the disk is obviously accessible to, and alterable by, an attacker. Two important criteria for encrypted storage are that the size of the data is unaltered, and that identical plaintext encrypted and stored in multiple places always produces different ciphertext. It's also desirable to have random access on a sector basis rather than insisting on encrypting and decrypting complete files.

The requirement that the data not be enlarged ensures that a disk can store the same amount of data whether it's encrypted or not. The tradeoff is that without the option of expanding the data to include an additional Integrity Check Value (ICV), it's difficult to provide authentication, i.e., a guarantee that the encrypted data hasn't been tampered with.

The reason identical data should always look different on the disk is that repeated ciphertext may reveal the structure of the data (similar to the problems with ECB mode mentioned in the last blog). It also opens a vulnerability to an attack in which the system is instructed to encrypt known, desirable data (e.g., a record indicating that someone has $1,000,000 in their bank account), which is then copied to another location on the disk (in this case over the attacker's own bank account record). The scheme that avoids this is a variant called AES-XTS (IEEE standard 1619). An XTS IP core is more complex than a basic AES design and uses two different keys to encrypt the data. It factors the data's sector address into the encryption to ensure that identical plaintext never produces identical ciphertext.
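
As a rough illustration of those two storage requirements, here is a short Python sketch using the pyca/cryptography package's XTS mode (my choice of library, purely for demonstration; the article is about FPGA IP cores, not software). It encrypts the same 512-byte sector contents for two different sector numbers and shows that the ciphertext length is unchanged while the two ciphertexts differ.

```python
# Sketch of sector encryption with AES-XTS (pyca/cryptography package assumed).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)          # XTS takes two AES-256 keys concatenated (512 bits)
sector_data = b"identical confidential record".ljust(512, b"\x00")

def encrypt_sector(data: bytes, sector_number: int) -> bytes:
    # The 16-byte tweak is derived from the sector address, so the same
    # plaintext stored at different sectors yields different ciphertext.
    tweak = sector_number.to_bytes(16, "little")
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(data) + encryptor.finalize()

c1 = encrypt_sector(sector_data, 1000)
c2 = encrypt_sector(sector_data, 1001)
print(len(c1) == len(sector_data))   # True: no expansion, sector size unchanged
print(c1 == c2)                      # False: identical data, different ciphertext
```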

Another consideration is that while encryption ensures data confidentiality, it doesn't tell you whether the message has been corrupted or maliciously altered. That matters for applications like financial transactions, where a changed digit in a message could turn a payment of $1,000 into $9,000. AES-based authenticated encryption offers a solution: the data is "hashed" as it is encrypted, and an Integrity Check Value (ICV) is appended to the end of the message. This expands the packet length, but increases the value to the user, as tampered messages can be detected and rejected.

The most popular IP core for authentication and encryption is called AES-GCM. The benefit of AES-GCM is that the entire operation can be pipelined, and 100G data rates can be supported in FPGAs. This is a further example of trading FPGA real estate for performance. The AES-GCM core is more complex than a basic AES implementation, but forms the basis for even more extensive encryption sub-systems, as we will see in Part 3.
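
For a feel of what the authenticated encryption step does, here is a small Python sketch, again using the pyca/cryptography package as a stand-in for a hardware core (an assumption for illustration only). It shows the 16-byte tag (the ICV) being appended and a tampered message being rejected on decryption.

```python
# Sketch of authenticated encryption with AES-GCM (pyca/cryptography assumed).
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                 # must never be reused with the same key

message = b"PAY $1,000 TO ACCOUNT 12345"
ciphertext = aesgcm.encrypt(nonce, message, None)
print(len(ciphertext) - len(message))  # 16: the packet grows by the 16-byte ICV/tag

# Flip one bit to simulate corruption or tampering in transit.
tampered = bytearray(ciphertext)
tampered[0] ^= 0x01
try:
    aesgcm.decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    print("tampered message detected and rejected")
```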

Paul Dillien has worked with Algotronix Ltd. covering sales and marketing for the past six years. He previously worked in the FPGA industry and is the author of The FPGA Market report. Paul is a Chartered Engineer who has held strategic and tactical marketing roles for leading U.S. and UK semiconductor companies, specializing in competitive analysis and negotiation.

Paul Dillien, Algotronix Ltd.