Crypto Series: Differential Fault Analysis by examples
So, after more than a year without writing anything here, I was bored today and thought it would be nice to share a new piece on attacking cryptographic implementations 🙂
Differential Fault Analysis (DFA) attacks are part of what is known as fault injection attacks. That is, they are based on forcing a cryptographic implementation to compute incorrect results and attempting to take advantage of them. With fault injection attacks (also often called active side channel attacks) one can achieve things like unauthenticated access to sensitive functionality, bypassing secure boot implementations, and basically bypassing any security checks an implementation performs.
With DFA attacks, one is able to retrieve cryptographic keys by analyzing and comparing correct/faulty output pairs. Of course, this assumes you are able to inject faults somehow... which is often true in hardware implementations: gaming consoles, STBs, smart cards, etc. At the software level, one can achieve similar things by debugging the implementation and changing data or by patching instructions... but that is something we have been doing for a long time, haven't we? 🙂 I often say that fault injection attacks are the analog version of 'nopping' instructions out of a program, although we often do not know exactly what kind of faults we are injecting (i.e. we often lack a fault model, but we can still successfully attack implementations this way).
There are ways to protect against this kind of attack as an application programmer, but this is not the objective of this post. In the remainder of this post, I will explain two powerful DFA attacks on two modern cryptographic algorithms: RSA and (T)DES. For some information on protecting from these attacks as a programmer, take a look at these slides. If there is some interest, I will outline the most common techniques to perform fault attacks in a future post.
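To give an early taste of what DFA can do, below is a minimal sketch (my own toy Python, not part of this post's walkthrough) of the classic Bellcore attack on RSA-CRT signatures: a single correct/faulty signature pair over the same message reveals a factor of n with one gcd. The primes, message and simulated fault location are made-up toy values, and pow(x, -1, m) assumes Python 3.8+.

from math import gcd

def rsa_crt_sign(m, d, p, q):
    # Textbook CRT signing: compute m^d mod p and mod q, then recombine.
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    h = (pow(q, -1, p) * (sp - sq)) % p   # Garner recombination
    return sq + h * q

p, q, e = 883, 2749, 17                   # toy primes, for illustration only
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))
m = 1337

s_good = rsa_crt_sign(m, d, p, q)         # correct signature

# Simulate a fault: the mod-p half of the computation goes wrong.
sp_bad = (pow(m, d % (p - 1), p) + 1) % p
sq = pow(m, d % (q - 1), q)
s_bad = sq + ((pow(q, -1, p) * (sp_bad - sq)) % p) * q

# s_good and s_bad agree mod q but differ mod p, so the gcd leaks q:
print(gcd(s_good - s_bad, n) == q)        # True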
Crypto-series: Elliptic Curve Cryptography
After a long long while, it's time to go on with our crypto series. Last time we talked about the RSA cryptosystem, and we learned its security is based on the integer factorization problem (plus the DL problem for message secrecy). Today, we'll continue with public key cryptosystems: we'll look into Elliptic Curve Cryptography.
Elliptic Curves
If we are talking about Elliptic Curve Cryptography, first we need to define what an Elliptic Curve is. Mathematically, an Elliptic Curve is a curve with the following (generalized Weierstrass) equation:

$y^2 + a_1 x y + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6$

This means that every point (x,y) for which the above expression is met will be part of the curve. However, it turns out in our case we can simplify the equation, because the curves we'll be using can generally be written as:

$y^2 = x^3 + ax + b$

Such a curve, over the reals (i.e. x and y are real numbers) and with a = -3, b = 1, looks like this:
What makes these curves special is that we can define an abelian group with them. To do that, we define the point at infinity and an addition law. The addition law is depicted in the following picture from Wikipedia:
As you can see, if you want to add two points P and Q, you draw a line through them. The intersection of this line and the curve is the point -(P+Q). Then, you just need to invert this point (negate the y coordinate) to obtain the final result.
Of course, we have special cases. If a point is added to itself, the line is defined as the tangent to the curve at that point, as intuitively the tangent touches the point 'two times'.
If we add a point to its inverse, we get a vertical line... and that's a problem, because it will never touch the curve again. Here is where the point at infinity comes to the rescue. The point at infinity is simply 'up there' (and 'down there'), and is the zero element of the group.
Elliptic Curves for Cryptography
We have seen above what an elliptic curve looks like over the reals, and how to perform additions of two points. Obviously, once addition is defined we also get scalar multiplication for free: just add a point to itself several times in a row (although you can do it in smarter and more efficient ways, such as double-and-add).
But how do we use it for cryptography? I mean, where is the difficult problem here? Actually, the difficult problem is again the discrete logarithm problem. In this case, we define it as follows:
Given a curve E and all its parameters, a base point P and a point Q=nP, obtain n.
And how is this difficult in the curves defined above, you might be thinking... The truth is we do not use real curves in ECC, but we use curves over finite fields instead. We can do it over prime fields GF(p), or we can do it over binary fields GF(2^n). I'll look only at GF(p) here, but similar concepts apply (although the simplified expression I defined above is slightly different in that case).
So, the curve I depicted previously taken over GF(8761) looks like this:
Messy, huh? Exactly the same addition laws apply here, but now when you add two points you draw a line... and when the line gets out of the GF(p) x GF(p) plane it wraps around and comes back from the other side. It is a little more difficult to depict and to visualize, but the concept is the same as before. And now you probably start seeing why this is difficult to solve...
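To make the group law concrete, here is a minimal sketch in Python (my own code, not from the post) of point addition and double-and-add scalar multiplication on the curve y^2 = x^3 - 3x + 1 over GF(8761) used above. The base point (0, 1) is just a convenient point I picked on this curve, and the modular inverses assume Python 3.8+:

# Curve parameters: y^2 = x^3 - 3x + 1 over GF(8761), as plotted above.
p, a, b = 8761, -3, 1

def point_add(P, Q):
    # Group law; None represents the point at infinity (the zero element).
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p                        # mirror -(P+Q) to get P+Q
    return (x3, y3)

def scalar_mult(n, P):
    # Double-and-add: the 'smarter' way of adding P to itself n times.
    R = None
    while n:
        if n & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        n >>= 1
    return R

G = (0, 1)                 # on the curve: 1^2 == 0^3 - 3*0 + 1
print(scalar_mult(1000, G))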
Why Elliptic Curves?
Now you might be wondering... why do we use Elliptic Curve Cryptography at all? What are the benefits? The answer is that ECC allows us to use smaller keys than other algorithms, like RSA or 'normal' DL systems, for the same amount of security.
This is because the best known general methods for solving the DL problem on elliptic curves have exponential complexity, while for the other systems we know subexponential methods. Hence, the DL problem on elliptic curves is believed to be more difficult than the equivalent base problems of other public key cryptosystems.
Now that we know how elliptic curves are used in cryptography and what benefits they have over traditional public key systems, let's look at an actual protocol built on them.
Elliptic Curve Diffie-Hellman
So, if you remember from when we talked about Diffie-Hellman, this is a key exchange protocol that relies on the Discrete Logarithm problem (and the Diffie-Hellman assumption). Usually this is done over a finite field GF(p), but now we have just defined a group based on Elliptic Curves which we can use as well.
In this case, Alice has a private key $d_A$ and a public key $Q_A = d_A G$, where G is the base point. Similarly, Bob has $d_B$ and $Q_B = d_B G$. Alice and Bob exchange public keys, and then each of them can compute a common point $S = d_A Q_B = d_B Q_A = d_A d_B G$.
This protocol relies on the assumption that the DL problem is infeasible in the elliptic curve (which requires a base point G of high order) and the Diffie-Hellman assumption.
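As a toy illustration (my sketch, reusing the point_add/scalar_mult helpers and the GF(8761) curve from the sketch above; real deployments use standardized curves such as P-256 with a base point of large prime order):

import secrets

# Toy base point on y^2 = x^3 - 3x + 1 over GF(8761), from the sketch above.
G = (0, 1)

d_a = secrets.randbelow(8761) + 1   # Alice's private scalar (toy range)
d_b = secrets.randbelow(8761) + 1   # Bob's private scalar
Q_a = scalar_mult(d_a, G)           # Alice's public key Q_A = d_A * G
Q_b = scalar_mult(d_b, G)           # Bob's public key Q_B = d_B * G

# After exchanging Q_a and Q_b, both sides compute the same point:
S_alice = scalar_mult(d_a, Q_b)     # d_A * (d_B * G)
S_bob = scalar_mult(d_b, Q_a)       # d_B * (d_A * G)
assert S_alice == S_bob             # shared secret d_A * d_B * G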
Other ECC algorithms
Besides the EC Diffie-Hellman algorithm defined above, there are several other algorithms based on Elliptic Curves. For example, one can compute digital signatures using Elliptic Curve DSA (ECDSA) or Elliptic Curve Nyberg-Rueppel. Each algorithm has its own details, but the important problem used as a foundation for all of them is the Discrete Logarithm problem over Elliptic Curves as we have defined it here.
Beware, however, that similarly to other algorithms, ECC algorithms also rely on other conditions. For example, ECDSA (and DSA) uses a secret per-signature parameter, usually called k, that must be unique: two signatures generated with the same value of k will reveal your secret key. As usual, if you implement cryptography, you need to be aware of the requirements and limitations or you will certainly screw up (knock knock, Sony!).
Crypto Series: Introduction to the RSA algorithm
After seeing how the ElGamal system works, today we are going to take a look at the RSA public key cryptosystem. The RSA algorithm was first published by Rivest, Shamir and Adleman in 1978 and is probably the most used crypto algorithm today.
Despite this fact, the algorithm seems to have been invented earlier by Clifford Cocks, a British mathematician who worked for a UK intelligence agency. Since this work was never published due to its top-secret classification, the algorithm received its name from Rivest, Shamir and Adleman, who were the first to discuss it publicly. A document declassified in 1997 revealed that Clifford Cocks had actually described an equivalent system in 1973.
Let me remind you once again that these posts are not intended to be 100% accurate in a mathematical sense, but an introduction for people who don't know much about cryptography. If you want more accurate and complete descriptions, grab a crypto book such as the Handbook of Applied Cryptography I've linked in most of my posts :).
Setting up the RSA algorithm
The RSA algorithm is based on the assumption that integer factorization is a difficult problem. This means that given a large value n, it is difficult to find the prime factors that make up n.
Based on this assumption, when Alice and Bob want to use RSA for their communications, each of them generates a big number n which is the product of two primes p,q with approximately the same length.
Next, they choose their public exponent e, which must be coprime to $\varphi(n)$. Typical values for e include 3 (which is not recommended!) and 65537 ($2^{16}+1$). From e, they compute their private exponent d so that:

$e \cdot d \equiv 1 \pmod{\varphi(n)}$

Where $\varphi(n)$ is Euler's totient of n. This is a mathematical function equal to the number of positive integers smaller than n which are coprime with n, i.e. numbers that do not have any common factor with n. If n is a prime p, then its totient is p-1, since all numbers below p are coprime with p.

In the case of the RSA setup, n is the product of two primes, and then $\varphi(n) = (p-1)(q-1)$, because the only numbers below n that share a factor with n are the multiples of p and the multiples of q. (In practice one can equivalently compute d modulo the Carmichael function $\lambda(n) = \mathrm{lcm}(p-1, q-1)$, which is what the example below does.)
Once our two parties have their respective public and private exponents, they can share the public exponents and the modulus they computed.
Encryption with RSA
Once the public key (i.e. e and n) of the receiving end of the communication is known, the sending party can encrypt messages like this:

$c = m^e \bmod n$

When this message is received, it can be decrypted using the private key and a modular exponentiation as well:

$m = c^d \bmod n$
Example
sage: p=random_prime(10000)
sage: q=random_prime(10000)
sage: n=p*q
sage: p,q,n
(883, 2749, 2427367)
sage: e=17
sage: G=IntegerModRing(lcm(p-1,q-1))
sage: d = G(e)^-1
sage: G(d)*G(e)
1
sage: m=1337
sage: G2=IntegerModRing(n)
sage: c=G2(m)^e
sage: c
1035365
sage: m_prime=G2(c)^d
sage: m_prime
1337
In the commands above, I first create two random primes below 10000 and compute n. Then I create an IntegerModRing object to compute things modulo lcm(p-1,q-1), and compute the private exponent as the inverse of the public exponent in that ring.
Next, I create a new ring modulo n. Then I can use the public exponent to encrypt a message m and use the private exponent to decrypt the ciphertext c... and it works!
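For those without Sage at hand, here is the same toy computation in plain Python (my sketch; pow(e, -1, m) needs Python 3.8+ and math.lcm needs Python 3.9+):

from math import lcm

p, q, e = 883, 2749, 17             # the values from the Sage run above
n = p * q                           # 2427367
d = pow(e, -1, lcm(p - 1, q - 1))   # private exponent
m = 1337
c = pow(m, e, n)                    # encrypt: c = m^e mod n
print(c)                            # 1035365, matching the Sage transcript
print(pow(c, d, n))                 # decrypt: c^d mod n recovers 1337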
Correctness of RSA encryption/decryption
We have seen it works in our previous example, but that doesn't prove it always works. I could have chosen the numbers for my example carefully so they would work.
Euler's theorem tells us that given a number n and another number a coprime to n (i.e. gcd(a, n) = 1), the following is true:

$a^{\varphi(n)} \equiv 1 \pmod n$

Therefore, and since $e \cdot d = 1 + k\varphi(n)$ for some integer k, for any message m coprime to n the encryption and decryption process will work fine: $(m^e)^d = m^{1+k\varphi(n)} = m \cdot (m^{\varphi(n)})^k \equiv m \pmod n$. However, for values of m that share a factor with n we need more advanced maths to prove the correctness.
Another way to prove it is to use Fermat's little theorem and the Chinese Remainder Theorem. I will explain these theorems in my next post and then I will provide a complete proof based on them.
RSA for signing
In the case of RSA, digital signatures can be easily computed by just using d instead of e. So, for an RSA signature one would take message m and compute its hash H(m). Then, one would compute the signature s as:

$s = H(m)^d \bmod n$

For verifying the signature, the receiving end would have to compute the message hash H(m) and compare it to the hash recovered from the signature:

$H(m) \stackrel{?}{=} s^e \bmod n$
Therefore, if the hash computed over the received message matches the one computed from the signature, the message has not been altered and comes from the claimed sender.
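As a sketch of this hash-then-sign flow (my toy code, not a complete scheme: real RSA signatures apply a padding scheme such as PKCS#1 v1.5 or PSS to the hash, which I omit here):

import hashlib
from math import lcm

p, q, e = 883, 2749, 17             # toy parameters from the example above
n = p * q
d = pow(e, -1, lcm(p - 1, q - 1))

msg = b"attack at dawn"
h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n   # H(m), reduced mod n
s = pow(h, d, n)                    # sign: s = H(m)^d mod n

# Verification: recompute H(m) and compare with s^e mod n.
print(pow(s, e, n) == h)            # True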
Security of RSA
In order to completely break RSA, one would have to factor n into its two prime factors, p and q. Otherwise, computing d from e is believed to be hard, because (p-1) and (q-1) are not known and n is a large number (which means that computing its totient is also difficult).
In a few posts I will show an algorithm to solve the factorization problem. However, another way to break RSA encrypted messages would be to solve a discrete logarithm. Indeed, since $c = m^e \bmod n$, if one solves the discrete logarithm of c modulo n, the message would be recovered.
Luckily, we already know that discrete logs are not easy to compute. And in this case, solving one does not break the whole system but just one message.
Crypto Series – ElGamal Cryptosystem
In our last post we learnt about the Discrete Logarithm problem, why it is a difficult problem, and how we can attempt to solve it if the numbers are manageable. Of course, in a real setting we wouldn't use 16-bit numbers as in my example, but at least 1024-bit numbers nowadays (and most likely even bigger numbers).
Now, we are going to see how to make use of that problem to create a public key cryptosystem. We will look at how ElGamal uses the DL problem to provide public key encryption and digital signatures. Keep on reading if you are interested!
Crypto Series: Discrete Logarithm
From the last post, it has become clear that at this stage we won't be able to make it without some maths. That's because we are now dealing with public key crypto, which is based on difficult mathematical problems (as in difficult to solve, not as in difficult to understand).
With symmetric crypto, we could understand the concepts of diffusion and confusion without needing to dive into maths. On the other hand, here we will need to understand the problems on which the algorithms rely in order to understand how they work.
In this post, we'll see what the Discrete Logarithm problem is, why it is difficult to solve (based on a simple intuition), and finally a method to solve this kind of problem. Of course it's not the only (nor the best) existing method, but in my opinion it is the simplest one to understand.
Crypto Series: Digital Signatures
In the previous post, I said I'd write about the Discrete Logarithm problem in the next post. However, I forgot to mention the general idea behind digital signatures. Since I can't sleep right now and have to take a train to the airport in a couple of hours, I decided to go ahead and write a few lines about digital signatures ;-).
Basic idea
The basic idea behind digital signatures is to make use of the fact that, in public key cryptography, a user has a private key which is never disclosed to anyone, in order to authenticate the user or messages generated by that user.
In a symmetric setting, authentication is performed using MAC or HMAC mechanisms, where at least two parties know the key used to generate the authenticated messages. Therefore, a given party could deny that he or she generated a given authenticated message: he is not the only one who knows the key, so there is no proof that he did generate it.
Of course, if only two parties know the key, and one of the parties knows that a particular message was not generated by himself, then it must come from the other party. However, in a legal dispute, there is no way to prove that and to an external observer both of the options are equally likely.
To solve that issue, digital signatures generate a sort of authentication code using a private key, never disclosed to anyone. Then, using the related public key, everyone can verify that signature and therefore be sure that the message came from that user. Since that entity is the only one knowing the private key, this sort of construction can be used to bind a user to a message and resolve any legal disputes that might arise.
Normally, you can see the digital signature generation process as some sort of encryption with a private key. On the other hand, you can imagine the signature verification (or opening) phase as a decryption using the public part of the key.
Practical usage of digital signatures
In the real world, documents are usually way larger than the message length that common digital signature algorithms can handle directly. Since authenticating each chunk of a document is not very practical (asymmetric crypto is usually slooooow), in practice a cryptographic hash is computed over the document, and this hash is signed using the private key and the signature algorithm.
Then, in the verification stage, a second hash is computed and compared against the signed hash. If they match, the signature is correct and therefore the received document was created by the signing party and has not been modified.
Of course, this assumes that cryptographic hash functions behave as expected and there are no collisions. Otherwise, if one could find another document producing the same hash (and thus the same signature), any legal proof that the document was created by the private key holder would be destroyed.
Therefore, choosing secure hash functions for usage within digital signatures is a crucial issue. As an example problem that arose due to the use of insecure hash functions with digital certificates, check the Hashclash project.
Crypto Series: New Directions in Cryptography
As some of you might have noticed already by looking at the title, this post will be the first one talking about public key cryptography. Today, I'll introduce the basic ideas around public key crypto and the ideas proposed by Diffie and Hellman in their famous paper 'New Directions in Cryptography' from 1976.
In subsequent posts, we will look at the discrete logarithm problem and the factorization problem. We'll also look into some public key cryptosystems, such as ElGamal and RSA. And after that, we'll look at Elliptic Curve Cryptography. With all this, the algorithms part of this series will be considered closed and I'll move on to cryptographic systems and protocols ;-). Stay tuned!
Crypto Series: Authentication codes
This time we'll treat two well-known techniques used to solve a common problem in cryptography: authentication. To put it simply, authentication is the process of establishing an identity or a message's origin.
To achieve this using symmetric cryptography, two basic mechanisms exist. The first of them, commonly referred to as Message Authentication Codes (MAC), is based on using block ciphers with a shared key between the party claiming an identity (or sending a message) and the party verifying the identity or the origin of the message.
The second one, known as Hashed Message Authentication Codes (HMAC), is based on the use of a hash function together with some shared key. In the remainder of this post, I briefly describe the basic idea behind these two ways of assuring message authentication.
Message Authentication Codes using block ciphers
A common way to authenticate messages is to use a block cipher, such as DES, in a mode of operation which makes the latest encrypted block dependent on both the key and all the previous plaintext blocks. For instance, one can think of using 3DES in CBC mode to create a MAC over a message: encrypt the message in CBC mode using 3DES with the shared key, get the last output block and attach it to your original message.
When the recipient gets the message with its MAC, it performs the same operation: encrypts the message in CBC mode and takes the last block. The result is compared against the MAC attached to the message: if there is a match, the sender of the message must have known the key (unless the encryption used is broken).
Despite being one of the most popular techniques for MAC generation (if not the most popular), CBC-MAC has some security problems and other techniques exist. For instance, you can take a look at Special Publication 800-38B by NIST.
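For illustration, here is a minimal CBC-MAC sketch in Python (my code; the 3DES implementation comes from the pycryptodome package, which is an assumption on my part, and key/message are toy values). As noted above, plain CBC-MAC like this is only safe for fixed-length messages:

from Crypto.Cipher import DES3      # pip install pycryptodome

def cbc_mac_3des(key: bytes, message: bytes) -> bytes:
    message += b"\x00" * ((-len(message)) % 8)      # zero-pad to 8-byte blocks
    cipher = DES3.new(key, DES3.MODE_CBC, iv=b"\x00" * 8)
    return cipher.encrypt(message)[-8:]             # MAC = last ciphertext block

# Toy 24-byte 3DES key (three distinct 8-byte halves).
key = bytes.fromhex("0123456789abcdef" "fedcba9876543210" "89abcdef01234567")
mac = cbc_mac_3des(key, b"an important message")
print(mac.hex())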
Hashed Message Authentication Codes
HMAC is a standardized way of using hash functions for authentication purposes. The idea is to incorporate the usage of a key into a hash function, in such a way that the resulting hash could not be produced without knowing the key.
The obvious choices of prefixing the message with the key or appending the key after the message before computing the hash have security problems (see Stop using unsafe keyed hashes, use HMAC by Nate Lawson). Therefore, a slightly more complex structure was invented to avoid such problems.
The HMAC construction is defined as follows:

$\mathrm{HMAC}(K, m) = H\big((K \oplus opad) \,\|\, H\big((K \oplus ipad) \,\|\, m\big)\big)$

Where opad (outer pad) is the constant 0x5c...5c and ipad (inner pad) is the constant 0x36...36. These constants, as well as the (padded) key, are of the same length as the hash function's block length.
With this, one would follow the same approach as with any MAC: compute the HMAC value for the given message, and send it attached to the message. The recipient will perform the same computation, and if it matches the one attached to the message he will conclude that the message was sent by someone who knows K (which is hopefully only the person/entity he shared it with 😉 ).
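To make the construction concrete, here is the formula above spelled out by hand in Python (my sketch, instantiated with SHA-256) and checked against the standard library's hmac module:

import hashlib, hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    block = 64                                  # SHA-256 block length in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()      # overlong keys are hashed first
    key = key.ljust(block, b"\x00")             # then zero-padded to block length
    ipad = bytes(b ^ 0x36 for b in key)         # inner pad: key XOR 0x36...36
    opad = bytes(b ^ 0x5C for b in key)         # outer pad: key XOR 0x5c...5c
    return hashlib.sha256(opad + hashlib.sha256(ipad + msg).digest()).digest()

key, msg = b"shared secret", b"message to authenticate"
print(hmac_sha256(key, msg) == hmac.new(key, msg, hashlib.sha256).digest())  # True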
This concludes my introduction to authentication codes. If you are looking for a good security analysis on HMAC functions, wait for Nate's post because I'm sure it will be very interesting.
Crypto Series: Cryptographic hash functions – SHA-2
So far, we've looked at block and stream ciphers in this series, including examples of each of them. Before going into asymmetric crypto, I want to explain a little bit about cryptographic hash functions and some of their applications. We'll look at hash functions in general and at the SHA-2 family as an example.
Note that I'll often skip the 'cryptographic' adjective throughout this series of posts, but I'll always be referring to cryptographic hash functions and not to regular hash functions. And as usual, this is by no means complete; it just tries to give a basic understanding of what hash functions are and what they usually look like.
I must say I never studied hash functions too deeply, so this stuff will serve as a reminder for me as well. If something is not as accurate as you'd hope for, let me know in the comments ;-).
Cryptographic Hash functions: properties
A cryptographic hash function is defined as a series of operations over an input message of arbitrary length, producing an output of fixed length (the hash or message digest), such that any change to the message would not go unnoticed. It should be easy to compute the hash of a message, but given a hash value it should be infeasible to find a message that produces that value. Further, given a message it should be infeasible to find a second message producing the same message digest, and, as stated before, it should be infeasible to modify a message without modifying its hash value.
Therefore, the desired properties of a cryptographic hash function are as follows:
- Preimage resistance: given a hash value h, it should be infeasible to find a message m with h = hash(m). Otherwise the function would be vulnerable to preimage attacks.
- Second preimage resistance: given a message $m_1$, it should be infeasible to find a second message $m_2 \neq m_1$ which produces the same hash. I.e., given $m_1$, it should be difficult to find $m_2$ such that $hash(m_1) = hash(m_2)$. Otherwise, the function is said to be vulnerable to second preimage attacks.
- Collision resistance: it should be difficult to find two messages with the same message digest. Obviously, given a hash function with an output size of n bits, if you try $2^n + 1$ messages you'll get two of them with the same hash. The theory behind birthday attacks tells us that for an n-bit hash function we'd have to try only about $2^{n/2}$ inputs to find a collision (for SHA-256, that's about $2^{128}$). That number is called the birthday bound.
Typical structure of a hash function
A hash function typically consists of a compression function which takes blocks of a fixed length as input and produces blocks of a fixed length (the output length of the hash function). Additionally, the output for the previous block is fed back to the input, so that the output for the next block depends on all the previous blocks. Otherwise, the hash function would be looking at the last block only 😉
The structure shown is known as the Merkle-Damgård construction, and most popular hash functions are based on this construction. However, alternative structures exist and many of the proposals for the SHA-3 contest are based on different constructions.
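To make the chaining explicit, here is a toy Merkle-Damgård skeleton in Python (my sketch; the compression function is a made-up, insecure placeholder that only illustrates the chaining structure and the length padding):

import struct

def toy_compress(h: int, block: bytes) -> int:
    # Placeholder compression function -- NOT cryptographic, illustration only.
    for byte in block:
        h = (h * 0x100000001B3 + byte) % (1 << 64)
    return h

def md_hash(message: bytes, block_size: int = 64) -> int:
    bitlen = 8 * len(message)
    message += b"\x80"                                  # append a single 1 bit
    message += b"\x00" * ((-len(message) - 8) % block_size)
    message += struct.pack(">Q", bitlen)                # append message length
    h = 0x6A09E667F3BCC908                              # arbitrary toy IV
    for i in range(0, len(message), block_size):
        h = toy_compress(h, message[i:i + block_size])  # chain previous output
    return h

print(hex(md_hash(b"hello world")))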
The SHA-2 family
Although MD5 and SHA-1 are way more popular, I decided to take a look at and describe here the structure of the SHA-2 family of hash functions. The reason for this is that MD5 was broken a while ago, first by Dr. Wang's team and later by a group of researchers including Dr. Benne de Weger. I already talked about it here, although only in Spanish. You can see the hashclash project's page if you don't read Spanish ;-).
Further, SHA-1 is very similar to MD5 and the same sort of problems usually apply to it. Therefore, I chose to look at the next family of hash functions, the SHA-2 family. This includes several hash functions with different output lengths: SHA-224, SHA-256, SHA-384, and SHA-512 where the number defines the number of output bits.
SHA-256 and SHA-512 use 32- and 64-bit words respectively, while SHA-224 and SHA-384 are essentially truncated versions of them. In the remainder of this section I'll explain SHA-256, since SHA-512's structure is basically the same but with a different word size and different initial values.
Basically, the input message is padded with additional information that includes the length of the original message, and divided into 512-bit blocks. Then, for each of these blocks, a message schedule is run which produces 64 32-bit words $W_0, \ldots, W_{63}$.
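As a sketch of that step (my Python, following the recurrence in FIPS 180-2), the schedule expands the 16 32-bit words of a block into the 64 words W_0..W_63:

import struct

def rotr(x: int, n: int) -> int:
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

def message_schedule(block: bytes) -> list:
    # The 512-bit block gives the first 16 big-endian 32-bit words.
    w = list(struct.unpack(">16I", block))
    # Each further word mixes four earlier ones via sigma0/sigma1.
    for t in range(16, 64):
        s0 = rotr(w[t - 15], 7) ^ rotr(w[t - 15], 18) ^ (w[t - 15] >> 3)
        s1 = rotr(w[t - 2], 17) ^ rotr(w[t - 2], 19) ^ (w[t - 2] >> 10)
        w.append((w[t - 16] + s0 + w[t - 7] + s1) & 0xFFFFFFFF)
    return w

print(len(message_schedule(b"\x00" * 64)))  # 64 words W_0..W_63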
These 64 words are processed with the compression function shown in this picture, where the working variables a..h are initialized according to the standard:

After this processing, the intermediate hash value is computed as the word-wise addition (modulo $2^{32}$) of the working variables a..h and the previous intermediate hash value. This process is run for each message block and finally yields the message digest.
Of course, this is a very high level description of the algorithm. If you want to know the details, see the FIPS 180-2 standard publication.
The SHA-3 contest
An open contest is currently being held by NIST to create a new hashing standard, SHA-3. The contest is in its second round, with 14 second-round candidates. The Second SHA-3 Candidate Conference is planned for August 2010, and the idea is to publish a revised Hash Function Standard by 2012.
More information on the contest and the submissions can be found in the NIST Hash competition website.
Crypto Series: Mifare Crypto1
Let's go back to cryptography. This time I'll tell you how the (in)famous Crypto1 cipher works. It is used in the Mifare Classic RFID tags, typically used for building access control but also for many other systems, such as the Oyster card in London, the OV-chipkaart in the Netherlands, etc.
We won't talk about the protocol details, nor about how the published attacks work. You'll find a couple of interesting links at the end though ;-).
Note: Images obtained from the papers linked at the end of the post.
The Crypto1 cipher
Crypto1 is a proprietary stream cipher from NXP found in the RFID tags of the Mifare Classic family. It was first studied by Karsten Nohl by reverse engineering the chip itself. This information was presented at CCC 2007, although not many details about the cipher were published.
In parallel, Radboud University Nijmegen was studying this kind of card and, with the help of the information published at CCC, completely reverse engineered the cipher and published the details. Let's see how it works then...