Sunday, March 19, 2006

The Fatal Flaw With Credit Cards.

Why are credit and debit cards so continually under attack? Is it that those charged with securing the credit card system are incompetent, or is there a fundamental flaw in the model they are asked to secure? If the latter is the case, then no matter how intelligent the security teams are, the system will remain insecure because of its flawed underlying assumptions. In this essay, I attempt to make the case that the model underlying modern credit cards is fundamentally insecure and must be replaced if we are to expect any sort of security from the system. Let us consider how this might be the case by examining a typical transaction.

John Q. Public buys a few groceries at the local supermarket. He decides to use a credit card, since it is much more convenient than cash, and swipes the card through the card reader. The card has now been compromised, for he made the decision to trust equipment fundamentally outside of his control with the entirety of his card's data. His web of trust never even had the chance to start building, as it was so thoroughly violated at the first step. The problem is immediately apparent: in this kind of transaction, the validation data is the key itself. Every time that card is swiped, the exact same data is exchanged, and an attacker need only capture that data once to invalidate the implicit trust in every other instance. How else might this transaction have been completed, then? By the magic of private/public key encryption, also known as keypair encryption.
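
To make the flaw concrete, here is a toy sketch in Python (every value is invented for illustration): because the card's data is static, one capture equals total compromise.

    # A toy model of the swipe transaction: the card's "secret" is static
    # data, so capturing it once is a permanent compromise.
    CARD_TRACK_DATA = "4417123456789113^PUBLIC/JOHN Q^0603101"  # hypothetical track data

    def swipe():
        """Simulate a card reader: every swipe yields identical data."""
        return CARD_TRACK_DATA

    legitimate = swipe()          # John pays for groceries
    skimmed = swipe()             # a skimmer records the very same data
    assert skimmed == legitimate  # a replay is indistinguishable from the original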

Before delving into how this would work, we must first develop an understanding of keypair encryption. If you already have such a grasp, please skip this paragraph. Under a keypair encryption model, each user has a private key, which they keep secret at all times, and a public key, which they distribute as widely as possible. There are four basic operations that can be performed with a keypair system: encryption, decryption, signing, and verification. Encryption takes a set of data and encrypts it against a public key so that only the matching private key can decrypt it. Decryption, then, is the act of taking such encrypted data and recovering the original by applying the private key. Signing is the act of attaching an extra block of data, called a signature, to a message, and requires a private key. Verification takes a public key, a signature, and a message, and checks whether the signature was generated from the matching message and private key. For instance, I can sign an e-mail message and attach the signature. If someone malicious intercepts my e-mail and changes it, the signature will no longer verify and the message should not be trusted. These operations can be combined, too. If I have your public key, I can encrypt a message so that only you can decrypt it, and then sign the encrypted message with my private key. Upon receiving the message, you can verify that it came from me, and be assured that only you will ever see it.
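
For the curious, here is a minimal sketch of these four operations using the Python "cryptography" package. This is purely an illustration of the concepts, with arbitrary RSA parameters, not the GPG tooling discussed below.

    # Sketch of the four keypair operations: encrypt, decrypt, sign, verify.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"Lunch is on me this Friday."

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Encryption needs only the public key...
    ciphertext = public_key.encrypt(message, oaep)
    # ...while decryption requires the matching private key.
    assert private_key.decrypt(ciphertext, oaep) == message

    # Signing requires the private key...
    signature = private_key.sign(message, pss, hashes.SHA256())
    # ...and verification needs only the public key. Had the message been
    # altered in transit, verify() would raise InvalidSignature.
    public_key.verify(signature, message, pss, hashes.SHA256())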

Let's revisit the credit card purchase scenario, this time using keypair encryption. Before arriving at the grocery store, suppose John Q. Public created a GPG keypair (which can be done using free software available for nearly any OS) and sent the public key to his bank. He then goes to the bank's branch office and reads to a teller the key's fingerprint (a short string of data unique to each public key), verifying that the public key the bank received is the same one he intended to transmit. Having done that, the bank now trusts that key. John now goes to the store, selects his purchases, and goes to the counter to pay. Instead of swiping a card, he takes out his PDA.
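
A fingerprint can be thought of as a short hash over the serialized public key, compact enough to read aloud. GPG defines its own fingerprint format; the sketch below uses SHA-256 over a DER encoding as a simplified stand-in.

    # Sketch of a key fingerprint: a short hash of the serialized public key.
    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    public_key = rsa.generate_private_key(public_exponent=65537,
                                          key_size=2048).public_key()
    der = public_key.public_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    fingerprint = hashlib.sha256(der).hexdigest()
    print(fingerprint)  # the string John reads to the teller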

The register now generates a message to send to John's PDA. It does not yet know who John is, so it cannot encrypt anything to his key. Broadcasting his purchase details over the air in the clear would raise serious privacy concerns, so the register first attempts to confirm his identity by generating a block of random data (called "salt" for reasons beyond my understanding) and sending it to John's PDA. The PDA responds by signing the data with the private key stored on board and returning the signature along with the key's fingerprint. The register now knows which key to use for the transaction, and verifies the signature with the matching public key from the bank.
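
In code, this challenge-response step might look like the following sketch (names are illustrative). The point is that a fresh random challenge each time makes a recorded response worthless later, which is exactly what a magnetic stripe lacks, and the private key never leaves the PDA.

    # Sketch of the challenge-response step between register and PDA.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    pda_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    bank_copy_of_public_key = pda_key.public_key()  # on file at the bank

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    challenge = os.urandom(32)  # the register's block of random data
    response = pda_key.sign(challenge, pss, hashes.SHA256())  # on the PDA

    # On the register: verify against the bank's copy of the public key.
    # Replaying an old response fails, since the challenge is new each time.
    bank_copy_of_public_key.verify(response, challenge, pss, hashes.SHA256())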

Next, the register needs to confirm the transaction details. It composes a message consisting of the current time, the name of the company receiving payment, and the transaction amount. The message is then encrypted against the public key it received from the bank, as well as the store's public key and the bank's own public key. (That means that any of John, the store, and the bank can decrypt the message.) John's PDA receives and decrypts the message and shows a dialog on the screen asking whether the transaction should be completed. If he agrees, the PDA signs the encrypted message and sends it back to the register. The signed and encrypted message can then serve as legal verification of the transaction in case of dispute.
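
One common way to encrypt a single message to several public keys, which I assume here for illustration, is the hybrid style: encrypt the body once under a random symmetric key, then wrap that key under each recipient's public key. A sketch, with all names and values invented:

    # Sketch of multi-recipient encryption of the transaction message.
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    recipients = {name: rsa.generate_private_key(public_exponent=65537,
                                                 key_size=2048)
                  for name in ("john", "store", "bank")}

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    session_key = Fernet.generate_key()  # one-time symmetric key
    transaction = b"2006-03-19 12:04|Groceries-R-Us|$42.17"
    body = Fernet(session_key).encrypt(transaction)  # encrypted once

    # Wrap the session key once per recipient.
    wrapped = {name: key.public_key().encrypt(session_key, oaep)
               for name, key in recipients.items()}

    # Any one recipient unwraps the session key and reads the body;
    # here the bank does so. John's PDA would then sign `body` itself
    # as his record of agreement.
    recovered = recipients["bank"].decrypt(wrapped["bank"], oaep)
    assert Fernet(recovered).decrypt(body) == transaction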

Note that at no point in this process does John's private key leave his PDA. Thus, for this transaction to work, John needs to trust only the following three things:

  • The bank's public key actually belongs to the bank.
  • The store's public key actually belongs to the store.
  • The PDA has not been compromised: it will sign only the messages that John explicitly agrees to, and it will protect his private key.

The first two can be solved rather easily: if the bank's public key is trusted by a large number of other people, each of them can sign a message containing the fingerprint of the bank's key and a statement that the key actually belongs to the bank. Software exists to collect such messages and assemble a web of trust, and if John knows any of these people personally, the trust link is that much stronger. The same process can be used to verify that the store's public key really belongs to that store. (A sketch of one such attestation appears below.) As for the PDA, that is a trust issue John has explicit control over: if he does not trust the code on the machine, he can replace it with code that he does trust. Of course, such trust has its limits, since the hardware itself may have been compromised, but that risk remains in any system, regardless of the model. Rather, a system of trust such as the one described here minimizes the risk by decentralizing the points of attack. To compromise John's transactions, one must either compromise his personal property or form a conspiracy of the bank, the store, and many customers of each to poison the first two bullet points above. Forming such a large conspiracy without John's knowledge is difficult at best and counter to human nature at worst.
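
Such an attestation might be sketched as follows, with an invented bank name and freshly generated keys standing in for real ones:

    # Sketch of one link in the web of trust: a friend signs a statement
    # binding the bank key's fingerprint to the bank's identity.
    import hashlib
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    bank_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    friend_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    bank_fpr = hashlib.sha256(bank_key.public_key().public_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )).hexdigest()

    statement = f"Key {bank_fpr} belongs to First National Example Bank.".encode()
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    attestation = friend_key.sign(statement, pss, hashes.SHA256())
    # Anyone who already trusts the friend's public key can verify this
    # attestation and thereby gain a measure of trust in the bank's key.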

The system can be further protected by creating John's private key such that it can only be used after a special password is entered. This, however, still requires that the PDA's hardware be trusted, and protects only against physical theft of the PDA. These arguments are not meant to show that such a system is secure against every attack, but rather that it is a real improvement over the direct exchange of static key data in modern credit systems.
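
As a sketch of this password protection, the "cryptography" package can store a private key encrypted at rest, so that every use of the key requires the password again (the password below is obviously illustrative):

    # Sketch of keeping the private key encrypted at rest behind a
    # password, so a stolen PDA yields only an opaque blob.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    blob = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(
            b"a strong password"),
    )
    # Loading the key back requires the same password.
    restored = serialization.load_pem_private_key(
        blob, password=b"a strong password")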


3 comments:

Anonymous said...

There's an interesting book you might enjoy, "Policing Online Games" by Peter Wayner, that covers some of these ideas. I agree that something should be done to improve the current system.

Incidentally, the cryptographic origin of "salting" is just an adaptation of the common use, sprinkling bits of something around, like information, to make things more interesting or compelling.

Unknown said...

That sounds interesting. Some friends and I have had long discussions about the economics of online games and how that relates to their trust models, so the book sounds right up my alley. Thanks again for the recommendation.

Anonymous said...

Chris, please contact Jim Sykes or Deirdre Helfferich ASAP about Rebekah K. Urgent.