Lecture 7 (Weds 10/22/97)

Design Principles for Cryptographic Protocols

Sergey Berezin and Andy Myers

Scribe: Prashant Chandra

While formal methods are good for analyzing the properties of a security protocol, they are not very useful as guidelines for designing good protocols. The following material discusses some principles that can serve as "rules of thumb" for designing good security protocols.

Outline

Be explicit

Otway-Rees

The encrypted parts of message 1 and message 4 are similar, leading to ambiguity.

There should be no ambiguity about:

A (Bad) Fix to Otway-Rees

One way to avoid the ambiguity in the Otway-Rees example is to explicitly encode information about which step a message belongs to, as shown below. However, this makes the messages susceptible to known-plaintext attacks.



Denning-Sacco

But now B can spoof A:

If a message should be interpreted relative to a principal, then that principal's name should be in the message.

Denning-Sacco Fix

By including B in message 3, A makes sure that the encrypted portion of message 3 is only useful to B. Therefore B cannot use that portion of the message to spoof A.

Redundancy?

A general attack on a public-key cryptosystem is to "decrypt" random bit strings with the public key until a reasonable-looking message appears, and then to claim that the owner of the public key said it. This attack:

This attack can be overcome by adding redundancy to all signed messages. However, as Mao and Boyd point out, redundancy gives hackers a known plaintext attack, or at least more data for cryptanalysis. For example, in Otway-Rees' first message:

the quantities M, A and B are known and need not be kept secret. In other words, don't encrypt known quantities and don't confuse integrity with encryption.
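The point about not confusing integrity with encryption can be sketched with a MAC: known quantities such as M, A, and B can travel in the clear and be bound together with a keyed hash instead of being encrypted. A minimal Python sketch (the key and field values are made-up illustrations, not part of the protocol specification):

```python
import hmac
import hashlib

# Shared key between A and the server S (illustrative value only).
K_AS = b"shared-key-between-A-and-S"

# Known, non-secret quantities from Otway-Rees message 1.
M, A, B = b"run-id-0042", b"Alice", b"Bob"
N_A = b"nonce-from-A"

# Instead of encrypting the known fields, send them in the clear and
# attach a MAC that binds them together under the shared key.
payload = b"|".join([M, A, B, N_A])
tag = hmac.new(K_AS, payload, hashlib.sha256).hexdigest()

# The receiver recomputes the MAC over the received fields and compares;
# integrity is checked without encrypting anything known.
expected = hmac.new(K_AS, b"|".join([M, A, B, N_A]), hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)
```

Since nothing known is encrypted, an eavesdropper gains no plaintext/ciphertext pairs from these fields.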

Know what you are doing

Wide-mouthed Frog protocol:

The hidden assumption here is that A is capable of generating good session keys.

All aspects of a protocol should be thoroughly and exactly specified:

Needham-Schroeder '87

The hidden assumption here is that a hacker can't strip off the first block of a message and append an arbitrary string. For example,

If a protocol treats cryptography as a black box, then one useful property to require of the underlying cryptography is that it be impossible to chop off blocks of an encrypted message.

Know who you trust

Consider the following scenario. An evil hacker is able to manipulate the clocks of an organization that uses Kerberos. As a result, system security is badly compromised; for example, old tickets can be reused. The hidden assumption in Kerberos is that the time service is trusted, or that it is impossible to break into the time service.

So the moral of the story is: be explicit about which parties are trusted and what each party is trusted to do. Note however, that the process of deciding who or what is trustworthy isn't a protocol design issue.

Use of Encryption

Encryption can be used for different purposes:

Be clear about why encryption is being done. Encryption is not synonymous with security, and its improper use can lead to errors [AN96].

Kerberos Protocol

Based on the original Needham-Schroeder protocol:

In this protocol:

Otway-Rees Protocol

Here,

The responses to A and B include their nonces; hence S has responded recently and KAB was generated recently. Hence, A and B are authenticated to each other (through S).

In order to decrease server load, the protocol can be modified so that we are encrypting and comparing ciphertexts instead of decrypting. Messages 1 and 2 of the protocol are modified as follows:

NA and NB are no longer secrets, and this generates known plaintext/ciphertext pairs.

(!) Use a one-way function for {·}K.

What this means is that we should distinguish between secret concealment and one-way service, and use different algorithms for each [MB94].
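The distinction can be sketched as follows: for the server's comparison, a one-way function of the known quantities suffices, and since there is no decryption operation at all, exposing (input, tag) pairs gives an attacker no plaintext/ciphertext pairs under an encryption key. A toy Python sketch (names and values are illustrative):

```python
import hashlib

K = b"key-shared-with-server"
N_A = b"public-nonce-123"   # NA need not be secret

# One-way service: a keyed hash.  Nothing here is invertible, so there
# is no "decryption" for a cryptanalyst to target.
def one_way_tag(key, nonce):
    return hashlib.sha256(key + nonce).hexdigest()

# The client sends (N_A, tag); the server recomputes and compares,
# rather than decrypting anything.
tag = one_way_tag(K, N_A)
assert one_way_tag(K, N_A) == tag
```

An encryption algorithm would be the wrong tool here precisely because it is invertible; the one-way property is the service actually needed.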

What encryption methods to use?

Consider the integrity property. Electronic Code Book (ECB) is bad for integrity because blocks are independent and can be substituted for other blocks. However, Cipher Block Chaining (CBC) is good because it is not possible to change something in the middle of a message. CBC is recommended over ECB by ISO/IEC 9798-2 and 11770-2.
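To see why ECB is bad for integrity, here is a toy illustration using a trivial XOR "block cipher" (deliberately simplistic; any real block cipher in ECB mode has the same per-block independence):

```python
# Toy 4-byte block cipher: XOR with a fixed key.  Real ECB mode has the
# same weakness: each block is encrypted independently of the others.
KEY = bytes([0x13, 0x57, 0x9B, 0xDF])

def ecb_encrypt(pt):
    blocks = [pt[i:i + 4] for i in range(0, len(pt), 4)]
    return [bytes(p ^ k for p, k in zip(b, KEY)) for b in blocks]

def ecb_decrypt(blocks):
    return b"".join(bytes(c ^ k for c, k in zip(b, KEY)) for b in blocks)

ct1 = ecb_encrypt(b"pay $001 Bob")   # blocks: "pay ", "$001", " Bob"
ct2 = ecb_encrypt(b"pay $999 Eve")   # blocks: "pay ", "$999", " Eve"

# An attacker who cannot decrypt anything simply substitutes the amount
# block from the second ciphertext into the first.
forged = [ct1[0], ct2[1], ct1[2]]
assert ecb_decrypt(forged) == b"pay $999 Bob"
```

The forgery requires no knowledge of the key, only the ability to cut blocks from one ciphertext and paste them into another.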

Is CBC really good? CBC has a cut-and-paste property which can be used to perform an attack on Otway-Rees as shown below.

In the CBC algorithm:

Encryption: Ci = E_K(Pi ⊕ Ci-1)

Decryption: Pi = D_K(Ci) ⊕ Ci-1

Assume {P1, ..., Pn}K = C1, ..., Cn and {P'1, ..., P'm}K = C'1, ..., C'm.

CBC has a cut-and-paste property. Form the spliced ciphertext C1, ..., Ci, C'j+1, ..., C'm. What does it decrypt to? It turns out that it decrypts to P1, ..., Pi, X, P'j+2, ..., P'm, where X = D_K(C'j+1) ⊕ Ci = P'j+1 ⊕ C'j ⊕ Ci. If P'j+1 is known, X can be computed without knowing any keys.

This cut-and-paste property can be used to perform an attack on Otway-Rees as shown below. Suppose that P1 = NB, P2 = M, etc. Now, the evil person E records

from the previous (normal) run between E and B. Let,

Now consider the following run of the protocol:


The result of the run is: E shares KEB with B, and B thinks it is KAB! So, CBC is still bad!
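The cut-and-paste property can be demonstrated concretely. The sketch below uses a toy XOR block cipher for E_K and D_K; the splice behavior comes from the CBC chaining itself, not from the toy cipher:

```python
BS  = 4                                  # block size in bytes
KEY = bytes([0xA5] * BS)

def E(b): return bytes(x ^ k for x, k in zip(b, KEY))  # toy block cipher E_K
def D(b): return E(b)                                  # XOR is self-inverse

def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(pt, iv):
    blocks = [pt[i:i + BS] for i in range(0, len(pt), BS)]
    ct, prev = [iv], iv                  # ct[0] plays the role of C0 (the IV)
    for p in blocks:
        prev = E(xor(p, prev))
        ct.append(prev)
    return ct

def cbc_decrypt(ct):
    return b"".join(xor(D(ct[i]), ct[i - 1]) for i in range(1, len(ct)))

c1 = cbc_encrypt(b"AAAABBBBCCCC", b"\x00" * BS)   # C0..C3
c2 = cbc_encrypt(b"WWWWXXXXYYYY", b"\x11" * BS)   # C'0..C'3

# Cut and paste: a prefix of the first ciphertext, a suffix of the second.
spliced = c1[:2] + c2[2:]                # C0, C1, C'2, C'3
pt = cbc_decrypt(spliced)

# Only the block at the junction is garbled; everything after it decrypts
# to the second message's plaintext, and the garbled block is predictable
# with no keys at all when the plaintexts are known.
assert pt[:4] == b"AAAA" and pt[8:] == b"YYYY"
assert pt[4:8] == xor(xor(b"XXXX", c2[1]), c1[1])   # P'2 xor C'1 xor C1
```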

Sign Before or After Encryption?

When a principal signs material that is already encrypted, it does not follow that he knows the content of the message. If signing is meant to denote knowledge of the content, the message should be signed before it is encrypted [AN96].

Consider one of the CCITT X.509 protocols:

The protocol guarantees integrity of XA and YA and the privacy of YA. But it does not guarantee that A knows YA! The fix for this is to sign before encrypting.

Hash functions: Sign on encryption:

In this case, A might not know X, and thus cannot vouch for it.

"Trapdoor" Moduli

RSA example [AN95]. Consider,

Assuming nB is 500-600 bits long, Bob (B) can work out discrete logarithms modulo nB and find x such that M'^x ≡ M (mod nB).

Then Bob registers (xeB, nB) as his public key and claims Alice (A) signed M', not M.
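A tiny numeric illustration of the key-spoofing relation (toy numbers; in the real attack Bob chooses a trapdoor modulus where he can compute discrete logarithms): if Bob can find x with M'^x ≡ M (mod nB), then the value M^eB mod nB that Alice produced is also a valid encryption of M' under the spoofed public key (x·eB, nB).

```python
n, e = 187, 3          # toy RSA modulus 11 * 17 and public exponent
M_prime = 2            # the message Bob wants to claim Alice sent
x = 29                 # in the real attack Bob computes this discrete log
M = pow(M_prime, x, n) # the message Alice actually processed

c = pow(M, e, n)       # the value Alice produced under (e, n)

# Under Bob's newly registered public key (x*e, n), the same value
# corresponds to M' instead of M:
assert pow(M_prime, x * e, n) == c

# With a trapdoor modulus Bob recovers x efficiently; here we just
# brute-force it to show the relation exists and is checkable:
x_found = next(k for k in range(1, n) if pow(M_prime, k, n) == M)
assert pow(M_prime, x_found * e, n) == c
```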

Sign before encrypting: This prevents the attack with "trapdoor" moduli for which the signed document can be forged by computing discrete logarithms and changing the public key (key spoofing) [AN95].

Is it a real threat?

The example above is not realistic. The certificate for (xeB, nB) will not be valid at the time M was signed [Syv96].

In some cases it is important to encrypt before signing. Consider the Coin Flip Protocol:

Non-repudiation is only preserved if Alice's signature appears on encrypted H and T.

 

Are double signatures secure?

Consider the Closed-bid Auction Protocol:

where,

M1 = {Alice bids $X on item Y at TA}KA-1

M2 = "Alice's bid of $X on item Y is accepted at Tr".

Now, the attack against this protocol is as follows:

    1. {M1}K1 = "Alice bids $X on item Y at TA", and
    2. {M1}K2 = "Alice bids $Z on item Y at TA", where Z > X.

Signing before encrypting is not a clean bill of health [AN96].

Note: For this attack to work, you need short messages and short keys. It can be done with RSA with a small number of bits (around 500) --- the same as for trapdoor moduli.

Assume Nothing

Consider the CCITT X.509 protocol:

TA is a timestamp, NA is a nonce, XA and YA are user data. An evil hacker could send:

Don't assume that a principal who signs an encrypted message has any inkling of its meaning. (This concern does not arise for a message that is signed and then encrypted.)

Old secrets can hurt you

Consider Goss' Diffie-Hellman based key exchange protocol:

Key exchange is as follows:

Now, A has rA, g^xB, xA, and g^rB, while B has rB, g^xA, xB, and g^rA. So both can compute the shared key. An attack (by Alice) on Goss' protocol is as follows:

Bob can compute the key, g^(xA·rB) ⊕ Z^xB. But Alice can only compute the first part, g^(xA·rB). So, if Bob lets that session key leak after some period of time, Alice can XOR it with the first part and find Z^xB. In this case, Alice uses Bob as an oracle.

So, the moral of the story is: Do not assume the secrecy of anyone else's secrets.

Moral

Freshness

Consider the Needham-Schroeder protocol:

Notice that an unbounded amount of time can pass between messages 2 and 3. Don't assume that a key is fresh just because it has been used recently. The fix is either to have S add a timestamp, or to let B pass a nonce to S.
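The nonce-based fix can be sketched as follows (a toy exchange; in the actual fix B's nonce would be bound inside S's encrypted reply so it cannot be re-attached to a stale message):

```python
import secrets

def make_challenge():
    return secrets.token_hex(16)          # B's fresh nonce for this run

def server_reply(nonce):
    # Stand-in for S's reply; in the protocol the nonce would travel
    # inside the encrypted ticket, not alongside it.
    return {"key": "KAB-for-this-run", "nonce": nonce}

nb = make_challenge()
reply = server_reply(nb)
assert reply["nonce"] == nb               # reply is fresh for this run

# A recorded reply from an earlier run carries the wrong nonce and is
# rejected, no matter how recently its key was "used".
old_reply = {"key": "stale-KAB", "nonce": "nonce-from-last-week"}
assert old_reply["nonce"] != nb
```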

Note: The largest tolerable difference between local time and a timestamp has to be larger than the largest difference between any two clocks. Clock synchronization becomes a security issue.

Timeliness

Consider a basic protocol to find out what time it is:

If NA is predictable, an evil party could make Alice set her clock back:

There could be a large delay between messages 2 and 3, causing Alice to set her clock back. Predictable values can be used to ensure timeliness, but only if they are effectively protected. The fix is to encrypt ("randomize") NA.

Nonces

Consider Otway-Rees (again):

In the above protocol, nonces NA and NB are being used both to guarantee timeliness and as references to Alice and Bob. Be explicit about a nonce's use and meaning.

Don't be an Oracle

Consider the Tatebayashi-Matsuzaki-Newman (TMN) scheme.

Both Alice and Bob share rB, the secret key. Suppose Charlie and David conspire. Then Charlie gets rB as follows:

In this case the server S is an oracle that computes cube roots. So "be careful when signing or decrypting data that you never let yourself be used as an oracle by your opponent [AN95]".
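The oracle misuse can be made concrete with the multiplicative property of raw RSA: the server, by taking cube roots of whatever it is handed, lets two conspirators recover another user's secret. A toy sketch with a tiny modulus (illustrative numbers only):

```python
# Toy RSA: n = 11 * 17, e = 3, d = e^-1 mod phi(n) = 107.
n, e, d = 187, 3, 107
assert (e * d) % 160 == 1   # phi(n) = 10 * 16 = 160

def server_decrypt(c):
    """The TMN server happily takes cube roots mod n of anything sent to it."""
    return pow(c, d, n)

r_B = 5                       # Bob's secret, sent to S as r_B^3 mod n
c_B = pow(r_B, e, n)          # observed on the wire by the conspirators

# Charlie blinds the observed ciphertext with his own value r_C ...
r_C = 7
blinded = (c_B * pow(r_C, e, n)) % n   # = (r_B * r_C)^e mod n

# ... the server "decrypts" it during a normal-looking run ...
product = server_decrypt(blinded)      # = r_B * r_C mod n

# ... and Charlie divides out r_C to recover Bob's secret.
recovered = (product * pow(r_C, -1, n)) % n
assert recovered == r_B
```

The server never sees anything that looks abnormal; the leak comes entirely from its willingness to decrypt attacker-chosen values. (Note: `pow(r_C, -1, n)` needs Python 3.8+.)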

Oracle attacks can be fixed by,

A second example of oracle misuse is the Woo and Lam protocol.

"Among other problems a serious problem of this protocol is that S can be used as an oracle and can be impersonated, because between messages 4 and 5 he decrypts NA [AN95]". Can you figure this out?

Count Bits

Account for all the bits --- how many provide equivocation, redundancy, computational complexity, and so on. Make sure that any extra bits cannot be used against you in some way [AN95].

Digital signatures are vulnerable to forward search attacks, if they don't have enough redundancy.
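A forward search works whenever the space of plausible messages is small enough to enumerate: the attacker "verifies" each candidate against the public key until one matches. A toy sketch with the same kind of tiny RSA numbers (illustrative only):

```python
n, e, d = 187, 3, 107        # toy RSA key pair (n = 11 * 17, phi = 160)

secret_message = 42          # drawn from a small, guessable space
signature = pow(secret_message, d, n)   # textbook RSA signature, no redundancy

# Forward search: try every plausible message and check it against the
# public verification operation.  With little redundancy or entropy in
# the message space, this is cheap.
found = next(m for m in range(1, 100) if pow(signature, e, n) == m)
assert found == secret_message
```

Adding redundancy (padding, larger message space) is what pushes this search out of reach.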

ISO 11166 Key Certificates

Here the redundancy is 45 bits for the short certificate, and depends on the namespace for the long certificate.

If in the future there were 30,000 banks sharing the US banking namespace, then the search effort might be as little as 2^42 modular multiplications [AN95].

KISS Principle

KISS is an acronym for "Keep It Simple, Stupid". The design of a cryptographic protocol should be as simple as possible.

New Concepts

A function

f : M × K → C

is called a cryptographic function, where M is the message space, K is the key space, and C is the ciphertext space.

it is infeasible to find f({m, x}, k) for any x without knowledge of k [Boy90].

Note: While these new concepts are useful, they don't provide a complete formal definition of cryptographic protocols. More detailed formal properties need to be defined.

New notation

even with knowledge of K. Thus, known plaintext/ciphertext pairs don't help to break it.

Using the new concepts and notation, examples of protocol redesign are given below. The main ideas behind the redesign are:

Kerberos protocol redesign

Here,

Otway-Rees protocol redesign

Otway-Rees protocol redesign-2

Simplify messages 1 and 2 (if we don't care about flood attacks):

References

[AN96] Martin Abadi and Roger M. Needham. "Prudent Engineering Practice for Cryptographic Protocols", IEEE Transactions on Software Engineering, 22(1):6-15, January 1996.

[AN95] R. Anderson and R. Needham. "Robustness Principles for Public Key Protocols", LNCS 963:236-247, 1995.

[Boy90] Colin Boyd. "Hidden Assumptions in Cryptographic Protocols", IEE Proceedings Part E, 137(6):433-436, November 1990.

[MB94] Wenbo Mao and Colin Boyd. "Development of Authentication Protocols: Some Misconceptions and a New Approach", in Proceedings 7th Computer Security Foundations Workshop, pages 178-186, IEEE Computer Society Press, 1994.

[Syv96] Paul Syverson. "Limitations on Design Principles for Public Key Protocols", in Proceedings of IEEE Symposium on Security and Privacy, Research in Security and Privacy, pages 62-72, Oakland, CA, May 1996.


 

Designing Usable Security --- Alma Whitten

Outline

Motivation

Existing work

Adage/MAP project at OpenGroup

Nothing else!

Five challenges unique to UI design in secure systems

There are five challenges listed below, which are unique to user-interface design in secure systems. Each one is described in greater detail in the following sections.

"Barn door" problem

This refers to the fact that once security is compromised, it cannot be undone. For example, there is no use in restricting read permission on a file if it could previously be read by anyone. It also means that it is not safe to adopt a "trial and error" policy when dealing with security. The ideal situation is for the user to be fully informed of the consequences of various actions, eliminating the possibility of errors.

Weakest link

Security is only as strong as its weakest link, and there are many links, all of which have to be managed well. For example, there is no point in employing the strongest cryptographic techniques to secure email if the user's computer that stores the key is itself not secure from attacks (via anonymous ftp, for example).

Also, users typically learn about the parts of the system that are "interesting" to them. In a secure system, such learning by unguided exploration risks neglecting areas and creating weak links. For example, in learning about a secure email system, users might concentrate on the "cool" cryptographic parts of the system and neglect more basic things such as file-system security. Hence, some sort of "breadth-first search" learning is needed.

Motivating learning

Typically, users only read manuals when they are not satisfied with the results they are getting. However, naive users may not realize that they have bad security. Therefore, one cannot expect users to read the manual before using the system.

Novice "programmers"

Managing security policy typically involves creating a set of abstract rules to apply to concrete situations. For example, in setting file-system security, a user is essentially creating a set of rules that allow or disallow access to certain files under certain conditions. In this respect, managing security policy is like "programming", and that is not a familiar skill for average users. While the "weakest link" challenge shows why users have to get the big picture, the "novice programmers" challenge shows why it is difficult for them to do so.

Feedback is hard

In a secure system, it is usually hard for the system to give the user feedback about the consequences of the user's actions. First, there is too much detailed state information, which cannot be summarized in a way that makes sense to the user. For example, it is almost impossible to capture the global effects of changing file permissions locally (in one directory) and display that information to the user on a single screen. Second, the system does not know what the user wanted, so it cannot say whether the user got it right. Finally, it may be too late to notify the user that an error has occurred (the barn door problem).

Understanding the problem

The approach followed in this research is as follows:

Critique of Mac PGP UI

PGP tools display

The following observations can be made about this display:

PGP keys display

In this display,

Usable security requirements

So, usable security requires:

A learning strategy

The learning strategy that is proposed in this research is a combination of the following two approaches:

Plan of attack

Here is how this research will be carried out: