Lecture 7 (Weds 10/22/97), part 2

Scribe: Stephen Bijansky

Design Usable Security

presentation by Alma Whitten

Why do we need usable security

It has been claimed by some sources that 90-95% of security-related break-ins are the result of misconfiguration, which underscores the importance of correct user configuration.  It is therefore critical that users get the protocols right.
 

Present Work

So far, the Adage/MAP project at the OpenGroup is concentrating on making security rules more fundamental, and thereby more usable than hard-to-configure access control lists.  The group is also doing work in trust modeling.  Beyond this project, no one is publicly doing research in this area.
 

Understanding the Problem

The starting point for this work is analyzing what is most important in making security usable by the "average" user.  By average, they mean someone who is capable of using word processors, spreadsheets, and other standard computer applications.  Next, they looked at existing security user interfaces to validate that analysis.
 

Important problems for usable security

"Barn Door" problem

Once private, sensitive data has been exposed, that action cannot be undone, and the data can no longer be considered private.  An important consequence of this principle is that security software is not safe to learn by trial and error, because there is no undo button for errors; once information has been compromised, the mistake is permanent.  Consequently, usable security should strive to produce no errors caused by confused users.  Ideally, users should be fully informed of the effects of all their actions while using the security software.
 

Weakest link

In security, protection is only as strong as its weakest link.  Furthermore, since a system may have many such links, all of them have to be managed equally well for security to be effective.  Therefore, users have to be aware of all the areas that need security.  Along the same lines, if users decide to learn by unguided exploration, they risk neglecting an area, in which case their security could be greatly reduced.
 

Motivating learning

With current applications, such as word processors, users might only read the manual when they are not satisfied with their results.  That works for formatting a paper, where the worst case is output that could look better.  In security, though, naive users may never realize that their security is bad unless they take the initiative to learn the basics of security.
 

Novice "programmers"

Security programming requires creating systems of abstract rules and applying them to concrete situations, which is not a familiar skill for average users.
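
To make this concrete, here is a minimal, hypothetical sketch (not from the lecture; all names and rules are illustrative assumptions) of what "abstract rules applied to concrete situations" can look like: a tiny access-control policy expressed as general rules, evaluated against one specific request.

    from dataclasses import dataclass

    @dataclass
    class Request:
        user: str       # who is asking
        action: str     # e.g. "read" or "write"
        resource: str   # e.g. "public/report.txt"

    # Each abstract rule pairs a condition with an allow/deny decision.
    RULES = [
        (lambda r: r.user == "admin", "allow"),
        (lambda r: r.action == "read" and r.resource.startswith("public/"), "allow"),
        (lambda r: r.action == "write", "deny"),
    ]

    def decide(request, default="deny"):
        """Return the decision of the first matching rule, else a safe default."""
        for condition, decision in RULES:
            if condition(request):
                return decision
        return default   # unanticipated situations fall through to the default

    print(decide(Request("alice", "read", "public/report.txt")))   # allow
    print(decide(Request("alice", "write", "payroll.xls")))        # deny

The difficulty for the average user is exactly what this sketch hides: anticipating which concrete situations the abstract rules will and will not cover, and choosing a safe default for the rest.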
 

Feedback is hard

A security system has a great deal of detailed state information that cannot easily be summarized, and much of it is too complicated for the average user.  Also, it is impossible for the program to know exactly what the user wants, so it can only suggest a possible setup, not the best setup for each individual user.  More importantly, notification that an error has occurred arrives too late, since private information could already have been compromised by the time the notification appears.  Once again, this is much the same idea as the "barn door" problem described earlier.
 

Research

User Interface for Mac PGP

One of the tempting reasons for analyzing this particular program is the manufacturer's bold assertion about its usability: after making a statement like that, pointing out every flaw, either obvious or subtle, becomes fair game.
 

PGPkeys display

On the PGPkeys display, there are many nice-looking icons for operations such as encrypting, signing, encrypting/signing, decrypting/verifying, and a general keys button.  One problem with these icons is that the icon for signing a document shows only a quill pen and a piece of paper.  The user gets no sense that a private key is involved, and this is an important part of security; a better icon might include a picture of a key in the signing button.

As for the listing of key pairs, there are simply too many different keys displayed.  For the average user this is too much information, and there is no way to figure out which parts are important.  At the very least, every part of the display should let the user find out more information with a mouse click.  Another problem with using keys in PGPkeys is that the interface gives no warning before keys are sent to the key server.  The effect is that users might submit random keys that they do not plan on using while they are learning the security software, which could place extra, unneeded strain on the key servers that have to store these unneeded keys.
 

PGP review

Overall, the Mac version of PGP could use better metaphors to give users a clearer understanding of their actions.  When dealing with keys, there is too much information, and the display does not prioritize what is really important.  Also, because the program relies too heavily on documentation to explain key concepts, it is not obvious while using it which actions are risky.  Most of all, designers have to be aware that pretty is not always clear.

Motivation

In usable security, the main requirements are extreme learnability, observability, and predictability.  In other words, learning the program must be fast, safely structured, and inescapable.  For observability, a user should be able to easily assess the state of the program from the display.  The last, and arguably most important, requirement for security software is predictability: the user should be able to predict the results of their actions from the display alone.
 

Previous learnability studies

In 1984, Kieras and Bovair showed that users learn faster and better when they are given explicit mental models.  Around the same time, Carroll and Carrithers demonstrated that interfaces with "training wheels" give faster learning times while contributing to fewer errors: users are guided through most operations at the beginning, and once they become comfortable with the software they are given the freedom to make their own, more informed decisions.  Alma's research will try to combine these two approaches and design nested mental models.  In this case, users at first only trust keys from their friends; then, as they become more comfortable with the ideas of security and cryptology, their web of trust starts to branch out.
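
As a rough illustration of this nested-trust idea (not part of the lecture; the names, threshold, and data layout are hypothetical), a key could be treated as valid in the novice model only if the user signed it personally, and in the expanded model also when enough trusted friends have signed it:

    # Hypothetical sketch of a two-level trust check.
    # Novice model: a key is valid only if the user signed it personally.
    # Expanded model: keys signed by enough trusted friends are also valid.

    def key_is_valid(key, me, friends, signatures, expanded=False, needed=2):
        """signatures maps each key to the set of users who have signed it."""
        signers = signatures.get(key, set())
        if me in signers:                 # direct trust: I signed it myself
            return True
        if expanded:                      # friends act as trusted introducers
            return len(signers & friends) >= needed
        return False

    sigs = {"carol_key": {"alice", "bob"}, "dave_key": {"alice"}}
    friends = {"alice", "bob"}
    print(key_is_valid("carol_key", "me", friends, sigs))                  # False under the novice model
    print(key_is_valid("carol_key", "me", friends, sigs, expanded=True))   # True: two friends signed it

The nesting is the point: the novice model is a strict subset of the expanded one, so nothing the user learned at the first stage has to be unlearned later.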
 

Next Step

One of the goals of Alma's research is to set guidelines for learnable security interface design based on nested mental models.  She then hopes to analyze existing user interfaces against those guidelines.  Once the guidelines are established, the next step would be the actual implementation of nontrivial security interfaces; for example, with the popularity of the web, applet security management would be a good testing ground for these models.  Lastly, it is also important to evaluate the impact of the new, more usable security interfaces.