Notes for PolicyMaker, PICS, and Key Escrow Talks


PolicyMaker | PICS | Key Escrow

Trust Management: PolicyMaker

Slide 3

Common, often-implemented examples of trust management include file management, establishing secure communication, and banking. Especially banking!

Slide 4

Some less common, or so-far-unimplemented, uses include:

Slide 5

PolicyMaker works with keys instead of names. Thus it supports applications such as e-voting, where the original names must not be revealed. Determining whether a requested action is legal is application-specific: the policy must be reimplemented for different applications, and there is no standard way of doing this.

Slide 6

The system administrator sends policies to PolicyMaker. A query engine lets applications query PolicyMaker about its current policies.

Slide 8

The policy in this slide's (email) example consists of an assertion that a particular key belongs to a particular user in a particular domain. This validates later requests from that person.

Slide 7

Continuing with the examples:

(upper left): the administrator asserts that Bob's key is valid in the domain of "Bob Labs"

(lower left): then Bob can assert that Alice's key is valid for requests (within his domain)

(upper right): and then such a request is valid(!)

(lower right): but a request from "Matt Labs", or from a different user, fails.

Slide 9

There are two basic things the PolicyMaker language can do: process queries, and assemble the information needed to process those queries. For the former, (someone identified by a key) REQUESTS (some action). For the latter, (a person) ASSERTS (a set of people) WHERE (some particular circumstances hold).

The filters used by these assertions are language-independent: PolicyMaker does not mandate any particular language for writing them.
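The ASSERTS/REQUESTS flow above can be sketched as a toy trust engine. This is purely illustrative (the key names and the dictionary-based "actions" are invented for the example); real PolicyMaker assertions carry programmable filters, not fixed condition functions.

```python
# Toy sketch of PolicyMaker-style trust checking (hypothetical names;
# real PolicyMaker filters are arbitrary programs, not plain lambdas).

# Each assertion: an issuer key ASSERTS trust in a set of keys,
# WHERE a condition on the requested action holds.
assertions = [
    ("admin_key", {"bob_key"},   lambda action: action["domain"] == "Bob Labs"),
    ("bob_key",   {"alice_key"}, lambda action: action["domain"] == "Bob Labs"),
]

def is_authorized(requester, action, roots=frozenset({"admin_key"})):
    """Return True if a chain of assertions from a trusted root reaches
    the requesting key and every filter accepts the action."""
    trusted = set(roots)
    changed = True
    while changed:
        changed = False
        for issuer, grantees, ok in assertions:
            if issuer in trusted and ok(action) and not grantees <= trusted:
                trusted |= grantees
                changed = True
    return requester in trusted

print(is_authorized("alice_key", {"domain": "Bob Labs"}))   # True
print(is_authorized("alice_key", {"domain": "Matt Labs"}))  # False
```

This mirrors the slide 7 example: Alice's request succeeds inside "Bob Labs" because a chain of assertions from the administrator reaches her key, but the same request from "Matt Labs" fails every filter.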


PICS

  1. Intro

    PICS is the "Platform for Internet Content Selection". The PICS web page is at the WWW consortium web site. These slides are based on "PICS: Internet Access Controls Without Censorship", by Paul Resnick and Jim Miller, CACM, 1996, vol. 39(10), pp. 87-93.

  2. Why labelling?

    The audience for a given piece of information on the internet is large, and its members vary, both according to the laws that govern them (gambling, for example, is legal in Nevada, but not Pennsylvania) and the social restrictions that apply to them (companies may wish to prevent their employees from visiting recreational sites with company resources, and parents may wish to prevent inappropriate material from reaching their children).

    In addition, searching the web has become more difficult, in part due to the difficulty of labelling content.

  3. Why something new?

    Labelling is not the only solution proposed for content filtering. Other attempts to remove obscene and indecent material from the internet include the Communications Decency Act and a variety of ad-hoc client filtering approaches. Search engines also use a variety of ad-hoc solutions to the filtering problem.

    The CDA was a failure for several reasons: It would have been difficult to enforce, as violators would have been difficult to track down. Had the act ever been an enforceable law, it still would have only applied within the United States, leaving much indecent material outside of its scope. Finally, the Act was overbroad and a threat to free speech.

    Existing "nanny" software is often similarly overbroad. Many such programs block keywords regardless of the sense in which they are used (which leads to embarrassments such as restrictions on breast cancer discussion fora) or block entire sites, silencing many users in an attempt to silence only one. Additionally, many of the manufacturers of such software regard their block lists as important competitive tools, which makes it difficult to hold manufacturers accountable for their blocking decisions.

    Search engines face similar problems. Simple keyword matches are often extremely overbroad, and return too many hits. Some search engines use the "META" tag to specify keywords and other indexing information, but use of these tags is not widespread.

  4. What to do?

    Establishing Orwellian censorship offices to police network traffic is unacceptable. Similarly, segregating the network along political boundaries destroys much of the value of the network. Lobbying the on-line publishing industry and large internet service providers is marginally more acceptable, but still fundamentally flawed: It is easy for a small, hard-to-lobby, publisher to produce material that offends some.

    A combination of labelling and selection software presents a more promising solution.

  5. Diagram

    We should maybe put a link to the relevant PICS picture here, but I do not have net access right now. The picture is not high-content: A "child" using a computer on which "Content selection software" blocks inappropriate material while allowing appropriate material through.

  6. Requirements for a labelling meta-system

    A good labelling system must be able to transmit labels in a variety of formats: In-band, along with HTML pages, in another RFC-822 document, or in response to an HTTP query. The system must be flexible: it shouldn't encode a particular set of labelling categories, but rather describe ways to label according to categories. To be truly useful, the system must be widely used, both by publishers and viewers, and it must be supported by many governments and suitable for use in many cultures with many languages.

  7. What is PICS, then?

    PICS is an infrastructure for associating labelling metadata with internet content. The PICS specifications describe syntax for descriptions of both rating services and the labels they produce. They also specify an embedding of these labels into both HTML documents and RFC-822 messages. Finally, they specify two ways for clients to request PICS labels for documents: clients can use an extension to HTTP to request labels with a document, and they can use a PICS-specified query language to search an on-line database of labels. PICS makes provisions for "Ratings Services" which can rate content even if they have not produced and do not control the content.

    Finally, and importantly in the eyes of some, PICS is designed to be voluntary.

  8. A Tour of a PICS rating service

    ((PICS-version 1.1)
     (rating-system "http://MPAAscale.org/Ratings/Description/")
     (rating-service "http://MPAAscale.org/v1.0")
     (icon "icons/MPAAscale.gif")
     (name "The MPAA's Movie-rating Service")
     (description "A rating service based on the MPAA's movie-rating scale")
    
     (category
      (transmit-as "r")
      (name "Rating")
      (label (name "G") (value 0) (icon "icons/G.gif"))
      (label (name "PG") (value 1) (icon "icons/PG.gif"))
      (label (name "PG-13") (value 2) (icon "icons/PG-13.gif"))
      (label (name "R") (value 3) (icon "icons/R.gif"))
      (label (name "NC-17") (value 4) (icon "icons/NC-17.gif"))))
    

    This PICS rating service description specifies several things, including a pointer to a description of the service's policies and a graphical representation to use for each of the ratings the service can bestow.

  9. A Tour of a PICS label

    (PICS-1.1 "http://old.rsac.org/v1.0/" labels
     on "1994.11.05T08:15-0500"
     until "1995.12.31T23:59-0000"
     for "http://www.gcf.org/stuff.html"
     by "John Doe"
     ratings (r 2 ))
    

    Optional extensions to this label allow for a hash of the document's content (and maybe a signature on the label?) to be included in the label, so that a rating service can be sure to not have its old rating associated with a new version of the document.
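The content-hash extension mentioned above can be sketched as follows. This is a sketch only: the hash algorithm and function names here are illustrative assumptions, not what the PICS extension actually mandates.

```python
import hashlib

def content_digest(document: bytes) -> str:
    """Digest recorded in the label when it was issued.
    (SHA-256 is an illustrative choice, not PICS-mandated.)"""
    return hashlib.sha256(document).hexdigest()

def label_still_valid(label_digest: str, current_document: bytes) -> bool:
    # If the page changed since it was rated, the digests differ
    # and the old rating should no longer be trusted.
    return content_digest(current_document) == label_digest

original = b"<html>original rated page</html>"
digest = content_digest(original)
print(label_still_valid(digest, original))                     # True
print(label_still_valid(digest, b"<html>edited page</html>"))  # False
```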

  10. Another picture

    This picture (again, presumably available from the PICS website) depicts the model of PICS use. A "parent" selects a rating method, and then, later, "Label reading software" is interposed between a "Child" and content on the internet. The software queries ratings services (including, potentially, the publisher of the content) to determine whether the content is appropriate for the child given the selected rating method and constraints.

  11. A picture of a PICS interface

    A picture of a PICS tab dialog box (with one tab per user) that shows a treelist of possible ratings systems, their ratings categories, and a slider that allows the user to select the maximum allowable rating for a given field.
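The slider configuration reduces to a per-category threshold check. A minimal sketch, with hypothetical category names and defaults (PICS itself leaves this policy to the selection software):

```python
# Hypothetical sketch of the threshold check behind the slider UI:
# each category from the service description gets a user-set maximum,
# and a document is blocked if any transmitted rating exceeds it.

user_maximums = {"r": 2}   # e.g. "r" = MPAA-style rating, max PG-13

def allowed(label_ratings: dict, maximums: dict = user_maximums) -> bool:
    # Unknown categories default to a maximum of 0 (most restrictive).
    return all(value <= maximums.get(category, 0)
               for category, value in label_ratings.items())

print(allowed({"r": 1}))   # True  (PG)
print(allowed({"r": 3}))   # False (R)
```

Defaulting unknown categories to the most restrictive setting is one possible design choice; a permissive default would be equally easy to implement.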

  12. What isn't PICS?

    PICS attempts to avoid entanglement with the sticky social issues surrounding content filtering by deliberately specifying only method and not policy. It does not specify how supervisors ("parents" in the previous example) can specify configuration rules. Similarly, it does not specify how to run a ratings service. It doesn't specify the labeling vocabulary or granularity, and it does not specify who creates labels or what they must label.

  13. Other Uses for Labels

    In addition to content restriction, PICS labels have other uses. They provide metadata that may be useful for clustering users, or helping them find others with similar interests. Given such clustering, information systems may be able to predictively suggest content that a given user may find valuable.

    The metadata contained in labels may also make search and classification easier. One might imagine search engines that could use labelling information to distinguish between, say, web pages about the physics of magnetic fields and web pages about the pop band of the same name.

    Finally, labels may be a convenient carrier for specifying intellectual property information, like copyright notices.

  14. Summary

    Scribe's note: this is verbatim from the slide

    PICS provides a labeling infrastructure for the Internet. It does not specify how to label, how to run a labeling service, or how to set configuration rules. It rather specifies how to describe a labeling service and labels, accommodating any set of labeling dimensions and any criteria for assigning labels.

    Any PICS-compatible software can interpret labels from any source, because each source provides a machine-readable description of its labeling dimensions.

    "Around the world, governments are considering restrictions on on-line content. Since children differ, contexts of use differ, and values differ, blanket restrictions on distribution can never meet everyone's needs. Selection software can meet diverse needs, by blocking reception, and labels are the raw materials for implementing context-specific selection criteria. The availability of large quantities of labels will also lead to new sorting, searching, filtering, and organizing tools that help users surf the Internet more efficiently."

  15. Discussion

    Discussion after the talk raised several points:


Key Escrow

Slides 1-2

AT&T introduced secure telephony using DES in 1992 -- NSA knew this would be happening, and sped up development of its Key Escrow system, the Clipper chip. The chip wasn't ready in time, so AT&T released the DES devices anyway.

Slide 3

A fundamental problem with secure encryption technology is that the Government no longer has the ability to wiretap phones. (According to the government, anyway.) This eliminates a powerful tool from their crime-fighting armoury.

Slide 4

To combat this, the US Government decided to take an "Active Stance" on the issue of wiretapping, rather than sitting back and letting the market take care of such things. Thus, they mandated Key Escrow.

Slide 5

We'll look at what Key Escrow is, the EES, and Failsafe Key Escrow. The EES (Escrowed Encryption Standard) and the Clipper chip are the genesis of all Key Escrow systems, and Failsafe Key Escrow is a particular approach to Key Escrow.

Slide 6

There are three generic components to a Key Escrow system:

See http://guru.cosc.georgetown.edu/~denning/crypto/taxonomy.html for an in-depth overview of Key Escrow and its components.

Slide 7

Assuming that our cryptographic methods are secure, what kind of attacks must an Escrow protocol or system withstand? With any Key Escrow system, criminals will aim to:

Slide 9

The Escrowed Encryption Standard was announced in 1993; its implementation is the Clipper chip. Part of the EES is classified, namely Skipjack, a symmetric encryption cipher with an 80-bit key. Clipper was intended to be implemented on a hardware device by government-approved companies. It was published as a NIST standard, FIPS 185, rather than going through Congress for debate or approval. See FIPS 185 for the specification for the system.

Slide 10

Notes on the diagram:

Slide 11

Matt Blaze discovered a serious flaw in the Clipper chip. In Clipper, the LEAF field is generated with an initialisation vector. (The initialisation vector is a small random block tacked on to the start of a message that prevents identical messages from being encrypted to the same ciphertext when using a block cipher.) Blaze showed that received data could be decrypted without the associated LEAF and vector, defeating the escrow mechanism.

There are further weaknesses in the scheme that may allow the identities of those taking part in a Clipper communication to be forged. Schneier, page 593, has details.

Slide 12

Probably the greatest problem with the Clipper scheme is that the small size of the checksum field (only 16 bits) allows a brute force scan for a valid checksum in roughly 40 minutes.
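The arithmetic behind this is straightforward: a random candidate passes a 16-bit checksum with probability 2^-16, so on average about 2^16 candidates are needed. The trial rate below is an assumed figure chosen only to show the order of magnitude, not a measured property of the hardware.

```python
# Back-of-the-envelope for the Clipper checksum brute force:
# a forged LEAF passes the 16-bit checksum with probability 2^-16.
checksum_bits = 16
candidates_needed = 2 ** checksum_bits      # 65,536 candidates on average

trials_per_second = 28                      # assumed rate, for illustration
minutes = candidates_needed / trials_per_second / 60
print(f"{candidates_needed} candidates, roughly {minutes:.0f} minutes")
```

A 32-bit checksum would have pushed the same attack to tens of thousands of hours, which is why the small field size is considered the scheme's greatest weakness.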

Slide 13

Failsafe Key Escrow is an alternative to EES that uses secret sharing. Tom Leighton from MIT is a good source of info about this.

Slide 14

For a Key Escrow scheme to be abuse resistant, it must prevent unauthorized recovery of the key, and prevent communication with an invalid LEAF.

Slide 15

In particular, FKE uses Verifiable Secret Sharing (VSS). The user splits their private key into n parts, and distributes them among n trustees. Then, if the central authority desires, it can serve a court order on all n trustees for their respective parts of the key, and recover the encrypted communications. On the other hand, a malicious attacker would have to corrupt all n trustees to be able to recover the key. Verifiable secret sharing is needed to prevent the user from giving fake key parts to the trustees.
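The n-of-n split described above can be sketched with plain XOR shares. This is a sketch only: it shows why all n trustees are needed, but it omits the "verifiable" part of VSS (the proofs that let trustees and the authority confirm each share is genuine).

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    """n-of-n XOR sharing: n-1 uniformly random shares, plus one share
    computed so all n XOR back to the key. Any n-1 shares alone are
    uniformly random and reveal nothing about the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def recover_key(shares: list[bytes]) -> bytes:
    """XOR of all n shares reconstructs the original key."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = secrets.token_bytes(10)    # stand-in for a private key
shares = split_key(key, n=5)     # one share per trustee
assert recover_key(shares) == key
```

With a threshold scheme such as Shamir's, any k of n shares would suffice instead; the scheme described on the slide requires all n, which simple XOR sharing captures exactly.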

Slide 17

Notes on the verification proof: H(x) is the hash function being used. In the last line of the proof, because S_x is shown to depend on both A and B, we can conclude that P_x can only be forged by the central authority. However, a pre-encryption attack will (obviously) still work.

Slide 18-19

Many "big names" in cryptography signed a letter of protest to the president over the Clipper chip issue. In academia, the advocates of Key Escrow appear to be in the minority. Some of the drawbacks are:

Slide 20

The original system design for the Clipper chip required a trusted computer, secure rooms, transfer of keys by floppy disk -- an approach that seems slightly archaic now. Newer versions make use of encrypted communication over the internet.

Slide 21

Key Escrow for FAX, telephony, etc., is only of interest to the government. In other situations, however, it could have commercial value. For example, if an employee who knows certain passwords dies or leaves the firm, the firm would want to recover any data that employee encrypted. Often it's easier to regenerate information (e.g., just create a new password) than to try and retrieve it. For the end-user, the time taken for a central repository to restore lost information is important -- it can't be buried under too much bureaucracy.

Schneier, page 99, has an interesting summary of the politics behind Key Escrow.

Slide 22

Final points:

Slide 24

Commercial Key Escrow would involve a software rather than hardware solution, and be third party. It generalises governmental key recovery to (legitimate) business uses, which might make Key Escrow more acceptable. It's sometimes referred to as "Clipper with a happy face".