(upper left): the administrator asserts that Bob's key is valid in the domain of "Bob Labs"
(lower left): then Bob can assert that Alice's key is valid for requests (within his domain)
(upper right): and then such a request is valid(!)
(lower right): but a request from "Matt Labs", or from a different user, fails.
The filters used by these actions are language independent.
PICS is the "Platform for Internet Content Selection". The PICS web page is at the WWW Consortium (W3C) web site. These slides are based on "PICS: Internet Access Controls Without Censorship," by Paul Resnick and Jim Miller, CACM, vol. 39, no. 10, 1996, pp. 87-93.
The audience for a given piece of information on the internet is large, and its members vary both in the laws that govern them (gambling, for example, is legal in Nevada but not in Pennsylvania) and in the social restrictions that apply to them (companies may wish to prevent their employees from visiting recreational sites with company resources, and parents may wish to prevent inappropriate material from reaching their children).
In addition, searching the web has become more difficult, in part due to the difficulty of labelling content.
Labelling is not the only solution proposed for content filtering. Other attempts to remove obscene and indecent material from the internet include the Communications Decency Act and a variety of ad-hoc client filtering approaches. Search engines also use a variety of ad-hoc solutions to the filtering problem.
The CDA was a failure for several reasons: it would have been difficult to enforce, as violators would have been difficult to track down. Had the act ever become enforceable law, it would still have applied only within the United States, leaving much indecent material outside its scope. Finally, the Act was overbroad and a threat to free speech.
Existing "nanny" software is often similarly overbroad. Many such programs block keywords regardless of the sense in which they are used (which leads to embarrassments such as restrictions on breast cancer discussion fora) or block entire sites, silencing many users in an attempt to silence only one. Additionally, many manufacturers of such software regard their block lists as important competitive tools, which makes it difficult to hold them accountable for their blocking decisions.
Search engines face similar problems. Simple keyword matches are often extremely overbroad and return too many hits. Some search engines use the "META" tag, in which publishers can specify keywords and other indexing information, but use of these tags is not widespread.
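A toy sketch (Python; not from the slides, and the blocked word list is invented) of why bare keyword matching over-blocks: the filter cannot distinguish the sense in which a word is used.

    # Naive substring blocking, as used by the filters described above,
    # cannot tell word senses apart.
    BLOCKED = {"breast"}

    def naive_block(text):
        """Return True if any blocked keyword occurs anywhere in the text."""
        return any(word in text.lower() for word in BLOCKED)

    print(naive_block("Breast cancer support forum"))    # True -- over-blocked
    print(naive_block("Gardening tips for beginners"))   # False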
Establishing Orwellian censorship offices to police network traffic is unacceptable. Similarly, segregating the network along political boundaries destroys much of the value of the network. Lobbying the on-line publishing industry and large internet service providers is marginally more acceptable, but still fundamentally flawed: it is easy for a small, hard-to-lobby publisher to produce material that offends someone.
A combination of labelling and selection software presents a more promising solution.
We should maybe put a link to the relevant PICS picture here, but I do not have net access right now. The picture is not high-content: a "child" using a computer on which "content selection software" blocks inappropriate material while allowing appropriate material through.
A good labelling system must be able to transmit labels in a variety of formats: in-band along with HTML pages, in a separate RFC-822 document, or in response to an HTTP query. The system must be flexible: it should not encode a particular set of labelling categories, but rather describe ways to label according to categories. To be truly useful, the system must be widely used, both by publishers and viewers, and it must be supported by many governments and suitable for use in many cultures with many languages.
PICS is an infrastructure for associating labelling metadata with internet content. The PICS specifications describe syntax for descriptions of both rating services and the labels they produce. They also specify an embedding of these labels into both HTML documents and RFC-822 messages (a sketch of the HTML case appears below). Finally, they specify two ways for clients to request PICS labels for documents: clients can use an extension to HTTP to request labels along with a document, and they can use a PICS-specified query language to search an on-line database of labels. PICS makes provisions for "Ratings Services" which can rate content even if they have not produced and do not control that content.
Finally, and importantly in the eyes of some, PICS is designed to be voluntary.
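To make the in-band case mentioned above concrete: PICS labels can be embedded in an HTML document's head via a META tag with http-equiv "PICS-Label". The Python sketch below is not part of the slides; the class name and the sample document are invented for illustration.

    from html.parser import HTMLParser

    class PICSLabelExtractor(HTMLParser):
        """Collects the content of <META http-equiv="PICS-Label"> tags,
        the in-band embedding mentioned above."""
        def __init__(self):
            super().__init__()
            self.labels = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "meta" and attrs.get("http-equiv", "").lower() == "pics-label":
                self.labels.append(attrs.get("content", ""))

    # Invented document, for illustration only.
    doc = """<html><head>
    <META http-equiv="PICS-Label"
          content='(PICS-1.1 "http://old.rsac.org/v1.0/" labels ratings (r 2))'>
    </head><body>...</body></html>"""

    extractor = PICSLabelExtractor()
    extractor.feed(doc)
    print(extractor.labels)   # ['(PICS-1.1 "http://old.rsac.org/v1.0/" labels ratings (r 2))']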
((PICS-version 1.1)
 (rating-system "http://MPAAscale.org/Ratings/Description/")
 (rating-service "http://MPAAscale.org/v1.0")
 (icon "icons/MPAAscale.gif")
 (name "The MPAA's Movie-rating Service")
 (description "A rating service based on the MPAA's movie-rating scale")
 (category
  (transmit-as "r")
  (name "Rating")
  (label (name "G") (value 0) (icon "icons/G.gif"))
  (label (name "PG") (value 1) (icon "icons/PG.gif"))
  (label (name "PG-13") (value 2) (icon "icons/PG-13.gif"))
  (label (name "R") (value 3) (icon "icons/R.gif"))
  (label (name "NC-17") (value 4) (icon "icons/NC-17.gif"))))
This PICS rating service description specifies several things, including a pointer to a description of the service's policies and the graphical representations to use for the various ratings the service can bestow.
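Because the description is machine readable, selection software never needs to hard-code a particular rating system. The Python sketch below is not part of the slides: parse_sexpr and categories are invented helpers, only a trimmed copy of the description is shown, and the real PICS grammar has more to it than this reader handles.

    import re

    def parse_sexpr(text):
        """Minimal S-expression reader returning nested lists of strings and numbers."""
        tokens = re.findall(r'\(|\)|"[^"]*"|[^\s()]+', text)
        def read(pos):
            items = []
            while pos < len(tokens):
                tok = tokens[pos]
                if tok == '(':
                    sub, pos = read(pos + 1)
                    items.append(sub)
                elif tok == ')':
                    return items, pos + 1
                else:
                    if tok.startswith('"'):
                        items.append(tok.strip('"'))
                    else:
                        try:
                            items.append(float(tok))
                        except ValueError:
                            items.append(tok)
                    pos += 1
            return items, pos
        return read(0)[0]

    def categories(service_text):
        """Map each category's transmit-name to its {value: label-name} scale."""
        cats = {}
        for form in parse_sexpr(service_text)[0]:
            if isinstance(form, list) and form and form[0] == 'category':
                transmit = next(f[1] for f in form
                                if isinstance(f, list) and f[0] == 'transmit-as')
                scale = {}
                for lab in (f for f in form if isinstance(f, list) and f[0] == 'label'):
                    value = next(x[1] for x in lab if isinstance(x, list) and x[0] == 'value')
                    name = next(x[1] for x in lab if isinstance(x, list) and x[0] == 'name')
                    scale[value] = name
                cats[transmit] = scale
        return cats

    service_text = '''((PICS-version 1.1)
      (rating-service "http://MPAAscale.org/v1.0")
      (category (transmit-as "r") (name "Rating")
        (label (name "G") (value 0)) (label (name "PG") (value 1))
        (label (name "PG-13") (value 2)) (label (name "R") (value 3))
        (label (name "NC-17") (value 4))))'''

    print(categories(service_text))
    # {'r': {0.0: 'G', 1.0: 'PG', 2.0: 'PG-13', 3.0: 'R', 4.0: 'NC-17'}}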
(PICS-1.1 "http://old.rsac.org/v1.0/" labels on "1994.11.05To8:15-0500" until "195.12.31T23:59-0000" for "http://www.gcf.org/stuff.html" by "John Doe" ratings (r 2 ))
Optional extensions to this label allow for a hash of the document's content (and maybe a signature on the label?) to be included in the label, so that a rating service can be sure to not have its old rating associated with a new version of the document.
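A rough sketch of the integrity-check idea (Python; the digest choice and base64 encoding are my assumptions, not quoted from the spec): if the label carries a digest of the rated document, the label reader can detect that the document changed after it was rated.

    import base64, hashlib

    def matches_label_digest(document_bytes, digest_from_label_b64):
        """True if the document body still hashes to the digest the rating
        service put in the label (assumed here: base64-encoded MD5)."""
        digest = hashlib.md5(document_bytes).digest()
        return base64.b64encode(digest).decode('ascii') == digest_from_label_b64

    body = b"<html>...the page as it was rated...</html>"   # hypothetical content
    claimed = base64.b64encode(hashlib.md5(body).digest()).decode('ascii')

    print(matches_label_digest(body, claimed))                           # True
    print(matches_label_digest(b"<html>edited later</html>", claimed))   # False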
This picture (again, presumably available from the PICS website) depicts the model of PICS use. A "parent" selects a rating method, and then, later, "Label reading software" is interposed between a "Child" and content on the internet. The software queries ratings services (including, potentially, the publisher of the content) to determine whether the content is appropriate for the child given the selected rating method and constraints.
A picture of a PICS tab dialog box (with one tab per user) that shows a tree list of possible ratings systems and their ratings categories, and a slider that allows the user to select the maximum allowable rating for a given field.
PICS attempts to avoid entanglement with the sticky social issues surrounding content filtering by deliberately specifying only method and not policy. It does not specify how supervisors ("parents" in the previous example) can specify configuration rules. Similarly, it does not specify how to run a ratings service. It doesn't specify the labeling vocabulary or granularity, and it does not specify who creates labels or what they must label.
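To make that division of labour concrete, here is a hypothetical policy check that selection software might run on top of PICS. The rule format and all names below are invented, since PICS itself does not prescribe them.

    # A supervisor's configuration: per-category maximum allowed values
    # (e.g. the slider from the dialog above set to "PG" on the "r" scale).
    MAX_ALLOWED = {"r": 1}

    def permitted(ratings, limits=MAX_ALLOWED):
        """ratings: {transmit-name: value} extracted from a PICS label.
        Unknown categories are treated conservatively (limit 0)."""
        return all(value <= limits.get(category, 0)
                   for category, value in ratings.items())

    print(permitted({"r": 2}))   # False: "PG-13" exceeds the configured maximum
    print(permitted({"r": 1}))   # True:  "PG" is allowed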
In addition to content restriction, PICS labels have other uses. They provide metadata that may be useful for clustering users, or helping them find others with similar interests. Given such clustering, information systems may be able to predictively suggest content that a given user may find valuable.
The metadata contained in labels may also make search and classification easier. One might imagine search engines that could use labelling information to distinguish between, say, web pages about the physics of magnetic fields and web pages about the pop band of the same name.
Finally, labels may be a convenient carrier for intellectual property information, like copyright notices.
Scribe's note: this is verbatim from the slide
PICS provides a labeling infrastructure for the Internet. It does not specify how to label or how to run a labeling service or how to set configuration rules. It rather specifies how to describe a labeling service and labels, accommodating any set of labeling dimensions and any criteria for assigning labels.
Any PICS-compatible software can interpret labels from any source, because each source provides a machine-readable description of its labeling dimensions.
"Around the world, governments are considering restrictions on on-line content. Since children differ, contexts of use differ, and values differ, blanket restrictions on distribution can never meet everyone's needs. Selection software can meet diverse needs, by blocking reception, and labels are the raw materials for implementing context-specific selection criteria. The availablity of large quantities of labels will also lead to new sorting, searching, filtering, and organizing tools that help users surf the Internet more efficiently."
Discussion after the talk raised several points:
There are further weaknesses in the scheme that may allow the identities of those taking part in a Clipper communication to be forged. Schneier, page 593, has details.
Schneier, page 99, has an interesting summary of the politics behind Key Escrow.