 SSS Abstracts 
Summer 1998

Distributional Clustering of Words for Text Classification

Friday, June 12th, 1998 from 12-1 pm in WeH 4601.

Presented by Doug Baker.

I will describe the application of Distributional Clustering to text classification. This approach clusters words into groups based on the distribution of class labels associated with each word. Because it makes use of the class labels, unlike unsupervised dimensionality reduction techniques such as Latent Semantic Indexing, it can compress the feature space much more aggressively while still maintaining high document classification accuracy.

Experimental results obtained on three real-world data sets show that we can reduce the feature dimensionality by three orders of magnitude and lose only 2% accuracy.
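
To make the approach concrete, here is a hedged Python sketch of the core idea only (a toy illustration, not the authors' implementation): estimate P(class | word) from labeled documents, then greedily merge the words whose class-label distributions are most similar. The merge criterion and weighting are simplified relative to the paper.

```python
# Toy sketch: cluster words by similarity of their class-label distributions.
# The pair-merge criterion and weighting are simplified for illustration.
from collections import Counter, defaultdict
import math

def class_distributions(docs):
    """docs: list of (label, [words]).  Return {word: {label: P(label|word)}}."""
    counts = defaultdict(Counter)
    for label, words in docs:
        for w in words:
            counts[w][label] += 1
    return {w: {lbl: n / sum(c.values()) for lbl, n in c.items()}
            for w, c in counts.items()}

def kl(p, q, labels, eps=1e-9):
    return sum(p.get(l, 0) * math.log((p.get(l, 0) + eps) / (q.get(l, 0) + eps))
               for l in labels)

def js(p, q, labels):
    m = {l: 0.5 * (p.get(l, 0) + q.get(l, 0)) for l in labels}
    return 0.5 * kl(p, m, labels) + 0.5 * kl(q, m, labels)

def cluster_words(docs, n_clusters):
    labels = {lbl for lbl, _ in docs}
    # Each cluster starts as a single word with its own class distribution.
    clusters = [([w], d) for w, d in class_distributions(docs).items()]
    while len(clusters) > n_clusters:
        # Greedily merge the two clusters with the most similar distributions.
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: js(clusters[ij[0]][1], clusters[ij[1]][1], labels))
        wi, di = clusters[i]
        wj, dj = clusters[j]
        merged = {l: 0.5 * (di.get(l, 0) + dj.get(l, 0)) for l in labels}
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append((wi + wj, merged))
    return clusters

docs = [("sports", "ball game team score".split()),
        ("politics", "vote election team policy".split())]
for words, dist in cluster_words(docs, 2):
    print(words, dist)
```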

A link to the related paper: Distributional Clustering of Words for Text Classification


Terminal Time: A Recombinant History Apparatus

Friday, June 19th, 1998 from 1-2 pm in WeH 4623.

Presented by Michael Mateas.

Terminal Time is a history engine that generates believable historical documentaries in response to audience polls. A case-based story system constructs ideologically biased historical documentaries by combining historical episodes and plot templates stored in a case memory. At several points in the story, the audience is presented with a set of multiple-choice questions, which it answers via an applause meter. These answers influence the choices the story engine makes. The resulting constructed history creates the illusion that history has been about some single theme or small set of interlocked themes.

As an art project, Terminal Time critiques positivist history, notions of interactive freedom (utopian navigation), and the rhetorical framing of documentary films. As an AI project, it explores case-based story generation, broad and shallow knowledge representations for history, and ideological reasoning.

Terminal Time is a work in progress. In this talk I'll describe the current design, show a performance of a prototype, and describe current and future work on the project. Terminal Time is a collaboration between computer scientist Michael Mateas, video producer Steffi Domike, and electronic artist Paul Vanouse.
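
Purely as a hypothetical illustration of the flavor of case-based selection described above (the episode data, scoring, and names below are invented; they are not Terminal Time's actual case memory or algorithm), an applause-driven bias can steer which episodes are pulled from case memory:

```python
# Invented toy: pick episodes whose themes best match the audience's
# applause-meter answers.  Not the actual Terminal Time system.
EPISODES = [
    {"title": "Rise of the printing press", "themes": {"technology": 0.9, "religion": 0.3}},
    {"title": "The Crusades",               "themes": {"religion": 0.9, "conflict": 0.8}},
    {"title": "The Industrial Revolution",  "themes": {"technology": 0.8, "labor": 0.7}},
]

def audience_bias(answers):
    """Turn poll answers (theme -> applause level) into normalized weights."""
    total = sum(answers.values()) or 1.0
    return {theme: level / total for theme, level in answers.items()}

def pick_episode(bias, used):
    """Choose the unused episode whose themes best match the audience bias."""
    def score(ep):
        return sum(bias.get(t, 0.0) * w for t, w in ep["themes"].items())
    return max((ep for ep in EPISODES if ep["title"] not in used), key=score)

bias = audience_bias({"technology": 8, "religion": 2})
story, used = [], set()
for _ in range(2):
    ep = pick_episode(bias, used)
    story.append(ep["title"])
    used.add(ep["title"])
print(story)   # episodes slanted toward the audience's chosen theme
```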


Concurrency and Refinement

Friday, June 26th, 1998 from 12-1 pm in WeH 4601.

Presented by Jürgen Dingel.

Formal support for the design and verification of parallel programs has been a research topic for a long time. Some approaches, for instance, attempt to generalize Hoare logic to a parallel setting and suggest syntax-directed proof systems. Independently of these efforts, traces have come to be recognized as an adequate tool for modeling concurrent computation.

We present a trace-based, syntax-directed refinement calculus for shared-variable parallel programs. It supports compositional reasoning, local variables, and fairness, and it allows reasoning about liveness properties such as termination or eventual entry.
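
Schematically, compositional reasoning means that refinement of the components lifts to refinement of their parallel composition. The rule below is a hedged sketch of that shape only, with notation simplified; the calculus in the paper attaches assumptions about the environment and fairness to each component.

```latex
% Sketch of a compositionality rule (simplified notation; P \sqsubseteq Q
% reads "Q refines P", and \parallel is parallel composition).
\[
\frac{P_1 \sqsubseteq Q_1 \qquad P_2 \sqsubseteq Q_2}
     {P_1 \parallel P_2 \;\sqsubseteq\; Q_1 \parallel Q_2}
\]
```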

The presentation will be informal and example-driven.

A link to the related paper: A Trace-Based Refinement Calculus for Shared-Variable Parallel Programs


Compilation Techniques for Planning

Friday, July 3rd, 1998 from 12-1 pm in WeH 4601.

Presented by John Langford.

With the advent of Graphplan, the idea of first compiling a planning problem into a 'fast' representation and then searching over that representation has arrived. At the recent AIPS conference, every planner in the planning competition did some form of plan compilation. I'll discuss the various (amazingly different) techniques used and some new ideas that I have been working on.
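
As a hedged illustration of the "compile, then search" idea (a toy sketch, not any competition planner), a Graphplan-style compiler first expands a layered planning graph before any search happens. The sketch below only grows the graph and omits mutex constraints and the backward search.

```python
# Expand a planning graph of alternating proposition/action layers until
# the goals appear.  Mutexes and the backward plan search are omitted.
from collections import namedtuple

Action = namedtuple("Action", "name pre add")

ACTIONS = [
    Action("boil_water", {"have_water"}, {"hot_water"}),
    Action("add_tea",    {"hot_water", "have_tea"}, {"tea_made"}),
]

def expand_graph(init, goals, actions, max_levels=10):
    """Return the level at which all goals first appear (or None)."""
    props = set(init)
    for level in range(max_levels):
        if goals <= props:
            return level
        applicable = [a for a in actions if a.pre <= props]
        # No-ops carry every proposition forward; real Graphplan also
        # records mutex relations between actions and propositions.
        new_props = props | {p for a in applicable for p in a.add}
        if new_props == props:      # graph has levelled off; goals unreachable
            return None
        props = new_props
    return None

level = expand_graph({"have_water", "have_tea"}, {"tea_made"}, ACTIONS)
print("goals reachable at level", level)   # -> 2
```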

A link to the original Graphplan paper by Blum and Furst: Fast Planning Through Planning Graph Analysis


Lexical Discovery with an Enriched Semantic Network

Friday, July 10th, 1998 from 12-1 pm in WeH 4601.

Presented by Doug Beeferman.

In this talk I introduce a simple database system for extracting knowledge from semantic networks. Viewed as a multigraph with typed edges, a semantic network is simply a set of binary relations over the same vocabulary represented in the same structure. I have implemented a system in which these relations can be described, composed using regular expressions, and then combined into "path specification" queries that are useful in a number of domains. I motivate this work with a lexical database that combines primitive lexical relations, semantic and phonetic, drawn from a variety of knowledge and data sources. The system has been used by writers to generate relationships between concepts, find rhyming words and puns, and answer simple identification questions. I'll discuss the implementation challenges of the database and its interface, and demonstrate its application to word discovery.
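
For a rough sense of what such a query looks like, here is an invented toy sketch (not the actual database system or its relation names, and it handles only sequences of relations rather than full regular expressions): a semantic network as a multigraph with typed edges, queried by composing primitive relations.

```python
# Toy semantic network with typed edges; queries compose primitive relations.
from collections import defaultdict

class SemanticNet:
    def __init__(self):
        self.edges = defaultdict(set)           # (word, relation) -> {words}

    def add(self, src, relation, dst):
        self.edges[(src, relation)].add(dst)

    def follow(self, words, relation):
        """Apply one primitive relation to a set of words."""
        out = set()
        for w in words:
            out |= self.edges[(w, relation)]
        return out

    def path(self, start, relations):
        """Compose relations left to right, e.g. ["synonym", "rhymes_with"]."""
        words = {start}
        for r in relations:
            words = self.follow(words, r)
        return words

net = SemanticNet()
net.add("cat", "synonym", "feline")
net.add("feline", "rhymes_with", "beeline")
net.add("cat", "rhymes_with", "hat")

# Words reachable by following a synonym edge and then a rhyme edge.
print(net.path("cat", ["synonym", "rhymes_with"]))   # -> {'beeline'}
```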

A link to the related paper: Lexical Discovery with an Enriched Semantic Network


Standard ML for Parallel / Scientific Computing

Friday, July 17th, 1998 from 12-1 pm in WeH 4623.

Presented by Franklin Chen.

In the PSciCo (parallel and scientific computing) project, we wish to explore the use of an advanced language, Standard ML, for expressing parallel and scientific algorithms in a clean way: for example, problems in computational geometry (triangulation, meshing) and the simulation of physical systems (galaxy clusters, blood flow).

As an example, I use the N-body problem: the task of simulating the motion of N particles in space, where each particle exerts a force on every other particle. I will show how Standard ML's powerful module system can be used to modularize the organization of a family of algorithms for the N-body problem. The result is a flexible framework for experimenting with the various components of these algorithms in a plug-and-play fashion.
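
The actual framework uses Standard ML functors; as a loose analogue only, this Python sketch shows the plug-and-play shape of the design: the integrator is written once against a force-calculation interface, and different force components (here a toy 1-D direct sum; a Barnes-Hut component would slot in the same way) can be swapped in without touching it.

```python
# Rough Python analogue of the plug-and-play organization (the project
# itself uses Standard ML functors, not Python classes).
class DirectSum:
    """O(N^2) pairwise forces; a stand-in for one pluggable component."""
    def forces(self, positions, masses):
        n = len(positions)
        out = [0.0] * n
        for i in range(n):
            for j in range(n):
                if i != j:
                    d = positions[j] - positions[i]
                    out[i] += masses[j] * d / (abs(d) ** 3 + 1e-9)  # 1-D toy gravity
        return out

def step(force_component, positions, velocities, masses, dt):
    """One integration step, generic over the force-calculation component."""
    f = force_component.forces(positions, masses)
    velocities = [v + fi * dt for v, fi in zip(velocities, f)]
    positions = [p + v * dt for p, v in zip(positions, velocities)]
    return positions, velocities

pos, vel, m = [0.0, 1.0, 2.5], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]
pos, vel = step(DirectSum(), pos, vel, m, dt=0.01)   # a Barnes-Hut component
print(pos, vel)                                      # could be swapped in here
```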


An Architecture for Thread-Level Data Speculation

Friday, July 24th, 1998 from 12-1 pm in WeH 4601.

Presented by Chris Colohan.

In this talk I will present a new computer architecture that promises to allow sequential programs to achieve high performance on parallel processors. Thread-Level Data Speculation (TLDS) is a technique which enables the optimistic parallelization of applications. The basic idea is to speculate that dependences do not exist, and then to recover and restart whenever dependences do occur dynamically. TLDS can accelerate performance when the gain from increased thread-level parallelism outweighs the overhead of failed speculation.

I will give an overview of what TLDS is and show how it can speed up sequential programs. The hardware/software interface will be discussed, with examples of how common code constructs can be converted into speculatively parallel constructs. Hardware modifications to support TLDS will be briefly discussed as well.
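
To convey the idea in software terms only (TLDS itself is a hardware mechanism; the epoch/commit bookkeeping below is invented for illustration), here is a hedged Python sketch: loop iterations run speculatively against private buffers, and an iteration that read a location an earlier iteration wrote is squashed and re-executed.

```python
# Software illustration of speculative parallelization with dependence
# checking; not the actual TLDS hardware design.
def run_speculative(epochs, memory):
    """epochs: list of functions taking (read, write) accessors."""
    committed = 0
    while committed < len(epochs):
        logs = []
        # Optimistically execute the remaining epochs against private buffers.
        for body in epochs[committed:]:
            reads, writes, buffer = set(), set(), {}
            def read(addr, reads=reads, buffer=buffer):
                reads.add(addr)
                return buffer.get(addr, memory[addr])
            def write(addr, val, writes=writes, buffer=buffer):
                writes.add(addr)
                buffer[addr] = val
            body(read, write)
            logs.append((reads, writes, buffer))
        # Commit in order; stop at the first epoch that read a location
        # an earlier epoch in this round wrote (a dependence violation).
        earlier_writes = set()
        for reads, writes, buffer in logs:
            if reads & earlier_writes:
                break                      # squash this epoch and all later ones
            memory.update(buffer)          # commit speculative writes
            earlier_writes |= writes
            committed += 1
    return memory

# Two loop iterations; the second truly depends on the first through x0.
mem = {"x0": 1, "x1": 10}
epochs = [lambda rd, wr: wr("x0", rd("x0") + 1),
          lambda rd, wr: wr("x1", rd("x0") * 2)]
print(run_speculative(epochs, mem))   # -> {'x0': 2, 'x1': 4}
```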


Reinforcement Learning Through Gradient Descent

Friday, July 31st, 1998 from 12-1 pm in WeH 4601.

Presented by Leemon Baird.

Reinforcement learning is a rapidly growing field with many recent successes. But often the algorithms used are heuristic and carry no guarantee of being able to learn anything at all. In fact, I will show simple problems on which all existing algorithms learn nothing. I will propose a new family of algorithms with guarantees of convergence. This family includes algorithms that take a whole new approach to reinforcement learning, directly optimizing how the learning system acts without learning value functions at all, as well as algorithms that combine this with the old, value-function-based approach. Simulation results show that this new approach learns faster than previous approaches. It also suggests a new way of approaching hierarchical systems, multi-agent systems, and hidden-state systems.
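
As a generic, hedged illustration of the "directly optimize how the system acts" idea (a textbook REINFORCE-style update on a toy bandit, written for this page; it is not the algorithm family presented in the talk):

```python
# Direct policy optimization by gradient ascent on a toy two-armed bandit.
import math, random

random.seed(0)
REWARD_PROB = [0.2, 0.8]        # arm 1 pays off more often
theta = [0.0, 0.0]              # one preference per arm

def policy(theta):
    z = [math.exp(t) for t in theta]
    s = sum(z)
    return [p / s for p in z]   # softmax action probabilities

alpha = 0.1
for _ in range(2000):
    probs = policy(theta)
    a = 0 if random.random() < probs[0] else 1
    r = 1.0 if random.random() < REWARD_PROB[a] else 0.0
    # Gradient of log pi(a|theta) for a softmax policy: 1[a == i] - pi(i).
    for i in range(2):
        grad_log = (1.0 if i == a else 0.0) - probs[i]
        theta[i] += alpha * r * grad_log

print(policy(theta))            # probability mass shifts toward the better arm
```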


Practical Issues in a Type-Passing Compiler

Friday, August 7th, 1998 from 12-1 pm in WeH 4601.

Presented by Chris Stone.

Compiling code when implementation types are unknown until link-time (e.g., separately compiled ADTs) or run-time (e.g., polymorphism) is not easy. Experience with the Fox Project's TIL compiler for Standard ML showed that an implementation using "type-passing" could give noticeable space and time improvements in compiled code, compared to more standard techniques. However, the TIL compiler handled only "core SML" (no module system), had no provision for separate compilation, and the implementation did not scale well. For all these reasons, TIL was tested with only small benchmarks (3K lines or less). Since then we have re-implemented TIL to produce TILT (TIL Two). TILT has successfully compiled larger benchmarks, including the SML Standard Basis (200 files, around 22K lines) and TILT itself (250 files, around 70K lines).

I will start by explaining type-passing and why it is useful. The bulk of the talk will cover some of the choices and pitfalls encountered in our quest for a scalable, clean, and correct implementation.
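
As a loose analogue of why passing types at run time is useful (Python standing in for compiler-generated code; TILT itself works through typed intermediate languages and intensional type analysis), a "polymorphic" constructor that receives a type description can pick an unboxed, specialized layout instead of a uniform boxed one:

```python
# Loose analogue only: dispatch on a runtime type representation to pick
# a specialized data layout, as type-passing compiled code might.
import array

def make_vector(elem_type, values):
    """Choose a representation based on the type argument."""
    if elem_type is float:
        return array.array("d", values)    # unboxed doubles
    if elem_type is int:
        return array.array("q", values)    # unboxed 64-bit ints
    return list(values)                    # fall back to a boxed representation

print(make_vector(float, [1.0, 2.0]))      # array('d', [1.0, 2.0])
print(make_vector(str, ["a", "b"]))        # ['a', 'b']
```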


Tool Support for Design Patterns

Friday, August 14th, 1998 from 12-1 pm in WeH 4601.

Presented by Edwin Chan.

Design patterns are seen by many as key ingredients in making reusable software. According to "Design Patterns" by Gamma, Helm, Johnson, and Vlissides, they describe "solutions to specific problems in software design," and are distilled from the designs of many evolved systems. Collectively, they form a language for talking about design and a valuable source of design ideas.

However, patterns are no panacea. They can only guide the redesign of a system, leaving the programmers with the task of retrofitting the existing code with the new design. The ACT/Fluid project is building tools to deal with those kinds of "bureaucratic" structural change. Programmers direct these tools to manipulate Java source code, for example to abstract out methods or classes from existing code. Annotations on component interfaces, called promises, enable analyses that check the correctness of these manipulations.

In this talk, I will describe some of our work on these code manipulation tools, and discuss how our techniques can support the use of patterns.


Relational Learning for Web Page Classification

Friday, August 21st, 1998 from 12-1 pm in WeH 4601.

Presented by Seán Slattery.

Classifying text has become quite the hot topic in machine learning over the past few years. The combination of real-world data (high-dimensional and noisy) and its availability in large amounts (Usenet, the Web, etc.) tends to make machine learning researchers salivate. And, of course, using this technology to build newsreaders and web browsers that can filter based on your interests is no small motivator.

The bulk of research to date in this area has worked from the assumption that documents are independent and "flat". However, for documents like pages on the Web, the information contained in how they are structured internally, and in how they are related to each other through hyperlinks, tends to be ignored. My thesis work investigates how we can take such information into account and whether it can help us build better classifiers.
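
One very simple way to make the contrast with "flat" documents concrete (a toy sketch with an invented corpus; the thesis work itself uses relational learning rather than this feature-augmentation trick): give each page, in addition to its own words, tagged words from the pages that link to it, and then hand the result to any standard text classifier.

```python
# Toy illustration: augment a page's features with words from pages that
# link to it.  Corpus and feature scheme invented for illustration.
from collections import Counter

PAGES = {
    "p1": {"words": ["course", "homework"], "links_to": ["p2"]},
    "p2": {"words": ["professor", "research"], "links_to": []},
    "p3": {"words": ["advisor", "students"], "links_to": ["p2"]},
}

def flat_features(page_id):
    return Counter(PAGES[page_id]["words"])

def relational_features(page_id):
    feats = flat_features(page_id)
    # Add words from every page that links to this one, tagged so a
    # learner can tell "on the page" and "on a linking page" apart.
    for other, data in PAGES.items():
        if page_id in data["links_to"]:
            feats.update("linked_by:" + w for w in data["words"])
    return feats

print(flat_features("p2"))
print(relational_features("p2"))   # also sees words from p1 and p3
```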

In this talk I'll go over some of the basic techniques for conventional text classification and then detail the approach I've been taking to adding structure information into the mix. I'll show various results and failures to date and map out future research questions I hope to tackle.


Web contact: sss+www@cs