SSS Abstracts
R-NUCA: Data Placement in a Distributed Shared Cache
Friday, February 22nd, 2008 from 12-1 pm in NSH 1507.
Presented by Nikos Hardavellas, CSD
Increases in on-chip communication delay and the large working sets of commercial and scientific workloads complicate the design of the on-chip cache for multicore processors. The large working sets favor a shared L2 cache design that maximizes the aggregate cache capacity and minimizes off-chip memory requests. At the same time, the growing on-chip communication delay favors core-private caches that replicate data to minimize delays on global wires. Recent hybrid proposals promise to offer lower average latency than conventional designs. However, these proposals either address the placement requirements of only a subset of the data accessed by the application, require complicated lookup and coherence mechanisms that increase latency, or fail to scale to high core counts.
In this work, we observe that the cache access patterns of a range of server and scientific workloads can be classified into distinct categories, each amenable to a different data placement policy. Based on this observation, we propose R-NUCA, a distributed shared L2 cache design that cooperates with the operating system to achieve an efficient, large-capacity, low-average-latency L2 cache. Our design supports intelligent placement, migration, and replication without the overhead of an explicit coherence mechanism for the distributed L2 cache. We evaluate our design on a range of server and scientific workloads and find that it lowers effective L2 latency by 13% on average compared to prior proposals, and by up to 23% in some cases.
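To make the classification idea concrete, here is a minimal Python sketch of one plausible reading of the abstract: accesses are grouped by OS page into classes, and each class maps to its own placement policy. The class names, policy descriptions, and the `PageClassifier` helper are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): classifying memory
# accesses as the abstract describes -- distinct access classes, each mapped
# to its own placement policy in a distributed shared L2.
from collections import defaultdict

# Hypothetical placement policies, named here only for illustration.
PLACE_LOCAL = "place in requesting core's local L2 slice"
PLACE_SHARED = "address-interleave across all L2 slices"
PLACE_REPLICATE = "replicate in a nearby cluster of slices"

class PageClassifier:
    """Tracks, per OS page, which cores touch it and whether the accesses
    are instruction fetches or data references."""
    def __init__(self):
        self.readers = defaultdict(set)          # page -> cores that accessed it
        self.is_instruction = defaultdict(bool)  # page -> ever fetched as code

    def record_access(self, page, core, instruction_fetch=False):
        self.readers[page].add(core)
        self.is_instruction[page] |= instruction_fetch

    def placement(self, page):
        if self.is_instruction[page]:
            return PLACE_REPLICATE   # read-only code reused by many cores
        if len(self.readers[page]) == 1:
            return PLACE_LOCAL       # private data: keep close to its one user
        return PLACE_SHARED          # shared data: single copy, no replication

# Example: page 0x1000 is touched only by core 3; page 0x2000 by cores 1 and 5.
c = PageClassifier()
c.record_access(0x1000, core=3)
c.record_access(0x2000, core=1)
c.record_access(0x2000, core=5)
print(c.placement(0x1000))   # local placement
print(c.placement(0x2000))   # shared, address-interleaved
```

The point the sketch tries to capture is that classification needs only page-level access information of the kind the operating system already observes, which is what lets the design cooperate with the OS rather than rely on extra hardware coherence machinery for the L2.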
Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.
Identifying Cell Cycle Genes by Combining Sequence Data and Expression Data
Friday, February 29th, 2008 from 12-1 pm in NSH 1507.
Presented by Yong Lu, CSD
The expression of genes during the cell division process has now been studied in many different species. An important goal of these studies is to identify the set of cell cycle genes. Due to noise and other data analysis problems, accurately deriving a set of cell cycle genes from expression data is a hard problem. Here we present the first algorithm that combines sequence and microarray gene expression data from multiple species to identify cycling genes. Our algorithm represents genes from multiple species as nodes in a graph, with edges between genes representing sequence similarity. Starting with the measured expression values for each species, we use a belief propagation algorithm to determine a posterior score for each gene. This posterior is used to determine a new set of cycling genes for each species. By applying our algorithm, we were able to obtain a more accurate set of cycling genes. Our method was especially successful for the human dataset, indicating that it can use a high-quality dataset from one species to overcome noise problems in another.
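As a rough illustration of the approach, the Python sketch below propagates "cycling" evidence over a cross-species similarity graph. It is a simplified score-propagation loop in the spirit of belief propagation, not the authors' algorithm; the gene names, prior scores, and the `alpha` mixing weight are invented for the example.

```python
# Toy sketch (not the authors' algorithm): propagating "cycling" evidence over
# a cross-species homology graph. Nodes are genes, edges connect
# sequence-similar genes, and each gene starts with a prior score derived from
# its own expression data.

def propagate_scores(edges, prior, alpha=0.6, iterations=20):
    """Iteratively blend each gene's own expression-based prior with the
    average belief of its sequence-similar neighbors.

    edges:  dict mapping gene -> list of neighboring genes
    prior:  dict mapping gene -> initial probability of being a cycling gene
    alpha:  weight on the gene's own evidence (assumed value, for illustration)
    """
    belief = dict(prior)
    for _ in range(iterations):
        updated = {}
        for gene, neighbors in edges.items():
            if neighbors:
                neighbor_avg = sum(belief[n] for n in neighbors) / len(neighbors)
            else:
                neighbor_avg = belief[gene]      # isolated gene keeps its own belief
            updated[gene] = alpha * prior[gene] + (1 - alpha) * neighbor_avg
        belief = updated
    return belief

# Example: a noisy yeast gene ("yeast_g1") gains confidence from a strongly
# cycling human homolog ("human_g1"); an isolated gene ("yeast_g2") is unchanged.
edges = {"yeast_g1": ["human_g1"], "human_g1": ["yeast_g1"], "yeast_g2": []}
prior = {"yeast_g1": 0.4, "human_g1": 0.95, "yeast_g2": 0.2}
print(propagate_scores(edges, prior))
```

The toy example shows the intended effect described in the abstract: a gene with weak, noisy evidence in one species is pulled toward the confident score of its homolog in a higher-quality dataset.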
Joint work with Profs. Ziv Bar-Joseph and Roni Rosenfeld. In Partial Fulfillment of the Speaking Requirement.
Toward Realistic and Artifact-Free Insider-Threat Data
Friday, March 21st, 2008 from 12-1 pm in NSH 1507.
Presented by Kevin Killourhy, CSD
The threat posed by malicious insiders is growing. Incidents of corporate espionage and employee sabotage appear regularly in the news. However, research into the detection of insiders is limited by a lack of realistic, publicly available, real-world data, due to concerns about privacy and confidentiality. Although data can be sanitized to mitigate such issues, the mere act of sanitizing the data may introduce artifacts that compromise its utility for research purposes. Accordingly, experimental results based on sanitized data may not generalize to the real world.
In this talk I discuss the consequences of sanitization artifacts for insider-threat detection experiments. I describe a suite of tools for collecting and sanitizing data, and outline a methodology for measuring the impact of data sanitization. The results of an experiment using raw data are compared to the results using each of three types of sanitized data. Two of the three sanitization strategies change the experimental outcomes. Because these same strategies are used in practice, this result calls into question the utility of ongoing insider-threat research. On the other hand, the third sanitization strategy addresses these concerns, indicating that realistic, artifact-free data sets can be created with appropriate tools and methods.
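A minimal sketch of the comparison methodology, with the detector, metric, and sanitization strategies as hypothetical placeholders: run the same experiment on the raw data and on each sanitized variant, then measure how far each variant's outcome drifts from the raw baseline.

```python
# Sketch only: the detector, metric, and sanitized variants below are
# stand-ins, not the tools or strategies from the talk.

def sanitization_impact(detector, raw_data, sanitized_variants, metric):
    """Run the same detection experiment on raw data and on each sanitized
    variant, and report how far each variant's result drifts from the raw
    baseline (zero drift means the sanitization preserved the outcome)."""
    baseline = metric(detector(raw_data))
    return {name: metric(detector(data)) - baseline
            for name, data in sanitized_variants.items()}

# Toy usage: a "detector" that flags suspicious events, with event count as the metric.
detector = lambda data: [e for e in data if e.get("suspicious")]
metric = len
raw = [{"suspicious": True}, {"suspicious": False}, {"suspicious": True}]
variants = {"strategy_A": raw[:1], "strategy_B": raw}          # hypothetical sanitized versions
print(sanitization_impact(detector, raw, variants, metric))    # {'strategy_A': -1, 'strategy_B': 0}
```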
Joint work with Prof. Maxion. In partial fulfillment of the CSD Speaking Skills Requirement.
cSamp: A System for Network-Wide Flow Monitoring
Friday, April 4th, 2008 from 12-1 pm in NSH 1507.
Presented by Vyas Sekar, CSD
Critical network management applications increasingly demand fine-grained, flow-level measurements. However, current flow monitoring solutions are inadequate for many of these applications. In this talk, I will present the design, implementation, and evaluation of cSamp, a system-wide approach to flow monitoring. The design of cSamp derives from three key ideas: flow sampling as a router primitive instead of uniform packet sampling; hash-based packet selection to achieve coordination without explicit communication; and a framework for distributing responsibilities across routers to achieve network-wide monitoring goals while respecting router resource constraints. We show that cSamp achieves much greater monitoring coverage, better use of router resources, and an enhanced ability to satisfy network-wide flow monitoring goals compared to existing solutions.
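The hash-based coordination idea can be pictured with a short Python sketch (an illustration of the idea as stated in the abstract, not the cSamp implementation): each router is assigned a disjoint sub-range of a common hash space, so every flow is recorded by exactly one router along its path with no router-to-router communication. The hash function, range values, and flow key below are arbitrary choices for the example.

```python
# Illustrative sketch: hash-based flow selection that lets routers coordinate
# without explicit communication. Each router on a path owns a disjoint
# sub-range of [0, 1); a router records a flow only if the flow's hash falls
# in its range, so no flow is monitored twice and none is missed.
import hashlib

def flow_hash(flow_key):
    """Map a 5-tuple flow key to a point in [0, 1) using a common hash."""
    digest = hashlib.sha1(repr(flow_key).encode()).hexdigest()
    return int(digest[:8], 16) / 0x100000000

def should_sample(flow_key, hash_range):
    """hash_range is the (start, end) fraction of hash space assigned to this
    router for this flow's path (assumed to come from a central assignment of
    responsibilities, as the abstract describes)."""
    start, end = hash_range
    return start <= flow_hash(flow_key) < end

# Example: two routers on a path split responsibility 60% / 40%.
flow = ("10.0.0.1", "10.0.0.2", 443, 51514, "TCP")
print(should_sample(flow, (0.0, 0.6)))   # router A's share
print(should_sample(flow, (0.6, 1.0)))   # router B's share -- exactly one is True
```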
An Empirical Study of a Smartphone-Based Access-Control System
Friday, April 18th, 2008 from 12-1 pm in Wean 5409.
Presented by Kami Vaniea, CSD
Although significant effort has been invested in developing expressive and flexible access-control languages and systems, little has been done to evaluate these systems in practical situations with real users. Moreover, few attempts have been made to discover the access-control policies that users actually want to implement. We present findings from a year-long trial of a smartphone-based access-control system that allows a user to exercise her authority to gain access to rooms in our university building, and to delegate that authority to other users. Using interview and usage data, we derive the ideal access-control policies for a group of users and quantitatively compare those ideal policies with the policies the users actually implemented with keys and with our smartphone-based distributed access-control system. We show that users' key-based policies are more permissive, and thus less secure, than their smartphone-based policies. We also present aspects of the system that affected both its general usability and users' acceptance of it. In particular, we demonstrate aspects of the system that gave rise to failures, misunderstandings, misperceptions, and unintended uses. We discuss the implications of these usability observations and how they might affect similar systems.
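The "more permissive" comparison can be illustrated with a toy sketch in which a policy is a set of (person, room) grants; the names and grants below are hypothetical, not data from the study.

```python
# Toy sketch of the quantitative comparison the abstract describes: an
# implemented policy is "more permissive" when it grants accesses that the
# ideal (desired) policy does not. All names and grants are hypothetical.

ideal = {("alice", "lab"), ("bob", "lab")}
key_based = {("alice", "lab"), ("bob", "lab"), ("bob", "server_room")}   # coarse key grants
smartphone = {("alice", "lab"), ("bob", "lab")}

def excess_grants(actual, ideal):
    """Accesses allowed by the implemented policy but not wanted ideally."""
    return actual - ideal

print(excess_grants(key_based, ideal))    # {('bob', 'server_room')} -> over-permissive
print(excess_grants(smartphone, ideal))   # set() -> matches the ideal policy
```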
Modeling and Analysis of Networked Secure Systems
Friday, April 25th, 2008 from 12-1 pm in NSH 1507.
Presented by Jason Franklin, CSD
We present a logic for reasoning about networked secure systems in the presence of adversaries that may simultaneously control the network and access unprotected regions of memory on individual machines. Our formalism extends an existing logic for reasoning about security protocols by adding a shared memory and limited forms of access control. Another point of distinction is our use of explicit time both to order events and to represent invariants. We prove a soundness theorem for the proof system and illustrate its use by proving a relevant security property of part of the Trusted Computing Group's remote attestation and sealed storage protocols.
This logic represents one component of a larger project to develop a theory of secure systems. Our ultimate goal is to develop a formal framework for modeling and analyzing secure systems at two levels of abstraction: system architecture and system implementation. A specific issue that we plan to address in developing and using this framework is providing rigorous definitions of security and adversary models, a relatively unexplored area in system security. In addition, we hope to identify design principles for secure systems, as well as a core set of basic building blocks from which complex systems can be constructed via secure composition.
This is joint work with Deepak Garg, Dilsun Kaynar, and Anupam Datta. More information can be found at: http://www.cs.cmu.edu/~jfrankli/toss/
ORA: Software for Dynamic Network Analysis
Friday, May 2nd, 2008 from 12-1 pm in NSH 1507.
Presented by Terrill Frantz, ISR/COS
This talk will consist of both a short introduction to the multi-network world-view of Dynamic Network Analysis (DNA) and a brief live demonstration of the ORA software. ORA is a powerful tool for visualizing and working with the multi-mode, multi-network, and temporal relational datasets necessary for dynamic network analysis. This presentation will orient the audience to the basic ideas of DNA and the features of the ORA software. Sufficient background will be provided for attendees to begin exploring these techniques and this software on their own, across a far-reaching range of research areas. The ideas behind DNA and ORA have been developed over many years at the Center for Computational Analysis of Social and Organizational Systems (CASOS; ISR/SCS/CMU) and are spreading widely throughout the network analysis and research community.
Test The Program, Test The Abstraction, Make It Better, Prove It Right
Friday, May 9th, 2008 from 12-1 pm in Wean 4623.
Presented by Robert J. Simmons, CSD
Microsoft Research India's Yogi Project combines two technologies, counterexample-guided abstraction refinement (CEGAR)-style software model checking and automatic test-case generation, with the aim of increasing the power and efficiency of proving properties of C-style programs. Yogi integrates the two techniques by using the program abstraction to guide testing and using the results of symbolic execution to generate the counterexamples that guide refinement of the abstraction graph, in many cases allowing the tool to succeed more efficiently than would be possible with either technique alone. I will describe the Yogi algorithm and an extension that allows it to cope efficiently with C programs without relying on any whole-program alias analysis.
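The interplay between model checking and testing can be summarized with a structural Python sketch. This is a reading of the abstract, not the actual Yogi code: the objects and their methods (`find_abstract_error_trace`, `generate_test_along`, `divergence_frontier`, `refine_at`) are hypothetical placeholders standing in for the corresponding steps.

```python
# High-level sketch (assumed structure, not the Yogi implementation):
# alternate between checking the abstraction and running concrete tests,
# letting each side inform the other.

def yogi_style_check(program, abstraction, max_iterations=100):
    """Returns ("safe", proof), ("unsafe", failing_test), or ("unknown", None)."""
    for _ in range(max_iterations):
        abstract_trace = abstraction.find_abstract_error_trace(program)
        if abstract_trace is None:
            # No abstract path reaches the error: the abstraction is a proof.
            return ("safe", abstraction)

        # Use the abstraction to guide testing: try to drive a concrete test
        # along the abstract error trace.
        test = program.generate_test_along(abstract_trace)
        if test is not None and program.run(test).reaches_error():
            # The test is a real counterexample: report the bug.
            return ("unsafe", test)

        # The test diverged from the abstract trace; use the symbolic state at
        # the divergence point to split an abstract region, ruling out this
        # spurious trace before the next round of checking.
        frontier = abstraction.divergence_frontier(abstract_trace, test)
        abstraction.refine_at(frontier)

    return ("unknown", None)
```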
Web contact: sss+www@cs