Appears in Proc. of the 15th ACM Symposium on Operating Systems Principles, Copper Mountain Resort, CO, December 3-6, 1995, pp. 79-95.

Informed Prefetching and Caching

R. Hugo Patterson*, Garth A. Gibson, Eka Ginting, Daniel Stodolsky, Jim Zelenka

Department of Electrical and Computer Engineering*
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
{garth,jimz}@cs.cmu.edu

Abstract

In this paper, we present aggressive, proactive mechanisms that tailor file system resource management to the needs of I/O-intensive applications. In particular, we show how to use application-disclosed access patterns (hints) to expose and exploit I/O parallelism, and to dynamically allocate file buffers among three competing demands: prefetching hinted blocks, caching hinted blocks for reuse, and caching recently used data for unhinted accesses. Our approach estimates the impact of alternative buffer allocations on application execution time and applies cost-benefit analysis to allocate buffers where they will have the greatest impact. We have implemented informed prefetching and caching in Digital's OSF/1 operating system and measured its performance on a 150 MHz Alpha equipped with 15 disks, running a range of applications. Informed prefetching reduces the execution time of text search, scientific visualization, relational database queries, speech recognition, and object linking by 20-83%. Informed caching reduces the execution time of computational physics by up to 42% and contributes to the performance improvement of the object linker and the database. Moreover, applied to multiprogrammed, I/O-intensive workloads, informed prefetching and caching increase overall throughput.
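To make the allocation rule concrete, the sketch below compares the estimated benefit of giving one buffer to a prefetch against the cost of taking that buffer from the cheapest supplier (the LRU cache or a cached hinted block). This is a minimal illustration in C; the estimator models and constants are hypothetical placeholders, not the paper's actual cost equations.

    #include <stdio.h>

    /* Estimated execution time saved by prefetching one more hinted block.
     * Hypothetical model: diminishing returns as prefetch depth grows. */
    static double prefetch_benefit(int prefetch_depth, double disk_time)
    {
        return disk_time / (prefetch_depth + 1);
    }

    /* Estimated cost of shrinking the LRU cache by one buffer.
     * Hypothetical model: expected extra misses times the disk access time. */
    static double lru_cost(double miss_rate_increase, double disk_time)
    {
        return miss_rate_increase * disk_time;
    }

    /* Estimated cost of ejecting a cached hinted block that will be reused
     * reuse_distance accesses from now: one extra fetch, discounted by how
     * far off the reuse is (hypothetical model). */
    static double hinted_cost(int reuse_distance, double disk_time)
    {
        return disk_time / (reuse_distance > 0 ? reuse_distance : 1);
    }

    int main(void)
    {
        const double disk_time = 15.0;  /* ms per disk access (assumed value) */

        double benefit  = prefetch_benefit(2, disk_time);  /* consumer */
        double supplier = lru_cost(0.01, disk_time);       /* supplier 1 */
        double alt      = hinted_cost(50, disk_time);      /* supplier 2 */
        if (alt < supplier)
            supplier = alt;  /* charge the cheapest supplier */

        /* Reallocate the buffer only when the benefit exceeds the cost. */
        puts(benefit > supplier ? "prefetch" : "keep cached");
        return 0;
    }

The key design point this illustrates is that all three uses of a buffer are valued in the same currency, estimated change in application execution time, so competing demands can be compared directly.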


