CMU Artificial Intelligence Repository
Acuff's Lisp Benchmarking Suite
lang/lisp/code/bench/acuff/
Source code for the benchmark programs used to gather data for
Richard Acuff, "Performance of Two Common Lisp Programs on
Several Systems", Stanford University Knowledge Systems Lab
(KSL) Report 89-02, 1989.
Includes copies of the tech report in LaTeX and Microsoft Word
formats. The TR contains information about running the tests.
Acuff benchmarks 10 Common Lisp implementations on a variety of
hardware platforms using two moderate-sized application programs,
BB1 1.2 and SOAR 4.4.4. Neither of these programs makes intensive
use of numeric computation. He also considers the effects of
compiler optimization levels and the impact of display I/O on run time.
The Lisp implementations tested include Lucid, Franz Allegro CL,
MACL, AKCL, KCL, VaxLisp, TI Explorers, Symbolics MacIvory,
Symbolics 3650, and Xerox 1186.
The results suggest that the performance of Lisp systems is
highly application-dependent.
Origin:
sumex-aim.stanford.edu:89-02
Copying: Free distribution.
CD-ROM: Prime Time Freeware for AI, Issue 1-1
Author(s): Richard Acuff
Contact: Richard Acuff
Stanford KSL
701 Welch Road, Bldg. C
Stanford, CA 94305
Keywords:
Authors!Acuff, Benchmarks!Lisp, Lisp!Benchmarks
References: ?
Last Web update on Mon Feb 13 10:29:11 1995
AI.Repository@cs.cmu.edu