NIPS08 Workshop Announcement
Parallel Implementations of Learning Algorithms:
What Have You Done For Me Lately?
December 13, 2008
Overview:
Interest in parallel hardware, including multicore processors,
specialized hardware, and multi-machine systems, has recently
increased as researchers have looked to scale up their algorithms to
large, complex models and large datasets. In this workshop, a panel of invited
speakers will present results of investigations into hardware concepts
for accelerating a number of different learning and simulation
algorithms. Additional contributions will be presented in poster
spotlights and a poster session at the end of the one-day workshop.
Our intent is to provide a broad survey of the space of hardware
approaches in order to capture the current state of activity in this
venerable domain of study. Approaches to be covered include silicon,
FPGA, and supercomputer architectures, for applications such as
Bayesian network models of large and complex domains, simulations of
cortex and other brain structures, and large-scale probabilistic
algorithms.
Potential participants include researchers interested in accelerating
their algorithms to handle large datasets, and systems designers
providing such hardware solutions. The oral presentations will
include plenty of time for questions and discussion, and the poster
session at the end of the workshop will afford further opportunities
for interaction among workshop participants.
Workshop Organizing Committee:
- Robert Thibadeau, Seagate Research
- Dan Hammerstrom, Portland State University
- David Touretzky, Carnegie Mellon University
- Tom Mitchell, Carnegie Mellon University
Final Program

Afternoon Session

3:30 PM | David Andersen, Carnegie Mellon University
          Using a Fast Array of Wimpy Nodes
4:00 PM | Rajat Raina and Andrew Ng, Stanford University
          Learning Large Deep Belief Networks using Graphics Processors
4:30 PM | Daniel R. Coates, Portland State University;
          Craig Rasmussen and Garret T. Kenyon, Los Alamos National Laboratory
          A Bird's-Eye View of PetaVision, the World's First Petaflop/s
          Neural Simulation (copy of slides)
5:00 PM | Coffee break
5:20 PM | Poster spotlights (4 minutes each):
          Brian Tanner, University of Alberta
            Reinforcement Learning Recordbook <RL@Home>
          Michiel D'Haene, Benjamin Schrauwen, and Dirk Stroobandt,
          Ghent University
            Efficient, Scalable, and Parallel Event-Driven Simulation
            Techniques for Complex Spiking Neuron Models
          Ning-Yi Xu, Jing Yan, Rui Gao, Xiongfei Cai, Zenglin Xia, and
          Feng-Hsiung Hsu, Microsoft Research Asia
            FPGA-based Accelerators for "Learning to Rank" in Web Search Engines
          Hans Peter Graf, Srihari Cadambi, Igor Durdanovic, Venkata Jakkula,
          Murugan Sankardadass, Eric Cosatto, and Srimat Chakradhar,
          NEC Laboratories America
            An FPGA-based Massively Parallel Hardware Accelerator for SVM and CN
5:40 PM | General discussion
6:00 PM | Poster session
6:30 PM | Adjourn