- Thomas M. Stricker
- Graduate Student, Computer Science
- Degrees held:
- 1985 Propaedeutical Deg. in Comp.Sci. & Comp. Eng., Swiss Federal Inst. of
Technology (ETHZ), Zurich, Switzerland.
- 1988 Graduate Deg. in Comp.Sci. & Comp. Eng., Swiss Federal Inst. of
Technology (ETHZ), Zurich, Switzerland.
- 1991 M.Sc., Carnegie Mellon University, Pittsburgh, PA, USA.
- Entered program:
- Fall 1989
- Thesis Advisor:
- Prof. Thomas Gross
- Research keywords:
- Software and hardware architecture of communication systems in
massively parallel computers. Applied parallel algorithms. Run-time
support and OS for massively parallel computers.
- Description of current research:
- My major interest focuses on communication systems for supercomputers,
in particular those for massively parallel machines. Powerful
microprocessors have made it easy to concentrate tremendous computation
power in a very small space, but at the same time they have made it much
more difficult to supply the arithmetic units with enough operand data.
Since a computation is distributed among hundreds of processors
in such a supercomputer, extremely powerful networks are needed for
transfers fast enough to keep up with the computation.
Several studies have pointed out that simply increasing the throughput and
decreasing the latency of the network hardware is not sufficient to
solve the problem of data transfers in supercomputers. Many types of
overhead remain in handling the data appropriately before and after
transfers through the network. In fast networks it becomes increasingly
difficult to incorporate the transferred data seamlessly into a running
computation. This overhead creates a critical performance bottleneck in
most scientific application programs running on massively parallel
computers.
My research goal is to investigate different communication styles and
determine their suitability for different parallel programming methods
and their performance limits. The programming methods considered include
a wide spectrum of paradigms, from manually coded systolic algorithms to
automatically generated parallel FORTRAN programs, with an emphasis
on recent programming tools that automatically map programs to parallel
systems.
The specific communication method I am currently looking into is
"message passing," where every node can transfer a message to any other
node at any given time. This communication style has many advantages
with respect to flexibility, but it is particularly difficult to
implement when high communication performance is required. Based on my
observations, the synchronization and the data-transfer semantics of
message passing are best handled separately. For building research
prototypes I am relying in part on Carnegie Mellon's iWarp machines, but I am
also working on porting my ideas to several other supercomputing
platforms. I expect to conclude this investigation in 1995 and publish
my results as a Ph.D. thesis.
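The idea of handling synchronization separately from data transfer can be
sketched as follows. This is a hypothetical illustration using Python
threads, not the actual iWarp implementation: the receiver pre-posts a
buffer, the sender deposits data directly into it (the data-transfer
half), and a distinct notification event (the synchronization half) tells
the receiver when the data is ready.

```python
import threading

# Hypothetical sketch: decoupling data transfer from synchronization.
# The receiver pre-posts a buffer; the sender writes operands directly
# into it (the "data transfer" half), then raises a separate event
# (the "synchronization" half) so the receiver knows when to proceed.

class Channel:
    def __init__(self, size):
        self.buffer = [None] * size     # pre-posted receive buffer
        self.ready = threading.Event()  # synchronization, kept separate

    def put(self, data):
        # Data transfer: deposit directly into the receiver's buffer.
        self.buffer[:len(data)] = data
        # Synchronization: signal completion as a distinct step.
        self.ready.set()

    def get(self):
        # The receiver blocks only on the synchronization primitive,
        # never on the transfer mechanism itself.
        self.ready.wait()
        return self.buffer

channel = Channel(4)
sender = threading.Thread(target=channel.put, args=([1, 2, 3, 4],))
sender.start()
result = channel.get()
sender.join()
print(result)  # [1, 2, 3, 4]
```

Because the event is independent of the buffer, the two halves can be
tuned or replaced separately, which is the point of treating them as
distinct semantics.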
tomstr@cs.cmu.edu (last updated Jan 28, 1994)