Mike Rainey
Researcher at Inria, Paris

Theory and Practice of Granularity Control

Abstract:
A classic problem in parallel computing is determining whether to
execute a thread in parallel or sequentially. If small threads are
executed in parallel, the overheads due to thread creation can
overwhelm the benefits of parallelism, resulting in suboptimal
efficiency and performance.  If large threads are executed
sequentially, processors may sit idle, resulting again in suboptimal
efficiency and performance. This granularity problem is especially
important in implicitly parallel languages, where the programmer
expresses all potential for parallelism, leaving it to the system to
exploit parallelism by creating threads as necessary. Although this
problem has long been recognized as important, it is not well
understood, and broadly applicable solutions remain elusive.
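
As a rough illustration of the trade-off (a minimal sketch, not taken
from the talk; the CUTOFF constant and the fib functions here are
hypothetical), the common manual remedy in fork-join code is a
hand-tuned sequential cutoff:

    #include <cstdint>
    #include <future>

    // Hypothetical cutoff: below this problem size, run sequentially so
    // that thread-creation overhead does not swamp the useful work.
    const int64_t CUTOFF = 20;

    int64_t fib_seq(int64_t n) {
      return n < 2 ? n : fib_seq(n - 1) + fib_seq(n - 2);
    }

    // Fork-join Fibonacci: fork one recursive call, keep the other on
    // the current thread, and fall back to sequential code below CUTOFF.
    // Picking CUTOFF is exactly the granularity problem: too small and
    // thread overheads dominate; too large and processors sit idle.
    int64_t fib_par(int64_t n) {
      if (n <= CUTOFF) return fib_seq(n);
      auto left = std::async(std::launch::async, fib_par, n - 1);
      int64_t right = fib_par(n - 2);
      return left.get() + right;
    }

The techniques described in the talk aim to make this kind of manual
tuning unnecessary.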

In this talk, I present results from two of our studies, one that
addresses all fork-join programs and another for a particular
graph-traversal algorithm that falls outside the fork-join model. The
former is a language-based approach for controlling granularity
automatically, and the latter addresses one particularly challenging
algorithm. I will first cover the main results from the former
publication and then go into
more depth on the latter. In both cases, we have established
theoretical bounds showing the effectiveness of our techniques in
realistic cost models, and we have performed experiments to test our
granularity-control techniques in Parallel ML and C++.



Bio:
Mike Rainey is a researcher at Inria in Paris. Previously, he was a
postdoc at the Max Planck Institute for Software Systems in
Kaiserslautern, Germany, where he was supervised by Umut Acar. Before
that, he completed his PhD at the University of Chicago, under the
supervision of John Reppy.












Tuesday, December 15, 2015
2:00 PM
Gates & Hillman Centers 8102

Principles of Programming Seminars