Stealing Is Good — If It’s Work

While data structures are very commonly used in sequential programs, they are seldom used in designing provably good parallel algorithms. To achieve good parallel performance, a parallel program must allow multiple data structure operations to proceed concurrently. Although concurrent data structures are commonly used in practice on shared-memory machines, even the most efficient concurrent structures often lack performance theorems guaranteeing linear speedup for the enclosing parallel program. In contrast, parallel batched data structures can provide provable performance guarantees, since processing a batch in parallel is easier than dealing with the arbitrary asynchrony of concurrent accesses. They can limit programmability, however, since restructuring a parallel program to use a batched data structure instead of a concurrent data structure can often be difficult or even infeasible. We present BATCHER, a scheduler that achieves the best of both worlds through the idea of implicit batching, and a corresponding general performance theorem. BATCHER takes as input (1) a dynamically multithreaded program that makes arbitrary parallel accesses to an abstract data type, and (2) an implementation of the abstract data type as a batched data structure that need not cope with concurrent accesses. BATCHER extends a randomized work-stealing scheduler and guarantees provably good performance to parallel algorithms that use these data structures. BATCHER's performance theorem provides composability; that is, the data structure and the enclosing program can be analyzed independently of each other, and the bounds can be combined to yield the overall asymptotic bounds.
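To make the implicit-batching idea concrete, here is a minimal sketch in Python of the separation of concerns the abstract describes: the data structure implementation only ever sees whole batches of operations (so it need not cope with concurrent accesses), while a separate batching layer collects operations submitted from many threads. All names here (`BatchedSet`, `ImplicitBatcher`, `submit`, `flush`) are illustrative assumptions, not BATCHER's actual API, and a real work-stealing scheduler would decide when batches are launched.

```python
# Hypothetical sketch of implicit batching: threads submit operations into a
# shared buffer, and the batched data structure processes one batch at a time.
# These class and method names are invented for illustration only.
import threading


class BatchedSet:
    """A set whose implementation handles one batch at a time.

    Because it is never accessed concurrently, it needs no internal
    synchronization; a real batched structure would process the batch
    itself in parallel.
    """

    def __init__(self):
        self._items = set()

    def process_batch(self, batch):
        # Apply a whole batch of (operation, key) pairs and return one
        # result per operation, in submission order.
        results = []
        for op, key in batch:
            if op == "insert":
                self._items.add(key)
                results.append(True)
            elif op == "contains":
                results.append(key in self._items)
        return results


class ImplicitBatcher:
    """Collects operations from many threads and flushes them as one batch.

    This stands in for the scheduler's role: callers never touch the data
    structure directly, so the structure never sees concurrent accesses.
    """

    def __init__(self, ds):
        self._ds = ds
        self._lock = threading.Lock()
        self._pending = []

    def submit(self, op, key):
        with self._lock:
            self._pending.append((op, key))

    def flush(self):
        with self._lock:
            batch, self._pending = self._pending, []
        return self._ds.process_batch(batch)
```

In this toy version the batcher is flushed explicitly; BATCHER's contribution is precisely that batching happens implicitly, with the work-stealing scheduler forming and launching batches so that the programmer keeps writing ordinary parallel data structure calls.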

Kunal Agrawal

Speaker's Biography

Kunal Agrawal is an Assistant Professor at Washington University in St. Louis. She completed her Ph.D. at MIT with Prof. Charles Leiserson in 2009. She is interested in various topics related to parallel computing, including scheduling, parallel applications and data structures, synchronization mechanisms, parallel real-time systems, and transactional memory.

Washington University in St. Louis
Former Ph.D. Student