-
Soap and Sewers and Science and Software
First Name:
Last Name:
Time:
Abstract
Medical science made major advances in Europe in the nineteenth century, and so did the quality of European life. It is arguable, though, that the increased quality of life had more to do with the diligent application of soap and sewers than with great scientific breakthroughs. Charles Leiserson and his colleagues have made major advances in the science of computing that have revolutionized the practice of the field. As a birthday gift to Charles, this talk offers an area in need of the kind of definition, insight, and solutions that have characterized his career for the past four decades. A performance bug is a minor glitch that does not alter the correctness of a program but does cause the program to consume excessive resources. This talk surveys several such bugs, and proposes some root causes and a few simple techniques for avoiding them and finding those that slip through. I offer this as an important challenge within the field of performance engineering that Charles has so masterfully shaped.
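A minimal illustration of the kind of bug meant here (my own hypothetical C example, not one taken from the talk): both functions below return the correct count, but the first re-scans the string on every loop iteration, quietly turning a linear pass into a quadratic one.

#include <string.h>

/* Count occurrences of c in the string s. */
size_t count_slow(const char *s, char c) {
    size_t n = 0;
    for (size_t i = 0; i < strlen(s); i++)   /* strlen re-runs each iteration: O(len^2) total */
        if (s[i] == c)
            n++;
    return n;
}

size_t count_fast(const char *s, char c) {
    size_t n = 0;
    size_t len = strlen(s);                  /* hoisted out of the loop: computed once */
    for (size_t i = 0; i < len; i++)
        if (s[i] == c)
            n++;
    return n;
}

Nothing about the program's output changes; only its resource consumption does, which is exactly what makes such bugs easy to miss.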
Presenter Bio
Presenter Video
-
Abstraction: From VLSI to Reducers
First Name:
Last Name:
Time:
Abstract
Presenter Bio
Presenter Video
-
A Universal Approach to Data Center Network Design
First Name:
Last Name:
Time:
Abstract
This talk proposes an approach to the design of large-scale general-purpose data center networks based on the notions of volume and area universality introduced by Leiserson in the 1980s in the context of VLSI design. In particular, we suggest that the principal goal of the network designer should be to build a single network that is provably competitive, for any application, with any network that can be built for the same amount of money. We illustrate our approach by walking through the design of a hierarchical data center network using the various networking components commercially available today. Joint work with Aditya Akella, Theo Benson, Bala Chandrasekaran, Cheng Huang, and David Maltz.
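Stated schematically (a paraphrase of the goal above, not the precise theorem of the talk), a network N built for a given cost is universal with overhead f if, for every application A and every network N' that can be built for the same cost,

    T_N(A) <= f * T_{N'}(A),

where T_X(A) denotes the running time of application A on network X. In Leiserson's classical area-universal fat-tree result, the analogous overhead is polylogarithmic.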
Presenter Bio
Presenter Video
-
Energy-Recycling Computers
First Name:
Last Name:
Time:
Abstract
About half a century ago, theoretical physicists exploring the thermodynamics of computation suggested that energy-recycling computers have the potential to operate with higher energy efficiency than conventional computer designs. Until recently, however, this theoretical potential had remained largely unrealized in practice. In this talk, I will give a brief overview of previous research in energy-recycling computing. I will then highlight the work of my research group in this area. Through the successful demonstration of GHz-speed silicon prototypes, this work has led to the first-ever deployment of energy recycling for energy-efficient operation in a high-volume commercial microprocessor, AMD’s 4 GHz “Piledriver” processor core.
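The underlying physical intuition, stated informally (a standard back-of-the-envelope argument, not a result from the talk): abruptly switching a node of capacitance C through a voltage swing V dissipates roughly half the supplied energy no matter how fast the switch is, whereas ramping the charge slowly through an effective resistance R over a time T much longer than RC dissipates far less, and the charge itself can then be recovered rather than dumped to ground:

    E_abrupt = (1/2) C V^2,        E_ramp ≈ (RC / T) · C V^2    for T >> RC.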
Presenter Bio
Presenter Video
-
Supercomputing Technologies
First Name:
Last Name:
Time:
Abstract
Presenter Bio
Presenter Video
-
Cache-Oblivious Algorithms
First Name:
Last Name:
Time:
Abstract
Cache-oblivious algorithms use the memory hierarchy asymptotically optimally (under mild assumptions about the behavior of caches). In this talk, I present a simple cache-oblivious algorithm for stencil computations, and I briefly survey the history and some of the main results in this field.
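For a taste of the idea, here is a minimal C sketch of a cache-oblivious 1D stencil in the style of the Frigo-Strumpen trapezoidal decomposition (the algorithm presented in the talk may differ in details; the grid size, step count, and three-point averaging kernel are illustrative choices):

#include <stdio.h>

#define N      1024          /* number of spatial points (illustrative) */
#define NSTEPS 1000          /* number of time steps (illustrative) */

static double u[2][N];       /* two time planes, selected by parity of t */

/* Three-point averaging stencil: point x at time t+1 is computed from
 * points x-1, x, x+1 at time t. */
static void kernel(int t, int x) {
    u[(t + 1) % 2][x] =
        (u[t % 2][x - 1] + u[t % 2][x] + u[t % 2][x + 1]) / 3.0;
}

/* Cache-oblivious trapezoidal recursion (after Frigo and Strumpen).
 * The trapezoid spans time [t0, t1); at time t its spatial extent is
 * [x0 + dx0*(t - t0), x1 + dx1*(t - t0)), so dx0 and dx1 are the slopes
 * of its left and right walls. */
static void walk(int t0, int t1, int x0, int dx0, int x1, int dx1) {
    int dt = t1 - t0;
    if (dt == 1) {
        for (int x = x0; x < x1; x++)
            kernel(t0, x);
    } else if (dt > 1) {
        if (2 * (x1 - x0) + (dx1 - dx0) * dt >= 4 * dt) {
            /* Wide trapezoid: cut in space along a line of slope -1;
             * the left piece can be finished before the right piece
             * needs any of its points. */
            int xm = (2 * (x0 + x1) + (2 + dx0 + dx1) * dt) / 4;
            walk(t0, t1, x0, dx0, xm, -1);
            walk(t0, t1, xm, -1, x1, dx1);
        } else {
            /* Tall trapezoid: cut in time, bottom half before top half. */
            int h = dt / 2;
            walk(t0, t0 + h, x0, dx0, x1, dx1);
            walk(t0 + h, t1, x0 + dx0 * h, dx0, x1 + dx1 * h, dx1);
        }
    }
}

int main(void) {
    /* Fixed boundary values at x = 0 and x = N-1; interior starts at 0. */
    u[0][0] = u[1][0] = 1.0;
    u[0][N - 1] = u[1][N - 1] = 1.0;

    walk(0, NSTEPS, 1, 0, N - 1, 0);   /* update the interior points */

    printf("u[%d] = %g\n", N / 2, u[NSTEPS % 2][N / 2]);
    return 0;
}

Note that the recursion never mentions a cache size: wide trapezoids are cut in space and tall ones in time, so at every scale the working set eventually fits in whatever caches happen to be present, which is the essence of cache obliviousness.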
Presenter Bio
Presenter Video
-
Stealing Is Good — If It’s Work
First Name:
Last Name:
Time:
Abstract
While data structures are very commonly used in sequential programs, they are seldom used in designing provably good parallel algorithms. To achieve good parallel performance, a program must allow multiple data-structure operations to proceed concurrently. Although concurrent data structures are commonly used in practice on shared-memory machines, even the most efficient concurrent structures often lack performance theorems guaranteeing linear speedup for the enclosing parallel program. In contrast, parallel batched data structures can provide provable performance guarantees, since processing a batch in parallel is easier than dealing with the arbitrary asynchrony of concurrent accesses. They can limit programmability, however, since restructuring a parallel program to use a batched data structure instead of a concurrent data structure can often be difficult or even infeasible. We present BATCHER, a scheduler that achieves the best of both worlds through the idea of implicit batching, and a corresponding general performance theorem. BATCHER takes as input (1) a dynamically multithreaded program that makes arbitrary parallel accesses to an abstract data type, and (2) an implementation of the abstract data type as a batched data structure that need not cope with concurrent accesses. BATCHER extends a randomized work-stealing scheduler and guarantees provably good performance to parallel algorithms that use these data structures. BATCHER’s performance theorem provides composability; that is, the data structure and the enclosing program can be analyzed independently of each other, and the bounds can be combined to yield the overall asymptotic bounds.
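Not the BATCHER runtime itself, but a minimal C sketch of the batched-data-structure side of this contract: the structure exposes a single process_batch entry point and never has to cope with concurrent calls, because the scheduler promises to hand it at most one batch at a time. The names batched_set, set_op, and process_batch are illustrative, not taken from the paper.

#include <stdlib.h>

typedef struct {
    int *keys;              /* sorted array of member keys */
    int  size, cap;
} batched_set;

typedef enum { OP_INSERT, OP_LOOKUP } op_kind;

typedef struct {
    op_kind kind;
    int     key;
    int     found;          /* output field for OP_LOOKUP */
} set_op;

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

static int contains(const batched_set *s, int key) {
    return bsearch(&key, s->keys, s->size, sizeof(int), cmp_int) != NULL;
}

/* Process a whole batch of operations against the set.  Lookups are
 * answered against the state at the start of the batch; inserts are
 * then merged in a single sort-and-dedup pass.  No locks are needed,
 * because no other batch is in flight while this one runs. */
void process_batch(batched_set *s, set_op *ops, int n) {
    for (int i = 0; i < n; i++)
        if (ops[i].kind == OP_LOOKUP)
            ops[i].found = contains(s, ops[i].key);

    for (int i = 0; i < n; i++) {
        if (ops[i].kind != OP_INSERT)
            continue;
        if (s->size == s->cap) {
            s->cap = s->cap ? 2 * s->cap : 16;
            s->keys = realloc(s->keys, s->cap * sizeof(int));
        }
        s->keys[s->size++] = ops[i].key;
    }
    qsort(s->keys, s->size, sizeof(int), cmp_int);

    int w = 0;                            /* compact out duplicate keys */
    for (int r = 0; r < s->size; r++)
        if (w == 0 || s->keys[r] != s->keys[w - 1])
            s->keys[w++] = s->keys[r];
    s->size = w;
}

Under implicit batching, the enclosing parallel program still just issues individual inserts and lookups; a scheduler in the spirit of BATCHER buffers those calls and invokes process_batch on whole batches, and process_batch itself could be an internally parallel computation.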
Presenter Bio
Presenter Video