Presenters

  • Soap and Sewers and Science and Software

    First Name:

    Jon Louis

    Last Name:

    Bentley

    Time:

    Abstract

    Medical science made major advances in Europe in the nineteenth century, and so did the quality of European life. It is arguable, though, that the increased quality of life had more to do with the diligent application of soap and sewers than with great scientific breakthroughs. Charles Leiserson and his colleagues have made major advances in the science of computing that have revolutionized the practice of the field. As a birthday gift to Charles, this talk offers an area in need of the kind of definition, insight, and solutions that have characterized his career for the past four decades. A performance bug is a minor glitch that does not alter the correctness of a program but does cause the program to consume excessive resources. This talk surveys several such bugs, and proposes some root causes and a few simple techniques for avoiding them and finding those that slip through. I offer this as an important challenge within the field of performance engineering that Charles has so masterfully shaped.
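    As a concrete illustration of the kind of performance bug the abstract describes (this example is an editorial sketch, not drawn from the talk itself), consider the classic "accidentally quadratic" pattern of building a string by repeated concatenation. The program is correct either way; only its resource consumption differs.

```python
# A classic performance bug: building a string by repeated concatenation.
# Each `+=` may copy the entire accumulated string, so the loop can take
# time quadratic in the total length, while the join-based version makes
# a single linear pass. Both functions produce identical output.

def build_quadratic(parts):
    s = ""
    for p in parts:
        s += p                 # correct, but may copy O(len(s)) bytes per step
    return s

def build_linear(parts):
    return "".join(parts)      # one pass over the total length

parts = ["x"] * 10000
assert build_quadratic(parts) == build_linear(parts)
```

    Because the two versions agree on every input, no correctness test catches the bug; only profiling or an asymptotic argument reveals it, which is exactly why the abstract argues such bugs deserve systematic techniques for detection.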

    Presenter Bio

    Jon Bentley’s research interests include programming techniques, algorithm design, and the design of software tools and interfaces. He has written over two hundred articles on a variety of topics, ranging from the theory of algorithms to software engineering. His books include Writing Efficient Programs (1982) and Programming Pearls (2nd Edition 2000). He received a B.S. from Stanford in 1974 and an M.S. and Ph.D. from the University of North Carolina in 1976, then taught Computer Science at Carnegie Mellon for six years. He joined Bell Labs Research in 1982, and retired in 2001 to join Avaya Labs Research, from which he retired in 2013. He has been a visiting faculty member at West Point and Princeton, and has been a member of teams that have shipped software tools, telephone switches, telephones, and web services.


  • Abstraction: From VLSI to Reducers

    First Name:

    Guy E

    Last Name:

    Blelloch

    Time:

    Abstract
    Presenter Bio

  • A Universal Approach to Data Center Network Design

    First Name:

    Bruce M

    Last Name:

    Maggs

    Time:

    Abstract

    This talk proposes an approach to the design of large-scale general-purpose data center networks based on the notions of volume and area universality introduced by Leiserson in the 1980s in the context of VLSI design. In particular, we suggest that the principal goal of the network designer should be to build a single network that is provably competitive, for any application, with any network that can be built for the same amount of money. We illustrate our approach by walking through the design of a hierarchical data center network using the various networking components commercially available today. Joint work with Aditya Akella, Theo Benson, Bala Chandrasekaran, Cheng Huang, and David Maltz.

    Presenter Bio

    Bruce Maggs received the S.B., S.M., and Ph.D. degrees in computer science from the Massachusetts Institute of Technology in 1985, 1986, and 1989, respectively. His advisor was Charles Leiserson. After spending one year as a Postdoctoral Associate at MIT, he worked as a Research Scientist at NEC Research Institute in Princeton from 1990 to 1993. In 1994, he moved to Carnegie Mellon, where he stayed until joining Duke University in 2009 as a Professor in the Department of Computer Science. While on a two-year leave-of-absence from Carnegie Mellon, Maggs helped to launch Akamai Technologies, serving as its Vice President for Research and Development, before returning to Carnegie Mellon. He retains a part-time role at Akamai as Vice President for Research.

    Maggs’s research focuses on networks for parallel and distributed computing systems. In 1986, he became the first winner (with Charles Leiserson) of the Daniel L. Slotnick Award for Most Original Paper at the International Conference on Parallel Processing, and in 1994 he received an NSF National Young Investigator Award. He was co-chair of the 1993-1994 DIMACS Special Year on Massively Parallel Computation and has served on the steering committees for the ACM Symposium on Parallel Algorithms and Architectures (SPAA) and ACM Internet Measurement Conference (IMC), and on the program committees of numerous ACM conferences including STOC, SODA, PODC, and SIGCOMM.


  • Energy-Recycling Computers

    First Name:

    Marios

    Last Name:

    Papaefthymiou

    Time:

    Abstract

    About half a century ago, theoretical physicists exploring the thermodynamics of computation suggested that energy-recycling computers have the potential to operate with higher energy efficiency than conventional computer designs. Up until recently, however, this theoretical potential had remained largely unrealized in practice. In this talk, I will give a brief overview of previous research in energy-recycling computing. I will then highlight the work of my research group in this area. Through the successful demonstration of GHz-speed silicon prototypes, this work has led to the first-ever deployment of energy-recycling for energy-efficient operation in a high-volume commercial microprocessor, AMD’s 4GHz “Piledriver” processor core.

    Presenter Bio

    Marios Papaefthymiou is Professor of Electrical Engineering and Computer Science and Chair of Computer Science and Engineering at the University of Michigan. He is also Co-Founder and Chief Scientist of Cyclos Semiconductor, a pioneer in design technologies for energy-efficient processors. His research interests are in energy-efficient computing.

    In addition to a number of best paper awards, his work has resulted in practical energy-saving technologies that are currently deployed in millions of chips for high-end laptops and servers. An IEEE Fellow, Papaefthymiou has received faculty recognition awards from Yale College and the Graduate School at Michigan, an ARO Young Investigator Award, an NSF CAREER Award, and IBM Partnership Awards. He holds a BS degree from Caltech, and SM and PhD degrees from MIT.


  • Supercomputing Technologies

    First Name:

    Bradley C

    Last Name:

    Kuszmaul

    Time:

    Abstract
    Presenter Bio

  • Cache-Oblivious Algorithms

    First Name:

    Matteo

    Last Name:

    Frigo

    Time:

    Abstract

    Cache-oblivious algorithms use the memory hierarchy asymptotically optimally (under mild assumptions about the behavior of caches). In this talk, I present a simple cache-oblivious algorithm for stencil computations, and I briefly survey the history and some of the main results in this field.
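    To make the cache-oblivious idea concrete (this editorial sketch illustrates the general divide-and-conquer principle with matrix transposition, not the stencil algorithm from the talk), note that the algorithm never mentions a cache size: it simply recurses on the longer dimension, and once a subproblem fits in any level of the cache, all of its memory traffic stays there.

```python
# A minimal cache-oblivious sketch: transpose a matrix by recursively
# splitting the longer dimension. The code contains no cache parameters;
# at whatever recursion depth a block fits in cache, its accesses become
# cache-resident automatically.

def co_transpose(A, B, i0, i1, j0, j1):
    """Write the transpose of the block A[i0:i1][j0:j1] into B."""
    di, dj = i1 - i0, j1 - j0
    if di + dj <= 4:                       # small base case: transpose directly
        for i in range(i0, i1):
            for j in range(j0, j1):
                B[j][i] = A[i][j]
    elif di >= dj:                         # split the longer dimension
        im = (i0 + i1) // 2
        co_transpose(A, B, i0, im, j0, j1)
        co_transpose(A, B, im, i1, j0, j1)
    else:
        jm = (j0 + j1) // 2
        co_transpose(A, B, i0, i1, j0, jm)
        co_transpose(A, B, i0, i1, jm, j1)

A = [[3 * i + j for j in range(3)] for i in range(4)]   # 4x3 input
B = [[0] * 4 for _ in range(3)]                         # 3x4 output
co_transpose(A, B, 0, 4, 0, 3)
assert B == [[A[i][j] for i in range(4)] for j in range(3)]
```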

    Presenter Bio

    Matteo Frigo is a Principal Engineer at Amazon Web Services in Cambridge, MA. He received his Ph.D. from MIT in 1999, advised by Charles Leiserson. He has contributed to the development of cache-oblivious algorithms, the Cilk multithreaded programming system, and the FFTW Fourier transform software. He has worked on medical devices, software radios, and compilers for exotic machines. He is currently working on a novel distributed storage system.


  • Stealing Is Good — If It’s Work

    First Name:

    Kunal

    Last Name:

    Agrawal

    Time:

    Abstract

    While data structures are very commonly used in sequential programs, they are seldom used in designing provably good parallel algorithms. To provide good performance to a parallel program, we must allow multiple data structure operations to proceed concurrently. Although concurrent data structures are commonly used in practice on shared-memory machines, even the most efficient concurrent structures often lack performance theorems guaranteeing linear speedup for the enclosing parallel program. In contrast, parallel batched data structures can provide provable performance guarantees, since processing a batch in parallel is easier than dealing with the arbitrary asynchrony of concurrent accesses. They can limit programmability, however, since restructuring a parallel program to use a batched data structure instead of a concurrent data structure can often be difficult or even infeasible. We present BATCHER, a scheduler that achieves the best of both worlds through the idea of implicit batching, and a corresponding general performance theorem. BATCHER takes as input (1) a dynamically multithreaded program that makes arbitrary parallel accesses to an abstract data type, and (2) an implementation of the abstract data type as a batched data structure that need not cope with concurrent accesses. BATCHER extends a randomized work-stealing scheduler and guarantees provably good performance to parallel algorithms that use these data structures. BATCHER’s performance theorem provides composability; that is, the data structure and the enclosing program can be analyzed independently from each other, and the bounds can be combined to provide the overall asymptotic bounds.
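    The contrast between concurrent and batched operation can be seen in a toy sequential sketch (an editorial illustration of the batching idea only, not the BATCHER scheduler): rather than applying each insert to a sorted structure as it arrives, buffer the operations and merge the whole sorted batch in a single pass.

```python
# A toy batched data structure: inserts are buffered, then applied as one
# sorted batch via a single merge pass, instead of one at a time. Processing
# a known batch is simpler to reason about than arbitrary concurrent access.

import bisect

class BatchedSortedList:
    def __init__(self):
        self.items = []
        self.pending = []              # operations buffered for the next batch

    def insert(self, x):
        self.pending.append(x)         # record the operation; no work yet

    def flush(self):
        """Process the whole batch at once: sort it, then merge in one pass."""
        batch = sorted(self.pending)
        self.pending = []
        merged, i, j = [], 0, 0
        while i < len(self.items) and j < len(batch):
            if self.items[i] <= batch[j]:
                merged.append(self.items[i]); i += 1
            else:
                merged.append(batch[j]); j += 1
        self.items = merged + self.items[i:] + batch[j:]

    def contains(self, x):
        self.flush()                   # make buffered inserts visible
        k = bisect.bisect_left(self.items, x)
        return k < len(self.items) and self.items[k] == x

s = BatchedSortedList()
for v in [5, 1, 4, 1, 3]:
    s.insert(v)
assert s.contains(3) and not s.contains(2)
assert s.items == [1, 1, 3, 4, 5]
```

    In BATCHER the batches are formed implicitly by the work-stealing scheduler and processed in parallel; this sketch only shows why a batch, once assembled, is an easier object to handle than a stream of asynchronous concurrent operations.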

    Presenter Bio

    Kunal Agrawal is an Assistant Professor at Washington University in St. Louis. She completed her Ph.D. at MIT with Prof. Charles Leiserson in 2009. She is interested in various topics related to parallel computing, including scheduling, parallel applications and data structures, synchronization mechanisms, parallel real-time systems, and transactional memory.