Acceptance Remarks

2006 Edsger W. Dijkstra Prize in Distributed Computing
John Mellor-Crummey and Michael L. Scott
ACM Symposium on Principles of Distributed Computing
Denver, Colorado, 25 July 2006

John began the presentation by reviewing the origins of the MCS lock, including the historical context and the actual conception during a talk at ASPLOS III.  He spoke from slides.

Michael continued with follow-on work and speculation on future directions. He spoke from a prepared text, which follows.


Let me begin by echoing John’s words of thanks, both to the selection committee and to the wider PODC community.  As most of you know, I’m mainly a Systems person.  I have enormous respect for the work that’s published here, and I’ve drawn a lot of inspiration from it.  But while I’ve published a bit at PODC and DISC myself, I still feel mostly like I’m standing on the fringe of this community:  very few of my papers contain any proofs.  It’s humbling to have my name added to such an incredible list of past award recipients. 

As John has explained, we got into the area of synchronization pretty much by accident.  The work I did as a graduate student and an assistant professor was in parallel and distributed languages and operating systems.  Even after our ’91 TOCS paper I thought of synchronization as sort of a hobby I did on the side.  But every time I thought I was done working in the area another interesting topic would come along, until today I find that synchronization and concurrency, broadly defined, are the core of what I do. 

John and I benefited enormously, I think, from being in the right place at the right time.  Perhaps the most important thing we did was to articulate the issues in scalable synchronization, and put them in a paper that was largely tutorial in style. 

John has talked some about historical context and background.  I thought I’d say a bit about follow-on work.  Some of this has come from my own group.  I’m particularly pleased with our work on adding timeout and preemption tolerance to queue locks, the capstone of which appeared at HiPC just this past December [SLIDE].

Two of my favorite follow-ons, however, were devised by other groups, and are less widely known than I think they ought to be; if you’ll permit me, I’d like to make sure you’ve heard of them.  The first, which I’ve taken to calling the CLH lock, appeared in one of the graphs John showed you.  It was devised a year or so after the MCS lock, by Travis Craig at the University of Washington and, independently, Anders Landin and Erik Hagersten at Uppsala.  It’s a slick, elegant lock that works particularly well on machines with coherently cached global memory.  The queue in the MCS lock, as you may know, is linked from head to tail [SLIDE], with a two-step insertion process [SLIDES].  Each thread also knows the location of its predecessor’s node, though it doesn’t have to write this down [SLIDE].  The CLH lock uses only these implicit forward references [SLIDE].  Each thread then spins on the node provided by its predecessor, rather than its own queue node.  If remote locations cannot be cached (as, say, on a Cray machine), then MCS is the clear choice.  With coherent caching, however, the choice between the two locks depends on architectural constants, and CLH is often better.  In the E25K results that John showed you, the two were evenly matched. 
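For concreteness, a minimal sketch of the CLH protocol in C11-style atomics might look like this (the names and layout are mine, for illustration, not from the original papers):

/* Rough CLH lock sketch; names are illustrative, not from the original papers. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

typedef struct clh_node {
    _Atomic bool locked;              /* true while the owner holds or is waiting for the lock */
} clh_node;

typedef struct clh_lock {
    _Atomic(clh_node *) tail;         /* most recently enqueued node */
} clh_lock;

void clh_init(clh_lock *L)
{
    clh_node *dummy = malloc(sizeof *dummy);
    atomic_init(&dummy->locked, false);
    atomic_init(&L->tail, dummy);
}

/* Acquire returns the predecessor's node, which the caller adopts
   as its own node for its next acquisition. */
clh_node *clh_acquire(clh_lock *L, clh_node *my)
{
    atomic_store(&my->locked, true);
    clh_node *pred = atomic_exchange(&L->tail, my);   /* implicit link to the predecessor */
    while (atomic_load(&pred->locked))                /* spin on the predecessor's node */
        ;
    return pred;
}

void clh_release(clh_node *my)
{
    atomic_store(&my->locked, false);                 /* the successor, if any, sees this */
}

/* Typical use: each thread starts with one node n of its own, then
       clh_node *pred = clh_acquire(&L, n);
       ... critical section ...
       clh_release(n);
       n = pred;      // take over the predecessor's node next time
*/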

Unfortunately, both the MCS lock and the CLH lock require a thread to pass a queue node as an explicit parameter to the acquire and release routines.  This non-standard interface is an obstacle to straightforward replacement of traditional spin locks.  It can be eliminated by allocating queue nodes dynamically, but this dramatically increases overhead.  The second algorithm to which I’d draw your attention is a very clever modification of the MCS lock, due to Auslander, Edelsohn, Krieger, Rosenburg, and Wisniewski of IBM Research.  Their key insight is that an MCS queue node is really needed only during lock acquisition.  Thereafter it serves simply to hold a pointer to the queue node of the first waiting thread, if any.  The IBM modification puts this pointer in the lock itself [SLIDE].  If there are no waiting threads, the lock tail pointer points to this extra field, so it can be filled in by a newly arriving thread with no special-casing required [SLIDE].  Unfortunately, this modification has never been published in an accessible forum, though IBM applied for a patent about 4 years ago. 
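To make the interface issue concrete, here is a similarly rough C11-style sketch of the standard MCS acquire and release (again, the names are illustrative, not the IBM variant); note that both routines take the caller’s queue node as an explicit argument, which is exactly what the IBM modification manages to hide:

/* Rough standard-MCS sketch; names are illustrative, not from the original paper. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct mcs_node {
    _Atomic(struct mcs_node *) next;
    _Atomic bool locked;
} mcs_node;

typedef struct mcs_lock {
    _Atomic(mcs_node *) tail;          /* NULL when the lock is free */
} mcs_lock;

void mcs_acquire(mcs_lock *L, mcs_node *my)
{
    atomic_store(&my->next, (mcs_node *) NULL);
    atomic_store(&my->locked, true);
    mcs_node *pred = atomic_exchange(&L->tail, my);   /* step 1: swap ourselves onto the tail */
    if (pred != NULL) {
        atomic_store(&pred->next, my);                /* step 2: link in behind the predecessor */
        while (atomic_load(&my->locked))              /* spin on our own node */
            ;
    }
}

void mcs_release(mcs_lock *L, mcs_node *my)
{
    mcs_node *succ = atomic_load(&my->next);
    if (succ == NULL) {
        mcs_node *expected = my;                      /* no known successor: try to empty the queue */
        if (atomic_compare_exchange_strong(&L->tail, &expected, (mcs_node *) NULL))
            return;
        while ((succ = atomic_load(&my->next)) == NULL)   /* wait for a late step 2 above */
            ;
    }
    atomic_store(&succ->locked, false);               /* hand the lock to the successor */
}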

As an aside, I worry about software patents, and have struggled with the issue in my group.  As concurrency moves into the commercial mainstream there will be more and more opportunities to patent the work we do in this community.  Those of us who work in industry have little choice but to pursue such patents, given current legal conventions.  Those of us in academia, however, should perhaps be a bit more discriminating:  pursuing a patent may actually push people away from our ideas and toward suboptimal work-arounds. 

And certainly concurrency IS moving into the commercial mainstream.  Just as the explosion of the Web a decade ago gave new significance to distributed computing, the coming explosion in multicore and multithreaded processors will give new significance to shared-memory concurrency.  There are at least two implications, I think, for work in the PODC community.  First, the next generation of multiprocessors will inevitably be constructed from aggressively multicore chips.  These machines will have much more nonuniform memory access times than we’ve been used to in the past, increasing the importance of hierarchical algorithms and data structures.  For synchronization, nonuniform memory poses a difficult tradeoff between locality and fairness.  Radovic and Hagersten have done some nice work in this area, but more is going to be needed. 

Second, as most of you know, the proliferation of multicore chips means that almost all commercial software will need to be multithreaded.  Since most programmers aren’t up to the task, we need to come up with concurrent programming models that are significantly easier to use.  One possibility is to increase reliance on nonblocking data structure libraries.  This remains a very active research area; I’ve listed here some recent examples that seem particularly practical [SLIDE].

For more general use, I’ve come to believe, like many people, that transactional memory is the most promising option on the horizon.  Folks in the database community have developed an extensive theoretical framework for transactions over the course of 30 years, but not all of this transfers to shared memory.  There are important open questions on issues as basic as appropriate sequential semantics.  Several of these questions are laid out in a paper I presented at TRANSACT last month [SLIDE]; I’d love to see people in the PODC community pick up on some of these. 

All in all, these are incredibly exciting times to be working on distribution and concurrency.  The issues we all find intellectually compelling have become commercially compelling as well, giving us an opportunity to reshape not only high-end servers, but also the desktop machines and the programming systems of tens of millions of users.  I hope you’re all finding the opportunity as much fun as I am!  Thank you again for the incredible award and for the opportunity to be here.