
Lazy v. Eager Release Consistency

 

This section compares our lazy release consistency protocol (presented in Section 2) to an eager release consistency protocol like the one implemented in DASH [17]. The performance of a sequentially consistent directory-based protocol is also presented for comparison. Both relaxed consistency protocols use a 4-entry write buffer that allows reads to bypass writes and coalesces writes to the same cache line. The eager protocol uses a write-back policy, while the lazy protocol uses write-through with a 16-entry coalescing buffer placed between the cache and the memory system.
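To make the coalescing behavior concrete, the following C sketch (our own illustration, not simulator code; the line size, the eviction policy, and the helper memory_write_line are assumptions) shows the kind of logic both buffers rely on: a write to a line that already has an entry is merged into that entry, a read checks the buffer before bypassing the queued writes, and a full buffer drains an entry to the next level of the memory system.

    #include <stdint.h>
    #include <string.h>

    #define WBUF_ENTRIES 16   /* 16-entry coalescing buffer from the text; the write buffer uses 4 */
    #define LINE_SIZE    64   /* assumed cache-line size in bytes                                  */

    typedef struct {
        int      valid;
        uint64_t line;                /* cache-line number     */
        uint8_t  data[LINE_SIZE];     /* coalesced write data  */
        uint8_t  dirty[LINE_SIZE];    /* per-byte dirty flags  */
    } wbuf_entry;

    static wbuf_entry wbuf[WBUF_ENTRIES];

    /* Hypothetical hook: push one coalesced entry down to the memory
     * system (the write-through step).                                */
    extern void memory_write_line(uint64_t line, const uint8_t *data,
                                  const uint8_t *dirty);

    /* Record a write of len bytes (assumed not to cross a line boundary).
     * Writes to a line already buffered are merged into the same entry.  */
    void wbuf_insert(uint64_t addr, const uint8_t *bytes, unsigned len)
    {
        uint64_t line = addr / LINE_SIZE;
        unsigned off  = addr % LINE_SIZE;
        int      slot = -1;

        for (int i = 0; i < WBUF_ENTRIES; i++) {
            if (wbuf[i].valid && wbuf[i].line == line) {
                slot = i;                 /* coalesce into existing entry */
                break;
            }
            if (!wbuf[i].valid && slot < 0)
                slot = i;                 /* remember first free slot     */
        }

        if (slot < 0) {
            /* Buffer full with no matching line: drain the oldest entry
             * (index 0 in this sketch) and reuse it.                     */
            memory_write_line(wbuf[0].line, wbuf[0].data, wbuf[0].dirty);
            memset(&wbuf[0], 0, sizeof wbuf[0]);
            slot = 0;
        }

        wbuf[slot].valid = 1;
        wbuf[slot].line  = line;
        for (unsigned b = 0; b < len; b++) {
            wbuf[slot].data[off + b]  = bytes[b];
            wbuf[slot].dirty[off + b] = 1;
        }
    }

    /* Reads consult the buffer first; on a miss they simply bypass the
     * queued writes and proceed to the cache/memory system as usual.    */
    int wbuf_read_byte(uint64_t addr, uint8_t *out)
    {
        uint64_t line = addr / LINE_SIZE;
        unsigned off  = addr % LINE_SIZE;
        for (int i = 0; i < WBUF_ENTRIES; i++)
            if (wbuf[i].valid && wbuf[i].line == line && wbuf[i].dirty[off]) {
                *out = wbuf[i].data[off];
                return 1;
            }
        return 0;
    }

A real buffer would also be drained at release points; the oldest-entry eviction above is only a placeholder policy.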

Table 2 presents the miss rates of our applications under the different protocols. In all cases the lazy variants exhibit a miss rate that is the same as or lower than that of the eager implementation of release consistency. For the applications with a significant false-sharing component in their miss rate, the lazy protocol reduces misses; for the remaining applications the miss rate is unchanged.

Figure 2 presents the normalized execution time of the different protocols on our application suite. Execution time is normalized with respect to that of the sequentially consistent protocol (the unit line in the graph). The lazy protocol provides a performance advantage on the applications where one is expected, with improvements ranging from 5% to 17%. The largest improvement occurs for mp3d, which has the highest overall miss rate, with false sharing and write misses accounting for a large fraction of it. Barnes-hut's performance also improves by 9% under the lazy protocol, but unlike the remaining programs its benefit comes from a decrease in synchronization wait time. Closer study reveals that this decrease stems from the lazy protocol's better handling of migratory data.

    
Figure 2: Normalized execution time for lazy-release and eager-release consistency on 64 processors

 
Figure 3: Overhead analysis for lazy-release, eager-release, and sequential consistency (left to right) on 64 processors

Blocked LU and locusroute suffer from false sharing, and the lazy nature of the protocol allows them to tolerate it much better than eager release consistency does, resulting in performance benefits of 5% and 13%, respectively. Gauss, on the other hand, has no false sharing and no migratory data, yet still realizes a 9% performance improvement under lazy consistency. We have studied the program and found that the advantage stems from the elimination of 3-hop transactions in the coherence protocol. Sharing in gauss occurs when processors access a newly produced pivot row that is in the dirty state; this access is tightly synchronized and has the potential to generate large amounts of contention. Because the lazy protocol writes the pivot row through to its home node, requests can be satisfied there directly, eliminating the extra forwarding hop and reducing the observed contention, thus improving performance. One could argue that the eager protocol could also adopt a write-through policy and realize the same benefits; however, this would hurt the performance of other applications. For the lazy protocol, write-through is necessary for correctness.
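To make the hop-count argument concrete (h below is an illustrative one-way message latency that we introduce here, not a measured quantity), a read of the dirty pivot row under the eager write-back protocol costs roughly

    t_eager ≈ 3h    (requester -> home -> dirty owner -> requester)

while under the lazy write-through protocol the home node already holds up-to-date data, so

    t_lazy ≈ 2h     (requester -> home -> requester).

Each pivot-row fetch thus saves roughly one message latency, and the saving is multiplied when many processors request the newly produced row at nearly the same time.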

Cholesky and fft have a very small amount of false sharing. Their performance changes little under the lazy protocol: fft runs a little faster; cholesky runs a little slower.

Figure 3 presents a breakdown of aggregate cycles (over all processors) into four categories: cycles spent on CPU processing, cycles spent waiting for read requests to return from memory, cycles lost to write-buffer stalls, and cycles lost to synchronization delays. Costs for each category under each protocol are shown as a percentage of the total cost of the sequentially consistent protocol. The results indicate that the lazy protocol reduces read latency and write-buffer stalls but increases synchronization overhead. For all but one of the programs, the decrease in read latency is sufficient to offset the increase in synchronization time, resulting in a net performance gain.

Two of our applications, mp3d and locusroute, do not obey the release consistency model (they contain unsynchronized references). It is possible that the additional time before a line is invalidated under the lazy protocol could hurt the quality of the solution. To quantify this effect we experimented with two versions of mp3d running natively on our SGI: one uses software caching to mimic the data-propagation behavior of the lazy protocol, while the other captures the behavior of a sequentially consistent protocol. We compared the cumulative (over all particles) velocity vector after 10 time steps. The Y and Z coordinates of the velocity vector differed by less than one tenth of a percent between the two versions, while the X coordinate differed by 6.7%.

We believe that for properly synchronized programs with false sharing, the lazy protocol will provide an important performance advantage. The same is true for programs with data races whose quality of solution is not affected by the additional delay in invalidations. For the remaining programs (which may not be suitable for relaxed consistency models in the first place), the lazy protocol can match the performance of the eager protocol simply by adding fence operations to the code that force the protocol processor to process invalidations at regular intervals.
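As a purely illustrative sketch of that fallback (write_fence and the interval below are hypothetical names we introduce here, not primitives of our protocol), a loop with unsynchronized references could be annotated so that pending invalidations are applied at a bounded interval:

    /* Hypothetical primitive: forces the protocol processor to apply any
     * pending invalidations before execution continues.                  */
    extern void write_fence(void);

    #define FENCE_INTERVAL 64    /* illustrative choice of interval */

    void relax_unsynchronized(double *shared, int n, int iters)
    {
        for (int t = 0; t < iters; t++) {
            /* Racy update: neighboring elements may be written concurrently
             * by other processors without synchronization.                 */
            for (int i = 0; i < n; i++)
                shared[i] = 0.5 * (shared[i] + shared[(i + 1) % n]);

            if (t % FENCE_INTERVAL == 0)
                write_fence();   /* bound how stale remotely written data can get */
        }
    }

Issuing the fence more frequently trades extra protocol-processor work for fresher data, approaching the behavior of the eager protocol.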


