Data races in parallel programs can be difficult to detect precisely, and manual detection often proves unsuccessful.
Task-parallel programming models help reduce programmer-introduced defects by restricting concurrency to fork-join operations.
Typical data race detection algorithms compute the happens-before relation either by tracking the order in which shared accesses occur using vector clocks, or by grouping events into access sets that classify which heap locations are accessed sequentially or in parallel.
Access sets are simple and efficient to compute, and have been shown to be capable of outperforming vector clock approaches in certain use cases.
However, they are limited to fork-join and similar structures, and support neither arbitrary thread synchronization nor mutual exclusion.
Vector clock approaches, in turn, scale poorly to programs with many threads and many shared interactions, rendering them inefficient in many cases.
This work combines the simplicity of access sets with the generality of vector clocks by grouping heap accesses into access sets and attaching vector clocks to those groupings.
By combining these two approaches, access sets can be applied more generally, supporting programs that contain mutual exclusion.
Additionally, entire blocks of accesses can be ordered with respect to each other rather than individual accesses, yielding a much more efficient data race detection algorithm.
This novel algorithm, ZipperOTF, is compared to the Computation Graph algorithm (an access set approach) and to FastTrack (a vector clock approach), both in empirical results and in time and space complexity.
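The central idea above — attaching a vector clock to an entire access-set grouping and comparing whole blocks rather than individual accesses — can be sketched as follows. This is a minimal illustrative sketch, not the actual ZipperOTF implementation: the `Block` structure, the read/write sets, and the pairwise race check are all simplifying assumptions made for exposition.

```python
from dataclasses import dataclass, field

def vc_leq(a: dict, b: dict) -> bool:
    """True if vector clock a happens-before (or equals) b, componentwise."""
    return all(a.get(t, 0) <= b.get(t, 0) for t in set(a) | set(b))

@dataclass
class Block:
    """A group of heap accesses sharing a single vector clock (hypothetical).

    Instead of one clock per access, the whole block carries one clock,
    and the accesses inside it are summarized as read/write access sets.
    """
    vc: dict                               # thread id -> clock value
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)

def concurrent(b1: Block, b2: Block) -> bool:
    """Two blocks are race candidates only if neither is ordered before the other."""
    return not vc_leq(b1.vc, b2.vc) and not vc_leq(b2.vc, b1.vc)

def block_races(b1: Block, b2: Block) -> set:
    """Locations with conflicting (write/write or read/write) accesses
    between two concurrent blocks; empty if the blocks are ordered."""
    if not concurrent(b1, b2):
        return set()
    return (b1.writes & b2.writes) | (b1.writes & b2.reads) | (b1.reads & b2.writes)

# Example: b1 and b2 are unordered, so their conflicting access to 'x' races;
# b3's clock dominates b1's, so the pair is ordered and reports nothing.
b1 = Block(vc={"t1": 1}, writes={"x"})
b2 = Block(vc={"t2": 1}, reads={"x"}, writes={"y"})
b3 = Block(vc={"t1": 1, "t2": 2}, writes={"x"})
```

One vector clock comparison per pair of blocks replaces one comparison per pair of accesses, which is the source of the efficiency claim in the abstract.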
Computer Science Department
Brigham Young University
3361 TMCB PO Box 26576
Provo, Utah 84602