
2012 10th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging (PADTAD), July 16, 2012, Minneapolis, MN, USA

PADTAD 2012 – Proceedings



Preface

Title Page


Foreword
It is our pleasure to welcome you to the 10th Workshop on Parallel and Distributed Systems: Testing, Analysis and Debugging (PADTAD 2012)! PADTAD brings together researchers from academia and industry to discuss techniques and systems that aid in the testing, analysis, and debugging of multi-threaded, parallel, and distributed software. The workshop strives to provide a venue where researchers interested in parallel and distributed testing, analysis, and debugging can present technical papers and participate in open discussion of current and future research challenges. The PADTAD 2012 program continues this tradition and includes a keynote address, two technical paper sessions, and an open brainstorming session. This year’s keynote speaker is Dr. Victor Pankratius, a visiting scientist at the Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, USA. The two paper sessions include five technical papers, each rigorously peer reviewed by three members of the program committee. The brainstorming session offers time for small-group discussions and informal presentations, allowing workshop participants to share new ideas and challenges in the area of parallel and distributed testing and debugging.

Committees
Committee listing.

Data Race Detection

On-the-fly Detection of Data Races in OpenMP Programs
Ok-Kyoon Ha, In-Bon Kuh, Guy Martin Tchamgoue, and Yong-Kee Jun
(Gyeongsang National University, South Korea)
OpenMP provides a portable way to achieve high performance, using simple compiler directives to transform a sequential program into a parallel one. Detecting data races in OpenMP programs is important because races may lead to unpredictable results when the programs execute. The representative on-the-fly detection technique, Helgrind+, focuses mainly on reducing false positives. Unfortunately, it remains imprecise and inefficient when applied to large OpenMP programs that use structured fork-join parallelism with a large number of threads. This paper presents a novel approach that efficiently detects apparent data races without false positives in large OpenMP programs. The approach combines an efficient thread labeling scheme, which maintains the logical concurrency of thread segments, with a precise detection protocol that analyzes conflicting accesses to every shared memory location. We implemented this approach on top of the Pin binary instrumentation framework and compared it with Helgrind+. Empirical results on OpenMP benchmarks show that, unlike Helgrind+, our technique detects apparent data races without false positives, while reducing the average runtime overhead to 19% of that of Helgrind+ with a similar amount of space overhead.
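
To make the target defect concrete, the following minimal OpenMP fragment (our own sketch, not an example from the paper) contains an apparent data race: the parallel loop creates concurrent thread segments whose unsynchronized updates to a shared location conflict, which is exactly the kind of access pair an on-the-fly detection protocol instruments.

    // A minimal sketch of an apparent data race in an OpenMP program.
    // Compile with: g++ -fopenmp race.cpp
    #include <cstdio>

    int main() {
        int sum = 0;                      // shared memory location
        #pragma omp parallel for          // structured fork-join parallelism
        for (int i = 0; i < 1000; ++i) {
            sum += i;                     // conflicting, unsynchronized accesses
        }                                 // from concurrent thread segments
        std::printf("sum = %d\n", sum);   // result is unpredictable
        return 0;
    }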

A Static Analysis Tool Using a Three-Step Approach for Data Races in HPC Programs
Yasushi Negishi, Hiroki Murata, Guojing Cong, Hui-Fang Wen, and I-Hsin Chung
(IBM Research, Japan; IBM Research, USA)
Multicore processors are becoming dominant in high performance computing (HPC), so multithreaded programming with OpenMP is becoming key to good performance on such processors, though debugging problems remain. In particular, it is difficult to detect data races among threads because they produce nondeterministic results, which calls for data race detection tools. Because HPC programs tend to run for long periods, detection tools that do not need to run the target programs are strongly preferred. We developed a static program analysis tool that detects data races in OpenMP loops in FORTRAN programs. Programmers can use the tool quickly at compile time, without executing the target program. Because static analysis tools tend to report many false positives, we counted the false positives in some large applications to assess the utility and limits of static analysis tools. We devised a new three-step approach to detect data races, combining existing program analysis methods with a new analysis. We experimented with the NAS parallel benchmarks and two real applications, GTC for plasma physics and GFMC for nuclear physics. Our new analysis reduces the total number of reported candidates in these applications from 97 to 33, and we found 13 previously unknown bugs among the 33 candidates reported by our prototype. The analysis is fast enough for practical use: the analysis time for the NAS parallel benchmarks was shorter than the compilation time (18.5 seconds compared to 33.0 seconds).
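
For illustration, a typical defect that a static loop analysis can flag is a loop-carried dependence in a parallelized loop. The sketch below is ours and uses C++ for consistency with the other examples on this page, although the paper's tool targets FORTRAN programs.

    #include <cstdio>
    #include <vector>

    // Iteration i reads a[i-1], which iteration i-1 writes, so parallelizing
    // the loop introduces a data race. A static tool can report this candidate
    // at compile time, without running the (potentially long-running) program.
    void smooth(std::vector<double>& a) {
        #pragma omp parallel for
        for (long i = 1; i < (long)a.size(); ++i) {
            a[i] = 0.5 * (a[i] + a[i - 1]);  // read of a[i-1] races with its write
        }
    }

    int main() {
        std::vector<double> a(1000, 1.0);
        smooth(a);
        std::printf("a[999] = %f\n", a[999]);
        return 0;
    }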

Using Program Closures to Make an Application Programming Interface (API) Implementation Thread Safe
Eitan Farchi, Itai Segall, João M. Lourenço, and Diogo Sousa
(IBM Research, Israel; Universidade Nova de Lisboa, Portugal)
Consider a set of methods implementing an Application Programming Interface (API) of a given library or program module that is to be used in a multithreaded setting. If those methods were not originally designed to be thread safe, races and deadlocks are to be expected. This work introduces the novel concept of a program closure and describes how it can be applied in a methodology for making the library or module implementation thread safe, by identifying the high-level data races introduced by interleaving the parallel execution of methods from the API. High-level data races result from misspecifying the scope of an atomic block, wrongly splitting it into two or more atomic blocks that share a data dependency.
Roughly speaking, the closure of a program P, clos(P), is obtained by incrementally adding new threads to P in a way that enables the identification of the potential high-level data races that may result from running P in parallel with other programs.
Our model considers the methods implementing the API of a library or program module as concurrent programs and computes and analyses their closure in order to identify high-level data races. These high-level data races are then inspected and removed to make the interface thread safe. We illustrate the application of this methodology with a simple use case.
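
A standard check-then-act pattern illustrates what a high-level data race looks like (our own C++ sketch; the IdRegistry API and the insert_if_absent caller are hypothetical, not the paper's use case). Each method is individually atomic, yet the caller's intent spans both, so the atomic scope has been wrongly split into two blocks sharing a data dependency.

    #include <functional>
    #include <mutex>
    #include <set>
    #include <thread>

    class IdRegistry {                 // API whose methods are individually atomic
        std::mutex m_;
        std::set<int> ids_;
    public:
        bool contains(int id) { std::lock_guard<std::mutex> g(m_); return ids_.count(id) != 0; }
        void insert(int id)   { std::lock_guard<std::mutex> g(m_); ids_.insert(id); }
    };

    // Running this in parallel with itself (as a closure-style analysis would
    // model) exposes the high-level race: both threads can pass the contains()
    // check before either insert() runs, violating "insert only if absent".
    void insert_if_absent(IdRegistry& r, int id) {
        if (!r.contains(id))           // atomic block 1
            r.insert(id);              // atomic block 2, data-dependent on block 1
    }

    int main() {
        IdRegistry r;
        std::thread t1(insert_if_absent, std::ref(r), 42);
        std::thread t2(insert_if_absent, std::ref(r), 42);
        t1.join(); t2.join();
        return 0;
    }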

Testing and Analysis

Using Combinatorial Benchmark Construction to Improve the Assessment of Concurrency Bug Detection Tools
Jeremy S. Bradbury, Itai Segall, Eitan Farchi, Kevin Jalbert, and David Kelk
(University of Ontario Institute of Technology, Canada; IBM Research, Israel)
Many different techniques for testing and analyzing concurrent programs have been proposed in the literature. Currently, it is difficult to assess the fitness of a particular concurrency bug detection method and to compare it with other detection methods, due to a lack of unbiased data representative of the kinds of concurrent programs used in practice. To address this problem we propose a new benchmark of concurrent Java programs constructed using combinatorial test design. In this paper we present our combinatorial model for creating a benchmark, propose a new concurrency benchmark, and discuss the relationship between our new benchmark and existing benchmarks. Specific combinations of the model parameters define different interleaving spaces, thus differentiating between different test tools.
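
The construction idea can be sketched as enumerating points in a parameter space (our illustration with hypothetical parameters; the paper's actual model parameters differ). For brevity this sketch enumerates the full Cartesian product, whereas combinatorial test design would select a covering subset, for example all pairwise interactions. C++ is used for consistency with the other sketches here, although the benchmark itself consists of Java programs.

    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        // Hypothetical model parameters; each combination defines one benchmark
        // program with its own interleaving space for tools to be assessed on.
        std::vector<std::string> bugPattern    = {"atomicity-violation", "deadlock", "order-violation"};
        std::vector<std::string> syncConstruct = {"synchronized-block", "explicit-lock", "volatile"};
        std::vector<int>         threadCount   = {2, 4, 8};

        for (const auto& bug : bugPattern)
            for (const auto& sync : syncConstruct)
                for (int n : threadCount)
                    std::printf("benchmark: %s / %s / %d threads\n",
                                bug.c_str(), sync.c_str(), n);
        return 0;
    }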

Noise-Based Testing and Analysis of Multi-threaded C/C++ Programs on the Binary Level
Jan Fiedor and Tomáš Vojnar
(Brno University of Technology, Czech Republic)
This paper aims at enabling noise-based testing and dynamic analysis of multi-threaded C/C++ programs on the binary level. First, several problems of monitoring multi-threaded C/C++ programs on the binary level are discussed together with their possible solutions. Next, a brief overview of noise injection techniques is provided, along with a proposal to improve them using a fine-grained combination of several noise injection techniques within a single program. The proposed ideas have been implemented as a prototype using the PIN framework for Intel binaries and tested on a set of multi-threaded C/C++ programs. We discuss the experimental evidence justifying the proposed solutions and illustrating the effect of various noise settings on multi-threaded C/C++ programs.
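
The core mechanism can be sketched as follows (our illustration in source form; the paper's tool injects noise at the binary level via PIN, and the function names below are hypothetical): before selected accesses to shared state, a noise generator randomly delays the thread, perturbing interleavings so that rare, buggy schedules are exercised more often.

    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <thread>

    static thread_local std::mt19937 rng{std::random_device{}()};

    // With the given probability, delay the current thread. A fine-grained
    // scheme could vary the noise type and strength per program location.
    void maybe_inject_noise(int frequency_percent) {
        std::uniform_int_distribution<int> d(0, 99);
        if (d(rng) < frequency_percent)
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }

    int counter = 0;  // shared state with a deliberate data race

    void worker() {
        for (int i = 0; i < 100; ++i) {
            maybe_inject_noise(10);  // noise placed before the racy access
            ++counter;
        }
    }

    int main() {
        std::thread t1(worker), t2(worker);
        t1.join(); t2.join();
        std::printf("counter = %d (expected 200; noise makes lost updates likelier)\n", counter);
        return 0;
    }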
