July 17th-21st, 2011, Toronto, ON, Canada


Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging (PADTAD 2011), July 17, 2011, Toronto, ON, Canada

PADTAD 2011 – Proceedings




Title Page

It is our great pleasure to welcome you to the 9th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging (PADTAD'11). PADTAD is a workshop that brings together researchers from academia and practitioners from industry to promote the development of techniques and tools that aid in the testing, analysis, and debugging of multi-threaded, parallel, and distributed software. The quest to exploit concurrency at all levels, from hardware through applications, has made it imperative that researchers develop robust methods for testing, analyzing, and debugging concurrent systems. Among the many forums where researchers assemble to discuss common issues and refine solutions, PADTAD is unique in that it intersperses technical papers with in-depth discussion.

This year we are fortunate to receive generous support from Intel and IBM Research, which greatly facilitates the organization of PADTAD 2011. We are especially thankful to the Program Committee and the external reviewers for their very hard work on short notice, and to the publisher and ACM for their excellent support of our workshop. Thanks also to Matthew Dwyer, Frank Tip, and Eric Bodden, the chairs of the International Symposium on Software Testing and Analysis 2011 (ISSTA 2011), for hosting the PADTAD workshop and for all their help in embedding PADTAD within ISSTA, resulting in a synergistic collaboration!

We hope that you will find the technical program interesting and that the workshop will provide you with a valuable opportunity to share ideas with other researchers and practitioners from institutions around the world. Happy concurrent system design, testing, analysis, and debugging!

Session 1: Invited Talk

Research in Concurrent Software Testing: A Systematic Review
Simone R. S. Souza, Maria A. S. Brito, Rodolfo A. Silva, Paulo S. L. Souza, and Ed Zaluska
(Universidade de São Paulo, São Carlos, Brazil; University of Southampton, UK)
The current increased demand for distributed applications in domains such as web services and cloud computing has significantly increased interest in concurrent programming. This demand has in turn resulted in new testing methodologies that address the challenges inherent in testing such applications. This paper presents a systematic review of the published research related to concurrent testing approaches, bug classification, and testing tools. A systematic review is a process of collecting, assessing, and interpreting the published papers related to a specific research question, designed to provide a background for further research. The results include information about the research relationships and the research teams working in the different areas of concurrent program testing.

Session 2: Debugging

Deterministic Replay for MCAPI Programs
Mohamed Elwakil and Zijiang Yang
(Western Michigan University, USA)
The Multicore Communications API (MCAPI) is a new message passing API that was released by the Multicore Association. MCAPI provides an interface designed for closely distributed embedded systems with multiple cores on a chip and/or chips on a board. Similar to concurrent programs in other domains, debugging MCAPI programs is a challenging task due to their non-deterministic behavior. In this paper we present a tool that is able to deterministically replay the executions of MCAPI programs, which provides valuable insight for MCAPI developers in case of failure.
Java Replay for Dependence-based Debugging
Jan Lönnberg, Mordechai Ben-Ari, and Lauri Malmi
(Aalto University, Finland; Weizmann Institute of Science, Israel)
In this article, we present a system intended to help students understand and debug concurrent Java programs. The system instruments Java classes to produce execution traces. These traces can then be used to construct a dynamic dependence graph showing the interactions between the different operations performed in the program. These interactions are used as the basis for an interactive visualisation that can be used to explore the execution of a program and trace incorrect program behaviour back from a symptom to the execution of incorrect code.
Practical Verification of High-Level Dataraces in Transactional Memory Programs
Vasco Pessanha, Ricardo J. Dias, João M. Lourenço, Eitan Farchi, and Diogo Sousa
(Universidade Nova de Lisboa, Portugal; IBM Research Haifa, Israel)
In this paper we present MoTH, a tool that uses static analysis to enable the automatic verification of concurrency anomalies in Transactional Memory Java programs. MoTH currently detects high-level dataraces and stale-value errors, and it is extensible by plugging in sensors, each implementing an anomaly-detection algorithm. We validate and benchmark MoTH by applying it to a set of well-known concurrent buggy programs and by closely comparing the results with those of similar tools. The results achieved so far are very promising, yielding good accuracy while triggering only a very limited number of false warnings.
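The class of anomaly this paper targets can be illustrated with a generic Java sketch (not taken from the paper, and using synchronized methods in place of the paper's memory transactions): each operation below is individually atomic, yet their check-then-act composition is not, which is precisely a high-level datarace acting on a stale value.

```java
// Hypothetical example of a high-level datarace: every method is atomic,
// but withdrawIfPossibleBroken composes two atomic steps non-atomically.
public class Account {
    private int balance = 100;

    public synchronized int getBalance() { return balance; }
    public synchronized void withdraw(int amount) { balance -= amount; }

    // Broken: another thread may withdraw between getBalance() and
    // withdraw(), so the checked balance is a stale value and the
    // account can go negative despite the guard.
    public void withdrawIfPossibleBroken(int amount) {
        if (getBalance() >= amount) {
            withdraw(amount);
        }
    }

    // Fixed: widen the atomic scope so the check and the update form
    // one unit (in a transactional-memory setting, one atomic block).
    public synchronized void withdrawIfPossible(int amount) {
        if (balance >= amount) {
            balance -= amount;
        }
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.withdrawIfPossible(150); // rejected: insufficient funds
        a.withdrawIfPossible(40);  // accepted
        System.out.println(a.getBalance()); // prints 60
    }
}
```

A detector such as the one described would flag the broken variant statically, without needing to observe the unlucky interleaving at runtime.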

Session 3: Design for Correctness

Refactoring Java Programs using Concurrent Libraries
Kazuaki Ishizaki, Shahrokh Daijavad, and Toshio Nakatani
(IBM Research Tokyo, Japan; IBM Research Watson, USA)
Multithreaded programming is becoming ever more important for exploiting the capabilities of multicore processors. Versions of Java prior to version 5 provide only the synchronized construct as a consistency primitive, which causes a performance scalability problem on multicore machines. Therefore, Java 5 added the java.util.concurrent package to reduce lock contention. Programmers must manually rewrite their existing code to use this package. There are two typical rewriting methods. One is to replace an operation on a variable within a synchronized block with an atomic, lock-free version. The other is to replace a sequential class with its concurrent version. The conventional rewriting approach has three deficiencies. First, transformations may change the behavior of a program. Second, modifications that should be rewritten may be missed. Third, the two different rewriting techniques are applied individually to each code fragment, even within the same method. This paper describes our refactoring algorithms that address these three problems as they rewrite Java code for scalable performance. We use inter-procedural pointer analysis and consistency tests among the candidate code fragments.
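The two rewriting patterns described here can be sketched as follows (a hand-written illustration of the target transformations, not the paper's automated algorithm; all class and method names are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class RefactoringSketch {
    // Before (pattern 1): a synchronized block guarding a plain int.
    static final Object lock = new Object();
    static int counter = 0;
    static void incrementLocked() {
        synchronized (lock) { counter++; }
    }

    // After (pattern 1): the same operation as a lock-free atomic update.
    static final AtomicInteger atomicCounter = new AtomicInteger(0);
    static void incrementAtomic() {
        atomicCounter.incrementAndGet();
    }

    // After (pattern 2): a sequential HashMap guarded by external locking
    // is replaced by its concurrent counterpart from java.util.concurrent.
    static final Map<String, Integer> hits = new ConcurrentHashMap<>();
    static void recordHit(String key) {
        hits.merge(key, 1, Integer::sum); // atomic read-modify-write
    }

    public static void main(String[] args) {
        incrementLocked();
        incrementAtomic();
        recordHit("x");
        recordHit("x");
        System.out.println(counter + " " + atomicCounter.get() + " " + hits.get("x"));
    }
}
```

The behavior-preservation concern the paper raises is visible even here: the atomic rewrite is only safe if every access to the shared variable goes through the same primitive, which is what the inter-procedural analysis must verify.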
Extending a Distributed Loop Network to Tolerate Node Failures
Abdel Aziz Farrag
(Dalhousie University, Canada)
We examine the problem of extending a distributed loop network by adding spare nodes and links so as to make it more fault-tolerant. The optimization criterion used in finding a fault-tolerant solution is to reduce the node degree of the overall network. This is important in practice due to the limit on the number of links allowed per node in VLSI design. Our results indicate that the solutions obtained (numerically or analytically) are efficient, i.e., either optimal or nearly optimal.

Session 4: Testing

Executing Association Rule Mining Algorithms under a Grid Computing Environment
Raja Tlili and Yahya Slimani
(Tunis El Manar University, Tunisia)
Grids are now regarded as promising platforms for data- and computation-intensive applications such as data mining. However, exploiting such large-scale computing resources necessitates the development of new distributed algorithms. The major challenge facing developers of distributed data mining algorithms is how to correct the load imbalance that occurs during execution. This load imbalance is due to the dynamic nature of data mining algorithms (the load cannot be predicted before execution) and the heterogeneity of Grid computing systems. In this paper, we propose a dynamic load balancing strategy for distributed association rule mining algorithms in a Grid computing environment. We evaluate the performance of the proposed strategy on Grid'5000, a Grid infrastructure distributed over nine sites in France for research in large-scale parallel and distributed systems.
