Workshop WODA 2012 – Author Index

Alipour, Mohammad Amin

WODA '12: "Extended Program Invariants: Applications in Testing and Fault Localization"
Mohammad Amin Alipour and Alex Groce (Oregon State University, USA)

Invariants are powerful tools for program analysis and reasoning. Several tools and techniques have been developed to infer invariants of a program. Given a test suite for a program, an invariant detection tool (IDT) extracts (potential) invariants from the program's execution on the test cases of the suite. The resulting invariants contain relations only over variables and constants that are visible to the IDT. IDTs are usually unable to extract invariants about execution features such as taken branches, since programs usually do not have state variables for such features; thus, the IDT has no information from which to infer relations over them. We speculate that invariants about execution features are useful for understanding test suites; we call these invariants extended invariants. In this paper, we discuss potential applications of extended invariants to understanding test suites and to fault localization. We illustrate the usefulness of extended invariants with small examples that use basic-block counts as the execution feature. We believe extended invariants provide useful information about program executions that can be utilized in program analysis and testing.

@InProceedings{WODA12p7, author = {Mohammad Amin Alipour and Alex Groce}, title = {Extended Program Invariants: Applications in Testing and Fault Localization}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {7--11}, doi = {}, year = {2012}}
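
To make the idea concrete, here is a minimal Python sketch of mining one kind of extended invariant, using basic-block execution counts as in the paper's examples. The instrumented function, the counter names, and the single invariant template are illustrative assumptions, not the authors' implementation (which builds on invariant detection tools applied to real programs).

```python
# Minimal sketch of "extended invariants" over an execution feature
# (basic-block execution counts). All names here are hypothetical.

from collections import Counter

def abs_value(x, counts):
    counts["entry"] += 1          # block B0: function entry
    if x < 0:
        counts["negate"] += 1     # block B1: taken only for negatives
        x = -x
    counts["exit"] += 1           # block B2: function exit
    return x

def mine_extended_invariants(test_inputs):
    """Collect block counts per test and report relations that held on
    every execution -- candidate *extended* invariants."""
    observations = []
    for x in test_inputs:
        counts = Counter()
        abs_value(x, counts)
        observations.append(counts)
    # One simple invariant template: count(a) == count(b) on all runs.
    blocks = ["entry", "negate", "exit"]
    for a in blocks:
        for b in blocks:
            if a < b and all(o[a] == o[b] for o in observations):
                print(f"candidate invariant: count({a}) == count({b})")

# A suite with no negative inputs makes count(negate) stay at zero,
# exposing a coverage gap in the suite rather than a program property.
mine_extended_invariants([0, 1, 5, 42])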

Ashraf, Imran

WODA '12: "Communication-Aware HW/SW Co-design for Heterogeneous Multicore Platforms"
Imran Ashraf, S. Arash Ostadzadeh, Roel Meeuws, and Koen Bertels (TU Delft, Netherlands)

QUAD is an open-source profiling toolset and an integral part of the Q2 profiling framework. In this paper, we extend QUAD with the concept of Unique Data Values in the data communication among functions. This feature is important for properly partitioning the application. As a case study, we map a well-known feature-tracker application onto a heterogeneous multicore platform to substantiate the usefulness of the added feature. Experimental results show a speedup of 2.24x when utilizing the new QUAD toolset.

@InProceedings{WODA12p36, author = {Imran Ashraf and S. Arash Ostadzadeh and Roel Meeuws and Koen Bertels}, title = {Communication-Aware HW/SW Co-design for Heterogeneous Multicore Platforms}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {36--41}, doi = {}, year = {2012}}
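
The Unique Data Values idea can be illustrated with a small sketch: profile each producer-consumer pair of functions and count both total transfers and distinct values exchanged. A channel with many transfers but few unique values is cheaper to map across cores than raw byte counts suggest. This is a toy illustration of the concept, not QUAD's actual interface; all class and function names below are invented.

```python
# Hypothetical sketch of counting Unique Data Values on the data
# communication edges between functions (not the QUAD implementation).

from collections import defaultdict

class CommProfile:
    def __init__(self):
        # (producer, consumer) -> [total transfers, set of unique values]
        self.edges = defaultdict(lambda: [0, set()])

    def record(self, producer, consumer, value):
        edge = self.edges[(producer, consumer)]
        edge[0] += 1          # every transfer counts toward volume
        edge[1].add(value)    # but only distinct values count as unique

    def report(self):
        for (p, c), (total, uniq) in self.edges.items():
            print(f"{p} -> {c}: {total} transfers, {len(uniq)} unique values")

profile = CommProfile()
for v in [1, 2, 2, 2, 3, 3]:   # values one function passes to another
    profile.record("track_features", "smooth_image", v)
profile.report()                # 6 transfers, 3 unique values
```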

Bertels, Koen

WODA '12: "Communication-Aware HW/SW Co-design for Heterogeneous Multicore Platforms"
Imran Ashraf, S. Arash Ostadzadeh, Roel Meeuws, and Koen Bertels (TU Delft, Netherlands)

QUAD is an open-source profiling toolset and an integral part of the Q2 profiling framework. In this paper, we extend QUAD with the concept of Unique Data Values in the data communication among functions. This feature is important for properly partitioning the application. As a case study, we map a well-known feature-tracker application onto a heterogeneous multicore platform to substantiate the usefulness of the added feature. Experimental results show a speedup of 2.24x when utilizing the new QUAD toolset.

@InProceedings{WODA12p36, author = {Imran Ashraf and S. Arash Ostadzadeh and Roel Meeuws and Koen Bertels}, title = {Communication-Aware HW/SW Co-design for Heterogeneous Multicore Platforms}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {36--41}, doi = {}, year = {2012}}

Buell, Kevin

WODA '12: "Dynamic Cost Verification for Cloud Applications"
Kevin Buell and James Collofello (Arizona State University, USA)

The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Scientific workflows often involve data-intensive transactions which may be costly. Business and consumer application developers are likely to be particularly sensitive to costs in order to maximize profits. Verification of economic attributes of cloud applications has only been touched on lightly in the literature to date. Possibilities for cost verification of cloud applications include both static and dynamic analysis. We advocate increased attention to economic attributes of cloud applications at every level of software development, and we discuss some measurement-based approaches to cost verification of applications running in the cloud.

@InProceedings{WODA12p18, author = {Kevin Buell and James Collofello}, title = {Dynamic Cost Verification for Cloud Applications}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {18--23}, doi = {}, year = {2012}}
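
A toy example of the measurement-based direction the paper advocates: meter resource consumption at runtime and check it against a budgeted dollar cost. The price table, metering dictionary, and function name are invented for illustration; real deployments would read metered usage from a provider's billing API.

```python
# Hypothetical measurement-based cost check: fail fast when a run's
# metered resource usage exceeds its budget. Rates are made up.

PRICE = {"cpu_seconds": 0.00005, "gb_transferred": 0.09}  # illustrative rates

def verify_cost(usage, budget_usd):
    """Convert metered usage into dollars and assert it is within budget."""
    cost = sum(PRICE[resource] * amount for resource, amount in usage.items())
    assert cost <= budget_usd, f"cost ${cost:.4f} exceeds budget ${budget_usd}"
    return cost

# e.g., one workflow run: 1200 CPU-seconds and 2.5 GB of data egress
print(f"${verify_cost({'cpu_seconds': 1200, 'gb_transferred': 2.5}, 0.50):.4f}")
```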

Collofello, James

WODA '12: "Dynamic Cost Verification for Cloud Applications"
Kevin Buell and James Collofello (Arizona State University, USA)

The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Scientific workflows often involve data-intensive transactions which may be costly. Business and consumer application developers are likely to be particularly sensitive to costs in order to maximize profits. Verification of economic attributes of cloud applications has only been touched on lightly in the literature to date. Possibilities for cost verification of cloud applications include both static and dynamic analysis. We advocate increased attention to economic attributes of cloud applications at every level of software development, and we discuss some measurement-based approaches to cost verification of applications running in the cloud.

@InProceedings{WODA12p18, author = {Kevin Buell and James Collofello}, title = {Dynamic Cost Verification for Cloud Applications}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {18--23}, doi = {}, year = {2012}}

Csallner, Christoph

WODA '12: "Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator"
Ishtiaque Hussain, Christoph Csallner, Mark Grechanik, Chen Fu, Qing Xie, Sangmin Park, Kunal Taneja, and B. M. Mainul Hossain (University of Texas at Arlington, USA; Accenture Technology Labs, USA; University of Illinois at Chicago, USA; Georgia Tech, USA; North Carolina State University, USA)

Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as benchmarks to evaluate different aspects of algorithms and tools. Unfortunately, many of these programs are written by programmers who introduce different biases, and it is very difficult to find programs that can serve as benchmarks with high reproducibility of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools. Our approach uses stochastic parse trees, in which language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated programs. We implemented our tool for Java and applied it to generate benchmarks with which we evaluated different program analysis and testing tools. A major software company also implemented our tool for C++, where a team of developers used it to generate benchmarks that enabled them to reproduce a bug in less than four hours.

@InProceedings{WODA12p1, author = {Ishtiaque Hussain and Christoph Csallner and Mark Grechanik and Chen Fu and Qing Xie and Sangmin Park and Kunal Taneja and B. M. Mainul Hossain}, title = {Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {1--6}, doi = {}, year = {2012}}
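
The stochastic-parse-tree mechanism can be shown in miniature: attach probabilities to grammar production rules and sample a derivation, so rule frequencies in the output track the assigned weights. This toy grammar emits arithmetic expressions rather than the full Java/C++ applications RUGRAT generates; the grammar, weights, and depth cutoff are all invented for illustration.

```python
# Minimal sketch of generation from a probabilistic grammar, in the
# spirit of RUGRAT's stochastic parse trees (not the RUGRAT tool).

import random

GRAMMAR = {
    # nonterminal -> [(probability, production), ...]; weights sum to 1
    "expr": [
        (0.4, ["num"]),
        (0.3, ["expr", "+", "expr"]),
        (0.3, ["(", "expr", "*", "expr", ")"]),
    ],
}

def generate(symbol, depth=0, max_depth=6):
    if symbol == "num":                    # terminal: a random literal
        return str(random.randint(0, 9))
    if symbol not in GRAMMAR:              # terminals like "+" or "("
        return symbol
    rules = GRAMMAR[symbol]
    if depth >= max_depth:                 # cut recursion with the base rule
        rules = rules[:1]
    weights = [p for p, _ in rules]
    _, production = random.choices(rules, weights=weights, k=1)[0]
    return " ".join(generate(s, depth + 1, max_depth) for s in production)

random.seed(1)
print(generate("expr"))   # e.g., "( 2 * 8 ) + 4" -- one random "benchmark"
```

Raising the weight of the recursive rules yields larger, more deeply nested programs, which is how such a generator can steer benchmark characteristics.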

Erwig, Martin

WODA '12: "Finding Common Ground: Choose, Assert, and Assume"
Alex Groce and Martin Erwig (Oregon State University, USA)

At present, the “testing community” is on good speaking terms, but typically lacks a common language for expressing some computational ideas, even in cases where such a language would be both useful and plausible. In particular, a large body of testing systems define a testing problem in the language of the system under test, extended with operations for choosing inputs, asserting properties, and constraining the domain of executions considered. While the underlying algorithms used for “testing” include symbolic execution, explicit-state model checking, machine learning, and “old-fashioned” random testing, there seems to be a common core of expressive need. We propose that the dynamic analysis community could benefit from working with some common syntactic (and to some extent semantic) mechanisms for expressing a body of testing problems. Such a shared language would have immediate practical uses and would make cross-tool comparisons, and research into identifying appropriate tools for different testing activities, easier. We also suspect that considering the more abstract testing problem arising from this minimalist common ground could serve as a basis for thinking about the design of usable embedded domain-specific languages for testing, and might help identify computational patterns that have escaped the notice of the community.

@InProceedings{WODA12p12, author = {Alex Groce and Martin Erwig}, title = {Finding Common Ground: Choose, Assert, and Assume}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {12--17}, doi = {}, year = {2012}}
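
The common core the paper identifies can be sketched as a harness written against three primitives: choose() to pick inputs, assume() to constrain the executions considered, and assert to state properties. Here the primitives are backed by plain random testing, but a symbolic executor or model checker could implement the same interface; the implementation below is illustrative, not the paper's.

```python
# Sketch of a choose/assert/assume harness backed by random testing.

import random

class Assumption(Exception):
    """Raised to discard an execution that violates an assume()."""

def choose(lo, hi):
    return random.randint(lo, hi)          # one backend; could be symbolic

def assume(condition):
    if not condition:
        raise Assumption                   # prune this execution, not a bug

def harness():
    x = choose(-100, 100)
    y = choose(-100, 100)
    assume(y != 0)                         # constrain the execution domain
    q, r = divmod(x, y)
    assert x == q * y + r                  # property under test

for _ in range(1000):                      # driver: plain random testing
    try:
        harness()
    except Assumption:
        continue                           # discarded run, not a failure
print("1000 attempts, no assertion failures")
```

The point of the shared language is that this same harness() could be handed, unchanged, to any tool implementing choose/assume/assert, enabling direct cross-tool comparison.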

Fu, Chen

WODA '12: "Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator"
Ishtiaque Hussain, Christoph Csallner, Mark Grechanik, Chen Fu, Qing Xie, Sangmin Park, Kunal Taneja, and B. M. Mainul Hossain (University of Texas at Arlington, USA; Accenture Technology Labs, USA; University of Illinois at Chicago, USA; Georgia Tech, USA; North Carolina State University, USA)

Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as benchmarks to evaluate different aspects of algorithms and tools. Unfortunately, many of these programs are written by programmers who introduce different biases, and it is very difficult to find programs that can serve as benchmarks with high reproducibility of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools. Our approach uses stochastic parse trees, in which language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated programs. We implemented our tool for Java and applied it to generate benchmarks with which we evaluated different program analysis and testing tools. A major software company also implemented our tool for C++, where a team of developers used it to generate benchmarks that enabled them to reproduce a bug in less than four hours.

@InProceedings{WODA12p1, author = {Ishtiaque Hussain and Christoph Csallner and Mark Grechanik and Chen Fu and Qing Xie and Sangmin Park and Kunal Taneja and B. M. Mainul Hossain}, title = {Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {1--6}, doi = {}, year = {2012}}

Grechanik, Mark

WODA '12: "Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator"
Ishtiaque Hussain, Christoph Csallner, Mark Grechanik, Chen Fu, Qing Xie, Sangmin Park, Kunal Taneja, and B. M. Mainul Hossain (University of Texas at Arlington, USA; Accenture Technology Labs, USA; University of Illinois at Chicago, USA; Georgia Tech, USA; North Carolina State University, USA)

Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as benchmarks to evaluate different aspects of algorithms and tools. Unfortunately, many of these programs are written by programmers who introduce different biases, and it is very difficult to find programs that can serve as benchmarks with high reproducibility of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools. Our approach uses stochastic parse trees, in which language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated programs. We implemented our tool for Java and applied it to generate benchmarks with which we evaluated different program analysis and testing tools. A major software company also implemented our tool for C++, where a team of developers used it to generate benchmarks that enabled them to reproduce a bug in less than four hours.

@InProceedings{WODA12p1, author = {Ishtiaque Hussain and Christoph Csallner and Mark Grechanik and Chen Fu and Qing Xie and Sangmin Park and Kunal Taneja and B. M. Mainul Hossain}, title = {Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {1--6}, doi = {}, year = {2012}}

Groce, Alex

WODA '12: "Finding Common Ground: Choose, Assert, and Assume"
Alex Groce and Martin Erwig (Oregon State University, USA)

At present, the “testing community” is on good speaking terms, but typically lacks a common language for expressing some computational ideas, even in cases where such a language would be both useful and plausible. In particular, a large body of testing systems define a testing problem in the language of the system under test, extended with operations for choosing inputs, asserting properties, and constraining the domain of executions considered. While the underlying algorithms used for “testing” include symbolic execution, explicit-state model checking, machine learning, and “old-fashioned” random testing, there seems to be a common core of expressive need. We propose that the dynamic analysis community could benefit from working with some common syntactic (and to some extent semantic) mechanisms for expressing a body of testing problems. Such a shared language would have immediate practical uses and would make cross-tool comparisons, and research into identifying appropriate tools for different testing activities, easier. We also suspect that considering the more abstract testing problem arising from this minimalist common ground could serve as a basis for thinking about the design of usable embedded domain-specific languages for testing, and might help identify computational patterns that have escaped the notice of the community.

@InProceedings{WODA12p12, author = {Alex Groce and Martin Erwig}, title = {Finding Common Ground: Choose, Assert, and Assume}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {12--17}, doi = {}, year = {2012}}

WODA '12: "Extended Program Invariants: Applications in Testing and Fault Localization"

Mohammad Amin Alipour and Alex Groce (Oregon State University, USA)

Invariants are powerful tools for program analysis and reasoning. Several tools and techniques have been developed to infer invariants of a program. Given a test suite for a program, an invariant detection tool (IDT) extracts (potential) invariants from the program's execution on the test cases of the suite. The resulting invariants contain relations only over variables and constants that are visible to the IDT. IDTs are usually unable to extract invariants about execution features such as taken branches, since programs usually do not have state variables for such features; thus, the IDT has no information from which to infer relations over them. We speculate that invariants about execution features are useful for understanding test suites; we call these invariants extended invariants. In this paper, we discuss potential applications of extended invariants to understanding test suites and to fault localization. We illustrate the usefulness of extended invariants with small examples that use basic-block counts as the execution feature. We believe extended invariants provide useful information about program executions that can be utilized in program analysis and testing.

@InProceedings{WODA12p7, author = {Mohammad Amin Alipour and Alex Groce}, title = {Extended Program Invariants: Applications in Testing and Fault Localization}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {7--11}, doi = {}, year = {2012}}

Hossain, B. M. Mainul

WODA '12: "Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator"
Ishtiaque Hussain, Christoph Csallner, Mark Grechanik, Chen Fu, Qing Xie, Sangmin Park, Kunal Taneja, and B. M. Mainul Hossain (University of Texas at Arlington, USA; Accenture Technology Labs, USA; University of Illinois at Chicago, USA; Georgia Tech, USA; North Carolina State University, USA)

Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as benchmarks to evaluate different aspects of algorithms and tools. Unfortunately, many of these programs are written by programmers who introduce different biases, and it is very difficult to find programs that can serve as benchmarks with high reproducibility of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools. Our approach uses stochastic parse trees, in which language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated programs. We implemented our tool for Java and applied it to generate benchmarks with which we evaluated different program analysis and testing tools. A major software company also implemented our tool for C++, where a team of developers used it to generate benchmarks that enabled them to reproduce a bug in less than four hours.

@InProceedings{WODA12p1, author = {Ishtiaque Hussain and Christoph Csallner and Mark Grechanik and Chen Fu and Qing Xie and Sangmin Park and Kunal Taneja and B. M. Mainul Hossain}, title = {Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {1--6}, doi = {}, year = {2012}}

Hussain, Ishtiaque

WODA '12: "Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator"
Ishtiaque Hussain, Christoph Csallner, Mark Grechanik, Chen Fu, Qing Xie, Sangmin Park, Kunal Taneja, and B. M. Mainul Hossain (University of Texas at Arlington, USA; Accenture Technology Labs, USA; University of Illinois at Chicago, USA; Georgia Tech, USA; North Carolina State University, USA)

Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as benchmarks to evaluate different aspects of algorithms and tools. Unfortunately, many of these programs are written by programmers who introduce different biases, and it is very difficult to find programs that can serve as benchmarks with high reproducibility of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools. Our approach uses stochastic parse trees, in which language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated programs. We implemented our tool for Java and applied it to generate benchmarks with which we evaluated different program analysis and testing tools. A major software company also implemented our tool for C++, where a team of developers used it to generate benchmarks that enabled them to reproduce a bug in less than four hours.

@InProceedings{WODA12p1, author = {Ishtiaque Hussain and Christoph Csallner and Mark Grechanik and Chen Fu and Qing Xie and Sangmin Park and Kunal Taneja and B. M. Mainul Hossain}, title = {Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {1--6}, doi = {}, year = {2012}}

Meeuws, Roel

WODA '12: "Communication-Aware HW/SW Co-design for Heterogeneous Multicore Platforms"
Imran Ashraf, S. Arash Ostadzadeh, Roel Meeuws, and Koen Bertels (TU Delft, Netherlands)

QUAD is an open-source profiling toolset and an integral part of the Q2 profiling framework. In this paper, we extend QUAD with the concept of Unique Data Values in the data communication among functions. This feature is important for properly partitioning the application. As a case study, we map a well-known feature-tracker application onto a heterogeneous multicore platform to substantiate the usefulness of the added feature. Experimental results show a speedup of 2.24x when utilizing the new QUAD toolset.

@InProceedings{WODA12p36, author = {Imran Ashraf and S. Arash Ostadzadeh and Roel Meeuws and Koen Bertels}, title = {Communication-Aware HW/SW Co-design for Heterogeneous Multicore Platforms}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {36--41}, doi = {}, year = {2012}}

Ostadzadeh, S. Arash

WODA '12: "Communication-Aware HW/SW Co-design for Heterogeneous Multicore Platforms"
Imran Ashraf, S. Arash Ostadzadeh, Roel Meeuws, and Koen Bertels (TU Delft, Netherlands)

QUAD is an open-source profiling toolset and an integral part of the Q2 profiling framework. In this paper, we extend QUAD with the concept of Unique Data Values in the data communication among functions. This feature is important for properly partitioning the application. As a case study, we map a well-known feature-tracker application onto a heterogeneous multicore platform to substantiate the usefulness of the added feature. Experimental results show a speedup of 2.24x when utilizing the new QUAD toolset.

@InProceedings{WODA12p36, author = {Imran Ashraf and S. Arash Ostadzadeh and Roel Meeuws and Koen Bertels}, title = {Communication-Aware HW/SW Co-design for Heterogeneous Multicore Platforms}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {36--41}, doi = {}, year = {2012}}

Park, Sangmin

WODA '12: "Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator"
Ishtiaque Hussain, Christoph Csallner, Mark Grechanik, Chen Fu, Qing Xie, Sangmin Park, Kunal Taneja, and B. M. Mainul Hossain (University of Texas at Arlington, USA; Accenture Technology Labs, USA; University of Illinois at Chicago, USA; Georgia Tech, USA; North Carolina State University, USA)

Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as benchmarks to evaluate different aspects of algorithms and tools. Unfortunately, many of these programs are written by programmers who introduce different biases, and it is very difficult to find programs that can serve as benchmarks with high reproducibility of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools. Our approach uses stochastic parse trees, in which language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated programs. We implemented our tool for Java and applied it to generate benchmarks with which we evaluated different program analysis and testing tools. A major software company also implemented our tool for C++, where a team of developers used it to generate benchmarks that enabled them to reproduce a bug in less than four hours.

@InProceedings{WODA12p1, author = {Ishtiaque Hussain and Christoph Csallner and Mark Grechanik and Chen Fu and Qing Xie and Sangmin Park and Kunal Taneja and B. M. Mainul Hossain}, title = {Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {1--6}, doi = {}, year = {2012}}

Rountev, Atanas

WODA '12: "Dynamic Analysis of Inefficiently-Used Containers"
Shengqian Yang, Dacong Yan, Guoqing Xu, and Atanas Rountev (Ohio State University, USA; UC Irvine, USA)

The goal of this work is to identify suspicious usage of containers as an indicator of potential performance inefficiencies. To analyze container-related behavior and performance, we propose a dynamic analysis that tracks and records the flow of element objects to/from container objects. The observed interactions among containers and their elements are captured by a container-element flow graph. This graph is then analyzed by three detectors of potential container inefficiencies, based on certain patterns of suspicious behavior. In a promising initial study, this approach uncovered a number of performance problems in realistic Java applications.

@InProceedings{WODA12p30, author = {Shengqian Yang and Dacong Yan and Guoqing Xu and Atanas Rountev}, title = {Dynamic Analysis of Inefficiently-Used Containers}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {30--35}, doi = {}, year = {2012}}
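
A small sketch of the underlying idea: record element flows into (add) and out of (get) each container, then flag suspicious patterns, such as a write-only container whose elements are never retrieved. The tracked class, the flow table, and the single detector below are illustrative assumptions, not the authors' tool or its detectors.

```python
# Hypothetical sketch of container-element flow tracking plus one
# detector for a suspicious pattern (populated but never read).

from collections import defaultdict

class TrackedList:
    _flows = defaultdict(lambda: {"in": 0, "out": 0})  # container -> edges

    def __init__(self, name):
        self.name, self._items = name, []

    def add(self, item):
        TrackedList._flows[self.name]["in"] += 1       # element flows in
        self._items.append(item)

    def get(self, i):
        TrackedList._flows[self.name]["out"] += 1      # element flows out
        return self._items[i]

def detect_write_only():
    """Containers with inbound flows but no outbound flows are suspects."""
    return [name for name, f in TrackedList._flows.items()
            if f["in"] > 0 and f["out"] == 0]

log = TrackedList("audit_log")
for i in range(100):
    log.add(i)                  # populated but never read back
print(detect_write_only())     # ['audit_log'] -- candidate inefficiency
```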

Taneja, Kunal

WODA '12: "Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator"
Ishtiaque Hussain, Christoph Csallner, Mark Grechanik, Chen Fu, Qing Xie, Sangmin Park, Kunal Taneja, and B. M. Mainul Hossain (University of Texas at Arlington, USA; Accenture Technology Labs, USA; University of Illinois at Chicago, USA; Georgia Tech, USA; North Carolina State University, USA)

Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as benchmarks to evaluate different aspects of algorithms and tools. Unfortunately, many of these programs are written by programmers who introduce different biases, and it is very difficult to find programs that can serve as benchmarks with high reproducibility of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools. Our approach uses stochastic parse trees, in which language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated programs. We implemented our tool for Java and applied it to generate benchmarks with which we evaluated different program analysis and testing tools. A major software company also implemented our tool for C++, where a team of developers used it to generate benchmarks that enabled them to reproduce a bug in less than four hours.

@InProceedings{WODA12p1, author = {Ishtiaque Hussain and Christoph Csallner and Mark Grechanik and Chen Fu and Qing Xie and Sangmin Park and Kunal Taneja and B. M. Mainul Hossain}, title = {Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {1--6}, doi = {}, year = {2012}}

Weyns, Danny

WODA '12: "Towards an Integrated Approach for Validating Qualities of Self-Adaptive Systems"
Danny Weyns (Linnaeus University, Sweden)

Self-adaptation has been widely recognized as an effective approach to deal with the increasing complexity and dynamicity of modern software systems. One major challenge in self-adaptive systems is to provide guarantees about the required runtime qualities, such as performance and reliability. Existing research employs formal methods either to provide guarantees about the design of a self-adaptive system, or to perform runtime analysis supporting adaptations for particular quality goals. Yet, the work products of formalization are not exploited across the different phases of the software life cycle. In this position paper, we argue for an integrated, formally founded approach to validating the required software qualities of self-adaptive systems. This approach integrates three activities: (1) model checking of the behavior of a self-adaptive system during design, (2) model-based testing of the concrete implementation during development, and (3) runtime diagnosis after system deployment. We illustrate the approach with excerpts from an initial study and discuss the research challenges ahead for each activity.

@InProceedings{WODA12p24, author = {Danny Weyns}, title = {Towards an Integrated Approach for Validating Qualities of Self-Adaptive Systems}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {24--29}, doi = {}, year = {2012}}
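
As a toy illustration of activity (3), runtime diagnosis: a deployed monitor observes a quality metric and checks it against the goal validated at design time, triggering adaptation on violation. The metric, window size, threshold, and class name are all invented for illustration and are not from the paper's study.

```python
# Hypothetical runtime-diagnosis monitor for one quality goal
# (average latency), in the spirit of the paper's third activity.

from collections import deque

class LatencyMonitor:
    def __init__(self, goal_ms=200, window=5):
        self.goal_ms = goal_ms
        self.samples = deque(maxlen=window)   # sliding window of observations

    def observe(self, latency_ms):
        self.samples.append(latency_ms)
        avg = sum(self.samples) / len(self.samples)
        if avg > self.goal_ms:                # goal violated at runtime
            self.adapt(avg)

    def adapt(self, avg):
        # In a real system this would trigger the adaptation logic.
        print(f"goal violated (avg {avg:.0f} ms > {self.goal_ms} ms): adapting")

monitor = LatencyMonitor()
for ms in [120, 150, 180, 400, 450]:          # simulated measurements
    monitor.observe(ms)
```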

Xie, Qing

WODA '12: "Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator"
Ishtiaque Hussain, Christoph Csallner, Mark Grechanik, Chen Fu, Qing Xie, Sangmin Park, Kunal Taneja, and B. M. Mainul Hossain (University of Texas at Arlington, USA; Accenture Technology Labs, USA; University of Illinois at Chicago, USA; Georgia Tech, USA; North Carolina State University, USA)

Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as benchmarks to evaluate different aspects of algorithms and tools. Unfortunately, many of these programs are written by programmers who introduce different biases, and it is very difficult to find programs that can serve as benchmarks with high reproducibility of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools. Our approach uses stochastic parse trees, in which language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated programs. We implemented our tool for Java and applied it to generate benchmarks with which we evaluated different program analysis and testing tools. A major software company also implemented our tool for C++, where a team of developers used it to generate benchmarks that enabled them to reproduce a bug in less than four hours.

@InProceedings{WODA12p1, author = {Ishtiaque Hussain and Christoph Csallner and Mark Grechanik and Chen Fu and Qing Xie and Sangmin Park and Kunal Taneja and B. M. Mainul Hossain}, title = {Evaluating Program Analysis and Testing Tools with the RUGRAT Random Benchmark Application Generator}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {1--6}, doi = {}, year = {2012}}

Xu, Guoqing

WODA '12: "Dynamic Analysis of Inefficiently-Used Containers"
Shengqian Yang, Dacong Yan, Guoqing Xu, and Atanas Rountev (Ohio State University, USA; UC Irvine, USA)

The goal of this work is to identify suspicious usage of containers as an indicator of potential performance inefficiencies. To analyze container-related behavior and performance, we propose a dynamic analysis that tracks and records the flow of element objects to/from container objects. The observed interactions among containers and their elements are captured by a container-element flow graph. This graph is then analyzed by three detectors of potential container inefficiencies, based on certain patterns of suspicious behavior. In a promising initial study, this approach uncovered a number of performance problems in realistic Java applications.

@InProceedings{WODA12p30, author = {Shengqian Yang and Dacong Yan and Guoqing Xu and Atanas Rountev}, title = {Dynamic Analysis of Inefficiently-Used Containers}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {30--35}, doi = {}, year = {2012}}

Yan, Dacong

WODA '12: "Dynamic Analysis of Inefficiently-Used Containers"
Shengqian Yang, Dacong Yan, Guoqing Xu, and Atanas Rountev (Ohio State University, USA; UC Irvine, USA)

The goal of this work is to identify suspicious usage of containers as an indicator of potential performance inefficiencies. To analyze container-related behavior and performance, we propose a dynamic analysis that tracks and records the flow of element objects to/from container objects. The observed interactions among containers and their elements are captured by a container-element flow graph. This graph is then analyzed by three detectors of potential container inefficiencies, based on certain patterns of suspicious behavior. In a promising initial study, this approach uncovered a number of performance problems in realistic Java applications.

@InProceedings{WODA12p30, author = {Shengqian Yang and Dacong Yan and Guoqing Xu and Atanas Rountev}, title = {Dynamic Analysis of Inefficiently-Used Containers}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {30--35}, doi = {}, year = {2012}}

Yang, Shengqian

WODA '12: "Dynamic Analysis of Inefficiently-Used Containers"
Shengqian Yang, Dacong Yan, Guoqing Xu, and Atanas Rountev (Ohio State University, USA; UC Irvine, USA)

The goal of this work is to identify suspicious usage of containers as an indicator of potential performance inefficiencies. To analyze container-related behavior and performance, we propose a dynamic analysis that tracks and records the flow of element objects to/from container objects. The observed interactions among containers and their elements are captured by a container-element flow graph. This graph is then analyzed by three detectors of potential container inefficiencies, based on certain patterns of suspicious behavior. In a promising initial study, this approach uncovered a number of performance problems in realistic Java applications.

@InProceedings{WODA12p30, author = {Shengqian Yang and Dacong Yan and Guoqing Xu and Atanas Rountev}, title = {Dynamic Analysis of Inefficiently-Used Containers}, booktitle = {Proc.\ WODA}, publisher = {ACM}, pages = {30--35}, doi = {}, year = {2012}}

22 authors