28th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2019),
July 15–19, 2019,
Beijing, China
Frontmatter
Welcome from the Chairs
It is our great pleasure to welcome you to Beijing, China for ISSTA 2019, the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis, to be held on July 15–19, 2019. ISSTA is the leading research symposium on software testing and analysis, bringing together academics, industrial researchers, and practitioners to exchange new ideas, problems, and experience on how to analyze and test software systems.
Invited Talks
Keynote
Some Challenges for Software Testing Research (Invited Talk Paper)
Nadia Alshahwan, Andrea Ciancone,
Mark Harman, Yue Jia, Ke Mao, Alexandru Marginean, Alexander Mols, Hila Peleg,
Federica Sarro, and Ilya Zorin
(Facebook, UK; University College London, UK; Technion, Israel)
This paper outlines four open challenges for software testing in general, and search-based software testing in particular, arising from our experience with the deployment of the Sapienz system at Facebook. The challenges may also apply more generally, thereby representing opportunities for the research community to further benefit from the growing interest in automated test design in industry.
@InProceedings{ISSTA19p1,
author = {Nadia Alshahwan and Andrea Ciancone and Mark Harman and Yue Jia and Ke Mao and Alexandru Marginean and Alexander Mols and Hila Peleg and Federica Sarro and Ilya Zorin},
title = {Some Challenges for Software Testing Research (Invited Talk Paper)},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {1--3},
doi = {10.1145/3293882.3338991},
year = {2019},
}
Publisher's Version
ISSTA 2019 Retrospective Impact Paper Award
From Typestate Verification to Interpretable Deep Models (Invited Talk Abstract)
Eran Yahav, Stephen J. Fink, Nurit Dor, G. Ramalingam, and Emmanuel Geay
(Technion, Israel; Facebook, USA; Kayhut, Israel; Microsoft Research, USA; Wayfair, USA)
The paper "Effective Typestate Verification in the Presence of Aliasing" was published in the proceedings of the International Symposium on Software Testing and Analysis (ISSTA) 2006, and has now been selected to receive the ISSTA 2019 Retrospective Impact Paper Award.
The paper described a scalable framework for verification of typestate properties in real-world Java programs.
The paper introduced several techniques that have been used widely in the static analysis of real-world programs. Specifically, it introduced an abstract domain combining access-paths, aliasing information, and typestate that turned out to be simple, powerful, and useful.
We review the original paper and show the evolution of the ideas over the years. We show how some of these ideas have evolved into work on machine learning for code completion, and discuss recent general results in machine learning for programming.
@InProceedings{ISSTA19p4,
author = {Eran Yahav and Stephen J. Fink and Nurit Dor and G. Ramalingam and Emmanuel Geay},
title = {From Typestate Verification to Interpretable Deep Models (Invited Talk Abstract)},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {4--5},
doi = {10.1145/3293882.3338992},
year = {2019},
}
Publisher's Version
Info
ISSTA 2019 Impact Paper Award
Theory and Practice of String Solvers (Invited Talk Abstract)
Adam Kiezun, Philip J. Guo, Pieter Hooimeijer,
Michael D. Ernst, and Vijay Ganesh
(Amazon, USA; University of California at San Diego, USA; Facebook, USA; University of Washington, USA; University of Waterloo, Canada)
The paper titled "Hampi: A Solver for String Constraints" was published in the proceedings of the International Symposium on Software Testing and Analysis (ISSTA) 2009, and has been selected to receive the ISSTA 2019 Impact Paper Award. The paper describes HAMPI, one of the first practical solvers aimed at the satisfiability problem for a theory of string (word) equations, operations over strings, predicates over regular expressions, and context-free grammars. HAMPI has been used widely to solve many software engineering and security problems, and has inspired considerable research on string solving algorithms and their applications.
In this talk, we review the state of research on the theory and practice of string solving algorithms, specifically highlighting key historical developments that have led to their widespread use. On the practical front, we discuss different kinds of algorithmic paradigms, such as word- and automata-based, that have been developed to solve string and regular expression constraints. We then focus on the many hardness results that theorists have proved for fragments of theories over strings. Finally, we conclude with open theoretical problems, practical algorithmic challenges, and future applications of string solvers.
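The satisfiability problem that HAMPI targets can be pictured with a tiny example. The sketch below is ours, not HAMPI's algorithm: where HAMPI encodes bounded-length string constraints for a solver, we simply enumerate candidate strings up to a length bound; the word equation and regular-expression constraint are invented for illustration.

import itertools
import re

# Toy bounded-length "string solver": enumerate all strings over a small
# alphabet and return the first one satisfying the constraint, mirroring
# HAMPI's use of fixed-size string variables.
def solve(constraint, alphabet="ab", max_len=4):
    for n in range(max_len + 1):
        for cand in map("".join, itertools.product(alphabet, repeat=n)):
            if constraint(cand):
                return cand
    return None  # unsatisfiable within the length bound

# Example: a word equation (v . "ab" == "a" . v . "b") conjoined with a
# regular-expression membership constraint (v in a+).
constraint = lambda v: v + "ab" == "a" + v + "b" and re.fullmatch("a+", v)
print(solve(constraint))  # -> 'a'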
@InProceedings{ISSTA19p6,
author = {Adam Kiezun and Philip J. Guo and Pieter Hooimeijer and Michael D. Ernst and Vijay Ganesh},
title = {Theory and Practice of String Solvers (Invited Talk Abstract)},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {6--7},
doi = {10.1145/3293882.3338993},
year = {2019},
}
Publisher's Version
Main Research
Program Repair
Crash-Avoiding Program Repair
Xiang Gao,
Sergey Mechtaev, and
Abhik Roychoudhury
(National University of Singapore, Singapore; University College London, UK)
Existing program repair systems modify a buggy program so that the modified program passes given tests. The repaired program may not satisfy even the most basic notion of correctness, namely crash-freedom. In other words, repair tools might generate patches which over-fit the test data driving the repair, and the automatically repaired programs may even introduce crashes or vulnerabilities.
We propose an integrated approach for detecting and discarding crashing patches. Our approach fuses test and patch generation into a single process, in which patches are generated with the objective of passing existing tests, and new tests are generated with the objective of filtering out over-fitted patches by distinguishing candidate patches in terms of behavior. We use crash-freedom as the oracle to discard patch candidates which crash on the new tests. At its core, our approach defines a grey-box fuzzing strategy that gives higher priority to new tests that separate patches behaving equivalently on existing tests. This test generation strategy identifies semantic differences between patch candidates, and reduces over-fitting in program repair.
We evaluated our approach on real-world vulnerabilities and open-source subjects from the Google OSS-Fuzz infrastructure. We found that our tool Fix2Fit (implementing patch-space-directed test generation) produces crash-avoiding patches. While we do not give formal guarantees about crash-freedom, cross-validation with fuzzing tools and their sanitizers provides greater confidence in the crash-freedom of our suggested patches.
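The patch-separating test prioritization can be pictured with a toy example. This is our minimal sketch, not the Fix2Fit implementation (which fuzzes C subjects); the patch candidates and the integer input domain are invented.

import random

def behavior(patch, test_input):
    """Run one candidate patch on one input; return its output or 'CRASH'."""
    try:
        return patch(test_input)
    except Exception:
        return "CRASH"

def separation_score(patches, test_input):
    """Count the distinct behavioral groups the input induces among patches."""
    return len({behavior(p, test_input) for p in patches})

def filter_crashing_patches(patches, trials=200):
    survivors = list(patches)
    for _ in range(trials):
        # Fuzzing stand-in: among random inputs, prefer the one that best
        # separates the surviving patches into distinct behaviors.
        candidates = [random.randint(-100, 100) for _ in range(10)]
        test_input = max(candidates, key=lambda i: separation_score(survivors, i))
        survivors = [p for p in survivors if behavior(p, test_input) != "CRASH"]
    return survivors

# Hypothetical patch candidates for a buggy "safe divide" routine.
patches = [
    lambda x: 100 // x,                   # over-fitted: crashes when x == 0
    lambda x: 100 // x if x != 0 else 0,  # crash-avoiding candidate
]
print(len(filter_crashing_patches(patches)))  # -> 1: the crashing one is gone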
@InProceedings{ISSTA19p8,
author = {Xiang Gao and Sergey Mechtaev and Abhik Roychoudhury},
title = {Crash-Avoiding Program Repair},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {8--18},
doi = {10.1145/3293882.3330558},
year = {2019},
}
Publisher's Version
Practical Program Repair via Bytecode Mutation
Ali Ghanbari, Samuel Benton, and Lingming Zhang
(University of Texas at Dallas, USA)
Automated Program Repair (APR) is one of the most recent advances in automated debugging, and can directly fix buggy programs with minimal human intervention. Although various advanced APR techniques (including search-based or semantic-based ones) have been proposed, they mainly work at the source-code level and it is not clear how bytecode-level APR performs in practice. Also, empirical studies of the existing techniques on bugs beyond what has been reported in the original papers are rather limited. In this paper, we implement the first practical bytecode-level APR technique, PraPR, and present the first extensive study on fixing real-world bugs (e.g., Defects4J bugs) using JVM bytecode mutation. The experimental results show that, surprisingly, even PraPR with only the basic traditional mutators can produce genuine fixes for 17 bugs; with simple additional commonly used APR mutators, PraPR is able to produce genuine fixes for 43 bugs, significantly outperforming state-of-the-art APR while being over 10X faster. Furthermore, we performed an extensive study of PraPR and other recent APR tools on a large number of additional real-world bugs, and demonstrated the overfitting problem of recent advanced APR tools for the first time. Lastly, PraPR has also successfully fixed bugs for other JVM languages (e.g., the popular Kotlin language), indicating that PraPR can greatly complement existing source-code-level APR.
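The overall mutate-and-validate loop can be sketched in a few lines. PraPR itself mutates JVM bytecode; the illustration below is ours and works at the source level in Python, with an invented one-operator bug and a three-assertion test suite.

import operator

MUTATORS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_candidate(op):
    # Hypothetical buggy method: should compute a + b but was written as a - b;
    # each mutator swap yields one candidate patch.
    return lambda a, b: MUTATORS[op](a, b)

def test_suite(fn):
    return fn(2, 3) == 5 and fn(0, 7) == 7 and fn(-1, 1) == 0

# A candidate that passes the entire suite is reported as a plausible fix.
plausible = [op for op in MUTATORS if test_suite(make_candidate(op))]
print(plausible)  # -> ['+']: replacing '-' with '+' repairs the bug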
@InProceedings{ISSTA19p19,
author = {Ali Ghanbari and Samuel Benton and Lingming Zhang},
title = {Practical Program Repair via Bytecode Mutation},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {19--30},
doi = {10.1145/3293882.3330559},
year = {2019},
}
Publisher's Version
Published Artifact
Info
Artifacts Available
Artifacts Functional
TBar: Revisiting Template-Based Automated Program Repair
Kui Liu, Anil Koyuncu,
Dongsun Kim, and
Tegawendé F. Bissyandé
(University of Luxembourg, Luxembourg)
We revisit the performance of template-based APR to build comprehensive knowledge about the effectiveness of fix patterns, and to highlight the importance of complementary steps such as fault localization or donor code retrieval. To that end, we first investigate the literature to collect, summarize, and label recurrently-used fix patterns. Based on the investigation, we build TBar, a straightforward APR tool that systematically attempts to apply these fix patterns to program bugs. We thoroughly evaluate TBar on the Defects4J benchmark. In particular, we assess the actual qualitative and quantitative diversity of fix patterns, as well as their effectiveness in yielding plausible or correct patches. Ultimately, we find that, assuming perfect fault localization, TBar correctly/plausibly fixes 74/101 bugs. Replicating a standard and practical pipeline of APR assessment, we demonstrate that TBar correctly fixes 43 bugs from Defects4J, an unprecedented performance in the literature (including all approaches, i.e., template-based, stochastic mutation-based, or synthesis-based APR).
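The "systematically attempt fix patterns" step can be pictured as follows. This is our toy sketch, not TBar's pattern catalog: the two patterns, the string-level statement representation, and the suspicious statements are all invented for illustration (TBar works on Java ASTs).

def insert_null_check(stmt):
    if "." in stmt:  # only meaningful for dereferences like recv.method()
        yield f"if ({stmt.split('.')[0]} != null) {{ {stmt} }}"

def mutate_operator(stmt):
    for old, new in ((">", ">="), ("<", "<="), ("==", "!=")):
        if old in stmt:
            yield stmt.replace(old, new, 1)

FIX_PATTERNS = [insert_null_check, mutate_operator]

def generate_patches(suspicious_statements):
    """Systematically attempt every fix pattern on every suspicious location."""
    for stmt in suspicious_statements:        # ordered by fault localization
        for pattern in FIX_PATTERNS:
            for candidate in pattern(stmt):
                yield stmt, candidate         # to be compiled and run vs. tests

for original, patch in generate_patches(["x.close();", "if (i > n)"]):
    print(f"{original!r:14} -> {patch!r}")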
@InProceedings{ISSTA19p31,
author = {Kui Liu and Anil Koyuncu and Dongsun Kim and Tegawendé F. Bissyandé},
title = {TBar: Revisiting Template-Based Automated Program Repair},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {31--42},
doi = {10.1145/3293882.3330577},
year = {2019},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
History-Driven Build Failure Fixing: How Far Are We?
Yiling Lou, Junjie Chen, Lingming Zhang,
Dan Hao, and Lu Zhang
(Peking University, China; University of Texas at Dallas, USA)
Build systems are essential for modern software development and maintenance since they are widely used to transform source code artifacts into executable software. Previous work shows that build systems break frequently during software evolution; therefore, automated build-fixing techniques are in huge demand. In this paper we target a mainstream build system, Gradle, which has become the most widely used build system for Java projects in the open-source community (e.g., GitHub). HireBuild, the state-of-the-art build-fixing tool for Gradle, was recently proposed to fix Gradle build failures via mining the history of prior fixes. Although HireBuild has been shown to be effective for fixing real-world Gradle build failures, it was evaluated on only a limited set of build failures, and largely depends on the quality/availability of historical fix information. To investigate the efficacy and limitations of history-driven build fixing, we first construct a new and large build-failure dataset from the top 1,000 GitHub projects. Then, we evaluate HireBuild on the extended dataset both quantitatively and qualitatively. Inspired by the findings of the study, we propose a simplistic new technique that generates potential patches via searching the present project under test and external resources rather than historical fix information. According to our experimental results, the simplistic approach based on present information successfully fixes 2X more reproducible build failures than the state-of-the-art HireBuild based on historical fix information. Furthermore, our results also reveal various findings/guidelines for future advanced build-failure fixing.
@InProceedings{ISSTA19p43,
author = {Yiling Lou and Junjie Chen and Lingming Zhang and Dan Hao and Lu Zhang},
title = {History-Driven Build Failure Fixing: How Far Are We?},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {43--54},
doi = {10.1145/3293882.3330578},
year = {2019},
}
Publisher's Version
Mobile App Testing
LibID: Reliable Identification of Obfuscated Third-Party Android Libraries
Jiexin Zhang, Alastair R. Beresford, and Stephan A. Kollmann
(University of Cambridge, UK)
Third-party libraries are vital components of Android apps, yet they can also introduce serious security threats and impede the accuracy and reliability of app analysis tasks, such as app clone detection. Several library detection approaches have been proposed to address these problems. However, we show these techniques are not robust against popular code obfuscators, such as ProGuard, which is now used in nearly half of all apps. We then present LibID, a library detection tool that is more resilient to code shrinking and package modification than state-of-the-art tools. We show that the library identification problem can be formulated using binary integer programming models. LibID is able to identify specific versions of third-party libraries in candidate apps through static analysis of app binaries coupled with a database of third-party libraries. We propose a novel approach to generate synthetic apps to tune the detection thresholds. Then, we use F-Droid apps as the ground truth to evaluate LibID under different obfuscation settings, which shows that LibID is more robust to code obfuscators than state-of-the-art tools. Finally, we demonstrate the utility of LibID by detecting the use of a vulnerable version of the OkHttp library in nearly 10% of the 3,958 most popular apps on the Google Play Store.
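The binary-integer-programming view can be pictured with a toy instance: binary variables pair app packages with library versions, each package matches at most one library, and the objective counts matched class signatures. The brute-force enumeration below is our illustration only; LibID hands the real model to a BIP solver, and the package names and scores are invented.

import itertools

def best_assignment(app_pkgs, lib_versions, score):
    """score(pkg, lib) = number of class signatures the pair shares."""
    best, best_val = {}, 0
    for libs in itertools.product([None, *lib_versions], repeat=len(app_pkgs)):
        val = sum(score(p, l) for p, l in zip(app_pkgs, libs) if l)
        if val > best_val:
            best, best_val = dict(zip(app_pkgs, libs)), val
    return best

scores = {("a.b", "okhttp-2.7"): 40, ("a.b", "okhttp-3.0"): 12,
          ("c.d", "gson-2.8"): 25}
score = lambda p, l: scores.get((p, l), 0)
print(best_assignment(["a.b", "c.d"],
                      ["okhttp-2.7", "okhttp-3.0", "gson-2.8"], score))
# -> {'a.b': 'okhttp-2.7', 'c.d': 'gson-2.8'}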
@InProceedings{ISSTA19p55,
author = {Jiexin Zhang and Alastair R. Beresford and Stephan A. Kollmann},
title = {LibID: Reliable Identification of Obfuscated Third-Party Android Libraries},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {55--65},
doi = {10.1145/3293882.3330563},
year = {2019},
}
Publisher's Version
QADroid: Regression Event Selection for Android Applications
Aman Sharma and Rupesh Nasre
(IIT Madras, India)
Popular Android applications undergo frequent releases. Ensuring functional testing of new features, as well as regression testing of previous functionality, is time-consuming and error-prone. Thus, there is a need for a tool that eases the testing effort and saves overall time in the product release cycle. In this work, we present QADroid, the first activity- and event-aware regression selection tool for Android apps. Salient features of QADroid are: (i) a richer change-set analyzer that covers code as well as non-code components for regression, (ii) a pictorial representation of the app's functioning, and (iii) a display of the regression points in the app as a mapping from activities to user elements to events. Features (ii) and (iii) help testers understand the technical findings better. We evaluated QADroid on 1105 releases of 50 open-source Android projects. The results show that QADroid reduced activity selection by 58% and event selection by 74% compared to the traditional way of exhaustively testing all activities and events, thereby significantly reducing manual testing effort.
@InProceedings{ISSTA19p66,
author = {Aman Sharma and Rupesh Nasre},
title = {QADroid: Regression Event Selection for Android Applications},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {66--77},
doi = {10.1145/3293882.3330550},
year = {2019},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
Artifacts Reusable
Mining Android Crash Fixes in the Absence of Issue- and Change-Tracking Systems
Pingfan Kong,
Li Li, Jun Gao,
Tegawendé F. Bissyandé, and
Jacques Klein
(University of Luxembourg, Luxembourg; Monash University, Australia)
Android apps are prone to crashes. Crashes often arise from the misuse of Android framework APIs and are hard to debug, since the official Android documentation does not thoroughly discuss potential exceptions. Recently, the program repair community has also started to investigate the possibility of fixing crashes automatically. Current results, however, apply to limited example cases. In both scenarios, the main issue is the need for more example data to drive the fix process, due to the high cost in time and effort of collecting and identifying fix examples. We propose in this work a scalable approach, CraftDroid, to mine crash fixes by leveraging a set of 28 thousand carefully reconstructed app lineages from app markets, without the need for app source code or issue reports. We developed a replicative testing approach that locates fixes among app versions that output different runtime logs given the exact same test inputs. Overall, we have mined 104 relevant crash fixes and further abstracted 17 fine-grained fix templates that are demonstrated to be effective for patching crashed APKs. Finally, we release ReCBench, a benchmark consisting of 200 crashed APKs and the crash replication scripts, which the community can use for evaluating generated patches for crash-inducing bugs.
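The replicative-testing idea can be sketched as follows: replay the exact same scripted inputs on consecutive versions of an app and flag a version pair as a candidate crash fix when the earlier version crashes and the later one does not. This is our simplification (the versions below are stand-in functions rather than APKs run on an emulator); "FATAL EXCEPTION" is the usual Android logcat crash marker.

def replay(app_version, script):
    """Stand-in for running a UI script on an emulator and collecting the log."""
    return app_version(script)  # returns a list of log lines

def find_fix_candidates(lineage, script):
    fixes = []
    for older, newer in zip(lineage, lineage[1:]):
        log_old, log_new = replay(older, script), replay(newer, script)
        crashed_old = any("FATAL EXCEPTION" in line for line in log_old)
        crashed_new = any("FATAL EXCEPTION" in line for line in log_new)
        if crashed_old and not crashed_new:
            fixes.append((older, newer))  # diff these versions to extract a fix
    return fixes

# Hypothetical three-version lineage: version 2 fixes the crash in version 1.
lineage = [
    lambda s: ["click ok", "FATAL EXCEPTION: NullPointerException"],
    lambda s: ["click ok", "handled null receiver"],
    lambda s: ["click ok", "handled null receiver"],
]
print(len(find_fix_candidates(lineage, script=["click ok"])))  # -> 1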
@InProceedings{ISSTA19p78,
author = {Pingfan Kong and Li Li and Jun Gao and Tegawendé F. Bissyandé and Jacques Klein},
title = {Mining Android Crash Fixes in the Absence of Issue- and Change-Tracking Systems},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {78--89},
doi = {10.1145/3293882.3330572},
year = {2019},
}
Publisher's Version
Sara: Self-Replay Augmented Record and Replay for Android in Industrial Cases
Jiaqi Guo, Shuyue Li, Jian-Guang Lou, Zijiang Yang, and Ting Liu
(Xi'an Jiaotong University, China; Microsoft Research, China; Western Michigan University, USA)
Record-and-replay tools are indispensable for quality assurance of mobile applications. Due to their importance, an increasing number of tools are being developed to record and replay user interactions for Android. However, by conducting an empirical study of various existing tools in industrial settings, researchers have revealed a gap between the characteristics requested by industry and the performance of publicly available record-and-replay tools. The study concludes that no existing tools under evaluation are sufficient for industrial applications. In this paper, we present a record-and-replay tool called SARA toward bridging this gap and achieving wide adoption. Specifically, a dynamic instrumentation technique is used to accommodate rich sources of inputs in the application layer while satisfying various constraints requested by industry. A self-replay mechanism is proposed to record more information about user inputs for accurate replaying without degrading user experience. In addition, an adaptive replay method is designed to enable replaying events on different devices with diverse screen sizes and OS versions. Through an evaluation on 53 highly popular industrial Android applications and 265 common usage scenarios, we demonstrate the effectiveness of SARA in recording and replaying rich sources of inputs on the same or different devices.
@InProceedings{ISSTA19p90,
author = {Jiaqi Guo and Shuyue Li and Jian-Guang Lou and Zijiang Yang and Ting Liu},
title = {Sara: Self-Replay Augmented Record and Replay for Android in Industrial Cases},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {90--100},
doi = {10.1145/3293882.3330557},
year = {2019},
}
Publisher's Version
Regression Testing
Root Causing Flaky Tests in a Large-Scale Industrial Setting
Wing Lam, Patrice Godefroid,
Suman Nath, Anirudh Santhiar, and Suresh Thummalapenta
(University of Illinois at Urbana-Champaign, USA; Microsoft, USA)
In today’s agile world, developers often rely on continuous integration pipelines to help build and validate their changes by executing tests in an efficient manner. One of the significant factors that hinder developers’ productivity is flaky tests—tests that may pass and fail with the same version of code. Since flaky test failures are not deterministically reproducible, developers often have to spend hours only to discover that the occasional failures have nothing to do with their changes. However, ignoring failures of flaky tests can be dangerous, since those failures may represent real faults in the production code. Furthermore, identifying the root cause of flakiness is tedious and cumbersome, since flaky failures are often a consequence of unexpected and non-deterministic behavior due to various factors, such as concurrency and external dependencies.
As developers in a large-scale industrial setting, we first describe our experience with flaky tests by conducting a study on them. Our results show that although the number of distinct flaky tests may be low, the percentage of failing builds due to flaky tests can be substantial. To reduce the burden of flaky tests on developers, we describe our end-to-end framework that helps identify flaky tests and understand their root causes. Our framework instruments flaky tests and all relevant code to log various runtime properties, and then uses a preliminary tool, called RootFinder, to find differences in the logs of passing and failing runs. Using our framework, we collect and publicize a dataset of real-world, anonymized execution logs of flaky tests. By sharing the findings from our study, our framework and tool, and a dataset of logs, we hope to encourage more research on this important problem.
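The log-differencing step performed by RootFinder can be sketched as follows. This is our reading of the abstract, not the tool itself: property names, the log encoding, and the "values cleanly separate outcomes" heuristic are all invented for illustration.

def differing_properties(passing_logs, failing_logs):
    """Each log maps a logged property (e.g., 'order@step2') to its value."""
    suspects = {}
    for prop in set().union(*passing_logs, *failing_logs):
        pass_vals = {log.get(prop) for log in passing_logs}
        fail_vals = {log.get(prop) for log in failing_logs}
        if pass_vals.isdisjoint(fail_vals):  # value cleanly separates outcomes
            suspects[prop] = (pass_vals, fail_vals)
    return suspects

passing = [{"order@step2": "A,B", "time@step2": 10},
           {"order@step2": "A,B", "time@step2": 12}]
failing = [{"order@step2": "B,A", "time@step2": 12}]
print(differing_properties(passing, failing))
# -> only 'order@step2' separates the runs, hinting at a concurrency issue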
@InProceedings{ISSTA19p101,
author = {Wing Lam and Patrice Godefroid and Suman Nath and Anirudh Santhiar and Suresh Thummalapenta},
title = {Root Causing Flaky Tests in a Large-Scale Industrial Setting},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {101--111},
doi = {10.1145/3293882.3330570},
year = {2019},
}
Publisher's Version
Mitigating the Effects of Flaky Tests on Mutation Testing
August Shi, Jonathan Bell, and
Darko Marinov
(University of Illinois at Urbana-Champaign, USA; George Mason University, USA)
Mutation testing is widely used in research as a metric for evaluating the quality of test suites. Mutation testing runs the test suite on generated mutants (variants of the code under test), where a test suite kills a mutant if any of the tests fail when run on the mutant. Mutation testing implicitly assumes that tests exhibit deterministic behavior, in terms of their coverage and the outcome of a test (not) killing a certain mutant. Such an assumption does not hold in the presence of flaky tests, whose outcomes can non-deterministically differ even when run on the same code under test. Without reliable test outcomes, mutation testing can result in unreliable results, e.g., in our experiments, mutation scores vary by four percentage points on average between repeated executions, and 9% of mutant-test pairs have an unknown status. Many modern software projects suffer from flaky tests. We propose techniques that manage flakiness throughout the mutation testing process, largely based on strategically re-running tests. We implement our techniques by modifying the open-source mutation testing tool, PIT. Our evaluation on 30 projects shows that our techniques reduce the number of "unknown" (flaky) mutants by 79.4%.
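The strategic re-running can be sketched in a few lines. This is our simplification, not PIT's actual integration: a mutant-test pair is trusted as killed or survived only if repeated runs agree, and disagreeing runs mark the pair as flaky; the outcome strings and rerun count are illustrative.

import random

def classify(run_test_on_mutant, reruns=5):
    outcomes = {run_test_on_mutant() for _ in range(reruns)}
    if outcomes == {"pass"}:
        return "survived"
    if outcomes == {"fail"}:
        return "killed"
    return "flaky"  # non-deterministic: exclude from the mutation score

deterministic_kill = lambda: "fail"
flaky_test = lambda: random.choice(["pass", "fail"])
print(classify(deterministic_kill), classify(flaky_test))
# -> 'killed flaky' (the flaky pair is detected with high probability)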
@InProceedings{ISSTA19p112,
author = {August Shi and Jonathan Bell and Darko Marinov},
title = {Mitigating the Effects of Flaky Tests on Mutation Testing},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {112--122},
doi = {10.1145/3293882.3330568},
year = {2019},
}
Publisher's Version
Published Artifact
Artifacts Available
Assessing the State and Improving the Art of Parallel Testing for C
Oliver Schwahn, Nicolas Coppik, Stefan Winter, and Neeraj Suri
(TU Darmstadt, Germany)
The execution latency of a test suite strongly depends on the degree of concurrency with which test cases are executed. However, if test cases are not designed for concurrent execution, they may interfere, causing result deviations compared to sequential execution. To prevent this, each test case can be provided with an isolated execution environment, but the resulting overheads diminish the merit of parallel testing. Our large-scale analysis of the Debian Buster package repository shows that existing test suites in C projects make limited use of parallelization. We present an approach to (a) analyze the potential of C test suites for safe concurrent execution, i.e., result invariance compared to sequential execution, and (b) execute tests concurrently with different parallelization strategies using processes or threads if it is found to be safe. Applying our approach to 9 C projects, we find that most of them cannot safely execute tests in parallel due to unsafe test code or unsafe usage of shared variables or files within the program code. Parallel test execution shows a significant acceleration over sequential execution for most projects. We find that multi-threading rarely outperforms multi-processing. Finally, we observe that the lack of a common test framework for C leaves make as the standard driver for running tests, which introduces unnecessary performance overheads for test execution.
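The result-invariance check described above can be sketched in Python (the paper targets C test suites driven by make; threads stand in here for concurrent test processes, and the shared-state tests are invented examples of unsafe test code).

import concurrent.futures
import time

def run_sequential(tests):
    return {name: fn() for name, fn in tests.items()}

def run_parallel(tests):
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn) for name, fn in tests.items()}
        return {name: f.result() for name, f in futures.items()}

def unsafe_for_parallelism(tests, repetitions=20):
    """Flag tests whose parallel outcome ever deviates from sequential runs."""
    baseline = run_sequential(tests)
    unsafe = set()
    for _ in range(repetitions):
        for name, result in run_parallel(tests).items():
            if result != baseline[name]:
                unsafe.add(name)
    return unsafe

shared = {"owner": None}  # shared state with no per-test isolation
def make_test(tid):
    def test():
        shared["owner"] = tid
        time.sleep(0.01)               # widen the race window
        return shared["owner"] == tid  # fails if another test interfered
    return test

tests = {"test_a": make_test("a"), "test_b": make_test("b"),
         "test_pure": lambda: 1 + 1 == 2}
print(unsafe_for_parallelism(tests))   # typically {'test_a', 'test_b'}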
@InProceedings{ISSTA19p123,
author = {Oliver Schwahn and Nicolas Coppik and Stefan Winter and Neeraj Suri},
title = {Assessing the State and Improving the Art of Parallel Testing for C},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {123--133},
doi = {10.1145/3293882.3330573},
year = {2019},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
Artifacts Reusable
Failure Clustering without Coverage
Mojdeh Golagha, Constantin Lehnhoff,
Alexander Pretschner, and Hermann Ilmberger
(TU Munich, Germany; BMW, Germany)
Developing and integrating software in the automotive industry is a complex task and requires extensive testing. An important cost factor in testing and debugging is the time required to analyze failing tests. In the context of regression testing, usually, large numbers of tests fail due to a few underlying faults. Clustering failing tests with respect to their underlying faults can, therefore, help in reducing the required analysis time. In this paper, we propose a clustering technique to group failing hardware-in-the-loop tests based on non-code-based features, retrieved from three different sources. To effectively reduce the analysis effort, the clustering tool selects a representative test for each cluster. Instead of analyzing all failing tests, testers only inspect the representative tests to find the underlying faults. We evaluated the effectiveness and efficiency of our solution in a major automotive company using 86 regression test runs, 8743 failing tests, and 1531 faults. The results show that utilizing our clustering tool, testers can reduce the analysis time more than 60% and find more than 80% of the faults only by inspecting the representative tests.
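The core idea—group failing tests by non-code features and inspect one representative per group—can be sketched as follows. The feature names are our invention and the grouping is deliberately naive; the paper's clustering and representative selection are more sophisticated.

from collections import defaultdict

def cluster_failures(failing_tests):
    clusters = defaultdict(list)
    for test in failing_tests:
        # Non-code features stand in for coverage: nothing here requires
        # instrumenting the system under test.
        key = (test["error_message"], test["ecu"], test["signal"])
        clusters[key].append(test["name"])
    # One representative per cluster stands in for all its members.
    return {key: members[0] for key, members in clusters.items()}

failures = [
    {"name": "t1", "error_message": "timeout", "ecu": "brake", "signal": "s7"},
    {"name": "t2", "error_message": "timeout", "ecu": "brake", "signal": "s7"},
    {"name": "t3", "error_message": "range",   "ecu": "door",  "signal": "s2"},
]
print(cluster_failures(failures))  # 3 failing tests -> 2 representatives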
@InProceedings{ISSTA19p134,
author = {Mojdeh Golagha and Constantin Lehnhoff and Alexander Pretschner and Hermann Ilmberger},
title = {Failure Clustering without Coverage},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {134--145},
doi = {10.1145/3293882.3330561},
year = {2019},
}
Publisher's Version
Testing and Machine Learning
DeepHunter: A Coverage-Guided Fuzz Testing Framework for Deep Neural Networks
Xiaofei Xie, Lei Ma, Felix Juefei-Xu,
Minhui Xue, Hongxu Chen,
Yang Liu,
Jianjun Zhao, Bo Li, Jianxiong Yin, and Simon See
(Nanyang Technological University, Singapore; Kyushu University, Japan; Carnegie Mellon University, USA; University of Adelaide, Australia; Zhejiang Sci-Tech University, China; University of Illinois at Urbana-Champaign, USA; NVIDIA AI Tech Centre, Singapore)
The past decade has seen the great potential of applying deep neural network (DNN) based software to safety-critical scenarios, such as autonomous driving. Similar to traditional software, DNNs could exhibit incorrect behaviors, caused by hidden defects, leading to severe accidents and losses. In this paper, we propose DeepHunter, a coverage-guided fuzz testing framework for detecting potential defects of general-purpose DNNs. To this end, we first propose a metamorphic mutation strategy to generate new semantics-preserving tests, and leverage multiple extensible coverage criteria as feedback to guide the test generation. We further propose a seed selection strategy that combines both diversity-based and recency-based seed selection. We implement and incorporate 5 existing testing criteria and 4 seed selection strategies in DeepHunter. Large-scale experiments demonstrate that (1) our metamorphic mutation strategy is useful for generating new valid tests with the same semantics as the original seed, with up to a 98% validity ratio; (2) diversity-based seed selection generally contributes more than recency-based seed selection to boosting coverage and detecting defects; (3) DeepHunter outperforms the state of the art in coverage as well as in the quantity and diversity of identified defects; (4) guided by corner-region-based criteria, DeepHunter is useful for capturing defects during DNN quantization for platform migration.
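A NumPy-only sketch of the coverage-guided loop follows. It is our illustration, not DeepHunter's implementation: the "model" is a random linear layer, the coverage criterion is a toy neuron-activation set, and the pixel budget stands in for the metamorphic (semantics-preserving) mutation constraints.

import numpy as np

def neuron_coverage(model, image, threshold=0.5):
    """Toy criterion: the set of 'neurons' (linear units) activated."""
    return frozenset(np.flatnonzero(model(image) > threshold))

def fuzz(model, seed, iterations=200, budget=0.05):
    rng = np.random.default_rng(0)
    covered = set(neuron_coverage(model, seed))
    queue = [seed]
    for _ in range(iterations):
        base = queue[rng.integers(len(queue))]
        noise = rng.uniform(-budget, budget, size=base.shape)
        mutant = np.clip(base + noise, 0.0, 1.0)      # stay a valid image
        gained = set(neuron_coverage(model, mutant)) - covered
        if gained:                                    # feedback: keep the mutant
            covered |= gained
            queue.append(mutant)
    return queue

# Hypothetical 'model': a random linear layer over a 4-pixel image.
weights = np.random.default_rng(1).normal(size=(8, 4))
model = lambda img: weights @ img
print(len(fuzz(model, seed=np.full(4, 0.5))), "tests retained")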
@InProceedings{ISSTA19p146,
author = {Xiaofei Xie and Lei Ma and Felix Juefei-Xu and Minhui Xue and Hongxu Chen and Yang Liu and Jianjun Zhao and Bo Li and Jianxiong Yin and Simon See},
title = {DeepHunter: A Coverage-Guided Fuzz Testing Framework for Deep Neural Networks},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {146--157},
doi = {10.1145/3293882.3330579},
year = {2019},
}
Publisher's Version
Search-Based Test and Improvement of Machine-Learning-Based Anomaly Detection Systems
Maxime Cordy, Steve Muller,
Mike Papadakis, and Yves Le Traon
(University of Luxembourg, Luxembourg)
Machine-learning-based anomaly detection systems can be vulnerable to new kinds of deception, known as training attacks, which exploit the live learning mechanism of these systems by progressively injecting small portions of abnormal data. The injected data seamlessly shifts the learned states to a point where harmful data can pass unnoticed. We focus on the systematic testing of these attacks in the context of intrusion detection systems (IDS). We propose a search-based approach to test IDS by mounting training attacks. Going a step further, we also propose searching for countermeasures, learning from the successful attacks and thereby increasing the resilience of the tested IDS. We evaluate our approach on a denial-of-service attack detection scenario and a dataset recording the network traffic of a real-world system. Our experiments show that our search-based attack scheme generates successful attacks bypassing the current state-of-the-art defences. We also show that our approach is capable of generating attack patterns for all configuration states of the studied IDS and that it is capable of providing appropriate countermeasures. By co-evolving our attack and defence mechanisms we succeeded in improving the defence of the IDS under test, making it resilient to 49 out of 50 independently generated attacks.
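The state-shifting mechanism behind a training attack can be made concrete with a toy example. The detector below is our invented running-mean model, not the paper's IDS or its search algorithm: the attacker injects points just inside the acceptance band until a clearly harmful value passes unnoticed.

class LiveDetector:
    def __init__(self, mean=10.0, tolerance=5.0, rate=0.1):
        self.mean, self.tolerance, self.rate = mean, tolerance, rate

    def observe(self, x):
        anomalous = abs(x - self.mean) > self.tolerance
        if not anomalous:                    # accepted points retrain the model
            self.mean += self.rate * (x - self.mean)
        return anomalous

detector = LiveDetector()
harmful = 100.0
print("before attack:", detector.observe(harmful))   # True: flagged
for _ in range(200):                                  # progressive injection
    detector.observe(detector.mean + detector.tolerance * 0.9)
print("after attack: ", detector.observe(harmful))   # False: passes unnoticed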
@InProceedings{ISSTA19p158,
author = {Maxime Cordy and Steve Muller and Mike Papadakis and Yves Le Traon},
title = {Search-Based Test and Improvement of Machine-Learning-Based Anomaly Detection Systems},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {158--168},
doi = {10.1145/3293882.3330580},
year = {2019},
}
Publisher's Version
Info
Artifacts Functional
Artifacts Reusable
DeepFL: Integrating Multiple Fault Diagnosis Dimensions for Deep Fault Localization
Xia Li, Wei Li, Yuqun Zhang, and Lingming Zhang
(University of Texas at Dallas, USA; Southern University of Science and Technology, China)
Learning-based fault localization has been intensively studied recently. Prior studies have shown that traditional Learning-to-Rank techniques can help precisely diagnose fault locations using various dimensions of fault-diagnosis features, such as suspiciousness values computed by various off-the-shelf fault localization techniques. However, with the increasing dimensions of features considered by advanced fault localization techniques, it can be quite challenging for traditional Learning-to-Rank algorithms to automatically identify effective existing/latent features. In this work, we propose DeepFL, a deep learning approach to automatically learn the most effective existing/latent features for precise fault localization. Although the approach is general, in this work we collect various suspiciousness-value-based, fault-proneness-based, and textual-similarity-based features from the fault localization, defect prediction, and information retrieval areas, respectively. DeepFL has been studied on 395 real bugs from the widely used Defects4J benchmark. The experimental results show that DeepFL can significantly outperform the state-of-the-art TraPT/FLUCCS (e.g., localizing 50+ more faults within Top-1). We also investigate the impacts of deep model configurations (e.g., loss functions and epoch settings) and features. Furthermore, DeepFL is also surprisingly effective for cross-project prediction.
@InProceedings{ISSTA19p169,
author = {Xia Li and Wei Li and Yuqun Zhang and Lingming Zhang},
title = {DeepFL: Integrating Multiple Fault Diagnosis Dimensions for Deep Fault Localization},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {169--180},
doi = {10.1145/3293882.3330574},
year = {2019},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
Artifacts Reusable
Codebase-Adaptive Detection of Security-Relevant Methods
Goran Piskachev, Lisa Nguyen Quang Do, and
Eric Bodden
(Fraunhofer IEM, Germany; University of Paderborn, Germany)
More and more companies use static analysis to perform regular code reviews to detect security vulnerabilities in their code, configuring their analyses to detect various types of bugs and vulnerabilities such as the SANS top 25 or the OWASP top 10. For such analyses to be as precise as possible, they must be adapted to the code base they scan. The particular challenge we address in this paper is to provide analyses with the correct security-relevant methods (Srm): sources, sinks, etc. We present SWAN, a fully-automated machine-learning approach to detect sources, sinks, validators, and authentication methods for Java programs. SWAN further classifies the Srm into specific vulnerability classes of the SANS top 25. To further adapt the lists detected by SWAN to the code base and to improve its precision, we also introduce SWANAssist, an extension to SWAN that allows analysis users to refine the classifications. On twelve popular Java frameworks, SWAN achieves an average precision of 0.826, which is better than or comparable to existing approaches. Our experiments show that SWANAssist requires relatively low effort from the developer to significantly improve precision.
@InProceedings{ISSTA19p181,
author = {Goran Piskachev and Lisa Nguyen Quang Do and Eric Bodden},
title = {Codebase-Adaptive Detection of Security-Relevant Methods},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {181--191},
doi = {10.1145/3293882.3330556},
year = {2019},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
APIs and Symbolic Execution
Effective and Efficient API Misuse Detection via Exception Propagation and Search-Based Testing
Maria Kechagia,
Xavier Devroey,
Annibale Panichella,
Georgios Gousios, and
Arie van Deursen
(University College London, UK; Delft University of Technology, Netherlands)
Application Programming Interfaces (APIs) typically come with (implicit) usage constraints. The violations of these constraints (API misuses) can lead to software crashes. Even though there are several tools that can detect API misuses, most of them suffer from a very high rate of false positives. We introduce Catcher, a novel API misuse detection approach that combines static exception propagation analysis with automatic search-based test case generation to effectively and efficiently pinpoint crash-prone API misuses in client applications. We validate Catcher against 21 Java applications, targeting misuses of the Java platform's API. Our results indicate that Catcher is able to generate test cases that uncover 243 (unique) API misuses that result in crashes. Our empirical evaluation shows that Catcher can detect a large number of misuses (77 cases) that would remain undetected by the traditional coverage-based test case generator EvoSuite. Additionally, on average, Catcher is eight times faster than EvoSuite in generating test cases for the identified misuses. Finally, we find that the majority of the exceptions triggered by Catcher are unexpected to developers, i.e., not only unhandled in the source code but also not listed in the documentation of the client applications.
@InProceedings{ISSTA19p192,
author = {Maria Kechagia and Xavier Devroey and Annibale Panichella and Georgios Gousios and Arie van Deursen},
title = {Effective and Efficient API Misuse Detection via Exception Propagation and Search-Based Testing},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {192--203},
doi = {10.1145/3293882.3330552},
year = {2019},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
Artifacts Reusable
Automated API-Usage Update for Android Apps
Mattia Fazzini, Qi Xin, and
Alessandro Orso
(Georgia Institute of Technology, USA)
Mobile apps rely heavily on the application programming interface (API) provided by their underlying operating system (OS). Because OS and API can change frequently, developers must quickly update their apps to ensure that the apps behave as intended with new API and OS versions. To help developers with this tedious, error-prone, and time-consuming task, we developed a technique that can automatically perform app updates for API changes based on examples of how other developers evolved their apps for the same changes. Given a target app to be updated and information about the changes in the API, our technique performs four main steps. First, it analyzes the target app to identify code affected by API changes. Second, it searches existing code bases for examples of updates to the new version of the API. Third, it analyzes, ranks, and transforms into generic patches the update examples found in the previous step. Finally, it applies the generated patches to the target app in order of ranking, while performing differential testing to validate the update. We implemented our technique and performed an empirical evaluation on 15 real-world apps with promising results. Overall, our technique was able to update 85% of the API changes considered and automatically validate 68% of the updates performed.
@InProceedings{ISSTA19p204,
author = {Mattia Fazzini and Qi Xin and Alessandro Orso},
title = {Automated API-Usage Update for Android Apps},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {204--215},
doi = {10.1145/3293882.3330571},
year = {2019},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
A Large-Scale Study of Application Incompatibilities in Android
Haipeng Cai, Ziyi Zhang,
Li Li, and Xiaoqin Fu
(Washington State University, USA; Monash University, Australia)
The rapid expansion of the Android ecosystem is accompanied by continuing diversification of platforms and devices, resulting in increasing incompatibility issues which damage user experiences and impede app development productivity. In this paper, we conducted a large-scale, longitudinal study of compatibility issues in 62,894 benign apps developed in the past eight years, to understand the symptoms and causes of these issues. We further investigated the incompatibilities that are actually exercised at runtime through the system logs and execution traces of 15,045 apps. Our study revealed that, among others, (1) compatibility issues were prevalent and persistent at both installation and run time, with greater prevalence of run-time incompatibilities, (2) there were no certain Android versions that consistently saw more or less app incompatibilities than others, (3) installation-time incompatibilities were strongly correlated with the minSdkVersion specified in apps, while run-time incompatibilities were most significantly correlated with the underlying platform’s API level, and (4) installation-time incompatibilities were mostly due to apps’ use of architecture-incompatible native libraries, while run-time incompatibilities were mostly due to API changes during SDK evolution. We offered further insights into app incompatibilities, as well as recommendations on dealing with the issues for both developers and end users of Android apps.
@InProceedings{ISSTA19p216,
author = {Haipeng Cai and Ziyi Zhang and Li Li and Xiaoqin Fu},
title = {A Large-Scale Study of Application Incompatibilities in Android},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {216--227},
doi = {10.1145/3293882.3330564},
year = {2019},
}
Publisher's Version
Published Artifact
Info
Artifacts Available
Artifacts Functional
Deferred Concretization in Symbolic Execution via Fuzzing
Awanish Pandey, Phani Raj Goutham Kotcharlakota, and
Subhajit Roy
(IIT Kanpur, India)
Concretization is an effective weapon in the armory of symbolic execution engines. However, concretization can lead to loss of coverage, path divergence, and generation of test cases on which the intended bugs are not reproduced. In this paper, we propose an algorithm, Deferred Concretization, that uses a new category of values within symbolic execution (referred to as symcrete values) to defer concretization until the values are actually needed. Our tool, COLOSSUS, built around these ideas, was able to gain an average coverage improvement of 66.94% and reduce divergence by more than 55% relative to the state-of-the-art symbolic execution engine, KLEE. Moreover, we found that KLEE loses about 38.60% of the states in the symbolic execution tree that COLOSSUS is able to recover, showing that COLOSSUS covers a much larger part of the search space.
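The deferred binding can be pictured with a toy class. The name Symcrete, the predicate representation, and the brute-force "solver" below are our illustration only, not COLOSSUS's or KLEE's machinery: the point is simply that the value carries its constraint and is concretized only at first real use.

class Symcrete:
    def __init__(self, constraint):
        self.constraint = constraint      # predicate the value must satisfy
        self._concrete = None

    def concretize(self):
        if self._concrete is None:        # deferred until first real use
            self._concrete = next(v for v in range(-1000, 1000)
                                  if self.constraint(v))
        return self._concrete

x = Symcrete(lambda v: v * v == 49 and v < 0)
# ... symbolic execution proceeds without fixing x ...
print(x.concretize())  # -> -7, chosen only when a concrete value is required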
@InProceedings{ISSTA19p228,
author = {Awanish Pandey and Phani Raj Goutham Kotcharlakota and Subhajit Roy},
title = {Deferred Concretization in Symbolic Execution via Fuzzing},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {228--238},
doi = {10.1145/3293882.3330554},
year = {2019},
}
Publisher's Version
Static Analysis and Debugging
Differentially Testing Soundness and Precision of Program Analyzers
Christian Klinger,
Maria Christakis, and
Valentin Wüstholz
(Saarland University, Germany; MPI-SWS, Germany; ConsenSys Diligence, Germany)
In the last decades, numerous program analyzers have been developed both in academia and industry. Despite their abundance, however, there is currently no systematic way of comparing the effectiveness of different analyzers on arbitrary code. In this paper, we present the first automated technique for differentially testing soundness and precision of program analyzers. We used our technique to compare six mature, state-of-the-art analyzers on tens of thousands of automatically generated benchmarks. Our technique detected soundness and precision issues in most analyzers, and we evaluated the implications of these issues for both designers and users of program analyzers.
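In outline, the differential comparison might look like this. The sketch is ours and simplifies the paper's setup to a case where the generated program's verdict is known by construction; the two "analyzers" are stand-ins, not the six tools studied.

def differential_test(program, truly_has_error, analyzers):
    findings = []
    for name, analyze in analyzers.items():
        reported = analyze(program)  # True = 'assertion may fail'
        if truly_has_error and not reported:
            findings.append((name, "soundness issue: real error missed"))
        if not truly_has_error and reported:
            findings.append((name, "precision issue: false alarm"))
    return findings

analyzers = {
    "analyzer_a": lambda p: "assert" in p,   # flags every assertion it sees
    "analyzer_b": lambda p: False,           # never reports anything
}
program = "x = 1; assert x == 1"             # generated to be error-free
print(differential_test(program, truly_has_error=False, analyzers=analyzers))
# -> analyzer_a raises a false alarm on a program known to be safe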
@InProceedings{ISSTA19p239,
author = {Christian Klinger and Maria Christakis and Valentin Wüstholz},
title = {Differentially Testing Soundness and Precision of Program Analyzers},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {239--250},
doi = {10.1145/3293882.3330553},
year = {2019},
}
Publisher's Version
Judge: Identifying, Understanding, and Evaluating Sources of Unsoundness in Call Graphs
Michael Reif, Florian Kübler, Michael Eichberg,
Dominik Helm, and
Mira Mezini
(TU Darmstadt, Germany)
Call graphs are widely used, in particular for advanced control- and data-flow analyses. Even though many call graph algorithms with different precision and scalability properties have been proposed, a comprehensive understanding of sources of unsoundness, their relevance, and the capabilities of existing call graph algorithms in this respect is missing.
To address this problem, we propose Judge, a toolchain that helps with understanding sources of unsoundness and improving the soundness of call graphs. In several experiments, we use Judge and an extensive test suite related to sources of unsoundness to (a) compute capability profiles for call graph implementations of Soot, WALA, DOOP, and OPAL, (b) to determine the prevalence of language features and APIs that affect soundness in modern Java Bytecode, (c) to compare the call graphs of Soot, WALA, DOOP, and OPAL – highlighting important differences in their implementations, and (d) to evaluate the necessary effort to achieve project-specific reasonable sound call graphs.
We show that soundness-relevant features/APIs are frequently used and that support for them differs vastly, up to the point where comparing call graphs computed by the same base algorithms (e.g., RTA) but different frameworks is bogus. We also show that Judge can support users in establishing the soundness of call graphs with reasonable effort.
@InProceedings{ISSTA19p251,
author = {Michael Reif and Florian Kübler and Michael Eichberg and Dominik Helm and Mira Mezini},
title = {Judge: Identifying, Understanding, and Evaluating Sources of Unsoundness in Call Graphs},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {251--261},
doi = {10.1145/3293882.3330555},
year = {2019},
}
Publisher's Version
Adlib: Analyzer for Mobile Ad Platform Libraries
Sungho Lee and
Sukyoung Ryu
(KAIST, South Korea)
Mobile advertising has become a popular advertising approach by taking advantage of rich information from mobile devices and rich interaction with users. Mobile advertising platforms show advertisements for nearby restaurants to users based on the geographic locations of their mobile devices, and also allow users to make reservations easily using their phone numbers. However, at the same time, they may open the door for advertisements to steal device information or perform malicious behaviors. When application developers integrate mobile advertising platform SDKs (AdSDKs) into their applications, they are informed of only the permissions required by the AdSDKs, and they may not be aware of the rich functionalities of the SDKs that are available to advertisements.
In this paper, we first report that various AdSDKs provide powerful functionalities to advertisements, which are seriously vulnerable to security threats. We present representative malicious behaviors by advertisements using APIs provided by AdSDKs. To mitigate these security vulnerabilities, we develop a static analyzer, Adlib, which analyzes Android Java libraries that use hybrid features to enable communication with JavaScript code and detects possible flows from the APIs that are accessible from third-party advertisements to device-specific features like geographic locations. Our evaluation shows that Adlib found genuine security vulnerabilities in real-world AdSDKs.
@InProceedings{ISSTA19p262,
author = {Sungho Lee and Sukyoung Ryu},
title = {Adlib: Analyzer for Mobile Ad Platform Libraries},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {262--272},
doi = {10.1145/3293882.3330562},
year = {2019},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
Artifacts Reusable
Interactive Metamorphic Testing of Debuggers
Sandro Tolksdorf,
Daniel Lehmann, and
Michael Pradel
(TU Darmstadt, Germany)
When improving their code, developers often turn to interactive debuggers. The correctness of these tools is crucial, because bugs in the debugger itself may mislead a developer, e.g., to believe that executed code is never reached or that a variable has another value than in the actual execution. Yet, debuggers are difficult to test because their input consists of both source code and a sequence of debugging actions, such as setting breakpoints or stepping through code. This paper presents the first metamorphic testing approach for debuggers. The key idea is to transform both the debugged code and the debugging actions in such a way that the behavior of the original and the transformed inputs should differ only in specific ways. For example, adding a breakpoint should not change the control flow of the debugged program. To support the interactive nature of debuggers, we introduce interactive metamorphic testing. It differs from traditional metamorphic testing by determining the input transformation and the expected behavioral change it causes while the program under test is running. Our evaluation applies the approach to the widely used debugger in the Chromium browser, where it finds eight previously unknown bugs with a true positive rate of 51%. All bugs have been confirmed by the developers, and one bug has even been marked as release-blocking.
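The metamorphic relation named above ("adding a breakpoint should not change control flow") can be checked concretely. The sketch below is ours and uses Python's own tracing hooks rather than the Chromium debugger studied in the paper; the traced toy program and the breakpoint placement are invented.

import sys

def executed_lines(fn, breakpoint_line=None):
    trace = []
    def tracer(frame, event, arg):
        if frame.f_code is not fn.__code__:
            return None  # only trace the program under test itself
        if event == "line":
            trace.append(frame.f_lineno)
            if frame.f_lineno == breakpoint_line:
                # 'Pausing' at the breakpoint: inspect state without effects.
                eval("1 + 1", frame.f_globals, frame.f_locals)
        return tracer
    sys.settrace(tracer)
    try:
        fn()
    finally:
        sys.settrace(None)
    return trace

def program():
    total = 0
    for i in range(3):
        total += i

original = executed_lines(program)
with_breakpoint = executed_lines(program, program.__code__.co_firstlineno + 3)
assert original == with_breakpoint, "debugger bug: breakpoint changed control flow"
print("relation holds over", len(original), "line events")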
@InProceedings{ISSTA19p273,
author = {Sandro Tolksdorf and Daniel Lehmann and Michael Pradel},
title = {Interactive Metamorphic Testing of Debuggers},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {273--283},
doi = {10.1145/3293882.3330567},
year = {2019},
}
Publisher's Version
Testing GUIs and Cars
TestMig: Migrating GUI Test Cases from iOS to Android
Xue Qin,
Hao Zhong, and
Xiaoyin Wang
(University of Texas at San Antonio, USA; Shanghai Jiao Tong University, China)
Nowadays, Apple iOS and Android are the two most popular platforms for mobile applications. To attract more users, many software companies and organizations migrate their applications from one platform to the other, and besides source files, they also need to migrate their GUI tests. The migration of GUI tests is tedious and difficult to automate, since the two platforms have subtle differences and there are often few or even no migrated GUI tests to learn from. To address the problem, in this paper we propose a novel approach, TestMig, that migrates GUI tests from iOS to Android without any migrated code samples. Specifically, TestMig first executes the GUI tests of the iOS version and records their GUI event sequences. Guided by the iOS GUI events, TestMig explores the Android version of the application to generate the corresponding Android event sequences. We conducted an evaluation on five well-known mobile applications: 2048, SimpleNote, Wire, Wikipedia, and WordPress. The results show that, on average, TestMig correctly converts 80.2% of recorded iOS UI events to Android UI events and has them successfully executed, and our migrated Android test cases achieve statement coverage similar to the original iOS test cases (59.7% vs. 60.4%).
@InProceedings{ISSTA19p284,
author = {Xue Qin and Hao Zhong and Xiaoyin Wang},
title = {TestMig: Migrating GUI Test Cases from iOS to Android},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {284--295},
doi = {10.1145/3293882.3330575},
year = {2019},
}
Publisher's Version
Learning User Interface Element Interactions
Christian Degott, Nataniel P. Borges Jr., and Andreas Zeller
(CISPA, Germany)
When generating tests for graphical user interfaces, one central problem is to identify how individual UI elements can be interacted with—clicking, long- or right-clicking, swiping, dragging, typing, or more. We present an approach based on reinforcement learning that automatically learns which interactions can be used for which elements, and uses this information to guide test generation. We model the problem as an instance of the multi-armed bandit problem (MAB problem) from probability theory, and show how its traditional solutions work on test generation, with and without relying on previous knowledge. The resulting guidance yields higher coverage. In our evaluation, our approach shows improvements in statement coverage between 18% (when not using any previous knowledge) and 20% (when reusing previously generated models).
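A minimal epsilon-greedy solution to the multi-armed bandit formulation above might look as follows. This is our sketch, not the paper's learner: each "arm" is an interaction type for a UI element, the reward is whether the interaction triggered a reaction, and the widget below is a hypothetical stand-in.

import random

class InteractionBandit:
    def __init__(self, actions=("click", "long_click", "swipe", "type")):
        self.counts = {a: 0 for a in actions}
        self.values = {a: 0.0 for a in actions}  # estimated success rates

    def choose(self, epsilon=0.1):
        if random.random() < epsilon:                 # explore
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)  # exploit best estimate

    def update(self, action, reward):
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (reward - self.values[action]) / n  # running mean

# Hypothetical widget that reacts only to clicks, 80% of the time.
def try_on_widget(action):
    return 1.0 if action == "click" and random.random() < 0.8 else 0.0

bandit = InteractionBandit()
for _ in range(500):
    action = bandit.choose()
    bandit.update(action, try_on_widget(action))
print(max(bandit.values, key=bandit.values.get))  # -> 'click' (almost surely)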
@InProceedings{ISSTA19p296,
author = {Christian Degott and Nataniel P. Borges Jr. and Andreas Zeller},
title = {Learning User Interface Element Interactions},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {296--306},
doi = {10.1145/3293882.3330569},
year = {2019},
}
Publisher's Version
Improving Random GUI Testing with Image-Based Widget Detection
Thomas D. White,
Gordon Fraser, and Guy J. Brown
(University of Sheffield, UK; University of Passau, Germany)
Graphical User Interfaces (GUIs) are amongst the most common user interfaces, enabling interactions with applications through mouse movements and key presses. Tools for automated testing of programs through their GUI exist; however, they usually rely on operating system or framework specific knowledge to interact with an application. Due to frequent operating system updates, which can remove required information, and a large variety of different GUI frameworks using unique underlying data structures, such tools rapidly become obsolete. Consequently, for an automated GUI test generation tool, supporting many frameworks and operating systems is impractical. We propose a technique for improving GUI testing by automatically identifying GUI widgets in screen shots using machine learning techniques. As training data, we generate randomized GUIs to automatically extract widget information. The resulting model provides guidance to GUI testing tools in environments not currently supported by deriving GUI widget information from screen shots only. In our experiments, we found that identifying GUI widgets in screen shots and using this information to guide random testing achieved a significantly higher branch coverage in 18 of 20 applications, with an average increase of 42.5% when compared to conventional random testing.
@InProceedings{ISSTA19p307,
author = {Thomas D. White and Gordon Fraser and Guy J. Brown},
title = {Improving Random GUI Testing with Image-Based Widget Detection},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {307--317},
doi = {10.1145/3293882.3330551},
year = {2019},
}
Publisher's Version
Automatically Testing Self-Driving Cars with Search-Based Procedural Content Generation
Alessio Gambi, Marc Mueller, and
Gordon Fraser
(University of Passau, Germany; BeamNG, Germany)
Self-driving cars rely on software which needs to be thoroughly tested. Testing self-driving car software in real traffic is not only expensive but also dangerous, and has already caused fatalities. Virtual tests, in which self-driving car software is tested in computer simulations, offer a more efficient and safer alternative compared to naturalistic field operational tests. However, creating suitable test scenarios is laborious and difficult. In this paper we combine procedural content generation, a technique commonly employed in modern video games, and search-based testing, a testing technique proven to be effective in many domains, in order to automatically create challenging virtual scenarios for testing self-driving car software. Our AsFault prototype implements this approach to generate virtual roads for testing lane keeping, one of the defining features of autonomous driving. Evaluation on two different self-driving car software systems demonstrates that AsFault can generate effective virtual road networks that succeed in revealing software failures, which manifest as cars departing their lane. Compared to random testing, AsFault was not only more efficient, but also caused up to twice as many lane departures.
@InProceedings{ISSTA19p318,
author = {Alessio Gambi and Marc Mueller and Gordon Fraser},
title = {Automatically Testing Self-Driving Cars with Search-Based Procedural Content Generation},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {318--328},
doi = {10.1145/3293882.3330566},
year = {2019},
}
Publisher's Version
Potpourri
Semantic Fuzzing with Zest
Rohan Padhye, Caroline Lemieux, Koushik Sen,
Mike Papadakis, and Yves Le Traon
(University of California at Berkeley, USA; University of Luxembourg, Luxembourg)
Programs expecting structured inputs often consist of both a syntactic analysis stage, which parses raw input, and a semantic analysis stage, which conducts checks on the parsed input and executes the core logic of the program. Generator-based testing tools in the lineage of QuickCheck are a promising way to generate random syntactically valid test inputs for these programs. We present Zest, a technique which automatically guides QuickCheck-like random input generators to better explore the semantic analysis stage of test programs. Zest converts random-input generators into deterministic parametric input generators. We present the key insight that mutations in the untyped parameter domain map to structural mutations in the input domain. Zest leverages program feedback in the form of code coverage and input validity to perform feedback-directed parameter search. We evaluate Zest against AFL and QuickCheck on five Java programs: Maven, Ant, BCEL, Closure, and Rhino. Zest covers 1.03x-2.81x as many branches within the benchmarks' semantic analysis stages as baseline techniques. Further, we find 10 new bugs in the semantic analysis stages of these benchmarks. Zest is the most effective technique in finding these bugs reliably and quickly, requiring at most 10 minutes on average to find each bug.
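Zest's key idea—drawing all of a generator's random choices from a finite parameter sequence, so that byte-level mutations of the parameters map to structural mutations of the input—can be pictured with a toy generator. The example is ours in Python; Zest itself instruments Java QuickCheck-style generators.

import random

def generate_identifier(params):
    """Parametric generator: consumes 'params' bytes to build an identifier."""
    rng = iter(params)
    length = next(rng) % 8 + 1
    alphabet = "abcdefghijklmnopqrstuvwxyz_"
    return "".join(alphabet[next(rng) % len(alphabet)] for _ in range(length))

def mutate(params):
    out = bytearray(params)
    out[random.randrange(len(out))] = random.randrange(256)  # flip one byte
    return bytes(out)

params = bytes(random.randrange(256) for _ in range(16))
print("seed input:   ", generate_identifier(params))
print("mutated input:", generate_identifier(mutate(params)))
# Every mutant is still a syntactically valid identifier, so fuzzing explores
# the semantic analysis stage instead of getting stuck in the parser.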
@InProceedings{ISSTA19p329,
author = {Rohan Padhye and Caroline Lemieux and Koushik Sen and Mike Papadakis and Yves Le Traon},
title = {Semantic Fuzzing with Zest},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {329--340},
doi = {10.1145/3293882.3330576},
year = {2019},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
Artifacts Reusable
Detecting Memory Errors at Runtime with Source-Level Instrumentation
Zhe Chen, Junqi Yan, Shuanglong Kan, Ju Qian, and Jingling Xue
(Nanjing University of Aeronautics and Astronautics, China; UNSW, Australia)
The unsafe language features of C, such as low-level control of memory, often lead to memory errors, which can result in silent data corruption, security vulnerabilities, and program crashes. Dynamic analysis tools, which have been widely used for detecting memory errors at runtime, usually perform instrumentation at the IR-level or binary-level. However, their underlying non-source-level instrumentation techniques have three inherent limitations: optimization sensitivity, platform dependence and DO-178C non-compliance. Due to optimization sensitivity, these tools are used to trade either performance for effectiveness by compiling the program at -O0 or effectiveness for performance by compiling the program at a higher optimization level, say, -O3.
In this paper, we overcome these three limitations by proposing a new source-level instrumentation technique and implementing it in a new dynamic analysis tool, called MOVEC, in a pointer-based instrumentation framework. Validation against a set of 86 microbenchmarks (with ground truth) and a set of 10 MiBench benchmarks shows that MOVEC outperforms the state-of-the-art tools SoftBoundCETS, Google's AddressSanitizer, and Valgrind when effectiveness and performance are considered together.
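Since C's pointer semantics have no direct Java analogue, the following is only a conceptual sketch of the runtime mechanism such instrumentation relies on: every allocation registers its bounds in a shadow structure, and every instrumented access is checked against them. Names are invented; this is not MOVEC's API.

import java.util.HashMap;
import java.util.Map;

// Simulated C heap with integer "addresses"; the shadow map records each
// live block's base address and size so accesses can be validated.
public class ShadowHeap {
    private final Map<Integer, Integer> live = new HashMap<>(); // base -> size
    private int next = 1000;

    int malloc(int size) { int base = next; live.put(base, size); next += size; return base; }

    void free(int base) {
        if (live.remove(base) == null)
            throw new IllegalStateException("invalid/double free at " + base);
    }

    // The check an instrumented load/store performs before dereferencing.
    void checkAccess(int base, int offset) {
        Integer size = live.get(base);
        if (size == null) throw new IllegalStateException("use-after-free at " + base);
        if (offset < 0 || offset >= size)
            throw new IllegalStateException("out-of-bounds: offset " + offset + " in block of size " + size);
    }

    public static void main(String[] args) {
        ShadowHeap h = new ShadowHeap();
        int p = h.malloc(8);
        h.checkAccess(p, 7);            // ok: last valid byte
        try { h.checkAccess(p, 8); }    // off-by-one overflow, caught at runtime
        catch (IllegalStateException e) { System.out.println(e.getMessage()); }
        h.free(p);
    }
}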
@InProceedings{ISSTA19p341,
author = {Zhe Chen and Junqi Yan and Shuanglong Kan and Ju Qian and Jingling Xue},
title = {Detecting Memory Errors at Runtime with Source-Level Instrumentation},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {341--351},
doi = {10.1145/3293882.3330581},
year = {2019},
}
Publisher's Version
Optimal Context-Sensitive Dynamic Partial Order Reduction with Observers
Elvira Albert,
Maria Garcia de la Banda, Miguel Gómez-Zamalloa, Miguel Isabel, and
Peter J. Stuckey
(Complutense University of Madrid, Spain; Monash University, Australia)
Dynamic Partial Order Reduction (DPOR) algorithms are used in stateless model checking to avoid the exploration of equivalent execution sequences. DPOR relies on the notion of independence between execution steps to detect equivalence. Recent progress in the area has introduced more accurate ways to detect independence: Context-Sensitive DPOR considers two steps p and t independent in the current state if the states obtained by executing p · t and t · p are the same; Optimal DPOR with Observers makes their dependency conditional to the existence of future events that observe their operations. We introduce a new algorithm, Optimal Context-Sensitive DPOR with Observers, that combines these two notions of conditional independence, and goes beyond them by exploiting their synergies. Experimental evaluation shows that our gains increase exponentially with the size of the considered inputs.
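In the notation common to the DPOR literature, the two notions of conditional independence combined here can be stated roughly as follows (a paraphrase, not the paper's exact formalization):

% Context-sensitive independence: p and t commute in the current state s,
% i.e., both interleavings are enabled and reach the same state.
\mathit{indep}(p, t, s) \iff
  s \xrightarrow{\,p \cdot t\,} s_1 \;\wedge\;
  s \xrightarrow{\,t \cdot p\,} s_2 \;\wedge\; s_1 = s_2

% Independence with observers: a race between p and t matters only if some
% later event o observes a value written by p or t.
\mathit{dep}(p, t) \iff
  \exists\, o \in \mathit{future}(p, t) :\;
  \mathit{observes}\bigl(o,\, \mathit{writes}(p) \cup \mathit{writes}(t)\bigr)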
@InProceedings{ISSTA19p352,
author = {Elvira Albert and Maria Garcia de la Banda and Miguel Gómez-Zamalloa and Miguel Isabel and Peter J. Stuckey},
title = {Optimal Context-Sensitive Dynamic Partial Order Reduction with Observers},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {352--362},
doi = {10.1145/3293882.3330565},
year = {2019},
}
Publisher's Version
Exploiting the Laws of Order in Smart Contracts
Aashish Kolluri,
Ivica Nikolic,
Ilya Sergey, Aquinas Hobor, and
Prateek Saxena
(National University of Singapore, Singapore; Yale-NUS College, Singapore)
We investigate a family of bugs in blockchain-based smart contracts, which we dub event-ordering (or EO) bugs. These bugs are intimately related to the dynamic ordering of contract events, i.e., calls of a contract's functions, and enable potential exploits of millions of USD worth of crypto-coins. Previous techniques to detect EO bugs have been restricted to those bugs that involve just one or two event orderings. Our work provides a new formulation of the general class of EO bugs arising in long permutations of such events by using techniques from concurrent program analysis.
The technical challenge in detecting EO bugs in blockchain contracts is the inherent combinatorial blowup in path and state space analysis, even for simple contracts. We propose the first use of partial-order reduction techniques, using automatically extracted happens-before relations along with several dynamic symbolic execution optimizations. We build EthRacer, an automatic analysis tool that runs directly on Ethereum bytecode and requires no hints from users. It flags 8% of over 10,000 contracts analyzed, providing compact event traces (witnesses) that human analysts can examine in only a few minutes per contract. More than half of the flagged contracts are likely to have unintended behaviour.
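A toy Java illustration of what an EO bug looks like (this is not EthRacer's algorithm, which explores orderings symbolically over EVM bytecode with partial-order reduction; the "contract" below is invented):

import java.util.List;

public class EventOrdering {
    // A miniature "contract": the owner sets a price; buyers pay it.
    static class Sale {
        int price = 100; int paid = 0;
        void setPrice(int p) { price = p; }
        void buy() { paid += price; }
    }

    interface Event { void apply(Sale s); String name(); }

    public static void main(String[] args) {
        Event setPrice = new Event() {
            public void apply(Sale s) { s.setPrice(1); }
            public String name() { return "setPrice(1)"; }
        };
        Event buy = new Event() {
            public void apply(Sale s) { s.buy(); }
            public String name() { return "buy()"; }
        };
        // Explore both orderings of the same two events, compare final states.
        for (List<Event> trace : List.of(List.of(setPrice, buy), List.of(buy, setPrice))) {
            Sale s = new Sale();
            StringBuilder names = new StringBuilder();
            for (Event e : trace) { e.apply(s); names.append(e.name()).append(' '); }
            System.out.println(names + "-> paid = " + s.paid);
        }
        // The outcomes differ (1 vs. 100): what the buyer pays depends on
        // transaction ordering, which miners control -- the essence of an EO bug.
    }
}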
@InProceedings{ISSTA19p363,
author = {Aashish Kolluri and Ivica Nikolic and Ilya Sergey and Aquinas Hobor and Prateek Saxena},
title = {Exploiting the Laws of Order in Smart Contracts},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {363--373},
doi = {10.1145/3293882.3330560},
year = {2019},
}
Publisher's Version
Tool Demonstration
Go-Clone: Graph-Embedding Based Clone Detector for Golang
Cong Wang, Jian Gao,
Yu Jiang,
Zhenchang Xing, Huafeng Zhang, Weiliang Yin, Ming Gu, and Jiaguang Sun
(Tsinghua University, China; Australian National University, Australia; Huawei Technologies, China)
Golang (short for Go programming language) is a fast, compiled language that has been increasingly used in industry due to its excellent performance on concurrent programming. Golang redefines the grammar of concurrent programming, making it a challenge for traditional clone detection tools and techniques. However, there exist few tools for detecting duplicates or copy-paste-related bugs in Golang. Therefore, an effective and efficient code clone detector for Golang is especially needed.
In this paper, we present Go-Clone, a learning-based clone detector for Golang. Go-Clone contains two modules -- the training module and the user interaction module. In the training module, we first parse Golang source code into LLVM IR (Intermediate Representation). We then automatically compute an LSFG (labeled semantic flow graph) for each program function. Go-Clone trains a deep neural network model to encode LSFGs for similarity classification. In the user interaction module, users can choose one or more Golang projects. Go-Clone identifies and presents a list of function pairs, which are most likely clone code, for user inspection. To evaluate Go-Clone's performance, we collect 6,110 commit versions from 48 GitHub projects to construct a Golang clone detection data set. Go-Clone achieves an AUC (Area Under Curve) of 89.61% and an accuracy (ACC) of 83.80% in clone detection. By testing several groups of unfamiliar data, we also demonstrate the generality of Go-Clone. The demo video is available at: https://youtu.be/o5DogtYGbeo
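The final classification step of an embedding-based clone detector can be sketched as follows in Java; the encoder is stubbed out, and the names are hypothetical rather than Go-Clone's API.

public class CloneSimilarity {
    // Stand-in for the learned encoder: a real system would run the trained
    // DNN over the function's LSFG to produce this fixed-length vector.
    static double[] encode(String functionId) {
        double[] v = new double[8];
        for (int i = 0; i < v.length; i++)
            v[i] = Math.sin(functionId.hashCode() * (i + 1) * 0.1); // fake embedding
        return v;
    }

    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        // Clone candidates are pairs whose embeddings are close.
        double sim = cosine(encode("pkg.Foo"), encode("pkg.FooCopy"));
        System.out.println(sim > 0.95 ? "likely clone" : "not a clone");
    }
}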
@InProceedings{ISSTA19p374,
author = {Cong Wang and Jian Gao and Yu Jiang and Zhenchang Xing and Huafeng Zhang and Weiliang Yin and Ming Gu and Jiaguang Sun},
title = {Go-Clone: Graph-Embedding Based Clone Detector for Golang},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {374--377},
doi = {10.1145/3293882.3338996},
year = {2019},
}
Publisher's Version
VFQL: Combinational Static Analysis as Query Language
Guang Chen, Yuexing Wang, Min Zhou, and Jiaguang Sun
(Tsinghua University, China)
Value flows are widely used in static analysis to detect bugs. Existing techniques usually employ a pointer analysis and generate source-sink summaries defined by the problem domain; a solver is then invoked to determine whether the path is feasible. However, most tools do not provide an easy way for users to find user-defined bugs within the same architecture used for finding pre-defined bugs. This paper presents VFQL, an expressive query language over value flow graphs, together with the framework that executes queries to find user-defined defects. Moreover, VFQL provides a GUI to visualize the value flow graph and a modeling language to define system or user libraries without their code, which further enhances its usability. Experimental results on open benchmarks show that VFQL achieves competitive performance against other state-of-the-art tools. A case study conducted on open-source programs shows that the flexible query and modeling languages provide great support for finding user-specified defects.
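The underlying question such queries answer is reachability over the value flow graph from a user-chosen source to a user-chosen sink. A minimal Java sketch of that core (the graph API is invented; VFQL exposes this as a query language, and a real engine would additionally check path feasibility with a solver):

import java.util.*;

public class ValueFlowQuery {
    private final Map<String, List<String>> edges = new HashMap<>();

    void flow(String from, String to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // BFS reachability over value-flow edges; a production engine would also
    // ask an SMT solver whether the path's branch conditions are satisfiable.
    boolean flowsTo(String source, String sink) {
        Deque<String> work = new ArrayDeque<>(List.of(source));
        Set<String> seen = new HashSet<>();
        while (!work.isEmpty()) {
            String n = work.poll();
            if (n.equals(sink)) return true;
            if (seen.add(n)) work.addAll(edges.getOrDefault(n, List.of()));
        }
        return false;
    }

    public static void main(String[] args) {
        ValueFlowQuery g = new ValueFlowQuery();
        g.flow("getenv()", "buf");   // tainted source flows into buf
        g.flow("buf", "strcpy()");   // buf reaches an unchecked sink
        System.out.println("user-defined defect found: " + g.flowsTo("getenv()", "strcpy()"));
    }
}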
@InProceedings{ISSTA19p378,
author = {Guang Chen and Yuexing Wang and Min Zhou and Jiaguang Sun},
title = {VFQL: Combinational Static Analysis as Query Language},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {378--381},
doi = {10.1145/3293882.3338997},
year = {2019},
}
Publisher's Version
VBSAC: A Value-Based Static Analyzer for C
Chi Li,
Min Zhou, Zuxing Gu, Guang Chen, Yuexing Wang, Jiecheng Wu, and Ming Gu
(Tsinghua University, China)
Static analysis has long prevailed as a promising approach to detect program bugs at an early stage of the development process and thereby increase software quality. However, such tools face great challenges in balancing the false-positive rate and the false-negative rate in practical use. In this paper, we present VBSAC, a value-based static analyzer for C aiming to improve precision and recall. In our tool, we employ a pluggable value-based analysis strategy. A memory skeleton recorder is designed to maintain the memory objects as a baseline. While traversing the control flow graph, diverse value-based plug-ins analyze their specific abstract domains and share program information to strengthen the computation. Simultaneously, checkers consume the corresponding analysis results to detect bugs. We also provide a user-friendly web interface to help users audit the bug detection results. Evaluation on two widely-used benchmarks shows that VBSAC outperforms state-of-the-art bug detection tools, finding 221-339 more bugs and improving the F-score by 9.88%-40.32%.
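A pluggable value-analysis architecture of this kind might look like the following Java sketch; the interface and the two toy plug-ins are hypothetical illustrations, not VBSAC's actual design.

import java.util.List;

interface ValuePlugin {
    // Transfer function: update this plug-in's abstract state for one statement.
    void transfer(String statement);
    // Let checkers consume this plug-in's result at a potential bug site.
    boolean mayBeBug(String statement);
}

class IntervalPlugin implements ValuePlugin {
    public void transfer(String statement) { /* update [lo, hi] intervals */ }
    public boolean mayBeBug(String s) { return s.contains("a[i]"); } // toy check
}

class NullnessPlugin implements ValuePlugin {
    public void transfer(String statement) { /* track may-be-null facts */ }
    public boolean mayBeBug(String s) { return s.contains("p->"); }  // toy check
}

public class Analyzer {
    public static void main(String[] args) {
        List<ValuePlugin> plugins = List.of(new IntervalPlugin(), new NullnessPlugin());
        // Walk the control flow graph once, running all domains in lockstep.
        for (String stmt : List.of("i = n + 1;", "x = a[i];", "y = p->f;")) {
            for (ValuePlugin p : plugins) {
                p.transfer(stmt);
                if (p.mayBeBug(stmt))
                    System.out.println(p.getClass().getSimpleName() + " flags: " + stmt);
            }
        }
    }
}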
@InProceedings{ISSTA19p382,
author = {Chi Li and Min Zhou and Zuxing Gu and Guang Chen and Yuexing Wang and Jiecheng Wu and Ming Gu},
title = {VBSAC: A Value-Based Static Analyzer for C},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {382--385},
doi = {10.1145/3293882.3338998},
year = {2019},
}
Publisher's Version
SAFEVM: A Safety Verifier for Ethereum Smart Contracts
Elvira Albert,
Jesús Correas,
Pablo Gordillo, Guillermo Román-Díez, and Albert Rubio
(Complutense University of Madrid, Spain; Universidad Politécnica de Madrid, Spain)
Ethereum smart contracts are public, immutable and distributed and, as such, they are prone to vulnerabilities stemming from programming mistakes of developers. This paper presents SAFEVM, a verification tool for Ethereum smart contracts that makes use of state-of-the-art verification engines for C programs. SAFEVM takes as input an Ethereum smart contract (provided either in Solidity source code, or in compiled EVM bytecode), optionally with assert and require verification annotations, and produces as output a report with the verification results. Besides general safety annotations, SAFEVM handles the verification of array accesses: it automatically generates SV-COMP verification assertions such that C verification engines can prove safety of array accesses. Our experimental evaluation has been undertaken on all contracts pulled from etherscan.io (more than 24,000) by using as back-end verifiers CPAchecker, SeaHorn and VeryMax.
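The flavor of the generated checks, transliterated into Java for illustration (SAFEVM actually emits C with SV-COMP-style assertions, i.e., calls to an error function that the back-end verifier must prove unreachable; the example below is ours, not the tool's output):

public class ArraySafety {
    static int get(int[] a, int i) {
        // Auto-generated safety assertion: the index is within bounds.
        // (Run with `java -ea` to enable assertions.)
        assert 0 <= i && i < a.length : "potential out-of-bounds access";
        return a[i];
    }

    public static void main(String[] args) {
        int[] balances = new int[4];
        System.out.println(get(balances, 3)); // provably safe access
        // get(balances, 4) would violate the assertion; a verifier reports
        // such a violation statically instead of waiting for a runtime failure.
    }
}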
@InProceedings{ISSTA19p386,
author = {Elvira Albert and Jesús Correas and Pablo Gordillo and Guillermo Román-Díez and Albert Rubio},
title = {SAFEVM: A Safety Verifier for Ethereum Smart Contracts},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {386--389},
doi = {10.1145/3293882.3338999},
year = {2019},
}
Publisher's Version
CoCoTest: Collaborative Crowdsourced Testing for Android Applications
Haoyu Li,
Chunrong Fang, Zhibin Wei, and
Zhenyu Chen
(Nanjing University, China)
Testing Android applications is becoming more and more challenging due to the notorious fragmentation issues and the complexity of usage scenarios in different environments. Crowdsourced testing has grown as a trend, especially in mobile application testing.
However, due to the lack of professionalism and communication, the crowd workers tend to submit low-quality and duplicate bug reports, leading to a waste of test resources on inspecting and aggregating such reports.
To solve these problems, we developed a platform, CoCoTest, embracing the idea of collective intelligence. With the help of the CoCoTest Android SDK, workers can efficiently capture a screenshot, write a short description, and create a bug report. A series of bug reports are aggregated online and then recommended to the other workers in real time. The crowdsourced workers can (1) help review, verify, and enrich each other's bug reports; (2) avoid duplicate bug reports; (3) be guided to conduct more professional testing with the help of collective intelligence. CoCoTest can improve the quality of the final report and reduce test costs.
The demo video can be found at https://youtu.be/PuVuPbNP4tY.
@InProceedings{ISSTA19p390,
author = {Haoyu Li and Chunrong Fang and Zhibin Wei and Zhenyu Chen},
title = {CoCoTest: Collaborative Crowdsourced Testing for Android Applications},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {390--393},
doi = {10.1145/3293882.3339000},
year = {2019},
}
Publisher's Version
Video
Androlic: An Extensible Flow, Context, Object, Field, and Path-Sensitive Static Analysis Framework for Android
Linjie Pan, Baoquan Cui,
Jiwei Yan,
Xutong Ma,
Jun Yan, and
Jian Zhang
(Institute of Software at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Peking University, China)
Static analysis is widely used to detect potential defects in apps. Existing analysis tools focus on specific problems and vary in supported sensitivity, which makes them difficult to reuse and extend for new analysis tasks. This paper presents Androlic, a precise static analysis framework for Android which is flow, context, object, field and path-sensitive. Through the configuration items and APIs provided by Androlic, developers can easily extend it to perform custom analysis tasks. Evaluation on an example program and 20 real-world apps shows that Androlic can analyze apps with high precision and efficiency.
@InProceedings{ISSTA19p394,
author = {Linjie Pan and Baoquan Cui and Jiwei Yan and Xutong Ma and Jun Yan and Jian Zhang},
title = {Androlic: An Extensible Flow, Context, Object, Field, and Path-Sensitive Static Analysis Framework for Android},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {394--397},
doi = {10.1145/3293882.3339001},
year = {2019},
}
Publisher's Version
JQF: Coverage-Guided Property-Based Testing in Java
Rohan Padhye, Caroline Lemieux, and Koushik Sen
(University of California at Berkeley, USA)
We present JQF, a platform for performing coverage-guided fuzz testing in Java. JQF is designed both for practitioners, who wish to find bugs in Java programs, as well as for researchers, who wish to implement new fuzzing algorithms.
Practitioners write QuickCheck-style test methods that take inputs as formal parameters. JQF instruments the test program's bytecode and continuously executes tests using inputs that are generated in a coverage-guided fuzzing loop. JQF's input-generation mechanism is extensible. Researchers can implement custom fuzzing algorithms by extending JQF's Guidance interface. A Guidance instance responds to code coverage events generated during the execution of a test case, such as function calls and conditional jumps, and provides the next input. We describe several guidances that currently ship with JQF, such as: semantic fuzzing with Zest, binary fuzzing with AFL, and complexity fuzzing with PerfFuzz.
JQF is a mature tool that is open-source and publicly available. At the time of writing, JQF has been successful in discovering 42 previously unknown bugs in widely used open-source software such as OpenJDK, Apache Commons, and the Google Closure Compiler.
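Based on JQF's documented JUnit integration (package names as of the paper's publication; the current README is authoritative if they have since moved), a test looks like an ordinary parameterized JUnit method:

import java.util.HashMap;
import java.util.Map;
import org.junit.Assert;
import org.junit.Assume;
import org.junit.runner.RunWith;
import edu.berkeley.cs.jqf.fuzz.Fuzz;
import edu.berkeley.cs.jqf.fuzz.JQF;

// JQF generates the method arguments in a coverage-guided loop, driven by the
// configured Guidance (e.g., Zest or AFL). Violated assumptions skip an input;
// assertion failures are reported as bugs together with the witness input.
@RunWith(JQF.class)
public class MapFuzzTest {

    @Fuzz
    public void keyRoundTrips(Map<String, Integer> map, String key) {
        Assume.assumeTrue(map.containsKey(key));
        // Property under test: copying the map preserves the looked-up entry.
        Map<String, Integer> copy = new HashMap<>(map);
        Assert.assertEquals(map.get(key), copy.get(key));
    }
}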
@InProceedings{ISSTA19p398,
author = {Rohan Padhye and Caroline Lemieux and Koushik Sen},
title = {JQF: Coverage-Guided Property-Based Testing in Java},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {398--401},
doi = {10.1145/3293882.3339002},
year = {2019},
}
Publisher's Version
Ukwikora: Continuous Inspection for Keyword-Driven Testing
Renaud Rwemalika, Marinos Kintis,
Mike Papadakis, Yves Le Traon, and Pierre Lorrach
(University of Luxembourg, Luxembourg; BGL BNP Paribas, Luxembourg)
Automation of acceptance test suites becomes necessary in the context of agile software development practices, which require rapid feedback on the quality of code changes. To this end, companies try to automate their acceptance tests as much as possible. Unfortunately, the growth of the automated test suites, written by several automation testers, gives rise to potential test smells, i.e., poorly designed test code, being introduced into the test code base, which in turn may increase the cost of maintaining the code and creating new code. In this paper, we investigate this problem in the context of our industrial partner, BGL BNP Paribas, and introduce Ukwikora, an automated tool that statically analyzes acceptance test suites, enabling the continuous inspection of the test code base. Ukwikora targets code written in the Robot Framework syntax, a popular framework for writing Keyword-Driven tests. Ukwikora has been successfully deployed at BGL BNP Paribas, detecting issues otherwise unknown to the automation testers, such as the presence of duplicated test code, dead test code, and dependency issues among the tests. The success of our case study reinforces the need for additional research and tooling for acceptance test suites.
@InProceedings{ISSTA19p402,
author = {Renaud Rwemalika and Marinos Kintis and Mike Papadakis and Yves Le Traon and Pierre Lorrach},
title = {Ukwikora: Continuous Inspection for Keyword-Driven Testing},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {402--405},
doi = {10.1145/3293882.3339003},
year = {2019},
}
Publisher's Version
CTRAS: A Tool for Aggregating and Summarizing Crowdsourced Test Reports
Yuying Li, Rui Hao, Yang Feng,
James A. Jones,
Xiaofang Zhang, and Zhenyu Chen
(Nanjing University, China; University of California at Irvine, USA; Soochow University, China; Mooctest, China)
In this paper, we present CTRAS, a tool for automatically aggregating and summarizing duplicate crowdsourced test reports on the fly. CTRAS can automatically detect duplicates based on both textual information and screenshots, and further aggregates and summarizes the duplicate test reports. CTRAS provides end users with a comprehensive and comprehensible understanding of all duplicates by identifying the main topics across the group of aggregated test reports and highlighting supplementary topics that are mentioned in subgroups of test reports. Also, it provides the classic tools of issue tracking systems, such as the project-report dashboard and keyword searching, and automates their classic functionalities, such as bug triaging and best-fixer recommendation, to assist end users in managing and diagnosing test reports.
Video: https://youtu.be/PNP10gKIPFs
@InProceedings{ISSTA19p406,
author = {Yuying Li and Rui Hao and Yang Feng and James A. Jones and Xiaofang Zhang and Zhenyu Chen},
title = {CTRAS: A Tool for Aggregating and Summarizing Crowdsourced Test Reports},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {406--409},
doi = {10.1145/3293882.3339004},
year = {2019},
}
Publisher's Version
Doctoral Symposium
Continuous Software Performance Assessment: Detecting Performance Problems of Software Libraries on Every Build
Christoph Laaber
(University of Zurich, Switzerland)
Degradation of software performance can become costly for companies and developers, yet it is hardly ever assessed continuously. A strategy that would allow continuous performance assessment of software libraries is software microbenchmarking, which faces problems such as excessive execution times and unreliable results that hinder its widespread adoption in continuous integration. In my research, I want to develop techniques that allow including software microbenchmarks in continuous integration by utilizing cloud infrastructure and execution-time reduction techniques. These will allow assessing performance on every build and therefore catching performance problems before they are released into the wild.
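For reference, the kind of software microbenchmark at issue is typically written with JMH, the standard harness for Java libraries; the benchmark subject below is our own illustrative choice.

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

// One JMH microbenchmark: in CI, the difficulty is that hundreds of these,
// each needing warmup iterations and repeated measurements for stable results,
// quickly exceed a build's time budget -- hence the proposed reduction techniques.
@State(Scope.Benchmark)
public class StringJoinBenchmark {
    String[] parts;

    @Setup
    public void setup() {
        parts = new String[1000];
        for (int i = 0; i < parts.length; i++) parts[i] = "part" + i;
    }

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public String join() {
        return String.join(",", parts);
    }
}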
@InProceedings{ISSTA19p410,
author = {Christoph Laaber},
title = {Continuous Software Performance Assessment: Detecting Performance Problems of Software Libraries on Every Build},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {410--414},
doi = {10.1145/3293882.3338982},
year = {2019},
}
Publisher's Version
Mining Constraints for Grammar Fuzzing
Michaël Mera
(CISPA, Germany)
Grammar-based fuzzing has been shown to significantly improve bug detection in programs with highly structured inputs. However, since grammars are largely handwritten, it is rarely used as a standalone technique in large-spectrum fuzzers, as it requires human expertise. To fill this gap, promising techniques are beginning to emerge that automate the extraction of context-free grammars directly from the program under test. Unfortunately, the resulting grammars are usually not expressive enough and generate too many wrong inputs to provide results capable of competing with other fuzzing techniques. In this paper, we propose a technique to automate the creation of attribute grammars from context-free grammars, thus significantly lowering the barrier of entry for efficient and effective large-scale grammar-based fuzzing.
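Why attributes matter can be shown with a tiny Java sketch over an invented grammar: a plain context-free generator for assignments happily emits uses of undefined variables, which the program under test rejects in its semantic checks, whereas threading an attribute (the set of defined names) through generation keeps inputs semantically valid.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class AttributeGrammarFuzz {
    static final Random RND = new Random(1);

    static String genProgram(int stmts) {
        List<String> defined = new ArrayList<>(List.of("x")); // inherited attribute
        StringBuilder out = new StringBuilder("x = 0;\n");
        for (int i = 0; i < stmts; i++) {
            String lhs = "v" + i;
            // Mined constraint: the right-hand side must already be defined.
            String rhs = defined.get(RND.nextInt(defined.size()));
            out.append(lhs).append(" = ").append(rhs).append(";\n");
            defined.add(lhs); // synthesized attribute update
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(genProgram(4)); // every generated program type-checks
    }
}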
@InProceedings{ISSTA19p415,
author = {Michaël Mera},
title = {Mining Constraints for Grammar Fuzzing},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {415--418},
doi = {10.1145/3293882.3338983},
year = {2019},
}
Publisher's Version
A New Dimension of Test Quality: Assessing and Generating Higher Quality Unit Test Cases
Giovanni Grano
(University of Zurich, Switzerland)
Unit tests form the first defensive line against the introduction of bugs in software systems. Therefore, their quality is of paramount importance for producing robust and reliable software. To assess test quality, many organizations rely on metrics like code and mutation coverage. However, these are not always optimal for such a purpose. In my research, I want to make mutation testing scalable by devising a lightweight approach to estimate test effectiveness. Moreover, I plan to introduce a new metric measuring test focus (a proxy for the effort needed by developers to understand and maintain a test) that both complements code coverage in assessing test quality and can be used to drive the automated generation of higher-quality test cases.
@InProceedings{ISSTA19p419,
author = {Giovanni Grano},
title = {A New Dimension of Test Quality: Assessing and Generating Higher Quality Unit Test Cases},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {419--423},
doi = {10.1145/3293882.3338984},
year = {2019},
}
Publisher's Version
A Cost-Effective Strategy for Software Vulnerability Prediction Based on Bellwether Analysis
Patrick Kwaku Kudjo and Jinfu Chen
(Jiangsu University, China)
Vulnerability Prediction Models (VPMs) aim to identify vulnerable and non-vulnerable components in large software systems. However, VPMs present three major drawbacks: (i) finding an effective method to identify a representative set of features from which to construct an effective model; (ii) the way the features are utilized in the machine learning setup; and (iii) the implicit assumption that parameter optimization would not change the outcome of VPMs. To address these limitations, we investigate the effect of Bellwether analysis on VPMs. Specifically, we first develop a Bellwether algorithm to identify and select an exemplary subset of data to be considered as the Bellwether, to yield improved prediction accuracy against the growing-portfolio benchmark. Next, we build a machine learning approach with different parameter settings to show the improvement in performance of VPMs. The prediction results of the suggested models were assessed in terms of precision, recall, F-measure, and other statistical measures. Preliminary results show that the Bellwether approach outperforms the benchmark technique across the studied applications, with F-measure values ranging from 51.1% to 98.5%.
@InProceedings{ISSTA19p424,
author = {Patrick Kwaku Kudjo and Jinfu Chen},
title = {A Cost-Effective Strategy for Software Vulnerability Prediction Based on Bellwether Analysis},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {424--427},
doi = {10.1145/3293882.3338985},
year = {2019},
}
Publisher's Version
Identifying Error Code Misuses in Complex System
Wensheng Tang
(Hong Kong University of Science and Technology, China)
Many complex software systems use error codes to differentiate error states. Therefore, it is crucial to ensure those error codes are used correctly. Misuses of error codes can lead to hard-to-detect but fatal system failures. These errors are especially difficult to debug, since the failure points are usually far away from the root causes. Existing static analysis approaches to detecting error handling bugs mainly focus on how an error code is propagated or used in a program. However, they do not consider whether an error code is correctly chosen for propagation or usage within different program contexts, and thus fail to detect many error code misuse bugs. In this work, we conduct an empirical study on error code misuses in a mature commercial system. We collect error code issues from the commit history and identify three main causes. To further address this problem, we propose a static approach that can automatically detect error code misuses. Our approach takes error code definitions and error domain assignments as input, and uses a novel static analysis method to detect the occurrence of the three categories of error code misuses in the source code.
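A toy Java illustration of the kind of misuse described above (the domains, constants, and ranges are invented): each subsystem owns an error-code range, and returning a code from the wrong domain silently confuses callers that dispatch on those ranges.

public class ErrorCodes {
    // Domain A: storage errors 100-199; Domain B: network errors 200-299.
    static final int E_DISK_FULL  = 101;
    static final int E_CONN_RESET = 201;

    static int sendOverNetwork(byte[] data) {
        boolean connectionDropped = true; // pretend the send failed
        // BUG: a network-domain function returns a storage-domain code, so a
        // caller switching on 2xx codes never handles this failure.
        if (connectionDropped) return E_DISK_FULL; // should be E_CONN_RESET
        return 0;
    }

    public static void main(String[] args) {
        int rc = sendOverNetwork(new byte[8]);
        if (rc >= 200 && rc < 300) System.out.println("network error handled");
        else if (rc != 0) System.out.println("unhandled failure, rc=" + rc); // reached
    }
}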
@InProceedings{ISSTA19p428,
author = {Wensheng Tang},
title = {Identifying Error Code Misuses in Complex System},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {428--432},
doi = {10.1145/3293882.3338986},
year = {2019},
}
Publisher's Version
Conditional Dynamic Partial Order Reduction and Optimality Results
Miguel Isabel
(Complutense University of Madrid, Spain)
Testing concurrent systems requires exploring all possible non-deterministic interleavings that the concurrent execution may have, as any of the interleavings may reveal an erroneous behaviour of the system. This introduces a combinatorial explosion on the number of states that must be considered, which often leads to a computationally intractable problem. In the present PhD thesis, this challenge will be addressed through the development of new Partial Order Reduction (POR) techniques. The cornerstone of POR theory is the notion of independence, which is used to decide whether a pair of concurrent events p and t are in a race, in which case both executions p · t and t · p must be explored. A fundamental goal of this thesis is to combine notions of conditional independence (which ensure the commutativity of the considered events p and t under certain conditions that can be evaluated in the explored state) with a DPOR algorithm in order to alleviate the combinatorial explosion problem. The new techniques that we propose in the thesis have been implemented within the SYCO tool. We have carried out accompanying experimental evaluations to prove the effectiveness and applicability of the proposed techniques. Finally, we have successfully verified a range of properties for several case studies of Software-Defined Networks to illustrate the potential of the approach, scaling to larger networks than related techniques.
@InProceedings{ISSTA19p433,
author = {Miguel Isabel},
title = {Conditional Dynamic Partial Order Reduction and Optimality Results},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {433--437},
doi = {10.1145/3293882.3338987},
year = {2019},
}
Publisher's Version
Towards Scalable Defense of Information Flow Security for Distributed Systems
Xiaoqin Fu
(Washington State University, USA)
It is particularly challenging to defend common distributed systems against security vulnerabilities because of their complexity and large size.
Traditional solutions that attack the information flow security problem often fail for large, complex, real-world distributed systems due to scalability problems. The problem is exacerbated further for the online defense of continuously running systems. My proposed research consists of three connected themes.
First, I have developed metrics to help users understand and analyze the security characteristics of distributed systems at runtime in relation to their coupling measures.
Then, I have also developed a highly scalable, cost-effective dynamic information flow analysis approach for distributed systems.
It can detect implicit dependencies and find real security vulnerabilities in industrial distributed systems with practical portability and scalability.
In order to thoroughly solve the scalability problem in general scenarios, I am developing a self-adaptive dynamic dependency analysis framework to monitor security issues during continuous operation.
In this proposal, I outline how the three projects jointly target the central objective of my thesis research.
@InProceedings{ISSTA19p438,
author = {Xiaoqin Fu},
title = {Towards Scalable Defense of Information Flow Security for Distributed Systems},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {438--442},
doi = {10.1145/3293882.3338988},
year = {2019},
}
Publisher's Version
On the Correctness of GPU Programs
Chao Peng
(University of Edinburgh, UK)
Testing is an important and challenging part of software development, and its effectiveness depends on the quality of test cases. However, there exists no means of measuring the quality of tests developed for GPU programs and, as a result, no test case generation techniques for GPU programs that aim at high test effectiveness. Existing criteria for sequential and multithreaded CPU programs cannot be directly applied to GPU programs, as GPUs follow a completely different memory and execution model.
We surveyed existing work on GPU program verification and bug fixes of open source GPU programs. Based on our findings, we define barrier, branch and loop coverage criteria and propose a set of mutation operators to measure fault finding capabilities of test cases. CLTestCheck, a framework for measuring quality of tests developed for GPU programs by code coverage analysis, fault seeding and work-group schedule amplification has been developed and evaluated using industry standard benchmarks. Experiments show that the framework is able to automatically measure test effectiveness and reveal unusual behaviours. Our planned work includes data flow coverage adopted for GPU programs to probe the underlying cause of unusual kernel behaviours and a more comprehensive work-group scheduler. We also plan to design and develop an automatic test case generator aiming at generating high quality test suites for GPU programs.
@InProceedings{ISSTA19p443,
author = {Chao Peng},
title = {On the Correctness of GPU Programs},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {443--447},
doi = {10.1145/3293882.3338989},
year = {2019},
}
Publisher's Version
Info
JNI Program Analysis with Automatically Extracted C Semantic Summary
Sungho Lee
(KAIST, South Korea)
From Oracle JVM to Android Runtime, most Java runtime environments officially support Java Native Interface (JNI) for interaction between Java and C. Using JNI, developers can improve Java program performance or reuse existing libraries implemented in C. At the same time, differences between the languages can lead to various kinds of unexpected bugs when developers do not understand the differences or comprehensive interoperation semantics completely. Furthermore, existing program analysis techniques do not cover the interoperation, which can reduce the quality of JNI programs.
We propose a JNI program analysis technique that analyzes Java and C code of JNI programs using analyzers targeting each language respectively. The C analyzer generates a semantic summary for each C function callable from Java and the Java analyzer constructs call graphs using the semantic summaries and Java code. In addition to the call graph construction, we extend the analysis technique to detect four bug types that can occur in the interoperation between the languages. We believe that our approach would be able to detect genuine bugs as well as improve the quality of JNI programs.
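For context, the Java side of a JNI binding looks as follows; the class and library names are illustrative. A summary-based analysis like the one proposed would analyze the C body of parse() once and let Java call-graph construction consume the resulting semantic summary.

public class NativeParser {
    static {
        // Loads libparser.so / parser.dll; running this example
        // requires the native library to be built and on the path.
        System.loadLibrary("parser");
    }

    // Declared in Java, implemented in C under the standard JNI name
    // Java_NativeParser_parse. The C code may call back into Java through
    // JNIEnv, which is what makes whole-program call-graph construction
    // hard without a semantic summary of the native side.
    public native int parse(String input);

    public static void main(String[] args) {
        System.out.println(new NativeParser().parse("hello"));
    }
}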
@InProceedings{ISSTA19p448,
author = {Sungho Lee},
title = {JNI Program Analysis with Automatically Extracted C Semantic Summary},
booktitle = {Proc.\ ISSTA},
publisher = {ACM},
pages = {448--451},
doi = {10.1145/3293882.3338990},
year = {2019},
}
Publisher's Version