ESEC/FSE 2019 – Author Index
Abid, Shamsa |
ESEC/FSE '19: "Recommending Related Functions ..."
Recommending Related Functions from API Usage-Based Function Clone Structures
Shamsa Abid (Lahore University of Management Sciences, Pakistan) Developers need to be able to find reusable code for desired software features in a way that supports opportunistic programming for increased developer productivity. Our objective is to develop a recommendation system that provides a developer with function recommendations having functionality relevant to her development task. We employ a combination of information retrieval, static code analysis and data mining techniques to build the proposed recommendation system called FACER (Feature-driven API usage-based Code Examples Recommender). We performed an experimental evaluation on 122 projects from GitHub from selected categories to determine the accuracy of the retrieved code for related features. FACER recommended functions with a precision of 54% and 75% when evaluated using automated and manual methods respectively. @InProceedings{ESEC/FSE19p1193, author = {Shamsa Abid}, title = {Recommending Related Functions from API Usage-Based Function Clone Structures}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1193--1195}, doi = {10.1145/3338906.3342486}, year = {2019}, } Publisher's Version |
Abreu, Rui |
ESEC/FSE '19: "MOTSD: A Multi-Objective Test ..."
MOTSD: A Multi-Objective Test Selection Tool using Test Suite Diagnosability
Daniel Correia, Rui Abreu, Pedro Santos, and João Nadkarni (University of Lisbon, Portugal; OutSystems, Portugal) Performing regression testing on large software systems becomes unfeasible as it takes too long to run all the test cases every time a change is made. The main motivation of this work was to provide a faster and earlier feedback loop to the developers at OutSystems when a change is made. The developed tool, MOTSD, implements a multi-objective test selection approach in a C# code base using a test suite diagnosability metric and historical metrics as objectives and it is powered by a particle swarm optimization algorithm. We present implementation challenges, current experimental results and limitations of the tool when applied in an industrial context. Screencast demo link: https://www.youtube.com/watch?v=CYMfQTUu2BE @InProceedings{ESEC/FSE19p1070, author = {Daniel Correia and Rui Abreu and Pedro Santos and João Nadkarni}, title = {MOTSD: A Multi-Objective Test Selection Tool using Test Suite Diagnosability}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1070--1074}, doi = {10.1145/3338906.3341187}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Empirical Review of Java Program ..." Empirical Review of Java Program Repair Tools: A Large-Scale Experiment on 2,141 Bugs and 23,551 Repair Attempts Thomas Durieux, Fernanda Madeiral, Matias Martinez, and Rui Abreu (University of Lisbon, Portugal; INESC-ID, Portugal; Federal University of Uberlândia, Brazil; Polytechnic University of Hauts-de-France, France) In the past decade, research on test-suite-based automatic program repair has grown significantly. Each year, new approaches and implementations are featured in major software engineering venues. However, most of those approaches are evaluated on a single benchmark of bugs, which are also rarely reproduced by other researchers. 
In this paper, we present a large-scale experiment using 11 Java test-suite-based repair tools and 2,141 bugs from 5 benchmarks. Our goal is to have a better understanding of the current state of automatic program repair tools on a large diversity of benchmarks. Our investigation is guided by the hypothesis that the repairability of repair tools might not be generalized across different benchmarks. We found that the 11 tools 1) are able to generate patches for 21% of the bugs from the 5 benchmarks, and 2) have better performance on Defects4J compared to other benchmarks, by generating patches for 47% of the bugs from Defects4J compared to 10-30% of bugs from the other benchmarks. Our experiment comprises 23,551 repair attempts, which we used to find causes of non-patch generation. These causes are reported in this paper, which can help repair tool designers to improve their approaches and tools. @InProceedings{ESEC/FSE19p302, author = {Thomas Durieux and Fernanda Madeiral and Matias Martinez and Rui Abreu}, title = {Empirical Review of Java Program Repair Tools: A Large-Scale Experiment on 2,141 Bugs and 23,551 Repair Attempts}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {302--313}, doi = {10.1145/3338906.3338911}, year = {2019}, } Publisher's Version Info Artifacts Reusable |
Adams, Bram |
ESEC/FSE '19: "Understanding GCC Builtins ..."
Understanding GCC Builtins to Develop Better Tools
Manuel Rigger, Stefan Marr, Bram Adams, and Hanspeter Mössenböck (JKU Linz, Austria; University of Kent, UK; Polytechnique Montréal, Canada) C programs can use compiler builtins to provide functionality that the C language lacks. On Linux, GCC provides several thousands of builtins that are also supported by other mature compilers, such as Clang and ICC. Maintainers of other tools lack guidance on whether and which builtins should be implemented to support popular projects. To assist tool developers who want to support GCC builtins, we analyzed builtin use in 4,913 C projects from GitHub. We found that 37% of these projects relied on at least one builtin. Supporting an increasing proportion of projects requires support of an exponentially increasing number of builtins; however, implementing only 10 builtins already covers over 30% of the projects. Since we found that many builtins in our corpus remained unused, the effort needed to support 90% of the projects is moderate, requiring about 110 builtins to be implemented. For each project, we analyzed the evolution of builtin use over time and found that the majority of projects mostly added builtins. This suggests that builtins are not a legacy feature and must be supported in future tools. Systematic testing of builtin support in existing tools revealed that many lacked support for builtins either partially or completely; we also discovered incorrect implementations in various tools, including the formally verified CompCert compiler. @InProceedings{ESEC/FSE19p74, author = {Manuel Rigger and Stefan Marr and Bram Adams and Hanspeter Mössenböck}, title = {Understanding GCC Builtins to Develop Better Tools}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {74--85}, doi = {10.1145/3338906.3338907}, year = {2019}, } Publisher's Version Artifacts Reusable |
Aftandilian, Edward |
ESEC/FSE '19: "DeepDelta: Learning to Repair ..."
DeepDelta: Learning to Repair Compilation Errors
Ali Mesbah, Andrew Rice, Emily Johnston, Nick Glorioso, and Edward Aftandilian (University of British Columbia, Canada; University of Cambridge, UK; Google, UK; Google, USA) Programmers spend a substantial amount of time manually repairing code that does not compile. We observe that the repairs for any particular error class typically follow a pattern and are highly mechanical. We propose a novel approach that automatically learns these patterns with a deep neural network and suggests program repairs for the most costly classes of build-time compilation failures. We describe how we collect all build errors and the human-authored, in-progress code changes that cause those failing builds to transition to successful builds at Google. We generate an AST diff from the textual code changes and transform it into a domain-specific language called Delta that encodes the change that must be made to make the code compile. We then feed the compiler diagnostic information (as source) and the Delta changes that resolved the diagnostic (as target) into a Neural Machine Translation network for training. For the two most prevalent and costly classes of Java compilation errors, namely missing symbols and mismatched method signatures, our system called DeepDelta, generates the correct repair changes for 19,314 out of 38,788 (50%) of unseen compilation errors. The correct changes are in the top three suggested fixes 86% of the time on average. @InProceedings{ESEC/FSE19p925, author = {Ali Mesbah and Andrew Rice and Emily Johnston and Nick Glorioso and Edward Aftandilian}, title = {DeepDelta: Learning to Repair Compilation Errors}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {925--936}, doi = {10.1145/3338906.3340455}, year = {2019}, } Publisher's Version |
Aggarwal, Aniya |
ESEC/FSE '19: "Black Box Fairness Testing ..."
Black Box Fairness Testing of Machine Learning Models
Aniya Aggarwal, Pranay Lohia, Seema Nagar, Kuntal Dey, and Diptikalyan Saha (IBM Research, India) Any given AI system cannot be accepted unless its trustworthiness is proven. An important characteristic of a trustworthy AI system is the absence of algorithmic bias. 'Individual discrimination' exists when an individual who differs from another only in 'protected attributes' (e.g., age, gender, or race) receives a different decision outcome from a given machine learning (ML) model. The current work addresses the problem of detecting the presence of individual discrimination in given ML models. Detection of individual discrimination is test-intensive in a black-box setting, which is not feasible for non-trivial systems. We propose a methodology for the auto-generation of test inputs for the task of detecting individual discrimination. Our approach combines two well-established techniques - symbolic execution and local explainability - for effective test case generation. We empirically show that our approach to generating test cases is highly effective compared to the best-known benchmark systems that we examine. @InProceedings{ESEC/FSE19p625, author = {Aniya Aggarwal and Pranay Lohia and Seema Nagar and Kuntal Dey and Diptikalyan Saha}, title = {Black Box Fairness Testing of Machine Learning Models}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {625--635}, doi = {10.1145/3338906.3338937}, year = {2019}, } Publisher's Version |
Ahmadi, Reza |
ESEC/FSE '19: "Concolic Testing for Models ..."
Concolic Testing for Models of State-Based Systems
Reza Ahmadi and Juergen Dingel (Queen's University, Canada) Testing models of modern cyber-physical systems is not straightforward due to timing constraints, numerous if not infinite possible behaviors, and complex communications between components. Software testing tools and approaches that can generate test cases for these systems are therefore important. Many of the existing automatic approaches support testing at the implementation level only. The existing model-level testing tools either treat the model as a black box (e.g., random testing approaches) or have limitations when it comes to generating complex test sequences (e.g., symbolic execution). This paper presents a novel approach and tool support for automatic unit testing of models of real-time embedded systems by conducting concolic testing, a hybrid testing technique based on concrete and symbolic execution. Our technique conducts automatic concolic testing in two phases. In the first phase, the model is isolated from its environment, transformed into a testable model, and integrated with a test harness. In the second phase, the harness tests the model concolically and reports the test execution results. We describe an implementation of our approach in the context of Papyrus-RT, an open source Model Driven Engineering (MDE) tool based on the modeling language UML-RT, and report the results of applying our concolic testing approach to a set of standard benchmark models to validate our approach. @InProceedings{ESEC/FSE19p4, author = {Reza Ahmadi and Juergen Dingel}, title = {Concolic Testing for Models of State-Based Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {4--15}, doi = {10.1145/3338906.3338908}, year = {2019}, } Publisher's Version Artifacts Reusable |
Amoui, Mehdi |
ESEC/FSE '19: "An IR-Based Approach towards ..."
An IR-Based Approach towards Automated Integration of Geo-Spatial Datasets in Map-Based Software Systems
Nima Miryeganeh, Mehdi Amoui, and Hadi Hemmati (University of Calgary, Canada; Localintel, Canada) Data is arguably the most valuable asset of the modern world. In this era, the success of any data-intensive solution relies on the quality of the data that drives it. Among the vast amounts of data that are captured, managed, and analyzed every day, geospatial data are one of the most interesting classes of data, holding geographical information of real-world phenomena that can be visualized as digital maps. Geospatial data are the source of many enterprise solutions that provide local information and insights. Companies often aggregate geospatial datasets from various sources in order to increase the quality of such solutions. However, the lack of a global standard model for geospatial datasets makes the task of merging and integrating datasets difficult and error-prone. Traditionally, this aggregation was accomplished by domain experts who manually validated the data integration process, checking new data sources and/or new versions of previous data for conflicts and other requirement violations. However, this manual approach is not scalable and is a hindrance to rapid releases when dealing with big datasets that change frequently. Thus, more automated approaches that require only limited interaction with domain experts are needed. As a first step toward tackling this problem, we have leveraged Information Retrieval (IR) and geospatial search techniques to propose a systematic and automated conflict identification approach. To evaluate our approach, we conduct a case study in which we measure the accuracy of our approach in several real-world scenarios, followed by interviews with Localintel Inc. software developers to get their feedback.
@InProceedings{ESEC/FSE19p946, author = {Nima Miryeganeh and Mehdi Amoui and Hadi Hemmati}, title = {An IR-Based Approach towards Automated Integration of Geo-Spatial Datasets in Map-Based Software Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {946--954}, doi = {10.1145/3338906.3340454}, year = {2019}, } Publisher's Version |
An, Gabin |
ESEC/FSE '19: "PyGGI 2.0: Language Independent ..."
PyGGI 2.0: Language Independent Genetic Improvement Framework
Gabin An, Aymeric Blot, Justyna Petke, and Shin Yoo (KAIST, South Korea; University College London, UK) PyGGI is a research tool for Genetic Improvement (GI), that is designed to be versatile and easy to use. We present version 2.0 of PyGGI, the main feature of which is an XML-based intermediate program representation. It allows users to easily define GI operators and algorithms that can be reused with multiple target languages. Using the new version of PyGGI, we present two case studies. First, we conduct an Automated Program Repair (APR) experiment with the QuixBugs benchmark, one that contains defective programs in both Python and Java. Second, we replicate an existing work on runtime improvement through program specialisation for the MiniSAT satisfiability solver. PyGGI 2.0 was able to generate a patch for a bug not previously fixed by any APR tool. It was also able to achieve 14% runtime improvement in the case of MiniSAT. The presented results show the applicability and the expressiveness of the new version of PyGGI. A video of the tool demo is at: https://youtu.be/PxRUdlRDS40. @InProceedings{ESEC/FSE19p1100, author = {Gabin An and Aymeric Blot and Justyna Petke and Shin Yoo}, title = {PyGGI 2.0: Language Independent Genetic Improvement Framework}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1100--1104}, doi = {10.1145/3338906.3341184}, year = {2019}, } Publisher's Version Video |
Aniche, Maurício |
ESEC/FSE '19: "Monitoring-Aware IDEs ..."
Monitoring-Aware IDEs
Jos Winter, Maurício Aniche, Jürgen Cito, and Arie van Deursen (Adyen, Netherlands; Delft University of Technology, Netherlands; Massachusetts Institute of Technology, USA) Engineering modern large-scale software requires software developers to not solely focus on writing code, but also to continuously examine monitoring data to reason about the dynamic behavior of their systems. These additional monitoring responsibilities for developers have only emerged recently, in the light of DevOps culture. Interestingly, software development activities happen mainly in the IDE, while reasoning about production monitoring happens in separate monitoring tools. We propose an approach that integrates monitoring signals into the development environment and workflow. We conjecture that an IDE with such capability improves the performance of developers as time spent continuously context switching from development to monitoring would be eliminated. This paper takes a first step towards understanding the benefits of a possible monitoring-aware IDE. We implemented a prototype of a Monitoring-Aware IDE, connected to the monitoring systems of Adyen, a large-scale payment company that performs intense monitoring in their software systems. Given our results, we firmly believe that monitoring-aware IDEs can play an essential role in improving how developers perform monitoring. @InProceedings{ESEC/FSE19p420, author = {Jos Winter and Maurício Aniche and Jürgen Cito and Arie van Deursen}, title = {Monitoring-Aware IDEs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {420--431}, doi = {10.1145/3338906.3338926}, year = {2019}, } Publisher's Version |
Ashok, B. |
ESEC/FSE '19: "WhoDo: Automating Reviewer ..."
WhoDo: Automating Reviewer Suggestions at Scale
Sumit Asthana, Rahul Kumar, Ranjita Bhagwan, Christian Bird, Chetan Bansal, Chandra Maddila, Sonu Mehta, and B. Ashok (Microsoft Research, India; Microsoft Research, USA) Today's software development is distributed and involves continuous changes for new features, and yet the development cycle has to be fast and agile. An important component of enabling this agility is selecting the right reviewers for every code change - the smallest unit of the development cycle. Modern tool-based code review has proven to be an effective way to achieve appropriate code review of software changes. However, the selection of reviewers in these code review systems is at best manual. As software and teams scale, this poses the challenge of selecting the right reviewers, which in turn determines software quality over time. While previous work has suggested automatic approaches to code reviewer recommendation, it has been limited to retrospective analysis. We not only deploy a reviewer suggestion algorithm - WhoDo - and evaluate its effect, but also incorporate load balancing as part of it to address one of its major shortcomings: recommending experienced developers very frequently. We evaluate the effect of this hybrid recommendation + load balancing system on five repositories within Microsoft. Our results are based on various aspects of a commit and how code review affects them. Using our data, we attempt to quantitatively answer questions that play a vital role in effective code review, and we substantiate the findings through qualitative feedback from partner repositories. @InProceedings{ESEC/FSE19p937, author = {Sumit Asthana and Rahul Kumar and Ranjita Bhagwan and Christian Bird and Chetan Bansal and Chandra Maddila and Sonu Mehta and B. Ashok}, title = {WhoDo: Automating Reviewer Suggestions at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {937--945}, doi = {10.1145/3338906.3340449}, year = {2019}, } Publisher's Version |
Asthana, Sumit |
ESEC/FSE '19: "WhoDo: Automating Reviewer ..."
WhoDo: Automating Reviewer Suggestions at Scale
Sumit Asthana, Rahul Kumar, Ranjita Bhagwan, Christian Bird, Chetan Bansal, Chandra Maddila, Sonu Mehta, and B. Ashok (Microsoft Research, India; Microsoft Research, USA) Today's software development is distributed and involves continuous changes for new features, and yet the development cycle has to be fast and agile. An important component of enabling this agility is selecting the right reviewers for every code change - the smallest unit of the development cycle. Modern tool-based code review has proven to be an effective way to achieve appropriate code review of software changes. However, the selection of reviewers in these code review systems is at best manual. As software and teams scale, this poses the challenge of selecting the right reviewers, which in turn determines software quality over time. While previous work has suggested automatic approaches to code reviewer recommendation, it has been limited to retrospective analysis. We not only deploy a reviewer suggestion algorithm - WhoDo - and evaluate its effect, but also incorporate load balancing as part of it to address one of its major shortcomings: recommending experienced developers very frequently. We evaluate the effect of this hybrid recommendation + load balancing system on five repositories within Microsoft. Our results are based on various aspects of a commit and how code review affects them. Using our data, we attempt to quantitatively answer questions that play a vital role in effective code review, and we substantiate the findings through qualitative feedback from partner repositories. @InProceedings{ESEC/FSE19p937, author = {Sumit Asthana and Rahul Kumar and Ranjita Bhagwan and Christian Bird and Chetan Bansal and Chandra Maddila and Sonu Mehta and B. Ashok}, title = {WhoDo: Automating Reviewer Suggestions at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {937--945}, doi = {10.1145/3338906.3340449}, year = {2019}, } Publisher's Version |
Atlee, Joanne M. |
ESEC/FSE '19: "Living with Feature Interactions ..."
Living with Feature Interactions (Keynote)
Joanne M. Atlee (University of Waterloo, Canada) Feature-oriented software development enables rapid software creation and evolution, through incremental and parallel feature development or through product line engineering. However, in practice, features are often not separate concerns. They behave differently in the presence of other features, and they sometimes interfere with each other in surprising ways. This talk will explore challenges in feature interactions and their resolutions. Resolution strategies can tackle large classes of interactions, but are imperfect and incomplete, leading to research opportunities in software architecture, composition semantics, and verification. @InProceedings{ESEC/FSE19p1, author = {Joanne M. Atlee}, title = {Living with Feature Interactions (Keynote)}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1--1}, doi = {10.1145/3338906.3342811}, year = {2019}, } Publisher's Version |
Atzei, Nicola |
ESEC/FSE '19: "Developing Secure Bitcoin ..."
Developing Secure Bitcoin Contracts with BitML
Nicola Atzei, Massimo Bartoletti, Stefano Lande, Nobuko Yoshida, and Roberto Zunino (University of Cagliari, Italy; Imperial College London, UK; University of Trento, Italy) We present a toolchain for developing and verifying smart contracts that can be executed on Bitcoin. The toolchain is based on BitML, a recent domain-specific language for smart contracts with a computationally sound embedding into Bitcoin. Our toolchain automatically verifies relevant properties of contracts, among which liquidity, ensuring that funds do not remain frozen within a contract forever. A compiler is provided to translate BitML contracts into sets of standard Bitcoin transactions: executing a contract corresponds to appending these transactions to the blockchain. We assess our toolchain through a benchmark of representative contracts. @InProceedings{ESEC/FSE19p1124, author = {Nicola Atzei and Massimo Bartoletti and Stefano Lande and Nobuko Yoshida and Roberto Zunino}, title = {Developing Secure Bitcoin Contracts with BitML}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1124--1128}, doi = {10.1145/3338906.3341173}, year = {2019}, } Publisher's Version Video Info |
Awadhutkar, Payas |
ESEC/FSE '19: "DISCOVER: Detecting Algorithmic ..."
DISCOVER: Detecting Algorithmic Complexity Vulnerabilities
Payas Awadhutkar, Ganesh Ram Santhanam, Benjamin Holland, and Suresh Kothari (Iowa State University, USA; EnSoft, USA) Algorithmic Complexity Vulnerabilities (ACV) are a class of vulnerabilities that enable Denial of Service attacks. ACVs stem from asymmetric consumption of resources due to complex loop termination logic, recursion, and/or resource-intensive library APIs. Completely automated detection of ACVs is intractable and calls for tools that assist human analysts. We present DISCOVER, a suite of tools that facilitates human-on-the-loop detection of ACVs. DISCOVER's workflow can be broken into three phases - (1) automated characterization of loops, (2) selection of suspicious loops, and (3) interactive audit of selected loops. We demonstrate DISCOVER with a case study on a DARPA challenge app. DISCOVER supports analysis of Java source code and Java bytecode; we demonstrate it for Java bytecode. @InProceedings{ESEC/FSE19p1129, author = {Payas Awadhutkar and Ganesh Ram Santhanam and Benjamin Holland and Suresh Kothari}, title = {DISCOVER: Detecting Algorithmic Complexity Vulnerabilities}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1129--1133}, doi = {10.1145/3338906.3341177}, year = {2019}, } Publisher's Version Video |
Babar, Muhammad Ali |
ESEC/FSE '19: "Ethnographic Research in Software ..."
Ethnographic Research in Software Engineering: A Critical Review and Checklist
He Zhang, Xin Huang, Xin Zhou, Huang Huang, and Muhammad Ali Babar (Nanjing University, China; University of Adelaide, Australia) The Software Engineering (SE) community has recently been investing a significant amount of effort in qualitative research to study the human and social aspects of SE processes, practices, and technologies. Ethnography is one of the major qualitative research methods; it is based on a constructivist paradigm that differs from the hypothetico-deductive research model usually used in SE. Hence, the adoption of the ethnographic research method in SE can present significant challenges in terms of sufficient understanding of the methodological requirements and the logistics of its application. It is important to systematically identify and understand the various aspects of adopting ethnography in SE and to provide effective guidance. We carried out an empirical inquiry by integrating a systematic literature review and a confirmatory survey. By reviewing the ethnographic studies reported in 111 identified papers and 26 doctoral theses and analyzing the authors' responses for 29 of those papers, we revealed several unique insights. These insights were then transformed into a preliminary checklist that helps improve the state of the practice of using ethnography in SE. This study also identifies the areas where methodological improvements of ethnography are needed in SE. @InProceedings{ESEC/FSE19p659, author = {He Zhang and Xin Huang and Xin Zhou and Huang Huang and Muhammad Ali Babar}, title = {Ethnographic Research in Software Engineering: A Critical Review and Checklist}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {659--670}, doi = {10.1145/3338906.3338976}, year = {2019}, } Publisher's Version |
Babić, Domagoj |
ESEC/FSE '19: "FUDGE: Fuzz Driver Generation ..."
FUDGE: Fuzz Driver Generation at Scale
Domagoj Babić, Stefan Bucur, Yaohui Chen, Franjo Ivančić, Tim King, Markus Kusano, Caroline Lemieux, László Szekeres, and Wei Wang (Google, USA; Northeastern University, USA; University of California at Berkeley, USA) At Google we have found tens of thousands of security and robustness bugs by fuzzing C and C++ libraries. To fuzz a library, a fuzzer requires a fuzz driver—which exercises some library code—to which it can pass inputs. Unfortunately, writing fuzz drivers remains a primarily manual exercise, a major hindrance to the widespread adoption of fuzzing. In this paper, we address this major hindrance by introducing the Fudge system for automated fuzz driver generation. Fudge automatically generates fuzz driver candidates for libraries based on existing client code. We have used Fudge to generate thousands of new drivers for a wide variety of libraries. Each generated driver includes a synthesized C/C++ program and a corresponding build script, and is automatically analyzed for quality. Developers have integrated over 200 of these generated drivers into continuous fuzzing services and have committed to address reported security bugs. Further, several of these fuzz drivers have been upstreamed to open source projects and integrated into the OSS-Fuzz fuzzing infrastructure. Running these fuzz drivers has resulted in over 150 bug fixes, including the elimination of numerous exploitable security vulnerabilities. @InProceedings{ESEC/FSE19p975, author = {Domagoj Babić and Stefan Bucur and Yaohui Chen and Franjo Ivančić and Tim King and Markus Kusano and Caroline Lemieux and László Szekeres and Wei Wang}, title = {FUDGE: Fuzz Driver Generation at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {975--985}, doi = {10.1145/3338906.3340456}, year = {2019}, } Publisher's Version |
Bacchelli, Alberto |
ESEC/FSE '19: "Understanding Flaky Tests: ..."
Understanding Flaky Tests: The Developer’s Perspective
Moritz Eck, Fabio Palomba, Marco Castelluccio, and Alberto Bacchelli (University of Zurich, Switzerland; Mozilla, UK) Flaky tests are software tests that exhibit a seemingly random outcome (pass or fail) despite exercising unchanged code. In this work, we examine the perceptions of software developers about the nature, relevance, and challenges of flaky tests. We asked 21 professional developers to classify 200 flaky tests they previously fixed, in terms of the nature and the origin of the flakiness, as well as of the fixing effort. We also examined developers' fixing strategies. Subsequently, we conducted an online survey with 121 developers with a median industrial programming experience of five years. Our research shows that: The flakiness is due to several different causes, four of which have never been reported before, despite being the most costly to fix; flakiness is perceived as significant by the vast majority of developers, regardless of their team's size and project's domain, and it can have effects on resource allocation, scheduling, and the perceived reliability of the test suite; and the challenges developers report to face regard mostly the reproduction of the flaky behavior and the identification of the cause for the flakiness. Public preprint [http://arxiv.org/abs/1907.01466], data and materials [https://doi.org/10.5281/zenodo.3265785]. @InProceedings{ESEC/FSE19p830, author = {Moritz Eck and Fabio Palomba and Marco Castelluccio and Alberto Bacchelli}, title = {Understanding Flaky Tests: The Developer’s Perspective}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {830--840}, doi = {10.1145/3338906.3338945}, year = {2019}, } Publisher's Version |
Bagherzadeh, Mehdi |
ESEC/FSE '19: "Going Big: A Large-Scale Study ..."
Going Big: A Large-Scale Study on What Big Data Developers Ask
Mehdi Bagherzadeh and Raffi Khatchadourian (Oakland University, USA; City University of New York, USA) Software developers are increasingly required to write big data code. However, they find big data software development challenging. To help these developers it is necessary to understand big data topics that they are interested in and the difficulty of finding answers for questions in these topics. In this work, we conduct a large-scale study on Stackoverflow to understand the interest and difficulties of big data developers. To conduct the study, we develop a set of big data tags to extract big data posts from Stackoverflow; use topic modeling to group these posts into big data topics; group similar topics into categories to construct a topic hierarchy; analyze popularity and difficulty of topics and their correlations; and discuss implications of our findings for practice, research and education of big data software development and investigate their coincidence with the findings of previous work. @InProceedings{ESEC/FSE19p432, author = {Mehdi Bagherzadeh and Raffi Khatchadourian}, title = {Going Big: A Large-Scale Study on What Big Data Developers Ask}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {432--442}, doi = {10.1145/3338906.3338939}, year = {2019}, } Publisher's Version |
|
Bai, Xuefang |
ESEC/FSE '19: "A Learning-Based Approach ..."
A Learning-Based Approach for Automatic Construction of Domain Glossary from Source Code and Documentation
Chong Wang, Xin Peng, Mingwei Liu, Zhenchang Xing, Xuefang Bai, Bing Xie, and Tuo Wang (Fudan University, China; Australian National University, Australia; Peking University, China) A domain glossary that organizes domain-specific concepts and their aliases and relations is essential for knowledge acquisition and software development. Existing approaches use linguistic heuristics or term-frequency-based statistics to identify domain specific terms from software documentation, and thus the accuracy is often low. In this paper, we propose a learning-based approach for automatic construction of domain glossary from source code and software documentation. The approach uses a set of high-quality seed terms identified from code identifiers and natural language concept definitions to train a domain-specific prediction model to recognize glossary terms based on the lexical and semantic context of the sentences mentioning domain-specific concepts. It then merges the aliases of the same concepts to their canonical names, selects a set of explanation sentences for each concept, and identifies "is a", "has a", and "related to" relations between the concepts. We apply our approach to deep learning domain and Hadoop domain and harvest 5,382 and 2,069 concepts together with 16,962 and 6,815 relations respectively. Our evaluation validates the accuracy of the extracted domain glossary and its usefulness for the fusion and acquisition of knowledge from different documents of different projects. @InProceedings{ESEC/FSE19p97, author = {Chong Wang and Xin Peng and Mingwei Liu and Zhenchang Xing and Xuefang Bai and Bing Xie and Tuo Wang}, title = {A Learning-Based Approach for Automatic Construction of Domain Glossary from Source Code and Documentation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {97--108}, doi = {10.1145/3338906.3338963}, year = {2019}, } Publisher's Version |
|
Baldwin, Haaken Martinson |
ESEC/FSE '19: "Effective Error-Specification ..."
Effective Error-Specification Inference via Domain-Knowledge Expansion
Daniel DeFreez, Haaken Martinson Baldwin, Cindy Rubio-González, and Aditya V. Thakur (University of California at Davis, USA) Error-handling code responds to the occurrence of runtime errors. Failure to correctly handle errors can lead to security vulnerabilities and data loss. This paper deals with error handling in software written in C that uses the return-code idiom: the presence and type of error is encoded in the return value of a function. This paper describes EESI, a static analysis that infers the set of values that a function can return on error. Such a function error-specification can then be used to identify bugs related to incorrect error handling. The key insight of EESI is to bootstrap the analysis with domain knowledge related to error handling provided by a developer. EESI uses a combination of intraprocedural, flow-sensitive analysis and interprocedural, context-insensitive analysis to ensure precision and scalability. We built a tool ECC to demonstrate how the function error-specifications inferred by EESI can be used to automatically find bugs related to incorrect error handling. ECC detected 246 bugs across 9 programs, of which 110 have been confirmed. ECC detected 220 previously unknown bugs, of which 99 are confirmed. Two patches have already been merged into OpenSSL. @InProceedings{ESEC/FSE19p466, author = {Daniel DeFreez and Haaken Martinson Baldwin and Cindy Rubio-González and Aditya V. Thakur}, title = {Effective Error-Specification Inference via Domain-Knowledge Expansion}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {466--476}, doi = {10.1145/3338906.3338960}, year = {2019}, } Publisher's Version Artifacts Reusable |
|
Banerjee, Subarno |
ESEC/FSE '19: "NullAway: Practical Type-Based ..."
NullAway: Practical Type-Based Null Safety for Java
Subarno Banerjee, Lazaro Clapp, and Manu Sridharan (University of Michigan, USA; Uber Technologies, USA; University of California at Riverside, USA) NullPointerExceptions (NPEs) are a key source of crashes in modern Java programs. Previous work has shown how such errors can be prevented at compile time via code annotations and pluggable type checking. However, such systems have been difficult to deploy on large-scale software projects, due to significant build-time overhead and / or a high annotation burden. This paper presents NullAway, a new type-based null safety checker for Java that overcomes these issues. NullAway has been carefully engineered for low overhead, so it can run as part of every build. Further, NullAway reduces annotation burden through targeted unsound assumptions, aiming for no false negatives in practice on checked code. Our evaluation shows that NullAway has significantly lower build-time overhead (1.15×) than comparable tools (2.8-5.1×). Further, on a corpus of production crash data for widely-used Android apps built with NullAway, remaining NPEs were due to unchecked third-party libraries (64%), deliberate error suppressions (17%), or reflection and other forms of post-checking code modification (17%), never due to NullAway’s unsound assumptions for checked code. @InProceedings{ESEC/FSE19p740, author = {Subarno Banerjee and Lazaro Clapp and Manu Sridharan}, title = {NullAway: Practical Type-Based Null Safety for Java}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {740--750}, doi = {10.1145/3338906.3338919}, year = {2019}, } Publisher's Version Artifacts Reusable |
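The discipline that a type-based checker like NullAway enforces can be illustrated with the guarded-dereference pattern: a value declared @Nullable must be checked before use. The sketch below is illustrative only; it defines a stand-in @Nullable annotation so the snippet is self-contained, whereas NullAway itself runs as an Error Prone plugin and consumes standard nullability annotations:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.HashMap;
import java.util.Map;

public class NullSafetyDemo {
    // Stand-in annotation for this sketch; real code uses a standard @Nullable.
    @Retention(RetentionPolicy.CLASS)
    @Target({ElementType.METHOD, ElementType.PARAMETER})
    @interface Nullable {}

    private final Map<String, String> cache = new HashMap<>();

    void put(String key, String value) {
        cache.put(key, value);
    }

    // Declared @Nullable: callers must check before dereferencing, which is
    // exactly what a build-time null-safety checker verifies.
    @Nullable
    String lookup(String key) {
        return cache.get(key); // may be null for a missing key
    }

    int safeLength(String key) {
        String value = lookup(key);
        // Without this guard, the checker would report a possible null dereference.
        return value == null ? 0 : value.length();
    }

    public static void main(String[] args) {
        NullSafetyDemo demo = new NullSafetyDemo();
        demo.put("a", "hello");
        System.out.println(demo.safeLength("a"));       // prints 5
        System.out.println(demo.safeLength("missing")); // prints 0
    }
}
```

Unannotated methods like safeLength are treated as returning non-null, which is the "targeted unsound assumption" trade-off the paper describes: less annotation burden in exchange for trusting unannotated code.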
|
Bansal, Chetan |
ESEC/FSE '19: "WhoDo: Automating Reviewer ..."
WhoDo: Automating Reviewer Suggestions at Scale
Sumit Asthana, Rahul Kumar, Ranjita Bhagwan, Christian Bird, Chetan Bansal, Chandra Maddila, Sonu Mehta, and B. Ashok (Microsoft Research, India; Microsoft Research, USA) Today's software development is distributed and involves continuous changes for new features and yet, their development cycle has to be fast and agile. An important component of enabling this agility is selecting the right reviewers for every code-change - the smallest unit of the development cycle. Modern tool-based code review is proven to be an effective way to achieve appropriate code review of software changes. However, the selection of reviewers in these code review systems is at best manual. As software and teams scale, this poses the challenge of selecting the right reviewers, which in turn determines software quality over time. While previous work has suggested automatic approaches to code reviewer recommendations, it has been limited to retrospective analysis. We not only deploy a reviewer suggestions algorithm - WhoDo - and evaluate its effect but also incorporate load balancing as part of it to address one of its major shortcomings: of recommending experienced developers very frequently. We evaluate the effect of this hybrid recommendation + load balancing system on five repositories within Microsoft. Our results are based around various aspects of a commit and how code review affects that. We attempt to quantitatively answer questions which are supposed to play a vital role in effective code review through our data and substantiate it through qualitative feedback of partner repositories. @InProceedings{ESEC/FSE19p937, author = {Sumit Asthana and Rahul Kumar and Ranjita Bhagwan and Christian Bird and Chetan Bansal and Chandra Maddila and Sonu Mehta and B. Ashok}, title = {WhoDo: Automating Reviewer Suggestions at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {937--945}, doi = {10.1145/3338906.3340449}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Predicting Pull Request Completion ..." Predicting Pull Request Completion Time: A Case Study on Large Scale Cloud Services Chandra Maddila, Chetan Bansal, and Nachiappan Nagappan (Microsoft Research, USA) Effort estimation models have been long studied in software engineering research. Effort estimation models help organizations and individuals plan and track progress of their software projects and individual tasks to help plan delivery milestones better. Towards this end, there is a large body of work that has been done on effort estimation for projects but little work on an individual checkin (Pull Request) level. In this paper we present a methodology that provides effort estimates on individual developer check-ins which is displayed to developers to help them track their work items. Given the cloud development infrastructure pervasive in companies, it has enabled us to deploy our Pull Request Lifetime prediction system to several thousand developers across multiple software families. We observe from our deployment that the pull request lifetime prediction system conservatively helps save 44.61% of the developer time by accelerating Pull Requests to completion. @InProceedings{ESEC/FSE19p874, author = {Chandra Maddila and Chetan Bansal and Nachiappan Nagappan}, title = {Predicting Pull Request Completion Time: A Case Study on Large Scale Cloud Services}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {874--882}, doi = {10.1145/3338906.3340457}, year = {2019}, } Publisher's Version |
|
Barash, Guy |
ESEC/FSE '19: "Bridging the Gap between ML ..."
Bridging the Gap between ML Solutions and Their Business Requirements using Feature Interactions
Guy Barash, Eitan Farchi, Ilan Jayaraman, Orna Raz, Rachel Tzoref-Brill, and Marcel Zalmanovici (Western Digital, Israel; IBM Research, Israel; IBM, India) Machine Learning (ML) based solutions are becoming increasingly popular and pervasive. When testing such solutions, there is a tendency to focus on improving the ML metrics such as the F1-score and accuracy at the expense of ensuring business value and correctness by covering business requirements. In this work, we adapt test planning methods of classical software to ML solutions. We use combinatorial modeling methodology to define the space of business requirements and map it to the ML solution data, and use the notion of data slices to identify the weaker areas of the ML solution and strengthen them. We apply our approach to three real-world case studies and demonstrate its value. @InProceedings{ESEC/FSE19p1048, author = {Guy Barash and Eitan Farchi and Ilan Jayaraman and Orna Raz and Rachel Tzoref-Brill and Marcel Zalmanovici}, title = {Bridging the Gap between ML Solutions and Their Business Requirements using Feature Interactions}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1048--1058}, doi = {10.1145/3338906.3340442}, year = {2019}, } Publisher's Version |
|
Baresi, Luciano |
ESEC/FSE '19: "Symbolic Execution-Driven ..."
Symbolic Execution-Driven Extraction of the Parallel Execution Plans of Spark Applications
Luciano Baresi, Giovanni Denaro, and Giovanni Quattrocchi (Politecnico di Milano, Italy; University of Milano-Bicocca, Italy) The execution of Spark applications is based on the execution order and parallelism of the different jobs, given data and available resources. Spark reifies these dependencies in a graph that we refer to as the (parallel) execution plan of the application. All the approaches that have studied the estimation of the execution times and the dynamic provisioning of resources for this kind of applications have always assumed that the execution plan is unique, given the computing resources at hand. This assumption is at least simplistic for applications that include conditional branches or loops and limits the precision of the prediction techniques. This paper introduces SEEPEP, a novel technique based on symbolic execution and search-based test generation, that: i) automatically extracts the possible execution plans of a Spark application, along with dedicated launchers with properly synthesized data that can be used for profiling, and ii) tunes the allocation of resources at runtime based on the knowledge of the execution plans for which the path conditions hold. The assessment we carried out shows that SEEPEP can effectively complement dynaSpark, an extension of Spark with dynamic resource provisioning capabilities, to help predict the execution duration and the allocation of resources. @InProceedings{ESEC/FSE19p246, author = {Luciano Baresi and Giovanni Denaro and Giovanni Quattrocchi}, title = {Symbolic Execution-Driven Extraction of the Parallel Execution Plans of Spark Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {246--256}, doi = {10.1145/3338906.3338973}, year = {2019}, } Publisher's Version |
|
Bartoletti, Massimo |
ESEC/FSE '19: "Developing Secure Bitcoin ..."
Developing Secure Bitcoin Contracts with BitML
Nicola Atzei, Massimo Bartoletti, Stefano Lande, Nobuko Yoshida, and Roberto Zunino (University of Cagliari, Italy; Imperial College London, UK; University of Trento, Italy) We present a toolchain for developing and verifying smart contracts that can be executed on Bitcoin. The toolchain is based on BitML, a recent domain-specific language for smart contracts with a computationally sound embedding into Bitcoin. Our toolchain automatically verifies relevant properties of contracts, among which liquidity, ensuring that funds do not remain frozen within a contract forever. A compiler is provided to translate BitML contracts into sets of standard Bitcoin transactions: executing a contract corresponds to appending these transactions to the blockchain. We assess our toolchain through a benchmark of representative contracts. @InProceedings{ESEC/FSE19p1124, author = {Nicola Atzei and Massimo Bartoletti and Stefano Lande and Nobuko Yoshida and Roberto Zunino}, title = {Developing Secure Bitcoin Contracts with BitML}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1124--1128}, doi = {10.1145/3338906.3341173}, year = {2019}, } Publisher's Version Video Info |
|
Bastani, Osbert |
ESEC/FSE '19: "REINAM: Reinforcement Learning ..."
REINAM: Reinforcement Learning for Input-Grammar Inference
Zhengkai Wu, Evan Johnson, Wei Yang, Osbert Bastani, Dawn Song, Jian Peng, and Tao Xie (University of Illinois at Urbana-Champaign, USA; University of Texas at Dallas, USA; University of Pennsylvania, USA; University of California at Berkeley, USA) Program input grammars (i.e., grammars encoding the language of valid program inputs) facilitate a wide range of applications in software engineering such as symbolic execution and delta debugging. Grammars synthesized by existing approaches can cover only a small part of the valid input space mainly due to unanalyzable code (e.g., native code) in programs and lacking high-quality and high-variety seed inputs. To address these challenges, we present REINAM, a reinforcement-learning approach for synthesizing probabilistic context-free program input grammars without any seed inputs. REINAM uses an industrial symbolic execution engine to generate an initial set of inputs for the given target program, and then uses an iterative process of grammar generalization to proactively generate additional inputs to infer grammars generalized from these initial seed inputs. To efficiently search for target generalizations in a huge search space of candidate generalization operators, REINAM includes a novel formulation of the search problem as a reinforcement learning problem. Our evaluation on eleven real-world benchmarks shows that REINAM outperforms an existing state-of-the-art approach on precision and recall of synthesized grammars, and fuzz testing based on REINAM substantially increases the coverage of the space of valid inputs. REINAM is able to synthesize a grammar covering the entire valid input space for some benchmarks without decreasing the accuracy of the grammar. 
@InProceedings{ESEC/FSE19p488, author = {Zhengkai Wu and Evan Johnson and Wei Yang and Osbert Bastani and Dawn Song and Jian Peng and Tao Xie}, title = {REINAM: Reinforcement Learning for Input-Grammar Inference}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {488--498}, doi = {10.1145/3338906.3338958}, year = {2019}, } Publisher's Version Info |
|
Bavishi, Rohan |
ESEC/FSE '19: "Phoenix: Automated Data-Driven ..."
Phoenix: Automated Data-Driven Synthesis of Repairs for Static Analysis Violations
Rohan Bavishi, Hiroaki Yoshida, and Mukul R. Prasad (University of California at Berkeley, USA; Fujitsu Labs, USA) Traditional automatic program repair (APR) tools rely on a test-suite as a repair specification. But test suites, even when available, are not of specification quality, limiting the performance and hence viability of test-suite based repair. On the other hand, static analysis-based bug finding tools are seeing increasing adoption in industry but still face challenges since the reported violations are viewed as not easily actionable. We propose a novel solution that solves both these challenges through a technique for automatically generating high-quality patches for static analysis violations by learning from examples. Our approach uses the static analyzer as an oracle and does not require a test suite. We realize our solution in a system, Phoenix, that implements a fully-automated pipeline that mines and cleans patches for static analysis violations from the wild, learns generalized executable repair strategies as programs in a novel Domain Specific Language (DSL), and then instantiates concrete repairs from them on new unseen violations. Using Phoenix we mine a corpus of 5,389 unique violations and patches from 517 GitHub projects. In a cross-validation study on this corpus Phoenix successfully produced 4,596 bug-fixes, with a recall of 85% and a precision of 54%. When applied to the latest revisions of a further 5 GitHub projects, Phoenix produced 94 correct patches to previously unknown bugs, 19 of which have already been accepted and merged by the development teams. To the best of our knowledge, this constitutes by far the largest application of any automatic patch generation technology to large-scale real-world systems. @InProceedings{ESEC/FSE19p613, author = {Rohan Bavishi and Hiroaki Yoshida and Mukul R. Prasad}, title = {Phoenix: Automated Data-Driven Synthesis of Repairs for Static Analysis Violations}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {613--624}, doi = {10.1145/3338906.3338952}, year = {2019}, } Publisher's Version |
|
Berger, Thorsten |
ESEC/FSE '19: "Principles of Feature Modeling ..."
Principles of Feature Modeling
Damir Nešić, Jacob Krüger, Ștefan Stănciulescu, and Thorsten Berger (KTH, Sweden; University of Magdeburg, Germany; ABB, Switzerland; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden) Feature models are arguably one of the most intuitive and successful notations for modeling the features of a variant-rich software system. Feature models help developers to keep an overall understanding of the system, and also support scoping, planning, development, variant derivation, configuration, and maintenance activities that sustain the system's long-term success. Unfortunately, feature models are difficult to build and evolve. Features need to be identified, grouped, organized in a hierarchy, and mapped to software assets. Also, dependencies between features need to be declared. While feature models have been the subject of three decades of research, resulting in many feature-modeling notations together with automated analysis and configuration techniques, a generic set of principles for engineering feature models is still missing. It is not even clear whether feature models could be engineered using recurrent principles. Our work shows that such principles in fact exist. We analyzed feature-modeling practices elicited from ten interviews conducted with industrial practitioners and from 31 relevant papers. We synthesized a set of 34 principles covering eight different phases of feature modeling, from planning over model construction, to model maintenance and evolution. Grounded in empirical evidence, these principles provide practical, context-specific advice on how to perform feature modeling, describe what information sources to consider, and highlight common characteristics of feature models. We believe that our principles can support researchers and practitioners enhancing feature-modeling tooling, synthesis, and analyses techniques, as well as scope future research. 
@InProceedings{ESEC/FSE19p62, author = {Damir Nešić and Jacob Krüger and Ștefan Stănciulescu and Thorsten Berger}, title = {Principles of Feature Modeling}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {62--73}, doi = {10.1145/3338906.3338974}, year = {2019}, } Publisher's Version Info ESEC/FSE '19: "Effects of Explicit Feature ..." Effects of Explicit Feature Traceability on Program Comprehension Jacob Krüger, Gül Çalıklı, Thorsten Berger, Thomas Leich, and Gunter Saake (University of Magdeburg, Germany; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden; Harz University of Applied Sciences, Germany; METOP, Germany) Developers spend a substantial amount of their time with program comprehension. To improve their comprehension and refresh their memory, developers need to communicate with other developers, read the documentation, and analyze the source code. Many studies show that developers focus primarily on the source code and that small improvements can have a strong impact. As such, it is crucial to bring the code itself into a more comprehensible form. A particular technique for this purpose are explicit feature traces to easily identify a program’s functionalities. To improve our empirical understanding about the effects of feature traces, we report an online experiment with 49 professional software developers. We studied the impact of explicit feature traces, namely annotations and decomposition, on program comprehension and compared them to the same code without traces. Besides this experiment, we also asked our participants about their opinions in order to combine quantitative and qualitative data. Our results indicate that, as opposed to purely object-oriented code: (1) annotations can have positive effects on program comprehension; (2) decomposition can have a negative impact on bug localization; and (3) our participants perceive both techniques as beneficial. 
Moreover, none of the three code versions yields significant improvements on task completion time. Overall, our results indicate that lightweight traceability, such as using annotations, provides immediate benefits to developers during software development and maintenance without extensive training or tooling; and can improve current industrial practices that rely on heavyweight traceability tools (e.g., DOORS) and retroactive fulfillment of standards (e.g., ISO-26262, DO-178B). @InProceedings{ESEC/FSE19p338, author = {Jacob Krüger and Gül Çalıklı and Thorsten Berger and Thomas Leich and Gunter Saake}, title = {Effects of Explicit Feature Traceability on Program Comprehension}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {338--349}, doi = {10.1145/3338906.3338968}, year = {2019}, } Publisher's Version |
|
Bernal-Cárdenas, Carlos |
ESEC/FSE '19: "Assessing the Quality of the ..."
Assessing the Quality of the Steps to Reproduce in Bug Reports
Oscar Chaparro, Carlos Bernal-Cárdenas, Jing Lu, Kevin Moran, Andrian Marcus, Massimiliano Di Penta, Denys Poshyvanyk, and Vincent Ng (College of William and Mary, USA; University of Texas at Dallas, USA; University of Sannio, Italy) A major problem with user-written bug reports, indicated by developers and documented by researchers, is the (lack of high) quality of the reported steps to reproduce the bugs. Low-quality steps to reproduce lead to excessive manual effort spent on bug triage and resolution. This paper proposes Euler, an approach that automatically identifies and assesses the quality of the steps to reproduce in a bug report, providing feedback to the reporters, which they can use to improve the bug report. The feedback provided by Euler was assessed by external evaluators and the results indicate that Euler correctly identified 98% of the existing steps to reproduce and 58% of the missing ones, while 73% of its quality annotations are correct. @InProceedings{ESEC/FSE19p86, author = {Oscar Chaparro and Carlos Bernal-Cárdenas and Jing Lu and Kevin Moran and Andrian Marcus and Massimiliano Di Penta and Denys Poshyvanyk and Vincent Ng}, title = {Assessing the Quality of the Steps to Reproduce in Bug Reports}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {86--96}, doi = {10.1145/3338906.3338947}, year = {2019}, } Publisher's Version Info |
|
Bhagwan, Ranjita |
ESEC/FSE '19: "WhoDo: Automating Reviewer ..."
WhoDo: Automating Reviewer Suggestions at Scale
Sumit Asthana, Rahul Kumar, Ranjita Bhagwan, Christian Bird, Chetan Bansal, Chandra Maddila, Sonu Mehta, and B. Ashok (Microsoft Research, India; Microsoft Research, USA) Today's software development is distributed and involves continuous changes for new features and yet, their development cycle has to be fast and agile. An important component of enabling this agility is selecting the right reviewers for every code-change - the smallest unit of the development cycle. Modern tool-based code review is proven to be an effective way to achieve appropriate code review of software changes. However, the selection of reviewers in these code review systems is at best manual. As software and teams scale, this poses the challenge of selecting the right reviewers, which in turn determines software quality over time. While previous work has suggested automatic approaches to code reviewer recommendations, it has been limited to retrospective analysis. We not only deploy a reviewer suggestions algorithm - WhoDo - and evaluate its effect but also incorporate load balancing as part of it to address one of its major shortcomings: of recommending experienced developers very frequently. We evaluate the effect of this hybrid recommendation + load balancing system on five repositories within Microsoft. Our results are based around various aspects of a commit and how code review affects that. We attempt to quantitatively answer questions which are supposed to play a vital role in effective code review through our data and substantiate it through qualitative feedback of partner repositories. @InProceedings{ESEC/FSE19p937, author = {Sumit Asthana and Rahul Kumar and Ranjita Bhagwan and Christian Bird and Chetan Bansal and Chandra Maddila and Sonu Mehta and B. Ashok}, title = {WhoDo: Automating Reviewer Suggestions at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {937--945}, doi = {10.1145/3338906.3340449}, year = {2019}, } Publisher's Version |
|
Bhogill, Prithpal |
ESEC/FSE '19: "The Role of Limitations and ..."
The Role of Limitations and SLAs in the API Industry
Antonio Gamez-Diaz, Pablo Fernandez, Antonio Ruiz-Cortés, Pedro J. Molina, Nikhil Kolekar, Prithpal Bhogill, Madhurranjan Mohaan, and Francisco Méndez (University of Seville, Spain; Metadev, Spain; PayPal, USA; Google, USA; AsyncAPI Initiative, Spain) As software architecture design is evolving to a microservice paradigm, RESTful APIs are being established as the preferred choice to build applications. In such a scenario, there is a shift towards a growing market of APIs where providers offer different service levels with tailored limitations typically based on the cost. In this context, while there are well established standards to describe the functional elements of APIs (such as the OpenAPI Specification), having a standard model for Service Level Agreements (SLAs) for APIs may boost an open ecosystem of tools that would represent an improvement for the industry by automating certain tasks during development, such as SLA-aware scaffolding, SLA-aware testing, or SLA-aware requesters. Unfortunately, although there have been several proposals to describe SLAs for software in general and web services in particular during the past decades, there is an actual lack of a widely used standard due to the complex landscape of concepts surrounding the notion of SLAs and the multiple perspectives that can be addressed. In this paper, we aim to analyze the landscape for SLAs for APIs in two different directions: i) clarifying the SLA-driven API development lifecycle, its activities and participants; ii) developing a catalog of relevant concepts and an ulterior prioritization based on different perspectives from both Industry and Academia. As a main result, we present a scored list of concepts that paves the way to establish a concrete road-map for a standard industry-aligned specification to describe SLAs in APIs. @InProceedings{ESEC/FSE19p1006, author = {Antonio Gamez-Diaz and Pablo Fernandez and Antonio Ruiz-Cortés and Pedro J. Molina and Nikhil Kolekar and Prithpal Bhogill and Madhurranjan Mohaan and Francisco Méndez}, title = {The Role of Limitations and SLAs in the API Industry}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1006--1014}, doi = {10.1145/3338906.3340445}, year = {2019}, } Publisher's Version Info |
|
Biagiola, Matteo |
ESEC/FSE '19: "Web Test Dependency Detection ..."
Web Test Dependency Detection
Matteo Biagiola, Andrea Stocco, Ali Mesbah, Filippo Ricca, and Paolo Tonella (Fondazione Bruno Kessler, Italy; USI Lugano, Switzerland; University of British Columbia, Canada; University of Genoa, Italy) E2E web test suites are prone to test dependencies due to the heterogeneous multi-tiered nature of modern web apps, which makes it difficult for developers to create isolated program states for each test case. In this paper, we present the first approach for detecting and validating test dependencies present in E2E web test suites. Our approach employs string analysis to extract an approximated set of dependencies from the test code. It then filters potential false dependencies through natural language processing of test names. Finally, it validates all dependencies, and uses a novel recovery algorithm to ensure no true dependencies are missed in the final test dependency graph. Our approach is implemented in a tool called TEDD and evaluated on the test suites of six open-source web apps. Our results show that TEDD can correctly detect and validate test dependencies up to 72% faster than the baseline with the original test ordering in which the graph contains all possible dependencies. The test dependency graphs produced by TEDD enable test execution parallelization, with a speed-up factor of up to 7×. @InProceedings{ESEC/FSE19p154, author = {Matteo Biagiola and Andrea Stocco and Ali Mesbah and Filippo Ricca and Paolo Tonella}, title = {Web Test Dependency Detection}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {154--164}, doi = {10.1145/3338906.3338948}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Diversity-Based Web Test Generation ..." 
Diversity-Based Web Test Generation Matteo Biagiola, Andrea Stocco, Filippo Ricca, and Paolo Tonella (Fondazione Bruno Kessler, Italy; USI Lugano, Switzerland; University of Genoa, Italy) Existing web test generators derive test paths from a navigational model of the web application, completed with either manually or randomly generated input values. However, manual test data selection is costly, while random generation often results in infeasible input sequences, which are rejected by the application under test. Random and search-based generation can achieve the desired level of model coverage only after a large number of test execution attempts, each slowed down by the need to interact with the browser during test execution. In this work, we present a novel web test generation algorithm that pre-selects the most promising candidate test cases based on their diversity from previously generated tests. As such, only the test cases that explore diverse behaviours of the application are considered for in-browser execution. We have implemented our approach in a tool called DIG. Our empirical evaluation on six real-world web applications shows that DIG achieves higher coverage and fault detection rates significantly earlier than crawling-based and search-based web test generators. @InProceedings{ESEC/FSE19p142, author = {Matteo Biagiola and Andrea Stocco and Filippo Ricca and Paolo Tonella}, title = {Diversity-Based Web Test Generation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {142--153}, doi = {10.1145/3338906.3338970}, year = {2019}, } Publisher's Version |
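The diversity-based pre-selection step can be pictured as picking, among candidate tests, the one farthest from everything already generated. A minimal Java sketch under simplified assumptions (tests abstracted as sets of action names, Jaccard distance as the diversity metric; DIG's actual test representation and metric may differ):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DiversityPick {
    // Jaccard distance between two action sets: 1 - |A ∩ B| / |A ∪ B|.
    static double jaccardDistance(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a);
        inter.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 0.0 : 1.0 - (double) inter.size() / union.size();
    }

    // Greedy pre-selection: return the candidate whose minimum distance
    // to the already-generated tests is largest.
    static Set<String> mostDiverse(List<Set<String>> candidates, List<Set<String>> generated) {
        Set<String> best = null;
        double bestScore = -1.0;
        for (Set<String> candidate : candidates) {
            double minDist = Double.MAX_VALUE;
            for (Set<String> existing : generated) {
                minDist = Math.min(minDist, jaccardDistance(candidate, existing));
            }
            if (minDist > bestScore) {
                bestScore = minDist;
                best = candidate;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Set<String>> generated = List.of(Set.of("login", "viewCart"));
        List<Set<String>> candidates = List.of(
            Set.of("login", "viewCart", "logout"), // overlaps existing behaviour
            Set.of("search", "addReview"));        // explores new behaviour
        // The disjoint candidate wins, so only it would be run in the browser.
        System.out.println(mostDiverse(candidates, generated).contains("search")); // prints true
    }
}
```

The point of pre-selecting this way is that the distance computation is cheap compared to an in-browser execution, so redundant candidates are discarded before the expensive step.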
|
Bidokhti, Nematollah |
ESEC/FSE '19: "How Bad Can a Bug Get? An ..."
How Bad Can a Bug Get? An Empirical Analysis of Software Failures in the OpenStack Cloud Computing Platform
Domenico Cotroneo, Luigi De Simone, Pietro Liguori, Roberto Natella, and Nematollah Bidokhti (Federico II University of Naples, Italy; Futurewei Technologies, USA) Cloud management systems provide abstractions and APIs for programmatically configuring cloud infrastructures. Unfortunately, residual software bugs in these systems can potentially lead to high-severity failures, such as prolonged outages and data losses. In this paper, we investigate the impact of failures in the context of the widespread OpenStack cloud management system, by performing fault injection and by analyzing the impact of the resulting failures in terms of fail-stop behavior, failure detection through logging, and failure propagation across components. The analysis points out that most of the failures are not timely detected and notified; moreover, many of these failures can silently propagate over time and through components of the cloud management system, which calls for more thorough run-time checks and fault containment. @InProceedings{ESEC/FSE19p200, author = {Domenico Cotroneo and Luigi De Simone and Pietro Liguori and Roberto Natella and Nematollah Bidokhti}, title = {How Bad Can a Bug Get? An Empirical Analysis of Software Failures in the OpenStack Cloud Computing Platform}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {200--211}, doi = {10.1145/3338906.3338916}, year = {2019}, } Publisher's Version Artifacts Reusable |
|
Bird, Christian |
ESEC/FSE '19: "WhoDo: Automating Reviewer ..."
WhoDo: Automating Reviewer Suggestions at Scale
Sumit Asthana, Rahul Kumar, Ranjita Bhagwan, Christian Bird, Chetan Bansal, Chandra Maddila, Sonu Mehta, and B. Ashok (Microsoft Research, India; Microsoft Research, USA) Today's software development is distributed and involves continuous changes for new features, yet the development cycle has to be fast and agile. An important component of enabling this agility is selecting the right reviewers for every code-change - the smallest unit of the development cycle. Modern tool-based code review has proven to be an effective way to achieve appropriate code review of software changes. However, the selection of reviewers in these code review systems is at best manual. As software and teams scale, this poses the challenge of selecting the right reviewers, which in turn determines software quality over time. While previous work has suggested automatic approaches to code reviewer recommendations, it has been limited to retrospective analysis. We not only deploy a reviewer suggestion algorithm - WhoDo - and evaluate its effect, but also incorporate load balancing as part of it to address one of its major shortcomings: recommending experienced developers very frequently. We evaluate the effect of this hybrid recommendation + load balancing system on five repositories within Microsoft. Our results are based on various aspects of a commit and how code review affects them. Through our data, we attempt to quantitatively answer questions that play a vital role in effective code review, and we substantiate the answers through qualitative feedback from partner repositories. @InProceedings{ESEC/FSE19p937, author = {Sumit Asthana and Rahul Kumar and Ranjita Bhagwan and Christian Bird and Chetan Bansal and Chandra Maddila and Sonu Mehta and B. Ashok}, title = {WhoDo: Automating Reviewer Suggestions at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {937--945}, doi = {10.1145/3338906.3340449}, year = {2019}, } Publisher's Version |
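The combination of recommendation and load balancing can be illustrated with a toy scoring rule. This is not WhoDo's actual formula (which the paper defines over commit and review history); it is a hypothetical sketch in which reviewers are ranked by familiarity with the changed files, damped by the number of reviews they already carry.

```python
# Load-balanced reviewer ranking (illustrative sketch, not WhoDo's formula).

def rank_reviewers(familiarity, open_reviews, load_weight=0.2):
    """familiarity: reviewer -> past touches of the changed files;
    open_reviews: reviewer -> currently assigned reviews.
    Overloaded experts are pushed down the ranking."""
    scores = {
        r: fam / (1.0 + load_weight * open_reviews.get(r, 0))
        for r, fam in familiarity.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

familiarity = {"alice": 40, "bob": 30, "carol": 10}
open_reviews = {"alice": 12, "bob": 1}   # alice is the overloaded expert
print(rank_reviewers(familiarity, open_reviews))  # bob outranks alice
```

Without the load term, alice would always top the list; the damping spreads review load across the team, which is the shortcoming the paper addresses.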
|
Bissyandé, Tegawendé F. |
ESEC/FSE '19: "iFixR: Bug Report driven Program ..."
iFixR: Bug Report driven Program Repair
Anil Koyuncu, Kui Liu, Tegawendé F. Bissyandé, Dongsun Kim, Martin Monperrus, Jacques Klein, and Yves Le Traon (University of Luxembourg, Luxembourg; Furiosa A.I., South Korea; KTH, Sweden) Issue tracking systems are commonly used in modern software development for collecting feedback from users and developers. An ultimate automation target of software maintenance is then the systematization of patch generation for user-reported bugs. Although this ambition is aligned with the momentum of automated program repair, the literature has, so far, mostly focused on generate-and-validate setups where fault localization and patch generation are driven by a well-defined test suite. On the one hand, however, the common (yet strong) assumption on the existence of relevant test cases does not hold in practice for most development settings: many bugs are reported without the available test suite being able to reveal them. On the other hand, for many projects, the number of bug reports generally outstrips the resources available to triage them. Towards increasing the adoption of patch generation tools by practitioners, we investigate a new repair pipeline, iFixR, driven by bug reports: (1) bug reports are fed to an IR-based fault localizer; (2) patches are generated from fix patterns and validated via regression testing; (3) a prioritized list of generated patches is proposed to developers. We evaluate iFixR on the Defects4J dataset, which we enriched (i.e., faults are linked to bug reports) and carefully reorganized (i.e., the timeline of test-cases is naturally split). iFixR generates genuine/plausible patches for 21/44 Defects4J faults with its IR-based fault localizer. iFixR accurately places a genuine/plausible patch among its top-5 recommendations for 8/13 of these faults (without using future test cases in generation-and-validation). @InProceedings{ESEC/FSE19p314, author = {Anil Koyuncu and Kui Liu and Tegawendé F. 
Bissyandé and Dongsun Kim and Martin Monperrus and Jacques Klein and Yves Le Traon}, title = {iFixR: Bug Report driven Program Repair}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {314--325}, doi = {10.1145/3338906.3338935}, year = {2019}, } Publisher's Version Artifacts Reusable |
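The first pipeline stage, IR-based fault localization, can be sketched with a hand-rolled TF-IDF ranker. This is a hypothetical simplification (iFixR's actual localizer is more elaborate): source files are ranked by cosine similarity between the bug-report text and each file's identifiers.

```python
# Minimal IR-based fault localization (illustrative sketch, not iFixR itself).
import math
from collections import Counter

def vectorize(tokens, df, n):
    """TF-IDF weights; tokens absent from the corpus vocabulary are dropped."""
    tf = Counter(tokens)
    return {t: c * (math.log(n / df[t]) + 1.0) for t, c in tf.items() if t in df}

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def localize(bug_report, files):
    """Rank file names by similarity of their token lists to the report."""
    n = len(files)
    df = Counter()
    for toks in files.values():
        df.update(set(toks))
    vecs = {f: vectorize(toks, df, n) for f, toks in files.items()}
    q = vectorize(bug_report, df, n)
    return sorted(files, key=lambda f: cosine(q, vecs[f]), reverse=True)

# Hypothetical corpus: two files reduced to identifier tokens.
files = {
    "DateParser.java": ["parse", "date", "format", "exception"],
    "HttpClient.java": ["socket", "connect", "timeout"],
}
report = ["exception", "when", "parse", "date"]
print(localize(report, files))  # DateParser.java ranks first
```

The ranked files would then seed the pattern-based patch generation of stage (2).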
|
Blot, Aymeric |
ESEC/FSE '19: "PyGGI 2.0: Language Independent ..."
PyGGI 2.0: Language Independent Genetic Improvement Framework
Gabin An, Aymeric Blot, Justyna Petke, and Shin Yoo (KAIST, South Korea; University College London, UK) PyGGI is a research tool for Genetic Improvement (GI), that is designed to be versatile and easy to use. We present version 2.0 of PyGGI, the main feature of which is an XML-based intermediate program representation. It allows users to easily define GI operators and algorithms that can be reused with multiple target languages. Using the new version of PyGGI, we present two case studies. First, we conduct an Automated Program Repair (APR) experiment with the QuixBugs benchmark, one that contains defective programs in both Python and Java. Second, we replicate an existing work on runtime improvement through program specialisation for the MiniSAT satisfiability solver. PyGGI 2.0 was able to generate a patch for a bug not previously fixed by any APR tool. It was also able to achieve 14% runtime improvement in the case of MiniSAT. The presented results show the applicability and the expressiveness of the new version of PyGGI. A video of the tool demo is at: https://youtu.be/PxRUdlRDS40. @InProceedings{ESEC/FSE19p1100, author = {Gabin An and Aymeric Blot and Justyna Petke and Shin Yoo}, title = {PyGGI 2.0: Language Independent Genetic Improvement Framework}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1100--1104}, doi = {10.1145/3338906.3341184}, year = {2019}, } Publisher's Version Video |
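The genetic-improvement loop that PyGGI automates can be illustrated with a toy repair run. This sketch is hypothetical: PyGGI works on an XML-based intermediate representation with several edit operators, whereas here a program is just a list of source lines, the only operator is statement deletion, and fitness is the number of passing test cases.

```python
# Toy genetic-improvement / program-repair loop (illustrative sketch).
import random

def fitness(lines, tests):
    """Build a one-argument function from `lines` and count passing tests."""
    src = "def f(x):\n" + "".join("    " + l + "\n" for l in lines)
    env = {}
    try:
        exec(src, env)
        return sum(env["f"](i) == o for i, o in tests)
    except Exception:
        return -1          # candidate does not compile or crashes

def improve(lines, tests, iterations=200, seed=0):
    rng = random.Random(seed)
    best, best_fit = lines, fitness(lines, tests)
    for _ in range(iterations):
        cand = list(best)
        del cand[rng.randrange(len(cand))]   # delete-statement operator
        if not cand:
            continue
        f = fitness(cand, tests)
        if f > best_fit:                     # keep strictly better variants
            best, best_fit = cand, f
    return best

# Buggy program: a stray statement corrupts the result.
buggy = ["y = x * 2", "y = y + 99", "return y"]
tests = [(1, 2), (5, 10)]
patched = improve(buggy, tests)
print(patched)  # the stray "y = y + 99" line has been deleted
```

PyGGI's XML representation plays the role of the line list here, which is what lets the same operators target both Python and Java.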
|
Boyar, Seref |
ESEC/FSE '19: "Using Microservices for Non-intrusive ..."
Using Microservices for Non-intrusive Customization of Multi-tenant SaaS
Phu H. Nguyen, Hui Song, Franck Chauvel, Roy Muller, Seref Boyar, and Erik Levin (SINTEF, Norway; Visma, Norway) Enterprise software vendors often need to support their customer companies in customizing the enterprise software products deployed on the customers' premises. But when software vendors are migrating their products to cloud-based Software-as-a-Service (SaaS), deep customization that used to be done on-premises is not applicable to the cloud-based multi-tenant context in which all tenants share the same SaaS. Enabling tenant-specific customization in cloud-based multi-tenant SaaS requires a novel approach. This paper proposes a Microservices-based non-intrusive Customization framework for multi-tenant Cloud-based SaaS, called MiSC-Cloud. Non-intrusive deep customization means that the microservices for customization of each tenant are isolated from the main software product and other microservices for customization of other tenants. MiSC-Cloud makes deep customization possible via authorized API calls through API gateways to the APIs of the customization microservices and the APIs of the main software product. We have implemented a proof-of-concept of our approach to enable non-intrusive deep customization of eShopOnContainers, Microsoft's open-source cloud-native reference application. Based on this work, we provide some lessons learned and directions for future work. @InProceedings{ESEC/FSE19p905, author = {Phu H. Nguyen and Hui Song and Franck Chauvel and Roy Muller and Seref Boyar and Erik Levin}, title = {Using Microservices for Non-intrusive Customization of Multi-tenant SaaS}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {905--915}, doi = {10.1145/3338906.3340452}, year = {2019}, } Publisher's Version |
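The gateway-based routing idea can be sketched in a few lines. All names below are hypothetical (the paper does not publish MiSC-Cloud's routing tables): the gateway inspects the tenant on each call and forwards it either to that tenant's isolated customization microservice, when one is registered for the endpoint, or to the shared main product.

```python
# Tenant-aware API-gateway routing (illustrative sketch; hypothetical URLs).

MAIN_PRODUCT = "https://main-product/api"

# (tenant, endpoint) -> that tenant's isolated customization microservice
CUSTOMIZATIONS = {
    ("tenant-a", "/orders/price"): "https://tenant-a-custom/price",
}

def route(tenant, endpoint):
    """Customized tenants get their own microservice; everyone else
    falls through to the shared multi-tenant product."""
    return CUSTOMIZATIONS.get((tenant, endpoint), MAIN_PRODUCT)

print(route("tenant-a", "/orders/price"))  # isolated customization service
print(route("tenant-b", "/orders/price"))  # shared main product
```

Because the lookup is keyed by tenant, one tenant's deep customization never executes in another tenant's requests, which is the isolation property the paper calls non-intrusive.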
|
Briand, Lionel C. |
ESEC/FSE '19: "Evaluating Model Testing and ..."
Evaluating Model Testing and Model Checking for Finding Requirements Violations in Simulink Models
Shiva Nejati, Khouloud Gaaloul, Claudio Menghi, Lionel C. Briand, Stephen Foster, and David Wolfe (University of Luxembourg, Luxembourg; QRA, Canada) Matlab/Simulink is a development and simulation language that is widely used by the Cyber-Physical System (CPS) industry to model dynamical systems. There are two mainstream approaches to verify CPS Simulink models: model testing that attempts to identify failures in models by executing them for a number of sampled test inputs, and model checking that attempts to exhaustively check the correctness of models against some given formal properties. In this paper, we present an industrial Simulink model benchmark, provide a categorization of different model types in the benchmark, describe the recurring logical patterns in the model requirements, and discuss the results of applying model checking and model testing approaches to identify requirements violations in the benchmarked models. Based on the results, we discuss the strengths and weaknesses of model testing and model checking. Our results further suggest that model checking and model testing are complementary and by combining them, we can significantly enhance the capabilities of each of these approaches individually. We conclude by providing guidelines as to how the two approaches can be best applied together. @InProceedings{ESEC/FSE19p1015, author = {Shiva Nejati and Khouloud Gaaloul and Claudio Menghi and Lionel C. Briand and Stephen Foster and David Wolfe}, title = {Evaluating Model Testing and Model Checking for Finding Requirements Violations in Simulink Models}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1015--1025}, doi = {10.1145/3338906.3340444}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Generating Automated and Online ..." Generating Automated and Online Test Oracles for Simulink Models with Continuous and Uncertain Behaviors Claudio Menghi, Shiva Nejati, Khouloud Gaaloul, and Lionel C. 
Briand (University of Luxembourg, Luxembourg) Test automation requires automated oracles to assess test outputs. For cyber physical systems (CPS), oracles, in addition to being automated, should ensure some key objectives: (i) they should check test outputs in an online manner to stop expensive test executions as soon as a failure is detected; (ii) they should handle time- and magnitude-continuous CPS behaviors; (iii) they should provide a quantitative degree of satisfaction or failure measure instead of binary pass/fail outputs; and (iv) they should be able to handle uncertainties due to CPS interactions with the environment. We propose an automated approach to translate CPS requirements specified in a logic-based language into test oracles specified in Simulink - a widely-used development and simulation language for CPS. Our approach achieves the objectives noted above through the identification of a fragment of Signal First Order logic (SFOL) to specify requirements, the definition of a quantitative semantics for this fragment and a sound translation of the fragment into Simulink. The results from applying our approach on 11 industrial case studies show that: (i) our requirements language can express all the 98 requirements of our case studies; (ii) the time and effort required by our approach are acceptable, showing potential for the adoption of our work in practice, and (iii) for large models, our approach can dramatically reduce the test execution time compared to when test outputs are checked in an offline manner. @InProceedings{ESEC/FSE19p27, author = {Claudio Menghi and Shiva Nejati and Khouloud Gaaloul and Lionel C. Briand}, title = {Generating Automated and Online Test Oracles for Simulink Models with Continuous and Uncertain Behaviors}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {27--38}, doi = {10.1145/3338906.3338920}, year = {2019}, } Publisher's Version Artifacts Reusable |
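Objectives (i) and (iii) above - online checking with a quantitative verdict - can be illustrated on a single requirement. This is not the paper's SFOL-to-Simulink translation, just a hypothetical stand-in: for "the signal stays below `limit`", the degree of satisfaction at each step is the margin `limit - value`, and the oracle aborts the (expensive) simulation at the first negative degree.

```python
# Quantitative, online test oracle (illustrative sketch).

def online_oracle(samples, limit):
    """samples: iterable of (time, value) pairs from a simulation.
    Returns (worst_degree, failure_time); failure_time is None if the
    requirement held for the whole trace, and a negative worst_degree
    quantifies how badly it was violated."""
    worst = float("inf")
    for t, value in samples:
        degree = limit - value          # quantitative margin, not pass/fail
        worst = min(worst, degree)
        if degree < 0:                  # failure detected: stop early
            return worst, t
    return worst, None

trace = [(0, 1.2), (1, 1.8), (2, 2.4), (3, 1.0)]
print(online_oracle(trace, limit=2.0))  # requirement violated at t=2
```

The early return is what makes the oracle online: sample (3, 1.0) is never consumed, so a long Simulink run could be cut short at the first violation.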
|
Bucur, Stefan |
ESEC/FSE '19: "FUDGE: Fuzz Driver Generation ..."
FUDGE: Fuzz Driver Generation at Scale
Domagoj Babić, Stefan Bucur, Yaohui Chen, Franjo Ivančić, Tim King, Markus Kusano, Caroline Lemieux, László Szekeres, and Wei Wang (Google, USA; Northeastern University, USA; University of California at Berkeley, USA) At Google we have found tens of thousands of security and robustness bugs by fuzzing C and C++ libraries. To fuzz a library, a fuzzer requires a fuzz driver—which exercises some library code—to which it can pass inputs. Unfortunately, writing fuzz drivers remains a primarily manual exercise, a major hindrance to the widespread adoption of fuzzing. In this paper, we address this major hindrance by introducing the Fudge system for automated fuzz driver generation. Fudge automatically generates fuzz driver candidates for libraries based on existing client code. We have used Fudge to generate thousands of new drivers for a wide variety of libraries. Each generated driver includes a synthesized C/C++ program and a corresponding build script, and is automatically analyzed for quality. Developers have integrated over 200 of these generated drivers into continuous fuzzing services and have committed to address reported security bugs. Further, several of these fuzz drivers have been upstreamed to open source projects and integrated into the OSS-Fuzz fuzzing infrastructure. Running these fuzz drivers has resulted in over 150 bug fixes, including the elimination of numerous exploitable security vulnerabilities. @InProceedings{ESEC/FSE19p975, author = {Domagoj Babić and Stefan Bucur and Yaohui Chen and Franjo Ivančić and Tim King and Markus Kusano and Caroline Lemieux and László Szekeres and Wei Wang}, title = {FUDGE: Fuzz Driver Generation at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {975--985}, doi = {10.1145/3338906.3340456}, year = {2019}, } Publisher's Version |
|
Bui, Nghi D. Q. |
ESEC/FSE '19: "SAR: Learning Cross-Language ..."
SAR: Learning Cross-Language API Mappings with Little Knowledge
Nghi D. Q. Bui, Yijun Yu, and Lingxiao Jiang (Singapore Management University, Singapore; Open University, UK) To save effort, developers often translate programs from one programming language to another, instead of implementing them from scratch. Translating application program interfaces (APIs) used in one language to functionally equivalent ones available in another language is an important aspect of program translation. Existing approaches facilitate the translation by automatically identifying the API mappings across programming languages. However, these approaches still require large amounts of parallel corpora, ranging from pairs of APIs or code fragments that are functionally equivalent, to similar code comments. To minimize the need of parallel corpora, this paper aims at an automated approach that can map APIs across languages with much less a priori knowledge than other approaches. The approach is based on a realization of the notion of domain adaptation, combined with code embedding, to better align two vector spaces. Taking as input large sets of programs, our approach first generates numeric vector representations of the programs (including the APIs used in each language), and it adapts generative adversarial networks (GAN) to align the vectors in different spaces of two languages. For a better alignment, we initialize the GAN with parameters derived from API mapping seeds that can be identified accurately with a simple automatic signature-based matching heuristic. Then the cross-language API mappings can be identified via nearest-neighbors queries in the aligned vector spaces. We have implemented the approach (SAR, named after three main technical components in the approach) in a prototype for mapping APIs across Java and C# programs. 
Our evaluation on about 2 million Java files and 1 million C# files shows that the approach can achieve 54% and 82% mapping accuracy in its top-1 and top-10 API mapping results with only 174 automatically identified seeds, more accurate than other approaches that use the same number of, or many more, mapping seeds. @InProceedings{ESEC/FSE19p796, author = {Nghi D. Q. Bui and Yijun Yu and Lingxiao Jiang}, title = {SAR: Learning Cross-Language API Mappings with Little Knowledge}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {796--806}, doi = {10.1145/3338906.3338924}, year = {2019}, } Publisher's Version Info Artifacts Reusable |
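The seed-initialized alignment plus nearest-neighbour lookup can be sketched with toy 2-D embeddings. This is a heavy simplification under stated assumptions: SAR uses GAN-based adversarial alignment of learned code embeddings, which is replaced here by a per-dimension least-squares map fit on seed pairs, and the API names and vectors are made up.

```python
# Seed-based embedding alignment + nearest-neighbour API mapping
# (illustrative sketch, not SAR's GAN-based alignment).

def fit_alignment(seeds, java_vecs, csharp_vecs):
    """Least-squares scale per dimension, fit on seed API pairs."""
    dim = len(next(iter(java_vecs.values())))
    scales = []
    for d in range(dim):
        num = sum(java_vecs[j][d] * csharp_vecs[c][d] for j, c in seeds)
        den = sum(java_vecs[j][d] ** 2 for j, _ in seeds)
        scales.append(num / den)
    return scales

def nearest(vec, space):
    """Nearest neighbour by squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(space, key=lambda name: dist(vec, space[name]))

# Hypothetical embeddings for two APIs per language.
java = {"ArrayList.add": [1.0, 0.1], "HashMap.put": [0.1, 1.0]}
csharp = {"List.Add": [2.0, 0.2], "Dictionary.Add": [0.2, 2.0]}
seeds = [("ArrayList.add", "List.Add")]   # one automatically matched seed

scales = fit_alignment(seeds, java, csharp)
query = [s * x for s, x in zip(scales, java["HashMap.put"])]
print(nearest(query, csharp))  # HashMap.put maps to Dictionary.Add
```

The point of the sketch is the pipeline shape: a few cheap seeds fix the map between spaces, after which unseen APIs are mapped purely by proximity.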
|
Burger, Andreas |
ESEC/FSE '19: "Architectural Decision Forces ..."
Architectural Decision Forces at Work: Experiences in an Industrial Consultancy Setting
Julius Rueckert, Andreas Burger, Heiko Koziolek, Thanikesavan Sivanthi, Alexandru Moga, and Carsten Franke (ABB Research, Germany; ABB Research, Switzerland) The concepts of decision forces and the decision forces viewpoint were proposed to help software architects to make architectural decisions more transparent and the documentation of their rationales more explicit. However, practical experience reports and guidelines on how to use the viewpoint in typical industrial project setups are not available. Existing works mainly focus on basic tool support for the documentation of the viewpoint or show how forces can be used as part of focused architecture review sessions. With this paper, we share experiences and lessons learned from applying the decision forces viewpoint in a distributed industrial project setup, which involves consultants supporting architects during the re-design process of an existing large software system. Alongside our findings, we describe new forces that can serve as template for similar projects, discuss challenges applying them in a distributed consultancy project, and share ideas for potential extensions. @InProceedings{ESEC/FSE19p996, author = {Julius Rueckert and Andreas Burger and Heiko Koziolek and Thanikesavan Sivanthi and Alexandru Moga and Carsten Franke}, title = {Architectural Decision Forces at Work: Experiences in an Industrial Consultancy Setting}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {996--1005}, doi = {10.1145/3338906.3340461}, year = {2019}, } Publisher's Version |
|
Cadar, Cristian |
ESEC/FSE '19: "A Segmented Memory Model for ..."
A Segmented Memory Model for Symbolic Execution
Timotej Kapus and Cristian Cadar (Imperial College London, UK) Symbolic execution is an effective technique for exploring paths in a program and reasoning about all possible values on those paths. However, the technique still struggles with code that uses complex heap data structures, in which a pointer is allowed to refer to more than one memory object. In such cases, symbolic execution typically forks execution into multiple states, one for each object to which the pointer could refer. In this paper, we propose a technique that avoids this expensive forking by using a segmented memory model. In this model, memory is split into segments, so that each symbolic pointer refers to objects in a single segment. The sizes of the segments are bounded by a threshold, in order to avoid expensive constraints. This results in a memory model where forking due to symbolic pointer dereferences is significantly reduced, often completely. We evaluate our segmented memory model on a mix of whole program benchmarks (such as m4 and make) and library benchmarks (such as SQLite), and observe significant decreases in execution time and memory usage. @InProceedings{ESEC/FSE19p774, author = {Timotej Kapus and Cristian Cadar}, title = {A Segmented Memory Model for Symbolic Execution}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {774--784}, doi = {10.1145/3338906.3338936}, year = {2019}, } Publisher's Version Artifacts Reusable ESEC/FSE '19: "Just Fuzz It: Solving Floating-Point ..." Just Fuzz It: Solving Floating-Point Constraints using Coverage-Guided Fuzzing Daniel Liew, Cristian Cadar, Alastair F. Donaldson, and J. Ryan Stinnett (Imperial College London, UK; Mozilla, USA) We investigate the use of coverage-guided fuzzing as a means of proving satisfiability of SMT formulas over finite variable domains, with specific application to floating-point constraints. 
We show how an SMT formula can be encoded as a program containing a location that is reachable if and only if the program’s input corresponds to a satisfying assignment to the formula. A coverage-guided fuzzer can then be used to search for an input that reaches the location, yielding a satisfying assignment. We have implemented this idea in a tool, Just Fuzz-it Solver (JFS), and we present a large experimental evaluation showing that JFS is both competitive with and complementary to state-of-the-art SMT solvers with respect to solving floating-point constraints, and that the coverage-guided approach of JFS provides significant benefit over naive fuzzing in the floating-point domain. Applied in a portfolio manner, the JFS approach thus has the potential to complement traditional SMT solvers for program analysis tasks that involve reasoning about floating-point constraints. @InProceedings{ESEC/FSE19p521, author = {Daniel Liew and Cristian Cadar and Alastair F. Donaldson and J. Ryan Stinnett}, title = {Just Fuzz It: Solving Floating-Point Constraints using Coverage-Guided Fuzzing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {521--532}, doi = {10.1145/3338906.3338921}, year = {2019}, } Publisher's Version |
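The formula-to-program encoding behind JFS can be illustrated naively. The constraint below is made up, and plain random fuzzing stands in for JFS's coverage-guided search: the formula (x * x > 4.0) ∧ (x + 1.0 < 0.0) becomes a program whose target branch is reachable if and only if the input satisfies the formula, so any input reaching it is a satisfying assignment.

```python
# SMT-formula-as-program encoding with a naive fuzzer (illustrative sketch;
# JFS uses coverage-guided fuzzing, not blind random sampling).
import random
import struct

def program(x):
    if x * x > 4.0:          # first conjunct
        if x + 1.0 < 0.0:    # second conjunct
            return True      # target location: formula is SAT with model x
    return False

def fuzz(seed=0, tries=100000):
    """Search for an input that reaches the target location."""
    rng = random.Random(seed)
    for _ in range(tries):
        # Draw a random 64-bit pattern and reinterpret it as a double,
        # the way a fuzzer mutates raw input bytes.
        x = struct.unpack("<d", struct.pack("<Q", rng.getrandbits(64)))[0]
        if program(x):
            return x
    return None

model = fuzz()
print(model)  # some x < -2.0, i.e. a satisfying assignment
```

Reinterpreting raw bits as a double sidesteps the need to reason about floating-point semantics symbolically, which is exactly why the fuzzing encoding is attractive for these constraints.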
|
Cai, Haipeng |
ESEC/FSE '19: "A Dynamic Taint Analyzer for ..."
A Dynamic Taint Analyzer for Distributed Systems
Xiaoqin Fu and Haipeng Cai (Washington State University, USA) As in other software domains, information flow security is a fundamental aspect of code security in distributed systems. However, most existing solutions to information flow security are limited to centralized software. For distributed systems, such solutions face multiple challenges, including technique applicability, tool portability, and analysis scalability. To overcome these challenges, we present DistTaint, a dynamic information flow (taint) analyzer for distributed systems. By partial-ordering method-execution events, DistTaint infers implicit dependencies in distributed programs, so as to resolve the applicability challenge. It resolves the portability challenge by working fully at application level, without customizing the runtime platform. To achieve scalability, it reduces analysis costs using a multi-phase analysis, where the pre-analysis phase generates method-level results to narrow down the scope of the following statement-level analysis. We evaluated DistTaint against eight real-world distributed systems. Empirical results showed DistTaint’s applicability to, portability with, and scalability for industry-scale distributed systems, along with its capability of discovering known and unknown vulnerabilities. A demo video for DistTaint can be downloaded from https://www.dropbox.com/l/scl/AAAkrm4p63Ffx0rZqblY3zlLFuaohbRxs0 or viewed here https://youtu.be/fy4yMIaKzPE online. The tool package is here: https://www.dropbox.com/sh/kfr9ixucyny1jp2/AAC00aI-I8Od4ywZCqwZ1uaa?dl=0 @InProceedings{ESEC/FSE19p1115, author = {Xiaoqin Fu and Haipeng Cai}, title = {A Dynamic Taint Analyzer for Distributed Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1115--1119}, doi = {10.1145/3338906.3341179}, year = {2019}, } Publisher's Version Video Info |
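The partial ordering of method-execution events that DistTaint builds on can be illustrated with Lamport clocks, a standard technique for ordering distributed events; the abstract does not publish DistTaint's exact ordering algorithm, so this sketch (with hypothetical node and message names) only stands in for the idea of inferring implicit cross-node dependencies from event order.

```python
# Lamport-clock partial ordering of distributed events (illustrative sketch).

def lamport_order(events):
    """events: list of (process, kind, msg_id), kind in {'local','send','recv'}.
    Returns one logical clock per event; clock(a) < clock(b) is a necessary
    condition for a happens-before b, so it exposes cross-process deps."""
    proc_clock = {}   # per-process logical clock
    msg_clock = {}    # clock at which each message was sent
    clocks = []
    for proc, kind, msg in events:
        c = proc_clock.get(proc, 0) + 1
        if kind == "recv":
            c = max(c, msg_clock[msg] + 1)  # receive after the matching send
        proc_clock[proc] = c
        if kind == "send":
            msg_clock[msg] = c
        clocks.append(c)
    return clocks

trace = [
    ("api-node", "local", None),
    ("api-node", "send", "m1"),
    ("db-node", "recv", "m1"),     # implicitly depends on api-node's send
    ("db-node", "local", None),
]
print(lamport_order(trace))  # [1, 2, 3, 4]
```

In a taint analysis, such an ordering tells the tool that data reaching db-node's events may have flowed from api-node, even without shared memory.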
|
Cai, Liang |
ESEC/FSE '19: "AnswerBot: An Answer Summary ..."
AnswerBot: An Answer Summary Generation Tool Based on Stack Overflow
Liang Cai, Haoye Wang, Bowen Xu, Qiao Huang, Xin Xia, David Lo, and Zhenchang Xing (Zhejiang University, China; Singapore Management University, Singapore; Monash University, Australia; Australian National University, Australia) Software Q&A sites (like Stack Overflow) play an essential role in developers’ day-to-day work for problem-solving. Although search engines (like Google) are widely used to obtain a list of relevant posts for technical problems, we observed that redundant relevant posts and the sheer amount of information hinder developers from digesting and identifying the useful answers. In this paper, we propose a tool, AnswerBot, which automatically generates an answer summary for a technical problem. AnswerBot consists of three main stages: (1) relevant question retrieval, (2) useful answer paragraph selection, (3) diverse answer summary generation. We implement it in the form of a search engine website. To evaluate AnswerBot, we first build a repository that includes a large number of Java questions and their corresponding answers from Stack Overflow. Then, we conduct a user study that evaluates the answer summary generated by AnswerBot and two baselines (based on Google and Stack Overflow search engine) for 100 queries. The results show that the answer summaries generated by AnswerBot are more relevant, useful, and diverse. Moreover, we also substantially improved the efficiency of AnswerBot (from 309 to 8 seconds per query). @InProceedings{ESEC/FSE19p1134, author = {Liang Cai and Haoye Wang and Bowen Xu and Qiao Huang and Xin Xia and David Lo and Zhenchang Xing}, title = {AnswerBot: An Answer Summary Generation Tool Based on Stack Overflow}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1134--1138}, doi = {10.1145/3338906.3341186}, year = {2019}, } Publisher's Version ESEC/FSE '19: "BIKER: A Tool for Bi-Information ..." 
BIKER: A Tool for Bi-Information Source Based API Method Recommendation Liang Cai, Haoye Wang, Qiao Huang, Xin Xia, Zhenchang Xing, and David Lo (Zhejiang University, China; Monash University, Australia; Australian National University, Australia; Singapore Management University, Singapore) Application Programming Interfaces (APIs) in software libraries play an important role in modern software development. Although most libraries provide API documentation as a reference, developers may find it difficult to directly search for appropriate APIs in documentation using the natural language description of the programming tasks. We call this phenomenon the knowledge gap, which refers to the fact that API documentation mainly describes API functionality and structure but lacks other types of information like concepts and purposes. In this paper, we propose a Java API recommendation tool named BIKER (Bi-Information source based KnowledgE Recommendation) to bridge the knowledge gap. We implement BIKER as a search engine website. Given a query in natural language, instead of directly searching API documentation, BIKER first searches for similar API-related questions on Stack Overflow to extract candidate APIs. Then, BIKER ranks them by considering the query’s similarity with both Stack Overflow posts and API documentation. Finally, to help developers better understand why each API is recommended and how to use them in practice, BIKER summarizes and presents supplementary information (e.g., API description, code examples in Stack Overflow posts) for each recommended API. Our quantitative evaluation and user study demonstrate that BIKER can help developers find appropriate APIs more efficiently and precisely. 
@InProceedings{ESEC/FSE19p1075, author = {Liang Cai and Haoye Wang and Qiao Huang and Xin Xia and Zhenchang Xing and David Lo}, title = {BIKER: A Tool for Bi-Information Source Based API Method Recommendation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1075--1079}, doi = {10.1145/3338906.3341174}, year = {2019}, } Publisher's Version |
|
Cai, Shaowei |
ESEC/FSE '19: "Towards More Efficient Meta-heuristic ..."
Towards More Efficient Meta-heuristic Algorithms for Combinatorial Test Generation
Jinkun Lin, Shaowei Cai, Chuan Luo, Qingwei Lin, and Hongyu Zhang (Institute of Software at Chinese Academy of Sciences, China; Microsoft Research, China; University of Newcastle, Australia) Combinatorial interaction testing (CIT) is a popular approach to detecting faults in highly configurable software systems. The core task of CIT is to generate a small test suite called a t-way covering array (CA), where t is the covering strength. Many meta-heuristic algorithms have been proposed to solve the constrained covering array generating (CCAG) problem. A major drawback of existing algorithms is that they usually need considerable time to obtain a good-quality solution, which hinders the wider application of such algorithms. We observe that the high time consumption of existing meta-heuristic algorithms for CCAG is mainly due to the procedure of score computation. In this work, we propose a much more efficient method for score computation. The score computation method is applied to a state-of-the-art algorithm TCA, showing significant improvements. The new score computation method opens a way to utilize algorithmic ideas relying on scores which were not affordable previously. We integrate a gradient descent search step to further improve the algorithm, leading to a new algorithm called FastCA. Experiments on a broad range of real-world benchmarks and synthetic benchmarks show that FastCA significantly outperforms state-of-the-art CCAG algorithms, in terms of both the size of the obtained covering arrays and the run time. @InProceedings{ESEC/FSE19p212, author = {Jinkun Lin and Shaowei Cai and Chuan Luo and Qingwei Lin and Hongyu Zhang}, title = {Towards More Efficient Meta-heuristic Algorithms for Combinatorial Test Generation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {212--222}, doi = {10.1145/3338906.3338914}, year = {2019}, } Publisher's Version |
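The coverage bookkeeping at the heart of CCAG scoring can be illustrated for t = 2. This is the naive from-scratch computation; FastCA's contribution is precisely a much faster incremental version of this kind of score, so the sketch shows the quantity being computed, not the paper's method of computing it.

```python
# Counting covered 2-way interactions of a candidate covering array
# (naive illustrative sketch; FastCA computes such scores incrementally).
from itertools import combinations

def covered_pairs(rows):
    """rows: list of option-value tuples. Returns the set of covered 2-way
    interactions, each as ((option_i, v_i), (option_j, v_j)) with i < j."""
    covered = set()
    for row in rows:
        for (i, vi), (j, vj) in combinations(enumerate(row), 2):
            covered.add(((i, vi), (j, vj)))
    return covered

# Three binary options: full 2-way coverage needs 3 * 4 = 12 pairs.
ca = [(0, 0, 0), (1, 1, 1), (0, 1, 1), (1, 0, 0)]
print(len(covered_pairs(ca)))  # this 4-row array covers 10 of the 12 pairs
```

A meta-heuristic search would score a candidate row change by how many still-uncovered pairs it gains, which is exactly the quantity that dominates run time when recomputed naively.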
|
Cai, Yan |
ESEC/FSE '19: "Detecting Concurrency Memory ..."
Detecting Concurrency Memory Corruption Vulnerabilities
Yan Cai, Biyun Zhu, Ruijie Meng, Hao Yun, Liang He, Purui Su, and Bin Liang (Institute of Software at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Renmin University of China, China) Memory corruption vulnerabilities can occur in multithreaded executions, known as concurrency vulnerabilities in this paper. Due to non-deterministic multithreaded executions, they are extremely difficult to detect. Recently, researchers tried to apply data race detectors to detect concurrency vulnerabilities. Unfortunately, these detectors are ineffective on detecting concurrency vulnerabilities. For example, most (90%) of data races are benign. However, concurrency vulnerabilities are harmful and can usually be exploited to launch attacks. Techniques based on the maximal causal model rely on constraint solvers to predict scheduling; they can miss concurrency vulnerabilities in practice. Our insight is, a concurrency vulnerability is more related to the orders of events that can be reversed in different executions, no matter whether the corresponding accesses can form data races. We then define exchangeable events to identify pairs of events such that their execution orders can probably be reversed in different executions. We further propose algorithms to detect three major kinds of concurrency vulnerabilities. To overcome potential imprecision of exchangeable events, we also adopt a validation to isolate real vulnerabilities. We implemented our algorithms as a tool ConVul and applied it on 10 known concurrency vulnerabilities and the MySQL database server. Compared with three widely-used race detectors and one detector based on the maximal causal model, ConVul was significantly more effective by detecting 9 of 10 known vulnerabilities and 6 zero-day vulnerabilities on MySQL (four have been confirmed). However, other detectors only detected at most 3 out of the 16 known and zero-day vulnerabilities. 
@InProceedings{ESEC/FSE19p706, author = {Yan Cai and Biyun Zhu and Ruijie Meng and Hao Yun and Liang He and Purui Su and Bin Liang}, title = {Detecting Concurrency Memory Corruption Vulnerabilities}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {706--717}, doi = {10.1145/3338906.3338927}, year = {2019}, } Publisher's Version |
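The exchangeable-events idea above can be sketched in a few lines. This is my own illustration under simplifying assumptions (one vector clock per event, a linear trace of free/use operations), not the ConVul implementation:

```python
# Two events are "exchangeable" if neither happens-before the other, i.e.
# their vector clocks are incomparable, so their order may flip in another
# schedule. A free() exchangeable with a use() of the same address is a
# candidate concurrency use-after-free.

def happens_before(vc_a, vc_b):
    """True if the event with clock vc_a happens-before the one with vc_b."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

def exchangeable(vc_a, vc_b):
    """Neither event is ordered before the other, so their order may flip."""
    return not happens_before(vc_a, vc_b) and not happens_before(vc_b, vc_a)

def find_concurrency_uaf(trace):
    """trace: list of (op, addr, vector_clock); flag exchangeable free/use pairs."""
    frees = [(addr, vc) for op, addr, vc in trace if op == "free"]
    uses = [(addr, vc) for op, addr, vc in trace if op == "use"]
    return [(addr, f_vc, u_vc)
            for addr, f_vc in frees
            for u_addr, u_vc in uses
            if addr == u_addr and exchangeable(f_vc, u_vc)]

# Thread 0 frees the buffer; thread 1's use is unordered w.r.t. the free.
trace = [
    ("use",  0x100, (1, 0)),  # ordered before the free -> safe
    ("free", 0x100, (2, 0)),
    ("use",  0x100, (0, 1)),  # incomparable with the free -> candidate UAF
]
print(find_concurrency_uaf(trace))
```

The sketch reports only the free/use pair whose clocks are incomparable; the pair ordered by happens-before is ignored, mirroring the abstract's point that order-reversibility, not racing accesses per se, signals the vulnerability.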
|
Çalıklı, Gül |
ESEC/FSE '19: "Effects of Explicit Feature ..."
Effects of Explicit Feature Traceability on Program Comprehension
Jacob Krüger, Gül Çalıklı, Thorsten Berger, Thomas Leich, and Gunter Saake (University of Magdeburg, Germany; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden; Harz University of Applied Sciences, Germany; METOP, Germany) Developers spend a substantial amount of their time on program comprehension. To improve their comprehension and refresh their memory, developers need to communicate with other developers, read the documentation, and analyze the source code. Many studies show that developers focus primarily on the source code and that small improvements can have a strong impact. As such, it is crucial to bring the code itself into a more comprehensible form. A particular technique for this purpose is the use of explicit feature traces, which make it easy to identify a program’s functionalities. To improve our empirical understanding of the effects of feature traces, we report an online experiment with 49 professional software developers. We studied the impact of explicit feature traces, namely annotations and decomposition, on program comprehension and compared them to the same code without traces. Besides this experiment, we also asked our participants about their opinions in order to combine quantitative and qualitative data. Our results indicate that, as opposed to purely object-oriented code: (1) annotations can have positive effects on program comprehension; (2) decomposition can have a negative impact on bug localization; and (3) our participants perceive both techniques as beneficial. Moreover, none of the three code versions yields significant improvements in task completion time.
Overall, our results indicate that lightweight traceability, such as using annotations, provides immediate benefits to developers during software development and maintenance without extensive training or tooling, and can improve current industrial practices that rely on heavyweight traceability tools (e.g., DOORS) and retroactive fulfillment of standards (e.g., ISO-26262, DO-178B). @InProceedings{ESEC/FSE19p338, author = {Jacob Krüger and Gül Çalıklı and Thorsten Berger and Thomas Leich and Gunter Saake}, title = {Effects of Explicit Feature Traceability on Program Comprehension}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {338--349}, doi = {10.1145/3338906.3338968}, year = {2019}, } Publisher's Version |
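As a concrete illustration of the annotation treatment studied above, here is my own toy example; the `&begin`/`&end` markers are hypothetical annotation syntax, not taken from the paper:

```python
# Explicit feature traces as lightweight comment annotations: the marked
# region tells a reader exactly which lines implement the Discount feature,
# without decomposing the function into separate modules.
from collections import namedtuple

Item = namedtuple("Item", "price")
User = namedtuple("User", "is_member")

def checkout(cart, user):
    total = sum(item.price for item in cart)
    # &begin[Discount]  (feature trace: lines up to &end implement Discount)
    if user.is_member:
        total *= 0.9
    # &end[Discount]
    return total

print(checkout([Item(100.0)], User(is_member=True)))
```

A reader looking for the discount logic can grep for the feature name instead of tracing the whole call graph, which is the comprehension benefit the experiment measures.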
|
Cambronero, Jose |
ESEC/FSE '19: "When Deep Learning Met Code ..."
When Deep Learning Met Code Search
Jose Cambronero, Hongyu Li, Seohyun Kim, Koushik Sen, and Satish Chandra (Massachusetts Institute of Technology, USA; Facebook, USA; University of California at Berkeley, USA) There have been multiple recent proposals on using deep neural networks for code search using natural language. Common across these proposals is the idea of embedding code and natural language queries into real vectors and then using vector distance to approximate semantic correlation between code and the query. Multiple approaches exist for learning these embeddings, including unsupervised techniques, which rely only on a corpus of code examples, and supervised techniques, which use an aligned corpus of paired code and natural language descriptions. The goal of this supervision is to produce embeddings that are more similar for a query and the corresponding desired code snippet. Clearly, there are choices in whether to use supervised techniques at all, and if one does, what sort of network and training to use for supervision. This paper is the first to evaluate these choices systematically. To this end, we assembled implementations of state-of-the-art techniques to run on a common platform with shared training and evaluation corpora. To explore the design space in network complexity, we also introduced a new design point that is a minimal supervision extension to an existing unsupervised technique. Our evaluation shows that: 1. adding supervision to an existing unsupervised technique can improve performance, though not necessarily by much; 2. simple networks for supervision can be more effective than more sophisticated sequence-based networks for code search; 3. while it is common to use docstrings to carry out supervision, there is a sizeable gap between the effectiveness of docstrings and a more query-appropriate supervision corpus.
@InProceedings{ESEC/FSE19p964, author = {Jose Cambronero and Hongyu Li and Seohyun Kim and Koushik Sen and Satish Chandra}, title = {When Deep Learning Met Code Search}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {964--974}, doi = {10.1145/3338906.3340458}, year = {2019}, } Publisher's Version |
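The idea shared across the surveyed proposals, embedding code and queries and ranking by vector distance, can be sketched as follows; bag-of-words counts stand in for a learned embedder (an assumption for illustration only):

```python
# Embed query and snippets into vectors, then rank snippets by cosine
# similarity to the query vector. Real systems learn the embedding;
# token counts are a crude stand-in.
import math
import re
from collections import Counter

def embed(text):
    """Toy 'embedding': token counts (a learned encoder in real systems)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, corpus):
    """Rank code snippets by embedding similarity to the query."""
    q = embed(query)
    return sorted(corpus, key=lambda snippet: cosine(q, embed(snippet)), reverse=True)

corpus = [
    "def read_file(path): return open(path).read()",
    "def sort_list(xs): return sorted(xs)",
]
print(search("read a file", corpus)[0])
```

Supervised approaches in the paper differ precisely in how `embed` is learned so that query and code vectors land close together; the retrieval step itself stays this simple.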
|
Cao, Chun |
ESEC/FSE '19: "Boosting Operational DNN Testing ..."
Boosting Operational DNN Testing Efficiency through Conditioning
Zenan Li, Xiaoxing Ma, Chang Xu, Chun Cao, Jingwei Xu, and Jian Lü (Nanjing University, China) With the increasing adoption of Deep Neural Network (DNN) models as integral parts of software systems, efficient operational testing of DNNs is much in demand to ensure these models' actual performance in field conditions. A challenge is that the testing often needs to produce precise results with a very limited budget for labeling data collected in the field. Viewing software testing as a practice of reliability estimation through statistical sampling, we re-interpret the idea behind conventional structural coverages as conditioning for variance reduction. With this insight, we propose an efficient DNN testing method based on conditioning on the representation learned by the DNN model under test. The representation is defined by the probability distribution of the output of neurons in the last hidden layer of the model. To sample from this high-dimensional distribution, in which the operational data are sparsely distributed, we design an algorithm leveraging cross-entropy minimization. Experiments with various DNN models and datasets were conducted to evaluate the general efficiency of the approach. The results show that, compared with simple random sampling, this approach requires only about half as many labeled inputs to achieve the same level of precision. @InProceedings{ESEC/FSE19p499, author = {Zenan Li and Xiaoxing Ma and Chang Xu and Chun Cao and Jingwei Xu and Jian Lü}, title = {Boosting Operational DNN Testing Efficiency through Conditioning}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {499--509}, doi = {10.1145/3338906.3338930}, year = {2019}, } Publisher's Version |
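The variance-reduction intuition behind conditioning can be sketched with stratified sampling; the stratum function below is a coarse stand-in for the paper's learned-representation conditioning, and the whole example is my illustration, not the proposed algorithm:

```python
# Instead of simple random sampling, split the labeling budget across
# strata and combine per-stratum means: if inputs within a stratum behave
# alike, the combined estimate has much lower variance.
import random

def stratified_estimate(population, strata_of, budget, rng):
    """Estimate mean accuracy with the labeling budget split across strata."""
    strata = {}
    for x in population:
        strata.setdefault(strata_of(x), []).append(x)
    total = len(population)
    estimate = 0.0
    for members in strata.values():
        k = max(1, round(budget * len(members) / total))  # proportional budget
        sample = rng.sample(members, min(k, len(members)))
        estimate += (len(members) / total) * (sum(sample) / len(sample))
    return estimate

rng = random.Random(0)
# 1 = model correct on this input, 0 = wrong.
population = [1] * 900 + [0] * 100
strata_of = lambda x: x  # toy stand-in: strata happen to be homogeneous
print(stratified_estimate(population, strata_of, budget=50, rng=rng))
```

With perfectly homogeneous strata the estimate is exact regardless of budget; the paper's contribution is finding strata (via the last hidden layer) that come close to this in practice.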
|
Cao, Yanbin |
ESEC/FSE '19: "SEntiMoji: An Emoji-Powered ..."
SEntiMoji: An Emoji-Powered Learning Approach for Sentiment Analysis in Software Engineering
Zhenpeng Chen, Yanbin Cao, Xuan Lu, Qiaozhu Mei, and Xuanzhe Liu (Peking University, China; University of Michigan, USA) Sentiment analysis has various application scenarios in software engineering (SE), such as detecting developers' emotions in commit messages and identifying their opinions on Q&A forums. However, commonly used out-of-the-box sentiment analysis tools cannot obtain reliable results on SE tasks, and the misunderstanding of technical jargon has been shown to be the main reason. As a result, researchers have had to utilize labeled SE-related texts to customize sentiment analysis for SE tasks via a variety of algorithms. However, the scarce labeled data can cover only very limited expressions and thus cannot guarantee the analysis quality. To address this problem, we turn to the easily available emoji usage data for help. More specifically, we employ emotional emojis as noisy labels of sentiments and propose a representation learning approach that uses both Tweets and GitHub posts containing emojis to learn sentiment-aware representations for SE-related texts. These emoji-labeled posts can not only supply the technical jargon, but also incorporate more general sentiment patterns shared across domains. These posts, together with the labeled data, are used to learn the final sentiment classifier. Compared to the existing sentiment analysis methods used in SE, the proposed approach achieves significant improvement on representative benchmark datasets. Through further contrast experiments, we find that the Tweets make a key contribution to the power of our approach. This finding informs future research not to pursue domain-specific resources alone, but to transfer knowledge from the open domain through ubiquitous signals such as emojis.
@InProceedings{ESEC/FSE19p841, author = {Zhenpeng Chen and Yanbin Cao and Xuan Lu and Qiaozhu Mei and Xuanzhe Liu}, title = {SEntiMoji: An Emoji-Powered Learning Approach for Sentiment Analysis in Software Engineering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {841--852}, doi = {10.1145/3338906.3338977}, year = {2019}, } Publisher's Version |
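The noisy-labeling step can be illustrated with a toy sketch (my own; the emoji sets are hypothetical, and the real approach learns sentiment-aware representations rather than applying rules):

```python
# Posts containing "positive" emojis are treated as weakly positive
# training examples, and vice versa; posts without emotional emojis stay
# unlabeled. Emojis thus act as free, noisy sentiment labels.
POSITIVE = {"😄", "😍", "👍"}
NEGATIVE = {"😡", "😭", "👎"}

def noisy_label(post):
    """Assign a weak sentiment label based on the emojis a post contains."""
    if any(e in post for e in POSITIVE):
        return "positive"
    if any(e in post for e in NEGATIVE):
        return "negative"
    return None  # unlabeled; real pipelines still use these texts

posts = [
    "this API is great 😄",
    "segfault again 😡",
    "updated the docs",
]
print([noisy_label(p) for p in posts])  # -> ['positive', 'negative', None]
```

The weakly labeled GitHub posts supply SE jargon in sentiment-bearing contexts, which is exactly what scarce hand-labeled SE datasets lack.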
|
Castelluccio, Marco |
ESEC/FSE '19: "Understanding Flaky Tests: ..."
Understanding Flaky Tests: The Developer’s Perspective
Moritz Eck, Fabio Palomba, Marco Castelluccio, and Alberto Bacchelli (University of Zurich, Switzerland; Mozilla, UK) Flaky tests are software tests that exhibit a seemingly random outcome (pass or fail) despite exercising unchanged code. In this work, we examine the perceptions of software developers about the nature, relevance, and challenges of flaky tests. We asked 21 professional developers to classify 200 flaky tests they previously fixed, in terms of the nature and origin of the flakiness as well as the fixing effort. We also examined developers' fixing strategies. Subsequently, we conducted an online survey with 121 developers with a median industrial programming experience of five years. Our research shows that: the flakiness is due to several different causes, four of which have never been reported before, despite being the most costly to fix; flakiness is perceived as significant by the vast majority of developers, regardless of their team's size and project's domain, and it can affect resource allocation, scheduling, and the perceived reliability of the test suite; and the challenges developers report facing mostly concern reproducing the flaky behavior and identifying the cause of the flakiness. Public preprint [http://arxiv.org/abs/1907.01466], data and materials [https://doi.org/10.5281/zenodo.3265785]. @InProceedings{ESEC/FSE19p830, author = {Moritz Eck and Fabio Palomba and Marco Castelluccio and Alberto Bacchelli}, title = {Understanding Flaky Tests: The Developer’s Perspective}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {830--840}, doi = {10.1145/3338906.3338945}, year = {2019}, } Publisher's Version |
|
Caulo, Maria |
ESEC/FSE '19: "A Taxonomy of Metrics for ..."
A Taxonomy of Metrics for Software Fault Prediction
Maria Caulo (University of Basilicata, Italy) In the field of Software Fault Prediction (SFP), researchers exploit software metrics to build predictive models using machine learning and/or statistical techniques. SFP has existed for several decades, and the number of metrics used has increased dramatically. Thus, the need for a taxonomy of metrics for SFP arises, first, to standardize the lexicon used in this field so that communication among researchers is simplified, and second, to organize and systematically classify the metrics used. In this doctoral symposium paper, I present my ongoing work, which aims not only to build a taxonomy that is as comprehensive as possible, but also to provide a global understanding of the metrics for SFP in terms of detailed information: acronym(s), extended name, univocal description, granularity of the fault prediction (e.g., method and class), category, and the research papers in which they were used. @InProceedings{ESEC/FSE19p1144, author = {Maria Caulo}, title = {A Taxonomy of Metrics for Software Fault Prediction}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1144--1147}, doi = {10.1145/3338906.3341462}, year = {2019}, } Publisher's Version |
|
Cetin, H. Alperen |
ESEC/FSE '19: "Identifying the Most Valuable ..."
Identifying the Most Valuable Developers using Artifact Traceability Graphs
H. Alperen Cetin (Bilkent University, Turkey) Finding the most valuable and indispensable developers is a crucial task in software development. We categorize these valuable developers into two categories: connector and maven. A typical connector represents a developer who connects different groups of developers in a large-scale project. Mavens represent the developers who are the sole experts in specific modules of the project. To identify the connectors and mavens, we propose an approach using graph centrality metrics and connections of traceability graphs. We conducted a preliminary study of this approach by using two open source projects: QT 3D Studio and Android. Initial results show that the approach helps to identify the essential developers. @InProceedings{ESEC/FSE19p1196, author = {H. Alperen Cetin}, title = {Identifying the Most Valuable Developers using Artifact Traceability Graphs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1196--1198}, doi = {10.1145/3338906.3342487}, year = {2019}, } Publisher's Version |
|
Chabbi, Milind |
ESEC/FSE '19: "Pinpointing Performance Inefficiencies ..."
Pinpointing Performance Inefficiencies in Java
Pengfei Su, Qingsen Wang, Milind Chabbi, and Xu Liu (College of William and Mary, USA; Scalable Machines Research, USA) Many performance inefficiencies, such as inappropriate choice of algorithms or data structures, developers' inattention to performance, and missed compiler optimizations, show up as wasteful memory operations. Wasteful memory operations are those that produce/consume data to/from memory that could have been avoided. We present JXPerf, a lightweight performance analysis tool for pinpointing wasteful memory operations in Java programs. Traditional bytecode instrumentation for such analysis (1) introduces prohibitive overheads and (2) misses inefficiencies in machine code generation. JXPerf overcomes both of these problems. JXPerf uses hardware performance monitoring units to sample memory locations accessed by a program and uses hardware debug registers to monitor subsequent accesses to the same memory. The result is a lightweight measurement at the machine code level with attribution of inefficiencies to their provenance --- machine and source code within full calling contexts. JXPerf introduces only 7% runtime overhead and 7% memory overhead, making it useful in production. Guided by JXPerf, we optimize several Java applications by improving code generation and choosing superior data structures and algorithms, which yield significant speedups. @InProceedings{ESEC/FSE19p818, author = {Pengfei Su and Qingsen Wang and Milind Chabbi and Xu Liu}, title = {Pinpointing Performance Inefficiencies in Java}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {818--829}, doi = {10.1145/3338906.3338923}, year = {2019}, } Publisher's Version |
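One class of wasteful memory operations JXPerf reports, dead stores, can be illustrated on an abstract access trace (a simplified sketch of the concept; JXPerf itself samples with hardware PMUs and debug registers rather than walking a full trace):

```python
# A "dead store" is a write to a location that is overwritten before it
# is ever read: the first write produced data that nothing consumed.

def find_dead_stores(trace):
    """trace: list of (op, addr); return indices of stores never read."""
    last_store = {}          # addr -> index of most recent unread store
    dead = []
    for i, (op, addr) in enumerate(trace):
        if op == "store":
            if addr in last_store:
                dead.append(last_store[addr])  # overwritten without a load
            last_store[addr] = i
        elif op == "load":
            last_store.pop(addr, None)         # the pending store was used
    return dead

trace = [
    ("store", "x"),  # index 0: dead, overwritten at index 1
    ("store", "x"),  # index 1: live, loaded at index 2
    ("load",  "x"),
    ("store", "y"),
]
print(find_dead_stores(trace))  # -> [0]
```

JXPerf attributes each such pair to machine and source code in full calling context; the detection rule itself is the simple overwritten-before-read pattern above.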
|
Chakraborty, Joymallya |
ESEC/FSE '19: "Predicting Breakdowns in Cloud ..."
Predicting Breakdowns in Cloud Services (with SPIKE)
Jianfeng Chen, Joymallya Chakraborty, Philip Clark, Kevin Haverlock, Snehit Cherian, and Tim Menzies (North Carolina State University, USA; LexisNexis, USA) Maintaining web services is a mission-critical task where any downtime means loss of revenue and reputation (of being a reliable service provider). In the current competitive web services market, such a loss of reputation causes extensive loss of future revenue. To address this issue, we developed SPIKE, a data mining tool which can predict upcoming service breakdowns half an hour into the future. Such predictions let an organization alert and assemble a tiger team to address the problem (e.g., by reconfiguring cloud hardware in order to reduce the likelihood of that breakdown). SPIKE utilizes (a) regression tree learning (with CART); (b) synthetic minority over-sampling (to handle how rare spikes are in our data); (c) hyperparameter optimization (to learn the best settings for our local data); and (d) a technique we call “topology sampling”, where training vectors are built from extensive details of an individual node plus summary details on all its neighbors. In the experiments reported here, SPIKE predicted service spikes 30 minutes into the future with recall and precision of 75% and above. SPIKE also performed better than other widely-used learning methods (neural nets, random forests, logistic regression). @InProceedings{ESEC/FSE19p916, author = {Jianfeng Chen and Joymallya Chakraborty and Philip Clark and Kevin Haverlock and Snehit Cherian and Tim Menzies}, title = {Predicting Breakdowns in Cloud Services (with SPIKE)}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {916--924}, doi = {10.1145/3338906.3340450}, year = {2019}, } Publisher's Version |
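The "topology sampling" step can be sketched as follows; the metric names are hypothetical, and this is my reconstruction of the idea, not SPIKE's code:

```python
# Each training vector combines detailed metrics of one node with summary
# statistics (mean and max) over that node's neighbors, so the learner
# sees both local state and surrounding load.

def topology_vector(node, neighbors):
    """node: dict of metrics; neighbors: list of such dicts."""
    vec = [node["cpu"], node["mem"], node["net"]]       # the node itself
    for key in ("cpu", "mem", "net"):
        vals = [n[key] for n in neighbors] or [0.0]
        vec += [sum(vals) / len(vals), max(vals)]        # neighbor summaries
    return vec

node = {"cpu": 0.9, "mem": 0.5, "net": 0.2}
neighbors = [{"cpu": 0.4, "mem": 0.6, "net": 0.1},
             {"cpu": 0.8, "mem": 0.2, "net": 0.3}]
print(topology_vector(node, neighbors))
```

Vectors like these would then feed the CART learner (after over-sampling the rare spike cases), as described in the abstract.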
|
Chandra, Satish |
ESEC/FSE '19: "When Deep Learning Met Code ..."
When Deep Learning Met Code Search
Jose Cambronero, Hongyu Li, Seohyun Kim, Koushik Sen, and Satish Chandra (Massachusetts Institute of Technology, USA; Facebook, USA; University of California at Berkeley, USA) There have been multiple recent proposals on using deep neural networks for code search using natural language. Common across these proposals is the idea of embedding code and natural language queries into real vectors and then using vector distance to approximate semantic correlation between code and the query. Multiple approaches exist for learning these embeddings, including unsupervised techniques, which rely only on a corpus of code examples, and supervised techniques, which use an aligned corpus of paired code and natural language descriptions. The goal of this supervision is to produce embeddings that are more similar for a query and the corresponding desired code snippet. Clearly, there are choices in whether to use supervised techniques at all, and if one does, what sort of network and training to use for supervision. This paper is the first to evaluate these choices systematically. To this end, we assembled implementations of state-of-the-art techniques to run on a common platform with shared training and evaluation corpora. To explore the design space in network complexity, we also introduced a new design point that is a minimal supervision extension to an existing unsupervised technique. Our evaluation shows that: 1. adding supervision to an existing unsupervised technique can improve performance, though not necessarily by much; 2. simple networks for supervision can be more effective than more sophisticated sequence-based networks for code search; 3. while it is common to use docstrings to carry out supervision, there is a sizeable gap between the effectiveness of docstrings and a more query-appropriate supervision corpus.
@InProceedings{ESEC/FSE19p964, author = {Jose Cambronero and Hongyu Li and Seohyun Kim and Koushik Sen and Satish Chandra}, title = {When Deep Learning Met Code Search}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {964--974}, doi = {10.1145/3338906.3340458}, year = {2019}, } Publisher's Version |
|
Chan, W. K. |
ESEC/FSE '19: "AggrePlay: Efficient Record ..."
AggrePlay: Efficient Record and Replay of Multi-threaded Programs
Ernest Pobee and W. K. Chan (City University of Hong Kong, China) Deterministic replay presents challenges and often incurs high memory and runtime overheads. Previous techniques often reproduce program outputs deterministically only after several replay iterations, or may produce a non-deterministic sequence of outputs to external sources. In this paper, we propose AggrePlay, a deterministic replay technique based on recording read-write interleavings, leveraging thread-local determinism and summarized read values. During the record phase, AggrePlay records a read-count vector clock for each thread on each memory location. In the replay phase, each thread checks the logged vector clock against the current read count before a write event. We present an experiment and analyze the results using the Splash2x benchmark suite as well as two real-world applications. The experimental results show that, on average, AggrePlay achieves a better reduction in compressed log size and a 56% lower runtime slowdown during the record phase, as well as a 41.58% higher success probability in the replay phase, compared with existing work. @InProceedings{ESEC/FSE19p567, author = {Ernest Pobee and W. K. Chan}, title = {AggrePlay: Efficient Record and Replay of Multi-threaded Programs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {567--577}, doi = {10.1145/3338906.3338959}, year = {2019}, } Publisher's Version |
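The read-count vector clock idea can be sketched as follows (an illustration under simplifying assumptions, not the AggrePlay implementation):

```python
# Record phase: before each write to a location, log how many reads each
# thread has performed on it. Replay phase: a thread may only perform that
# write once every thread has reached its logged read count, which pins
# down the read-write interleaving.

class RecordedLocation:
    def __init__(self, num_threads):
        self.reads = [0] * num_threads   # current read count per thread
        self.write_log = []              # read-count vector logged per write

    def record_read(self, tid):
        self.reads[tid] += 1

    def record_write(self):
        self.write_log.append(list(self.reads))  # snapshot before the write

def replay_can_write(current_reads, logged_reads):
    """Replay rule: delay the write until every logged read has happened."""
    return all(cur >= logged for cur, logged in zip(current_reads, logged_reads))

loc = RecordedLocation(num_threads=2)
loc.record_read(0)
loc.record_read(1)
loc.record_write()                                 # logged vector: [1, 1]
print(replay_can_write([1, 0], loc.write_log[0]))  # thread 1's read pending
print(replay_can_write([1, 1], loc.write_log[0]))  # all logged reads done
```

Logging only read counts per location, rather than a total order of all accesses, is what keeps the record-phase log small in the approach described above.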
|
Chaparro, Oscar |
ESEC/FSE '19: "Assessing the Quality of the ..."
Assessing the Quality of the Steps to Reproduce in Bug Reports
Oscar Chaparro, Carlos Bernal-Cárdenas, Jing Lu, Kevin Moran, Andrian Marcus, Massimiliano Di Penta, Denys Poshyvanyk, and Vincent Ng (College of William and Mary, USA; University of Texas at Dallas, USA; University of Sannio, Italy) A major problem with user-written bug reports, as indicated by developers and documented by researchers, is the low quality of the reported steps to reproduce the bugs. Low-quality steps to reproduce lead to excessive manual effort spent on bug triage and resolution. This paper proposes Euler, an approach that automatically identifies and assesses the quality of the steps to reproduce in a bug report, providing feedback to the reporters, which they can use to improve the bug report. The feedback provided by Euler was assessed by external evaluators, and the results indicate that Euler correctly identified 98% of the existing steps to reproduce and 58% of the missing ones, while 73% of its quality annotations are correct. @InProceedings{ESEC/FSE19p86, author = {Oscar Chaparro and Carlos Bernal-Cárdenas and Jing Lu and Kevin Moran and Andrian Marcus and Massimiliano Di Penta and Denys Poshyvanyk and Vincent Ng}, title = {Assessing the Quality of the Steps to Reproduce in Bug Reports}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {86--96}, doi = {10.1145/3338906.3338947}, year = {2019}, } Publisher's Version Info |
|
Cha, Sooyoung |
ESEC/FSE '19: "Concolic Testing with Adaptively ..."
Concolic Testing with Adaptively Changing Search Heuristics
Sooyoung Cha and Hakjoo Oh (Korea University, South Korea) We present Chameleon, a new approach for adaptively changing search heuristics during concolic testing. Search heuristics play a central role in concolic testing as they mitigate the path-explosion problem by focusing on particular program paths that are likely to increase code coverage as quickly as possible. A variety of techniques for search heuristics have been proposed over the past decade. However, existing approaches are limited in that they use the same search heuristics throughout the entire testing process, which is inherently insufficient to exercise various execution paths. Chameleon overcomes this limitation by adapting search heuristics on the fly via an algorithm that learns new search heuristics based on the knowledge accumulated during concolic testing. Experimental results show that the transition from the traditional non-adaptive approaches to ours greatly improves the practicality of concolic testing in terms of both code coverage and bug-finding. @InProceedings{ESEC/FSE19p235, author = {Sooyoung Cha and Hakjoo Oh}, title = {Concolic Testing with Adaptively Changing Search Heuristics}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {235--245}, doi = {10.1145/3338906.3338964}, year = {2019}, } Publisher's Version |
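The adaptive-switching idea can be sketched as a bandit-style choice among heuristics; the epsilon-greedy rule below is my stand-in for Chameleon's actual learning algorithm, and the heuristic names are hypothetical:

```python
# Treat each search heuristic as a bandit arm: mostly exploit the one with
# the best average coverage gain so far, but occasionally explore others,
# so the fuzzing run can switch heuristics as their payoffs change.
import random

def choose_heuristic(stats, epsilon, rng):
    """stats: {heuristic: list of past coverage gains}; epsilon-greedy pick."""
    if rng.random() < epsilon:
        return rng.choice(list(stats))                                  # explore
    return max(stats, key=lambda h: sum(stats[h]) / len(stats[h]))      # exploit

rng = random.Random(1)
stats = {"cfg-directed": [12, 9], "random-path": [3, 4], "dfs": [1, 0]}
picks = [choose_heuristic(stats, epsilon=0.1, rng=rng) for _ in range(100)]
print(picks.count("cfg-directed") > picks.count("dfs"))
```

The contrast with prior work is visible even in this toy: a fixed heuristic corresponds to always returning the same key, while the adaptive rule keeps re-evaluating which heuristic currently pays off.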
|
Chauvel, Franck |
ESEC/FSE '19: "Using Microservices for Non-intrusive ..."
Using Microservices for Non-intrusive Customization of Multi-tenant SaaS
Phu H. Nguyen, Hui Song, Franck Chauvel, Roy Muller, Seref Boyar, and Erik Levin (SINTEF, Norway; Visma, Norway) Enterprise software vendors often need to support their customers in customizing the enterprise software products deployed on the customers' premises. But when software vendors migrate their products to cloud-based Software-as-a-Service (SaaS), the deep customization that used to be done on-premises is not applicable in the cloud-based multi-tenant context, in which all tenants share the same SaaS. Enabling tenant-specific customization in cloud-based multi-tenant SaaS requires a novel approach. This paper proposes a Microservices-based non-intrusive Customization framework for multi-tenant Cloud-based SaaS, called MiSC-Cloud. Non-intrusive deep customization means that the customization microservices of each tenant are isolated from the main software product and from the customization microservices of other tenants. MiSC-Cloud makes deep customization possible via authorized API calls through API gateways to the APIs of the customization microservices and the APIs of the main software product. We have implemented a proof-of-concept of our approach that enables non-intrusive deep customization of eShopOnContainers, Microsoft's open-source cloud-native reference application. Based on this work, we provide lessons learned and directions for future work. @InProceedings{ESEC/FSE19p905, author = {Phu H. Nguyen and Hui Song and Franck Chauvel and Roy Muller and Seref Boyar and Erik Levin}, title = {Using Microservices for Non-intrusive Customization of Multi-tenant SaaS}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {905--915}, doi = {10.1145/3338906.3340452}, year = {2019}, } Publisher's Version |
|
Chechik, Marsha |
ESEC/FSE '19: "Lifting Datalog-Based Analyses ..."
Lifting Datalog-Based Analyses to Software Product Lines
Ramy Shahin, Marsha Chechik, and Rick Salay (University of Toronto, Canada) Applying program analyses to Software Product Lines (SPLs) has been a fundamental research problem at the intersection of Product Line Engineering and software analysis. Different attempts have been made to “lift” particular product-level analyses to run on the entire product line. In this paper, we tackle the class of Datalog-based analyses (e.g., pointer and taint analyses), study the theoretical aspects of lifting Datalog inference, and implement a lifted inference algorithm inside the Soufflé Datalog engine. We evaluate our implementation on a set of benchmark product lines. We show significant savings in processing time and fact database size (billions of times faster on one of the benchmarks) compared to brute-force analysis of each product individually. @InProceedings{ESEC/FSE19p39, author = {Ramy Shahin and Marsha Chechik and Rick Salay}, title = {Lifting Datalog-Based Analyses to Software Product Lines}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {39--49}, doi = {10.1145/3338906.3338928}, year = {2019}, } Publisher's Version |
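The lifting idea can be sketched on a toy transitive-closure analysis: each fact carries a presence condition, and derived facts conjoin the conditions of their premises. This is my illustration (it keeps only the first derivation per fact, whereas full lifting disjoins alternative derivations), not the paper's Soufflé-based implementation:

```python
# Lifted Datalog facts: each edge fact is annotated with the set of
# features required for it to exist. Derived reachability facts require
# the union (conjunction) of the features of the facts they came from,
# so one run covers every product of the product line.

def lifted_transitive_closure(edges):
    """edges: {(a, b): frozenset of required features}; derive reachability."""
    reach = dict(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), pc1 in list(reach.items()):
            for (c, d), pc2 in list(reach.items()):
                if b == c and (a, d) not in reach:
                    reach[(a, d)] = pc1 | pc2  # both presence conditions must hold
                    changed = True
    return reach

edges = {("main", "log"): frozenset({"Logging"}),
         ("log", "net"): frozenset({"Network"})}
print(lifted_transitive_closure(edges)[("main", "net")])
```

The derived fact `("main", "net")` holds only in products with both Logging and Network enabled; a brute-force approach would instead rerun the analysis once per product configuration.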
|
Chekam, Thierry Titcheu |
ESEC/FSE '19: "Mart: A Mutant Generation ..."
Mart: A Mutant Generation Tool for LLVM
Thierry Titcheu Chekam, Mike Papadakis, and Yves Le Traon (University of Luxembourg, Luxembourg) Program mutation makes small syntactic alterations to a program's code in order to artificially create faulty programs (mutants). Mutant creation (generation) tools are often characterized by their mutation operators and the way they create and represent the mutants. This paper presents Mart, a mutant generation tool for LLVM bitcode that supports the fine-grained definition of mutation operators (as matching-rule/replacing-pattern pairs; 816 pairs are defined by default) and the restriction of the code parts to mutate. New operators are implemented in Mart by writing their matching rules and replacing patterns. Mart also implements in-memory Trivial Compiler Equivalence to eliminate equivalent and duplicate mutants during mutant generation. Mart generates mutant code as separate mutant files, a meta-mutant file, and weak-mutation and mutant-coverage instrumented files. Mart is publicly available (https://github.com/thierry-tct/mart). Mart has been applied to generate mutants for several research experiments and has generated more than 4,000,000 mutants. @InProceedings{ESEC/FSE19p1080, author = {Thierry Titcheu Chekam and Mike Papadakis and Yves Le Traon}, title = {Mart: A Mutant Generation Tool for LLVM}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1080--1084}, doi = {10.1145/3338906.3341180}, year = {2019}, } Publisher's Version Video Info |
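The matching-rule/replacing-pattern view of mutation operators can be sketched at source level (Mart itself operates on LLVM bitcode; the two operators below are illustrative):

```python
# Each mutation operator is a (matching rule, replacing pattern) pair;
# the generator emits one mutant per match site, each differing from the
# original program in exactly one place.
import re

OPERATORS = [
    (r"\+", "-"),   # arithmetic: a + b  ->  a - b
    (r"<=", "<"),   # relational: a <= b ->  a < b
]

def generate_mutants(code):
    mutants = []
    for pattern, replacement in OPERATORS:
        for m in re.finditer(pattern, code):
            mutants.append(code[:m.start()] + replacement + code[m.end():])
    return mutants

print(generate_mutants("return a + b <= c"))
# -> ['return a - b <= c', 'return a + b < c']
```

Defining new operators then amounts to adding pairs to the table, which mirrors how Mart's fine-grained operator definitions work at the bitcode level.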
|
Chen, Bihuan |
ESEC/FSE '19: "A Large-Scale Empirical Study ..."
A Large-Scale Empirical Study of Compiler Errors in Continuous Integration
Chen Zhang, Bihuan Chen, Linlin Chen, Xin Peng, and Wenyun Zhao (Fudan University, China) Continuous Integration (CI) is a widely-used software development practice to reduce risks. CI builds often break, and a large amount of effort is put into troubleshooting broken builds. Although compiler errors have been recognized as one of the most frequent types of build failures, little is known about the common types, fix efforts, and fix patterns of compiler errors that occur in CI builds of open-source projects. To fill this gap, we present a large-scale empirical study on 6,854,271 CI builds from 3,799 open-source Java projects hosted on GitHub. Using the build data, we measured the frequency of broken builds caused by compiler errors, investigated the ten most common compiler error types, and reported their fix time. We manually analyzed 325 broken builds to summarize fix patterns of the ten most common compiler error types. Our findings help to characterize and understand compiler errors during CI and provide practical implications to developers, tool builders and researchers. @InProceedings{ESEC/FSE19p176, author = {Chen Zhang and Bihuan Chen and Linlin Chen and Xin Peng and Wenyun Zhao}, title = {A Large-Scale Empirical Study of Compiler Errors in Continuous Integration}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {176--187}, doi = {10.1145/3338906.3338917}, year = {2019}, } Publisher's Version |
|
Cheng, Qian |
ESEC/FSE '19: "Robust Log-Based Anomaly Detection ..."
Robust Log-Based Anomaly Detection on Unstable Log Data
Xu Zhang, Yong Xu, Qingwei Lin, Bo Qiao, Hongyu Zhang, Yingnong Dang, Chunyu Xie, Xinsheng Yang, Qian Cheng, Ze Li, Junjie Chen, Xiaoting He, Randolph Yao, Jian-Guang Lou, Murali Chintalapati, Furao Shen, and Dongmei Zhang (Microsoft Research, China; Nanjing University, China; University of Newcastle, Australia; Microsoft, USA; Tianjin University, China) Logs are widely used by large and complex software-intensive systems for troubleshooting. There have been many studies on log-based anomaly detection. To detect anomalies, the existing methods mainly construct a detection model using log event data extracted from historical logs. However, we find that the existing methods do not work well in practice. These methods make the closed-world assumption that the log data is stable over time and the set of distinct log events is known. However, our empirical study shows that, in practice, log data often contains previously unseen log events or log sequences. The instability of log data comes from two sources: 1) the evolution of logging statements, and 2) processing noise in the log data. In this paper, we propose a new log-based anomaly detection approach called LogRobust. LogRobust extracts the semantic information of log events and represents them as semantic vectors. It then detects anomalies by utilizing an attention-based Bi-LSTM model, which has the ability to capture contextual information in log sequences and automatically learn the importance of different log events. In this way, LogRobust is able to identify and handle unstable log events and sequences. We have evaluated LogRobust using logs collected from the Hadoop system and an actual online service system at Microsoft. The experimental results show that the proposed approach can well address the problem of log instability and achieve accurate and robust results on real-world, ever-changing log data.
@InProceedings{ESEC/FSE19p807, author = {Xu Zhang and Yong Xu and Qingwei Lin and Bo Qiao and Hongyu Zhang and Yingnong Dang and Chunyu Xie and Xinsheng Yang and Qian Cheng and Ze Li and Junjie Chen and Xiaoting He and Randolph Yao and Jian-Guang Lou and Murali Chintalapati and Furao Shen and Dongmei Zhang}, title = {Robust Log-Based Anomaly Detection on Unstable Log Data}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {807--817}, doi = {10.1145/3338906.3338931}, year = {2019}, } Publisher's Version |
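The attention step that LogRobust applies over log-event semantic vectors can be illustrated in miniature. The following is a pure-Python sketch under simplifying assumptions (toy two-dimensional vectors, a plain dot-product scoring function), not the authors' implementation, which uses a learned attention-based Bi-LSTM:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(event_vectors, query):
    # Score each log-event vector against a query vector (here a plain
    # dot product), then combine the sequence into one representation
    # weighted by the softmax of those scores, so that "important"
    # events contribute more to the sequence representation.
    scores = [sum(q * v for q, v in zip(query, vec)) for vec in event_vectors]
    weights = softmax(scores)
    dim = len(event_vectors[0])
    pooled = [sum(w * vec[i] for w, vec in zip(weights, event_vectors))
              for i in range(dim)]
    return pooled, weights

# Toy 3-event log sequence with 2-dimensional "semantic vectors".
events = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
pooled, weights = attention_pool(events, query=[1.0, 0.5])
```

Because the third toy event scores highest against the query, it receives the largest attention weight; in LogRobust those weights are learned rather than fixed.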
|
Chen, Hongxu |
ESEC/FSE '19: "Cerebro: Context-Aware Adaptive ..."
Cerebro: Context-Aware Adaptive Fuzzing for Effective Vulnerability Detection
Yuekang Li, Yinxing Xue, Hongxu Chen, Xiuheng Wu, Cen Zhang, Xiaofei Xie, Haijun Wang, and Yang Liu (University of Science and Technology of China, China; Nanyang Technological University, Singapore; Zhejiang Sci-Tech University, China) Existing greybox fuzzers mainly utilize program coverage as the goal to guide the fuzzing process. To maximize their outputs, coverage-based greybox fuzzers need to evaluate the quality of seeds properly, which involves making two decisions: 1) which is the most promising seed to fuzz next (seed prioritization), and 2) how much effort should be spent on the current seed (power scheduling). In this paper, we present our fuzzer, Cerebro, to address the above challenges. For the seed prioritization problem, we propose an online multi-objective based algorithm to balance various metrics such as code complexity, coverage, execution time, etc. To address the power scheduling problem, we introduce the concept of input potential to measure the complexity of uncovered code and propose a cost-effective algorithm to update it dynamically. Unlike previous approaches where the fuzzer evaluates an input solely based on the execution traces that it has covered, Cerebro is able to foresee the benefits of fuzzing the input by adaptively evaluating its input potential. We perform a thorough evaluation for Cerebro on 8 different real-world programs. The experiments show that Cerebro can find more vulnerabilities and achieve better coverage than state-of-the-art fuzzers such as AFL and AFLFast. @InProceedings{ESEC/FSE19p533, author = {Yuekang Li and Yinxing Xue and Hongxu Chen and Xiuheng Wu and Cen Zhang and Xiaofei Xie and Haijun Wang and Yang Liu}, title = {Cerebro: Context-Aware Adaptive Fuzzing for Effective Vulnerability Detection}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {533--544}, doi = {10.1145/3338906.3338975}, year = {2019}, } Publisher's Version |
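Seed prioritization of the kind the Cerebro abstract describes can be sketched with a simple scalarized score. The metric names (`new_coverage`, `input_potential`, `exec_time`) and the weights below are illustrative assumptions; Cerebro's actual online multi-objective algorithm is more sophisticated than this weighted sum:

```python
def seed_score(seed, weights=(1.0, 1.0, -1.0)):
    # Higher new coverage and input potential are good; longer
    # execution time is penalized. Weights are illustrative only.
    w_cov, w_pot, w_time = weights
    return (w_cov * seed["new_coverage"]
            + w_pot * seed["input_potential"]
            + w_time * seed["exec_time"])

def pick_next_seed(queue):
    # Seed prioritization: fuzz the highest-scoring seed next.
    return max(queue, key=seed_score)

queue = [
    {"id": "a", "new_coverage": 12, "input_potential": 3.0, "exec_time": 0.2},
    {"id": "b", "new_coverage": 30, "input_potential": 1.0, "exec_time": 2.5},
    {"id": "c", "new_coverage": 5,  "input_potential": 9.0, "exec_time": 0.1},
]
best = pick_next_seed(queue)  # seed "b": high coverage outweighs its cost
```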
|
Chen, Jianfeng |
ESEC/FSE '19: "Predicting Breakdowns in Cloud ..."
Predicting Breakdowns in Cloud Services (with SPIKE)
Jianfeng Chen, Joymallya Chakraborty, Philip Clark, Kevin Haverlock, Snehit Cherian, and Tim Menzies (North Carolina State University, USA; LexisNexis, USA) Maintaining web-services is a mission-critical task where any downtime means loss of revenue and reputation (of being a reliable service provider). In the current competitive web services market, such a loss of reputation causes extensive loss of future revenue. To address this issue, we developed SPIKE, a data mining tool which can predict upcoming service breakdowns, half an hour into the future. Such predictions let an organization alert and assemble the tiger team to address the problem (e.g. by reconfiguring cloud hardware in order to reduce the likelihood of that breakdown). SPIKE utilizes (a) regression tree learning (with CART); (b) synthetic minority over-sampling (to handle how rare spikes are in our data); (c) hyperparameter optimization (to learn best settings for our local data) and (d) a technique we called “topology sampling” where training vectors are built from extensive details of an individual node plus summary details on all their neighbors. In the experiments reported here, SPIKE predicted service spikes 30 minutes into the future with recall and precision of 75% and above. Also, SPIKE performed relatively better than other widely-used learning methods (neural nets, random forests, logistic regression). @InProceedings{ESEC/FSE19p916, author = {Jianfeng Chen and Joymallya Chakraborty and Philip Clark and Kevin Haverlock and Snehit Cherian and Tim Menzies}, title = {Predicting Breakdowns in Cloud Services (with SPIKE)}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {916--924}, doi = {10.1145/3338906.3340450}, year = {2019}, } Publisher's Version |
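The "topology sampling" idea in the SPIKE abstract, building a training vector from a node's own details plus summaries of its neighbors, can be sketched as follows. The metric names (`cpu`, `mem`, `disk_io`) and the mean/max summaries are assumptions for illustration, not the paper's actual feature set:

```python
import statistics

def topology_vector(node, neighbors):
    # Full detail for the node itself, plus summary statistics
    # (mean and max) over each metric across its neighbors.
    vec = [node["cpu"], node["mem"], node["disk_io"]]
    for metric in ("cpu", "mem", "disk_io"):
        values = [n[metric] for n in neighbors]
        vec.append(statistics.mean(values))
        vec.append(max(values))
    return vec

node = {"cpu": 0.9, "mem": 0.7, "disk_io": 120.0}
neighbors = [{"cpu": 0.2, "mem": 0.3, "disk_io": 10.0},
             {"cpu": 0.4, "mem": 0.5, "disk_io": 30.0}]
features = topology_vector(node, neighbors)  # 3 node + 6 summary features
```

Vectors like this would then feed the CART learner after over-sampling the rare spike cases.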
|
Chen, Junjie |
ESEC/FSE '19: "Robust Log-Based Anomaly Detection ..."
Robust Log-Based Anomaly Detection on Unstable Log Data
Xu Zhang, Yong Xu, Qingwei Lin, Bo Qiao, Hongyu Zhang, Yingnong Dang, Chunyu Xie, Xinsheng Yang, Qian Cheng, Ze Li, Junjie Chen, Xiaoting He, Randolph Yao, Jian-Guang Lou, Murali Chintalapati, Furao Shen, and Dongmei Zhang (Microsoft Research, China; Nanjing University, China; University of Newcastle, Australia; Microsoft, USA; Tianjin University, China) Logs are widely used by large and complex software-intensive systems for troubleshooting. There have been many studies on log-based anomaly detection. To detect the anomalies, the existing methods mainly construct a detection model using log event data extracted from historical logs. However, we find that the existing methods do not work well in practice. These methods make the closed-world assumption, which assumes that the log data is stable over time and the set of distinct log events is known. However, our empirical study shows that in practice, log data often contains previously unseen log events or log sequences. The instability of log data comes from two sources: 1) the evolution of logging statements, and 2) the processing noise in log data. In this paper, we propose a new log-based anomaly detection approach, called LogRobust. LogRobust extracts semantic information of log events and represents them as semantic vectors. It then detects anomalies by utilizing an attention-based Bi-LSTM model, which has the ability to capture the contextual information in the log sequences and automatically learn the importance of different log events. In this way, LogRobust is able to identify and handle unstable log events and sequences. We have evaluated LogRobust using logs collected from the Hadoop system and an actual online service system of Microsoft. The experimental results show that the proposed approach can well address the problem of log instability and achieve accurate and robust results on real-world, ever-changing log data.
@InProceedings{ESEC/FSE19p807, author = {Xu Zhang and Yong Xu and Qingwei Lin and Bo Qiao and Hongyu Zhang and Yingnong Dang and Chunyu Xie and Xinsheng Yang and Qian Cheng and Ze Li and Junjie Chen and Xiaoting He and Randolph Yao and Jian-Guang Lou and Murali Chintalapati and Furao Shen and Dongmei Zhang}, title = {Robust Log-Based Anomaly Detection on Unstable Log Data}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {807--817}, doi = {10.1145/3338906.3338931}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Compiler Bug Isolation via ..." Compiler Bug Isolation via Effective Witness Test Program Generation Junjie Chen, Jiaqi Han, Peiyi Sun, Lingming Zhang, Dan Hao, and Lu Zhang (Tianjin University, China; Peking University, China; University of Texas at Dallas, USA) Compiler bugs are extremely harmful, but are notoriously difficult to debug because compiler bugs usually produce little debugging information. Given a bug-triggering test program for a compiler, hundreds of compiler files are usually involved during compilation, and thus are suspect buggy files. Although there are many automated bug isolation techniques, they are not applicable to compilers due to the scalability or effectiveness problem. To solve this problem, in this paper, we transform the compiler bug isolation problem into a search problem, i.e., searching for a set of effective witness test programs that are able to eliminate innocent compiler files from suspects. Based on this intuition, we propose an automated compiler bug isolation technique, DiWi, which (1) proposes a heuristic-based search strategy to generate such a set of effective witness test programs via applying our designed witnessing mutation rules to the given failing test program, and (2) compares their coverage to isolate bugs following the practice of spectrum-based bug isolation. 
The experimental results on 90 real bugs from popular GCC and LLVM compilers show that DiWi effectively isolates 66.67%/78.89% of bugs within Top-10/Top-20 compiler files, significantly outperforming state-of-the-art bug isolation techniques. @InProceedings{ESEC/FSE19p223, author = {Junjie Chen and Jiaqi Han and Peiyi Sun and Lingming Zhang and Dan Hao and Lu Zhang}, title = {Compiler Bug Isolation via Effective Witness Test Program Generation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {223--234}, doi = {10.1145/3338906.3338957}, year = {2019}, } Publisher's Version |
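The spectrum-based step of DiWi, comparing the coverage of the failing program against passing witness programs, can be illustrated with the Ochiai coefficient, a common suspiciousness metric in spectrum-based fault localization. The metric choice and the compiler file names below are illustrative assumptions, not details from the paper:

```python
import math

def rank_suspicious_files(failing_cov, witness_covs):
    # failing_cov: set of compiler files covered by the one
    # bug-triggering test program; witness_covs: coverage sets from
    # passing witness programs. Files covered by the failing run but
    # by few passing runs score highest (Ochiai coefficient with a
    # single failing execution).
    scores = {}
    for f in failing_cov:
        passed = sum(1 for cov in witness_covs if f in cov)
        scores[f] = 1.0 / math.sqrt(1 * (1 + passed))
    return sorted(scores.items(), key=lambda kv: -kv[1])

failing = {"fold-const.c", "tree-ssa.c", "parser.c"}      # hypothetical files
witnesses = [{"parser.c", "tree-ssa.c"}, {"parser.c"}]     # passing coverage
ranking = rank_suspicious_files(failing, witnesses)
```

Here the file covered only by the failing program ranks first; generating witness programs that cover many innocent files is exactly what sharpens such a ranking.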
|
Chen, Linlin |
ESEC/FSE '19: "A Large-Scale Empirical Study ..."
A Large-Scale Empirical Study of Compiler Errors in Continuous Integration
Chen Zhang, Bihuan Chen, Linlin Chen, Xin Peng, and Wenyun Zhao (Fudan University, China) Continuous Integration (CI) is a widely-used software development practice to reduce risks. CI builds often break, and a large amount of effort is put into troubleshooting broken builds. Although compiler errors have been recognized as one of the most frequent types of build failures, little is known about the common types, fix efforts and fix patterns of compiler errors that occur in CI builds of open-source projects. To fill such a gap, we present a large-scale empirical study on 6,854,271 CI builds from 3,799 open-source Java projects hosted on GitHub. Using the build data, we measured the frequency of broken builds caused by compiler errors, investigated the ten most common compiler error types, and reported their fix time. We manually analyzed 325 broken builds to summarize fix patterns of the ten most common compiler error types. Our findings help to characterize and understand compiler errors during CI and provide practical implications to developers, tool builders and researchers. @InProceedings{ESEC/FSE19p176, author = {Chen Zhang and Bihuan Chen and Linlin Chen and Xin Peng and Wenyun Zhao}, title = {A Large-Scale Empirical Study of Compiler Errors in Continuous Integration}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {176--187}, doi = {10.1145/3338906.3338917}, year = {2019}, } Publisher's Version |
|
Chen, Yanju |
ESEC/FSE '19: "Maximal Multi-layer Specification ..."
Maximal Multi-layer Specification Synthesis
Yanju Chen, Ruben Martins, and Yu Feng (University of California at Santa Barbara, USA; Carnegie Mellon University, USA) There has been a significant interest in applying programming-by-example to automate repetitive and tedious tasks. However, due to the incomplete nature of input-output examples, a synthesizer may generate programs that pass the examples but do not match the user intent. In this paper, we propose MARS, a novel synthesis framework that takes as input a multi-layer specification composed by input-output examples, textual description, and partial code snippets that capture the user intent. To accurately capture the user intent from the noisy and ambiguous description, we propose a hybrid model that combines the power of an LSTM-based sequence-to-sequence model with the apriori algorithm for mining association rules through unsupervised learning. We reduce the problem of solving a multi-layer specification synthesis to a Max-SMT problem, where hard constraints encode well-typed concrete programs and soft constraints encode the user intent learned by the hybrid model. We instantiate our hybrid model to the data wrangling domain and compare its performance against Morpheus, a state-of-the-art synthesizer for data wrangling tasks. Our experiments demonstrate that our approach outperforms MORPHEUS in terms of running time and solved benchmarks. For challenging benchmarks, our approach can suggest candidates with rankings that are an order of magnitude better than MORPHEUS which leads to running times that are 15x faster than MORPHEUS. @InProceedings{ESEC/FSE19p602, author = {Yanju Chen and Ruben Martins and Yu Feng}, title = {Maximal Multi-layer Specification Synthesis}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {602--612}, doi = {10.1145/3338906.3338951}, year = {2019}, } Publisher's Version |
|
Chen, Yaohui |
ESEC/FSE '19: "FUDGE: Fuzz Driver Generation ..."
FUDGE: Fuzz Driver Generation at Scale
Domagoj Babić, Stefan Bucur, Yaohui Chen, Franjo Ivančić, Tim King, Markus Kusano, Caroline Lemieux, László Szekeres, and Wei Wang (Google, USA; Northeastern University, USA; University of California at Berkeley, USA) At Google we have found tens of thousands of security and robustness bugs by fuzzing C and C++ libraries. To fuzz a library, a fuzzer requires a fuzz driver—which exercises some library code—to which it can pass inputs. Unfortunately, writing fuzz drivers remains a primarily manual exercise, a major hindrance to the widespread adoption of fuzzing. In this paper, we address this major hindrance by introducing the Fudge system for automated fuzz driver generation. Fudge automatically generates fuzz driver candidates for libraries based on existing client code. We have used Fudge to generate thousands of new drivers for a wide variety of libraries. Each generated driver includes a synthesized C/C++ program and a corresponding build script, and is automatically analyzed for quality. Developers have integrated over 200 of these generated drivers into continuous fuzzing services and have committed to address reported security bugs. Further, several of these fuzz drivers have been upstreamed to open source projects and integrated into the OSS-Fuzz fuzzing infrastructure. Running these fuzz drivers has resulted in over 150 bug fixes, including the elimination of numerous exploitable security vulnerabilities. @InProceedings{ESEC/FSE19p975, author = {Domagoj Babić and Stefan Bucur and Yaohui Chen and Franjo Ivančić and Tim King and Markus Kusano and Caroline Lemieux and László Szekeres and Wei Wang}, title = {FUDGE: Fuzz Driver Generation at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {975--985}, doi = {10.1145/3338906.3340456}, year = {2019}, } Publisher's Version |
|
Chen, Zhenpeng |
ESEC/FSE '19: "SEntiMoji: An Emoji-Powered ..."
SEntiMoji: An Emoji-Powered Learning Approach for Sentiment Analysis in Software Engineering
Zhenpeng Chen, Yanbin Cao, Xuan Lu, Qiaozhu Mei, and Xuanzhe Liu (Peking University, China; University of Michigan, USA) Sentiment analysis has various application scenarios in software engineering (SE), such as detecting developers' emotions in commit messages and identifying their opinions on Q&A forums. However, commonly used out-of-the-box sentiment analysis tools cannot obtain reliable results on SE tasks and the misunderstanding of technical jargon is demonstrated to be the main reason. Then, researchers have to utilize labeled SE-related texts to customize sentiment analysis for SE tasks via a variety of algorithms. However, the scarce labeled data can cover only very limited expressions and thus cannot guarantee the analysis quality. To address such a problem, we turn to the easily available emoji usage data for help. More specifically, we employ emotional emojis as noisy labels of sentiments and propose a representation learning approach that uses both Tweets and GitHub posts containing emojis to learn sentiment-aware representations for SE-related texts. These emoji-labeled posts can not only supply the technical jargon, but also incorporate more general sentiment patterns shared across domains. They as well as labeled data are used to learn the final sentiment classifier. Compared to the existing sentiment analysis methods used in SE, the proposed approach can achieve significant improvement on representative benchmark datasets. By further contrast experiments, we find that the Tweets make a key contribution to the power of our approach. This finding informs future research not to unilaterally pursue the domain-specific resource, but try to transform knowledge from the open domain through ubiquitous signals such as emojis. 
@InProceedings{ESEC/FSE19p841, author = {Zhenpeng Chen and Yanbin Cao and Xuan Lu and Qiaozhu Mei and Xuanzhe Liu}, title = {SEntiMoji: An Emoji-Powered Learning Approach for Sentiment Analysis in Software Engineering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {841--852}, doi = {10.1145/3338906.3338977}, year = {2019}, } Publisher's Version |
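The idea of treating emotional emojis as noisy sentiment labels can be sketched as follows. The emoji-to-polarity map and the "all emojis must agree" filtering rule are illustrative assumptions for this sketch; SEntiMoji itself learns sentiment-aware representations from millions of such emoji-labeled posts rather than labeling by lookup:

```python
# Illustrative emoji polarity map (an assumption, not the paper's lexicon).
EMOJI_POLARITY = {"😄": "positive", "🎉": "positive",
                  "😡": "negative", "😭": "negative"}

def noisy_label(post):
    # Use emotional emojis as noisy sentiment labels: keep a post for
    # training only if all of its emojis agree on a single polarity.
    found = {EMOJI_POLARITY[ch] for ch in post if ch in EMOJI_POLARITY}
    return found.pop() if len(found) == 1 else None

posts = ["Finally fixed that flaky test 🎉😄",
         "This merge conflict again 😡",
         "Mixed feelings about this refactor 😄😭"]
labels = [noisy_label(p) for p in posts]  # third post is discarded (None)
```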
|
Cherian, Snehit |
ESEC/FSE '19: "Predicting Breakdowns in Cloud ..."
Predicting Breakdowns in Cloud Services (with SPIKE)
Jianfeng Chen, Joymallya Chakraborty, Philip Clark, Kevin Haverlock, Snehit Cherian, and Tim Menzies (North Carolina State University, USA; LexisNexis, USA) Maintaining web-services is a mission-critical task where any downtime means loss of revenue and reputation (of being a reliable service provider). In the current competitive web services market, such a loss of reputation causes extensive loss of future revenue. To address this issue, we developed SPIKE, a data mining tool which can predict upcoming service breakdowns, half an hour into the future. Such predictions let an organization alert and assemble the tiger team to address the problem (e.g. by reconfiguring cloud hardware in order to reduce the likelihood of that breakdown). SPIKE utilizes (a) regression tree learning (with CART); (b) synthetic minority over-sampling (to handle how rare spikes are in our data); (c) hyperparameter optimization (to learn best settings for our local data) and (d) a technique we called “topology sampling” where training vectors are built from extensive details of an individual node plus summary details on all their neighbors. In the experiments reported here, SPIKE predicted service spikes 30 minutes into the future with recall and precision of 75% and above. Also, SPIKE performed relatively better than other widely-used learning methods (neural nets, random forests, logistic regression). @InProceedings{ESEC/FSE19p916, author = {Jianfeng Chen and Joymallya Chakraborty and Philip Clark and Kevin Haverlock and Snehit Cherian and Tim Menzies}, title = {Predicting Breakdowns in Cloud Services (with SPIKE)}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {916--924}, doi = {10.1145/3338906.3340450}, year = {2019}, } Publisher's Version ESEC/FSE '19: "TERMINATOR: Better Automated ..." 
TERMINATOR: Better Automated UI Test Case Prioritization Zhe Yu, Fahmid Fahid, Tim Menzies, Gregg Rothermel, Kyle Patrick, and Snehit Cherian (North Carolina State University, USA; LexisNexis, USA) Automated UI testing is an important component of the continuous integration process of software development. A modern web-based UI is an amalgam of reports from dozens of microservices written by multiple teams. Queries on a page that opens up another will fail if any of that page's microservices fails. As a result, the overall cost for automated UI testing is high since the UI elements cannot be tested in isolation. For example, the entire automated UI testing suite at LexisNexis takes around 30 hours (3-5 hours on the cloud) to execute, which slows down the continuous integration process. To mitigate this problem and give developers faster feedback on their code, test case prioritization techniques are used to reorder the automated UI test cases so that more failures can be detected earlier. Given that much of the automated UI testing is "black box" in nature, very little information (only the test case descriptions and testing results) can be utilized to prioritize these automated UI test cases. Hence, this paper evaluates 17 "black box" test case prioritization approaches that do not rely on source code information. Among these, we propose a novel test case prioritization (TCP) approach that dynamically re-prioritizes the test cases when new failures are detected, by applying and adapting a state-of-the-art framework from the total recall problem. Experimental results on LexisNexis automated UI testing data show that our new approach (which we call TERMINATOR) outperformed prior state-of-the-art approaches in terms of failure detection rates with negligible CPU overhead. 
@InProceedings{ESEC/FSE19p883, author = {Zhe Yu and Fahmid Fahid and Tim Menzies and Gregg Rothermel and Kyle Patrick and Snehit Cherian}, title = {TERMINATOR: Better Automated UI Test Case Prioritization}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {883--894}, doi = {10.1145/3338906.3340448}, year = {2019}, } Publisher's Version |
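The dynamic re-prioritization idea, re-ranking the remaining "black box" test cases using only the descriptions of failures seen so far, can be sketched with a simple Jaccard text similarity. This is an illustrative stand-in for TERMINATOR's adapted total-recall framework, and the test descriptions are hypothetical:

```python
def tokens(desc):
    return set(desc.lower().split())

def similarity(a, b):
    # Jaccard similarity between two test-case descriptions.
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def reprioritize(remaining, failed_descs):
    # After each detected failure, rank the remaining test cases by
    # their similarity to the descriptions of failures seen so far,
    # so that likely-failing tests run earlier.
    if not failed_descs:
        return remaining
    def score(tc):
        return max(similarity(tc, f) for f in failed_descs)
    return sorted(remaining, key=score, reverse=True)

remaining = ["login page renders header",
             "report export to pdf",
             "login page rejects bad password"]
order = reprioritize(remaining, failed_descs=["login page times out"])
```

After a login-related failure, both login tests move ahead of the unrelated report test, which is the feedback loop the approach exploits.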
|
Cheung, Shing-Chi |
ESEC/FSE '19: "Exploring and Exploiting the ..."
Exploring and Exploiting the Correlations between Bug-Inducing and Bug-Fixing Commits
Ming Wen, Rongxin Wu, Yepang Liu, Yongqiang Tian, Xuan Xie, Shing-Chi Cheung, and Zhendong Su (Hong Kong University of Science and Technology, China; Xiamen University, China; Southern University of Science and Technology, China; Sun Yat-sen University, China; ETH Zurich, Switzerland) Bug-inducing commits provide important information to understand when and how bugs were introduced. Therefore, they have been extensively investigated by existing studies and frequently leveraged to facilitate bug fixing in industrial practice. Due to the importance of bug-inducing commits in software debugging, we are motivated to conduct the first systematic empirical study to explore the correlations between bug-inducing and bug-fixing commits in terms of code elements and modifications. To facilitate the study, we collected the inducing and fixing commits for 333 bugs from seven large open-source projects. The empirical findings reveal important and significant correlations between a bug's inducing and fixing commits. We further exploit the usefulness of such correlation findings from two aspects. First, they explain why the SZZ algorithm, the most widely-adopted approach to collecting bug-inducing commits, is imprecise. In view of SZZ's imprecision, we revisited the findings of previous studies based on SZZ, and found that 8 out of 10 previous findings are significantly affected by SZZ's imprecision. Second, they shed light on the design of automated debugging techniques. For demonstration, we designed approaches that exploit the correlations with respect to statements and change actions. Our experiments on Defects4J show that our approaches can boost the performance of fault localization significantly and also advance existing APR techniques. 
@InProceedings{ESEC/FSE19p326, author = {Ming Wen and Rongxin Wu and Yepang Liu and Yongqiang Tian and Xuan Xie and Shing-Chi Cheung and Zhendong Su}, title = {Exploring and Exploiting the Correlations between Bug-Inducing and Bug-Fixing Commits}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {326--337}, doi = {10.1145/3338906.3338962}, year = {2019}, } Publisher's Version Info |
|
Chintalapati, Murali |
ESEC/FSE '19: "Robust Log-Based Anomaly Detection ..."
Robust Log-Based Anomaly Detection on Unstable Log Data
Xu Zhang, Yong Xu, Qingwei Lin, Bo Qiao, Hongyu Zhang, Yingnong Dang, Chunyu Xie, Xinsheng Yang, Qian Cheng, Ze Li, Junjie Chen, Xiaoting He, Randolph Yao, Jian-Guang Lou, Murali Chintalapati, Furao Shen, and Dongmei Zhang (Microsoft Research, China; Nanjing University, China; University of Newcastle, Australia; Microsoft, USA; Tianjin University, China) Logs are widely used by large and complex software-intensive systems for troubleshooting. There have been many studies on log-based anomaly detection. To detect the anomalies, the existing methods mainly construct a detection model using log event data extracted from historical logs. However, we find that the existing methods do not work well in practice. These methods make the closed-world assumption, which assumes that the log data is stable over time and the set of distinct log events is known. However, our empirical study shows that in practice, log data often contains previously unseen log events or log sequences. The instability of log data comes from two sources: 1) the evolution of logging statements, and 2) the processing noise in log data. In this paper, we propose a new log-based anomaly detection approach, called LogRobust. LogRobust extracts semantic information of log events and represents them as semantic vectors. It then detects anomalies by utilizing an attention-based Bi-LSTM model, which has the ability to capture the contextual information in the log sequences and automatically learn the importance of different log events. In this way, LogRobust is able to identify and handle unstable log events and sequences. We have evaluated LogRobust using logs collected from the Hadoop system and an actual online service system of Microsoft. The experimental results show that the proposed approach can well address the problem of log instability and achieve accurate and robust results on real-world, ever-changing log data.
@InProceedings{ESEC/FSE19p807, author = {Xu Zhang and Yong Xu and Qingwei Lin and Bo Qiao and Hongyu Zhang and Yingnong Dang and Chunyu Xie and Xinsheng Yang and Qian Cheng and Ze Li and Junjie Chen and Xiaoting He and Randolph Yao and Jian-Guang Lou and Murali Chintalapati and Furao Shen and Dongmei Zhang}, title = {Robust Log-Based Anomaly Detection on Unstable Log Data}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {807--817}, doi = {10.1145/3338906.3338931}, year = {2019}, } Publisher's Version |
|
Cito, Jürgen |
ESEC/FSE '19: "Monitoring-Aware IDEs ..."
Monitoring-Aware IDEs
Jos Winter, Maurício Aniche, Jürgen Cito, and Arie van Deursen (Adyen, Netherlands; Delft University of Technology, Netherlands; Massachusetts Institute of Technology, USA) Engineering modern large-scale software requires software developers to not solely focus on writing code, but also to continuously examine monitoring data to reason about the dynamic behavior of their systems. These additional monitoring responsibilities for developers have only emerged recently, in the light of DevOps culture. Interestingly, software development activities happen mainly in the IDE, while reasoning about production monitoring happens in separate monitoring tools. We propose an approach that integrates monitoring signals into the development environment and workflow. We conjecture that an IDE with such capability improves the performance of developers as time spent continuously context switching from development to monitoring would be eliminated. This paper takes a first step towards understanding the benefits of a possible monitoring-aware IDE. We implemented a prototype of a Monitoring-Aware IDE, connected to the monitoring systems of Adyen, a large-scale payment company that performs intense monitoring in their software systems. Given our results, we firmly believe that monitoring-aware IDEs can play an essential role in improving how developers perform monitoring. @InProceedings{ESEC/FSE19p420, author = {Jos Winter and Maurício Aniche and Jürgen Cito and Arie van Deursen}, title = {Monitoring-Aware IDEs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {420--431}, doi = {10.1145/3338906.3338926}, year = {2019}, } Publisher's Version |
|
Clapp, Lazaro |
ESEC/FSE '19: "NullAway: Practical Type-Based ..."
NullAway: Practical Type-Based Null Safety for Java
Subarno Banerjee, Lazaro Clapp, and Manu Sridharan (University of Michigan, USA; Uber Technologies, USA; University of California at Riverside, USA) NullPointerExceptions (NPEs) are a key source of crashes in modern Java programs. Previous work has shown how such errors can be prevented at compile time via code annotations and pluggable type checking. However, such systems have been difficult to deploy on large-scale software projects, due to significant build-time overhead and / or a high annotation burden. This paper presents NullAway, a new type-based null safety checker for Java that overcomes these issues. NullAway has been carefully engineered for low overhead, so it can run as part of every build. Further, NullAway reduces annotation burden through targeted unsound assumptions, aiming for no false negatives in practice on checked code. Our evaluation shows that NullAway has significantly lower build-time overhead (1.15×) than comparable tools (2.8-5.1×). Further, on a corpus of production crash data for widely-used Android apps built with NullAway, remaining NPEs were due to unchecked third-party libraries (64%), deliberate error suppressions (17%), or reflection and other forms of post-checking code modification (17%), never due to NullAway’s unsound assumptions for checked code. @InProceedings{ESEC/FSE19p740, author = {Subarno Banerjee and Lazaro Clapp and Manu Sridharan}, title = {NullAway: Practical Type-Based Null Safety for Java}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {740--750}, doi = {10.1145/3338906.3338919}, year = {2019}, } Publisher's Version Artifacts Reusable |
|
Clark, Philip |
ESEC/FSE '19: "Predicting Breakdowns in Cloud ..."
Predicting Breakdowns in Cloud Services (with SPIKE)
Jianfeng Chen, Joymallya Chakraborty, Philip Clark, Kevin Haverlock, Snehit Cherian, and Tim Menzies (North Carolina State University, USA; LexisNexis, USA) Maintaining web-services is a mission-critical task where any downtime means loss of revenue and reputation (of being a reliable service provider). In the current competitive web services market, such a loss of reputation causes extensive loss of future revenue. To address this issue, we developed SPIKE, a data mining tool which can predict upcoming service breakdowns, half an hour into the future. Such predictions let an organization alert and assemble the tiger team to address the problem (e.g. by reconfiguring cloud hardware in order to reduce the likelihood of that breakdown). SPIKE utilizes (a) regression tree learning (with CART); (b) synthetic minority over-sampling (to handle how rare spikes are in our data); (c) hyperparameter optimization (to learn best settings for our local data) and (d) a technique we called “topology sampling” where training vectors are built from extensive details of an individual node plus summary details on all their neighbors. In the experiments reported here, SPIKE predicted service spikes 30 minutes into the future with recall and precision of 75% and above. Also, SPIKE performed relatively better than other widely-used learning methods (neural nets, random forests, logistic regression). @InProceedings{ESEC/FSE19p916, author = {Jianfeng Chen and Joymallya Chakraborty and Philip Clark and Kevin Haverlock and Snehit Cherian and Tim Menzies}, title = {Predicting Breakdowns in Cloud Services (with SPIKE)}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {916--924}, doi = {10.1145/3338906.3340450}, year = {2019}, } Publisher's Version |
|
Coghlan, Christy A. |
ESEC/FSE '19: "Why Aren’t Regular Expressions ..."
Why Aren’t Regular Expressions a Lingua Franca? An Empirical Study on the Re-use and Portability of Regular Expressions
James C. Davis, Louis G. Michael IV, Christy A. Coghlan, Francisco Servant, and Dongyoon Lee (Virginia Tech, USA) This paper explores the extent to which regular expressions (regexes) are portable across programming languages. Many languages offer similar regex syntaxes, and it would be natural to assume that regexes can be ported across language boundaries. But can regexes be copy/pasted across language boundaries while retaining their semantic and performance characteristics? In our survey of 158 professional software developers, most indicated that they re-use regexes across language boundaries and about half reported that they believe regexes are a universal language. We experimentally evaluated the riskiness of this practice using a novel regex corpus — 537,806 regexes from 193,524 projects written in JavaScript, Java, PHP, Python, Ruby, Go, Perl, and Rust. Using our polyglot regex corpus, we explored the hitherto-unstudied regex portability problems: logic errors due to semantic differences, and security vulnerabilities due to performance differences. We report that developers’ belief in a regex lingua franca is understandable but unfounded. Though most regexes compile across language boundaries, 15% exhibit semantic differences across languages and 10% exhibit performance differences across languages. We explained these differences using regex documentation, and further illuminate our findings by investigating regex engine implementations. Along the way we found bugs in the regex engines of JavaScript-V8, Python, Ruby, and Rust, and potential semantic and performance regex bugs in thousands of modules. @InProceedings{ESEC/FSE19p443, author = {James C. Davis and Louis G. Michael IV and Christy A. Coghlan and Francisco Servant and Dongyoon Lee}, title = {Why Aren’t Regular Expressions a Lingua Franca? 
An Empirical Study on the Re-use and Portability of Regular Expressions}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {443--454}, doi = {10.1145/3338906.3338909}, year = {2019}, } Publisher's Version Artifacts Reusable |
|
Correia, Daniel |
ESEC/FSE '19: "MOTSD: A Multi-Objective Test ..."
MOTSD: A Multi-Objective Test Selection Tool using Test Suite Diagnosability
Daniel Correia, Rui Abreu, Pedro Santos, and João Nadkarni (University of Lisbon, Portugal; OutSystems, Portugal) Performing regression testing on large software systems becomes unfeasible as it takes too long to run all the test cases every time a change is made. The main motivation of this work was to provide a faster and earlier feedback loop to the developers at OutSystems when a change is made. The developed tool, MOTSD, implements a multi-objective test selection approach in a C# code base using a test suite diagnosability metric and historical metrics as objectives and it is powered by a particle swarm optimization algorithm. We present implementation challenges, current experimental results and limitations of the tool when applied in an industrial context. Screencast demo link: https://www.youtube.com/watch?v=CYMfQTUu2BE @InProceedings{ESEC/FSE19p1070, author = {Daniel Correia and Rui Abreu and Pedro Santos and João Nadkarni}, title = {MOTSD: A Multi-Objective Test Selection Tool using Test Suite Diagnosability}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1070--1074}, doi = {10.1145/3338906.3341187}, year = {2019}, } Publisher's Version ESEC/FSE '19: "An Industrial Application ..." An Industrial Application of Test Selection using Test Suite Diagnosability Daniel Correia (University of Lisbon, Portugal; Instituto Superior Técnico, Portugal) Performing full regression testing every time a change is made on large software systems tends to be unfeasible as it takes too long to run all the test cases. The main motivation of this work was to provide a shorter and earlier feedback loop to the developers at OutSystems when a change is made (instead of having to wait for slower feedback from a CI pipeline). The developed tool, MOTSD, implements a multi-objective test selection approach in a C# code base using a test suite diagnosability metric and historical metrics as objectives and it is powered by a particle swarm optimization algorithm. 
This paper presents implementation challenges, current experimental results and limitations of the developed approach when applied in an industrial context. @InProceedings{ESEC/FSE19p1214, author = {Daniel Correia}, title = {An Industrial Application of Test Selection using Test Suite Diagnosability}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1214--1216}, doi = {10.1145/3338906.3342493}, year = {2019}, } Publisher's Version |
|
Corrello, Taylor |
ESEC/FSE '19: "Achilles’ Heel of Plug-and-Play ..."
Achilles’ Heel of Plug-and-Play Software Architectures: A Grounded Theory Based Approach
Joanna C. S. Santos, Adriana Sejfia, Taylor Corrello, Smruthi Gadenkanahalli, and Mehdi Mirakhorli (Rochester Institute of Technology, USA) Through a set of well-defined interfaces, plug-and-play architectures enable additional functionalities to be added or removed from a system at its runtime. However, plug-ins can also increase the application’s attack surface or introduce untrusted behavior into the system. In this paper, we (1) use a grounded theory-based approach to conduct an empirical study of common vulnerabilities in plug-and-play architectures; (2) conduct a systematic literature survey and evaluate the extent to which the results of the empirical study are novel or supported by the literature; (3) evaluate the practicality of the findings by interviewing practitioners with several years of experience in plug-and-play systems. By analyzing Chromium, Thunderbird, Firefox, Pidgin, WordPress, Apache OFBiz, and OpenMRS, we found a total of 303 vulnerabilities rooted in extensibility design decisions and observed that these plugin-related vulnerabilities were caused by 16 different types of vulnerabilities. Out of these 16 vulnerability types, we identified 19 mitigation procedures for fixing them. The literature review supported 12 vulnerability types and 8 mitigation techniques discovered in our empirical study, and indicated that 5 mitigation techniques were not covered in our empirical study. Furthermore, it indicated that 4 vulnerability types and 11 mitigation techniques discovered in our empirical study were not covered in the literature. The interviews with practitioners confirmed the relevance of the findings and highlighted ways that the results of this empirical study can have an impact in practice. @InProceedings{ESEC/FSE19p671, author = {Joanna C. S. 
Santos and Adriana Sejfia and Taylor Corrello and Smruthi Gadenkanahalli and Mehdi Mirakhorli}, title = {Achilles’ Heel of Plug-and-Play Software Architectures: A Grounded Theory Based Approach}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {671--682}, doi = {10.1145/3338906.3338969}, year = {2019}, } Publisher's Version Info |
|
Cotroneo, Domenico |
ESEC/FSE '19: "How Bad Can a Bug Get? An ..."
How Bad Can a Bug Get? An Empirical Analysis of Software Failures in the OpenStack Cloud Computing Platform
Domenico Cotroneo, Luigi De Simone, Pietro Liguori, Roberto Natella, and Nematollah Bidokhti (Federico II University of Naples, Italy; Futurewei Technologies, USA) Cloud management systems provide abstractions and APIs for programmatically configuring cloud infrastructures. Unfortunately, residual software bugs in these systems can potentially lead to high-severity failures, such as prolonged outages and data losses. In this paper, we investigate the impact of failures in the context of the widespread OpenStack cloud management system, by performing fault injection and by analyzing the impact of the resulting failures in terms of fail-stop behavior, failure detection through logging, and failure propagation across components. The analysis points out that most of the failures are not timely detected and notified; moreover, many of these failures can silently propagate over time and through components of the cloud management system, which calls for more thorough run-time checks and fault containment. @InProceedings{ESEC/FSE19p200, author = {Domenico Cotroneo and Luigi De Simone and Pietro Liguori and Roberto Natella and Nematollah Bidokhti}, title = {How Bad Can a Bug Get? An Empirical Analysis of Software Failures in the OpenStack Cloud Computing Platform}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {200--211}, doi = {10.1145/3338906.3338916}, year = {2019}, } Publisher's Version Artifacts Reusable |
|
Coviello, Carmen |
ESEC/FSE '19: "Distributed Execution of Test ..."
Distributed Execution of Test Cases and Continuous Integration
Carmen Coviello (University of Basilicata, Italy) I present here a part of the research conducted in my Ph.D. course. In particular, I focus on my ongoing work on how to support testing in the context of Continuous Integration (CI) development by distributing the execution of test cases (TCs) on geographically dispersed servers. I show how to find a trade-off between the cost of leased servers and the time to execute a given test suite (TS). The distribution and the execution of TCs on servers are modeled as a multi-objective optimization problem, where the goal is to balance the cost to lease servers and the time to execute TCs. The preliminary results: (i) show evidence of the existence of a Pareto front (a trade-off between the cost to lease servers and TC execution time) and (ii) suggest that the solutions found are worthwhile compared to a traditional non-distributed TS execution (i.e., a single server/PC). Although the obtained results cannot be considered conclusive, it seems that the solutions are worthwhile for speeding up testing activities in the context of CI. @InProceedings{ESEC/FSE19p1148, author = {Carmen Coviello}, title = {Distributed Execution of Test Cases and Continuous Integration}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1148--1151}, doi = {10.1145/3338906.3341460}, year = {2019}, } Publisher's Version |
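The cost/time trade-off described in this abstract can be sketched as a Pareto-front computation. The following is a minimal illustration, not the paper's implementation; all candidate allocations and numbers are made up:

```python
# Hypothetical sketch: given candidate assignments of a test suite to
# leased servers, keep only the Pareto-optimal ones, i.e. those for
# which no other candidate is both cheaper AND faster.

def pareto_front(candidates):
    """Return the non-dominated (cost, time) candidates."""
    front = []
    for c in candidates:
        dominated = any(
            o["cost"] <= c["cost"] and o["time"] <= c["time"]
            and (o["cost"] < c["cost"] or o["time"] < c["time"])
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

# Each candidate: a server count with its leasing cost and the resulting
# wall-clock time to run the whole test suite (illustrative values).
candidates = [
    {"servers": 1, "cost": 1.0, "time": 60.0},   # single server: cheap, slow
    {"servers": 4, "cost": 4.0, "time": 18.0},
    {"servers": 8, "cost": 8.0, "time": 11.0},
    {"servers": 8, "cost": 9.5, "time": 16.0},   # dominated by the row above
]

front = pareto_front(candidates)
```

The single-server candidate survives on cost alone, mirroring the abstract's comparison against a traditional non-distributed execution.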
|
Dang, Yingnong |
ESEC/FSE '19: "Robust Log-Based Anomaly Detection ..."
Robust Log-Based Anomaly Detection on Unstable Log Data
Xu Zhang, Yong Xu, Qingwei Lin, Bo Qiao, Hongyu Zhang, Yingnong Dang, Chunyu Xie, Xinsheng Yang, Qian Cheng, Ze Li, Junjie Chen, Xiaoting He, Randolph Yao, Jian-Guang Lou, Murali Chintalapati, Furao Shen, and Dongmei Zhang (Microsoft Research, China; Nanjing University, China; University of Newcastle, Australia; Microsoft, USA; Tianjin University, China) Logs are widely used by large and complex software-intensive systems for troubleshooting. There have been many studies on log-based anomaly detection. To detect the anomalies, the existing methods mainly construct a detection model using log event data extracted from historical logs. However, we find that the existing methods do not work well in practice. These methods make the closed-world assumption, which assumes that the log data is stable over time and the set of distinct log events is known. However, our empirical study shows that in practice, log data often contains previously unseen log events or log sequences. The instability of log data comes from two sources: 1) the evolution of logging statements, and 2) the processing noise in log data. In this paper, we propose a new log-based anomaly detection approach, called LogRobust. LogRobust extracts semantic information of log events and represents them as semantic vectors. It then detects anomalies by utilizing an attention-based Bi-LSTM model, which has the ability to capture the contextual information in the log sequences and automatically learn the importance of different log events. In this way, LogRobust is able to identify and handle unstable log events and sequences. We have evaluated LogRobust using logs collected from the Hadoop system and an actual online service system of Microsoft. The experimental results show that the proposed approach can effectively address the problem of log instability and achieve accurate and robust results on real-world, ever-changing log data. 
@InProceedings{ESEC/FSE19p807, author = {Xu Zhang and Yong Xu and Qingwei Lin and Bo Qiao and Hongyu Zhang and Yingnong Dang and Chunyu Xie and Xinsheng Yang and Qian Cheng and Ze Li and Junjie Chen and Xiaoting He and Randolph Yao and Jian-Guang Lou and Murali Chintalapati and Furao Shen and Dongmei Zhang}, title = {Robust Log-Based Anomaly Detection on Unstable Log Data}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {807--817}, doi = {10.1145/3338906.3338931}, year = {2019}, } Publisher's Version |
|
Davis, James C. |
ESEC/FSE '19: "Why Aren’t Regular Expressions ..."
Why Aren’t Regular Expressions a Lingua Franca? An Empirical Study on the Re-use and Portability of Regular Expressions
James C. Davis, Louis G. Michael IV, Christy A. Coghlan, Francisco Servant, and Dongyoon Lee (Virginia Tech, USA) This paper explores the extent to which regular expressions (regexes) are portable across programming languages. Many languages offer similar regex syntaxes, and it would be natural to assume that regexes can be ported across language boundaries. But can regexes be copy/pasted across language boundaries while retaining their semantic and performance characteristics? In our survey of 158 professional software developers, most indicated that they re-use regexes across language boundaries and about half reported that they believe regexes are a universal language. We experimentally evaluated the riskiness of this practice using a novel regex corpus — 537,806 regexes from 193,524 projects written in JavaScript, Java, PHP, Python, Ruby, Go, Perl, and Rust. Using our polyglot regex corpus, we explored the hitherto-unstudied regex portability problems: logic errors due to semantic differences, and security vulnerabilities due to performance differences. We report that developers’ belief in a regex lingua franca is understandable but unfounded. Though most regexes compile across language boundaries, 15% exhibit semantic differences across languages and 10% exhibit performance differences across languages. We explained these differences using regex documentation, and further illuminated our findings by investigating regex engine implementations. Along the way we found bugs in the regex engines of JavaScript-V8, Python, Ruby, and Rust, and potential semantic and performance regex bugs in thousands of modules. @InProceedings{ESEC/FSE19p443, author = {James C. Davis and Louis G. Michael IV and Christy A. Coghlan and Francisco Servant and Dongyoon Lee}, title = {Why Aren’t Regular Expressions a Lingua Franca? 
An Empirical Study on the Re-use and Portability of Regular Expressions}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {443--454}, doi = {10.1145/3338906.3338909}, year = {2019}, } Publisher's Version Artifacts Reusable ESEC/FSE '19: "Rethinking Regex Engines to ..." Rethinking Regex Engines to Address ReDoS James C. Davis (Virginia Tech, USA) Regular expressions (regexes) are a powerful string manipulation tool. Unfortunately, in programming languages like Python, Java, and JavaScript, they are unnecessarily dangerous, implemented with worst-case exponential matching behavior. This high time complexity exposes software services to regular expression denial of service (ReDoS) attacks. We propose a data-driven redesign of regex engines, to reflect how regexes are used and what they typically look like. We report that about 95% of regexes in popular programming languages can be evaluated in linear time. The regex engine is a fundamental component of a programming language, and any changes risk introducing compatibility problems. We believe a full redesign is therefore impractical, and so we describe how the vast majority of regex matches can be made linear-time with minor, not major, changes to existing algorithms. Our prototype shows that on a kernel of the regex language, we can trade space for time to make regex matches safe. @InProceedings{ESEC/FSE19p1256, author = {James C. Davis}, title = {Rethinking Regex Engines to Address ReDoS}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1256--1258}, doi = {10.1145/3338906.3342509}, year = {2019}, } Publisher's Version |
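The exponential backtracking behavior that underlies ReDoS, discussed in both entries above, can be demonstrated in a few lines. This is a minimal sketch using a classic ambiguous pattern, not one taken from the papers' corpus, and it assumes CPython's backtracking `re` engine:

```python
import re
import time

# The alternation (a|a)+ gives a backtracking engine exponentially many
# ways to partition a run of 'a's; the trailing 'b' guarantees every
# partition fails, so the engine must try them all before giving up.

def match_time(n):
    text = "a" * n + "b"          # mismatch forces full backtracking
    start = time.perf_counter()
    result = re.match(r"(a|a)+$", text)
    return result, time.perf_counter() - start

result_small, t_small = match_time(5)    # a handful of backtracking paths
result_large, t_large = match_time(18)   # on the order of 2^18 paths
```

Both calls return no match; the second one is observably slower, which is the performance gap an attacker exploits.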
|
DeFreez, Daniel |
ESEC/FSE '19: "Effective Error-Specification ..."
Effective Error-Specification Inference via Domain-Knowledge Expansion
Daniel DeFreez, Haaken Martinson Baldwin, Cindy Rubio-González, and Aditya V. Thakur (University of California at Davis, USA) Error-handling code responds to the occurrence of runtime errors. Failure to correctly handle errors can lead to security vulnerabilities and data loss. This paper deals with error handling in software written in C that uses the return-code idiom: the presence and type of error is encoded in the return value of a function. This paper describes EESI, a static analysis that infers the set of values that a function can return on error. Such a function error-specification can then be used to identify bugs related to incorrect error handling. The key insight of EESI is to bootstrap the analysis with domain knowledge related to error handling provided by a developer. EESI uses a combination of intraprocedural, flow-sensitive analysis and interprocedural, context-insensitive analysis to ensure precision and scalability. We built a tool ECC to demonstrate how the function error-specifications inferred by EESI can be used to automatically find bugs related to incorrect error handling. ECC detected 246 bugs across 9 programs, of which 110 have been confirmed. ECC detected 220 previously unknown bugs, of which 99 are confirmed. Two patches have already been merged into OpenSSL. @InProceedings{ESEC/FSE19p466, author = {Daniel DeFreez and Haaken Martinson Baldwin and Cindy Rubio-González and Aditya V. Thakur}, title = {Effective Error-Specification Inference via Domain-Knowledge Expansion}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {466--476}, doi = {10.1145/3338906.3338960}, year = {2019}, } Publisher's Version Artifacts Reusable |
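The bootstrapping idea in this abstract — seed the analysis with developer-provided error values, then propagate them along the call graph — can be sketched as a small fixpoint computation. This is my reconstruction of the concept, not EESI's implementation; the function names and summaries are hypothetical:

```python
# Seed "domain knowledge": malloc-style wrappers return NULL (modeled as
# 0) on error; open-style wrappers return a negative value on error.
seed_specs = {
    "malloc_wrapper": {0},
    "open_wrapper": {-1},
}

# Toy call summaries: each function lists callees whose return value it
# forwards unchanged to its own caller when the callee fails.
forwards_error_of = {
    "read_config": ["open_wrapper"],
    "load_all": ["read_config", "malloc_wrapper"],
}

def infer_error_specs(seeds, forwards):
    """Propagate error-return sets to callers until a fixpoint."""
    specs = {f: set(v) for f, v in seeds.items()}
    changed = True
    while changed:
        changed = False
        for caller, callees in forwards.items():
            inferred = set()
            for callee in callees:
                inferred |= specs.get(callee, set())
            if inferred - specs.get(caller, set()):
                specs.setdefault(caller, set()).update(inferred)
                changed = True
    return specs

specs = infer_error_specs(seed_specs, forwards_error_of)
```

A checker in the spirit of ECC would then flag call sites that ignore a return value whose inferred error set is non-empty.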
|
Denaro, Giovanni |
ESEC/FSE '19: "Symbolic Execution-Driven ..."
Symbolic Execution-Driven Extraction of the Parallel Execution Plans of Spark Applications
Luciano Baresi, Giovanni Denaro, and Giovanni Quattrocchi (Politecnico di Milano, Italy; University of Milano-Bicocca, Italy) The execution of Spark applications is based on the execution order and parallelism of the different jobs, given data and available resources. Spark reifies these dependencies in a graph that we refer to as the (parallel) execution plan of the application. All the approaches that have studied the estimation of the execution times and the dynamic provisioning of resources for this kind of applications have always assumed that the execution plan is unique, given the computing resources at hand. This assumption is at least simplistic for applications that include conditional branches or loops and limits the precision of the prediction techniques. This paper introduces SEEPEP, a novel technique based on symbolic execution and search-based test generation, that: i) automatically extracts the possible execution plans of a Spark application, along with dedicated launchers with properly synthesized data that can be used for profiling, and ii) tunes the allocation of resources at runtime based on the knowledge of the execution plans for which the path conditions hold. The assessment we carried out shows that SEEPEP can effectively complement dynaSpark, an extension of Spark with dynamic resource provisioning capabilities, to help predict the execution duration and the allocation of resources. @InProceedings{ESEC/FSE19p246, author = {Luciano Baresi and Giovanni Denaro and Giovanni Quattrocchi}, title = {Symbolic Execution-Driven Extraction of the Parallel Execution Plans of Spark Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {246--256}, doi = {10.1145/3338906.3338973}, year = {2019}, } Publisher's Version |
|
Denkers, Jasper |
ESEC/FSE '19: "A Longitudinal Field Study ..."
A Longitudinal Field Study on Creation and Use of Domain-Specific Languages in Industry
Jasper Denkers (Delft University of Technology, Netherlands) Domain-specific languages (DSLs) have extensively been investigated in research and have frequently been applied in practice for over 20 years. While DSLs have been attributed improvements in terms of productivity, maintainability, and taming accidental complexity, surprisingly, we know little about their actual impact on software engineering practice. This PhD project, which is carried out in close collaboration with our industrial partner Océ - A Canon Company, offers a unique opportunity to study the application of DSLs using a longitudinal field study. In particular, we focus on introducing DSLs with language workbenches, i.e., infrastructures for designing and deploying DSLs, for projects that are already running for several years and for which extensive domain analysis outcomes are available. In doing so, we expect to gain a novel perspective on DSLs in practice. Additionally, we aim to derive best practices for DSL development and to identify and overcome limitations in the current state-of-the-art tooling for DSLs. @InProceedings{ESEC/FSE19p1152, author = {Jasper Denkers}, title = {A Longitudinal Field Study on Creation and Use of Domain-Specific Languages in Industry}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1152--1155}, doi = {10.1145/3338906.3341463}, year = {2019}, } Publisher's Version |
|
De Paula, Danielly |
ESEC/FSE '19: "Design Thinking in Practice: ..."
Design Thinking in Practice: Understanding Manifestations of Design Thinking in Software Engineering
Franziska Dobrigkeit and Danielly de Paula (HPI, Germany; National University of Ireland at Galway, Ireland) This industry case study explores where and how Design Thinking supports software development teams in their endeavour to create innovative software solutions. Design Thinking has found its way into software companies ranging from startups to SMEs and multinationals. It is mostly seen as a human-centered innovation approach or a way to elicit requirements in a more agile fashion. However, research in Design Thinking suggests that being exposed to DT changes the mindset of employees. Thus, this article aims to explore the wider use of DT within software companies through a case study in a multinational organization. Our results indicate that, once trained in DT, employees find various ways to implement it, not only as a pre-phase to software development but throughout their projects, even applying it to aspects of their surroundings such as the development process, team spaces, and teamwork. Specifically, we present a model of how DT manifests itself in a software development company. @InProceedings{ESEC/FSE19p1059, author = {Franziska Dobrigkeit and Danielly de Paula}, title = {Design Thinking in Practice: Understanding Manifestations of Design Thinking in Software Engineering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1059--1069}, doi = {10.1145/3338906.3340451}, year = {2019}, } Publisher's Version |
|
De Simone, Luigi |
ESEC/FSE '19: "How Bad Can a Bug Get? An ..."
How Bad Can a Bug Get? An Empirical Analysis of Software Failures in the OpenStack Cloud Computing Platform
Domenico Cotroneo, Luigi De Simone, Pietro Liguori, Roberto Natella, and Nematollah Bidokhti (Federico II University of Naples, Italy; Futurewei Technologies, USA) Cloud management systems provide abstractions and APIs for programmatically configuring cloud infrastructures. Unfortunately, residual software bugs in these systems can potentially lead to high-severity failures, such as prolonged outages and data losses. In this paper, we investigate the impact of failures in the context of the widespread OpenStack cloud management system, by performing fault injection and by analyzing the impact of the resulting failures in terms of fail-stop behavior, failure detection through logging, and failure propagation across components. The analysis points out that most of the failures are not timely detected and notified; moreover, many of these failures can silently propagate over time and through components of the cloud management system, which calls for more thorough run-time checks and fault containment. @InProceedings{ESEC/FSE19p200, author = {Domenico Cotroneo and Luigi De Simone and Pietro Liguori and Roberto Natella and Nematollah Bidokhti}, title = {How Bad Can a Bug Get? An Empirical Analysis of Software Failures in the OpenStack Cloud Computing Platform}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {200--211}, doi = {10.1145/3338906.3338916}, year = {2019}, } Publisher's Version Artifacts Reusable |
|
Deursen, Arie van |
ESEC/FSE '19: "Releasing Fast and Slow: An ..."
Releasing Fast and Slow: An Exploratory Case Study at ING
Elvan Kula, Ayushi Rastogi, Hennie Huijgens, Arie van Deursen, and Georgios Gousios (Delft University of Technology, Netherlands; ING Bank, Netherlands) The appeal of delivering new features faster has led many software projects to adopt rapid releases. However, it is not well understood what the effects of this practice are. This paper presents an exploratory case study of rapid releases at ING, a large banking company that develops software solutions in-house, to characterize rapid releases. Since 2011, ING has shifted to a rapid release model. This switch has resulted in a mixed environment of 611 teams releasing relatively fast and slow. We followed a mixed-methods approach in which we conducted a survey with 461 participants and corroborated their perceptions with 2 years of code quality data and 1 year of release delay data. Our research shows that: rapid releases are more commonly delayed than their non-rapid counterparts, however, rapid releases have shorter delays; rapid releases can be beneficial in terms of reviewing and user-perceived quality; rapidly released software tends to have a higher code churn, a higher test coverage and a lower average complexity; challenges in rapid releases are related to managing dependencies and certain code aspects, e.g., design debt. @InProceedings{ESEC/FSE19p785, author = {Elvan Kula and Ayushi Rastogi and Hennie Huijgens and Arie van Deursen and Georgios Gousios}, title = {Releasing Fast and Slow: An Exploratory Case Study at ING}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {785--795}, doi = {10.1145/3338906.3338978}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Monitoring-Aware IDEs ..." 
Monitoring-Aware IDEs Jos Winter, Maurício Aniche, Jürgen Cito, and Arie van Deursen (Adyen, Netherlands; Delft University of Technology, Netherlands; Massachusetts Institute of Technology, USA) Engineering modern large-scale software requires software developers to not solely focus on writing code, but also to continuously examine monitoring data to reason about the dynamic behavior of their systems. These additional monitoring responsibilities for developers have only emerged recently, in the light of DevOps culture. Interestingly, software development activities happen mainly in the IDE, while reasoning about production monitoring happens in separate monitoring tools. We propose an approach that integrates monitoring signals into the development environment and workflow. We conjecture that an IDE with such capability improves the performance of developers as time spent continuously context switching from development to monitoring would be eliminated. This paper takes a first step towards understanding the benefits of a possible monitoring-aware IDE. We implemented a prototype of a Monitoring-Aware IDE, connected to the monitoring systems of Adyen, a large-scale payment company that performs intense monitoring in their software systems. Given our results, we firmly believe that monitoring-aware IDEs can play an essential role in improving how developers perform monitoring. @InProceedings{ESEC/FSE19p420, author = {Jos Winter and Maurício Aniche and Jürgen Cito and Arie van Deursen}, title = {Monitoring-Aware IDEs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {420--431}, doi = {10.1145/3338906.3338926}, year = {2019}, } Publisher's Version |
|
Dey, Kuntal |
ESEC/FSE '19: "Black Box Fairness Testing ..."
Black Box Fairness Testing of Machine Learning Models
Aniya Aggarwal, Pranay Lohia, Seema Nagar, Kuntal Dey, and Diptikalyan Saha (IBM Research, India) Any given AI system cannot be accepted unless its trustworthiness is proven. An important characteristic of a trustworthy AI system is the absence of algorithmic bias. 'Individual discrimination' exists when a given individual, differing from another only in 'protected attributes' (e.g., age, gender, or race), receives a different decision outcome from a given machine learning (ML) model as compared to the other individual. The current work addresses the problem of detecting the presence of individual discrimination in given ML models. Detection of individual discrimination is test-intensive in a black-box setting, which is not feasible for non-trivial systems. We propose a methodology for auto-generation of test inputs, for the task of detecting individual discrimination. Our approach combines two well-established techniques - symbolic execution and local explainability - for effective test case generation. We empirically show that our approach to generating test cases is highly effective compared to the best-known benchmark systems that we examine. @InProceedings{ESEC/FSE19p625, author = {Aniya Aggarwal and Pranay Lohia and Seema Nagar and Kuntal Dey and Diptikalyan Saha}, title = {Black Box Fairness Testing of Machine Learning Models}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {625--635}, doi = {10.1145/3338906.3338937}, year = {2019}, } Publisher's Version |
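The individual-discrimination check defined in this abstract amounts to flipping only the protected attribute and comparing the model's decisions. A minimal black-box sketch follows; the toy model, attribute names, and thresholds are hypothetical, not from the paper:

```python
def toy_model(applicant):
    # A deliberately biased decision rule, for illustration only.
    score = applicant["income"] / 1000 + applicant["credit_years"]
    if applicant["gender"] == "female":
        score -= 5                      # injected bias
    return "approve" if score >= 50 else "reject"

def individual_discrimination(model, individual, protected_attr, other_value):
    """True iff changing only the protected attribute changes the outcome."""
    flipped = dict(individual)
    flipped[protected_attr] = other_value
    return model(individual) != model(flipped)

applicant = {"income": 48000, "credit_years": 4, "gender": "female"}
biased = individual_discrimination(toy_model, applicant, "gender", "male")
```

The paper's contribution is generating such test inputs automatically (via symbolic execution plus local explainability) rather than checking one hand-picked individual as done here.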
|
Dingel, Juergen |
ESEC/FSE '19: "Concolic Testing for Models ..."
Concolic Testing for Models of State-Based Systems
Reza Ahmadi and Juergen Dingel (Queen's University, Canada) Testing models of modern cyber-physical systems is not straightforward due to timing constraints, numerous if not infinite possible behaviors, and complex communications between components. Software testing tools and approaches that can generate test cases to test these systems are therefore important. Many of the existing automatic approaches support testing at the implementation level only. The existing model-level testing tools either treat the model as a black box (e.g., random testing approaches) or have limitations when it comes to generating complex test sequences (e.g., symbolic execution). This paper presents a novel approach and tool support for automatic unit testing of models of real-time embedded systems by conducting concolic testing, a hybrid testing technique based on concrete and symbolic execution. Our technique conducts automatic concolic testing in two phases. In the first phase, the model is isolated from its environment, transformed into a testable model, and integrated with a test harness. In the second phase, the harness tests the model concolically and reports the test execution results. We describe an implementation of our approach in the context of Papyrus-RT, an open source Model Driven Engineering (MDE) tool based on the modeling language UML-RT, and report the results of applying our concolic testing approach to a set of standard benchmark models to validate our approach. @InProceedings{ESEC/FSE19p4, author = {Reza Ahmadi and Juergen Dingel}, title = {Concolic Testing for Models of State-Based Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {4--15}, doi = {10.1145/3338906.3338908}, year = {2019}, } Publisher's Version Artifacts Reusable |
|
Di Penta, Massimiliano |
ESEC/FSE '19: "Assessing the Quality of the ..."
Assessing the Quality of the Steps to Reproduce in Bug Reports
Oscar Chaparro, Carlos Bernal-Cárdenas, Jing Lu, Kevin Moran, Andrian Marcus, Massimiliano Di Penta, Denys Poshyvanyk, and Vincent Ng (College of William and Mary, USA; University of Texas at Dallas, USA; University of Sannio, Italy) A major problem with user-written bug reports, indicated by developers and documented by researchers, is the (lack of high) quality of the reported steps to reproduce the bugs. Low-quality steps to reproduce lead to excessive manual effort spent on bug triage and resolution. This paper proposes Euler, an approach that automatically identifies and assesses the quality of the steps to reproduce in a bug report, providing feedback to the reporters, which they can use to improve the bug report. The feedback provided by Euler was assessed by external evaluators and the results indicate that Euler correctly identified 98% of the existing steps to reproduce and 58% of the missing ones, while 73% of its quality annotations are correct. @InProceedings{ESEC/FSE19p86, author = {Oscar Chaparro and Carlos Bernal-Cárdenas and Jing Lu and Kevin Moran and Andrian Marcus and Massimiliano Di Penta and Denys Poshyvanyk and Vincent Ng}, title = {Assessing the Quality of the Steps to Reproduce in Bug Reports}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {86--96}, doi = {10.1145/3338906.3338947}, year = {2019}, } Publisher's Version Info |
|
Dobrigkeit, Franziska |
ESEC/FSE '19: "Design Thinking in Practice: ..."
Design Thinking in Practice: Understanding Manifestations of Design Thinking in Software Engineering
Franziska Dobrigkeit and Danielly de Paula (HPI, Germany; National University of Ireland at Galway, Ireland) This industry case study explores where and how Design Thinking supports software development teams in their endeavour to create innovative software solutions. Design Thinking has found its way into software companies ranging from startups to SMEs and multinationals. It is mostly seen as a human-centered innovation approach or a way to elicit requirements in a more agile fashion. However, research in Design Thinking suggests that being exposed to DT changes the mindset of employees. Thus, this article aims to explore the wider use of DT within software companies through a case study in a multinational organization. Our results indicate that, once trained in DT, employees find various ways to implement it, not only as a pre-phase to software development but throughout their projects, even applying it to aspects of their surroundings such as the development process, team spaces, and teamwork. Specifically, we present a model of how DT manifests itself in a software development company. @InProceedings{ESEC/FSE19p1059, author = {Franziska Dobrigkeit and Danielly de Paula}, title = {Design Thinking in Practice: Understanding Manifestations of Design Thinking in Software Engineering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1059--1069}, doi = {10.1145/3338906.3340451}, year = {2019}, } Publisher's Version |
|
Donaldson, Alastair F. |
ESEC/FSE '19: "Just Fuzz It: Solving Floating-Point ..."
Just Fuzz It: Solving Floating-Point Constraints using Coverage-Guided Fuzzing
Daniel Liew, Cristian Cadar, Alastair F. Donaldson, and J. Ryan Stinnett (Imperial College London, UK; Mozilla, USA) We investigate the use of coverage-guided fuzzing as a means of proving satisfiability of SMT formulas over finite variable domains, with specific application to floating-point constraints. We show how an SMT formula can be encoded as a program containing a location that is reachable if and only if the program’s input corresponds to a satisfying assignment to the formula. A coverage-guided fuzzer can then be used to search for an input that reaches the location, yielding a satisfying assignment. We have implemented this idea in a tool, Just Fuzz-it Solver (JFS), and we present a large experimental evaluation showing that JFS is both competitive with and complementary to state-of-the-art SMT solvers with respect to solving floating-point constraints, and that the coverage-guided approach of JFS provides significant benefit over naive fuzzing in the floating-point domain. Applied in a portfolio manner, the JFS approach thus has the potential to complement traditional SMT solvers for program analysis tasks that involve reasoning about floating-point constraints. @InProceedings{ESEC/FSE19p521, author = {Daniel Liew and Cristian Cadar and Alastair F. Donaldson and J. Ryan Stinnett}, title = {Just Fuzz It: Solving Floating-Point Constraints using Coverage-Guided Fuzzing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {521--532}, doi = {10.1145/3338906.3338921}, year = {2019}, } Publisher's Version |
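The core encoding idea of the abstract above can be sketched in a few lines. This is a minimal illustration, not JFS's actual implementation: the constraint, the function names, and the random search loop (standing in for a real coverage-guided fuzzer) are all our own assumptions. The hypothetical SMT formula (x * x > 4.0) AND (x < 0.0) over a double becomes a program with a location that is reached if and only if the input bytes decode to a satisfying assignment.

```python
import random
import struct

# The encoded program: its "target" location is reachable iff the
# 8 input bytes decode to a double satisfying the formula.
def encoded_program(data):
    if len(data) < 8:
        return False
    (x,) = struct.unpack("<d", data[:8])
    if x != x:  # reject NaN inputs
        return False
    if x * x > 4.0:      # first conjunct
        if x < 0.0:      # second conjunct
            return True  # target location: satisfying assignment found
    return False

# A coverage-guided fuzzer would mutate inputs toward unexplored
# branches; plain random search stands in for it in this sketch.
def search(trials=100_000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        data = struct.pack("<d", rng.uniform(-10.0, 10.0))
        if encoded_program(data):
            return struct.unpack("<d", data)[0]  # the model for x
    return None
```

Any x returned by `search` is, by construction, a model of the formula, which mirrors the paper's claim that reaching the target location yields a satisfying assignment.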
|
Dou, Liang |
ESEC/FSE '19: "FinExpert: Domain-Specific ..."
FinExpert: Domain-Specific Test Generation for FinTech Systems
Tiancheng Jin, Qingshun Wang, Lihua Xu, Chunmei Pan, Liang Dou, Haifeng Qian, Liang He, and Tao Xie (East China Normal University, China; New York University Shanghai, China; CFETS Information Technology, China; University of Illinois at Urbana-Champaign, USA) To assure high quality of software systems, the comprehensiveness of the created test suite and efficiency of the adopted testing process are highly crucial, especially in the FinTech industry, due to a FinTech system’s complicated system logic, mission-critical nature, and large test suite. However, the state of the testing practice in the FinTech industry still heavily relies on manual efforts. Our recent research efforts contributed our previous approach as the first attempt to automate the testing process in China Foreign Exchange Trade System (CFETS) Information Technology Co. Ltd., a subsidiary of China’s Central Bank that provides China’s foreign exchange transactions, and revealed that automating test generation for such a complex trading platform could help alleviate some of these manual efforts. In this paper, we further investigate the dilemmas faced in testing the CFETS trading platform, identify the importance of domain knowledge in its testing process, and propose a new approach of domain-specific test generation to further improve the effectiveness and efficiency of our previous approach in industrial settings. We also present findings of our empirical studies of conducting domain-specific testing on subsystems of the CFETS Trading Platform. @InProceedings{ESEC/FSE19p853, author = {Tiancheng Jin and Qingshun Wang and Lihua Xu and Chunmei Pan and Liang Dou and Haifeng Qian and Liang He and Tao Xie}, title = {FinExpert: Domain-Specific Test Generation for FinTech Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {853--862}, doi = {10.1145/3338906.3340441}, year = {2019}, } Publisher's Version |
|
Durieux, Thomas |
ESEC/FSE '19: "Empirical Review of Java Program ..."
Empirical Review of Java Program Repair Tools: A Large-Scale Experiment on 2,141 Bugs and 23,551 Repair Attempts
Thomas Durieux, Fernanda Madeiral, Matias Martinez, and Rui Abreu (University of Lisbon, Portugal; INESC-ID, Portugal; Federal University of Uberlândia, Brazil; Polytechnic University of Hauts-de-France, France) In the past decade, research on test-suite-based automatic program repair has grown significantly. Each year, new approaches and implementations are featured in major software engineering venues. However, most of those approaches are evaluated on a single benchmark of bugs, which are also rarely reproduced by other researchers. In this paper, we present a large-scale experiment using 11 Java test-suite-based repair tools and 2,141 bugs from 5 benchmarks. Our goal is to have a better understanding of the current state of automatic program repair tools on a large diversity of benchmarks. Our investigation is guided by the hypothesis that the repairability of repair tools might not generalize across different benchmarks. We found that the 11 tools 1) are able to generate patches for 21% of the bugs from the 5 benchmarks, and 2) have better performance on Defects4J compared to other benchmarks, by generating patches for 47% of the bugs from Defects4J compared to 10-30% of bugs from the other benchmarks. Our experiment comprises 23,551 repair attempts, which we used to find causes of non-patch generation. We report these causes in this paper to help repair tool designers improve their approaches and tools. @InProceedings{ESEC/FSE19p302, author = {Thomas Durieux and Fernanda Madeiral and Matias Martinez and Rui Abreu}, title = {Empirical Review of Java Program Repair Tools: A Large-Scale Experiment on 2,141 Bugs and 23,551 Repair Attempts}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {302--313}, doi = {10.1145/3338906.3338911}, year = {2019}, } Publisher's Version Info Artifacts Reusable |
|
Dutta, Saikat |
ESEC/FSE '19: "Storm: Program Reduction for ..."
Storm: Program Reduction for Testing and Debugging Probabilistic Programming Systems
Saikat Dutta, Wenxian Zhang, Zixin Huang, and Sasa Misailovic (University of Illinois at Urbana-Champaign, USA) Probabilistic programming languages offer an intuitive way to model uncertainty by representing complex probability models as simple probabilistic programs. Probabilistic programming systems (PP systems) hide the complexity of inference algorithms away from the program developer. Unfortunately, if a failure occurs during the run of a PP system, a developer typically has very little support in finding the part of the probabilistic program that causes the failure in the system. This paper presents Storm, a novel general framework for reducing probabilistic programs. Given a probabilistic program (with associated data and inference arguments) that causes a failure in a PP system, Storm finds a smaller version of the program, data, and arguments that cause the same failure. Storm leverages both generic code and data transformations from compiler testing and domain-specific, probabilistic transformations. The paper presents new transformations that reduce the complexity of statements and expressions, reduce data size, and simplify inference arguments (e.g., the number of iterations of the inference algorithm). We evaluated Storm on 47 programs that caused failures in two popular probabilistic programming systems, Stan and Pyro. Our experimental results show Storm’s effectiveness. For Stan, our minimized programs have 49% less code, 67% less data, and 96% fewer iterations. For Pyro, our minimized programs have 58% less code, 96% less data, and 99% fewer iterations. We also show the benefits of Storm when debugging probabilistic programs. 
@InProceedings{ESEC/FSE19p729, author = {Saikat Dutta and Wenxian Zhang and Zixin Huang and Sasa Misailovic}, title = {Storm: Program Reduction for Testing and Debugging Probabilistic Programming Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {729--739}, doi = {10.1145/3338906.3338972}, year = {2019}, } Publisher's Version |
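The failure-preserving reduction at the heart of the abstract above can be sketched as a simple greedy loop. This is our own minimal illustration in the spirit of Storm, not its actual algorithm: the "program" is just a list of lines, and the oracle deciding whether the failure persists is hypothetical.

```python
# Greedily drop statements from a program (a list of lines) as long
# as the oracle still reports the same failure; the result is a
# smaller program triggering the same behavior.
def reduce_program(lines, still_fails):
    reduced = list(lines)
    changed = True
    while changed:
        changed = False
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            if still_fails(candidate):
                reduced = candidate
                changed = True
                break  # restart the scan on the smaller program
    return reduced

# Hypothetical oracle: the failure occurs whenever both the model
# statement and the large iteration count are present.
def oracle(lines):
    return ("model x ~ normal(0, sigma)" in lines
            and "iterations = 10000" in lines)

program = [
    "data y",
    "model x ~ normal(0, sigma)",
    "transform z = x + 1",
    "iterations = 10000",
    "seed = 42",
]
minimal = reduce_program(program, oracle)
```

Storm additionally applies domain-specific, probabilistic transformations (e.g., simplifying distributions and shrinking data), which this statement-dropping sketch does not model.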
|
Du, Xiaoning |
ESEC/FSE '19: "DeepStellar: Model-Based Quantitative ..."
DeepStellar: Model-Based Quantitative Analysis of Stateful Deep Learning Systems
Xiaoning Du, Xiaofei Xie, Yi Li, Lei Ma, Yang Liu, and Jianjun Zhao (Nanyang Technological University, Singapore; Kyushu University, Japan; Zhejiang Sci-Tech University, China) Deep Learning (DL) has achieved tremendous success in many cutting-edge applications. However, the state-of-the-art DL systems still suffer from quality issues. While some recent progress has been made on the analysis of feed-forward DL systems, little study has been done on the Recurrent Neural Network (RNN)-based stateful DL systems, which are widely used in audio, natural language, and video processing. In this paper, we take a first step towards the quantitative analysis of RNN-based DL systems. We model an RNN as an abstract state transition system to characterize its internal behaviors. Based on the abstract model, we design two trace similarity metrics and five coverage criteria which enable the quantitative analysis of RNNs. We further propose two algorithms powered by the quantitative measures for adversarial sample detection and coverage-guided test generation. We evaluate DeepStellar on four RNN-based systems covering image classification and automated speech recognition. The results demonstrate that the abstract model is useful in capturing the internal behaviors of RNNs, and confirm that (1) the similarity metrics could effectively capture the differences between samples even with very small perturbations (achieving 97% accuracy for detecting adversarial samples) and (2) the coverage criteria are useful in revealing erroneous behaviors (generating three times more adversarial samples than random testing and hundreds of times more than the unrolling approach). 
@InProceedings{ESEC/FSE19p477, author = {Xiaoning Du and Xiaofei Xie and Yi Li and Lei Ma and Yang Liu and Jianjun Zhao}, title = {DeepStellar: Model-Based Quantitative Analysis of Stateful Deep Learning Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {477--487}, doi = {10.1145/3338906.3338954}, year = {2019}, } Publisher's Version |
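The abstract-state modeling idea described above can be sketched concretely. This is our own toy construction of the general technique (quantizing hidden-state vectors into grid cells and counting visited states and transitions), not DeepStellar's actual abstraction or its five coverage criteria; the cell size and trace data are assumptions.

```python
# Quantize a hidden-state vector into a grid cell, giving one
# abstract state per cell.
def abstract_state(hidden, cell_size=0.5):
    return tuple(int(h // cell_size) for h in hidden)

# Collect the abstract states and state-to-state transitions that a
# set of hidden-state traces covers; coverage criteria can then be
# defined over these sets.
def trace_coverage(traces, cell_size=0.5):
    states, transitions = set(), set()
    for trace in traces:  # one trace = hidden states over time steps
        prev = None
        for hidden in trace:
            cur = abstract_state(hidden, cell_size)
            states.add(cur)
            if prev is not None:
                transitions.add((prev, cur))
            prev = cur
    return states, transitions

# Two toy traces over a 2-dimensional hidden state.
t1 = [(0.1, 0.2), (0.6, 0.2), (0.6, 0.7)]
t2 = [(0.1, 0.2), (0.6, 0.2)]
states, transitions = trace_coverage([t1, t2])
```

An input whose trace visits previously unseen states or transitions increases coverage, which is the signal a coverage-guided test generator would optimize.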
|
Eck, Moritz |
ESEC/FSE '19: "Understanding Flaky Tests: ..."
Understanding Flaky Tests: The Developer’s Perspective
Moritz Eck, Fabio Palomba, Marco Castelluccio, and Alberto Bacchelli (University of Zurich, Switzerland; Mozilla, UK) Flaky tests are software tests that exhibit a seemingly random outcome (pass or fail) despite exercising unchanged code. In this work, we examine the perceptions of software developers about the nature, relevance, and challenges of flaky tests. We asked 21 professional developers to classify 200 flaky tests they previously fixed, in terms of the nature and the origin of the flakiness, as well as of the fixing effort. We also examined developers' fixing strategies. Subsequently, we conducted an online survey with 121 developers with a median industrial programming experience of five years. Our research shows that: The flakiness is due to several different causes, four of which have never been reported before, despite being the most costly to fix; flakiness is perceived as significant by the vast majority of developers, regardless of their team's size and project's domain, and it can have effects on resource allocation, scheduling, and the perceived reliability of the test suite; and the challenges developers report to face regard mostly the reproduction of the flaky behavior and the identification of the cause for the flakiness. Public preprint [http://arxiv.org/abs/1907.01466], data and materials [https://doi.org/10.5281/zenodo.3265785]. @InProceedings{ESEC/FSE19p830, author = {Moritz Eck and Fabio Palomba and Marco Castelluccio and Alberto Bacchelli}, title = {Understanding Flaky Tests: The Developer’s Perspective}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {830--840}, doi = {10.1145/3338906.3338945}, year = {2019}, } Publisher's Version |
|
Fahid, Fahmid |
ESEC/FSE '19: "TERMINATOR: Better Automated ..."
TERMINATOR: Better Automated UI Test Case Prioritization
Zhe Yu, Fahmid Fahid, Tim Menzies, Gregg Rothermel, Kyle Patrick, and Snehit Cherian (North Carolina State University, USA; LexisNexis, USA) Automated UI testing is an important component of the continuous integration process of software development. A modern web-based UI is an amalgam of reports from dozens of microservices written by multiple teams. Queries on a page that opens up another will fail if any of that page's microservices fails. As a result, the overall cost for automated UI testing is high since the UI elements cannot be tested in isolation. For example, the entire automated UI testing suite at LexisNexis takes around 30 hours (3-5 hours on the cloud) to execute, which slows down the continuous integration process. To mitigate this problem and give developers faster feedback on their code, test case prioritization techniques are used to reorder the automated UI test cases so that more failures can be detected earlier. Given that much of the automated UI testing is "black box" in nature, very little information (only the test case descriptions and testing results) can be utilized to prioritize these automated UI test cases. Hence, this paper evaluates 17 "black box" test case prioritization approaches that do not rely on source code information. Among these, we propose a novel test case prioritization (TCP) approach that dynamically re-prioritizes the test cases when new failures are detected, by applying and adapting a state-of-the-art framework from the total recall problem. Experimental results on LexisNexis automated UI testing data show that our new approach (which we call TERMINATOR) outperformed prior state-of-the-art approaches in terms of failure detection rates with negligible CPU overhead. 
@InProceedings{ESEC/FSE19p883, author = {Zhe Yu and Fahmid Fahid and Tim Menzies and Gregg Rothermel and Kyle Patrick and Snehit Cherian}, title = {TERMINATOR: Better Automated UI Test Case Prioritization}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {883--894}, doi = {10.1145/3338906.3340448}, year = {2019}, } Publisher's Version |
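The dynamic re-prioritization idea described above can be sketched as follows. This is a drastic simplification of ours, not TERMINATOR's actual algorithm (which adapts an active-learning framework from the total recall problem): here, whenever a failure is observed, the remaining tests are simply re-ranked by word overlap between their descriptions and the failing descriptions. All test names and descriptions are hypothetical.

```python
# Jaccard word overlap between two test-case descriptions.
def similarity(desc_a, desc_b):
    a, b = set(desc_a.split()), set(desc_b.split())
    return len(a & b) / max(len(a | b), 1)

# Run tests one at a time; after each observed failure, pick next the
# remaining test most similar to any failing description so far.
def prioritize(tests, fails):
    # tests: {name: description}; fails: names that will fail.
    remaining = dict(tests)
    order, failed_descs = [], []
    while remaining:
        if failed_descs:
            name = max(remaining, key=lambda n: max(
                similarity(remaining[n], d) for d in failed_descs))
        else:
            name = next(iter(remaining))  # no signal yet: take first
        order.append(name)
        if name in fails:
            failed_descs.append(remaining[name])
        del remaining[name]
    return order

tests = {
    "t1": "login page loads report",
    "t2": "search service query report",
    "t3": "login button click report",
    "t4": "billing export pdf",
}
order = prioritize(tests, fails={"t1", "t3"})
```

In this toy run the second login-related failure surfaces immediately after the first, illustrating how feedback from observed failures pulls related tests forward in the schedule.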
|
Farchi, Eitan |
ESEC/FSE '19: "Bridging the Gap between ML ..."
Bridging the Gap between ML Solutions and Their Business Requirements using Feature Interactions
Guy Barash, Eitan Farchi, Ilan Jayaraman, Orna Raz, Rachel Tzoref-Brill, and Marcel Zalmanovici (Western Digital, Israel; IBM Research, Israel; IBM, India) Machine Learning (ML) based solutions are becoming increasingly popular and pervasive. When testing such solutions, there is a tendency to focus on improving the ML metrics such as the F1-score and accuracy at the expense of ensuring business value and correctness by covering business requirements. In this work, we adapt test planning methods of classical software to ML solutions. We use combinatorial modeling methodology to define the space of business requirements and map it to the ML solution data, and use the notion of data slices to identify the weaker areas of the ML solution and strengthen them. We apply our approach to three real-world case studies and demonstrate its value. @InProceedings{ESEC/FSE19p1048, author = {Guy Barash and Eitan Farchi and Ilan Jayaraman and Orna Raz and Rachel Tzoref-Brill and Marcel Zalmanovici}, title = {Bridging the Gap between ML Solutions and Their Business Requirements using Feature Interactions}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1048--1058}, doi = {10.1145/3338906.3340442}, year = {2019}, } Publisher's Version |
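The data-slice idea in the abstract above can be sketched in a few lines. This is our own illustration of the general notion, not the paper's combinatorial modeling methodology: each test record is mapped to a business-requirement slice (here a hypothetical segment/region pair), and per-slice accuracy exposes weak areas that a single global metric would hide.

```python
from collections import defaultdict

# Compute accuracy per business-requirement slice.
def slice_accuracy(records):
    # records: iterable of (slice_key, prediction_correct: bool)
    totals, hits = defaultdict(int), defaultdict(int)
    for key, correct in records:
        totals[key] += 1
        hits[key] += int(correct)
    return {k: hits[k] / totals[k] for k in totals}

# Hypothetical test records keyed by (customer segment, region).
records = [
    (("retail", "EU"), True), (("retail", "EU"), True),
    (("retail", "US"), True), (("corporate", "EU"), False),
    (("corporate", "EU"), True), (("corporate", "US"), False),
]
acc = slice_accuracy(records)
```

Global accuracy here is 4/6, yet the corporate/US slice fails entirely, which is exactly the kind of weak area slice-wise analysis is meant to surface and strengthen.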
|
Feng, Yu |
ESEC/FSE '19: "Maximal Multi-layer Specification ..."
Maximal Multi-layer Specification Synthesis
Yanju Chen, Ruben Martins, and Yu Feng (University of California at Santa Barbara, USA; Carnegie Mellon University, USA) There has been a significant interest in applying programming-by-example to automate repetitive and tedious tasks. However, due to the incomplete nature of input-output examples, a synthesizer may generate programs that pass the examples but do not match the user intent. In this paper, we propose MARS, a novel synthesis framework that takes as input a multi-layer specification composed by input-output examples, textual description, and partial code snippets that capture the user intent. To accurately capture the user intent from the noisy and ambiguous description, we propose a hybrid model that combines the power of an LSTM-based sequence-to-sequence model with the apriori algorithm for mining association rules through unsupervised learning. We reduce the problem of solving a multi-layer specification synthesis to a Max-SMT problem, where hard constraints encode well-typed concrete programs and soft constraints encode the user intent learned by the hybrid model. We instantiate our hybrid model to the data wrangling domain and compare its performance against Morpheus, a state-of-the-art synthesizer for data wrangling tasks. Our experiments demonstrate that our approach outperforms Morpheus in terms of running time and solved benchmarks. For challenging benchmarks, our approach can suggest candidates with rankings that are an order of magnitude better than those of Morpheus, leading to running times that are 15x faster. @InProceedings{ESEC/FSE19p602, author = {Yanju Chen and Ruben Martins and Yu Feng}, title = {Maximal Multi-layer Specification Synthesis}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {602--612}, doi = {10.1145/3338906.3338951}, year = {2019}, } Publisher's Version |
|
Fernandez, Pablo |
ESEC/FSE '19: "Eagle: A Team Practices Audit ..."
Eagle: A Team Practices Audit Framework for Agile Software Development
Alejandro Guerrero, Rafael Fresno, An Ju, Armando Fox, Pablo Fernandez, Carlos Muller, and Antonio Ruiz-Cortés (University of Seville, Spain; University of California at Berkeley, USA) Agile/XP (Extreme Programming) software teams are expected to follow a number of specific practices in each iteration, such as estimating the effort (”points”) required to complete user stories, properly using branches and pull requests to coordinate merging multiple contributors’ code, having frequent ”standups” to keep all team members in sync, and conducting retrospectives to identify areas of improvement for future iterations. We combine two observations in developing a methodology and tools to help teams monitor their performance on these practices. On the one hand, many Agile practices are increasingly supported by web-based tools whose ”data exhaust” can provide insight into how closely the teams are following the practices. On the other hand, some of the practices can be expressed in terms similar to those developed for expressing service level objectives (SLO) in software as a service; as an example, a typical SLO for an interactive Web site might be ”over any 5-minute window, 99% of requests to the main page must be delivered within 200ms” and, analogously, a potential Team Practice (TP) for an Agile/XP team might be ”over any 2-week iteration, 75% of stories should be ’1-point’ stories”. Following this similarity, we adapt a system originally developed for monitoring and visualizing service level agreement (SLA) compliance to monitor selected TPs for Agile/XP software teams. Specifically, the system consumes and analyzes the data exhaust from widely-used tools such as GitHub and Pivotal Tracker and provides team(s) and coach(es) a ”dashboard” summarizing the teams’ adherence to various practices. As a qualitative initial investigation of its usefulness, we deployed it to twenty student teams in a four-sprint software engineering project course. 
We find improved adherence to team practices and more positive student self-evaluations of their team practices when using the tool, compared to previous experiences using an Agile/XP methodology. The demo video is located at https://youtu.be/A4xwJMEQh9c and a landing page with a live demo at https://isa-group.github.io/2019-05-eagle-demo/. @InProceedings{ESEC/FSE19p1139, author = {Alejandro Guerrero and Rafael Fresno and An Ju and Armando Fox and Pablo Fernandez and Carlos Muller and Antonio Ruiz-Cortés}, title = {Eagle: A Team Practices Audit Framework for Agile Software Development}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1139--1143}, doi = {10.1145/3338906.3341181}, year = {2019}, } Publisher's Version Video Info ESEC/FSE '19: "Governify for APIs: SLA-Driven ..." Governify for APIs: SLA-Driven Ecosystem for API Governance Antonio Gamez-Diaz, Pablo Fernandez, and Antonio Ruiz-Cortés (University of Seville, Spain) As software architecture design is evolving to a microservice paradigm, RESTful APIs are being established as the preferred choice to build applications. In such a scenario, there is a shift towards a growing market of APIs where providers offer different service levels with tailored limitations typically based on the cost. In such a context, while there are well-established standards to describe the functional elements of APIs (such as the OpenAPI Specification), having a standard model for Service Level Agreements (SLAs) for APIs may boost an open ecosystem of tools that would represent an improvement for the industry by automating certain tasks during the development. In this paper, we introduce Governify for APIs, an ecosystem of tools aimed at supporting the user during the SLA-Driven RESTful APIs’ development process. Namely, an SLA Editor, an SLA Engine and an SLA Instrumentation Library. We also present a fully operational SLA-Driven API Gateway built on top of our ecosystem of tools. 
To evaluate our proposal, we used three sources for gathering validation feedback: industry, teaching and research. Website: links.governify.io/link/GovernifyForAPIs Video: links.governify.io/link/GovernifyForAPIsVideo @InProceedings{ESEC/FSE19p1120, author = {Antonio Gamez-Diaz and Pablo Fernandez and Antonio Ruiz-Cortés}, title = {Governify for APIs: SLA-Driven Ecosystem for API Governance}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1120--1123}, doi = {10.1145/3338906.3341176}, year = {2019}, } Publisher's Version Video Info ESEC/FSE '19: "The Role of Limitations and ..." The Role of Limitations and SLAs in the API Industry Antonio Gamez-Diaz, Pablo Fernandez, Antonio Ruiz-Cortés, Pedro J. Molina, Nikhil Kolekar, Prithpal Bhogill, Madhurranjan Mohaan, and Francisco Méndez (University of Seville, Spain; Metadev, Spain; PayPal, USA; Google, USA; AsyncAPI Initiative, Spain) As software architecture design is evolving to a microservice paradigm, RESTful APIs are being established as the preferred choice to build applications. In such a scenario, there is a shift towards a growing market of APIs where providers offer different service levels with tailored limitations typically based on the cost. In this context, while there are well established standards to describe the functional elements of APIs (such as the OpenAPI Specification), having a standard model for Service Level Agreements (SLAs) for APIs may boost an open ecosystem of tools that would represent an improvement for the industry by automating certain tasks during the development such as: SLA-aware scaffolding, SLA-aware testing, or SLA-aware requesters. Unfortunately, although there have been several proposals over the past decades to describe SLAs for software in general and web services in particular, no widely used standard has emerged, due to the complex landscape of concepts surrounding the notion of SLAs and the multiple perspectives that can be addressed. 
In this paper, we aim to analyze the landscape for SLAs for APIs in two directions: 1) clarifying the SLA-driven API development lifecycle: its activities and participants; and 2) developing a catalog of relevant concepts and a subsequent prioritization based on different perspectives from both Industry and Academia. As a main result, we present a scored list of concepts that paves the way to establish a concrete road-map for a standard industry-aligned specification to describe SLAs in APIs. @InProceedings{ESEC/FSE19p1006, author = {Antonio Gamez-Diaz and Pablo Fernandez and Antonio Ruiz-Cortés and Pedro J. Molina and Nikhil Kolekar and Prithpal Bhogill and Madhurranjan Mohaan and Francisco Méndez}, title = {The Role of Limitations and SLAs in the API Industry}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1006--1014}, doi = {10.1145/3338906.3340445}, year = {2019}, } Publisher's Version Info |
|
Foster, Stephen |
ESEC/FSE '19: "Evaluating Model Testing and ..."
Evaluating Model Testing and Model Checking for Finding Requirements Violations in Simulink Models
Shiva Nejati, Khouloud Gaaloul, Claudio Menghi, Lionel C. Briand, Stephen Foster, and David Wolfe (University of Luxembourg, Luxembourg; QRA, Canada) Matlab/Simulink is a development and simulation language that is widely used by the Cyber-Physical System (CPS) industry to model dynamical systems. There are two mainstream approaches to verify CPS Simulink models: model testing that attempts to identify failures in models by executing them for a number of sampled test inputs, and model checking that attempts to exhaustively check the correctness of models against some given formal properties. In this paper, we present an industrial Simulink model benchmark, provide a categorization of different model types in the benchmark, describe the recurring logical patterns in the model requirements, and discuss the results of applying model checking and model testing approaches to identify requirements violations in the benchmarked models. Based on the results, we discuss the strengths and weaknesses of model testing and model checking. Our results further suggest that model checking and model testing are complementary and by combining them, we can significantly enhance the capabilities of each of these approaches individually. We conclude by providing guidelines as to how the two approaches can be best applied together. @InProceedings{ESEC/FSE19p1015, author = {Shiva Nejati and Khouloud Gaaloul and Claudio Menghi and Lionel C. Briand and Stephen Foster and David Wolfe}, title = {Evaluating Model Testing and Model Checking for Finding Requirements Violations in Simulink Models}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1015--1025}, doi = {10.1145/3338906.3340444}, year = {2019}, } Publisher's Version |
|
Fox, Armando |
ESEC/FSE '19: "Eagle: A Team Practices Audit ..."
Eagle: A Team Practices Audit Framework for Agile Software Development
Alejandro Guerrero, Rafael Fresno, An Ju, Armando Fox, Pablo Fernandez, Carlos Muller, and Antonio Ruiz-Cortés (University of Seville, Spain; University of California at Berkeley, USA) Agile/XP (Extreme Programming) software teams are expected to follow a number of specific practices in each iteration, such as estimating the effort (”points”) required to complete user stories, properly using branches and pull requests to coordinate merging multiple contributors’ code, having frequent ”standups” to keep all team members in sync, and conducting retrospectives to identify areas of improvement for future iterations. We combine two observations in developing a methodology and tools to help teams monitor their performance on these practices. On the one hand, many Agile practices are increasingly supported by web-based tools whose ”data exhaust” can provide insight into how closely the teams are following the practices. On the other hand, some of the practices can be expressed in terms similar to those developed for expressing service level objectives (SLO) in software as a service; as an example, a typical SLO for an interactive Web site might be ”over any 5-minute window, 99% of requests to the main page must be delivered within 200ms” and, analogously, a potential Team Practice (TP) for an Agile/XP team might be ”over any 2-week iteration, 75% of stories should be ’1-point’ stories”. Following this similarity, we adapt a system originally developed for monitoring and visualizing service level agreement (SLA) compliance to monitor selected TPs for Agile/XP software teams. Specifically, the system consumes and analyzes the data exhaust from widely-used tools such as GitHub and Pivotal Tracker and provides team(s) and coach(es) a ”dashboard” summarizing the teams’ adherence to various practices. As a qualitative initial investigation of its usefulness, we deployed it to twenty student teams in a four-sprint software engineering project course. 
We find improved adherence to team practices and more positive student self-evaluations of their team practices when using the tool, compared to previous experiences using an Agile/XP methodology. The demo video is located at https://youtu.be/A4xwJMEQh9c and a landing page with a live demo at https://isa-group.github.io/2019-05-eagle-demo/. @InProceedings{ESEC/FSE19p1139, author = {Alejandro Guerrero and Rafael Fresno and An Ju and Armando Fox and Pablo Fernandez and Carlos Muller and Antonio Ruiz-Cortés}, title = {Eagle: A Team Practices Audit Framework for Agile Software Development}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1139--1143}, doi = {10.1145/3338906.3341181}, year = {2019}, } Publisher's Version Video Info |
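The SLO-style Team Practice quoted in the Eagle abstract ("over any 2-week iteration, 75% of stories should be '1-point' stories") can be checked mechanically. This is a minimal sketch of ours, not Eagle's implementation; the story-point data is hypothetical.

```python
# Check an SLO-style Team Practice: at least `threshold` of the
# iteration's completed stories should be 1-point stories.
def practice_met(story_points, threshold=0.75):
    if not story_points:
        return False  # no completed stories: practice not evidenced
    one_pointers = sum(1 for p in story_points if p == 1)
    return one_pointers / len(story_points) >= threshold

# Points per completed story in one hypothetical 2-week iteration.
iteration = [1, 1, 1, 2, 1, 1, 3, 1]
```

A dashboard like Eagle's would evaluate such predicates per iteration over the "data exhaust" of tools like Pivotal Tracker and surface the pass/fail trend to teams and coaches.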
|
Franke, Carsten |
ESEC/FSE '19: "Architectural Decision Forces ..."
Architectural Decision Forces at Work: Experiences in an Industrial Consultancy Setting
Julius Rueckert, Andreas Burger, Heiko Koziolek, Thanikesavan Sivanthi, Alexandru Moga, and Carsten Franke (ABB Research, Germany; ABB Research, Switzerland) The concepts of decision forces and the decision forces viewpoint were proposed to help software architects to make architectural decisions more transparent and the documentation of their rationales more explicit. However, practical experience reports and guidelines on how to use the viewpoint in typical industrial project setups are not available. Existing works mainly focus on basic tool support for the documentation of the viewpoint or show how forces can be used as part of focused architecture review sessions. With this paper, we share experiences and lessons learned from applying the decision forces viewpoint in a distributed industrial project setup, which involves consultants supporting architects during the re-design process of an existing large software system. Alongside our findings, we describe new forces that can serve as template for similar projects, discuss challenges applying them in a distributed consultancy project, and share ideas for potential extensions. @InProceedings{ESEC/FSE19p996, author = {Julius Rueckert and Andreas Burger and Heiko Koziolek and Thanikesavan Sivanthi and Alexandru Moga and Carsten Franke}, title = {Architectural Decision Forces at Work: Experiences in an Industrial Consultancy Setting}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {996--1005}, doi = {10.1145/3338906.3340461}, year = {2019}, } Publisher's Version |
|
Franke, Ulrik |
ESEC/FSE '19: "Risks and Assets: A Qualitative ..."
Risks and Assets: A Qualitative Study of a Software Ecosystem in the Mining Industry
Thomas Olsson and Ulrik Franke (RISE SICS, Sweden) Digitalization and servitization are impacting many domains, including the mining industry. As the equipment becomes connected and technical infrastructure evolves, business models and risk management need to adapt. In this paper, we present a study on how changes in asset and risk distribution are evolving for the actors in a software ecosystem (SECO) and system-of-systems (SoS) around a mining operation. We have performed a survey to understand how Service Level Agreements (SLAs) -- a common mechanism for managing risk -- are used in other domains. Furthermore, we have performed a focus group study with companies. There is an overall trend in the mining industry to move the investment cost (CAPEX) from the mining operator to the vendors. Hence, the mining operator instead leases the equipment (as operational expense, OPEX) or even acquires a service. This change in business model impacts operation, as knowledge is moved from the mining operator to the suppliers. Furthermore, as the infrastructure becomes more complex, the mining operator becomes more and more reliant on the suppliers for operation and maintenance. As this change is still in an early stage, there is no formalized risk management, e.g. through SLAs, in place. Rather, at present, the companies in the ecosystem rely more on trust and the incentives created by the promise of mutual future benefits of innovation activities. We believe there is a need to better understand how to manage risk in a SECO as it is established and evolves. At the same time, because the focus in a SECO is on cooperation and innovation, the companies have no incentive to address this unless there is an incident. Therefore, we believe industry needs help in systematically understanding risk and defining quality aspects such as reliability and performance in the new business environment. 
@InProceedings{ESEC/FSE19p895, author = {Thomas Olsson and Ulrik Franke}, title = {Risks and Assets: A Qualitative Study of a Software Ecosystem in the Mining Industry}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {895--904}, doi = {10.1145/3338906.3340443}, year = {2019}, } Publisher's Version |
|
Fraser, Gordon |
ESEC/FSE '19: "Code Coverage at Google ..."
Code Coverage at Google
Marko Ivanković, Goran Petrović, René Just, and Gordon Fraser (Google, Switzerland; University of Washington, USA; University of Passau, Germany) Code coverage is a measure of the degree to which a test suite exercises a software system. Although coverage is well established in software engineering research, deployment in industry is often inhibited by the perceived usefulness and the computational costs of analyzing coverage at scale. At Google, coverage information is computed for one billion lines of code daily, for seven programming languages. A key aspect of making coverage information actionable is to apply it at the level of changesets and code review. This paper describes Google’s code coverage infrastructure and how the computed code coverage information is visualized and used. It also describes the challenges and solutions for adopting code coverage at scale. To study how code coverage is adopted and perceived by developers, this paper analyzes adoption rates, error rates, and average code coverage ratios over a five-year period, and it reports on 512 responses, received from surveying 3000 developers. Finally, this paper provides concrete suggestions for how to implement and use code coverage in an industrial setting. @InProceedings{ESEC/FSE19p955, author = {Marko Ivanković and Goran Petrović and René Just and Gordon Fraser}, title = {Code Coverage at Google}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {955--963}, doi = {10.1145/3338906.3340459}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Generating Effective Test ..." Generating Effective Test Cases for Self-Driving Cars from Police Reports Alessio Gambi, Tri Huynh, and Gordon Fraser (University of Passau, Germany; Saarland University, Germany; CISPA, Germany) Autonomous driving carries the promise to drastically reduce the number of car accidents; however, recently reported fatal crashes involving self-driving cars show that such an important goal is not yet achieved. 
This calls for better testing of the software controlling self-driving cars, which is difficult because it requires producing challenging driving scenarios. To better test self-driving car software, we propose to specifically test car crash scenarios, which are critical par excellence. Since real car crashes are difficult to test in field operation, we recreate them as physically accurate simulations in an environment that can be used for testing self-driving car software. To cope with the scarcity of sensory data collected during real car crashes, which does not enable a full reproduction, we extract the information to recreate real car crashes from the police reports which document them. Our extensive evaluation, consisting of a user study involving 34 participants and a quantitative analysis of the quality of the generated tests, shows that we can generate accurate simulations of car crashes in a matter of minutes. Compared to tests which implement non-critical driving scenarios, our tests effectively stressed the test subject in different ways and exposed several shortcomings in its implementation. @InProceedings{ESEC/FSE19p257, author = {Alessio Gambi and Tri Huynh and Gordon Fraser}, title = {Generating Effective Test Cases for Self-Driving Cars from Police Reports}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {257--267}, doi = {10.1145/3338906.3338942}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Testing Scratch Programs Automatically ..." Testing Scratch Programs Automatically Andreas Stahlbauer, Marvin Kreis, and Gordon Fraser (University of Passau, Germany) Block-based programming environments like Scratch foster engagement with computer programming and are used by millions of young learners. Scratch allows learners to quickly create entertaining programs and games, while eliminating syntactical program errors that could interfere with progress. 
However, functional programming errors may still lead to incorrect programs, and learners and their teachers need to identify and understand these errors. This is currently an entirely manual process. In this paper, we introduce a formal testing framework that describes the problem of Scratch testing in detail. We instantiate this formal framework with the Whisker tool, which provides automated and property-based testing functionality for Scratch programs. Empirical evaluation on real student and teacher programs demonstrates that Whisker can successfully test Scratch programs, and automatically achieves an average of 95.25% code coverage. Although well-known testing problems such as test flakiness also exist in the scenario of Scratch testing, we show that automated and property-based testing can accurately reproduce and replace the manually and laboriously produced grading efforts of a teacher, and open up new possibilities to support learners of programming in their struggles. @InProceedings{ESEC/FSE19p165, author = {Andreas Stahlbauer and Marvin Kreis and Gordon Fraser}, title = {Testing Scratch Programs Automatically}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {165--175}, doi = {10.1145/3338906.3338910}, year = {2019}, } Publisher's Version |
|
Fresno, Rafael |
ESEC/FSE '19: "Eagle: A Team Practices Audit ..."
Eagle: A Team Practices Audit Framework for Agile Software Development
Alejandro Guerrero, Rafael Fresno, An Ju, Armando Fox, Pablo Fernandez, Carlos Muller, and Antonio Ruiz-Cortés (University of Seville, Spain; University of California at Berkeley, USA) Agile/XP (Extreme Programming) software teams are expected to follow a number of specific practices in each iteration, such as estimating the effort ("points") required to complete user stories, properly using branches and pull requests to coordinate merging multiple contributors’ code, having frequent "standups" to keep all team members in sync, and conducting retrospectives to identify areas of improvement for future iterations. We combine two observations in developing a methodology and tools to help teams monitor their performance on these practices. On the one hand, many Agile practices are increasingly supported by web-based tools whose "data exhaust" can provide insight into how closely the teams are following the practices. On the other hand, some of the practices can be expressed in terms similar to those developed for expressing service level objectives (SLO) in software as a service; as an example, a typical SLO for an interactive Web site might be "over any 5-minute window, 99% of requests to the main page must be delivered within 200ms" and, analogously, a potential Team Practice (TP) for an Agile/XP team might be "over any 2-week iteration, 75% of stories should be '1-point' stories". Following this similarity, we adapt a system originally developed for monitoring and visualizing service level agreement (SLA) compliance to monitor selected TPs for Agile/XP software teams. Specifically, the system consumes and analyzes the data exhaust from widely-used tools such as GitHub and Pivotal Tracker and provides team(s) and coach(es) a "dashboard" summarizing the teams' adherence to various practices. As a qualitative initial investigation of its usefulness, we deployed it to twenty student teams in a four-sprint software engineering project course. 
We find improved adherence to team practices and more positive student self-evaluations of those practices when using the tool, compared to previous experiences using an Agile/XP methodology. The demo video is located at https://youtu.be/A4xwJMEQh9c and a landing page with a live demo at https://isa-group.github.io/2019-05-eagle-demo/. @InProceedings{ESEC/FSE19p1139, author = {Alejandro Guerrero and Rafael Fresno and An Ju and Armando Fox and Pablo Fernandez and Carlos Muller and Antonio Ruiz-Cortés}, title = {Eagle: A Team Practices Audit Framework for Agile Software Development}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1139--1143}, doi = {10.1145/3338906.3341181}, year = {2019}, } Publisher's Version Video Info |
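The TP-as-SLO analogy described in the Eagle abstract can be illustrated with a minimal sketch. This is not part of the Eagle tool; the function name and iteration data below are hypothetical, and it only checks the single Team Practice quoted in the abstract ("over any 2-week iteration, 75% of stories should be '1-point' stories"):

```python
# Illustrative check of one Team Practice expressed as an SLO-style rule.
def tp_one_point_ratio_met(story_points, threshold=0.75):
    """Return True if the share of 1-point stories meets the threshold."""
    if not story_points:
        return False
    ones = sum(1 for p in story_points if p == 1)
    return ones / len(story_points) >= threshold

# Hypothetical iteration data, e.g. pulled from a tracker's "data exhaust".
iteration = [1, 1, 1, 2, 1, 1, 3, 1]  # 6 of 8 stories are 1-point
print(tp_one_point_ratio_met(iteration))  # 6/8 = 0.75 -> True
```

A dashboard like the one the abstract describes would evaluate many such rules per iteration and per team, one boolean or ratio per practice.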
|
Fucci, Davide |
ESEC/FSE '19: "On Using Machine Learning ..."
On Using Machine Learning to Identify Knowledge in API Reference Documentation
Davide Fucci, Alireza Mollaalizadehbahnemiri, and Walid Maalej (University of Hamburg, Germany) Using API reference documentation like JavaDoc is an integral part of software development. Previous research introduced a grounded taxonomy that organizes API documentation knowledge in 12 types, including knowledge about the Functionality, Structure, and Quality of an API. We study how well modern text classification approaches can automatically identify documentation containing specific knowledge types. We compared conventional machine learning (k-NN and SVM) with deep learning approaches trained on manually-annotated Java and .NET API documentation (n = 5,574). When classifying the knowledge types individually (i.e., multiple binary classifiers) the best AUPRC was up to 87%. @InProceedings{ESEC/FSE19p109, author = {Davide Fucci and Alireza Mollaalizadehbahnemiri and Walid Maalej}, title = {On Using Machine Learning to Identify Knowledge in API Reference Documentation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {109--119}, doi = {10.1145/3338906.3338943}, year = {2019}, } Publisher's Version |
|
Fu, Xiaoqin |
ESEC/FSE '19: "A Dynamic Taint Analyzer for ..."
A Dynamic Taint Analyzer for Distributed Systems
Xiaoqin Fu and Haipeng Cai (Washington State University, USA) As in other software domains, information flow security is a fundamental aspect of code security in distributed systems. However, most existing solutions to information flow security are limited to centralized software. For distributed systems, such solutions face multiple challenges, including technique applicability, tool portability, and analysis scalability. To overcome these challenges, we present DistTaint, a dynamic information flow (taint) analyzer for distributed systems. By partial-ordering method-execution events, DistTaint infers implicit dependencies in distributed programs, so as to resolve the applicability challenge. It resolves the portability challenge by working fully at application level, without customizing the runtime platform. To achieve scalability, it reduces analysis costs using a multi-phase analysis, where the pre-analysis phase generates method-level results to narrow down the scope of the following statement-level analysis. We evaluated DistTaint against eight real-world distributed systems. Empirical results showed DistTaint’s applicability to, portability with, and scalability for industry-scale distributed systems, along with its capability of discovering known and unknown vulnerabilities. A demo video for DistTaint can be downloaded from https://www.dropbox.com/l/scl/AAAkrm4p63Ffx0rZqblY3zlLFuaohbRxs0 or viewed here https://youtu.be/fy4yMIaKzPE online. The tool package is here: https://www.dropbox.com/sh/kfr9ixucyny1jp2/AAC00aI-I8Od4ywZCqwZ1uaa?dl=0 @InProceedings{ESEC/FSE19p1115, author = {Xiaoqin Fu and Haipeng Cai}, title = {A Dynamic Taint Analyzer for Distributed Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1115--1119}, doi = {10.1145/3338906.3341179}, year = {2019}, } Publisher's Version Video Info ESEC/FSE '19: "On the Scalable Dynamic Taint ..." 
On the Scalable Dynamic Taint Analysis for Distributed Systems Xiaoqin Fu (Washington State University, USA) To protect privacy and detect sensitive data leaks, we must solve multiple challenges (e.g., applicability, portability, and scalability) in developing an appropriate taint analysis for distributed systems. We hence present DistTaint, a dynamic taint analysis for distributed systems that addresses these challenges. It infers implicit dependencies by partial-ordering method-execution events to resolve the applicability challenge. DistTaint works fully at the application level, without any customization of platforms, to overcome the portability challenge. It exploits a multi-phase analysis to achieve scalability: a pre-analysis narrows down the scope of the subsequent fine-grained analysis, significantly reducing the overall cost. Empirical results showed DistTaint’s practical applicability, portability, and scalability to industry-scale distributed programs, and its capability of discovering security vulnerabilities in real-world distributed systems. The tool package can be downloaded here: https://www.dropbox.com/sh/kfr9ixucyny1jp2/AAC00aI-I8O-d4ywZCqwZ1uaa?dl=0 @InProceedings{ESEC/FSE19p1247, author = {Xiaoqin Fu}, title = {On the Scalable Dynamic Taint Analysis for Distributed Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1247--1249}, doi = {10.1145/3338906.3342506}, year = {2019}, } Publisher's Version |
|
Fu, Ying |
ESEC/FSE '19: "EVMFuzzer: Detect EVM Vulnerabilities ..."
EVMFuzzer: Detect EVM Vulnerabilities via Fuzz Testing
Ying Fu, Meng Ren, Fuchen Ma, Heyuan Shi, Xin Yang, Yu Jiang, Huizhong Li, and Xiang Shi (Tsinghua University, China; WeBank, China) Ethereum Virtual Machine (EVM) is the run-time environment for smart contracts, and its vulnerabilities may lead to serious problems for the Ethereum ecosystem. While many techniques are continuously being developed for the validation of smart contracts, the testing of EVM remains challenging because of the special test input format and the absence of oracles. In this paper, we propose EVMFuzzer, the first tool that uses differential fuzzing to detect vulnerabilities of EVM. The core idea is to continuously generate seed contracts and feed them to the target EVM and the benchmark EVMs, so as to find as many inconsistencies among execution results as possible, and eventually discover vulnerabilities with output cross-referencing. Given a target EVM and its APIs, EVMFuzzer generates seed contracts via a set of predefined mutators, and then employs a dynamic priority scheduling algorithm to guide seed contract selection and maximize the inconsistency. Finally, EVMFuzzer leverages benchmark EVMs as cross-referencing oracles to avoid manual checking. With EVMFuzzer, we have found several previously unknown security bugs in four widely used EVMs, 5 of which have been assigned Common Vulnerabilities and Exposures (CVE) IDs in the U.S. National Vulnerability Database. The video is presented at https://youtu.be/9Lejgf2GSOk. @InProceedings{ESEC/FSE19p1110, author = {Ying Fu and Meng Ren and Fuchen Ma and Heyuan Shi and Xin Yang and Yu Jiang and Huizhong Li and Xiang Shi}, title = {EVMFuzzer: Detect EVM Vulnerabilities via Fuzz Testing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1110--1114}, doi = {10.1145/3338906.3341175}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Industry Practice of Coverage-Guided ..." 
Industry Practice of Coverage-Guided Enterprise Linux Kernel Fuzzing Heyuan Shi, Runzhe Wang, Ying Fu, Mingzhe Wang, Xiaohai Shi, Xun Jiao, Houbing Song, Yu Jiang, and Jiaguang Sun (Tsinghua University, China; Alibaba Group, China; Villanova University, USA; Embry-Riddle Aeronautical University, USA) Coverage-guided kernel fuzzing is a widely-used technique that has helped kernel developers and testers discover numerous vulnerabilities. However, due to the high complexity of application and hardware environment, there is little study on deploying fuzzing to the enterprise-level Linux kernel. In this paper, collaborating with the enterprise developers, we present the industry practice to deploy kernel fuzzing on four different enterprise Linux distributions that are responsible for internal business and external services of the company. We have addressed the following outstanding challenges when deploying a popular kernel fuzzer, syzkaller, to these enterprise Linux distributions: coverage support absence, kernel configuration inconsistency, bugs in shallow paths, and continuous fuzzing complexity. This led to the detection of 41 previously unknown reproducible bugs in these enterprise Linux kernels and 6 bugs with CVE IDs in the U.S. National Vulnerability Database, including flaws that cause general protection fault, deadlock, and use-after-free. @InProceedings{ESEC/FSE19p986, author = {Heyuan Shi and Runzhe Wang and Ying Fu and Mingzhe Wang and Xiaohai Shi and Xun Jiao and Houbing Song and Yu Jiang and Jiaguang Sun}, title = {Industry Practice of Coverage-Guided Enterprise Linux Kernel Fuzzing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {986--995}, doi = {10.1145/3338906.3340460}, year = {2019}, } Publisher's Version |
|
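The output cross-referencing idea behind EVMFuzzer, described in the entry above, amounts to differential testing: run the same input through several implementations and flag any disagreement. A minimal sketch, assuming hypothetical stand-in executors (this is not EVMFuzzer's API):

```python
# Differential testing sketch: run one seed input through several
# implementations and flag any pair whose outputs disagree.
def cross_reference(seed, executors):
    """Return pairs of executor names whose outputs differ on `seed`."""
    results = {name: run(seed) for name, run in executors.items()}
    names = list(results)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if results[a] != results[b]]

# Hypothetical stand-ins for a target and two benchmark implementations.
executors = {
    "target": lambda s: s * 2,
    "bench1": lambda s: s + s,       # agrees with target
    "bench2": lambda s: s * 2 + 1,   # deliberately buggy: disagrees
}
print(cross_reference(21, executors))  # [('target', 'bench2'), ('bench1', 'bench2')]
```

In the fuzzing setting the seed is a generated contract and the executors are the target and benchmark EVMs; any reported pair is an inconsistency worth investigating, which is how benchmark implementations serve as oracles without manual checking.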
Gaaloul, Khouloud |
ESEC/FSE '19: "Evaluating Model Testing and ..."
Evaluating Model Testing and Model Checking for Finding Requirements Violations in Simulink Models
Shiva Nejati, Khouloud Gaaloul, Claudio Menghi, Lionel C. Briand, Stephen Foster, and David Wolfe (University of Luxembourg, Luxembourg; QRA, Canada) Matlab/Simulink is a development and simulation language that is widely used by the Cyber-Physical System (CPS) industry to model dynamical systems. There are two mainstream approaches to verify CPS Simulink models: model testing that attempts to identify failures in models by executing them for a number of sampled test inputs, and model checking that attempts to exhaustively check the correctness of models against some given formal properties. In this paper, we present an industrial Simulink model benchmark, provide a categorization of different model types in the benchmark, describe the recurring logical patterns in the model requirements, and discuss the results of applying model checking and model testing approaches to identify requirements violations in the benchmarked models. Based on the results, we discuss the strengths and weaknesses of model testing and model checking. Our results further suggest that model checking and model testing are complementary and by combining them, we can significantly enhance the capabilities of each of these approaches individually. We conclude by providing guidelines as to how the two approaches can be best applied together. @InProceedings{ESEC/FSE19p1015, author = {Shiva Nejati and Khouloud Gaaloul and Claudio Menghi and Lionel C. Briand and Stephen Foster and David Wolfe}, title = {Evaluating Model Testing and Model Checking for Finding Requirements Violations in Simulink Models}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1015--1025}, doi = {10.1145/3338906.3340444}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Generating Automated and Online ..." Generating Automated and Online Test Oracles for Simulink Models with Continuous and Uncertain Behaviors Claudio Menghi, Shiva Nejati, Khouloud Gaaloul, and Lionel C. 
Briand (University of Luxembourg, Luxembourg) Test automation requires automated oracles to assess test outputs. For cyber physical systems (CPS), oracles, in addition to be automated, should ensure some key objectives: (i) they should check test outputs in an online manner to stop expensive test executions as soon as a failure is detected; (ii) they should handle time- and magnitude-continuous CPS behaviors; (iii) they should provide a quantitative degree of satisfaction or failure measure instead of binary pass/fail outputs; and (iv) they should be able to handle uncertainties due to CPS interactions with the environment. We propose an automated approach to translate CPS requirements specified in a logic-based language into test oracles specified in Simulink - a widely-used development and simulation language for CPS. Our approach achieves the objectives noted above through the identification of a fragment of Signal First Order logic (SFOL) to specify requirements, the definition of a quantitative semantics for this fragment and a sound translation of the fragment into Simulink. The results from applying our approach on 11 industrial case studies show that: (i) our requirements language can express all the 98 requirements of our case studies; (ii) the time and effort required by our approach are acceptable, showing potentials for the adoption of our work in practice, and (iii) for large models, our approach can dramatically reduce the test execution time compared to when test outputs are checked in an offline manner. @InProceedings{ESEC/FSE19p27, author = {Claudio Menghi and Shiva Nejati and Khouloud Gaaloul and Lionel C. Briand}, title = {Generating Automated and Online Test Oracles for Simulink Models with Continuous and Uncertain Behaviors}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {27--38}, doi = {10.1145/3338906.3338920}, year = {2019}, } Publisher's Version Artifacts Reusable |
|
Gadenkanahalli, Smruthi |
ESEC/FSE '19: "Achilles’ Heel of Plug-and-Play ..."
Achilles’ Heel of Plug-and-Play Software Architectures: A Grounded Theory Based Approach
Joanna C. S. Santos, Adriana Sejfia, Taylor Corrello, Smruthi Gadenkanahalli, and Mehdi Mirakhorli (Rochester Institute of Technology, USA) Through a set of well-defined interfaces, plug-and-play architectures enable additional functionalities to be added or removed from a system at its runtime. However, plug-ins can also increase the application’s attack surface or introduce untrusted behavior into the system. In this paper, we (1) use a grounded theory-based approach to conduct an empirical study of common vulnerabilities in plug-and-play architectures; (2) conduct a systematic literature survey and evaluate the extent that the results of the empirical study are novel or supported by the literature; (3) evaluate the practicality of the findings by interviewing practitioners with several years of experience in plug-and-play systems. By analyzing Chromium, Thunderbird, Firefox, Pidgin, WordPress, Apache OfBiz, and OpenMRS, we found a total of 303 vulnerabilities rooted in extensibility design decisions and observed that these plugin-related vulnerabilities were caused by 16 different types of vulnerabilities. Out of these 16 vulnerability types we identified 19 mitigation procedures for fixing them. The literature review supported 12 vulnerability types and 8 mitigation techniques discovered in our empirical study, and indicated that 5 mitigation techniques were not covered in our empirical study. Furthermore, it indicated that 4 vulnerability types and 11 mitigation techniques discovered in our empirical study were not covered in the literature. The interviews with practitioners confirmed the relevance of the findings and highlighted ways that the results of this empirical study can have an impact in practice. @InProceedings{ESEC/FSE19p671, author = {Joanna C. S. 
Santos and Adriana Sejfia and Taylor Corrello and Smruthi Gadenkanahalli and Mehdi Mirakhorli}, title = {Achilles’ Heel of Plug-and-Play Software Architectures: A Grounded Theory Based Approach}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {671--682}, doi = {10.1145/3338906.3338969}, year = {2019}, } Publisher's Version Info |
|
Gambi, Alessio |
ESEC/FSE '19: "Generating Effective Test ..."
Generating Effective Test Cases for Self-Driving Cars from Police Reports
Alessio Gambi, Tri Huynh, and Gordon Fraser (University of Passau, Germany; Saarland University, Germany; CISPA, Germany) Autonomous driving carries the promise to drastically reduce the number of car accidents; however, recently reported fatal crashes involving self-driving cars show that such an important goal is not yet achieved. This calls for better testing of the software controlling self-driving cars, which is difficult because it requires producing challenging driving scenarios. To better test self-driving car software, we propose to specifically test car crash scenarios, which are critical par excellence. Since real car crashes are difficult to test in field operation, we recreate them as physically accurate simulations in an environment that can be used for testing self-driving car software. To cope with the scarcity of sensory data collected during real car crashes, which does not enable a full reproduction, we extract the information to recreate real car crashes from the police reports which document them. Our extensive evaluation, consisting of a user study involving 34 participants and a quantitative analysis of the quality of the generated tests, shows that we can generate accurate simulations of car crashes in a matter of minutes. Compared to tests which implement non-critical driving scenarios, our tests effectively stressed the test subject in different ways and exposed several shortcomings in its implementation. @InProceedings{ESEC/FSE19p257, author = {Alessio Gambi and Tri Huynh and Gordon Fraser}, title = {Generating Effective Test Cases for Self-Driving Cars from Police Reports}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {257--267}, doi = {10.1145/3338906.3338942}, year = {2019}, } Publisher's Version |
|
Gamez-Diaz, Antonio |
ESEC/FSE '19: "Governify for APIs: SLA-Driven ..."
Governify for APIs: SLA-Driven Ecosystem for API Governance
Antonio Gamez-Diaz, Pablo Fernandez, and Antonio Ruiz-Cortés (University of Seville, Spain) As software architecture design is evolving to a microservice paradigm, RESTful APIs are being established as the preferred choice to build applications. In such a scenario, there is a shift towards a growing market of APIs where providers offer different service levels with tailored limitations typically based on the cost. In such a context, while there are well-established standards to describe the functional elements of APIs (such as the OpenAPI Specification), having a standard model for Service Level Agreements (SLAs) for APIs may boost an open ecosystem of tools that would represent an improvement for the industry by automating certain tasks during the development. In this paper, we introduce Governify for APIs, an ecosystem of tools aimed to support the user during the SLA-Driven RESTful APIs’ development process. Namely, an SLA Editor, an SLA Engine and an SLA Instrumentation Library. We also present a fully operational SLA-Driven API Gateway built on the top of our ecosystem of tools. To evaluate our proposal, we used three sources for gathering validation feedback: industry, teaching and research. Website: links.governify.io/link/GovernifyForAPIs Video: links.governify.io/link/GovernifyForAPIsVideo @InProceedings{ESEC/FSE19p1120, author = {Antonio Gamez-Diaz and Pablo Fernandez and Antonio Ruiz-Cortés}, title = {Governify for APIs: SLA-Driven Ecosystem for API Governance}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1120--1123}, doi = {10.1145/3338906.3341176}, year = {2019}, } Publisher's Version Video Info ESEC/FSE '19: "The Role of Limitations and ..." The Role of Limitations and SLAs in the API Industry Antonio Gamez-Diaz, Pablo Fernandez, Antonio Ruiz-Cortés, Pedro J. 
Molina, Nikhil Kolekar, Prithpal Bhogill, Madhurranjan Mohaan, and Francisco Méndez (University of Seville, Spain; Metadev, Spain; PayPal, USA; Google, USA; AsyncAPI Initiative, Spain) As software architecture design is evolving to a microservice paradigm, RESTful APIs are being established as the preferred choice to build applications. In such a scenario, there is a shift towards a growing market of APIs where providers offer different service levels with tailored limitations typically based on the cost. In this context, while there are well-established standards to describe the functional elements of APIs (such as the OpenAPI Specification), having a standard model for Service Level Agreements (SLAs) for APIs may boost an open ecosystem of tools that would represent an improvement for the industry by automating certain tasks during the development, such as SLA-aware scaffolding, SLA-aware testing, or SLA-aware requesters. Unfortunately, despite several proposals over the past decades to describe SLAs for software in general and web services in particular, there is still no widely used standard, due to the complex landscape of concepts surrounding the notion of SLAs and the multiple perspectives that can be addressed. In this paper, we aim to analyze the landscape of SLAs for APIs in two different directions: (i) clarifying the SLA-driven API development lifecycle: its activities and participants; (ii) developing a catalog of relevant concepts and a subsequent prioritization based on different perspectives from both Industry and Academia. As a main result, we present a scored list of concepts that paves the way to establish a concrete road-map for a standard industry-aligned specification to describe SLAs in APIs. @InProceedings{ESEC/FSE19p1006, author = {Antonio Gamez-Diaz and Pablo Fernandez and Antonio Ruiz-Cortés and Pedro J. 
Molina and Nikhil Kolekar and Prithpal Bhogill and Madhurranjan Mohaan and Francisco Méndez}, title = {The Role of Limitations and SLAs in the API Industry}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1006--1014}, doi = {10.1145/3338906.3340445}, year = {2019}, } Publisher's Version Info |
|
Garcia, Alessandro |
ESEC/FSE '19: "Java Reflection API: Revealing ..."
Java Reflection API: Revealing the Dark Side of the Mirror
Felipe Pontes, Rohit Gheyi, Sabrina Souto, Alessandro Garcia, and Márcio Ribeiro (Federal University of Campina Grande, Brazil; State University of Paraíba, Brazil; PUC-Rio, Brazil; Federal University of Alagoas, Brazil) Developers of widely used Java Virtual Machines (JVMs) implement and test the Java Reflection API based on a Javadoc, which is specified in natural language. However, there is limited knowledge on whether Java Reflection API developers are able to systematically reveal (i) underdetermined specifications and (ii) non-conformances between their implementation and the Javadoc. Moreover, current automatic test suite generators cannot be used to detect them. To better understand the problem, we analyze test suites of two widely used JVMs, and we conduct a survey with 130 developers who use the Java Reflection API to see whether the Javadoc impacts their understanding. We also propose a technique to detect underdetermined specifications and non-conformances between the Javadoc and the implementations of the Java Reflection API. It automatically creates test cases, and executes them using different JVMs. Then, we manually execute some steps to identify underdetermined specifications and to confirm whether a non-conformance candidate is indeed a bug. We evaluate our technique on 439 input programs. Our technique identifies underdetermined specification and non-conformance candidates in 32 Java Reflection API public methods of 7 classes. We report underdetermined specification candidates in 12 Java Reflection API methods. Java Reflection API specifiers accept 3 underdetermined specification candidates (25%). We also report 24 non-conformance candidates to Eclipse OpenJ9 JVM, and 7 to Oracle JVM. Eclipse OpenJ9 JVM developers accept and fix 21 candidates (87.5%), and Oracle JVM developers accept 5 and fix 4 non-conformance candidates. 
@InProceedings{ESEC/FSE19p636, author = {Felipe Pontes and Rohit Gheyi and Sabrina Souto and Alessandro Garcia and Márcio Ribeiro}, title = {Java Reflection API: Revealing the Dark Side of the Mirror}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {636--646}, doi = {10.1145/3338906.3338946}, year = {2019}, } Publisher's Version |
|
García-Mireles, Gabriel Alberto |
ESEC/FSE '19: "Evolving with Patterns: A ..."
Evolving with Patterns: A 31-Month Startup Experience Report
Miguel Ehécatl Morales-Trujillo and Gabriel Alberto García-Mireles (University of Canterbury, New Zealand; Universidad de Sonora, Mexico) Software startups develop innovative products under extreme conditions of uncertainty. At the same time they represent a fast-growing sector in the economy and scale up research and technological advancement. This paper describes findings after observing a startup during its first 31 months of life. The data was collected through observations, unstructured interviews as well as from technical and managerial documentation of the startup. The findings are based on a deductive analysis and summarized in 24 contextualized patterns that concern communication, interaction with customers, teamwork, and management. Furthermore, 13 lessons learned are presented with the aim of sharing experience with other startups. This industry report contributes to understanding the applicability and usefulness of startup patterns, providing valuable knowledge for the startup software engineering community. @InProceedings{ESEC/FSE19p1037, author = {Miguel Ehécatl Morales-Trujillo and Gabriel Alberto García-Mireles}, title = {Evolving with Patterns: A 31-Month Startup Experience Report}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1037--1047}, doi = {10.1145/3338906.3340447}, year = {2019}, } Publisher's Version |
|
Gauthier, François |
ESEC/FSE '19: "Nodest: Feedback-Driven Static ..."
Nodest: Feedback-Driven Static Analysis of Node.js Applications
Benjamin Barslev Nielsen, Behnaz Hassanshahi, and François Gauthier (Oracle Labs, Australia; Aarhus University, Denmark) Node.js provides the ability to write JavaScript programs for the server side and has become a popular language for developing web applications. Node.js allows direct access to the underlying filesystem, operating system resources, and databases, but does not provide any security mechanism such as sandboxing of untrusted code, and injection vulnerabilities are now commonly reported in Node.js modules. Existing static dataflow analysis techniques do not scale to Node.js applications to find injection vulnerabilities because small Node.js web applications typically depend on many third-party modules. We present a new feedback-driven static analysis that scales well to detect injection vulnerabilities in Node.js applications. The key idea behind our new technique is that not all third-party modules need to be analyzed to detect an injection vulnerability. Results of running our analysis, Nodest, on real-world Node.js applications show that the technique scales to large applications and finds previously known as well as new vulnerabilities. In particular, Nodest finds 63 true positive taint flows in a set of our benchmarks, whereas a state-of-the-art static analysis reports only 3. Moreover, our analysis scales to Express, the most popular Node.js web framework, and reports non-trivial injection vulnerabilities. @InProceedings{ESEC/FSE19p455, author = {Benjamin Barslev Nielsen and Behnaz Hassanshahi and François Gauthier}, title = {Nodest: Feedback-Driven Static Analysis of Node.js Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {455--465}, doi = {10.1145/3338906.3338933}, year = {2019}, } Publisher's Version |
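As a rough illustration of the feedback-driven idea the abstract describes (analyze a module only when the analysis actually needs it, rather than all third-party dependencies up front), here is a minimal sketch; all names are invented for illustration and this toy worklist is not Nodest's actual algorithm, which operates on JavaScript code:

```python
# Hedged sketch of a feedback-driven module-selection loop (hypothetical
# names; the real Nodest analysis works on JavaScript programs, not this toy).
# Start from the application entry module and add a dependency to the
# analysis only when a module's analysis reports that it needs it.

def feedback_driven_analyze(entry_module, imports, analyze_module):
    """imports: module -> set of imported modules.
    analyze_module: module -> (set of taint flows found, set of modules
    whose summaries were missing during analysis)."""
    analyzed = set()
    flows = set()
    worklist = [entry_module]
    while worklist:
        module = worklist.pop()
        if module in analyzed:
            continue
        analyzed.add(module)
        found, missing = analyze_module(module)
        flows |= found
        # Feedback: only pull in modules the analysis reported as needed.
        worklist.extend(m for m in missing if m in imports)
    return flows, analyzed
```

In this toy, a dependency that is imported but never needed by the analysis (e.g. a utility library with no flows into it) is simply never analyzed, which is the source of the scalability gain the abstract claims.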
|
Gazzillo, Paul |
ESEC/FSE '19: "An Empirical Study of Real-World ..."
An Empirical Study of Real-World Variability Bugs Detected by Variability-Oblivious Tools
Austin Mordahl, Jeho Oh, Ugur Koc, Shiyi Wei, and Paul Gazzillo (University of Texas at Dallas, USA; University of Texas at Austin, USA; University of Maryland, USA; University of Central Florida, USA) Many critical software systems developed in C utilize compile-time configurability. The many possible configurations of this software make bug detection through static analysis difficult. While variability-aware static analyses have been developed, there remains a gap between those and state-of-the-art static bug detection tools. In order to collect data on how such tools may perform and to develop real-world benchmarks, we present a way to leverage configuration sampling, off-the-shelf “variability-oblivious” bug detectors, and automatic feature identification techniques to simulate a variability-aware analysis. We instantiate our approach using four popular static analysis tools on three highly configurable, real-world C projects, obtaining 36,061 warnings, 80% of which are variability warnings. We analyze the warnings we collect from these experiments, finding that most results are variability warnings of a variety of kinds such as NULL dereference. We then manually investigate these warnings to produce a benchmark of 77 confirmed true bugs (52 of which are variability bugs) useful for future development of variability-aware analyses. @InProceedings{ESEC/FSE19p50, author = {Austin Mordahl and Jeho Oh and Ugur Koc and Shiyi Wei and Paul Gazzillo}, title = {An Empirical Study of Real-World Variability Bugs Detected by Variability-Oblivious Tools}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {50--61}, doi = {10.1145/3338906.3338967}, year = {2019}, } Publisher's Version Artifacts Reusable |
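The classification step the abstract relies on (a warning is a "variability warning" when it depends on the chosen configuration) can be sketched as follows; the helper names are hypothetical and the real study uses sophisticated configuration sampling rather than this exhaustive comparison:

```python
# Hedged sketch: simulate a variability-aware analysis by running an
# off-the-shelf, variability-oblivious detector on a sample of
# configurations and classifying each warning by where it shows up.

def classify_warnings(configs, run_detector):
    """run_detector: configuration -> set of warnings for that configuration."""
    per_config = [run_detector(c) for c in configs]
    all_warnings = set().union(*per_config) if per_config else set()
    variability, plain = set(), set()
    for w in all_warnings:
        hits = sum(w in ws for ws in per_config)
        # Present in every sampled configuration -> plain warning;
        # present in only some -> depends on compile-time features.
        (plain if hits == len(per_config) else variability).add(w)
    return variability, plain
```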
|
Gheyi, Rohit |
ESEC/FSE '19: "Java Reflection API: Revealing ..."
Java Reflection API: Revealing the Dark Side of the Mirror
Felipe Pontes, Rohit Gheyi, Sabrina Souto, Alessandro Garcia, and Márcio Ribeiro (Federal University of Campina Grande, Brazil; State University of Paraíba, Brazil; PUC-Rio, Brazil; Federal University of Alagoas, Brazil) Developers of widely used Java Virtual Machines (JVMs) implement and test the Java Reflection API based on the Javadoc, which is written in natural language. However, there is limited knowledge on whether Java Reflection API developers are able to systematically reveal i) underdetermined specifications; and ii) non-conformances between their implementation and the Javadoc. Moreover, current automatic test suite generators cannot be used to detect them. To better understand the problem, we analyze the test suites of two widely used JVMs, and we conduct a survey with 130 developers who use the Java Reflection API to see whether the Javadoc impacts their understanding. We also propose a technique to detect underdetermined specifications and non-conformances between the Javadoc and the implementations of the Java Reflection API. It automatically creates test cases and executes them using different JVMs. Then, we manually execute some steps to identify underdetermined specifications and to confirm whether a non-conformance candidate is indeed a bug. We evaluate our technique on 439 input programs. Our technique identifies underdetermined specification and non-conformance candidates in 32 Java Reflection API public methods of 7 classes. We report underdetermined specification candidates in 12 Java Reflection API methods. Java Reflection API specifiers accept 3 underdetermined specification candidates (25%). We also report 24 non-conformance candidates to the Eclipse OpenJ9 JVM, and 7 to the Oracle JVM. Eclipse OpenJ9 JVM developers accept and fix 21 candidates (87.5%), and Oracle JVM developers accept 5 and fix 4 non-conformance candidates.
@InProceedings{ESEC/FSE19p636, author = {Felipe Pontes and Rohit Gheyi and Sabrina Souto and Alessandro Garcia and Márcio Ribeiro}, title = {Java Reflection API: Revealing the Dark Side of the Mirror}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {636--646}, doi = {10.1145/3338906.3338946}, year = {2019}, } Publisher's Version |
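The differential step the abstract describes (run the same generated test case on several JVMs and flag disagreements as candidates) can be sketched like this; everything here is an invented stand-in, and in the real technique manual triage then separates underdetermined specifications from genuine non-conformances:

```python
# Hedged sketch of cross-JVM differential execution: any input on which
# the implementations disagree is a candidate for manual inspection.

def differential_run(test_cases, jvms):
    """jvms: dict of name -> callable(test_case) -> observed result string."""
    candidates = []
    for tc in test_cases:
        results = {name: run(tc) for name, run in jvms.items()}
        if len(set(results.values())) > 1:  # JVMs disagree on this input
            candidates.append((tc, results))
    return candidates
```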
|
Ginelli, Davide |
ESEC/FSE '19: "Failure-Driven Program Repair ..."
Failure-Driven Program Repair
Davide Ginelli (University of Milano-Bicocca, Italy) Program repair techniques can dramatically reduce the cost of program debugging by automatically generating program fixes. Although program repair has already been successful with several classes of faults, it has also turned out to be quite limited in the complexity of the fixes it can generate. This Ph.D. thesis addresses the problem of cost-effectively generating fixes of higher complexity by investigating how to exploit failure information to directly shape the repair process. In particular, this thesis proposes Failure-Driven Program Repair, which is a novel approach to program repair that exploits its knowledge about both the possible failures and the corresponding repair strategies, to produce highly specialized repair tasks that can effectively generate non-trivial fixes. @InProceedings{ESEC/FSE19p1156, author = {Davide Ginelli}, title = {Failure-Driven Program Repair}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1156--1159}, doi = {10.1145/3338906.3341464}, year = {2019}, } Publisher's Version |
|
Gizzatullina, Ilyuza |
ESEC/FSE '19: "Empirical Study of Customer ..."
Empirical Study of Customer Communication Problem in Agile Requirements Engineering
Ilyuza Gizzatullina (Innopolis University, Russia) As Agile principles and values become an integral part of the software development culture, development processes experience significant changes. Requirements engineering, an individual phase occurring at the beginning of traditional development, is distributed across various activities in agile. However, how are customer-communication-related problems solved within the context of agile requirements engineering (RE)? An empirical study of this problem was conducted using two methods: a systematic literature review and semi-structured interviews. Problems related to customer communication in agile RE are revealed and composed into patterns, which will be supplemented with solutions in further research. @InProceedings{ESEC/FSE19p1262, author = {Ilyuza Gizzatullina}, title = {Empirical Study of Customer Communication Problem in Agile Requirements Engineering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1262--1264}, doi = {10.1145/3338906.3342511}, year = {2019}, } Publisher's Version |
|
Gligoric, Milos |
ESEC/FSE '19: "A Framework for Writing Trigger-Action ..."
A Framework for Writing Trigger-Action Todo Comments in Executable Format
Pengyu Nie, Rishabh Rai, Junyi Jessy Li, Sarfraz Khurshid, Raymond J. Mooney, and Milos Gligoric (University of Texas at Austin, USA) Natural language elements, e.g., todo comments, are frequently used to communicate among developers and to describe tasks that need to be performed (actions) when specific conditions hold on artifacts related to the code repository (triggers), e.g., from the Apache Struts project: “remove expectedJDK15 and if() after switching to Java 1.6”. As projects evolve, development processes change, and development teams reorganize, these comments, because of their informal nature, frequently become irrelevant or forgotten. We present the first framework, dubbed TrigIt, to specify trigger-action todo comments in executable format. Thus, actions are executed automatically when triggers evaluate to true. TrigIt specifications are written in the host language (e.g., Java) and are evaluated as part of the build process. The triggers are specified as query statements over abstract syntax trees, abstract representation of build configuration scripts, issue tracking systems, and system clock time. The actions are either notifications to developers or code transformation steps. We implemented TrigIt for the Java programming language and migrated 44 existing trigger-action comments from several popular open-source projects. Evaluation of TrigIt, via a user study, showed that users find TrigIt easy to learn and use. TrigIt has the potential to enforce more discipline in writing and maintaining comments in large code repositories. @InProceedings{ESEC/FSE19p385, author = {Pengyu Nie and Rishabh Rai and Junyi Jessy Li and Sarfraz Khurshid and Raymond J. Mooney and Milos Gligoric}, title = {A Framework for Writing Trigger-Action Todo Comments in Executable Format}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {385--396}, doi = {10.1145/3338906.3338965}, year = {2019}, } Publisher's Version |
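The trigger-action structure the abstract describes (a todo's action runs automatically once its trigger evaluates to true during the build) can be sketched as a toy evaluator; TrigIt itself embeds triggers in the host language and queries ASTs, build scripts, issue trackers, and the clock, so everything below is a simplified stand-in:

```python
# Hedged toy of the trigger-action todo idea: each todo pairs a trigger
# predicate over the repository state with an action (here, a notification).

def evaluate_todos(todos, repo_state):
    """todos: list of (trigger, action) pairs; trigger is a predicate over
    repo_state, action returns a notification string for the developer."""
    fired = []
    for trigger, action in todos:
        if trigger(repo_state):
            fired.append(action(repo_state))
    return fired
```

Using the Apache Struts example from the abstract, a todo could trigger once the project's Java version reaches 1.6 and then remind the developer to remove the obsolete code.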
|
Glorioso, Nick |
ESEC/FSE '19: "DeepDelta: Learning to Repair ..."
DeepDelta: Learning to Repair Compilation Errors
Ali Mesbah, Andrew Rice, Emily Johnston, Nick Glorioso, and Edward Aftandilian (University of British Columbia, Canada; University of Cambridge, UK; Google, UK; Google, USA) Programmers spend a substantial amount of time manually repairing code that does not compile. We observe that the repairs for any particular error class typically follow a pattern and are highly mechanical. We propose a novel approach that automatically learns these patterns with a deep neural network and suggests program repairs for the most costly classes of build-time compilation failures. We describe how we collect all build errors and the human-authored, in-progress code changes that cause those failing builds to transition to successful builds at Google. We generate an AST diff from the textual code changes and transform it into a domain-specific language called Delta that encodes the change that must be made to make the code compile. We then feed the compiler diagnostic information (as source) and the Delta changes that resolved the diagnostic (as target) into a Neural Machine Translation network for training. For the two most prevalent and costly classes of Java compilation errors, namely missing symbols and mismatched method signatures, our system, called DeepDelta, generates the correct repair changes for 19,314 out of 38,788 (50%) of unseen compilation errors. The correct changes are in the top three suggested fixes 86% of the time on average. @InProceedings{ESEC/FSE19p925, author = {Ali Mesbah and Andrew Rice and Emily Johnston and Nick Glorioso and Edward Aftandilian}, title = {DeepDelta: Learning to Repair Compilation Errors}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {925--936}, doi = {10.1145/3338906.3340455}, year = {2019}, } Publisher's Version |
|
Golzadeh, Mehdi |
ESEC/FSE '19: "Analysing Socio-technical ..."
Analysing Socio-technical Congruence in the Package Dependency Network of Cargo
Mehdi Golzadeh (University of Mons, Belgium) Software package distributions form large dependency networks maintained by large communities of contributors. My PhD research will consist of analysing the evolution of the socio-technical congruence of these package dependency networks, and studying its impact on the health of the ecosystem and its community. I have started a longitudinal empirical study of Cargo's dependency network and the social (commenting) and technical (development) activities in Cargo's package repositories on GitHub, and present some preliminary findings. @InProceedings{ESEC/FSE19p1226, author = {Mehdi Golzadeh}, title = {Analysing Socio-technical Congruence in the Package Dependency Network of Cargo}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1226--1228}, doi = {10.1145/3338906.3342497}, year = {2019}, } Publisher's Version |
|
Gousios, Georgios |
ESEC/FSE '19: "Releasing Fast and Slow: An ..."
Releasing Fast and Slow: An Exploratory Case Study at ING
Elvan Kula, Ayushi Rastogi, Hennie Huijgens, Arie van Deursen, and Georgios Gousios (Delft University of Technology, Netherlands; ING Bank, Netherlands) The appeal of delivering new features faster has led many software projects to adopt rapid releases. However, it is not well understood what the effects of this practice are. This paper presents an exploratory case study of rapid releases at ING, a large banking company that develops software solutions in-house, to characterize rapid releases. Since 2011, ING has shifted to a rapid release model. This switch has resulted in a mixed environment of 611 teams releasing relatively fast and slow. We followed a mixed-methods approach in which we conducted a survey with 461 participants and corroborated their perceptions with 2 years of code quality data and 1 year of release delay data. Our research shows that: rapid releases are more commonly delayed than their non-rapid counterparts, but have shorter delays; rapid releases can be beneficial in terms of reviewing and user-perceived quality; rapidly released software tends to have a higher code churn, a higher test coverage and a lower average complexity; challenges in rapid releases are related to managing dependencies and certain code aspects, e.g., design debt. @InProceedings{ESEC/FSE19p785, author = {Elvan Kula and Ayushi Rastogi and Hennie Huijgens and Arie van Deursen and Georgios Gousios}, title = {Releasing Fast and Slow: An Exploratory Case Study at ING}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {785--795}, doi = {10.1145/3338906.3338978}, year = {2019}, } Publisher's Version |
|
Greiner, Sandra |
ESEC/FSE '19: "On Extending Single-Variant ..."
On Extending Single-Variant Model Transformations for Reuse in Software Product Line Engineering
Sandra Greiner (University of Bayreuth, Germany) Software product line engineering (SPLE) aims at increasing productivity by following the principles of variability and organized reuse. Combining the discipline with model-driven software engineering (MDSE) seeks to intensify this effect by raising the level of abstraction. Typically, a product line developed in a model-driven way is composed of various kinds of models, like class diagrams and database schemata. To automatically generate further necessary representations from an initial (source) model, model transformations may create a respective target model. In annotative approaches to SPLE, variability annotations, which are boolean expressions over the features of the product line, state in which products a (model) element is visible. State-of-the-art single-variant model transformations (SVMT), however, do not consider variability annotations additionally associated with model elements. Thus, multi-variant model transformations (MVMT) should bridge the gap between existing SPLE and MDSE approaches by reusing already existing technology to additionally propagate annotations to the target. The present contribution gives an overview of the research we conduct to reuse SVMTs in model-driven SPLE and provides a plan on which steps are still to be taken. @InProceedings{ESEC/FSE19p1160, author = {Sandra Greiner}, title = {On Extending Single-Variant Model Transformations for Reuse in Software Product Line Engineering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1160--1163}, doi = {10.1145/3338906.3341467}, year = {2019}, } Publisher's Version |
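The annotation-propagation idea the abstract outlines (run the single-variant transformation unchanged, then carry each source element's variability annotation over to the target elements it produced) can be sketched with a transformation trace; the data model here is invented for illustration:

```python
# Hedged sketch: propagate boolean feature annotations from source model
# elements to target model elements via the transformation's trace.

def propagate_annotations(source_annotations, trace):
    """source_annotations: source element -> feature expression (string).
    trace: source element -> list of target elements it was mapped to."""
    target_annotations = {}
    for src, targets in trace.items():
        # Unannotated elements are visible in all products.
        ann = source_annotations.get(src, "true")
        for tgt in targets:
            target_annotations[tgt] = ann
    return target_annotations
```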
|
Guerrero, Alejandro |
ESEC/FSE '19: "Eagle: A Team Practices Audit ..."
Eagle: A Team Practices Audit Framework for Agile Software Development
Alejandro Guerrero, Rafael Fresno, An Ju, Armando Fox, Pablo Fernandez, Carlos Muller, and Antonio Ruiz-Cortés (University of Seville, Spain; University of California at Berkeley, USA) Agile/XP (Extreme Programming) software teams are expected to follow a number of specific practices in each iteration, such as estimating the effort (”points”) required to complete user stories, properly using branches and pull requests to coordinate merging multiple contributors’ code, having frequent ”standups” to keep all team members in sync, and conducting retrospectives to identify areas of improvement for future iterations. We combine two observations in developing a methodology and tools to help teams monitor their performance on these practices. On the one hand, many Agile practices are increasingly supported by web-based tools whose ”data exhaust” can provide insight into how closely the teams are following the practices. On the other hand, some of the practices can be expressed in terms similar to those developed for expressing service level objectives (SLO) in software as a service; as an example, a typical SLO for an interactive Web site might be ”over any 5-minute window, 99% of requests to the main page must be delivered within 200ms” and, analogously, a potential Team Practice (TP) for an Agile/XP team might be ”over any 2-week iteration, 75% of stories should be ’1-point’ stories”. Following this similarity, we adapt a system originally developed for monitoring and visualizing service level agreement (SLA) compliance to monitor selected TPs for Agile/XP software teams. Specifically, the system consumes and analyzes the data exhaust from widely-used tools such as GitHub and Pivotal Tracker and provides team(s) and coach(es) a ”dashboard” summarizing the teams’ adherence to various practices. As a qualitative initial investigation of its usefulness, we deployed it to twenty student teams in a four-sprint software engineering project course. 
We find improved adherence to team practices and more positive student self-evaluations of their team practices when using the tool, compared to previous experiences using an Agile/XP methodology. The demo video is located at https://youtu.be/A4xwJMEQh9c and a landing page with a live demo at https://isa-group.github.io/2019-05-eagle-demo/. @InProceedings{ESEC/FSE19p1139, author = {Alejandro Guerrero and Rafael Fresno and An Ju and Armando Fox and Pablo Fernandez and Carlos Muller and Antonio Ruiz-Cortés}, title = {Eagle: A Team Practices Audit Framework for Agile Software Development}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1139--1143}, doi = {10.1145/3338906.3341181}, year = {2019}, } Publisher's Version Video Info |
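The SLO-style Team Practice from the abstract ("over any 2-week iteration, 75% of stories should be '1-point' stories") reduces to a simple threshold check over the iteration's data exhaust; the function and default below are a sketch based only on that example, not Eagle's actual implementation:

```python
# Hedged sketch of checking one Team Practice in SLO style: the share of
# 1-point stories in an iteration must meet a threshold.

def check_one_point_practice(story_points, threshold=0.75):
    """story_points: list of point estimates for the iteration's stories."""
    if not story_points:
        return True  # vacuously met; a real tool might flag empty iterations
    share = sum(1 for p in story_points if p == 1) / len(story_points)
    return share >= threshold
```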
|
Gulzar, Muhammad Ali |
ESEC/FSE '19: "White-Box Testing of Big Data ..."
White-Box Testing of Big Data Analytics with Complex User-Defined Functions
Muhammad Ali Gulzar, Shaghayegh Mardani, Madanlal Musuvathi, and Miryung Kim (University of California at Los Angeles, USA; Microsoft Research, USA) Data-intensive scalable computing (DISC) systems such as Google’s MapReduce, Apache Hadoop, and Apache Spark are being leveraged to process massive quantities of data in the cloud. Modern DISC applications pose new challenges in exhaustive, automatic testing because, unlike SQL queries, they consist of dataflow operators in which complex user-defined functions (UDFs) are prevalent. We design a new white-box testing approach, called BigTest, to reason about the internal semantics of UDFs in tandem with the equivalence classes created by each dataflow and relational operator. Our evaluation shows that, despite ultra-large scale input data size, real-world DISC applications are often significantly skewed and inadequate in terms of test coverage, leaving 34% of Joint Dataflow and UDF (JDU) paths untested. BigTest shows the potential to minimize data size for local testing by 10^5 to 10^8 orders of magnitude while revealing 2X more manually-injected faults than the previous approach. Our experiment shows that only a few data records (on the order of tens) are actually required to achieve the same JDU coverage as the entire production data. The reduction in test data also provides CPU time saving of 194X on average, demonstrating that interactive and fast local testing is feasible for big data analytics, obviating the need to test applications on huge production data. @InProceedings{ESEC/FSE19p290, author = {Muhammad Ali Gulzar and Shaghayegh Mardani and Madanlal Musuvathi and Miryung Kim}, title = {White-Box Testing of Big Data Analytics with Complex User-Defined Functions}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {290--301}, doi = {10.1145/3338906.3338953}, year = {2019}, } Publisher's Version |
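The JDU-path intuition from the abstract (a handful of records can cover the same joint dataflow-and-UDF paths as the whole production dataset) can be illustrated with a toy reducer; real BigTest reasons symbolically about UDF semantics, whereas here each operator just labels the path a record takes:

```python
# Hedged toy of JDU-path-based test data minimization: keep one record per
# distinct combination of operator/UDF branches exercised.

def minimize_by_jdu_path(records, operators):
    """operators: list of functions record -> path label (e.g. which UDF
    branch or equivalence class the record falls into)."""
    chosen = {}
    for r in records:
        path = tuple(op(r) for op in operators)
        chosen.setdefault(path, r)  # keep the first record seen per path
    return list(chosen.values())
```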
|
Guzmàn, Michell |
ESEC/FSE '19: "VARYS: An Agnostic Model-Driven ..."
VARYS: An Agnostic Model-Driven Monitoring-as-a-Service Framework for the Cloud
Alessandro Tundo, Marco Mobilio, Matteo Orrù, Oliviero Riganelli, Michell Guzmàn, and Leonardo Mariani (University of Milano-Bicocca, Italy) Cloud systems are large, scalable distributed systems that must be carefully monitored to timely detect problems and anomalies. While a number of cloud monitoring frameworks are available, only a few solutions address the problem of adaptively and dynamically selecting the indicators that must be collected, based on the actual needs of the operator. Unfortunately, these solutions are either limited to infrastructure-level indicators or technology-specific; for instance, they are designed to work with OpenStack but not with other cloud platforms. This paper presents the VARYS monitoring framework, a technology-agnostic Monitoring-as-a-Service solution that can address KPI monitoring at all levels of the Cloud stack, including the application level. Operators use VARYS to state their monitoring goals declaratively, letting the framework perform all the operations necessary to achieve a requested monitoring configuration automatically. Interestingly, the VARYS architecture is general and extendable, and can thus be used to support additional platforms and probing technologies. @InProceedings{ESEC/FSE19p1085, author = {Alessandro Tundo and Marco Mobilio and Matteo Orrù and Oliviero Riganelli and Michell Guzmàn and Leonardo Mariani}, title = {VARYS: An Agnostic Model-Driven Monitoring-as-a-Service Framework for the Cloud}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1085--1089}, doi = {10.1145/3338906.3341185}, year = {2019}, } Publisher's Version Video Info |
|
Han, Jiaqi |
ESEC/FSE '19: "Compiler Bug Isolation via ..."
Compiler Bug Isolation via Effective Witness Test Program Generation
Junjie Chen, Jiaqi Han, Peiyi Sun, Lingming Zhang, Dan Hao, and Lu Zhang (Tianjin University, China; Peking University, China; University of Texas at Dallas, USA) Compiler bugs are extremely harmful, but are notoriously difficult to debug because compiler bugs usually produce little debugging information. Given a bug-triggering test program for a compiler, hundreds of compiler files are usually involved during compilation, and thus are suspect buggy files. Although there are many automated bug isolation techniques, they are not applicable to compilers due to scalability or effectiveness problems. To solve this problem, in this paper, we transform the compiler bug isolation problem into a search problem, i.e., searching for a set of effective witness test programs that are able to eliminate innocent compiler files from suspects. Based on this intuition, we propose an automated compiler bug isolation technique, DiWi, which (1) proposes a heuristic-based search strategy to generate such a set of effective witness test programs via applying our designed witnessing mutation rules to the given failing test program, and (2) compares their coverage to isolate bugs following the practice of spectrum-based bug isolation. The experimental results on 90 real bugs from the popular GCC and LLVM compilers show that DiWi effectively isolates 66.67%/78.89% of bugs within the Top-10/Top-20 compiler files, significantly outperforming state-of-the-art bug isolation techniques. @InProceedings{ESEC/FSE19p223, author = {Junjie Chen and Jiaqi Han and Peiyi Sun and Lingming Zhang and Dan Hao and Lu Zhang}, title = {Compiler Bug Isolation via Effective Witness Test Program Generation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {223--234}, doi = {10.1145/3338906.3338957}, year = {2019}, } Publisher's Version |
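The spectrum-comparison step the abstract mentions can be sketched as follows; the scoring formula is a simple stand-in rather than DiWi's actual ranking, and all names are invented: compiler files covered when compiling the failing program but touched by few passing witness programs rank as more suspicious.

```python
# Hedged sketch of spectrum-based compiler bug isolation: rank files in the
# failing compilation's coverage by how rarely the passing witnesses cover them.

def rank_suspicious_files(failing_cov, witness_covs):
    """failing_cov: set of compiler files covered by the failing program.
    witness_covs: list of coverage sets, one per passing witness program."""
    scores = {}
    for f in failing_cov:
        passed = sum(f in cov for cov in witness_covs)
        # Fewer witnesses touching the file -> higher suspiciousness.
        scores[f] = 1.0 - passed / max(len(witness_covs), 1)
    return sorted(scores, key=scores.get, reverse=True)
```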
|
Hao, Dan |
ESEC/FSE '19: "Compiler Bug Isolation via ..."
Compiler Bug Isolation via Effective Witness Test Program Generation
Junjie Chen, Jiaqi Han, Peiyi Sun, Lingming Zhang, Dan Hao, and Lu Zhang (Tianjin University, China; Peking University, China; University of Texas at Dallas, USA) Compiler bugs are extremely harmful, but are notoriously difficult to debug because compiler bugs usually produce little debugging information. Given a bug-triggering test program for a compiler, hundreds of compiler files are usually involved during compilation, and thus are suspect buggy files. Although there are many automated bug isolation techniques, they are not applicable to compilers due to scalability or effectiveness problems. To solve this problem, in this paper, we transform the compiler bug isolation problem into a search problem, i.e., searching for a set of effective witness test programs that are able to eliminate innocent compiler files from suspects. Based on this intuition, we propose an automated compiler bug isolation technique, DiWi, which (1) proposes a heuristic-based search strategy to generate such a set of effective witness test programs via applying our designed witnessing mutation rules to the given failing test program, and (2) compares their coverage to isolate bugs following the practice of spectrum-based bug isolation. The experimental results on 90 real bugs from the popular GCC and LLVM compilers show that DiWi effectively isolates 66.67%/78.89% of bugs within the Top-10/Top-20 compiler files, significantly outperforming state-of-the-art bug isolation techniques. @InProceedings{ESEC/FSE19p223, author = {Junjie Chen and Jiaqi Han and Peiyi Sun and Lingming Zhang and Dan Hao and Lu Zhang}, title = {Compiler Bug Isolation via Effective Witness Test Program Generation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {223--234}, doi = {10.1145/3338906.3338957}, year = {2019}, } Publisher's Version |
|
Harman, Mark |
ESEC/FSE '19: "The Importance of Accounting ..."
The Importance of Accounting for Real-World Labelling When Predicting Software Vulnerabilities
Matthieu Jimenez, Renaud Rwemalika, Mike Papadakis, Federica Sarro, Yves Le Traon, and Mark Harman (University of Luxembourg, Luxembourg; University College London, UK; Facebook, UK) Previous work on vulnerability prediction assumes that predictive models are trained with respect to perfect labelling information (including labels from future, as yet undiscovered vulnerabilities). In this paper we present results from a comprehensive empirical study of 1,898 real-world vulnerabilities reported in 74 releases of three security-critical open source systems (Linux Kernel, OpenSSL and Wireshark). Our study investigates the effectiveness of three previously proposed vulnerability prediction approaches, in two settings: with and without the unrealistic labelling assumption. The results reveal that the unrealistic labelling assumption can profoundly mislead the scientific conclusions drawn: seemingly highly effective and deployable prediction results vanish when we fully account for realistically available labelling in the experimental methodology. More precisely, MCC mean values of predictive effectiveness drop from 0.77, 0.65 and 0.43 to 0.08, 0.22, 0.10 for Linux Kernel, OpenSSL and Wireshark, respectively. Similar results are also obtained for precision, recall and other assessments of predictive efficacy. The community therefore needs to upgrade experimental and empirical methodology for vulnerability prediction evaluation and development to ensure robust and actionable scientific findings. @InProceedings{ESEC/FSE19p695, author = {Matthieu Jimenez and Renaud Rwemalika and Mike Papadakis and Federica Sarro and Yves Le Traon and Mark Harman}, title = {The Importance of Accounting for Real-World Labelling When Predicting Software Vulnerabilities}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {695--705}, doi = {10.1145/3338906.3338941}, year = {2019}, } Publisher's Version |
|
Hassanshahi, Behnaz |
ESEC/FSE '19: "Nodest: Feedback-Driven Static ..."
Nodest: Feedback-Driven Static Analysis of Node.js Applications
Benjamin Barslev Nielsen, Behnaz Hassanshahi, and François Gauthier (Oracle Labs, Australia; Aarhus University, Denmark) Node.js provides the ability to write JavaScript programs for the server side and has become a popular language for developing web applications. Node.js allows direct access to the underlying filesystem, operating system resources, and databases, but does not provide any security mechanism such as sandboxing of untrusted code, and injection vulnerabilities are now commonly reported in Node.js modules. Existing static dataflow analysis techniques do not scale to Node.js applications to find injection vulnerabilities because small Node.js web applications typically depend on many third-party modules. We present a new feedback-driven static analysis that scales well to detect injection vulnerabilities in Node.js applications. The key idea behind our new technique is that not all third-party modules need to be analyzed to detect an injection vulnerability. Results of running our analysis, Nodest, on real-world Node.js applications show that the technique scales to large applications and finds previously known as well as new vulnerabilities. In particular, Nodest finds 63 true positive taint flows in a set of our benchmarks, whereas a state-of-the-art static analysis reports only 3. Moreover, our analysis scales to Express, the most popular Node.js web framework, and reports non-trivial injection vulnerabilities. @InProceedings{ESEC/FSE19p455, author = {Benjamin Barslev Nielsen and Behnaz Hassanshahi and François Gauthier}, title = {Nodest: Feedback-Driven Static Analysis of Node.js Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {455--465}, doi = {10.1145/3338906.3338933}, year = {2019}, } Publisher's Version |
|
Haverlock, Kevin |
ESEC/FSE '19: "Predicting Breakdowns in Cloud ..."
Predicting Breakdowns in Cloud Services (with SPIKE)
Jianfeng Chen, Joymallya Chakraborty, Philip Clark, Kevin Haverlock, Snehit Cherian, and Tim Menzies (North Carolina State University, USA; LexisNexis, USA) Maintaining web-services is a mission-critical task where any downtime means loss of revenue and reputation (of being a reliable service provider). In the current competitive web services market, such a loss of reputation causes extensive loss of future revenue. To address this issue, we developed SPIKE, a data mining tool which can predict upcoming service breakdowns, half an hour into the future. Such predictions let an organization alert and assemble the tiger team to address the problem (e.g. by reconfiguring cloud hardware in order to reduce the likelihood of that breakdown). SPIKE utilizes (a) regression tree learning (with CART); (b) synthetic minority over-sampling (to handle how rare spikes are in our data); (c) hyperparameter optimization (to learn best settings for our local data) and (d) a technique we called “topology sampling” where training vectors are built from extensive details of an individual node plus summary details on all their neighbors. In the experiments reported here, SPIKE predicted service spikes 30 minutes into the future with recalls and precision of 75% and above. Also, SPIKE performed relatively better than other widely-used learning methods (neural nets, random forests, logistic regression). @InProceedings{ESEC/FSE19p916, author = {Jianfeng Chen and Joymallya Chakraborty and Philip Clark and Kevin Haverlock and Snehit Cherian and Tim Menzies}, title = {Predicting Breakdowns in Cloud Services (with SPIKE)}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {916--924}, doi = {10.1145/3338906.3340450}, year = {2019}, } Publisher's Version |
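The "topology sampling" step the abstract names (a training vector combines one node's own metrics with summary details over its neighbors) can be sketched as plain feature construction; metric layout and the use of a per-metric mean as the summary are assumptions for illustration, not SPIKE's exact encoding:

```python
# Hedged sketch of topology sampling: a node's feature vector is its own
# metrics followed by the mean of each metric over its neighbors.

def topology_sample(node_metrics, neighbors, node):
    """node_metrics: node -> list of metric values.
    neighbors: node -> list of neighboring nodes."""
    own = list(node_metrics[node])
    rows = [node_metrics[n] for n in neighbors.get(node, [])]
    if rows:
        summary = [sum(col) / len(rows) for col in zip(*rows)]  # per-metric mean
    else:
        summary = [0.0] * len(own)  # isolated node: pad with zeros
    return own + summary
```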
|
He, Chuan |
ESEC/FSE '19: "Latent Error Prediction and ..."
Latent Error Prediction and Fault Localization for Microservice Applications by Learning from System Trace Logs
Xiang Zhou, Xin Peng, Tao Xie, Jun Sun, Chao Ji, Dewei Liu, Qilin Xiang, and Chuan He (Fudan University, China; University of Illinois at Urbana-Champaign, USA; Singapore Management University, Singapore) In the production environment, a large fraction of microservice failures are related to the complex and dynamic interactions and runtime environments, such as those related to multiple instances, environmental configurations, and asynchronous interactions of microservices. Due to the complexity and dynamism of these failures, it is often hard to reproduce and diagnose them in testing environments. It is desirable, yet still challenging, to detect these failures and locate the faults at runtime in the production environment so that developers can resolve them efficiently. To address this challenge, in this paper, we propose MEPFL, an approach of latent error prediction and fault localization for microservice applications by learning from system trace logs. Based on a set of features defined on the system trace logs, MEPFL trains prediction models at both the trace level and the microservice level using the system trace logs collected from automatic executions of the target application and its faulty versions produced by fault injection. The prediction models thus can be used in the production environment to predict latent errors, faulty microservices, and fault types for trace instances captured at runtime. We implement MEPFL based on the infrastructure systems of container orchestrator and service mesh, and conduct a series of experimental studies with two open-source microservice applications (one of them being, to the best of our knowledge, the largest open-source microservice application). The results indicate that MEPFL can achieve high accuracy in intra-application prediction of latent errors, faulty microservices, and fault types, and outperforms a state-of-the-art approach of failure diagnosis for distributed systems. 
The results also show that MEPFL can effectively predict latent errors caused by real-world fault cases. @InProceedings{ESEC/FSE19p683, author = {Xiang Zhou and Xin Peng and Tao Xie and Jun Sun and Chao Ji and Dewei Liu and Qilin Xiang and Chuan He}, title = {Latent Error Prediction and Fault Localization for Microservice Applications by Learning from System Trace Logs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {683--694}, doi = {10.1145/3338906.3338961}, year = {2019}, } Publisher's Version |
|
He, Haochen |
ESEC/FSE '19: "Tuning Backfired? Not (Always) ..."
Tuning Backfired? Not (Always) Your Fault: Understanding and Detecting Configuration-Related Performance Bugs
Haochen He (National University of Defense Technology, China) Performance bugs (PBugs) are often hard to detect due to their non-fail-stop symptoms. Existing debugging techniques can only detect PBugs with known patterns (e.g., inefficient loops). The key reason behind this incapability is the lack of a general test oracle. Here, we argue that configuration tuning can serve as a strong candidate for PBug detection. First, prior work shows that most performance bugs are related to configurations. Second, tuning reflects users’ expectation of performance changes. If the actual performance behaves differently from the users’ intuition, the related code segment is likely to be problematic. In this paper, we first conduct a comprehensive study on configuration-related performance bugs (CPBugs) from 7 representative software systems (i.e., MySQL, MariaDB, MongoDB, RocksDB, PostgreSQL, Apache, and Nginx) and collect 135 real-world CPBugs. Next, by further analyzing the symptoms and root causes of the collected bugs, we identify 7 counter-intuitive patterns. Finally, by integrating the counter-intuitive patterns, we build a general test framework for detecting performance bugs. @InProceedings{ESEC/FSE19p1229, author = {Haochen He}, title = {Tuning Backfired? Not (Always) Your Fault: Understanding and Detecting Configuration-Related Performance Bugs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1229--1231}, doi = {10.1145/3338906.3342498}, year = {2019}, } Publisher's Version |
|
He, Hao |
ESEC/FSE '19: "Understanding Source Code ..."
Understanding Source Code Comments at Large-Scale
Hao He (Peking University, China) Source code comments are important for any software, but the basic patterns of writing comments across domains and programming languages remain unclear. In this paper, we take a first step toward understanding differences in commenting practices by analyzing the comment density of 150 projects in 5 different programming languages. We have found that there are noticeable differences in comment density, which may be related to the programming language used in the project and the purpose of the project. @InProceedings{ESEC/FSE19p1217, author = {Hao He}, title = {Understanding Source Code Comments at Large-Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1217--1219}, doi = {10.1145/3338906.3342494}, year = {2019}, } Publisher's Version |
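Comment density as studied above is essentially the fraction of source lines that are comments. A minimal line-based sketch follows; the single-line comment markers, the blank-line handling, and the sample snippet are illustrative assumptions, not the paper's exact measurement procedure:

```python
def comment_density(lines, markers=("#", "//")):
    """Fraction of non-blank lines that start with a comment marker.
    Line-based heuristic: block comments and trailing inline comments
    are not counted."""
    stripped = [ln.strip() for ln in lines if ln.strip()]
    if not stripped:
        return 0.0
    comments = sum(1 for ln in stripped if ln.startswith(markers))
    return comments / len(stripped)

# Hypothetical four-line snippet: one pure comment line out of four.
sample = [
    "# compute total",
    "total = 0",
    "for x in xs:",
    "    total += x  # accumulate",
    "",
]
print(round(comment_density(sample), 2))  # 0.25
```

Averaging this per-file metric over a project, then comparing averages across languages, gives the kind of cross-language density comparison the abstract describes.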
|
He, Liang |
ESEC/FSE '19: "FinExpert: Domain-Specific ..."
FinExpert: Domain-Specific Test Generation for FinTech Systems
Tiancheng Jin, Qingshun Wang, Lihua Xu, Chunmei Pan, Liang Dou, Haifeng Qian, Liang He, and Tao Xie (East China Normal University, China; New York University Shanghai, China; CFETS Information Technology, China; University of Illinois at Urbana-Champaign, USA) To assure high quality of software systems, the comprehensiveness of the created test suite and efficiency of the adopted testing process are highly crucial, especially in the FinTech industry, due to a FinTech system’s complicated system logic, mission-critical nature, and large test suite. However, the state of the testing practice in the FinTech industry still heavily relies on manual efforts. Our recent research contributed the first attempt to automate the testing process at China Foreign Exchange Trade System (CFETS) Information Technology Co. Ltd., a subsidiary of China’s Central Bank that provides China’s foreign exchange transactions, and revealed that automating test generation for such a complex trading platform could help alleviate some of these manual efforts. In this paper, we further investigate the dilemmas faced in testing the CFETS trading platform, identify the importance of domain knowledge in its testing process, and propose a new approach of domain-specific test generation to further improve the effectiveness and efficiency of our previous approach in industrial settings. We also present findings of our empirical studies of conducting domain-specific testing on subsystems of the CFETS Trading Platform. @InProceedings{ESEC/FSE19p853, author = {Tiancheng Jin and Qingshun Wang and Lihua Xu and Chunmei Pan and Liang Dou and Haifeng Qian and Liang He and Tao Xie}, title = {FinExpert: Domain-Specific Test Generation for FinTech Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {853--862}, doi = {10.1145/3338906.3340441}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Detecting Concurrency Memory ..." 
Detecting Concurrency Memory Corruption Vulnerabilities Yan Cai, Biyun Zhu, Ruijie Meng, Hao Yun, Liang He, Purui Su, and Bin Liang (Institute of Software at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Renmin University of China, China) Memory corruption vulnerabilities can occur in multithreaded executions; we refer to them as concurrency vulnerabilities in this paper. Due to non-deterministic multithreaded executions, they are extremely difficult to detect. Recently, researchers tried to apply data race detectors to detect concurrency vulnerabilities. Unfortunately, these detectors are ineffective at detecting concurrency vulnerabilities. For example, most (90%) of data races are benign. However, concurrency vulnerabilities are harmful and can usually be exploited to launch attacks. Techniques based on the maximal causal model rely on constraint solvers to predict scheduling; they can miss concurrency vulnerabilities in practice. Our insight is that a concurrency vulnerability is more related to the orders of events that can be reversed in different executions, no matter whether the corresponding accesses can form data races. We then define exchangeable events to identify pairs of events whose execution orders can probably be reversed in different executions. We further propose algorithms to detect three major kinds of concurrency vulnerabilities. To overcome the potential imprecision of exchangeable events, we also adopt a validation step to isolate real vulnerabilities. We implemented our algorithms as a tool, ConVul, and applied it to 10 known concurrency vulnerabilities and the MySQL database server. Compared with three widely-used race detectors and one detector based on the maximal causal model, ConVul was significantly more effective, detecting 9 of the 10 known vulnerabilities and 6 zero-day vulnerabilities on MySQL (four have been confirmed). 
However, other detectors only detected at most 3 out of the 16 known and zero-day vulnerabilities. @InProceedings{ESEC/FSE19p706, author = {Yan Cai and Biyun Zhu and Ruijie Meng and Hao Yun and Liang He and Purui Su and Bin Liang}, title = {Detecting Concurrency Memory Corruption Vulnerabilities}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {706--717}, doi = {10.1145/3338906.3338927}, year = {2019}, } Publisher's Version |
|
Hemmati, Hadi |
ESEC/FSE '19: "An IR-Based Approach towards ..."
An IR-Based Approach towards Automated Integration of Geo-Spatial Datasets in Map-Based Software Systems
Nima Miryeganeh, Mehdi Amoui, and Hadi Hemmati (University of Calgary, Canada; Localintel, Canada) Data is arguably the most valuable asset of the modern world. In this era, the success of any data-intensive solution relies on the quality of data that drives it. Among the vast amounts of data that are captured, managed, and analyzed every day, geospatial data are one of the most interesting classes of data, holding geographical information about real-world phenomena that can be visualized as digital maps. Geospatial data is the source of many enterprise solutions that provide local information and insights. Companies often aggregate geospatial datasets from various sources in order to increase the quality of such solutions. However, the lack of a global standard model for geospatial datasets makes the task of merging and integrating datasets difficult and error prone. Traditionally, this aggregation was accomplished by domain experts manually validating the data integration process, checking new data sources and/or new versions of previous data against conflicts and other requirement violations. However, this manual approach does not scale and hinders rapid releases when dealing with big datasets that change frequently. Thus, more automated approaches that require only limited interaction with domain experts are needed. As a first step to tackle this problem, we have leveraged Information Retrieval (IR) and geospatial search techniques to propose a systematic and automated conflict identification approach. To evaluate our approach, we conduct a case study in which we measure the accuracy of our approach in several real-world scenarios, followed by interviews with Localintel Inc. software developers to collect their feedback. 
@InProceedings{ESEC/FSE19p946, author = {Nima Miryeganeh and Mehdi Amoui and Hadi Hemmati}, title = {An IR-Based Approach towards Automated Integration of Geo-Spatial Datasets in Map-Based Software Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {946--954}, doi = {10.1145/3338906.3340454}, year = {2019}, } Publisher's Version |
|
He, Sen |
ESEC/FSE '19: "A Statistics-Based Performance ..."
A Statistics-Based Performance Testing Methodology for Cloud Applications
Sen He, Glenna Manns, John Saunders, Wei Wang, Lori Pollock, and Mary Lou Soffa (University of Texas at San Antonio, USA; University of Virginia, USA; University of Delaware, USA) The low cost of resource ownership and flexibility have led users to increasingly port their applications to the clouds. To fully realize the cost benefits of cloud services, users usually need to reliably know the execution performance of their applications. However, due to the random performance fluctuations experienced by cloud applications, the black box nature of public clouds and the cloud usage costs, testing on clouds to acquire accurate performance results is extremely difficult. In this paper, we present a novel cloud performance testing methodology called PT4Cloud. By employing non-parametric statistical approaches of likelihood theory and the bootstrap method, PT4Cloud provides reliable stop conditions to obtain highly accurate performance distributions with confidence bands. These statistical approaches also allow users to specify intuitive accuracy goals and easily trade between accuracy and testing cost. We evaluated PT4Cloud with 33 benchmark configurations on Amazon Web Service and Chameleon clouds. When compared with performance data obtained from extensive performance tests, PT4Cloud provides testing results with 95.4% accuracy on average while reducing the number of test runs by 62%. We also propose two test execution reduction techniques for PT4Cloud, which can reduce the number of test runs by 90.1% while retaining an average accuracy of 91%. We compared our technique to three other techniques and found that our results are much more accurate. 
@InProceedings{ESEC/FSE19p188, author = {Sen He and Glenna Manns and John Saunders and Wei Wang and Lori Pollock and Mary Lou Soffa}, title = {A Statistics-Based Performance Testing Methodology for Cloud Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {188--199}, doi = {10.1145/3338906.3338912}, year = {2019}, } Publisher's Version Artifacts Reusable |
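The statistical machinery behind a reliable stop condition like PT4Cloud's can be sketched with a percentile bootstrap over repeated performance measurements. This is a generic illustration of the bootstrap idea, not PT4Cloud's code; the latency values and the 5% interval-width threshold are made up:

```python
import random
import statistics

def bootstrap_ci(samples, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean:
    resample with replacement, collect the resample means, and take
    the alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(samples, k=len(samples)))
        for _ in range(n_resamples)
    )
    lo = means[int(n_resamples * alpha / 2)]
    hi = means[int(n_resamples * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical end-to-end latencies (seconds) from repeated cloud test runs.
latencies = [1.02, 0.98, 1.10, 1.05, 0.95, 1.01, 1.08, 0.99, 1.03, 1.00]
lo, hi = bootstrap_ci(latencies)
# Stop-condition sketch: stop testing once the confidence interval
# is narrow enough, e.g. within 5% of the estimated mean.
done = (hi - lo) <= 0.05 * statistics.fmean(latencies)
```

Because the test is non-parametric, it makes no normality assumption about the noisy cloud measurements; tightening or loosening the width threshold is the accuracy-versus-cost trade the abstract describes.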
|
He, Xiaoting |
ESEC/FSE '19: "Robust Log-Based Anomaly Detection ..."
Robust Log-Based Anomaly Detection on Unstable Log Data
Xu Zhang, Yong Xu, Qingwei Lin, Bo Qiao, Hongyu Zhang, Yingnong Dang, Chunyu Xie, Xinsheng Yang, Qian Cheng, Ze Li, Junjie Chen, Xiaoting He, Randolph Yao, Jian-Guang Lou, Murali Chintalapati, Furao Shen, and Dongmei Zhang (Microsoft Research, China; Nanjing University, China; University of Newcastle, Australia; Microsoft, USA; Tianjin University, China) Logs are widely used by large and complex software-intensive systems for troubleshooting. There have been many studies on log-based anomaly detection. To detect the anomalies, the existing methods mainly construct a detection model using log event data extracted from historical logs. However, we find that the existing methods do not work well in practice. These methods make the closed-world assumption that the log data is stable over time and the set of distinct log events is known. However, our empirical study shows that in practice, log data often contains previously unseen log events or log sequences. The instability of log data comes from two sources: 1) the evolution of logging statements, and 2) the processing noise in log data. In this paper, we propose a new log-based anomaly detection approach, called LogRobust. LogRobust extracts semantic information of log events and represents them as semantic vectors. It then detects anomalies by utilizing an attention-based Bi-LSTM model, which has the ability to capture the contextual information in the log sequences and automatically learn the importance of different log events. In this way, LogRobust is able to identify and handle unstable log events and sequences. We have evaluated LogRobust using logs collected from the Hadoop system and an actual online service system of Microsoft. The experimental results show that the proposed approach can well address the problem of log instability and achieve accurate and robust results on real-world, ever-changing log data. 
@InProceedings{ESEC/FSE19p807, author = {Xu Zhang and Yong Xu and Qingwei Lin and Bo Qiao and Hongyu Zhang and Yingnong Dang and Chunyu Xie and Xinsheng Yang and Qian Cheng and Ze Li and Junjie Chen and Xiaoting He and Randolph Yao and Jian-Guang Lou and Murali Chintalapati and Furao Shen and Dongmei Zhang}, title = {Robust Log-Based Anomaly Detection on Unstable Log Data}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {807--817}, doi = {10.1145/3338906.3338931}, year = {2019}, } Publisher's Version |
|
Hilton, Michael |
ESEC/FSE '19: "A Conceptual Replication of ..."
A Conceptual Replication of Continuous Integration Pain Points in the Context of Travis CI
David Gray Widder, Michael Hilton, Christian Kästner, and Bogdan Vasilescu (Carnegie Mellon University, USA) Continuous integration (CI) is an established software quality assurance practice, and the focus of much prior research with a diverse range of methods and populations. In this paper, we first conduct a literature review of 37 papers on CI pain points. We then conduct a conceptual replication study on results from these papers using a triangulation design consisting of a survey with 132 responses, 12 interviews, and two logistic regressions predicting Travis CI abandonment and switching on a dataset of 6,239 GitHub projects. We report and discuss which past results we were able to replicate, those for which we found conflicting evidence, those for which we did not find evidence, and the implications of these findings. @InProceedings{ESEC/FSE19p647, author = {David Gray Widder and Michael Hilton and Christian Kästner and Bogdan Vasilescu}, title = {A Conceptual Replication of Continuous Integration Pain Points in the Context of Travis CI}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {647--658}, doi = {10.1145/3338906.3338922}, year = {2019}, } Publisher's Version Info |
|
Hirao, Toshiki |
ESEC/FSE '19: "The Review Linkage Graph for ..."
The Review Linkage Graph for Code Review Analytics: A Recovery Approach and Empirical Study
Toshiki Hirao, Shane McIntosh, Akinori Ihara, and Kenichi Matsumoto (NAIST, Japan; McGill University, Canada; Wakayama University, Japan) Modern Code Review (MCR) is a pillar of contemporary quality assurance approaches, where developers discuss and improve code changes prior to integration. Since review interactions (e.g., comments, revisions) are archived, analytics approaches like reviewer recommendation and review outcome prediction have been proposed to support the MCR process. These approaches assume that reviews evolve and are adjudicated independently; yet in practice, reviews can be interdependent. In this paper, we set out to better understand the impact of review linkage on code review analytics. To do so, we extract review linkage graphs where nodes represent reviews, while edges represent recovered links between reviews. Through a quantitative analysis of six software communities, we observe that (a) linked reviews occur regularly, with linked review rates of 25% in OpenStack, 17% in Chromium, and 3%–8% in Android, Qt, Eclipse, and Libreoffice; and (b) linkage has become more prevalent over time. Through qualitative analysis, we discover that links span 16 types that belong to five categories. To automate link category recovery, we train classifiers to label links according to the surrounding document content. Those classifiers achieve F1-scores of 0.71–0.79, at least doubling the F1-scores of a ZeroR baseline. Finally, we show that the F1-scores of reviewer recommenders can be improved by 37%–88% (5–14 percentage points) by incorporating information from linked reviews that is available at prediction time. Indeed, review linkage should be exploited by future code review analytics. 
@InProceedings{ESEC/FSE19p578, author = {Toshiki Hirao and Shane McIntosh and Akinori Ihara and Kenichi Matsumoto}, title = {The Review Linkage Graph for Code Review Analytics: A Recovery Approach and Empirical Study}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {578--589}, doi = {10.1145/3338906.3338949}, year = {2019}, } Publisher's Version |
|
Holland, Benjamin |
ESEC/FSE '19: "DISCOVER: Detecting Algorithmic ..."
DISCOVER: Detecting Algorithmic Complexity Vulnerabilities
Payas Awadhutkar, Ganesh Ram Santhanam, Benjamin Holland, and Suresh Kothari (Iowa State University, USA; EnSoft, USA) Algorithmic Complexity Vulnerabilities (ACV) are a class of vulnerabilities that enable Denial of Service Attacks. ACVs stem from asymmetric consumption of resources due to complex loop termination logic, recursion, and/or resource intensive library APIs. Completely automated detection of ACVs is intractable and it calls for tools that assist human analysts. We present DISCOVER, a suite of tools that facilitates human-on-the-loop detection of ACVs. DISCOVER's workflow can be broken into three phases: (1) automated characterization of loops, (2) selection of suspicious loops, and (3) interactive audit of selected loops. We demonstrate DISCOVER through a case study on a DARPA challenge app. DISCOVER supports analysis of Java source code and Java bytecode. We demonstrate it for Java bytecode. @InProceedings{ESEC/FSE19p1129, author = {Payas Awadhutkar and Ganesh Ram Santhanam and Benjamin Holland and Suresh Kothari}, title = {DISCOVER: Detecting Algorithmic Complexity Vulnerabilities}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1129--1133}, doi = {10.1145/3338906.3341177}, year = {2019}, } Publisher's Version Video |
|
Hong, Shin |
ESEC/FSE '19: "Target-Driven Compositional ..."
Target-Driven Compositional Concolic Testing with Function Summary Refinement for Effective Bug Detection
Yunho Kim, Shin Hong, and Moonzoo Kim (KAIST, South Korea; Handong Global University, South Korea) Concolic testing is popular in unit testing because it can detect bugs quickly in a relatively small search space. But, in system-level testing, it suffers from the symbolic path explosion and often misses bugs. To resolve this problem, we have developed a focused compositional concolic testing technique, FOCAL, for effective bug detection. Focusing on a target unit failure v (a crash or an assert violation) detected by concolic unit testing, FOCAL generates a system-level test input that validates v. This test input is obtained by building and solving symbolic path formulas that represent system-level executions raising v. FOCAL builds such formulas by combining function summaries one by one backward from a function that raised v to main. If a function summary φa of function a conflicts with the summaries of the other functions, FOCAL refines φa to φa′ by applying a refining constraint learned from the conflict. FOCAL showed high system-level bug detection ability by detecting 71 out of the 100 real-world target bugs in the SIR benchmark, while other relevant cutting edge techniques (i.e., AFL-fast, KATCH, Mix-CCBSE) detected at most 40 bugs. Also, FOCAL detected 13 new crash bugs in popular file parsing programs. @InProceedings{ESEC/FSE19p16, author = {Yunho Kim and Shin Hong and Moonzoo Kim}, title = {Target-Driven Compositional Concolic Testing with Function Summary Refinement for Effective Bug Detection}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {16--26}, doi = {10.1145/3338906.3338934}, year = {2019}, } Publisher's Version Info |
|
Huang, Huang |
ESEC/FSE '19: "Ethnographic Research in Software ..."
Ethnographic Research in Software Engineering: A Critical Review and Checklist
He Zhang, Xin Huang, Xin Zhou, Huang Huang, and Muhammad Ali Babar (Nanjing University, China; University of Adelaide, Australia) The Software Engineering (SE) community has recently been investing a significant amount of effort in qualitative research to study the human and social aspects of SE processes, practices, and technologies. Ethnography is one of the major qualitative research methods, which is based on a constructivist paradigm that is different from the hypothetic-deductive research model usually used in SE. Hence, the adoption of the ethnographic research method in SE can present significant challenges in terms of sufficient understanding of the methodological requirements and the logistics of its applications. It is important to systematically identify and understand various aspects of adopting ethnography in SE and provide effective guidance. We carried out an empirical inquiry by integrating a systematic literature review and a confirmatory survey. By reviewing the ethnographic studies reported in 111 identified papers and 26 doctoral theses and analyzing the authors' responses for 29 of those papers, we revealed several unique insights. These identified insights were then transformed into a preliminary checklist that helps improve the state-of-the-practice of using ethnography in SE. This study also identifies the areas where methodological improvements of ethnography are needed in SE. @InProceedings{ESEC/FSE19p659, author = {He Zhang and Xin Huang and Xin Zhou and Huang Huang and Muhammad Ali Babar}, title = {Ethnographic Research in Software Engineering: A Critical Review and Checklist}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {659--670}, doi = {10.1145/3338906.3338976}, year = {2019}, } Publisher's Version |
|
Huang, Jeff |
ESEC/FSE '19: "ServDroid: Detecting Service ..."
ServDroid: Detecting Service Usage Inefficiencies in Android Applications
Wei Song, Jing Zhang, and Jeff Huang (Nanjing University of Science and Technology, China; Texas A&M University, USA) Services in Android applications are frequently-used components for performing time-consuming operations in the background. While services play a crucial role in app performance, our study shows that service uses in practice are not as efficient as expected, e.g., they tend to cause unnecessary resource occupation and/or energy consumption. Moreover, as service usage inefficiencies do not manifest with immediate failures, e.g., app crashes, existing testing-based approaches fall short in finding them. In this paper, we identify four anti-patterns of such service usage inefficiency bugs, including premature create, late destroy, premature destroy, and service leak, and present a static analysis technique, ServDroid, to automatically and effectively detect them based on the anti-patterns. We have applied ServDroid to a large collection of popular real-world Android apps. Our results show that, surprisingly, service usage inefficiencies are prevalent and can severely impact app performance. @InProceedings{ESEC/FSE19p362, author = {Wei Song and Jing Zhang and Jeff Huang}, title = {ServDroid: Detecting Service Usage Inefficiencies in Android Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {362--373}, doi = {10.1145/3338906.3338950}, year = {2019}, } Publisher's Version Info |
|
Huang, Qiao |
ESEC/FSE '19: "AnswerBot: An Answer Summary ..."
AnswerBot: An Answer Summary Generation Tool Based on Stack Overflow
Liang Cai, Haoye Wang, Bowen Xu, Qiao Huang, Xin Xia, David Lo, and Zhenchang Xing (Zhejiang University, China; Singapore Management University, Singapore; Monash University, Australia; Australian National University, Australia) Software Q&A sites (like Stack Overflow) play an essential role in developers’ day-to-day work for problem-solving. Although search engines (like Google) are widely used to obtain a list of relevant posts for technical problems, we observed that redundant relevant posts and the sheer amount of information make it hard for developers to digest and identify the useful answers. In this paper, we propose a tool, AnswerBot, which automatically generates an answer summary for a technical problem. AnswerBot consists of three main stages: (1) relevant question retrieval, (2) useful answer paragraph selection, and (3) diverse answer summary generation. We implement it in the form of a search engine website. To evaluate AnswerBot, we first build a repository that includes a large number of Java questions and their corresponding answers from Stack Overflow. Then, we conduct a user study that evaluates the answer summaries generated by AnswerBot and two baselines (based on the Google and Stack Overflow search engines) for 100 queries. The results show that the answer summaries generated by AnswerBot are more relevant, useful, and diverse. Moreover, we also substantially improved the efficiency of AnswerBot (from 309 to 8 seconds per query). @InProceedings{ESEC/FSE19p1134, author = {Liang Cai and Haoye Wang and Bowen Xu and Qiao Huang and Xin Xia and David Lo and Zhenchang Xing}, title = {AnswerBot: An Answer Summary Generation Tool Based on Stack Overflow}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1134--1138}, doi = {10.1145/3338906.3341186}, year = {2019}, } Publisher's Version ESEC/FSE '19: "BIKER: A Tool for Bi-Information ..." 
BIKER: A Tool for Bi-Information Source Based API Method Recommendation Liang Cai, Haoye Wang, Qiao Huang, Xin Xia, Zhenchang Xing, and David Lo (Zhejiang University, China; Monash University, Australia; Australian National University, Australia; Singapore Management University, Singapore) Application Programming Interfaces (APIs) in software libraries play an important role in modern software development. Although most libraries provide API documentation as a reference, developers may find it difficult to directly search for appropriate APIs in documentation using the natural language description of the programming tasks. We call this phenomenon the knowledge gap: API documentation mainly describes API functionality and structure but lacks other types of information like concepts and purposes. In this paper, we propose a Java API recommendation tool named BIKER (Bi-Information source based KnowledgE Recommendation) to bridge the knowledge gap. We implement BIKER as a search engine website. Given a query in natural language, instead of directly searching API documentation, BIKER first searches for similar API-related questions on Stack Overflow to extract candidate APIs. Then, BIKER ranks them by considering the query’s similarity with both Stack Overflow posts and API documentation. Finally, to help developers better understand why each API is recommended and how to use them in practice, BIKER summarizes and presents supplementary information (e.g., API description, code examples in Stack Overflow posts) for each recommended API. Our quantitative evaluation and user study demonstrate that BIKER can help developers find appropriate APIs more efficiently and precisely. 
@InProceedings{ESEC/FSE19p1075, author = {Liang Cai and Haoye Wang and Qiao Huang and Xin Xia and Zhenchang Xing and David Lo}, title = {BIKER: A Tool for Bi-Information Source Based API Method Recommendation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1075--1079}, doi = {10.1145/3338906.3341174}, year = {2019}, } Publisher's Version |
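The first stage shared by both tools above, retrieving similar Stack Overflow posts for a natural-language query, is classic information retrieval. A minimal TF-IDF cosine-similarity sketch follows; the toy corpus and query are invented, and the real tools use far richer relevance models:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Bag-of-words TF-IDF vectors for a tiny corpus."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(docs)
    return [
        {w: tf * math.log(n / df[w]) for w, tf in Counter(toks).items()}
        for toks in tokenized
    ]

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical question titles standing in for Stack Overflow posts.
posts = [
    "how to read a file line by line in java",
    "how to sort a list of integers in java",
    "connect to a mysql database from python",
]
query = "read file line by line"
vecs = tfidf_vectors(posts + [query])
scores = [cosine(vecs[-1], v) for v in vecs[:-1]]
best = scores.index(max(scores))
print(posts[best])  # → "how to read a file line by line in java"
```

Ranking candidate posts (or candidate APIs extracted from them) by such a similarity score is the retrieval backbone; the selection and summarization stages then operate on the top-ranked results.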
|
Huang, Xin |
ESEC/FSE '19: "Ethnographic Research in Software ..."
Ethnographic Research in Software Engineering: A Critical Review and Checklist
He Zhang, Xin Huang, Xin Zhou, Huang Huang, and Muhammad Ali Babar (Nanjing University, China; University of Adelaide, Australia) The Software Engineering (SE) community has recently been investing a significant amount of effort in qualitative research to study the human and social aspects of SE processes, practices, and technologies. Ethnography is one of the major qualitative research methods, which is based on a constructivist paradigm that is different from the hypothetic-deductive research model usually used in SE. Hence, the adoption of the ethnographic research method in SE can present significant challenges in terms of sufficient understanding of the methodological requirements and the logistics of its applications. It is important to systematically identify and understand various aspects of adopting ethnography in SE and provide effective guidance. We carried out an empirical inquiry by integrating a systematic literature review and a confirmatory survey. By reviewing the ethnographic studies reported in 111 identified papers and 26 doctoral theses and analyzing the authors' responses for 29 of those papers, we revealed several unique insights. These identified insights were then transformed into a preliminary checklist that helps improve the state-of-the-practice of using ethnography in SE. This study also identifies the areas where methodological improvements of ethnography are needed in SE. @InProceedings{ESEC/FSE19p659, author = {He Zhang and Xin Huang and Xin Zhou and Huang Huang and Muhammad Ali Babar}, title = {Ethnographic Research in Software Engineering: A Critical Review and Checklist}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {659--670}, doi = {10.1145/3338906.3338976}, year = {2019}, } Publisher's Version |
|
Huang, Zixin |
ESEC/FSE '19: "Storm: Program Reduction for ..."
Storm: Program Reduction for Testing and Debugging Probabilistic Programming Systems
Saikat Dutta, Wenxian Zhang, Zixin Huang, and Sasa Misailovic (University of Illinois at Urbana-Champaign, USA) Probabilistic programming languages offer an intuitive way to model uncertainty by representing complex probability models as simple probabilistic programs. Probabilistic programming systems (PP systems) hide the complexity of inference algorithms away from the program developer. Unfortunately, if a failure occurs during the run of a PP system, a developer typically has very little support in finding the part of the probabilistic program that causes the failure in the system. This paper presents Storm, a novel general framework for reducing probabilistic programs. Given a probabilistic program (with associated data and inference arguments) that causes a failure in a PP system, Storm finds a smaller version of the program, data, and arguments that cause the same failure. Storm leverages both generic code and data transformations from compiler testing and domain-specific, probabilistic transformations. The paper presents new transformations that reduce the complexity of statements and expressions, reduce data size, and simplify inference arguments (e.g., the number of iterations of the inference algorithm). We evaluated Storm on 47 programs that caused failures in two popular probabilistic programming systems, Stan and Pyro. Our experimental results show Storm’s effectiveness. For Stan, our minimized programs have 49% less code, 67% less data, and 96% fewer iterations. For Pyro, our minimized programs have 58% less code, 96% less data, and 99% fewer iterations. We also show the benefits of Storm when debugging probabilistic programs. 
@InProceedings{ESEC/FSE19p729, author = {Saikat Dutta and Wenxian Zhang and Zixin Huang and Sasa Misailovic}, title = {Storm: Program Reduction for Testing and Debugging Probabilistic Programming Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {729--739}, doi = {10.1145/3338906.3338972}, year = {2019}, } Publisher's Version |
|
Huijgens, Hennie |
ESEC/FSE '19: "Releasing Fast and Slow: An ..."
Releasing Fast and Slow: An Exploratory Case Study at ING
Elvan Kula, Ayushi Rastogi, Hennie Huijgens, Arie van Deursen, and Georgios Gousios (Delft University of Technology, Netherlands; ING Bank, Netherlands) The appeal of delivering new features faster has led many software projects to adopt rapid releases. However, it is not well understood what the effects of this practice are. This paper presents an exploratory case study of rapid releases at ING, a large banking company that develops software solutions in-house, to characterize rapid releases. Since 2011, ING has shifted to a rapid release model. This switch has resulted in a mixed environment of 611 teams releasing relatively fast and slow. We followed a mixed-methods approach in which we conducted a survey with 461 participants and corroborated their perceptions with 2 years of code quality data and 1 year of release delay data. Our research shows that: rapid releases are more commonly delayed than their non-rapid counterparts, but their delays are shorter; rapid releases can be beneficial in terms of reviewing and user-perceived quality; rapidly released software tends to have higher code churn, higher test coverage, and lower average complexity; challenges in rapid releases are related to managing dependencies and certain code aspects, e.g., design debt. @InProceedings{ESEC/FSE19p785, author = {Elvan Kula and Ayushi Rastogi and Hennie Huijgens and Arie van Deursen and Georgios Gousios}, title = {Releasing Fast and Slow: An Exploratory Case Study at ING}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {785--795}, doi = {10.1145/3338906.3338978}, year = {2019}, } Publisher's Version |
|
Huynh, Tri |
ESEC/FSE '19: "Generating Effective Test ..."
Generating Effective Test Cases for Self-Driving Cars from Police Reports
Alessio Gambi, Tri Huynh, and Gordon Fraser (University of Passau, Germany; Saarland University, Germany; CISPA, Germany) Autonomous driving carries the promise to drastically reduce the number of car accidents; however, recently reported fatal crashes involving self-driving cars show that such an important goal is not yet achieved. This calls for better testing of the software controlling self-driving cars, which is difficult because it requires producing challenging driving scenarios. To better test self-driving car software, we propose to specifically test car crash scenarios, which are critical par excellence. Since real car crashes are difficult to test in field operation, we recreate them as physically accurate simulations in an environment that can be used for testing self-driving car software. To cope with the scarcity of sensory data collected during real car crashes, which does not enable a full reproduction, we extract the information needed to recreate real car crashes from the police reports which document them. Our extensive evaluation, consisting of a user study involving 34 participants and a quantitative analysis of the quality of the generated tests, shows that we can generate accurate simulations of car crashes in a matter of minutes. Compared to tests which implement non-critical driving scenarios, our tests effectively stressed the test subject in different ways and exposed several shortcomings in its implementation. @InProceedings{ESEC/FSE19p257, author = {Alessio Gambi and Tri Huynh and Gordon Fraser}, title = {Generating Effective Test Cases for Self-Driving Cars from Police Reports}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {257--267}, doi = {10.1145/3338906.3338942}, year = {2019}, } Publisher's Version |
|
Ihara, Akinori |
ESEC/FSE '19: "The Review Linkage Graph for ..."
The Review Linkage Graph for Code Review Analytics: A Recovery Approach and Empirical Study
Toshiki Hirao, Shane McIntosh, Akinori Ihara, and Kenichi Matsumoto (NAIST, Japan; McGill University, Canada; Wakayama University, Japan) Modern Code Review (MCR) is a pillar of contemporary quality assurance approaches, where developers discuss and improve code changes prior to integration. Since review interactions (e.g., comments, revisions) are archived, analytics approaches like reviewer recommendation and review outcome prediction have been proposed to support the MCR process. These approaches assume that reviews evolve and are adjudicated independently; yet in practice, reviews can be interdependent. In this paper, we set out to better understand the impact of review linkage on code review analytics. To do so, we extract review linkage graphs where nodes represent reviews, while edges represent recovered links between reviews. Through a quantitative analysis of six software communities, we observe that (a) linked reviews occur regularly, with linked review rates of 25% in OpenStack, 17% in Chromium, and 3%–8% in Android, Qt, Eclipse, and Libreoffice; and (b) linkage has become more prevalent over time. Through qualitative analysis, we discover that links span 16 types that belong to five categories. To automate link category recovery, we train classifiers to label links according to the surrounding document content. Those classifiers achieve F1-scores of 0.71–0.79, at least doubling the F1-scores of a ZeroR baseline. Finally, we show that the F1-scores of reviewer recommenders can be improved by 37%–88% (5–14 percentage points) by incorporating information from linked reviews that is available at prediction time. Indeed, review linkage should be exploited by future code review analytics. 
@InProceedings{ESEC/FSE19p578, author = {Toshiki Hirao and Shane McIntosh and Akinori Ihara and Kenichi Matsumoto}, title = {The Review Linkage Graph for Code Review Analytics: A Recovery Approach and Empirical Study}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {578--589}, doi = {10.1145/3338906.3338949}, year = {2019}, } Publisher's Version |
|
Islam, Md Johirul |
ESEC/FSE '19: "A Comprehensive Study on Deep ..."
A Comprehensive Study on Deep Learning Bug Characteristics
Md Johirul Islam, Giang Nguyen, Rangeet Pan, and Hridesh Rajan (Iowa State University, USA) Deep learning has gained substantial popularity in recent years. Developers mainly rely on libraries and tools to add deep learning capabilities to their software. What kinds of bugs are frequently found in such software? What are the root causes of such bugs? What impacts do such bugs have? Which stages of the deep learning pipeline are more bug prone? Are there any antipatterns? Understanding such characteristics of bugs in deep learning software has the potential to foster the development of better deep learning platforms, debugging mechanisms, and development practices, and to encourage the development of analysis and verification frameworks. Therefore, we study 2716 high-quality posts from Stack Overflow and 500 bug fix commits from GitHub about five popular deep learning libraries (Caffe, Keras, Tensorflow, Theano, and Torch) to understand the types of bugs, their root causes and impacts, the bug-prone stages of the deep learning pipeline, and whether there are common antipatterns in this buggy software. The key findings of our study include: data bugs and logic bugs are the most severe bug types in deep learning software, appearing more than 48% of the time; the major root causes of these bugs are Incorrect Model Parameter (IPS) and Structural Inefficiency (SI), showing up more than 43% of the time. We have also found that the bugs in the usage of deep learning libraries have some common antipatterns. @InProceedings{ESEC/FSE19p510, author = {Md Johirul Islam and Giang Nguyen and Rangeet Pan and Hridesh Rajan}, title = {A Comprehensive Study on Deep Learning Bug Characteristics}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {510--520}, doi = {10.1145/3338906.3338955}, year = {2019}, } Publisher's Version |
|
Ivančić, Franjo |
ESEC/FSE '19: "FUDGE: Fuzz Driver Generation ..."
FUDGE: Fuzz Driver Generation at Scale
Domagoj Babić, Stefan Bucur, Yaohui Chen, Franjo Ivančić, Tim King, Markus Kusano, Caroline Lemieux, László Szekeres, and Wei Wang (Google, USA; Northeastern University, USA; University of California at Berkeley, USA) At Google we have found tens of thousands of security and robustness bugs by fuzzing C and C++ libraries. To fuzz a library, a fuzzer requires a fuzz driver—which exercises some library code—to which it can pass inputs. Unfortunately, writing fuzz drivers remains a primarily manual exercise, a major hindrance to the widespread adoption of fuzzing. In this paper, we address this major hindrance by introducing the Fudge system for automated fuzz driver generation. Fudge automatically generates fuzz driver candidates for libraries based on existing client code. We have used Fudge to generate thousands of new drivers for a wide variety of libraries. Each generated driver includes a synthesized C/C++ program and a corresponding build script, and is automatically analyzed for quality. Developers have integrated over 200 of these generated drivers into continuous fuzzing services and have committed to address reported security bugs. Further, several of these fuzz drivers have been upstreamed to open source projects and integrated into the OSS-Fuzz fuzzing infrastructure. Running these fuzz drivers has resulted in over 150 bug fixes, including the elimination of numerous exploitable security vulnerabilities. @InProceedings{ESEC/FSE19p975, author = {Domagoj Babić and Stefan Bucur and Yaohui Chen and Franjo Ivančić and Tim King and Markus Kusano and Caroline Lemieux and László Szekeres and Wei Wang}, title = {FUDGE: Fuzz Driver Generation at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {975--985}, doi = {10.1145/3338906.3340456}, year = {2019}, } Publisher's Version |
|
Ivanković, Marko |
ESEC/FSE '19: "Code Coverage at Google ..."
Code Coverage at Google
Marko Ivanković, Goran Petrović, René Just, and Gordon Fraser (Google, Switzerland; University of Washington, USA; University of Passau, Germany) Code coverage is a measure of the degree to which a test suite exercises a software system. Although coverage is well established in software engineering research, deployment in industry is often inhibited by the perceived usefulness and the computational costs of analyzing coverage at scale. At Google, coverage information is computed for one billion lines of code daily, for seven programming languages. A key aspect of making coverage information actionable is to apply it at the level of changesets and code review. This paper describes Google’s code coverage infrastructure and how the computed code coverage information is visualized and used. It also describes the challenges and solutions for adopting code coverage at scale. To study how code coverage is adopted and perceived by developers, this paper analyzes adoption rates, error rates, and average code coverage ratios over a five-year period, and it reports on 512 responses, received from surveying 3000 developers. Finally, this paper provides concrete suggestions for how to implement and use code coverage in an industrial setting. @InProceedings{ESEC/FSE19p955, author = {Marko Ivanković and Goran Petrović and René Just and Gordon Fraser}, title = {Code Coverage at Google}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {955--963}, doi = {10.1145/3338906.3340459}, year = {2019}, } Publisher's Version |
|
Jayaraman, Ilan |
ESEC/FSE '19: "Bridging the Gap between ML ..."
Bridging the Gap between ML Solutions and Their Business Requirements using Feature Interactions
Guy Barash, Eitan Farchi, Ilan Jayaraman, Orna Raz, Rachel Tzoref-Brill, and Marcel Zalmanovici (Western Digital, Israel; IBM Research, Israel; IBM, India) Machine Learning (ML) based solutions are becoming increasingly popular and pervasive. When testing such solutions, there is a tendency to focus on improving the ML metrics such as the F1-score and accuracy at the expense of ensuring business value and correctness by covering business requirements. In this work, we adapt test planning methods of classical software to ML solutions. We use combinatorial modeling methodology to define the space of business requirements and map it to the ML solution data, and use the notion of data slices to identify the weaker areas of the ML solution and strengthen them. We apply our approach to three real-world case studies and demonstrate its value. @InProceedings{ESEC/FSE19p1048, author = {Guy Barash and Eitan Farchi and Ilan Jayaraman and Orna Raz and Rachel Tzoref-Brill and Marcel Zalmanovici}, title = {Bridging the Gap between ML Solutions and Their Business Requirements using Feature Interactions}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1048--1058}, doi = {10.1145/3338906.3340442}, year = {2019}, } Publisher's Version |
|
Jiang, Lingxiao |
ESEC/FSE '19: "SAR: Learning Cross-Language ..."
SAR: Learning Cross-Language API Mappings with Little Knowledge
Nghi D. Q. Bui, Yijun Yu, and Lingxiao Jiang (Singapore Management University, Singapore; Open University, UK) To save effort, developers often translate programs from one programming language to another, instead of implementing them from scratch. Translating the application programming interfaces (APIs) used in one language to functionally equivalent ones available in another language is an important aspect of program translation. Existing approaches facilitate the translation by automatically identifying the API mappings across programming languages. However, these approaches still require a large amount of parallel corpora, ranging from pairs of APIs or code fragments that are functionally equivalent to similar code comments. To minimize the need for parallel corpora, this paper aims at an automated approach that can map APIs across languages with much less a priori knowledge than other approaches. The approach is based on a realization of the notion of domain adaptation, combined with code embedding, to better align two vector spaces. Taking as input large sets of programs, our approach first generates numeric vector representations of the programs (including the APIs used in each language), and it adapts generative adversarial networks (GAN) to align the vectors in the different spaces of the two languages. For a better alignment, we initialize the GAN with parameters derived from API mapping seeds that can be identified accurately with a simple automatic signature-based matching heuristic. Then the cross-language API mappings can be identified via nearest-neighbor queries in the aligned vector spaces. We have implemented the approach (SAR, named after the three main technical components of the approach) in a prototype for mapping APIs across Java and C# programs.
Our evaluation on about 2 million Java files and 1 million C# files shows that the approach can achieve 54% and 82% mapping accuracy in its top-1 and top-10 API mapping results with only 174 automatically identified seeds, more accurate than other approaches using the same or many more mapping seeds. @InProceedings{ESEC/FSE19p796, author = {Nghi D. Q. Bui and Yijun Yu and Lingxiao Jiang}, title = {SAR: Learning Cross-Language API Mappings with Little Knowledge}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {796--806}, doi = {10.1145/3338906.3338924}, year = {2019}, } Publisher's Version Info Artifacts Reusable |
|
Jiang, Yanjie |
ESEC/FSE '19: "Semantic Relation Based Expansion ..."
Semantic Relation Based Expansion of Abbreviations
Yanjie Jiang, Hui Liu, and Lu Zhang (Beijing Institute of Technology, China; Peking University, China) Identifiers account for 70% of source code in terms of characters, and thus the quality of such identifiers is critical for program comprehension and software maintenance. For various reasons, however, many identifiers contain abbreviations, which reduces the readability and maintainability of source code. To this end, a number of approaches have been proposed to expand abbreviations in identifiers. However, such approaches are either inaccurate or confined to specific identifiers. In this paper, we therefore propose a generic and accurate approach to expand identifier abbreviations. The key insight of the approach is that abbreviations in the name of a software entity e have a great chance to find their full terms in the names of software entities that are semantically related to e. Consequently, the proposed approach builds a knowledge graph to represent such entities and their relationships with e, and searches the graph for full terms. The optimal searching strategy for the graph can be learned automatically from a corpus of manually expanded abbreviations. We evaluate the proposed approach on nine well-known open-source projects. Results of our k-fold evaluation suggest that the proposed approach improves the state of the art. It improves precision significantly, from 29% to 85%, and recall from 29% to 77%. Evaluation results also suggest that the proposed generic approach is even better than the state-of-the-art parameter-specific approach in expanding parameter abbreviations, improving F1 score significantly from 75% to 87%. @InProceedings{ESEC/FSE19p131, author = {Yanjie Jiang and Hui Liu and Lu Zhang}, title = {Semantic Relation Based Expansion of Abbreviations}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {131--141}, doi = {10.1145/3338906.3338929}, year = {2019}, } Publisher's Version |
|
Jiang, Yu |
ESEC/FSE '19: "EVMFuzzer: Detect EVM Vulnerabilities ..."
EVMFuzzer: Detect EVM Vulnerabilities via Fuzz Testing
Ying Fu, Meng Ren, Fuchen Ma, Heyuan Shi, Xin Yang, Yu Jiang, Huizhong Li, and Xiang Shi (Tsinghua University, China; WeBank, China) The Ethereum Virtual Machine (EVM) is the run-time environment for smart contracts, and its vulnerabilities may lead to serious problems for the Ethereum ecosystem. While many techniques are being continuously developed for the validation of smart contracts, the testing of the EVM itself remains challenging because of the special test input format and the absence of oracles. In this paper, we propose EVMFuzzer, the first tool that uses a differential fuzzing technique to detect vulnerabilities of the EVM. The core idea is to continuously generate seed contracts and feed them to the target EVM and the benchmark EVMs, so as to find as many inconsistencies among execution results as possible, and eventually discover vulnerabilities with output cross-referencing. Given a target EVM and its APIs, EVMFuzzer generates seed contracts via a set of predefined mutators, and then employs a dynamic priority scheduling algorithm to guide seed contract selection and maximize the inconsistency. Finally, EVMFuzzer leverages benchmark EVMs as cross-referencing oracles to avoid manual checking. With EVMFuzzer, we have found several previously unknown security bugs in four widely used EVMs, 5 of which have been assigned Common Vulnerabilities and Exposures (CVE) IDs in the U.S. National Vulnerability Database. The video is presented at https://youtu.be/9Lejgf2GSOk. @InProceedings{ESEC/FSE19p1110, author = {Ying Fu and Meng Ren and Fuchen Ma and Heyuan Shi and Xin Yang and Yu Jiang and Huizhong Li and Xiang Shi}, title = {EVMFuzzer: Detect EVM Vulnerabilities via Fuzz Testing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1110--1114}, doi = {10.1145/3338906.3341175}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Industry Practice of Coverage-Guided ..."
Industry Practice of Coverage-Guided Enterprise Linux Kernel Fuzzing Heyuan Shi, Runzhe Wang, Ying Fu, Mingzhe Wang, Xiaohai Shi, Xun Jiao, Houbing Song, Yu Jiang, and Jiaguang Sun (Tsinghua University, China; Alibaba Group, China; Villanova University, USA; Embry-Riddle Aeronautical University, USA) Coverage-guided kernel fuzzing is a widely used technique that has helped kernel developers and testers discover numerous vulnerabilities. However, due to the high complexity of application and hardware environments, there has been little study on deploying fuzzing to enterprise-level Linux kernels. In this paper, collaborating with the enterprise developers, we present the industry practice of deploying kernel fuzzing on four different enterprise Linux distributions that are responsible for internal business and external services of the company. We have addressed the following outstanding challenges when deploying a popular kernel fuzzer, syzkaller, to these enterprise Linux distributions: coverage support absence, kernel configuration inconsistency, bugs in shallow paths, and continuous fuzzing complexity. This led to the detection of 41 reproducible bugs that were previously unknown in these enterprise Linux kernels and 6 bugs with CVE IDs in the U.S. National Vulnerability Database, including flaws that cause general protection faults, deadlocks, and use-after-free errors. @InProceedings{ESEC/FSE19p986, author = {Heyuan Shi and Runzhe Wang and Ying Fu and Mingzhe Wang and Xiaohai Shi and Xun Jiao and Houbing Song and Yu Jiang and Jiaguang Sun}, title = {Industry Practice of Coverage-Guided Enterprise Linux Kernel Fuzzing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {986--995}, doi = {10.1145/3338906.3340460}, year = {2019}, } Publisher's Version |
|
Jiao, Xun |
ESEC/FSE '19: "Industry Practice of Coverage-Guided ..."
Industry Practice of Coverage-Guided Enterprise Linux Kernel Fuzzing
Heyuan Shi, Runzhe Wang, Ying Fu, Mingzhe Wang, Xiaohai Shi, Xun Jiao, Houbing Song, Yu Jiang, and Jiaguang Sun (Tsinghua University, China; Alibaba Group, China; Villanova University, USA; Embry-Riddle Aeronautical University, USA) Coverage-guided kernel fuzzing is a widely used technique that has helped kernel developers and testers discover numerous vulnerabilities. However, due to the high complexity of application and hardware environments, there has been little study on deploying fuzzing to enterprise-level Linux kernels. In this paper, collaborating with the enterprise developers, we present the industry practice of deploying kernel fuzzing on four different enterprise Linux distributions that are responsible for internal business and external services of the company. We have addressed the following outstanding challenges when deploying a popular kernel fuzzer, syzkaller, to these enterprise Linux distributions: coverage support absence, kernel configuration inconsistency, bugs in shallow paths, and continuous fuzzing complexity. This led to the detection of 41 reproducible bugs that were previously unknown in these enterprise Linux kernels and 6 bugs with CVE IDs in the U.S. National Vulnerability Database, including flaws that cause general protection faults, deadlocks, and use-after-free errors. @InProceedings{ESEC/FSE19p986, author = {Heyuan Shi and Runzhe Wang and Ying Fu and Mingzhe Wang and Xiaohai Shi and Xun Jiao and Houbing Song and Yu Jiang and Jiaguang Sun}, title = {Industry Practice of Coverage-Guided Enterprise Linux Kernel Fuzzing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {986--995}, doi = {10.1145/3338906.3340460}, year = {2019}, } Publisher's Version |
|
Jia, Zhouyang |
ESEC/FSE '19: "Automatically Detecting Missing ..."
Automatically Detecting Missing Cleanup for Ungraceful Exits
Zhouyang Jia, Shanshan Li, Tingting Yu, Xiangke Liao, and Ji Wang (National University of Defense Technology, China; University of Kentucky, USA) Software encounters ungraceful exits due to either bugs in the interrupt/signal handler code or the intention of developers to debug the software. Users may suffer from “weird” problems caused by leftovers of the ungraceful exits. A common practice to fix these problems is rebooting, which wipes away the stale state of the software. This solution, however, is heavyweight and often leads to poor user experience because it requires restarting other normal processes. In this paper, we design SafeExit, a tool that can automatically detect and pinpoint the root causes of the problems caused by ungraceful exits, which can help users fix the problems using lightweight solutions. Specifically, SafeExit checks the program exit behaviors in the case of an interrupted execution against its expected exit behaviors to detect the missing cleanup behaviors required for avoiding the ungraceful exit. The expected behaviors are obtained by monitoring the program exit under a normal execution. We apply SafeExit to 38 programs across 10 domains. SafeExit finds 133 types of cleanup behaviors from 36 programs and detects 2861 missing behaviors from 292 interrupted executions. To predict missing behaviors for unseen input scenarios, SafeExit trains prediction models using a set of sampled input scenarios. The results show that SafeExit is accurate with an average F-measure of 92.5%. @InProceedings{ESEC/FSE19p751, author = {Zhouyang Jia and Shanshan Li and Tingting Yu and Xiangke Liao and Ji Wang}, title = {Automatically Detecting Missing Cleanup for Ungraceful Exits}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {751--762}, doi = {10.1145/3338906.3338938}, year = {2019}, } Publisher's Version |
|
Ji, Chao |
ESEC/FSE '19: "Latent Error Prediction and ..."
Latent Error Prediction and Fault Localization for Microservice Applications by Learning from System Trace Logs
Xiang Zhou, Xin Peng, Tao Xie, Jun Sun, Chao Ji, Dewei Liu, Qilin Xiang, and Chuan He (Fudan University, China; University of Illinois at Urbana-Champaign, USA; Singapore Management University, Singapore) In the production environment, a large part of microservice failures are related to the complex and dynamic interactions and runtime environments, such as those related to multiple instances, environmental configurations, and asynchronous interactions of microservices. Due to the complexity and dynamism of these failures, it is often hard to reproduce and diagnose them in testing environments. It is desirable, yet still challenging, that these failures can be detected and the faults located at runtime in the production environment to allow developers to resolve them efficiently. To address this challenge, in this paper, we propose MEPFL, an approach to latent error prediction and fault localization for microservice applications by learning from system trace logs. Based on a set of features defined on the system trace logs, MEPFL trains prediction models at both the trace level and the microservice level using the system trace logs collected from automatic executions of the target application and its faulty versions produced by fault injection. The prediction models thus can be used in the production environment to predict latent errors, faulty microservices, and fault types for trace instances captured at runtime. We implement MEPFL based on the infrastructure systems of a container orchestrator and a service mesh, and conduct a series of experimental studies with two open-source microservice applications (one of them being, to the best of our knowledge, the largest open-source microservice application). The results indicate that MEPFL can achieve high accuracy in intra-application prediction of latent errors, faulty microservices, and fault types, and outperforms a state-of-the-art approach to failure diagnosis for distributed systems.
The results also show that MEPFL can effectively predict latent errors caused by real-world fault cases. @InProceedings{ESEC/FSE19p683, author = {Xiang Zhou and Xin Peng and Tao Xie and Jun Sun and Chao Ji and Dewei Liu and Qilin Xiang and Chuan He}, title = {Latent Error Prediction and Fault Localization for Microservice Applications by Learning from System Trace Logs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {683--694}, doi = {10.1145/3338906.3338961}, year = {2019}, } Publisher's Version |
|
Jimenez, Matthieu |
ESEC/FSE '19: "The Importance of Accounting ..."
The Importance of Accounting for Real-World Labelling When Predicting Software Vulnerabilities
Matthieu Jimenez, Renaud Rwemalika, Mike Papadakis, Federica Sarro, Yves Le Traon, and Mark Harman (University of Luxembourg, Luxembourg; University College London, UK; Facebook, UK) Previous work on vulnerability prediction assumes that predictive models are trained with respect to perfect labelling information (including labels from future, as yet undiscovered vulnerabilities). In this paper we present results from a comprehensive empirical study of 1,898 real-world vulnerabilities reported in 74 releases of three security-critical open source systems (Linux Kernel, OpenSSL and Wireshark). Our study investigates the effectiveness of three previously proposed vulnerability prediction approaches, in two settings: with and without the unrealistic labelling assumption. The results reveal that the unrealistic labelling assumption can profoundly mislead the scientific conclusions drawn: seemingly highly effective and deployable prediction results vanish when we fully account for realistically available labelling in the experimental methodology. More precisely, mean MCC values of predictive effectiveness drop from 0.77, 0.65 and 0.43 to 0.08, 0.22, 0.10 for Linux Kernel, OpenSSL and Wireshark, respectively. Similar results are also obtained for precision, recall and other assessments of predictive efficacy. The community therefore needs to upgrade the experimental and empirical methodology for vulnerability prediction evaluation and development to ensure robust and actionable scientific findings. @InProceedings{ESEC/FSE19p695, author = {Matthieu Jimenez and Renaud Rwemalika and Mike Papadakis and Federica Sarro and Yves Le Traon and Mark Harman}, title = {The Importance of Accounting for Real-World Labelling When Predicting Software Vulnerabilities}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {695--705}, doi = {10.1145/3338906.3338941}, year = {2019}, } Publisher's Version |
|
Jin, Tiancheng |
ESEC/FSE '19: "FinExpert: Domain-Specific ..."
FinExpert: Domain-Specific Test Generation for FinTech Systems
Tiancheng Jin, Qingshun Wang, Lihua Xu, Chunmei Pan, Liang Dou, Haifeng Qian, Liang He, and Tao Xie (East China Normal University, China; New York University Shanghai, China; CFETS Information Technology, China; University of Illinois at Urbana-Champaign, USA) To assure high quality of software systems, the comprehensiveness of the created test suite and the efficiency of the adopted testing process are crucial, especially in the FinTech industry, due to a FinTech system’s complicated system logic, mission-critical nature, and large test suite. However, the state of the testing practice in the FinTech industry still heavily relies on manual efforts. Our recent research efforts contributed our previous approach as the first attempt to automate the testing process in China Foreign Exchange Trade System (CFETS) Information Technology Co. Ltd., a subsidiary of China’s Central Bank that provides China’s foreign exchange transactions, and revealed that automating test generation for such a complex trading platform could help alleviate some of these manual efforts. In this paper, we investigate further the dilemmas faced in testing the CFETS trading platform, identify the importance of domain knowledge in its testing process, and propose a new approach of domain-specific test generation to further improve the effectiveness and efficiency of our previous approach in industrial settings. We also present findings of our empirical studies of conducting domain-specific testing on subsystems of the CFETS Trading Platform. @InProceedings{ESEC/FSE19p853, author = {Tiancheng Jin and Qingshun Wang and Lihua Xu and Chunmei Pan and Liang Dou and Haifeng Qian and Liang He and Tao Xie}, title = {FinExpert: Domain-Specific Test Generation for FinTech Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {853--862}, doi = {10.1145/3338906.3340441}, year = {2019}, } Publisher's Version |
|
Johnson, Evan |
ESEC/FSE '19: "REINAM: Reinforcement Learning ..."
REINAM: Reinforcement Learning for Input-Grammar Inference
Zhengkai Wu, Evan Johnson, Wei Yang, Osbert Bastani, Dawn Song, Jian Peng, and Tao Xie (University of Illinois at Urbana-Champaign, USA; University of Texas at Dallas, USA; University of Pennsylvania, USA; University of California at Berkeley, USA) Program input grammars (i.e., grammars encoding the language of valid program inputs) facilitate a wide range of applications in software engineering such as symbolic execution and delta debugging. Grammars synthesized by existing approaches can cover only a small part of the valid input space, mainly due to unanalyzable code (e.g., native code) in programs and the lack of high-quality and high-variety seed inputs. To address these challenges, we present REINAM, a reinforcement-learning approach for synthesizing probabilistic context-free program input grammars without any seed inputs. REINAM uses an industrial symbolic execution engine to generate an initial set of inputs for the given target program, and then uses an iterative process of grammar generalization to proactively generate additional inputs to infer grammars generalized from these initial seed inputs. To efficiently search for target generalizations in a huge search space of candidate generalization operators, REINAM includes a novel formulation of the search problem as a reinforcement learning problem. Our evaluation on eleven real-world benchmarks shows that REINAM outperforms an existing state-of-the-art approach on precision and recall of synthesized grammars, and fuzz testing based on REINAM substantially increases the coverage of the space of valid inputs. REINAM is able to synthesize a grammar covering the entire valid input space for some benchmarks without decreasing the accuracy of the grammar. 
@InProceedings{ESEC/FSE19p488, author = {Zhengkai Wu and Evan Johnson and Wei Yang and Osbert Bastani and Dawn Song and Jian Peng and Tao Xie}, title = {REINAM: Reinforcement Learning for Input-Grammar Inference}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {488--498}, doi = {10.1145/3338906.3338958}, year = {2019}, } Publisher's Version Info |
|
Johnston, Emily |
ESEC/FSE '19: "DeepDelta: Learning to Repair ..."
DeepDelta: Learning to Repair Compilation Errors
Ali Mesbah, Andrew Rice, Emily Johnston, Nick Glorioso, and Edward Aftandilian (University of British Columbia, Canada; University of Cambridge, UK; Google, UK; Google, USA) Programmers spend a substantial amount of time manually repairing code that does not compile. We observe that the repairs for any particular error class typically follow a pattern and are highly mechanical. We propose a novel approach that automatically learns these patterns with a deep neural network and suggests program repairs for the most costly classes of build-time compilation failures. We describe how we collect all build errors and the human-authored, in-progress code changes that cause those failing builds to transition to successful builds at Google. We generate an AST diff from the textual code changes and transform it into a domain-specific language called Delta that encodes the change that must be made to make the code compile. We then feed the compiler diagnostic information (as source) and the Delta changes that resolved the diagnostic (as target) into a Neural Machine Translation network for training. For the two most prevalent and costly classes of Java compilation errors, namely missing symbols and mismatched method signatures, our system called DeepDelta, generates the correct repair changes for 19,314 out of 38,788 (50%) of unseen compilation errors. The correct changes are in the top three suggested fixes 86% of the time on average. @InProceedings{ESEC/FSE19p925, author = {Ali Mesbah and Andrew Rice and Emily Johnston and Nick Glorioso and Edward Aftandilian}, title = {DeepDelta: Learning to Repair Compilation Errors}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {925--936}, doi = {10.1145/3338906.3340455}, year = {2019}, } Publisher's Version |
|
Ju, An |
ESEC/FSE '19: "Eagle: A Team Practices Audit ..."
Eagle: A Team Practices Audit Framework for Agile Software Development
Alejandro Guerrero, Rafael Fresno, An Ju, Armando Fox, Pablo Fernandez, Carlos Muller, and Antonio Ruiz-Cortés (University of Seville, Spain; University of California at Berkeley, USA) Agile/XP (Extreme Programming) software teams are expected to follow a number of specific practices in each iteration, such as estimating the effort (”points”) required to complete user stories, properly using branches and pull requests to coordinate merging multiple contributors’ code, having frequent ”standups” to keep all team members in sync, and conducting retrospectives to identify areas of improvement for future iterations. We combine two observations in developing a methodology and tools to help teams monitor their performance on these practices. On the one hand, many Agile practices are increasingly supported by web-based tools whose ”data exhaust” can provide insight into how closely the teams are following the practices. On the other hand, some of the practices can be expressed in terms similar to those developed for expressing service level objectives (SLO) in software as a service; as an example, a typical SLO for an interactive Web site might be ”over any 5-minute window, 99% of requests to the main page must be delivered within 200ms” and, analogously, a potential Team Practice (TP) for an Agile/XP team might be ”over any 2-week iteration, 75% of stories should be ’1-point’ stories”. Following this similarity, we adapt a system originally developed for monitoring and visualizing service level agreement (SLA) compliance to monitor selected TPs for Agile/XP software teams. Specifically, the system consumes and analyzes the data exhaust from widely-used tools such as GitHub and Pivotal Tracker and provides team(s) and coach(es) a ”dashboard” summarizing the teams’ adherence to various practices. As a qualitative initial investigation of its usefulness, we deployed it to twenty student teams in a four-sprint software engineering project course. 
We find improved adherence to team practices and more positive student self-evaluations of their team practices when using the tool, compared to previous experiences using an Agile/XP methodology. The demo video is located at https://youtu.be/A4xwJMEQh9c and a landing page with a live demo at https://isa-group.github.io/2019-05-eagle-demo/. @InProceedings{ESEC/FSE19p1139, author = {Alejandro Guerrero and Rafael Fresno and An Ju and Armando Fox and Pablo Fernandez and Carlos Muller and Antonio Ruiz-Cortés}, title = {Eagle: A Team Practices Audit Framework for Agile Software Development}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1139--1143}, doi = {10.1145/3338906.3341181}, year = {2019}, } Publisher's Version Video Info |
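The SLO-style team practice quoted in this abstract ("over any 2-week iteration, 75% of stories should be '1-point' stories") can be checked mechanically from tracker data. A minimal sketch with invented story points, not Eagle's actual implementation:

```python
# Hypothetical story points pulled from a tracker for one iteration.
story_points = [1, 1, 1, 2, 1, 1, 3, 1]

def meets_practice(points, target_value=1, threshold=0.75):
    """True if at least `threshold` of the stories have `target_value` points."""
    if not points:
        return False
    fraction = sum(1 for p in points if p == target_value) / len(points)
    return fraction >= threshold

meets_practice(story_points)  # 6 of 8 stories are 1-point: 0.75 -> True
```

The same shape (a windowed fraction compared against a threshold) is what makes SLA-monitoring machinery reusable for team practices.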
|
Just, René |
ESEC/FSE '19: "Code Coverage at Google ..."
Code Coverage at Google
Marko Ivanković, Goran Petrović, René Just, and Gordon Fraser (Google, Switzerland; University of Washington, USA; University of Passau, Germany) Code coverage is a measure of the degree to which a test suite exercises a software system. Although coverage is well established in software engineering research, deployment in industry is often inhibited by the perceived usefulness and the computational costs of analyzing coverage at scale. At Google, coverage information is computed for one billion lines of code daily, for seven programming languages. A key aspect of making coverage information actionable is to apply it at the level of changesets and code review. This paper describes Google’s code coverage infrastructure and how the computed code coverage information is visualized and used. It also describes the challenges and solutions for adopting code coverage at scale. To study how code coverage is adopted and perceived by developers, this paper analyzes adoption rates, error rates, and average code coverage ratios over a five-year period, and it reports on 512 responses, received from surveying 3000 developers. Finally, this paper provides concrete suggestions for how to implement and use code coverage in an industrial setting. @InProceedings{ESEC/FSE19p955, author = {Marko Ivanković and Goran Petrović and René Just and Gordon Fraser}, title = {Code Coverage at Google}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {955--963}, doi = {10.1145/3338906.3340459}, year = {2019}, } Publisher's Version |
|
Kalhauge, Christian Gram |
ESEC/FSE '19: "Binary Reduction of Dependency ..."
Binary Reduction of Dependency Graphs
Christian Gram Kalhauge and Jens Palsberg (University of California at Los Angeles, USA) Delta debugging is a technique for reducing a failure-inducing input to a small input that reveals the cause of the failure. This has been successful for a wide variety of inputs including C programs, XML data, and thread schedules. However, for input that has many internal dependencies, delta debugging scales poorly. Such input includes C#, Java, and Java bytecode and they have presented a major challenge for input reduction until now. In this paper, we show that the core challenge is a reduction problem for dependency graphs, and we present a general strategy for reducing such graphs. We combine this with a novel algorithm for reduction called Binary Reduction in a tool called J-Reduce for Java bytecode. Our experiments show that our tool is 12x faster and achieves more reduction than delta debugging on average. This enabled us to create and submit short bug reports for three Java bytecode decompilers. @InProceedings{ESEC/FSE19p556, author = {Christian Gram Kalhauge and Jens Palsberg}, title = {Binary Reduction of Dependency Graphs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {556--566}, doi = {10.1145/3338906.3338956}, year = {2019}, } Publisher's Version Artifacts Reusable |
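The core idea of reducing over a dependency graph (every candidate input must be closed under dependencies, so removing an element also removes everything that depends on it) can be sketched as follows. This is a simple greedy reduction for illustration, not the paper's Binary Reduction algorithm; the graph and failure predicate are invented:

```python
def dependents(x, deps):
    """Everything that transitively depends on x (including x itself)."""
    rev = {}
    for a, ds in deps.items():
        for d in ds:
            rev.setdefault(d, set()).add(a)
    out, work = set(), [x]
    while work:
        y = work.pop()
        for a in rev.get(y, ()):
            if a not in out:
                out.add(a)
                work.append(a)
    out.add(x)
    return out

def reduce_graph(universe, deps, fails):
    """Greedily drop each element (plus its dependents) while the failure persists."""
    kept = set(universe)
    for x in sorted(universe):
        if x not in kept:
            continue
        candidate = kept - dependents(x, deps)  # stays dependency-closed
        if fails(candidate):
            kept = candidate
    return kept

# A depends on B, B depends on C; the failure is caused by C alone.
deps = {"A": ["B"], "B": ["C"]}
minimal = reduce_graph({"A", "B", "C"}, deps, lambda s: "C" in s)
# minimal -> {"C"}
```

Because every candidate is closed under dependencies, the reducer never produces an ill-formed input such as bytecode that references a deleted class.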
|
Kannan, Kalapriya |
ESEC/FSE '19: "Design Diagrams as Ontological ..."
Design Diagrams as Ontological Source
Pranay Lohia, Kalapriya Kannan, Biplav Srivastava, and Sameep Mehta (IBM Research, India; IBM Research, USA) In custom software development projects, it is frequently the case that the same type of software is being built for different customers. The deliverables are similar because they address the same market (e.g., Telecom, Banking) or have similar functions or both. However, most organisations do not take advantage of this similarity and conduct each project from scratch, leading to lower margins and lower quality. Our key observation is that the similarity among the projects alludes to the existence of a veritable domain of discourse whose ontology, if created, would make the similarity across the projects explicit. Design diagrams are an integral part of any commercial software project deliverables as they document crucial facets of the software solution. We propose an approach to extract ontological information from UML design diagrams (class and sequence diagrams) and represent it as domain ontology in a convenient representation. This ontology not only helps in developing a better understanding of the domain but also fosters software reuse for future software projects in that domain. Initial results on extracting ontology from thousands of models from a public repository show that the created ontologies are accurate and help in better software reuse for new solutions. @InProceedings{ESEC/FSE19p863, author = {Pranay Lohia and Kalapriya Kannan and Biplav Srivastava and Sameep Mehta}, title = {Design Diagrams as Ontological Source}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {863--873}, doi = {10.1145/3338906.3340446}, year = {2019}, } Publisher's Version |
|
Kapus, Timotej |
ESEC/FSE '19: "A Segmented Memory Model for ..."
A Segmented Memory Model for Symbolic Execution
Timotej Kapus and Cristian Cadar (Imperial College London, UK) Symbolic execution is an effective technique for exploring paths in a program and reasoning about all possible values on those paths. However, the technique still struggles with code that uses complex heap data structures, in which a pointer is allowed to refer to more than one memory object. In such cases, symbolic execution typically forks execution into multiple states, one for each object to which the pointer could refer. In this paper, we propose a technique that avoids this expensive forking by using a segmented memory model. In this model, memory is split into segments, so that each symbolic pointer refers to objects in a single segment. The size of each segment is bounded by a threshold, in order to avoid expensive constraints. This results in a memory model where forking due to symbolic pointer dereferences is significantly reduced, often completely. We evaluate our segmented memory model on a mix of whole program benchmarks (such as m4 and make) and library benchmarks (such as SQLite), and observe significant decreases in execution time and memory usage. @InProceedings{ESEC/FSE19p774, author = {Timotej Kapus and Cristian Cadar}, title = {A Segmented Memory Model for Symbolic Execution}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {774--784}, doi = {10.1145/3338906.3338936}, year = {2019}, } Publisher's Version Artifacts Reusable |
|
Karlsson, Stefan |
ESEC/FSE '19: "Exploratory Test Agents for ..."
Exploratory Test Agents for Stateful Software Systems
Stefan Karlsson (ABB, Sweden; Mälardalen University, Sweden) The adequate testing of stateful software systems is a hard and costly activity. Failures that result from complex stateful interactions can be of high impact, and it can be hard to replicate failures resulting from erroneous stateful interactions. Addressing this problem in an automatic way would save cost and time and increase the quality of software systems in the industry. In this paper, we propose an approach that uses agents to explore software systems with the intention to find faults and gain knowledge. @InProceedings{ESEC/FSE19p1164, author = {Stefan Karlsson}, title = {Exploratory Test Agents for Stateful Software Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1164--1167}, doi = {10.1145/3338906.3341458}, year = {2019}, } Publisher's Version |
|
Kästner, Christian |
ESEC/FSE '19: "A Conceptual Replication of ..."
A Conceptual Replication of Continuous Integration Pain Points in the Context of Travis CI
David Gray Widder, Michael Hilton, Christian Kästner, and Bogdan Vasilescu (Carnegie Mellon University, USA) Continuous integration (CI) is an established software quality assurance practice, and the focus of much prior research with a diverse range of methods and populations. In this paper, we first conduct a literature review of 37 papers on CI pain points. We then conduct a conceptual replication study on results from these papers using a triangulation design consisting of a survey with 132 responses, 12 interviews, and two logistic regressions predicting Travis CI abandonment and switching on a dataset of 6,239 GitHub projects. We report and discuss which past results we were able to replicate, those for which we found conflicting evidence, those for which we did not find evidence, and the implications of these findings. @InProceedings{ESEC/FSE19p647, author = {David Gray Widder and Michael Hilton and Christian Kästner and Bogdan Vasilescu}, title = {A Conceptual Replication of Continuous Integration Pain Points in the Context of Travis CI}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {647--658}, doi = {10.1145/3338906.3338922}, year = {2019}, } Publisher's Version Info ESEC/FSE '19: "What the Fork: A Study of ..." What the Fork: A Study of Inefficient and Efficient Forking Practices in Social Coding Shurui Zhou, Bogdan Vasilescu, and Christian Kästner (Carnegie Mellon University, USA) Forking and pull requests have been widely used in open-source communities as a uniform development and contribution mechanism, giving developers the flexibility to modify their own fork without affecting others before attempting to contribute back. However, not all projects use forks efficiently; many experience lost and duplicate contributions and fragmented communities. In this paper, we explore how open-source projects on GitHub differ with regard to forking inefficiencies. 
First, we observed that different communities experience these inefficiencies to widely different degrees and interviewed practitioners to understand why. Then, using multiple regression modeling, we analyzed which context factors correlate with fewer inefficiencies.We found that better modularity and centralized management are associated with more contributions and a higher fraction of accepted pull requests, suggesting specific best practices that project maintainers can adopt to reduce forking-related inefficiencies in their communities. @InProceedings{ESEC/FSE19p350, author = {Shurui Zhou and Bogdan Vasilescu and Christian Kästner}, title = {What the Fork: A Study of Inefficient and Efficient Forking Practices in Social Coding}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {350--361}, doi = {10.1145/3338906.3338918}, year = {2019}, } Publisher's Version Info |
|
Khanve, Vaishali |
ESEC/FSE '19: "Are Existing Code Smells Relevant ..."
Are Existing Code Smells Relevant in Web Games? An Empirical Study
Vaishali Khanve (IIT Tirupati, India) In software applications, code smells are considered bad coding practices acquired at the time of development. The presence of such code smells in games may adversely affect the process of game development. Our preliminary study aims at investigating the existence of code smells in games. To achieve this, we used the JavaScript code smell detection tool JSNose against 361 JavaScript web games to find occurrences of JavaScript smells in games. Further, we conducted a manual study to find violations of known game programming patterns in 8 web games to verify the necessity of a game-specific code smell detection tool. Our results show that the existing JavaScript code smell detection tool is not sufficient to find game-specific code smells in web games. @InProceedings{ESEC/FSE19p1241, author = {Vaishali Khanve}, title = {Are Existing Code Smells Relevant in Web Games? An Empirical Study}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1241--1243}, doi = {10.1145/3338906.3342504}, year = {2019}, } Publisher's Version |
|
Khatchadourian, Raffi |
ESEC/FSE '19: "Going Big: A Large-Scale Study ..."
Going Big: A Large-Scale Study on What Big Data Developers Ask
Mehdi Bagherzadeh and Raffi Khatchadourian (Oakland University, USA; City University of New York, USA) Software developers are increasingly required to write big data code. However, they find big data software development challenging. To help these developers it is necessary to understand big data topics that they are interested in and the difficulty of finding answers for questions in these topics. In this work, we conduct a large-scale study on Stackoverflow to understand the interest and difficulties of big data developers. To conduct the study, we develop a set of big data tags to extract big data posts from Stackoverflow; use topic modeling to group these posts into big data topics; group similar topics into categories to construct a topic hierarchy; analyze popularity and difficulty of topics and their correlations; and discuss implications of our findings for practice, research and education of big data software development and investigate their coincidence with the findings of previous work. @InProceedings{ESEC/FSE19p432, author = {Mehdi Bagherzadeh and Raffi Khatchadourian}, title = {Going Big: A Large-Scale Study on What Big Data Developers Ask}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {432--442}, doi = {10.1145/3338906.3338939}, year = {2019}, } Publisher's Version |
|
Khurshid, Sarfraz |
ESEC/FSE '19: "A Framework for Writing Trigger-Action ..."
A Framework for Writing Trigger-Action Todo Comments in Executable Format
Pengyu Nie, Rishabh Rai, Junyi Jessy Li, Sarfraz Khurshid, Raymond J. Mooney, and Milos Gligoric (University of Texas at Austin, USA) Natural language elements, e.g., todo comments, are frequently used to communicate among developers and to describe tasks that need to be performed (actions) when specific conditions hold on artifacts related to the code repository (triggers), e.g., from the Apache Struts project: “remove expectedJDK15 and if() after switching to Java 1.6”. As projects evolve, development processes change, and development teams reorganize, these comments, because of their informal nature, frequently become irrelevant or forgotten. We present the first framework, dubbed TrigIt, to specify trigger-action todo comments in executable format. Thus, actions are executed automatically when triggers evaluate to true. TrigIt specifications are written in the host language (e.g., Java) and are evaluated as part of the build process. The triggers are specified as query statements over abstract syntax trees, abstract representation of build configuration scripts, issue tracking systems, and system clock time. The actions are either notifications to developers or code transformation steps. We implemented TrigIt for the Java programming language and migrated 44 existing trigger-action comments from several popular open-source projects. Evaluation of TrigIt, via a user study, showed that users find TrigIt easy to learn and use. TrigIt has the potential to enforce more discipline in writing and maintaining comments in large code repositories. @InProceedings{ESEC/FSE19p385, author = {Pengyu Nie and Rishabh Rai and Junyi Jessy Li and Sarfraz Khurshid and Raymond J. Mooney and Milos Gligoric}, title = {A Framework for Writing Trigger-Action Todo Comments in Executable Format}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {385--396}, doi = {10.1145/3338906.3338965}, year = {2019}, } Publisher's Version |
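A trigger-action todo of the kind TrigIt supports (a trigger evaluated during the build, an action such as a notification when it holds) could be emulated in plain code. This is a hypothetical Python sketch of the concept, not TrigIt's Java API; the deadline trigger and message are invented:

```python
import datetime

def todo(trigger, action):
    """Run `action` when `trigger` holds, e.g., as a build step."""
    if trigger():
        action()

messages = []
# Hypothetical trigger: a date encoded in the comment has passed, so the
# developer should now be reminded of the pending cleanup task.
todo(
    trigger=lambda: datetime.date.today() >= datetime.date(2019, 1, 1),
    action=lambda: messages.append(
        "remove expectedJDK15 and if() after switching to Java 1.6"
    ),
)
```

TrigIt's actual triggers also range over abstract syntax trees, build configurations, and issue trackers; the clock-based condition above is only the simplest case.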
|
Kim, Dongsun |
ESEC/FSE '19: "iFixR: Bug Report driven Program ..."
iFixR: Bug Report driven Program Repair
Anil Koyuncu, Kui Liu, Tegawendé F. Bissyandé, Dongsun Kim, Martin Monperrus, Jacques Klein, and Yves Le Traon (University of Luxembourg, Luxembourg; Furiosa A.I., South Korea; KTH, Sweden) Issue tracking systems are commonly used in modern software development for collecting feedback from users and developers. An ultimate automation target of software maintenance is then the systematization of patch generation for user-reported bugs. Although this ambition is aligned with the momentum of automated program repair, the literature has, so far, mostly focused on generate-and-validate setups where fault localization and patch generation are driven by a well-defined test suite. On the one hand, however, the common (yet strong) assumption on the existence of relevant test cases does not hold in practice for most development settings: many bugs are reported without the available test suite being able to reveal them. On the other hand, for many projects, the number of bug reports generally outstrips the resources available to triage them. Towards increasing the adoption of patch generation tools by practitioners, we investigate a new repair pipeline, iFixR, driven by bug reports: (1) bug reports are fed to an IR-based fault localizer; (2) patches are generated from fix patterns and validated via regression testing; (3) a prioritized list of generated patches is proposed to developers. We evaluate iFixR on the Defects4J dataset, which we enriched (i.e., faults are linked to bug reports) and carefully reorganized (i.e., the timeline of test cases is naturally split). iFixR generates genuine/plausible patches for 21/44 Defects4J faults with its IR-based fault localizer. iFixR accurately places a genuine/plausible patch among its top-5 recommendations for 8/13 of these faults (without using future test cases in generation-and-validation). @InProceedings{ESEC/FSE19p314, author = {Anil Koyuncu and Kui Liu and Tegawendé F. 
Bissyandé and Dongsun Kim and Martin Monperrus and Jacques Klein and Yves Le Traon}, title = {iFixR: Bug Report driven Program Repair}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {314--325}, doi = {10.1145/3338906.3338935}, year = {2019}, } Publisher's Version Artifacts Reusable |
|
Kim, Miryung |
ESEC/FSE '19: "White-Box Testing of Big Data ..."
White-Box Testing of Big Data Analytics with Complex User-Defined Functions
Muhammad Ali Gulzar, Shaghayegh Mardani, Madanlal Musuvathi, and Miryung Kim (University of California at Los Angeles, USA; Microsoft Research, USA) Data-intensive scalable computing (DISC) systems such as Google’s MapReduce, Apache Hadoop, and Apache Spark are being leveraged to process massive quantities of data in the cloud. Modern DISC applications pose new challenges in exhaustive, automatic testing because, unlike SQL queries, they consist of dataflow operators in which complex user-defined functions (UDFs) are prevalent. We design a new white-box testing approach, called BigTest, to reason about the internal semantics of UDFs in tandem with the equivalence classes created by each dataflow and relational operator. Our evaluation shows that, despite ultra-large scale input data size, real-world DISC applications are often significantly skewed and inadequate in terms of test coverage, leaving 34% of Joint Dataflow and UDF (JDU) paths untested. BigTest shows the potential to minimize data size for local testing by a factor of 10^5 to 10^8 while revealing 2X more manually-injected faults than the previous approach. Our experiment shows that only a few of the data records (on the order of tens) are actually required to achieve the same JDU coverage as the entire production data. The reduction in test data also provides CPU time saving of 194X on average, demonstrating that interactive and fast local testing is feasible for big data analytics, obviating the need to test applications on huge production data. @InProceedings{ESEC/FSE19p290, author = {Muhammad Ali Gulzar and Shaghayegh Mardani and Madanlal Musuvathi and Miryung Kim}, title = {White-Box Testing of Big Data Analytics with Complex User-Defined Functions}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {290--301}, doi = {10.1145/3338906.3338953}, year = {2019}, } Publisher's Version |
|
Kim, Moonzoo |
ESEC/FSE '19: "Target-Driven Compositional ..."
Target-Driven Compositional Concolic Testing with Function Summary Refinement for Effective Bug Detection
Yunho Kim, Shin Hong, and Moonzoo Kim (KAIST, South Korea; Handong Global University, South Korea) Concolic testing is popular in unit testing because it can detect bugs quickly in a relatively small search space. But, in system-level testing, it suffers from the symbolic path explosion and often misses bugs. To resolve this problem, we have developed a focused compositional concolic testing technique, FOCAL, for effective bug detection. Focusing on a target unit failure v (a crash or an assert violation) detected by concolic unit testing, FOCAL generates a system-level test input that validates v. This test input is obtained by building and solving symbolic path formulas that represent system-level executions raising v. FOCAL builds such formulas by combining function summaries one by one backward from a function that raised v to main. If a function summary φa of function a conflicts with the summaries of the other functions, FOCAL refines φa to φa′ by applying a refining constraint learned from the conflict. FOCAL showed high system-level bug detection ability by detecting 71 out of the 100 real-world target bugs in the SIR benchmark, while other relevant cutting edge techniques (i.e., AFL-fast, KATCH, Mix-CCBSE) detected at most 40 bugs. Also, FOCAL detected 13 new crash bugs in popular file parsing programs. @InProceedings{ESEC/FSE19p16, author = {Yunho Kim and Shin Hong and Moonzoo Kim}, title = {Target-Driven Compositional Concolic Testing with Function Summary Refinement for Effective Bug Detection}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {16--26}, doi = {10.1145/3338906.3338934}, year = {2019}, } Publisher's Version Info |
|
Kim, Seohyun |
ESEC/FSE '19: "When Deep Learning Met Code ..."
When Deep Learning Met Code Search
Jose Cambronero, Hongyu Li, Seohyun Kim, Koushik Sen, and Satish Chandra (Massachusetts Institute of Technology, USA; Facebook, USA; University of California at Berkeley, USA) There have been multiple recent proposals on using deep neural networks for code search using natural language. Common across these proposals is the idea of embedding code and natural language queries into real vectors and then using vector distance to approximate semantic correlation between code and the query. Multiple approaches exist for learning these embeddings, including unsupervised techniques, which rely only on a corpus of code examples, and supervised techniques, which use an aligned corpus of paired code and natural language descriptions. The goal of this supervision is to produce embeddings that are more similar for a query and the corresponding desired code snippet. Clearly, there are choices in whether to use supervised techniques at all, and if one does, what sort of network and training to use for supervision. This paper is the first to evaluate these choices systematically. To this end, we assembled implementations of state-of-the-art techniques to run on a common platform, training and evaluation corpora. To explore the design space in network complexity, we also introduced a new design point that is a minimal supervision extension to an existing unsupervised technique. Our evaluation shows that: 1. adding supervision to an existing unsupervised technique can improve performance, though not necessarily by much; 2. simple networks for supervision can be more effective than more sophisticated sequence-based networks for code search; 3. while it is common to use docstrings to carry out supervision, there is a sizeable gap between the effectiveness of docstrings and a more query-appropriate supervision corpus. 
@InProceedings{ESEC/FSE19p964, author = {Jose Cambronero and Hongyu Li and Seohyun Kim and Koushik Sen and Satish Chandra}, title = {When Deep Learning Met Code Search}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {964--974}, doi = {10.1145/3338906.3340458}, year = {2019}, } Publisher's Version |
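The shared idea across the approaches surveyed above — embedding both queries and code into vectors and ranking by vector distance — can be illustrated with a minimal sketch. The bag-of-words "embedding" below is a deliberately crude stand-in for a learned encoder, not any of the paper's actual networks, and the corpus is invented.

```python
# Toy retrieval by embedding distance: map the query and each snippet to a
# vector, then rank snippets by cosine similarity to the query vector.
# The token-count "embedding" is a placeholder for a neural encoder.
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: lowercase alphabetic token counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def search(query, snippets):
    """Rank code snippets by embedding similarity to a natural-language query."""
    q = embed(query)
    return sorted(snippets, key=lambda s: cosine(q, embed(s)), reverse=True)

corpus = [
    "def read_file(path): return open(path).read()",
    "def sort_list(xs): return sorted(xs)",
]
print(search("read the contents of a file", corpus)[0])  # the read_file snippet
```

Supervision, in this picture, amounts to training the encoder so that true query/snippet pairs score higher than the token overlap this toy version relies on.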
|
Kim, Yunho |
ESEC/FSE '19: "Target-Driven Compositional ..."
Target-Driven Compositional Concolic Testing with Function Summary Refinement for Effective Bug Detection
Yunho Kim, Shin Hong, and Moonzoo Kim (KAIST, South Korea; Handong Global University, South Korea) Concolic testing is popular in unit testing because it can detect bugs quickly in a relatively small search space. But, in system-level testing, it suffers from the symbolic path explosion and often misses bugs. To resolve this problem, we have developed a focused compositional concolic testing technique, FOCAL, for effective bug detection. Focusing on a target unit failure v (a crash or an assert violation) detected by concolic unit testing, FOCAL generates a system-level test input that validates v. This test input is obtained by building and solving symbolic path formulas that represent system-level executions raising v. FOCAL builds such formulas by combining function summaries one by one backward from a function that raised v to main. If a function summary φa of function a conflicts with the summaries of the other functions, FOCAL refines φa to φa′ by applying a refining constraint learned from the conflict. FOCAL showed high system-level bug detection ability by detecting 71 out of the 100 real-world target bugs in the SIR benchmark, while other relevant cutting edge techniques (i.e., AFL-fast, KATCH, Mix-CCBSE) detected at most 40 bugs. Also, FOCAL detected 13 new crash bugs in popular file parsing programs. @InProceedings{ESEC/FSE19p16, author = {Yunho Kim and Shin Hong and Moonzoo Kim}, title = {Target-Driven Compositional Concolic Testing with Function Summary Refinement for Effective Bug Detection}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {16--26}, doi = {10.1145/3338906.3338934}, year = {2019}, } Publisher's Version Info |
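The backward composition of function summaries described in the FOCAL abstract above can be sketched in a much-simplified form: here each summary is a single interval constraint on one symbolic input, and a system-level input triggering the failure must satisfy the conjunction of summaries along the call chain. Real FOCAL works on SMT path formulas and refines conflicting summaries; the names and intervals below are purely illustrative.

```python
# Simplified backward summary composition: each function summary is an
# interval (lo, hi) constraining a single symbolic input; composing
# summaries from the failing function back to main means intersecting
# the intervals. An empty intersection corresponds to a conflict, which
# FOCAL would resolve by refining the offending summary.

def conjoin(summary_a, summary_b):
    """Intersect two interval constraints; None signals a conflict."""
    lo = max(summary_a[0], summary_b[0])
    hi = min(summary_a[1], summary_b[1])
    return (lo, hi) if lo <= hi else None

def compose_backward(chain):
    """Combine summaries one by one, failing function first, main last."""
    formula = chain[0]
    for summary in chain[1:]:
        combined = conjoin(formula, summary)
        if combined is None:
            return None  # conflict: refinement would be triggered here
        formula = combined
    return formula

# the failing unit needs input > 100; its callers constrain it further
chain = [(101, 10**9), (0, 500), (50, 400)]
print(compose_backward(chain))  # -> (101, 400)
```

Any concrete value in the resulting interval is a system-level input that drives execution down to the unit failure, which is the role of the test input FOCAL generates.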
|
King, Tim |
ESEC/FSE '19: "FUDGE: Fuzz Driver Generation ..."
FUDGE: Fuzz Driver Generation at Scale
Domagoj Babić, Stefan Bucur, Yaohui Chen, Franjo Ivančić, Tim King, Markus Kusano, Caroline Lemieux, László Szekeres, and Wei Wang (Google, USA; Northeastern University, USA; University of California at Berkeley, USA) At Google we have found tens of thousands of security and robustness bugs by fuzzing C and C++ libraries. To fuzz a library, a fuzzer requires a fuzz driver—which exercises some library code—to which it can pass inputs. Unfortunately, writing fuzz drivers remains a primarily manual exercise, a major hindrance to the widespread adoption of fuzzing. In this paper, we address this major hindrance by introducing the Fudge system for automated fuzz driver generation. Fudge automatically generates fuzz driver candidates for libraries based on existing client code. We have used Fudge to generate thousands of new drivers for a wide variety of libraries. Each generated driver includes a synthesized C/C++ program and a corresponding build script, and is automatically analyzed for quality. Developers have integrated over 200 of these generated drivers into continuous fuzzing services and have committed to address reported security bugs. Further, several of these fuzz drivers have been upstreamed to open source projects and integrated into the OSS-Fuzz fuzzing infrastructure. Running these fuzz drivers has resulted in over 150 bug fixes, including the elimination of numerous exploitable security vulnerabilities. @InProceedings{ESEC/FSE19p975, author = {Domagoj Babić and Stefan Bucur and Yaohui Chen and Franjo Ivančić and Tim King and Markus Kusano and Caroline Lemieux and László Szekeres and Wei Wang}, title = {FUDGE: Fuzz Driver Generation at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {975--985}, doi = {10.1145/3338906.3340456}, year = {2019}, } Publisher's Version |
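The core of the FUDGE idea above — synthesizing a fuzz driver candidate from an API call sequence mined out of client code — can be sketched with simple templating. The libFuzzer-style `LLVMFuzzerTestOneInput` entry point is standard, but the library functions, the mined sequence, and the templating scheme here are invented for illustration and are not FUDGE's actual output.

```python
# Sketch: wrap a mined client call sequence in a fuzz-driver template,
# routing fuzzer-provided bytes into the first library call.

DRIVER_TEMPLATE = """\
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {{
{body}  return 0;
}}
"""

def synthesize_driver(call_sequence):
    """Turn a mined client call sequence into a fuzz-driver candidate."""
    lines = []
    for i, call in enumerate(call_sequence):
        if i == 0:
            # feed the fuzzer input into the first library call
            lines.append(f"  auto h = {call}(data, size);\n")
        else:
            lines.append(f"  {call}(h);\n")
    return DRIVER_TEMPLATE.format(body="".join(lines))

# a call sequence mined from hypothetical client code
driver = synthesize_driver(["lib_parse", "lib_validate", "lib_free"])
print(driver)
```

The real system adds what this sketch omits: candidate selection among many mined sequences, build-script generation, and automated quality analysis of each driver.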
|
Klein, Jacques |
ESEC/FSE '19: "iFixR: Bug Report driven Program ..."
iFixR: Bug Report driven Program Repair
Anil Koyuncu, Kui Liu, Tegawendé F. Bissyandé, Dongsun Kim, Martin Monperrus, Jacques Klein, and Yves Le Traon (University of Luxembourg, Luxembourg; Furiosa A.I., South Korea; KTH, Sweden) Issue tracking systems are commonly used in modern software development for collecting feedback from users and developers. An ultimate automation target of software maintenance is then the systematization of patch generation for user-reported bugs. Although this ambition is aligned with the momentum of automated program repair, the literature has, so far, mostly focused on generate-and-validate setups where fault localization and patch generation are driven by a well-defined test suite. On the one hand, however, the common (yet strong) assumption on the existence of relevant test cases does not hold in practice for most development settings: many bugs are reported without the available test suite being able to reveal them. On the other hand, for many projects, the number of bug reports generally outstrips the resources available to triage them. Towards increasing the adoption of patch generation tools by practitioners, we investigate a new repair pipeline, iFixR, driven by bug reports: (1) bug reports are fed to an IR-based fault localizer; (2) patches are generated from fix patterns and validated via regression testing; (3) a prioritized list of generated patches is proposed to developers. We evaluate iFixR on the Defects4J dataset, which we enriched (i.e., faults are linked to bug reports) and carefully-reorganized (i.e., the timeline of test-cases is naturally split). iFixR generates genuine/plausible patches for 21/44 Defects4J faults with its IR-based fault localizer. iFixR accurately places a genuine/plausible patch among its top-5 recommendations for 8/13 of these faults (without using future test cases in generation-and-validation). @InProceedings{ESEC/FSE19p314, author = {Anil Koyuncu and Kui Liu and Tegawendé F. 
Bissyandé and Dongsun Kim and Martin Monperrus and Jacques Klein and Yves Le Traon}, title = {iFixR: Bug Report driven Program Repair}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {314--325}, doi = {10.1145/3338906.3338935}, year = {2019}, } Publisher's Version Artifacts Reusable |
|
Koc, Ugur |
ESEC/FSE '19: "An Empirical Study of Real-World ..."
An Empirical Study of Real-World Variability Bugs Detected by Variability-Oblivious Tools
Austin Mordahl, Jeho Oh, Ugur Koc, Shiyi Wei, and Paul Gazzillo (University of Texas at Dallas, USA; University of Texas at Austin, USA; University of Maryland, USA; University of Central Florida, USA) Many critical software systems developed in C utilize compile-time configurability. The many possible configurations of this software make bug detection through static analysis difficult. While variability-aware static analyses have been developed, there remains a gap between those and state-of-the-art static bug detection tools. In order to collect data on how such tools may perform and to develop real-world benchmarks, we present a way to leverage configuration sampling, off-the-shelf “variability-oblivious” bug detectors, and automatic feature identification techniques to simulate a variability-aware analysis. We instantiate our approach using four popular static analysis tools on three highly configurable, real-world C projects, obtaining 36,061 warnings, 80% of which are variability warnings. We analyze the warnings we collect from these experiments, finding that most results are variability warnings of a variety of kinds such as NULL dereference. We then manually investigate these warnings to produce a benchmark of 77 confirmed true bugs (52 of which are variability bugs) useful for future development of variability-aware analyses. @InProceedings{ESEC/FSE19p50, author = {Austin Mordahl and Jeho Oh and Ugur Koc and Shiyi Wei and Paul Gazzillo}, title = {An Empirical Study of Real-World Variability Bugs Detected by Variability-Oblivious Tools}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {50--61}, doi = {10.1145/3338906.3338967}, year = {2019}, } Publisher's Version Artifacts Reusable |
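The sampling-based simulation described above can be sketched as follows: run a variability-oblivious checker once per sampled configuration of the compile-time features, then classify a warning as a variability warning if it appears in some but not all sampled configurations. The feature names and the toy checker are invented, not taken from the studied projects or tools.

```python
# Sketch: enumerate configurations of boolean features, run an
# "oblivious" check per configuration, and label warnings that are
# configuration-dependent as variability warnings.
from itertools import product

FEATURES = ["USE_CACHE", "DEBUG_LOG"]

def oblivious_check(config):
    """Toy bug detector: warns only under one feature combination."""
    warnings = set()
    if config["USE_CACHE"] and not config["DEBUG_LOG"]:
        warnings.add("possible NULL dereference in cache_get")
    return warnings

def classify(samples):
    counts = {}
    for config in samples:
        for w in oblivious_check(config):
            counts[w] = counts.get(w, 0) + 1
    total = len(samples)
    return {w: ("variability" if n < total else "global")
            for w, n in counts.items()}

samples = [dict(zip(FEATURES, bits)) for bits in product([False, True], repeat=2)]
print(classify(samples))
```

With real projects, exhaustive enumeration is infeasible, which is why the study relies on configuration sampling rather than the full power set.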
|
Kolekar, Nikhil |
ESEC/FSE '19: "The Role of Limitations and ..."
The Role of Limitations and SLAs in the API Industry
Antonio Gamez-Diaz, Pablo Fernandez, Antonio Ruiz-Cortés, Pedro J. Molina, Nikhil Kolekar, Prithpal Bhogill, Madhurranjan Mohaan, and Francisco Méndez (University of Seville, Spain; Metadev, Spain; PayPal, USA; Google, USA; AsyncAPI Initiative, Spain) As software architecture design is evolving to a microservice paradigm, RESTful APIs are being established as the preferred choice to build applications. In such a scenario, there is a shift towards a growing market of APIs where providers offer different service levels with tailored limitations typically based on the cost. In this context, while there are well established standards to describe the functional elements of APIs (such as the OpenAPI Specification), having a standard model for Service Level Agreements (SLAs) for APIs may boost an open ecosystem of tools that would represent an improvement for the industry by automating certain tasks during the development such as: SLA-aware scaffolding, SLA-aware testing, or SLA-aware requesters. Unfortunately, although there have been several proposals to describe SLAs for software in general and web services in particular during the past decades, there is still no widely used standard due to the complex landscape of concepts surrounding the notion of SLAs and the multiple perspectives that can be addressed. In this paper, we aim to analyze the landscape for SLAs for APIs in two different directions: (i) clarifying the SLA-driven API development lifecycle: its activities and participants; (ii) developing a catalog of relevant concepts and an ulterior prioritization based on different perspectives from both Industry and Academia. As a main result, we present a scored list of concepts that paves the way to establish a concrete road-map for a standard industry-aligned specification to describe SLAs in APIs. @InProceedings{ESEC/FSE19p1006, author = {Antonio Gamez-Diaz and Pablo Fernandez and Antonio Ruiz-Cortés and Pedro J. 
Molina and Nikhil Kolekar and Prithpal Bhogill and Madhurranjan Mohaan and Francisco Méndez}, title = {The Role of Limitations and SLAs in the API Industry}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1006--1014}, doi = {10.1145/3338906.3340445}, year = {2019}, } Publisher's Version Info |
|
Kothari, Suresh |
ESEC/FSE '19: "DISCOVER: Detecting Algorithmic ..."
DISCOVER: Detecting Algorithmic Complexity Vulnerabilities
Payas Awadhutkar, Ganesh Ram Santhanam, Benjamin Holland, and Suresh Kothari (Iowa State University, USA; EnSoft, USA) Algorithmic Complexity Vulnerabilities (ACV) are a class of vulnerabilities that enable Denial of Service Attacks. ACVs stem from asymmetric consumption of resources due to complex loop termination logic, recursion, and/or resource intensive library APIs. Completely automated detection of ACVs is intractable and it calls for tools that assist human analysts. We present DISCOVER, a suite of tools that facilitates human-on-the-loop detection of ACVs. DISCOVER's workflow can be broken into three phases - (1) Automated characterization of loops, (2) Selection of suspicious loops, and (3) Interactive audit of selected loops. We demonstrate DISCOVER through a case study on a DARPA challenge app. DISCOVER supports analysis of Java source code and Java bytecode. We demonstrate it for Java bytecode. @InProceedings{ESEC/FSE19p1129, author = {Payas Awadhutkar and Ganesh Ram Santhanam and Benjamin Holland and Suresh Kothari}, title = {DISCOVER: Detecting Algorithmic Complexity Vulnerabilities}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1129--1133}, doi = {10.1145/3338906.3341177}, year = {2019}, } Publisher's Version Video |
|
Koyuncu, Anil |
ESEC/FSE '19: "iFixR: Bug Report driven Program ..."
iFixR: Bug Report driven Program Repair
Anil Koyuncu, Kui Liu, Tegawendé F. Bissyandé, Dongsun Kim, Martin Monperrus, Jacques Klein, and Yves Le Traon (University of Luxembourg, Luxembourg; Furiosa A.I., South Korea; KTH, Sweden) Issue tracking systems are commonly used in modern software development for collecting feedback from users and developers. An ultimate automation target of software maintenance is then the systematization of patch generation for user-reported bugs. Although this ambition is aligned with the momentum of automated program repair, the literature has, so far, mostly focused on generate-and-validate setups where fault localization and patch generation are driven by a well-defined test suite. On the one hand, however, the common (yet strong) assumption on the existence of relevant test cases does not hold in practice for most development settings: many bugs are reported without the available test suite being able to reveal them. On the other hand, for many projects, the number of bug reports generally outstrips the resources available to triage them. Towards increasing the adoption of patch generation tools by practitioners, we investigate a new repair pipeline, iFixR, driven by bug reports: (1) bug reports are fed to an IR-based fault localizer; (2) patches are generated from fix patterns and validated via regression testing; (3) a prioritized list of generated patches is proposed to developers. We evaluate iFixR on the Defects4J dataset, which we enriched (i.e., faults are linked to bug reports) and carefully-reorganized (i.e., the timeline of test-cases is naturally split). iFixR generates genuine/plausible patches for 21/44 Defects4J faults with its IR-based fault localizer. iFixR accurately places a genuine/plausible patch among its top-5 recommendations for 8/13 of these faults (without using future test cases in generation-and-validation). @InProceedings{ESEC/FSE19p314, author = {Anil Koyuncu and Kui Liu and Tegawendé F. 
Bissyandé and Dongsun Kim and Martin Monperrus and Jacques Klein and Yves Le Traon}, title = {iFixR: Bug Report driven Program Repair}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {314--325}, doi = {10.1145/3338906.3338935}, year = {2019}, } Publisher's Version Artifacts Reusable |
|
Koziolek, Heiko |
ESEC/FSE '19: "Architectural Decision Forces ..."
Architectural Decision Forces at Work: Experiences in an Industrial Consultancy Setting
Julius Rueckert, Andreas Burger, Heiko Koziolek, Thanikesavan Sivanthi, Alexandru Moga, and Carsten Franke (ABB Research, Germany; ABB Research, Switzerland) The concepts of decision forces and the decision forces viewpoint were proposed to help software architects to make architectural decisions more transparent and the documentation of their rationales more explicit. However, practical experience reports and guidelines on how to use the viewpoint in typical industrial project setups are not available. Existing works mainly focus on basic tool support for the documentation of the viewpoint or show how forces can be used as part of focused architecture review sessions. With this paper, we share experiences and lessons learned from applying the decision forces viewpoint in a distributed industrial project setup, which involves consultants supporting architects during the re-design process of an existing large software system. Alongside our findings, we describe new forces that can serve as template for similar projects, discuss challenges applying them in a distributed consultancy project, and share ideas for potential extensions. @InProceedings{ESEC/FSE19p996, author = {Julius Rueckert and Andreas Burger and Heiko Koziolek and Thanikesavan Sivanthi and Alexandru Moga and Carsten Franke}, title = {Architectural Decision Forces at Work: Experiences in an Industrial Consultancy Setting}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {996--1005}, doi = {10.1145/3338906.3340461}, year = {2019}, } Publisher's Version |
|
Kreis, Marvin |
ESEC/FSE '19: "Testing Scratch Programs Automatically ..."
Testing Scratch Programs Automatically
Andreas Stahlbauer, Marvin Kreis, and Gordon Fraser (University of Passau, Germany) Block-based programming environments like Scratch foster engagement with computer programming and are used by millions of young learners. Scratch allows learners to quickly create entertaining programs and games, while eliminating syntactical program errors that could interfere with progress. However, functional programming errors may still lead to incorrect programs, and learners and their teachers need to identify and understand these errors. This is currently an entirely manual process. In this paper, we introduce a formal testing framework that describes the problem of Scratch testing in detail. We instantiate this formal framework with the Whisker tool, which provides automated and property-based testing functionality for Scratch programs. Empirical evaluation on real student and teacher programs demonstrates that Whisker can successfully test Scratch programs, and automatically achieves an average of 95.25 % code coverage. Although well-known testing problems such as test flakiness also exist in the scenario of Scratch testing, we show that automated and property-based testing can accurately reproduce and replace the manually and laboriously produced grading efforts of a teacher, and opens up new possibilities to support learners of programming in their struggles. @InProceedings{ESEC/FSE19p165, author = {Andreas Stahlbauer and Marvin Kreis and Gordon Fraser}, title = {Testing Scratch Programs Automatically}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {165--175}, doi = {10.1145/3338906.3338910}, year = {2019}, } Publisher's Version |
|
Krüger, Jacob |
ESEC/FSE '19: "Principles of Feature Modeling ..."
Principles of Feature Modeling
Damir Nešić, Jacob Krüger, Ștefan Stănciulescu, and Thorsten Berger (KTH, Sweden; University of Magdeburg, Germany; ABB, Switzerland; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden) Feature models are arguably one of the most intuitive and successful notations for modeling the features of a variant-rich software system. Feature models help developers to keep an overall understanding of the system, and also support scoping, planning, development, variant derivation, configuration, and maintenance activities that sustain the system's long-term success. Unfortunately, feature models are difficult to build and evolve. Features need to be identified, grouped, organized in a hierarchy, and mapped to software assets. Also, dependencies between features need to be declared. While feature models have been the subject of three decades of research, resulting in many feature-modeling notations together with automated analysis and configuration techniques, a generic set of principles for engineering feature models is still missing. It is not even clear whether feature models could be engineered using recurrent principles. Our work shows that such principles in fact exist. We analyzed feature-modeling practices elicited from ten interviews conducted with industrial practitioners and from 31 relevant papers. We synthesized a set of 34 principles covering eight different phases of feature modeling, from planning over model construction, to model maintenance and evolution. Grounded in empirical evidence, these principles provide practical, context-specific advice on how to perform feature modeling, describe what information sources to consider, and highlight common characteristics of feature models. We believe that our principles can support researchers and practitioners enhancing feature-modeling tooling, synthesis, and analyses techniques, as well as scope future research. 
@InProceedings{ESEC/FSE19p62, author = {Damir Nešić and Jacob Krüger and Ștefan Stănciulescu and Thorsten Berger}, title = {Principles of Feature Modeling}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {62--73}, doi = {10.1145/3338906.3338974}, year = {2019}, } Publisher's Version Info ESEC/FSE '19: "Effects of Explicit Feature ..." Effects of Explicit Feature Traceability on Program Comprehension Jacob Krüger, Gül Çalıklı, Thorsten Berger, Thomas Leich, and Gunter Saake (University of Magdeburg, Germany; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden; Harz University of Applied Sciences, Germany; METOP, Germany) Developers spend a substantial amount of their time with program comprehension. To improve their comprehension and refresh their memory, developers need to communicate with other developers, read the documentation, and analyze the source code. Many studies show that developers focus primarily on the source code and that small improvements can have a strong impact. As such, it is crucial to bring the code itself into a more comprehensible form. A particular technique for this purpose is the use of explicit feature traces to easily identify a program’s functionalities. To improve our empirical understanding about the effects of feature traces, we report an online experiment with 49 professional software developers. We studied the impact of explicit feature traces, namely annotations and decomposition, on program comprehension and compared them to the same code without traces. Besides this experiment, we also asked our participants about their opinions in order to combine quantitative and qualitative data. Our results indicate that, as opposed to purely object-oriented code: (1) annotations can have positive effects on program comprehension; (2) decomposition can have a negative impact on bug localization; and (3) our participants perceive both techniques as beneficial. 
Moreover, none of the three code versions yields significant improvements on task completion time. Overall, our results indicate that lightweight traceability, such as using annotations, provides immediate benefits to developers during software development and maintenance without extensive training or tooling; and can improve current industrial practices that rely on heavyweight traceability tools (e.g., DOORS) and retroactive fulfillment of standards (e.g., ISO-26262, DO-178B). @InProceedings{ESEC/FSE19p338, author = {Jacob Krüger and Gül Çalıklı and Thorsten Berger and Thomas Leich and Gunter Saake}, title = {Effects of Explicit Feature Traceability on Program Comprehension}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {338--349}, doi = {10.1145/3338906.3338968}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Tackling Knowledge Needs during ..." Tackling Knowledge Needs during Software Evolution Jacob Krüger (University of Magdeburg, Germany) Developers use a large amount of their time to understand the system they work on, an activity referred to as program comprehension. Especially software evolution and forgetting over time lead to developers becoming unfamiliar with a system. To support them during program comprehension, we can employ knowledge recovery to reverse engineer implicit information from the system and the platform (e.g., GitHub) it is hosted on. However, to recover useful knowledge and to provide it in a useful way, we first need to understand what knowledge developers forget to what extent, what sources are reliable to recover knowledge, and how to trace knowledge to the features in a system. We tackle these three issues, aiming to provide empirical insights and tooling to support developers during software evolution and maintenance. The results help practitioners, as we support the analysis and understanding of systems, as well as researchers, showing opportunities to automate, for example, reverse-engineering techniques. 
@InProceedings{ESEC/FSE19p1244, author = {Jacob Krüger}, title = {Tackling Knowledge Needs during Software Evolution}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1244--1246}, doi = {10.1145/3338906.3342505}, year = {2019}, } Publisher's Version |
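The notion of a feature model discussed in "Principles of Feature Modeling" above — features organized in a hierarchy plus cross-tree dependencies, against which product configurations are checked — can be illustrated with a minimal sketch. All features and constraints here are invented examples, not drawn from the study's interviews or papers.

```python
# Minimal feature model: a parent hierarchy plus one kind of cross-tree
# constraint ("requires"), with a validity check for configurations.

PARENT = {"Bluetooth": "Connectivity", "WiFi": "Connectivity",
          "Connectivity": "Phone", "Camera": "Phone"}
REQUIRES = {"Camera": ["WiFi"]}  # cross-tree constraint: Camera requires WiFi

def is_valid(selected):
    """A selection is valid if every selected feature's parent is selected
    and all of its 'requires' constraints are satisfied."""
    for f in selected:
        parent = PARENT.get(f)
        if parent is not None and parent not in selected:
            return False
        for dep in REQUIRES.get(f, []):
            if dep not in selected:
                return False
    return True

print(is_valid({"Phone", "Connectivity", "WiFi", "Camera"}))  # True
print(is_valid({"Phone", "Camera"}))  # False: Camera requires WiFi
```

The engineering principles the paper synthesizes concern how such models are planned, constructed, and evolved in practice; automated analyses build on exactly this kind of configuration-validity semantics.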
|
Kula, Elvan |
ESEC/FSE '19: "Releasing Fast and Slow: An ..."
Releasing Fast and Slow: An Exploratory Case Study at ING
Elvan Kula, Ayushi Rastogi, Hennie Huijgens, Arie van Deursen, and Georgios Gousios (Delft University of Technology, Netherlands; ING Bank, Netherlands) The appeal of delivering new features faster has led many software projects to adopt rapid releases. However, it is not well understood what the effects of this practice are. This paper presents an exploratory case study of rapid releases at ING, a large banking company that develops software solutions in-house, to characterize rapid releases. Since 2011, ING has shifted to a rapid release model. This switch has resulted in a mixed environment of 611 teams releasing relatively fast and slow. We followed a mixed-methods approach in which we conducted a survey with 461 participants and corroborated their perceptions with 2 years of code quality data and 1 year of release delay data. Our research shows that: rapid releases are more commonly delayed than their non-rapid counterparts, however, rapid releases have shorter delays; rapid releases can be beneficial in terms of reviewing and user-perceived quality; rapidly released software tends to have a higher code churn, a higher test coverage and a lower average complexity; challenges in rapid releases are related to managing dependencies and certain code aspects, e.g., design debt. @InProceedings{ESEC/FSE19p785, author = {Elvan Kula and Ayushi Rastogi and Hennie Huijgens and Arie van Deursen and Georgios Gousios}, title = {Releasing Fast and Slow: An Exploratory Case Study at ING}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {785--795}, doi = {10.1145/3338906.3338978}, year = {2019}, } Publisher's Version |
|
Kumar, Rahul |
ESEC/FSE '19: "WhoDo: Automating Reviewer ..."
WhoDo: Automating Reviewer Suggestions at Scale
Sumit Asthana, Rahul Kumar, Ranjita Bhagwan, Christian Bird, Chetan Bansal, Chandra Maddila, Sonu Mehta, and B. Ashok (Microsoft Research, India; Microsoft Research, USA) Today's software development is distributed and involves continuous changes for new features and yet, their development cycle has to be fast and agile. An important component of enabling this agility is selecting the right reviewers for every code-change - the smallest unit of the development cycle. Modern tool-based code review is proven to be an effective way to achieve appropriate code review of software changes. However, the selection of reviewers in these code review systems is at best manual. As software and teams scale, this poses the challenge of selecting the right reviewers, which in turn determines software quality over time. While previous work has suggested automatic approaches to code reviewer recommendations, it has been limited to retrospective analysis. We not only deploy a reviewer suggestions algorithm - WhoDo - and evaluate its effect but also incorporate load balancing as part of it to address one of its major shortcomings: of recommending experienced developers very frequently. We evaluate the effect of this hybrid recommendation + load balancing system on five repositories within Microsoft. Our results are based around various aspects of a commit and how code review affects that. We attempt to quantitatively answer questions which are supposed to play a vital role in effective code review through our data and substantiate it through qualitative feedback of partner repositories. @InProceedings{ESEC/FSE19p937, author = {Sumit Asthana and Rahul Kumar and Ranjita Bhagwan and Christian Bird and Chetan Bansal and Chandra Maddila and Sonu Mehta and B. Ashok}, title = {WhoDo: Automating Reviewer Suggestions at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {937--945}, doi = {10.1145/3338906.3340449}, year = {2019}, } Publisher's Version |
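The combination of expertise-based recommendation and load balancing described in the WhoDo abstract above can be sketched as a scoring function: rank candidates by their past activity on the changed files, then damp the score of reviewers already carrying many open reviews. The weighting formula and all names below are assumptions for illustration, not WhoDo's deployed algorithm.

```python
# Sketch: reviewer recommendation with a load-balancing penalty.

def recommend(changed_files, history, open_reviews, top_k=2):
    """history: reviewer -> set of files previously touched or reviewed.
    open_reviews: reviewer -> current number of pending reviews."""
    scores = {}
    for reviewer, files in history.items():
        expertise = len(files & set(changed_files))
        # load balancing: experienced but overloaded reviewers sink in rank
        scores[reviewer] = expertise / (1 + open_reviews.get(reviewer, 0))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]

history = {"ann": {"a.c", "b.c", "c.c"}, "bob": {"a.c"}, "eve": {"d.c"}}
load = {"ann": 9, "bob": 0}
print(recommend(["a.c", "b.c"], history, load))  # -> ['bob', 'ann']
```

Without the penalty, "ann" would dominate every recommendation; the damping term is what addresses the shortcoming of recommending experienced developers too frequently.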
|
Kusano, Markus |
ESEC/FSE '19: "FUDGE: Fuzz Driver Generation ..."
FUDGE: Fuzz Driver Generation at Scale
Domagoj Babić, Stefan Bucur, Yaohui Chen, Franjo Ivančić, Tim King, Markus Kusano, Caroline Lemieux, László Szekeres, and Wei Wang (Google, USA; Northeastern University, USA; University of California at Berkeley, USA) At Google we have found tens of thousands of security and robustness bugs by fuzzing C and C++ libraries. To fuzz a library, a fuzzer requires a fuzz driver—which exercises some library code—to which it can pass inputs. Unfortunately, writing fuzz drivers remains a primarily manual exercise, a major hindrance to the widespread adoption of fuzzing. In this paper, we address this major hindrance by introducing the Fudge system for automated fuzz driver generation. Fudge automatically generates fuzz driver candidates for libraries based on existing client code. We have used Fudge to generate thousands of new drivers for a wide variety of libraries. Each generated driver includes a synthesized C/C++ program and a corresponding build script, and is automatically analyzed for quality. Developers have integrated over 200 of these generated drivers into continuous fuzzing services and have committed to address reported security bugs. Further, several of these fuzz drivers have been upstreamed to open source projects and integrated into the OSS-Fuzz fuzzing infrastructure. Running these fuzz drivers has resulted in over 150 bug fixes, including the elimination of numerous exploitable security vulnerabilities. @InProceedings{ESEC/FSE19p975, author = {Domagoj Babić and Stefan Bucur and Yaohui Chen and Franjo Ivančić and Tim King and Markus Kusano and Caroline Lemieux and László Szekeres and Wei Wang}, title = {FUDGE: Fuzz Driver Generation at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {975--985}, doi = {10.1145/3338906.3340456}, year = {2019}, } Publisher's Version |
|
Kwiatkowska, Marta |
ESEC/FSE '19: "Safety and Robustness for ..."
Safety and Robustness for Deep Learning with Provable Guarantees (Keynote)
Marta Kwiatkowska (University of Oxford, UK) Computing systems are becoming ever more complex, with decisions increasingly often based on deep learning components. A wide variety of applications are being developed, many of them safety-critical, such as self-driving cars and medical diagnosis. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning components. This lecture will describe progress with developing automated verification and testing techniques for deep neural networks to ensure safety and robustness of their decisions with respect to input perturbations. The techniques exploit Lipschitz continuity of the networks and aim to approximate, for a given set of inputs, the reachable set of network outputs in terms of lower and upper bounds, in an anytime manner, with provable guarantees. We develop novel algorithms based on feature-guided search, games, global optimisation and Bayesian methods, and evaluate them on state-of-the-art networks. The lecture will conclude with an overview of the challenges in this field. @InProceedings{ESEC/FSE19p2, author = {Marta Kwiatkowska}, title = {Safety and Robustness for Deep Learning with Provable Guarantees (Keynote)}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {2--2}, doi = {10.1145/3338906.3342812}, year = {2019}, } Publisher's Version |
|
Lång, John |
ESEC/FSE '19: "Model Checking a C++ Software ..."
Model Checking a C++ Software Framework: A Case Study
John Lång and I. S. W. B. Prasetya (University of Helsinki, Finland; Utrecht University, Netherlands) This paper presents a case study on applying two model checkers, Spin and Divine, to verify key properties of a C++ software framework, known as ADAPRO, originally developed at CERN. Spin was used for verifying properties on the design level. Divine was used for verifying simple test applications that interacted with the implementation. Both model checkers were found to have their own respective sets of pros and cons, but the overall experience was positive. Because both model checkers were used in a complementary manner, they provided valuable new insights into the framework, which would arguably have been hard to gain by traditional testing and analysis tools only. Translating the C++ source code into the modeling language of the Spin model checker helped to find flaws in the original design. With Divine, defects were found in parts of the code base that had already been subject to hundreds of hours of unit tests, integration tests, and acceptance tests. Most importantly, model checking was found to be easy to integrate into the workflow of the software project and bring added value, not only as a verification but also a validation methodology. Therefore, using model checking for developing library-level code seems realistic and worth the effort. @InProceedings{ESEC/FSE19p1026, author = {John Lång and I. S. W. B. Prasetya}, title = {Model Checking a C++ Software Framework: A Case Study}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1026--1036}, doi = {10.1145/3338906.3340453}, year = {2019}, } Publisher's Version Info |
|
Lam, Wing |
ESEC/FSE '19: "iFixFlakies: A Framework for ..."
iFixFlakies: A Framework for Automatically Fixing Order-Dependent Flaky Tests
August Shi, Wing Lam, Reed Oei, Tao Xie, and Darko Marinov (University of Illinois at Urbana-Champaign, USA) Regression testing provides important pass or fail signals that developers use to make decisions after code changes. However, flaky tests, which pass or fail even when the code has not changed, can mislead developers. A common kind of flaky test is the order-dependent test, which passes or fails depending on the order in which the tests are run. Fixing order-dependent tests is often tedious and time-consuming. We propose iFixFlakies, a framework for automatically fixing order-dependent tests. The key insight in iFixFlakies is that test suites often already have tests, which we call helpers, whose logic resets or sets the states for order-dependent tests to pass. iFixFlakies searches a test suite for helpers that make the order-dependent tests pass and then recommends patches for the order-dependent tests using code from these helpers. Our evaluation on 110 truly order-dependent tests from a public dataset shows that 58 of them have helpers, and iFixFlakies can fix all 58. We opened pull requests for 56 order-dependent tests (2 of 58 were already fixed), and developers have already accepted pull requests for 21 of them, with all the remaining ones still pending. @InProceedings{ESEC/FSE19p545, author = {August Shi and Wing Lam and Reed Oei and Tao Xie and Darko Marinov}, title = {iFixFlakies: A Framework for Automatically Fixing Order-Dependent Flaky Tests}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {545--555}, doi = {10.1145/3338906.3338925}, year = {2019}, } Publisher's Version |
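The polluter/helper pattern that iFixFlakies exploits can be shown in a toy example (the test names and shared state below are invented for illustration; iFixFlakies itself operates on Java test suites):

```python
# test_b is order-dependent: it assumes shared state left clean by earlier
# tests. test_a "pollutes" that state, and test_helper happens to reset it,
# making it a "helper" whose code can be borrowed to patch test_b.

shared_cache = {"initialized": False}

def test_a():  # polluter: leaves shared state behind
    shared_cache["initialized"] = True
    shared_cache["value"] = 42

def test_helper():  # helper: resets the state test_b depends on
    shared_cache.clear()
    shared_cache["initialized"] = False

def test_b():  # order-dependent: fails if run after test_a without the helper
    assert not shared_cache["initialized"]

# Passing order:
test_helper(); test_b()
# Failing order (test_b after the polluter):
test_a()
try:
    test_b()
    print("test_b passed")
except AssertionError:
    print("test_b failed: order-dependent")
```

A patch in the spirit of iFixFlakies would prepend the helper's reset logic to test_b so it passes in any order.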
|
Lande, Stefano |
ESEC/FSE '19: "Developing Secure Bitcoin ..."
Developing Secure Bitcoin Contracts with BitML
Nicola Atzei, Massimo Bartoletti, Stefano Lande, Nobuko Yoshida, and Roberto Zunino (University of Cagliari, Italy; Imperial College London, UK; University of Trento, Italy) We present a toolchain for developing and verifying smart contracts that can be executed on Bitcoin. The toolchain is based on BitML, a recent domain-specific language for smart contracts with a computationally sound embedding into Bitcoin. Our toolchain automatically verifies relevant properties of contracts, among them liquidity, which ensures that funds do not remain frozen within a contract forever. A compiler is provided to translate BitML contracts into sets of standard Bitcoin transactions: executing a contract corresponds to appending these transactions to the blockchain. We assess our toolchain through a benchmark of representative contracts. @InProceedings{ESEC/FSE19p1124, author = {Nicola Atzei and Massimo Bartoletti and Stefano Lande and Nobuko Yoshida and Roberto Zunino}, title = {Developing Secure Bitcoin Contracts with BitML}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1124--1128}, doi = {10.1145/3338906.3341173}, year = {2019}, } Publisher's Version Video Info |
|
Lee, Dongyoon |
ESEC/FSE '19: "Why Aren’t Regular Expressions ..."
Why Aren’t Regular Expressions a Lingua Franca? An Empirical Study on the Re-use and Portability of Regular Expressions
James C. Davis, Louis G. Michael IV, Christy A. Coghlan, Francisco Servant, and Dongyoon Lee (Virginia Tech, USA) This paper explores the extent to which regular expressions (regexes) are portable across programming languages. Many languages offer similar regex syntaxes, and it would be natural to assume that regexes can be ported across language boundaries. But can regexes be copy/pasted across language boundaries while retaining their semantic and performance characteristics? In our survey of 158 professional software developers, most indicated that they re-use regexes across language boundaries and about half reported that they believe regexes are a universal language. We experimentally evaluated the riskiness of this practice using a novel regex corpus — 537,806 regexes from 193,524 projects written in JavaScript, Java, PHP, Python, Ruby, Go, Perl, and Rust. Using our polyglot regex corpus, we explored two hitherto-unstudied regex portability problems: logic errors due to semantic differences, and security vulnerabilities due to performance differences. We report that developers’ belief in a regex lingua franca is understandable but unfounded. Though most regexes compile across language boundaries, 15% exhibit semantic differences across languages and 10% exhibit performance differences across languages. We explained these differences using regex documentation, and further illuminate our findings by investigating regex engine implementations. Along the way we found bugs in the regex engines of JavaScript-V8, Python, Ruby, and Rust, and potential semantic and performance regex bugs in thousands of modules. @InProceedings{ESEC/FSE19p443, author = {James C. Davis and Louis G. Michael IV and Christy A. Coghlan and Francisco Servant and Dongyoon Lee}, title = {Why Aren’t Regular Expressions a Lingua Franca? 
An Empirical Study on the Re-use and Portability of Regular Expressions}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {443--454}, doi = {10.1145/3338906.3338909}, year = {2019}, } Publisher's Version Artifacts Reusable |
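One semantic difference of the kind the study quantifies can be reproduced from the Python side (this particular example is ours, not drawn from the paper's corpus):

```python
import re

# In Python, '$' without re.MULTILINE also matches just before a single
# trailing newline, whereas e.g. JavaScript's '$' matches only at the very
# end of the string. Copy/pasting the pattern across languages can
# therefore silently change which inputs match.

pattern = r"^abc$"
print(bool(re.search(pattern, "abc\n")))   # True in Python
# In JavaScript, /^abc$/.test("abc\n") evaluates to false.

# Python's '\Z' anchors to the true end of the string, making intent explicit:
print(bool(re.search(r"^abc\Z", "abc\n")))  # False
```

Differences like this one are exactly the "logic errors due to semantic differences" the abstract reports in 15% of cross-language regexes.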
|
Leich, Thomas |
ESEC/FSE '19: "Effects of Explicit Feature ..."
Effects of Explicit Feature Traceability on Program Comprehension
Jacob Krüger, Gül Çalıklı, Thorsten Berger, Thomas Leich, and Gunter Saake (University of Magdeburg, Germany; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden; Harz University of Applied Sciences, Germany; METOP, Germany) Developers spend a substantial amount of their time on program comprehension. To improve their comprehension and refresh their memory, developers need to communicate with other developers, read the documentation, and analyze the source code. Many studies show that developers focus primarily on the source code and that small improvements can have a strong impact. As such, it is crucial to bring the code itself into a more comprehensible form. A particular technique for this purpose is explicit feature traces, which make it easy to identify a program’s functionalities. To improve our empirical understanding of the effects of feature traces, we report an online experiment with 49 professional software developers. We studied the impact of explicit feature traces, namely annotations and decomposition, on program comprehension and compared them to the same code without traces. Besides this experiment, we also asked our participants about their opinions in order to combine quantitative and qualitative data. Our results indicate that, as opposed to purely object-oriented code: (1) annotations can have positive effects on program comprehension; (2) decomposition can have a negative impact on bug localization; and (3) our participants perceive both techniques as beneficial. Moreover, none of the three code versions yields significant improvements in task completion time. 
Overall, our results indicate that lightweight traceability, such as using annotations, provides immediate benefits to developers during software development and maintenance without extensive training or tooling; and can improve current industrial practices that rely on heavyweight traceability tools (e.g., DOORS) and retroactive fulfillment of standards (e.g., ISO-26262, DO-178B). @InProceedings{ESEC/FSE19p338, author = {Jacob Krüger and Gül Çalıklı and Thorsten Berger and Thomas Leich and Gunter Saake}, title = {Effects of Explicit Feature Traceability on Program Comprehension}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {338--349}, doi = {10.1145/3338906.3338968}, year = {2019}, } Publisher's Version |
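A loose Python analogue of the annotation-based feature traces studied above can be sketched as follows (the decorator, registry, and feature names are hypothetical; the experiment used in-code annotations, not this exact mechanism):

```python
# Each function declares the product feature it implements, so a reader or
# tool can locate a feature's code directly instead of reconstructing the
# mapping from memory or documentation.

FEATURE_INDEX = {}

def feature(name):
    """Annotate a function with the feature it implements and index it."""
    def mark(fn):
        FEATURE_INDEX.setdefault(name, []).append(fn.__name__)
        return fn
    return mark

@feature("Encryption")
def encrypt_payload(data):
    return bytes(b ^ 0x5A for b in data)  # toy XOR cipher for illustration

@feature("Encryption")
def decrypt_payload(data):
    return bytes(b ^ 0x5A for b in data)

@feature("Compression")
def compress(data):
    return data  # stub

print(FEATURE_INDEX["Encryption"])  # ['encrypt_payload', 'decrypt_payload']
```

The index gives the kind of direct feature-to-code lookup that the study's annotated condition provided to participants.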
|
Lemieux, Caroline |
ESEC/FSE '19: "FUDGE: Fuzz Driver Generation ..."
FUDGE: Fuzz Driver Generation at Scale
Domagoj Babić, Stefan Bucur, Yaohui Chen, Franjo Ivančić, Tim King, Markus Kusano, Caroline Lemieux, László Szekeres, and Wei Wang (Google, USA; Northeastern University, USA; University of California at Berkeley, USA) At Google we have found tens of thousands of security and robustness bugs by fuzzing C and C++ libraries. To fuzz a library, a fuzzer requires a fuzz driver—which exercises some library code—to which it can pass inputs. Unfortunately, writing fuzz drivers remains a primarily manual exercise, a major hindrance to the widespread adoption of fuzzing. In this paper, we address this major hindrance by introducing the Fudge system for automated fuzz driver generation. Fudge automatically generates fuzz driver candidates for libraries based on existing client code. We have used Fudge to generate thousands of new drivers for a wide variety of libraries. Each generated driver includes a synthesized C/C++ program and a corresponding build script, and is automatically analyzed for quality. Developers have integrated over 200 of these generated drivers into continuous fuzzing services and have committed to address reported security bugs. Further, several of these fuzz drivers have been upstreamed to open source projects and integrated into the OSS-Fuzz fuzzing infrastructure. Running these fuzz drivers has resulted in over 150 bug fixes, including the elimination of numerous exploitable security vulnerabilities. @InProceedings{ESEC/FSE19p975, author = {Domagoj Babić and Stefan Bucur and Yaohui Chen and Franjo Ivančić and Tim King and Markus Kusano and Caroline Lemieux and László Szekeres and Wei Wang}, title = {FUDGE: Fuzz Driver Generation at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {975--985}, doi = {10.1145/3338906.3340456}, year = {2019}, } Publisher's Version |
|
Le Traon, Yves |
ESEC/FSE '19: "Mart: A Mutant Generation ..."
Mart: A Mutant Generation Tool for LLVM
Thierry Titcheu Chekam, Mike Papadakis, and Yves Le Traon (University of Luxembourg, Luxembourg) Program mutation makes small syntactic alterations to programs' code in order to artificially create faulty programs (mutants). Mutant creation (generation) tools are often characterized by their mutation operators and the way they create and represent the mutants. This paper presents Mart, a mutant generation tool for LLVM bitcode that supports the fine-grained definition of mutation operators (as matching-rule/replacing-pattern pairs; 816 pairs are defined by default) and the restriction of the code parts to mutate. New operators are implemented in Mart by implementing their matching rules and replacing patterns. Mart also implements in-memory Trivial Compiler Equivalence to eliminate equivalent and duplicate mutants during mutant generation. Mart generates mutant code as separate mutant files, a meta-mutant file, and weak-mutation and mutant-coverage instrumented files. Mart is publicly available (https://github.com/thierry-tct/mart). Mart has been applied to generate mutants for several research experiments and has generated more than 4,000,000 mutants. @InProceedings{ESEC/FSE19p1080, author = {Thierry Titcheu Chekam and Mike Papadakis and Yves Le Traon}, title = {Mart: A Mutant Generation Tool for LLVM}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1080--1084}, doi = {10.1145/3338906.3341180}, year = {2019}, } Publisher's Version Video Info ESEC/FSE '19: "iFixR: Bug Report driven Program ..." iFixR: Bug Report driven Program Repair Anil Koyuncu, Kui Liu, Tegawendé F. Bissyandé, Dongsun Kim, Martin Monperrus, Jacques Klein, and Yves Le Traon (University of Luxembourg, Luxembourg; Furiosa A.I., South Korea; KTH, Sweden) Issue tracking systems are commonly used in modern software development for collecting feedback from users and developers. 
An ultimate automation target of software maintenance is then the systematization of patch generation for user-reported bugs. Although this ambition is aligned with the momentum of automated program repair, the literature has, so far, mostly focused on generate-and-validate setups where fault localization and patch generation are driven by a well-defined test suite. On the one hand, however, the common (yet strong) assumption on the existence of relevant test cases does not hold in practice for most development settings: many bugs are reported without the available test suite being able to reveal them. On the other hand, for many projects, the number of bug reports generally outstrips the resources available to triage them. Towards increasing the adoption of patch generation tools by practitioners, we investigate a new repair pipeline, iFixR, driven by bug reports: (1) bug reports are fed to an IR-based fault localizer; (2) patches are generated from fix patterns and validated via regression testing; (3) a prioritized list of generated patches is proposed to developers. We evaluate iFixR on the Defects4J dataset, which we enriched (i.e., faults are linked to bug reports) and carefully reorganized (i.e., the timeline of test cases is naturally split). iFixR generates genuine/plausible patches for 21/44 Defects4J faults with its IR-based fault localizer. iFixR accurately places a genuine/plausible patch among its top-5 recommendations for 8/13 of these faults (without using future test cases in generation-and-validation). @InProceedings{ESEC/FSE19p314, author = {Anil Koyuncu and Kui Liu and Tegawendé F. Bissyandé and Dongsun Kim and Martin Monperrus and Jacques Klein and Yves Le Traon}, title = {iFixR: Bug Report driven Program Repair}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {314--325}, doi = {10.1145/3338906.3338935}, year = {2019}, } Publisher's Version Artifacts Reusable ESEC/FSE '19: "The Importance of Accounting ..." 
The Importance of Accounting for Real-World Labelling When Predicting Software Vulnerabilities Matthieu Jimenez, Renaud Rwemalika, Mike Papadakis, Federica Sarro, Yves Le Traon, and Mark Harman (University of Luxembourg, Luxembourg; University College London, UK; Facebook, UK) Previous work on vulnerability prediction assumes that predictive models are trained with respect to perfect labelling information (including labels from future, as yet undiscovered vulnerabilities). In this paper we present results from a comprehensive empirical study of 1,898 real-world vulnerabilities reported in 74 releases of three security-critical open source systems (Linux Kernel, OpenSSL and Wireshark). Our study investigates the effectiveness of three previously proposed vulnerability prediction approaches, in two settings: with and without the unrealistic labelling assumption. The results reveal that the unrealistic labelling assumption can profoundly mislead the scientific conclusions drawn: seemingly highly effective and deployable prediction results vanish when we fully account for realistically available labelling in the experimental methodology. More precisely, mean MCC values of predictive effectiveness drop from 0.77, 0.65 and 0.43 to 0.08, 0.22, 0.10 for Linux Kernel, OpenSSL and Wireshark, respectively. Similar results are also obtained for precision, recall and other assessments of predictive efficacy. The community therefore needs to upgrade experimental and empirical methodology for vulnerability prediction evaluation and development to ensure robust and actionable scientific findings. 
@InProceedings{ESEC/FSE19p695, author = {Matthieu Jimenez and Renaud Rwemalika and Mike Papadakis and Federica Sarro and Yves Le Traon and Mark Harman}, title = {The Importance of Accounting for Real-World Labelling When Predicting Software Vulnerabilities}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {695--705}, doi = {10.1145/3338906.3338941}, year = {2019}, } Publisher's Version |
|
Levin, Erik |
ESEC/FSE '19: "Using Microservices for Non-intrusive ..."
Using Microservices for Non-intrusive Customization of Multi-tenant SaaS
Phu H. Nguyen, Hui Song, Franck Chauvel, Roy Muller, Seref Boyar, and Erik Levin (SINTEF, Norway; Visma, Norway) Enterprise software vendors often need to support their customer companies in customizing the enterprise software products deployed on the customers' premises. But when software vendors migrate their products to cloud-based Software-as-a-Service (SaaS), the deep customization that used to be done on-premises is not applicable in the cloud-based multi-tenant context, in which all tenants share the same SaaS. Enabling tenant-specific customization in cloud-based multi-tenant SaaS requires a novel approach. This paper proposes a Microservices-based non-intrusive Customization framework for multi-tenant Cloud-based SaaS, called MiSC-Cloud. Non-intrusive deep customization means that the microservices for customization of each tenant are isolated from the main software product and from the customization microservices of other tenants. MiSC-Cloud makes deep customization possible via authorized API calls through API gateways to the APIs of the customization microservices and the APIs of the main software product. We have implemented a proof-of-concept of our approach to enable non-intrusive deep customization of eShopOnContainers, an open-source cloud-native reference application from Microsoft. Based on this work, we provide some lessons learned and directions for future work. @InProceedings{ESEC/FSE19p905, author = {Phu H. Nguyen and Hui Song and Franck Chauvel and Roy Muller and Seref Boyar and Erik Levin}, title = {Using Microservices for Non-intrusive Customization of Multi-tenant SaaS}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {905--915}, doi = {10.1145/3338906.3340452}, year = {2019}, } Publisher's Version |
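The gateway-based delegation at the heart of MiSC-Cloud can be caricatured in a few lines (the function names and the discount customization are invented; the real framework routes authorized HTTP calls through API gateways to isolated customization microservices):

```python
# The gateway checks whether the calling tenant has a customization
# registered for an endpoint; if so, the call is delegated to that
# tenant's handler, otherwise it falls through to the shared main product.
# Tenants never see each other's customizations.

def main_product_checkout(order):
    return {"total": sum(order.values())}

# Per-tenant customization "microservices" (plain functions here):
customizations = {
    ("tenant-a", "checkout"): lambda order: {"total": sum(order.values()) * 9 // 10},
}

def gateway(tenant, endpoint, payload):
    handler = customizations.get((tenant, endpoint))
    if handler is not None:                # tenant-specific customization
        return handler(payload)
    return main_product_checkout(payload)  # shared main product

print(gateway("tenant-a", "checkout", {"item": 100}))  # customized total
print(gateway("tenant-b", "checkout", {"item": 100}))  # default total
```

Keeping customization outside the main product's code is what makes the approach "non-intrusive": the shared SaaS is never forked per tenant.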
|
Liang, Bin |
ESEC/FSE '19: "Detecting Concurrency Memory ..."
Detecting Concurrency Memory Corruption Vulnerabilities
Yan Cai, Biyun Zhu, Ruijie Meng, Hao Yun, Liang He, Purui Su, and Bin Liang (Institute of Software at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Renmin University of China, China) Memory corruption vulnerabilities can occur in multithreaded executions; we refer to these as concurrency vulnerabilities in this paper. Due to non-deterministic multithreaded executions, they are extremely difficult to detect. Recently, researchers have tried to apply data race detectors to detect concurrency vulnerabilities. Unfortunately, these detectors are ineffective at detecting concurrency vulnerabilities. For example, most (90%) of data races are benign. However, concurrency vulnerabilities are harmful and can usually be exploited to launch attacks. Techniques based on the maximal causal model rely on constraint solvers to predict schedules; they can miss concurrency vulnerabilities in practice. Our insight is that a concurrency vulnerability is more closely related to the orders of events that can be reversed in different executions, regardless of whether the corresponding accesses form data races. We then define exchangeable events to identify pairs of events whose execution orders can probably be reversed in different executions. We further propose algorithms to detect three major kinds of concurrency vulnerabilities. To overcome the potential imprecision of exchangeable events, we also adopt a validation step to isolate real vulnerabilities. We implemented our algorithms in a tool, ConVul, and applied it to 10 known concurrency vulnerabilities and the MySQL database server. Compared with three widely used race detectors and one detector based on the maximal causal model, ConVul was significantly more effective, detecting 9 of the 10 known vulnerabilities and 6 zero-day vulnerabilities in MySQL (four of which have been confirmed). The other detectors detected at most 3 of the 16 known and zero-day vulnerabilities. 
@InProceedings{ESEC/FSE19p706, author = {Yan Cai and Biyun Zhu and Ruijie Meng and Hao Yun and Liang He and Purui Su and Bin Liang}, title = {Detecting Concurrency Memory Corruption Vulnerabilities}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {706--717}, doi = {10.1145/3338906.3338927}, year = {2019}, } Publisher's Version |
|
Liao, Xiangke |
ESEC/FSE '19: "Automatically Detecting Missing ..."
Automatically Detecting Missing Cleanup for Ungraceful Exits
Zhouyang Jia, Shanshan Li, Tingting Yu, Xiangke Liao, and Ji Wang (National University of Defense Technology, China; University of Kentucky, USA) Software encounters ungraceful exits due to either bugs in the interrupt/signal handler code or the intention of developers to debug the software. Users may suffer from "weird" problems caused by the leftovers of ungraceful exits. A common practice to fix these problems is rebooting, which wipes away the stale state of the software. This solution, however, is heavyweight and often leads to poor user experience because it requires restarting other normal processes. In this paper, we design SafeExit, a tool that can automatically detect and pinpoint the root causes of the problems caused by ungraceful exits, which can help users fix the problems using lightweight solutions. Specifically, SafeExit checks the program's exit behaviors in the case of an interrupted execution against its expected exit behaviors to detect the missing cleanup behaviors required for avoiding an ungraceful exit. The expected behaviors are obtained by monitoring the program's exit under a normal execution. We apply SafeExit to 38 programs across 10 domains. SafeExit finds 133 types of cleanup behaviors from 36 programs and detects 2861 missing behaviors from 292 interrupted executions. To predict missing behaviors for unseen input scenarios, SafeExit trains prediction models using a set of sampled input scenarios. The results show that SafeExit is accurate, with an average F-measure of 92.5%. @InProceedings{ESEC/FSE19p751, author = {Zhouyang Jia and Shanshan Li and Tingting Yu and Xiangke Liao and Ji Wang}, title = {Automatically Detecting Missing Cleanup for Ungraceful Exits}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {751--762}, doi = {10.1145/3338906.3338938}, year = {2019}, } Publisher's Version |
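At its core, the comparison SafeExit performs reduces to diffing expected against observed exit behaviors (the behavior names below are illustrative inventions, not SafeExit's actual output):

```python
# Record the cleanup actions observed during a normal exit, then diff them
# against what an interrupted run actually performed, pinpointing the
# leftovers of the ungraceful exit.

def missing_cleanup(normal_exit_actions, interrupted_exit_actions):
    """Return cleanup behaviors seen on a normal exit but missing here."""
    return sorted(set(normal_exit_actions) - set(interrupted_exit_actions))

# Behaviors observed by monitoring a normal shutdown (illustrative):
normal = {"remove pidfile", "release lock", "flush config", "close socket"}
# Behaviors observed when the process was killed mid-run:
interrupted = {"close socket"}

print(missing_cleanup(normal, interrupted))
# -> ['flush config', 'release lock', 'remove pidfile']
```

The listed leftovers point the user to a lightweight fix (e.g., deleting a stale pidfile) instead of a full reboot.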
|
Liew, Daniel |
ESEC/FSE '19: "Just Fuzz It: Solving Floating-Point ..."
Just Fuzz It: Solving Floating-Point Constraints using Coverage-Guided Fuzzing
Daniel Liew, Cristian Cadar, Alastair F. Donaldson, and J. Ryan Stinnett (Imperial College London, UK; Mozilla, USA) We investigate the use of coverage-guided fuzzing as a means of proving satisfiability of SMT formulas over finite variable domains, with specific application to floating-point constraints. We show how an SMT formula can be encoded as a program containing a location that is reachable if and only if the program’s input corresponds to a satisfying assignment to the formula. A coverage-guided fuzzer can then be used to search for an input that reaches the location, yielding a satisfying assignment. We have implemented this idea in a tool, Just Fuzz-it Solver (JFS), and we present a large experimental evaluation showing that JFS is both competitive with and complementary to state-of-the-art SMT solvers with respect to solving floating-point constraints, and that the coverage-guided approach of JFS provides significant benefit over naive fuzzing in the floating-point domain. Applied in a portfolio manner, the JFS approach thus has the potential to complement traditional SMT solvers for program analysis tasks that involve reasoning about floating-point constraints. @InProceedings{ESEC/FSE19p521, author = {Daniel Liew and Cristian Cadar and Alastair F. Donaldson and J. Ryan Stinnett}, title = {Just Fuzz It: Solving Floating-Point Constraints using Coverage-Guided Fuzzing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {521--532}, doi = {10.1145/3338906.3338921}, year = {2019}, } Publisher's Version |
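The encoding idea behind JFS is easy to sketch (our simplification: an integer constraint and naive random search stand in for JFS's floating-point SMT formulas and coverage-guided fuzzer):

```python
import random

# Encode "is this formula satisfiable?" as "is this branch reachable?",
# then search for an input that reaches the branch. Here the formula is
# x^2 + y^2 == 25 over small integers; any input reaching the branch is,
# by construction, a satisfying assignment.

def target_reached(x, y):
    if x * x + y * y == 25:   # reachable iff the formula is SAT
        return True
    return False

random.seed(0)
witness = None
for _ in range(100_000):       # naive fuzzing loop
    x, y = random.randint(-10, 10), random.randint(-10, 10)
    if target_reached(x, y):
        witness = (x, y)
        break

print("satisfying assignment:", witness)
```

JFS replaces the random loop with a coverage-guided fuzzer, whose feedback makes the search effective on the much larger floating-point domains the paper targets.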
|
Liguori, Pietro |
ESEC/FSE '19: "How Bad Can a Bug Get? An ..."
How Bad Can a Bug Get? An Empirical Analysis of Software Failures in the OpenStack Cloud Computing Platform
Domenico Cotroneo, Luigi De Simone, Pietro Liguori, Roberto Natella, and Nematollah Bidokhti (Federico II University of Naples, Italy; Futurewei Technologies, USA) Cloud management systems provide abstractions and APIs for programmatically configuring cloud infrastructures. Unfortunately, residual software bugs in these systems can potentially lead to high-severity failures, such as prolonged outages and data losses. In this paper, we investigate the impact of failures in the context of the widespread OpenStack cloud management system, by performing fault injection and by analyzing the impact of the resulting failures in terms of fail-stop behavior, failure detection through logging, and failure propagation across components. The analysis points out that most of the failures are not timely detected and notified; moreover, many of these failures can silently propagate over time and through components of the cloud management system, which calls for more thorough run-time checks and fault containment. @InProceedings{ESEC/FSE19p200, author = {Domenico Cotroneo and Luigi De Simone and Pietro Liguori and Roberto Natella and Nematollah Bidokhti}, title = {How Bad Can a Bug Get? An Empirical Analysis of Software Failures in the OpenStack Cloud Computing Platform}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {200--211}, doi = {10.1145/3338906.3338916}, year = {2019}, } Publisher's Version Artifacts Reusable |
|
Li, Hongyu |
ESEC/FSE '19: "When Deep Learning Met Code ..."
When Deep Learning Met Code Search
Jose Cambronero, Hongyu Li, Seohyun Kim, Koushik Sen, and Satish Chandra (Massachusetts Institute of Technology, USA; Facebook, USA; University of California at Berkeley, USA) There have been multiple recent proposals on using deep neural networks for code search using natural language. Common across these proposals is the idea of embedding code and natural language queries into real vectors and then using vector distance to approximate semantic correlation between code and the query. Multiple approaches exist for learning these embeddings, including unsupervised techniques, which rely only on a corpus of code examples, and supervised techniques, which use an aligned corpus of paired code and natural language descriptions. The goal of this supervision is to produce embeddings that are more similar for a query and the corresponding desired code snippet. Clearly, there are choices in whether to use supervised techniques at all, and if one does, what sort of network and training to use for supervision. This paper is the first to evaluate these choices systematically. To this end, we assembled implementations of state-of-the-art techniques to run on a common platform, with common training and evaluation corpora. To explore the design space in network complexity, we also introduced a new design point that is a minimal supervision extension to an existing unsupervised technique. Our evaluation shows that: 1. adding supervision to an existing unsupervised technique can improve performance, though not necessarily by much; 2. simple networks for supervision can be more effective than more sophisticated sequence-based networks for code search; 3. while it is common to use docstrings to carry out supervision, there is a sizeable gap between the effectiveness of docstrings and a more query-appropriate supervision corpus. 
@InProceedings{ESEC/FSE19p964, author = {Jose Cambronero and Hongyu Li and Seohyun Kim and Koushik Sen and Satish Chandra}, title = {When Deep Learning Met Code Search}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {964--974}, doi = {10.1145/3338906.3340458}, year = {2019}, } Publisher's Version |
|
Li, Huizhong |
ESEC/FSE '19: "EVMFuzzer: Detect EVM Vulnerabilities ..."
EVMFuzzer: Detect EVM Vulnerabilities via Fuzz Testing
Ying Fu, Meng Ren, Fuchen Ma, Heyuan Shi, Xin Yang, Yu Jiang, Huizhong Li, and Xiang Shi (Tsinghua University, China; WeBank, China) The Ethereum Virtual Machine (EVM) is the run-time environment for smart contracts, and its vulnerabilities may lead to serious problems for the Ethereum ecosystem. While many techniques are being continuously developed for the validation of smart contracts, the testing of the EVM itself remains challenging because of the special test input format and the absence of oracles. In this paper, we propose EVMFuzzer, the first tool that uses differential fuzzing to detect vulnerabilities of the EVM. The core idea is to continuously generate seed contracts and feed them to the target EVM and the benchmark EVMs, so as to find as many inconsistencies among execution results as possible, and eventually discover vulnerabilities via output cross-referencing. Given a target EVM and its APIs, EVMFuzzer generates seed contracts via a set of predefined mutators, and then employs a dynamic priority scheduling algorithm to guide seed contract selection and maximize the inconsistency. Finally, EVMFuzzer leverages benchmark EVMs as cross-referencing oracles to avoid manual checking. With EVMFuzzer, we have found several previously unknown security bugs in four widely used EVMs, 5 of which have been assigned Common Vulnerabilities and Exposures (CVE) IDs in the U.S. National Vulnerability Database. The video is presented at https://youtu.be/9Lejgf2GSOk. @InProceedings{ESEC/FSE19p1110, author = {Ying Fu and Meng Ren and Fuchen Ma and Heyuan Shi and Xin Yang and Yu Jiang and Huizhong Li and Xiang Shi}, title = {EVMFuzzer: Detect EVM Vulnerabilities via Fuzz Testing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1110--1114}, doi = {10.1145/3338906.3341175}, year = {2019}, } Publisher's Version |
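The differential-fuzzing loop itself is simple to sketch (a toy stand-in: two integer-average routines replace the target and benchmark EVMs, and random integers replace the mutated seed contracts):

```python
import random

# Run the same generated input through a "target" implementation and a
# "benchmark" implementation, flagging any divergence. One routine has a
# classic overflow-style bug when restricted to 32-bit arithmetic.

INT_MAX = 2**31 - 1

def avg_reference(a, b):
    return (a + b) // 2

def avg_buggy(a, b):
    s = (a + b) & 0xFFFFFFFF   # simulate 32-bit wraparound
    if s > INT_MAX:
        s -= 2**32
    return s // 2

random.seed(1)
divergences = []
for _ in range(10_000):
    a, b = random.randint(0, INT_MAX), random.randint(0, INT_MAX)
    if avg_reference(a, b) != avg_buggy(a, b):   # cross-referencing oracle
        divergences.append((a, b))

print(f"found {len(divergences)} diverging inputs")
```

The benchmark implementation serves as the oracle the abstract describes: no manual specification of correct behavior is needed, only agreement between implementations.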
|
Li, Junyi Jessy |
ESEC/FSE '19: "A Framework for Writing Trigger-Action ..."
A Framework for Writing Trigger-Action Todo Comments in Executable Format
Pengyu Nie, Rishabh Rai, Junyi Jessy Li, Sarfraz Khurshid, Raymond J. Mooney, and Milos Gligoric (University of Texas at Austin, USA) Natural language elements, e.g., todo comments, are frequently used to communicate among developers and to describe tasks that need to be performed (actions) when specific conditions hold on artifacts related to the code repository (triggers), e.g., from the Apache Struts project: “remove expectedJDK15 and if() after switching to Java 1.6”. As projects evolve, development processes change, and development teams reorganize, these comments, because of their informal nature, frequently become irrelevant or forgotten. We present the first framework, dubbed TrigIt, to specify trigger-action todo comments in executable format. Thus, actions are executed automatically when triggers evaluate to true. TrigIt specifications are written in the host language (e.g., Java) and are evaluated as part of the build process. The triggers are specified as query statements over abstract syntax trees, abstract representations of build configuration scripts, issue tracking systems, and system clock time. The actions are either notifications to developers or code transformation steps. We implemented TrigIt for the Java programming language and migrated 44 existing trigger-action comments from several popular open-source projects. Evaluation of TrigIt, via a user study, showed that users find TrigIt easy to learn and use. TrigIt has the potential to enforce more discipline in writing and maintaining comments in large code repositories. @InProceedings{ESEC/FSE19p385, author = {Pengyu Nie and Rishabh Rai and Junyi Jessy Li and Sarfraz Khurshid and Raymond J. Mooney and Milos Gligoric}, title = {A Framework for Writing Trigger-Action Todo Comments in Executable Format}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {385--396}, doi = {10.1145/3338906.3338965}, year = {2019}, } Publisher's Version |
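The trigger-action contract can be mimicked in a few lines of host-language code (this is our sketch, not TrigIt's actual query syntax, which operates over ASTs, build scripts, issue trackers, and the system clock):

```python
import sys
import datetime

# A todo comment becomes executable: the trigger is a predicate over the
# environment, evaluated at build time; the action is a notification.

def todo(trigger, action):
    """Run 'action' when 'trigger' evaluates to true at build time."""
    if trigger():
        action()

# The Struts example ("remove expectedJDK15 and if() after switching to
# Java 1.6"), rephrased as a version trigger:
todo(
    trigger=lambda: sys.version_info >= (3, 8),
    action=lambda: print("TODO fired: drop the pre-3.8 compatibility shim"),
)

# A time-based trigger, analogous to TrigIt's system-clock queries:
todo(
    trigger=lambda: datetime.date.today() >= datetime.date(2020, 1, 1),
    action=lambda: print("TODO fired: deprecation window has closed"),
)
```

Because the trigger is executed rather than read, the reminder cannot silently go stale the way an informal comment does.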
|
Lin, Jinkun |
ESEC/FSE '19: "Towards More Efficient Meta-heuristic ..."
Towards More Efficient Meta-heuristic Algorithms for Combinatorial Test Generation
Jinkun Lin, Shaowei Cai, Chuan Luo, Qingwei Lin, and Hongyu Zhang (Institute of Software at Chinese Academy of Sciences, China; Microsoft Research, China; University of Newcastle, Australia) Combinatorial interaction testing (CIT) is a popular approach to detecting faults in highly configurable software systems. The core task of CIT is to generate a small test suite called a t-way covering array (CA), where t is the covering strength. Many meta-heuristic algorithms have been proposed to solve the constrained covering array generation (CCAG) problem. A major drawback of existing algorithms is that they usually need considerable time to obtain a good-quality solution, which hinders the wider application of such algorithms. We observe that the high time consumption of existing meta-heuristic algorithms for CCAG is mainly due to the procedure of score computation. In this work, we propose a much more efficient method for score computation. The score computation method is applied to a state-of-the-art algorithm, TCA, showing significant improvements. The new score computation method opens a way to utilize algorithmic ideas relying on scores which were not affordable previously. We integrate a gradient descent search step to further improve the algorithm, leading to a new algorithm called FastCA. Experiments on a broad range of real-world and synthetic benchmarks show that FastCA significantly outperforms state-of-the-art CCAG algorithms, in terms of both the size of the obtained covering arrays and the run time. @InProceedings{ESEC/FSE19p212, author = {Jinkun Lin and Shaowei Cai and Chuan Luo and Qingwei Lin and Hongyu Zhang}, title = {Towards More Efficient Meta-heuristic Algorithms for Combinatorial Test Generation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {212--222}, doi = {10.1145/3338906.3338914}, year = {2019}, } Publisher's Version |
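What a 2-way covering array must guarantee is easy to state as a checker (the checker below is ours and only verifies coverage; FastCA's contribution is *constructing* small arrays quickly, which this sketch does not attempt):

```python
from itertools import combinations, product

# A t-way (here 2-way) covering array requires that for every pair of
# options, every combination of their values appears in some test.

def uncovered_pairs(tests, domains):
    """Return value combinations of option pairs not exercised by any test."""
    missing = []
    for i, j in combinations(range(len(domains)), 2):
        covered = {(t[i], t[j]) for t in tests}
        for vi, vj in product(domains[i], domains[j]):
            if (vi, vj) not in covered:
                missing.append(((i, vi), (j, vj)))
    return missing

# Three binary options; these 4 tests form a 2-way covering array,
# versus 2^3 = 8 tests for exhaustive coverage:
domains = [(0, 1), (0, 1), (0, 1)]
tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(uncovered_pairs(tests, domains))        # [] -> all pairs covered
print(len(uncovered_pairs(tests[:3], domains)) > 0)  # dropping a test breaks it
```

Meta-heuristic CCAG algorithms like TCA and FastCA search for ever-smaller test sets for which this check passes, and the "score" the abstract discusses measures how much a candidate move reduces the uncovered set.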
|
Lin, Qingwei |
ESEC/FSE '19: "Robust Log-Based Anomaly Detection ..."
Robust Log-Based Anomaly Detection on Unstable Log Data
Xu Zhang, Yong Xu, Qingwei Lin, Bo Qiao, Hongyu Zhang, Yingnong Dang, Chunyu Xie, Xinsheng Yang, Qian Cheng, Ze Li, Junjie Chen, Xiaoting He, Randolph Yao, Jian-Guang Lou, Murali Chintalapati, Furao Shen, and Dongmei Zhang (Microsoft Research, China; Nanjing University, China; University of Newcastle, Australia; Microsoft, USA; Tianjin University, China) Logs are widely used by large and complex software-intensive systems for troubleshooting. There have been many studies on log-based anomaly detection. To detect the anomalies, the existing methods mainly construct a detection model using log event data extracted from historical logs. However, we find that the existing methods do not work well in practice. These methods rely on a closed-world assumption: that the log data is stable over time and the set of distinct log events is known. However, our empirical study shows that in practice, log data often contains previously unseen log events or log sequences. The instability of log data comes from two sources: 1) the evolution of logging statements, and 2) the processing noise in log data. In this paper, we propose a new log-based anomaly detection approach, called LogRobust. LogRobust extracts semantic information of log events and represents them as semantic vectors. It then detects anomalies by utilizing an attention-based Bi-LSTM model, which has the ability to capture the contextual information in the log sequences and automatically learn the importance of different log events. In this way, LogRobust is able to identify and handle unstable log events and sequences. We have evaluated LogRobust using logs collected from the Hadoop system and an actual online service system of Microsoft. The experimental results show that the proposed approach can effectively address the problem of log instability and achieve accurate and robust results on real-world, ever-changing log data. 
@InProceedings{ESEC/FSE19p807, author = {Xu Zhang and Yong Xu and Qingwei Lin and Bo Qiao and Hongyu Zhang and Yingnong Dang and Chunyu Xie and Xinsheng Yang and Qian Cheng and Ze Li and Junjie Chen and Xiaoting He and Randolph Yao and Jian-Guang Lou and Murali Chintalapati and Furao Shen and Dongmei Zhang}, title = {Robust Log-Based Anomaly Detection on Unstable Log Data}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {807--817}, doi = {10.1145/3338906.3338931}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Towards More Efficient Meta-heuristic ..." Towards More Efficient Meta-heuristic Algorithms for Combinatorial Test Generation Jinkun Lin, Shaowei Cai, Chuan Luo, Qingwei Lin, and Hongyu Zhang (Institute of Software at Chinese Academy of Sciences, China; Microsoft Research, China; University of Newcastle, Australia) Combinatorial interaction testing (CIT) is a popular approach to detecting faults in highly configurable software systems. The core task of CIT is to generate a small test suite called a t-way covering array (CA), where t is the covering strength. Many meta-heuristic algorithms have been proposed to solve the constrained covering array generating (CCAG) problem. A major drawback of existing algorithms is that they usually need considerable time to obtain a good-quality solution, which hinders the wider applications of such algorithms. We observe that the high time consumption of existing meta-heuristic algorithms for CCAG is mainly due to the procedure of score computation. In this work, we propose a much more efficient method for score computation. The score computation method is applied to a state-of-the-art algorithm TCA, showing significant improvements. The new score computation method opens a way to utilize algorithmic ideas relying on scores which were not affordable previously. We integrate a gradient descent search step to further improve the algorithm, leading to a new algorithm called FastCA. 
Experiments on a broad range of real-world and synthetic benchmarks show that FastCA significantly outperforms state-of-the-art CCAG algorithms in terms of both the size of the obtained covering array and the run time. @InProceedings{ESEC/FSE19p212, author = {Jinkun Lin and Shaowei Cai and Chuan Luo and Qingwei Lin and Hongyu Zhang}, title = {Towards More Efficient Meta-heuristic Algorithms for Combinatorial Test Generation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {212--222}, doi = {10.1145/3338906.3338914}, year = {2019}, } Publisher's Version |
|
Lin, Shang-Wei |
ESEC/FSE '19: "Locating Vulnerabilities in ..."
Locating Vulnerabilities in Binaries via Memory Layout Recovering
Haijun Wang, Xiaofei Xie, Shang-Wei Lin, Yun Lin, Yuekang Li, Shengchao Qin, Yang Liu, and Ting Liu (Shenzhen University, China; Nanyang Technological University, Singapore; National University of Singapore, Singapore; Teesside University, UK; Xi'an Jiaotong University, China) Locating vulnerabilities is an important task for security auditing, exploit writing, and code hardening. However, it is challenging to locate vulnerabilities in binary code, because most program semantics (e.g., boundaries of an array) is missing after compilation. Without program semantics, it is difficult to determine whether a memory access exceeds its valid boundaries in binary code. In this work, we propose an approach to locate vulnerabilities based on memory layout recovery. First, we collect a set of passed executions and one failed execution. Then, for passed and failed executions, we restore their program semantics by recovering fine-grained memory layouts based on the memory addressing model. With the memory layouts recovered in passed executions as reference, we can locate vulnerabilities in the failed execution by memory layout identification and comparison. Our experiments show that the proposed approach effectively locates vulnerabilities in 24 out of 25 DARPA CGC programs (96%), and can effectively classify 453 program crashes (in 5 Linux programs) into 19 groups based on their root causes. @InProceedings{ESEC/FSE19p718, author = {Haijun Wang and Xiaofei Xie and Shang-Wei Lin and Yun Lin and Yuekang Li and Shengchao Qin and Yang Liu and Ting Liu}, title = {Locating Vulnerabilities in Binaries via Memory Layout Recovering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {718--728}, doi = {10.1145/3338906.3338966}, year = {2019}, } Publisher's Version |
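The layout-comparison idea can be illustrated with a minimal sketch (hypothetical data shapes, not the paper's tool): memory layouts recovered from passed runs serve as a reference, and accesses in the failed run that fall outside every recovered object boundary become candidate vulnerability locations.

```python
def out_of_bounds_accesses(reference_layout, failed_accesses):
    """reference_layout: list of (base, size) object boundaries recovered
    from passed executions. failed_accesses: list of (pc, address) pairs
    observed in the failed execution. Returns accesses that hit no known
    object, i.e. candidate out-of-bounds locations."""
    def in_some_object(addr):
        return any(base <= addr < base + size for base, size in reference_layout)
    return [(pc, addr) for pc, addr in failed_accesses if not in_some_object(addr)]

layout = [(0x1000, 64), (0x2000, 16)]            # two recovered objects
accesses = [(0x4001, 0x1000),                    # in bounds
            (0x4005, 0x2010)]                    # one byte past the 16-byte object
print(out_of_bounds_accesses(layout, accesses))  # flags only the second access
```

The real approach additionally recovers the layouts themselves from the memory addressing model; this sketch only shows the final comparison step.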
|
Lin, Yun |
ESEC/FSE '19: "Locating Vulnerabilities in ..."
Locating Vulnerabilities in Binaries via Memory Layout Recovering
Haijun Wang, Xiaofei Xie, Shang-Wei Lin, Yun Lin, Yuekang Li, Shengchao Qin, Yang Liu, and Ting Liu (Shenzhen University, China; Nanyang Technological University, Singapore; National University of Singapore, Singapore; Teesside University, UK; Xi'an Jiaotong University, China) Locating vulnerabilities is an important task for security auditing, exploit writing, and code hardening. However, it is challenging to locate vulnerabilities in binary code, because most program semantics (e.g., boundaries of an array) is missing after compilation. Without program semantics, it is difficult to determine whether a memory access exceeds its valid boundaries in binary code. In this work, we propose an approach to locate vulnerabilities based on memory layout recovery. First, we collect a set of passed executions and one failed execution. Then, for passed and failed executions, we restore their program semantics by recovering fine-grained memory layouts based on the memory addressing model. With the memory layouts recovered in passed executions as reference, we can locate vulnerabilities in the failed execution by memory layout identification and comparison. Our experiments show that the proposed approach effectively locates vulnerabilities in 24 out of 25 DARPA CGC programs (96%), and can effectively classify 453 program crashes (in 5 Linux programs) into 19 groups based on their root causes. @InProceedings{ESEC/FSE19p718, author = {Haijun Wang and Xiaofei Xie and Shang-Wei Lin and Yun Lin and Yuekang Li and Shengchao Qin and Yang Liu and Ting Liu}, title = {Locating Vulnerabilities in Binaries via Memory Layout Recovering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {718--728}, doi = {10.1145/3338906.3338966}, year = {2019}, } Publisher's Version |
|
Li, Shanshan |
ESEC/FSE '19: "Automatically Detecting Missing ..."
Automatically Detecting Missing Cleanup for Ungraceful Exits
Zhouyang Jia, Shanshan Li, Tingting Yu, Xiangke Liao, and Ji Wang (National University of Defense Technology, China; University of Kentucky, USA) Software encounters ungraceful exits due to either bugs in the interrupt/signal handler code or the intention of developers to debug the software. Users may suffer from "weird" problems caused by leftovers of the ungraceful exits. A common practice to fix these problems is rebooting, which wipes away the stale state of the software. This solution, however, is heavyweight and often leads to poor user experience because it requires restarting other normal processes. In this paper, we design SafeExit, a tool that can automatically detect and pinpoint the root causes of the problems caused by ungraceful exits, which can help users fix the problems using lightweight solutions. Specifically, SafeExit checks the program exit behaviors in the case of an interrupted execution against its expected exit behaviors to detect the missing cleanup behaviors required for avoiding the ungraceful exit. The expected behaviors are obtained by monitoring the program exit under a normal execution. We apply SafeExit to 38 programs across 10 domains. SafeExit finds 133 types of cleanup behaviors from 36 programs and detects 2861 missing behaviors from 292 interrupted executions. To predict missing behaviors for unseen input scenarios, SafeExit trains prediction models using a set of sampled input scenarios. The results show that SafeExit is accurate with an average F-measure of 92.5%. @InProceedings{ESEC/FSE19p751, author = {Zhouyang Jia and Shanshan Li and Tingting Yu and Xiangke Liao and Ji Wang}, title = {Automatically Detecting Missing Cleanup for Ungraceful Exits}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {751--762}, doi = {10.1145/3338906.3338938}, year = {2019}, } Publisher's Version |
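The kind of cleanup behavior SafeExit checks for can be made concrete with a small sketch (a hypothetical lock file; this is not the tool itself, just the pattern whose absence it detects): an interrupted program should run the same cleanup as a normal exit.

```python
import atexit
import os
import signal
import sys
import tempfile

# Hypothetical state that must be cleaned up on exit (a lock file).
lock_path = os.path.join(tempfile.gettempdir(), "demo.lock")

def cleanup():
    # Remove leftovers so an interrupted run does not poison the next one.
    if os.path.exists(lock_path):
        os.remove(lock_path)

def on_interrupt(signum, frame):
    # Turn an ungraceful SIGINT/SIGTERM into a graceful exit: sys.exit()
    # unwinds the stack and triggers the atexit-registered cleanup.
    sys.exit(1)

open(lock_path, "w").close()          # acquire the "lock"
atexit.register(cleanup)
signal.signal(signal.SIGINT, on_interrupt)
signal.signal(signal.SIGTERM, on_interrupt)
```

A program missing the handler registration above would leave the lock file behind when killed, which is exactly the class of leftover SafeExit flags.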
|
Liu, Dewei |
ESEC/FSE '19: "Latent Error Prediction and ..."
Latent Error Prediction and Fault Localization for Microservice Applications by Learning from System Trace Logs
Xiang Zhou, Xin Peng, Tao Xie, Jun Sun, Chao Ji, Dewei Liu, Qilin Xiang, and Chuan He (Fudan University, China; University of Illinois at Urbana-Champaign, USA; Singapore Management University, Singapore) In the production environment, a large part of microservice failures are related to the complex and dynamic interactions and runtime environments, such as those related to multiple instances, environmental configurations, and asynchronous interactions of microservices. Due to the complexity and dynamism of these failures, it is often hard to reproduce and diagnose them in testing environments. It is desirable, yet challenging, to detect these failures and locate the faults at runtime in the production environment so that developers can resolve them efficiently. To address this challenge, in this paper, we propose MEPFL, an approach of latent error prediction and fault localization for microservice applications by learning from system trace logs. Based on a set of features defined on the system trace logs, MEPFL trains prediction models at both the trace level and the microservice level using the system trace logs collected from automatic executions of the target application and its faulty versions produced by fault injection. The prediction models thus can be used in the production environment to predict latent errors, faulty microservices, and fault types for trace instances captured at runtime. We implement MEPFL based on the infrastructure systems of container orchestrator and service mesh, and conduct a series of experimental studies with two open-source microservice applications (one of which is, to the best of our knowledge, the largest open-source microservice application). The results indicate that MEPFL can achieve high accuracy in intra-application prediction of latent errors, faulty microservices, and fault types, and outperforms a state-of-the-art approach of failure diagnosis for distributed systems. 
The results also show that MEPFL can effectively predict latent errors caused by real-world fault cases. @InProceedings{ESEC/FSE19p683, author = {Xiang Zhou and Xin Peng and Tao Xie and Jun Sun and Chao Ji and Dewei Liu and Qilin Xiang and Chuan He}, title = {Latent Error Prediction and Fault Localization for Microservice Applications by Learning from System Trace Logs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {683--694}, doi = {10.1145/3338906.3338961}, year = {2019}, } Publisher's Version |
|
Liu, Hui |
ESEC/FSE '19: "Semantic Relation Based Expansion ..."
Semantic Relation Based Expansion of Abbreviations
Yanjie Jiang, Hui Liu, and Lu Zhang (Beijing Institute of Technology, China; Peking University, China) Identifiers account for 70% of source code in terms of characters, and thus the quality of such identifiers is critical for program comprehension and software maintenance. For various reasons, however, many identifiers contain abbreviations, which reduces the readability and maintainability of source code. To this end, a number of approaches have been proposed to expand abbreviations in identifiers. However, such approaches are either inaccurate or confined to specific identifiers. Therefore, in this paper we propose a generic and accurate approach to expand identifier abbreviations. The key insight of the approach is that abbreviations in the name of a software entity e have a great chance of finding their full terms in the names of software entities that are semantically related to e. Consequently, the proposed approach builds a knowledge graph to represent such entities and their relationships with e, and searches the graph for full terms. The optimal searching strategy for the graph could be learned automatically from a corpus of manually expanded abbreviations. We evaluate the proposed approach on nine well-known open-source projects. Results of our k-fold evaluation suggest that the proposed approach improves the state of the art. It improves precision significantly from 29% to 85%, and recall from 29% to 77%. Evaluation results also suggest that the proposed generic approach is even better than the state-of-the-art parameter-specific approach in expanding parameter abbreviations, improving F1 score significantly from 75% to 87%. @InProceedings{ESEC/FSE19p131, author = {Yanjie Jiang and Hui Liu and Lu Zhang}, title = {Semantic Relation Based Expansion of Abbreviations}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {131--141}, doi = {10.1145/3338906.3338929}, year = {2019}, } Publisher's Version |
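The matching step at the heart of abbreviation expansion can be pictured with a toy heuristic (a subsequence check over related entity names; this is not the paper's knowledge-graph search, and the function names are hypothetical):

```python
def may_expand_to(abbr, term):
    """Conservative heuristic: `term` is a candidate expansion of `abbr`
    if it starts with the abbreviation's first letter and contains the
    remaining letters in order (a subsequence check)."""
    abbr, term = abbr.lower(), term.lower()
    if not abbr or not term.startswith(abbr[0]):
        return False
    it = iter(term)
    return all(ch in it for ch in abbr)  # consumes `it`, preserving order

def rank_candidates(abbr, related_names):
    """Rank names of semantically related entities; prefer shorter matches."""
    return sorted((n for n in related_names if may_expand_to(abbr, n)), key=len)

print(rank_candidates("msg", ["message", "messageQueue", "manager"]))
# → ['message', 'messageQueue']  ('manager' has no 's' after the 'm')
```

The paper's contribution is choosing *where* to look (names of semantically related entities in the knowledge graph) and learning the search strategy; the lexical match itself is deliberately simple here.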
|
Liu, Kui |
ESEC/FSE '19: "iFixR: Bug Report driven Program ..."
iFixR: Bug Report driven Program Repair
Anil Koyuncu, Kui Liu, Tegawendé F. Bissyandé, Dongsun Kim, Martin Monperrus, Jacques Klein, and Yves Le Traon (University of Luxembourg, Luxembourg; Furiosa A.I., South Korea; KTH, Sweden) Issue tracking systems are commonly used in modern software development for collecting feedback from users and developers. An ultimate automation target of software maintenance is then the systematization of patch generation for user-reported bugs. Although this ambition is aligned with the momentum of automated program repair, the literature has, so far, mostly focused on generate-and-validate setups where fault localization and patch generation are driven by a well-defined test suite. On the one hand, however, the common (yet strong) assumption on the existence of relevant test cases does not hold in practice for most development settings: many bugs are reported without the available test suite being able to reveal them. On the other hand, for many projects, the number of bug reports generally outstrips the resources available to triage them. Towards increasing the adoption of patch generation tools by practitioners, we investigate a new repair pipeline, iFixR, driven by bug reports: (1) bug reports are fed to an IR-based fault localizer; (2) patches are generated from fix patterns and validated via regression testing; (3) a prioritized list of generated patches is proposed to developers. We evaluate iFixR on the Defects4J dataset, which we enriched (i.e., faults are linked to bug reports) and carefully reorganized (i.e., the timeline of test-cases is naturally split). iFixR generates genuine/plausible patches for 21/44 Defects4J faults with its IR-based fault localizer. iFixR accurately places a genuine/plausible patch among its top-5 recommendations for 8/13 of these faults (without using future test cases in generation-and-validation). @InProceedings{ESEC/FSE19p314, author = {Anil Koyuncu and Kui Liu and Tegawendé F. 
Bissyandé and Dongsun Kim and Martin Monperrus and Jacques Klein and Yves Le Traon}, title = {iFixR: Bug Report driven Program Repair}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {314--325}, doi = {10.1145/3338906.3338935}, year = {2019}, } Publisher's Version Artifacts Reusable |
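The IR-based fault localization step (1) can be pictured with a minimal TF-IDF ranking sketch (illustrative; iFixR's actual localizer uses richer features and weighting, and the file contents below are hypothetical):

```python
import math
from collections import Counter

def ir_rank(bug_report, files):
    """Rank source files by lexical similarity to a bug report:
    a minimal TF-IDF/cosine sketch of IR-based fault localization."""
    docs = {name: Counter(text.lower().split()) for name, text in files.items()}
    n = len(docs)
    df = Counter(w for tf in docs.values() for w in tf)  # document frequency

    def vec(tf):
        # Weight each term by frequency times inverse document frequency.
        return {w: c * math.log(1 + n / df.get(w, n)) for w, c in tf.items()}

    def cos(a, b):
        dot = sum(a[w] * b.get(w, 0.0) for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(Counter(bug_report.lower().split()))
    return sorted(docs, key=lambda name: cos(q, vec(docs[name])), reverse=True)

files = {
    "DateParser.java": "parse date format throws exception on null input",
    "HttpClient.java": "open connection send request read response",
}
print(ir_rank("NullPointerException when parsing a date with null format", files))
```

The top-ranked files then become the targets for pattern-based patch generation in step (2).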
|
Liu, Mingwei |
ESEC/FSE '19: "A Learning-Based Approach ..."
A Learning-Based Approach for Automatic Construction of Domain Glossary from Source Code and Documentation
Chong Wang, Xin Peng, Mingwei Liu, Zhenchang Xing, Xuefang Bai, Bing Xie, and Tuo Wang (Fudan University, China; Australian National University, Australia; Peking University, China) A domain glossary that organizes domain-specific concepts and their aliases and relations is essential for knowledge acquisition and software development. Existing approaches use linguistic heuristics or term-frequency-based statistics to identify domain-specific terms from software documentation, and thus the accuracy is often low. In this paper, we propose a learning-based approach for automatic construction of domain glossary from source code and software documentation. The approach uses a set of high-quality seed terms identified from code identifiers and natural language concept definitions to train a domain-specific prediction model to recognize glossary terms based on the lexical and semantic context of the sentences mentioning domain-specific concepts. It then merges the aliases of the same concepts to their canonical names, selects a set of explanation sentences for each concept, and identifies "is a", "has a", and "related to" relations between the concepts. We apply our approach to the deep learning and Hadoop domains and harvest 5,382 and 2,069 concepts together with 16,962 and 6,815 relations respectively. Our evaluation validates the accuracy of the extracted domain glossary and its usefulness for the fusion and acquisition of knowledge from different documents of different projects. @InProceedings{ESEC/FSE19p97, author = {Chong Wang and Xin Peng and Mingwei Liu and Zhenchang Xing and Xuefang Bai and Bing Xie and Tuo Wang}, title = {A Learning-Based Approach for Automatic Construction of Domain Glossary from Source Code and Documentation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {97--108}, doi = {10.1145/3338906.3338963}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Generating Query-Specific ..." 
Generating Query-Specific Class API Summaries Mingwei Liu, Xin Peng, Andrian Marcus, Zhenchang Xing, Wenkai Xie, Shuangshuang Xing, and Yang Liu (Fudan University, China; University of Texas at Dallas, USA; Australian National University, Australia) Source code summaries are concise representations, in the form of text and/or code, of complex code elements and are meant to help developers gain a quick understanding that in turn helps them perform specific tasks. Generation of summaries that are task-specific is still a challenge in the automatic code summarization field. We propose an approach for generating on-demand, extrinsic hybrid summaries for API classes, relevant to a programming task, formulated as a natural language query. The summaries include the most relevant sentences extracted from the API reference documentation and the most relevant methods. External evaluators assessed the summaries generated for classes retrieved from JDK and Android libraries for several programming tasks. The majority found that the summaries are complete, concise, and readable. A comparison with summaries produced by three baseline approaches revealed that the information present only in our summaries is more relevant than the one present only in the baseline summaries. Finally, an extrinsic evaluation study showed that the summaries help users evaluate the correctness of API retrieval results faster and more accurately. @InProceedings{ESEC/FSE19p120, author = {Mingwei Liu and Xin Peng and Andrian Marcus and Zhenchang Xing and Wenkai Xie and Shuangshuang Xing and Yang Liu}, title = {Generating Query-Specific Class API Summaries}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {120--130}, doi = {10.1145/3338906.3338971}, year = {2019}, } Publisher's Version |
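The sentence-selection part of query-specific summarization can be sketched as a simple relevance ranking (naive word overlap with whitespace tokenization; the paper's approach uses richer relevance signals and also selects relevant methods):

```python
def summarize(query, doc_sentences, k=2):
    """Select the k documentation sentences most relevant to a task query,
    ranked by the number of query words each sentence shares."""
    qwords = set(query.lower().split())
    return sorted(doc_sentences,
                  key=lambda s: len(qwords & set(s.lower().split())),
                  reverse=True)[:k]

sentences = [
    "Returns the length of this character sequence",
    "Appends the specified string to this character sequence",
    "Removes the characters in a substring of this sequence",
]
print(summarize("append a string to the builder", sentences, k=1))
```

The result is a summary tailored to the query rather than a generic class description, which is the point the abstract emphasizes.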
|
Liu, Ting |
ESEC/FSE '19: "Locating Vulnerabilities in ..."
Locating Vulnerabilities in Binaries via Memory Layout Recovering
Haijun Wang, Xiaofei Xie, Shang-Wei Lin, Yun Lin, Yuekang Li, Shengchao Qin, Yang Liu, and Ting Liu (Shenzhen University, China; Nanyang Technological University, Singapore; National University of Singapore, Singapore; Teesside University, UK; Xi'an Jiaotong University, China) Locating vulnerabilities is an important task for security auditing, exploit writing, and code hardening. However, it is challenging to locate vulnerabilities in binary code, because most program semantics (e.g., boundaries of an array) is missing after compilation. Without program semantics, it is difficult to determine whether a memory access exceeds its valid boundaries in binary code. In this work, we propose an approach to locate vulnerabilities based on memory layout recovery. First, we collect a set of passed executions and one failed execution. Then, for passed and failed executions, we restore their program semantics by recovering fine-grained memory layouts based on the memory addressing model. With the memory layouts recovered in passed executions as reference, we can locate vulnerabilities in the failed execution by memory layout identification and comparison. Our experiments show that the proposed approach effectively locates vulnerabilities in 24 out of 25 DARPA CGC programs (96%), and can effectively classify 453 program crashes (in 5 Linux programs) into 19 groups based on their root causes. @InProceedings{ESEC/FSE19p718, author = {Haijun Wang and Xiaofei Xie and Shang-Wei Lin and Yun Lin and Yuekang Li and Shengchao Qin and Yang Liu and Ting Liu}, title = {Locating Vulnerabilities in Binaries via Memory Layout Recovering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {718--728}, doi = {10.1145/3338906.3338966}, year = {2019}, } Publisher's Version |
|
Liu, Xuanzhe |
ESEC/FSE '19: "SEntiMoji: An Emoji-Powered ..."
SEntiMoji: An Emoji-Powered Learning Approach for Sentiment Analysis in Software Engineering
Zhenpeng Chen, Yanbin Cao, Xuan Lu, Qiaozhu Mei, and Xuanzhe Liu (Peking University, China; University of Michigan, USA) Sentiment analysis has various application scenarios in software engineering (SE), such as detecting developers' emotions in commit messages and identifying their opinions on Q&A forums. However, commonly used out-of-the-box sentiment analysis tools cannot obtain reliable results on SE tasks and the misunderstanding of technical jargon is demonstrated to be the main reason. Researchers therefore have to utilize labeled SE-related texts to customize sentiment analysis for SE tasks via a variety of algorithms. However, the scarce labeled data can cover only very limited expressions and thus cannot guarantee the analysis quality. To address such a problem, we turn to the easily available emoji usage data for help. More specifically, we employ emotional emojis as noisy labels of sentiments and propose a representation learning approach that uses both Tweets and GitHub posts containing emojis to learn sentiment-aware representations for SE-related texts. These emoji-labeled posts can not only supply the technical jargon, but also incorporate more general sentiment patterns shared across domains. These posts, together with the labeled data, are used to learn the final sentiment classifier. Compared to the existing sentiment analysis methods used in SE, the proposed approach can achieve significant improvement on representative benchmark datasets. Through further contrast experiments, we find that the Tweets make a key contribution to the power of our approach. This finding informs future research not to pursue domain-specific resources unilaterally, but to transfer knowledge from the open domain through ubiquitous signals such as emojis. 
@InProceedings{ESEC/FSE19p841, author = {Zhenpeng Chen and Yanbin Cao and Xuan Lu and Qiaozhu Mei and Xuanzhe Liu}, title = {SEntiMoji: An Emoji-Powered Learning Approach for Sentiment Analysis in Software Engineering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {841--852}, doi = {10.1145/3338906.3338977}, year = {2019}, } Publisher's Version |
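The "emojis as noisy labels" idea can be illustrated with a toy harvesting step (this is not SEntiMoji's representation-learning pipeline; the emoji sets and posts below are hypothetical): posts containing emotional emojis are collected as weakly labeled training data.

```python
# Hypothetical emoji-to-sentiment mapping used as a noisy labeling rule.
POSITIVE = {"😄", "😍", "👍"}
NEGATIVE = {"😡", "😭", "👎"}

def noisy_label(post):
    """Assign a weak sentiment label from emoji occurrences, or None when
    there is no (or a conflicting) emoji signal."""
    pos = sum(e in post for e in POSITIVE)
    neg = sum(e in post for e in NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return None

posts = ["Merged the PR, great work 😄", "This build is broken again 😡", "Updated docs"]
labeled = [(p, noisy_label(p)) for p in posts if noisy_label(p) is not None]
print(labeled)  # only the two emoji-bearing posts survive as training data
```

The harvested pairs would then feed representation learning; the unlabeled remainder is simply discarded, which is what makes the labels cheap but noisy.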
|
Liu, Xu |
ESEC/FSE '19: "Pinpointing Performance Inefficiencies ..."
Pinpointing Performance Inefficiencies in Java
Pengfei Su, Qingsen Wang, Milind Chabbi, and Xu Liu (College of William and Mary, USA; Scalable Machines Research, USA) Many performance inefficiencies such as inappropriate choice of algorithms or data structures, developers' inattention to performance, and missed compiler optimizations show up as wasteful memory operations. Wasteful memory operations are those that produce/consume data to/from memory that may have been avoided. We present JXPerf, a lightweight performance analysis tool for pinpointing wasteful memory operations in Java programs. Traditional bytecode instrumentation for such analysis (1) introduces prohibitive overheads and (2) misses inefficiencies in machine code generation. JXPerf overcomes both of these problems. JXPerf uses hardware performance monitoring units to sample memory locations accessed by a program and uses hardware debug registers to monitor subsequent accesses to the same memory. The result is a lightweight measurement at the machine code level with attribution of inefficiencies to their provenance --- machine and source code within full calling contexts. JXPerf introduces only 7% runtime overhead and 7% memory overhead, making it useful in production. Guided by JXPerf, we optimize several Java applications by improving code generation and choosing superior data structures and algorithms, which yield significant speedups. @InProceedings{ESEC/FSE19p818, author = {Pengfei Su and Qingsen Wang and Milind Chabbi and Xu Liu}, title = {Pinpointing Performance Inefficiencies in Java}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {818--829}, doi = {10.1145/3338906.3338923}, year = {2019}, } Publisher's Version |
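One class of wasteful memory operation such tools target is the "silent store": writing a value identical to what the location already holds. JXPerf itself detects these by sampling hardware counters and debug registers; the following is only a pure-Python illustration over a hypothetical recorded store trace.

```python
def find_silent_stores(trace):
    """trace: list of (pc, address, value) store records in program order.
    Returns (pc, address) pairs whose store rewrote the same value."""
    last = {}        # address -> last value stored there
    wasteful = []
    for pc, addr, value in trace:
        if last.get(addr) == value:
            wasteful.append((pc, addr))   # the store had no observable effect
        last[addr] = value
    return wasteful

trace = [(0x10, 0xff00, 7),   # first store to 0xff00
         (0x14, 0xff00, 7),   # silent: rewrites the same value
         (0x18, 0xff00, 8)]   # useful: value changes
print(find_silent_stores(trace))  # flags only the second store
```

Attributing such records back to calling contexts, as the tool does, is what turns raw counts into actionable optimization targets.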
|
Liu, Yang |
ESEC/FSE '19: "Cerebro: Context-Aware Adaptive ..."
Cerebro: Context-Aware Adaptive Fuzzing for Effective Vulnerability Detection
Yuekang Li, Yinxing Xue, Hongxu Chen, Xiuheng Wu, Cen Zhang, Xiaofei Xie, Haijun Wang, and Yang Liu (University of Science and Technology of China, China; Nanyang Technological University, Singapore; Zhejiang Sci-Tech University, China) Existing greybox fuzzers mainly utilize program coverage as the goal to guide the fuzzing process. To maximize their outputs, coverage-based greybox fuzzers need to evaluate the quality of seeds properly, which involves making two decisions: 1) which is the most promising seed to fuzz next (seed prioritization), and 2) how much effort should be spent on the current seed (power scheduling). In this paper, we present our fuzzer, Cerebro, to address the above challenges. For the seed prioritization problem, we propose an online multi-objective based algorithm to balance various metrics such as code complexity, coverage, execution time, etc. To address the power scheduling problem, we introduce the concept of input potential to measure the complexity of uncovered code and propose a cost-effective algorithm to update it dynamically. Unlike previous approaches where the fuzzer evaluates an input solely based on the execution traces that it has covered, Cerebro is able to foresee the benefits of fuzzing the input by adaptively evaluating its input potential. We perform a thorough evaluation for Cerebro on 8 different real-world programs. The experiments show that Cerebro can find more vulnerabilities and achieve better coverage than state-of-the-art fuzzers such as AFL and AFLFast. @InProceedings{ESEC/FSE19p533, author = {Yuekang Li and Yinxing Xue and Hongxu Chen and Xiuheng Wu and Cen Zhang and Xiaofei Xie and Haijun Wang and Yang Liu}, title = {Cerebro: Context-Aware Adaptive Fuzzing for Effective Vulnerability Detection}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {533--544}, doi = {10.1145/3338906.3338975}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Locating Vulnerabilities in ..." 
Locating Vulnerabilities in Binaries via Memory Layout Recovering Haijun Wang, Xiaofei Xie, Shang-Wei Lin, Yun Lin, Yuekang Li, Shengchao Qin, Yang Liu, and Ting Liu (Shenzhen University, China; Nanyang Technological University, Singapore; National University of Singapore, Singapore; Teesside University, UK; Xi'an Jiaotong University, China) Locating vulnerabilities is an important task for security auditing, exploit writing, and code hardening. However, it is challenging to locate vulnerabilities in binary code, because most program semantics (e.g., boundaries of an array) is missing after compilation. Without program semantics, it is difficult to determine whether a memory access exceeds its valid boundaries in binary code. In this work, we propose an approach to locate vulnerabilities based on memory layout recovery. First, we collect a set of passed executions and one failed execution. Then, for passed and failed executions, we restore their program semantics by recovering fine-grained memory layouts based on the memory addressing model. With the memory layouts recovered in passed executions as reference, we can locate vulnerabilities in the failed execution by memory layout identification and comparison. Our experiments show that the proposed approach effectively locates vulnerabilities in 24 out of 25 DARPA CGC programs (96%), and can effectively classify 453 program crashes (in 5 Linux programs) into 19 groups based on their root causes. @InProceedings{ESEC/FSE19p718, author = {Haijun Wang and Xiaofei Xie and Shang-Wei Lin and Yun Lin and Yuekang Li and Shengchao Qin and Yang Liu and Ting Liu}, title = {Locating Vulnerabilities in Binaries via Memory Layout Recovering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {718--728}, doi = {10.1145/3338906.3338966}, year = {2019}, } Publisher's Version ESEC/FSE '19: "DeepStellar: Model-Based Quantitative ..." 
DeepStellar: Model-Based Quantitative Analysis of Stateful Deep Learning Systems Xiaoning Du, Xiaofei Xie, Yi Li, Lei Ma, Yang Liu, and Jianjun Zhao (Nanyang Technological University, Singapore; Kyushu University, Japan; Zhejiang Sci-Tech University, China) Deep Learning (DL) has achieved tremendous success in many cutting-edge applications. However, the state-of-the-art DL systems still suffer from quality issues. While some recent progress has been made on the analysis of feed-forward DL systems, little study has been done on the Recurrent Neural Network (RNN)-based stateful DL systems, which are widely used in audio, natural language, and video processing. In this paper, we initiate the very first step towards the quantitative analysis of RNN-based DL systems. We model RNN as an abstract state transition system to characterize its internal behaviors. Based on the abstract model, we design two trace similarity metrics and five coverage criteria which enable the quantitative analysis of RNNs. We further propose two algorithms powered by the quantitative measures for adversarial sample detection and coverage-guided test generation. We evaluate DeepStellar on four RNN-based systems covering image classification and automated speech recognition. The results demonstrate that the abstract model is useful in capturing the internal behaviors of RNNs, and confirm that (1) the similarity metrics could effectively capture the differences between samples even with very small perturbations (achieving 97% accuracy for detecting adversarial samples) and (2) the coverage criteria are useful in revealing erroneous behaviors (generating three times more adversarial samples than random testing and hundreds of times more than the unrolling approach). 
@InProceedings{ESEC/FSE19p477, author = {Xiaoning Du and Xiaofei Xie and Yi Li and Lei Ma and Yang Liu and Jianjun Zhao}, title = {DeepStellar: Model-Based Quantitative Analysis of Stateful Deep Learning Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {477--487}, doi = {10.1145/3338906.3338954}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Generating Query-Specific ..." Generating Query-Specific Class API Summaries Mingwei Liu, Xin Peng, Andrian Marcus, Zhenchang Xing, Wenkai Xie, Shuangshuang Xing, and Yang Liu (Fudan University, China; University of Texas at Dallas, USA; Australian National University, Australia) Source code summaries are concise representations, in the form of text and/or code, of complex code elements and are meant to help developers gain a quick understanding that in turn helps them perform specific tasks. Generating task-specific summaries is still a challenge in the automatic code summarization field. We propose an approach for generating on-demand, extrinsic hybrid summaries for API classes, relevant to a programming task formulated as a natural language query. The summaries include the most relevant sentences extracted from the API reference documentation and the most relevant methods. External evaluators assessed the summaries generated for classes retrieved from the JDK and Android libraries for several programming tasks. The majority found the summaries complete, concise, and readable. A comparison with summaries produced by three baseline approaches revealed that the information present only in our summaries is more relevant than that present only in the baseline summaries. Finally, an extrinsic evaluation study showed that the summaries help users evaluate the correctness of API retrieval results faster and more accurately. 
@InProceedings{ESEC/FSE19p120, author = {Mingwei Liu and Xin Peng and Andrian Marcus and Zhenchang Xing and Wenkai Xie and Shuangshuang Xing and Yang Liu}, title = {Generating Query-Specific Class API Summaries}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {120--130}, doi = {10.1145/3338906.3338971}, year = {2019}, } Publisher's Version |
|
Liu, Yepang |
ESEC/FSE '19: "Exploring and Exploiting the ..."
Exploring and Exploiting the Correlations between Bug-Inducing and Bug-Fixing Commits
Ming Wen, Rongxin Wu, Yepang Liu, Yongqiang Tian, Xuan Xie, Shing-Chi Cheung, and Zhendong Su (Hong Kong University of Science and Technology, China; Xiamen University, China; Southern University of Science and Technology, China; Sun Yat-sen University, China; ETH Zurich, Switzerland) Bug-inducing commits provide important information to understand when and how bugs were introduced. Therefore, they have been extensively investigated by existing studies and frequently leveraged to facilitate bug fixing in industrial practice. Due to the importance of bug-inducing commits in software debugging, we are motivated to conduct the first systematic empirical study to explore the correlations between bug-inducing and bug-fixing commits in terms of code elements and modifications. To facilitate the study, we collected the inducing and fixing commits for 333 bugs from seven large open-source projects. The empirical findings reveal important and significant correlations between a bug's inducing and fixing commits. We further exploit the usefulness of such correlation findings from two aspects. First, they explain why the SZZ algorithm, the most widely adopted approach to collecting bug-inducing commits, is imprecise. In view of SZZ's imprecision, we revisited the findings of previous studies based on SZZ, and found that 8 out of 10 previous findings are significantly affected by SZZ's imprecision. Second, they shed light on the design of automated debugging techniques. For demonstration, we designed approaches that exploit the correlations with respect to statements and change actions. Our experiments on Defects4J show that our approaches can significantly boost the performance of fault localization and also advance existing APR techniques. 
@InProceedings{ESEC/FSE19p326, author = {Ming Wen and Rongxin Wu and Yepang Liu and Yongqiang Tian and Xuan Xie and Shing-Chi Cheung and Zhendong Su}, title = {Exploring and Exploiting the Correlations between Bug-Inducing and Bug-Fixing Commits}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {326--337}, doi = {10.1145/3338906.3338962}, year = {2019}, } Publisher's Version Info |
|
Li, Xuandong |
ESEC/FSE '19: "Preference-Wise Testing for ..."
Preference-Wise Testing for Android Applications
Yifei Lu, Minxue Pan, Juan Zhai, Tian Zhang, and Xuandong Li (Nanjing University, China) Preferences, the setting options provided by Android, are an essential part of Android apps. Preferences allow users to change app features and behaviors dynamically, and therefore need to be thoroughly tested. Unfortunately, the specific preferences used in test cases are typically not explicitly specified, forcing testers to manually set options or blindly try different option combinations. To effectively test the impacts of different preference options, this paper presents PREFEST, a preference-wise enhanced automatic testing approach for Android apps. Given a set of test cases, PREFEST can locate the preferences that may affect the test cases with a combined static and dynamic analysis of the app under test, and execute these test cases only under the necessary option combinations. The evaluation shows that PREFEST can improve code coverage by 6.8% and branch coverage by 12.3%, and find five more real bugs, compared to testing with the original test cases. The test cost is reduced by 99% in both the number of test cases and the testing time, compared to testing under pairwise combinations of options. @InProceedings{ESEC/FSE19p268, author = {Yifei Lu and Minxue Pan and Juan Zhai and Tian Zhang and Xuandong Li}, title = {Preference-Wise Testing for Android Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {268--278}, doi = {10.1145/3338906.3338980}, year = {2019}, } Publisher's Version |
|
Li, Yi |
ESEC/FSE '19: "DeepStellar: Model-Based Quantitative ..."
DeepStellar: Model-Based Quantitative Analysis of Stateful Deep Learning Systems
Xiaoning Du, Xiaofei Xie, Yi Li, Lei Ma, Yang Liu, and Jianjun Zhao (Nanyang Technological University, Singapore; Kyushu University, Japan; Zhejiang Sci-Tech University, China) Deep Learning (DL) has achieved tremendous success in many cutting-edge applications. However, the state-of-the-art DL systems still suffer from quality issues. While some recent progress has been made on the analysis of feed-forward DL systems, little study has been done on Recurrent Neural Network (RNN)-based stateful DL systems, which are widely used in audio, natural language, and video processing. In this paper, we take the first step towards the quantitative analysis of RNN-based DL systems. We model the RNN as an abstract state transition system to characterize its internal behaviors. Based on the abstract model, we design two trace similarity metrics and five coverage criteria which enable the quantitative analysis of RNNs. We further propose two algorithms powered by the quantitative measures for adversarial sample detection and coverage-guided test generation. We evaluate DeepStellar on four RNN-based systems covering image classification and automated speech recognition. The results demonstrate that the abstract model is useful in capturing the internal behaviors of RNNs, and confirm that (1) the similarity metrics can effectively capture the differences between samples even with very small perturbations (achieving 97% accuracy in detecting adversarial samples) and (2) the coverage criteria are useful in revealing erroneous behaviors (generating three times more adversarial samples than random testing and hundreds of times more than the unrolling approach). 
@InProceedings{ESEC/FSE19p477, author = {Xiaoning Du and Xiaofei Xie and Yi Li and Lei Ma and Yang Liu and Jianjun Zhao}, title = {DeepStellar: Model-Based Quantitative Analysis of Stateful Deep Learning Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {477--487}, doi = {10.1145/3338906.3338954}, year = {2019}, } Publisher's Version |
|
Li, Yuekang |
ESEC/FSE '19: "Cerebro: Context-Aware Adaptive ..."
Cerebro: Context-Aware Adaptive Fuzzing for Effective Vulnerability Detection
Yuekang Li, Yinxing Xue, Hongxu Chen, Xiuheng Wu, Cen Zhang, Xiaofei Xie, Haijun Wang, and Yang Liu (University of Science and Technology of China, China; Nanyang Technological University, Singapore; Zhejiang Sci-Tech University, China) Existing greybox fuzzers mainly utilize program coverage as the goal to guide the fuzzing process. To maximize their outputs, coverage-based greybox fuzzers need to evaluate the quality of seeds properly, which involves making two decisions: 1) which is the most promising seed to fuzz next (seed prioritization), and 2) how much effort should be devoted to the current seed (power scheduling). In this paper, we present our fuzzer, Cerebro, to address the above challenges. For the seed prioritization problem, we propose an online multi-objective algorithm to balance various metrics such as code complexity, coverage, and execution time. To address the power scheduling problem, we introduce the concept of input potential to measure the complexity of uncovered code and propose a cost-effective algorithm to update it dynamically. Unlike previous approaches, where the fuzzer evaluates an input solely based on the execution traces that it has covered, Cerebro is able to foresee the benefits of fuzzing the input by adaptively evaluating its input potential. We perform a thorough evaluation of Cerebro on 8 different real-world programs. The experiments show that Cerebro can find more vulnerabilities and achieve better coverage than state-of-the-art fuzzers such as AFL and AFLFast. @InProceedings{ESEC/FSE19p533, author = {Yuekang Li and Yinxing Xue and Hongxu Chen and Xiuheng Wu and Cen Zhang and Xiaofei Xie and Haijun Wang and Yang Liu}, title = {Cerebro: Context-Aware Adaptive Fuzzing for Effective Vulnerability Detection}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {533--544}, doi = {10.1145/3338906.3338975}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Locating Vulnerabilities in ..." 
Locating Vulnerabilities in Binaries via Memory Layout Recovering Haijun Wang, Xiaofei Xie, Shang-Wei Lin, Yun Lin, Yuekang Li, Shengchao Qin, Yang Liu, and Ting Liu (Shenzhen University, China; Nanyang Technological University, Singapore; National University of Singapore, Singapore; Teesside University, UK; Xi'an Jiaotong University, China) Locating vulnerabilities is an important task for security auditing, exploit writing, and code hardening. However, it is challenging to locate vulnerabilities in binary code, because most program semantics (e.g., the boundaries of an array) are missing after compilation. Without program semantics, it is difficult to determine whether a memory access exceeds its valid boundaries in binary code. In this work, we propose an approach to locate vulnerabilities based on memory layout recovery. First, we collect a set of passed executions and one failed execution. Then, for the passed and failed executions, we restore their program semantics by recovering fine-grained memory layouts based on the memory addressing model. With the memory layouts recovered in passed executions as reference, we can locate vulnerabilities in the failed execution by memory layout identification and comparison. Our experiments show that the proposed approach is effective at locating vulnerabilities in 24 out of 25 DARPA’s CGC programs (96%), and can effectively classify 453 program crashes (in 5 Linux programs) into 19 groups based on their root causes. @InProceedings{ESEC/FSE19p718, author = {Haijun Wang and Xiaofei Xie and Shang-Wei Lin and Yun Lin and Yuekang Li and Shengchao Qin and Yang Liu and Ting Liu}, title = {Locating Vulnerabilities in Binaries via Memory Layout Recovering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {718--728}, doi = {10.1145/3338906.3338966}, year = {2019}, } Publisher's Version |
|
Li, Ze |
ESEC/FSE '19: "Robust Log-Based Anomaly Detection ..."
Robust Log-Based Anomaly Detection on Unstable Log Data
Xu Zhang, Yong Xu, Qingwei Lin, Bo Qiao, Hongyu Zhang, Yingnong Dang, Chunyu Xie, Xinsheng Yang, Qian Cheng, Ze Li, Junjie Chen, Xiaoting He, Randolph Yao, Jian-Guang Lou, Murali Chintalapati, Furao Shen, and Dongmei Zhang (Microsoft Research, China; Nanjing University, China; University of Newcastle, Australia; Microsoft, USA; Tianjin University, China) Logs are widely used by large and complex software-intensive systems for troubleshooting. There have been many studies on log-based anomaly detection. To detect anomalies, the existing methods mainly construct a detection model using log event data extracted from historical logs. However, we find that the existing methods do not work well in practice. These methods make a closed-world assumption, which assumes that the log data is stable over time and the set of distinct log events is known. However, our empirical study shows that in practice, log data often contains previously unseen log events or log sequences. The instability of log data comes from two sources: 1) the evolution of logging statements, and 2) the processing noise in log data. In this paper, we propose a new log-based anomaly detection approach, called LogRobust. LogRobust extracts semantic information from log events and represents them as semantic vectors. It then detects anomalies by utilizing an attention-based Bi-LSTM model, which has the ability to capture the contextual information in log sequences and automatically learn the importance of different log events. In this way, LogRobust is able to identify and handle unstable log events and sequences. We have evaluated LogRobust using logs collected from the Hadoop system and an actual online service system of Microsoft. The experimental results show that the proposed approach addresses the problem of log instability well and achieves accurate and robust results on real-world, ever-changing log data. 
@InProceedings{ESEC/FSE19p807, author = {Xu Zhang and Yong Xu and Qingwei Lin and Bo Qiao and Hongyu Zhang and Yingnong Dang and Chunyu Xie and Xinsheng Yang and Qian Cheng and Ze Li and Junjie Chen and Xiaoting He and Randolph Yao and Jian-Guang Lou and Murali Chintalapati and Furao Shen and Dongmei Zhang}, title = {Robust Log-Based Anomaly Detection on Unstable Log Data}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {807--817}, doi = {10.1145/3338906.3338931}, year = {2019}, } Publisher's Version |
|
Li, Zenan |
ESEC/FSE '19: "Boosting Operational DNN Testing ..."
Boosting Operational DNN Testing Efficiency through Conditioning
Zenan Li, Xiaoxing Ma, Chang Xu, Chun Cao, Jingwei Xu, and Jian Lü (Nanjing University, China) With the increasing adoption of Deep Neural Network (DNN) models as integral parts of software systems, efficient operational testing of DNNs is much in demand to ensure these models' actual performance in field conditions. A challenge is that the testing often needs to produce precise results with a very limited budget for labeling data collected in the field. Viewing software testing as a practice of reliability estimation through statistical sampling, we re-interpret the idea behind conventional structural coverages as conditioning for variance reduction. With this insight, we propose an efficient DNN testing method based on conditioning on the representation learned by the DNN model under test. The representation is defined by the probability distribution of the output of neurons in the last hidden layer of the model. To sample from this high-dimensional distribution, in which the operational data are sparsely distributed, we design an algorithm leveraging cross-entropy minimization. Experiments with various DNN models and datasets were conducted to evaluate the general efficiency of the approach. The results show that, compared with simple random sampling, this approach requires only about half of the labeled inputs to achieve the same level of precision. @InProceedings{ESEC/FSE19p499, author = {Zenan Li and Xiaoxing Ma and Chang Xu and Chun Cao and Jingwei Xu and Jian Lü}, title = {Boosting Operational DNN Testing Efficiency through Conditioning}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {499--509}, doi = {10.1145/3338906.3338930}, year = {2019}, } Publisher's Version |
|
Lo, David |
ESEC/FSE '19: "AnswerBot: An Answer Summary ..."
AnswerBot: An Answer Summary Generation Tool Based on Stack Overflow
Liang Cai, Haoye Wang, Bowen Xu, Qiao Huang, Xin Xia, David Lo, and Zhenchang Xing (Zhejiang University, China; Singapore Management University, Singapore; Monash University, Australia; Australian National University, Australia) Software Q&A sites (like Stack Overflow) play an essential role in developers’ day-to-day problem-solving work. Although search engines (like Google) are widely used to obtain a list of relevant posts for technical problems, we observed that redundant relevant posts and the sheer amount of information make it hard for developers to digest and identify the useful answers. In this paper, we propose AnswerBot, a tool that automatically generates an answer summary for a technical problem. AnswerBot consists of three main stages: (1) relevant question retrieval, (2) useful answer paragraph selection, and (3) diverse answer summary generation. We implement it in the form of a search engine website. To evaluate AnswerBot, we first build a repository that includes a large number of Java questions and their corresponding answers from Stack Overflow. Then, we conduct a user study that evaluates the answer summaries generated by AnswerBot and two baselines (based on the Google and Stack Overflow search engines) for 100 queries. The results show that the answer summaries generated by AnswerBot are more relevant, useful, and diverse. Moreover, we also substantially improved the efficiency of AnswerBot (from 309 to 8 seconds per query). @InProceedings{ESEC/FSE19p1134, author = {Liang Cai and Haoye Wang and Bowen Xu and Qiao Huang and Xin Xia and David Lo and Zhenchang Xing}, title = {AnswerBot: An Answer Summary Generation Tool Based on Stack Overflow}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1134--1138}, doi = {10.1145/3338906.3341186}, year = {2019}, } Publisher's Version ESEC/FSE '19: "BIKER: A Tool for Bi-Information ..." 
BIKER: A Tool for Bi-Information Source Based API Method Recommendation Liang Cai, Haoye Wang, Qiao Huang, Xin Xia, Zhenchang Xing, and David Lo (Zhejiang University, China; Monash University, Australia; Australian National University, Australia; Singapore Management University, Singapore) Application Programming Interfaces (APIs) in software libraries play an important role in modern software development. Although most libraries provide API documentation as a reference, developers may find it difficult to directly search for appropriate APIs in documentation using the natural language description of their programming tasks. We call this phenomenon the knowledge gap: API documentation mainly describes API functionality and structure but lacks other types of information, such as concepts and purposes. In this paper, we propose a Java API recommendation tool named BIKER (Bi-Information source based KnowledgE Recommendation) to bridge the knowledge gap. We implement BIKER as a search engine website. Given a query in natural language, instead of directly searching API documentation, BIKER first searches for similar API-related questions on Stack Overflow to extract candidate APIs. Then, BIKER ranks them by considering the query’s similarity with both Stack Overflow posts and API documentation. Finally, to help developers better understand why each API is recommended and how to use it in practice, BIKER summarizes and presents supplementary information (e.g., API descriptions, code examples in Stack Overflow posts) for each recommended API. Our quantitative evaluation and user study demonstrate that BIKER can help developers find appropriate APIs more efficiently and precisely. 
@InProceedings{ESEC/FSE19p1075, author = {Liang Cai and Haoye Wang and Qiao Huang and Xin Xia and Zhenchang Xing and David Lo}, title = {BIKER: A Tool for Bi-Information Source Based API Method Recommendation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1075--1079}, doi = {10.1145/3338906.3341174}, year = {2019}, } Publisher's Version |
|
Lohia, Pranay |
ESEC/FSE '19: "Design Diagrams as Ontological ..."
Design Diagrams as Ontological Source
Pranay Lohia, Kalapriya Kannan, Biplav Srivastava, and Sameep Mehta (IBM Research, India; IBM Research, USA) In custom software development projects, it is frequently the case that the same type of software is being built for different customers. The deliverables are similar because they address the same market (e.g., Telecom, Banking), have similar functions, or both. However, most organisations do not take advantage of this similarity and conduct each project from scratch, leading to lower margins and lower quality. Our key observation is that the similarity among the projects alludes to the existence of a veritable domain of discourse whose ontology, if created, would make the similarity across the projects explicit. Design diagrams are an integral part of any commercial software project's deliverables, as they document crucial facets of the software solution. We propose an approach to extract ontological information from UML design diagrams (class and sequence diagrams) and represent it as a domain ontology in a convenient representation. This ontology not only helps in developing a better understanding of the domain but also fosters software reuse for future software projects in that domain. Initial results on extracting ontologies from thousands of models from a public repository show that the created ontologies are accurate and help in better software reuse for new solutions. @InProceedings{ESEC/FSE19p863, author = {Pranay Lohia and Kalapriya Kannan and Biplav Srivastava and Sameep Mehta}, title = {Design Diagrams as Ontological Source}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {863--873}, doi = {10.1145/3338906.3340446}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Black Box Fairness Testing ..." 
Black Box Fairness Testing of Machine Learning Models Aniya Aggarwal, Pranay Lohia, Seema Nagar, Kuntal Dey, and Diptikalyan Saha (IBM Research, India) Any given AI system cannot be accepted unless its trustworthiness is proven. An important characteristic of a trustworthy AI system is the absence of algorithmic bias. 'Individual discrimination' exists when an individual who differs from another only in 'protected attributes' (e.g., age, gender, race, etc.) receives a different decision outcome from a given machine learning (ML) model. The current work addresses the problem of detecting the presence of individual discrimination in given ML models. Detection of individual discrimination is test-intensive in a black-box setting, which is not feasible for non-trivial systems. We propose a methodology for the auto-generation of test inputs for the task of detecting individual discrimination. Our approach combines two well-established techniques, symbolic execution and local explainability, for effective test case generation. We empirically show that our approach to generating test cases is very effective compared to the best-known benchmark systems that we examine. @InProceedings{ESEC/FSE19p625, author = {Aniya Aggarwal and Pranay Lohia and Seema Nagar and Kuntal Dey and Diptikalyan Saha}, title = {Black Box Fairness Testing of Machine Learning Models}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {625--635}, doi = {10.1145/3338906.3338937}, year = {2019}, } Publisher's Version |
|
Lou, Jian-Guang |
ESEC/FSE '19: "Robust Log-Based Anomaly Detection ..."
Robust Log-Based Anomaly Detection on Unstable Log Data
Xu Zhang, Yong Xu, Qingwei Lin, Bo Qiao, Hongyu Zhang, Yingnong Dang, Chunyu Xie, Xinsheng Yang, Qian Cheng, Ze Li, Junjie Chen, Xiaoting He, Randolph Yao, Jian-Guang Lou, Murali Chintalapati, Furao Shen, and Dongmei Zhang (Microsoft Research, China; Nanjing University, China; University of Newcastle, Australia; Microsoft, USA; Tianjin University, China) Logs are widely used by large and complex software-intensive systems for troubleshooting. There have been many studies on log-based anomaly detection. To detect anomalies, the existing methods mainly construct a detection model using log event data extracted from historical logs. However, we find that the existing methods do not work well in practice. These methods make a closed-world assumption, which assumes that the log data is stable over time and the set of distinct log events is known. However, our empirical study shows that in practice, log data often contains previously unseen log events or log sequences. The instability of log data comes from two sources: 1) the evolution of logging statements, and 2) the processing noise in log data. In this paper, we propose a new log-based anomaly detection approach, called LogRobust. LogRobust extracts semantic information from log events and represents them as semantic vectors. It then detects anomalies by utilizing an attention-based Bi-LSTM model, which has the ability to capture the contextual information in log sequences and automatically learn the importance of different log events. In this way, LogRobust is able to identify and handle unstable log events and sequences. We have evaluated LogRobust using logs collected from the Hadoop system and an actual online service system of Microsoft. The experimental results show that the proposed approach addresses the problem of log instability well and achieves accurate and robust results on real-world, ever-changing log data. 
@InProceedings{ESEC/FSE19p807, author = {Xu Zhang and Yong Xu and Qingwei Lin and Bo Qiao and Hongyu Zhang and Yingnong Dang and Chunyu Xie and Xinsheng Yang and Qian Cheng and Ze Li and Junjie Chen and Xiaoting He and Randolph Yao and Jian-Guang Lou and Murali Chintalapati and Furao Shen and Dongmei Zhang}, title = {Robust Log-Based Anomaly Detection on Unstable Log Data}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {807--817}, doi = {10.1145/3338906.3338931}, year = {2019}, } Publisher's Version |
|
Loukeris, Michail |
ESEC/FSE '19: "Efficient Computing in a Safe ..."
Efficient Computing in a Safe Environment
Michail Loukeris (Athens University of Economics and Business, Greece) Modern computer systems face security challenges and are thus forced to employ various encryption and mitigation mechanisms, along with other measures that significantly affect their performance. In this study, we aim to identify the energy and run-time performance implications of the Meltdown and Spectre mitigation mechanisms. To achieve our goal, we experiment on a server platform using different test cases. Our results highlight that request handling and memory operations are noticeably affected by the mitigation mechanisms, in terms of both energy and run-time performance. @InProceedings{ESEC/FSE19p1208, author = {Michail Loukeris}, title = {Efficient Computing in a Safe Environment}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1208--1210}, doi = {10.1145/3338906.3342491}, year = {2019}, } Publisher's Version |
|
Lü, Jian |
ESEC/FSE '19: "Boosting Operational DNN Testing ..."
Boosting Operational DNN Testing Efficiency through Conditioning
Zenan Li, Xiaoxing Ma, Chang Xu, Chun Cao, Jingwei Xu, and Jian Lü (Nanjing University, China) With the increasing adoption of Deep Neural Network (DNN) models as integral parts of software systems, efficient operational testing of DNNs is much in demand to ensure these models' actual performance in field conditions. A challenge is that the testing often needs to produce precise results with a very limited budget for labeling data collected in the field. Viewing software testing as a practice of reliability estimation through statistical sampling, we re-interpret the idea behind conventional structural coverages as conditioning for variance reduction. With this insight, we propose an efficient DNN testing method based on conditioning on the representation learned by the DNN model under test. The representation is defined by the probability distribution of the output of neurons in the last hidden layer of the model. To sample from this high-dimensional distribution, in which the operational data are sparsely distributed, we design an algorithm leveraging cross-entropy minimization. Experiments with various DNN models and datasets were conducted to evaluate the general efficiency of the approach. The results show that, compared with simple random sampling, this approach requires only about half of the labeled inputs to achieve the same level of precision. @InProceedings{ESEC/FSE19p499, author = {Zenan Li and Xiaoxing Ma and Chang Xu and Chun Cao and Jingwei Xu and Jian Lü}, title = {Boosting Operational DNN Testing Efficiency through Conditioning}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {499--509}, doi = {10.1145/3338906.3338930}, year = {2019}, } Publisher's Version |
|
Lu, Jing |
ESEC/FSE '19: "Assessing the Quality of the ..."
Assessing the Quality of the Steps to Reproduce in Bug Reports
Oscar Chaparro, Carlos Bernal-Cárdenas, Jing Lu, Kevin Moran, Andrian Marcus, Massimiliano Di Penta, Denys Poshyvanyk, and Vincent Ng (College of William and Mary, USA; University of Texas at Dallas, USA; University of Sannio, Italy) A major problem with user-written bug reports, indicated by developers and documented by researchers, is the low quality of the reported steps to reproduce the bugs. Low-quality steps to reproduce lead to excessive manual effort spent on bug triage and resolution. This paper proposes Euler, an approach that automatically identifies and assesses the quality of the steps to reproduce in a bug report, providing feedback to the reporters, which they can use to improve the bug report. The feedback provided by Euler was assessed by external evaluators, and the results indicate that Euler correctly identified 98% of the existing steps to reproduce and 58% of the missing ones, while 73% of its quality annotations are correct. @InProceedings{ESEC/FSE19p86, author = {Oscar Chaparro and Carlos Bernal-Cárdenas and Jing Lu and Kevin Moran and Andrian Marcus and Massimiliano Di Penta and Denys Poshyvanyk and Vincent Ng}, title = {Assessing the Quality of the Steps to Reproduce in Bug Reports}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {86--96}, doi = {10.1145/3338906.3338947}, year = {2019}, } Publisher's Version Info |
|
Luo, Chuan |
ESEC/FSE '19: "Towards More Efficient Meta-heuristic ..."
Towards More Efficient Meta-heuristic Algorithms for Combinatorial Test Generation
Jinkun Lin, Shaowei Cai, Chuan Luo, Qingwei Lin, and Hongyu Zhang (Institute of Software at Chinese Academy of Sciences, China; Microsoft Research, China; University of Newcastle, Australia) Combinatorial interaction testing (CIT) is a popular approach to detecting faults in highly configurable software systems. The core task of CIT is to generate a small test suite called a t-way covering array (CA), where t is the covering strength. Many meta-heuristic algorithms have been proposed to solve the constrained covering array generation (CCAG) problem. A major drawback of existing algorithms is that they usually need considerable time to obtain a good-quality solution, which hinders the wider application of such algorithms. We observe that the high time consumption of existing meta-heuristic algorithms for CCAG is mainly due to the procedure of score computation. In this work, we propose a much more efficient method for score computation. The score computation method is applied to a state-of-the-art algorithm, TCA, showing significant improvements. The new score computation method opens a way to utilize algorithmic ideas relying on scores that were not affordable previously. We integrate a gradient descent search step to further improve the algorithm, leading to a new algorithm called FastCA. Experiments on a broad range of real-world and synthetic benchmarks show that FastCA significantly outperforms state-of-the-art CCAG algorithms, in terms of both the size of the obtained covering array and the run time. @InProceedings{ESEC/FSE19p212, author = {Jinkun Lin and Shaowei Cai and Chuan Luo and Qingwei Lin and Hongyu Zhang}, title = {Towards More Efficient Meta-heuristic Algorithms for Combinatorial Test Generation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {212--222}, doi = {10.1145/3338906.3338914}, year = {2019}, } Publisher's Version |
|
Lu, Xuan |
ESEC/FSE '19: "SEntiMoji: An Emoji-Powered ..."
SEntiMoji: An Emoji-Powered Learning Approach for Sentiment Analysis in Software Engineering
Zhenpeng Chen, Yanbin Cao, Xuan Lu, Qiaozhu Mei, and Xuanzhe Liu (Peking University, China; University of Michigan, USA) Sentiment analysis has various application scenarios in software engineering (SE), such as detecting developers' emotions in commit messages and identifying their opinions on Q&A forums. However, commonly used out-of-the-box sentiment analysis tools cannot obtain reliable results on SE tasks, and misunderstanding of technical jargon has been shown to be the main reason. Consequently, researchers have to use labeled SE-related texts to customize sentiment analysis for SE tasks via a variety of algorithms. However, the scarce labeled data can cover only very limited expressions and thus cannot guarantee the analysis quality. To address this problem, we turn to easily available emoji usage data for help. More specifically, we employ emotional emojis as noisy labels of sentiments and propose a representation learning approach that uses both Tweets and GitHub posts containing emojis to learn sentiment-aware representations for SE-related texts. These emoji-labeled posts not only supply the technical jargon, but also incorporate more general sentiment patterns shared across domains. They, together with the labeled data, are used to learn the final sentiment classifier. Compared to the existing sentiment analysis methods used in SE, the proposed approach achieves significant improvement on representative benchmark datasets. Through further contrast experiments, we find that the Tweets make a key contribution to the power of our approach. This finding informs future research not to pursue domain-specific resources unilaterally, but to transfer knowledge from the open domain through ubiquitous signals such as emojis.
@InProceedings{ESEC/FSE19p841, author = {Zhenpeng Chen and Yanbin Cao and Xuan Lu and Qiaozhu Mei and Xuanzhe Liu}, title = {SEntiMoji: An Emoji-Powered Learning Approach for Sentiment Analysis in Software Engineering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {841--852}, doi = {10.1145/3338906.3338977}, year = {2019}, } Publisher's Version |
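The emoji-as-noisy-label idea above can be sketched minimally as follows. The emoji lexicons and the labeling rule are hypothetical illustrations, not the paper's pipeline (which learns sentiment-aware representations rather than labeling posts directly):

```python
POSITIVE = {"😄", "😍", "👍"}   # hypothetical positive-emoji lexicon
NEGATIVE = {"😡", "😭", "👎"}   # hypothetical negative-emoji lexicon

def noisy_label(post):
    """Derive a noisy sentiment label from the emojis a post contains;
    return None when the emoji signal is absent or mixed."""
    pos = any(e in post for e in POSITIVE)
    neg = any(e in post for e in NEGATIVE)
    if pos == neg:          # no emoji signal, or conflicting signals
        return None
    return "positive" if pos else "negative"
```

Labels produced this way are abundant but noisy, which is why the paper uses them for pre-training representations and reserves the scarce manually labeled data for the final classifier.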
|
Lu, Yifei |
ESEC/FSE '19: "Preference-Wise Testing for ..."
Preference-Wise Testing for Android Applications
Yifei Lu, Minxue Pan, Juan Zhai, Tian Zhang, and Xuandong Li (Nanjing University, China) Preferences, the setting options provided by Android, are an essential part of Android apps. Preferences allow users to change app features and behaviors dynamically, and therefore need to be thoroughly tested. Unfortunately, the specific preferences used in test cases are typically not explicitly specified, forcing testers to manually set options or blindly try different option combinations. To effectively test the impacts of different preference options, this paper presents PREFEST, a preference-wise enhanced automatic testing approach for Android apps. Given a set of test cases, PREFEST can locate the preferences that may affect the test cases via combined static and dynamic analysis of the app under test, and execute these test cases only under the necessary option combinations. The evaluation shows that PREFEST improves code coverage by 6.8% and branch coverage by 12.3%, and finds five more real bugs, compared to testing with the original test cases. The test cost is reduced by 99%, in both the number of test cases and the testing time, compared to testing under pairwise combinations of options. @InProceedings{ESEC/FSE19p268, author = {Yifei Lu and Minxue Pan and Juan Zhai and Tian Zhang and Xuandong Li}, title = {Preference-Wise Testing for Android Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {268--278}, doi = {10.1145/3338906.3338980}, year = {2019}, } Publisher's Version |
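The saving PREFEST exploits — varying only the preferences that actually affect a test case — can be sketched as follows. The preference names are hypothetical and the "analysis" result is given as input; this is not the tool's actual static/dynamic analysis:

```python
from itertools import product

def exhaustive(options):
    """Every combination of all preference options (the naive strategy)."""
    keys = list(options)
    return [dict(zip(keys, vals)) for vals in product(*options.values())]

def relevant_only(options, relevant):
    """Vary only the preferences found to affect the test case;
    pin every other preference to its first (default) value."""
    keys = list(options)
    choices = (options[k] if k in relevant else options[k][:1] for k in keys)
    return [dict(zip(keys, vals)) for vals in product(*choices)]

# Hypothetical preferences of an app under test.
prefs = {"dark_mode": [False, True],
         "wifi_only": [False, True],
         "font_size": ["small", "medium", "large"]}
# Suppose the analysis shows only dark_mode influences this test case.
combos = relevant_only(prefs, {"dark_mode"})
```

Here a test that would naively run under 12 option combinations runs under only 2, which mirrors the 99% test-cost reduction the paper reports against pairwise combination.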
|
Maalej, Walid |
ESEC/FSE '19: "On Using Machine Learning ..."
On Using Machine Learning to Identify Knowledge in API Reference Documentation
Davide Fucci, Alireza Mollaalizadehbahnemiri, and Walid Maalej (University of Hamburg, Germany) Using API reference documentation like JavaDoc is an integral part of software development. Previous research introduced a grounded taxonomy that organizes API documentation knowledge in 12 types, including knowledge about the Functionality, Structure, and Quality of an API. We study how well modern text classification approaches can automatically identify documentation containing specific knowledge types. We compared conventional machine learning (k-NN and SVM) with deep learning approaches trained on manually-annotated Java and .NET API documentation (n = 5,574). When classifying the knowledge types individually (i.e., multiple binary classifiers), the best AUPRC was up to 87%. @InProceedings{ESEC/FSE19p109, author = {Davide Fucci and Alireza Mollaalizadehbahnemiri and Walid Maalej}, title = {On Using Machine Learning to Identify Knowledge in API Reference Documentation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {109--119}, doi = {10.1145/3338906.3338943}, year = {2019}, } Publisher's Version |
|
Maddila, Chandra |
ESEC/FSE '19: "WhoDo: Automating Reviewer ..."
WhoDo: Automating Reviewer Suggestions at Scale
Sumit Asthana, Rahul Kumar, Ranjita Bhagwan, Christian Bird, Chetan Bansal, Chandra Maddila, Sonu Mehta, and B. Ashok (Microsoft Research, India; Microsoft Research, USA) Today's software development is distributed and involves continuous changes for new features, and yet the development cycle has to be fast and agile. An important component of enabling this agility is selecting the right reviewers for every code change - the smallest unit of the development cycle. Modern tool-based code review has proven to be an effective way to achieve appropriate code review of software changes. However, the selection of reviewers in these code review systems is at best manual. As software and teams scale, this poses the challenge of selecting the right reviewers, which in turn determines software quality over time. While previous work has suggested automatic approaches to code reviewer recommendation, it has been limited to retrospective analysis. We not only deploy a reviewer suggestion algorithm - WhoDo - and evaluate its effect, but also incorporate load balancing into it to address one of its major shortcomings: recommending experienced developers too frequently. We evaluate the effect of this hybrid recommendation + load balancing system on five repositories within Microsoft. Our results examine various aspects of a commit and how code review affects them. We attempt to quantitatively answer questions that play a vital role in effective code review through our data, and substantiate the findings through qualitative feedback from partner repositories. @InProceedings{ESEC/FSE19p937, author = {Sumit Asthana and Rahul Kumar and Ranjita Bhagwan and Christian Bird and Chetan Bansal and Chandra Maddila and Sonu Mehta and B.
Ashok}, title = {WhoDo: Automating Reviewer Suggestions at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {937--945}, doi = {10.1145/3338906.3340449}, year = {2019}, } Publisher's Version ESEC/FSE '19: "Predicting Pull Request Completion ..." Predicting Pull Request Completion Time: A Case Study on Large Scale Cloud Services Chandra Maddila, Chetan Bansal, and Nachiappan Nagappan (Microsoft Research, USA) Effort estimation models have long been studied in software engineering research. They help organizations and individuals plan and track the progress of their software projects and individual tasks, so as to plan delivery milestones better. Towards this end, there is a large body of work on effort estimation for projects, but little at the level of an individual check-in (pull request). In this paper we present a methodology that provides effort estimates for individual developer check-ins, which are displayed to developers to help them track their work items. The cloud development infrastructure pervasive in companies has enabled us to deploy our pull request lifetime prediction system to several thousand developers across multiple software families. We observe from our deployment that the pull request lifetime prediction system conservatively helps save 44.61% of developer time by accelerating pull requests to completion. @InProceedings{ESEC/FSE19p874, author = {Chandra Maddila and Chetan Bansal and Nachiappan Nagappan}, title = {Predicting Pull Request Completion Time: A Case Study on Large Scale Cloud Services}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {874--882}, doi = {10.1145/3338906.3340457}, year = {2019}, } Publisher's Version |
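WhoDo's recommendation + load-balancing idea can be sketched as a score that discounts a reviewer's expertise by their current review load. The scoring rule, penalty weight, and reviewer pool below are invented for illustration, not WhoDo's actual model:

```python
def recommend(candidates, expertise, open_reviews, top_k=2, load_penalty=0.2):
    """Rank reviewers by expertise minus a penalty proportional to the number
    of reviews already assigned to them, then return the top_k."""
    ranked = sorted(candidates,
                    key=lambda r: expertise[r] - load_penalty * open_reviews[r],
                    reverse=True)
    return ranked[:top_k]

# Hypothetical reviewer pool: alice is the most expert but heavily loaded.
expertise = {"alice": 0.9, "bob": 0.7, "carol": 0.6}
open_reviews = {"alice": 5, "bob": 1, "carol": 0}
picks = recommend(["alice", "bob", "carol"], expertise, open_reviews)
```

Without the load term, pure expertise ranking would pick alice every time — exactly the shortcoming the paper's load balancing addresses.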
|
Madeiral, Fernanda |
ESEC/FSE '19: "Empirical Review of Java Program ..."
Empirical Review of Java Program Repair Tools: A Large-Scale Experiment on 2,141 Bugs and 23,551 Repair Attempts
Thomas Durieux, Fernanda Madeiral, Matias Martinez, and Rui Abreu (University of Lisbon, Portugal; INESC-ID, Portugal; Federal University of Uberlândia, Brazil; Polytechnic University of Hauts-de-France, France) In the past decade, research on test-suite-based automatic program repair has grown significantly. Each year, new approaches and implementations are featured in major software engineering venues. However, most of those approaches are evaluated on a single benchmark of bugs, and the evaluations are rarely reproduced by other researchers. In this paper, we present a large-scale experiment using 11 Java test-suite-based repair tools and 2,141 bugs from 5 benchmarks. Our goal is to have a better understanding of the current state of automatic program repair tools on a large diversity of benchmarks. Our investigation is guided by the hypothesis that the repairability of repair tools might not generalize across different benchmarks. We found that the 11 tools 1) are able to generate patches for 21% of the bugs from the 5 benchmarks, and 2) have better performance on Defects4J compared to other benchmarks, by generating patches for 47% of the bugs from Defects4J compared to 10-30% of bugs from the other benchmarks. Our experiment comprises 23,551 repair attempts, which we used to find causes of non-patch generation. These causes are reported in this paper and can help repair tool designers improve their approaches and tools. @InProceedings{ESEC/FSE19p302, author = {Thomas Durieux and Fernanda Madeiral and Matias Martinez and Rui Abreu}, title = {Empirical Review of Java Program Repair Tools: A Large-Scale Experiment on 2,141 Bugs and 23,551 Repair Attempts}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {302--313}, doi = {10.1145/3338906.3338911}, year = {2019}, } Publisher's Version Info Artifacts Reusable |
|
Ma, Fuchen |
ESEC/FSE '19: "EVMFuzzer: Detect EVM Vulnerabilities ..."
EVMFuzzer: Detect EVM Vulnerabilities via Fuzz Testing
Ying Fu, Meng Ren, Fuchen Ma, Heyuan Shi, Xin Yang, Yu Jiang, Huizhong Li, and Xiang Shi (Tsinghua University, China; WeBank, China) Ethereum Virtual Machine (EVM) is the run-time environment for smart contracts, and its vulnerabilities may lead to serious problems for the Ethereum ecosystem. While many techniques are continuously being developed for the validation of smart contracts, testing the EVM itself remains challenging because of the special test input format and the absence of oracles. In this paper, we propose EVMFuzzer, the first tool that uses differential fuzzing to detect vulnerabilities in EVMs. The core idea is to continuously generate seed contracts and feed them to the target EVM and the benchmark EVMs, so as to find as many inconsistencies among execution results as possible, and eventually discover vulnerabilities through output cross-referencing. Given a target EVM and its APIs, EVMFuzzer generates seed contracts via a set of predefined mutators, and then employs a dynamic priority scheduling algorithm to guide seed contract selection and maximize the inconsistency. Finally, EVMFuzzer leverages benchmark EVMs as cross-referencing oracles to avoid manual checking. With EVMFuzzer, we have found several previously unknown security bugs in four widely used EVMs, 5 of which have been assigned Common Vulnerabilities and Exposures (CVE) IDs in the U.S. National Vulnerability Database. The video is presented at https://youtu.be/9Lejgf2GSOk. @InProceedings{ESEC/FSE19p1110, author = {Ying Fu and Meng Ren and Fuchen Ma and Heyuan Shi and Xin Yang and Yu Jiang and Huizhong Li and Xiang Shi}, title = {EVMFuzzer: Detect EVM Vulnerabilities via Fuzz Testing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1110--1114}, doi = {10.1145/3338906.3341175}, year = {2019}, } Publisher's Version |
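The differential-fuzzing loop at the heart of EVMFuzzer — mutate a seed, run it on the target and on reference implementations, flag disagreements — can be sketched with toy integer interpreters standing in for EVMs. All names, the mutator, and the injected bug are invented for illustration:

```python
import random

def differential_fuzz(seeds, mutate, target, references, rounds=200, rng=None):
    """Toy differential-fuzzing loop: mutate a seed input, execute it on the
    target and on reference implementations, and record every input on which
    the target disagrees with a reference (a potential bug)."""
    rng = rng or random.Random(0)
    findings = []
    for _ in range(rounds):
        inp = mutate(rng.choice(seeds), rng)
        out = target(inp)
        if any(ref(inp) != out for ref in references):
            findings.append(inp)
    return findings

# Toy "EVMs": integer interpreters; the target wrongly wraps at 128.
reference = lambda x: x + 1
buggy_target = lambda x: (x + 1) % 128
mutate = lambda s, rng: s + rng.randint(-10, 10)

bugs = differential_fuzz([120], mutate, buggy_target, [reference])
```

The reference implementations serve as the oracle, which is how EVMFuzzer sidesteps the absence of an explicit specification of correct output.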
|
Ma, Lei |
ESEC/FSE '19: "DeepStellar: Model-Based Quantitative ..."
DeepStellar: Model-Based Quantitative Analysis of Stateful Deep Learning Systems
Xiaoning Du, Xiaofei Xie, Yi Li, Lei Ma, Yang Liu, and Jianjun Zhao (Nanyang Technological University, Singapore; Kyushu University, Japan; Zhejiang Sci-Tech University, China) Deep Learning (DL) has achieved tremendous success in many cutting-edge applications. However, state-of-the-art DL systems still suffer from quality issues. While some recent progress has been made on the analysis of feed-forward DL systems, little study has been done on Recurrent Neural Network (RNN)-based stateful DL systems, which are widely used in audio, natural language, and video processing. In this paper, we take the very first step towards the quantitative analysis of RNN-based DL systems. We model an RNN as an abstract state transition system to characterize its internal behaviors. Based on the abstract model, we design two trace similarity metrics and five coverage criteria that enable the quantitative analysis of RNNs. We further propose two algorithms powered by the quantitative measures for adversarial sample detection and coverage-guided test generation. We evaluate DeepStellar on four RNN-based systems covering image classification and automated speech recognition. The results demonstrate that the abstract model is useful in capturing the internal behaviors of RNNs, and confirm that (1) the similarity metrics can effectively capture the differences between samples even with very small perturbations (achieving 97% accuracy for detecting adversarial samples) and (2) the coverage criteria are useful in revealing erroneous behaviors (generating three times more adversarial samples than random testing and hundreds of times more than the unrolling approach).
@InProceedings{ESEC/FSE19p477, author = {Xiaoning Du and Xiaofei Xie and Yi Li and Lei Ma and Yang Liu and Jianjun Zhao}, title = {DeepStellar: Model-Based Quantitative Analysis of Stateful Deep Learning Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {477--487}, doi = {10.1145/3338906.3338954}, year = {2019}, } Publisher's Version |
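The state-abstraction idea — discretizing concrete RNN hidden states into grid cells and measuring how many cells a test set visits — can be sketched as follows. The gridding scheme and toy traces are illustrative assumptions, not the paper's exact abstraction:

```python
def abstract_state(hidden, lo, hi, k):
    """Map a concrete hidden-state vector to a grid cell: each dimension is
    clamped to the profiled range [lo, hi] and bucketed into k intervals."""
    cell = []
    for h in hidden:
        ratio = (min(max(h, lo), hi) - lo) / (hi - lo)
        cell.append(min(int(ratio * k), k - 1))
    return tuple(cell)

def state_coverage(traces, lo, hi, k, dims):
    """Fraction of all k**dims abstract states visited by the hidden-state traces."""
    visited = {abstract_state(h, lo, hi, k) for trace in traces for h in trace}
    return len(visited) / (k ** dims)

# Two toy traces of 2-dimensional hidden states, profiled range [0, 1], k = 2.
traces = [[(0.1, 0.9), (0.4, 0.5)],
          [(0.9, 0.1), (0.4, 0.6)]]
cov = state_coverage(traces, 0.0, 1.0, 2, 2)
```

A test input that lands in a previously unvisited cell increases coverage, which is the signal a coverage-guided generator like the one in the paper can steer towards.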
|
Manns, Glenna |
ESEC/FSE '19: "A Statistics-Based Performance ..."
A Statistics-Based Performance Testing Methodology for Cloud Applications
Sen He, Glenna Manns, John Saunders, Wei Wang, Lori Pollock, and Mary Lou Soffa (University of Texas at San Antonio, USA; University of Virginia, USA; University of Delaware, USA) The low cost of resource ownership and the flexibility of cloud services have led users to increasingly port their applications to the clouds. To fully realize the cost benefits of cloud services, users usually need to know the execution performance of their applications reliably. However, due to the random performance fluctuations experienced by cloud applications, the black-box nature of public clouds, and the cloud usage costs, testing on clouds to acquire accurate performance results is extremely difficult. In this paper, we present a novel cloud performance testing methodology called PT4Cloud. By employing the non-parametric statistical approaches of likelihood theory and the bootstrap method, PT4Cloud provides reliable stop conditions to obtain highly accurate performance distributions with confidence bands. These statistical approaches also allow users to specify intuitive accuracy goals and easily trade accuracy against testing cost. We evaluated PT4Cloud with 33 benchmark configurations on Amazon Web Services and Chameleon clouds. When compared with performance data obtained from extensive performance tests, PT4Cloud provides testing results with 95.4% accuracy on average while reducing the number of test runs by 62%. We also propose two test execution reduction techniques for PT4Cloud, which can reduce the number of test runs by 90.1% while retaining an average accuracy of 91%. We compared our technique to three other techniques and found that our results are much more accurate.
@InProceedings{ESEC/FSE19p188, author = {Sen He and Glenna Manns and John Saunders and Wei Wang and Lori Pollock and Mary Lou Soffa}, title = {A Statistics-Based Performance Testing Methodology for Cloud Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {188--199}, doi = {10.1145/3338906.3338912}, year = {2019}, } Publisher's Version Artifacts Reusable |
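The statistical stop condition can be sketched with a percentile bootstrap on the mean — a deliberately simplified stand-in for PT4Cloud's likelihood-theory machinery, with made-up thresholds and a simulated workload:

```python
import random
import statistics

def bootstrap_ci(samples, reps=1000, alpha=0.05, rng=None):
    """Percentile-bootstrap confidence interval for the mean of the samples."""
    rng = rng or random.Random(0)
    n = len(samples)
    boots = sorted(statistics.mean(rng.choices(samples, k=n)) for _ in range(reps))
    return boots[int(alpha / 2 * reps)], boots[int((1 - alpha / 2) * reps) - 1]

def run_until_stable(next_run, rel_width=0.05, min_runs=10, max_runs=200):
    """Collect performance runs until the bootstrap CI of the mean is narrower
    than rel_width of the mean (the stop condition), or max_runs is reached."""
    samples = []
    while len(samples) < max_runs:
        samples.append(next_run())
        if len(samples) >= min_runs:
            lo, hi = bootstrap_ci(samples)
            if hi - lo <= rel_width * statistics.mean(samples):
                return samples
    return samples

# Simulated cloud runs: latency roughly Normal(100, 2) ms.
sim = random.Random(1)
samples = run_until_stable(lambda: sim.gauss(100, 2))
```

Tightening `rel_width` demands more runs before stopping, which is the intuitive accuracy-versus-cost trade-off the methodology exposes to users.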
|
Marcus, Andrian |
ESEC/FSE '19: "Assessing the Quality of the ..."
Assessing the Quality of the Steps to Reproduce in Bug Reports
Oscar Chaparro, Carlos Bernal-Cárdenas, Jing Lu, Kevin Moran, Andrian Marcus, Massimiliano Di Penta, Denys Poshyvanyk, and Vincent Ng (College of William and Mary, USA; University of Texas at Dallas, USA; University of Sannio, Italy) A major problem with user-written bug reports, indicated by developers and documented by researchers, is the (lack of high) quality of the reported steps to reproduce the bugs. Low-quality steps to reproduce lead to excessive manual effort spent on bug triage and resolution. This paper proposes Euler, an approach that automatically identifies and assesses the quality of the steps to reproduce in a bug report, providing feedback to the reporters, which they can use to improve the bug report. The feedback provided by Euler was assessed by external evaluators and the results indicate that Euler correctly identified 98% of the existing steps to reproduce and 58% of the missing ones, while 73% of its quality annotations are correct. @InProceedings{ESEC/FSE19p86, author = {Oscar Chaparro and Carlos Bernal-Cárdenas and Jing Lu and Kevin Moran and Andrian Marcus and Massimiliano Di Penta and Denys Poshyvanyk and Vincent Ng}, title = {Assessing the Quality of the Steps to Reproduce in Bug Reports}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {86--96}, doi = {10.1145/3338906.3338947}, year = {2019}, } Publisher's Version Info ESEC/FSE '19: "Generating Query-Specific ..." Generating Query-Specific Class API Summaries Mingwei Liu, |