ESEC/FSE 2021 – Author Index
Abuduweili, Abulikemu
ESEC/FSE '21: "Effective Low Capacity Status Prediction for Cloud Systems"
Hang Dong, Si Qin, Yong Xu, Bo Qiao, Shandan Zhou, Xian Yang, Chuan Luo, Pu Zhao, Qingwei Lin, Hongyu Zhang, Abulikemu Abuduweili, Sanjay Ramanujan, Karthikeyan Subramanian, Andrew Zhou, Saravanakumar Rajmohan, Dongmei Zhang, and Thomas Moscibroda (Microsoft Research, China; Microsoft Azure, USA; Hong Kong Baptist University, China; University of Newcastle, Australia; Microsoft 365, China; Microsoft 365, USA)

In cloud systems, accurate capacity planning is very important for cloud providers to improve service availability. Traditional methods that simply predict "when the available resources are exhausted" are not effective due to customer demand fragmentation and platform allocation constraints. In this paper, we propose a novel prediction approach that proactively predicts the level of resource allocation failures from the perspective of low capacity status (LCS). By jointly considering data from different sources in both time-series and static form, the proposed approach can make accurate LCS predictions in a complex and dynamic cloud environment, and thereby improve the service availability of cloud systems. The proposed approach is evaluated on real-world datasets collected from a large-scale public cloud platform, and the results confirm its effectiveness.

@InProceedings{ESEC/FSE21p1236,
  author    = {Hang Dong and Si Qin and Yong Xu and Bo Qiao and Shandan Zhou and Xian Yang and Chuan Luo and Pu Zhao and Qingwei Lin and Hongyu Zhang and Abulikemu Abuduweili and Sanjay Ramanujan and Karthikeyan Subramanian and Andrew Zhou and Saravanakumar Rajmohan and Dongmei Zhang and Thomas Moscibroda},
  title     = {Effective Low Capacity Status Prediction for Cloud Systems},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1236--1241},
  doi       = {10.1145/3468264.3473917},
  year      = {2021},
}
Agarwal, Abhimanyu
ESEC/FSE '21: "A Replication of ‘DeepBugs: A Learning Approach to Name-based Bug Detection’"
Jordan Winkler, Abhimanyu Agarwal, Caleb Tung, Dario Rios Ugalde, Young Jin Jung, and James C. Davis (Purdue University, USA; Lockheed Martin, USA)

We replicated the main result of DeepBugs, a bug detection algorithm for name-based bugs. The original authors evaluated it in three contexts: swapped-argument bugs, wrong binary operators, and wrong binary operator operands. We followed the algorithm and replicated the results for swapped-argument bugs. Our replication used independent implementations of the major components: training set generation, token vectorization, and the neural network data pipeline, model, and loss function. Using the same dataset and the same testing process, we report comparable performance: within 2% of the accuracy reported by Pradel and Sen.

@InProceedings{ESEC/FSE21p1604,
  author    = {Jordan Winkler and Abhimanyu Agarwal and Caleb Tung and Dario Rios Ugalde and Young Jin Jung and James C. Davis},
  title     = {A Replication of ‘DeepBugs: A Learning Approach to Name-based Bug Detection’},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1604--1604},
  doi       = {10.1145/3468264.3477221},
  year      = {2021},
}
Agrawal, Ankit
ESEC/FSE '21: "Hazard Analysis for Human-on-the-Loop Interactions in sUAS Systems"
Michael Vierhauser, Md Nafee Al Islam, Ankit Agrawal, Jane Cleland-Huang, and James Mason (JKU Linz, Austria; University of Notre Dame, USA; Northrop Grumman, USA)

With the rise of new AI technologies, autonomous systems are moving towards a paradigm in which increasing levels of responsibility are shifted from the human to the system, creating a transition from human-in-the-loop systems to human-on-the-loop (HoTL) systems. This has a significant impact on the safety analysis of such systems, as new types of errors occurring at the boundaries of human-machine interactions need to be taken into consideration. Traditional safety analysis typically focuses on system-level hazards, with little focus on user-related or user-induced hazards that can cause critical system failures. To address this issue, we construct domain-level safety analysis assets for sUAS (small unmanned aerial systems) applications and describe the process we followed to explicitly and systematically identify Human Interaction Points (HiPs), hazard factors, and mitigations from system hazards. We evaluate our approach first by investigating the extent to which recent sUAS incidents are covered by our hazard trees, and second by performing a study with six domain experts who used our hazard trees to identify and document hazards for sUAS usage scenarios. Our study showed that our hazard trees provided effective coverage for a wide variety of sUAS application scenarios and were useful for stimulating safety thinking and helping users identify and potentially mitigate human-interaction hazards.

@InProceedings{ESEC/FSE21p8,
  author    = {Michael Vierhauser and Md Nafee Al Islam and Ankit Agrawal and Jane Cleland-Huang and James Mason},
  title     = {Hazard Analysis for Human-on-the-Loop Interactions in sUAS Systems},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {8--19},
  doi       = {10.1145/3468264.3468534},
  year      = {2021},
}
Ahad, Ali
ESEC/FSE '21: "Swarmbug: Debugging Configuration Bugs in Swarm Robotics"
Chijung Jung, Ali Ahad, Jinho Jung, Sebastian Elbaum, and Yonghwi Kwon (University of Virginia, USA; Georgia Institute of Technology, USA)

Swarm robotics collectively solve problems that are challenging for individual robots, from environmental monitoring to entertainment. The algorithms enabling swarms allow the individual robots of a swarm to plan, share, and coordinate their trajectories and tasks to achieve a common goal. Such algorithms rely on a large number of configurable parameters that can be tailored to target particular scenarios. This large configuration space, the complexity of the algorithms, and the dependencies on the robots’ setup and performance make debugging and fixing swarm configuration bugs extremely challenging. This paper proposes Swarmbug, a swarm debugging system that automatically diagnoses and fixes buggy behaviors caused by misconfiguration. The essence of Swarmbug is a novel concept called the degree of causal contribution (Dcc), which abstracts the impact of environment configurations (e.g., obstacles) on the drones in a swarm via behavior causal analysis. Swarmbug automatically generates, validates, and ranks fixes for configuration bugs. We evaluate Swarmbug on four diverse swarm algorithms. Swarmbug successfully fixes four configuration bugs in the evaluated algorithms, showing that it is generic and effective. We also conduct a real-world experiment with physical drones to show that Swarmbug's fix is effective in the real world.

@InProceedings{ESEC/FSE21p868,
  author    = {Chijung Jung and Ali Ahad and Jinho Jung and Sebastian Elbaum and Yonghwi Kwon},
  title     = {Swarmbug: Debugging Configuration Bugs in Swarm Robotics},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {868--880},
  doi       = {10.1145/3468264.3468601},
  year      = {2021},
}
Ahmed, Khaled
ESEC/FSE '21: "Slicer4J: A Dynamic Slicer for Java"
Khaled Ahmed, Mieszko Lis, and Julia Rubin (University of British Columbia, Canada)

Dynamic program slicing is used in a variety of tasks, including program debugging and security analysis. Despite being extensively studied in the literature, the only dynamic slicing solution for Java programs that is publicly available today is a tool named JavaSlicer. Unfortunately, JavaSlicer only supports programs written in Java 6 or below and does not support multithreading. To address these limitations, this paper contributes a new dynamic slicing tool for Java, named Slicer4J. Slicer4J uses low-overhead instrumentation to collect a runtime execution trace; it then constructs a thread-aware, inter-procedural dynamic control-flow graph and uses the graph to compute the slice. To support slicing through Java framework methods and native code, Slicer4J relies on a set of pre-constructed data-flow summaries of the main framework methods. It also allows the users to further customize this set, adding user-defined methods when needed. We demonstrate the applicability of Slicer4J on ten benchmark and open-source Java programs, comparing it with JavaSlicer, and discuss how to use and extend the tool.

@InProceedings{ESEC/FSE21p1570,
  author    = {Khaled Ahmed and Mieszko Lis and Julia Rubin},
  title     = {Slicer4J: A Dynamic Slicer for Java},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1570--1574},
  doi       = {10.1145/3468264.3473123},
  year      = {2021},
}
Ahmed, Toufique
ESEC/FSE '21: "Learning Type Annotation: Is Big Data Enough?"
Kevin Jesse, Premkumar T. Devanbu, and Toufique Ahmed (University of California at Davis, USA)

TypeScript is a widely used optionally-typed language where developers can adopt “pay as you go” typing: they can add types as desired, and benefit from static typing. The “type annotation tax”, or manual effort required to annotate new or existing TypeScript, can be reduced by a variety of automatic methods. Probabilistic machine-learning (ML) approaches work quite well. ML approaches use different inductive biases, ranging from simple token sequences to complex graphical neural network (GNN) models capturing syntax and semantic relations. More sophisticated inductive biases are hand-engineered to exploit the formal nature of software. Rather than deploying fancy inductive biases for code, can we just use “big data” to learn natural patterns relevant to typing? We find evidence suggesting that this is the case. We present TypeBert, demonstrating that with the simple token-sequence inductive bias used in BERT-style models and enough data, the type-annotation performance of even the most sophisticated models can be surpassed.

@InProceedings{ESEC/FSE21p1483,
  author    = {Kevin Jesse and Premkumar T. Devanbu and Toufique Ahmed},
  title     = {Learning Type Annotation: Is Big Data Enough?},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1483--1486},
  doi       = {10.1145/3468264.3473135},
  year      = {2021},
}
Akiki, Paul A.
ESEC/FSE '21: "Towards an Approach for Resource-Driven Adaptation"
Paul A. Akiki (Open University, UK)

Resource-driven systems have tasks that are bound by limited resources. These systems must adapt tasks that cannot gain access to sufficient resources. This dissertation proposes a new resource-driven adaptation approach, which aims to support (1) task prioritisation using multiple criteria, such as the time of day that a task should be executed, the role of involved users, and selection of the least costly adaptation types; (2) collaboration between a human user and a software tool for preparing adapted task behaviour to be used when resources are substituted; and (3) resource extensibility and heterogeneity. The proposed approach is being implemented and will be evaluated with scenarios from enterprise applications.

@InProceedings{ESEC/FSE21p1625,
  author    = {Paul A. Akiki},
  title     = {Towards an Approach for Resource-Driven Adaptation},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1625--1629},
  doi       = {10.1145/3468264.3473098},
  year      = {2021},
}
Alegria, António
ESEC/FSE '21: "Duplicated Code Pattern Mining in Visual Programming Languages"
Miguel Terra-Neves, João Nadkarni, Miguel Ventura, Pedro Resende, Hugo Veiga, and António Alegria (OutSystems, Portugal)

Visual Programming Languages (VPLs), coupled with the high-level abstractions that are commonplace in visual programming environments, enable users with less technical knowledge to become proficient programmers. However, the lower skill floor required by VPLs also entails that programmers are more likely to not adhere to best practices of software development, producing systems with high technical debt, and thus poor maintainability. Duplicated code is one important example of such technical debt. In fact, we observed that the amount of duplication in the OutSystems VPL code bases can reach as high as 39%. Duplicated code detection in text-based programming languages is still an active area of research with important implications regarding software maintainability and evolution. However, to the best of our knowledge, the literature on duplicated code detection for VPLs is very limited. We propose a novel and scalable duplicated code pattern mining algorithm that leverages the visual structure of VPLs in order to not only detect duplicated code, but also highlight duplicated code patterns that explain the reported duplication. The performance of the proposed approach is evaluated on a wide range of real-world mobile and web applications developed using OutSystems.

@InProceedings{ESEC/FSE21p1348,
  author    = {Miguel Terra-Neves and João Nadkarni and Miguel Ventura and Pedro Resende and Hugo Veiga and António Alegria},
  title     = {Duplicated Code Pattern Mining in Visual Programming Languages},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1348--1359},
  doi       = {10.1145/3468264.3473928},
  year      = {2021},
}
Alipour, Mohammad Amin
ESEC/FSE '21: "Understanding Neural Code Intelligence through Program Simplification"
Md Rafiqul Islam Rabin, Vincent J. Hellendoorn, and Mohammad Amin Alipour (University of Houston, USA; Carnegie Mellon University, USA)

A wide range of code intelligence (CI) tools, powered by deep neural networks, have been developed recently to improve programming productivity and perform program analysis. To reliably use such tools, developers often need to reason about the behavior of the underlying models and the factors that affect them. This is especially challenging for tools backed by deep neural networks. Various methods have tried to reduce this opacity in the vein of "transparent/interpretable-AI". However, these approaches are often specific to a particular set of network architectures, even requiring access to the network's parameters. This makes them difficult to use for the average programmer, which hinders the reliable adoption of neural CI systems. In this paper, we propose a simple, model-agnostic approach to identify critical input features for models in CI systems, by drawing on software debugging research, specifically delta debugging. Our approach, SIVAND, uses simplification techniques that reduce the size of input programs of a CI model while preserving the predictions of the model. We show that this approach yields remarkably small outputs and is broadly applicable across many model architectures and problem domains. We find that the models in our experiments often rely heavily on just a few syntactic features in input programs. We believe that SIVAND's extracted features may help understand neural CI systems' predictions and learned behavior.

@InProceedings{ESEC/FSE21p441,
  author    = {Md Rafiqul Islam Rabin and Vincent J. Hellendoorn and Mohammad Amin Alipour},
  title     = {Understanding Neural Code Intelligence through Program Simplification},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {441--452},
  doi       = {10.1145/3468264.3468539},
  year      = {2021},
}
Almarimi, Nuri
ESEC/FSE '21: "csDetector: An Open Source Tool for Community Smells Detection"
Nuri Almarimi, Ali Ouni, Moataz Chouchen, and Mohamed Wiem Mkaouer (ETS, Canada; University of Quebec, Canada; Rochester Institute of Technology, USA)

Community smells represent symptoms of sub-optimal organizational and social issues within software development communities that often lead to additional project costs and reduced software quality. Previous research identified a variety of community smells that are connected to sub-optimal patterns under different perspectives of organizational-social structures in the software development community. To detect community smells and understand the characteristics of such organizational-social structures in a project, we propose csDetector, an open source tool that is able to automatically detect community smells within a project and provide relevant socio-technical metrics. csDetector uses a machine learning based detection approach that learns from various existing bad community development practices to provide automated support in detecting related community smells. We evaluate the effectiveness of csDetector on a benchmark of 143 open source projects from GitHub. Our results show that the csDetector tool can detect ten commonly occurring community smells in open software projects with an average F1 score of 84%. csDetector is publicly available, with a demo video, at: https://github.com/Nuri22/csDetector.

@InProceedings{ESEC/FSE21p1560,
  author    = {Nuri Almarimi and Ali Ouni and Moataz Chouchen and Mohamed Wiem Mkaouer},
  title     = {csDetector: An Open Source Tool for Community Smells Detection},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1560--1564},
  doi       = {10.1145/3468264.3473121},
  year      = {2021},
}
AlOmar, Eman Abdullah
ESEC/FSE '21: "One Thousand and One Stories: A Large-Scale Survey of Software Refactoring"
Yaroslav Golubev, Zarina Kurbatova, Eman Abdullah AlOmar, Timofey Bryksin, and Mohamed Wiem Mkaouer (JetBrains Research, Russia; Rochester Institute of Technology, USA; HSE University, Russia)

Despite the availability of refactoring as a feature in popular IDEs, recent studies revealed that developers are reluctant to use them, and still prefer the manual refactoring of their code. At JetBrains, our goal is to fully support refactoring features in IntelliJ-based IDEs and improve their adoption in practice. Therefore, we start by raising the following main questions. How exactly do people refactor code? What refactorings are the most popular? Why do some developers tend not to use convenient IDE refactoring tools? In this paper, we investigate the raised questions through the design and implementation of a survey targeting 1,183 users of IntelliJ-based IDEs. Our quantitative and qualitative analysis of the survey results shows that almost two-thirds of developers spend more than one hour in a single session refactoring their code; that refactoring types vary greatly in popularity; and that a lot of developers would like to know more about IDE refactoring features but lack the means to do so. These results serve us internally to support the next generation of refactoring features, and can also help our research community establish new directions in refactoring usability research.

@InProceedings{ESEC/FSE21p1303,
  author    = {Yaroslav Golubev and Zarina Kurbatova and Eman Abdullah AlOmar and Timofey Bryksin and Mohamed Wiem Mkaouer},
  title     = {One Thousand and One Stories: A Large-Scale Survey of Software Refactoring},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1303--1313},
  doi       = {10.1145/3468264.3473924},
  year      = {2021},
}
Alonso, Juan C.
ESEC/FSE '21: "Automated Generation of Realistic Test Inputs for Web APIs"
Juan C. Alonso (University of Seville, Spain)

Testing web APIs automatically requires generating input data values such as addresses, coordinates, or country codes. Generating meaningful values for these types of parameters randomly is rarely feasible, which poses a major obstacle to current test case generation approaches. In this paper, we present ARTE, the first semantic-based approach for the Automated generation of Realistic TEst inputs for web APIs. Specifically, ARTE leverages the specification of the API under test to search for meaningful test inputs for the API parameters in knowledge bases like DBpedia. Our approach has been integrated into RESTest, a state-of-the-art tool for API testing, achieving an unprecedented level of automation which allows it to generate up to 100% more valid API calls than existing fuzzing techniques (30% more on average). Evaluation results on a set of 26 real-world APIs show that ARTE can generate realistic inputs for 7 out of every 10 parameters, outperforming related approaches.

@InProceedings{ESEC/FSE21p1666,
  author    = {Juan C. Alonso},
  title     = {Automated Generation of Realistic Test Inputs for Web APIs},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1666--1668},
  doi       = {10.1145/3468264.3473491},
  year      = {2021},
}
Alotaibi, Ali S.
ESEC/FSE '21: "Detecting and Localizing Keyboard Accessibility Failures in Web Applications"
Paul T. Chiou, Ali S. Alotaibi, and William G. J. Halfond (University of Southern California, USA)

The keyboard is the most universally supported input method operable by people with disabilities. Yet, many popular websites lack keyboard-accessible mechanisms, which can cause failures that make the website unusable. In this paper, we present a novel approach for automatically detecting and localizing keyboard accessibility failures in web applications. An extensive evaluation of our technique on real-world web pages showed that it was able to detect keyboard failures in web applications with high precision and recall, and was able to accurately identify the underlying elements in the web pages that led to the observed problems.

@InProceedings{ESEC/FSE21p855,
  author    = {Paul T. Chiou and Ali S. Alotaibi and William G. J. Halfond},
  title     = {Detecting and Localizing Keyboard Accessibility Failures in Web Applications},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {855--867},
  doi       = {10.1145/3468264.3468581},
  year      = {2021},
}
An, Gabin
ESEC/FSE '21: "Reducing the Search Space of Bug Inducing Commits using Failure Coverage"
Gabin An and Shin Yoo (KAIST, South Korea)

Knowing how exactly a bug has been introduced into the code can help developers debug the bug efficiently. However, techniques currently used to retrieve Bug Inducing Commits (BICs) from the repository timeline are limited in their accuracy. Automated bisection of the version history depends on the bug revealing test case being executable against all candidate previous versions, whereas blaming the last commits that touched the same parts as the fixing commit (à la SZZ) requires that the bug has already been fixed. We show that filtering commits using the coverage of the bug revealing test cases can effectively reduce the search space for both bisection and SZZ-like blame models by 87.6% and 27.9%, respectively, significantly reducing the cost of BIC retrieval. The application of our approach to bugs in Defects4J also reveals inconsistencies in some of their BICs known in the literature.

@InProceedings{ESEC/FSE21p1459,
  author    = {Gabin An and Shin Yoo},
  title     = {Reducing the Search Space of Bug Inducing Commits using Failure Coverage},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1459--1462},
  doi       = {10.1145/3468264.3473129},
  year      = {2021},
}
Aniche, Maurício
ESEC/FSE '21: "Data-Driven Extract Method Recommendations: A Study at ING"
David van der Leij, Jasper Binda, Robbert van Dalen, Pieter Vallen, Yaping Luo, and Maurício Aniche (Delft University of Technology, Netherlands; ING, Netherlands; Eindhoven University of Technology, Netherlands)

The sound identification of refactoring opportunities is still an open problem in software engineering. Recent studies have shown the effectiveness of machine learning models in recommending methods that should undergo different refactoring operations. In this work, we experiment with such approaches to identify methods that should undergo an Extract Method refactoring, in the context of ING, a large financial organization. More specifically, we (i) compare the code metrics distributions, which are used as features by the models, between open-source and ING systems, (ii) measure the accuracy of different machine learning models in recommending Extract Method refactorings, and (iii) compare the recommendations given by the models with the opinions of ING experts. Our results show that the feature distributions of ING systems and open-source systems are somewhat different, that machine learning models can recommend Extract Method refactorings with high accuracy, and that experts tend to agree with most of the recommendations of the model.

@InProceedings{ESEC/FSE21p1337,
  author    = {David van der Leij and Jasper Binda and Robbert van Dalen and Pieter Vallen and Yaping Luo and Maurício Aniche},
  title     = {Data-Driven Extract Method Recommendations: A Study at ING},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1337--1347},
  doi       = {10.1145/3468264.3473927},
  year      = {2021},
}
Anish, Preethu Rose
ESEC/FSE '21: "Domain Adaptation for an Automated Classification of Deontic Modalities in Software Engineering Contracts"
Vivek Joshi, Preethu Rose Anish, and Smita Ghaisas (TCS Research, India)

Contracts are agreements between parties engaging in economic transactions. They specify the deontic modalities that the signatories should be held responsible for and state the penalties or actions to be taken if the stated agreements are not met. Additionally, contracts have also been known to be a source of Software Engineering (SE) requirements. Identifying the deontic modalities in contracts can therefore add value to the Requirements Engineering (RE) phase of SE. The complex and ambiguous language of contracts makes it difficult and time-consuming to identify the deontic modalities (obligations, permissions, prohibitions) embedded in the text. State-of-the-art neural network models are effective for text classification; however, they require substantial amounts of training data. The availability of contract data is sparse owing to the confidentiality concerns of customers. In this paper, we leverage the linguistic and taxonomical similarities between regulations (available abundantly in the public domain) and contracts to demonstrate that it is possible to use regulations as training data for classifying deontic modalities in real-life contracts. We discuss the results of a range of experiments, from the use of a rule-based approach to Bidirectional Encoder Representations from Transformers (BERT), for automating the classification of deontic modalities. With BERT, we obtained an average precision and recall of 90% and 89.66%, respectively.

@InProceedings{ESEC/FSE21p1275,
  author    = {Vivek Joshi and Preethu Rose Anish and Smita Ghaisas},
  title     = {Domain Adaptation for an Automated Classification of Deontic Modalities in Software Engineering Contracts},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1275--1280},
  doi       = {10.1145/3468264.3473921},
  year      = {2021},
}
Apel, Sven
ESEC/FSE '21: "When Life Gives You Oranges: Detecting and Diagnosing Intermittent Job Failures at Mozilla"
Johannes Lampel, Sascha Just, Sven Apel, and Andreas Zeller (CISPA, Germany; Saarland University, Germany; Microsoft, USA)

Continuous delivery of cloud systems requires constant running of jobs (build processes, tests, etc.). One issue that plagues this continuous integration (CI) process is intermittent failures: non-deterministic false alarms that do not result from a bug in the software or job specification, but rather from issues in the underlying infrastructure. At Mozilla, such intermittent failures are called oranges, a reference to the color of the build status indicator. Because such intermittent failures disrupt CI, they erode the developers' trust in the jobs. We present a novel approach that automatically classifies failing jobs to determine whether a job execution failure arises from an actual software bug or was caused by flakiness in the job (e.g., the test) or the underlying infrastructure. For this purpose, we train classification models on job telemetry data to diagnose, with high precision, failure patterns involving features such as runtime, CPU load, operating system version, or specific platform. In an evaluation on a set of Mozilla CI jobs, our approach achieves precision scores of 73% on average across all data sets, with some test suites achieving precision scores good enough for fully automated classification (i.e., precision scores of up to 100%), and recall scores of 82% on average (up to 94%).

@InProceedings{ESEC/FSE21p1381,
  author    = {Johannes Lampel and Sascha Just and Sven Apel and Andreas Zeller},
  title     = {When Life Gives You Oranges: Detecting and Diagnosing Intermittent Job Failures at Mozilla},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1381--1392},
  doi       = {10.1145/3468264.3473931},
  year      = {2021},
}

ESEC/FSE '21: "How Can Manual Testing Processes Be Optimized? Developer Survey, Optimization Guidelines, and Case Studies"

Roman Haas, Daniel Elsner, Elmar Juergens, Alexander Pretschner, and Sven Apel (Saarland University, Germany; CQSE, Germany; TU Munich, Germany)

Manual software testing is tedious and costly, as it involves significant human effort. Yet, it is still widely applied in industry and will be in the foreseeable future. Although there is arguably a great need for optimization of manual testing processes, research focuses mostly on optimization techniques for automated tests. Accordingly, there is no precise understanding of the practices and processes of manual testing in industry, nor of the pitfalls and untapped optimization potential. To shed light on this issue, we conducted a survey among 38 testing professionals from 16 companies to investigate their manual testing processes and to identify potential for optimization. We synthesize guidelines for when optimization techniques from automated testing can be applied to manual testing. By means of case studies on two industrial software projects, we show that fault detection likelihood, test feedback time, and test creation effort can be improved when following our guidelines.

@InProceedings{ESEC/FSE21p1281,
  author    = {Roman Haas and Daniel Elsner and Elmar Juergens and Alexander Pretschner and Sven Apel},
  title     = {How Can Manual Testing Processes Be Optimized? Developer Survey, Optimization Guidelines, and Case Studies},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1281--1291},
  doi       = {10.1145/3468264.3473922},
  year      = {2021},
}
Arnaoudova, Venera
ESEC/FSE '21: "Reassessing Automatic Evaluation Metrics for Code Summarization Tasks"
Devjeet Roy, Sarah Fakhoury, and Venera Arnaoudova (Washington State University, USA)

In recent years, research in the domain of source code summarization has adopted data-driven techniques pioneered in machine translation (MT). Automatic evaluation metrics such as BLEU, METEOR, and ROUGE are fundamental to the evaluation of MT systems and have been adopted as proxies of human evaluation in the code summarization domain. However, the extent to which automatic metrics agree with the gold standard of human evaluation has not been evaluated on code summarization tasks. Despite this, marginal improvements in metric scores are often used to discriminate between the performance of competing summarization models. In this paper, we present a critical exploration of the applicability and interpretation of automatic metrics as evaluation techniques for code summarization tasks. We conduct an empirical study with 226 human annotators to assess the degree to which automatic metrics reflect human evaluation. Results indicate that metric improvements of less than 2 points do not guarantee systematic improvements in summarization quality, and are unreliable as proxies of human evaluation. When the difference between metric scores for two summarization approaches increases but remains within 5 points, some metrics such as METEOR and chrF become highly reliable proxies, whereas others, such as corpus BLEU, remain unreliable. Based on these findings, we make several recommendations for the use of automatic metrics to discriminate model performance in code summarization.

@InProceedings{ESEC/FSE21p1105,
  author    = {Devjeet Roy and Sarah Fakhoury and Venera Arnaoudova},
  title     = {Reassessing Automatic Evaluation Metrics for Code Summarization Tasks},
  booktitle = {Proc.\ ESEC/FSE},
  publisher = {ACM},
  pages     = {1105--1116},
  doi       = {10.1145/3468264.3468588},
  year      = {2021},
}
|
Arratibel, Maite |
ESEC/FSE '21: "Generating Metamorphic Relations ..."
Generating Metamorphic Relations for Cyber-Physical Systems with Genetic Programming: An Industrial Case Study
Jon Ayerdi, Valerio Terragni, Aitor Arrieta, Paolo Tonella, Goiuria Sagardui, and Maite Arratibel (Mondragon University, Spain; University of Auckland, New Zealand; USI Lugano, Switzerland; Orona, Spain) One of the major challenges in the verification of complex industrial Cyber-Physical Systems is the difficulty of determining whether a particular system output or behaviour is correct or not, the so-called test oracle problem. Metamorphic testing alleviates the oracle problem by reasoning on the relations that are expected to hold among multiple executions of the system under test, which are known as Metamorphic Relations (MRs). However, the development of effective MRs is often challenging and requires the involvement of domain experts. In this paper, we present a case study aiming at automating this process. To this end, we implemented GAssertMRs, a tool to automatically generate MRs with genetic programming. We assess the cost-effectiveness of this tool in the context of an industrial case study from the elevation domain. Our experimental results show that in most cases GAssertMRs outperforms the other baselines, including manually generated MRs developed with the help of domain experts. We then describe the lessons learned from our experiments and we outline the future work for the adoption of this technique by industrial practitioners. @InProceedings{ESEC/FSE21p1264, author = {Jon Ayerdi and Valerio Terragni and Aitor Arrieta and Paolo Tonella and Goiuria Sagardui and Maite Arratibel}, title = {Generating Metamorphic Relations for Cyber-Physical Systems with Genetic Programming: An Industrial Case Study}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1264--1274}, doi = {10.1145/3468264.3473920}, year = {2021}, } Publisher's Version |
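The metamorphic relations (MRs) this paper generates are specific to the elevation domain; as a generic illustration of the concept, here is a textbook-style sketch with two MRs for a sorting routine standing in for the system under test (all names here are illustrative, not from GAssertMRs).

```python
import random

def sorted_impl(xs):
    # System under test: Python's sort stands in for any implementation being verified.
    return sorted(xs)

def mr_permutation_invariance(xs):
    """MR: permuting the input must not change the sorted output."""
    shuffled = xs[:]
    random.shuffle(shuffled)
    return sorted_impl(xs) == sorted_impl(shuffled)

def mr_additivity(xs, c=10):
    """MR: adding a constant to every element shifts every output element by that constant."""
    follow_up = [x + c for x in xs]
    return sorted_impl(follow_up) == [x + c for x in sorted_impl(xs)]
```

Each MR sidesteps the oracle problem: neither check needs to know the expected sorted output, only how two related executions must relate.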
|
Arrieta, Aitor |
ESEC/FSE '21: "Generating Metamorphic Relations ..."
Generating Metamorphic Relations for Cyber-Physical Systems with Genetic Programming: An Industrial Case Study
Jon Ayerdi, Valerio Terragni, Aitor Arrieta, Paolo Tonella, Goiuria Sagardui, and Maite Arratibel (Mondragon University, Spain; University of Auckland, New Zealand; USI Lugano, Switzerland; Orona, Spain) One of the major challenges in the verification of complex industrial Cyber-Physical Systems is the difficulty of determining whether a particular system output or behaviour is correct or not, the so-called test oracle problem. Metamorphic testing alleviates the oracle problem by reasoning on the relations that are expected to hold among multiple executions of the system under test, which are known as Metamorphic Relations (MRs). However, the development of effective MRs is often challenging and requires the involvement of domain experts. In this paper, we present a case study aiming at automating this process. To this end, we implemented GAssertMRs, a tool to automatically generate MRs with genetic programming. We assess the cost-effectiveness of this tool in the context of an industrial case study from the elevation domain. Our experimental results show that in most cases GAssertMRs outperforms the other baselines, including manually generated MRs developed with the help of domain experts. We then describe the lessons learned from our experiments and we outline the future work for the adoption of this technique by industrial practitioners. @InProceedings{ESEC/FSE21p1264, author = {Jon Ayerdi and Valerio Terragni and Aitor Arrieta and Paolo Tonella and Goiuria Sagardui and Maite Arratibel}, title = {Generating Metamorphic Relations for Cyber-Physical Systems with Genetic Programming: An Industrial Case Study}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1264--1274}, doi = {10.1145/3468264.3473920}, year = {2021}, } Publisher's Version |
|
Arya, Abhishek |
ESEC/FSE '21: "FuzzBench: An Open Fuzzer ..."
FuzzBench: An Open Fuzzer Benchmarking Platform and Service
Jonathan Metzman, László Szekeres, Laurent Simon, Read Sprabery, and Abhishek Arya (Google, USA) Fuzzing is a key tool used to reduce bugs in production software. At Google, fuzzing has uncovered tens of thousands of bugs. Fuzzing is also a popular subject of academic research. In 2020 alone, over 120 papers were published on the topic of improving, developing, and evaluating fuzzers and fuzzing techniques. Yet, proper evaluation of fuzzing techniques remains elusive. The community has struggled to converge on methodology and standard tools for fuzzer evaluation. To address this problem, we introduce FuzzBench as an open-source turnkey platform and free service for evaluating fuzzers. It aims to be easy to use, fast, reliable, and provides reproducible experiments. Since its release in March 2020, FuzzBench has been widely used both in industry and academia, carrying out more than 150 experiments for external users. It has been used by several published and in-the-work papers from academic groups, and has had real impact on the most widely used fuzzing tools in industry. The presented case studies suggest that FuzzBench is on its way to becoming a standard fuzzer benchmarking platform. @InProceedings{ESEC/FSE21p1393, author = {Jonathan Metzman and László Szekeres and Laurent Simon and Read Sprabery and Abhishek Arya}, title = {FuzzBench: An Open Fuzzer Benchmarking Platform and Service}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1393--1403}, doi = {10.1145/3468264.3473932}, year = {2021}, } Publisher's Version Info |
|
Asyrofi, Muhammad Hilmi |
ESEC/FSE '21: "CrossASR++: A Modular Differential ..."
CrossASR++: A Modular Differential Testing Framework for Automatic Speech Recognition
Muhammad Hilmi Asyrofi, Zhou Yang, and David Lo (Singapore Management University, Singapore) Developers need to perform adequate testing to ensure the quality of Automatic Speech Recognition (ASR) systems. However, manually collecting required test cases is tedious and time-consuming. Our recent work proposes CrossASR, a differential testing method for ASR systems. This method first utilizes Text-to-Speech (TTS) to generate audios from texts automatically and then feeds these audios into different ASR systems for cross-referencing to uncover failed test cases. It also leverages a failure estimator to find failing test cases more efficiently. Such a method is inherently self-improvable: the performance can increase by leveraging more advanced TTS and ASR systems. So, in this accompanying tool demo paper, we further engineer CrossASR and propose CrossASR++, an easy-to-use ASR testing tool that can be conveniently extended to incorporate different TTS and ASR systems, and failure estimators. We also make CrossASR++ chunk texts from a given corpus dynamically and enable the estimator to work in a more effective and flexible way. We demonstrate that the new features can help CrossASR++ discover more failed test cases. Using the same TTS and ASR systems, CrossASR++ can uncover 26.2% more failed test cases for 4 ASRs than the original tool. Moreover, by simply adding one more ASR for cross-referencing, we can increase the number of failed test cases uncovered for each of the 4 ASR systems by 25.07%, 39.63%, 20.95% and 8.17% respectively. We also extend CrossASR++ with 5 additional failure estimators. Compared to the worst estimator, the best one can discover 10.41% more failed test cases within the same amount of time. The demo video for CrossASR++ can be viewed at https://youtu.be/ddRk-f0QV-g and the source code can be found at https://github.com/soarsmu/CrossASRplus.
@InProceedings{ESEC/FSE21p1575, author = {Muhammad Hilmi Asyrofi and Zhou Yang and David Lo}, title = {CrossASR++: A Modular Differential Testing Framework for Automatic Speech Recognition}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1575--1579}, doi = {10.1145/3468264.3473124}, year = {2021}, } Publisher's Version Video Info ESEC/FSE '21: "BiasRV: Uncovering Biased ..." BiasRV: Uncovering Biased Sentiment Predictions at Runtime Zhou Yang, Muhammad Hilmi Asyrofi, and David Lo (Singapore Management University, Singapore) Sentiment analysis (SA) systems, though widely applied in many domains, have been demonstrated to produce biased results. Some research works have been done in automatically generating test cases to reveal unfairness in SA systems, but the community still lacks tools that can monitor and uncover biased predictions at runtime. This paper fills this gap by proposing BiasRV, the first tool to raise an alarm when a deployed SA system makes a biased prediction on a given input text. To implement this feature, BiasRV dynamically extracts a template from an input text and generates gender-discriminatory mutants (semantically-equivalent texts that only differ in gender information) from the template. Based on popular metrics used to evaluate the overall fairness of an SA system, we define the distributional fairness property for an individual prediction of an SA system. This property specifies a requirement that for one piece of text, mutants from different gender classes should be treated similarly. Verifying the distributional fairness property causes much overhead to the running system. To run more efficiently, BiasRV adopts a two-step heuristic: (1) sampling several mutants from each gender and checking if the system predicts them as of the same sentiment, (2) checking distributional fairness only when sampled mutants have conflicting results. 
Experiments show that when compared to directly checking the distributional fairness property for each input text, our two-step heuristic can decrease the overhead used for analyzing mutants by 73.81% while only resulting in 6.7% of biased predictions being missed. Besides, BiasRV can be used conveniently without knowing the implementation of SA systems. Future researchers can easily extend BiasRV to detect more types of bias, e.g., race and occupation. The demo video for BiasRV can be viewed at https://youtu.be/WPe4Ml77d3U and the source code can be found at https://github.com/soarsmu/BiasRV. @InProceedings{ESEC/FSE21p1540, author = {Zhou Yang and Muhammad Hilmi Asyrofi and David Lo}, title = {BiasRV: Uncovering Biased Sentiment Predictions at Runtime}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1540--1544}, doi = {10.1145/3468264.3473117}, year = {2021}, } Publisher's Version Video |
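The core check BiasRV performs, as the abstract above describes it, is generating gender-discriminatory mutants from a template and comparing the predictions. The sketch below is a hypothetical minimal version of that idea; the word lists, template syntax, and `predict` callable are illustrative assumptions, not BiasRV's actual API.

```python
# Illustrative gender word classes (BiasRV's real mutant generation is richer).
MALE, FEMALE = ["he", "him", "his", "man"], ["she", "her", "hers", "woman"]

def mutants(template):
    """Fill a {0}-style gender slot with words from each gender class."""
    return ([template.format(w) for w in MALE],
            [template.format(w) for w in FEMALE])

def is_biased(predict, template):
    """Flag a biased prediction when the two gender classes receive different label sets."""
    male, female = mutants(template)
    male_labels = {predict(s) for s in male}
    female_labels = {predict(s) for s in female}
    return male_labels != female_labels
```

The two-step heuristic in the paper wraps a check like this: sample a few mutants first, and only verify the full distributional fairness property when the sampled predictions conflict.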
|
Ayerdi, Jon |
ESEC/FSE '21: "Generating Metamorphic Relations ..."
Generating Metamorphic Relations for Cyber-Physical Systems with Genetic Programming: An Industrial Case Study
Jon Ayerdi, Valerio Terragni, Aitor Arrieta, Paolo Tonella, Goiuria Sagardui, and Maite Arratibel (Mondragon University, Spain; University of Auckland, New Zealand; USI Lugano, Switzerland; Orona, Spain) One of the major challenges in the verification of complex industrial Cyber-Physical Systems is the difficulty of determining whether a particular system output or behaviour is correct or not, the so-called test oracle problem. Metamorphic testing alleviates the oracle problem by reasoning on the relations that are expected to hold among multiple executions of the system under test, which are known as Metamorphic Relations (MRs). However, the development of effective MRs is often challenging and requires the involvement of domain experts. In this paper, we present a case study aiming at automating this process. To this end, we implemented GAssertMRs, a tool to automatically generate MRs with genetic programming. We assess the cost-effectiveness of this tool in the context of an industrial case study from the elevation domain. Our experimental results show that in most cases GAssertMRs outperforms the other baselines, including manually generated MRs developed with the help of domain experts. We then describe the lessons learned from our experiments and we outline the future work for the adoption of this technique by industrial practitioners. @InProceedings{ESEC/FSE21p1264, author = {Jon Ayerdi and Valerio Terragni and Aitor Arrieta and Paolo Tonella and Goiuria Sagardui and Maite Arratibel}, title = {Generating Metamorphic Relations for Cyber-Physical Systems with Genetic Programming: An Industrial Case Study}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1264--1274}, doi = {10.1145/3468264.3473920}, year = {2021}, } Publisher's Version |
|
Babar, Muhammad Ali |
ESEC/FSE '21: "A Grounded Theory of the Role ..."
A Grounded Theory of the Role of Coordination in Software Security Patch Management
Nesara Dissanayake, Mansooreh Zahedi, Asangi Jayatilaka, and Muhammad Ali Babar (University of Adelaide, Australia) Several disastrous security attacks can be attributed to delays in patching software vulnerabilities. While researchers and practitioners have paid significant attention to automating vulnerability identification and patch development activities of software security patch management, there has been relatively little effort dedicated to gaining an in-depth understanding of the socio-technical aspects, e.g., coordination of interdependent activities of the patching process and patching decisions, that may cause delays in applying security patches. We report on a Grounded Theory study of the role of coordination in security patch management. The reported theory consists of four inter-related dimensions, i.e., causes, breakdowns, constraints, and mechanisms. The theory explains the causes that define the need for coordination among interdependent software/hardware components and multiple stakeholders’ decisions, the constraints that can negatively impact coordination, the breakdowns in coordination, and the potential corrective measures. This study provides potentially useful insights for researchers and practitioners who can carefully consider the needs of and devise suitable solutions for supporting the coordination of interdependencies involved in security patch management. @InProceedings{ESEC/FSE21p793, author = {Nesara Dissanayake and Mansooreh Zahedi and Asangi Jayatilaka and Muhammad Ali Babar}, title = {A Grounded Theory of the Role of Coordination in Software Security Patch Management}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {793--805}, doi = {10.1145/3468264.3468595}, year = {2021}, } Publisher's Version |
|
Bacchelli, Alberto |
ESEC/FSE '21: "Authorship Attribution of ..."
Authorship Attribution of Source Code: A Language-Agnostic Approach and Applicability in Software Engineering
Egor Bogomolov, Vladimir Kovalenko, Yurii Rebryk, Alberto Bacchelli, and Timofey Bryksin (JetBrains Research, Russia; HSE University, Russia; JetBrains Research, Netherlands; University of Zurich, Switzerland) Authorship attribution (i.e., determining who is the author of a piece of source code) is an established research topic. State-of-the-art results for the authorship attribution problem look promising for the software engineering field, where they could be applied to detect plagiarized code and prevent legal issues. With this article, we first introduce a new language-agnostic approach to authorship attribution of source code. Then, we discuss limitations of existing synthetic datasets for authorship attribution, and propose a data collection approach that delivers datasets that better reflect aspects important for potential practical use in software engineering. Finally, we demonstrate that high accuracy of authorship attribution models on existing datasets drastically drops when they are evaluated on more realistic data. We outline next steps for the design and evaluation of authorship attribution models that could bring the research efforts closer to practical use for software engineering. @InProceedings{ESEC/FSE21p932, author = {Egor Bogomolov and Vladimir Kovalenko and Yurii Rebryk and Alberto Bacchelli and Timofey Bryksin}, title = {Authorship Attribution of Source Code: A Language-Agnostic Approach and Applicability in Software Engineering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {932--944}, doi = {10.1145/3468264.3468606}, year = {2021}, } Publisher's Version |
|
Bai, Xuefang |
ESEC/FSE '21: "Learning-Based Extraction ..."
Learning-Based Extraction of First-Order Logic Representations of API Directives
Mingwei Liu, Xin Peng, Andrian Marcus, Christoph Treude, Xuefang Bai, Gang Lyu, Jiazhan Xie, and Xiaoxin Zhang (Fudan University, China; University of Texas at Dallas, USA; University of Adelaide, Australia) Developers often rely on API documentation to learn API directives, i.e., constraints and guidelines related to API usage. Failing to follow API directives may cause defects or improper implementations. Since there are no industry-wide standards on how to document API directives, they take many forms and are often hard to understand by developers or challenging to parse with tools. In this paper, we propose a learning based approach for extracting first-order logic representations of API directives (FOL directives for short). The approach, called LEADFOL, uses a joint learning method to extract atomic formulas by identifying the predicates and arguments involved in directive sentences, and recognizes the logical relations between atomic formulas, by parsing the sentence structures. It then parses the arguments and uses a learning based method to link API references to their corresponding API elements. Finally, it groups the formulas of the same class or method together and transforms them into conjunctive normal form. Our evaluation shows that LEADFOL can accurately extract more FOL directives than a state-of-the-art approach and that the extracted FOL directives are useful in supporting code reviews. @InProceedings{ESEC/FSE21p491, author = {Mingwei Liu and Xin Peng and Andrian Marcus and Christoph Treude and Xuefang Bai and Gang Lyu and Jiazhan Xie and Xiaoxin Zhang}, title = {Learning-Based Extraction of First-Order Logic Representations of API Directives}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {491--502}, doi = {10.1145/3468264.3468618}, year = {2021}, } Publisher's Version |
|
Baig, Zubair |
ESEC/FSE '21: "ICME: An Informed Consent ..."
ICME: An Informed Consent Management Engine for Conformance in Smart Building Environments
Chehara Pathmabandu, John Grundy, Mohan Baruwal Chhetri, and Zubair Baig (Monash University, Australia; CSIRO’s Data61, Australia; Deakin University, Australia) Smart buildings can reveal highly sensitive insights about their inhabitants and expose them to new privacy threats and vulnerabilities. Yet, convenience overrides privacy concerns and most people remain ignorant about this issue. We propose a novel Informed Consent Management Engine (ICME) that aims to: (a) increase users’ awareness about privacy issues and data collection practices in their smart building environments, (b) provide fine-grained visibility into privacy conformance and infringement by these devices, (c) recommend and visualise corrective user actions through ”digital nudging”, and (d) support the monitoring and management of personal data disclosure in a shared space. We present a reference architecture for ICME that can be used by software engineers to implement diverse end-user consent management solutions for smart buildings. We also provide a proof-of-concept prototype to demonstrate how the ICME approach works in a shared smart workplace. Demo: https://youtu.be/5y6CdyWAdgY @InProceedings{ESEC/FSE21p1545, author = {Chehara Pathmabandu and John Grundy and Mohan Baruwal Chhetri and Zubair Baig}, title = {ICME: An Informed Consent Management Engine for Conformance in Smart Building Environments}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1545--1549}, doi = {10.1145/3468264.3473118}, year = {2021}, } Publisher's Version Video |
|
Baltes, Sebastian |
ESEC/FSE '21: "Characterizing Search Activities ..."
Characterizing Search Activities on Stack Overflow
Jiakun Liu, Sebastian Baltes, Christoph Treude, David Lo, Yun Zhang, and Xin Xia (Zhejiang University, China; University of Adelaide, Australia; Singapore Management University, Singapore; Zhejiang University City College, China; Huawei, China) To solve programming issues, developers commonly search on Stack Overflow to seek potential solutions. However, there is a gap between the knowledge developers are interested in and the knowledge they are able to retrieve using search engines. To help developers efficiently retrieve relevant knowledge on Stack Overflow, prior studies proposed several techniques to reformulate queries and generate summarized answers. However, few studies performed a large-scale analysis using real-world search logs. In this paper, we characterize how developers search on Stack Overflow using such logs. By doing so, we identify the challenges developers face when searching on Stack Overflow and seek opportunities for the platform and researchers to help developers efficiently retrieve knowledge. To characterize search activities on Stack Overflow, we use search log data based on requests to Stack Overflow's web servers. We find that the most common search activity is reformulating the immediately preceding queries. Related work looked into query reformulations when using generic search engines and found 13 types of query reformulation strategies. Compared to their results, we observe that 71.78% of the reformulations can be fitted into those reformulation strategies. In terms of how queries are structured, 17.41% of the search sessions only search for fragments of source code artifacts (e.g., class and method names) without specifying the names of programming languages, libraries, or frameworks. Based on our findings, we provide actionable suggestions for Stack Overflow moderators and outline directions for future research. 
For example, we encourage Stack Overflow to set up a database that includes the relations between all computer programming terminologies shared on Stack Overflow, e.g., method name, data structure name, design pattern, and IDE name. By doing so, Stack Overflow could improve the performance of search engines by considering related programming terminologies at different levels of granularity. @InProceedings{ESEC/FSE21p919, author = {Jiakun Liu and Sebastian Baltes and Christoph Treude and David Lo and Yun Zhang and Xin Xia}, title = {Characterizing Search Activities on Stack Overflow}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {919--931}, doi = {10.1145/3468264.3468582}, year = {2021}, } Publisher's Version |
|
Baluta, Teodora |
ESEC/FSE '21: "SynGuar: Guaranteeing Generalization ..."
SynGuar: Guaranteeing Generalization in Programming by Example
Bo Wang, Teodora Baluta, Aashish Kolluri, and Prateek Saxena (National University of Singapore, Singapore) Programming by Example (PBE) is a program synthesis paradigm in which the synthesizer creates a program that matches a set of given examples. In many applications of such synthesis (e.g., program repair or reverse engineering), we are to reconstruct a program that is close to a specific target program, not merely to produce some program that satisfies the seen examples. In such settings, we wish that the synthesized program generalizes well, i.e., has as few errors as possible on the unobserved examples capturing the target function behavior. In this paper, we propose the first framework (called SynGuar) for PBE synthesizers that guarantees to achieve low generalization error with high probability. Our main contribution is a procedure to dynamically calculate how many additional examples suffice to theoretically guarantee generalization. We show how our techniques can be used in 2 well-known synthesis approaches: PROSE and STUN (synthesis through unification), for common string-manipulation program benchmarks. We find that often a few hundred examples suffice to provably bound generalization error below 5% with high (≥ 98%) probability on these benchmarks. Further, we confirm this empirically: SynGuar significantly improves the accuracy of existing synthesizers in generating the right target programs. But with fewer examples chosen arbitrarily, the same baseline synthesizers (without SynGuar) overfit and lose accuracy. @InProceedings{ESEC/FSE21p677, author = {Bo Wang and Teodora Baluta and Aashish Kolluri and Prateek Saxena}, title = {SynGuar: Guaranteeing Generalization in Programming by Example}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {677--689}, doi = {10.1145/3468264.3468621}, year = {2021}, } Publisher's Version Info Artifacts Reusable |
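SynGuar's actual procedure computes the required number of examples dynamically from the synthesizer's remaining version space; as a rough flavor of why "a few hundred examples" can suffice, a classical Occam's-razor sample bound for a finite hypothesis class can be computed as follows (this bound is a standard learning-theory illustration, not the paper's algorithm).

```python
import math

def examples_needed(hypothesis_space_size, epsilon, delta):
    """Occam bound: number of i.i.d. examples so that any program consistent with
    all of them has generalization error <= epsilon with probability >= 1 - delta,
    for a finite hypothesis class of the given size."""
    return math.ceil((math.log(hypothesis_space_size)
                      + math.log(1 / delta)) / epsilon)
```

For instance, a space of a million candidate programs needs only a few hundred examples to bound error below 5% with 98% probability, which matches the order of magnitude the abstract reports.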
|
Banerjee, Debasish |
ESEC/FSE '21: "Mono2Micro: A Practical and ..."
Mono2Micro: A Practical and Effective Tool for Decomposing Monolithic Java Applications to Microservices
Anup K. Kalia, Jin Xiao, Rahul Krishna, Saurabh Sinha, Maja Vukovic, and Debasish Banerjee (IBM Research, USA; IBM, USA) In migrating production workloads to cloud, enterprises often face the daunting task of evolving monolithic applications toward a microservice architecture. At IBM, we developed a tool called Mono2Micro to assist with this challenging task. Mono2Micro performs spatio-temporal decomposition, leveraging well-defined business use cases and runtime call relations to create functionally cohesive partitioning of application classes. Our preliminary evaluation of Mono2Micro showed promising results. How well does Mono2Micro perform against other decomposition techniques, and how do practitioners perceive the tool? This paper describes the technical foundations of Mono2Micro and presents results to answer these two questions. To answer the first question, we evaluated Mono2Micro against four existing techniques on a set of open-source and proprietary Java applications and using different metrics to assess the quality of decomposition and tool’s efficiency. Our results show that Mono2Micro significantly outperforms state-of-the-art baselines in specific metrics well-defined for the problem domain. To answer the second question, we conducted a survey of twenty-one practitioners in various industry roles who have used Mono2Micro. This study highlights several benefits of the tool, interesting practitioner perceptions, and scope for further improvements. Overall, these results show that Mono2Micro can provide a valuable aid to practitioners in creating functionally cohesive and explainable microservice decompositions. @InProceedings{ESEC/FSE21p1214, author = {Anup K. Kalia and Jin Xiao and Rahul Krishna and Saurabh Sinha and Maja Vukovic and Debasish Banerjee}, title = {Mono2Micro: A Practical and Effective Tool for Decomposing Monolithic Java Applications to Microservices}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1214--1224}, doi = {10.1145/3468264.3473915}, year = {2021}, } Publisher's Version |
|
Baskararajah, Janusan |
ESEC/FSE '21: "Term Interrelations and Trends ..."
Term Interrelations and Trends in Software Engineering
Janusan Baskararajah, Lei Zhang, and Andriy Miranskyy (Ryerson University, Canada) The Software Engineering (SE) community is prolific, making it challenging for experts to keep up with the flood of new papers and for neophytes to enter the field. Therefore, we posit that the community may benefit from a tool extracting terms and their interrelations from the SE community's text corpus and showing terms' trends. In this paper, we build a prototyping tool using the word embedding technique. We train the embeddings on the SE Body of Knowledge handbook and 15,233 research papers' titles and abstracts. We also create test cases necessary for validation of the training of the embeddings. We provide representative examples showing that the embeddings may aid in summarizing terms and uncovering trends in the knowledge base. @InProceedings{ESEC/FSE21p1471, author = {Janusan Baskararajah and Lei Zhang and Andriy Miranskyy}, title = {Term Interrelations and Trends in Software Engineering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1471--1474}, doi = {10.1145/3468264.3473132}, year = {2021}, } Publisher's Version |
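Term interrelations in a trained word-embedding space, as described in this abstract, are typically measured by cosine similarity between term vectors. The sketch below shows that computation on invented toy vectors; the vectors and term names are hypothetical, not outputs of the paper's model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-dimensional embeddings for three terms (real models use hundreds of dimensions).
testing = [1.0, 2.0, 0.5]
debugging = [0.9, 1.8, 0.6]
coffee = [-1.0, 0.2, 2.0]
```

Under these toy vectors, "testing" sits closer to "debugging" than to "coffee", which is the kind of interrelation the tool surfaces over the SE corpus.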
|
Baudry, Benoit |
ESEC/FSE '21: "A Longitudinal Analysis of ..."
A Longitudinal Analysis of Bloated Java Dependencies
César Soto-Valero, Thomas Durieux, and Benoit Baudry (KTH, Sweden) We study the evolution and impact of bloated dependencies in a single software ecosystem: Java/Maven. Bloated dependencies are third-party libraries that are packaged in the application binary but are not needed to run the application. We analyze the history of 435 Java projects. This historical data includes 48,469 distinct dependencies, which we study across a total of 31,515 versions of Maven dependency trees. Bloated dependencies steadily increase over time, and 89.2% of the direct dependencies that are bloated remain bloated in all subsequent versions of the studied projects. This empirical evidence suggests that developers can safely remove a bloated dependency. We further report novel insights regarding the unnecessary maintenance efforts induced by bloat. We find that 22% of dependency updates performed by developers are made on bloated dependencies, and that Dependabot suggests a similar ratio of updates on bloated dependencies. @InProceedings{ESEC/FSE21p1021, author = {César Soto-Valero and Thomas Durieux and Benoit Baudry}, title = {A Longitudinal Analysis of Bloated Java Dependencies}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1021--1031}, doi = {10.1145/3468264.3468589}, year = {2021}, } Publisher's Version Info |
|
Bhagwan, Ranjita |
ESEC/FSE '21: "Data-Driven Test Selection ..."
Data-Driven Test Selection at Scale
Sonu Mehta, Farima Farmahinifarahani, Ranjita Bhagwan, Suraj Guptha, Sina Jafari, Rahul Kumar, Vaibhav Saini, and Anirudh Santhiar (Microsoft Research, India; University of California at Irvine, USA; Microsoft, USA) Large-scale services depend on Continuous Integration/Continuous Deployment (CI/CD) processes to maintain their agility and code-quality. Change-based testing plays an important role in finding bugs, but testing after every change is prohibitively expensive at a scale where thousands of changes are committed every hour. Test selection models deal with this issue by running a subset of tests for every change. In this paper, we present a generic, language-agnostic and lightweight statistical model for test selection. Unlike existing techniques, the proposed model does not require complex feature extraction techniques. Consequently, it scales to hundreds of repositories of varying characteristics while capturing more than 99% of buggy pull requests. Additionally, to better evaluate test selection models, we propose application-specific metrics that capture both a reduction in resource cost and a reduction in pull-request turn-around time. By evaluating our model on 22 large repositories at Microsoft, we find that we can save 15%−30% of compute time while reporting back more than ≈99% of buggy pull requests. @InProceedings{ESEC/FSE21p1225, author = {Sonu Mehta and Farima Farmahinifarahani and Ranjita Bhagwan and Suraj Guptha and Sina Jafari and Rahul Kumar and Vaibhav Saini and Anirudh Santhiar}, title = {Data-Driven Test Selection at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1225--1235}, doi = {10.1145/3468264.3473916}, year = {2021}, } Publisher's Version |
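A lightweight statistical test-selection model of the kind this abstract describes can be sketched as follows: estimate, from past pull requests, how often each test failed when a given file changed, and select tests whose estimated failure probability exceeds a threshold. This is a hypothetical minimal illustration, not the model Microsoft deploys.

```python
from collections import defaultdict

def build_model(history):
    """history: iterable of (changed_files, failed_tests) pairs from past pull requests."""
    fail_counts = defaultdict(lambda: defaultdict(int))  # file -> test -> failure count
    change_counts = defaultdict(int)                     # file -> times changed
    for files, failed in history:
        for f in files:
            change_counts[f] += 1
            for t in failed:
                fail_counts[f][t] += 1
    return fail_counts, change_counts

def select_tests(model, changed_files, threshold=0.01):
    """Select any test whose estimated failure probability given the change exceeds threshold."""
    fail_counts, change_counts = model
    selected = set()
    for f in changed_files:
        if change_counts[f] == 0:
            continue  # never-seen file: no signal in this toy model
        for t, c in fail_counts[f].items():
            if c / change_counts[f] > threshold:
                selected.add(t)
    return selected
```

Tuning the threshold trades compute savings against the fraction of buggy pull requests caught, the two application-specific metrics the paper proposes.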
|
Bhattacharya, Arnab |
ESEC/FSE '21: "Sangrahaka: A Tool for Annotating ..."
Sangrahaka: A Tool for Annotating and Querying Knowledge Graphs
Hrishikesh Terdalkar and Arnab Bhattacharya (IIT Kanpur, India) We present a web-based tool Sangrahaka for annotating entities and relationships from text corpora towards construction of a knowledge graph and subsequent querying using templatized natural language questions. The application is language and corpus agnostic, but can be tuned for specific needs of a language or a corpus. The application is freely available for download and installation. Besides having a user-friendly interface, it is fast, supports customization, and is fault tolerant on both client and server side. It outperforms other annotation tools in an objective evaluation metric. The framework has been successfully used in two annotation tasks. The code is available from https://github.com/hrishikeshrt/sangrahaka. @InProceedings{ESEC/FSE21p1520, author = {Hrishikesh Terdalkar and Arnab Bhattacharya}, title = {Sangrahaka: A Tool for Annotating and Querying Knowledge Graphs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1520--1524}, doi = {10.1145/3468264.3473113}, year = {2021}, } Publisher's Version |
|
Binda, Jasper |
ESEC/FSE '21: "Data-Driven Extract Method ..."
Data-Driven Extract Method Recommendations: A Study at ING
David van der Leij, Jasper Binda, Robbert van Dalen, Pieter Vallen, Yaping Luo, and Maurício Aniche (Delft University of Technology, Netherlands; ING, Netherlands; Eindhoven University of Technology, Netherlands) The sound identification of refactoring opportunities is still an open problem in software engineering. Recent studies have shown the effectiveness of machine learning models in recommending methods that should undergo different refactoring operations. In this work, we experiment with such approaches to identify methods that should undergo an Extract Method refactoring, in the context of ING, a large financial organization. More specifically, we (i) compare the code metrics distributions, which are used as features by the models, between open-source and ING systems, (ii) measure the accuracy of different machine learning models in recommending Extract Method refactorings, (iii) compare the recommendations given by the models with the opinions of ING experts. Our results show that the feature distributions of ING systems and open-source systems are somewhat different, that machine learning models can recommend Extract Method refactorings with high accuracy, and that experts tend to agree with most of the recommendations of the model. @InProceedings{ESEC/FSE21p1337, author = {David van der Leij and Jasper Binda and Robbert van Dalen and Pieter Vallen and Yaping Luo and Maurício Aniche}, title = {Data-Driven Extract Method Recommendations: A Study at ING}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1337--1347}, doi = {10.1145/3468264.3473927}, year = {2021}, } Publisher's Version |
|
Biswas, Sumon |
ESEC/FSE '21: "Fair Preprocessing: Towards ..."
Fair Preprocessing: Towards Understanding Compositional Fairness of Data Transformers in Machine Learning Pipeline
Sumon Biswas and Hridesh Rajan (Iowa State University, USA) In recent years, many incidents have been reported where machine learning models exhibited discrimination among people based on race, sex, age, etc. Research has been conducted to measure and mitigate unfairness in machine learning models. For a machine learning task, it is a common practice to build a pipeline that includes an ordered set of data preprocessing stages followed by a classifier. However, most of the research on fairness has considered a single-classifier-based prediction task. What are the fairness impacts of the preprocessing stages in a machine learning pipeline? Furthermore, studies have shown that often the root cause of unfairness is ingrained in the data itself, rather than the model. But no research has been conducted to measure the unfairness caused by a specific transformation made in the data preprocessing stage. In this paper, we introduced the causal method of fairness to reason about the fairness impact of data preprocessing stages in the ML pipeline. We leveraged existing metrics to define the fairness measures of the stages. Then we conducted a detailed fairness evaluation of the preprocessing stages in 37 pipelines collected from three different sources. Our results show that certain data transformers are causing the model to exhibit unfairness. We identified a number of fairness patterns in several categories of data transformers. Finally, we showed how the local fairness of a preprocessing stage composes in the global fairness of the pipeline. We used the fairness composition to choose an appropriate downstream transformer that mitigates unfairness in the machine learning pipeline.
@InProceedings{ESEC/FSE21p981, author = {Sumon Biswas and Hridesh Rajan}, title = {Fair Preprocessing: Towards Understanding Compositional Fairness of Data Transformers in Machine Learning Pipeline}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {981--993}, doi = {10.1145/3468264.3468536}, year = {2021}, } Publisher's Version Info Artifacts Functional |
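Biswas and Rajan define stage-level fairness measures using causal reasoning; as a much simpler stand-in, the idea of attributing a fairness change to a single transformer can be sketched by comparing one plain group-fairness metric (disparate impact, used here only as an illustrative proxy) on predictions with and without that stage:

```python
def disparate_impact(y_pred, group):
    """Ratio of positive-prediction rates: unprivileged (0) vs. privileged (1)."""
    def rate(g):
        members = [p for p, s in zip(y_pred, group) if s == g]
        return sum(members) / max(1, len(members))
    return rate(0) / rate(1) if rate(1) else float("inf")

group  = [0, 0, 0, 0, 1, 1, 1, 1]
before = [1, 0, 1, 0, 1, 1, 1, 0]   # predictions from the pipeline without the transformer
after  = [1, 0, 0, 0, 1, 1, 1, 1]   # predictions with the transformer inserted
di_before = disparate_impact(before, group)  # 0.5 / 0.75 ≈ 0.667
di_after  = disparate_impact(after, group)   # 0.25 / 1.0 = 0.25
# The drop in disparate impact would be attributed to the transformer stage.
print(round(di_before, 3), round(di_after, 3))
```

The paper's actual measures are causal and compose local stage fairness into pipeline-level fairness; this sketch only shows the before/after comparison that motivates them.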
|
Bittner, Paul Maximilian |
ESEC/FSE '21: "Feature Trace Recording ..."
Feature Trace Recording
Paul Maximilian Bittner, Alexander Schultheiß, Thomas Thüm, Timo Kehrer, Jeffrey M. Young, and Lukas Linsbauer (University of Ulm, Germany; Humboldt University of Berlin, Germany; Oregon State University, USA; TU Braunschweig, Germany) Tracing requirements to their implementation is crucial to all stakeholders of a software development process. When managing software variability, requirements are typically expressed in terms of features, a feature being a user-visible characteristic of the software. While feature traces are fully documented in software product lines, ad-hoc branching and forking, known as clone-and-own, is still the dominant way for developing multi-variant software systems in practice. Retroactive migration to product lines suffers from uncertainties and high effort because knowledge of feature traces must be recovered but is scattered across teams or even lost. We propose a semi-automated methodology for recording feature traces proactively, during software development when the necessary knowledge is present. To support the ongoing development of previously unmanaged clone-and-own projects, we explicitly deal with the absence of domain knowledge for both existing and new source code. We evaluate feature trace recording by replaying code edit patterns from the history of two real-world product lines. Our results show that feature trace recording reduces the manual effort to specify traces. Recorded feature traces could improve automation in change-propagation among cloned system variants and could reduce effort if developers decide to migrate to a product line. @InProceedings{ESEC/FSE21p1007, author = {Paul Maximilian Bittner and Alexander Schultheiß and Thomas Thüm and Timo Kehrer and Jeffrey M. Young and Lukas Linsbauer}, title = {Feature Trace Recording}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1007--1020}, doi = {10.1145/3468264.3468531}, year = {2021}, } Publisher's Version Info Artifacts Reusable |
|
Bodden, Eric |
ESEC/FSE '21: "IDE Support for Cloud-Based ..."
IDE Support for Cloud-Based Static Analyses
Linghui Luo, Martin Schäf, Daniel Sanchez, and Eric Bodden (University of Paderborn, Germany; Amazon Web Services, USA; Amazon Alexa, USA; Fraunhofer IEM, Germany) Integrating static analyses into continuous integration (CI) or continuous delivery (CD) has become the best practice for assuring code quality and security. Static Application Security Testing (SAST) tools fit well into CI/CD, because CI/CD allows time for deep static analyses on large code bases and prevents vulnerabilities in the early stages of the development lifecycle. In CI/CD, the SAST tools usually run in the cloud and provide findings via a web interface. Recent studies show that developers prefer seeing the findings of these tools directly in their IDEs. Most tools with IDE integration run lightweight static analyses and can give feedback at coding time, but SAST tools used in CI/CD take longer to run and usually are not able to do so. Can developers interact directly with a cloud-based SAST tool that is typically used in CI/CD through their IDE? We investigated if such a mechanism can integrate cloud-based SAST tools better into a developer’s workflow than web-based solutions. We interviewed developers to understand their expectations from an IDE solution. Guided by these interviews, we implemented an IDE prototype for an existing cloud-based SAST tool. With a usability test using this prototype, we found that the IDE solution promoted more frequent tool interactions. In particular, developers performed code scans three times more often. This indicates better integration of the cloud-based SAST tool into developers’ workflows. Furthermore, while our study did not show statistically significant improvement on developers’ code-fixing performance, it did show a promising reduction in time for fixing vulnerable code. 
@InProceedings{ESEC/FSE21p1178, author = {Linghui Luo and Martin Schäf and Daniel Sanchez and Eric Bodden}, title = {IDE Support for Cloud-Based Static Analyses}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1178--1189}, doi = {10.1145/3468264.3468535}, year = {2021}, } Publisher's Version Info |
|
Bogomolov, Egor |
ESEC/FSE '21: "Authorship Attribution of ..."
Authorship Attribution of Source Code: A Language-Agnostic Approach and Applicability in Software Engineering
Egor Bogomolov, Vladimir Kovalenko, Yurii Rebryk, Alberto Bacchelli, and Timofey Bryksin (JetBrains Research, Russia; HSE University, Russia; JetBrains Research, Netherlands; University of Zurich, Switzerland) Authorship attribution (i.e., determining who is the author of a piece of source code) is an established research topic. State-of-the-art results for the authorship attribution problem look promising for the software engineering field, where they could be applied to detect plagiarized code and prevent legal issues. With this article, we first introduce a new language-agnostic approach to authorship attribution of source code. Then, we discuss limitations of existing synthetic datasets for authorship attribution, and propose a data collection approach that delivers datasets that better reflect aspects important for potential practical use in software engineering. Finally, we demonstrate that high accuracy of authorship attribution models on existing datasets drastically drops when they are evaluated on more realistic data. We outline next steps for the design and evaluation of authorship attribution models that could bring the research efforts closer to practical use for software engineering. @InProceedings{ESEC/FSE21p932, author = {Egor Bogomolov and Vladimir Kovalenko and Yurii Rebryk and Alberto Bacchelli and Timofey Bryksin}, title = {Authorship Attribution of Source Code: A Language-Agnostic Approach and Applicability in Software Engineering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {932--944}, doi = {10.1145/3468264.3468606}, year = {2021}, } Publisher's Version |
|
Böhme, Marcel |
ESEC/FSE '21: "Estimating Residual Risk in ..."
Estimating Residual Risk in Greybox Fuzzing
Marcel Böhme, Danushka Liyanage, and Valentin Wüstholz (Monash University, Australia; ConsenSys, Germany) For any errorless fuzzing campaign, no matter how long, there is always some residual risk that a software error would be discovered if only the campaign was run for just a bit longer. Recently, greybox fuzzing tools have found widespread adoption. Yet, practitioners can only guess when the residual risk of a greybox fuzzing campaign falls below a specific, maximum allowable threshold. In this paper, we explain why residual risk cannot be directly estimated for greybox campaigns, argue that the discovery probability (i.e., the probability that the next generated input increases code coverage) provides an excellent upper bound, and explore sound statistical methods to estimate the discovery probability in an ongoing greybox campaign. We find that estimators for blackbox fuzzing systematically and substantially under-estimate the true risk. An engineer—who stops the campaign when the estimators purport a risk below the maximum allowable risk—is vastly misled. She might need to execute a campaign that is orders of magnitude longer to achieve the allowable risk. Hence, the key challenge we address in this paper is adaptive bias: The probability to discover a specific error actually increases over time. We provide the first probabilistic analysis of adaptive bias, and introduce two novel classes of estimators that tackle adaptive bias. With our estimators, the engineer can decide with confidence when to abort the campaign. @InProceedings{ESEC/FSE21p230, author = {Marcel Böhme and Danushka Liyanage and Valentin Wüstholz}, title = {Estimating Residual Risk in Greybox Fuzzing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {230--241}, doi = {10.1145/3468264.3468570}, year = {2021}, } Publisher's Version Info Artifacts Reusable |
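The classical blackbox estimator of discovery probability, which the abstract argues under-estimates residual risk under adaptive bias, is the Good–Turing estimate: the fraction of "species" (e.g., coverage paths) observed exactly once. A minimal sketch of that baseline (the paper's corrected greybox estimators are not reproduced here):

```python
from collections import Counter

def good_turing_discovery_probability(species_per_input):
    """Good–Turing estimate f1/n of the probability that the next input
    exercises a new species, where f1 is the number of species observed
    exactly once and n is the number of inputs so far."""
    n = len(species_per_input)
    counts = Counter(species_per_input)
    f1 = sum(1 for c in counts.values() if c == 1)
    return f1 / n

# 10 fuzzer inputs mapped to the coverage path each one exercised;
# paths C and D were each hit exactly once.
trace = ["A", "A", "B", "A", "B", "C", "A", "B", "A", "D"]
print(good_turing_discovery_probability(trace))  # 0.2
```

Under adaptive bias, a greybox fuzzer's mutation feedback makes future discoveries more likely than this static estimate suggests, which is exactly why the paper introduces new estimator classes.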
|
Bračevac, Oliver |
ESEC/FSE '21: "LLSC: A Parallel Symbolic ..."
LLSC: A Parallel Symbolic Execution Compiler for LLVM IR
Guannan Wei, Shangyin Tan, Oliver Bračevac, and Tiark Rompf (Purdue University, USA) We present LLSC, a prototype compiler for nondeterministic parallel symbolic execution of the LLVM intermediate representation (IR). Given an LLVM IR program, LLSC generates code preserving the symbolic execution semantics and orchestrating solver invocations. The generated code runs efficiently, since the code has eliminated the interpretation overhead and explores multiple paths in parallel. To the best of our knowledge, LLSC is the first compiler for fork-based symbolic execution semantics that can generate parallel execution code. In this demonstration paper, we present the current development and preliminary evaluation of LLSC. The principle behind LLSC is to automatically specialize a symbolic interpreter via the 1st Futamura projection, a fundamental connection between interpreters and compilers. The symbolic interpreter is written in an expressive high-level language equipped with a multi-stage programming facility. We demonstrate the run time performance through a set of benchmark programs, showing that LLSC outperforms interpretation-based symbolic execution engines in significant ways. @InProceedings{ESEC/FSE21p1495, author = {Guannan Wei and Shangyin Tan and Oliver Bračevac and Tiark Rompf}, title = {LLSC: A Parallel Symbolic Execution Compiler for LLVM IR}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1495--1499}, doi = {10.1145/3468264.3473108}, year = {2021}, } Publisher's Version |
|
Braha, Julian |
ESEC/FSE '21: "Finding Broken Linux Configuration ..."
Finding Broken Linux Configuration Specifications by Statically Analyzing the Kconfig Language
Jeho Oh, Necip Fazıl Yıldıran, Julian Braha, and Paul Gazzillo (University of Texas at Austin, USA; University of Central Florida, USA) Highly-configurable software underpins much of our computing infrastructure. It enables extensive reuse, but opens the door to broken configuration specifications. The configuration specification language, Kconfig, is designed to prevent invalid configurations of the Linux kernel from being built. However, the astronomical size of the configuration space for Linux makes finding specification bugs difficult by hand or with random testing. In this paper, we introduce a software model checking framework for building Kconfig static analysis tools. We develop a formal semantics of the Kconfig language and implement the semantics in a symbolic evaluator called kclause that models Kconfig behavior as logical formulas. We then design and implement a bug finder, called kismet, that takes kclause models and leverages automated theorem proving to find unmet dependency bugs. kismet is evaluated for its precision, performance, and impact on kernel development for a recent version of Linux, which has over 140,000 lines of Kconfig across 28 architecture-specific specifications. Our evaluation finds 781 bugs (151 when considering sharing among Kconfig specifications) with 100% precision, spending between 37 and 90 minutes for each Kconfig specification, although it misses some bugs due to underapproximation. Compared to random testing, kismet finds substantially more true positive bugs in a fraction of the time. @InProceedings{ESEC/FSE21p893, author = {Jeho Oh and Necip Fazıl Yıldıran and Julian Braha and Paul Gazzillo}, title = {Finding Broken Linux Configuration Specifications by Statically Analyzing the Kconfig Language}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {893--905}, doi = {10.1145/3468264.3468578}, year = {2021}, } Publisher's Version Artifacts Functional |
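An "unmet dependency bug" of the kind kismet hunts arises because Kconfig's `select` forces a symbol on even when that symbol's `depends on` clause is false. kismet encodes Kconfig semantics as logical formulas and uses automated theorem proving; the brute-force sketch below (with hypothetical symbols A, B, C) only illustrates what such a bug looks like:

```python
from itertools import product

# Toy Kconfig fragment (hypothetical symbols):
#   config A: bool, "select B"      -> enabling A forces B on
#   config B: bool, "depends on C"  -> B should only be on when C is,
#                                      but 'select' bypasses this check
symbols = ["A", "B", "C"]

def reachable(cfg):
    # configurations the Kconfig machinery can produce: A=on forces B=on
    return (not cfg["A"]) or cfg["B"]

def unmet_dependency(cfg):
    # B enabled while its declared dependency C is unmet
    return cfg["B"] and not cfg["C"]

configs = [dict(zip(symbols, bits)) for bits in product([False, True], repeat=3)]
bugs = [c for c in configs if reachable(c) and unmet_dependency(c)]
print(any(b["A"] for b in bugs))  # True: A=on, C=off selects B with C unmet
```

Real Kconfig specifications have hundreds of thousands of symbols, which is why enumeration is hopeless and kismet's SAT-based encoding is needed.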
|
Broughton, Matthew |
ESEC/FSE '21: "StateFormer: Fine-Grained ..."
StateFormer: Fine-Grained Type Recovery from Binaries using Generative State Modeling
Kexin Pei, Jonas Guan, Matthew Broughton, Zhongtian Chen, Songchen Yao, David Williams-King, Vikas Ummadisetty, Junfeng Yang, Baishakhi Ray, and Suman Jana (Columbia University, USA; University of Toronto, Canada; Dublin High School, Ireland) Binary type inference is a critical reverse engineering task supporting many security applications, including vulnerability analysis, binary hardening, forensics, and decompilation. It is a difficult task because source-level type information is often stripped during compilation, leaving only binaries with untyped memory and register accesses. Existing approaches rely on hand-coded type inference rules defined by domain experts, which are brittle and require nontrivial effort to maintain and update. Even though machine learning approaches have shown promise at automatically learning the inference rules, their accuracy is still low, especially for optimized binaries. We present StateFormer, a new neural architecture that is adept at accurate and robust type inference. StateFormer follows a two-step transfer learning paradigm. In the pretraining step, the model is trained with Generative State Modeling (GSM), a novel task that we design to teach the model to statically approximate execution effects of assembly instructions in both forward and backward directions. In the finetuning step, the pretrained model learns to use its knowledge of operational semantics to infer types. We evaluate StateFormer's performance on a corpus of 33 popular open-source software projects containing over 1.67 billion variables of different types. The programs are compiled with GCC and LLVM over 4 optimization levels O0-O3, and 3 obfuscation passes based on LLVM. Our model significantly outperforms state-of-the-art ML-based tools by 14.6% in recovering types for both function arguments and variables. Our ablation studies show that GSM improves type inference accuracy by 33%. 
@InProceedings{ESEC/FSE21p690, author = {Kexin Pei and Jonas Guan and Matthew Broughton and Zhongtian Chen and Songchen Yao and David Williams-King and Vikas Ummadisetty and Junfeng Yang and Baishakhi Ray and Suman Jana}, title = {StateFormer: Fine-Grained Type Recovery from Binaries using Generative State Modeling}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {690--702}, doi = {10.1145/3468264.3468607}, year = {2021}, } Publisher's Version Info Artifacts Reusable |
|
Bryksin, Timofey |
ESEC/FSE '21: "One Thousand and One Stories: ..."
One Thousand and One Stories: A Large-Scale Survey of Software Refactoring
Yaroslav Golubev, Zarina Kurbatova, Eman Abdullah AlOmar, Timofey Bryksin, and Mohamed Wiem Mkaouer (JetBrains Research, Russia; Rochester Institute of Technology, USA; HSE University, Russia) Despite the availability of refactoring as a feature in popular IDEs, recent studies revealed that developers are reluctant to use them, and still prefer the manual refactoring of their code. At JetBrains, our goal is to fully support refactoring features in IntelliJ-based IDEs and improve their adoption in practice. Therefore, we start by raising the following main questions. How exactly do people refactor code? What refactorings are the most popular? Why do some developers tend not to use convenient IDE refactoring tools? In this paper, we investigate the raised questions through the design and implementation of a survey targeting 1,183 users of IntelliJ-based IDEs. Our quantitative and qualitative analysis of the survey results shows that almost two-thirds of developers spend more than one hour in a single session refactoring their code; that refactoring types vary greatly in popularity; and that a lot of developers would like to know more about IDE refactoring features but lack the means to do so. These results serve us internally to support the next generation of refactoring features, as well as can help our research community to establish new directions in the refactoring usability research. @InProceedings{ESEC/FSE21p1303, author = {Yaroslav Golubev and Zarina Kurbatova and Eman Abdullah AlOmar and Timofey Bryksin and Mohamed Wiem Mkaouer}, title = {One Thousand and One Stories: A Large-Scale Survey of Software Refactoring}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1303--1313}, doi = {10.1145/3468264.3473924}, year = {2021}, } Publisher's Version ESEC/FSE '21: "Authorship Attribution of ..." 
Authorship Attribution of Source Code: A Language-Agnostic Approach and Applicability in Software Engineering Egor Bogomolov, Vladimir Kovalenko, Yurii Rebryk, Alberto Bacchelli, and Timofey Bryksin (JetBrains Research, Russia; HSE University, Russia; JetBrains Research, Netherlands; University of Zurich, Switzerland) Authorship attribution (i.e., determining who is the author of a piece of source code) is an established research topic. State-of-the-art results for the authorship attribution problem look promising for the software engineering field, where they could be applied to detect plagiarized code and prevent legal issues. With this article, we first introduce a new language-agnostic approach to authorship attribution of source code. Then, we discuss limitations of existing synthetic datasets for authorship attribution, and propose a data collection approach that delivers datasets that better reflect aspects important for potential practical use in software engineering. Finally, we demonstrate that high accuracy of authorship attribution models on existing datasets drastically drops when they are evaluated on more realistic data. We outline next steps for the design and evaluation of authorship attribution models that could bring the research efforts closer to practical use for software engineering. @InProceedings{ESEC/FSE21p932, author = {Egor Bogomolov and Vladimir Kovalenko and Yurii Rebryk and Alberto Bacchelli and Timofey Bryksin}, title = {Authorship Attribution of Source Code: A Language-Agnostic Approach and Applicability in Software Engineering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {932--944}, doi = {10.1145/3468264.3468606}, year = {2021}, } Publisher's Version |
|
Cai, Yan |
ESEC/FSE '21: "Sound and Efficient Concurrency ..."
Sound and Efficient Concurrency Bug Prediction
Yan Cai, Hao Yun, Jinqiu Wang, Lei Qiao, and Jens Palsberg (Institute of Software at Chinese Academy of Sciences, China; Beijing Institute of Control Engineering, China; University of California at Los Angeles, USA) Concurrency bugs are extremely difficult to detect. Recently, several dynamic techniques have achieved sound analysis. M2 is even complete for two threads. It is designed to decide whether two events can occur consecutively. However, real-world concurrency bugs can involve more events and threads. Some can occur when the order of two or more events can be exchanged even if they do not occur consecutively. We propose a new technique SeqCheck to soundly decide whether a sequence of events can occur in a specified order. The ordered sequence represents a potential concurrency bug. And several known forms of concurrency bugs can be easily encoded into event sequences where each represents a way that the bug can occur. To achieve it, SeqCheck explicitly analyzes branch events and includes a set of efficient algorithms. We show that SeqCheck is sound; and it is also complete on traces of two threads. We have implemented SeqCheck to detect three types of concurrency bugs and evaluated it on 51 Java benchmarks producing up to billions of events. Compared with M2 and other three recent sound race detectors, SeqCheck detected 333 races in ~30 minutes; while others detected from 130 to 285 races in ~6 to ~12 hours. SeqCheck detected 20 deadlocks in ~6 seconds. This is only one fewer than Dirk, but Dirk spent more than one hour. SeqCheck also detected 30 atomicity violations in ~20 minutes. The evaluation shows SeqCheck can significantly outperform existing concurrency bug detectors. 
@InProceedings{ESEC/FSE21p255, author = {Yan Cai and Hao Yun and Jinqiu Wang and Lei Qiao and Jens Palsberg}, title = {Sound and Efficient Concurrency Bug Prediction}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {255--267}, doi = {10.1145/3468264.3468549}, year = {2021}, } Publisher's Version Artifacts Reusable ESEC/FSE '21: "Detecting Concurrency Vulnerabilities ..." Detecting Concurrency Vulnerabilities Based on Partial Orders of Memory and Thread Events Kunpeng Yu, Chenxu Wang, Yan Cai, Xiapu Luo, and Zijiang Yang (Xi'an Jiaotong University, China; Institute of Software at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Hong Kong Polytechnic University, China) Memory vulnerabilities are the main causes of software security problems. However, detecting vulnerabilities in multi-threaded programs is challenging because many vulnerabilities occur under specific executions, and it is hard to explore all possible executions of a multi-threaded program. Existing approaches are either computationally intensive or likely to miss some vulnerabilities due to the complex thread interleaving. This paper introduces a novel approach to detect concurrency memory vulnerabilities based on partial orders of events. A partial order on a set of events represents the definite execution orders of events. It allows constructing feasible traces exposing specific vulnerabilities by exchanging the execution orders of vulnerability-potential events. It also reduces the search space of possible executions and thus improves computational efficiency. We propose new algorithms to extract vulnerability-potential event pairs for three kinds of memory vulnerabilities. We also design a novel algorithm to compute a potential event pair's feasible set, which contains the relevant events required by a feasible trace. Our method extends existing approaches for data race detection by considering that two events are protected by the same lock. 
We implement a prototype of our approach and conduct experiments to evaluate its performance. Experimental results show that our tool exhibits superiority over state-of-the-art algorithms in both effectiveness and efficiency. @InProceedings{ESEC/FSE21p280, author = {Kunpeng Yu and Chenxu Wang and Yan Cai and Xiapu Luo and Zijiang Yang}, title = {Detecting Concurrency Vulnerabilities Based on Partial Orders of Memory and Thread Events}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {280--291}, doi = {10.1145/3468264.3468572}, year = {2021}, } Publisher's Version |
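Both abstracts above reason about which orders of trace events are feasible under a partial order. A stripped-down sketch of the underlying check, assuming a plain happens-before DAG and ignoring the branch and lock reasoning that makes SeqCheck sound in general (a later event can only be reordered before an earlier one if it does not happen-before it):

```python
def happens_before(edges, a, b):
    """Is there a path a -> ... -> b in the happens-before DAG?"""
    stack, seen = [a], set()
    while stack:
        n = stack.pop()
        if n == b:
            return True
        if n not in seen:
            seen.add(n)
            stack.extend(edges.get(n, ()))
    return False

def order_feasible(edges, sequence):
    """A necessary condition for the events to occur in the given order:
    no later event in the sequence happens-before an earlier one."""
    return not any(happens_before(edges, sequence[j], sequence[i])
                   for i in range(len(sequence))
                   for j in range(i + 1, len(sequence)))

# w1 happens-before r1 (e.g., program order in one thread); u2 is unordered.
edges = {"w1": ["r1"]}
print(order_feasible(edges, ["r1", "w1"]))        # False: cannot reorder past HB
print(order_feasible(edges, ["w1", "u2", "r1"]))  # True: u2 is unconstrained
```

This condition is necessary but not sufficient; SeqCheck's contribution is deciding feasibility soundly, including branch events and lock semantics, which this sketch does not attempt.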
|
Cao, Chun |
ESEC/FSE '21: "Timely and Accurate Detection ..."
Timely and Accurate Detection of Model Deviation in Self-Adaptive Software-Intensive Systems
Yanxiang Tong, Yi Qin, Yanyan Jiang, Chang Xu, Chun Cao, and Xiaoxing Ma (Nanjing University, China) Control-based approaches to self-adaptive software-intensive systems (SASs) are hailed for their optimal performance and theoretical guarantees on the reliability of adaptation behavior. However, in practice the guarantees are often threatened by model deviations that occur at runtime. In this paper, we propose a Model-guided Deviation Detector (MoD2) for timely and accurate detection of model deviations. To ensure reliability, a SAS can switch from a control-based optimal controller to a mandatory controller once an unsafe model deviation is detected. MoD2 achieves both high timeliness and high accuracy through a deliberate fusion of parameter deviation estimation, uncertainty compensation, and safe region quantification. Empirical evaluation with three exemplar systems validated the efficacy of MoD2 (93.3% shorter detection delay, 39.4% lower FN rate, and 25.2% lower FP rate), as well as the benefits of the adaptation-switching mechanism (abnormal rate dropped by 29.2%). @InProceedings{ESEC/FSE21p168, author = {Yanxiang Tong and Yi Qin and Yanyan Jiang and Chang Xu and Chun Cao and Xiaoxing Ma}, title = {Timely and Accurate Detection of Model Deviation in Self-Adaptive Software-Intensive Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {168--180}, doi = {10.1145/3468264.3468548}, year = {2021}, } Publisher's Version |
|
Cao, Yinzhi |
ESEC/FSE '21: "Detecting Node.js Prototype ..."
Detecting Node.js Prototype Pollution Vulnerabilities via Object Lookup Analysis
Song Li, Mingqing Kang, Jianwei Hou, and Yinzhi Cao (Johns Hopkins University, USA; Renmin University of China, China) Prototype pollution is a type of vulnerability specific to prototype-based languages, such as JavaScript, which allows an adversary to pollute a base object’s property, leading to a further consequence such as Denial of Service (DoS), arbitrary code execution, and session fixation. On one hand, the only prior work in detecting prototype pollution adopts dynamic analysis to fuzz package inputs, which inevitably has code coverage issues in triggering some deeply embedded vulnerabilities. On the other hand, it is challenging to apply state-of-the-art static analysis in detecting prototype pollution because of the involvement of prototype chains and fine-grained object relations including built-in ones. In this paper, we propose a flow-, context-, and branch-sensitive static taint analysis tool, called ObjLupAnsys, to detect prototype pollution vulnerabilities. The key of ObjLupAnsys is a so-called object lookup analysis, which gradually expands the source and sink objects into big clusters with a complex inner structure by performing targeted object lookups in both clusters so that a system built-in function can be redefined. Specifically, at the source cluster, ObjLupAnsys proactively creates new object properties based on how the target program uses the initial source object; at the sink cluster, ObjLupAnsys assigns property values in object lookups to decrease the number of object lookups to reach a system built-in function. We implemented an open-source tool and applied it for the detection of prototype pollution among Node.js packages. Our evaluation shows that ObjLupAnsys finds 61 zero-day, previously-unknown, exploitable vulnerabilities as opposed to 18 by the state-of-the-art dynamic fuzzing tool and three by a state-of-the-art static analysis tool that is modified to detect prototype pollution. 
To date, 11 vulnerable Node.js packages have been assigned CVE numbers, and five have already been patched by their developers. In addition, ObjLupAnsys also discovered seven applications or packages, including a real-world online website, that are indirectly vulnerable due to the inclusion of vulnerable packages found by ObjLupAnsys. @InProceedings{ESEC/FSE21p268, author = {Song Li and Mingqing Kang and Jianwei Hou and Yinzhi Cao}, title = {Detecting Node.js Prototype Pollution Vulnerabilities via Object Lookup Analysis}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {268--279}, doi = {10.1145/3468264.3468542}, year = {2021}, } Publisher's Version |
|
Carbin, Michael |
ESEC/FSE '21: "Generalizable and Interpretable ..."
Generalizable and Interpretable Learning for Configuration Extrapolation
Yi Ding, Ahsan Pervaiz, Michael Carbin, and Henry Hoffmann (Massachusetts Institute of Technology, USA; University of Chicago, USA) Modern software applications are increasingly configurable, which puts a burden on users to tune these configurations for their target hardware and workloads. To help users, machine learning techniques can model the complex relationships between software configuration parameters and performance. While powerful, these learners have two major drawbacks: (1) they rarely incorporate prior knowledge and (2) they produce outputs that are not interpretable by users. These limitations make it difficult to (1) leverage information a user has already collected (e.g., tuning for new hardware using the best configurations from old hardware) and (2) gain insights into the learner’s behavior (e.g., understanding why the learner chose different configurations on different hardware or for different workloads). To address these issues, this paper presents two configuration optimization tools, GIL and GIL+, using the proposed generalizable and interpretable learning approaches. To incorporate prior knowledge, the proposed tools (1) start from known configurations, (2) iteratively construct a new linear model, (3) extrapolate better performance configurations from that model, and (4) repeat. Since the base learners are linear models, these tools are inherently interpretable. We enhance this property with a graphical representation of how they arrived at the highest performance configuration. We evaluate GIL and GIL+ by using them to configure Apache Spark workloads on different hardware platforms and find that, compared to prior work, GIL and GIL+ produce comparable, and sometimes even better performance configurations, but with interpretable results. 
@InProceedings{ESEC/FSE21p728, author = {Yi Ding and Ahsan Pervaiz and Michael Carbin and Henry Hoffmann}, title = {Generalizable and Interpretable Learning for Configuration Extrapolation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {728--740}, doi = {10.1145/3468264.3468603}, year = {2021}, } Publisher's Version |
|
Chakraborty, Joymallya |
ESEC/FSE '21: "Bias in Machine Learning Software: ..."
Bias in Machine Learning Software: Why? How? What to Do?
Joymallya Chakraborty, Suvodeep Majumder, and Tim Menzies (North Carolina State University, USA) Increasingly, software is making autonomous decisions in areas such as criminal sentencing, credit card approval, hiring, and so on. Some of these decisions show bias and adversely affect certain social groups (e.g. those defined by sex, race, age, marital status). Many prior works on bias mitigation take the following form: change the data or learners in multiple ways, then see if any of that improves fairness. Perhaps a better approach is to postulate root causes of bias and then apply a resolution strategy. This paper postulates that the root causes of bias are the prior decisions that affect (a) what data was selected and (b) the labels assigned to those examples. Our Fair-SMOTE algorithm removes biased labels and rebalances internal distributions so that, for each sensitive attribute, examples are equally represented in both positive and negative classes. In our tests, this method was just as effective at reducing bias as prior approaches. Further, models generated via Fair-SMOTE achieve higher performance (measured in terms of recall and F1) than other state-of-the-art fairness improvement algorithms. To the best of our knowledge, measured in terms of the number of analyzed learners and datasets, this study is one of the largest studies on bias mitigation yet presented in the literature. @InProceedings{ESEC/FSE21p429, author = {Joymallya Chakraborty and Suvodeep Majumder and Tim Menzies}, title = {Bias in Machine Learning Software: Why? How? What to Do?}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {429--440}, doi = {10.1145/3468264.3468537}, year = {2021}, } Publisher's Version |
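The balancing goal the Fair-SMOTE abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: Fair-SMOTE synthesizes new examples SMOTE-style, while this sketch simply duplicates existing ones, and the `sex`/`label` keys and data are hypothetical.

```python
import random
from collections import defaultdict

def rebalance(examples, sensitive_key, label_key, seed=0):
    """Oversample so every (sensitive attribute, label) group reaches the
    size of the largest group -- the balance Fair-SMOTE aims for. The real
    algorithm synthesizes new points instead of duplicating existing ones."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for ex in examples:
        groups[(ex[sensitive_key], ex[label_key])].append(ex)
    target = max(len(g) for g in groups.values())
    balanced = []
    for g in groups.values():
        balanced.extend(g)
        # Duplicate random members until the group reaches the target size.
        balanced.extend(rng.choice(g) for _ in range(target - len(g)))
    return balanced

# Skewed toy data: 6/2 positives/negatives for "m", 1/3 for "f".
data = ([{"sex": "m", "label": 1}] * 6 + [{"sex": "m", "label": 0}] * 2 +
        [{"sex": "f", "label": 1}] * 1 + [{"sex": "f", "label": 0}] * 3)
balanced = rebalance(data, "sex", "label")
```

After rebalancing, each of the four (sex, label) groups has the same size, which is the precondition the abstract states for fairness across the sensitive attribute.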
|
Chakraborty, Mohna |
ESEC/FSE '21: "Does Reusing Pre-trained NLP ..."
Does Reusing Pre-trained NLP Model Propagate Bugs?
Mohna Chakraborty (Iowa State University, USA) In this digital era, textual content has become a ubiquitous part of our lives. Natural Language Processing (NLP) empowers machines to comprehend the intricacies of textual data and eases human-computer interaction. Advancements in language modeling, continual learning, the availability of large amounts of linguistic data, and large-scale computational power have made it feasible to train models for downstream text-analysis tasks, including safety-critical ones (e.g., medical, airline). Compared to other deep learning (DL) models, NLP-based models are widely reused for various tasks. However, reusing a pre-trained model in a new setting is still a complex task due to limitations of the training dataset, model structure, specification, usage, etc. With this motivation, we study BERT, a widely used language model (LM), from the perspective of reuse in client code. We mined 80 posts from Stack Overflow related to BERT and found four types of bugs in clients’ code. Our results show that 13.75% are fairness-related, 28.75% parameter-related, 15% token-related, and 16.25% version-related bugs. @InProceedings{ESEC/FSE21p1686, author = {Mohna Chakraborty}, title = {Does Reusing Pre-trained NLP Model Propagate Bugs?}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1686--1688}, doi = {10.1145/3468264.3473494}, year = {2021}, } Publisher's Version |
|
Chandra, Satish |
ESEC/FSE '21: "Explaining Mispredictions ..."
Explaining Mispredictions of Machine Learning Models using Rule Induction
Jürgen Cito, Isil Dillig, Seohyun Kim, Vijayaraghavan Murali, and Satish Chandra (TU Vienna, Austria; Facebook, Austria; University of Texas at Austin, USA; Facebook, USA) While machine learning (ML) models play an increasingly prevalent role in many software engineering tasks, their prediction accuracy is often problematic. When these models do mispredict, it can be very difficult to isolate the cause. In this paper, we propose a technique that aims to facilitate the debugging process of trained statistical models. Given an ML model and a labeled data set, our method produces an interpretable characterization of the data on which the model performs particularly poorly. The output of our technique can be useful for understanding limitations of the training data or the model itself; it can also be useful for ensembling if there are multiple models with different strengths. We evaluate our approach through case studies and illustrate how it can be used to improve the accuracy of predictive models used for software engineering tasks within Facebook. @InProceedings{ESEC/FSE21p716, author = {Jürgen Cito and Isil Dillig and Seohyun Kim and Vijayaraghavan Murali and Satish Chandra}, title = {Explaining Mispredictions of Machine Learning Models using Rule Induction}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {716--727}, doi = {10.1145/3468264.3468614}, year = {2021}, } Publisher's Version |
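As a rough illustration of the idea, not the paper's actual algorithm: given rows and a flag marking which ones a model mispredicts, one can search for the single attribute=value predicate that best characterizes the mispredicted slice. Real rule induction, as in this work, learns conjunctive rules; all names and data below are hypothetical.

```python
def best_rule(rows, mispredicted):
    """Toy rule induction: return the attribute=value predicate whose covered
    rows have the highest misprediction rate (ties broken by coverage)."""
    candidates = sorted({(k, v) for row in rows for k, v in row.items()})
    def score(pred):
        k, v = pred
        covered = [m for row, m in zip(rows, mispredicted) if row[k] == v]
        return (sum(covered) / len(covered), len(covered))
    return max(candidates, key=score)

# Hypothetical data slices and which of them the model gets wrong.
rows = [{"lang": "py", "size": "big"}, {"lang": "py", "size": "small"},
        {"lang": "c", "size": "big"}, {"lang": "c", "size": "small"}]
flags = [True, True, False, False]
rule = best_rule(rows, flags)  # the slice where the model performs worst
```

Here both mispredicted rows share `lang == "py"`, so that predicate is the induced characterization of where the model performs poorly.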
|
Chang, Wanli |
ESEC/FSE '21: "Making Smart Contract Development ..."
Making Smart Contract Development More Secure and Easier
Meng Ren, Fuchen Ma, Zijing Yin, Ying Fu, Huizhong Li, Wanli Chang, and Yu Jiang (Tsinghua University, China; Ant Group, China; WeBank, China; University of York, UK) With the rapid development of distributed applications, smart contracts have attracted growing attention from developers. However, developers and domain experts have different levels of familiarity with specific programming languages, like Solidity, and vulnerabilities hidden in the code can be exploited, resulting in huge financial losses. Existing auxiliary tools lack security considerations. Most of them only provide word completion based on fuzzy search and detection services for limited types of vulnerabilities, which wastes developer effort during coding and leaves potential vulnerability threats after deployment. In this work, we propose an integrated framework that enhances security in the two stages of recommendation and validation, helping developers implement more secure contracts more quickly. First, we reinforce original smart contracts with general patch patterns and secure programming standards for training, and design a real-time code suggestion algorithm to predict secure words for selection. Then, we integrate multiple widely used testing tools to provide validation services. For evaluation, we collected 47,398 real-world contracts, and the results show that our framework outperforms existing platforms and tools, improving average word suggestion accuracy by 30%-60% and helping detect about 25%-61% more vulnerabilities. In most cases, our framework can correctly predict the next word within the top ten candidates with a probability of 82%-97%. Compared with professional vulnerability mining tools, it can find more vulnerabilities and provide targeted modification suggestions without tedious configuration. Currently, this framework is used as the official development tool of WeBank and is integrated as the recommended platform by the FISCO-BCOS community. 
@InProceedings{ESEC/FSE21p1360, author = {Meng Ren and Fuchen Ma and Zijing Yin and Ying Fu and Huizhong Li and Wanli Chang and Yu Jiang}, title = {Making Smart Contract Development More Secure and Easier}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1360--1370}, doi = {10.1145/3468264.3473929}, year = {2021}, } Publisher's Version |
|
Chattopadhyay, Souti |
ESEC/FSE '21: "Reel Life vs. Real Life: How ..."
Reel Life vs. Real Life: How Software Developers Share Their Daily Life through Vlogs
Souti Chattopadhyay, Thomas Zimmermann, and Denae Ford (Oregon State University, USA; Microsoft Research, USA) Software developers are turning to vlogs (video blogs) to share what a day is like to walk in their shoes. Through these vlogs developers share a rich perspective of their technical work as well as their personal lives. However, do the types of activities portrayed in vlogs differ from the activities developers in industry actually perform? Would developers at a software company prefer to show activities to different extents if they were asked to share their day through vlogs? To answer these questions, we analyzed 130 vlogs by software developers on YouTube and conducted a survey with 335 software developers at a large software company. We found that although vlogs present traditional development activities such as coding and code-peripheral activities (11%), they also prominently feature wellness- and lifestyle-related activities (47.3%) that have not been reflected in previous software engineering literature. We also found that developers at the software company were inclined to share more non-coding tasks (e.g., personal projects, time spent with family and friends, and health) when asked to create a mock-up vlog to promote diversity. These findings demonstrate a shift in our understanding of how software developers spend their time and what they find valuable to share publicly. We discuss how vlogs provide a more complete perspective of software development work and serve as a valuable source of data for empirical research. @InProceedings{ESEC/FSE21p404, author = {Souti Chattopadhyay and Thomas Zimmermann and Denae Ford}, title = {Reel Life vs. Real Life: How Software Developers Share Their Daily Life through Vlogs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {404--415}, doi = {10.1145/3468264.3468599}, year = {2021}, } Publisher's Version Info |
|
Chen, Chunyang |
ESEC/FSE '21: "OwlEyes-Online: A Fully Automated ..."
OwlEyes-Online: A Fully Automated Platform for Detecting and Localizing UI Display Issues
Yuhui Su, Zhe Liu, Chunyang Chen, Junjie Wang, and Qing Wang (Institute of Software at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Monash University, Australia) Graphical User Interfaces (GUIs) provide a visual bridge between software apps and end users. However, due to software or hardware compatibility issues, UI display issues such as text overlap, blurred screens, and missing images often occur during GUI rendering on different devices. Because these UI display issues can be spotted directly by the human eye, in this paper we implement an online UI display issue detection tool, OwlEyes-Online, which provides a simple and easy-to-use platform for users to automatically detect and localize UI display issues. OwlEyes-Online can automatically run an app, collect its screenshots and XML files, and detect issues by analyzing the screenshots. In addition, OwlEyes-Online can localize the issue area in the given screenshots to further guide developers. Finally, OwlEyes-Online automatically generates test reports with the UI display issues detected in app screenshots and sends them to users. OwlEyes-Online was evaluated and shown to accurately detect UI display issues. Tool Link: http://www.owleyes.online:7476 Github Link: https://github.com/franklinbill/owleyes Demo Video Link: https://youtu.be/002nHZBxtCY @InProceedings{ESEC/FSE21p1500, author = {Yuhui Su and Zhe Liu and Chunyang Chen and Junjie Wang and Qing Wang}, title = {OwlEyes-Online: A Fully Automated Platform for Detecting and Localizing UI Display Issues}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1500--1504}, doi = {10.1145/3468264.3473109}, year = {2021}, } Publisher's Version Video Info |
|
Chen, Feifei |
ESEC/FSE '21: "Embedding App-Library Graph ..."
Embedding App-Library Graph for Neural Third Party Library Recommendation
Bo Li, Qiang He, Feifei Chen, Xin Xia, Li Li, John Grundy, and Yun Yang (Swinburne University of Technology, Australia; Deakin University, Australia; Monash University, Australia) The mobile app marketplace is fiercely competitive, and mobile app developers need to develop and update their apps as quickly as possible to gain a first-mover advantage. Third-party libraries (TPLs) offer developers an easier way to enhance their apps with new features. However, finding suitable candidates among the large number of fast-changing TPLs is a challenging problem. TPL recommendation is a promising solution, but unfortunately existing approaches suffer from low accuracy in their recommendation results. To tackle this challenge, we propose GRec, a graph neural network (GNN) based approach for recommending potentially useful TPLs for app development. GRec models mobile apps, TPLs, and their interactions as an app-library graph. It then distills app-library interaction information from the app-library graph to make more accurate TPL recommendations. To evaluate GRec's performance, we conduct comprehensive experiments on a large-scale real-world Android app dataset containing 31,432 Android apps, 752 distinct TPLs, and 537,011 app-library usage records. Our experimental results show that GRec can significantly increase prediction accuracy and diversify the prediction results compared with state-of-the-art methods. A user study performed with app developers also confirms GRec's usefulness for real-world mobile app development. @InProceedings{ESEC/FSE21p466, author = {Bo Li and Qiang He and Feifei Chen and Xin Xia and Li Li and John Grundy and Yun Yang}, title = {Embedding App-Library Graph for Neural Third Party Library Recommendation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {466--477}, doi = {10.1145/3468264.3468552}, year = {2021}, } Publisher's Version |
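To make the app-library graph idea concrete with something far simpler than GRec's GNN: a neighborhood baseline that recommends TPLs co-used by apps sharing a library with the target app. This is only a sketch of the data structure, not GRec itself, and the app and library names are hypothetical.

```python
from collections import Counter

def recommend(usage, app, k=2):
    """Recommend TPLs for `app` from an app-library usage graph by counting
    libraries used by apps that share at least one library with `app`.
    GRec instead learns node embeddings over this graph with a GNN."""
    mine = usage[app]
    scores = Counter()
    for other, libs in usage.items():
        if other != app and libs & mine:       # neighbor via a shared TPL
            scores.update(libs - mine)         # score its other TPLs
    return [lib for lib, _ in scores.most_common(k)]

# Hypothetical app-library usage records (the graph's edges).
usage = {
    "app1": {"okhttp", "gson"},
    "app2": {"okhttp", "gson", "glide"},
    "app3": {"okhttp", "glide", "retrofit"},
    "app4": {"dagger"},
}
recs = recommend(usage, "app1")
```

"app1" shares libraries with "app2" and "app3", so their other TPLs are scored; "glide" is co-used twice and ranks first.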
|
Chen, Guanhong |
ESEC/FSE '21: "A Comprehensive Study on Learning-Based ..."
A Comprehensive Study on Learning-Based PE Malware Family Classification Methods
Yixuan Ma, Shuang Liu, Jiajun Jiang, Guanhong Chen, and Keqiu Li (State Key Laboratory of Communication Content Cognition, China; Tianjin University, China) Driven by high profits, Portable Executable (PE) malware has been consistently evolving in terms of both volume and sophistication. PE malware family classification has gained great attention and a large number of approaches have been proposed. With the rapid development of machine learning techniques and the exciting results they achieve on various tasks, machine learning algorithms have also gained popularity in the PE malware family classification task. Categorized by the input format the methods take, the three mainstream classes of learning-based approaches are image-based, binary-based, and disassembly-based. Although a large number of approaches have been published, there are no consistent comparisons of those approaches, especially from the practical industry adoption perspective. Moreover, there is no comparison in the scenario of concept drift, which is a reality for the malware classification task due to the fast-evolving nature of malware. In this work, we conduct a thorough empirical study of learning-based PE malware classification approaches on four different datasets under consistent experiment settings. Based on the experiment results and an interview with our industry partners, we find that (1) no individual class of methods significantly outperforms the others; (2) all classes of methods show performance degradation under concept drift (an average F1-score drop of 32.23%); and (3) prediction time and high memory consumption hinder existing approaches from being adopted for industry usage. 
@InProceedings{ESEC/FSE21p1314, author = {Yixuan Ma and Shuang Liu and Jiajun Jiang and Guanhong Chen and Keqiu Li}, title = {A Comprehensive Study on Learning-Based PE Malware Family Classification Methods}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1314--1325}, doi = {10.1145/3468264.3473925}, year = {2021}, } Publisher's Version Info |
|
Chen, Jiaqi |
ESEC/FSE '21: "iBatch: Saving Ethereum Fees ..."
iBatch: Saving Ethereum Fees via Secure and Cost-Effective Batching of Smart-Contract Invocations
Yibo Wang, Qi Zhang, Kai Li, Yuzhe Tang, Jiaqi Chen, Xiapu Luo, and Ting Chen (Syracuse University, USA; Hong Kong Polytechnic University, China; University of Electronic Science and Technology of China, China) This paper presents iBatch, a middleware system running on top of an operational Ethereum network to enable secure batching of smart-contract invocations against an untrusted relay server off-chain. iBatch does so at a low overhead by validating the server's batched invocations in smart contracts without additional states. The iBatch mechanism supports a variety of policies, ranging from conservative to aggressive batching, and can be configured adaptively to the current workloads. iBatch automatically rewrites smart contracts to integrate with legacy applications and support large-scale deployment. For cost evaluation, we develop a platform with fast and cost-accurate transaction replaying, build real transaction benchmarks on popular Ethereum applications, and build a functional prototype of iBatch on Ethereum. The evaluation results show that iBatch saves 14.6%-59.1% Gas cost per invocation with a moderate 2-minute delay and 19.06%-31.52% Ether cost per invocation with a delay of 0.26-1.66 blocks. @InProceedings{ESEC/FSE21p566, author = {Yibo Wang and Qi Zhang and Kai Li and Yuzhe Tang and Jiaqi Chen and Xiapu Luo and Ting Chen}, title = {iBatch: Saving Ethereum Fees via Secure and Cost-Effective Batching of Smart-Contract Invocations}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {566--577}, doi = {10.1145/3468264.3468568}, year = {2021}, } Publisher's Version |
|
Chen, Junjie |
ESEC/FSE '21: "Exposing Numerical Bugs in ..."
Exposing Numerical Bugs in Deep Learning via Gradient Back-Propagation
Ming Yan, Junjie Chen, Xiangyu Zhang, Lin Tan, Gan Wang, and Zan Wang (Tianjin University, China; Purdue University, USA) Numerical computation is dominant in deep learning (DL) programs. Consequently, numerical bugs are one of the most prominent kinds of defects in DL programs. Numerical bugs can lead to exceptional values such as NaN (Not-a-Number) and INF (Infinite), which can be propagated and eventually cause crashes or invalid outputs. They occur when special inputs cause invalid parameter values at internal mathematical operations such as log(). In this paper, we propose the first dynamic technique, called GRIST, which automatically generates a small input that can expose numerical bugs in DL programs. GRIST piggy-backs on the built-in gradient computation functionalities of DL infrastructures. Our evaluation on 63 real-world DL programs shows that GRIST detects 78 bugs including 56 unknown bugs. After we submitted them to the corresponding issue repositories, eight bugs were confirmed and three were fixed. Moreover, GRIST saves 8.79X execution time in exposing numerical bugs compared to running the original programs with their provided inputs. Compared to the state-of-the-art technique DEBAR (which is a static technique), DEBAR produces 12 false positives and misses 31 true bugs (of which 30 can be found by GRIST), while GRIST misses only one known bug in those programs and produces no false positives. The results demonstrate the effectiveness of GRIST. @InProceedings{ESEC/FSE21p627, author = {Ming Yan and Junjie Chen and Xiangyu Zhang and Lin Tan and Gan Wang and Zan Wang}, title = {Exposing Numerical Bugs in Deep Learning via Gradient Back-Propagation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {627--638}, doi = {10.1145/3468264.3468612}, year = {2021}, } Publisher's Version ESEC/FSE '21: "A Comprehensive Study of Deep ..." 
A Comprehensive Study of Deep Learning Compiler Bugs Qingchao Shen, Haoyang Ma, Junjie Chen, Yongqiang Tian, Shing-Chi Cheung, and Xiang Chen (Tianjin University, China; University of Waterloo, Canada; Hong Kong University of Science and Technology, China; Nantong University, China) There are increasing uses of deep learning (DL) compilers to generate optimized code, boosting the runtime performance of DL models on specific hardware. Like their traditional counterparts, DL compilers can generate incorrect code, resulting in unexpected model behaviors that may cause catastrophic consequences in mission-critical systems. On the other hand, the DL models processed by DL compilers differ fundamentally from imperative programs in that the program logic in DL models is implicit. As such, various characteristics of the bugs arising from traditional compilers need to be revisited in the context of DL compilers. In this paper, we present the first systematic study of DL compiler bugs by analyzing 603 bugs arising in three popular DL compilers (i.e., TVM from Apache, Glow from Facebook, and nGraph from Intel). We analyzed these bugs according to their root causes, symptoms, and the stages where they occur during compilation. We obtain 12 findings, and provide a series of valuable guidelines for future work on DL compiler bug detection and debugging. For example, a large portion (nearly 20%) of DL compiler bugs are related to types, especially tensor types. The analysis of these bugs helps design new mutation operators (e.g., adding type cast for a tensor to promote implicit type conversion in subsequent tensor computations) to facilitate type-related bug detection. Further, we developed TVMfuzz as a proof-of-concept application of our findings to test the TVM DL compiler. It generates new tests based on TVM's original test suite. They expose 8 TVM bugs that are missed by the original test suite. The result demonstrates the usefulness of our findings. 
@InProceedings{ESEC/FSE21p968, author = {Qingchao Shen and Haoyang Ma and Junjie Chen and Yongqiang Tian and Shing-Chi Cheung and Xiang Chen}, title = {A Comprehensive Study of Deep Learning Compiler Bugs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {968--980}, doi = {10.1145/3468264.3468591}, year = {2021}, } Publisher's Version ESEC/FSE '21: "Identifying Bad Software Changes ..." Identifying Bad Software Changes via Multimodal Anomaly Detection for Online Service Systems Nengwen Zhao, Junjie Chen, Zhaoyang Yu, Honglin Wang, Jiesong Li, Bin Qiu, Hongyu Xu, Wenchi Zhang, Kaixin Sui, and Dan Pei (Tsinghua University, China; Tianjin University, China; BizSeer, China; China Guangfa Bank, China) In large-scale online service systems, software changes are inevitable and frequent. Because they introduce new code or configurations, changes are likely to incur incidents and degrade user experience. Thus it is essential for engineers to identify bad software changes, so as to reduce the influence of incidents and improve system reliability. To better understand bad software changes, we perform the first empirical study based on large-scale real-world data from a large commercial bank. Our quantitative analyses indicate that about 50.4% of incidents are caused by bad changes, mainly because of code defects, configuration errors, resource contention, and software versions. Besides, our qualitative analyses show that the current practice of detecting bad software changes does not handle well the heterogeneous multi-source data involved in software changes. Based on the findings and motivation obtained from the empirical study, we propose a novel approach named SCWarn that aims to identify bad changes and produce interpretable alerts accurately and in a timely manner. The key idea of SCWarn is to draw support from multimodal learning to identify anomalies in heterogeneous multi-source data. 
An extensive study on two datasets with various bad software changes demonstrates that our approach significantly outperforms all the compared approaches, achieving a 0.95 F1-score on average and reducing MTTD (mean time to detect) by 20.4%∼60.7%. In particular, we share some success stories and lessons learned from practical usage. @InProceedings{ESEC/FSE21p527, author = {Nengwen Zhao and Junjie Chen and Zhaoyang Yu and Honglin Wang and Jiesong Li and Bin Qiu and Hongyu Xu and Wenchi Zhang and Kaixin Sui and Dan Pei}, title = {Identifying Bad Software Changes via Multimodal Anomaly Detection for Online Service Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {527--539}, doi = {10.1145/3468264.3468543}, year = {2021}, } Publisher's Version ESEC/FSE '21: "Probabilistic Delta Debugging ..." Probabilistic Delta Debugging Guancheng Wang, Ruobing Shen, Junjie Chen, Yingfei Xiong, and Lu Zhang (Peking University, China; Tianjin University, China) The delta debugging problem concerns how to reduce an object while preserving a certain property, and arises in many applications, such as compiler development, regression fault localization, and software debloating. Given the importance of delta debugging, multiple algorithms have been proposed to solve the delta debugging problem efficiently and effectively. However, the efficiency and effectiveness of the state-of-the-art algorithms are still not satisfactory. For example, the state-of-the-art delta debugging tool, CHISEL, may take up to 3 hours to reduce a single program with 14,092 lines of code, while the reduced program may be up to twice as large as necessary. In this paper, we propose a probabilistic delta debugging algorithm (named ProbDD) to improve the efficiency and effectiveness of delta debugging. 
Our key insight is that the ddmin algorithm, the basic algorithm upon which many existing approaches are built, follows a predefined sequence of attempts to remove elements from a sequence, and fails to utilize the information from existing test results. To address this problem, ProbDD builds a probabilistic model to estimate the probabilities of the elements to be kept in the produced result, selects a set of elements to maximize the gain of the next test based on the model, and improves the model based on the test results. We prove the correctness of ProbDD, and analyze the minimality of its result and the asymptotic number of tests in the worst case. ProbDD's worst-case asymptotic number of tests is O(n), which is smaller than ddmin's O(n²). Furthermore, we experimentally compared ProbDD with ddmin on 40 subjects in HDD and CHISEL, two approaches that wrap ddmin for reducing trees and C programs, respectively. The results show that, after replacing ddmin with ProbDD, HDD and CHISEL produce 59.48% and 11.51% smaller results and use 63.22% and 45.27% less time, respectively. @InProceedings{ESEC/FSE21p881, author = {Guancheng Wang and Ruobing Shen and Junjie Chen and Yingfei Xiong and Lu Zhang}, title = {Probabilistic Delta Debugging}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {881--892}, doi = {10.1145/3468264.3468625}, year = {2021}, } Publisher's Version Info ESEC/FSE '21: "LS-Sampling: An Effective ..." 
LS-Sampling: An Effective Local Search Based Sampling Approach for Achieving High t-wise Coverage Chuan Luo, Binqi Sun, Bo Qiao, Junjie Chen, Hongyu Zhang, Jinkun Lin, Qingwei Lin, and Dongmei Zhang (Microsoft Research, China; Tianjin University, China; University of Newcastle, Australia; Institute of Software at Chinese Academy of Sciences, China) There has been a rapidly increasing demand for developing highly configurable software systems, which urgently calls for effective testing methods. In practice, t-wise coverage has been widely recognized as a useful metric to evaluate the quality of a test suite for testing highly configurable software systems, and achieving high t-wise coverage is important for ensuring test adequacy. However, state-of-the-art methods usually take a fairly long time to generate large test suites for high pairwise coverage (i.e., 2-wise coverage), which leads to ineffective and inefficient testing of highly configurable software systems. In this paper, we propose a novel local search based sampling approach dubbed LS-Sampling for achieving high t-wise coverage. Extensive experiments on a large number of public benchmarks, which are collected from real-world, highly configurable software systems, show that LS-Sampling achieves higher 2-wise and 3-wise coverage than the current state of the art. LS-Sampling is effective, since on average it achieves 2-wise coverage of 99.64% and 3-wise coverage of 97.87% with a small test suite of only 100 test cases (90% smaller than the test suites generated by its state-of-the-art competitors). Furthermore, LS-Sampling is efficient, since it only requires an average execution time of less than one minute to generate a test suite with high 2-wise and 3-wise coverage. 
@InProceedings{ESEC/FSE21p1081, author = {Chuan Luo and Binqi Sun and Bo Qiao and Junjie Chen and Hongyu Zhang and Jinkun Lin and Qingwei Lin and Dongmei Zhang}, title = {LS-Sampling: An Effective Local Search Based Sampling Approach for Achieving High t-wise Coverage}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1081--1092}, doi = {10.1145/3468264.3468622}, year = {2021}, } Publisher's Version |
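The 2-wise (pairwise) coverage metric that LS-Sampling optimizes can be sketched as follows. This sketch assumes unconstrained options; real configurable systems have inter-option constraints that exclude some value pairs, which LS-Sampling must respect.

```python
from itertools import combinations

def pairwise_coverage(test_suite, domains):
    """Fraction of all (option=value, option=value) pairs covered by at
    least one test case -- the 2-wise coverage metric, ignoring constraints."""
    options = sorted(domains)
    # Every value pair for every pair of distinct options.
    all_pairs = {((a, va), (b, vb))
                 for a, b in combinations(options, 2)
                 for va in domains[a] for vb in domains[b]}
    # Pairs actually exercised by the suite.
    covered = {((a, t[a]), (b, t[b]))
               for t in test_suite for a, b in combinations(options, 2)}
    return len(covered & all_pairs) / len(all_pairs)

# Three binary options give 3 option pairs x 4 value combos = 12 pairs.
domains = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}
suite = [{"x": 0, "y": 0, "z": 0},
         {"x": 1, "y": 1, "z": 1},
         {"x": 0, "y": 1, "z": 0}]
cov = pairwise_coverage(suite, domains)
```

The three test cases above cover 8 of the 12 possible pairs, so the suite's 2-wise coverage is 2/3; sampling approaches like LS-Sampling try to drive this ratio toward 1 with as few test cases as possible.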
|
Chen, Ke |
ESEC/FSE '21: "An Empirical Study of GUI ..."
An Empirical Study of GUI Widget Detection for Industrial Mobile Games
Jiaming Ye, Ke Chen, Xiaofei Xie, Lei Ma, Ruochen Huang, Yingfeng Chen, Yinxing Xue, and Jianjun Zhao (Kyushu University, Japan; Netease, China; University of Alberta, Canada; University of Science and Technology of China, China) With the widespread adoption of smartphones in our daily life, mobile games have experienced increasing demand over the past years. Meanwhile, the quality of mobile games has been continuously drawing more and more attention, as it can greatly affect the player experience. For better quality assurance, general-purpose testing has been extensively studied for mobile apps. However, due to the unique characteristics of mobile games, existing mobile testing techniques may not be directly suitable and applicable. To better understand the challenges in mobile game testing, in this paper, we first conduct an empirical study towards understanding the challenges and pain points of the mobile game testing process at our industrial partner NetEase Games. Specifically, we first conduct a survey of the mobile test development team at NetEase Games via both scrum interviews and questionnaires. We found that accurate and effective GUI widget detection for mobile games could be the pillar to boost the automation of mobile game testing and other downstream analysis tasks in practice. We then perform comparative studies to investigate the effectiveness of state-of-the-art general-purpose mobile app GUI widget detection methods in the context of mobile games. To this end, we also develop a technique to automatically collect GUI widget region information of industrial mobile games, equipped with a heuristic-based data cleaning method for quality refinement of the labeling results. Our evaluation shows that: (1) Existing GUI widget detection methods for general-purpose mobile apps cannot perform well on industrial mobile games. 
(2) Mobile games exhibit clear differences from other general-purpose mobile apps in terms of GUI widgets. Our further in-depth analysis reveals that the high diversity and density of mobile game GUI widgets are likely the major reasons posing challenges for existing methods, which calls for new research methods and better industry practices. To enable further research along this line, we construct the very first GUI widget detection benchmark, specially designed for mobile games, incorporating both our collected dataset and the state-of-the-art widget detection methods for mobile apps, which could also be the basis for further study of many downstream quality assurance tasks (e.g., testing and analysis) for mobile games. @InProceedings{ESEC/FSE21p1427, author = {Jiaming Ye and Ke Chen and Xiaofei Xie and Lei Ma and Ruochen Huang and Yingfeng Chen and Yinxing Xue and Jianjun Zhao}, title = {An Empirical Study of GUI Widget Detection for Industrial Mobile Games}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1427--1437}, doi = {10.1145/3468264.3473935}, year = {2021}, } Publisher's Version ESEC/FSE '21: "GLIB: Towards Automated Test ..." GLIB: Towards Automated Test Oracle for Graphically-Rich Applications Ke Chen, Yufei Li, Yingfeng Chen, Changjie Fan, Zhipeng Hu, and Wei Yang (Netease, China; University of Texas at Dallas, USA) Graphically-rich applications such as games are ubiquitous, with attractive Graphical User Interface (GUI) visual effects that offer a bridge between software applications and end users. However, various types of graphical glitches may arise from such GUI complexity and have become one of the main components of software compatibility issues. Our study of bug reports from game development teams at NetEase Inc. indicates that graphical glitches frequently occur during GUI rendering and severely degrade the quality of graphically-rich applications such as video games. 
Existing automated testing techniques for such applications focus mainly on generating various GUI test sequences and checking whether the test sequences can cause crashes. These techniques require constant human attention to capture non-crashing bugs such as bugs causing graphical glitches. In this paper, we present the first step in automating the test oracle for detecting non-crashing bugs in graphically-rich applications. Specifically, we propose GLIB, based on a code-based data augmentation technique, to detect game GUI glitches. We perform an evaluation of GLIB on 20 real-world game apps (with bug reports available) and the results show that GLIB can achieve 100% precision and 99.5% recall in detecting non-crashing bugs such as game GUI glitches. Practical application of GLIB on another 14 real-world games (without bug reports) further demonstrates that GLIB can effectively uncover GUI glitches, with 48 of 53 bugs reported by GLIB having been confirmed and fixed so far. @InProceedings{ESEC/FSE21p1093, author = {Ke Chen and Yufei Li and Yingfeng Chen and Changjie Fan and Zhipeng Hu and Wei Yang}, title = {GLIB: Towards Automated Test Oracle for Graphically-Rich Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1093--1104}, doi = {10.1145/3468264.3468586}, year = {2021}, } Publisher's Version |
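The widget-detection study above mentions a heuristic-based data cleaning method for refining labeled widget regions, without spelling it out. One plausible ingredient of such cleaning is filtering near-duplicate widget boxes by intersection-over-union; the sketch below is an illustrative assumption (`iou`, `dedup_widgets`, and the 0.8 threshold are not the paper's method):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def dedup_widgets(boxes, threshold=0.8):
    """Keep a labeled widget box only if it does not heavily overlap one already kept."""
    kept = []
    for box in boxes:
        if all(iou(box, k) < threshold for k in kept):
            kept.append(box)
    return kept
```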
|
Chen, Songqiang |
ESEC/FSE '21: "Validation on Machine Reading ..."
Validation on Machine Reading Comprehension Software without Annotated Labels: A Property-Based Method
Songqiang Chen, Shuo Jin, and Xiaoyuan Xie (Wuhan University, China) Machine Reading Comprehension (MRC) in Natural Language Processing has seen great progress recently. However, almost all current MRC software is validated with a reference-based method, which requires well-annotated labels for test cases and tests the software by checking the consistency between the labels and the outputs. Labeling test cases for MRC can be very costly due to their complexity, which makes reference-based validation hard to extend and often insufficient. Furthermore, solely checking the consistency and measuring the overall score may not be a sensible or flexible way to assess language understanding capability. In this paper, we propose a property-based validation method for MRC software with Metamorphic Testing to supplement reference-based validation. It does not refer to the labels and hence makes much more data available for testing. Besides, it validates MRC software against various linguistic properties to give a specific and in-depth picture of the linguistic capabilities of MRC software. Comprehensive experimental results show that our method can successfully reveal violations of the target linguistic properties without the labels. Moreover, it can reveal problems that have been concealed by traditional validation. Comparison according to the properties provides deeper and more concrete insights into the different language understanding capabilities of MRC software. @InProceedings{ESEC/FSE21p590, author = {Songqiang Chen and Shuo Jin and Xiaoyuan Xie}, title = {Validation on Machine Reading Comprehension Software without Annotated Labels: A Property-Based Method}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {590--602}, doi = {10.1145/3468264.3468569}, year = {2021}, } Publisher's Version |
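The label-free idea above can be illustrated with one metamorphic relation: appending an irrelevant sentence to the passage should not change the model's answer, which is checkable with no annotated label at all. A minimal sketch with a toy extractive "model" (both `check_distractor_mr` and `toy_model` are illustrative assumptions, not the paper's linguistic properties):

```python
def check_distractor_mr(model, passage, question, distractor):
    """MR: an irrelevant sentence appended to the passage should leave
    the answer unchanged -- no reference label is consulted."""
    return model(passage, question) == model(passage + " " + distractor, question)

def toy_model(passage, question):
    """Toy extractive reader: returns the word following the first 'is'."""
    words = passage.replace(".", "").split()
    return words[words.index("is") + 1] if "is" in words else None
```

A real harness would apply such relations to many unlabeled passages and count violations per property, giving the capability-level picture the abstract describes.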
|
Chen, Tao |
ESEC/FSE '21: "Multi-objectivizing Software ..."
Multi-objectivizing Software Configuration Tuning
Tao Chen and Miqing Li (University of Electronic Science and Technology of China, China; Loughborough University, UK; University of Birmingham, UK) Automatically tuning software configuration to optimize a single performance attribute (e.g., minimizing latency) is not trivial, due to the nature of configurable systems (e.g., complex landscape and expensive measurement). To deal with the problem, existing work has focused on developing various effective optimizers. However, a prominent issue that all these optimizers need to take care of is how to avoid the search being trapped in local optima — a hard nut to crack for software configuration tuning due to its rugged and sparse landscape, in which neighboring configurations tend to behave very differently. Overcoming this in an expensive measurement setting is even more challenging. In this paper, we take a different perspective to tackle the issue. Instead of focusing on improving the optimizer, we work at the level of the optimization model. We do this by proposing a meta multi-objectivization model (MMO) that considers an auxiliary performance objective (e.g., throughput in addition to latency). What makes this model unique is that we do not optimize the auxiliary performance objective, but rather use it to make similarly-performing yet different configurations less comparable (i.e., Pareto-nondominated to each other), thus preventing the search from being trapped in local optima. Experiments on eight real-world software systems/environments with diverse performance attributes reveal that our MMO model is statistically more effective than state-of-the-art single-objective counterparts in overcoming local optima (up to 42% gain), while using as little as 24% of their measurements to achieve the same (or better) performance result. 
@InProceedings{ESEC/FSE21p453, author = {Tao Chen and Miqing Li}, title = {Multi-objectivizing Software Configuration Tuning}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {453--465}, doi = {10.1145/3468264.3468555}, year = {2021}, } Publisher's Version Info |
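The core of the MMO idea above is the Pareto-dominance test: a configuration only dominates another if it is no worse on both the target and the auxiliary objective, so similarly-performing but different configurations remain mutually nondominated and the search keeps them alive. A minimal sketch of that comparison over (target, auxiliary) pairs, both minimized (an illustration, not the authors' implementation):

```python
def dominates(a, b):
    """a Pareto-dominates b iff a is no worse in every objective and
    strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(configs):
    """Configurations kept incomparable under the multi-objectivized model."""
    return [p for p in configs if not any(dominates(q, p) for q in configs)]
```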
|
Chen, Taolue |
ESEC/FSE '21: "BRAID: An API Recommender ..."
BRAID: An API Recommender Supporting Implicit User Feedback
Yu Zhou, Haonan Jin, Xinying Yang, Taolue Chen, Krishna Narasimhan, and Harald C. Gall (Nanjing University of Aeronautics and Astronautics, China; University of London, UK; TU Darmstadt, Germany; University of Zurich, Switzerland) Efficient application programming interface (API) recommendation is one of the most desired features of modern integrated development environments. A multitude of API recommendation approaches have been proposed. However, most of the currently available API recommenders do not support the effective integration of user feedback into the recommendation loop. In this paper, we present BRAID (Boosting RecommendAtion with Implicit FeeDback), a tool which leverages user feedback, and employs learning-to-rank and active learning techniques to boost recommendation performance. The implementation is based on the VSCode plugin architecture, which provides an integrated user interface. Essentially, BRAID is a general framework which can accommodate existing query-based API recommendation approaches as components. Comparative experiments with strong baselines demonstrate the efficacy of the tool. A video demonstrating the usage of BRAID can be found at https://youtu.be/naD0guvl8sE. @InProceedings{ESEC/FSE21p1510, author = {Yu Zhou and Haonan Jin and Xinying Yang and Taolue Chen and Krishna Narasimhan and Harald C. Gall}, title = {BRAID: An API Recommender Supporting Implicit User Feedback}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1510--1514}, doi = {10.1145/3468264.3473111}, year = {2021}, } Publisher's Version |
|
Chen, Tianyi |
ESEC/FSE '21: "Boosting Static Analysis Accuracy ..."
Boosting Static Analysis Accuracy with Instrumented Test Executions
Tianyi Chen, Kihong Heo, and Mukund Raghothaman (University of Southern California, USA; KAIST, South Korea) The two broad approaches to discovering properties of programs---static and dynamic analyses---have complementary strengths: static techniques perform exhaustive exploration and prove upper bounds on program behaviors, while the dynamic analysis of test cases provides concrete evidence of these behaviors and promises low false alarm rates. In this paper, we present DynaBoost, a system which uses information obtained from test executions to prioritize the alarms of a static analyzer. We instrument the program to dynamically look for dataflow behaviors predicted by the static analyzer, and use these results to bootstrap a probabilistic alarm ranking system, where the user repeatedly inspects the alarm judged most likely to be a real bug, and where the system re-ranks the remaining alarms in response to user feedback. The combined system is able to exploit information that cannot be easily provided by users, and significantly reduces the human alarm inspection burden: by 35% compared to the baseline ranking system, and by 89% compared to an unaided programmer triaging alarm reports. @InProceedings{ESEC/FSE21p1154, author = {Tianyi Chen and Kihong Heo and Mukund Raghothaman}, title = {Boosting Static Analysis Accuracy with Instrumented Test Executions}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1154--1165}, doi = {10.1145/3468264.3468626}, year = {2021}, } Publisher's Version Artifacts Functional |
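The interactive loop described above can be approximated in a few lines: surface the top-scored alarm and, when the user marks it a false alarm, damp the scores of alarms correlated with it. A minimal sketch (the flat scoring and the damping factor are assumptions, not DynaBoost's probabilistic model):

```python
def inspect_loop(priors, correlated, is_bug, damp=0.5):
    """Repeatedly surface the highest-scored alarm; when the user marks it
    a false alarm, damp the scores of alarms in its correlation group."""
    scores = dict(priors)      # alarm -> score (e.g., bootstrapped from test runs)
    order = []
    while scores:
        top = max(scores, key=scores.get)
        order.append(top)
        group = correlated.get(top, set())
        del scores[top]
        if not is_bug(top):    # user feedback on the inspected alarm
            for alarm in group & scores.keys():
                scores[alarm] *= damp
    return order
```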
|
Chen, Ting |
ESEC/FSE '21: "iBatch: Saving Ethereum Fees ..."
iBatch: Saving Ethereum Fees via Secure and Cost-Effective Batching of Smart-Contract Invocations
Yibo Wang, Qi Zhang, Kai Li, Yuzhe Tang, Jiaqi Chen, Xiapu Luo, and Ting Chen (Syracuse University, USA; Hong Kong Polytechnic University, China; University of Electronic Science and Technology of China, China) This paper presents iBatch, a middleware system running on top of an operational Ethereum network to enable secure batching of smart-contract invocations against an untrusted relay server off-chain. iBatch does so at a low overhead by validating the server's batched invocations in smart contracts without additional states. The iBatch mechanism supports a variety of policies, ranging from conservative to aggressive batching, and can be configured adaptively to the current workloads. iBatch automatically rewrites smart contracts to integrate with legacy applications and support large-scale deployment. For cost evaluation, we develop a platform with fast and cost-accurate transaction replaying, build real transaction benchmarks on popular Ethereum applications, and build a functional prototype of iBatch on Ethereum. The evaluation results show that iBatch saves 14.6%-59.1% Gas cost per invocation with a moderate 2-minute delay and 19.06%-31.52% Ether cost per invocation with a delay of 0.26-1.66 blocks. @InProceedings{ESEC/FSE21p566, author = {Yibo Wang and Qi Zhang and Kai Li and Yuzhe Tang and Jiaqi Chen and Xiapu Luo and Ting Chen}, title = {iBatch: Saving Ethereum Fees via Secure and Cost-Effective Batching of Smart-Contract Invocations}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {566--577}, doi = {10.1145/3468264.3468568}, year = {2021}, } Publisher's Version |
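The range of batching policies mentioned above, from conservative to aggressive, can be pictured as a size-or-timeout flush rule over pending invocations. A minimal sketch with an explicit clock (the class and its parameters are illustrative, not iBatch's middleware or smart-contract logic):

```python
class InvocationBatcher:
    """Collect smart-contract invocations and flush them as one batch
    (i.e., one on-chain transaction) when either the size limit or the
    waiting window is reached."""
    def __init__(self, max_size=4, max_wait=120):
        self.max_size, self.max_wait = max_size, max_wait
        self.pending, self.first_ts = [], None
        self.batches = []

    def submit(self, call, now):
        if not self.pending:
            self.first_ts = now
        self.pending.append(call)
        if len(self.pending) >= self.max_size or now - self.first_ts >= self.max_wait:
            self.batches.append(self.pending)
            self.pending, self.first_ts = [], None
```

A small `max_wait` mimics the conservative policy (little added delay), while a larger window mimics the aggressive one (more invocations amortizing the per-transaction fee).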
|
Chen, Tse-Hsun (Peter) |
ESEC/FSE '21: "How Disabled Tests Manifest ..."
How Disabled Tests Manifest in Test Maintainability Challenges?
Dong Jae Kim, Bo Yang, Jinqiu Yang, and Tse-Hsun (Peter) Chen (Concordia University, Canada) Software testing is an essential software quality assurance practice. Testing helps expose faults earlier, allowing developers to repair the code and reduce future maintenance costs. However, repairing (i.e., making failing tests pass) may not always be done immediately. Bugs may require multiple rounds of repairs and even remain unfixed due to the difficulty of bug-fixing tasks. To help test maintenance, along with code comments, the majority of testing frameworks (e.g., JUnit and TestNG) have also introduced annotations such as @Ignore to disable failing tests temporarily. Although disabling tests may help alleviate maintenance difficulties, it may also introduce technical debt. With the faster release of applications in modern software development, disabling tests may become the salvation for many developers to meet project deliverables. In the end, disabled tests may become outdated and a source of technical debt, harming long-term maintenance. Despite these harmful implications, there is little empirical research evidence on the prevalence, evolution, and maintenance of disabled tests in practice. To fill this gap, we perform the first empirical study on test disabling practice. We develop a tool to mine 122K commits and detect 3,111 changes that disable tests from 15 open-source Java systems. Our main findings are: (1) Test disabling changes are 19% more common than regular test refactorings, such as renames and type changes. (2) Our life-cycle analysis shows that 41% of disabled tests are never brought back to evaluate software quality, and most disabled tests stay disabled for several years. (3) We unveil the motivations behind test disabling practice and the associated technical debt by manually studying the evolution of 349 unique disabled tests, achieving a 95% confidence level and a 5% confidence interval. 
Finally, we present some actionable implications for researchers and developers. @InProceedings{ESEC/FSE21p1045, author = {Dong Jae Kim and Bo Yang and Jinqiu Yang and Tse-Hsun (Peter) Chen}, title = {How Disabled Tests Manifest in Test Maintainability Challenges?}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1045--1055}, doi = {10.1145/3468264.3468609}, year = {2021}, } Publisher's Version ESEC/FSE '21: "Would You Like a Quick Peek? ..." Would You Like a Quick Peek? Providing Logging Support to Monitor Data Processing in Big Data Applications Zehao Wang, Haoxiang Zhang, Tse-Hsun (Peter) Chen, and Shaowei Wang (Concordia University, Canada; Huawei, Canada; University of Manitoba, Canada) To analyze large-scale data efficiently, developers have created various big data processing frameworks (e.g., Apache Spark). These big data processing frameworks provide abstractions to developers so that they can focus on implementing the data analysis logic. In traditional software systems, developers leverage logging to monitor applications and record intermediate states to assist workload understanding and issue diagnosis. However, due to the abstraction and the peculiarity of big data frameworks, there is currently no effective monitoring approach for big data applications. In this paper, we first manually study 1,000 randomly sampled Spark-related questions on Stack Overflow to understand their root causes and the type of information, if recorded, that can assist developers with monitoring and diagnosis. Then, we design an approach, DPLOG, which assists developers with monitoring Spark applications. DPLOG leverages statistical sampling to minimize performance overhead and provides intermediate information and hint/warning messages for each data processing step of a chained method pipeline. 
We evaluate DPLOG on six benchmarking programs and find that DPLOG has a relatively small overhead (i.e., less than a 10% increase in response time over not using DPLOG when processing 5GB of data), and reduces the overhead by over 500% compared to the baseline. Our user study with 20 developers shows that DPLOG can reduce the time needed to debug big data applications by 63%, and the participants give DPLOG an average of 4.85/5 for its usefulness. The idea of DPLOG may be applied to other big data processing frameworks, and our study sheds light on future research opportunities in assisting developers with monitoring big data applications. @InProceedings{ESEC/FSE21p516, author = {Zehao Wang and Haoxiang Zhang and Tse-Hsun (Peter) Chen and Shaowei Wang}, title = {Would You Like a Quick Peek? Providing Logging Support to Monitor Data Processing in Big Data Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {516--526}, doi = {10.1145/3468264.3468613}, year = {2021}, } Publisher's Version |
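The mining step in the disabled-tests study above detects annotations such as @Ignore in commit histories. A minimal, deliberately simplistic sketch of such a detector over Java source (regex-based; the patterns are assumptions and this is not the authors' tool, which works over 122K commits):

```python
import re

DISABLE_ANNOTATION = re.compile(r'@(?:Ignore|Disabled)\b')  # JUnit 4 / JUnit 5
METHOD_NAME = re.compile(r'\bvoid\s+(\w+)\s*\(')

def find_disabled_tests(java_source):
    """Return the names of methods preceded by a test-disabling annotation."""
    disabled, lines = [], java_source.splitlines()
    for i, line in enumerate(lines):
        if DISABLE_ANNOTATION.search(line):
            for follow in lines[i + 1:]:        # find the annotated method
                m = METHOD_NAME.search(follow)
                if m:
                    disabled.append(m.group(1))
                    break
    return disabled
```

Running such a detector on both sides of a commit would flag a "test disabling change" whenever a test name newly appears in its output.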
|
Chen, Tsong Yueh |
ESEC/FSE '21: "New Visions on Metamorphic ..."
New Visions on Metamorphic Testing after a Quarter of a Century of Inception
Tsong Yueh Chen and T. H. Tse (Swinburne University of Technology, Australia; University of Hong Kong, Hong Kong) Metamorphic testing (MT) was introduced about a quarter of a century ago. It is increasingly being accepted by researchers and the industry as a useful testing technique. The studies, research results, applications, and extensions of MT have given us many insights and visions for its future. Our visions include: MRs will be a practical means to top up test case generation techniques, beyond the alleviation of the test oracle problem; MT will not only be a standalone technique, but conveniently integrated with other methods; MT and MRs will evolve beyond software testing, or even beyond verification; MRs may be anything that you can imagine, beyond the necessary properties of algorithms; MT research will be beyond empirical studies and move toward a theoretical foundation; MT will not only bring new concepts to software testing but also new concepts to other disciplines; MRs will alleviate the reliable test set problem beyond traditional approaches. These visions may help researchers explore the challenges and opportunities for MT in the next decade. @InProceedings{ESEC/FSE21p1487, author = {Tsong Yueh Chen and T. H. Tse}, title = {New Visions on Metamorphic Testing after a Quarter of a Century of Inception}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1487--1490}, doi = {10.1145/3468264.3473136}, year = {2021}, } Publisher's Version Info |
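For readers new to MT, a textbook instance of the metamorphic relations discussed above: without knowing the correct value of sin(x), we can still check the identity sin(x) = sin(pi - x) over many generated inputs. A minimal sketch:

```python
import math

def mr_sine_symmetry(x, tol=1e-9):
    """Metamorphic relation: sin(x) == sin(pi - x). A violation flags a
    faulty sine implementation even though no oracle value for sin(x) exists."""
    return abs(math.sin(x) - math.sin(math.pi - x)) < tol
```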
|
Chen, Xiang |
ESEC/FSE '21: "A Comprehensive Study of Deep ..."
A Comprehensive Study of Deep Learning Compiler Bugs
Qingchao Shen, Haoyang Ma, Junjie Chen, Yongqiang Tian, Shing-Chi Cheung, and Xiang Chen (Tianjin University, China; University of Waterloo, Canada; Hong Kong University of Science and Technology, China; Nantong University, China) There are increasing uses of deep learning (DL) compilers to generate optimized code, boosting the runtime performance of DL models on specific hardware. Like their traditional counterparts, DL compilers can generate incorrect code, resulting in unexpected model behaviors that may cause catastrophic consequences in mission-critical systems. On the other hand, the DL models processed by DL compilers differ fundamentally from imperative programs in that the program logic in DL models is implicit. As such, various characteristics of the bugs arising from traditional compilers need to be revisited in the context of DL compilers. In this paper, we present the first systematic study of DL compiler bugs by analyzing 603 bugs arising in three popular DL compilers (i.e., TVM from Apache, Glow from Facebook, and nGraph from Intel). We analyzed these bugs according to their root causes, symptoms, and the stages where they occur during compilation. We obtain 12 findings, and provide a series of valuable guidelines for future work on DL compiler bug detection and debugging. For example, a large portion (nearly 20%) of DL compiler bugs are related to types, especially tensor types. The analysis of these bugs helps design new mutation operators (e.g., adding type cast for a tensor to promote implicit type conversion in subsequent tensor computations) to facilitate type-related bug detection. Further, we developed TVMfuzz as a proof-of-concept application of our findings to test the TVM DL compiler. It generates new tests based on TVM's original test suite. They expose 8 TVM bugs that are missed by the original test suite. The result demonstrates the usefulness of our findings. 
@InProceedings{ESEC/FSE21p968, author = {Qingchao Shen and Haoyang Ma and Junjie Chen and Yongqiang Tian and Shing-Chi Cheung and Xiang Chen}, title = {A Comprehensive Study of Deep Learning Compiler Bugs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {968--980}, doi = {10.1145/3468264.3468591}, year = {2021}, } Publisher's Version |
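The type-cast mutation operator suggested by the findings above can be sketched with Python's ast module: wrap each binary-op operand in a cast call so that implicit type-conversion paths in the compiler under test get exercised. Here `cast` is a hypothetical helper of the system under test, and this is an illustration of the idea only, not TVMfuzz:

```python
import ast

class InsertCast(ast.NodeTransformer):
    """Mutation operator: wrap both operands of every binary operation
    in cast(e, 'float32') to provoke implicit type conversions."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        wrap = lambda e: ast.Call(
            func=ast.Name(id="cast", ctx=ast.Load()),
            args=[e, ast.Constant("float32")], keywords=[])
        node.left, node.right = wrap(node.left), wrap(node.right)
        return node

def mutate(source):
    tree = InsertCast().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)  # requires Python 3.9+
```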
|
Chen, Xiao |
ESEC/FSE '21: "A First Look at Developers’ ..."
A First Look at Developers’ Live Chat on Gitter
Lin Shi, Xiao Chen, Ye Yang, Hanzhi Jiang, Ziyou Jiang, Nan Niu, and Qing Wang (Institute of Software at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Stevens Institute of Technology, USA; University of Cincinnati, USA) Modern communication platforms such as Gitter and Slack play an increasingly critical role in supporting software teamwork, especially in open source development. Conversations on such platforms often contain intensive, valuable information that may be used for better understanding OSS developer communication and collaboration. However, little work has been done in this regard. To bridge the gap, this paper reports a first comprehensive empirical study on developers' live chat, investigating when they interact, what community structures look like, which topics are discussed, and how they interact. We manually analyze 749 dialogs in the first phase, followed by an automated analysis of over 173K dialogs in the second phase. We find that developers tend to converse more often on weekdays, especially on Wednesdays and Thursdays (UTC), that there are three common community structures observed, that developers tend to discuss topics such as API usages and errors, and that six dialog interaction patterns are identified in the live chat communities. Based on the findings, we provide recommendations for individual developers and OSS communities, highlight desired features for platform vendors, and shed light on future research directions. We believe that these findings and insights will enable a better understanding of developers' live chat, pave the way for other researchers, and support better utilization and mining of the knowledge embedded in the massive chat history. 
@InProceedings{ESEC/FSE21p391, author = {Lin Shi and Xiao Chen and Ye Yang and Hanzhi Jiang and Ziyou Jiang and Nan Niu and Qing Wang}, title = {A First Look at Developers’ Live Chat on Gitter}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {391--403}, doi = {10.1145/3468264.3468562}, year = {2021}, } Publisher's Version Info |
|
Chen, Yingfeng |
ESEC/FSE '21: "An Empirical Study of GUI ..."
An Empirical Study of GUI Widget Detection for Industrial Mobile Games
Jiaming Ye, Ke Chen, Xiaofei Xie, Lei Ma, Ruochen Huang, Yingfeng Chen, Yinxing Xue, and Jianjun Zhao (Kyushu University, Japan; Netease, China; University of Alberta, Canada; University of Science and Technology of China, China) With the widespread adoption of smartphones in our daily life, mobile games have experienced increasing demand over the past years. Meanwhile, the quality of mobile games has been continuously drawing more and more attention, as it can greatly affect the player experience. For better quality assurance, general-purpose testing has been extensively studied for mobile apps. However, due to the unique characteristics of mobile games, existing mobile testing techniques may not be directly suitable and applicable. To better understand the challenges in mobile game testing, in this paper, we first conduct an empirical study towards understanding the challenges and pain points of the mobile game testing process at our industrial partner NetEase Games. Specifically, we first conduct a survey of the mobile test development team at NetEase Games via both scrum interviews and questionnaires. We found that accurate and effective GUI widget detection for mobile games could be the pillar to boost the automation of mobile game testing and other downstream analysis tasks in practice. We then perform comparative studies to investigate the effectiveness of state-of-the-art general-purpose mobile app GUI widget detection methods in the context of mobile games. To this end, we also develop a technique to automatically collect GUI widget region information of industrial mobile games, which is equipped with a heuristic-based data cleaning method for quality refinement of the labeling results. Our evaluation shows that: (1) Existing GUI widget detection methods for general-purpose mobile apps cannot perform well on industrial mobile games. 
(2) Mobile games exhibit obvious differences from other general-purpose mobile apps from the perspective of GUI widgets. Our further in-depth analysis reveals that the high diversity and density of mobile game GUI widgets could be the major reasons that pose challenges for existing methods, which calls for new research methods and better industry practices. To enable further research along this line, we construct the very first GUI widget detection benchmark, specially designed for mobile games, incorporating both our collected dataset and the state-of-the-art widget detection methods for mobile apps, which could also be the basis for further study of many downstream quality assurance tasks (e.g., testing and analysis) for mobile games. @InProceedings{ESEC/FSE21p1427, author = {Jiaming Ye and Ke Chen and Xiaofei Xie and Lei Ma and Ruochen Huang and Yingfeng Chen and Yinxing Xue and Jianjun Zhao}, title = {An Empirical Study of GUI Widget Detection for Industrial Mobile Games}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1427--1437}, doi = {10.1145/3468264.3473935}, year = {2021}, } Publisher's Version ESEC/FSE '21: "GLIB: Towards Automated Test ..." GLIB: Towards Automated Test Oracle for Graphically-Rich Applications Ke Chen, Yufei Li, Yingfeng Chen, Changjie Fan, Zhipeng Hu, and Wei Yang (Netease, China; University of Texas at Dallas, USA) Graphically-rich applications such as games are ubiquitous, with attractive Graphical User Interface (GUI) visual effects that offer a bridge between software applications and end-users. However, various types of graphical glitches may arise from such GUI complexity and have become one of the main components of software compatibility issues. Our study on bug reports from game development teams at NetEase Inc. indicates that graphical glitches frequently occur during GUI rendering and severely degrade the quality of graphically-rich applications such as video games. 
Existing automated testing techniques for such applications focus mainly on generating various GUI test sequences and checking whether the test sequences can cause crashes. These techniques require constant human attention to capture non-crashing bugs such as bugs causing graphical glitches. In this paper, we present the first step in automating the test oracle for detecting non-crashing bugs in graphically-rich applications. Specifically, we propose GLIB, based on a code-based data augmentation technique, to detect game GUI glitches. We perform an evaluation of GLIB on 20 real-world game apps (with bug reports available) and the results show that GLIB can achieve 100% precision and 99.5% recall in detecting non-crashing bugs such as game GUI glitches. Practical application of GLIB on another 14 real-world games (without bug reports) further demonstrates that GLIB can effectively uncover GUI glitches, with 48 of 53 bugs reported by GLIB having been confirmed and fixed so far. @InProceedings{ESEC/FSE21p1093, author = {Ke Chen and Yufei Li and Yingfeng Chen and Changjie Fan and Zhipeng Hu and Wei Yang}, title = {GLIB: Towards Automated Test Oracle for Graphically-Rich Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1093--1104}, doi = {10.1145/3468264.3468586}, year = {2021}, } Publisher's Version |
|
Chen, Yu |
ESEC/FSE '21: "Toward Efficient Interactions ..."
Toward Efficient Interactions between Python and Native Libraries
Jialiang Tan, Yu Chen, Zhenming Liu, Bin Ren, Shuaiwen Leon Song, Xipeng Shen, and Xu Liu (College of William & Mary, USA; University of Sydney, Australia; North Carolina State University, USA) Python has become a popular programming language because of its excellent programmability. Many modern software packages utilize Python for high-level algorithm design and depend on native libraries written in C/C++/Fortran for efficient computation kernels. Interaction between Python code and native libraries introduces performance losses because of the abstraction lying on the boundary of Python and native libraries. On one side, Python code, typically run with interpretation, is disjoint from its execution behavior. On the other side, native libraries do not include program semantics to understand algorithm defects. To understand the interaction inefficiencies, we extensively study a large collection of Python software packages and categorize them according to the root causes of inefficiencies. We extract two inefficiency patterns that are common in interaction inefficiencies. Based on these patterns, we develop PieProf, a lightweight profiler, to pinpoint interaction inefficiencies in Python applications. The principle of PieProf is to measure the inefficiencies in the native execution and associate inefficiencies with high-level Python code to provide a holistic view. Guided by PieProf, we optimize 17 real-world applications, yielding speedups up to 6.3× at the application level. @InProceedings{ESEC/FSE21p1117, author = {Jialiang Tan and Yu Chen and Zhenming Liu and Bin Ren and Shuaiwen Leon Song and Xipeng Shen and Xu Liu}, title = {Toward Efficient Interactions between Python and Native Libraries}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1117--1128}, doi = {10.1145/3468264.3468541}, year = {2021}, } Publisher's Version |
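One classic interaction inefficiency of the kind described above is making a native call per element instead of one batched call. This can be made concrete with a toy stand-in for a C extension that counts boundary crossings (the `NativeLib` class is purely illustrative; PieProf itself profiles real native execution, and these may not be the paper's exact two patterns):

```python
class NativeLib:
    """Toy stand-in for a native extension: every call pays one
    boundary crossing; batch-style APIs amortize that cost."""
    def __init__(self):
        self.crossings = 0

    def sqrt_one(self, v):
        self.crossings += 1            # one crossing per element
        return v ** 0.5

    def sqrt_many(self, vs):
        self.crossings += 1            # one crossing for the whole batch
        return [v ** 0.5 for v in vs]

lib = NativeLib()
slow = [lib.sqrt_one(v) for v in range(100)]   # 100 crossings
fast = lib.sqrt_many(range(100))               # 1 more crossing, same result
```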
|
Chen, Yuqi |
ESEC/FSE '21: "Code Integrity Attestation ..."
Code Integrity Attestation for PLCs using Black Box Neural Network Predictions
Yuqi Chen, Christopher M. Poskitt, and Jun Sun (Singapore Management University, Singapore) Cyber-physical systems (CPSs) are widespread in critical domains, and significant damage can be caused if an attacker is able to modify the code of their programmable logic controllers (PLCs). Unfortunately, traditional techniques for attesting code integrity (i.e. verifying that it has not been modified) rely on firmware access or roots-of-trust, neither of which proprietary or legacy PLCs are likely to provide. In this paper, we propose a practical code integrity checking solution based on privacy-preserving black box models that instead attest the input/output behaviour of PLC programs. Using faithful offline copies of the PLC programs, we identify their most important inputs through an information flow analysis, execute them on multiple combinations to collect data, then train neural networks able to predict PLC outputs (i.e. actuator commands) from their inputs. By exploiting the black box nature of the model, our solution maintains the privacy of the original PLC code and does not assume that attackers are unaware of its presence. The trust instead comes from the fact that it is extremely hard to attack the PLC code and neural networks at the same time and with consistent outcomes. We evaluated our approach on a modern six-stage water treatment plant testbed, finding that it could predict actuator states from PLC inputs with near-100% accuracy, and thus could detect all 120 effective code mutations that we subjected the PLCs to. Finally, we found that it is not practically possible to simultaneously modify the PLC code and apply discreet adversarial noise to our attesters in a way that leads to consistent (mis-)predictions. @InProceedings{ESEC/FSE21p32, author = {Yuqi Chen and Christopher M. 
Poskitt and Jun Sun}, title = {Code Integrity Attestation for PLCs using Black Box Neural Network Predictions}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {32--44}, doi = {10.1145/3468264.3468617}, year = {2021}, } Publisher's Version |
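At its core, the attestation scheme above compares the black-box model's predicted actuator commands against the observed ones and treats disagreement as evidence of tampering. A minimal sketch with a toy threshold "model" standing in for the paper's neural networks (both `attest` and `predict_pump` are illustrative assumptions):

```python
def attest(predict, inputs, observed):
    """Return (ok, mismatch_indices): the PLC is suspicious whenever the
    model's predicted actuator command disagrees with the observed one."""
    mismatches = [i for i, x in enumerate(inputs)
                  if predict(x) != observed[i]]
    return not mismatches, mismatches

# toy predictor: command the pump on when the water level exceeds the setpoint
predict_pump = lambda level: level > 5.0
```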
|
Chen, Zhenpeng |
ESEC/FSE '21: "TaintStream: Fine-Grained ..."
TaintStream: Fine-Grained Taint Tracking for Big Data Platforms through Dynamic Code Translation
Chengxu Yang, Yuanchun Li, Mengwei Xu, Zhenpeng Chen, Yunxin Liu, Gang Huang, and Xuanzhe Liu (Peking University, China; Microsoft Research, China; Beijing University of Posts and Telecommunications, China; Tsinghua University, China) Big data has become valuable property for enterprises and enabled various intelligent applications. Today, it is common to host data in big data platforms (e.g., Spark), where developers can submit scripts to process the original and intermediate data tables. Meanwhile, it is highly desirable to manage the data to comply with various privacy requirements. To enable flexible and automated privacy policy enforcement, we propose TaintStream, a fine-grained taint tracking framework for Spark-like big data platforms. TaintStream works by automatically injecting taint tracking logic into the data processing scripts, and the injected scripts are dynamically translated to maintain a taint tag for each cell during execution. The dynamic translation rules are carefully designed to guarantee non-interference with the original data operations. By defining different semantics of taint tags, TaintStream can enable various data management applications such as access control, data retention, and user data erasure. Our experiments on a self-crafted benchmark suite show that TaintStream is able to achieve accurate cell-level taint tracking with a precision of 93.0% and less than 15% overhead. We also demonstrate the usefulness of TaintStream through several real-world use cases of privacy policy enforcement. @InProceedings{ESEC/FSE21p806, author = {Chengxu Yang and Yuanchun Li and Mengwei Xu and Zhenpeng Chen and Yunxin Liu and Gang Huang and Xuanzhe Liu}, title = {TaintStream: Fine-Grained Taint Tracking for Big Data Platforms through Dynamic Code Translation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {806--817}, doi = {10.1145/3468264.3468532}, year = {2021}, } Publisher's Version ESEC/FSE '21: "An Empirical Study on Challenges ..." 
An Empirical Study on Challenges of Application Development in Serverless Computing Jinfeng Wen, Zhenpeng Chen, Yi Liu, Yiling Lou, Yun Ma, Gang Huang, Xin Jin, and Xuanzhe Liu (Peking University, China) Serverless computing is an emerging paradigm for cloud computing, gaining traction in a wide range of applications such as video processing and machine learning. This new paradigm allows developers to focus on the development of the logic of serverless computing based applications (abbreviated as serverless-based applications) at the granularity of functions, thereby freeing developers from tedious and error-prone infrastructure management. Meanwhile, it also introduces new challenges in the design, implementation, and deployment of serverless-based applications, and current serverless computing platforms are far from satisfactory. However, to the best of our knowledge, these challenges have not been well studied. To fill this knowledge gap, this paper presents the first comprehensive study on understanding the challenges in developing serverless-based applications from the developers’ perspective. We mine and analyze 22,731 relevant questions from Stack Overflow (a popular Q&A website for developers), and show the increasing popularity trend and the high difficulty level of serverless computing for developers. Through manual inspection of 619 sampled questions, we construct a taxonomy of challenges that developers encounter, and report a series of findings and actionable implications. Stakeholders including application developers, researchers, and cloud providers can leverage these findings and implications to better understand and further explore the serverless computing paradigm. 
@InProceedings{ESEC/FSE21p416, author = {Jinfeng Wen and Zhenpeng Chen and Yi Liu and Yiling Lou and Yun Ma and Gang Huang and Xin Jin and Xuanzhe Liu}, title = {An Empirical Study on Challenges of Application Development in Serverless Computing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {416--428}, doi = {10.1145/3468264.3468558}, year = {2021}, } Publisher's Version |
|
Chen, Zhihui |
ESEC/FSE '21: "Demystifying “Bad” Error ..."
Demystifying “Bad” Error Messages in Data Science Libraries
Yida Tao, Zhihui Chen, Yepang Liu, Jifeng Xuan, Zhiwu Xu, and Shengchao Qin (Shenzhen University, China; Southern University of Science and Technology, China; Wuhan University, China; Teesside University, UK) Error messages are critical starting points for debugging. Unfortunately, they seem to be notoriously cryptic, confusing, and uninformative. Yet, it still remains a mystery why error messages receive such bad reputations, especially given that they are merely very short pieces of natural language text. In this paper, we empirically demystify the causes and fixes of "bad" error messages, by qualitatively studying 201 Stack Overflow threads and 335 GitHub issues. We specifically focus on error messages encountered in data science development, which is an increasingly important but not well studied domain. We found that the causes of "bad" error messages are far more complicated than poor phrasing or flawed articulation of error message content. Many error messages are inherently and inevitably misleading or uninformative, since libraries do not know user intentions and cannot "see" external errors. Fixes to error-message-related issues mostly involve source code changes, while exclusive message content updates only take up a small portion. In addition, whether an error message is informative or helpful is not always clear-cut; even error messages that clearly pinpoint faults and resolutions can still cause confusion for certain users. These findings thus call for a more in-depth investigation on how error messages should be evaluated and improved in the future. @InProceedings{ESEC/FSE21p818, author = {Yida Tao and Zhihui Chen and Yepang Liu and Jifeng Xuan and Zhiwu Xu and Shengchao Qin}, title = {Demystifying “Bad” Error Messages in Data Science Libraries}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {818--829}, doi = {10.1145/3468264.3468560}, year = {2021}, } Publisher's Version |
|
Chen, Zhongtian |
ESEC/FSE '21: "StateFormer: Fine-Grained ..."
StateFormer: Fine-Grained Type Recovery from Binaries using Generative State Modeling
Kexin Pei, Jonas Guan, Matthew Broughton, Zhongtian Chen, Songchen Yao, David Williams-King, Vikas Ummadisetty, Junfeng Yang, Baishakhi Ray, and Suman Jana (Columbia University, USA; University of Toronto, Canada; Dublin High School, Ireland) Binary type inference is a critical reverse engineering task supporting many security applications, including vulnerability analysis, binary hardening, forensics, and decompilation. It is a difficult task because source-level type information is often stripped during compilation, leaving only binaries with untyped memory and register accesses. Existing approaches rely on hand-coded type inference rules defined by domain experts, which are brittle and require nontrivial effort to maintain and update. Even though machine learning approaches have shown promise at automatically learning the inference rules, their accuracy is still low, especially for optimized binaries. We present StateFormer, a new neural architecture that is adept at accurate and robust type inference. StateFormer follows a two-step transfer learning paradigm. In the pretraining step, the model is trained with Generative State Modeling (GSM), a novel task that we design to teach the model to statically approximate execution effects of assembly instructions in both forward and backward directions. In the finetuning step, the pretrained model learns to use its knowledge of operational semantics to infer types. We evaluate StateFormer's performance on a corpus of 33 popular open-source software projects containing over 1.67 billion variables of different types. The programs are compiled with GCC and LLVM over 4 optimization levels O0-O3, and 3 obfuscation passes based on LLVM. Our model significantly outperforms state-of-the-art ML-based tools by 14.6% in recovering types for both function arguments and variables. Our ablation studies show that GSM improves type inference accuracy by 33%. 
@InProceedings{ESEC/FSE21p690, author = {Kexin Pei and Jonas Guan and Matthew Broughton and Zhongtian Chen and Songchen Yao and David Williams-King and Vikas Ummadisetty and Junfeng Yang and Baishakhi Ray and Suman Jana}, title = {StateFormer: Fine-Grained Type Recovery from Binaries using Generative State Modeling}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {690--702}, doi = {10.1145/3468264.3468607}, year = {2021}, } Publisher's Version Info Artifacts Reusable |
|
Chen, Zhuangzhi |
ESEC/FSE '21: "Sustainability Forecasting ..."
Sustainability Forecasting for Apache Incubator Projects
Likang Yin, Zhuangzhi Chen, Qi Xuan, and Vladimir Filkov (University of California at Davis, USA; Zhejiang University of Technology, China) Although OSS development is very popular, ultimately more than 80% of OSS projects fail. Identifying the factors associated with OSS success can help in devising interventions when a project takes a downturn. OSS success has been studied from a variety of angles, more recently in empirical studies of large numbers of diverse projects, using proxies for sustainability, e.g., internal metrics related to productivity and external ones, related to community popularity. The internal socio-technical structure of projects has also been shown important, especially their dynamics. This points to another angle on evaluating software success, from the perspective of self-sustaining and self-governing communities. To uncover the dynamics of how a project at a nascent development stage gradually evolves into a sustainable one, here we apply a socio-technical network modeling perspective to a dataset of Apache Software Foundation Incubator (ASFI), sustainability-labeled projects. To identify and validate the determinants of sustainability, we undertake a mix of quantitative and qualitative studies of ASFI projects’ socio-technical network trajectories. We develop interpretable models which can forecast a project becoming sustainable with 93+% accuracy, within 8 months of incubation start. Based on the interpretable models we describe a strategy for real-time monitoring and suggesting actions, which can be used by projects to correct their sustainability trajectories. @InProceedings{ESEC/FSE21p1056, author = {Likang Yin and Zhuangzhi Chen and Qi Xuan and Vladimir Filkov}, title = {Sustainability Forecasting for Apache Incubator Projects}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1056--1067}, doi = {10.1145/3468264.3468563}, year = {2021}, } Publisher's Version |
|
Cheung, Shing-Chi |
ESEC/FSE '21: "A Comprehensive Study of Deep ..."
A Comprehensive Study of Deep Learning Compiler Bugs
Qingchao Shen, Haoyang Ma, Junjie Chen, Yongqiang Tian, Shing-Chi Cheung, and Xiang Chen (Tianjin University, China; University of Waterloo, Canada; Hong Kong University of Science and Technology, China; Nantong University, China) There are increasing uses of deep learning (DL) compilers to generate optimized code, boosting the runtime performance of DL models on specific hardware. Like their traditional counterparts, DL compilers can generate incorrect code, resulting in unexpected model behaviors that may cause catastrophic consequences in mission-critical systems. On the other hand, the DL models processed by DL compilers differ fundamentally from imperative programs in that the program logic in DL models is implicit. As such, various characteristics of the bugs arising from traditional compilers need to be revisited in the context of DL compilers. In this paper, we present the first systematic study of DL compiler bugs by analyzing 603 bugs arising in three popular DL compilers (i.e., TVM from Apache, Glow from Facebook, and nGraph from Intel). We analyzed these bugs according to their root causes, symptoms, and the stages where they occur during compilation. We obtain 12 findings, and provide a series of valuable guidelines for future work on DL compiler bug detection and debugging. For example, a large portion (nearly 20%) of DL compiler bugs are related to types, especially tensor types. The analysis of these bugs helps design new mutation operators (e.g., adding type cast for a tensor to promote implicit type conversion in subsequent tensor computations) to facilitate type-related bug detection. Further, we developed TVMfuzz as a proof-of-concept application of our findings to test the TVM DL compiler. It generates new tests based on TVM's original test suite. They expose 8 TVM bugs that are missed by the original test suite. The result demonstrates the usefulness of our findings. 
@InProceedings{ESEC/FSE21p968, author = {Qingchao Shen and Haoyang Ma and Junjie Chen and Yongqiang Tian and Shing-Chi Cheung and Xiang Chen}, title = {A Comprehensive Study of Deep Learning Compiler Bugs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {968--980}, doi = {10.1145/3468264.3468591}, year = {2021}, } Publisher's Version ESEC/FSE '21: "ÐArcher: Detecting On-Chain-Off-Chain ..." ÐArcher: Detecting On-Chain-Off-Chain Synchronization Bugs in Decentralized Applications Wuqi Zhang, Lili Wei, Shuqing Li, Yepang Liu, and Shing-Chi Cheung (Hong Kong University of Science and Technology, China; Southern University of Science and Technology, China) Since the emergence of Ethereum, blockchain-based decentralized applications (DApps) have become increasingly popular and important. To balance the security, performance, and costs, a DApp typically consists of two layers: an on-chain layer to execute transactions and store crucial data on the blockchain and an off-chain layer to interact with users. A DApp needs to synchronize its off-chain layer with the on-chain layer proactively. Otherwise, the inconsistent data in the off-chain layer could mislead users and cause undesirable consequences, e.g., loss of transaction fees. However, transactions sent to the blockchain are not guaranteed to be executed and could even be reversed after execution due to chain reorganization. Such non-determinism in the transaction execution is unique to blockchain. DApp developers may fail to perform the on-chain-off-chain synchronization accurately due to their lack of familiarity with the complex transaction lifecycle. In this work, we investigate the challenges of synchronizing on-chain and off-chain data in Ethereum-based DApps. We present two types of bugs that could result in inconsistencies between the on-chain and off-chain layers. 
To help detect such on-chain-off-chain synchronization bugs, we introduce a state transition model to guide the testing of DApps and propose two effective oracles to facilitate the automatic identification of bugs. We build the first testing framework, ÐArcher, to detect on-chain-off-chain synchronization bugs in DApps. We have evaluated ÐArcher on 11 popular real-world DApps. ÐArcher achieves high precision (99.3%), recall (87.6%), and accuracy (89.4%) in bug detection and significantly outperforms the baseline methods. It has found 15 real bugs in the 11 DApps. So far, six of the 15 bugs have been confirmed by the developers, and three have been fixed. These promising results demonstrate the usefulness of ÐArcher. @InProceedings{ESEC/FSE21p553, author = {Wuqi Zhang and Lili Wei and Shuqing Li and Yepang Liu and Shing-Chi Cheung}, title = {ÐArcher: Detecting On-Chain-Off-Chain Synchronization Bugs in Decentralized Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {553--565}, doi = {10.1145/3468264.3468546}, year = {2021}, } Publisher's Version Info Artifacts Functional |
|
Chhetri, Mohan Baruwal |
ESEC/FSE '21: "ICME: An Informed Consent ..."
ICME: An Informed Consent Management Engine for Conformance in Smart Building Environments
Chehara Pathmabandu, John Grundy, Mohan Baruwal Chhetri, and Zubair Baig (Monash University, Australia; CSIRO’s Data61, Australia; Deakin University, Australia) Smart buildings can reveal highly sensitive insights about their inhabitants and expose them to new privacy threats and vulnerabilities. Yet, convenience overrides privacy concerns and most people remain ignorant about this issue. We propose a novel Informed Consent Management Engine (ICME) that aims to: (a) increase users’ awareness about privacy issues and data collection practices in their smart building environments, (b) provide fine-grained visibility into privacy conformance and infringement by these devices, (c) recommend and visualise corrective user actions through “digital nudging”, and (d) support the monitoring and management of personal data disclosure in a shared space. We present a reference architecture for ICME that can be used by software engineers to implement diverse end-user consent management solutions for smart buildings. We also provide a proof-of-concept prototype to demonstrate how the ICME approach works in a shared smart workplace. Demo: https://youtu.be/5y6CdyWAdgY @InProceedings{ESEC/FSE21p1545, author = {Chehara Pathmabandu and John Grundy and Mohan Baruwal Chhetri and Zubair Baig}, title = {ICME: An Informed Consent Management Engine for Conformance in Smart Building Environments}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1545--1549}, doi = {10.1145/3468264.3473118}, year = {2021}, } Publisher's Version Video |
|
Chimalakonda, Sridhar |
ESEC/FSE '21: "AC²: Towards Understanding ..."
AC²: Towards Understanding Architectural Changes in Python Projects
A. Eashaan Rao, Dheeraj Vagavolu, and Sridhar Chimalakonda (IIT Tirupati, India) Open source projects are adopting faster release cycles that reflect various changes in the software. Therefore, comprehending the effects of these changes as software architecture evolves over multiple releases becomes necessary. However, it is challenging to keep architecture in-check and add new changes simultaneously for every release. To this end, we propose a visualization tool called AC2, which allows users to examine the alterations in the architecture at both higher and lower levels of abstraction for Python projects. AC2 uses call graphs and collaboration graphs to show the interaction between different architectural components. The tool provides four different views to see the architectural changes. Users can examine two releases at a time to comprehend architectural changes between them. AC2 can support the maintainers and developers, observing changes in the project and their influence on the architecture, which allows them to examine its increasing complexity over many releases at component level. AC2 can be downloaded from https://github.com/rishalab/AC2 and the demo can be seen at https://www.youtube.com/watch?v=GNrJfZ0RCVI. @InProceedings{ESEC/FSE21p1555, author = {A. Eashaan Rao and Dheeraj Vagavolu and Sridhar Chimalakonda}, title = {AC²: Towards Understanding Architectural Changes in Python Projects}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1555--1559}, doi = {10.1145/3468264.3473120}, year = {2021}, } Publisher's Version Video Info ESEC/FSE '21: "StackEmo: Towards Enhancing ..." StackEmo: Towards Enhancing User Experience by Augmenting Stack Overflow with Emojis Akhila Sri Manasa Venigalla and Sridhar Chimalakonda (IIT Tirupati, India) Many novice programmers visit Stack Overflow for purposes that include posing questions and finding answers for issues they come across in the process of programming. 
Many questions have more than one correct answer on Stack Overflow, which are accompanied by a number of comments from the users. Comments help developers in identifying the answer that better fits their purpose. However, it is difficult to navigate through all the comments to select an answer. Adding relevant visual cues to comments could help developers in prioritizing the comments to be read. Comments logged generally include sentiments of users, which, when depicted visually, could motivate users in reading through the comments and also help them in prioritizing the comments. However, the sentiment of comments is not being explicitly depicted on the current Stack Overflow platform. While there exist many tools that augment or annotate the Stack Overflow platform for developers, we are not aware of tools that annotate visual representations of sentiments to the posts. In this paper, we propose StackEmo as a Google Chrome plugin to augment comments on Stack Overflow with emojis, based on the sentiment of the comments posted. We evaluated StackEmo through a Likert-scale-based user survey with 30 university students to understand user perception towards StackEmo. The results of the survey provided us with insights into improving StackEmo, with 83% of the participants willing to recommend the plugin to their peers. The source code and tool are available for download on GitHub at: https://github.com/rishalab/StackEmo, and the demo can be found on YouTube: https://youtu.be/BCFlqvMhTMA. @InProceedings{ESEC/FSE21p1550, author = {Akhila Sri Manasa Venigalla and Sridhar Chimalakonda}, title = {StackEmo: Towards Enhancing User Experience by Augmenting Stack Overflow with Emojis}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1550--1554}, doi = {10.1145/3468264.3473119}, year = {2021}, } Publisher's Version Video |
|
Chinn, Michael |
ESEC/FSE '21: "PHYSFRAME: Type Checking Physical ..."
PHYSFRAME: Type Checking Physical Frames of Reference for Robotic Systems
Sayali Kate, Michael Chinn, Hongjun Choi, Xiangyu Zhang, and Sebastian Elbaum (Purdue University, USA; University of Virginia, USA) A robotic system continuously measures its own motions and the external world during operation. Such measurements are with respect to some frame of reference, i.e., a coordinate system. A nontrivial robotic system has a large number of different frames and data have to be translated back-and-forth from a frame to another. The onus is on the developers to get such translation right. However, this is very challenging and error-prone, evidenced by the large number of questions and issues related to frame uses on developers' forum. Since any state variable can be associated with some frame, reference frames can be naturally modeled as variable types. We hence develop a novel type system that can automatically infer variables' frame types and in turn detect any type inconsistencies and violations of frame conventions. The evaluation on a set of 180 publicly available ROS projects shows that our system can detect 190 inconsistencies with 154 true positives. We reported 52 to developers and received 18 responses so far, with 15 fixed/acknowledged. Our technique also finds 45 violations of common practices. @InProceedings{ESEC/FSE21p45, author = {Sayali Kate and Michael Chinn and Hongjun Choi and Xiangyu Zhang and Sebastian Elbaum}, title = {PHYSFRAME: Type Checking Physical Frames of Reference for Robotic Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {45--56}, doi = {10.1145/3468264.3468608}, year = {2021}, } Publisher's Version |
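The abstract's key idea, modeling reference frames as variable types so that mixing frames without an explicit transform is a type error, can be illustrated with a small sketch. All names below (Vec, FrameError, transform) are hypothetical; this is not PHYSFRAME's ROS-based inference, which infers frame types automatically rather than requiring explicit tags.

```python
# Toy frame-as-type checking: every value carries the frame it is
# expressed in, and combining values from different frames is rejected.

class FrameError(Exception):
    pass

class Vec:
    """A 3-D value tagged with its frame of reference."""
    def __init__(self, xyz, frame):
        self.xyz = tuple(xyz)
        self.frame = frame

    def __add__(self, other):
        # Combining data from two frames without an explicit transform
        # is exactly the kind of bug a frame type system flags.
        if self.frame != other.frame:
            raise FrameError(f"frame mismatch: {self.frame} vs {other.frame}")
        return Vec((a + b for a, b in zip(self.xyz, other.xyz)), self.frame)

def transform(v, to_frame, offset):
    """Explicitly re-express v in another frame (toy, translation-only)."""
    return Vec((a + b for a, b in zip(v.xyz, offset)), to_frame)

# base_link and map are two common robot frames.
p = Vec((1.0, 2.0, 0.0), "base_link")
q = Vec((0.5, 0.5, 0.0), "map")

try:
    _ = p + q                      # inconsistent frames -> rejected
except FrameError as e:
    print("caught:", e)

q_in_base = transform(q, "base_link", offset=(-0.5, -0.5, 0.0))
print((p + q_in_base).xyz)         # (1.0, 2.0, 0.0): OK after an explicit transform
```

In the paper's setting the frame tags are inferred from code and conventions rather than written by the developer, which is what makes the 180-project evaluation possible.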
|
Chintalapati, Murali |
ESEC/FSE '21: "Onion: Identifying Incident-Indicating ..."
Onion: Identifying Incident-Indicating Logs for Cloud Systems
Xu Zhang, Yong Xu, Si Qin, Shilin He, Bo Qiao, Ze Li, Hongyu Zhang, Xukun Li, Yingnong Dang, Qingwei Lin, Murali Chintalapati, Saravanakumar Rajmohan, and Dongmei Zhang (Microsoft Research, China; Microsoft Azure, USA; University of Newcastle, Australia; Microsoft 365, USA) In cloud systems, incidents affect the availability of services and require quick mitigation actions. Once an incident occurs, operators and developers often examine logs to perform fault diagnosis. However, the large volume of diverse logs and the overwhelming details in log data make the manual diagnosis process time-consuming and error-prone. In this paper, we propose Onion, an automatic solution for precisely and efficiently locating incident-indicating logs, which can provide useful clues for diagnosing the incidents. We first point out three criteria for localizing incident-indicating logs, i.e., Consistency, Impact, and Bilateral-Difference. Then we propose a novel agglomeration of logs, called log clique, based on which these criteria are satisfied. To obtain log cliques, we develop an incident-aware log representation and a progressive log clustering technique. Contrast analysis is then performed on the cliques to identify the incident-indicating logs. We have evaluated Onion using well-labeled log datasets. Onion achieves an average F1-score of 0.95 and can process millions of logs in only a few minutes, demonstrating its effectiveness and efficiency. Onion has also been successfully applied to the cloud system of Microsoft. Its practicability has been confirmed through the quantitative and qualitative analysis of the real incident cases. 
@InProceedings{ESEC/FSE21p1253, author = {Xu Zhang and Yong Xu and Si Qin and Shilin He and Bo Qiao and Ze Li and Hongyu Zhang and Xukun Li and Yingnong Dang and Qingwei Lin and Murali Chintalapati and Saravanakumar Rajmohan and Dongmei Zhang}, title = {Onion: Identifying Incident-Indicating Logs for Cloud Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1253--1263}, doi = {10.1145/3468264.3473919}, year = {2021}, } Publisher's Version |
|
Chiou, Paul T. |
ESEC/FSE '21: "Detecting and Localizing Keyboard ..."
Detecting and Localizing Keyboard Accessibility Failures in Web Applications
Paul T. Chiou, Ali S. Alotaibi, and William G. J. Halfond (University of Southern California, USA) The keyboard is the most universally supported input method operable by people with disabilities. Yet, many popular websites lack keyboard-accessible mechanisms, which can cause failures that make the website unusable. In this paper, we present a novel approach for automatically detecting and localizing keyboard accessibility failures in web applications. An extensive evaluation on real-world web pages showed that our technique was able to detect keyboard failures in web applications with high precision and recall and was able to accurately identify the underlying elements in the web pages that led to the observed problems. @InProceedings{ESEC/FSE21p855, author = {Paul T. Chiou and Ali S. Alotaibi and William G. J. Halfond}, title = {Detecting and Localizing Keyboard Accessibility Failures in Web Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {855--867}, doi = {10.1145/3468264.3468581}, year = {2021}, } Publisher's Version Artifacts Reusable |
|
Chirkova, Nadezhda |
ESEC/FSE '21: "Empirical Study of Transformers ..."
Empirical Study of Transformers for Source Code
Nadezhda Chirkova and Sergey Troshin (HSE University, Russia) Initially developed for natural language processing (NLP), Transformers are now widely used for source code processing, due to the format similarity between source code and text. In contrast to natural language, source code is strictly structured, i.e., it follows the syntax of the programming language. Several recent works develop Transformer modifications for capturing syntactic information in source code. The drawback of these works is that they do not compare to each other and consider different tasks. In this work, we conduct a thorough empirical study of the capabilities of Transformers to utilize syntactic information in different tasks. We consider three tasks (code completion, function naming and bug fixing) and re-implement different syntax-capturing modifications in a unified framework. We show that Transformers are able to make meaningful predictions based purely on syntactic information and underline the best practices of taking the syntactic information into account for improving the performance of the model. @InProceedings{ESEC/FSE21p703, author = {Nadezhda Chirkova and Sergey Troshin}, title = {Empirical Study of Transformers for Source Code}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {703--715}, doi = {10.1145/3468264.3468611}, year = {2021}, } Publisher's Version Info |
|
Choi, Hongjun |
ESEC/FSE '21: "PHYSFRAME: Type Checking Physical ..."
PHYSFRAME: Type Checking Physical Frames of Reference for Robotic Systems
Sayali Kate, Michael Chinn, Hongjun Choi, Xiangyu Zhang, and Sebastian Elbaum (Purdue University, USA; University of Virginia, USA) A robotic system continuously measures its own motions and the external world during operation. Such measurements are with respect to some frame of reference, i.e., a coordinate system. A nontrivial robotic system has a large number of different frames and data have to be translated back-and-forth from a frame to another. The onus is on the developers to get such translation right. However, this is very challenging and error-prone, evidenced by the large number of questions and issues related to frame uses on developers' forum. Since any state variable can be associated with some frame, reference frames can be naturally modeled as variable types. We hence develop a novel type system that can automatically infer variables' frame types and in turn detect any type inconsistencies and violations of frame conventions. The evaluation on a set of 180 publicly available ROS projects shows that our system can detect 190 inconsistencies with 154 true positives. We reported 52 to developers and received 18 responses so far, with 15 fixed/acknowledged. Our technique also finds 45 violations of common practices. @InProceedings{ESEC/FSE21p45, author = {Sayali Kate and Michael Chinn and Hongjun Choi and Xiangyu Zhang and Sebastian Elbaum}, title = {PHYSFRAME: Type Checking Physical Frames of Reference for Robotic Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {45--56}, doi = {10.1145/3468264.3468608}, year = {2021}, } Publisher's Version |
|
Chouchen, Moataz |
ESEC/FSE '21: "BF-Detector: An Automated ..."
BF-Detector: An Automated Tool for CI Build Failure Detection
Islem Saidani, Ali Ouni, Moataz Chouchen, and Mohamed Wiem Mkaouer (ETS, Canada; Rochester Institute of Technology, USA) Continuous Integration (CI) aims at supporting developers in integrating code changes quickly through automated building. However, there is a consensus that CI build failure is a major barrier that developers face, which prevents them from proceeding further with development. In this paper, we introduce BF-Detector, an automated tool to detect CI build failure. Based on the adaptation of the Non-dominated Sorting Genetic Algorithm (NSGA-II), our tool aims at finding the best prediction rules based on two conflicting objective functions to deal with both minority and majority classes. We evaluated the effectiveness of our tool on a benchmark of 56,019 CI builds. The results reveal that our technique outperforms state-of-the-art approaches by providing a better balance between both failed and passed builds. The BF-Detector tool is publicly available, with a demo video, at: https://github.com/stilab-ets/BF-Detector. @InProceedings{ESEC/FSE21p1530, author = {Islem Saidani and Ali Ouni and Moataz Chouchen and Mohamed Wiem Mkaouer}, title = {BF-Detector: An Automated Tool for CI Build Failure Detection}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1530--1534}, doi = {10.1145/3468264.3473115}, year = {2021}, } Publisher's Version Info ESEC/FSE '21: "csDetector: An Open Source ..." csDetector: An Open Source Tool for Community Smells Detection Nuri Almarimi, Ali Ouni, Moataz Chouchen, and Mohamed Wiem Mkaouer (ETS, Canada; University of Quebec, Canada; Rochester Institute of Technology, USA) Community smells represent symptoms of sub-optimal organizational and social issues within software development communities that often lead to additional project costs and reduced software quality. 
Previous research identified a variety of community smells that are connected to sub-optimal patterns under different perspectives of organizational-social structures in the software development community. To detect community smells and understanding the characteristics of such organizational-social structures in a project, we propose csDetector, an open source tool that is able to automatically detect community smells within a project and provide relevant socio-technical metrics. csDetector uses a machine learning based detection approach that learns from various existing bad community development practices to provide automated support in detecting related community smells. We evaluate the effectiveness of csDetector on a benchmark of 143 open source projects from GitHub. Our results show that the csDetector tool can detect ten commonly occurring community smells in open software projects with an average F1 score of 84%. csDetector is publicly available, with a demo video, at: https://github.com/Nuri22/csDetector. @InProceedings{ESEC/FSE21p1560, author = {Nuri Almarimi and Ali Ouni and Moataz Chouchen and Mohamed Wiem Mkaouer}, title = {csDetector: An Open Source Tool for Community Smells Detection}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1560--1564}, doi = {10.1145/3468264.3473121}, year = {2021}, } Publisher's Version |
|
Chowdhary, Sangeeta |
ESEC/FSE '21: "Parallel Shadow Execution ..."
Parallel Shadow Execution to Accelerate the Debugging of Numerical Errors
Sangeeta Chowdhary and Santosh Nagarakatte (Rutgers University, USA) This paper proposes a new approach for debugging errors in floating point computation by performing shadow execution with higher precision in parallel. The programmer specifies parts of the program that need to be debugged for errors. Our compiler creates shadow execution tasks, which execute on different cores and perform the computation with higher precision. We propose a novel method to execute a shadow execution task from an arbitrary memory state, which is necessary because we are creating a parallel shadow execution from a sequential program. Our approach also ensures that the shadow execution follows the same control flow path as the original program. Our runtime automatically distributes the shadow execution tasks to balance the load on the cores. Our prototype for parallel shadow execution, PFPSanitizer, provides comprehensive detection of errors while having lower performance overheads than prior approaches. @InProceedings{ESEC/FSE21p615, author = {Sangeeta Chowdhary and Santosh Nagarakatte}, title = {Parallel Shadow Execution to Accelerate the Debugging of Numerical Errors}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {615--626}, doi = {10.1145/3468264.3468585}, year = {2021}, } Publisher's Version Artifacts Functional |
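The core mechanism the abstract describes, re-running selected floating-point computations at higher precision and comparing against the original, can be sketched sequentially in a few lines. This is only an illustration of the shadow-execution idea, not PFPSanitizer's compiler-based parallel implementation: here the "program" runs in emulated IEEE-754 single precision while a double-precision shadow tracks the same computation.

```python
import struct

def f32(x):
    """Round a Python float (double) to IEEE-754 single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

def naive_sum(xs):
    """Original program: accumulate in single precision."""
    acc = 0.0
    for x in xs:
        acc = f32(acc + f32(x))
    return acc

def shadow_sum(xs):
    """Shadow execution: the same computation at double precision."""
    acc = 0.0
    for x in xs:
        acc += x
    return acc

# 0.1 is not exactly representable, so single-precision rounding error
# accumulates; the shadow exposes the divergence.
xs = [0.1] * 100_000
lo, hi = naive_sum(xs), shadow_sum(xs)
rel_err = abs(lo - hi) / abs(hi)
print(f"float32 result {lo:.6f}, shadow {hi:.6f}, relative error {rel_err:.2e}")
```

The paper's contribution is doing this for selected program regions on separate cores, starting shadow tasks from arbitrary memory states and keeping them on the original control-flow path, rather than the lock-step sequential comparison shown here.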
|
Christakis, Maria |
ESEC/FSE '21: "Metamorphic Testing of Datalog ..."
Metamorphic Testing of Datalog Engines
Muhammad Numair Mansur, Maria Christakis, and Valentin Wüstholz (MPI-SWS, Germany; ConsenSys, Germany) Datalog is a popular query language with applications in several domains. Like any complex piece of software, Datalog engines may contain bugs. The most critical ones manifest as incorrect results when evaluating queries—we refer to these as query bugs. Given the wide applicability of the language, query bugs may have detrimental consequences, for instance, by compromising the soundness of a program analysis that is implemented and formalized in Datalog. In this paper, we present the first metamorphic-testing approach for detecting query bugs in Datalog engines. We ran our tool on three mature engines and found 13 previously unknown query bugs, some of which are deep and revealed critical semantic issues. @InProceedings{ESEC/FSE21p639, author = {Muhammad Numair Mansur and Maria Christakis and Valentin Wüstholz}, title = {Metamorphic Testing of Datalog Engines}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {639--650}, doi = {10.1145/3468264.3468573}, year = {2021}, } Publisher's Version Info |
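The core idea of metamorphic testing, as used in the entry above, can be illustrated with a toy sketch (this is not the authors' tool): apply a transformation known to preserve query semantics and flag any difference in results as a query bug. The miniature "engine" below just computes a transitive closure and is purely a hypothetical stand-in for a real Datalog engine:

```python
def evaluate(facts):
    """Toy 'engine': computes the transitive closure of an edge
    relation -- a stand-in for a real Datalog engine under test."""
    path = set(facts)
    while True:
        # Join step: (a, b) and (b, d) derive (a, d).
        new = {(a, d) for (a, b) in path for (c, d) in path if b == c}
        if new <= path:          # fixpoint reached
            return frozenset(path)
        path |= new

def add_redundant_fact(facts):
    # Equivalence-preserving transform: re-add an existing fact
    # (a no-op under set semantics); real transforms rewrite rules.
    return set(facts) | {next(iter(facts))}

base = {("a", "b"), ("b", "c")}
# Metamorphic relation: both programs must yield the same results;
# a mismatch would reveal a query bug in the engine.
print(evaluate(base) == evaluate(add_redundant_fact(base)))
```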
|
Cito, Jürgen |
ESEC/FSE '21: "Explaining Mispredictions ..."
Explaining Mispredictions of Machine Learning Models using Rule Induction
Jürgen Cito, Isil Dillig, Seohyun Kim, Vijayaraghavan Murali, and Satish Chandra (TU Vienna, Austria; Facebook, Austria; University of Texas at Austin, USA; Facebook, USA) While machine learning (ML) models play an increasingly prevalent role in many software engineering tasks, their prediction accuracy is often problematic. When these models do mispredict, it can be very difficult to isolate the cause. In this paper, we propose a technique that aims to facilitate the debugging process of trained statistical models. Given an ML model and a labeled data set, our method produces an interpretable characterization of the data on which the model performs particularly poorly. The output of our technique can be useful for understanding limitations of the training data or the model itself; it can also be useful for ensembling if there are multiple models with different strengths. We evaluate our approach through case studies and illustrate how it can be used to improve the accuracy of predictive models used for software engineering tasks within Facebook. @InProceedings{ESEC/FSE21p716, author = {Jürgen Cito and Isil Dillig and Seohyun Kim and Vijayaraghavan Murali and Satish Chandra}, title = {Explaining Mispredictions of Machine Learning Models using Rule Induction}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {716--727}, doi = {10.1145/3468264.3468614}, year = {2021}, } Publisher's Version |
|
Clark, David |
ESEC/FSE '21: "Software Robustness: A Survey, ..."
Software Robustness: A Survey, a Theory, and Prospects
Justyna Petke, David Clark, and William B. Langdon (University College London, UK) If a software execution is disrupted, witnessing the execution at a later point may see evidence of the disruption or not. If not, we say the disruption failed to propagate. One name for this phenomenon is software robustness but it appears in different contexts in software engineering with different names. Contexts include testing, security, reliability, and automated code improvement or repair. Names include coincidental correctness, correctness attraction, and transient error reliability. As witnessed, it is a dynamic phenomenon, but any explanation with predictive power must necessarily take a static view. As a dynamic/static phenomenon, it is convenient to take a statistical view of it, which we do by way of information theory. We theorise that for failed disruption propagation to occur, a necessary condition is that the code region where the disruption occurs is composed with or succeeded by a subsequent code region that suffers entropy loss over all executions. The higher the entropy loss, the higher the likelihood that a disruption in the first region fails to propagate to the downstream observation point. We survey different research silos that address this phenomenon and explain how the theory might be exploited in software engineering. @InProceedings{ESEC/FSE21p1475, author = {Justyna Petke and David Clark and William B. Langdon}, title = {Software Robustness: A Survey, a Theory, and Prospects}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1475--1478}, doi = {10.1145/3468264.3473133}, year = {2021}, } Publisher's Version |
|
Cleland-Huang, Jane |
ESEC/FSE '21: "Hazard Analysis for Human-on-the-Loop ..."
Hazard Analysis for Human-on-the-Loop Interactions in sUAS Systems
Michael Vierhauser, Md Nafee Al Islam, Ankit Agrawal, Jane Cleland-Huang, and James Mason (JKU Linz, Austria; University of Notre Dame, USA; Northrop Grumman, USA) With the rise of new AI technologies, autonomous systems are moving towards a paradigm in which increasing levels of responsibility are shifted from the human to the system, creating a transition from human-in-the-loop systems to human-on-the-loop (HoTL) systems. This has a significant impact on the safety analysis of such systems, as new types of errors occurring at the boundaries of human-machine interactions need to be taken into consideration. Traditional safety analysis typically focuses on system-level hazards with little focus on user-related or user-induced hazards that can cause critical system failures. To address this issue, we construct domain-level safety analysis assets for sUAS (small unmanned aerial systems) applications and describe the process we followed to explicitly and systematically identify Human Interaction Points (HiPs), Hazard Factors, and Mitigations from system hazards. We evaluate our approach by first investigating the extent to which recent sUAS incidents are covered by our hazard trees, and second by performing a study with six domain experts using our hazard trees to identify and document hazards for sUAS usage scenarios. Our study showed that our hazard trees provided effective coverage for a wide variety of sUAS application scenarios and were useful for stimulating safety thinking and helping users to identify and potentially mitigate human-interaction hazards. @InProceedings{ESEC/FSE21p8, author = {Michael Vierhauser and Md Nafee Al Islam and Ankit Agrawal and Jane Cleland-Huang and James Mason}, title = {Hazard Analysis for Human-on-the-Loop Interactions in sUAS Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {8--19}, doi = {10.1145/3468264.3468534}, year = {2021}, } Publisher's Version Info |
|
Cohen, Sophie |
ESEC/FSE '21: "Contextualizing Toxicity in ..."
Contextualizing Toxicity in Open Source: A Qualitative Study
Sophie Cohen (Wesleyan University, USA) In this paper, we study toxic online interactions in issue discussions of open-source communities. Our goal is to qualitatively understand how toxicity impacts an open-source community like GitHub. We are driven by users complaining about toxicity, which leads to burnout and disengagement from the site. We collect a substantial sample of toxic interactions and qualitatively analyze their characteristics to ground future discussions and intervention design. @InProceedings{ESEC/FSE21p1669, author = {Sophie Cohen}, title = {Contextualizing Toxicity in Open Source: A Qualitative Study}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1669--1671}, doi = {10.1145/3468264.3473492}, year = {2021}, } Publisher's Version |
|
Coleman, Cora |
ESEC/FSE '21: "Synthesis of Web Layouts from ..."
Synthesis of Web Layouts from Examples
Dylan Lukes, John Sarracino, Cora Coleman, Hila Peleg, Sorin Lerner, and Nadia Polikarpova (University of California at San Diego, USA; Cornell University, USA; Technion, Israel) We present a new technique for synthesizing dynamic, constraint-based visual layouts from examples. Our technique tackles two major challenges of layout synthesis. First, realistic layouts, especially on the web, often contain hundreds of elements, so the synthesizer needs to scale to layouts of this complexity. Second, in common usage scenarios, examples contain noise, so the synthesizer needs to be tolerant to imprecise inputs. To address these challenges we propose a two-phase approach to synthesis, where a local inference phase rapidly generates a set of likely candidate constraints that satisfy the given examples, and then a global inference phase selects a subset of the candidates that generalizes to unseen inputs. This separation of concerns helps our technique tackle the two challenges: the local phase employs Bayesian inference to handle noisy inputs, while the global phase leverages the hierarchical nature of complex layouts to decompose the global inference problem into inference of independent sub-layouts. We implemented this technique in a tool called Mockdown and evaluated it on nine real-world web layouts, as well as a series of widespread layout components and an existing dataset of 644 Android applications. Our experiments show that Mockdown is able to synthesize a highly accurate layout for the majority of benchmarks from just three examples (two for Android layouts), and that it scales to layouts with over 600 elements, about 30x more than has been reported in prior work on layout synthesis. 
@InProceedings{ESEC/FSE21p651, author = {Dylan Lukes and John Sarracino and Cora Coleman and Hila Peleg and Sorin Lerner and Nadia Polikarpova}, title = {Synthesis of Web Layouts from Examples}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {651--663}, doi = {10.1145/3468264.3468533}, year = {2021}, } Publisher's Version Info Artifacts Reusable |
|
Dang, Yingnong |
ESEC/FSE '21: "Onion: Identifying Incident-Indicating ..."
Onion: Identifying Incident-Indicating Logs for Cloud Systems
Xu Zhang, Yong Xu, Si Qin, Shilin He, Bo Qiao, Ze Li, Hongyu Zhang, Xukun Li, Yingnong Dang, Qingwei Lin, Murali Chintalapati, Saravanakumar Rajmohan, and Dongmei Zhang (Microsoft Research, China; Microsoft Azure, USA; University of Newcastle, Australia; Microsoft 365, USA) In cloud systems, incidents affect the availability of services and require quick mitigation actions. Once an incident occurs, operators and developers often examine logs to perform fault diagnosis. However, the large volume of diverse logs and the overwhelming details in log data make the manual diagnosis process time-consuming and error-prone. In this paper, we propose Onion, an automatic solution for precisely and efficiently locating incident-indicating logs, which can provide useful clues for diagnosing the incidents. We first point out three criteria for localizing incident-indicating logs, i.e., Consistency, Impact, and Bilateral-Difference. Then we propose a novel agglomeration of logs, called log clique, based on which these criteria are satisfied. To obtain log cliques, we develop an incident-aware log representation and a progressive log clustering technique. Contrast analysis is then performed on the cliques to identify the incident-indicating logs. We have evaluated Onion using well-labeled log datasets. Onion achieves an average F1-score of 0.95 and can process millions of logs in only a few minutes, demonstrating its effectiveness and efficiency. Onion has also been successfully applied to the cloud system of Microsoft. Its practicability has been confirmed through the quantitative and qualitative analysis of the real incident cases. 
@InProceedings{ESEC/FSE21p1253, author = {Xu Zhang and Yong Xu and Si Qin and Shilin He and Bo Qiao and Ze Li and Hongyu Zhang and Xukun Li and Yingnong Dang and Qingwei Lin and Murali Chintalapati and Saravanakumar Rajmohan and Dongmei Zhang}, title = {Onion: Identifying Incident-Indicating Logs for Cloud Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1253--1263}, doi = {10.1145/3468264.3473919}, year = {2021}, } Publisher's Version |
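The "Bilateral-Difference" criterion from the Onion entry above can be sketched very roughly (this is a hypothetical toy, not Onion itself, which first clusters logs into cliques): rank log templates by how much more frequent they are during the incident window than before it. All log data below is invented for illustration:

```python
from collections import Counter

# Hypothetical log lines before and during an incident window.
before = ["conn ok", "conn ok", "gc pause", "conn ok"]
during = ["conn ok", "disk error", "disk error", "gc pause", "disk error"]

def contrast(before, during):
    """Toy bilateral-difference analysis: score each template by the
    change in its relative frequency across the two windows."""
    b, d = Counter(before), Counter(during)
    score = {t: d[t] / len(during) - b[t] / len(before)
             for t in set(before) | set(during)}
    # Templates that surge during the incident rank first.
    return sorted(score, key=score.get, reverse=True)

print(contrast(before, during)[0])  # the surging "disk error" template
```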
|
Daniakin, Kirill |
ESEC/FSE '21: "Overcoming Metric Diversity ..."
Overcoming Metric Diversity in Meta-analysis for Software Engineering: Proposed Approach and a Case Study on Its Usage on the Effects of Software Reuse
Kirill Daniakin (Innopolis University, Russia) This work addresses the problem of metric diversity in meta-analysis for Software Engineering by clustering studies using input-output tables and by vote-counting. Diversity arises when researchers, measuring the same phenomena, use different, and typically "incomparable," metrics, making a direct analysis of the effects and their sizes impossible. Additionally, this work discusses an application of the proposed approach to the case of Software Reuse. @InProceedings{ESEC/FSE21p1677, author = {Kirill Daniakin}, title = {Overcoming Metric Diversity in Meta-analysis for Software Engineering: Proposed Approach and a Case Study on Its Usage on the Effects of Software Reuse}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1677--1679}, doi = {10.1145/3468264.3473488}, year = {2021}, } Publisher's Version |
|
Datta, Mohit |
ESEC/FSE '21: "Intelligent Container Reallocation ..."
Intelligent Container Reallocation at Microsoft 365
Bo Qiao, Fangkai Yang, Chuan Luo, Yanan Wang, Johnny Li, Qingwei Lin, Hongyu Zhang, Mohit Datta, Andrew Zhou, Thomas Moscibroda, Saravanakumar Rajmohan, and Dongmei Zhang (Microsoft Research, China; Microsoft 365, China; University of Newcastle, Australia; Microsoft 365, USA; Microsoft Azure, USA) The use of containers in microservices has gained popularity as it facilitates agile development, resource governance, and software maintenance. Container reallocation aims to achieve workload balance via reallocating containers over physical machines. It affects the overall performance of microservice-based systems. However, container scheduling and reallocation remain an open issue due to their complexity in real-world scenarios. In this paper, we propose a novel Multi-Phase Local Search (MPLS) algorithm to optimize container reallocation. The experimental results show that our optimization algorithm outperforms state-of-the-art methods. In practice, it has been successfully applied to the Microsoft 365 system to mitigate hotspot machines and balance workloads across the entire system. @InProceedings{ESEC/FSE21p1438, author = {Bo Qiao and Fangkai Yang and Chuan Luo and Yanan Wang and Johnny Li and Qingwei Lin and Hongyu Zhang and Mohit Datta and Andrew Zhou and Thomas Moscibroda and Saravanakumar Rajmohan and Dongmei Zhang}, title = {Intelligent Container Reallocation at Microsoft 365}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1438--1443}, doi = {10.1145/3468264.3473936}, year = {2021}, } Publisher's Version |
|
Davis, James C. |
ESEC/FSE '21: "A Replication of ‘DeepBugs: ..."
A Replication of ‘DeepBugs: A Learning Approach to Name-based Bug Detection’
Jordan Winkler, Abhimanyu Agarwal, Caleb Tung, Dario Rios Ugalde, Young Jin Jung, and James C. Davis (Purdue University, USA; Lockheed Martin, USA) We replicated the main result of DeepBugs, a bug detection algorithm for name-based bugs. The original authors evaluated it in three contexts: swapped-argument bugs, wrong binary operator, and wrong binary operator operands. We followed the algorithm and replicated the results for swapped-argument bugs. Our replication used independent implementations of the major components: training set generation, token vectorization, and neural network data pipeline, model, and loss function. Using the same dataset and the same testing process, we report comparable performance: within 2% of the accuracy reported by Pradel and Sen. @InProceedings{ESEC/FSE21p1604, author = {Jordan Winkler and Abhimanyu Agarwal and Caleb Tung and Dario Rios Ugalde and Young Jin Jung and James C. Davis}, title = {A Replication of ‘DeepBugs: A Learning Approach to Name-based Bug Detection’}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1604--1604}, doi = {10.1145/3468264.3477221}, year = {2021}, } Publisher's Version |
|
Devanbu, Premkumar T. |
ESEC/FSE '21: "Learning Type Annotation: ..."
Learning Type Annotation: Is Big Data Enough?
Kevin Jesse, Premkumar T. Devanbu, and Toufique Ahmed (University of California at Davis, USA) TypeScript is a widely used optionally-typed language where developers can adopt “pay as you go” typing: they can add types as desired, and benefit from static typing. The “type annotation tax” or manual effort required to annotate new or existing TypeScript can be reduced by a variety of automatic methods. Probabilistic machine-learning (ML) approaches work quite well. ML approaches use different inductive biases, ranging from simple token sequences to complex graphical neural network (GNN) models capturing syntax and semantic relations. More sophisticated inductive biases are hand-engineered to exploit the formal nature of software. Rather than deploying fancy inductive biases for code, can we just use “big data” to learn natural patterns relevant to typing? We find evidence suggesting that this is the case. We present TypeBert, demonstrating that even with simple token-sequence inductive bias used in BERT-style models and enough data, type-annotation performance of the most sophisticated models can be surpassed. @InProceedings{ESEC/FSE21p1483, author = {Kevin Jesse and Premkumar T. Devanbu and Toufique Ahmed}, title = {Learning Type Annotation: Is Big Data Enough?}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1483--1486}, doi = {10.1145/3468264.3473135}, year = {2021}, } Publisher's Version |
|
Dilhara, Malinda |
ESEC/FSE '21: "Discovering Repetitive Code ..."
Discovering Repetitive Code Changes in ML Systems
Malinda Dilhara (University of Colorado at Boulder, USA) Similar to software evolution in other software systems, ML software systems evolve with many repetitive changes. Despite some research and tooling for repetitive code changes that exist in Java and other languages, there is a lack of such tools for Python. Given the significant rise of ML software development, and that many ML developers are not professionally trained developers, the lack of software evolution tools for ML code is even more critical. To bring the ML developers’ toolset into the 21st century, we implemented an approach to adapt and reuse the vast ecosystem of Java static analysis tools for Python. Using this approach, we adapted two software evolution tools, RefactoringMiner and CPATMiner, to Python. With the tools, we conducted the first and most fine-grained study on code change patterns in 59 ML systems and surveyed 253 developers. We recommend empirically-justified, actionable opportunities for tool builders and release the tools for researchers. @InProceedings{ESEC/FSE21p1683, author = {Malinda Dilhara}, title = {Discovering Repetitive Code Changes in ML Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1683--1685}, doi = {10.1145/3468264.3473493}, year = {2021}, } Publisher's Version |
|
Dillig, Isil |
ESEC/FSE '21: "Checking Conformance of Applications ..."
Checking Conformance of Applications against GUI Policies
Zhen Zhang, Yu Feng, Michael D. Ernst, Sebastian Porst, and Isil Dillig (University of Washington, USA; University of California at Santa Barbara, USA; Google, USA; University of Texas at Austin, USA) A good graphical user interface (GUI) is crucial for an application's usability, so vendors and regulatory agencies increasingly place restrictions on how GUI elements should appear to and interact with users. Motivated by this concern, this paper presents a new technique (based on static analysis) for checking conformance between (Android) applications and GUI policies expressed in a formal specification language. In particular, this paper (1) describes a specification language for formalizing GUI policies, (2) proposes a new program abstraction called an _event-driven layout forest_, and (3) describes a static analysis for constructing this abstraction and checking it against a GUI policy. We have implemented the proposed approach in a tool called Venus, and we evaluate it on 2361 Android applications and 17 policies. Our evaluation shows that Venus can uncover malicious applications that perform ad fraud and identify violations of GUI design guidelines and GDPR laws. @InProceedings{ESEC/FSE21p95, author = {Zhen Zhang and Yu Feng and Michael D. Ernst and Sebastian Porst and Isil Dillig}, title = {Checking Conformance of Applications against GUI Policies}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {95--106}, doi = {10.1145/3468264.3468561}, year = {2021}, } Publisher's Version ESEC/FSE '21: "Explaining Mispredictions ..." Explaining Mispredictions of Machine Learning Models using Rule Induction Jürgen Cito, Isil Dillig, Seohyun Kim, Vijayaraghavan Murali, and Satish Chandra (TU Vienna, Austria; Facebook, Austria; University of Texas at Austin, USA; Facebook, USA) While machine learning (ML) models play an increasingly prevalent role in many software engineering tasks, their prediction accuracy is often problematic. 
When these models do mispredict, it can be very difficult to isolate the cause. In this paper, we propose a technique that aims to facilitate the debugging process of trained statistical models. Given an ML model and a labeled data set, our method produces an interpretable characterization of the data on which the model performs particularly poorly. The output of our technique can be useful for understanding limitations of the training data or the model itself; it can also be useful for ensembling if there are multiple models with different strengths. We evaluate our approach through case studies and illustrate how it can be used to improve the accuracy of predictive models used for software engineering tasks within Facebook. @InProceedings{ESEC/FSE21p716, author = {Jürgen Cito and Isil Dillig and Seohyun Kim and Vijayaraghavan Murali and Satish Chandra}, title = {Explaining Mispredictions of Machine Learning Models using Rule Induction}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {716--727}, doi = {10.1145/3468264.3468614}, year = {2021}, } Publisher's Version |
|
Ding, Yi |
ESEC/FSE '21: "Generalizable and Interpretable ..."
Generalizable and Interpretable Learning for Configuration Extrapolation
Yi Ding, Ahsan Pervaiz, Michael Carbin, and Henry Hoffmann (Massachusetts Institute of Technology, USA; University of Chicago, USA) Modern software applications are increasingly configurable, which puts a burden on users to tune these configurations for their target hardware and workloads. To help users, machine learning techniques can model the complex relationships between software configuration parameters and performance. While powerful, these learners have two major drawbacks: (1) they rarely incorporate prior knowledge and (2) they produce outputs that are not interpretable by users. These limitations make it difficult to (1) leverage information a user has already collected (e.g., tuning for new hardware using the best configurations from old hardware) and (2) gain insights into the learner’s behavior (e.g., understanding why the learner chose different configurations on different hardware or for different workloads). To address these issues, this paper presents two configuration optimization tools, GIL and GIL+, using the proposed generalizable and interpretable learning approaches. To incorporate prior knowledge, the proposed tools (1) start from known configurations, (2) iteratively construct a new linear model, (3) extrapolate better performance configurations from that model, and (4) repeat. Since the base learners are linear models, these tools are inherently interpretable. We enhance this property with a graphical representation of how they arrived at the highest performance configuration. We evaluate GIL and GIL+ by using them to configure Apache Spark workloads on different hardware platforms and find that, compared to prior work, GIL and GIL+ produce comparable, and sometimes even better performance configurations, but with interpretable results. 
@InProceedings{ESEC/FSE21p728, author = {Yi Ding and Ahsan Pervaiz and Michael Carbin and Henry Hoffmann}, title = {Generalizable and Interpretable Learning for Configuration Extrapolation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {728--740}, doi = {10.1145/3468264.3468603}, year = {2021}, } Publisher's Version |
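The iterative loop described in the GIL entry above (start from known configurations, fit a linear model, extrapolate a better configuration, repeat) can be sketched as follows. This is a minimal toy, not the authors' tools: the two-parameter configuration space and the performance function are hypothetical stand-ins for measuring a real workload:

```python
import numpy as np

def performance(config):
    # Hypothetical stand-in for benchmarking a workload (lower is
    # better); a real system would run e.g. a Spark job here.
    x, y = config
    return (x - 3.0) ** 2 + (y - 5.0) ** 2 + 10.0

def extrapolate(configs, perfs, step=1.0):
    """Fit a linear model perf ~ w . config + b over all observations,
    then step the best-known configuration against the weights w.
    The weights themselves are the interpretable output."""
    X = np.column_stack([configs, np.ones(len(configs))])
    coef, *_ = np.linalg.lstsq(X, np.asarray(perfs), rcond=None)
    w = coef[:-1]                               # linear weights
    best = configs[int(np.argmin(perfs))]
    return best - step * w / (np.linalg.norm(w) + 1e-12)

# Start from a few known configurations (e.g. tuned on old hardware).
configs = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([0.0, 2.0])]
perfs = [performance(c) for c in configs]

for _ in range(30):          # iterate: fit, extrapolate, measure
    cand = extrapolate(configs, perfs)
    configs.append(cand)
    perfs.append(performance(cand))

print(min(perfs))            # best performance found so far
```

Because each round fits a plain linear model, the learned weights directly show which parameter the extrapolation is pushing on, which is the interpretability property the abstract emphasizes.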
|
Dissanayake, Nesara |
ESEC/FSE '21: "A Grounded Theory of the Role ..."
A Grounded Theory of the Role of Coordination in Software Security Patch Management
Nesara Dissanayake, Mansooreh Zahedi, Asangi Jayatilaka, and Muhammad Ali Babar (University of Adelaide, Australia) Several disastrous security attacks can be attributed to delays in patching software vulnerabilities. While researchers and practitioners have paid significant attention to automating the vulnerability identification and patch development activities of software security patch management, there has been relatively little effort dedicated to gaining an in-depth understanding of the socio-technical aspects, e.g., coordination of interdependent activities of the patching process and patching decisions, that may cause delays in applying security patches. We report on a Grounded Theory study of the role of coordination in security patch management. The reported theory consists of four inter-related dimensions, i.e., causes, breakdowns, constraints, and mechanisms. The theory explains the causes that define the need for coordination among interdependent software/hardware components and multiple stakeholders’ decisions, the constraints that can negatively impact coordination, the breakdowns in coordination, and the potential corrective measures. This study provides potentially useful insights for researchers and practitioners who can carefully consider the needs of and devise suitable solutions for supporting the coordination of interdependencies involved in security patch management. @InProceedings{ESEC/FSE21p793, author = {Nesara Dissanayake and Mansooreh Zahedi and Asangi Jayatilaka and Muhammad Ali Babar}, title = {A Grounded Theory of the Role of Coordination in Software Security Patch Management}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {793--805}, doi = {10.1145/3468264.3468595}, year = {2021}, } Publisher's Version |
|
Dong, Hang |
ESEC/FSE '21: "Effective Low Capacity Status ..."
Effective Low Capacity Status Prediction for Cloud Systems
Hang Dong, Si Qin, Yong Xu, Bo Qiao, Shandan Zhou, Xian Yang, Chuan Luo, Pu Zhao, Qingwei Lin, Hongyu Zhang, Abulikemu Abuduweili, Sanjay Ramanujan, Karthikeyan Subramanian, Andrew Zhou, Saravanakumar Rajmohan, Dongmei Zhang, and Thomas Moscibroda (Microsoft Research, China; Microsoft Azure, USA; Hong Kong Baptist University, China; University of Newcastle, Australia; Microsoft 365, China; Microsoft 365, USA) In cloud systems, accurate capacity planning is very important for cloud providers to improve service availability. Traditional methods that simply predict "when the available resources are exhausted" are not effective due to customer demand fragmentation and platform allocation constraints. In this paper, we propose a novel prediction approach which proactively predicts the level of resource allocation failures from the perspective of low capacity status (LCS). By jointly considering the data from different sources in both time series form and static form, the proposed approach can make accurate LCS predictions in a complex and dynamic cloud environment, and thereby improve the service availability of cloud systems. The proposed approach is evaluated on real-world datasets collected from a large-scale public cloud platform, and the results confirm its effectiveness. @InProceedings{ESEC/FSE21p1236, author = {Hang Dong and Si Qin and Yong Xu and Bo Qiao and Shandan Zhou and Xian Yang and Chuan Luo and Pu Zhao and Qingwei Lin and Hongyu Zhang and Abulikemu Abuduweili and Sanjay Ramanujan and Karthikeyan Subramanian and Andrew Zhou and Saravanakumar Rajmohan and Dongmei Zhang and Thomas Moscibroda}, title = {Effective Low Capacity Status Prediction for Cloud Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1236--1241}, doi = {10.1145/3468264.3473917}, year = {2021}, } Publisher's Version |
|
Dong, Helen |
ESEC/FSE '21: "A Qualitative Study of Cleaning ..."
A Qualitative Study of Cleaning in Jupyter Notebooks
Helen Dong (Carnegie Mellon University, USA) Data scientists commonly use computational notebooks because they provide a good environment for testing multiple models. However, once the scientist completes the code and finds the ideal model, the data scientist will have to dedicate time to clean up the code in order for others to understand it. In this paper, we perform a qualitative study on how scientists clean their code in hopes of being able to suggest a tool to automate this process. Our end goal is for tool builders to address possible gaps and provide additional aid to data scientists, who can then focus more on their actual work rather than the routine and tedious cleaning duties. @InProceedings{ESEC/FSE21p1663, author = {Helen Dong}, title = {A Qualitative Study of Cleaning in Jupyter Notebooks}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1663--1665}, doi = {10.1145/3468264.3473490}, year = {2021}, } Publisher's Version |
|
Dong, Jin Song |
ESEC/FSE '21: "Graph-Based Seed Object Synthesis ..."
Graph-Based Seed Object Synthesis for Search-Based Unit Testing
Yun Lin, You Sheng Ong, Jun Sun, Gordon Fraser, and Jin Song Dong (National University of Singapore, Singapore; Singapore Management University, Singapore; University of Passau, Germany) Search-based software testing (SBST) generates tests using search algorithms guided by measurements gauging how far a test case is away from exercising a coverage goal. The effectiveness of SBST largely depends on the continuity and monotonicity of the fitness landscape decided by these measurements and the search operators. Unfortunately, the fitness landscape is challenging when the function under test takes object inputs, as classical measurements hardly provide guidance for constructing legitimate object inputs. To overcome this problem, we propose test seeds, i.e., test code skeletons of legitimate objects which enable the use of classical measurements. Given a target branch in a function under test, we first statically analyze the function to build an object construction graph that captures the relation between the operands of the target method and the states of their relevant object inputs. Based on the graph, we synthesize test template code where each "slot" is a mutation point for the search algorithm. This approach can be seamlessly integrated with existing SBST algorithms, and we implemented EvoObj on top of EvoSuite. Our experiments show that EvoObj outperforms EvoSuite with statistical significance on 2750 methods over 103 open source Java projects using state-of-the-art SBST algorithms. @InProceedings{ESEC/FSE21p1068, author = {Yun Lin and You Sheng Ong and Jun Sun and Gordon Fraser and Jin Song Dong}, title = {Graph-Based Seed Object Synthesis for Search-Based Unit Testing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1068--1080}, doi = {10.1145/3468264.3468619}, year = {2021}, } Publisher's Version Info |
|
Dong, Jinhao |
ESEC/FSE '21: "Boosting Coverage-Based Fault ..."
Boosting Coverage-Based Fault Localization via Graph-Based Representation Learning
Yiling Lou, Qihao Zhu, Jinhao Dong, Xia Li, Zeyu Sun, Dan Hao, Lu Zhang, and Lingming Zhang (Peking University, China; Kennesaw State University, USA; University of Illinois at Urbana-Champaign, USA) Coverage-based fault localization has been extensively studied in the literature due to its effectiveness and lightweightness for real-world systems. However, existing techniques often utilize coverage in an oversimplified way by abstracting detailed coverage into numbers of tests or boolean vectors, thus limiting their effectiveness in practice. In this work, we present a novel coverage-based fault localization technique, GRACE, which fully utilizes detailed coverage information with graph-based representation learning. Our intuition is that coverage can be regarded as connective relationships between tests and program entities, which can be inherently and integrally represented by a graph structure: with tests and program entities as nodes, and coverage and code structures as edges. Therefore, we first propose a novel graph-based representation to preserve all detailed coverage information and fine-grained code structures in one graph. Then we leverage Gated Graph Neural Network to learn valuable features from the graph-based coverage representation and rank program entities in a listwise way. Our evaluation on the widely used benchmark Defects4J (V1.2.0) shows that GRACE significantly outperforms state-of-the-art coverage-based fault localization: GRACE localizes 195 bugs within Top-1 whereas the best compared technique can at most localize 166 bugs within Top-1. We further investigate the impact of each GRACE component and find that they all positively contribute to GRACE. In addition, our results also demonstrate that GRACE has learnt essential features from coverage, which are complementary to various information used in existing learning-based fault localization. 
Finally, we evaluate GRACE in the cross-project prediction scenario on 226 additional bugs from Defects4J (V2.0.0), and find that GRACE consistently outperforms state-of-the-art coverage-based techniques. @InProceedings{ESEC/FSE21p664, author = {Yiling Lou and Qihao Zhu and Jinhao Dong and Xia Li and Zeyu Sun and Dan Hao and Lu Zhang and Lingming Zhang}, title = {Boosting Coverage-Based Fault Localization via Graph-Based Representation Learning}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {664--676}, doi = {10.1145/3468264.3468580}, year = {2021}, } Publisher's Version |
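The graph representation the abstract describes (tests and program entities as nodes, coverage relations as edges) can be illustrated with a minimal sketch. This is not the authors' GRACE implementation: the bipartite adjacency structure and the simple failing-test score below are illustrative assumptions, standing in for the Gated Graph Neural Network the paper actually uses.

```python
from collections import defaultdict

def build_coverage_graph(coverage):
    """Bipartite graph: ('test', t) <-> ('entity', e) for every covered entity."""
    graph = defaultdict(set)
    for test, entities in coverage.items():
        for e in entities:
            graph[("test", test)].add(("entity", e))
            graph[("entity", e)].add(("test", test))
    return graph

def rank_entities(graph, failed):
    """Toy ranking: entities covered by more failing tests come first."""
    score = {}
    for kind, name in list(graph):
        if kind == "entity":
            score[name] = sum(t in failed for _, t in graph[(kind, name)])
    return sorted(score, key=score.get, reverse=True)

coverage = {"t1": ["m1", "m2"], "t2": ["m2", "m3"]}
graph = build_coverage_graph(coverage)
ranking = rank_entities(graph, failed={"t2"})  # entities covered by t2 outrank m1
```

The point of the graph view, per the abstract, is that it preserves which test covers which entity rather than collapsing coverage to counts, as spectrum-based formulas do.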
|
Dong, Zhen |
ESEC/FSE '21: "Flaky Test Detection in Android ..."
Flaky Test Detection in Android via Event Order Exploration
Zhen Dong, Abhishek Tiwari, Xiao Liang Yu, and Abhik Roychoudhury (National University of Singapore, Singapore) Validation of Android apps via testing is difficult owing to the presence of flaky tests. Due to non-deterministic execution environments, a sequence of events (a test) may lead to success or failure in unpredictable ways. In this work, we present an approach and tool FlakeScanner for detecting flaky tests through exploration of event orders. Our key observation is that for a test in a mobile app, there is a testing framework thread which creates the test events, a main User-Interface (UI) thread processing these events, and there may be several other background threads running asynchronously. For any event e whose execution involves potential non-determinism, we localize the earliest (latest) event after (before) which e must happen. We then efficiently explore the schedules between the upper/lower bound events while grouping events within a single statement, to find whether the test outcome is flaky. We also create a suite of subject programs called FlakyAppRepo (containing 33 widely-used Android projects) to study flaky tests in Android apps. Our experiments on the subject-suite FlakyAppRepo show FlakeScanner detected 45 out of 52 known flaky tests as well as 245 previously unknown flaky tests among 1444 tests. @InProceedings{ESEC/FSE21p367, author = {Zhen Dong and Abhishek Tiwari and Xiao Liang Yu and Abhik Roychoudhury}, title = {Flaky Test Detection in Android via Event Order Exploration}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {367--378}, doi = {10.1145/3468264.3468584}, year = {2021}, } Publisher's Version |
|
Dritsa, Konstantina |
ESEC/FSE '21: "CrossVul: A Cross-Language ..."
CrossVul: A Cross-Language Vulnerability Dataset with Commit Data
Georgios Nikitopoulos, Konstantina Dritsa, Panos Louridas, and Dimitris Mitropoulos (University of Thessaly, Greece; Athens University of Economics and Business, Greece; University of Athens, Greece) Examining the characteristics of software vulnerabilities and the code that contains them can lead to the development of more secure software. We present a dataset (∼1.4 GB) containing vulnerable source code files together with the corresponding patched versions. Contrary to other existing vulnerability datasets, ours includes vulnerable files written in more than 40 programming languages. Each file is associated with (1) a Common Vulnerability Exposures identifier (CVE ID) and (2) the repository it came from. Further, our dataset can serve as the basis for machine learning applications that identify defects, as we show in specific examples. We also present a supporting dataset that contains commit messages derived from Git commits that serve as security patches. This dataset can be used to train ML models that, in turn, can be used to detect security patch commits, as we highlight in a specific use case. @InProceedings{ESEC/FSE21p1565, author = {Georgios Nikitopoulos and Konstantina Dritsa and Panos Louridas and Dimitris Mitropoulos}, title = {CrossVul: A Cross-Language Vulnerability Dataset with Commit Data}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1565--1569}, doi = {10.1145/3468264.3473122}, year = {2021}, } Publisher's Version |
|
D'Souza, Deepak |
ESEC/FSE '21: "GenSys: A Scalable Fixed-Point ..."
GenSys: A Scalable Fixed-Point Engine for Maximal Controller Synthesis over Infinite State Spaces
Stanly Samuel, Deepak D'Souza, and Raghavan Komondoor (IISc Bengaluru, India) The synthesis of maximally-permissive controllers in infinite-state systems has many practical applications. Such controllers directly correspond to maximal winning strategies in logically specified infinite-state two-player games. In this paper, we introduce a tool called GenSys, a fixed-point engine for computing maximal winning strategies for players in infinite-state safety games. A key feature of GenSys is that it leverages the capabilities of existing off-the-shelf solvers to implement its fixed-point engine. GenSys outperforms state-of-the-art tools in this space by a significant margin: it has solved some of the challenging problems in the area, is scalable, and synthesizes compact controllers that are comparatively small in size and easier to comprehend. GenSys is freely available under an open-source license. @InProceedings{ESEC/FSE21p1585, author = {Stanly Samuel and Deepak D'Souza and Raghavan Komondoor}, title = {GenSys: A Scalable Fixed-Point Engine for Maximal Controller Synthesis over Infinite State Spaces}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1585--1589}, doi = {10.1145/3468264.3473126}, year = {2021}, } Publisher's Version Info |
|
Duan, Zhenhua |
ESEC/FSE '21: "Conditional Interpolation: ..."
Conditional Interpolation: Making Concurrent Program Verification More Effective
Jie Su, Cong Tian, and Zhenhua Duan (Xidian University, China) Due to the state-space explosion problem, efficient verification of large-scale real-world programs is still a big challenge. In particular, thread alternation makes the verification of concurrent programs much more difficult, since it aggravates this problem. In this paper, an application of Craig interpolation, namely conditional interpolation, is proposed to work together with a CEGAR-based approach to reduce the state space of concurrent tasks. Specifically, conditional interpolation is formalized to confine the reachable region of states so that infeasible conditional branches can be pruned. Furthermore, the generated conditional interpolants are utilized to shorten the interpolation paths, which significantly reduces the time consumed for verification. We have implemented the proposed approach on top of an open-source software model checker. Empirical results show that conditional interpolation is effective in improving the verification efficiency of concurrent tasks. @InProceedings{ESEC/FSE21p144, author = {Jie Su and Cong Tian and Zhenhua Duan}, title = {Conditional Interpolation: Making Concurrent Program Verification More Effective}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {144--154}, doi = {10.1145/3468264.3468602}, year = {2021}, } Publisher's Version |
|
Durieux, Thomas |
ESEC/FSE '21: "A Longitudinal Analysis of ..."
A Longitudinal Analysis of Bloated Java Dependencies
César Soto-Valero, Thomas Durieux, and Benoit Baudry (KTH, Sweden) We study the evolution and impact of bloated dependencies in a single software ecosystem: Java/Maven. Bloated dependencies are third-party libraries that are packaged in the application binary but are not needed to run the application. We analyze the history of 435 Java projects. This historical data includes 48,469 distinct dependencies, which we study across a total of 31,515 versions of Maven dependency trees. Bloated dependencies steadily increase over time, and 89.2% of the direct dependencies that are bloated remain bloated in all subsequent versions of the studied projects. This empirical evidence suggests that developers can safely remove a bloated dependency. We further report novel insights regarding the unnecessary maintenance efforts induced by bloat. We find that 22% of dependency updates performed by developers are made on bloated dependencies, and that Dependabot suggests a similar ratio of updates on bloated dependencies. @InProceedings{ESEC/FSE21p1021, author = {César Soto-Valero and Thomas Durieux and Benoit Baudry}, title = {A Longitudinal Analysis of Bloated Java Dependencies}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1021--1031}, doi = {10.1145/3468264.3468589}, year = {2021}, } Publisher's Version Info |
|
Dutta, Saikat |
ESEC/FSE '21: "FLEX: Fixing Flaky Tests in ..."
FLEX: Fixing Flaky Tests in Machine Learning Projects by Updating Assertion Bounds
Saikat Dutta, August Shi, and Sasa Misailovic (University of Illinois at Urbana-Champaign, USA; University of Texas at Austin, USA) Many machine learning (ML) algorithms are inherently random – multiple executions using the same inputs may produce slightly different results each time. Randomness impacts how developers write tests that check for end-to-end quality of their implementations of these ML algorithms. In particular, selecting the proper thresholds for comparing obtained quality metrics with the reference results is a non-intuitive task, which may lead to flaky test executions. We present FLEX, the first tool for automatically fixing flaky tests due to algorithmic randomness in ML algorithms. FLEX fixes tests that use approximate assertions to compare actual and expected values that represent the quality of the outputs of ML algorithms. We present a technique for systematically identifying the acceptable bound between the actual and expected output quality that also minimizes flakiness. Our technique is based on the Peak Over Threshold method from statistical Extreme Value Theory, which estimates the tail distribution of the output values observed from several runs. Based on the tail distribution, FLEX updates the bound used in the test, or selects the number of test re-runs, based on a desired confidence level. We evaluate FLEX on a corpus of 35 tests collected from the latest versions of 21 ML projects. Overall, FLEX identifies and proposes a fix for 28 tests. We sent 19 pull requests, each fixing one test, to the developers. So far, 9 have been accepted by the developers. @InProceedings{ESEC/FSE21p603, author = {Saikat Dutta and August Shi and Sasa Misailovic}, title = {FLEX: Fixing Flaky Tests in Machine Learning Projects by Updating Assertion Bounds}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {603--614}, doi = {10.1145/3468264.3468615}, year = {2021}, } Publisher's Version |
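The Peak-Over-Threshold method mentioned in the FLEX abstract can be sketched as follows. This is a hypothetical illustration, not FLEX itself: the method-of-moments Generalized Pareto fit and the parameter names (`tail_frac`, `flake_prob`) are assumptions made for the example.

```python
import math
import random
import statistics

def pot_assertion_bound(deviations, tail_frac=0.1, flake_prob=1e-3):
    """Peak-Over-Threshold sketch: fit a Generalized Pareto distribution
    (method of moments) to exceedances over a high empirical threshold,
    then return the bound whose estimated exceedance probability is flake_prob."""
    xs = sorted(deviations)
    n = len(xs)
    u = xs[int(n * (1 - tail_frac))]           # threshold at the upper tail
    exceedances = [x - u for x in xs if x > u]
    m = statistics.mean(exceedances)
    v = statistics.variance(exceedances)
    xi = 0.5 * (1 - m * m / v)                 # GPD shape (method of moments)
    sigma = 0.5 * m * (1 + m * m / v)          # GPD scale
    zeta = len(exceedances) / n                # empirical P(deviation > u)
    if abs(xi) < 1e-9:                         # exponential-tail limit case
        return u + sigma * math.log(zeta / flake_prob)
    return u + (sigma / xi) * ((zeta / flake_prob) ** xi - 1)

random.seed(0)
observed = [abs(random.gauss(0.0, 0.01)) for _ in range(500)]  # deviations over runs
bound = pot_assertion_bound(observed)          # candidate assertion bound
```

The idea matches the abstract's description at a high level: estimate the tail of the deviation distribution from several runs, then set the assertion bound so that a passing run exceeds it only with a chosen small probability.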
|
Elbaum, Sebastian |
ESEC/FSE '21: "Swarmbug: Debugging Configuration ..."
Swarmbug: Debugging Configuration Bugs in Swarm Robotics
Chijung Jung, Ali Ahad, Jinho Jung, Sebastian Elbaum, and Yonghwi Kwon (University of Virginia, USA; Georgia Institute of Technology, USA) Swarm robotics collectively solve problems that are challenging for individual robots, from environmental monitoring to entertainment. The algorithms enabling swarms allow individual robots of the swarm to plan, share, and coordinate their trajectories and tasks to achieve a common goal. Such algorithms rely on a large number of configurable parameters that can be tailored to target particular scenarios. This large configuration space, the complexity of the algorithms, and the dependencies with the robots’ setup and performance make debugging and fixing swarm configuration bugs extremely challenging. This paper proposes Swarmbug, a swarm debugging system that automatically diagnoses and fixes buggy behaviors caused by misconfiguration. The essence of Swarmbug is the novel concept called the degree of causal contribution (Dcc), which abstracts the impact of environment configurations (e.g., obstacles) on the drones in a swarm via behavior causal analysis. Swarmbug automatically generates, validates, and ranks fixes for configuration bugs. We evaluate Swarmbug on four diverse swarm algorithms. Swarmbug successfully fixes four configuration bugs in the evaluated algorithms, showing that it is generic and effective. We also conduct a real-world experiment with physical drones to show that Swarmbug’s fix is effective in the real world. @InProceedings{ESEC/FSE21p868, author = {Chijung Jung and Ali Ahad and Jinho Jung and Sebastian Elbaum and Yonghwi Kwon}, title = {Swarmbug: Debugging Configuration Bugs in Swarm Robotics}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {868--880}, doi = {10.1145/3468264.3468601}, year = {2021}, } Publisher's Version Info ESEC/FSE '21: "PHYSFRAME: Type Checking Physical ..." 
PHYSFRAME: Type Checking Physical Frames of Reference for Robotic Systems Sayali Kate, Michael Chinn, Hongjun Choi, Xiangyu Zhang, and Sebastian Elbaum (Purdue University, USA; University of Virginia, USA) A robotic system continuously measures its own motions and the external world during operation. Such measurements are with respect to some frame of reference, i.e., a coordinate system. A nontrivial robotic system has a large number of different frames, and data have to be translated back and forth from one frame to another. The onus is on the developers to get such translation right. However, this is very challenging and error-prone, as evidenced by the large number of questions and issues related to frame usage on developer forums. Since any state variable can be associated with some frame, reference frames can be naturally modeled as variable types. We hence develop a novel type system that can automatically infer variables' frame types and in turn detect any type inconsistencies and violations of frame conventions. The evaluation on a set of 180 publicly available ROS projects shows that our system can detect 190 inconsistencies with 154 true positives. We reported 52 to developers and received 18 responses so far, with 15 fixed/acknowledged. Our technique also finds 45 violations of common practices. @InProceedings{ESEC/FSE21p45, author = {Sayali Kate and Michael Chinn and Hongjun Choi and Xiangyu Zhang and Sebastian Elbaum}, title = {PHYSFRAME: Type Checking Physical Frames of Reference for Robotic Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {45--56}, doi = {10.1145/3468264.3468608}, year = {2021}, } Publisher's Version |
|
El Moussa, Noura |
ESEC/FSE '21: "Health of Smart Ecosystems ..."
Health of Smart Ecosystems
Noura El Moussa, Davide Molinelli, Mauro Pezzè, and Martin Tappler (USI Lugano, Switzerland; Schaffhausen Institute of Technology, Switzerland; TU Graz, Austria; Silicon Austria Labs, Austria) Software is a core component of smart ecosystems, large ‘system communities’ that emerge from the composition of autonomous, independent, and highly heterogeneous systems, such as smart cities, smart grids, and smart buildings. The systems that comprise smart ecosystems are not centrally owned, and mutually interact both explicitly and implicitly, leading to unavoidable contradictions and failures. The distinctive characteristics of smart ecosystems challenge software engineers with problems never addressed before. In this paper we discuss the big challenge of defining a new concept of ‘dependability’ and new approaches to reveal smart ecosystem failures. @InProceedings{ESEC/FSE21p1491, author = {Noura El Moussa and Davide Molinelli and Mauro Pezzè and Martin Tappler}, title = {Health of Smart Ecosystems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1491--1494}, doi = {10.1145/3468264.3473137}, year = {2021}, } Publisher's Version |
|
Elsner, Daniel |
ESEC/FSE '21: "How Can Manual Testing Processes ..."
How Can Manual Testing Processes Be Optimized? Developer Survey, Optimization Guidelines, and Case Studies
Roman Haas, Daniel Elsner, Elmar Juergens, Alexander Pretschner, and Sven Apel (Saarland University, Germany; CQSE, Germany; TU Munich, Germany) Manual software testing is tedious and costly as it involves significant human effort. Yet, it is still widely applied in industry and will be in the foreseeable future. Although there is arguably a great need for optimization of manual testing processes, research focuses mostly on optimization techniques for automated tests. Accordingly, there is no precise understanding of the practices and processes of manual testing in industry, nor of the pitfalls and untapped optimization potential. To shed light on this issue, we conducted a survey among 38 testing professionals from 16 companies to investigate their manual testing processes and to identify potential for optimization. We synthesize guidelines for when optimization techniques from automated testing can be applied to manual testing. By means of case studies on two industrial software projects, we show that fault detection likelihood, test feedback time, and test creation effort can be improved when following our guidelines. @InProceedings{ESEC/FSE21p1281, author = {Roman Haas and Daniel Elsner and Elmar Juergens and Alexander Pretschner and Sven Apel}, title = {How Can Manual Testing Processes Be Optimized? Developer Survey, Optimization Guidelines, and Case Studies}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1281--1291}, doi = {10.1145/3468264.3473922}, year = {2021}, } Publisher's Version Info |
|
Emmi, Michael |
ESEC/FSE '21: "RAPID: Checking API Usage ..."
RAPID: Checking API Usage for the Cloud in the Cloud
Michael Emmi, Liana Hadarean, Ranjit Jhala, Lee Pike, Nicolás Rosner, Martin Schäf, Aritra Sengupta, and Willem Visser (Amazon Web Services, USA) We present RAPID, an industrial-strength analysis developed at AWS that aims to help developers by providing automatic, fast and actionable feedback about correct usage of cloud-service APIs. RAPID’s design is based on the insight that cloud service APIs are structured around short-lived request- and response-objects whose usage patterns can be specified as value-dependent type-state automata and be verified by combining local type-state with global value-flow analyses. We describe various challenges that arose to deploy RAPID at scale. Finally, we present an evaluation that validates our design choices, deployment heuristics, and shows that RAPID is able to quickly and precisely report a wide variety of useful API misuse violations in large, industrial-strength code bases. @InProceedings{ESEC/FSE21p1416, author = {Michael Emmi and Liana Hadarean and Ranjit Jhala and Lee Pike and Nicolás Rosner and Martin Schäf and Aritra Sengupta and Willem Visser}, title = {RAPID: Checking API Usage for the Cloud in the Cloud}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1416--1426}, doi = {10.1145/3468264.3473934}, year = {2021}, } Publisher's Version |
|
Endres, Madeline |
ESEC/FSE '21: "To Read or to Rotate? Comparing ..."
To Read or to Rotate? Comparing the Effects of Technical Reading Training and Spatial Skills Training on Novice Programming Ability
Madeline Endres, Madison Fansher, Priti Shah, and Westley Weimer (University of Michigan, USA) Understanding how to best support and train novice programmers is a critical component of producing better and more diverse software engineers. In this paper, we present the results of a controlled 11-week longitudinal study with 57 CS1 students comparing two skill-based interventions to improve programming performance. The first intervention involves spatial training, an established baseline known to be helpful in engineering contexts. The second intervention is a novel CS-focused technical reading training. In our reading training, we teach strategies for summarizing scientific papers and understanding scientific charts and figures; most of the covered readings were CS1-accessible portions of computer science research papers. For the spatial training, we use a standardized training curriculum previously found to improve programming skills by focusing on spatial ability (i.e., the ability to mentally manipulate objects). We first replicate findings that both reading ability and spatial ability correlate with programming success. Significantly, however, we find that those in our reading training exhibit larger programming ability gains than those in the standard spatial training (p = 0.02, f² = 0.10). We also find that reading trained participants perform particularly well on programming problems that require tracing through code (p = 0.03, f² = 0.10). Our results suggest that technical reading training could be beneficial for novice programmers. Finally, we discuss the implications of our results for future CS1 interventions, the possibility for non-programming based training to positively impact developers, and future directions for software engineering education research. @InProceedings{ESEC/FSE21p754, author = {Madeline Endres and Madison Fansher and Priti Shah and Westley Weimer}, title = {To Read or to Rotate? 
Comparing the Effects of Technical Reading Training and Spatial Skills Training on Novice Programming Ability}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {754--766}, doi = {10.1145/3468264.3468583}, year = {2021}, } Publisher's Version Info |
|
Ernst, Michael D. |
ESEC/FSE '21: "Checking Conformance of Applications ..."
Checking Conformance of Applications against GUI Policies
Zhen Zhang, Yu Feng, Michael D. Ernst, Sebastian Porst, and Isil Dillig (University of Washington, USA; University of California at Santa Barbara, USA; Google, USA; University of Texas at Austin, USA) A good graphical user interface (GUI) is crucial for an application's usability, so vendors and regulatory agencies increasingly place restrictions on how GUI elements should appear to and interact with users. Motivated by this concern, this paper presents a new technique (based on static analysis) for checking conformance between (Android) applications and GUI policies expressed in a formal specification language. In particular, this paper (1) describes a specification language for formalizing GUI policies, (2) proposes a new program abstraction called an _event-driven layout forest_, and (3) describes a static analysis for constructing this abstraction and checking it against a GUI policy. We have implemented the proposed approach in a tool called Venus, and we evaluate it on 2361 Android applications and 17 policies. Our evaluation shows that Venus can uncover malicious applications that perform ad fraud and identify violations of GUI design guidelines and GDPR laws. @InProceedings{ESEC/FSE21p95, author = {Zhen Zhang and Yu Feng and Michael D. Ernst and Sebastian Porst and Isil Dillig}, title = {Checking Conformance of Applications against GUI Policies}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {95--106}, doi = {10.1145/3468264.3468561}, year = {2021}, } Publisher's Version ESEC/FSE '21: "Lightweight and Modular Resource ..." Lightweight and Modular Resource Leak Verification Martin Kellogg, Narges Shadab, Manu Sridharan, and Michael D. Ernst (University of Washington, USA; University of California at Riverside, USA) A resource leak occurs when a program allocates a resource, such as a socket or file handle, but fails to deallocate it. Resource leaks cause resource starvation, slowdowns, and crashes. 
Previous techniques to prevent resource leaks are either unsound, imprecise, inapplicable to existing code, slow, or a combination of these. Static detection of resource leaks requires checking that de-allocation methods are always invoked on relevant objects before they become unreachable. Our key insight is that leak detection can be reduced to an accumulation problem, a class of typestate problems amenable to sound and modular checking without the need for a heavyweight, whole-program alias analysis. The precision of an accumulation analysis can be improved by computing targeted aliasing information, and we augmented our baseline checker with three such novel techniques: a lightweight ownership transfer system; a specialized resource alias analysis; and a system to create a fresh obligation when a non-final resource field is updated. Our approach occupies a unique slice of the design space: it is sound and runs relatively quickly (taking minutes on programs that a state-of-the-art approach took hours to analyze). We implemented our techniques for Java in an open-source tool called the Resource Leak Checker. The Resource Leak Checker revealed 49 real resource leaks in widely-deployed software. It scales well, has a manageable false positive rate (comparable to the high-confidence resource leak analysis built into the Eclipse IDE), and imposes only a small annotation burden (1/1500 LoC) for developers. @InProceedings{ESEC/FSE21p181, author = {Martin Kellogg and Narges Shadab and Manu Sridharan and Michael D. Ernst}, title = {Lightweight and Modular Resource Leak Verification}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {181--192}, doi = {10.1145/3468264.3468576}, year = {2021}, } Publisher's Version Artifacts Functional |
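The reduction of leak detection to an accumulation problem, as this abstract describes, can be illustrated with a tiny dynamic analogue. This is not the Resource Leak Checker (which is a static, annotation-based analysis for Java); the event-trace format and the fixed `{"close"}` obligation below are assumptions made for the sketch.

```python
def find_leaks(events):
    """Track per-resource must-call obligations over an event trace.
    Events (assumed format): ('alloc', r), ('call', r, method), ('unreachable', r)."""
    obligations = {}                              # resource -> methods still required
    leaks = []
    for event in events:
        if event[0] == "alloc":
            obligations[event[1]] = {"close"}     # assumed must-call obligation
        elif event[0] == "call":
            obligations.get(event[1], set()).discard(event[2])
        elif event[0] == "unreachable":
            if obligations.pop(event[1], set()):  # obligations left: report a leak
                leaks.append(event[1])
    return leaks

trace = [("alloc", "s1"), ("call", "s1", "close"), ("unreachable", "s1"),
         ("alloc", "s2"), ("unreachable", "s2")]  # s2 is never closed
leaked = find_leaks(trace)
```

The "accumulation" insight in the abstract is that only the monotone set of methods called on each resource matters, not the order, which is what lets the real checker stay sound without whole-program alias analysis.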
|
Fakhoury, Sarah |
ESEC/FSE '21: "Reassessing Automatic Evaluation ..."
Reassessing Automatic Evaluation Metrics for Code Summarization Tasks
Devjeet Roy, Sarah Fakhoury, and Venera Arnaoudova (Washington State University, USA) In recent years, research in the domain of source code summarization has adopted data-driven techniques pioneered in machine translation (MT). Automatic evaluation metrics such as BLEU, METEOR, and ROUGE, are fundamental to the evaluation of MT systems and have been adopted as proxies of human evaluation in the code summarization domain. However, the extent to which automatic metrics agree with the gold standard of human evaluation has not been evaluated on code summarization tasks. Despite this, marginal improvements in metric scores are often used to discriminate between the performance of competing summarization models. In this paper, we present a critical exploration of the applicability and interpretation of automatic metrics as evaluation techniques for code summarization tasks. We conduct an empirical study with 226 human annotators to assess the degree to which automatic metrics reflect human evaluation. Results indicate that metric improvements of less than 2 points do not guarantee systematic improvements in summarization quality, and are unreliable as proxies of human evaluation. When the difference between metric scores for two summarization approaches increases but remains within 5 points, some metrics such as METEOR and chrF become highly reliable proxies, whereas others, such as corpus BLEU, remain unreliable. Based on these findings, we make several recommendations for the use of automatic metrics to discriminate model performance in code summarization. @InProceedings{ESEC/FSE21p1105, author = {Devjeet Roy and Sarah Fakhoury and Venera Arnaoudova}, title = {Reassessing Automatic Evaluation Metrics for Code Summarization Tasks}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1105--1116}, doi = {10.1145/3468264.3468588}, year = {2021}, } Publisher's Version |
|
Fan, Changjie |
ESEC/FSE '21: "GLIB: Towards Automated Test ..."
GLIB: Towards Automated Test Oracle for Graphically-Rich Applications
Ke Chen, Yufei Li, Yingfeng Chen, Changjie Fan, Zhipeng Hu, and Wei Yang (Netease, China; University of Texas at Dallas, USA) Graphically-rich applications such as games are ubiquitous, with attractive Graphical User Interface (GUI) visual effects that offer a bridge between software applications and end-users. However, various types of graphical glitches may arise from such GUI complexity and have become one of the main components of software compatibility issues. Our study on bug reports from game development teams in NetEase Inc. indicates that graphical glitches frequently occur during GUI rendering and severely degrade the quality of graphically-rich applications such as video games. Existing automated testing techniques for such applications focus mainly on generating various GUI test sequences and checking whether the test sequences can cause crashes. These techniques require constant human attention to capture non-crashing bugs such as bugs causing graphical glitches. In this paper, we present the first step in automating the test oracle for detecting non-crashing bugs in graphically-rich applications. Specifically, we propose GLIB based on a code-based data augmentation technique to detect game GUI glitches. We perform an evaluation of GLIB on 20 real-world game apps (with bug reports available) and the result shows that GLIB can achieve 100% precision and 99.5% recall in detecting non-crashing bugs such as game GUI glitches. Practical application of GLIB on another 14 real-world games (without bug reports) further demonstrates that GLIB can effectively uncover GUI glitches, with 48 of 53 bugs reported by GLIB having been confirmed and fixed so far. 
@InProceedings{ESEC/FSE21p1093, author = {Ke Chen and Yufei Li and Yingfeng Chen and Changjie Fan and Zhipeng Hu and Wei Yang}, title = {GLIB: Towards Automated Test Oracle for Graphically-Rich Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1093--1104}, doi = {10.1145/3468264.3468586}, year = {2021}, } Publisher's Version |
|
Fansher, Madison |
ESEC/FSE '21: "To Read or to Rotate? Comparing ..."
To Read or to Rotate? Comparing the Effects of Technical Reading Training and Spatial Skills Training on Novice Programming Ability
Madeline Endres, Madison Fansher, Priti Shah, and Westley Weimer (University of Michigan, USA) Understanding how to best support and train novice programmers is a critical component of producing better and more diverse software engineers. In this paper, we present the results of a controlled 11-week longitudinal study with 57 CS1 students comparing two skill-based interventions to improve programming performance. The first intervention involves spatial training, an established baseline known to be helpful in engineering contexts. The second intervention is a novel CS-focused technical reading training. In our reading training, we teach strategies for summarizing scientific papers and understanding scientific charts and figures; most of the covered readings were CS1-accessible portions of computer science research papers. For the spatial training, we use a standardized training curriculum previously found to improve programming skills by focusing on spatial ability (i.e., the ability to mentally manipulate objects). We first replicate findings that both reading ability and spatial ability correlate with programming success. Significantly, however, we find that those in our reading training exhibit larger programming ability gains than those in the standard spatial training (p = 0.02, f² = 0.10). We also find that reading trained participants perform particularly well on programming problems that require tracing through code (p = 0.03, f² = 0.10). Our results suggest that technical reading training could be beneficial for novice programmers. Finally, we discuss the implications of our results for future CS1 interventions, the possibility for non-programming based training to positively impact developers, and future directions for software engineering education research. @InProceedings{ESEC/FSE21p754, author = {Madeline Endres and Madison Fansher and Priti Shah and Westley Weimer}, title = {To Read or to Rotate? 
Comparing the Effects of Technical Reading Training and Spatial Skills Training on Novice Programming Ability}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {754--766}, doi = {10.1145/3468264.3468583}, year = {2021}, } Publisher's Version Info |
|
Farmahinifarahani, Farima |
ESEC/FSE '21: "Data-Driven Test Selection ..."
Data-Driven Test Selection at Scale
Sonu Mehta, Farima Farmahinifarahani, Ranjita Bhagwan, Suraj Guptha, Sina Jafari, Rahul Kumar, Vaibhav Saini, and Anirudh Santhiar (Microsoft Research, India; University of California at Irvine, USA; Microsoft, USA) Large-scale services depend on Continuous Integration/Continuous Deployment (CI/CD) processes to maintain their agility and code-quality. Change-based testing plays an important role in finding bugs, but testing after every change is prohibitively expensive at a scale where thousands of changes are committed every hour. Test selection models deal with this issue by running a subset of tests for every change. In this paper, we present a generic, language-agnostic and lightweight statistical model for test selection. Unlike existing techniques, the proposed model does not require complex feature extraction techniques. Consequently, it scales to hundreds of repositories of varying characteristics while capturing more than 99% of buggy pull requests. Additionally, to better evaluate test selection models, we propose application-specific metrics that capture both a reduction in resource cost and a reduction in pull-request turn-around time. By evaluating our model on 22 large repositories at Microsoft, we find that we can save 15%−30% of compute time while reporting back more than 99% of buggy pull requests. @InProceedings{ESEC/FSE21p1225, author = {Sonu Mehta and Farima Farmahinifarahani and Ranjita Bhagwan and Suraj Guptha and Sina Jafari and Rahul Kumar and Vaibhav Saini and Anirudh Santhiar}, title = {Data-Driven Test Selection at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1225--1235}, doi = {10.1145/3468264.3473916}, year = {2021}, } Publisher's Version |
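A lightweight statistical test-selection model of the kind this abstract describes can be sketched from co-failure history alone. This is an illustrative assumption, not the model deployed at Microsoft: the history format and the co-failure count scoring are invented for the example.

```python
from collections import defaultdict

def train_cofailure(history):
    """history: iterable of (changed_files, failed_tests) pairs from past CI runs."""
    counts = defaultdict(lambda: defaultdict(int))
    for files, failed in history:
        for f in files:
            for t in failed:
                counts[f][t] += 1      # how often test t failed when f changed
    return counts

def select_tests(counts, changed_files, budget):
    """Run the `budget` tests that failed most often alongside these files."""
    score = defaultdict(int)
    for f in changed_files:
        for t, c in counts[f].items():
            score[t] += c
    return sorted(score, key=score.get, reverse=True)[:budget]

history = [({"a.py"}, {"t1"}),
           ({"a.py", "b.py"}, {"t1", "t2"}),
           ({"b.py"}, {"t2"})]
counts = train_cofailure(history)
selected = select_tests(counts, {"a.py"}, budget=1)
```

A scheme like this needs no language-specific feature extraction, which is the property the abstract credits for the model's scalability across heterogeneous repositories.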
|
Feng, Yu |
ESEC/FSE '21: "Checking Conformance of Applications ..."
Checking Conformance of Applications against GUI Policies
Zhen Zhang, Yu Feng, Michael D. Ernst, Sebastian Porst, and Isil Dillig (University of Washington, USA; University of California at Santa Barbara, USA; Google, USA; University of Texas at Austin, USA) A good graphical user interface (GUI) is crucial for an application's usability, so vendors and regulatory agencies increasingly place restrictions on how GUI elements should appear to and interact with users. Motivated by this concern, this paper presents a new technique (based on static analysis) for checking conformance between (Android) applications and GUI policies expressed in a formal specification language. In particular, this paper (1) describes a specification language for formalizing GUI policies, (2) proposes a new program abstraction called an _event-driven layout forest_, and (3) describes a static analysis for constructing this abstraction and checking it against a GUI policy. We have implemented the proposed approach in a tool called Venus, and we evaluate it on 2361 Android applications and 17 policies. Our evaluation shows that Venus can uncover malicious applications that perform ad fraud and identify violations of GUI design guidelines and GDPR laws. @InProceedings{ESEC/FSE21p95, author = {Zhen Zhang and Yu Feng and Michael D. Ernst and Sebastian Porst and Isil Dillig}, title = {Checking Conformance of Applications against GUI Policies}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {95--106}, doi = {10.1145/3468264.3468561}, year = {2021}, } Publisher's Version |
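To make the conformance-checking idea concrete, here is a toy Python sketch of checking one invented GUI policy against a flattened widget list; Venus itself works on a much richer event-driven layout forest and a formal specification language, so everything below is an assumption for illustration:

```python
# Invented policy: any screen that collects a password must also
# display a privacy-policy link.
screen = {
    "widgets": [
        {"type": "EditText", "input": "password"},
        {"type": "Button", "text": "Sign in"},
    ]
}

def violates_policy(screen):
    collects_password = any(w.get("input") == "password"
                            for w in screen["widgets"])
    shows_privacy = any("privacy" in w.get("text", "").lower()
                        for w in screen["widgets"])
    # A violation is a screen that collects credentials silently.
    return collects_password and not shows_privacy

print(violates_policy(screen))  # → True
```

A static analysis like the paper's must derive such screen abstractions from code rather than observe them at runtime, which is what the event-driven layout forest abstraction provides.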
|
Feng, Zhen |
ESEC/FSE '21: "An Empirical Investigation ..."
An Empirical Investigation of Practical Log Anomaly Detection for Online Service Systems
Nengwen Zhao, Honglin Wang, Zeyan Li, Xiao Peng, Gang Wang, Zhu Pan, Yong Wu, Zhen Feng, Xidao Wen, Wenchi Zhang, Kaixin Sui, and Dan Pei (Tsinghua University, China; BizSeer, China; China Everbright Bank, China) Log data is an essential and valuable resource of online service systems, recording detailed information about system running status and user behavior. Log anomaly detection is vital for service reliability engineering and has been extensively studied. However, we find that existing approaches suffer from several limitations when deployed in practice, including 1) inability to deal with various logs and complex log abnormal patterns; 2) poor interpretability; 3) lack of domain knowledge. To help understand these practical challenges and investigate the practical performance of existing work quantitatively, we conduct the first empirical study and an experimental study based on large-scale real-world data. We find that logs with rich information indeed exhibit diverse abnormal patterns (e.g., keywords, template count, template sequence, variable value, and variable distribution). However, existing approaches fail to tackle such complex abnormal patterns, producing unsatisfactory performance. Motivated by these findings, we propose a generic log anomaly detection system named LogAD based on ensemble learning, which integrates multiple anomaly detection approaches and domain knowledge, so as to handle complex situations in practice. LogAD achieves an average F1-score of 0.83, outperforming all baselines. Besides, we also share some success cases and lessons learned during our study. To the best of our knowledge, we are the first to deeply investigate practical log anomaly detection in the real world. Our work helps practitioners and researchers apply log anomaly detection in practice to enhance service reliability.
@InProceedings{ESEC/FSE21p1404, author = {Nengwen Zhao and Honglin Wang and Zeyan Li and Xiao Peng and Gang Wang and Zhu Pan and Yong Wu and Zhen Feng and Xidao Wen and Wenchi Zhang and Kaixin Sui and Dan Pei}, title = {An Empirical Investigation of Practical Log Anomaly Detection for Online Service Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1404--1415}, doi = {10.1145/3468264.3473933}, year = {2021}, } Publisher's Version |
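A minimal sketch of the ensemble idea (assumed for illustration; these are not LogAD's actual detectors or its combination scheme): several simple detectors, each targeting one abnormal pattern, vote on a window of log lines:

```python
# Each detector flags one abnormal pattern from the paper's taxonomy.
def keyword_detector(logs):
    """Keyword pattern: severe words appear in the window."""
    return any(w in line for line in logs for w in ("ERROR", "FATAL"))

def count_detector(logs, limit=100):
    """Template-count pattern: an unusually large burst of lines."""
    return len(logs) > limit

def ensemble(logs, detectors, quorum=1):
    """Window is anomalous if at least `quorum` detectors fire."""
    return sum(d(logs) for d in detectors) >= quorum

window = ["INFO start", "FATAL disk failure"]
print(ensemble(window, [keyword_detector, count_detector]))  # → True
```

Combining detectors this way also keeps the verdict interpretable: the detectors that fired name the abnormal pattern, addressing the interpretability limitation the study identifies.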
|
Fernandes, Sara |
ESEC/FSE '21: "A Live Environment for Inspection ..."
A Live Environment for Inspection and Refactoring of Software Systems
Sara Fernandes (University of Porto, Portugal; INESC-ID, Portugal) Refactoring helps to improve the design of software systems, making them more readable, maintainable, cleaner, and easier to expand. Most existing refactoring tools allow developers to select and execute the best refactoring techniques for a particular programming context. However, they are not interactive and responsive enough, providing a poor programming experience. To fill this gap, we can combine the topic of liveness with refactoring methods. Live Refactoring allows developers to know continuously, while programming, which blocks of code they should refactor and why those blocks were classified as problematic. It therefore shortens the time needed to create high-quality systems, thanks to early and continuous refactoring feedback, support, and guidance. This paper presents our research project based on a live refactoring environment. This environment is focused on a refactoring tool that aims to explore the concept of Live Refactoring and its main components --- recommendation, visualization, and application. @InProceedings{ESEC/FSE21p1655, author = {Sara Fernandes}, title = {A Live Environment for Inspection and Refactoring of Software Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1655--1659}, doi = {10.1145/3468264.3473100}, year = {2021}, } Publisher's Version |
|
Filieri, Antonio |
ESEC/FSE '21: "Symbolic Parallel Adaptive ..."
Symbolic Parallel Adaptive Importance Sampling for Probabilistic Program Analysis
Yicheng Luo, Antonio Filieri, and Yuan Zhou (University College London, UK; Imperial College London, UK; DII, China) Probabilistic software analysis aims at quantifying the probability of a target event occurring during the execution of a program processing uncertain incoming data or itself written using probabilistic programming constructs. Recent techniques combine symbolic execution with model counting or solution space quantification methods to obtain accurate estimates of the occurrence probability of rare target events, such as failures in a mission-critical system. However, they face several scalability and applicability limitations when analyzing software processing high-dimensional and correlated multivariate input distributions. In this paper, we present SYMbolic Parallel Adaptive Importance Sampling (SYMPAIS), a new inference method tailored to analyze path conditions generated from the symbolic execution of programs with high-dimensional, correlated input distributions. SYMPAIS combines results from importance sampling and constraint solving to produce accurate estimates of the satisfaction probability for a broad class of constraints that cannot be analyzed by current solution space quantification methods. We demonstrate SYMPAIS's generality and performance compared with state-of-the-art alternatives on a set of problems from different application domains. @InProceedings{ESEC/FSE21p1166, author = {Yicheng Luo and Antonio Filieri and Yuan Zhou}, title = {Symbolic Parallel Adaptive Importance Sampling for Probabilistic Program Analysis}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1166--1177}, doi = {10.1145/3468264.3468593}, year = {2021}, } Publisher's Version Artifacts Reusable |
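The core importance-sampling idea can be sketched in a few lines of Python (a generic sketch, not SYMPAIS itself): estimate the satisfaction probability of a toy path condition that is a rare event under the true input distribution, by sampling from a proposal centered on the rare region and reweighting:

```python
import math
import random

random.seed(0)

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Toy stand-in for a path condition: the input must exceed 3,
# a rare event under the true input distribution N(0, 1).
def path_condition(x):
    return x > 3.0

def importance_estimate(n=100_000):
    """Sample from a proposal N(3, 1) centered on the rare region and
    reweight each satisfying sample by p(x)/q(x)."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(3.0, 1.0)
        if path_condition(x):
            total += normal_pdf(x) / normal_pdf(x, mu=3.0)
    return total / n

est = importance_estimate()
print(est)  # close to P(N(0,1) > 3) ≈ 0.00135
```

Naive Monte Carlo from N(0, 1) would need millions of samples to see this event at all; the reweighted proposal concentrates samples where the constraint holds, which is the effect SYMPAIS generalizes to high-dimensional, correlated path conditions.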
|
Filkov, Vladimir |
ESEC/FSE '21: "Sustainability Forecasting ..."
Sustainability Forecasting for Apache Incubator Projects
Likang Yin, Zhuangzhi Chen, Qi Xuan, and Vladimir Filkov (University of California at Davis, USA; Zhejiang University of Technology, China) Although OSS development is very popular, ultimately more than 80% of OSS projects fail. Identifying the factors associated with OSS success can help in devising interventions when a project takes a downturn. OSS success has been studied from a variety of angles, more recently in empirical studies of large numbers of diverse projects, using proxies for sustainability, e.g., internal metrics related to productivity and external ones, related to community popularity. The internal socio-technical structure of projects has also been shown important, especially their dynamics. This points to another angle on evaluating software success, from the perspective of self-sustaining and self-governing communities. To uncover the dynamics of how a project at a nascent development stage gradually evolves into a sustainable one, here we apply a socio-technical network modeling perspective to a dataset of Apache Software Foundation Incubator (ASFI), sustainability-labeled projects. To identify and validate the determinants of sustainability, we undertake a mix of quantitative and qualitative studies of ASFI projects’ socio-technical network trajectories. We develop interpretable models which can forecast a project becoming sustainable with 93+% accuracy, within 8 months of incubation start. Based on the interpretable models we describe a strategy for real-time monitoring and suggesting actions, which can be used by projects to correct their sustainability trajectories. @InProceedings{ESEC/FSE21p1056, author = {Likang Yin and Zhuangzhi Chen and Qi Xuan and Vladimir Filkov}, title = {Sustainability Forecasting for Apache Incubator Projects}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1056--1067}, doi = {10.1145/3468264.3468563}, year = {2021}, } Publisher's Version |
|
Ford, Denae |
ESEC/FSE '21: "Reel Life vs. Real Life: How ..."
Reel Life vs. Real Life: How Software Developers Share Their Daily Life through Vlogs
Souti Chattopadhyay, Thomas Zimmermann, and Denae Ford (Oregon State University, USA; Microsoft Research, USA) Software developers are turning to vlogs (video blogs) to share what a day is like to walk in their shoes. Through these vlogs developers share a rich perspective of their technical work as well as their personal lives. However, do the types of activities portrayed in vlogs differ from the activities developers in industry perform? Would developers at a software company prefer to show activities to different extents if they were asked to share about their day through vlogs? To answer these questions, we analyzed 130 vlogs by software developers on YouTube and conducted a survey with 335 software developers at a large software company. We found that although vlogs present traditional development activities such as coding and code peripheral activities (11%), they also prominently feature wellness and lifestyle related activities (47.3%) that have not been reflected in previous software engineering literature. We also found that developers at the software company were inclined to share more non-coding tasks (e.g., personal projects, time spent with family and friends, and health) when asked to create a mock-up vlog to promote diversity. These findings demonstrate a shift in our understanding of how software developers spend their time and what they find valuable to share publicly. We discuss how vlogs provide a more complete perspective of software development work and serve as a valuable source of data for empirical research. @InProceedings{ESEC/FSE21p404, author = {Souti Chattopadhyay and Thomas Zimmermann and Denae Ford}, title = {Reel Life vs. Real Life: How Software Developers Share Their Daily Life through Vlogs}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {404--415}, doi = {10.1145/3468264.3468599}, year = {2021}, } Publisher's Version Info |
|
Fraser, Gordon |
ESEC/FSE '21: "Graph-Based Seed Object Synthesis ..."
Graph-Based Seed Object Synthesis for Search-Based Unit Testing
Yun Lin, You Sheng Ong, Jun Sun, Gordon Fraser, and Jin Song Dong (National University of Singapore, Singapore; Singapore Management University, Singapore; University of Passau, Germany) Search-based software testing (SBST) generates tests using search algorithms guided by measurements gauging how far a test case is from exercising a coverage goal. The effectiveness of SBST largely depends on the continuity and monotonicity of the fitness landscape determined by these measurements and the search operators. Unfortunately, the fitness landscape is challenging when the function under test takes object inputs, as classical measurements hardly provide guidance for constructing legitimate object inputs. To overcome this problem, we propose test seeds, i.e., test code skeletons of legitimate objects which enable the use of classical measurements. Given a target branch in a function under test, we first statically analyze the function to build an object construction graph that captures the relation between the operands of the target method and the states of their relevant object inputs. Based on the graph, we synthesize test template code where each "slot" is a mutation point for the search algorithm. This approach can be seamlessly integrated with existing SBST algorithms, and we implemented it in a tool called EvoObj on top of EvoSuite. Our experiments show that EvoObj outperforms EvoSuite with statistical significance on 2750 methods over 103 open source Java projects using state-of-the-art SBST algorithms. @InProceedings{ESEC/FSE21p1068, author = {Yun Lin and You Sheng Ong and Jun Sun and Gordon Fraser and Jin Song Dong}, title = {Graph-Based Seed Object Synthesis for Search-Based Unit Testing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1068--1080}, doi = {10.1145/3468264.3468619}, year = {2021}, } Publisher's Version Info |
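For readers unfamiliar with fitness-guided search, here is a toy sketch (not EvoObj's algorithm) of the classical branch-distance measurement steering a numeric input toward covering the branch `if x == 42`:

```python
# Classical branch distance for the branch `if x == 42`:
# zero exactly when the branch is covered.
def branch_distance(x):
    return abs(x - 42)

def search(x=500):
    """Greedy descent with two step sizes, in the style of
    alternating variable search."""
    while branch_distance(x) > 0:
        for delta in (10, -10, 1, -1):
            if branch_distance(x + delta) < branch_distance(x):
                x += delta
                break
    return x

print(search())  # → 42
```

The paper's point is that this smooth guidance collapses when inputs are objects rather than numbers: there is no useful "distance" between an ill-formed object and a legitimate one, which is the gap the synthesized test seeds close.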
|
Fu, Chen |
ESEC/FSE '21: "Frontmatter: Mining Android ..."
Frontmatter: Mining Android User Interfaces at Scale
Konstantin Kuznetsov, Chen Fu, Song Gao, David N. Jansen, Lijun Zhang, and Andreas Zeller (CISPA, Germany; Institute of Software at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Institute of Intelligent Software, China) We introduce Frontmatter: the largest open-access dataset containing user interface models of about 160,000 Android apps. Frontmatter opens the door for comprehensive mining of mobile user interfaces, jumpstarting empirical research at a large scale, addressing questions such as "How many travel apps require registration?", "Which apps do not follow accessibility guidelines?", "Does the user interface correspond to the description?", and many more. The Frontmatter UI analysis tool and the Frontmatter dataset are available under an open-source license. @InProceedings{ESEC/FSE21p1580, author = {Konstantin Kuznetsov and Chen Fu and Song Gao and David N. Jansen and Lijun Zhang and Andreas Zeller}, title = {Frontmatter: Mining Android User Interfaces at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1580--1584}, doi = {10.1145/3468264.3473125}, year = {2021}, } Publisher's Version Video |
|
Fu, Ying |
ESEC/FSE '21: "Making Smart Contract Development ..."
Making Smart Contract Development More Secure and Easier
Meng Ren, Fuchen Ma, Zijing Yin, Ying Fu, Huizhong Li, Wanli Chang, and Yu Jiang (Tsinghua University, China; Ant Group, China; WeBank, China; University of York, UK) With the rapid development of distributed applications, smart contracts have attracted more and more developers' attention. However, developers or domain experts have different levels of familiarity with specific programming languages, like Solidity, and vulnerabilities hidden in the code may be exploited, resulting in huge financial losses. Existing auxiliary tools lack security considerations. Most of them only provide word completion based on fuzzy search and detection services for limited types of vulnerabilities, which results in wasted manpower during coding and potential vulnerability threats after deployment. In this work, we propose an integrated framework to enhance security in the two stages of recommendation and validation, helping developers implement more secure contracts more quickly. First, we reinforce original smart contracts with general patch patterns and secure programming standards for training, and design a real-time code suggestion algorithm to predict secure words for selection. Then, we integrate multiple widely-used testing tools to provide validation services. For evaluation, we collected 47,398 real-world contracts, and the results show that our framework outperforms existing platforms and tools, improving the average word suggestion accuracy by 30%-60% and helping detect about 25%-61% more vulnerabilities. In most cases, our framework can correctly predict the next word with a probability of up to 82%-97% within the top ten candidates. Compared with professional vulnerability mining tools, it can find more vulnerabilities and provide targeted modification suggestions without tedious configuration. Currently, this framework has been used as the official development tool of WeBank and integrated as the recommended platform by the FISCO-BCOS community. 
@InProceedings{ESEC/FSE21p1360, author = {Meng Ren and Fuchen Ma and Zijing Yin and Ying Fu and Huizhong Li and Wanli Chang and Yu Jiang}, title = {Making Smart Contract Development More Secure and Easier}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1360--1370}, doi = {10.1145/3468264.3473929}, year = {2021}, } Publisher's Version |
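As a rough illustration of real-time next-word suggestion (a drastic simplification of the paper's algorithm; the token corpus below is invented), a bigram model over tokenized contract code can propose likely completions:

```python
from collections import defaultdict, Counter

# Hypothetical tokenized contract snippets used as training data.
corpus = [
    "require msg sender owner",
    "require msg value amount",
    "transfer msg sender amount",
]

# Count, for each token, which tokens follow it.
bigrams = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[prev][nxt] += 1

def suggest(prev_word, k=2):
    """Top-k most frequent successors of the word just typed."""
    return [w for w, _ in bigrams[prev_word].most_common(k)]

print(suggest("msg"))  # → ['sender', 'value']
```

The paper's contribution is reinforcing such training data with patch patterns and secure-coding standards so that the suggested words steer developers toward secure idioms, not merely frequent ones.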
|
Gall, Harald C. |
ESEC/FSE '21: "BRAID: An API Recommender ..."
BRAID: An API Recommender Supporting Implicit User Feedback
Yu Zhou, Haonan Jin, Xinying Yang, Taolue Chen, Krishna Narasimhan, and Harald C. Gall (Nanjing University of Aeronautics and Astronautics, China; University of London, UK; TU Darmstadt, Germany; University of Zurich, Switzerland) Efficient application programming interface (API) recommendation is one of the most desired features of modern integrated development environments. A multitude of API recommendation approaches have been proposed. However, most of the currently available API recommenders do not support the effective integration of user feedback into the recommendation loop. In this paper, we present BRAID (Boosting RecommendAtion with Implicit FeeDback), a tool which leverages user feedback, and employs learning-to-rank and active learning techniques to boost recommendation performance. The implementation is based on the VSCode plugin architecture, which provides an integrated user interface. Essentially, BRAID is a general framework which can accommodate existing query-based API recommendation approaches as components. Comparative experiments with strong baselines demonstrate the efficacy of the tool. A video demonstrating the usage of BRAID can be found at https://youtu.be/naD0guvl8sE. @InProceedings{ESEC/FSE21p1510, author = {Yu Zhou and Haonan Jin and Xinying Yang and Taolue Chen and Krishna Narasimhan and Harald C. Gall}, title = {BRAID: An API Recommender Supporting Implicit User Feedback}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1510--1514}, doi = {10.1145/3468264.3473111}, year = {2021}, } Publisher's Version |
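A minimal sketch of folding implicit feedback into a recommender's ranking (illustrative only; BRAID's actual learning-to-rank and active-learning machinery is more involved, and the API names and scores here are invented):

```python
from collections import Counter

# Invented base scores from some underlying query-based recommender.
base_scores = {"List.add": 0.9, "Map.put": 0.7, "Set.add": 0.6}
feedback = Counter()  # API -> times the user accepted this suggestion

def rank(candidates, alpha=0.1):
    """Blend the base recommender score with accumulated feedback."""
    return sorted(candidates,
                  key=lambda api: base_scores[api] + alpha * feedback[api],
                  reverse=True)

feedback["Set.add"] += 4  # the user repeatedly picked Set.add
print(rank(list(base_scores)))  # → ['Set.add', 'List.add', 'Map.put']
```

Because the framework only re-ranks candidate lists, it can wrap any existing query-based recommender as a component, which matches the plugin design described in the abstract.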
|
Gao, Song |
ESEC/FSE '21: "Frontmatter: Mining Android ..."
Frontmatter: Mining Android User Interfaces at Scale
Konstantin Kuznetsov, Chen Fu, Song Gao, David N. Jansen, Lijun Zhang, and Andreas Zeller (CISPA, Germany; Institute of Software at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Institute of Intelligent Software, China) We introduce Frontmatter: the largest open-access dataset containing user interface models of about 160,000 Android apps. Frontmatter opens the door for comprehensive mining of mobile user interfaces, jumpstarting empirical research at a large scale, addressing questions such as "How many travel apps require registration?", "Which apps do not follow accessibility guidelines?", "Does the user interface correspond to the description?", and many more. The Frontmatter UI analysis tool and the Frontmatter dataset are available under an open-source license. @InProceedings{ESEC/FSE21p1580, author = {Konstantin Kuznetsov and Chen Fu and Song Gao and David N. Jansen and Lijun Zhang and Andreas Zeller}, title = {Frontmatter: Mining Android User Interfaces at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1580--1584}, doi = {10.1145/3468264.3473125}, year = {2021}, } Publisher's Version Video |
|
Gao, Zhipeng |
ESEC/FSE '21: "Code2Que: A Tool for Improving ..."
Code2Que: A Tool for Improving Question Titles from Mined Code Snippets in Stack Overflow
Zhipeng Gao, Xin Xia, David Lo, John Grundy, and Yuan-Fang Li (Monash University, Australia; Singapore Management University, Singapore) Stack Overflow is one of the most popular technical Q&A sites used by software developers. Seeking help from Stack Overflow has become an essential part of software developers’ daily work for solving programming-related questions. Although the Stack Overflow community has provided quality assurance guidelines to help users write better questions, we observed that a significant number of questions submitted to Stack Overflow are of low quality. In this paper, we introduce a new web-based tool, Code2Que, which can help developers in writing higher quality questions for a given code snippet. Code2Que consists of two main stages: offline learning and online recommendation. In the offline learning phase, we first collect a set of good quality ⟨code snippet, question⟩ pairs as training samples. We then train our model on these training samples via a deep sequence-to-sequence approach, enhanced with an attention mechanism, a copy mechanism and a coverage mechanism. In the online recommendation phase, for a given code snippet, we use the offline trained model to generate question titles to assist less experienced developers in writing questions more effectively. To evaluate Code2Que, we first sampled 50 low quality ⟨code snippet, question⟩ pairs from the Python and Java datasets on Stack Overflow. Then we conducted a user study to evaluate the question titles generated by our approach as compared to human-written ones using three metrics: Clearness, Fitness and Willingness to Respond. Our experimental results show that for a large number of low-quality questions in Stack Overflow, Code2Que can improve the question titles in terms of Clearness, Fitness and Willingness measures. 
@InProceedings{ESEC/FSE21p1525, author = {Zhipeng Gao and Xin Xia and David Lo and John Grundy and Yuan-Fang Li}, title = {Code2Que: A Tool for Improving Question Titles from Mined Code Snippets in Stack Overflow}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1525--1529}, doi = {10.1145/3468264.3473114}, year = {2021}, } Publisher's Version ESEC/FSE '21: "Automating the Removal of ..." Automating the Removal of Obsolete TODO Comments Zhipeng Gao, Xin Xia, David Lo, John Grundy, and Thomas Zimmermann (Monash University, Australia; Singapore Management University, Singapore; Microsoft Research, USA) TODO comments are very widely used by software developers to describe their pending tasks during software development. However, after performing the task developers sometimes neglect or simply forget to remove the TODO comment, resulting in obsolete TODO comments. These obsolete TODO comments can confuse development teams and may cause the introduction of bugs in the future, decreasing the software's quality and maintainability. Manually identifying obsolete TODO comments is time-consuming and expensive. It is thus necessary to detect obsolete TODO comments and remove them automatically before they cause any unwanted side effects. In this work, we propose a novel model, named TDCleaner, to identify obsolete TODO comments in software projects. TDCleaner can assist developers in just-in-time checking of TODO comments status and avoid leaving obsolete TODO comments. Our approach has two main stages: offline learning and online prediction. During offline learning, we first automatically establish <code_change, todo_comment, commit_msg> training samples and leverage three neural encoders to capture the semantic features of TODO comment, code change and commit message respectively. TDCleaner then automatically learns the correlations and interactions between different encoders to estimate the final status of the TODO comment. 
For online prediction, we check a TODO comment's status by leveraging the offline trained model to judge the TODO comment's likelihood of being obsolete. We built our dataset by collecting TODO comments from the top-10,000 Python and Java Github repositories and evaluated TDCleaner on them. Extensive experimental results show the promising performance of our model over a set of benchmarks. We also performed an in-the-wild evaluation with real-world software projects, we reported 18 obsolete TODO comments identified by TDCleaner to Github developers and 9 of them have already been confirmed and removed by the developers, demonstrating the practical usage of our approach. @InProceedings{ESEC/FSE21p218, author = {Zhipeng Gao and Xin Xia and David Lo and John Grundy and Thomas Zimmermann}, title = {Automating the Removal of Obsolete TODO Comments}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {218--229}, doi = {10.1145/3468264.3468553}, year = {2021}, } Publisher's Version |
|
Garlan, David |
ESEC/FSE '21: "AlloyMax: Bringing Maximum ..."
AlloyMax: Bringing Maximum Satisfaction to Relational Specifications
Changjian Zhang, Ryan Wagner, Pedro Orvalho, David Garlan, Vasco Manquinho, Ruben Martins, and Eunsuk Kang (Carnegie Mellon University, USA; INESC-ID, Portugal; University of Lisbon, Portugal) Alloy is a declarative modeling language based on a first-order relational logic. Its constraint-based analysis has enabled a wide range of applications in software engineering, including configuration synthesis, bug finding, test-case generation, and security analysis. Certain types of analysis tasks in these domains involve finding an optimal solution. For example, in a network configuration problem, instead of finding any valid configuration, it may be desirable to find one that is most permissive (i.e., it permits a maximum number of packets). Due to its dependence on SAT, however, Alloy cannot be used to specify and analyze these types of problems. We propose AlloyMax, an extension of Alloy with a capability to express and analyze problems with optimal solutions. AlloyMax introduces (1) a small addition of language constructs that can be used to specify a wide range of problems that involve optimality and (2) a new analysis engine that leverages a Maximum Satisfiability (MaxSAT) solver to generate optimal solutions. To enable this new type of analysis, we show how a specification in a first-order relational logic can be translated into an input format of MaxSAT solvers—namely, a Boolean formula in weighted conjunctive normal form (WCNF). We demonstrate the applicability and scalability of AlloyMax on a benchmark of problems. To our knowledge, AlloyMax is the first approach to enable analysis with optimality in a relational modeling language, and we believe that AlloyMax has the potential to bring a wide range of new applications to Alloy. 
@InProceedings{ESEC/FSE21p155, author = {Changjian Zhang and Ryan Wagner and Pedro Orvalho and David Garlan and Vasco Manquinho and Ruben Martins and Eunsuk Kang}, title = {AlloyMax: Bringing Maximum Satisfaction to Relational Specifications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {155--167}, doi = {10.1145/3468264.3468587}, year = {2021}, } Publisher's Version Artifacts Reusable |
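To make the WCNF target format concrete, here is a tiny weighted MaxSAT instance and a brute-force solver in Python (illustrative only; AlloyMax delegates to an off-the-shelf MaxSAT solver rather than enumerating assignments):

```python
from itertools import product

# Weighted CNF: positive ints are variables, negative ints their
# negations. Hard clauses must hold; each satisfied soft clause
# earns its weight.
hard = [[1, 2], [-1, -2]]    # exactly one of x1, x2
soft = [([1], 3), ([2], 5)]  # prefer x2 (weight 5) over x1 (weight 3)

def satisfied(clause, assignment):
    return any(assignment[abs(l)] == (l > 0) for l in clause)

def brute_force_maxsat(hard, soft, n_vars):
    """Return (best_weight, assignment) over all hard-feasible assignments."""
    best = None
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if not all(satisfied(c, assignment) for c in hard):
            continue
        weight = sum(w for c, w in soft if satisfied(c, assignment))
        if best is None or weight > best[0]:
            best = (weight, assignment)
    return best

print(brute_force_maxsat(hard, soft, 2))  # → (5, {1: False, 2: True})
```

The translation challenge the paper addresses is producing exactly this hard/soft split from a relational specification, so that "most permissive configuration"-style objectives become soft clauses the MaxSAT solver maximizes.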
|
Gazzillo, Paul |
ESEC/FSE '21: "Finding Broken Linux Configuration ..."
Finding Broken Linux Configuration Specifications by Statically Analyzing the Kconfig Language
Jeho Oh, Necip Fazıl Yıldıran, Julian Braha, and Paul Gazzillo (University of Texas at Austin, USA; University of Central Florida, USA) Highly-configurable software underpins much of our computing infrastructure. It enables extensive reuse, but opens the door to broken configuration specifications. The configuration specification language, Kconfig, is designed to prevent invalid configurations of the Linux kernel from being built. However, the astronomical size of the configuration space for Linux makes finding specification bugs difficult by hand or with random testing. In this paper, we introduce a software model checking framework for building Kconfig static analysis tools. We develop a formal semantics of the Kconfig language and implement the semantics in a symbolic evaluator called kclause that models Kconfig behavior as logical formulas. We then design and implement a bug finder, called kismet, that takes kclause models and leverages automated theorem proving to find unmet dependency bugs. kismet is evaluated for its precision, performance, and impact on kernel development for a recent version of Linux, which has over 140,000 lines of Kconfig across 28 architecture-specific specifications. Our evaluation finds 781 bugs (151 when considering sharing among Kconfig specifications) with 100% precision, spending between 37 and 90 minutes for each Kconfig specification, although it misses some bugs due to underapproximation. Compared to random testing, kismet finds substantially more true positive bugs in a fraction of the time. @InProceedings{ESEC/FSE21p893, author = {Jeho Oh and Necip Fazıl Yıldıran and Julian Braha and Paul Gazzillo}, title = {Finding Broken Linux Configuration Specifications by Statically Analyzing the Kconfig Language}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {893--905}, doi = {10.1145/3468264.3468578}, year = {2021}, } Publisher's Version Artifacts Functional |
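The kind of bug kismet reports can be illustrated with a toy model (symbol names are hypothetical, and real Kconfig semantics are far richer): a symbol `select`s another symbol whose own `depends on` clause is not guaranteed to hold:

```python
# Toy model of an "unmet dependency" bug: A selects B, but B depends
# on C, and nothing forces C to be enabled.
symbols = {
    "A": {"selects": ["B"], "depends": []},
    "B": {"selects": [], "depends": ["C"]},
    "C": {"selects": [], "depends": []},
}

def unmet_select_deps(symbols, enabled):
    """Report selected symbols whose own dependencies are not enabled."""
    bugs = []
    for name in enabled:
        for sel in symbols[name]["selects"]:
            missing = [d for d in symbols[sel]["depends"] if d not in enabled]
            if missing:
                bugs.append((name, sel, missing))
    return bugs

print(unmet_select_deps(symbols, {"A", "B"}))  # → [('A', 'B', ['C'])]
```

kismet's contribution is doing this soundly at the scale of Linux by encoding the full Kconfig semantics as logical formulas and asking a theorem prover whether any configuration exposes such an unmet dependency.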
|
Ghaisas, Smita |
ESEC/FSE '21: "Domain Adaptation for an Automated ..."
Domain Adaptation for an Automated Classification of Deontic Modalities in Software Engineering Contracts
Vivek Joshi, Preethu Rose Anish, and Smita Ghaisas (TCS Research, India) Contracts are agreements between parties engaging in economic transactions. They specify deontic modalities that the signatories should be held responsible for and state the penalties or actions to be taken if the stated agreements are not met. Additionally, contracts are a known source of Software Engineering (SE) requirements. Identifying the deontic modalities in contracts can therefore add value to the Requirements Engineering (RE) phase of SE. The complex and ambiguous language of contracts makes it difficult and time-consuming to identify the deontic modalities (obligations, permissions, prohibitions) embedded in the text. State-of-the-art neural network models are effective for text classification; however, they require substantial amounts of training data. The availability of contract data is sparse owing to customers' confidentiality concerns. In this paper, we leverage the linguistic and taxonomical similarities between regulations (available abundantly in the public domain) and contracts to demonstrate that it is possible to use regulations as training data for classifying deontic modalities in real-life contracts. We discuss the results of a range of experiments, from a rule-based approach to Bidirectional Encoder Representations from Transformers (BERT), for automating the classification of deontic modalities. With BERT, we obtained an average precision and recall of 90% and 89.66%, respectively. @InProceedings{ESEC/FSE21p1275, author = {Vivek Joshi and Preethu Rose Anish and Smita Ghaisas}, title = {Domain Adaptation for an Automated Classification of Deontic Modalities in Software Engineering Contracts}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1275--1280}, doi = {10.1145/3468264.3473921}, year = {2021}, } Publisher's Version |
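The rule-based baseline the experiments start from can be sketched as a small modal-verb classifier (the patterns and clauses below are invented for illustration):

```python
import re

# Ordered rules: prohibition patterns come first so that "shall not"
# is not swallowed by the bare "shall" obligation rule.
RULES = [
    (re.compile(r"\b(shall not|must not|may not)\b", re.I), "prohibition"),
    (re.compile(r"\b(shall|must|will)\b", re.I), "obligation"),
    (re.compile(r"\b(may|is permitted to|can)\b", re.I), "permission"),
]

def classify(clause):
    for pattern, label in RULES:
        if pattern.search(clause):
            return label
    return "none"

print(classify("The supplier shall not disclose confidential data."))  # → prohibition
print(classify("The buyer may terminate this agreement."))             # → permission
```

Such keyword rules break down on the ambiguous phrasing common in contracts, which is the motivation for moving to a BERT classifier trained on publicly available regulations.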
|
Golubev, Yaroslav |
ESEC/FSE '21: "One Thousand and One Stories: ..."
One Thousand and One Stories: A Large-Scale Survey of Software Refactoring
Yaroslav Golubev, Zarina Kurbatova, Eman Abdullah AlOmar, Timofey Bryksin, and Mohamed Wiem Mkaouer (JetBrains Research, Russia; Rochester Institute of Technology, USA; HSE University, Russia) Despite the availability of refactoring as a feature in popular IDEs, recent studies revealed that developers are reluctant to use them, and still prefer the manual refactoring of their code. At JetBrains, our goal is to fully support refactoring features in IntelliJ-based IDEs and improve their adoption in practice. Therefore, we start by raising the following main questions. How exactly do people refactor code? What refactorings are the most popular? Why do some developers tend not to use convenient IDE refactoring tools? In this paper, we investigate the raised questions through the design and implementation of a survey targeting 1,183 users of IntelliJ-based IDEs. Our quantitative and qualitative analysis of the survey results shows that almost two-thirds of developers spend more than one hour in a single session refactoring their code; that refactoring types vary greatly in popularity; and that a lot of developers would like to know more about IDE refactoring features but lack the means to do so. These results serve us internally to support the next generation of refactoring features, as well as can help our research community to establish new directions in the refactoring usability research. @InProceedings{ESEC/FSE21p1303, author = {Yaroslav Golubev and Zarina Kurbatova and Eman Abdullah AlOmar and Timofey Bryksin and Mohamed Wiem Mkaouer}, title = {One Thousand and One Stories: A Large-Scale Survey of Software Refactoring}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1303--1313}, doi = {10.1145/3468264.3473924}, year = {2021}, } Publisher's Version |
|
Grundy, John |
ESEC/FSE '21: "ICME: An Informed Consent ..."
ICME: An Informed Consent Management Engine for Conformance in Smart Building Environments
Chehara Pathmabandu, John Grundy, Mohan Baruwal Chhetri, and Zubair Baig (Monash University, Australia; CSIRO’s Data61, Australia; Deakin University, Australia) Smart buildings can reveal highly sensitive insights about their inhabitants and expose them to new privacy threats and vulnerabilities. Yet, convenience overrides privacy concerns, and most people remain ignorant of this issue. We propose a novel Informed Consent Management Engine (ICME) that aims to: (a) increase users’ awareness about privacy issues and data collection practices in their smart building environments, (b) provide fine-grained visibility into privacy conformance and infringement by these devices, (c) recommend and visualise corrective user actions through “digital nudging”, and (d) support the monitoring and management of personal data disclosure in a shared space. We present a reference architecture for ICME that can be used by software engineers to implement diverse end-user consent management solutions for smart buildings. We also provide a proof-of-concept prototype to demonstrate how the ICME approach works in a shared smart workplace. Demo: https://youtu.be/5y6CdyWAdgY @InProceedings{ESEC/FSE21p1545, author = {Chehara Pathmabandu and John Grundy and Mohan Baruwal Chhetri and Zubair Baig}, title = {ICME: An Informed Consent Management Engine for Conformance in Smart Building Environments}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1545--1549}, doi = {10.1145/3468264.3473118}, year = {2021}, } Publisher's Version Video ESEC/FSE '21: "Code2Que: A Tool for Improving ..." Code2Que: A Tool for Improving Question Titles from Mined Code Snippets in Stack Overflow Zhipeng Gao, Xin Xia, David Lo, John Grundy, and Yuan-Fang Li (Monash University, Australia; Singapore Management University, Singapore) Stack Overflow is one of the most popular technical Q&A sites used by software developers.
Seeking help from Stack Overflow has become an essential part of software developers’ daily work for solving programming-related questions. Although the Stack Overflow community has provided quality assurance guidelines to help users write better questions, we observed that a significant number of questions submitted to Stack Overflow are of low quality. In this paper, we introduce a new web-based tool, Code2Que, which can help developers in writing higher-quality questions for a given code snippet. Code2Que consists of two main stages: offline learning and online recommendation. In the offline learning phase, we first collect a set of good-quality ⟨code snippet, question⟩ pairs as training samples. We then train our model on these training samples via a deep sequence-to-sequence approach, enhanced with an attention mechanism, a copy mechanism, and a coverage mechanism. In the online recommendation phase, for a given code snippet, we use the offline-trained model to generate question titles to assist less experienced developers in writing questions more effectively. To evaluate Code2Que, we first sampled 50 low-quality ⟨code snippet, question⟩ pairs from the Python and Java datasets on Stack Overflow. Then we conducted a user study to evaluate the question titles generated by our approach as compared to human-written ones using three metrics: Clearness, Fitness and Willingness to Respond. Our experimental results show that for a large number of low-quality questions on Stack Overflow, Code2Que can improve the question titles in terms of the Clearness, Fitness and Willingness measures.
@InProceedings{ESEC/FSE21p1525, author = {Zhipeng Gao and Xin Xia and David Lo and John Grundy and Yuan-Fang Li}, title = {Code2Que: A Tool for Improving Question Titles from Mined Code Snippets in Stack Overflow}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1525--1529}, doi = {10.1145/3468264.3473114}, year = {2021}, } Publisher's Version ESEC/FSE '21: "Automating the Removal of ..." Automating the Removal of Obsolete TODO Comments Zhipeng Gao, Xin Xia, David Lo, John Grundy, and Thomas Zimmermann (Monash University, Australia; Singapore Management University, Singapore; Microsoft Research, USA) TODO comments are very widely used by software developers to describe their pending tasks during software development. However, after performing the task, developers sometimes neglect or simply forget to remove the TODO comment, resulting in obsolete TODO comments. These obsolete TODO comments can confuse development teams and may cause the introduction of bugs in the future, decreasing the software's quality and maintainability. Manually identifying obsolete TODO comments is time-consuming and expensive. It is thus necessary to detect obsolete TODO comments and remove them automatically before they cause any unwanted side effects. In this work, we propose a novel model, named TDCleaner, to identify obsolete TODO comments in software projects. TDCleaner can assist developers in just-in-time checking of TODO comment status and avoid leaving obsolete TODO comments. Our approach has two main stages: offline learning and online prediction. During offline learning, we first automatically establish <code_change, todo_comment, commit_msg> training samples and leverage three neural encoders to capture the semantic features of the TODO comment, code change, and commit message, respectively. TDCleaner then automatically learns the correlations and interactions between different encoders to estimate the final status of the TODO comment.
For online prediction, we check a TODO comment's status by leveraging the offline-trained model to judge its likelihood of being obsolete. We built our dataset by collecting TODO comments from the top-10,000 Python and Java GitHub repositories and evaluated TDCleaner on them. Extensive experimental results show the promising performance of our model over a set of benchmarks. We also performed an in-the-wild evaluation with real-world software projects: we reported 18 obsolete TODO comments identified by TDCleaner to GitHub developers, and 9 of them have already been confirmed and removed by the developers, demonstrating the practical usage of our approach. @InProceedings{ESEC/FSE21p218, author = {Zhipeng Gao and Xin Xia and David Lo and John Grundy and Thomas Zimmermann}, title = {Automating the Removal of Obsolete TODO Comments}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {218--229}, doi = {10.1145/3468264.3468553}, year = {2021}, } Publisher's Version ESEC/FSE '21: "Embedding App-Library Graph ..." Embedding App-Library Graph for Neural Third Party Library Recommendation Bo Li, Qiang He, Feifei Chen, Xin Xia, Li Li, John Grundy, and Yun Yang (Swinburne University of Technology, Australia; Deakin University, Australia; Monash University, Australia) The mobile app marketplace is fiercely competitive for mobile app developers, who need to develop and update their apps as quickly as possible to gain a first-mover advantage. Third-party libraries (TPLs) offer developers an easier way to enhance their apps with new features. However, how to find suitable candidates among the large number of fast-changing TPLs is a challenging problem. TPL recommendation is a promising solution, but unfortunately existing approaches suffer from low accuracy in recommendation results. To tackle this challenge, we propose GRec, a graph neural network (GNN) based approach, for recommending potentially useful TPLs for app development.
GRec models mobile apps, TPLs, and their interactions into an app-library graph. It then distills app-library interaction information from the app-library graph to make more accurate TPL recommendations. To evaluate GRec’s performance, we conduct comprehensive experiments based on a large-scale real-world Android app dataset containing 31,432 Android apps, 752 distinct TPLs, and 537,011 app-library usage records. Our experimental results illustrate that GRec can significantly increase the prediction accuracy and diversify the prediction results compared with state-of-the-art methods. A user study performed with app developers also confirms GRec's usefulness for real-world mobile app development. @InProceedings{ESEC/FSE21p466, author = {Bo Li and Qiang He and Feifei Chen and Xin Xia and Li Li and John Grundy and Yun Yang}, title = {Embedding App-Library Graph for Neural Third Party Library Recommendation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {466--477}, doi = {10.1145/3468264.3468552}, year = {2021}, } Publisher's Version |
|
Gu, Haiqiao |
ESEC/FSE '21: "A Large-Scale Empirical Study ..."
A Large-Scale Empirical Study on Java Library Migrations: Prevalence, Trends, and Rationales
Hao He, Runzhi He, Haiqiao Gu, and Minghui Zhou (Peking University, China; Tsinghua University, China) With the rise of open-source software and package hosting platforms, reusing 3rd-party libraries has become a common practice. Due to various failures during software evolution, a project may remove a used library and replace it with another library, which we call library migration. Despite substantial research on dependency management, the understanding of how and why library migrations occur is still lacking. Achieving this understanding may help practitioners optimize their library selection criteria, develop automated approaches to monitor dependencies, and provide migration suggestions for their libraries or software projects. In this paper, through a fine-grained commit-level analysis of 19,652 Java GitHub projects, we extract the largest migration dataset to-date (1,194 migration rules, 3,163 migration commits). We show that 8,065 (41.04%) projects have at least one library removal, 1,564 (7.96%, lower-bound) to 5,004 (25.46%, upper-bound) projects have at least one migration, and a median project with migrations has 2 to 4 migrations in total. We discover that library migrations are dominated by several domains (logging, JSON, testing and web service) presenting a long-tail distribution. Also, migrations are highly unidirectional in that libraries are either mostly abandoned or mostly chosen in our project corpus. A thematic analysis on related commit messages, issues, and pull requests identifies 14 frequently mentioned migration reasons (e.g., lack of maintenance, usability, integration, etc.), 7 of which are not discussed in previous work. Our findings can be operationalized into actionable insights for package hosting platforms, project maintainers, and library developers. We provide a replication package at https://doi.org/10.5281/zenodo.4816752.
@InProceedings{ESEC/FSE21p478, author = {Hao He and Runzhi He and Haiqiao Gu and Minghui Zhou}, title = {A Large-Scale Empirical Study on Java Library Migrations: Prevalence, Trends, and Rationales}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {478--490}, doi = {10.1145/3468264.3468571}, year = {2021}, } Publisher's Version Artifacts Reusable |
|
Guan, Jonas |
ESEC/FSE '21: "StateFormer: Fine-Grained ..."
StateFormer: Fine-Grained Type Recovery from Binaries using Generative State Modeling
Kexin Pei, Jonas Guan, Matthew Broughton, Zhongtian Chen, Songchen Yao, David Williams-King, Vikas Ummadisetty, Junfeng Yang, Baishakhi Ray, and Suman Jana (Columbia University, USA; University of Toronto, Canada; Dublin High School, Ireland) Binary type inference is a critical reverse engineering task supporting many security applications, including vulnerability analysis, binary hardening, forensics, and decompilation. It is a difficult task because source-level type information is often stripped during compilation, leaving only binaries with untyped memory and register accesses. Existing approaches rely on hand-coded type inference rules defined by domain experts, which are brittle and require nontrivial effort to maintain and update. Even though machine learning approaches have shown promise at automatically learning the inference rules, their accuracy is still low, especially for optimized binaries. We present StateFormer, a new neural architecture that is adept at accurate and robust type inference. StateFormer follows a two-step transfer learning paradigm. In the pretraining step, the model is trained with Generative State Modeling (GSM), a novel task that we design to teach the model to statically approximate execution effects of assembly instructions in both forward and backward directions. In the finetuning step, the pretrained model learns to use its knowledge of operational semantics to infer types. We evaluate StateFormer's performance on a corpus of 33 popular open-source software projects containing over 1.67 billion variables of different types. The programs are compiled with GCC and LLVM over 4 optimization levels O0-O3, and 3 obfuscation passes based on LLVM. Our model significantly outperforms state-of-the-art ML-based tools by 14.6% in recovering types for both function arguments and variables. Our ablation studies show that GSM improves type inference accuracy by 33%. 
@InProceedings{ESEC/FSE21p690, author = {Kexin Pei and Jonas Guan and Matthew Broughton and Zhongtian Chen and Songchen Yao and David Williams-King and Vikas Ummadisetty and Junfeng Yang and Baishakhi Ray and Suman Jana}, title = {StateFormer: Fine-Grained Type Recovery from Binaries using Generative State Modeling}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {690--702}, doi = {10.1145/3468264.3468607}, year = {2021}, } Publisher's Version Info Artifacts Reusable |
|
Guptha, Suraj |
ESEC/FSE '21: "Data-Driven Test Selection ..."
Data-Driven Test Selection at Scale
Sonu Mehta, Farima Farmahinifarahani, Ranjita Bhagwan, Suraj Guptha, Sina Jafari, Rahul Kumar, Vaibhav Saini, and Anirudh Santhiar (Microsoft Research, India; University of California at Irvine, USA; Microsoft, USA) Large-scale services depend on Continuous Integration/Continuous Deployment (CI/CD) processes to maintain their agility and code-quality. Change-based testing plays an important role in finding bugs, but testing after every change is prohibitively expensive at a scale where thousands of changes are committed every hour. Test selection models deal with this issue by running a subset of tests for every change. In this paper, we present a generic, language-agnostic and lightweight statistical model for test selection. Unlike existing techniques, the proposed model does not require complex feature extraction techniques. Consequently, it scales to hundreds of repositories of varying characteristics while capturing more than 99% of buggy pull requests. Additionally, to better evaluate test selection models, we propose application-specific metrics that capture both a reduction in resource cost and a reduction in pull-request turn-around time. By evaluating our model on 22 large repositories at Microsoft, we find that we can save 15%−30% of compute time while reporting back approximately 99% of buggy pull requests. @InProceedings{ESEC/FSE21p1225, author = {Sonu Mehta and Farima Farmahinifarahani and Ranjita Bhagwan and Suraj Guptha and Sina Jafari and Rahul Kumar and Vaibhav Saini and Anirudh Santhiar}, title = {Data-Driven Test Selection at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1225--1235}, doi = {10.1145/3468264.3473916}, year = {2021}, } Publisher's Version |
|
Haas, Roman |
ESEC/FSE '21: "How Can Manual Testing Processes ..."
How Can Manual Testing Processes Be Optimized? Developer Survey, Optimization Guidelines, and Case Studies
Roman Haas, Daniel Elsner, Elmar Juergens, Alexander Pretschner, and Sven Apel (Saarland University, Germany; CQSE, Germany; TU Munich, Germany) Manual software testing is tedious and costly as it involves significant human effort. Yet, it is still widely applied in industry and will be in the foreseeable future. Although there is arguably a great need for optimizing manual testing processes, research focuses mostly on optimization techniques for automated tests. Accordingly, there is no precise understanding of the practices and processes of manual testing in industry, nor of the pitfalls and untapped optimization potential. To shed light on this issue, we conducted a survey among 38 testing professionals from 16 companies to investigate their manual testing processes and identify potential for optimization. We synthesize guidelines for when optimization techniques from automated testing can be applied to manual testing. By means of case studies on two industrial software projects, we show that fault detection likelihood, test feedback time, and test creation effort can be improved when following our guidelines. @InProceedings{ESEC/FSE21p1281, author = {Roman Haas and Daniel Elsner and Elmar Juergens and Alexander Pretschner and Sven Apel}, title = {How Can Manual Testing Processes Be Optimized? Developer Survey, Optimization Guidelines, and Case Studies}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1281--1291}, doi = {10.1145/3468264.3473922}, year = {2021}, } Publisher's Version Info |
|
Hadarean, Liana |
ESEC/FSE '21: "RAPID: Checking API Usage ..."
RAPID: Checking API Usage for the Cloud in the Cloud
Michael Emmi, Liana Hadarean, Ranjit Jhala, Lee Pike, Nicolás Rosner, Martin Schäf, Aritra Sengupta, and Willem Visser (Amazon Web Services, USA) We present RAPID, an industrial-strength analysis developed at AWS that aims to help developers by providing automatic, fast and actionable feedback about correct usage of cloud-service APIs. RAPID’s design is based on the insight that cloud service APIs are structured around short-lived request- and response-objects whose usage patterns can be specified as value-dependent type-state automata and be verified by combining local type-state with global value-flow analyses. We describe various challenges that arose to deploy RAPID at scale. Finally, we present an evaluation that validates our design choices, deployment heuristics, and shows that RAPID is able to quickly and precisely report a wide variety of useful API misuse violations in large, industrial-strength code bases. @InProceedings{ESEC/FSE21p1416, author = {Michael Emmi and Liana Hadarean and Ranjit Jhala and Lee Pike and Nicolás Rosner and Martin Schäf and Aritra Sengupta and Willem Visser}, title = {RAPID: Checking API Usage for the Cloud in the Cloud}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1416--1426}, doi = {10.1145/3468264.3473934}, year = {2021}, } Publisher's Version |
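RAPID specifies correct API usage as type-state automata over short-lived request and response objects. The sketch below illustrates plain (value-independent) type-state checking over an event trace; the states, events, and helper name are hypothetical illustrations, not RAPID's actual specification language:

```python
# A toy type-state automaton: legal (state, event) -> next-state transitions.
# Hypothetical lifecycle for a request/response pair, not an AWS API.
TRANSITIONS = {
    ("created", "send"): "sent",
    ("sent", "read_response"): "closed",
}

def check_usage(events, start="created"):
    """Replay an event trace and report the first illegal transition, if any."""
    state = start
    for event in events:
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            return False, f"illegal event '{event}' in state '{state}'"
        state = nxt
    return True, state

# Correct usage reaches 'closed'; reading before sending is flagged.
print(check_usage(["send", "read_response"]))  # (True, 'closed')
print(check_usage(["read_response"])[0])       # False
```

RAPID goes further by making the automata value-dependent and combining this local type-state view with a global value-flow analysis, which this sketch does not attempt.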
|
Halfond, William G. J. |
ESEC/FSE '21: "Detecting and Localizing Keyboard ..."
Detecting and Localizing Keyboard Accessibility Failures in Web Applications
Paul T. Chiou, Ali S. Alotaibi, and William G. J. Halfond (University of Southern California, USA) The keyboard is the most universally supported input method operable by people with disabilities. Yet, many popular websites lack keyboard-accessible mechanisms, which can cause failures that make a website unusable. In this paper, we present a novel approach for automatically detecting and localizing keyboard accessibility failures in web applications. Our extensive evaluation on real-world web pages showed that our technique was able to detect keyboard failures in web applications with high precision and recall and to accurately identify the underlying elements in the web pages that led to the observed problems. @InProceedings{ESEC/FSE21p855, author = {Paul T. Chiou and Ali S. Alotaibi and William G. J. Halfond}, title = {Detecting and Localizing Keyboard Accessibility Failures in Web Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {855--867}, doi = {10.1145/3468264.3468581}, year = {2021}, } Publisher's Version Artifacts Reusable |
|
Hao, Dan |
ESEC/FSE '21: "Boosting Coverage-Based Fault ..."
Boosting Coverage-Based Fault Localization via Graph-Based Representation Learning
Yiling Lou, Qihao Zhu, Jinhao Dong, Xia Li, Zeyu Sun, Dan Hao, Lu Zhang, and Lingming Zhang (Peking University, China; Kennesaw State University, USA; University of Illinois at Urbana-Champaign, USA) Coverage-based fault localization has been extensively studied in the literature due to its effectiveness and lightweight nature for real-world systems. However, existing techniques often utilize coverage in an oversimplified way by abstracting detailed coverage into numbers of tests or boolean vectors, thus limiting their effectiveness in practice. In this work, we present a novel coverage-based fault localization technique, GRACE, which fully utilizes detailed coverage information with graph-based representation learning. Our intuition is that coverage can be regarded as connective relationships between tests and program entities, which can be inherently and integrally represented by a graph structure: with tests and program entities as nodes, and coverage and code structures as edges. Therefore, we first propose a novel graph-based representation to preserve all detailed coverage information and fine-grained code structures in one graph. Then we leverage a Gated Graph Neural Network to learn valuable features from the graph-based coverage representation and rank program entities in a listwise way. Our evaluation on the widely used benchmark Defects4J (V1.2.0) shows that GRACE significantly outperforms state-of-the-art coverage-based fault localization: GRACE localizes 195 bugs within Top-1 whereas the best compared technique can at most localize 166 bugs within Top-1. We further investigate the impact of each GRACE component and find that they all positively contribute to GRACE. In addition, our results also demonstrate that GRACE has learnt essential features from coverage, which are complementary to various information used in existing learning-based fault localization.
Finally, we evaluate GRACE in the cross-project prediction scenario on extra 226 bugs from Defects4J (V2.0.0), and find that GRACE consistently outperforms state-of-the-art coverage-based techniques. @InProceedings{ESEC/FSE21p664, author = {Yiling Lou and Qihao Zhu and Jinhao Dong and Xia Li and Zeyu Sun and Dan Hao and Lu Zhang and Lingming Zhang}, title = {Boosting Coverage-Based Fault Localization via Graph-Based Representation Learning}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {664--676}, doi = {10.1145/3468264.3468580}, year = {2021}, } Publisher's Version |
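For context on what GRACE improves upon: classic coverage-based fault localization abstracts the coverage matrix into per-entity counts and ranks entities with a suspiciousness formula such as Ochiai. The sketch below shows that standard formula from the fault-localization literature; it is background, not part of GRACE itself:

```python
import math

def ochiai(failed_cover, passed_cover, total_failed):
    """Ochiai suspiciousness for one program entity.

    failed_cover / passed_cover: how many failing / passing tests cover
    the entity; total_failed: total number of failing tests in the suite.
    """
    denom = math.sqrt(total_failed * (failed_cover + passed_cover))
    return failed_cover / denom if denom else 0.0

# An entity covered by every failing test and no passing test is maximally
# suspicious; adding passing-test coverage lowers the score.
print(ochiai(2, 0, total_failed=2))  # 1.0
print(ochiai(2, 2, total_failed=2))  # ~0.707
```

Reducing coverage to such counts discards which tests and entities co-occur, which is exactly the relational information GRACE's graph representation retains.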
|
Harman, Mark |
ESEC/FSE '21: "Fairea: A Model Behaviour ..."
Fairea: A Model Behaviour Mutation Approach to Benchmarking Bias Mitigation Methods
Max Hort, Jie M. Zhang, Federica Sarro, and Mark Harman (University College London, UK) The increasingly wide uptake of Machine Learning (ML) has raised the significance of the problem of tackling bias (i.e., unfairness), making it a primary software engineering concern. In this paper, we introduce Fairea, a model behaviour mutation approach to benchmarking ML bias mitigation methods. We also report on a large-scale empirical study to test the effectiveness of 12 widely-studied bias mitigation methods. Our results reveal that, surprisingly, bias mitigation methods have a poor effectiveness in 49% of the cases. In particular, 15% of the mitigation cases have worse fairness-accuracy trade-offs than the baseline established by Fairea; 34% of the cases have a decrease in accuracy and an increase in bias. Fairea has been made publicly available for software engineers and researchers to evaluate their bias mitigation methods. @InProceedings{ESEC/FSE21p994, author = {Max Hort and Jie M. Zhang and Federica Sarro and Mark Harman}, title = {Fairea: A Model Behaviour Mutation Approach to Benchmarking Bias Mitigation Methods}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {994--1006}, doi = {10.1145/3468264.3468565}, year = {2021}, } Publisher's Version Artifacts Functional |
|
Hassan, Ahmed E. |
ESEC/FSE '21: "The Gas Triangle and Its Challenges ..."
The Gas Triangle and Its Challenges to the Development of Blockchain-Powered Applications
Gustavo A. Oliva and Ahmed E. Hassan (Queen's University, Canada) Ethereum is the most popular blockchain platform for the development of blockchain-powered applications (a.k.a. DApps). Developing a DApp involves translating requests captured in the frontend of an application into contract transactions. However, transactions need to be paid for. Ethereum employs the gas system to charge transaction fees. The gas system has three key components, namely gas price, gas usage, and gas limit. We refer to these components and their interplay as the gas triangle. In this paper, we claim that the inherently complex gas triangle should not be exposed to end-users. We conduct two studies that provide empirical evidence to support our claim. In light of our results, we provide a list of recommendations to novice end-users. We conclude the paper with a list of research challenges that need to be tackled in order to support the development of next-generation DApps that completely hide the gas triangle from end-users. @InProceedings{ESEC/FSE21p1463, author = {Gustavo A. Oliva and Ahmed E. Hassan}, title = {The Gas Triangle and Its Challenges to the Development of Blockchain-Powered Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1463--1466}, doi = {10.1145/3468264.3473130}, year = {2021}, } Publisher's Version |
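The interplay the abstract describes can be made concrete with the legacy (pre-EIP-1559) Ethereum fee arithmetic: the sender sets a gas limit and a gas price, and is charged for the gas actually used, while execution that needs more than the limit fails. The helper below is our illustration, not code from the paper:

```python
def transaction_fee_wei(gas_used, gas_price_wei, gas_limit):
    """Legacy (pre-EIP-1559) fee model: fee = gas_used * gas_price.

    gas_limit caps how much gas the sender is willing to pay for; if
    execution needs more, the transaction fails (and gas spent is lost).
    """
    if gas_used > gas_limit:
        raise ValueError("out of gas: execution exceeded the gas limit")
    return gas_used * gas_price_wei

# A plain ETH transfer uses 21,000 gas; at a 50 gwei gas price this
# costs 1.05e15 wei, i.e., 0.00105 ETH.
fee = transaction_fee_wei(21_000, gas_price_wei=50 * 10**9, gas_limit=21_000)
print(fee)  # 1050000000000000
```

The "triangle" arises because end-users must reason about all three quantities at once: gas price influences how quickly a transaction is included, gas usage depends on contract logic, and a mis-set gas limit makes transactions fail.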
|
He, Hao |
ESEC/FSE '21: "A Large-Scale Empirical Study ..."
A Large-Scale Empirical Study on Java Library Migrations: Prevalence, Trends, and Rationales
Hao He, Runzhi He, Haiqiao Gu, and Minghui Zhou (Peking University, China; Tsinghua University, China) With the rise of open-source software and package hosting platforms, reusing 3rd-party libraries has become a common practice. Due to various failures during software evolution, a project may remove a used library and replace it with another library, which we call library migration. Despite substantial research on dependency management, the understanding of how and why library migrations occur is still lacking. Achieving this understanding may help practitioners optimize their library selection criteria, develop automated approaches to monitor dependencies, and provide migration suggestions for their libraries or software projects. In this paper, through a fine-grained commit-level analysis of 19,652 Java GitHub projects, we extract the largest migration dataset to-date (1,194 migration rules, 3,163 migration commits). We show that 8,065 (41.04%) projects have at least one library removal, 1,564 (7.96%, lower-bound) to 5,004 (25.46%, upper-bound) projects have at least one migration, and a median project with migrations has 2 to 4 migrations in total. We discover that library migrations are dominated by several domains (logging, JSON, testing and web service) presenting a long-tail distribution. Also, migrations are highly unidirectional in that libraries are either mostly abandoned or mostly chosen in our project corpus. A thematic analysis on related commit messages, issues, and pull requests identifies 14 frequently mentioned migration reasons (e.g., lack of maintenance, usability, integration, etc.), 7 of which are not discussed in previous work. Our findings can be operationalized into actionable insights for package hosting platforms, project maintainers, and library developers. We provide a replication package at https://doi.org/10.5281/zenodo.4816752.
@InProceedings{ESEC/FSE21p478, author = {Hao He and Runzhi He and Haiqiao Gu and Minghui Zhou}, title = {A Large-Scale Empirical Study on Java Library Migrations: Prevalence, Trends, and Rationales}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {478--490}, doi = {10.1145/3468264.3468571}, year = {2021}, } Publisher's Version Artifacts Reusable |
|
He, Qiang |
ESEC/FSE '21: "Embedding App-Library Graph ..."
Embedding App-Library Graph for Neural Third Party Library Recommendation
Bo Li, Qiang He, Feifei Chen, Xin Xia, Li Li, John Grundy, and Yun Yang (Swinburne University of Technology, Australia; Deakin University, Australia; Monash University, Australia) The mobile app marketplace is fiercely competitive for mobile app developers, who need to develop and update their apps as quickly as possible to gain a first-mover advantage. Third-party libraries (TPLs) offer developers an easier way to enhance their apps with new features. However, how to find suitable candidates among the large number of fast-changing TPLs is a challenging problem. TPL recommendation is a promising solution, but unfortunately existing approaches suffer from low accuracy in recommendation results. To tackle this challenge, we propose GRec, a graph neural network (GNN) based approach, for recommending potentially useful TPLs for app development. GRec models mobile apps, TPLs, and their interactions into an app-library graph. It then distills app-library interaction information from the app-library graph to make more accurate TPL recommendations. To evaluate GRec’s performance, we conduct comprehensive experiments based on a large-scale real-world Android app dataset containing 31,432 Android apps, 752 distinct TPLs, and 537,011 app-library usage records. Our experimental results illustrate that GRec can significantly increase the prediction accuracy and diversify the prediction results compared with state-of-the-art methods. A user study performed with app developers also confirms GRec's usefulness for real-world mobile app development. @InProceedings{ESEC/FSE21p466, author = {Bo Li and Qiang He and Feifei Chen and Xin Xia and Li Li and John Grundy and Yun Yang}, title = {Embedding App-Library Graph for Neural Third Party Library Recommendation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {466--477}, doi = {10.1145/3468264.3468552}, year = {2021}, } Publisher's Version |
|
He, Runzhi |
ESEC/FSE '21: "A Large-Scale Empirical Study ..."
A Large-Scale Empirical Study on Java Library Migrations: Prevalence, Trends, and Rationales
Hao He, Runzhi He, Haiqiao Gu, and Minghui Zhou (Peking University, China; Tsinghua University, China) With the rise of open-source software and package hosting platforms, reusing 3rd-party libraries has become a common practice. Due to various failures during software evolution, a project may remove a used library and replace it with another library, which we call library migration. Despite substantial research on dependency management, the understanding of how and why library migrations occur is still lacking. Achieving this understanding may help practitioners optimize their library selection criteria, develop automated approaches to monitor dependencies, and provide migration suggestions for their libraries or software projects. In this paper, through a fine-grained commit-level analysis of 19,652 Java GitHub projects, we extract the largest migration dataset to-date (1,194 migration rules, 3,163 migration commits). We show that 8,065 (41.04%) projects have at least one library removal, 1,564 (7.96%, lower-bound) to 5,004 (25.46%, upper-bound) projects have at least one migration, and a median project with migrations has 2 to 4 migrations in total. We discover that library migrations are dominated by several domains (logging, JSON, testing and web service) presenting a long-tail distribution. Also, migrations are highly unidirectional in that libraries are either mostly abandoned or mostly chosen in our project corpus. A thematic analysis on related commit messages, issues, and pull requests identifies 14 frequently mentioned migration reasons (e.g., lack of maintenance, usability, integration, etc.), 7 of which are not discussed in previous work. Our findings can be operationalized into actionable insights for package hosting platforms, project maintainers, and library developers. We provide a replication package at https://doi.org/10.5281/zenodo.4816752.
@InProceedings{ESEC/FSE21p478, author = {Hao He and Runzhi He and Haiqiao Gu and Minghui Zhou}, title = {A Large-Scale Empirical Study on Java Library Migrations: Prevalence, Trends, and Rationales}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {478--490}, doi = {10.1145/3468264.3468571}, year = {2021}, } Publisher's Version Artifacts Reusable |
|
He, Shilin |
ESEC/FSE '21: "Onion: Identifying Incident-Indicating ..."
Onion: Identifying Incident-Indicating Logs for Cloud Systems
Xu Zhang, Yong Xu, Si Qin, Shilin He, Bo Qiao, Ze Li, Hongyu Zhang, Xukun Li, Yingnong Dang, Qingwei Lin, Murali Chintalapati, Saravanakumar Rajmohan, and Dongmei Zhang (Microsoft Research, China; Microsoft Azure, USA; University of Newcastle, Australia; Microsoft 365, USA) In cloud systems, incidents affect the availability of services and require quick mitigation actions. Once an incident occurs, operators and developers often examine logs to perform fault diagnosis. However, the large volume of diverse logs and the overwhelming details in log data make the manual diagnosis process time-consuming and error-prone. In this paper, we propose Onion, an automatic solution for precisely and efficiently locating incident-indicating logs, which can provide useful clues for diagnosing the incidents. We first point out three criteria for localizing incident-indicating logs, i.e., Consistency, Impact, and Bilateral-Difference. Then we propose a novel agglomeration of logs, called log clique, based on which these criteria are satisfied. To obtain log cliques, we develop an incident-aware log representation and a progressive log clustering technique. Contrast analysis is then performed on the cliques to identify the incident-indicating logs. We have evaluated Onion using well-labeled log datasets. Onion achieves an average F1-score of 0.95 and can process millions of logs in only a few minutes, demonstrating its effectiveness and efficiency. Onion has also been successfully applied to the cloud system of Microsoft. Its practicability has been confirmed through quantitative and qualitative analysis of real incident cases.
@InProceedings{ESEC/FSE21p1253, author = {Xu Zhang and Yong Xu and Si Qin and Shilin He and Bo Qiao and Ze Li and Hongyu Zhang and Xukun Li and Yingnong Dang and Qingwei Lin and Murali Chintalapati and Saravanakumar Rajmohan and Dongmei Zhang}, title = {Onion: Identifying Incident-Indicating Logs for Cloud Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1253--1263}, doi = {10.1145/3468264.3473919}, year = {2021}, } Publisher's Version |
|
Hellendoorn, Vincent J. |
ESEC/FSE '21: "Towards Automating Code Review ..."
Towards Automating Code Review at Scale
Vincent J. Hellendoorn, Jason Tsay, Manisha Mukherjee, and Martin Hirzel (Carnegie Mellon University, USA; IBM Research, USA) As neural methods are increasingly used to support and automate software development tasks, code review is a natural next target. Yet, training models to imitate developers based on past code reviews is far from straightforward: reviews found in open-source projects vary greatly in quality, phrasing, and depth depending on the reviewer. In addition, changesets are often large, stretching the capacity of current neural models. Recent work reported modest success at predicting review resolutions, but largely side-stepped the above issues by focusing on small inputs where comments were already known to occur. This work examines the vision and challenges of automating code review at realistic scale. We collect hundreds of thousands of changesets across hundreds of projects that routinely conduct code review, many of which change thousands of tokens. We focus on predicting just the locations of comments, which are quite rare. By analyzing model performance and dataset statistics, we show that even this task is already challenging, in no small part because of tremendous variation and (apparent) randomness in code reviews. Our findings give rise to a research agenda for realistically and impactfully automating code review. @InProceedings{ESEC/FSE21p1479, author = {Vincent J. Hellendoorn and Jason Tsay and Manisha Mukherjee and Martin Hirzel}, title = {Towards Automating Code Review at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1479--1482}, doi = {10.1145/3468264.3473134}, year = {2021}, } Publisher's Version
ESEC/FSE '21: "Understanding Neural Code ..."
Understanding Neural Code Intelligence through Program Simplification
Md Rafiqul Islam Rabin, Vincent J. Hellendoorn, and Mohammad Amin Alipour (University of Houston, USA; Carnegie Mellon University, USA) A wide range of code intelligence (CI) tools, powered by deep neural networks, have been developed recently to improve programming productivity and perform program analysis. To reliably use such tools, developers often need to reason about the behavior of the underlying models and the factors that affect them. This is especially challenging for tools backed by deep neural networks. Various methods have tried to reduce this opacity in the vein of "transparent/interpretable-AI". However, these approaches are often specific to a particular set of network architectures, even requiring access to the network's parameters. This makes them difficult to use for the average programmer, which hinders the reliable adoption of neural CI systems. In this paper, we propose a simple, model-agnostic approach to identify critical input features for models in CI systems, by drawing on software debugging research, specifically delta debugging. Our approach, SIVAND, uses simplification techniques that reduce the size of input programs of a CI model while preserving the predictions of the model. We show that this approach yields remarkably small outputs and is broadly applicable across many model architectures and problem domains. We find that the models in our experiments often rely heavily on just a few syntactic features in input programs. We believe that SIVAND's extracted features may help understand neural CI systems' predictions and learned behavior. @InProceedings{ESEC/FSE21p441, author = {Md Rafiqul Islam Rabin and Vincent J. Hellendoorn and Mohammad Amin Alipour}, title = {Understanding Neural Code Intelligence through Program Simplification}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {441--452}, doi = {10.1145/3468264.3468539}, year = {2021}, } Publisher's Version |
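As a concrete illustration of the delta-debugging idea that SIVAND builds on, the sketch below greedily drops tokens from an input program while a toy stand-in "model" keeps its prediction. The predicate, the token granularity, and the one-token-at-a-time loop are all illustrative assumptions, not SIVAND's actual components.

```python
def predict(tokens):
    """Toy stand-in for a CI model: 'predicts' based on a single syntactic feature."""
    return "div" if "/" in tokens else "other"

def simplify(tokens, predict):
    """Greedily drop tokens one at a time while the prediction is unchanged
    (a simplified flavor of delta debugging at granularity 1)."""
    target = predict(tokens)
    i = 0
    while i < len(tokens):
        candidate = tokens[:i] + tokens[i + 1:]
        if candidate and predict(candidate) == target:
            tokens = candidate        # removal preserved the prediction
        else:
            i += 1                    # this token is critical; keep it
    return tokens

program = "x = ( a + b ) / c".split()
print(simplify(program, predict))     # → ['/']
```

As in the paper's findings, the surviving tokens expose which input features the (toy) model actually relies on.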
|
Heller, Veit |
ESEC/FSE '21: "Efficient Module-Level Dynamic ..."
Efficient Module-Level Dynamic Analysis for Dynamic Languages with Module Recontextualization
Nikos Vasilakis, Grigoris Ntousakis, Veit Heller, and Martin C. Rinard (Massachusetts Institute of Technology, USA; TU Crete, Greece) Dynamic program analysis is a long-standing technique for obtaining information about program execution. We present module recontextualization, a new dynamic analysis approach that targets modern dynamic languages such as JavaScript and Racket, enabled by the fact that they feature a module-import mechanism that loads code at runtime as a string. This approach uses lightweight load-time code transformations that operate on the string representation of the module, as well as the context to which it is about to be bound, to insert developer-provided, analysis-specific code into the module before it is loaded. This code implements the dynamic analysis, enabling this approach to capture all interactions around the module in unmodified production language runtime environments. We implement this approach in two systems targeting the JavaScript and Racket ecosystems. Our evaluation shows that this approach can deliver order-of-magnitude performance improvements over state-of-the-art dynamic analysis systems while supporting a range of analyses, implemented on average in about 100 lines of code. @InProceedings{ESEC/FSE21p1202, author = {Nikos Vasilakis and Grigoris Ntousakis and Veit Heller and Martin C. Rinard}, title = {Efficient Module-Level Dynamic Analysis for Dynamic Languages with Module Recontextualization}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1202--1213}, doi = {10.1145/3468264.3468574}, year = {2021}, } Publisher's Version Info Artifacts Functional |
|
Heo, Kihong |
ESEC/FSE '21: "Boosting Static Analysis Accuracy ..."
Boosting Static Analysis Accuracy with Instrumented Test Executions
Tianyi Chen, Kihong Heo, and Mukund Raghothaman (University of Southern California, USA; KAIST, South Korea) The two broad approaches to discover properties of programs---static and dynamic analyses---have complementary strengths: static techniques perform exhaustive exploration and prove upper bounds on program behaviors, while the dynamic analysis of test cases provides concrete evidence of these behaviors and promises low false alarm rates. In this paper, we present DynaBoost, a system which uses information obtained from test executions to prioritize the alarms of a static analyzer. We instrument the program to dynamically look for dataflow behaviors predicted by the static analyzer, and use these results to bootstrap a probabilistic alarm ranking system, where the user repeatedly inspects the alarm judged most likely to be a real bug, and where the system re-ranks the remaining alarms in response to user feedback. The combined system is able to exploit information that cannot be easily provided by users, and significantly reduces the human alarm inspection burden: by 35% compared to the baseline ranking system, and by 89% compared to an unaided programmer triaging alarm reports. @InProceedings{ESEC/FSE21p1154, author = {Tianyi Chen and Kihong Heo and Mukund Raghothaman}, title = {Boosting Static Analysis Accuracy with Instrumented Test Executions}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1154--1165}, doi = {10.1145/3468264.3468626}, year = {2021}, } Publisher's Version Artifacts Functional |
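A caricature of the re-ranking idea in the DynaBoost abstract: each alarm gets a score from a static prior plus a bonus when instrumented runs confirmed the predicted dataflow, and higher-scoring alarms are inspected first. The scoring scheme and all numbers below are invented for illustration; the paper's probabilistic model is more sophisticated.

```python
def rank_alarms(priors, confirmed, bonus=0.5):
    """Order alarms by score: a static-analysis prior, boosted when test
    executions confirmed the dataflow the analyzer predicted."""
    def score(alarm):
        return priors[alarm] + (bonus if alarm in confirmed else 0.0)
    return sorted(priors, key=score, reverse=True)

# Hypothetical alarm priors and the subset confirmed by instrumented tests.
priors = {"alarm1": 0.4, "alarm2": 0.6, "alarm3": 0.3}
confirmed = {"alarm3"}
print(rank_alarms(priors, confirmed))   # → ['alarm3', 'alarm2', 'alarm1']
```

The dynamically confirmed alarm jumps to the front of the inspection queue even though its static prior was the lowest.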
|
Heumüller, Robert |
ESEC/FSE '21: "Exploit Those Code Reviews! ..."
Exploit Those Code Reviews! Bigger Data for Deeper Learning
Robert Heumüller, Sebastian Nielebock, and Frank Ortmeier (University of Magdeburg, Germany) Modern code review (MCR) processes are prevalent in most organizations that develop software due to benefits in quality assurance and knowledge transfer. With the rise of collaborative software development platforms like GitHub and Bitbucket, today, millions of projects share not only their code but also their review data. Although researchers have tried to exploit this data for more than a decade, most of that knowledge remains a buried treasure. A crucial catalyst for many advances in deep learning, however, is the accessibility of large-scale standard datasets for different learning tasks. This paper presents the ETCR (Exploit Those Code Reviews!) infrastructure for mining MCR datasets from any GitHub project practicing pull-request-based development. We demonstrate its effectiveness with ETCR-Elasticsearch, a dataset of >231k review comments for >47k Java file revisions in >40k pull-requests from the Elasticsearch project. ETCR is designed with the challenge of deep learning in mind. Compared to previous datasets, ETCR datasets include all information for linking review comments to nodes in the respective program’s Abstract Syntax Tree. @InProceedings{ESEC/FSE21p1505, author = {Robert Heumüller and Sebastian Nielebock and Frank Ortmeier}, title = {Exploit Those Code Reviews! Bigger Data for Deeper Learning}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1505--1509}, doi = {10.1145/3468264.3473110}, year = {2021}, } Publisher's Version Video |
|
Higo, Yoshiki |
ESEC/FSE '21: "NIL: Large-Scale Detection ..."
NIL: Large-Scale Detection of Large-Variance Clones
Tasuku Nakagawa, Yoshiki Higo, and Shinji Kusumoto (Osaka University, Japan) A code clone (in short, clone) is a code fragment that is identical or similar to other code fragments in source code. Clones generated by a large number of changes to copy-and-pasted code fragments are called large-variance (modifications are scattered) or large-gap (modifications are in one place) clones. It is difficult for general clone detection techniques to detect such clones and thus specialized techniques are necessary. In addition, with the rapid growth of software development, scalable clone detectors that can detect clones in large codebases are required. However, there are no existing techniques for quickly detecting large-variance or large-gap clones in large codebases. In this paper, we propose a scalable clone detection technique that can detect large-variance clones from large codebases and describe its implementation, called NIL. NIL is a token-based clone detector that efficiently identifies clone candidates using an N-gram representation of token sequences and an inverted index. Then, NIL verifies the clone candidates by measuring their similarity based on the longest common subsequence between their token sequences. We evaluate NIL in terms of large-variance clone detection accuracy, general Type-1, Type-2, and Type-3 clone detection accuracy, and scalability. Our experimental results show that NIL has higher accuracy in terms of large-variance clone detection, equivalent accuracy in terms of general clone detection, and the shortest execution time for inputs of various sizes (1–250 MLOC) compared to existing state-of-the-art tools. @InProceedings{ESEC/FSE21p830, author = {Tasuku Nakagawa and Yoshiki Higo and Shinji Kusumoto}, title = {NIL: Large-Scale Detection of Large-Variance Clones}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {830--841}, doi = {10.1145/3468264.3468564}, year = {2021}, } Publisher's Version Info |
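The verification step NIL's abstract describes, measuring candidate similarity via the longest common subsequence (LCS) of token sequences, can be sketched as follows. This is an illustrative reconstruction, not NIL's code, and the 0.7 threshold is an assumed parameter.

```python
def lcs_length(a, b):
    """Classic dynamic-programming longest common subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def is_clone(tokens1, tokens2, threshold=0.7):
    """Accept a candidate pair when the LCS covers enough of the shorter sequence."""
    sim = lcs_length(tokens1, tokens2) / min(len(tokens1), len(tokens2))
    return sim >= threshold

# A large-variance-style pair: edits scattered through a copied fragment.
a = "int i = 0 ; while ( i < n ) { sum += a [ i ] ; i ++ ; }".split()
b = "int i = 0 ; while ( i < n ) { total += b [ i ] ; log ( i ) ; i ++ ; }".split()
print(is_clone(a, b))
```

Because the LCS tolerates insertions and substitutions anywhere in the sequence, scattered modifications (the large-variance case) still yield a high similarity.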
|
Hirzel, Martin |
ESEC/FSE '21: "Towards Automating Code Review ..."
Towards Automating Code Review at Scale
Vincent J. Hellendoorn, Jason Tsay, Manisha Mukherjee, and Martin Hirzel (Carnegie Mellon University, USA; IBM Research, USA) As neural methods are increasingly used to support and automate software development tasks, code review is a natural next target. Yet, training models to imitate developers based on past code reviews is far from straightforward: reviews found in open-source projects vary greatly in quality, phrasing, and depth depending on the reviewer. In addition, changesets are often large, stretching the capacity of current neural models. Recent work reported modest success at predicting review resolutions, but largely side-stepped the above issues by focusing on small inputs where comments were already known to occur. This work examines the vision and challenges of automating code review at realistic scale. We collect hundreds of thousands of changesets across hundreds of projects that routinely conduct code review, many of which change thousands of tokens. We focus on predicting just the locations of comments, which are quite rare. By analyzing model performance and dataset statistics, we show that even this task is already challenging, in no small part because of tremendous variation and (apparent) randomness in code reviews. Our findings give rise to a research agenda for realistically and impactfully automating code review. @InProceedings{ESEC/FSE21p1479, author = {Vincent J. Hellendoorn and Jason Tsay and Manisha Mukherjee and Martin Hirzel}, title = {Towards Automating Code Review at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1479--1482}, doi = {10.1145/3468264.3473134}, year = {2021}, } Publisher's Version |
|
Hoff, Adrian |
ESEC/FSE '21: "Towards Immersive Software ..."
Towards Immersive Software Archaeology: Regaining Legacy Systems’ Design Knowledge via Interactive Exploration in Virtual Reality
Adrian Hoff, Michael Nieke, and Christoph Seidl (IT University of Copenhagen, Denmark) Many of today's software systems will become the legacy systems of tomorrow, comprised of outdated technology and inaccurate design documents. Preparing for their eventual re-engineering requires engineers to regain lost design knowledge and discover re-engineering opportunities. While tools and visualizations exist, comprehending an unfamiliar code base remains challenging. Hence, software archaeology suffers from a considerable entry barrier as it requires expert knowledge, significant diligence, tenacity, and stamina. In this paper, we propose a paradigm shift in how legacy systems' design knowledge can be regained by presenting our vision for an immersive explorable software visualization in virtual reality (VR). We propose innovative concepts leveraging benefits of VR for a) immersion in an exoteric visualization metaphor, b) effective navigation and orientation, c) guiding exploration, and d) maintaining a link to the implementation. By enabling immersive and playful legacy system exploration, we strive for lowering the entry barrier, fostering long-term engagement, strengthening mental-model building, and improving knowledge retention in an effort to ease coping with the increased number of tomorrow's legacy systems. @InProceedings{ESEC/FSE21p1455, author = {Adrian Hoff and Michael Nieke and Christoph Seidl}, title = {Towards Immersive Software Archaeology: Regaining Legacy Systems’ Design Knowledge via Interactive Exploration in Virtual Reality}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1455--1458}, doi = {10.1145/3468264.3473128}, year = {2021}, } Publisher's Version |
|
Hoffmann, Henry |
ESEC/FSE '21: "Generalizable and Interpretable ..."
Generalizable and Interpretable Learning for Configuration Extrapolation
Yi Ding, Ahsan Pervaiz, Michael Carbin, and Henry Hoffmann (Massachusetts Institute of Technology, USA; University of Chicago, USA) Modern software applications are increasingly configurable, which puts a burden on users to tune these configurations for their target hardware and workloads. To help users, machine learning techniques can model the complex relationships between software configuration parameters and performance. While powerful, these learners have two major drawbacks: (1) they rarely incorporate prior knowledge and (2) they produce outputs that are not interpretable by users. These limitations make it difficult to (1) leverage information a user has already collected (e.g., tuning for new hardware using the best configurations from old hardware) and (2) gain insights into the learner’s behavior (e.g., understanding why the learner chose different configurations on different hardware or for different workloads). To address these issues, this paper presents two configuration optimization tools, GIL and GIL+, using the proposed generalizable and interpretable learning approaches. To incorporate prior knowledge, the proposed tools (1) start from known configurations, (2) iteratively construct a new linear model, (3) extrapolate better performance configurations from that model, and (4) repeat. Since the base learners are linear models, these tools are inherently interpretable. We enhance this property with a graphical representation of how they arrived at the highest performance configuration. We evaluate GIL and GIL+ by using them to configure Apache Spark workloads on different hardware platforms and find that, compared to prior work, GIL and GIL+ produce comparable, and sometimes even better performance configurations, but with interpretable results. 
@InProceedings{ESEC/FSE21p728, author = {Yi Ding and Ahsan Pervaiz and Michael Carbin and Henry Hoffmann}, title = {Generalizable and Interpretable Learning for Configuration Extrapolation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {728--740}, doi = {10.1145/3468264.3468603}, year = {2021}, } Publisher's Version |
|
Holzmann, Gerard J. |
ESEC/FSE '21: "Interactive Analysis of Large ..."
Interactive Analysis of Large Code Bases (Invited Talk)
Gerard J. Holzmann (Nimble Research, USA) Current static source code analyzers can be slow, hard to use correctly, and expensive. If not properly configured, they can also generate large amounts of output, even for well-written code. To fix this, we developed a new tool called Cobra. The Cobra tool can be used interactively even on very large code bases, which means that it is very fast. It is also designed to be easy to use and free. The tool comes with a library of predefined queries to catch standard coding issues, including cyber-security related risks from the CVE database. The query libraries can be used as provided, but the best part is that you can also easily modify and extend those queries, or add your own. The Cobra tool is language neutral: you can use it to check C or C++ code, or Java, Ada, Python, or even English prose. I'll show how the tool works, what makes it fast, and how you can write powerful queries in a couple of different ways. @InProceedings{ESEC/FSE21p4, author = {Gerard J. Holzmann}, title = {Interactive Analysis of Large Code Bases (Invited Talk)}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {4--4}, doi = {10.1145/3468264.3478691}, year = {2021}, } Publisher's Version |
|
Horlings, Erik |
ESEC/FSE '21: "Analysis of Specifications ..."
Analysis of Specifications of Multiparty Sessions with dcj-lint
Erik Horlings and Sung-Shik Jongmans (Open University of the Netherlands, Netherlands; CWI, Netherlands) Multiparty session types constitute a method to automatically detect violations of protocol implementations relative to specifications. But, when a violation is detected, does it symptomise a bug in the implementation or in the specification? This paper presents dcj-lint: an analysis tool to detect bugs in protocol specifications, based on multiparty session types. By leveraging a custom-built temporal logic model checker, dcj-lint can be used to efficiently perform: (1) generic sanity checks, and (2) protocol-specific property analyses. In our benchmarks, dcj-lint outperforms an existing state-of-the-art model checker (up to 61x faster). @InProceedings{ESEC/FSE21p1590, author = {Erik Horlings and Sung-Shik Jongmans}, title = {Analysis of Specifications of Multiparty Sessions with dcj-lint}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1590--1594}, doi = {10.1145/3468264.3473127}, year = {2021}, } Publisher's Version |
|
Hort, Max |
ESEC/FSE '21: "Fairea: A Model Behaviour ..."
Fairea: A Model Behaviour Mutation Approach to Benchmarking Bias Mitigation Methods
Max Hort, Jie M. Zhang, Federica Sarro, and Mark Harman (University College London, UK) The increasingly wide uptake of Machine Learning (ML) has raised the significance of the problem of tackling bias (i.e., unfairness), making it a primary software engineering concern. In this paper, we introduce Fairea, a model behaviour mutation approach to benchmarking ML bias mitigation methods. We also report on a large-scale empirical study to test the effectiveness of 12 widely-studied bias mitigation methods. Our results reveal that, surprisingly, bias mitigation methods have poor effectiveness in 49% of the cases. In particular, 15% of the mitigation cases have worse fairness-accuracy trade-offs than the baseline established by Fairea; 34% of the cases have a decrease in accuracy and an increase in bias. Fairea has been made publicly available for software engineers and researchers to evaluate their bias mitigation methods. @InProceedings{ESEC/FSE21p994, author = {Max Hort and Jie M. Zhang and Federica Sarro and Mark Harman}, title = {Fairea: A Model Behaviour Mutation Approach to Benchmarking Bias Mitigation Methods}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {994--1006}, doi = {10.1145/3468264.3468565}, year = {2021}, } Publisher's Version Artifacts Functional |
|
Hou, Jianwei |
ESEC/FSE '21: "Detecting Node.js Prototype ..."
Detecting Node.js Prototype Pollution Vulnerabilities via Object Lookup Analysis
Song Li, Mingqing Kang, Jianwei Hou, and Yinzhi Cao (Johns Hopkins University, USA; Renmin University of China, China) Prototype pollution is a type of vulnerability specific to prototype-based languages, such as JavaScript, which allows an adversary to pollute a base object’s property, leading to a further consequence such as Denial of Service (DoS), arbitrary code execution, and session fixation. On one hand, the only prior work in detecting prototype pollution adopts dynamic analysis to fuzz package inputs, which inevitably has code coverage issues in triggering some deeply embedded vulnerabilities. On the other hand, it is challenging to apply state-of-the-art static analysis in detecting prototype pollution because of the involvement of prototype chains and fine-grained object relations including built-in ones. In this paper, we propose a flow-, context-, and branch-sensitive static taint analysis tool, called ObjLupAnsys, to detect prototype pollution vulnerabilities. The key of ObjLupAnsys is a so-called object lookup analysis, which gradually expands the source and sink objects into big clusters with a complex inner structure by performing targeted object lookups in both clusters so that a system built-in function can be redefined. Specifically, at the source cluster, ObjLupAnsys proactively creates new object properties based on how the target program uses the initial source object; at the sink cluster, ObjLupAnsys assigns property values in object lookups to decrease the number of object lookups to reach a system built-in function. We implemented an open-source tool and applied it for the detection of prototype pollution among Node.js packages. Our evaluation shows that ObjLupAnsys finds 61 zero-day, previously-unknown, exploitable vulnerabilities as opposed to 18 by the state-of-the-art dynamic fuzzing tool and three by a state-of-the-art static analysis tool that is modified to detect prototype pollution. 
To date, 11 vulnerable Node.js packages have been assigned CVE numbers and five have already been patched by their developers. In addition, ObjLupAnsys also discovered seven applications or packages including a real-world, online website, which are indirectly vulnerable due to the inclusion of vulnerable packages found by ObjLupAnsys. @InProceedings{ESEC/FSE21p268, author = {Song Li and Mingqing Kang and Jianwei Hou and Yinzhi Cao}, title = {Detecting Node.js Prototype Pollution Vulnerabilities via Object Lookup Analysis}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {268--279}, doi = {10.1145/3468264.3468542}, year = {2021}, } Publisher's Version |
|
Hu, Yang |
ESEC/FSE '21: "ACHyb: A Hybrid Analysis Approach ..."
ACHyb: A Hybrid Analysis Approach to Detect Kernel Access Control Vulnerabilities
Yang Hu, Wenxi Wang, Casen Hunger, Riley Wood, Sarfraz Khurshid, and Mohit Tiwari (University of Texas at Austin, USA) Access control is essential for the Operating System (OS) security. Incorrect implementation of access control can introduce new attack surfaces to the OS, known as Kernel Access Control Vulnerabilities (KACVs). To understand KACVs, we conduct our study on the root causes and the security impacts of KACVs. Regarding the complexity of the recognized root causes, we particularly focus on two kinds of KACVs, namely KACV-M (due to missing permission checks) and KACV-I (due to misusing permission checks). We find that over 60% of these KACVs are of critical, high or medium security severity, resulting in a variety of security threats including security-check bypassing, privilege escalation, etc. However, existing approaches can only detect KACV-M. The state-of-the-art KACV-M detector called PeX is a static analysis tool, which still suffers from extremely high false-positive rates. In this paper, we present ACHyb, a precise and scalable approach to reveal both KACV-M and KACV-I. ACHyb is a hybrid approach, which first applies static analysis to identify the potentially vulnerable paths and then applies dynamic analysis to further reduce the false positives of the paths. For the static analysis, ACHyb improves PeX in both the precision and the soundness, using the interface analysis, callsite dependence analysis and constraint-based invariant analysis with a stronger access control invariant. For the dynamic analysis, ACHyb utilizes the greybox fuzzing to identify the potential KACVs. In order to improve the fuzzing efficiency, ACHyb adopts our novel clustering-based seed distillation approach to generate high-quality seed programs. Our experimental results show that ACHyb reveals 76 potential KACVs in less than 8 hours and 22 of them are KACVs (19 KACV-M and 3 KACV-I).
In contrast, PeX reveals 2,088 potential KACVs in more than 11 hours, and only 14 of them are KACVs (all KACV-M). Furthermore, ACHyb successfully uncovers 7 new KACVs, and 2 of them (1 KACV-M and 1 KACV-I) have been confirmed by kernel developers. @InProceedings{ESEC/FSE21p316, author = {Yang Hu and Wenxi Wang and Casen Hunger and Riley Wood and Sarfraz Khurshid and Mohit Tiwari}, title = {ACHyb: A Hybrid Analysis Approach to Detect Kernel Access Control Vulnerabilities}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {316--327}, doi = {10.1145/3468264.3468627}, year = {2021}, } Publisher's Version Artifacts Reusable |
|
Hu, Zhipeng |
ESEC/FSE '21: "GLIB: Towards Automated Test ..."
GLIB: Towards Automated Test Oracle for Graphically-Rich Applications
Ke Chen, Yufei Li, Yingfeng Chen, Changjie Fan, Zhipeng Hu, and Wei Yang (Netease, China; University of Texas at Dallas, USA) Graphically-rich applications such as games are ubiquitous, with attractive Graphical User Interface (GUI) visual effects that offer a bridge between software applications and end-users. However, various types of graphical glitches may arise from such GUI complexity and have become one of the main components of software compatibility issues. Our study on bug reports from game development teams in NetEase Inc. indicates that graphical glitches frequently occur during the GUI rendering and severely degrade the quality of graphically-rich applications such as video games. Existing automated testing techniques for such applications focus mainly on generating various GUI test sequences and checking whether the test sequences can cause crashes. These techniques require constant human attention to capture non-crashing bugs such as those causing graphical glitches. In this paper, we present the first step in automating the test oracle for detecting non-crashing bugs in graphically-rich applications. Specifically, we propose GLIB based on a code-based data augmentation technique to detect game GUI glitches. We perform an evaluation of GLIB on 20 real-world game apps (with bug reports available) and the result shows that GLIB can achieve 100% precision and 99.5% recall in detecting non-crashing bugs such as game GUI glitches. Practical application of GLIB on another 14 real-world games (without bug reports) further demonstrates that GLIB can effectively uncover GUI glitches, with 48 of 53 bugs reported by GLIB having been confirmed and fixed so far.
@InProceedings{ESEC/FSE21p1093, author = {Ke Chen and Yufei Li and Yingfeng Chen and Changjie Fan and Zhipeng Hu and Wei Yang}, title = {GLIB: Towards Automated Test Oracle for Graphically-Rich Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1093--1104}, doi = {10.1145/3468264.3468586}, year = {2021}, } Publisher's Version |
|
Huang, Gang |
ESEC/FSE '21: "TaintStream: Fine-Grained ..."
TaintStream: Fine-Grained Taint Tracking for Big Data Platforms through Dynamic Code Translation
Chengxu Yang, Yuanchun Li, Mengwei Xu, Zhenpeng Chen, Yunxin Liu, Gang Huang, and Xuanzhe Liu (Peking University, China; Microsoft Research, China; Beijing University of Posts and Telecommunications, China; Tsinghua University, China) Big data has become valuable property for enterprises and enabled various intelligent applications. Today, it is common to host data in big data platforms (e.g., Spark), where developers can submit scripts to process the original and intermediate data tables. Meanwhile, it is highly desirable to manage the data to comply with various privacy requirements. To enable flexible and automated privacy policy enforcement, we propose TaintStream, a fine-grained taint tracking framework for Spark-like big data platforms. TaintStream works by automatically injecting taint tracking logic into the data processing scripts, and the injected scripts are dynamically translated to maintain a taint tag for each cell during execution. The dynamic translation rules are carefully designed to guarantee non-interference in the original data operation. By defining different semantics of taint tags, TaintStream can enable various data management applications such as access control, data retention, and user data erasure. Our experiments on a self-crafted benchmark suite show that TaintStream is able to achieve accurate cell-level taint tracking with a precision of 93.0% and less than 15% overhead. We also demonstrate the usefulness of TaintStream through several real-world use cases of privacy policy enforcement. @InProceedings{ESEC/FSE21p806, author = {Chengxu Yang and Yuanchun Li and Mengwei Xu and Zhenpeng Chen and Yunxin Liu and Gang Huang and Xuanzhe Liu}, title = {TaintStream: Fine-Grained Taint Tracking for Big Data Platforms through Dynamic Code Translation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {806--817}, doi = {10.1145/3468264.3468532}, year = {2021}, } Publisher's Version ESEC/FSE '21: "An Empirical Study on Challenges ..." 
An Empirical Study on Challenges of Application Development in Serverless Computing Jinfeng Wen, Zhenpeng Chen, Yi Liu, Yiling Lou, Yun Ma, Gang Huang, Xin Jin, and Xuanzhe Liu (Peking University, China) Serverless computing is an emerging paradigm for cloud computing, gaining traction in a wide range of applications such as video processing and machine learning. This new paradigm allows developers to focus on the development of the logic of serverless computing based applications (abbreviated as serverless-based applications) at the granularity of functions, thereby freeing developers from tedious and error-prone infrastructure management. Meanwhile, it also introduces new challenges in the design, implementation, and deployment of serverless-based applications, and current serverless computing platforms are far from satisfactory. However, to the best of our knowledge, these challenges have not been well studied. To fill this knowledge gap, this paper presents the first comprehensive study on understanding the challenges in developing serverless-based applications from the developers’ perspective. We mine and analyze 22,731 relevant questions from Stack Overflow (a popular Q&A website for developers), and show the increasing popularity trend and the high difficulty level of serverless computing for developers. Through manual inspection of 619 sampled questions, we construct a taxonomy of challenges that developers encounter, and report a series of findings and actionable implications. Stakeholders including application developers, researchers, and cloud providers can leverage these findings and implications to better understand and further explore the serverless computing paradigm. 
@InProceedings{ESEC/FSE21p416, author = {Jinfeng Wen and Zhenpeng Chen and Yi Liu and Yiling Lou and Yun Ma and Gang Huang and Xin Jin and Xuanzhe Liu}, title = {An Empirical Study on Challenges of Application Development in Serverless Computing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {416--428}, doi = {10.1145/3468264.3468558}, year = {2021}, } Publisher's Version |
|
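The cell-level taint tags described in the TaintStream abstract above can be sketched in miniature as follows. This is an illustrative toy only, assuming a hypothetical `TaintedCell` wrapper and a hypothetical `"PII"` tag; it is not TaintStream's actual API, which works by dynamically translating Spark data-processing scripts rather than wrapping values by hand.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaintedCell:
    """A table cell paired with a set of taint tags (hypothetical sketch)."""
    value: object
    tags: frozenset  # e.g. {"PII"} for cells derived from personal data

def tainted(value, *tags):
    return TaintedCell(value, frozenset(tags))

def apply_udf(cell, fn):
    """Apply a user-defined function to one cell, preserving its taint tags."""
    return TaintedCell(fn(cell.value), cell.tags)

def join_cells(left, right, fn):
    """Combine two cells; the result carries the union of both taint sets."""
    return TaintedCell(fn(left.value, right.value), left.tags | right.tags)

# A derived cell keeps the "PII" tag, so a downstream policy check could
# deny exporting it, enforce retention, or erase it on user request.
email = tainted("alice@example.com", "PII")
domain = apply_udf(email, lambda v: v.split("@")[1])
assert domain.tags == frozenset({"PII"})
```

The point of propagating tags per cell (rather than per table or per column) is that policies such as user data erasure can be enforced even after joins and aggregations mix tainted and untainted data.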
Huang, Heqing |
ESEC/FSE '21: "Skeletal Approximation Enumeration ..."
Skeletal Approximation Enumeration for SMT Solver Testing
Peisen Yao, Heqing Huang, Wensheng Tang, Qingkai Shi, Rongxin Wu, and Charles Zhang (Hong Kong University of Science and Technology, China; Ant Group, China; Xiamen University, China) Ensuring the quality of SMT solvers is critical due to their broad spectrum of applications in academia and industry, such as symbolic execution and program verification. Existing approaches to testing SMT solvers are either too costly or have difficulty generalizing to different solvers and theories, due to the test oracle problem. To complement existing approaches and overcome their weaknesses, this paper introduces skeletal approximation enumeration (SAE), a novel lightweight and general testing technique for all first-order theories. To demonstrate its practical utility, we have applied the SAE technique to test Z3 and CVC4, two comprehensively tested, state-of-the-art SMT solvers. By the time of writing, our approach had found 71 confirmed bugs in Z3 and CVC4, 55 of which had already been fixed. @InProceedings{ESEC/FSE21p1141, author = {Peisen Yao and Heqing Huang and Wensheng Tang and Qingkai Shi and Rongxin Wu and Charles Zhang}, title = {Skeletal Approximation Enumeration for SMT Solver Testing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1141--1153}, doi = {10.1145/3468264.3468540}, year = {2021}, } Publisher's Version |
|
Huang, Ruochen |
ESEC/FSE '21: "An Empirical Study of GUI ..."
An Empirical Study of GUI Widget Detection for Industrial Mobile Games
Jiaming Ye, Ke Chen, Xiaofei Xie, Lei Ma, Ruochen Huang, Yingfeng Chen, Yinxing Xue, and Jianjun Zhao (Kyushu University, Japan; Netease, China; University of Alberta, Canada; University of Science and Technology of China, China) With the widespread adoption of smartphones in our daily life, mobile games experienced increasing demand over the past years. Meanwhile, the quality of mobile games has been continuously drawing more and more attention, which can greatly affect the player experience. For better quality assurance, general-purpose testing has been extensively studied for mobile apps. However, due to the unique characteristic of mobile games, existing mobile testing techniques may not be directly suitable and applicable. To better understand the challenges in mobile game testing, in this paper, we first initiate an early step to conduct an empirical study towards understanding the challenges and pain points of mobile game testing process at our industrial partner NetEase Games. Specifically, we first conduct a survey from the mobile test development team at NetEase Games via both scrum interviews and questionnaires. We found that accurate and effective GUI widget detection for mobile games could be the pillar to boost the automation of mobile game testing and other downstream analysis tasks in practice. We then continue to perform comparative studies to investigate the effectiveness of state-of-the-art general-purpose mobile app GUI widget detection methods in the context of mobile games. To this end, we also develop a technique to automatically collect GUI widgets region information of industrial mobile games, which is equipped with a heuristic-based data cleaning method for quality refinement of the labeling results. Our evaluation shows that: (1) Existing GUI widget detection methods for general-purpose mobile apps cannot perform well on industrial mobile games. 
(2) Mobile games exhibit obvious differences from other general-purpose mobile apps from the perspective of GUI widgets. Our further in-depth analysis reveals that the high diversity and density of mobile game GUI widgets could be the major reasons posing challenges for existing methods, which calls for new research methods and better industry practices. To enable further research along this line, we construct the very first GUI widget detection benchmark, specially designed for mobile games, incorporating both our collected dataset and the state-of-the-art widget detection methods for mobile apps, which could also be the basis for further study of many downstream quality assurance tasks (e.g., testing and analysis) for mobile games. @InProceedings{ESEC/FSE21p1427, author = {Jiaming Ye and Ke Chen and Xiaofei Xie and Lei Ma and Ruochen Huang and Yingfeng Chen and Yinxing Xue and Jianjun Zhao}, title = {An Empirical Study of GUI Widget Detection for Industrial Mobile Games}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1427--1437}, doi = {10.1145/3468264.3473935}, year = {2021}, } Publisher's Version |
|
Huang, Yu |
ESEC/FSE '21: "Connecting the Dots: Rethinking ..."
Connecting the Dots: Rethinking the Relationship between Code and Prose Writing with Functional Connectivity
Zachary Karas, Andrew Jahn, Westley Weimer, and Yu Huang (University of Michigan, USA) Medical imaging studies of software engineering have risen in popularity and may reveal the neural underpinnings of coding activities. To date, however, all studies in computer science venues have treated brain regions independently and in isolation. Since most complex neural activity involves coordination among multiple regions, previous analyses may overlook neural behavior. We propose to apply functional connectivity analysis to medical imaging data from software engineering tasks. Informally, this analysis treats the brain as a graph, rather than a series of independent modules, and statistically infers relevant edges. We present a functional connectivity analysis of existing data, which elucidates the interconnections between code writing and prose writing, especially regarding higher mathematics and semantic processing. First, we found a significant link between Broca’s Area (language) and the Number Form Area (higher mathematics) for coding. This both refines previous interpretations that code writing and natural language are distinct from each other, and may also contribute to the understanding of the Number Form Area in the Psychology literature. Second, we identify an area with important functional connectivity for both prose writing and coding, unlike previous analyses that associated it with coding. This advances our neural understanding of coding and prose writing, and was only exposed by using functional connectivity analysis. Third, for coding, we find a strong functional connectivity result for a brain region involved in semantic processing for language, with implications for CS training. Finally, we find a neural relationship between coding and expertise, including a more grounded explanation than prior work. 
@InProceedings{ESEC/FSE21p767, author = {Zachary Karas and Andrew Jahn and Westley Weimer and Yu Huang}, title = {Connecting the Dots: Rethinking the Relationship between Code and Prose Writing with Functional Connectivity}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {767--779}, doi = {10.1145/3468264.3468579}, year = {2021}, } Publisher's Version |
|
Hunger, Casen |
ESEC/FSE '21: "ACHyb: A Hybrid Analysis Approach ..."
ACHyb: A Hybrid Analysis Approach to Detect Kernel Access Control Vulnerabilities
Yang Hu, Wenxi Wang, Casen Hunger, Riley Wood, Sarfraz Khurshid, and Mohit Tiwari (University of Texas at Austin, USA) Access control is essential for Operating System (OS) security. Incorrect implementation of access control can introduce new attack surfaces to the OS, known as Kernel Access Control Vulnerabilities (KACVs). To understand KACVs, we conduct our study on the root causes and the security impacts of KACVs. Regarding the complexity of the recognized root causes, we particularly focus on two kinds of KACVs, namely KACV-M (due to missing permission checks) and KACV-I (due to misusing permission checks). We find that over 60% of these KACVs are of critical, high or medium security severity, resulting in a variety of security threats including security-check bypasses, privilege escalation, etc. However, existing approaches can only detect KACV-M. The state-of-the-art KACV-M detector called PeX is a static analysis tool, which still suffers from extremely high false-positive rates. In this paper, we present ACHyb, a precise and scalable approach to reveal both KACV-M and KACV-I. ACHyb is a hybrid approach, which first applies static analysis to identify the potentially vulnerable paths and then applies dynamic analysis to further reduce the false positives of the paths. For the static analysis, ACHyb improves PeX in both precision and soundness, using interface analysis, callsite dependence analysis, and constraint-based invariant analysis with a stronger access control invariant. For the dynamic analysis, ACHyb utilizes greybox fuzzing to identify the potential KACVs. In order to improve the fuzzing efficiency, ACHyb adopts our novel clustering-based seed distillation approach to generate high-quality seed programs. Our experimental results show that ACHyb reveals 76 potential KACVs in less than 8 hours and 22 of them are KACVs (19 KACV-M and 3 KACV-I). 
In contrast, PeX reveals 2,088 potential KACVs in more than 11 hours, and only 14 of them are KACVs (all KACV-M). Furthermore, ACHyb successfully uncovers 7 new KACVs, and 2 of them (1 KACV-M and 1 KACV-I) have been confirmed by kernel developers. @InProceedings{ESEC/FSE21p316, author = {Yang Hu and Wenxi Wang and Casen Hunger and Riley Wood and Sarfraz Khurshid and Mohit Tiwari}, title = {ACHyb: A Hybrid Analysis Approach to Detect Kernel Access Control Vulnerabilities}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {316--327}, doi = {10.1145/3468264.3468627}, year = {2021}, } Publisher's Version Artifacts Reusable |
|
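The KACV-M pattern in the ACHyb abstract above (a privileged operation reachable without a preceding permission check) can be illustrated with a toy intraprocedural checker. This is a hypothetical Python analogue, not ACHyb, which analyzes kernel code with LLVM-based static analysis plus fuzzing; the function names `privileged_op` and `check_permission` are invented for illustration.

```python
import ast

PRIVILEGED = {"privileged_op"}   # hypothetical privileged API
CHECKS = {"check_permission"}    # hypothetical permission-check function

def missing_checks(source):
    """Return names of functions that call a privileged operation without
    an earlier permission check (a purely intraprocedural toy rule)."""
    flagged = []
    tree = ast.parse(source)
    for fn in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
        checked = False
        bad = False
        for stmt in fn.body:                 # statements in source order
            for sub in ast.walk(stmt):
                if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                    if sub.func.id in CHECKS:
                        checked = True
                    elif sub.func.id in PRIVILEGED and not checked:
                        bad = True
        if bad:
            flagged.append(fn.name)
    return flagged

code = """
def good(cred):
    check_permission(cred)
    privileged_op()

def bad():
    privileged_op()
"""
assert missing_checks(code) == ["bad"]
```

Real detectors like PeX and ACHyb must additionally handle interprocedural paths, aliasing, and misused (not just missing) checks, which is where the false positives that ACHyb's dynamic phase prunes come from.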
Islam, Md Nafee Al |
ESEC/FSE '21: "Hazard Analysis for Human-on-the-Loop ..."
Hazard Analysis for Human-on-the-Loop Interactions in sUAS Systems
Michael Vierhauser, Md Nafee Al Islam, Ankit Agrawal, Jane Cleland-Huang, and James Mason (JKU Linz, Austria; University of Notre Dame, USA; Northrop Grumman, USA) With the rise of new AI technologies, autonomous systems are moving towards a paradigm in which increasing levels of responsibility are shifted from the human to the system, creating a transition from human-in-the-loop systems to human-on-the-loop (HoTL) systems. This has a significant impact on the safety analysis of such systems, as new types of errors occurring at the boundaries of human-machine interactions need to be taken into consideration. Traditional safety analysis typically focuses on system-level hazards with little focus on user-related or user-induced hazards that can cause critical system failures. To address this issue, we construct domain-level safety analysis assets for sUAS (small unmanned aerial systems) applications and describe the process we followed to explicitly, and systematically identify Human Interaction Points (HiPs), Hazard Factors and Mitigations from system hazards. We evaluate our approach by first investigating the extent to which recent sUAS incidents are covered by our hazard trees, and second by performing a study with six domain experts using our hazard trees to identify and document hazards for sUAS usage scenarios. Our study showed that our hazard trees provided effective coverage for a wide variety of sUAS application scenarios and were useful for stimulating safety thinking and helping users to identify and potentially mitigate human-interaction hazards. @InProceedings{ESEC/FSE21p8, author = {Michael Vierhauser and Md Nafee Al Islam and Ankit Agrawal and Jane Cleland-Huang and James Mason}, title = {Hazard Analysis for Human-on-the-Loop Interactions in sUAS Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {8--19}, doi = {10.1145/3468264.3468534}, year = {2021}, } Publisher's Version Info |
|
Itzhaky, Shachar |
ESEC/FSE '21: "A Bounded Symbolic-Size Model ..."
A Bounded Symbolic-Size Model for Symbolic Execution
David Trabish, Shachar Itzhaky, and Noam Rinetzky (Tel Aviv University, Israel; Technion, Israel) Symbolic execution is a powerful program analysis technique which allows executing programs with symbolic inputs. Modern symbolic execution tools use a concrete modeling of object sizes, which does not allow symbolic-size allocations. This leads to concretizations and forces the user to set the size of the input ahead of time, thus potentially leading to loss of coverage during the analysis. We present a bounded symbolic-size model in which the size of an object can have a range of values limited by a user-specified bound. Unfortunately, this model amplifies the problem of path explosion, due to additional symbolic expressions representing sizes. To cope with this problem, we propose an approach based on state merging that reduces the forking by applying special treatment to symbolic-size-dependent loops. In our evaluation on real-world benchmarks, we show that our approach can lead in many cases to substantial gains in terms of performance and coverage, and find previously unknown bugs. @InProceedings{ESEC/FSE21p1190, author = {David Trabish and Shachar Itzhaky and Noam Rinetzky}, title = {A Bounded Symbolic-Size Model for Symbolic Execution}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1190--1201}, doi = {10.1145/3468264.3468596}, year = {2021}, } Publisher's Version Artifacts Reusable |
|
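The bounded symbolic-size idea in the Trabish et al. entry above can be pictured with a toy model: instead of concretizing an allocation to one size, the size ranges over every value up to a user-specified bound, so a single query covers all of them. The `SymSizeBuffer` class below is invented for illustration and enumerates sizes naively, whereas a real engine would keep the size as a symbolic expression and ask an SMT solver.

```python
class SymSizeBuffer:
    """Toy bounded symbolic-size object: the size is any value in
    [1, bound] rather than a single concrete number."""
    def __init__(self, bound):
        self.sizes = range(1, bound + 1)   # feasible sizes under the bound

    def may_overflow(self, access_index):
        # An access is a potential out-of-bounds bug if it is invalid
        # for at least one feasible size.
        return any(access_index >= n for n in self.sizes)

buf = SymSizeBuffer(bound=8)
assert buf.may_overflow(4)      # sizes 1..4 make index 4 out of bounds
assert not buf.may_overflow(0)  # index 0 is valid for every size >= 1
```

The abstract's path-explosion concern shows up here too: a loop whose trip count depends on the size would naively fork once per feasible size, which is what the paper's state-merging treatment of symbolic-size-dependent loops avoids.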
Jafari, Sina |
ESEC/FSE '21: "Data-Driven Test Selection ..."
Data-Driven Test Selection at Scale
Sonu Mehta, Farima Farmahinifarahani, Ranjita Bhagwan, Suraj Guptha, Sina Jafari, Rahul Kumar, Vaibhav Saini, and Anirudh Santhiar (Microsoft Research, India; University of California at Irvine, USA; Microsoft, USA) Large-scale services depend on Continuous Integration/Continuous Deployment (CI/CD) processes to maintain their agility and code-quality. Change-based testing plays an important role in finding bugs, but testing after every change is prohibitively expensive at a scale where thousands of changes are committed every hour. Test selection models deal with this issue by running a subset of tests for every change. In this paper, we present a generic, language-agnostic and lightweight statistical model for test selection. Unlike existing techniques, the proposed model does not require complex feature extraction techniques. Consequently, it scales to hundreds of repositories of varying characteristics while capturing more than 99% of buggy pull requests. Additionally, to better evaluate test selection models, we propose application-specific metrics that capture both a reduction in resource cost and a reduction in pull-request turn-around time. By evaluating our model on 22 large repositories at Microsoft, we find that we can save 15%−30% of compute time while reporting back more than ≈99% of buggy pull requests. @InProceedings{ESEC/FSE21p1225, author = {Sonu Mehta and Farima Farmahinifarahani and Ranjita Bhagwan and Suraj Guptha and Sina Jafari and Rahul Kumar and Vaibhav Saini and Anirudh Santhiar}, title = {Data-Driven Test Selection at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1225--1235}, doi = {10.1145/3468264.3473916}, year = {2021}, } Publisher's Version |
|
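The "generic, language-agnostic and lightweight statistical model" in the test-selection abstract above can be sketched as a simple frequency model: estimate, from history, how often each test failed when a given file was part of a change, and run only the tests whose estimated failure rate for the current change clears a threshold. This sketch is a hypothetical illustration of change-based test selection in general, not the model deployed at Microsoft; the class and threshold are invented.

```python
from collections import defaultdict

class TestSelector:
    """Toy statistical test selector based on historical co-failures."""
    def __init__(self, threshold=0.1):
        self.threshold = threshold
        self.changed = defaultdict(int)   # file -> times it appeared in a change
        self.cofail = defaultdict(int)    # (file, test) -> co-failure count

    def record(self, changed_files, failed_tests):
        """Update history with one finished pull request."""
        for f in changed_files:
            self.changed[f] += 1
            for t in failed_tests:
                self.cofail[(f, t)] += 1

    def select(self, changed_files, all_tests):
        """Pick tests whose estimated failure rate for this change is high."""
        picked = []
        for t in all_tests:
            score = max(
                (self.cofail[(f, t)] / self.changed[f]
                 for f in changed_files if self.changed[f]),
                default=0.0,
            )
            if score >= self.threshold:
                picked.append(t)
        return picked

sel = TestSelector(threshold=0.5)
sel.record(["a.py"], ["test_a"])      # test_a failed when a.py changed
sel.record(["a.py"], ["test_a"])
assert sel.select(["a.py"], ["test_a", "test_b"]) == ["test_a"]
```

A production model must also trade off the two metrics the abstract proposes: compute saved versus the fraction of buggy pull requests still caught (their reported >99%).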
Jahn, Andrew |
ESEC/FSE '21: "Connecting the Dots: Rethinking ..."
Connecting the Dots: Rethinking the Relationship between Code and Prose Writing with Functional Connectivity
Zachary Karas, Andrew Jahn, Westley Weimer, and Yu Huang (University of Michigan, USA) Medical imaging studies of software engineering have risen in popularity and may reveal the neural underpinnings of coding activities. To date, however, all studies in computer science venues have treated brain regions independently and in isolation. Since most complex neural activity involves coordination among multiple regions, previous analyses may overlook neural behavior. We propose to apply functional connectivity analysis to medical imaging data from software engineering tasks. Informally, this analysis treats the brain as a graph, rather than a series of independent modules, and statistically infers relevant edges. We present a functional connectivity analysis of existing data, which elucidates the interconnections between code writing and prose writing, especially regarding higher mathematics and semantic processing. First, we found a significant link between Broca’s Area (language) and the Number Form Area (higher mathematics) for coding. This both refines previous interpretations that code writing and natural language are distinct from each other, and may also contribute to the understanding of the Number Form Area in the Psychology literature. Second, we identify an area with important functional connectivity for both prose writing and coding, unlike previous analyses that associated it with coding. This advances our neural understanding of coding and prose writing, and was only exposed by using functional connectivity analysis. Third, for coding, we find a strong functional connectivity result for a brain region involved in semantic processing for language, with implications for CS training. Finally, we find a neural relationship between coding and expertise, including a more grounded explanation than prior work. 
@InProceedings{ESEC/FSE21p767, author = {Zachary Karas and Andrew Jahn and Westley Weimer and Yu Huang}, title = {Connecting the Dots: Rethinking the Relationship between Code and Prose Writing with Functional Connectivity}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {767--779}, doi = {10.1145/3468264.3468579}, year = {2021}, } Publisher's Version |
|
Jana, Suman |
ESEC/FSE '21: "StateFormer: Fine-Grained ..."
StateFormer: Fine-Grained Type Recovery from Binaries using Generative State Modeling
Kexin Pei, Jonas Guan, Matthew Broughton, Zhongtian Chen, Songchen Yao, David Williams-King, Vikas Ummadisetty, Junfeng Yang, Baishakhi Ray, and Suman Jana (Columbia University, USA; University of Toronto, Canada; Dublin High School, Ireland) Binary type inference is a critical reverse engineering task supporting many security applications, including vulnerability analysis, binary hardening, forensics, and decompilation. It is a difficult task because source-level type information is often stripped during compilation, leaving only binaries with untyped memory and register accesses. Existing approaches rely on hand-coded type inference rules defined by domain experts, which are brittle and require nontrivial effort to maintain and update. Even though machine learning approaches have shown promise at automatically learning the inference rules, their accuracy is still low, especially for optimized binaries. We present StateFormer, a new neural architecture that is adept at accurate and robust type inference. StateFormer follows a two-step transfer learning paradigm. In the pretraining step, the model is trained with Generative State Modeling (GSM), a novel task that we design to teach the model to statically approximate execution effects of assembly instructions in both forward and backward directions. In the finetuning step, the pretrained model learns to use its knowledge of operational semantics to infer types. We evaluate StateFormer's performance on a corpus of 33 popular open-source software projects containing over 1.67 billion variables of different types. The programs are compiled with GCC and LLVM over 4 optimization levels O0-O3, and 3 obfuscation passes based on LLVM. Our model significantly outperforms state-of-the-art ML-based tools by 14.6% in recovering types for both function arguments and variables. Our ablation studies show that GSM improves type inference accuracy by 33%. 
@InProceedings{ESEC/FSE21p690, author = {Kexin Pei and Jonas Guan and Matthew Broughton and Zhongtian Chen and Songchen Yao and David Williams-King and Vikas Ummadisetty and Junfeng Yang and Baishakhi Ray and Suman Jana}, title = {StateFormer: Fine-Grained Type Recovery from Binaries using Generative State Modeling}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {690--702}, doi = {10.1145/3468264.3468607}, year = {2021}, } Publisher's Version Info Artifacts Reusable |
|
Jansen, David N. |
ESEC/FSE '21: "Frontmatter: Mining Android ..."
Frontmatter: Mining Android User Interfaces at Scale
Konstantin Kuznetsov, Chen Fu, Song Gao, David N. Jansen, Lijun Zhang, and Andreas Zeller (CISPA, Germany; Institute of Software at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Institute of Intelligent Software, China) We introduce Frontmatter: the largest open-access dataset containing user interface models of about 160,000 Android apps. Frontmatter opens the door for comprehensive mining of mobile user interfaces, jumpstarting empirical research at a large scale, addressing questions such as "How many travel apps require registration?", "Which apps do not follow accessibility guidelines?", "Does the user interface correspond to the description?", and many more. The Frontmatter UI analysis tool and the Frontmatter dataset are available under an open-source license. @InProceedings{ESEC/FSE21p1580, author = {Konstantin Kuznetsov and Chen Fu and Song Gao and David N. Jansen and Lijun Zhang and Andreas Zeller}, title = {Frontmatter: Mining Android User Interfaces at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1580--1584}, doi = {10.1145/3468264.3473125}, year = {2021}, } Publisher's Version Video |
|
Jayatilaka, Asangi |
ESEC/FSE '21: "A Grounded Theory of the Role ..."
A Grounded Theory of the Role of Coordination in Software Security Patch Management
Nesara Dissanayake, Mansooreh Zahedi, Asangi Jayatilaka, and Muhammad Ali Babar (University of Adelaide, Australia) Several disastrous security attacks can be attributed to delays in patching software vulnerabilities. While researchers and practitioners have paid significant attention to automate vulnerabilities identification and patch development activities of software security patch management, there has been relatively little effort dedicated to gain an in-depth understanding of the socio-technical aspects, e.g., coordination of interdependent activities of the patching process and patching decisions, that may cause delays in applying security patches. We report on a Grounded Theory study of the role of coordination in security patch management. The reported theory consists of four inter-related dimensions, i.e., causes, breakdowns, constraints, and mechanisms. The theory explains the causes that define the need for coordination among interdependent software/hardware components and multiple stakeholders’ decisions, the constraints that can negatively impact coordination, the breakdowns in coordination, and the potential corrective measures. This study provides potentially useful insights for researchers and practitioners who can carefully consider the needs of and devise suitable solutions for supporting the coordination of interdependencies involved in security patch management. @InProceedings{ESEC/FSE21p793, author = {Nesara Dissanayake and Mansooreh Zahedi and Asangi Jayatilaka and Muhammad Ali Babar}, title = {A Grounded Theory of the Role of Coordination in Software Security Patch Management}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {793--805}, doi = {10.1145/3468264.3468595}, year = {2021}, } Publisher's Version |
|
Jesse, Kevin |
ESEC/FSE '21: "Learning Type Annotation: ..."
Learning Type Annotation: Is Big Data Enough?
Kevin Jesse, Premkumar T. Devanbu, and Toufique Ahmed (University of California at Davis, USA) TypeScript is a widely used optionally-typed language where developers can adopt “pay as you go” typing: they can add types as desired, and benefit from static typing. The “type annotation tax” or manual effort required to annotate new or existing TypeScript can be reduced by a variety of automatic methods. Probabilistic machine-learning (ML) approaches work quite well. ML approaches use different inductive biases, ranging from simple token sequences to complex graphical neural network (GNN) models capturing syntax and semantic relations. More sophisticated inductive biases are hand-engineered to exploit the formal nature of software. Rather than deploying fancy inductive biases for code, can we just use “big data” to learn natural patterns relevant to typing? We find evidence suggesting that this is the case. We present TypeBert, demonstrating that even with simple token-sequence inductive bias used in BERT-style models and enough data, type-annotation performance of the most sophisticated models can be surpassed. @InProceedings{ESEC/FSE21p1483, author = {Kevin Jesse and Premkumar T. Devanbu and Toufique Ahmed}, title = {Learning Type Annotation: Is Big Data Enough?}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1483--1486}, doi = {10.1145/3468264.3473135}, year = {2021}, } Publisher's Version |
|
Jhala, Ranjit |
ESEC/FSE '21: "RAPID: Checking API Usage ..."
RAPID: Checking API Usage for the Cloud in the Cloud
Michael Emmi, Liana Hadarean, Ranjit Jhala, Lee Pike, Nicolás Rosner, Martin Schäf, Aritra Sengupta, and Willem Visser (Amazon Web Services, USA) We present RAPID, an industrial-strength analysis developed at AWS that aims to help developers by providing automatic, fast and actionable feedback about correct usage of cloud-service APIs. RAPID’s design is based on the insight that cloud service APIs are structured around short-lived request- and response-objects whose usage patterns can be specified as value-dependent type-state automata and be verified by combining local type-state with global value-flow analyses. We describe various challenges that arose to deploy RAPID at scale. Finally, we present an evaluation that validates our design choices, deployment heuristics, and shows that RAPID is able to quickly and precisely report a wide variety of useful API misuse violations in large, industrial-strength code bases. @InProceedings{ESEC/FSE21p1416, author = {Michael Emmi and Liana Hadarean and Ranjit Jhala and Lee Pike and Nicolás Rosner and Martin Schäf and Aritra Sengupta and Willem Visser}, title = {RAPID: Checking API Usage for the Cloud in the Cloud}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1416--1426}, doi = {10.1145/3468264.3473934}, year = {2021}, } Publisher's Version |
|
Jiang, Hanzhi |
ESEC/FSE '21: "A First Look at Developers’ ..."
A First Look at Developers’ Live Chat on Gitter
Lin Shi, Xiao Chen, Ye Yang, Hanzhi Jiang, Ziyou Jiang, Nan Niu, and Qing Wang (Institute of Software at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Stevens Institute of Technology, USA; University of Cincinnati, USA) Modern communication platforms such as Gitter and Slack play an increasingly critical role in supporting software teamwork, especially in open source development. Conversations on such platforms often contain intensive, valuable information that may be used for better understanding OSS developer communication and collaboration. However, little work has been done in this regard. To bridge the gap, this paper reports a first comprehensive empirical study on developers' live chat, investigating when they interact, what community structures look like, which topics are discussed, and how they interact. We manually analyze 749 dialogs in the first phase, followed by an automated analysis of over 173K dialogs in the second phase. We find that developers tend to converse more often on weekdays, especially on Wednesdays and Thursdays (UTC), that there are three common community structures observed, that developers tend to discuss topics such as API usages and errors, and that six dialog interaction patterns are identified in the live chat communities. Based on the findings, we provide recommendations for individual developers and OSS communities, highlight desired features for platform vendors, and shed light on future research directions. We believe that the findings and insights will enable a better understanding of developers' live chat, pave the way for other researchers, and enable better utilization and mining of the knowledge embedded in the massive chat history. 
@InProceedings{ESEC/FSE21p391, author = {Lin Shi and Xiao Chen and Ye Yang and Hanzhi Jiang and Ziyou Jiang and Nan Niu and Qing Wang}, title = {A First Look at Developers’ Live Chat on Gitter}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {391--403}, doi = {10.1145/3468264.3468562}, year = {2021}, } Publisher's Version Info |
|
Jiang, Jiajun |
ESEC/FSE '21: "A Comprehensive Study on Learning-Based ..."
A Comprehensive Study on Learning-Based PE Malware Family Classification Methods
Yixuan Ma, Shuang Liu, Jiajun Jiang, Guanhong Chen, and Keqiu Li (State Key Laboratory of Communication Content Cognition, China; Tianjin University, China) Driven by the high profit, Portable Executable (PE) malware has been consistently evolving in terms of both volume and sophistication. PE malware family classification has gained great attention and a large number of approaches have been proposed. With the rapid development of machine learning techniques and the exciting results they achieved on various tasks, machine learning algorithms have also gained popularity in the PE malware family classification task. Three mainstream approaches that use learning-based algorithms, as categorized by the input format the methods take, are image-based, binary-based and disassembly-based approaches. Although a large number of approaches are published, there are no consistent comparisons of those approaches, especially from the practical industry adoption perspective. Moreover, there is no comparison in the scenario of concept drift, which is a fact for the malware classification task due to the fast-evolving nature of malware. In this work, we conduct a thorough empirical study on learning-based PE malware classification approaches on 4 different datasets and consistent experiment settings. Based on the experiment results and an interview with our industry partners, we find that (1) there is no individual class of methods that significantly outperforms the others; (2) all classes of methods show performance degradation on concept drift (by an average F1-score of 32.23%); and (3) the prediction time and high memory consumption hinder existing approaches from being adopted for industry usage. 
@InProceedings{ESEC/FSE21p1314, author = {Yixuan Ma and Shuang Liu and Jiajun Jiang and Guanhong Chen and Keqiu Li}, title = {A Comprehensive Study on Learning-Based PE Malware Family Classification Methods}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1314--1325}, doi = {10.1145/3468264.3473925}, year = {2021}, } Publisher's Version Info |
|
Jiang, Yanjie |
ESEC/FSE '21: "Which Abbreviations Should ..."
Which Abbreviations Should Be Expanded?
Yanjie Jiang, Hui Liu, Yuxia Zhang, Nan Niu, Yuhai Zhao, and Lu Zhang (Beijing Institute of Technology, China; University of Cincinnati, USA; Northeastern University, USA; Peking University, China) Abbreviations are common in source code. Properly designed abbreviations may significantly facilitate typing, typesetting, and reading of lengthy source code. However, abbreviations, if used improperly, may also significantly reduce the readability and maintainability of source code. Although a few automated approaches have been proposed to suggest full terms for given abbreviations, to the best of our knowledge, there are no automated approaches that suggest whether abbreviations are used properly, i.e., whether they should be replaced with their corresponding full terms. Notably, it is often challenging for inexperienced developers and maintainers to make such decisions. To this end, in this paper, we propose an automated approach to assisting developers and maintainers in making these decisions. The rationale of the approach is that abbreviations should not be expanded if the expansion would result in unacceptably lengthy identifiers or if developers/maintainers can easily figure out the meaning (full terms) of the abbreviations based on their domain knowledge or the contexts of the abbreviations. From a corpus of programs, we leverage data mining techniques to discover common abbreviations that are frequently employed by various developers in similar contexts. The key to the data mining is to cast the problem of mining common abbreviations as the extensively studied maximal clique problem. We suggest not expanding a given abbreviation if it matches at least one of the discovered common abbreviations. From the same corpus, we also calculate the probability distribution of the length of different types of identifiers, e.g., variable names and method names. The probability distribution specifies how likely it is that an identifier of type T is composed of exactly n characters. 
Our heuristic is not to expand an abbreviation if the expansion would make the length of its enclosing identifier less likely under this distribution. Finally, we also suggest not expanding an abbreviation if its full terms are contained in the surrounding context of the abbreviation, i.e., tokens on the same source code line. Other abbreviations, which receive no such suggestion from the proposed approach, are expected to be replaced with their full terms. Our evaluation on 1,818 abbreviations from five open-source applications suggests that the proposed approach achieves a high accuracy of 95%. @InProceedings{ESEC/FSE21p578, author = {Yanjie Jiang and Hui Liu and Yuxia Zhang and Nan Niu and Yuhai Zhao and Lu Zhang}, title = {Which Abbreviations Should Be Expanded?}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {578--589}, doi = {10.1145/3468264.3468616}, year = {2021}, } Publisher's Version |
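The reduction to maximal clique enumeration described in the abstract above can be illustrated with a small, self-contained sketch (this is not the authors' implementation; the graph, the node names, and the choice of a pivot-free Bron-Kerbosch variant are illustrative assumptions): nodes stand for abbreviation usages, edges connect usages occurring in similar contexts, and maximal cliques correspond to candidate common abbreviations.

```python
def bron_kerbosch(r, p, x, adj, out):
    """Pivot-free Bron-Kerbosch enumeration of maximal cliques.
    r: clique built so far; p: candidate nodes; x: already-processed nodes."""
    if not p and not x:
        out.append(frozenset(r))
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, out)
        p.remove(v)
        x.add(v)

# Hypothetical graph: each node is an abbreviation usage; an edge means two
# usages occur in similar contexts.
adj = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
    "e": set(),
}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
# Maximal cliques found: {a, b, c}, {c, d}, and the isolated {e}.
```

Each maximal clique is reported exactly once; a real miner would then keep only cliques whose abbreviation appears frequently enough across developers.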
|
Jiang, Yanyan |
ESEC/FSE '21: "Timely and Accurate Detection ..."
Timely and Accurate Detection of Model Deviation in Self-Adaptive Software-Intensive Systems
Yanxiang Tong, Yi Qin, Yanyan Jiang, Chang Xu, Chun Cao, and Xiaoxing Ma (Nanjing University, China) Control-based approaches to self-adaptive software-intensive systems (SASs) are hailed for their optimal performance and theoretical guarantees on the reliability of adaptation behavior. However, in practice the guarantees are often threatened by model deviations occurring at runtime. In this paper, we propose a Model-guided Deviation Detector (MoD2) for timely and accurate detection of model deviations. To ensure reliability, a SAS can switch from a control-based optimal controller to a mandatory controller once an unsafe model deviation is detected. MoD2 achieves both high timeliness and high accuracy through a deliberate fusion of parameter deviation estimation, uncertainty compensation, and safe region quantification. Empirical evaluation with three exemplar systems validated the efficacy of MoD2 (93.3% shorter detection delay, 39.4% lower FN rate, and 25.2% lower FP rate), as well as the benefits of the adaptation-switching mechanism (abnormal rate dropped by 29.2%). @InProceedings{ESEC/FSE21p168, author = {Yanxiang Tong and Yi Qin and Yanyan Jiang and Chang Xu and Chun Cao and Xiaoxing Ma}, title = {Timely and Accurate Detection of Model Deviation in Self-Adaptive Software-Intensive Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {168--180}, doi = {10.1145/3468264.3468548}, year = {2021}, } Publisher's Version |
|
Jiang, Yu |
ESEC/FSE '21: "Making Smart Contract Development ..."
Making Smart Contract Development More Secure and Easier
Meng Ren, Fuchen Ma, Zijing Yin, Ying Fu, Huizhong Li, Wanli Chang, and Yu Jiang (Tsinghua University, China; Ant Group, China; WeBank, China; University of York, UK) With the rapid development of distributed applications, smart contracts have attracted more and more developers' attention. However, developers or domain experts have different levels of familiarity with specific programming languages, like Solidity, and vulnerabilities hidden in the code can be exploited and result in huge property losses. Existing auxiliary tools lack security considerations. Most of them only provide word completion based on fuzzy search and detection services for limited types of vulnerabilities, which wastes manpower during coding and leaves potential vulnerability threats after deployment. In this work, we propose an integrated framework that enhances security in the two stages of recommendation and validation, assisting developers in implementing more secure contracts more quickly. First, we reinforce original smart contracts with general patch patterns and secure programming standards for training, and design a real-time code suggestion algorithm that predicts secure words for selection. Then, we integrate multiple widely used testing tools to provide validation services. For evaluation, we collected 47,398 real-world contracts, and the results show that our framework outperforms existing platforms and tools, improving average word suggestion accuracy by 30%-60% and helping detect about 25%-61% more vulnerabilities. In most cases, our framework can correctly predict the next word with a probability of 82%-97% within the top ten candidates. Compared with professional vulnerability mining tools, it finds more vulnerabilities and provides targeted modification suggestions without cumbersome configuration. Currently, this framework is used as the official development tool of WeBank and is integrated as the recommended platform by the FISCO-BCOS community. 
@InProceedings{ESEC/FSE21p1360, author = {Meng Ren and Fuchen Ma and Zijing Yin and Ying Fu and Huizhong Li and Wanli Chang and Yu Jiang}, title = {Making Smart Contract Development More Secure and Easier}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1360--1370}, doi = {10.1145/3468264.3473929}, year = {2021}, } Publisher's Version |
|
Jiang, Ziyou |
ESEC/FSE '21: "A First Look at Developers’ ..."
A First Look at Developers’ Live Chat on Gitter
Lin Shi, Xiao Chen, Ye Yang, Hanzhi Jiang, Ziyou Jiang, Nan Niu, and Qing Wang (Institute of Software at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Stevens Institute of Technology, USA; University of Cincinnati, USA) Modern communication platforms such as Gitter and Slack play an increasingly critical role in supporting software teamwork, especially in open source development. Conversations on such platforms often contain intensive, valuable information that may be used for better understanding OSS developer communication and collaboration. However, little work has been done in this regard. To bridge the gap, this paper reports a first comprehensive empirical study on developers' live chat, investigating when they interact, what community structures look like, which topics are discussed, and how they interact. We manually analyze 749 dialogs in the first phase, followed by an automated analysis of over 173K dialogs in the second phase. We find that developers tend to converse more often on weekdays, especially on Wednesdays and Thursdays (UTC), that three common community structures are observed, that developers tend to discuss topics such as API usages and errors, and that six dialog interaction patterns are identified in the live chat communities. Based on the findings, we provide recommendations for individual developers and OSS communities, highlight desired features for platform vendors, and shed light on future research directions. We believe that these findings and insights will enable a better understanding of developers' live chat, pave the way for other researchers, and support better utilization and mining of the knowledge embedded in the massive chat history. 
@InProceedings{ESEC/FSE21p391, author = {Lin Shi and Xiao Chen and Ye Yang and Hanzhi Jiang and Ziyou Jiang and Nan Niu and Qing Wang}, title = {A First Look at Developers’ Live Chat on Gitter}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {391--403}, doi = {10.1145/3468264.3468562}, year = {2021}, } Publisher's Version Info |
|
Jin, Guoliang |
ESEC/FSE '21: "Understanding and Detecting ..."
Understanding and Detecting Server-Side Request Races in Web Applications
Zhengyi Qiu, Shudi Shao, Qi Zhao, and Guoliang Jin (North Carolina State University, USA) Modern web sites often run web applications on the server to handle HTTP requests from users and generate dynamic responses. Due to their concurrent nature, web applications are vulnerable to server-side request races. The problem becomes more severe with the ever-increasing popularity of web applications. We first conduct a comprehensive characteristic study of 157 real-world server-side request races collected from different, popular types of web applications. The findings of this study can provide guidance for future development support in combating server-side request races. Guided by our study results, we develop a dynamic framework, ReqRacer, for detecting and exposing server-side request races in web applications. We propose novel approaches to model happens-before relationships between HTTP requests, which are essential to web applications. Our evaluation shows that ReqRacer can effectively and efficiently detect known and unknown request races. @InProceedings{ESEC/FSE21p842, author = {Zhengyi Qiu and Shudi Shao and Qi Zhao and Guoliang Jin}, title = {Understanding and Detecting Server-Side Request Races in Web Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {842--854}, doi = {10.1145/3468264.3468594}, year = {2021}, } Publisher's Version |
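The happens-before reasoning at the core of this kind of detector can be sketched in a few lines (an illustrative toy, not ReqRacer itself; the request names, the access-log format, and the ordering edges are all assumptions): two requests race on a key if both access it, at least one writes, and neither request happens before the other.

```python
from itertools import combinations

def happens_before(edges, a, b):
    # Reachability a -> b in the request-ordering DAG (iterative DFS).
    stack, seen = [a], set()
    while stack:
        n = stack.pop()
        if n == b:
            return True
        if n in seen:
            continue
        seen.add(n)
        stack.extend(edges.get(n, ()))
    return False

def detect_races(accesses, edges):
    """accesses: {request_id: [(key, 'r'|'w'), ...]}.
    Report request pairs that touch the same key, with at least one write
    and no happens-before order in either direction."""
    races = []
    for r1, r2 in combinations(accesses, 2):
        if happens_before(edges, r1, r2) or happens_before(edges, r2, r1):
            continue
        for k1, m1 in accesses[r1]:
            for k2, m2 in accesses[r2]:
                if k1 == k2 and "w" in (m1, m2):
                    races.append((r1, r2, k1))
    return races

# Toy scenario: req1 and req2 both write the shopping cart, req3 reads it;
# the only ordering edge says req1 happens before req3.
accesses = {"req1": [("cart", "w")], "req2": [("cart", "w")], "req3": [("cart", "r")]}
edges = {"req1": ["req3"]}
races = detect_races(accesses, edges)
# Two unordered conflicting pairs remain: (req1, req2) and (req2, req3).
```

The ordered pair (req1, req3) is correctly suppressed; modeling *which* server-side mechanisms induce such ordering edges between HTTP requests is exactly the hard part the paper addresses.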
|
Jin, Haonan |
ESEC/FSE '21: "BRAID: An API Recommender ..."
BRAID: An API Recommender Supporting Implicit User Feedback
Yu Zhou, Haonan Jin, Xinying Yang, Taolue Chen, Krishna Narasimhan, and Harald C. Gall (Nanjing University of Aeronautics and Astronautics, China; University of London, UK; TU Darmstadt, Germany; University of Zurich, Switzerland) Efficient application programming interface (API) recommendation is one of the most desired features of modern integrated development environments. A multitude of API recommendation approaches have been proposed. However, most of the currently available API recommenders do not support the effective integration of user feedback into the recommendation loop. In this paper, we present BRAID (Boosting RecommendAtion with Implicit FeeDback), a tool which leverages user feedback, and employs learning-to-rank and active learning techniques to boost recommendation performance. The implementation is based on the VSCode plugin architecture, which provides an integrated user interface. Essentially, BRAID is a general framework which can accommodate existing query-based API recommendation approaches as components. Comparative experiments with strong baselines demonstrate the efficacy of the tool. A video demonstrating the usage of BRAID can be found at https://youtu.be/naD0guvl8sE. @InProceedings{ESEC/FSE21p1510, author = {Yu Zhou and Haonan Jin and Xinying Yang and Taolue Chen and Krishna Narasimhan and Harald C. Gall}, title = {BRAID: An API Recommender Supporting Implicit User Feedback}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1510--1514}, doi = {10.1145/3468264.3473111}, year = {2021}, } Publisher's Version |
|
Jin, Shuo |
ESEC/FSE '21: "Validation on Machine Reading ..."
Validation on Machine Reading Comprehension Software without Annotated Labels: A Property-Based Method
Songqiang Chen, Shuo Jin, and Xiaoyuan Xie (Wuhan University, China) Machine Reading Comprehension (MRC) in Natural Language Processing has seen great progress recently. But almost all current MRC software is validated with a reference-based method, which requires well-annotated labels for test cases and tests the software by checking the consistency between the labels and the outputs. However, labeling test cases for MRC can be very costly due to their complexity, which makes reference-based validation hard to extend and keep sufficient. Furthermore, solely checking the consistency and measuring an overall score may not be a sensible and flexible way to assess language understanding capability. In this paper, we propose a property-based validation method for MRC software based on Metamorphic Testing to supplement reference-based validation. It does not refer to the labels and hence makes much more data available for testing. Besides, it validates MRC software against various linguistic properties to give a specific and in-depth picture of the linguistic capabilities of MRC software. Comprehensive experimental results show that our method can successfully reveal violations of the target linguistic properties without the labels. Moreover, it can reveal problems that have been concealed by the traditional validation. Comparison across the properties provides deeper and more concrete insights into the different language understanding capabilities of MRC software. @InProceedings{ESEC/FSE21p590, author = {Songqiang Chen and Shuo Jin and Xiaoyuan Xie}, title = {Validation on Machine Reading Comprehension Software without Annotated Labels: A Property-Based Method}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {590--602}, doi = {10.1145/3468264.3468569}, year = {2021}, } Publisher's Version |
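The property-based idea, checking metamorphic relations instead of labels, can be illustrated with a minimal sketch (the particular relation, the toy model, and all names here are illustrative assumptions, not the paper's artifacts): appending an unrelated sentence to the passage should not change the predicted answer, and this can be checked without any ground-truth label.

```python
def holds_mr_irrelevant_append(model, passage, question, distractor):
    """Metamorphic relation: an unrelated trailing sentence should leave
    the predicted answer unchanged -- no annotated label is needed."""
    return model(passage, question) == model(passage + " " + distractor, question)

# Toy stand-in for an MRC model: answers with the first capitalized,
# non-initial word of the passage (purely for demonstration).
def toy_model(passage, question):
    for w in passage.split()[1:]:
        if w[0].isupper():
            return w.strip(".")
    return "unknown"

passage = "The capital of France is Paris."
question = "What is the capital of France?"
ok = holds_mr_irrelevant_append(toy_model, passage, question,
                                "The weather was sunny that day.")
```

A violation (`ok == False`) would indicate a robustness problem in the model under test, even though no label for the original question was ever consulted.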
|
Jin, Xianhao |
ESEC/FSE '21: "Reducing Cost in Continuous ..."
Reducing Cost in Continuous Integration with a Collection of Build Selection Approaches
Xianhao Jin (Virginia Tech, USA) Continuous integration (CI) is a widely used practice in modern software engineering. Unfortunately, it is also an expensive practice: Google and Mozilla estimate the cost of their CI systems at millions of dollars. To reduce CI computation cost, I propose the strategy of build selection, which selectively executes builds whose outcomes will fail and skips passing builds to save cost. In my research, I first designed SmartBuildSkip, a build selection approach that can automatically skip unfruitful builds in CI. Next, I evaluated SmartBuildSkip against all CI-improving approaches to understand the strengths and weaknesses of existing approaches and to inform future technique design. Then I proposed PreciseBuildSkip, a build selection approach that maximizes the safety of skipping builds in CI. I also combined existing approaches both within and across granularities into a new build selection approach, HybridBuildSkip, which saves builds in a hybrid way. Finally, I plan to conduct a human study to understand how to increase developers' trust in build selection approaches. @InProceedings{ESEC/FSE21p1650, author = {Xianhao Jin}, title = {Reducing Cost in Continuous Integration with a Collection of Build Selection Approaches}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1650--1654}, doi = {10.1145/3468264.3473103}, year = {2021}, } Publisher's Version |
|
Jin, Xin |
ESEC/FSE '21: "An Empirical Study on Challenges ..."
An Empirical Study on Challenges of Application Development in Serverless Computing
Jinfeng Wen, Zhenpeng Chen, Yi Liu, Yiling Lou, Yun Ma, Gang Huang, Xin Jin, and Xuanzhe Liu (Peking University, China) Serverless computing is an emerging paradigm for cloud computing, gaining traction in a wide range of applications such as video processing and machine learning. This new paradigm allows developers to focus on developing the logic of serverless computing based applications (abbreviated as serverless-based applications) at the granularity of functions, thereby freeing developers from tedious and error-prone infrastructure management. Meanwhile, it also introduces new challenges in the design, implementation, and deployment of serverless-based applications, and current serverless computing platforms are far from satisfactory. However, to the best of our knowledge, these challenges have not been well studied. To fill this knowledge gap, this paper presents the first comprehensive study of the challenges in developing serverless-based applications from the developers’ perspective. We mine and analyze 22,731 relevant questions from Stack Overflow (a popular Q&A website for developers), and show the increasing popularity trend and the high difficulty level of serverless computing for developers. Through manual inspection of 619 sampled questions, we construct a taxonomy of challenges that developers encounter, and report a series of findings and actionable implications. Stakeholders including application developers, researchers, and cloud providers can leverage these findings and implications to better understand and further explore the serverless computing paradigm. 
@InProceedings{ESEC/FSE21p416, author = {Jinfeng Wen and Zhenpeng Chen and Yi Liu and Yiling Lou and Yun Ma and Gang Huang and Xin Jin and Xuanzhe Liu}, title = {An Empirical Study on Challenges of Application Development in Serverless Computing}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {416--428}, doi = {10.1145/3468264.3468558}, year = {2021}, } Publisher's Version |
|
Jin, Zhi |
ESEC/FSE '21: "SmartCommit: A Graph-Based ..."
SmartCommit: A Graph-Based Interactive Assistant for Activity-Oriented Commits
Bo Shen, Wei Zhang, Christian Kästner, Haiyan Zhao, Zhao Wei, Guangtai Liang, and Zhi Jin (Peking University, China; Carnegie Mellon University, USA; Huawei Technologies, China) In collaborative software development, it is considered a best practice to submit code changes as a sequence of cohesive commits, each of which records the work result of a specific development activity, such as adding a new feature, fixing a bug, or refactoring. However, rather than following this best practice, developers often submit a set of loosely related changes serving different development activities as a composite commit, due to the tedious manual work involved and the lack of effective tool support for decomposing such tangled changesets. Composite commits often obfuscate the change history of software artifacts and bring challenges to efficient collaboration among developers. To encourage activity-oriented commits, we propose SmartCommit, a graph-partitioning-based interactive approach to tangled changeset decomposition that leverages not only the efficiency of algorithms but also the knowledge of developers. To evaluate the effectiveness of our approach, we (1) deployed SmartCommit in an international IT company, and analyzed usage data collected from a field study with 83 engineers over 9 months; and (2) conducted a controlled experiment on 3,000 synthetic composite commits from 10 diverse open-source projects. Results show that SmartCommit achieves a median accuracy between 71% and 84% when decomposing composite commits without developer involvement, and significantly helps developers follow the best practice of submitting activity-oriented commits with acceptable interaction effort and time cost in real collaborative software development. 
@InProceedings{ESEC/FSE21p379, author = {Bo Shen and Wei Zhang and Christian Kästner and Haiyan Zhao and Zhao Wei and Guangtai Liang and Zhi Jin}, title = {SmartCommit: A Graph-Based Interactive Assistant for Activity-Oriented Commits}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {379--390}, doi = {10.1145/3468264.3468551}, year = {2021}, } Publisher's Version Artifacts Reusable |
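The graph-partitioning intuition behind decomposing a tangled changeset can be sketched with a union-find pass (a deliberately simplified stand-in: SmartCommit builds a richer graph and partitions it interactively, whereas this toy just groups hunks by connectivity; the hunk ids and links are made up): diff hunks connected by any relation link end up in the same candidate commit group.

```python
def group_hunks(n_hunks, links):
    """Union-find grouping of diff hunks: hunks connected by a relation
    link (e.g., touching the same method) join one candidate commit group."""
    parent = list(range(n_hunks))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for a, b in links:
        parent[find(a)] = find(b)

    groups = {}
    for h in range(n_hunks):
        groups.setdefault(find(h), []).append(h)
    return sorted(groups.values())

# Five hunks; links say hunks 0-1 touch the same method and 2-3 share a
# refactoring, while hunk 4 is unrelated.
groups = group_hunks(5, [(0, 1), (2, 3)])
# -> [[0, 1], [2, 3], [4]]
```

Each connected component becomes one suggested commit; the interactive part of the approach then lets developers merge or split these suggestions.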
|
Johnson, Reese |
ESEC/FSE '21: "XAI Tools in the Public Sector: ..."
XAI Tools in the Public Sector: A Case Study on Predicting Combined Sewer Overflows
Nicholas Maltbie, Nan Niu, Matthew Van Doren, and Reese Johnson (University of Cincinnati, USA; Metropolitan Sewer District of Greater Cincinnati, USA) Artificial intelligence and deep learning are becoming increasingly prevalent in contemporary software solutions. Explainable artificial intelligence (XAI) tools attempt to address the black box nature of the deep learning models and make them more understandable to humans. In this work, we apply three state-of-the-art XAI tools in a real-world case study. Our study focuses on predicting combined sewer overflow events for a municipal wastewater treatment organization. Through a data driven inquiry, we collect both qualitative information via stakeholder interviews and quantitative measures. These help us assess the predictive accuracy of the XAI tools, as well as the simplicity, soundness, and insightfulness of the produced explanations. Our results not only show the varying degrees that the XAI tools meet the requirements, but also highlight that domain experts can draw new insights from complex explanations that may differ from their previous expectations. @InProceedings{ESEC/FSE21p1032, author = {Nicholas Maltbie and Nan Niu and Matthew Van Doren and Reese Johnson}, title = {XAI Tools in the Public Sector: A Case Study on Predicting Combined Sewer Overflows}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1032--1044}, doi = {10.1145/3468264.3468547}, year = {2021}, } Publisher's Version Artifacts Reusable |
|
Jongmans, Sung-Shik |
ESEC/FSE '21: "Analysis of Specifications ..."
Analysis of Specifications of Multiparty Sessions with dcj-lint
Erik Horlings and Sung-Shik Jongmans (Open University of the Netherlands, Netherlands; CWI, Netherlands) Multiparty session types constitute a method to automatically detect violations of protocol implementations relative to specifications. But when a violation is detected, does it indicate a bug in the implementation or in the specification? This paper presents dcj-lint: an analysis tool to detect bugs in protocol specifications, based on multiparty session types. By leveraging a custom-built temporal logic model checker, dcj-lint can efficiently perform: (1) generic sanity checks, and (2) protocol-specific property analyses. In our benchmarks, dcj-lint outperforms an existing state-of-the-art model checker (up to 61x faster). @InProceedings{ESEC/FSE21p1590, author = {Erik Horlings and Sung-Shik Jongmans}, title = {Analysis of Specifications of Multiparty Sessions with dcj-lint}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1590--1594}, doi = {10.1145/3468264.3473127}, year = {2021}, } Publisher's Version |
|
Joshi, Vivek |
ESEC/FSE '21: "Domain Adaptation for an Automated ..."
Domain Adaptation for an Automated Classification of Deontic Modalities in Software Engineering Contracts
Vivek Joshi, Preethu Rose Anish, and Smita Ghaisas (TCS Research, India) Contracts are agreements between parties engaging in economic transactions. They specify deontic modalities that the signatories should be held responsible for, and state the penalties or actions to be taken if the stated agreements are not met. Additionally, contracts are also known to be a source of Software Engineering (SE) requirements. Identifying the deontic modalities in contracts can therefore add value to the Requirements Engineering (RE) phase of SE. The complex and ambiguous language of contracts makes it difficult and time-consuming to identify the deontic modalities (obligations, permissions, prohibitions) embedded in the text. State-of-the-art neural network models are effective for text classification; however, they require substantial amounts of training data. The availability of contract data is sparse owing to the confidentiality concerns of customers. In this paper, we leverage the linguistic and taxonomical similarities between regulations (available abundantly in the public domain) and contracts to demonstrate that it is possible to use regulations as training data for classifying deontic modalities in real-life contracts. We discuss the results of a range of experiments, from a rule-based approach to Bidirectional Encoder Representations from Transformers (BERT), for automating the classification of deontic modalities. With BERT, we obtained an average precision and recall of 90% and 89.66%, respectively. @InProceedings{ESEC/FSE21p1275, author = {Vivek Joshi and Preethu Rose Anish and Smita Ghaisas}, title = {Domain Adaptation for an Automated Classification of Deontic Modalities in Software Engineering Contracts}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1275--1280}, doi = {10.1145/3468264.3473921}, year = {2021}, } Publisher's Version |
|
Juergens, Elmar |
ESEC/FSE '21: "How Can Manual Testing Processes ..."
How Can Manual Testing Processes Be Optimized? Developer Survey, Optimization Guidelines, and Case Studies
Roman Haas, Daniel Elsner, Elmar Juergens, Alexander Pretschner, and Sven Apel (Saarland University, Germany; CQSE, Germany; TU Munich, Germany) Manual software testing is tedious and costly as it involves significant human effort. Yet, it is still widely applied in industry and will be in the foreseeable future. Although there is arguably a great need to optimize manual testing processes, research focuses mostly on optimization techniques for automated tests. Accordingly, there is no precise understanding of the practices and processes of manual testing in industry, nor of the pitfalls and untapped optimization potential. To shed light on this issue, we conducted a survey among 38 testing professionals from 16 companies to investigate their manual testing processes and to identify potential for optimization. We synthesize guidelines for when optimization techniques from automated testing can be applied to manual testing. By means of case studies on two industrial software projects, we show that fault detection likelihood, test feedback time, and test creation effort can be improved when following our guidelines. @InProceedings{ESEC/FSE21p1281, author = {Roman Haas and Daniel Elsner and Elmar Juergens and Alexander Pretschner and Sven Apel}, title = {How Can Manual Testing Processes Be Optimized? Developer Survey, Optimization Guidelines, and Case Studies}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1281--1291}, doi = {10.1145/3468264.3473922}, year = {2021}, } Publisher's Version Info |
|
Jung, Chijung |
ESEC/FSE '21: "Swarmbug: Debugging Configuration ..."
Swarmbug: Debugging Configuration Bugs in Swarm Robotics
Chijung Jung, Ali Ahad, Jinho Jung, Sebastian Elbaum, and Yonghwi Kwon (University of Virginia, USA; Georgia Institute of Technology, USA) Swarm robotics collectively solve problems that are challenging for individual robots, from environmental monitoring to entertainment. The algorithms enabling swarms allow individual robots of the swarm to plan, share, and coordinate their trajectories and tasks to achieve a common goal. Such algorithms rely on a large number of configurable parameters that can be tailored to target particular scenarios. This large configuration space, the complexity of the algorithms, and the dependencies with the robots’ setup and performance make debugging and fixing swarm configuration bugs extremely challenging. This paper proposes Swarmbug, a swarm debugging system that automatically diagnoses and fixes buggy behaviors caused by misconfiguration. The essence of Swarmbug is the novel concept of the degree of causal contribution (Dcc), which abstracts the impact of environment configurations (e.g., obstacles) on the drones in a swarm via behavioral causal analysis. Swarmbug automatically generates, validates, and ranks fixes for configuration bugs. We evaluate Swarmbug on four diverse swarm algorithms. Swarmbug successfully fixes four configuration bugs in the evaluated algorithms, showing that it is generic and effective. We also conduct a real-world experiment with physical drones to show that Swarmbug’s fixes are effective in the real world. @InProceedings{ESEC/FSE21p868, author = {Chijung Jung and Ali Ahad and Jinho Jung and Sebastian Elbaum and Yonghwi Kwon}, title = {Swarmbug: Debugging Configuration Bugs in Swarm Robotics}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {868--880}, doi = {10.1145/3468264.3468601}, year = {2021}, } Publisher's Version Info |
|
Jung, Jinho |
ESEC/FSE '21: "Swarmbug: Debugging Configuration ..."
Swarmbug: Debugging Configuration Bugs in Swarm Robotics
Chijung Jung, Ali Ahad, Jinho Jung, Sebastian Elbaum, and Yonghwi Kwon (University of Virginia, USA; Georgia Institute of Technology, USA) Swarm robotics collectively solve problems that are challenging for individual robots, from environmental monitoring to entertainment. The algorithms enabling swarms allow individual robots of the swarm to plan, share, and coordinate their trajectories and tasks to achieve a common goal. Such algorithms rely on a large number of configurable parameters that can be tailored to target particular scenarios. This large configuration space, the complexity of the algorithms, and the dependencies with the robots’ setup and performance make debugging and fixing swarm configuration bugs extremely challenging. This paper proposes Swarmbug, a swarm debugging system that automatically diagnoses and fixes buggy behaviors caused by misconfiguration. The essence of Swarmbug is the novel concept of the degree of causal contribution (Dcc), which abstracts the impact of environment configurations (e.g., obstacles) on the drones in a swarm via behavioral causal analysis. Swarmbug automatically generates, validates, and ranks fixes for configuration bugs. We evaluate Swarmbug on four diverse swarm algorithms. Swarmbug successfully fixes four configuration bugs in the evaluated algorithms, showing that it is generic and effective. We also conduct a real-world experiment with physical drones to show that Swarmbug’s fixes are effective in the real world. @InProceedings{ESEC/FSE21p868, author = {Chijung Jung and Ali Ahad and Jinho Jung and Sebastian Elbaum and Yonghwi Kwon}, title = {Swarmbug: Debugging Configuration Bugs in Swarm Robotics}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {868--880}, doi = {10.1145/3468264.3468601}, year = {2021}, } Publisher's Version Info |
|
Jung, Young Jin |
ESEC/FSE '21: "A Replication of ‘DeepBugs: ..."
A Replication of ‘DeepBugs: A Learning Approach to Name-based Bug Detection’
Jordan Winkler, Abhimanyu Agarwal, Caleb Tung, Dario Rios Ugalde, Young Jin Jung, and James C. Davis (Purdue University, USA; Lockheed Martin, USA) We replicated the main result of DeepBugs, a bug detection algorithm for name-based bugs. The original authors evaluated it in three contexts: swapped-argument bugs, wrong binary operator, and wrong binary operator operands. We followed the algorithm and replicated the results for swapped-argument bugs. Our replication used independent implementations of the major components: training set generation, token vectorization, and the neural network data pipeline, model, and loss function. Using the same dataset and the same testing process, we report comparable performance: within 2% of the accuracy reported by Pradel and Sen. @InProceedings{ESEC/FSE21p1604, author = {Jordan Winkler and Abhimanyu Agarwal and Caleb Tung and Dario Rios Ugalde and Young Jin Jung and James C. Davis}, title = {A Replication of ‘DeepBugs: A Learning Approach to Name-based Bug Detection’}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1604--1604}, doi = {10.1145/3468264.3477221}, year = {2021}, } Publisher's Version |
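The name-based signal behind swapped-argument detection can be sketched without any learned components (a deliberately simplified stand-in: DeepBugs learns identifier embeddings and a neural classifier, whereas this toy uses character-bigram overlap; the identifier names are made up): a call looks suspicious when each argument name matches the *other* parameter better than its own.

```python
def lexical_sim(a, b):
    # Crude character-bigram Jaccard similarity, standing in for learned
    # identifier embeddings.
    bigrams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
    A, B = bigrams(a.lower()), bigrams(b.lower())
    return len(A & B) / max(1, len(A | B))

def looks_swapped(params, args):
    """Flag a two-argument call as a likely swapped-argument bug when the
    crossed parameter/argument pairing is lexically closer than the
    straight one."""
    (p1, p2), (a1, a2) = params, args
    straight = lexical_sim(p1, a1) + lexical_sim(p2, a2)
    crossed = lexical_sim(p1, a2) + lexical_sim(p2, a1)
    return crossed > straight

# draw(width, height) called as draw(height_px, width_px) -> suspicious.
buggy = looks_swapped(("width", "height"), ("height_px", "width_px"))
clean = looks_swapped(("width", "height"), ("width_px", "height_px"))
```

The learned version generalizes this far beyond surface overlap (e.g., relating `len` to `length`), which is what makes the embedding-based approach worth replicating.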
|
Just, Sascha |
ESEC/FSE '21: "When Life Gives You Oranges: ..."
When Life Gives You Oranges: Detecting and Diagnosing Intermittent Job Failures at Mozilla
Johannes Lampel, Sascha Just, Sven Apel, and Andreas Zeller (CISPA, Germany; Saarland University, Germany; Microsoft, USA) Continuous delivery of cloud systems requires constant running of jobs (build processes, tests, etc.). One issue that plagues this continuous integration (CI) process is intermittent failures: non-deterministic false alarms that do not result from a bug in the software or the job specification, but rather from issues in the underlying infrastructure. At Mozilla, such intermittent failures are called oranges, a reference to the color of the build status indicator. Because such intermittent failures disrupt CI, they erode the developers' trust in the jobs. We present a novel approach that automatically classifies failing jobs to determine whether a job execution failure arises from an actual software bug or is caused by flakiness in the job (e.g., a test) or the underlying infrastructure. For this purpose, we train classification models on job telemetry data to diagnose failure patterns involving features such as runtime, CPU load, operating system version, or specific platform with high precision. In an evaluation on a set of Mozilla CI jobs, our approach achieves precision scores of 73% on average across all data sets, with some test suites achieving precision scores good enough for fully automated classification (i.e., up to 100%), and recall scores of 82% on average (up to 94%). @InProceedings{ESEC/FSE21p1381, author = {Johannes Lampel and Sascha Just and Sven Apel and Andreas Zeller}, title = {When Life Gives You Oranges: Detecting and Diagnosing Intermittent Job Failures at Mozilla}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1381--1392}, doi = {10.1145/3468264.3473931}, year = {2021}, } Publisher's Version |
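The evaluation above hinges on precision and recall of a binary "intermittent vs. real failure" classifier. As an illustrative aside (not the paper's code; all data below is invented), a minimal sketch of those two metrics:

```python
# Toy evaluation of a binary job-failure classifier.
# Convention: True = intermittent ("orange"), False = real failure.
# The label lists below are invented for illustration only.

def precision_recall(predicted, actual):
    """Precision and recall for the positive (intermittent) class."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

actual    = [True, True, True, False, False, True]
predicted = [True, True, False, True, False, True]
p, r = precision_recall(predicted, actual)   # both 0.75 on this toy data
```

High precision matters here because a misclassified real bug (a false "orange") would be silently ignored; that is why the paper gates fully automated classification on near-100% precision.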
|
Kalia, Anup K. |
ESEC/FSE '21: "Mono2Micro: A Practical and ..."
Mono2Micro: A Practical and Effective Tool for Decomposing Monolithic Java Applications to Microservices
Anup K. Kalia, Jin Xiao, Rahul Krishna, Saurabh Sinha, Maja Vukovic, and Debasish Banerjee (IBM Research, USA; IBM, USA) In migrating production workloads to the cloud, enterprises often face the daunting task of evolving monolithic applications toward a microservice architecture. At IBM, we developed a tool called Mono2Micro to assist with this challenging task. Mono2Micro performs spatio-temporal decomposition, leveraging well-defined business use cases and runtime call relations to create a functionally cohesive partitioning of application classes. Our preliminary evaluation of Mono2Micro showed promising results. How well does Mono2Micro perform against other decomposition techniques, and how do practitioners perceive the tool? This paper describes the technical foundations of Mono2Micro and presents results to answer these two questions. To answer the first question, we evaluated Mono2Micro against four existing techniques on a set of open-source and proprietary Java applications, using different metrics to assess the quality of decomposition and the tool’s efficiency. Our results show that Mono2Micro significantly outperforms state-of-the-art baselines on specific metrics well-defined for the problem domain. To answer the second question, we conducted a survey of twenty-one practitioners in various industry roles who have used Mono2Micro. This study highlights several benefits of the tool, interesting practitioner perceptions, and scope for further improvements. Overall, these results show that Mono2Micro can provide a valuable aid to practitioners in creating functionally cohesive and explainable microservice decompositions. @InProceedings{ESEC/FSE21p1214, author = {Anup K.
Kalia and Jin Xiao and Rahul Krishna and Saurabh Sinha and Maja Vukovic and Debasish Banerjee}, title = {Mono2Micro: A Practical and Effective Tool for Decomposing Monolithic Java Applications to Microservices}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1214--1224}, doi = {10.1145/3468264.3473915}, year = {2021}, } Publisher's Version |
|
Kang, Eunsuk |
ESEC/FSE '21: "AlloyMax: Bringing Maximum ..."
AlloyMax: Bringing Maximum Satisfaction to Relational Specifications
Changjian Zhang, Ryan Wagner, Pedro Orvalho, David Garlan, Vasco Manquinho, Ruben Martins, and Eunsuk Kang (Carnegie Mellon University, USA; INESC-ID, Portugal; University of Lisbon, Portugal) Alloy is a declarative modeling language based on a first-order relational logic. Its constraint-based analysis has enabled a wide range of applications in software engineering, including configuration synthesis, bug finding, test-case generation, and security analysis. Certain types of analysis tasks in these domains involve finding an optimal solution. For example, in a network configuration problem, instead of finding any valid configuration, it may be desirable to find one that is most permissive (i.e., it permits a maximum number of packets). Due to its dependence on SAT, however, Alloy cannot be used to specify and analyze these types of problems. We propose AlloyMax, an extension of Alloy with a capability to express and analyze problems with optimal solutions. AlloyMax introduces (1) a small addition of language constructs that can be used to specify a wide range of problems that involve optimality and (2) a new analysis engine that leverages a Maximum Satisfiability (MaxSAT) solver to generate optimal solutions. To enable this new type of analysis, we show how a specification in a first-order relational logic can be translated into an input format of MaxSAT solvers—namely, a Boolean formula in weighted conjunctive normal form (WCNF). We demonstrate the applicability and scalability of AlloyMax on a benchmark of problems. To our knowledge, AlloyMax is the first approach to enable analysis with optimality in a relational modeling language, and we believe that AlloyMax has the potential to bring a wide range of new applications to Alloy. 
@InProceedings{ESEC/FSE21p155, author = {Changjian Zhang and Ryan Wagner and Pedro Orvalho and David Garlan and Vasco Manquinho and Ruben Martins and Eunsuk Kang}, title = {AlloyMax: Bringing Maximum Satisfaction to Relational Specifications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {155--167}, doi = {10.1145/3468264.3468587}, year = {2021}, } Publisher's Version Artifacts Reusable |
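The core of AlloyMax is the reduction to weighted conjunctive normal form (WCNF): hard clauses must all hold, and the solver maximizes the total weight of satisfied soft clauses. A brute-force toy solver over such an instance (illustrative only; AlloyMax delegates to real MaxSAT solvers, and the encoding below is invented) makes the semantics concrete:

```python
from itertools import product

# Brute-force weighted MaxSAT over a tiny WCNF instance.
# A literal is +v or -v for variable v (1-indexed, as in DIMACS).

def sat(clause, assignment):
    """A clause is satisfied if any of its literals holds."""
    return any(assignment[abs(l)] == (l > 0) for l in clause)

def max_sat(n_vars, hard, soft):
    """Return (best_weight, assignment) such that every hard clause
    holds and the total weight of satisfied soft clauses is maximal."""
    best = (-1, None)
    for bits in product([False, True], repeat=n_vars):
        a = {v + 1: bits[v] for v in range(n_vars)}
        if not all(sat(c, a) for c in hard):
            continue                       # hard clauses are mandatory
        w = sum(wt for wt, c in soft if sat(c, a))
        if w > best[0]:
            best = (w, a)
    return best

# Hard: x1 OR x2 must hold. Soft: prefer x1 false (weight 2), x2 true (weight 1).
weight, model = max_sat(2, hard=[[1, 2]], soft=[(2, [-1]), (1, [2])])
# Optimal: x1 = False, x2 = True, satisfying soft weight 3.
```

Exhaustive enumeration is exponential, of course; it is only meant to pin down what "maximum satisfaction" means for the WCNF translation the abstract describes.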
|
Kang, Mingqing |
ESEC/FSE '21: "Detecting Node.js Prototype ..."
Detecting Node.js Prototype Pollution Vulnerabilities via Object Lookup Analysis
Song Li, Mingqing Kang, Jianwei Hou, and Yinzhi Cao (Johns Hopkins University, USA; Renmin University of China, China) Prototype pollution is a type of vulnerability specific to prototype-based languages, such as JavaScript, which allows an adversary to pollute a base object’s properties, leading to further consequences such as Denial of Service (DoS), arbitrary code execution, and session fixation. On one hand, the only prior work on detecting prototype pollution adopts dynamic analysis to fuzz package inputs, which inevitably has code coverage issues in triggering some deeply embedded vulnerabilities. On the other hand, it is challenging to apply state-of-the-art static analysis to detecting prototype pollution because of the involvement of prototype chains and fine-grained object relations, including built-in ones. In this paper, we propose a flow-, context-, and branch-sensitive static taint analysis tool, called ObjLupAnsys, to detect prototype pollution vulnerabilities. The key idea behind ObjLupAnsys is a so-called object lookup analysis, which gradually expands the source and sink objects into big clusters with a complex inner structure by performing targeted object lookups in both clusters so that a system built-in function can be redefined. Specifically, at the source cluster, ObjLupAnsys proactively creates new object properties based on how the target program uses the initial source object; at the sink cluster, ObjLupAnsys assigns property values in object lookups to decrease the number of object lookups needed to reach a system built-in function. We implemented an open-source tool and applied it to the detection of prototype pollution among Node.js packages. Our evaluation shows that ObjLupAnsys finds 61 zero-day, previously unknown, exploitable vulnerabilities, as opposed to 18 by the state-of-the-art dynamic fuzzing tool and three by a state-of-the-art static analysis tool modified to detect prototype pollution.
To date, 11 vulnerable Node.js packages have been assigned CVE numbers and five have already been patched by their developers. In addition, ObjLupAnsys discovered seven applications or packages, including a real-world online website, that are indirectly vulnerable due to their inclusion of vulnerable packages found by ObjLupAnsys. @InProceedings{ESEC/FSE21p268, author = {Song Li and Mingqing Kang and Jianwei Hou and Yinzhi Cao}, title = {Detecting Node.js Prototype Pollution Vulnerabilities via Object Lookup Analysis}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {268--279}, doi = {10.1145/3468264.3468542}, year = {2021}, } Publisher's Version |
|
Karas, Zachary |
ESEC/FSE '21: "Connecting the Dots: Rethinking ..."
Connecting the Dots: Rethinking the Relationship between Code and Prose Writing with Functional Connectivity
Zachary Karas, Andrew Jahn, Westley Weimer, and Yu Huang (University of Michigan, USA) Medical imaging studies of software engineering have risen in popularity and may reveal the neural underpinnings of coding activities. To date, however, all studies in computer science venues have treated brain regions independently and in isolation. Since most complex neural activity involves coordination among multiple regions, previous analyses may overlook neural behavior. We propose to apply functional connectivity analysis to medical imaging data from software engineering tasks. Informally, this analysis treats the brain as a graph, rather than a series of independent modules, and statistically infers relevant edges. We present a functional connectivity analysis of existing data, which elucidates the interconnections between code writing and prose writing, especially regarding higher mathematics and semantic processing. First, we found a significant link between Broca’s Area (language) and the Number Form Area (higher mathematics) for coding. This both refines previous interpretations that code writing and natural language are distinct from each other, and may also contribute to the understanding of the Number Form Area in the Psychology literature. Second, we identify an area with important functional connectivity for both prose writing and coding, unlike previous analyses that associated it with coding. This advances our neural understanding of coding and prose writing, and was only exposed by using functional connectivity analysis. Third, for coding, we find a strong functional connectivity result for a brain region involved in semantic processing for language, with implications for CS training. Finally, we find a neural relationship between coding and expertise, including a more grounded explanation than prior work. 
@InProceedings{ESEC/FSE21p767, author = {Zachary Karas and Andrew Jahn and Westley Weimer and Yu Huang}, title = {Connecting the Dots: Rethinking the Relationship between Code and Prose Writing with Functional Connectivity}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {767--779}, doi = {10.1145/3468264.3468579}, year = {2021}, } Publisher's Version |
|
Kästner, Christian |
ESEC/FSE '21: "VarFix: Balancing Edit Expressiveness ..."
VarFix: Balancing Edit Expressiveness and Search Effectiveness in Automated Program Repair
Chu-Pan Wong, Priscila Santiesteban, Christian Kästner, and Claire Le Goues (Carnegie Mellon University, USA; Coe College, USA) Automatically repairing a buggy program is essentially a search problem: searching for code transformations that pass a set of tests. Various search strategies have been explored, but they either navigate the search space in an ad hoc way using heuristics, or systematically but at the cost of limited edit expressiveness in the kinds of supported program edits. In this work, we explore the possibility of systematically navigating the search space without sacrificing edit expressiveness. The key enabler of this exploration is variational execution, a dynamic analysis technique that has been shown to be effective at exploring many similar executions in large search spaces. We evaluate our approach on IntroClassJava and Defects4J, showing that a systematic search is effective at leveraging and combining fixing ingredients to find patches, including many high-quality patches and multi-edit patches. @InProceedings{ESEC/FSE21p354, author = {Chu-Pan Wong and Priscila Santiesteban and Christian Kästner and Claire Le Goues}, title = {VarFix: Balancing Edit Expressiveness and Search Effectiveness in Automated Program Repair}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {354--366}, doi = {10.1145/3468264.3468600}, year = {2021}, } Publisher's Version Info ESEC/FSE '21: "SmartCommit: A Graph-Based ..." SmartCommit: A Graph-Based Interactive Assistant for Activity-Oriented Commits Bo Shen, Wei Zhang, Christian Kästner, Haiyan Zhao, Zhao Wei, Guangtai Liang, and Zhi Jin (Peking University, China; Carnegie Mellon University, USA; Huawei Technologies, China) In collaborative software development, it is considered a best practice to submit code changes as a sequence of cohesive commits, each of which records the work of a specific development activity, such as adding a new feature, fixing a bug, or refactoring.
However, rather than following this best practice, developers often submit a set of loosely related changes serving different development activities as a single composite commit, due to the tedious manual work and the lack of effective tool support for decomposing such a tangled changeset. Composite commits often obfuscate the change history of software artifacts and bring challenges to efficient collaboration among developers. To encourage activity-oriented commits, we propose SmartCommit, a graph-partitioning-based interactive approach to tangled changeset decomposition that leverages not only the efficiency of algorithms but also the knowledge of developers. To evaluate the effectiveness of our approach, we (1) deployed SmartCommit in an international IT company and analyzed usage data collected from a field study with 83 engineers over 9 months; and (2) conducted a controlled experiment on 3,000 synthetic composite commits from 10 diverse open-source projects. Results show that SmartCommit achieves a median accuracy between 71% and 84% when decomposing composite commits without developer involvement, and significantly helps developers follow the best practice of submitting activity-oriented commits with acceptable interaction effort and time cost in real collaborative software development. @InProceedings{ESEC/FSE21p379, author = {Bo Shen and Wei Zhang and Christian Kästner and Haiyan Zhao and Zhao Wei and Guangtai Liang and Zhi Jin}, title = {SmartCommit: A Graph-Based Interactive Assistant for Activity-Oriented Commits}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {379--390}, doi = {10.1145/3468264.3468551}, year = {2021}, } Publisher's Version Artifacts Reusable |
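At its simplest, graph-based changeset decomposition links related diff hunks and emits one candidate commit per connected component. The sketch below (hunk names and links invented; SmartCommit's actual partitioning and link sources are richer) illustrates that core step with a union-find:

```python
# Group diff hunks of a composite commit into candidate activity-oriented
# commits via connected components of a "relatedness" graph. A link might
# come from a shared identifier, a call edge, or co-located edits.

def partition(hunks, links):
    parent = {h: h for h in hunks}

    def find(x):                        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in links:
        parent[find(a)] = find(b)       # union the two hunks' groups

    groups = {}
    for h in hunks:
        groups.setdefault(find(h), []).append(h)
    return sorted(sorted(g) for g in groups.values())

hunks = ["Parser.java#1", "Parser.java#2", "Lexer.java#1", "README#1"]
links = [("Parser.java#1", "Parser.java#2"),
         ("Parser.java#2", "Lexer.java#1")]
parts = partition(hunks, links)
# Two candidate commits: the three linked code hunks, and the README edit.
```

The interactive part of the real tool lets developers adjust such a proposed partition before committing, which pure connected components cannot capture.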
|
Kate, Sayali |
ESEC/FSE '21: "PHYSFRAME: Type Checking Physical ..."
PHYSFRAME: Type Checking Physical Frames of Reference for Robotic Systems
Sayali Kate, Michael Chinn, Hongjun Choi, Xiangyu Zhang, and Sebastian Elbaum (Purdue University, USA; University of Virginia, USA) A robotic system continuously measures its own motions and the external world during operation. Such measurements are made with respect to some frame of reference, i.e., a coordinate system. A nontrivial robotic system has a large number of different frames, and data must be translated back and forth from one frame to another. The onus is on the developers to get such translations right. However, this is very challenging and error-prone, as evidenced by the large number of questions and issues related to frame use on developer forums. Since any state variable can be associated with some frame, reference frames can be naturally modeled as variable types. We hence develop a novel type system that can automatically infer variables' frame types and, in turn, detect any type inconsistencies and violations of frame conventions. The evaluation on a set of 180 publicly available ROS projects shows that our system can detect 190 inconsistencies, with 154 true positives. We reported 52 to developers and have received 18 responses so far, with 15 fixed or acknowledged. Our technique also finds 45 violations of common practices. @InProceedings{ESEC/FSE21p45, author = {Sayali Kate and Michael Chinn and Hongjun Choi and Xiangyu Zhang and Sebastian Elbaum}, title = {PHYSFRAME: Type Checking Physical Frames of Reference for Robotic Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {45--56}, doi = {10.1145/3468264.3468608}, year = {2021}, } Publisher's Version |
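The "frames as types" idea can be made concrete with a toy runtime analogue: tag each measurement with its frame and reject operations that mix frames without an explicit transform. (PHYSFRAME does this statically over C++/ROS code; the classes and frame names below are invented for illustration.)

```python
# Treat a reference frame as the "type" of a measurement: combining two
# values is only legal when their frames match; crossing frames requires
# an explicit transform. A dynamic stand-in for PHYSFRAME's static checks.

class FrameError(Exception):
    pass

class Vec:
    def __init__(self, x, y, frame):
        self.x, self.y, self.frame = x, y, frame

    def __add__(self, other):
        if self.frame != other.frame:
            raise FrameError(f"mixing frames {self.frame!r} and {other.frame!r}")
        return Vec(self.x + other.x, self.y + other.y, self.frame)

def transform(v, to_frame, dx, dy):
    """Explicitly translate v into to_frame (toy translation-only transform)."""
    return Vec(v.x + dx, v.y + dy, to_frame)

robot_goal = Vec(1.0, 2.0, "map")
sensor_hit = Vec(0.5, 0.0, "base_link")

ok = transform(sensor_hit, "map", 3.0, 4.0) + robot_goal   # frames agree
try:
    sensor_hit + robot_goal                                # frame mismatch
    mixed = True
except FrameError:
    mixed = False
```

A static type system catches the same mistake at analysis time, before the robot ever runs, which is the paper's point.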
|
Kehrer, Timo |
ESEC/FSE '21: "Feature Trace Recording ..."
Feature Trace Recording
Paul Maximilian Bittner, Alexander Schultheiß, Thomas Thüm, Timo Kehrer, Jeffrey M. Young, and Lukas Linsbauer (University of Ulm, Germany; Humboldt University of Berlin, Germany; Oregon State University, USA; TU Braunschweig, Germany) Tracing requirements to their implementation is crucial to all stakeholders of a software development process. When managing software variability, requirements are typically expressed in terms of features, a feature being a user-visible characteristic of the software. While feature traces are fully documented in software product lines, ad-hoc branching and forking, known as clone-and-own, is still the dominant way for developing multi-variant software systems in practice. Retroactive migration to product lines suffers from uncertainties and high effort because knowledge of feature traces must be recovered but is scattered across teams or even lost. We propose a semi-automated methodology for recording feature traces proactively, during software development when the necessary knowledge is present. To support the ongoing development of previously unmanaged clone-and-own projects, we explicitly deal with the absence of domain knowledge for both existing and new source code. We evaluate feature trace recording by replaying code edit patterns from the history of two real-world product lines. Our results show that feature trace recording reduces the manual effort to specify traces. Recorded feature traces could improve automation in change-propagation among cloned system variants and could reduce effort if developers decide to migrate to a product line. @InProceedings{ESEC/FSE21p1007, author = {Paul Maximilian Bittner and Alexander Schultheiß and Thomas Thüm and Timo Kehrer and Jeffrey M. Young and Lukas Linsbauer}, title = {Feature Trace Recording}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1007--1020}, doi = {10.1145/3468264.3468531}, year = {2021}, } Publisher's Version Info Artifacts Reusable |
|
Kellogg, Martin |
ESEC/FSE '21: "Lightweight Verification via ..."
Lightweight Verification via Specialized Typecheckers
Martin Kellogg (University of Washington, USA) Testing and other unsound analyses are developer-friendly but cannot give guarantees that programs are free of bugs. Verification and other extant sound approaches can give guarantees but often require too much effort for everyday developers. In this work, we describe our efforts to make verification more accessible for developers by using specialized pluggable typecheckers---a relatively accessible verification technology---to solve complex problems that previously required more complex and harder-to-use verification approaches. @InProceedings{ESEC/FSE21p1635, author = {Martin Kellogg}, title = {Lightweight Verification via Specialized Typecheckers}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1635--1639}, doi = {10.1145/3468264.3473105}, year = {2021}, } Publisher's Version ESEC/FSE '21: "Lightweight and Modular Resource ..." Lightweight and Modular Resource Leak Verification Martin Kellogg, Narges Shadab, Manu Sridharan, and Michael D. Ernst (University of Washington, USA; University of California at Riverside, USA) A resource leak occurs when a program allocates a resource, such as a socket or file handle, but fails to deallocate it. Resource leaks cause resource starvation, slowdowns, and crashes. Previous techniques to prevent resource leaks are either unsound, imprecise, inapplicable to existing code, slow, or a combination of these. Static detection of resource leaks requires checking that de-allocation methods are always invoked on relevant objects before they become unreachable. Our key insight is that leak detection can be reduced to an accumulation problem, a class of typestate problems amenable to sound and modular checking without the need for a heavyweight, whole-program alias analysis. 
The precision of an accumulation analysis can be improved by computing targeted aliasing information, and we augmented our baseline checker with three such novel techniques: a lightweight ownership transfer system; a specialized resource alias analysis; and a system to create a fresh obligation when a non-final resource field is updated. Our approach occupies a unique slice of the design space: it is sound and runs relatively quickly (taking minutes on programs that a state-of-the-art approach took hours to analyze). We implemented our techniques for Java in an open-source tool called the Resource Leak Checker. The Resource Leak Checker revealed 49 real resource leaks in widely-deployed software. It scales well, has a manageable false positive rate (comparable to the high-confidence resource leak analysis built into the Eclipse IDE), and imposes only a small annotation burden (1/1500 LoC) for developers. @InProceedings{ESEC/FSE21p181, author = {Martin Kellogg and Narges Shadab and Manu Sridharan and Michael D. Ernst}, title = {Lightweight and Modular Resource Leak Verification}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {181--192}, doi = {10.1145/3468264.3468576}, year = {2021}, } Publisher's Version Artifacts Functional |
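The reduction of leak detection to an accumulation problem rests on tracking "must-call" obligations: allocation creates an obligation that only an explicit deallocation discharges. A toy runtime analogue (the real checker reasons statically over Java with type annotations; the names below are invented):

```python
# A toy obligation tracker: allocate() records a must-call obligation,
# close() discharges it, and leaks() reports obligations never discharged.
# This is a dynamic illustration of the idea, not the static analysis.

class ObligationTracker:
    def __init__(self):
        self.open = set()

    def allocate(self, name):
        self.open.add(name)        # new obligation: `name` must be closed
        return name

    def close(self, name):
        self.open.discard(name)    # obligation discharged

    def leaks(self):
        return sorted(self.open)   # obligations still outstanding

tracker = ObligationTracker()
sock = tracker.allocate("socket-1")
tracker.allocate("log-file")
tracker.close(sock)                # "log-file" is never closed: a leak
```

The static version must establish at compile time that close() is called on every path before the object becomes unreachable, which is what makes the accumulation framing (and its freedom from whole-program alias analysis) valuable.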
|
Khan, Tanvir Ahmed |
ESEC/FSE '21: "AlloyFL: A Fault Localization ..."
AlloyFL: A Fault Localization Framework for Alloy
Tanvir Ahmed Khan, Allison Sullivan, and Kaiyuan Wang (University of Texas at Arlington, USA; Google, USA) Declarative models help improve the reliability of software systems: models can be used to convey requirements, analyze system designs and verify implementation properties. Alloy is a commonly used modeling language. A key strength of Alloy is the Analyzer, Alloy's integrated development environment (IDE), which allows users to write and execute models by leveraging a fully automatic SAT based analysis engine. Unfortunately, writing correct constraints of complex properties is difficult. To help users identify fault locations, AlloyFL is a fault localization technique that takes as input a faulty Alloy model and a fault-revealing test suite. As output, AlloyFL returns a ranked list of locations from most to least suspicious. This paper describes our Java implementation of AlloyFL as an extension to the Analyzer. Our experimental results show AlloyFL is capable of detecting the location of real world faults and works in the presence of multiple faulty locations. The demo video for AlloyFL can be found at https://youtu.be/ZwgP58Nsbx8. @InProceedings{ESEC/FSE21p1535, author = {Tanvir Ahmed Khan and Allison Sullivan and Kaiyuan Wang}, title = {AlloyFL: A Fault Localization Framework for Alloy}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1535--1539}, doi = {10.1145/3468264.3473116}, year = {2021}, } Publisher's Version Video Info |
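Ranking locations from most to least suspicious is the classic spectrum-based fault localization setup. As a generic sketch (AlloyFL's actual scoring over Alloy constraints may differ; the coverage data below is invented), the Tarantula formula ranks locations by how concentrated failing-test coverage is:

```python
# Spectrum-based suspiciousness ranking (Tarantula formula):
# a location covered mostly by failing tests scores near 1.0.

def tarantula(failed_cov, passed_cov, total_failed, total_passed):
    scores = {}
    for loc in set(failed_cov) | set(passed_cov):
        f = failed_cov.get(loc, 0) / total_failed if total_failed else 0.0
        p = passed_cov.get(loc, 0) / total_passed if total_passed else 0.0
        scores[loc] = f / (f + p) if (f + p) else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Invented coverage counts over three Alloy-model locations:
failed_cov = {"fact A": 2, "pred B": 2, "fun C": 1}   # hits by 2 failing tests
passed_cov = {"fact A": 3, "fun C": 1}                # hits by 3 passing tests
ranking = tarantula(failed_cov, passed_cov, total_failed=2, total_passed=3)
# "pred B" ranks first: it is covered by every failing test and no passing one.
```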
|
Khurshid, Sarfraz |
ESEC/FSE '21: "ACHyb: A Hybrid Analysis Approach ..."
ACHyb: A Hybrid Analysis Approach to Detect Kernel Access Control Vulnerabilities
Yang Hu, Wenxi Wang, Casen Hunger, Riley Wood, Sarfraz Khurshid, and Mohit Tiwari (University of Texas at Austin, USA) Access control is essential to Operating System (OS) security. Incorrect implementation of access control can introduce new attack surfaces to the OS, known as Kernel Access Control Vulnerabilities (KACVs). To understand KACVs, we study their root causes and security impacts. Given the complexity of the recognized root causes, we focus on two kinds of KACVs, namely KACV-M (due to missing permission checks) and KACV-I (due to misused permission checks). We find that over 60% of these KACVs are of critical, high, or medium security severity, resulting in a variety of security threats, including bypassed security checks, privilege escalation, etc. However, existing approaches can only detect KACV-M. The state-of-the-art KACV-M detector, PeX, is a static analysis tool that still suffers from extremely high false-positive rates. In this paper, we present ACHyb, a precise and scalable approach to reveal both KACV-M and KACV-I. ACHyb is a hybrid approach, which first applies static analysis to identify potentially vulnerable paths and then applies dynamic analysis to further reduce the false positives among those paths. For the static analysis, ACHyb improves on PeX in both precision and soundness, using interface analysis, callsite dependence analysis, and constraint-based invariant analysis with a stronger access control invariant. For the dynamic analysis, ACHyb utilizes greybox fuzzing to identify the potential KACVs. To improve fuzzing efficiency, ACHyb adopts our novel clustering-based seed distillation approach to generate high-quality seed programs. Our experimental results show that ACHyb reveals 76 potential KACVs in less than 8 hours and 22 of them are KACVs (19 KACV-M and 3 KACV-I).
In contrast, PeX reveals 2,088 potential KACVs in more than 11 hours, and only 14 of them are KACVs (all KACV-M). Furthermore, ACHyb successfully uncovers 7 new KACVs, and 2 of them (1 KACV-M and 1 KACV-I) have been confirmed by kernel developers. @InProceedings{ESEC/FSE21p316, author = {Yang Hu and Wenxi Wang and Casen Hunger and Riley Wood and Sarfraz Khurshid and Mohit Tiwari}, title = {ACHyb: A Hybrid Analysis Approach to Detect Kernel Access Control Vulnerabilities}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {316--327}, doi = {10.1145/3468264.3468627}, year = {2021}, } Publisher's Version Artifacts Reusable |
|
Kim, Dong Jae |
ESEC/FSE '21: "How Disabled Tests Manifest ..."
How Disabled Tests Manifest in Test Maintainability Challenges?
Dong Jae Kim, Bo Yang, Jinqiu Yang, and Tse-Hsun (Peter) Chen (Concordia University, Canada) Software testing is an essential software quality assurance practice. Testing helps expose faults earlier, allowing developers to repair the code and reduce future maintenance costs. However, repairing (i.e., making failing tests pass) may not always be done immediately. Bugs may require multiple rounds of repairs and even remain unfixed due to the difficulty of bug-fixing tasks. To help test maintenance, along with code comments, the majority of testing frameworks (e.g., JUnit and TestNG) have also introduced annotations such as @Ignore to disable failing tests temporarily. Although disabling tests may help alleviate maintenance difficulties, it may also introduce technical debt. With the faster release cycles of modern software development, disabling tests may become the salvation for many developers to meet project deliverables. In the end, disabled tests may become outdated and a source of technical debt, harming long-term maintenance. Despite its harmful implications, there is little empirical research evidence on the prevalence, evolution, and maintenance of disabled tests in practice. To fill this gap, we perform the first empirical study on test disabling practice. We develop a tool to mine 122K commits and detect 3,111 changes that disable tests from 15 open-source Java systems. Our main findings are: (1) Test disabling changes are 19% more common than regular test refactorings, such as renames and type changes. (2) Our life-cycle analysis shows that 41% of disabled tests are never brought back to evaluate software quality, and most disabled tests stay disabled for several years. (3) We unveil the motivations behind test disabling practice and the associated technical debt by manually studying the evolution of 349 unique disabled tests, achieving a 95% confidence level and a 5% confidence interval.
Finally, we present some actionable implications for researchers and developers. @InProceedings{ESEC/FSE21p1045, author = {Dong Jae Kim and Bo Yang and Jinqiu Yang and Tse-Hsun (Peter) Chen}, title = {How Disabled Tests Manifest in Test Maintainability Challenges?}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1045--1055}, doi = {10.1145/3468264.3468609}, year = {2021}, } Publisher's Version |
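The mining step boils down to recognizing the disabling constructs the abstract names, such as JUnit's @Ignore (and JUnit 5's @Disabled) or TestNG's enabled = false. A bare-bones detector in that spirit (the study's tool is far more thorough; the sample source is invented):

```python
import re

# Flag lines of Java test source that disable a test via common
# JUnit 4/5 annotations or TestNG's `enabled = false` attribute.

DISABLE_PATTERN = re.compile(
    r'@(Ignore|Disabled)\b'               # JUnit 4 @Ignore, JUnit 5 @Disabled
    r'|@Test\s*\(\s*enabled\s*=\s*false'  # TestNG @Test(enabled = false)
)

def disabled_tests(java_source):
    """Return the 1-based line numbers that disable a test."""
    return [i for i, line in enumerate(java_source.splitlines(), start=1)
            if DISABLE_PATTERN.search(line)]

sample = """\
public class FooTest {
    @Ignore("flaky on CI")
    @Test public void testA() {}

    @Test(enabled = false)
    public void testB() {}

    @Test public void testC() {}
}
"""
lines = disabled_tests(sample)   # flags the @Ignore and enabled=false lines
```

Running such a detector across a commit history (diffing flagged lines between revisions) is what lets the study count disabling changes and track how long tests stay disabled.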
|
Kim, Miryung |
ESEC/FSE '21: "HeteroFuzz: Fuzz Testing to ..."
HeteroFuzz: Fuzz Testing to Detect Platform Dependent Divergence for Heterogeneous Applications
Qian Zhang, Jiyuan Wang, and Miryung Kim (University of California at Los Angeles, USA) As specialized hardware accelerators like FPGAs become a prominent part of the current computing landscape, software applications are increasingly constructed to leverage heterogeneous architectures. Such a trend is already happening in the domain of machine learning and Internet-of-Things (IoT) systems built on edge devices. Yet, debugging and testing methods for heterogeneous applications are currently lacking. These applications may look similar to regular C/C++ code but include hardware synthesis details in the form of preprocessor directives. Therefore, their behavior under heterogeneous architectures may diverge significantly from their behavior on CPUs due to these hardware synthesis details. Further, the compilation and hardware simulation cycle takes an enormous amount of time, prohibiting the frequent invocations required for fuzz testing. We propose a novel fuzz testing technique, called HeteroFuzz, designed to specifically target heterogeneous applications and to detect platform-dependent divergence. The essence of HeteroFuzz is a three-pronged approach that reduces the long latency of repetitively invoking a hardware simulator on a heterogeneous application. First, in addition to monitoring code coverage as a fuzzing guidance mechanism, we analyze synthesis pragmas in kernel code and monitor accelerator-relevant value spectra. Second, we design dynamic probabilistic mutations to increase the chance of hitting divergent behavior under different platforms. Third, we memorize the boundaries of seen kernel inputs and skip HLS simulator invocation if it can expose only redundant divergent behavior. We evaluate HeteroFuzz on seven real-world heterogeneous applications with FPGA kernels. HeteroFuzz is 754X faster than naive fuzzing at exposing the same set of distinct divergence symptoms. Probabilistic mutations contribute a 17.5X speedup over fuzzing without them.
Selective invocation of HLS simulation contributes an 8.8X speedup over running without it. @InProceedings{ESEC/FSE21p242, author = {Qian Zhang and Jiyuan Wang and Miryung Kim}, title = {HeteroFuzz: Fuzz Testing to Detect Platform Dependent Divergence for Heterogeneous Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {242--254}, doi = {10.1145/3468264.3468610}, year = {2021}, } Publisher's Version |
|
Kim, Seohyun |
ESEC/FSE '21: "Explaining Mispredictions ..."
Explaining Mispredictions of Machine Learning Models using Rule Induction
Jürgen Cito, Isil Dillig, Seohyun Kim, Vijayaraghavan Murali, and Satish Chandra (TU Vienna, Austria; Facebook, Austria; University of Texas at Austin, USA; Facebook, USA) While machine learning (ML) models play an increasingly prevalent role in many software engineering tasks, their prediction accuracy is often problematic. When these models do mispredict, it can be very difficult to isolate the cause. In this paper, we propose a technique that aims to facilitate the debugging process of trained statistical models. Given an ML model and a labeled data set, our method produces an interpretable characterization of the data on which the model performs particularly poorly. The output of our technique can be useful for understanding limitations of the training data or the model itself; it can also be useful for ensembling if there are multiple models with different strengths. We evaluate our approach through case studies and illustrate how it can be used to improve the accuracy of predictive models used for software engineering tasks within Facebook. @InProceedings{ESEC/FSE21p716, author = {Jürgen Cito and Isil Dillig and Seohyun Kim and Vijayaraghavan Murali and Satish Chandra}, title = {Explaining Mispredictions of Machine Learning Models using Rule Induction}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {716--727}, doi = {10.1145/3468264.3468614}, year = {2021}, } Publisher's Version |
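The interpretable characterization the abstract describes amounts to finding simple predicates over the data under which the model's error rate is unusually high. A bare-bones sketch of that search (the paper's rule induction is richer; the features, rows, and threshold below are invented):

```python
# Find the single (feature == value) predicate whose matching rows have
# the highest misprediction rate, subject to a minimum support.

def best_rule(rows, mispredicted, min_support=2):
    """rows: list of feature->value dicts; mispredicted: parallel bools.
    Returns ((feature, value), misprediction_rate) for the best rule."""
    best, best_rate = None, -1.0
    candidates = {(f, r[f]) for r in rows for f in r}
    for feat, val in sorted(candidates):
        idx = [i for i, r in enumerate(rows) if r[feat] == val]
        if len(idx) < min_support:
            continue                         # too few rows to be meaningful
        rate = sum(mispredicted[i] for i in idx) / len(idx)
        if rate > best_rate:
            best, best_rate = (feat, val), rate
    return best, best_rate

rows = [{"lang": "java", "size": "small"},
        {"lang": "java", "size": "large"},
        {"lang": "python", "size": "large"},
        {"lang": "python", "size": "small"}]
mispredicted = [True, True, False, False]    # the model fails on java rows
rule, rate = best_rule(rows, mispredicted)   # recovers (lang == "java")
```

A rule like this tells the model owner where to look: retrain on more java examples, or route java inputs to a different model in an ensemble.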
|
Kim, Taeho |
ESEC/FSE '21: "Selecting Test Inputs for ..."
Selecting Test Inputs for DNNs using Differential Testing with Subspecialized Model Instances
Yu-Seung Ma, Shin Yoo, and Taeho Kim (Electronics and Telecommunications Research Institute, South Korea; KAIST, South Korea) Testing of Deep Learning (DL) models is difficult due to the lack of an automated test oracle and the high cost of human labelling. Differential testing has been used as a surrogate oracle, but there is no systematic guide on how to choose the reference model to use for differential testing. We propose a novel differential testing approach based on subspecialized models, i.e., models that are trained on sliced training data only (hence specialized for the slice). A preliminary evaluation of our approach with a CNN-based EMNIST image classifier shows that it can achieve a higher error detection rate with selected inputs compared to using the more advanced ResNet and LeNet as the reference model for differential testing. Our approach also outperforms N-version testing, i.e., the use of the same DL model architecture trained separately on the same data. @InProceedings{ESEC/FSE21p1467, author = {Yu-Seung Ma and Shin Yoo and Taeho Kim}, title = {Selecting Test Inputs for DNNs using Differential Testing with Subspecialized Model Instances}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1467--1470}, doi = {10.1145/3468264.3473131}, year = {2021}, } Publisher's Version |
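The selection criterion described above can be sketched as follows: inputs on which the model under test disagrees with the subspecialized model responsible for that input's data slice are flagged as likely errors. The slice function and toy classifiers below are hypothetical stand-ins, not the paper's EMNIST setup.

```python
def select_inputs(model_under_test, sub_models, slice_of, inputs):
    """Differential testing against subspecialized reference models:
    keep inputs where the model under test disagrees with the model
    trained only on that input's slice of the training data."""
    return [x for x in inputs
            if model_under_test(x) != sub_models[slice_of(x)](x)]
```

In a real setting, `sub_models` would map each training-data slice to a DNN trained on that slice alone, and the selected inputs would be forwarded for human labelling.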
|
Kolluri, Aashish |
ESEC/FSE '21: "SynGuar: Guaranteeing Generalization ..."
SynGuar: Guaranteeing Generalization in Programming by Example
Bo Wang, Teodora Baluta, Aashish Kolluri, and Prateek Saxena (National University of Singapore, Singapore) Programming by Example (PBE) is a program synthesis paradigm in which the synthesizer creates a program that matches a set of given examples. In many applications of such synthesis (e.g., program repair or reverse engineering), we aim to reconstruct a program that is close to a specific target program, not merely to produce some program that satisfies the seen examples. In such settings, we want the synthesized program to generalize well, i.e., to have as few errors as possible on the unobserved examples capturing the target function behavior. In this paper, we propose the first framework (called SynGuar) for PBE synthesizers that guarantees to achieve low generalization error with high probability. Our main contribution is a procedure to dynamically calculate how many additional examples suffice to theoretically guarantee generalization. We show how our techniques can be used in two well-known synthesis approaches: PROSE and STUN (synthesis through unification), for common string-manipulation program benchmarks. We find that often a few hundred examples suffice to provably bound generalization error below 5% with high (≥ 98%) probability on these benchmarks. Further, we confirm this empirically: SynGuar significantly improves the accuracy of existing synthesizers in generating the right target programs, whereas with fewer, arbitrarily chosen examples, the same baseline synthesizers (without SynGuar) overfit and lose accuracy. @InProceedings{ESEC/FSE21p677, author = {Bo Wang and Teodora Baluta and Aashish Kolluri and Prateek Saxena}, title = {SynGuar: Guaranteeing Generalization in Programming by Example}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {677--689}, doi = {10.1145/3468264.3468621}, year = {2021}, } Publisher's Version Info Artifacts Reusable |
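SynGuar's actual procedure dynamically tracks the synthesizer's remaining version space, but the flavor of its sample-size guarantee is that of a classical PAC bound for a finite hypothesis space: with m ≥ (ln|H| + ln(1/δ))/ε i.i.d. examples, any program consistent with all of them has generalization error below ε with probability at least 1 − δ. A minimal sketch under that textbook assumption (the hypothesis-space size is an illustrative input, not something SynGuar takes as a constant):

```python
import math

def examples_needed(hypothesis_space_size, eps, delta):
    """Classical PAC sample bound for a finite hypothesis space:
    m >= (ln|H| + ln(1/delta)) / eps guarantees that any program
    consistent with m i.i.d. examples has error < eps with
    probability >= 1 - delta."""
    return math.ceil(
        (math.log(hypothesis_space_size) + math.log(1 / delta)) / eps)
```

For example, for a space of 2^20 candidate programs, error below 5% (ε = 0.05) with ≥ 98% probability (δ = 0.02) requires a few hundred examples, consistent in scale with the abstract's observation.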
|
Komondoor, Raghavan |
ESEC/FSE '21: "GenSys: A Scalable Fixed-Point ..."
GenSys: A Scalable Fixed-Point Engine for Maximal Controller Synthesis over Infinite State Spaces
Stanly Samuel, Deepak D'Souza, and Raghavan Komondoor (IISc Bengaluru, India) The synthesis of maximally-permissive controllers in infinite-state systems has many practical applications. Such controllers directly correspond to maximal winning strategies in logically specified infinite-state two-player games. In this paper, we introduce GenSys, a fixed-point engine for computing maximal winning strategies for players in infinite-state safety games. A key feature of GenSys is that it leverages the capabilities of existing off-the-shelf solvers to implement its fixed-point engine. GenSys outperforms state-of-the-art tools in this space by a significant margin. It has solved some of the challenging problems in this space, is scalable, and synthesizes compact controllers that are easier to comprehend. GenSys is freely available under an open-source license. @InProceedings{ESEC/FSE21p1585, author = {Stanly Samuel and Deepak D'Souza and Raghavan Komondoor}, title = {GenSys: A Scalable Fixed-Point Engine for Maximal Controller Synthesis over Infinite State Spaces}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1585--1589}, doi = {10.1145/3468264.3473126}, year = {2021}, } Publisher's Version Info |
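The maximal winning region of a safety game is the greatest fixed point W = Safe ∩ CPre(W), where CPre(W) is the set of states from which the controller has a move that keeps the game in W against every environment move. GenSys computes this symbolically over infinite state spaces using off-the-shelf solvers; the sketch below is a finite-state toy illustrating the same iteration, with an invented game and helper functions.

```python
def maximal_winning_region(states, is_safe, cpre):
    """Iterate W := {s in W : cpre(s, W)} down to the greatest fixed
    point; the result is the maximal set of states from which the
    controller can remain safe forever."""
    w = {s for s in states if is_safe(s)}
    while True:
        w_next = {s for s in w if cpre(s, w)}
        if w_next == w:
            return w
        w = w_next
```

In the toy game used below, states are 0..3 with state 3 unsafe; the controller adds -1 or 0 to the state, then the environment adds 0 or 1, clamped to [0, 3]. The controller can always retreat, so the winning region is {0, 1, 2}.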
|
Kovalenko, Vladimir |
ESEC/FSE '21: "Authorship Attribution of ..."
Authorship Attribution of Source Code: A Language-Agnostic Approach and Applicability in Software Engineering
Egor Bogomolov, Vladimir Kovalenko, Yurii Rebryk, Alberto Bacchelli, and Timofey Bryksin (JetBrains Research, Russia; HSE University, Russia; JetBrains Research, Netherlands; University of Zurich, Switzerland) Authorship attribution (i.e., determining who is the author of a piece of source code) is an established research topic. State-of-the-art results for the authorship attribution problem look promising for the software engineering field, where they could be applied to detect plagiarized code and prevent legal issues. With this article, we first introduce a new language-agnostic approach to authorship attribution of source code. Then, we discuss limitations of existing synthetic datasets for authorship attribution, and propose a data collection approach that delivers datasets that better reflect aspects important for potential practical use in software engineering. Finally, we demonstrate that high accuracy of authorship attribution models on existing datasets drastically drops when they are evaluated on more realistic data. We outline next steps for the design and evaluation of authorship attribution models that could bring the research efforts closer to practical use for software engineering. @InProceedings{ESEC/FSE21p932, author = {Egor Bogomolov and Vladimir Kovalenko and Yurii Rebryk and Alberto Bacchelli and Timofey Bryksin}, title = {Authorship Attribution of Source Code: A Language-Agnostic Approach and Applicability in Software Engineering}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {932--944}, doi = {10.1145/3468264.3468606}, year = {2021}, } Publisher's Version |
|
Krishna, Rahul |
ESEC/FSE '21: "Mono2Micro: A Practical and ..."
Mono2Micro: A Practical and Effective Tool for Decomposing Monolithic Java Applications to Microservices
Anup K. Kalia, Jin Xiao, Rahul Krishna, Saurabh Sinha, Maja Vukovic, and Debasish Banerjee (IBM Research, USA; IBM, USA) In migrating production workloads to the cloud, enterprises often face the daunting task of evolving monolithic applications toward a microservice architecture. At IBM, we developed a tool called Mono2Micro to assist with this challenging task. Mono2Micro performs spatio-temporal decomposition, leveraging well-defined business use cases and runtime call relations to create functionally cohesive partitioning of application classes. Our preliminary evaluation of Mono2Micro showed promising results. How well does Mono2Micro perform against other decomposition techniques, and how do practitioners perceive the tool? This paper describes the technical foundations of Mono2Micro and presents results to answer these two questions. To answer the first question, we evaluated Mono2Micro against four existing techniques on a set of open-source and proprietary Java applications, using different metrics to assess the quality of decomposition and the tool’s efficiency. Our results show that Mono2Micro significantly outperforms state-of-the-art baselines in specific metrics well-defined for the problem domain. To answer the second question, we conducted a survey of twenty-one practitioners in various industry roles who have used Mono2Micro. This study highlights several benefits of the tool, interesting practitioner perceptions, and scope for further improvements. Overall, these results show that Mono2Micro can provide a valuable aid to practitioners in creating functionally cohesive and explainable microservice decompositions. @InProceedings{ESEC/FSE21p1214, author = {Anup K.
Kalia and Jin Xiao and Rahul Krishna and Saurabh Sinha and Maja Vukovic and Debasish Banerjee}, title = {Mono2Micro: A Practical and Effective Tool for Decomposing Monolithic Java Applications to Microservices}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1214--1224}, doi = {10.1145/3468264.3473915}, year = {2021}, } Publisher's Version |
|
Kumar, Rahul |
ESEC/FSE '21: "Data-Driven Test Selection ..."
Data-Driven Test Selection at Scale
Sonu Mehta, Farima Farmahinifarahani, Ranjita Bhagwan, Suraj Guptha, Sina Jafari, Rahul Kumar, Vaibhav Saini, and Anirudh Santhiar (Microsoft Research, India; University of California at Irvine, USA; Microsoft, USA) Large-scale services depend on Continuous Integration/Continuous Deployment (CI/CD) processes to maintain their agility and code quality. Change-based testing plays an important role in finding bugs, but testing after every change is prohibitively expensive at a scale where thousands of changes are committed every hour. Test selection models deal with this issue by running a subset of tests for every change. In this paper, we present a generic, language-agnostic, and lightweight statistical model for test selection. Unlike existing techniques, the proposed model does not require complex feature extraction techniques. Consequently, it scales to hundreds of repositories of varying characteristics while capturing more than 99% of buggy pull requests. Additionally, to better evaluate test selection models, we propose application-specific metrics that capture both a reduction in resource cost and a reduction in pull-request turn-around time. By evaluating our model on 22 large repositories at Microsoft, we find that we can save 15%−30% of compute time while still reporting approximately 99% of buggy pull requests. @InProceedings{ESEC/FSE21p1225, author = {Sonu Mehta and Farima Farmahinifarahani and Ranjita Bhagwan and Suraj Guptha and Sina Jafari and Rahul Kumar and Vaibhav Saini and Anirudh Santhiar}, title = {Data-Driven Test Selection at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1225--1235}, doi = {10.1145/3468264.3473916}, year = {2021}, } Publisher's Version |
|
Kurbatova, Zarina |
ESEC/FSE '21: "One Thousand and One Stories: ..."
One Thousand and One Stories: A Large-Scale Survey of Software Refactoring
Yaroslav Golubev, Zarina Kurbatova, Eman Abdullah AlOmar, Timofey Bryksin, and Mohamed Wiem Mkaouer (JetBrains Research, Russia; Rochester Institute of Technology, USA; HSE University, Russia) Despite the availability of refactoring features in popular IDEs, recent studies revealed that developers are reluctant to use them and still prefer to refactor their code manually. At JetBrains, our goal is to fully support refactoring features in IntelliJ-based IDEs and improve their adoption in practice. Therefore, we start by raising the following main questions. How exactly do people refactor code? Which refactorings are the most popular? Why do some developers tend not to use convenient IDE refactoring tools? In this paper, we investigate these questions through the design and implementation of a survey targeting 1,183 users of IntelliJ-based IDEs. Our quantitative and qualitative analysis of the survey results shows that almost two-thirds of developers spend more than one hour in a single session refactoring their code; that refactoring types vary greatly in popularity; and that many developers would like to know more about IDE refactoring features but lack the means to do so. These results serve us internally to support the next generation of refactoring features, and can help our research community establish new directions in refactoring usability research. @InProceedings{ESEC/FSE21p1303, author = {Yaroslav Golubev and Zarina Kurbatova and Eman Abdullah AlOmar and Timofey Bryksin and Mohamed Wiem Mkaouer}, title = {One Thousand and One Stories: A Large-Scale Survey of Software Refactoring}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1303--1313}, doi = {10.1145/3468264.3473924}, year = {2021}, } Publisher's Version |
|
Kusumoto, Shinji |
ESEC/FSE '21: "NIL: Large-Scale Detection ..."
NIL: Large-Scale Detection of Large-Variance Clones
Tasuku Nakagawa, Yoshiki Higo, and Shinji Kusumoto (Osaka University, Japan) A code clone (in short, clone) is a code fragment that is identical or similar to other code fragments in source code. Clones generated by a large number of changes to copy-and-pasted code fragments are called large-variance (modifications are scattered) or large-gap (modifications are in one place) clones. It is difficult for general clone detection techniques to detect such clones and thus specialized techniques are necessary. In addition, with the rapid growth of software development, scalable clone detectors that can detect clones in large codebases are required. However, there are no existing techniques for quickly detecting large-variance or large-gap clones in large codebases. In this paper, we propose a scalable clone detection technique that can detect large-variance clones from large codebases and describe its implementation, called NIL. NIL is a token-based clone detector that efficiently identifies clone candidates using an N-gram representation of token sequences and an inverted index. Then, NIL verifies the clone candidates by measuring their similarity based on the longest common subsequence between their token sequences. We evaluate NIL in terms of large-variance clone detection accuracy, general Type-1, Type-2, and Type-3 clone detection accuracy, and scalability. Our experimental results show that NIL has higher accuracy in terms of large-variance clone detection, equivalent accuracy in terms of general clone detection, and the shortest execution time for inputs of various sizes (1–250 MLOC) compared to existing state-of-the-art tools. @InProceedings{ESEC/FSE21p830, author = {Tasuku Nakagawa and Yoshiki Higo and Shinji Kusumoto}, title = {NIL: Large-Scale Detection of Large-Variance Clones}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {830--841}, doi = {10.1145/3468264.3468564}, year = {2021}, } Publisher's Version Info |
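The two-phase pipeline described above (cheap candidate identification via N-grams and an inverted index, then verification via longest-common-subsequence similarity) can be sketched as follows; the thresholds and function names are illustrative, not NIL's actual parameters.

```python
def ngrams(tokens, n=3):
    # N-gram representation of a token sequence
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def build_index(fragments, n=3):
    # Inverted index: N-gram -> ids of fragments containing it
    index = {}
    for fid, toks in fragments.items():
        for g in ngrams(toks, n):
            index.setdefault(g, set()).add(fid)
    return index

def candidates(query_tokens, index, n=3, min_shared=2):
    # Phase 1: cheap filtering -- fragments sharing enough N-grams
    counts = {}
    for g in ngrams(query_tokens, n):
        for fid in index.get(g, ()):
            counts[fid] = counts.get(fid, 0) + 1
    return {fid for fid, c in counts.items() if c >= min_shared}

def lcs_len(a, b):
    # Longest common subsequence length via O(|a|*|b|) dynamic programming
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def is_clone(a, b, threshold=0.7):
    # Phase 2: verify a candidate pair by LCS-based similarity
    return lcs_len(a, b) / min(len(a), len(b)) >= threshold
```

The N-gram filter tolerates scattered modifications (a large-variance clone still shares many N-grams with its origin), while LCS verification discards fragments that merely share a few token runs.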
|
Kuznetsov, Konstantin |
ESEC/FSE '21: "Frontmatter: Mining Android ..."
Frontmatter: Mining Android User Interfaces at Scale
Konstantin Kuznetsov, Chen Fu, Song Gao, David N. Jansen, Lijun Zhang, and Andreas Zeller (CISPA, Germany; Institute of Software at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Institute of Intelligent Software, China) We introduce Frontmatter: the largest open-access dataset containing user interface models of about 160,000 Android apps. Frontmatter opens the door for comprehensive mining of mobile user interfaces, jumpstarting empirical research at a large scale, addressing questions such as "How many travel apps require registration?", "Which apps do not follow accessibility guidelines?", "Does the user interface correspond to the description?", and many more. The Frontmatter UI analysis tool and the Frontmatter dataset are available under an open-source license. @InProceedings{ESEC/FSE21p1580, author = {Konstantin Kuznetsov and Chen Fu and Song Gao and David N. Jansen and Lijun Zhang and Andreas Zeller}, title = {Frontmatter: Mining Android User Interfaces at Scale}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1580--1584}, doi = {10.1145/3468264.3473125}, year = {2021}, } Publisher's Version Video |
|
Kwon, Yonghwi |
ESEC/FSE '21: "Swarmbug: Debugging Configuration ..."
Swarmbug: Debugging Configuration Bugs in Swarm Robotics
Chijung Jung, Ali Ahad, Jinho Jung, Sebastian Elbaum, and Yonghwi Kwon (University of Virginia, USA; Georgia Institute of Technology, USA) Swarm robotics collectively solve problems that are challenging for individual robots, from environmental monitoring to entertainment. The algorithms enabling swarms allow individual robots of the swarm to plan, share, and coordinate their trajectories and tasks to achieve a common goal. Such algorithms rely on a large number of configurable parameters that can be tailored to target particular scenarios. This large configuration space, the complexity of the algorithms, and the dependencies with the robots’ setup and performance make debugging and fixing swarm configuration bugs extremely challenging. This paper proposes Swarmbug, a swarm debugging system that automatically diagnoses and fixes buggy behaviors caused by misconfiguration. The essence of Swarmbug is a novel concept called the degree of causal contribution (Dcc), which abstracts the impact of environment configurations (e.g., obstacles) on the drones in a swarm via behavior causal analysis. Swarmbug automatically generates, validates, and ranks fixes for configuration bugs. We evaluate Swarmbug on four diverse swarm algorithms. Swarmbug successfully fixes four configuration bugs in the evaluated algorithms, showing that it is generic and effective. We also conduct a real-world experiment with physical drones to show that Swarmbug’s fix is effective in the real world. @InProceedings{ESEC/FSE21p868, author = {Chijung Jung and Ali Ahad and Jinho Jung and Sebastian Elbaum and Yonghwi Kwon}, title = {Swarmbug: Debugging Configuration Bugs in Swarm Robotics}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {868--880}, doi = {10.1145/3468264.3468601}, year = {2021}, } Publisher's Version Info |
|
Lampel, Johannes |
ESEC/FSE '21: "When Life Gives You Oranges: ..."
When Life Gives You Oranges: Detecting and Diagnosing Intermittent Job Failures at Mozilla
Johannes Lampel, Sascha Just, Sven Apel, and Andreas Zeller (CISPA, Germany; Saarland University, Germany; Microsoft, USA) Continuous delivery of cloud systems requires constant running of jobs (build processes, tests, etc.). One issue that plagues this continuous integration (CI) process is intermittent failures: non-deterministic false alarms that do not result from a bug in the software or job specification, but rather from issues in the underlying infrastructure. At Mozilla, such intermittent failures are called oranges as a reference to the color of the build status indicator. Because such intermittent failures disrupt CI, they erode the developers' trust in the jobs. We present a novel approach that automatically classifies failing jobs to determine whether job execution failures arise from an actual software bug or were caused by flakiness in the job (e.g., test) or the underlying infrastructure. For this purpose, we train classification models using job telemetry data to diagnose failure patterns involving features such as runtime, CPU load, operating system version, or specific platform with high precision. In an evaluation on a set of Mozilla CI jobs, our approach achieves precision scores of 73%, on average, across all data sets, with some test suites achieving precision scores good enough for fully automated classification (i.e., precision scores of up to 100%), and recall scores of 82% on average (up to 94%). @InProceedings{ESEC/FSE21p1381, author = {Johannes Lampel and Sascha Just and Sven Apel and Andreas Zeller}, title = {When Life Gives You Oranges: Detecting and Diagnosing Intermittent Job Failures at Mozilla}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1381--1392}, doi = {10.1145/3468264.3473931}, year = {2021}, } Publisher's Version |
|
Langdon, William B. |
ESEC/FSE '21: "Software Robustness: A Survey, ..."
Software Robustness: A Survey, a Theory, and Prospects
Justyna Petke, David Clark, and William B. Langdon (University College London, UK) If a software execution is disrupted, witnessing the execution at a later point may see evidence of the disruption or not. If not, we say the disruption failed to propagate. One name for this phenomenon is software robustness, but it appears in different contexts in software engineering under different names. Contexts include testing, security, reliability, and automated code improvement or repair. Names include coincidental correctness, correctness attraction, and transient error reliability. As witnessed, it is a dynamic phenomenon, but any explanation with predictive power must necessarily take a static view. As a dynamic/static phenomenon, it is convenient to take a statistical view of it, which we do by way of information theory. We theorise that for failed disruption propagation to occur, a necessary condition is that the code region where the disruption occurs is composed with or succeeded by a subsequent code region that suffers entropy loss over all executions. The higher the entropy loss, the higher the likelihood that disruption in the first region fails to propagate to the downstream observation point. We survey different research silos that address this phenomenon and explain how the theory might be exploited in software engineering. @InProceedings{ESEC/FSE21p1475, author = {Justyna Petke and David Clark and William B. Langdon}, title = {Software Robustness: A Survey, a Theory, and Prospects}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1475--1478}, doi = {10.1145/3468264.3473133}, year = {2021}, } Publisher's Version |
|
Laredo, Jim A. |
ESEC/FSE '21: "Probing Model Signal-Awareness ..."
Probing Model Signal-Awareness via Prediction-Preserving Input Minimization
Sahil Suneja, Yunhui Zheng, Yufan Zhuang, Jim A. Laredo, and Alessandro Morari (IBM Research, USA) This work explores the signal awareness of AI models for source code understanding. Using a software vulnerability detection use case, we evaluate the models' ability to capture the correct vulnerability signals to produce their predictions. Our prediction-preserving input minimization (P2IM) approach systematically reduces the original source code to a minimal snippet that a model needs to maintain its prediction. The model's reliance on incorrect signals is then uncovered when the vulnerability in the original code is missing in the minimal snippet, both of which the model however predicts as being vulnerable. We measure the signal awareness of models using a new metric we propose -- Signal-aware Recall (SAR). We apply P2IM on three different neural network architectures across multiple datasets. The results show a sharp drop in the model's Recall from the high 90s to sub-60s with the new metric, highlighting that the models are presumably picking up a lot of noise or dataset nuances while learning their vulnerability detection logic. Although the drop in model performance may be perceived as an adversarial attack, this is not P2IM's objective. The idea is rather to uncover the signal-awareness of a black-box model in a data-driven manner via controlled queries. SAR's purpose is to measure the impact of task-agnostic model training, not to suggest a shortcoming in the Recall metric. The expectation, in fact, is for SAR to match Recall in the ideal scenario where the model truly captures task-specific signals. @InProceedings{ESEC/FSE21p945, author = {Sahil Suneja and Yunhui Zheng and Yufan Zhuang and Jim A.
Laredo and Alessandro Morari}, title = {Probing Model Signal-Awareness via Prediction-Preserving Input Minimization}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {945--955}, doi = {10.1145/3468264.3468545}, year = {2021}, } Publisher's Version |
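The minimization loop at P2IM's core can be sketched as a greedy reduction that keeps deleting fragments as long as the model's prediction is preserved; the line-granular deletion and the toy "model" in the test below are simplifying assumptions, not the paper's exact reduction operators.

```python
def p2im_minimize(lines, predict):
    """Reduce a code snippet (a list of source lines) to a minimal
    sub-snippet on which the model still makes the same prediction."""
    target = predict(lines)
    minimal = list(lines)
    changed = True
    while changed:
        changed = False
        for i in range(len(minimal)):
            reduced = minimal[:i] + minimal[i + 1:]
            if reduced and predict(reduced) == target:
                minimal = reduced  # deletion preserved the prediction
                changed = True
                break
    return minimal
```

If the minimal snippet no longer contains the original vulnerability yet the model still predicts it as vulnerable, the model is relying on incorrect signals; SAR then counts only those true positives whose minimal snippet retains the vulnerability.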
|
Lassenius, Casper |
ESEC/FSE '21: "Organizational Implications ..."
Organizational Implications of Agile Adoption: A Case Study from the Public Sector
Parastoo Mohagheghi and Casper Lassenius (NAV, Norway; Simula Metropolitan Center for Digital Engineering, Norway) While agile software development is increasingly adopted in large organizations, there is still a lack of studies on how traditionally organized enterprises adopt and scale agile forms of organization. This industrial multiple embedded case study explores how the organizational model of a large public sector entity evolved over four years to support the adoption of agile software development methods. Data was collected through semi-structured interviews and document analysis. We describe the change in three phases: pre-transformation, initial transformation, and maturing. Changes in three subcases of organizational units are further described in detail. Moving from an outsourced project-based way-of-working with separate business, IT and vendor organizations, the new organizational design emphasizes internal development capability, cross-functional autonomous teams organized around products and grouped in product areas, and continuous delivery. Starting from the IT department, the transformation expanded to the whole organization and went beyond software development to finance and leadership. We describe the target and intermediate organizations employed when adopting agile development methods for the whole organization and three organizational units responsible for different services. Defining suitable product boundaries, achieving alignment across teams, enhancing the competence of product owners, the coexistence of old and new types of systems, processes, and structures, and balancing the teams’ need for autonomy with the organizational needs for coordination and control are remaining challenges.
@InProceedings{ESEC/FSE21p1444, author = {Parastoo Mohagheghi and Casper Lassenius}, title = {Organizational Implications of Agile Adoption: A Case Study from the Public Sector}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1444--1454}, doi = {10.1145/3468264.3473937}, year = {2021}, } Publisher's Version |
|
Lee, Woosuk |
ESEC/FSE '21: "Context-Aware and Data-Driven ..."
Context-Aware and Data-Driven Feedback Generation for Programming Assignments
Dowon Song, Woosuk Lee, and Hakjoo Oh (Korea University, South Korea; Hanyang University, South Korea) Recently, various techniques have been proposed to automatically provide personalized feedback on programming exercises. The cutting edge is data-driven approaches that leverage a corpus of existing correct programs and repair incorrect submissions by using similar reference programs in the corpus. However, current data-driven techniques work under the strong assumption that the corpus contains a solution program that is close enough to the incorrect submission. In this paper, we present Cafe, a new data-driven approach for feedback generation that overcomes this limitation. Unlike existing approaches, Cafe uses a novel context-aware repair algorithm that can generate feedback even if the incorrect program differs significantly from the reference solutions. We implemented Cafe for OCaml and evaluated it with 4,211 real student programs. The results show that Cafe is able to repair 83% of incorrect submissions, far outperforming existing approaches. @InProceedings{ESEC/FSE21p328, author = {Dowon Song and Woosuk Lee and Hakjoo Oh}, title = {Context-Aware and Data-Driven Feedback Generation for Programming Assignments}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {328--340}, doi = {10.1145/3468264.3468598}, year = {2021}, } Publisher's Version Artifacts Reusable |
|
Le Goues, Claire |
ESEC/FSE '21: "VarFix: Balancing Edit Expressiveness ..."
VarFix: Balancing Edit Expressiveness and Search Effectiveness in Automated Program Repair
Chu-Pan Wong, Priscila Santiesteban, Christian Kästner, and Claire Le Goues (Carnegie Mellon University, USA; Coe College, USA) Automatically repairing a buggy program is essentially a search problem, searching for code transformations that pass a set of tests. Various search strategies have been explored, but they either navigate the search space in an ad hoc way using heuristics, or systematically but at the cost of limited edit expressiveness in the kinds of supported program edits. In this work, we explore the possibility of systematically navigating the search space without sacrificing edit expressiveness. The key enabler of this exploration is variational execution, a dynamic analysis technique that has been shown to be effective at exploring many similar executions in large search spaces. We evaluate our approach on IntroClassJava and Defects4J, showing that a systematic search is effective at leveraging and combining fixing ingredients to find patches, including many high-quality patches and multi-edit patches. @InProceedings{ESEC/FSE21p354, author = {Chu-Pan Wong and Priscila Santiesteban and Christian Kästner and Claire Le Goues}, title = {VarFix: Balancing Edit Expressiveness and Search Effectiveness in Automated Program Repair}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {354--366}, doi = {10.1145/3468264.3468600}, year = {2021}, } Publisher's Version Info |
|
Lerner, Sorin |
ESEC/FSE '21: "Synthesis of Web Layouts from ..."
Synthesis of Web Layouts from Examples
Dylan Lukes, John Sarracino, Cora Coleman, Hila Peleg, Sorin Lerner, and Nadia Polikarpova (University of California at San Diego, USA; Cornell University, USA; Technion, Israel) We present a new technique for synthesizing dynamic, constraint-based visual layouts from examples. Our technique tackles two major challenges of layout synthesis. First, realistic layouts, especially on the web, often contain hundreds of elements, so the synthesizer needs to scale to layouts of this complexity. Second, in common usage scenarios, examples contain noise, so the synthesizer needs to be tolerant to imprecise inputs. To address these challenges we propose a two-phase approach to synthesis, where a local inference phase rapidly generates a set of likely candidate constraints that satisfy the given examples, and then a global inference phase selects a subset of the candidates that generalizes to unseen inputs. This separation of concerns helps our technique tackle the two challenges: the local phase employs Bayesian inference to handle noisy inputs, while the global phase leverages the hierarchical nature of complex layouts to decompose the global inference problem into inference of independent sub-layouts. We implemented this technique in a tool called Mockdown and evaluated it on nine real-world web layouts, as well as a series of widespread layout components and an existing dataset of 644 Android applications. Our experiments show that Mockdown is able to synthesize a highly accurate layout for the majority of benchmarks from just three examples (two for Android layouts), and that it scales to layouts with over 600 elements, about 30x more than has been reported in prior work on layout synthesis. 
@InProceedings{ESEC/FSE21p651, author = {Dylan Lukes and John Sarracino and Cora Coleman and Hila Peleg and Sorin Lerner and Nadia Polikarpova}, title = {Synthesis of Web Layouts from Examples}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {651--663}, doi = {10.1145/3468264.3468533}, year = {2021}, } Publisher's Version Info Artifacts Reusable |
|
Li, Bo |
ESEC/FSE '21: "Embedding App-Library Graph ..."
Embedding App-Library Graph for Neural Third Party Library Recommendation
Bo Li, Qiang He, Feifei Chen, Xin Xia, Li Li, John Grundy, and Yun Yang (Swinburne University of Technology, Australia; Deakin University, Australia; Monash University, Australia) The mobile app marketplace is fiercely competitive, and mobile app developers need to develop and update their apps as soon as possible to gain a first-mover advantage. Third-party libraries (TPLs) offer developers an easier way to enhance their apps with new features. However, finding suitable candidates among the large number of fast-changing TPLs is a challenging problem. TPL recommendation is a promising solution, but unfortunately existing approaches suffer from low recommendation accuracy. To tackle this challenge, we propose GRec, a graph neural network (GNN) based approach for recommending potentially useful TPLs for app development. GRec models mobile apps, TPLs, and their interactions as an app-library graph. It then distills app-library interaction information from the app-library graph to make more accurate TPL recommendations. To evaluate GRec’s performance, we conduct comprehensive experiments on a large-scale real-world Android app dataset containing 31,432 Android apps, 752 distinct TPLs, and 537,011 app-library usage records. Our experimental results illustrate that GRec can significantly increase prediction accuracy and diversify prediction results compared with state-of-the-art methods. A user study performed with app developers also confirms GRec's usefulness for real-world mobile app development. @InProceedings{ESEC/FSE21p466, author = {Bo Li and Qiang He and Feifei Chen and Xin Xia and Li Li and John Grundy and Yun Yang}, title = {Embedding App-Library Graph for Neural Third Party Library Recommendation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {466--477}, doi = {10.1145/3468264.3468552}, year = {2021}, } Publisher's Version |
|
Li, Huizhong |
ESEC/FSE '21: "Making Smart Contract Development ..."
Making Smart Contract Development More Secure and Easier
Meng Ren, Fuchen Ma, Zijing Yin, Ying Fu, Huizhong Li, Wanli Chang, and Yu Jiang (Tsinghua University, China; Ant Group, China; WeBank, China; University of York, UK) With the rapid development of distributed applications, smart contracts have attracted increasing attention from developers. However, developers and domain experts have different levels of familiarity with specific programming languages, like Solidity, and vulnerabilities hidden in the code can be exploited, resulting in huge financial losses. Existing auxiliary tools lack security considerations. Most of them only provide word completion based on fuzzy search and detection services for limited types of vulnerabilities, which wastes manpower during coding and leaves potential vulnerability threats after deployment. In this work, we propose an integrated framework to enhance security in the two stages of recommendation and validation, helping developers implement more secure contracts more quickly. First, we reinforce original smart contracts with general patch patterns and secure programming standards for training, and design a real-time code suggestion algorithm to predict secure words for selection. Then, we integrate multiple widely used testing tools to provide validation services. For evaluation, we collected 47,398 real-world contracts, and the results show that our framework outperforms existing platforms and tools, improving the average word suggestion accuracy by 30%-60% and helping detect about 25%-61% more vulnerabilities. In most cases, our framework can correctly predict the next word with a probability of up to 82%-97% within the top ten candidates. Compared with professional vulnerability mining tools, it can find more vulnerabilities and provide targeted modification suggestions without cumbersome configuration. Currently, this framework is used as the official development tool of WeBank and has been integrated as the recommended platform by the FISCO-BCOS community.
@InProceedings{ESEC/FSE21p1360, author = {Meng Ren and Fuchen Ma and Zijing Yin and Ying Fu and Huizhong Li and Wanli Chang and Yu Jiang}, title = {Making Smart Contract Development More Secure and Easier}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1360--1370}, doi = {10.1145/3468264.3473929}, year = {2021}, } Publisher's Version |
|
Li, Jiesong |
ESEC/FSE '21: "Identifying Bad Software Changes ..."
Identifying Bad Software Changes via Multimodal Anomaly Detection for Online Service Systems
Nengwen Zhao, Junjie Chen, Zhaoyang Yu, Honglin Wang, Jiesong Li, Bin Qiu, Hongyu Xu, Wenchi Zhang, Kaixin Sui, and Dan Pei (Tsinghua University, China; Tianjin University, China; BizSeer, China; China Guangfa Bank, China) In large-scale online service systems, software changes are inevitable and frequent. Due to importing new code or configurations, changes are likely to incur incidents and destroy user experience. Thus it is essential for engineers to identify bad software changes, so as to reduce the influence of incidents and improve system reliability. To better understand bad software changes, we perform the first empirical study based on large-scale real-world data from a large commercial bank. Our quantitative analyses indicate that about 50.4% of incidents are caused by bad changes, mainly because of code defects, configuration errors, resource contention, and software versions. Besides, our qualitative analyses show that the current practice of detecting bad software changes does not handle the heterogeneous multi-source data involved in software changes well. Based on the findings and motivation obtained from the empirical study, we propose a novel approach named SCWarn that aims to identify bad changes and produce interpretable alerts accurately and in a timely manner. The key idea of SCWarn is drawing support from multimodal learning to identify anomalies from heterogeneous multi-source data. An extensive study on two datasets with various bad software changes demonstrates that our approach significantly outperforms all the compared approaches, achieving a 0.95 F1-score on average and reducing MTTD (mean time to detect) by 20.4%∼60.7%. We also share some success stories and lessons learned from practical usage.
@InProceedings{ESEC/FSE21p527, author = {Nengwen Zhao and Junjie Chen and Zhaoyang Yu and Honglin Wang and Jiesong Li and Bin Qiu and Hongyu Xu and Wenchi Zhang and Kaixin Sui and Dan Pei}, title = {Identifying Bad Software Changes via Multimodal Anomaly Detection for Online Service Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {527--539}, doi = {10.1145/3468264.3468543}, year = {2021}, } Publisher's Version |
|
Li, Johnny |
ESEC/FSE '21: "Intelligent Container Reallocation ..."
Intelligent Container Reallocation at Microsoft 365
Bo Qiao, Fangkai Yang, Chuan Luo, Yanan Wang, Johnny Li, Qingwei Lin, Hongyu Zhang, Mohit Datta, Andrew Zhou, Thomas Moscibroda, Saravanakumar Rajmohan, and Dongmei Zhang (Microsoft Research, China; Microsoft 365, China; University of Newcastle, Australia; Microsoft 365, USA; Microsoft Azure, USA) The use of containers in microservices has gained popularity as it facilitates agile development, resource governance, and software maintenance. Container reallocation aims to achieve workload balance via reallocating containers over physical machines. It affects the overall performance of microservice-based systems. However, container scheduling and reallocation remain an open issue due to their complexity in real-world scenarios. In this paper, we propose a novel Multi-Phase Local Search (MPLS) algorithm to optimize container reallocation. The experimental results show that our optimization algorithm outperforms state-of-the-art methods. In practice, it has been successfully applied to Microsoft 365 system to mitigate hotspot machines and balance workloads across the entire system. @InProceedings{ESEC/FSE21p1438, author = {Bo Qiao and Fangkai Yang and Chuan Luo and Yanan Wang and Johnny Li and Qingwei Lin and Hongyu Zhang and Mohit Datta and Andrew Zhou and Thomas Moscibroda and Saravanakumar Rajmohan and Dongmei Zhang}, title = {Intelligent Container Reallocation at Microsoft 365}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1438--1443}, doi = {10.1145/3468264.3473936}, year = {2021}, } Publisher's Version |
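To make the idea of container reallocation via local search concrete, here is a minimal sketch of a single load-balancing move of the kind such an optimizer might apply. This is illustrative only: it is not Microsoft's MPLS algorithm, and all names and numbers are hypothetical.

```typescript
// Toy local-search move for container reallocation (not MPLS itself):
// repeatedly shift the smallest container off the hottest machine to the
// coolest one, as long as the move strictly reduces the hotspot's load.
type Machine = { id: string; containers: number[] };

const load = (m: Machine) => m.containers.reduce((sum, c) => sum + c, 0);

function moveOne(machines: Machine[]): boolean {
  const byLoad = [...machines].sort((a, b) => load(b) - load(a));
  const hot = byLoad[0];
  const cool = byLoad[byLoad.length - 1];
  const c = Math.min(...hot.containers); // smallest container on the hotspot
  if (load(cool) + c >= load(hot)) return false; // move would not help
  hot.containers.splice(hot.containers.indexOf(c), 1);
  cool.containers.push(c);
  return true;
}

const cluster: Machine[] = [
  { id: "m1", containers: [40, 30, 20] }, // hotspot: load 90
  { id: "m2", containers: [10] },         // load 10
];
while (moveOne(cluster)) {}
console.log(cluster.map(load)); // both machines end at load 50
```

Each accepted move strictly decreases the sum of squared loads, so the loop always terminates; a real reallocator would additionally respect capacity and affinity constraints.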
|
Li, Kai |
ESEC/FSE '21: "iBatch: Saving Ethereum Fees ..."
iBatch: Saving Ethereum Fees via Secure and Cost-Effective Batching of Smart-Contract Invocations
Yibo Wang, Qi Zhang, Kai Li, Yuzhe Tang, Jiaqi Chen, Xiapu Luo, and Ting Chen (Syracuse University, USA; Hong Kong Polytechnic University, China; University of Electronic Science and Technology of China, China) This paper presents iBatch, a middleware system running on top of an operational Ethereum network to enable secure batching of smart-contract invocations against an untrusted relay server off-chain. iBatch does so at a low overhead by validating the server's batched invocations in smart contracts without additional states. The iBatch mechanism supports a variety of policies, ranging from conservative to aggressive batching, and can be configured adaptively to the current workloads. iBatch automatically rewrites smart contracts to integrate with legacy applications and support large-scale deployment. For cost evaluation, we develop a platform with fast and cost-accurate transaction replaying, build real transaction benchmarks on popular Ethereum applications, and build a functional prototype of iBatch on Ethereum. The evaluation results show that iBatch saves 14.6%-59.1% Gas cost per invocation with a moderate 2-minute delay and 19.06%-31.52% Ether cost per invocation with a delay of 0.26-1.66 blocks. @InProceedings{ESEC/FSE21p566, author = {Yibo Wang and Qi Zhang and Kai Li and Yuzhe Tang and Jiaqi Chen and Xiapu Luo and Ting Chen}, title = {iBatch: Saving Ethereum Fees via Secure and Cost-Effective Batching of Smart-Contract Invocations}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {566--577}, doi = {10.1145/3468264.3468568}, year = {2021}, } Publisher's Version |
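The economics behind batching can be sketched with back-of-the-envelope arithmetic. Every Ethereum transaction pays a fixed intrinsic cost of 21,000 gas, so folding n invocations into one batch transaction amortizes that cost. The per-invocation and dispatcher-overhead figures below are made-up assumptions for illustration; they are not iBatch's measured numbers, and iBatch's real accounting and security checks are far more involved.

```typescript
// Why batching saves fees: amortizing the fixed per-transaction base cost.
const BASE_TX_GAS = 21000;        // real Ethereum intrinsic gas per transaction
const CALL_GAS = 30000;           // assumed execution cost per invocation
const BATCH_OVERHEAD_GAS = 5000;  // assumed batch-dispatcher overhead per call

function unbatchedGas(n: number): number {
  // n separate transactions, each paying the base cost.
  return n * (BASE_TX_GAS + CALL_GAS);
}

function batchedGas(n: number): number {
  // One transaction carrying n invocations through a dispatcher.
  return BASE_TX_GAS + n * (CALL_GAS + BATCH_OVERHEAD_GAS);
}

const n = 10;
const saving = 1 - batchedGas(n) / unbatchedGas(n);
console.log((saving * 100).toFixed(1) + "% gas saved"); // 27.3% gas saved
```

The saving grows with the batch size and shrinks with the dispatcher overhead, which is why batching policies must be tuned to the workload, as the paper's conservative-to-aggressive policy range suggests.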
|
Li, Keqiu |
ESEC/FSE '21: "A Comprehensive Study on Learning-Based ..."
A Comprehensive Study on Learning-Based PE Malware Family Classification Methods
Yixuan Ma, Shuang Liu, Jiajun Jiang, Guanhong Chen, and Keqiu Li (State Key Laboratory of Communication Content Cognition, China; Tianjin University, China) Driven by high profit, Portable Executable (PE) malware has been consistently evolving in terms of both volume and sophistication. PE malware family classification has gained great attention and a large number of approaches have been proposed. With the rapid development of machine learning techniques and the exciting results they achieve on various tasks, machine learning algorithms have also gained popularity in the PE malware family classification task. The three mainstream classes of learning-based approaches, categorized by the input format they take, are image-based, binary-based, and disassembly-based approaches. Although a large number of approaches have been published, there are no consistent comparisons of those approaches, especially from the practical industry adoption perspective. Moreover, there is no comparison in the scenario of concept drift, which is a fact of life for the malware classification task due to the fast-evolving nature of malware. In this work, we conduct a thorough empirical study on learning-based PE malware classification approaches on 4 different datasets under consistent experiment settings. Based on the experiment results and an interview with our industry partners, we find that (1) there is no individual class of methods that significantly outperforms the others; (2) all classes of methods show performance degradation under concept drift (by an average F1-score drop of 32.23%); and (3) the prediction time and high memory consumption hinder existing approaches from being adopted for industry usage.
@InProceedings{ESEC/FSE21p1314, author = {Yixuan Ma and Shuang Liu and Jiajun Jiang and Guanhong Chen and Keqiu Li}, title = {A Comprehensive Study on Learning-Based PE Malware Family Classification Methods}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1314--1325}, doi = {10.1145/3468264.3473925}, year = {2021}, } Publisher's Version Info |
|
Li, Li |
ESEC/FSE '21: "Embedding App-Library Graph ..."
Embedding App-Library Graph for Neural Third Party Library Recommendation
Bo Li, Qiang He, Feifei Chen, Xin Xia, Li Li, John Grundy, and Yun Yang (Swinburne University of Technology, Australia; Deakin University, Australia; Monash University, Australia) The mobile app marketplace has fierce competition for mobile app developers, who need to develop and update their apps as soon as possible to gain first-mover advantage. Third-party libraries (TPLs) offer developers an easier way to enhance their apps with new features. However, finding suitable candidates among the large number of fast-changing TPLs is a challenging problem. TPL recommendation is a promising solution, but unfortunately existing approaches suffer from low accuracy in recommendation results. To tackle this challenge, we propose GRec, a graph neural network (GNN) based approach, for recommending potentially useful TPLs for app development. GRec models mobile apps, TPLs, and their interactions into an app-library graph. It then distills app-library interaction information from the app-library graph to make more accurate TPL recommendations. To evaluate GRec’s performance, we conduct comprehensive experiments based on a large-scale real-world Android app dataset containing 31,432 Android apps, 752 distinct TPLs, and 537,011 app-library usage records. Our experimental results illustrate that GRec can significantly increase the prediction accuracy and diversify the prediction results compared with state-of-the-art methods. A user study performed with app developers also confirms GRec's usefulness for real-world mobile app development. @InProceedings{ESEC/FSE21p466, author = {Bo Li and Qiang He and Feifei Chen and Xin Xia and Li Li and John Grundy and Yun Yang}, title = {Embedding App-Library Graph for Neural Third Party Library Recommendation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {466--477}, doi = {10.1145/3468264.3468552}, year = {2021}, } Publisher's Version |
|
Li, Miqing |
ESEC/FSE '21: "Multi-objectivizing Software ..."
Multi-objectivizing Software Configuration Tuning
Tao Chen and Miqing Li (University of Electronic Science and Technology of China, China; Loughborough University, UK; University of Birmingham, UK) Automatically tuning software configuration for optimizing a single performance attribute (e.g., minimizing latency) is not trivial, due to the nature of the configuration systems (e.g., complex landscape and expensive measurement). To deal with the problem, existing work has been focusing on developing various effective optimizers. However, a prominent issue that all these optimizers need to take care of is how to avoid the search being trapped in local optima — a hard nut to crack for software configuration tuning due to its rugged and sparse landscape, with neighboring configurations tending to behave very differently. Overcoming this in an expensive measurement setting is even more challenging. In this paper, we take a different perspective to tackle this issue. Instead of focusing on improving the optimizer, we work at the level of the optimization model. We do this by proposing a meta multi-objectivization model (MMO) that considers an auxiliary performance objective (e.g., throughput in addition to latency). What makes this model unique is that we do not optimize the auxiliary performance objective, but rather use it to make similarly performing yet different configurations less comparable (i.e., mutually Pareto non-dominated), thus preventing the search from being trapped in local optima. Experiments on eight real-world software systems/environments with diverse performance attributes reveal that our MMO model is statistically more effective than state-of-the-art single-objective counterparts in overcoming local optima (up to a 42% gain), while using as little as 24% of their measurements to achieve the same (or better) performance result.
@InProceedings{ESEC/FSE21p453, author = {Tao Chen and Miqing Li}, title = {Multi-objectivizing Software Configuration Tuning}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {453--465}, doi = {10.1145/3468264.3468555}, year = {2021}, } Publisher's Version Info |
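The mechanism the abstract describes hinges on Pareto dominance: with one objective, any two configurations are always comparable, so the search can collapse into a single local optimum; with an auxiliary objective, two similarly performing but different configurations become mutually non-dominated and both survive. A minimal sketch of that comparison (a generic dominance check, not the authors' MMO code; the latency/throughput values are hypothetical):

```typescript
// Bi-objective comparison: minimize latency, maximize throughput.
type Objectives = { latency: number; throughput: number };

// a dominates b iff a is at least as good on both objectives
// and strictly better on at least one.
function dominates(a: Objectives, b: Objectives): boolean {
  const atLeastAsGood = a.latency <= b.latency && a.throughput >= b.throughput;
  const strictlyBetter = a.latency < b.latency || a.throughput > b.throughput;
  return atLeastAsGood && strictlyBetter;
}

const c1 = { latency: 10.0, throughput: 90 };
const c2 = { latency: 10.1, throughput: 120 }; // similar latency, different behaviour

// Neither dominates the other: both configurations remain in play,
// so the search is not trapped at c1.
console.log(dominates(c1, c2), dominates(c2, c1)); // false false
```

Under a single latency objective, c1 would simply "beat" c2 and the diversity c2 represents would be discarded.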
|
Li, Shuqing |
ESEC/FSE '21: "An Exploratory Study of Autopilot ..."
An Exploratory Study of Autopilot Software Bugs in Unmanned Aerial Vehicles
Dinghua Wang, Shuqing Li, Guanping Xiao, Yepang Liu, and Yulei Sui (University of Technology Sydney, Australia; Southern University of Science and Technology, China; Nanjing University of Aeronautics and Astronautics, China) Unmanned aerial vehicles (UAVs) are becoming increasingly important and widely used in modern society. Software bugs in these systems can cause severe issues, such as system crashes, hangs, and undefined behaviors. Some bugs can also be exploited by hackers to launch security attacks, resulting in catastrophic consequences. Therefore, techniques that can help detect and fix software bugs in UAVs are highly desirable. However, although there are many existing studies on bugs in various types of software, the characteristics of UAV software bugs have never been systematically studied. This impedes the development of tools for assuring the dependability of UAVs. To bridge this gap, we conducted the first large-scale empirical study on two well-known open-source autopilot software platforms for UAVs, namely PX4 and Ardupilot, to characterize bugs in UAVs. Through analyzing 569 bugs from these two projects, we observed eight types of UAV-specific bugs (i.e., limit, math, inconsistency, priority, parameter, hardware support, correction, and initialization) and learned their root causes. Based on the bug taxonomy, we summarized common bug patterns and repairing strategies. We further identified five challenges associated with detecting and fixing such UAV-specific bugs. Our study can help researchers and practitioners to better understand the threats to the dependability of UAV systems and facilitate the future development of UAV bug diagnosis tools. 
@InProceedings{ESEC/FSE21p20, author = {Dinghua Wang and Shuqing Li and Guanping Xiao and Yepang Liu and Yulei Sui}, title = {An Exploratory Study of Autopilot Software Bugs in Unmanned Aerial Vehicles}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {20--31}, doi = {10.1145/3468264.3468559}, year = {2021}, } Publisher's Version ESEC/FSE '21: "ÐArcher: Detecting On-Chain-Off-Chain ..." ÐArcher: Detecting On-Chain-Off-Chain Synchronization Bugs in Decentralized Applications Wuqi Zhang, Lili Wei, Shuqing Li, Yepang Liu, and Shing-Chi Cheung (Hong Kong University of Science and Technology, China; Southern University of Science and Technology, China) Since the emergence of Ethereum, blockchain-based decentralized applications (DApps) have become increasingly popular and important. To balance the security, performance, and costs, a DApp typically consists of two layers: an on-chain layer to execute transactions and store crucial data on the blockchain and an off-chain layer to interact with users. A DApp needs to synchronize its off-chain layer with the on-chain layer proactively. Otherwise, the inconsistent data in the off-chain layer could mislead users and cause undesirable consequences, e.g., loss of transaction fees. However, transactions sent to the blockchain are not guaranteed to be executed and could even be reversed after execution due to chain reorganization. Such non-determinism in the transaction execution is unique to blockchain. DApp developers may fail to perform the on-chain-off-chain synchronization accurately due to their lack of familiarity with the complex transaction lifecycle. In this work, we investigate the challenges of synchronizing on-chain and off-chain data in Ethereum-based DApps. We present two types of bugs that could result in inconsistencies between the on-chain and off-chain layers. 
To help detect such on-chain-off-chain synchronization bugs, we introduce a state transition model to guide the testing of DApps and propose two effective oracles to facilitate the automatic identification of bugs. We build the first testing framework, ÐArcher, to detect on-chain-off-chain synchronization bugs in DApps. We have evaluated ÐArcher on 11 popular real-world DApps. ÐArcher achieves high precision (99.3%), recall (87.6%), and accuracy (89.4%) in bug detection and significantly outperforms the baseline methods. It has found 15 real bugs in the 11 DApps. So far, six of the 15 bugs have been confirmed by the developers, and three have been fixed. These promising results demonstrate the usefulness of ÐArcher. @InProceedings{ESEC/FSE21p553, author = {Wuqi Zhang and Lili Wei and Shuqing Li and Yepang Liu and Shing-Chi Cheung}, title = {ÐArcher: Detecting On-Chain-Off-Chain Synchronization Bugs in Decentralized Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {553--565}, doi = {10.1145/3468264.3468546}, year = {2021}, } Publisher's Version Info Artifacts Functional |
|
Li, Song |
ESEC/FSE '21: "Detecting Node.js Prototype ..."
Detecting Node.js Prototype Pollution Vulnerabilities via Object Lookup Analysis
Song Li, Mingqing Kang, Jianwei Hou, and Yinzhi Cao (Johns Hopkins University, USA; Renmin University of China, China) Prototype pollution is a type of vulnerability specific to prototype-based languages, such as JavaScript, which allows an adversary to pollute a base object’s property, leading to a further consequence such as Denial of Service (DoS), arbitrary code execution, and session fixation. On one hand, the only prior work in detecting prototype pollution adopts dynamic analysis to fuzz package inputs, which inevitably has code coverage issues in triggering some deeply embedded vulnerabilities. On the other hand, it is challenging to apply state-of-the-art static analysis in detecting prototype pollution because of the involvement of prototype chains and fine-grained object relations including built-in ones. In this paper, we propose a flow-, context-, and branch-sensitive static taint analysis tool, called ObjLupAnsys, to detect prototype pollution vulnerabilities. The key of ObjLupAnsys is a so-called object lookup analysis, which gradually expands the source and sink objects into big clusters with a complex inner structure by performing targeted object lookups in both clusters so that a system built-in function can be redefined. Specifically, at the source cluster, ObjLupAnsys proactively creates new object properties based on how the target program uses the initial source object; at the sink cluster, ObjLupAnsys assigns property values in object lookups to decrease the number of object lookups to reach a system built-in function. We implemented an open-source tool and applied it for the detection of prototype pollution among Node.js packages. Our evaluation shows that ObjLupAnsys finds 61 zero-day, previously-unknown, exploitable vulnerabilities as opposed to 18 by the state-of-the-art dynamic fuzzing tool and three by a state-of-the-art static analysis tool that is modified to detect prototype pollution. 
To date, 11 vulnerable Node.js packages are assigned with CVE numbers and five have already been patched by their developers. In addition, ObjLupAnsys also discovered seven applications or packages including a real-world, online website, which are indirectly vulnerable due to the inclusion of vulnerable packages found by ObjLupAnsys. @InProceedings{ESEC/FSE21p268, author = {Song Li and Mingqing Kang and Jianwei Hou and Yinzhi Cao}, title = {Detecting Node.js Prototype Pollution Vulnerabilities via Object Lookup Analysis}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {268--279}, doi = {10.1145/3468264.3468542}, year = {2021}, } Publisher's Version |
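For readers unfamiliar with the vulnerability class ObjLupAnsys targets, the classic gadget behind many Node.js prototype pollution CVEs is a recursive merge that walks attacker-controlled keys, including "__proto__". The sketch below is a generic illustration of that pattern, not code from ObjLupAnsys or any specific package:

```typescript
// Vulnerable recursive merge: following the "__proto__" key walks up to
// Object.prototype, letting attacker JSON pollute every object.
function merge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === "object" && source[key] !== null) {
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      merge(target[key], source[key]); // recurses into Object.prototype
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-controlled input: JSON.parse keeps "__proto__" as an own key.
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
merge({}, payload);

// Every freshly created object now inherits the polluted property.
const victim: any = {};
console.log(victim.isAdmin); // true
```

Depending on how the polluted property is later consumed, this escalates to the consequences the paper lists: DoS, arbitrary code execution, or session fixation.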
|
Li, Wenxin |
ESEC/FSE '21: "Generating Efficient Solvers ..."
Generating Efficient Solvers from Constraint Models
Shu Lin, Na Meng, and Wenxin Li (Peking University, China; Virginia Tech, USA) Combinatorial problems (CPs) arise in many areas, and people use constraint solvers to automatically solve these problems. However, the state-of-the-art constraint solvers (e.g., Gecode and Chuffed) have overly complicated software architectures; they compute solutions inefficiently. This paper presents a novel and model-driven approach---SoGen---to synthesize efficient problem-specific solvers from constraint models. Namely, when users model a CP with our domain-specific language PDL (short for Problem Description Language), SoGen automatically analyzes various properties of the problem (e.g., search space, value boundaries, function monotonicity, and overlapping subproblems), synthesizes an efficient solver algorithm based on those properties, and generates a C program as the problem solver. PDL is unique because it can create solvers that resolve constraints via dynamic programming (DP) search. For evaluation, we compared the solvers generated by SoGen with two state-of-the-art constraint solvers: Gecode and Chuffed. PDL's solvers resolved constraints more efficiently; they achieved up to 6,058x speedup over Gecode and up to 31,300x speedup over Chuffed. Additionally, we experimented with both SoGen and the state-of-the-art solver generator---Dominion. We found SoGen to generate solvers faster and the produced solvers are more efficient. @InProceedings{ESEC/FSE21p956, author = {Shu Lin and Na Meng and Wenxin Li}, title = {Generating Efficient Solvers from Constraint Models}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {956--967}, doi = {10.1145/3468264.3468566}, year = {2021}, } Publisher's Version |
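One property SoGen exploits, overlapping subproblems, is what makes dynamic-programming search pay off. A generic illustration of the principle (nothing to do with PDL or SoGen's generated C solvers; the Fibonacci recurrence is just the textbook example of overlapping subproblems):

```typescript
// Memoized DP: each subproblem state is solved once and cached, turning
// an exponential naive search into a linear number of distinct states.
const memo = new Map<number, number>();
let calls = 0; // count invocations to show the effect of memoization

function fib(n: number): number {
  calls++;
  if (n <= 1) return n;
  const cached = memo.get(n);
  if (cached !== undefined) return cached;
  const result = fib(n - 1) + fib(n - 2);
  memo.set(n, result);
  return result;
}

console.log(fib(40), calls); // 102334155, with O(n) calls instead of ~2^n
```

A constraint model whose analysis reveals this structure can be compiled into a solver that enumerates states once, which is one plausible source of the large speedups the paper reports over generic solvers.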
|
Li, Xia |
ESEC/FSE '21: "Boosting Coverage-Based Fault ..."
Boosting Coverage-Based Fault Localization via Graph-Based Representation Learning
Yiling Lou, Qihao Zhu, Jinhao Dong, Xia Li, Zeyu Sun, Dan Hao, Lu Zhang, and Lingming Zhang (Peking University, China; Kennesaw State University, USA; University of Illinois at Urbana-Champaign, USA) Coverage-based fault localization has been extensively studied in the literature due to its effectiveness and lightweightness for real-world systems. However, existing techniques often utilize coverage in an oversimplified way by abstracting detailed coverage into numbers of tests or boolean vectors, thus limiting their effectiveness in practice. In this work, we present a novel coverage-based fault localization technique, GRACE, which fully utilizes detailed coverage information with graph-based representation learning. Our intuition is that coverage can be regarded as connective relationships between tests and program entities, which can be inherently and integrally represented by a graph structure: with tests and program entities as nodes, while with coverage and code structures as edges. Therefore, we first propose a novel graph-based representation to reserve all detailed coverage information and fine-grained code structures into one graph. Then we leverage Gated Graph Neural Network to learn valuable features from the graph-based coverage representation and rank program entities in a listwise way. Our evaluation on the widely used benchmark Defects4J (V1.2.0) shows that GRACE significantly outperforms state-of-the-art coverage-based fault localization: GRACE localizes 195 bugs within Top-1 whereas the best compared technique can at most localize 166 bugs within Top-1. We further investigate the impact of each GRACE component and find that they all positively contribute to GRACE. In addition, our results also demonstrate that GRACE has learnt essential features from coverage, which are complementary to various information used in existing learning-based fault localization. 
Finally, we evaluate GRACE in the cross-project prediction scenario on extra 226 bugs from Defects4J (V2.0.0), and find that GRACE consistently outperforms state-of-the-art coverage-based techniques. @InProceedings{ESEC/FSE21p664, author = {Yiling Lou and Qihao Zhu and Jinhao Dong and Xia Li and Zeyu Sun and Dan Hao and Lu Zhang and Lingming Zhang}, title = {Boosting Coverage-Based Fault Localization via Graph-Based Representation Learning}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {664--676}, doi = {10.1145/3468264.3468580}, year = {2021}, } Publisher's Version |
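The graph representation at the heart of the abstract can be sketched concretely: tests and program entities become nodes, coverage and code structure become edges, so no per-test detail is flattened into counts or boolean vectors. The sketch below is a toy illustration of that intuition, not GRACE's actual representation or its Gated Graph Neural Network; all identifiers are hypothetical.

```typescript
// Coverage as a graph instead of a flattened spectrum.
type GraphNode = { id: string; kind: "test" | "entity"; failed?: boolean };
type GraphEdge = { from: string; to: string; label: "covers" | "calls" };

const nodes: GraphNode[] = [
  { id: "testAdd", kind: "test", failed: false },
  { id: "testDiv", kind: "test", failed: true },
  { id: "add", kind: "entity" },
  { id: "div", kind: "entity" },
];

const edges: GraphEdge[] = [
  { from: "testAdd", to: "add", label: "covers" },
  { from: "testDiv", to: "div", label: "covers" },
  { from: "div", to: "add", label: "calls" }, // fine-grained code structure
];

// A toy ranking pass over the graph: entities covered by failing tests.
// (GRACE instead learns features over such a graph with a GGNN.)
const suspects = edges
  .filter((e) => e.label === "covers" &&
    nodes.some((n) => n.id === e.from && n.failed))
  .map((e) => e.to);
console.log(suspects); // only "div" is covered by a failing test
```

The "calls" edge is the kind of structural signal that a boolean coverage vector discards but a graph-based learner can exploit.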
|
Li, Xukun |
ESEC/FSE '21: "Onion: Identifying Incident-Indicating ..."
Onion: Identifying Incident-Indicating Logs for Cloud Systems
Xu Zhang, Yong Xu, Si Qin, Shilin He, Bo Qiao, Ze Li, Hongyu Zhang, Xukun Li, Yingnong Dang, Qingwei Lin, Murali Chintalapati, Saravanakumar Rajmohan, and Dongmei Zhang (Microsoft Research, China; Microsoft Azure, USA; University of Newcastle, Australia; Microsoft 365, USA) In cloud systems, incidents affect the availability of services and require quick mitigation actions. Once an incident occurs, operators and developers often examine logs to perform fault diagnosis. However, the large volume of diverse logs and the overwhelming details in log data make the manual diagnosis process time-consuming and error-prone. In this paper, we propose Onion, an automatic solution for precisely and efficiently locating incident-indicating logs, which can provide useful clues for diagnosing the incidents. We first point out three criteria for localizing incident-indicating logs, i.e., Consistency, Impact, and Bilateral-Difference. Then we propose a novel agglomeration of logs, called log clique, based on which these criteria are satisfied. To obtain log cliques, we develop an incident-aware log representation and a progressive log clustering technique. Contrast analysis is then performed on the cliques to identify the incident-indicating logs. We have evaluated Onion using well-labeled log datasets. Onion achieves an average F1-score of 0.95 and can process millions of logs in only a few minutes, demonstrating its effectiveness and efficiency. Onion has also been successfully applied to the cloud system of Microsoft. Its practicability has been confirmed through the quantitative and qualitative analysis of the real incident cases. 
@InProceedings{ESEC/FSE21p1253, author = {Xu Zhang and Yong Xu and Si Qin and Shilin He and Bo Qiao and Ze Li and Hongyu Zhang and Xukun Li and Yingnong Dang and Qingwei Lin and Murali Chintalapati and Saravanakumar Rajmohan and Dongmei Zhang}, title = {Onion: Identifying Incident-Indicating Logs for Cloud Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1253--1263}, doi = {10.1145/3468264.3473919}, year = {2021}, } Publisher's Version |
|
Li, Yi |
ESEC/FSE '21: "Vulnerability Detection with ..."
Vulnerability Detection with Fine-Grained Interpretations
Yi Li, Shaohua Wang, and Tien N. Nguyen (New Jersey Institute of Technology, USA; University of Texas at Dallas, USA) Despite the successes of machine learning (ML) and deep learning (DL)-based vulnerability detectors (VD), they are limited to providing only the decision on whether a given code is vulnerable or not, without details on what part of the code is relevant to the detected vulnerability. We present IVDetect, an interpretable vulnerability detector with the philosophy of using Artificial Intelligence (AI) to detect vulnerabilities, while using Intelligence Assistant (IA) to provide VD interpretations in terms of vulnerable statements. For vulnerability detection, we separately consider the vulnerable statements and their surrounding contexts via data and control dependencies. This allows our model to better discriminate vulnerable statements than using the mixture of vulnerable code and contextual code as in existing approaches. In addition to the coarse-grained vulnerability detection result, we leverage interpretable AI to provide users with fine-grained interpretations that include the sub-graph in the Program Dependency Graph (PDG) with the crucial statements that are relevant to the detected vulnerability. Our empirical evaluation on vulnerability databases shows that IVDetect outperforms the existing DL-based approaches by 43%–84% and 105%–255% in top-10 nDCG and MAP ranking scores. IVDetect correctly points out the vulnerable statements relevant to the vulnerability via its interpretation in 67% of the cases with a top-5 ranked list. IVDetect improves over the baseline interpretation models by 12.3%–400% and 9%–400% in accuracy. @InProceedings{ESEC/FSE21p292, author = {Yi Li and Shaohua Wang and Tien N.
Nguyen}, title = {Vulnerability Detection with Fine-Grained Interpretations}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {292--303}, doi = {10.1145/3468264.3468597}, year = {2021}, } Publisher's Version ESEC/FSE '21: "DIFFBASE: A Differential Factbase ..." DIFFBASE: A Differential Factbase for Effective Software Evolution Management Xiuheng Wu, Chenguang Zhu, and Yi Li (Nanyang Technological University, Singapore; University of Texas at Austin, USA) Numerous tools and techniques have been developed to extract and analyze information from software development artifacts. Yet, there is a lack of effective method to process, store, and exchange information among different analyses. In this paper, we propose differential factbase, a uniform exchangeable representation supporting efficient querying and manipulation, based on the existing concept of program facts. We consider program changes as first-class objects, which establish links between intra-version facts of single program snapshots and provide insights on how certain artifacts evolve over time via inter-version facts. We implement a series of differential fact extractors supporting different programming languages and platforms, and demonstrate with usage scenarios the benefits of adopting differential facts in supporting software evolution management. @InProceedings{ESEC/FSE21p503, author = {Xiuheng Wu and Chenguang Zhu and Yi Li}, title = {DIFFBASE: A Differential Factbase for Effective Software Evolution Management}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {503--515}, doi = {10.1145/3468264.3468605}, year = {2021}, } Publisher's Version Info Artifacts Reusable |
|
Li, Yuanchun |
ESEC/FSE '21: "TaintStream: Fine-Grained ..."
TaintStream: Fine-Grained Taint Tracking for Big Data Platforms through Dynamic Code Translation
Chengxu Yang, Yuanchun Li, Mengwei Xu, Zhenpeng Chen, Yunxin Liu, Gang Huang, and Xuanzhe Liu (Peking University, China; Microsoft Research, China; Beijing University of Posts and Telecommunications, China; Tsinghua University, China) Big data has become valuable property for enterprises and enabled various intelligent applications. Today, it is common to host data in big data platforms (e.g., Spark), where developers can submit scripts to process the original and intermediate data tables. Meanwhile, it is highly desirable to manage the data to comply with various privacy requirements. To enable flexible and automated privacy policy enforcement, we propose TaintStream, a fine-grained taint tracking framework for Spark-like big data platforms. TaintStream works by automatically injecting taint tracking logic into the data processing scripts, and the injected scripts are dynamically translated to maintain a taint tag for each cell during execution. The dynamic translation rules are carefully designed to guarantee non-interference in the original data operation. By defining different semantics of taint tags, TaintStream can enable various data management applications such as access control, data retention, and user data erasure. Our experiments on a self-crafted benchmark suite show that TaintStream is able to achieve accurate cell-level taint tracking with a precision of 93.0% and less than 15% overhead. We also demonstrate the usefulness of TaintStream through several real-world use cases of privacy policy enforcement. @InProceedings{ESEC/FSE21p806, author = {Chengxu Yang and Yuanchun Li and Mengwei Xu and Zhenpeng Chen and Yunxin Liu and Gang Huang and Xuanzhe Liu}, title = {TaintStream: Fine-Grained Taint Tracking for Big Data Platforms through Dynamic Code Translation}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {806--817}, doi = {10.1145/3468264.3468532}, year = {2021}, } Publisher's Version |
|
Li, Yuan-Fang |
ESEC/FSE '21: "Code2Que: A Tool for Improving ..."
Code2Que: A Tool for Improving Question Titles from Mined Code Snippets in Stack Overflow
Zhipeng Gao, Xin Xia, David Lo, John Grundy, and Yuan-Fang Li (Monash University, Australia; Singapore Management University, Singapore) Stack Overflow is one of the most popular technical Q&A sites used by software developers. Seeking help from Stack Overflow has become an essential part of software developers’ daily work for solving programming-related questions. Although the Stack Overflow community has provided quality assurance guidelines to help users write better questions, we observed that a significant number of questions submitted to Stack Overflow are of low quality. In this paper, we introduce a new web-based tool, Code2Que, which can help developers in writing higher quality questions for a given code snippet. Code2Que consists of two main stages: offline learning and online recommendation. In the offline learning phase, we first collect a set of good quality ⟨code snippet, question⟩ pairs as training samples. We then train our model on these training samples via a deep sequence-to-sequence approach, enhanced with an attention mechanism, a copy mechanism and a coverage mechanism. In the online recommendation phase, for a given code snippet, we use the offline trained model to generate question titles to assist less experienced developers in writing questions more effectively. To evaluate Code2Que, we first sampled 50 low quality ⟨code snippet, question⟩ pairs from the Python and Java datasets on Stack Overflow. Then we conducted a user study to evaluate the question titles generated by our approach as compared to human-written ones using three metrics: Clearness, Fitness and Willingness to Respond. Our experimental results show that for a large number of low-quality questions in Stack Overflow, Code2Que can improve the question titles in terms of Clearness, Fitness and Willingness measures. 
@InProceedings{ESEC/FSE21p1525, author = {Zhipeng Gao and Xin Xia and David Lo and John Grundy and Yuan-Fang Li}, title = {Code2Que: A Tool for Improving Question Titles from Mined Code Snippets in Stack Overflow}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1525--1529}, doi = {10.1145/3468264.3473114}, year = {2021}, } Publisher's Version |
|
Li, Yufei |
ESEC/FSE '21: "GLIB: Towards Automated Test ..."
GLIB: Towards Automated Test Oracle for Graphically-Rich Applications
Ke Chen, Yufei Li, Yingfeng Chen, Changjie Fan, Zhipeng Hu, and Wei Yang (Netease, China; University of Texas at Dallas, USA) Graphically-rich applications such as games are ubiquitous, with attractive Graphical User Interface (GUI) visual effects that offer a bridge between software applications and end-users. However, various types of graphical glitches may arise from such GUI complexity and have become one of the main components of software compatibility issues. Our study on bug reports from game development teams in NetEase Inc. indicates that graphical glitches frequently occur during the GUI rendering and severely degrade the quality of graphically-rich applications such as video games. Existing automated testing techniques for such applications focus mainly on generating various GUI test sequences and check whether the test sequences can cause crashes. These techniques require constant human attention to capture non-crashing bugs such as bugs causing graphical glitches. In this paper, we present the first step in automating the test oracle for detecting non-crashing bugs in graphically-rich applications. Specifically, we propose GLIB based on a code-based data augmentation technique to detect game GUI glitches. We perform an evaluation of GLIB on 20 real-world game apps (with bug reports available) and the result shows that GLIB can achieve 100% precision and 99.5% recall in detecting non-crashing bugs such as game GUI glitches. Practical application of GLIB on another 14 real-world games (without bug reports) further demonstrates that GLIB can effectively uncover GUI glitches, with 48 of 53 bugs reported by GLIB having been confirmed and fixed so far. 
@InProceedings{ESEC/FSE21p1093, author = {Ke Chen and Yufei Li and Yingfeng Chen and Changjie Fan and Zhipeng Hu and Wei Yang}, title = {GLIB: Towards Automated Test Oracle for Graphically-Rich Applications}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1093--1104}, doi = {10.1145/3468264.3468586}, year = {2021}, } Publisher's Version |
|
Li, Ze |
ESEC/FSE '21: "Onion: Identifying Incident-Indicating ..."
Onion: Identifying Incident-Indicating Logs for Cloud Systems
Xu Zhang, Yong Xu, Si Qin, Shilin He, Bo Qiao, Ze Li, Hongyu Zhang, Xukun Li, Yingnong Dang, Qingwei Lin, Murali Chintalapati, Saravanakumar Rajmohan, and Dongmei Zhang (Microsoft Research, China; Microsoft Azure, USA; University of Newcastle, Australia; Microsoft 365, USA) In cloud systems, incidents affect the availability of services and require quick mitigation actions. Once an incident occurs, operators and developers often examine logs to perform fault diagnosis. However, the large volume of diverse logs and the overwhelming details in log data make the manual diagnosis process time-consuming and error-prone. In this paper, we propose Onion, an automatic solution for precisely and efficiently locating incident-indicating logs, which can provide useful clues for diagnosing the incidents. We first point out three criteria for localizing incident-indicating logs, i.e., Consistency, Impact, and Bilateral-Difference. Then we propose a novel agglomeration of logs, called log clique, based on which these criteria are satisfied. To obtain log cliques, we develop an incident-aware log representation and a progressive log clustering technique. Contrast analysis is then performed on the cliques to identify the incident-indicating logs. We have evaluated Onion using well-labeled log datasets. Onion achieves an average F1-score of 0.95 and can process millions of logs in only a few minutes, demonstrating its effectiveness and efficiency. Onion has also been successfully applied to the cloud system of Microsoft. Its practicability has been confirmed through the quantitative and qualitative analysis of the real incident cases. 
@InProceedings{ESEC/FSE21p1253, author = {Xu Zhang and Yong Xu and Si Qin and Shilin He and Bo Qiao and Ze Li and Hongyu Zhang and Xukun Li and Yingnong Dang and Qingwei Lin and Murali Chintalapati and Saravanakumar Rajmohan and Dongmei Zhang}, title = {Onion: Identifying Incident-Indicating Logs for Cloud Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1253--1263}, doi = {10.1145/3468264.3473919}, year = {2021}, } Publisher's Version |
|
Li, Zeyan |
ESEC/FSE '21: "An Empirical Investigation ..."
An Empirical Investigation of Practical Log Anomaly Detection for Online Service Systems
Nengwen Zhao, Honglin Wang, Zeyan Li, Xiao Peng, Gang Wang, Zhu Pan, Yong Wu, Zhen Feng, Xidao Wen, Wenchi Zhang, Kaixin Sui, and Dan Pei (Tsinghua University, China; BizSeer, China; China Everbright Bank, China) Log data is an essential and valuable resource of online service systems, which records detailed information about system running status and user behavior. Log anomaly detection is vital for service reliability engineering, which has been extensively studied. However, we find that existing approaches suffer from several limitations when deploying them into practice, including 1) inability to deal with various logs and complex log abnormal patterns; 2) poor interpretability; 3) lack of domain knowledge. To help understand these practical challenges and investigate the practical performance of existing work quantitatively, we conduct the first empirical study and an experimental study based on large-scale real-world data. We find that logs with rich information indeed exhibit diverse abnormal patterns (e.g., keywords, template count, template sequence, variable value, and variable distribution). However, existing approaches fail to tackle such complex abnormal patterns, producing unsatisfactory performance. Motivated by the obtained findings, we propose a generic log anomaly detection system named LogAD based on ensemble learning, which integrates multiple anomaly detection approaches and domain knowledge, so as to handle complex situations in practice. LogAD achieves an average F1-score of 0.83, outperforming all baselines. Besides, we also share some success cases and lessons learned during our study. To the best of our knowledge, we are the first to deeply investigate practical log anomaly detection in the real world. Our work helps practitioners and researchers apply log anomaly detection in practice to enhance service reliability. 
@InProceedings{ESEC/FSE21p1404, author = {Nengwen Zhao and Honglin Wang and Zeyan Li and Xiao Peng and Gang Wang and Zhu Pan and Yong Wu and Zhen Feng and Xidao Wen and Wenchi Zhang and Kaixin Sui and Dan Pei}, title = {An Empirical Investigation of Practical Log Anomaly Detection for Online Service Systems}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {1404--1415}, doi = {10.1145/3468264.3473933}, year = {2021}, } Publisher's Version |
|
Liang, Guangtai |
ESEC/FSE '21: "SmartCommit: A Graph-Based ..."
SmartCommit: A Graph-Based Interactive Assistant for Activity-Oriented Commits
Bo Shen, Wei Zhang, Christian Kästner, Haiyan Zhao, Zhao Wei, Guangtai Liang, and Zhi Jin (Peking University, China; Carnegie Mellon University, USA; Huawei Technologies, China) In collaborative software development, it is considered to be a best practice to submit code changes as a sequence of cohesive commits, each of which records the work result of a specific development activity, such as adding a new feature, bug fixing, and refactoring. However, rather than following this best practice, developers often submit a set of loosely-related changes serving different development activities as a composite commit, due to the tedious manual work and lack of effective tool support to decompose such a tangled changeset. Composite commits often obfuscate the change history of software artifacts and bring challenges to efficient collaboration among developers. To encourage activity-oriented commits, we propose SmartCommit, a graph-partitioning-based interactive approach to tangled changeset decomposition that leverages not only the efficiency of algorithms but also the knowledge of developers. To evaluate the effectiveness of our approach, we (1) deployed SmartCommit in an international IT company, and analyzed usage data collected from a field study with 83 engineers over 9 months; and (2) conducted a controlled experiment on 3,000 synthetic composite commits from 10 diverse open-source projects. Results show that SmartCommit achieves a median accuracy between 71–84% when decomposing composite commits without developer involvement, and significantly helps developers follow the best practice of submitting activity-oriented commits with acceptable interaction effort and time cost in real collaborative software development. 
@InProceedings{ESEC/FSE21p379, author = {Bo Shen and Wei Zhang and Christian Kästner and Haiyan Zhao and Zhao Wei and Guangtai Liang and Zhi Jin}, title = {SmartCommit: A Graph-Based Interactive Assistant for Activity-Oriented Commits}, booktitle = {Proc.\ ESEC/FSE}, publisher = {ACM}, pages = {379--390}, doi = {10.1145/3468264.3468551}, year = {2021}, } Publisher's Version |