ASE 2012
2012 27th IEEE/ACM International Conference on Automated Software Engineering (ASE)


ASE 2012 – Proceedings



Title Page

This conference publication contains the proceedings of the 27th International Conference on Automated Software Engineering (ASE 2012), held at the Atlantic Congress Hotel Essen in Germany, on September 3–7, 2012.
The IEEE/ACM International Conference on Automated Software Engineering brings together researchers and practitioners to share ideas on the foundations, techniques, tools, and applications of automated software engineering. The specific topics targeted by ASE 2012 included but were not limited to: Automated reasoning techniques, Component-based systems, Computer-supported cooperative work, Configuration management, Data mining and software engineering, Domain modeling and meta-modeling, Empirical software engineering, Human-computer interaction, Knowledge acquisition and management, Maintenance and evolution, Model-based software development, Model-driven engineering and model transformation, Modeling language semantics, Open systems development, Product line architectures, Program understanding, Program synthesis, Program transformation, Re-engineering, Requirements engineering, Specification languages, Software analysis, Software architecture and design, Software visualization, Testing, verification, and validation, Tutoring, help, and documentation systems.
The technical program included 21 long papers and 26 short papers selected from 138 submissions. Long papers comprise technical papers describing innovative research in automated software engineering and experience reports describing significant experience in applying automated software engineering technology in practice. Short papers describe promising research that has not yet been fully evaluated. All submissions went through a rigorous reviewing process in which each paper received at least three reviews. The program selection was performed at a physical Program Committee meeting held at the University of Zurich, Switzerland.




The GISMOE Challenge: Constructing the Pareto Program Surface Using Genetic Programming to Find Better Programs (Keynote Paper)
Mark Harman, William B. Langdon, Yue Jia, David R. White, Andrea Arcuri, and John A. Clark
(University College London, UK; University of Glasgow, UK; Simula Research Laboratory, Norway; University of York, UK)
Optimising programs for non-functional properties such as speed, size, throughput, power consumption, and bandwidth can be demanding; pity the poor programmer who is asked to cater for them all at once! We set out an alternative vision for a new kind of software development environment inspired by recent results from Search Based Software Engineering (SBSE). Given an input program that satisfies the functional requirements, the proposed programming environment will automatically generate a set of candidate program implementations, all of which share functionality, but each of which differs in its non-functional trade-offs. The software designer navigates this diverse Pareto surface of candidate implementations, gaining insight into the trade-offs and selecting solutions for different platforms and environments, thereby stretching beyond the reach of current compiler technologies. Rather than having to focus on the details required to manage complex, interrelated, and conflicting non-functional trade-offs, the designer is thus freed to explore, to understand, to control, and to decide rather than to construct.

Re-founding Software Engineering – SEMAT at the Age of Three (Keynote Abstract)
Ivar Jacobson, Ian Spence, Pontus Johnson, and Mira Kajko-Mattsson
(Ivar Jacobson International, UK; KTH Royal Institute of Technology, Sweden)
Software engineering is gravely hampered by immature practices. Specific problems include: The prevalence of fads more typical of the fashion industry than an engineering discipline; a huge number of methods and method variants, with differences little understood and artificially magnified; the lack of credible experimental evaluation and validation; and the split between industry practice and academic research.
At the root of these problems lies the lack of a sound, widely accepted theoretical basis. A prime example of such a basis is Maxwell’s equations in electrical engineering. It is difficult to fathom what electrical engineering would be today without those four concise equations. They are a great example of the statement “There is nothing so practical as a good theory”. In software engineering we have nothing similar, and there is widespread doubt as to whether it is needed. This talk will argue for the need for a basic theory in software engineering, a theory identifying its pure essence, its common ground, or its kernel.
The Semat (Software Engineering Methods and Theory) community addresses this huge challenge. It supports a process to refound software engineering based on a kernel of widely agreed elements, extensible for specific uses, addressing both technology and people issues. This kernel represents the essence of software engineering. This talk promises to make you see the light at the end of the tunnel.


Debugging I

Practical Isolation of Failure-Inducing Changes for Debugging Regression Faults
Kai Yu, Mengxiang Lin, Jin Chen, and Xiangyu Zhang
(Beihang University, China)
During software evolution, newly released versions still contain many bugs. One common scenario is that end users encounter regression faults and report them to bug tracking systems. Unlike in-house regression testing, typically only one test input is available, which passes on the old version and fails on the modified new version. To address this issue, delta debugging has been proposed for identifying failure-inducing changes between two versions. Despite promising results, two practical factors thwart the application of delta debugging: a large number of tests and misleading false positives. In this work, we present a combination of coverage analysis and delta debugging that automatically isolates failure-inducing changes. Evaluations on twelve real regression faults in GNU software demonstrate both speed gains and effectiveness improvements. Moreover, a case study on libPNG and TCPflow indicates that our technique is comparable to peer techniques in debugging regression faults.
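The delta debugging the abstract builds on narrows a set of changes to a minimal failure-inducing subset. A minimal sketch of the classic ddmin-style search follows; here `fails` is a stand-in for applying a subset of changes to the old version and re-running the single failing test:

```python
def ddmin(changes, fails):
    """Return a minimal subset of `changes` on which `fails` still holds.

    fails(subset) should apply the subset of changes to the old version
    and return True if the regression test still fails.
    """
    n = 2  # initial partition granularity
    while len(changes) >= 2:
        chunk = len(changes) // n
        subsets = [changes[i:i + chunk] for i in range(0, len(changes), chunk)]
        reduced = False
        for subset in subsets:
            complement = [c for c in changes if c not in subset]
            if fails(subset):            # failure reproduced by the subset alone
                changes, n, reduced = subset, 2, True
                break
            if complement and fails(complement):  # failure survives without subset
                changes, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(changes):
                break
            n = min(n * 2, len(changes))  # refine the partition and retry
    return changes
```

For example, if the regression is induced by the interaction of changes 3 and 7 among ten changes, `ddmin(list(range(10)), lambda s: 3 in s and 7 in s)` converges on exactly those two changes.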

Diversity Maximization Speedup for Fault Localization
Liang Gong, David Lo, Lingxiao Jiang, and Hongyu Zhang
(Tsinghua University, China; Singapore Management University, Singapore)
Fault localization is useful for reducing debugging effort. However, many fault localization techniques require a non-trivial number of test cases with oracles, which can determine whether a program behaves correctly for every test input. Test oracle creation is expensive because it can require substantial manual labeling effort. Given a number of test cases to be executed, it is challenging to minimize the number of test cases requiring manual labeling while at the same time achieving good fault localization accuracy. To address this challenge, this paper presents a novel test case selection strategy based on Diversity Maximization Speedup (DMS). DMS orders a set of unlabeled test cases in a way that maximizes the effectiveness of a fault localization technique. Developers are only expected to label a much smaller number of test cases along this ordering to achieve good fault localization results. Our experiments with more than 250 bugs from the Software-artifact Infrastructure Repository show (1) that DMS can help existing fault localization techniques achieve comparable accuracy with, on average, 67% fewer labeled test cases than the previously best test case prioritization techniques, and (2) that given a labeling budget (i.e., a fixed number of labeled test cases), DMS can help existing fault localization techniques reduce their debugging cost (in terms of the amount of code that needs to be inspected to locate faults). We conduct hypothesis tests and show that the debugging cost savings we achieve for the real C programs are statistically significant.

Improving the Effectiveness of Spectra-Based Fault Localization Using Specifications
Divya Gopinath, Razieh Nokhbeh Zaeem, and Sarfraz Khurshid
(University of Texas at Austin, USA)
Fault localization, i.e., locating faulty lines of code, is a key step in removing bugs and often requires substantial manual effort. Recent years have seen many automated localization techniques, specifically those using the program’s passing and failing test runs, i.e., test spectra. However, the effectiveness of these approaches is sensitive to factors such as the type and number of faults and the quality of the test suite. This paper presents a novel technique that applies spectra-based localization in synergy with specification-based analysis to more accurately locate faults. Our insight is that unsatisfiability analysis of violated specifications, enabled by SAT technology, can be used to (1) compute unsatisfiable cores that contain likely faulty statements and (2) generate tests that help spectra-based localization. Our technique is iterative and driven by a feedback loop that enables more precise fault localization. SAT-TAR is a framework that embodies our technique for Java programs, including those with multiple faults. An experimental evaluation using a suite of widely studied data structure programs, including the ANTLR and JTopas parser applications, shows that our technique localizes faults more accurately than state-of-the-art approaches.
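Spectra-based localization techniques of the kind this paper augments typically rank statements by a suspiciousness score computed from passing/failing coverage. One widely used example (illustrative here, not necessarily the metric this paper uses) is Ochiai:

```python
import math

def ochiai(failed_cov, passed_cov, total_failed):
    """Ochiai suspiciousness of a statement.

    failed_cov:   number of failing tests that execute the statement
    passed_cov:   number of passing tests that execute the statement
    total_failed: total number of failing tests in the suite
    """
    denom = math.sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

def rank(spectra, total_failed):
    """Order statements most-suspicious first.

    spectra: {statement_id: (failed_cov, passed_cov)}
    """
    return sorted(spectra,
                  key=lambda s: ochiai(*spectra[s], total_failed),
                  reverse=True)
```

A statement executed by all failing tests and no passing ones scores 1.0 and tops the ranking, which developers then inspect in order.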


Debugging II

To What Extent Could We Detect Field Defects? An Empirical Study of False Negatives in Static Bug Finding Tools
Ferdian Thung, Lucia, David Lo, Lingxiao Jiang, Foyzur Rahman, and Premkumar T. Devanbu
(Singapore Management University, Singapore; UC Davis, USA)
Software defects can cause significant losses. Static bug-finding tools are believed to help detect and remove defects. These tools are designed to find programming errors; but do they in fact help prevent actual defects that occur in the field and are reported by users? If these tools had been used, would they have detected these field defects and generated warnings that would direct programmers to fix them? To answer these questions, we perform an empirical study that investigates the effectiveness of state-of-the-art static bug-finding tools on hundreds of reported and fixed defects extracted from three open source programs: Lucene, Rhino, and AspectJ. Our study addresses the question: to what extent could field defects be found and detected by state-of-the-art static bug-finding tools? Unlike past studies that are concerned with the number of false positives produced by such tools, we address the orthogonal issue of the number of false negatives. We find that although many field defects could be detected by static bug-finding tools, a substantial proportion of defects could not be flagged. We also analyze the types of tool warnings that are more effective in finding field defects and characterize the types of missed defects.

Diagnosys: Automatic Generation of a Debugging Interface to the Linux Kernel
Tegawendé F. Bissyandé, Laurent Réveillère, Julia L. Lawall, and Gilles Muller
(University of Bordeaux, France; INRIA, France)
The Linux kernel does not export a stable, well-defined kernel interface, complicating the development of kernel-level services, such as device drivers and file systems. While there does exist a set of functions that are exported to external modules, this set of functions frequently changes, and the functions have implicit, ill-documented preconditions. No specific debugging support is provided.
We present Diagnosys, an approach to automatically constructing a debugging interface for the Linux kernel. First, a designated kernel maintainer uses Diagnosys to identify constraints on the use of the exported functions. Based on this information, developers of kernel services can then use Diagnosys to generate a debugging interface specialized to their code. When a service including this interface is tested, it records information about potential problems. This information is preserved following a kernel crash or hang. Our experiments show that the generated debugging interface provides useful log information and incurs a low performance penalty.

Duplicate Bug Report Detection with a Combination of Information Retrieval and Topic Modeling
Anh Tuan Nguyen, Tung Thanh Nguyen, Tien N. Nguyen, David Lo, and Chengnian Sun
(Iowa State University, USA; Singapore Management University, Singapore; National University of Singapore, Singapore)
Detecting duplicate bug reports helps reduce triaging effort and saves time for developers in fixing the same issues. Among several automated detection approaches, text-based information retrieval (IR) approaches have been shown to outperform others in terms of both accuracy and time efficiency. However, those IR-based approaches do not detect duplicate reports well when the same technical issue is described in different terms.
This paper introduces DBTM, a duplicate bug report detection approach that takes advantage of both IR-based features and topic-based features. DBTM models a bug report as a textual document describing certain technical issue(s), and models duplicate bug reports as those about the same technical issue(s). Trained with historical data including identified duplicate reports, it is able to learn the sets of different terms describing the same technical issues and to detect other, not-yet-identified duplicates. Our empirical evaluation on real-world systems shows that DBTM improves on the state-of-the-art approaches by up to 20% in accuracy.


Privacy, Security, and Performance

User-Aware Privacy Control via Extended Static-Information-Flow Analysis
Xusheng Xiao, Nikolai Tillmann, Manuel Fahndrich, Jonathan de Halleux, and Michal Moskal
(North Carolina State University, USA; Microsoft Research, USA)
Applications in mobile-marketplaces may leak private user information without notification. Existing mobile platforms provide little information on how applications use private user data, making it difficult for experts to validate applications and for users to grant applications access to their private data. We propose a user-aware privacy control approach, which reveals how private information is used inside applications. We compute static information flows and classify them as safe/unsafe based on a tamper analysis that tracks whether private data is obscured before escaping through output channels. This flow information enables platforms to provide default settings that expose private data only for safe flows, thereby preserving privacy and minimizing decisions required from users. We built our approach into TouchDevelop, an application-creation environment that allows users to write scripts on mobile devices and install scripts published by other users. We evaluate our approach by studying 546 scripts published by 194 users.

Automatic Query Performance Assessment during the Retrieval of Software Artifacts
Sonia Haiduc, Gabriele Bavota, Rocco Oliveto, Andrea De Lucia, and Andrian Marcus
(Wayne State University, USA; University of Salerno, Italy; University of Molise, Italy)
Text-based search and retrieval is used by developers in the context of many SE tasks, such as concept location, traceability link retrieval, reuse, and impact analysis. Solutions for software text search range from regular expression matching to complex techniques using text retrieval. In all cases, the results of a search depend on the query formulated by the developer. A developer needs to run a query and look at the results before realizing that it needs reformulating. Our aim is to automatically assess the performance of a query before it is executed. We introduce an automatic query performance assessment approach for software artifact retrieval, which uses 21 measures from the field of text retrieval. We evaluate the approach in the context of concept location in source code. The evaluation shows that our approach is able to predict the performance of queries with 79% accuracy, using very little training data.
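Pre-retrieval performance predictors of the kind the approach draws on estimate query quality from corpus statistics alone, before running the query. One simple illustrative example (not necessarily among the paper's 21 measures) is the average inverse document frequency of the query terms:

```python
import math

def avg_idf(query_terms, documents):
    """Average (smoothed) IDF of the query terms over a corpus.

    documents: list of documents, each a list of tokens.
    Higher values indicate a more specific query, which tends to
    retrieve better-focused results.
    """
    n = len(documents)
    idfs = []
    for term in query_terms:
        df = sum(1 for doc in documents if term in doc)
        idfs.append(math.log((n + 1) / (df + 1)))  # add-one smoothing
    return sum(idfs) / len(idfs)
```

A term appearing in few documents (e.g. a rare identifier) yields a high score, flagging the query as specific; a query of ubiquitous terms scores low, signaling it likely needs reformulation before execution.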

Supporting Automated Vulnerability Analysis Using Formalized Vulnerability Signatures
Mohamed Almorsy, John Grundy, and Amani S. Ibrahim
(Swinburne University of Technology, Australia)
Adopting publicly accessible platforms such as the cloud computing model to host IT systems has become a leading trend. Although this helps minimize cost and increase the availability and reachability of applications, it has serious implications for application security. Hackers can easily exploit vulnerabilities in such publicly accessible services. Moreover, 75% of all reported application vulnerabilities are specific to web applications. Identifying such known vulnerabilities, as well as newly discovered ones, is a key and challenging security requirement. However, existing vulnerability analysis tools cover no more than 47% of the known vulnerabilities. We introduce a new solution that supports automated vulnerability analysis using formalized vulnerability signatures. Instead of depending on formal methods to locate vulnerability instances, where an analyzer has to be developed for each specific vulnerability, our approach incorporates a formal vulnerability signature described in OCL. Using this formal signature, we perform program analysis of the target system to locate signature matches (i.e., signs of possible vulnerabilities). A newly discovered vulnerability can be easily identified in a target program provided that a formal signature for it exists. We have developed a prototype static vulnerability analysis tool based on our formalized vulnerability signature specification approach. We have validated our approach by capturing signatures of the OWASP Top 10 vulnerabilities and applying these signatures in analyzing a set of seven benchmark applications.


Configuration Management and QoS

A Qualitative Study on User Guidance Capabilities in Product Configuration Tools
Rick Rabiser, Paul Grünbacher, and Martin Lehofer
(JKU Linz, Austria; Siemens, Austria)
Software systems are nowadays often configured by sales people, domain experts, or even customers instead of engineers. Configuration tools communicate the systems' variability to these end users and provide guidance for selecting and customizing the available features. However, even if a configuration tool creates technically correct systems, addressing the specific needs of business-oriented users remains challenging. We analyze existing configuration tools to identify key capabilities for guiding end users and discuss these capabilities using the cognitive dimensions of notations framework. We present an implementation of the capabilities in our configuration tool DOPLER CW. We performed a qualitative investigation of the usefulness of the tool's capabilities for user guidance in product configuration, involving nine business-oriented experts from two industry partners in the domain of industrial automation. We present key results and derive general implications for tool developers.

Structured Merge with Auto-Tuning: Balancing Precision and Performance
Sven Apel, Olaf Leßenich, and Christian Lengauer
(University of Passau, Germany)
Software-merging techniques face the challenge of finding a balance between precision and performance. In practice, developers use unstructured-merge (i.e., line-based) tools, which are fast but imprecise. In academia, many approaches incorporate information on the structure of the artifacts being merged. While this increases precision in conflict detection and resolution, it can induce severe performance penalties. Striving for a proper balance between precision and performance, we propose a structured-merge approach with auto-tuning. In a nutshell, we tune the merge process on-line by switching between unstructured and structured merge, depending on the presence of conflicts. We implemented a corresponding merge tool for Java, called JDime. Our experiments with 8 real-world Java projects, involving 72 merge scenarios with over 17 million lines of code, demonstrate that our approach indeed hits a sweet spot: While largely maintaining a precision that is superior to the one of unstructured merge, structured merge with auto-tuning is up to 12 times faster than purely structured merge, 5 times on average.

An Automated Approach to Forecasting QoS Attributes Based on Linear and Non-linear Time Series Modeling
Ayman Amin, Lars Grunske, and Alan Colman
(Swinburne University of Technology, Australia; University of Kaiserslautern, Germany)
Predicting future values of Quality of Service (QoS) attributes can assist in the control of software-intensive systems by preventing QoS violations before they happen. Currently, many approaches prefer Autoregressive Integrated Moving Average (ARIMA) models for this task and assume the QoS attributes' behavior can be modeled linearly. However, analysis of real QoS datasets shows that they exhibit highly dynamic and mostly nonlinear behavior, to the extent that existing ARIMA models cannot guarantee accurate QoS forecasting. This can introduce serious problems, such as proactively triggering unneeded adaptations, thus leading to follow-up failures and increased costs. To address this limitation, we propose an automated forecasting approach that integrates linear and nonlinear time series models and automatically, without human intervention, selects and constructs the forecasting model best suited to the QoS attributes' dynamic behavior. Using real-world QoS datasets of 800 web services, we evaluate the applicability, accuracy, and performance of the proposed approach; the results show that it outperforms the popular existing ARIMA models and improves forecasting accuracy by 35.4% on average.
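The linear end of the spectrum the approach covers can be illustrated with the simplest autoregressive model, AR(1), fit by least squares. This is only a toy sketch of the idea (full ARIMA fitting involves differencing and moving-average terms), assuming the series is non-constant:

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = a + b * x[t-1]; returns (a, b)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)  # assumes series is not constant
    b = cov / var
    a = mean_y - b * mean_x
    return a, b

def forecast_next(series):
    """One-step-ahead forecast from the fitted AR(1) model."""
    a, b = fit_ar1(series)
    return a + b * series[-1]
```

On a QoS metric whose behavior really is linear-autoregressive this recovers the generating coefficients exactly; on the nonlinear traces the paper studies, the fit degrades, which is exactly the motivation for also considering nonlinear models.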


Testing and Monitoring

Puzzle-Based Automatic Testing: Bringing Humans into the Loop by Solving Puzzles
Ning Chen and Sunghun Kim
(Hong Kong University of Science and Technology, China)
Recently, many automatic test generation techniques have been proposed, such as Randoop, Pex, and jCUTE. However, the test coverage achieved by these techniques has typically been only around 50-60%, due to several challenges, such as 1) the object mutation problem, where test generators cannot create and/or modify test inputs to reach desired object states; and 2) the constraint solving problem, where test generators fail to solve path conditions to cover certain branches. By analyzing branches not covered by state-of-the-art techniques, we noticed that these challenges might not be so difficult for humans.
To verify this hypothesis, we propose a Puzzle-based Automatic Testing environment (PAT) that decomposes object mutation and complex constraint solving problems into small puzzles for humans to solve. We generated PAT puzzles for two open source projects and asked different groups of people to solve them. The puzzles proved effectively solvable by humans: 231 of 400 puzzles were solved, at an average speed of one minute per puzzle. The 231 puzzle solutions helped cover 534 and 308 additional branches (7.0% and 5.8% coverage improvement) in the two open source projects, on top of the saturated branch coverage achieved by the two state-of-the-art test generation techniques.

Using Unfoldings in Automated Testing of Multithreaded Programs
Kari Kähkönen, Olli Saarikivi, and Keijo Heljanko
(Aalto University, Finland)
In multithreaded programs, both environment input data and the nondeterministic interleavings of concurrent events can affect the behavior of the program. One approach to systematically exploring the nondeterminism caused by input data is dynamic symbolic execution. For testing multithreaded programs, we present a new approach that combines dynamic symbolic execution with unfoldings, a method originally developed for Petri nets but also applied to many other models of concurrency. We provide an experimental comparison of our new approach with existing algorithms combining dynamic symbolic execution and partial-order reductions, and show that the new algorithm can explore the reachable control states of each thread with a significantly smaller number of test runs. In some cases the reduction in the number of test runs can even be exponential, allowing programs with long test executions or hard-to-solve constraints generated by symbolic execution to be tested more efficiently.

Runtime Monitoring of Software Energy Hotspots
Adel Noureddine, Aurelien Bourdon, Romain Rouvoy, and Lionel Seinturier
(INRIA, France; University of Lille, France; Institut Universitaire de France, France)
GreenIT has emerged as a discipline concerned with the optimization of software solutions with regard to their energy consumption. In this domain, most state-of-the-art solutions concentrate on coarse-grained approaches that monitor the energy consumption of a device or a process. However, none of the existing solutions addresses in-process energy monitoring to provide an in-depth analysis of a process's energy consumption. In this paper, we therefore report on a fine-grained runtime energy monitoring framework we developed to help developers diagnose energy hotspots with better accuracy than the state of the art.
Concretely, our approach adopts a two-layer architecture comprising OS-level and process-level energy monitoring. OS-level energy monitoring estimates the energy consumption of processes across different hardware devices (CPU, network card). Process-level energy monitoring focuses on Java-based applications and builds on OS-level energy monitoring to estimate energy consumption at the granularity of classes and methods. We argue that this per-method analysis of energy consumption provides better insight into the application, helping to identify potential energy hotspots. In particular, our preliminary validation demonstrates that we can monitor energy hotspots of Jetty web servers and track their variations under stress scenarios.



Can I Clone This Piece of Code Here?
Xiaoyin Wang, Yingnong Dang, Lu Zhang, Dongmei Zhang, Erica Lan, and Hong Mei
(Peking University, China; Microsoft Research, China; Microsoft, USA)
While code cloning is a convenient way for developers to reuse existing code, it may potentially lead to negative impacts, such as degrading code quality or increasing maintenance costs. In practice, some cloned code pieces are viewed as harmless because they evolve independently, while others are viewed as harmful because they need to be changed consistently, thus incurring extra maintenance costs. Recent studies demonstrate that neither the percentage of harmful code clones nor that of harmless code clones is negligible. To assist developers in leveraging the benefits of harmless code cloning and/or avoiding the negative impacts of harmful code cloning, we propose a novel approach that automatically predicts the harmfulness of a code cloning operation at the point of performing copy-and-paste. Our insight is that the potential harmfulness of a code cloning operation may relate to characteristics of the code to be cloned and of its context. Based on a number of features extracted from the cloned code and the context of the code cloning operation, we use Bayesian Networks, a machine-learning technique, to predict the harmfulness of an intended code cloning operation. We evaluated our approach on two large-scale industrial software projects under two usage scenarios: 1) approving only cloning operations predicted to be very likely harmless, and 2) blocking only cloning operations predicted to be very likely harmful. In the first scenario, our approach is able to approve more than 50% of cloning operations with a precision higher than 94.9% in both subjects. In the second scenario, our approach is able to avoid more than 48% of the harmful cloning operations by blocking only 15% of the cloning operations for the first subject, and more than 67% of the harmful cloning operations by blocking only 34% of the cloning operations for the second subject.
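The paper uses Bayesian Networks; as a simplified stand-in for the same predict-from-features idea, a naive Bayes classifier over boolean clone features can be sketched as follows (the feature name and training data here are hypothetical):

```python
import math
from collections import defaultdict

def train_naive_bayes(samples):
    """samples: list of (feature_dict, label) pairs with boolean features.

    Returns class counts and per-(class, feature) True-counts.
    """
    counts = defaultdict(int)          # label -> number of samples
    feature_counts = defaultdict(int)  # (label, feature) -> times feature is True
    for features, label in samples:
        counts[label] += 1
        for f, v in features.items():
            if v:
                feature_counts[(label, f)] += 1
    return counts, feature_counts

def predict(counts, feature_counts, features):
    """Return the most probable label under the naive Bayes assumption."""
    total = sum(counts.values())
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        score = math.log(c / total)  # class prior
        for f, v in features.items():
            p_true = (feature_counts[(label, f)] + 1) / (c + 2)  # Laplace smoothing
            score += math.log(p_true if v else 1 - p_true)
        if score > best_score:
            best, best_score = label, score
    return best
```

Trained on historical cloning operations labeled harmful/harmless, the model would score a new copy-and-paste at the moment it happens, mirroring the approval/blocking scenarios described above.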

Predicting Recurring Crash Stacks
Hyunmin Seo and Sunghun Kim
(Hong Kong University of Science and Technology, China)
A software crash is one of the most severe bug manifestations, and developers want to fix crash bugs quickly and efficiently. The Crash Reporting System (CRS) is widely deployed for this purpose. Even with the help of a CRS, fixes largely rely on manual effort, which is error-prone and results in recurring crashes even after the fixes. Our empirical study reveals that 48% of fixed crashes in the Firefox CRS recur, mostly due to incomplete or missing fixes. It is therefore desirable to automatically check whether a crash fix misses some reported crash traces at the time of the first fix. This paper proposes an automatic technique to predict recurring crash traces. We first extract stack traces and then compare them with bug fix locations to predict recurring crash traces. Evaluation using real Firefox crash data shows that the approach yields reasonable accuracy in predicting recurring crashes. Had our technique been deployed earlier, more than 2,225 crashes in Firefox 3.6 could have been avoided.
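The core comparison the abstract describes, matching crash stack traces against fix locations, can be sketched as a simple coverage check (a minimal illustration with hypothetical method names, not the paper's exact algorithm):

```python
def unfixed_traces(crash_traces, fixed_methods):
    """Return crash stack traces none of whose frames the fix touched.

    crash_traces:  list of stack traces, each a list of method names
                   (top frame first)
    fixed_methods: set of method names changed by the bug fix

    Traces left untouched by the fix are candidates for recurring crashes.
    """
    return [trace for trace in crash_traces
            if not any(frame in fixed_methods for frame in trace)]
```

Run at the time of the first fix, a non-empty result would warn the developer that some reported crash signatures are not covered by the patch and may recur.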

Automated Inference of Goal-Oriented Performance Prediction Functions
Dennis Westermann, Jens Happe, Rouven Krebs, and Roozbeh Farahbod
(SAP Research, Germany)
Understanding the dependency between performance metrics (such as response time) and software configuration or usage parameters is crucial in improving software quality. However, the size of most modern systems makes it nearly impossible to provide a complete performance model. Hence, we focus on scenario-specific problems where software engineers require practical and efficient approaches to draw conclusions, and we propose an automated, measurement-based model inference method to derive goal-oriented performance prediction functions. For the practicability of the approach it is essential to derive functional dependencies with the least possible amount of data. In this paper, we present different strategies for automated improvement of the prediction model through an adaptive selection of new measurement points based on the accuracy of the prediction model. In order to derive the prediction models, we apply and compare different statistical methods. Finally, we evaluate the different combinations based on case studies using SAP and SPEC benchmarks.


Validation, Verification, and Consistency

Code Patterns for Automatically Validating Requirements-to-Code Traces
Achraf Ghabi and Alexander Egyed
(JKU Linz, Austria)
Traces between requirements and code reveal where requirements are implemented. Such traces are essential for code understanding and change management. Unfortunately, traces are known to be error prone. This paper introduces a novel approach for validating requirements-to-code traces through calling relationships within the code. As input, the approach requires an executable system, the corresponding requirements, and the requirements-to-code traces that need validating. As output, the approach identifies likely incorrect or missing traces by investigating patterns of traces with calling relationships. The empirical evaluation of four case study systems covering 150 KLOC and 59 requirements demonstrates that the approach detects most errors with 85-95% precision and 82-96% recall and is able to handle traces of varying levels of correctness and completeness. The approach is fully automated, tool supported, and scalable.
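One simple instance of the kind of calling-relationship pattern such validation can exploit (a hypothetical simplification, not the paper's exact rules): if every caller of a method traces to a requirement but the method itself does not, the trace is likely missing:

```python
from collections import defaultdict

def suspicious_missing_traces(traces, calls):
    """Flag (method, requirement) pairs that look like missing traces.

    traces: {method: set of requirement ids the method traces to}
    calls:  {caller method: set of callee methods}

    If all callers of m trace to requirement r but m does not,
    (m, r) is reported as a candidate missing trace.
    """
    callers = defaultdict(set)
    for caller, callees in calls.items():
        for callee in callees:
            callers[callee].add(caller)

    candidates = set()
    for m, cs in callers.items():
        common = set.intersection(*(traces.get(c, set()) for c in cs))
        for r in common - traces.get(m, set()):
            candidates.add((m, r))
    return candidates
```

The same caller/callee index can be inverted to flag likely incorrect traces, e.g. a method traced to a requirement that none of its neighbors in the call graph relate to.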

Unbounded Data Model Verification Using SMT Solvers
Jaideep Nijjar and Tevfik Bultan
(UC Santa Barbara, USA)
The growing influence of web applications in every aspect of society makes their dependability an immense concern. A fundamental building block of web applications that use the Model-View-Controller (MVC) pattern is the data model, which specifies the object classes and the relations among them. We present an approach for unbounded, automated verification of data models that 1) extracts a formal data model from an Object Relational Mapping, 2) converts verification queries about the data model to queries about the satisfiability of formulas in the theory of uninterpreted functions, and 3) uses a Satisfiability Modulo Theories (SMT) solver to check the satisfiability of the resulting formulas. We implemented this approach and applied it to five open-source Rails applications. Our results demonstrate that the proposed approach is feasible, and is more efficient than SAT-based bounded verification.

Computing Repair Trees for Resolving Inconsistencies in Design Models
Alexander Reder and Alexander Egyed
(JKU Linz, Austria)
Resolving inconsistencies in software models is a complex task because the number of repairs grows exponentially. Existing approaches thus focus on selected repairs only, but doing so diminishes their usefulness. This paper copes with the large number of repairs by focusing on what caused an inconsistency and presenting repairs as a linearly growing repair tree. The cause is computed by examining the run-time evaluation of the inconsistency to understand where and why it failed. The individual changes that make up repairs are then modeled in a repair tree as alternatives and sequences reflecting the syntactic structure of the inconsistent design rule. The approach is automated and tool supported. Its scalability was empirically evaluated on 29 UML models and 18 OCL design rules, where we show that the approach computes repair trees in milliseconds on average. We believe that the approach is applicable to arbitrary modeling and constraint languages.
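The alternative/sequence structure of a repair tree can be sketched for toy boolean design rules (this is an illustration, not the tool's algorithm): a failed conjunction yields a sequence of repairs, a failed disjunction yields alternatives.

```python
def repair_tree(expr, model):
    """Build a tree of repair alternatives/sequences from a failed boolean rule.
    expr is a nested tuple: ('and', e1, e2), ('or', e1, e2), or ('atom', name);
    model maps atom names to their current truth values."""
    op = expr[0]
    if op == 'atom':
        return ('fix', expr[1]) if not model[expr[1]] else None
    left, right = repair_tree(expr[1], model), repair_tree(expr[2], model)
    if op == 'and':
        # Every failed conjunct must be repaired: a sequence of repairs.
        parts = [t for t in (left, right) if t]
        return ('seq', parts) if parts else None
    if op == 'or':
        # If either disjunct already holds, the rule holds; otherwise
        # repairing either disjunct suffices: alternatives.
        if left is None or right is None:
            return None
        return ('alt', [left, right])
```

The tree grows with the syntactic size of the rule rather than with the exponential number of repair combinations, which is the point the abstract makes.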


Re-engineering, Program Understanding, and Model Transformation (Short Papers)

Supporting Automated Software Re-engineering Using Re-aspects
Mohamed Almorsy, John Grundy, and Amani S. Ibrahim
(Swinburne University of Technology, Australia)
System maintenance, such as removing an existing system feature (e.g., buggy or vulnerable code) or modifying existing features (e.g., replacing them), is still very challenging. To address this problem we introduce the “re-aspect” (re-engineering aspect), inspired by traditional AOP. A re-aspect captures system modification details, including signatures of the entities to be updated; actions to apply, such as removing, modifying, replacing, or injecting new code; and the code to apply. Re-aspects locate the entities to update and the entities that will be impacted by a given update, and finally propagate the changes to the system source code. We have applied our re-aspect technique to the security re-engineering problem and evaluated it on a set of open-source .NET applications to demonstrate its usefulness.

Supporting Operating System Kernel Data Disambiguation Using Points-to Analysis
Amani S. Ibrahim, John Grundy, James Hamlyn-Harris, and Mohamed Almorsy
(Swinburne University of Technology, Australia)
Generic pointers scattered around operating system (OS) kernels make the kernel data layout ambiguous. This limits current kernel integrity checking research to covering a small fraction of kernel data. Hence, there is a great need to obtain an accurate kernel data definition that resolves generic pointer ambiguities, in order to formulate a set of constraints between structures to support precise integrity checking. In this paper, we present KDD, a new tool for systematically generating a sound kernel data definition for any C-based OS, e.g., Windows or Linux, without any prior knowledge of the kernel data layout. KDD performs static points-to analysis on the kernel’s source code to infer the appropriate candidate types for generic pointers. We implemented a prototype of KDD and evaluated it to demonstrate its scalability and effectiveness.

Automatic Recovery of Statecharts from Procedural Code
Moria Abadi and Yishai A. Feldman
(Tel Aviv University, Israel; IBM Research, Israel)
We have developed a static-analysis algorithm that extracts statecharts from procedural implementations of state machines. The extracted statecharts are semantically-equivalent to the original program, and can be used for further development instead of the procedural code. We have implemented this algorithm in a tool called StatRec. We report on the results of running StatRec on a number of examples, including an implementation of the TCP protocol.

Locating Distinguishing Features Using Diff Sets
Julia Rubin and Marsha Chechik
(University of Toronto, Canada; IBM Research, Israel)
In this paper, we focus on the problem of feature location for families of related software products realized via code cloning. Locating code that corresponds to features in such families is an important task in many software development activities, such as support for sharing features between different products of the family or refactoring the code into product line representations that eliminate duplications and facilitate reuse. We suggest two heuristics for improving the accuracy of existing feature location techniques when locating distinguishing features – those that are present in one product variant while absent in another. Our heuristics are based on identifying code regions that have a high potential to implement a feature of interest. We refer to these regions as diff sets and compute them by comparing product variants to each other. We exemplify our approach on a small but realistic example and describe initial evaluation results.
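The core idea of a diff set — code present in one variant and absent in another — can be sketched by comparing variants line by line (an illustration, not the paper's technique; real feature location would filter and rank these regions further):

```python
import difflib

def diff_set(variant_with, variant_without):
    """Code regions present in one product variant but absent in the other:
    candidate regions for implementing a distinguishing feature.
    Both variants are given as lists of source lines."""
    sm = difflib.SequenceMatcher(a=variant_without, b=variant_with,
                                 autojunk=False)
    regions = []
    for tag, _, _, j1, j2 in sm.get_opcodes():
        if tag in ('insert', 'replace'):
            regions.append(variant_with[j1:j2])
    return regions
```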

Slicing and Replaying Code Change History
Katsuhisa Maruyama, Eijiro Kitsu, Takayuki Omori, and Shinpei Hayashi
(Ritsumeikan University, Japan; Tokyo Institute of Technology, Japan)
Change-aware development environments have recently become feasible and reasonable. These environments can automatically record fine-grained code changes on a program and allow programmers to replay the recorded changes in chronological order. However, programmers do not always need to replay all the code changes to investigate how a particular entity of the program has been changed, and therefore often skip several code changes of no interest. This skipping action is an obstacle that makes many programmers hesitate to use existing replaying tools. This paper proposes a slicing mechanism that can extract only the code changes necessary to construct a particular class member of a Java program from the whole history of past code changes. In this mechanism, fine-grained code changes are represented by edit operations recorded on the source code of a program. The paper also presents a tool that implements the proposed slicing and replays the resulting slices. With this tool, programmers can avoid replaying edit operations nonessential to the construction of the class members they want to understand.
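A history slice of this kind can be sketched as a backward pass over the recorded edit operations, keeping an edit when it touches the member of interest and pulling in any members it depends on (a simplification of the paper's mechanism; the edit representation here is hypothetical):

```python
def slice_history(edits, member):
    """Keep only the edit operations needed to reconstruct one class member.
    Each edit is a dict naming the member it touches and, optionally, the
    members it depends on (e.g. an edit that builds on an earlier rename)."""
    needed, keep = {member}, []
    for edit in reversed(edits):          # walk backwards to collect dependencies
        if edit['member'] in needed:
            keep.append(edit)
            needed.update(edit.get('depends_on', ()))
    keep.reverse()                        # restore chronological replay order
    return keep
```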

Generating Model Transformation Rules from Examples Using an Evolutionary Algorithm
Martin Faunes, Houari Sahraoui, and Mounir Boukadoum
(Université de Montréal, Canada; Université du Québec à Montréal, Canada)
We propose an evolutionary approach to automatically generate model transformation rules from a set of examples. To this end, genetic programming is adapted to the problem of model transformation in the presence of complex input/output relationships (i.e., models conforming to meta-models) by generating declarative programs (i.e., transformation rules in this case). Our approach does not rely on prior transformation traces for the model-example pairs, and directly generates executable, many-to-many rules with complex conditions. The applicability of the approach is illustrated with the well-known problem of transforming UML class diagrams into relational schemas, using examples collected from the literature.


Testing and Code Analysis (Short Papers)

Augmented Dynamic Symbolic Execution
Konrad Jamrozik, Gordon Fraser, Nikolai Tillmann, and Jonathan de Halleux
(Saarland University, Germany; University of Sheffield, UK; Microsoft Research, USA)
Dynamic symbolic execution (DSE) can efficiently explore all simple paths through a program, reliably determining whether there are any program crashes or violations of assertions or code contracts. However, if such automated oracles do not exist, the traditional approach is to present the developer with a small and representative set of tests in order to let him/her determine their correctness. Customer feedback on Microsoft's Pex tool revealed that users expect different values and also more values than those produced by Pex, which threatens the applicability of DSE in a scenario without automated oracles. Indeed, even though all paths might be covered by DSE, the resulting tests are usually not sensitive enough to make a good regression test suite. In this paper, we present augmented dynamic symbolic execution, which aims to produce representative test sets by augmenting path conditions with additional conditions that enforce target criteria such as boundary or mutation adequacy, or logical coverage criteria.

Using GUI Ripping for Automated Testing of Android Applications
Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana, Salvatore De Carmine, and Atif M. Memon
(Università Federico II Napoli, Italy; University of Maryland, USA)
We present AndroidRipper, an automated technique that tests Android apps via their Graphical User Interface (GUI). AndroidRipper is based on a user-interface driven ripper that automatically explores the app’s GUI with the aim of exercising the application in a structured manner. We evaluate AndroidRipper on an open-source Android app. Our results show that our GUI-based test cases are able to detect severe, previously unknown, faults in the underlying code, and the structured exploration outperforms a random approach.

kbe-Anonymity: Test Data Anonymization for Evolving Programs
Lucia, David Lo, Lingxiao Jiang, and Aditya Budi
(Singapore Management University, Singapore)
High-quality test data that is useful for effective testing is often available at users’ sites. However, sharing data owned by users with software vendors may raise privacy concerns. Techniques are needed to enable data sharing between data owners and vendors without leaking data privacy. Evolving programs bring additional challenges because data may be shared multiple times, once for every version of a program. When multiple versions of the data are cross-referenced, private information could be inferred. Although there are studies addressing the privacy issue of data sharing for testing and debugging, little work has explicitly addressed the challenges that arise when programs evolve. In this paper, we examine kb-anonymity, which was recently proposed for anonymizing data for a single version of a program, and identify a potential privacy risk if it is repeatedly applied for evolving programs. We propose kbe-anonymity to address the insufficiencies of kb-anonymity and evaluate our model on three Java programs. We demonstrate that kbe-anonymity can successfully address the potential risk of kb-anonymity, maintain sufficient path coverage for testing, and be as efficient as kb-anonymity.

Selection of Regression System Tests for Security Policy Evolution
JeeHyun Hwang, Tao Xie, Donia El Kateb, Tejeddine Mouelhi, and Yves Le Traon
(North Carolina State University, USA; University of Luxembourg, Luxembourg)
As security requirements of software often change, developers may modify security policies such as access control policies (policies in short) according to evolving requirements. To increase confidence that the modification of policies is correct, developers conduct regression testing. However, rerunning all existing system test cases could be costly and time-consuming. To address this issue, we develop a regression-test-selection approach, which selects every system test case that may reveal regression faults caused by policy changes. Our evaluation results show that our test-selection approach reduces a substantial number of system test cases efficiently.
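The selection criterion can be sketched in a few lines (an illustration under stated assumptions, not the paper's algorithm): given which policy rules each system test exercised, select exactly the tests that touch a rule whose decision changed, was added, or was removed.

```python
def select_tests(coverage, old_policy, new_policy):
    """Select system tests whose requests touched a changed access-control rule.
    coverage maps each test name to the set of policy rules it exercised;
    each policy maps rule names to decisions (e.g. 'permit' / 'deny')."""
    unchanged = {r for r in old_policy.keys() & new_policy.keys()
                 if old_policy[r] == new_policy[r]}
    changed = (old_policy.keys() | new_policy.keys()) - unchanged
    return sorted(t for t, rules in coverage.items() if rules & changed)
```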

Fast and Precise Points-to Analysis with Incremental CFL-Reachability Summarisation: Preliminary Experience
Lei Shang, Yi Lu, and Jingling Xue
(University of New South Wales, Australia)
We describe our preliminary experience in the design and implementation of a points-to analysis for Java, called EMU, that enables developers to perform pointer-related queries in programs undergoing constant changes in IDEs. EMU achieves fast response times by adopting a modular approach to incrementally updating method summaries upon small code changes: the points-to information in a method is summarised indirectly by CFL reachability rather than directly by points-to sets. Thus, the impact of a small code change made in a method is localised, requiring only its affected part to be re-summarised to reflect the change. EMU achieves precision by being context-sensitive (for both method invocation and heap abstraction) and field-sensitive. Our evaluation shows that EMU is a promising candidate for deployment in IDEs, where code changes are typically small.
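The incremental-summary idea — only re-analyse methods whose code changed — can be sketched as a cache keyed on method bodies; `summarise` here stands in for the (expensive) per-method CFL-reachability analysis, which this illustration does not attempt to reproduce:

```python
class SummaryCache:
    """Per-method summaries recomputed only for methods whose code changed."""
    def __init__(self, summarise):
        self.summarise = summarise          # expensive per-method analysis
        self.cache = {}                     # method -> (body key, summary)
        self.recomputed = []                # log of re-analysed methods

    def get(self, method, body):
        key = hash(body)
        if self.cache.get(method, (None, None))[0] != key:
            self.cache[method] = (key, self.summarise(body))
            self.recomputed.append(method)
        return self.cache[method][1]
```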


Detection and Refactoring (Short Papers)

Automatically Securing Permission-Based Software by Reducing the Attack Surface: An Application to Android
Alexandre Bartel, Jacques Klein, Yves Le Traon, and Martin Monperrus
(University of Luxembourg, Luxembourg; SnT, Luxembourg; University of Lille, France; INRIA, France)
In the permission-based security model (used, e.g., in Android and Blackberry), applications can be granted more permissions than they actually need, which we call a “permission gap”. Malware can leverage the unused permissions to achieve its malicious goals, for instance using code injection. In this paper, we present an approach to detecting permission gaps using static analysis. Applying our tool to a dataset of Android applications, we found that a non-negligible proportion of applications suffer from permission gaps, i.e., they do not use all the permissions they declare.
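Once static analysis has determined which framework APIs an application can reach, the gap itself is a set difference. A minimal sketch (the API-to-permission mapping below is illustrative, not Android's real mapping, which the tool would need in full):

```python
# Hypothetical mapping from framework API calls to the permission each requires.
API_PERMISSIONS = {
    'LocationManager.getLastKnownLocation': 'ACCESS_FINE_LOCATION',
    'SmsManager.sendTextMessage': 'SEND_SMS',
    'Camera.open': 'CAMERA',
}

def permission_gap(declared, called_apis):
    """Declared permissions never justified by any reachable API call."""
    used = {API_PERMISSIONS[a] for a in called_apis if a in API_PERMISSIONS}
    return declared - used
```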

Support Vector Machines for Anti-pattern Detection
Abdou Maiga, Nasir Ali, Neelesh Bhattacharya, Aminata Sabané, Yann-Gaël Guéhéneuc, Giuliano Antoniol, and Esma Aïmeur
(Université de Montréal, Canada; École Polytechnique de Montréal, Canada)
Developers may introduce anti-patterns in their software systems because of time pressure, lack of understanding, communication, and/or skills. Anti-patterns impede development and maintenance activities by making the source code more difficult to understand. Detecting anti-patterns in a whole software system may be infeasible because of the required parsing time and the subsequent manual validation needed. Detecting anti-patterns on subsets of a system could reduce costs, effort, and resources. Researchers have proposed approaches to detect occurrences of anti-patterns, but these approaches currently have some limitations: they require extensive knowledge of anti-patterns, they have limited precision and recall, and they cannot be applied to subsets of systems. To overcome these limitations, we introduce SVMDetect, a novel approach to detect anti-patterns, based on a machine learning technique: support vector machines. Through an empirical study involving three subject systems and four anti-patterns, we show that the accuracy of SVMDetect is greater than that of DETEX when detecting anti-pattern occurrences on a set of classes. Concerning whole systems, SVMDetect is able to find more anti-pattern occurrences than DETEX.

Detection of Embedded Code Smells in Dynamic Web Applications
Hung Viet Nguyen, Hoan Anh Nguyen, Tung Thanh Nguyen, Anh Tuan Nguyen, and Tien N. Nguyen
(Iowa State University, USA)
In dynamic Web applications, there often exists a type of code smell, called embedded code smells, that violates important principles in software development such as software modularity and separation of concerns, resulting in much maintenance effort. Detecting and fixing these code smells is crucial yet challenging since the code with smells is embedded in and generated from the server-side code.
We introduce WebScent, a tool to detect such embedded code smells. WebScent first detects the smells in the generated code, and then locates them in the server-side code using the mapping between client-side code fragments and their embedding locations in the server program, which is captured during the generation of those fragments. Our empirical evaluation on real-world Web applications shows that 34%-81% of the tested server files contain embedded code smells. We also found that the source files with more embedded code smells are likely to have more defects and scattered changes, thus potentially require more maintenance effort.

Boreas: An Accurate and Scalable Token-Based Approach to Code Clone Detection
Yang Yuan and Yao Guo
(Peking University, China)
Detecting code clones in a program has many applications in software engineering and other related fields. In this paper, we present Boreas, an accurate and scalable token-based approach for code clone detection. Boreas introduces a novel counting-based method to define the characteristic matrices, which are able to describe the program segments distinctly and effectively for the purpose of clone detection. We conducted experiments on JDK 7 and Linux kernel source code. Experimental results show that Boreas is able to match the detecting accuracy of a recently proposed syntactic-based tool Deckard, with the execution time reduced by more than an order of magnitude.
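The counting-based idea behind such characteristic vectors can be sketched as token-occurrence counts compared by cosine similarity (an illustration only; Boreas's actual characteristic matrices and thresholds are richer than this):

```python
from collections import Counter
import math

def characteristic_vector(tokens):
    # Count occurrences of each token kind in a code segment.
    return Counter(tokens)

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in set(u) | set(v))
    norm = (math.sqrt(sum(x * x for x in u.values())) *
            math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def is_clone(seg_a, seg_b, threshold=0.95):
    # Two segments with near-identical token distributions are clone candidates.
    return cosine(characteristic_vector(seg_a),
                  characteristic_vector(seg_b)) >= threshold
```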

Refactorings without Names
Friedrich Steimann and Jens von Pilgrim
(Fernuniversität in Hagen, Germany)
As with design patterns before them, the naming and cataloguing of refactorings has contributed significantly to the recognition of the discipline. However, in practice concrete refactoring needs may deviate from what has been distilled as a named refactoring, and mapping these needs to a series of such refactorings — if at all possible — can be difficult. To address this, we propose a framework for specifying refactorings in an ad hoc fashion, and demonstrate its feasibility by presenting an implementation. Evaluation is done by simulating a user's application of the framework on a set of given sample programs. Results suggest that our proposal of ad hoc refactoring is viable, at least for the investigated scenarios.

Automated API Migration in a User-Extensible Refactoring Tool for Erlang Programs
Huiqing Li and Simon Thompson
(University of Kent, UK)
Wrangler is a refactoring and code inspection tool for Erlang programs. Apart from providing a set of built-in refactorings and code inspection functionalities, Wrangler allows users to define refactorings, code inspections, and general program transformations for themselves to suit their particular needs. These are defined using a template- and rule-based program transformation and analysis framework built into Wrangler.
This paper reports an extension to Wrangler's extension framework, supporting the automatic generation of API migration refactorings from a user-defined adapter module.


Requirements Engineering and Model based Development (Short Papers)

Using Mobile Devices for Collaborative Requirements Engineering
Rainer Lutz, Sascha Schäfer, and Stephan Diehl
(University of Trier, Germany)
In requirements engineering, CRC modeling and use case analysis are established techniques and are often performed as a group work activity. In particular, role play is used to involve different stakeholders into the use case analysis. To support this kind of co-located collaboration we developed CREW-Space, which allows several users to simultaneously interact through Android-enabled mobile devices with the same model displayed on a shared screen. Furthermore, it keeps track of the current state of the role play and, in addition, each mobile device serves as a private workspace; it actually turns into a tangible digital CRC card.

Automatically Generating and Adapting Model Constraints to Support Co-evolution of Design Models
Andreas Demuth, Roberto E. Lopez-Herrejon, and Alexander Egyed
(JKU Linz, Austria)
Design models must abide by constraints that can come from diverse sources, like their metamodels, requirements, or the problem domain. Software modelers expect these constraints to be enforced on their models and receive instant error feedback if they fail. This works well when constraints are stable. However, constraints may evolve much like their models do. This evolution demands efficient constraint adaptation mechanisms to ensure that models are always validated against the correct constraints. In this paper, we present an idea based on constraint templates that tackles this evolution scenario by automatically generating and updating constraints.

Adaptability of Model Comparison Tools
Timo Kehrer, Udo Kelter, Pit Pietsch, and Maik Schmidt
(University of Siegen, Germany)
Modern model-based development methodologies require a large number of efficient, high-quality model comparison tools, which must be carefully adapted to the specific model type, user preferences, and application context. Implementing a large number of dedicated, monolithic tools is infeasible; the only viable approach is generic, adaptable tools. Generic tools currently available provide only partial or low-quality solutions to this challenge; their results are not satisfactory for model types such as state machines or block diagrams. This paper presents the SiDiff approach to model comparison, which includes a set of highly configurable incremental matchers and a specification language to control their application.


Defect Prediction and Recovery (Short Papers)

Predicting Common Web Application Vulnerabilities from Input Validation and Sanitization Code Patterns
Lwin Khin Shar and Hee Beng Kuan Tan
(Nanyang Technological University, Singapore)
Software defect prediction studies have shown that defect predictors built from static code attributes are useful and effective. On the other hand, to mitigate the threats posed by common web application vulnerabilities, many vulnerability detection approaches have been proposed. However, finding alternative solutions to address these risks remains an important research problem. As web applications generally adopt input validation and sanitization routines to prevent web security risks, in this paper, we propose a set of static code attributes that represent the characteristics of these routines for predicting the two most common web application vulnerabilities—SQL injection and cross site scripting. In our experiments, vulnerability predictors built from the proposed attributes detected more than 80% of the vulnerabilities in the test subjects at low false alarm rates.
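Extracting static attributes that characterize validation and sanitization routines can be sketched as pattern counting over server-side code (a toy illustration using real PHP function names; the attribute names are hypothetical, and the paper's actual attribute set and the predictors built on top of it are much richer):

```python
import re

# Illustrative attribute extraction: counts of sanitization-related constructs
# in a PHP-like server-side code fragment.
PATTERNS = {
    'escaping_calls': r'\b(?:htmlspecialchars|mysql_real_escape_string)\s*\(',
    'numeric_casts': r'\b(?:intval|floatval)\s*\(',
    'regex_checks': r'\bpreg_match\s*\(',
    'direct_input_uses': r'\$_(?:GET|POST|REQUEST)\b',
}

def code_attributes(code):
    """One attribute per pattern: how often it occurs in the fragment."""
    return {name: len(re.findall(rx, code)) for name, rx in PATTERNS.items()}
```

A vulnerability predictor would then be trained on such attribute vectors labeled with known vulnerable/safe outcomes.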

Software Defect Prediction Using Semi-supervised Learning with Dimension Reduction
Huihua Lu, Bojan Cukic, and Mark Culp
(West Virginia University, USA)
Accurate detection of fault-prone modules offers a path to high-quality software products while minimizing non-essential assurance expenditures. This type of quality modeling requires the availability of software modules with known fault content developed in a similar environment. Establishing whether a module contains a fault or not can be expensive. The basic idea behind semi-supervised learning is to learn from a small number of software modules with known fault content and supplement model training with modules for which the fault information is not available. In this study, we investigate the performance of semi-supervised learning for software fault prediction. A preprocessing strategy, multidimensional scaling, is embedded in the approach to reduce the dimensional complexity of software metrics. Our results show that the semi-supervised learning algorithm with dimension reduction performs significantly better than one of the best performing supervised learning algorithms, random forest, in situations when few modules with known fault content are available for training.

Healing Online Service Systems via Mining Historical Issue Repositories
Rui Ding, Qiang Fu, Jian-Guang Lou, Qingwei Lin, Dongmei Zhang, Jiajun Shen, and Tao Xie
(Microsoft Research, China; Microsoft Research, USA; Shanghai Jiao Tong University, China; North Carolina State University, USA)
Online service systems have become increasingly popular and important, with growing demand for the availability of the services they provide, and significant efforts are made to keep these services up continuously. Reducing the MTTR (Mean Time to Restore) of a service is therefore the most important step in assuring its user-perceived availability. To reduce the MTTR, a common practice is to restore the service by identifying and applying an appropriate healing action (i.e., a temporary workaround action such as rebooting a SQL machine). However, manually identifying an appropriate healing action for a given new issue (such as service down) is typically time consuming and error prone. To address this challenge, in this paper, we present an automated mining-based approach for suggesting an appropriate healing action for a given new issue. Our approach generates signatures of an issue from its corresponding transaction logs and then retrieves similar historical issues from a historical issue repository. Finally, our approach suggests an appropriate healing action by adapting the healing actions of the retrieved historical issues. We have implemented a healing suggestion system for our approach and applied it to a real-world online service that serves millions of online customers globally. Studies on 77 incidents (severe issues) over 3 months showed that our approach can effectively provide appropriate healing actions to reduce the MTTR of the service.
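The retrieval step can be sketched as nearest-neighbour matching over issue signatures (an illustration only: signatures are modeled as sets of salient log events, similarity as Jaccard overlap, and the example actions are hypothetical; the paper's signature generation and adaptation steps are more involved):

```python
def jaccard(a, b):
    # Set overlap as a simple similarity measure between issue signatures.
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_healing(new_signature, history):
    """Suggest the healing action of the most similar historical issue.
    history is a list of {'signature': set, 'action': str} records."""
    best = max(history, key=lambda issue: jaccard(new_signature,
                                                  issue['signature']))
    return best['action']
```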

Automated Evaluation of Syntax Error Recovery
Maartje de Jonge and Eelco Visser
(TU Delft, Netherlands)
Evaluation of parse error recovery techniques is an open problem. The community lacks objective standards and methods to measure the quality of recovery results. This paper proposes an automated technique for recovery evaluation that offers a solution to two main problems in this area. First, a representative test set is generated by a mutation-based fuzzing technique that applies knowledge about common syntax errors. Second, the quality of the recovery results is automatically measured using an oracle-based evaluation technique. We evaluate the validity of our approach by comparing results obtained by automated evaluation with results obtained by manual inspection. The evaluation shows a clear correspondence between our quality metric and human judgement.
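The mutation-based generation step can be sketched as token-level mutations of a correct seed program, mimicking common syntax errors (an illustration of the idea only; the paper's mutations encode real knowledge about how such errors occur):

```python
import random

def mutate(tokens, rng):
    """Introduce one syntax-error-like mutation: drop, duplicate, or
    swap a token in an otherwise correct token sequence."""
    tokens = list(tokens)
    i = rng.randrange(len(tokens))
    kind = rng.choice(['drop', 'duplicate', 'swap'])
    if kind == 'drop':
        del tokens[i]
    elif kind == 'duplicate':
        tokens.insert(i, tokens[i])
    elif kind == 'swap' and len(tokens) > 1:
        j = (i + 1) % len(tokens)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def generate_test_set(seed_tokens, n, seed=0):
    # A deterministic seed makes the generated test set reproducible.
    rng = random.Random(seed)
    return [mutate(seed_tokens, rng) for _ in range(n)]
```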


Tool Demonstrations 1

MaramaAI: Tool Support for Capturing and Managing Consistency of Multi-lingual Requirements
Massila Kamalrudin, John Grundy, and John Hosking
(Universiti Teknikal Malaysia Melaka, Malaysia; Swinburne University of Technology, Australia; Australian National University, Australia)
Requirements captured by Requirements Engineers are commonly inconsistent with their client’s intended requirements and are often error prone especially if the requirements are written in multiple languages. We demonstrate the use of our automated inconsistency-checking tool MaramaAI to capture and manage the consistency of multi-lingual requirements in both the English and Malay languages for requirements engineers and clients using a round-trip, rapid prototyping approach.

GUITest: A Java Library for Fully Automated GUI Robustness Testing
Sebastian Bauersfeld and Tanja E. J. Vos
(Universitat Politècnica de València, Spain)
Graphical User Interfaces (GUIs) are substantial parts of today's applications, no matter whether these run on tablets, smartphones, or desktop platforms. Since the GUI is often the only component that humans interact with, it demands thorough testing to ensure an efficient and satisfactory user experience. Being the glue between almost all of an application's components, GUIs also lend themselves to system-level testing. However, GUI testing is inherently difficult and often involves great manual labor, even with modern tools that promise automation. This paper introduces a Java library called GUITest, which allows fully automated GUI robustness tests to be generated for complex applications, without the need to manually create models or input sequences. We explain how it operates and present first results on its applicability and effectiveness during a test involving Microsoft Word.

Observatory of Trends in Software Related Microblogs
Palakorn Achananuparp, Ibrahim Nelman Lubis, Yuan Tian, David Lo, and Ee-Peng Lim
(Singapore Management University, Singapore)
Microblogging has recently become a popular means to disseminate information among millions of people. Interestingly, software developers also use microblogs to communicate with one another. Different from traditional media, microblog users tend to focus on recency and informality of content. Many tweets are relatively more personal and opinionated than traditional news reports. Thus, by analyzing microblogs, one can get up-to-date information about what people are interested in or how they feel toward a particular topic. In this paper, we describe our microblog observatory that aggregates more than 70,000 Twitter feeds, captures software-related tweets, and computes trends across topics and time points. Finally, we present the results to end users via a web interface available at
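The trend computation can be sketched as per-day term counting over a stream of software-related tweets (a minimal illustration; the topic list below is hypothetical, and the observatory's actual topic detection is more sophisticated than keyword matching):

```python
from collections import Counter, defaultdict

SOFTWARE_TERMS = {'java', 'git', 'debug', 'compiler'}   # illustrative topic list

def trends(tweets):
    """Per-day frequency of software-related terms across a tweet stream.
    Each tweet is a (date, text) pair."""
    by_day = defaultdict(Counter)
    for date, text in tweets:
        for word in text.lower().split():
            if word in SOFTWARE_TERMS:
                by_day[date][word] += 1
    return by_day
```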

Arcade.PLC: A Verification Platform for Programmable Logic Controllers
Sebastian Biallas, Jörg Brauer, and Stefan Kowalewski
(RWTH Aachen University, Germany; Verified Systems International, Germany)
This paper introduces Arcade.PLC, a verification platform for programmable logic controllers (PLCs). The tool supports static analysis as well as ACTL and past-time LTL model checking using counterexample-guided abstraction refinement for different programming languages used in industry. In the underlying principles of the framework, knowledge about the hardware platform is exploited so as to provide efficient techniques. The effectiveness of the approach is evaluated on programs implemented using a combination of programming languages.

Test Suite Selection Based on Traceability Annotations
Yves Ledru, German Vega, Taha Triki, and Lydie du Bousquet
(UJF-Grenoble 1, France; Grenoble-INP, France; UPMF-Grenoble2, France; CNRS, France)
This paper describes the Tobias tool. Tobias is a combinatorial test generator which unfolds a test pattern provided by the test engineer, and performs various combinations and repetitions of test parameters and methods. Tobias is available on-line at . This website features recent improvements of the tool including a new input language, a traceability mechanism, and the definition of various “selectors” which achieve test suite reduction.

PuMoC: A CTL Model-Checker for Sequential Programs
Fu Song and Tayssir Touili
(CNRS, France; University Paris Diderot, France)
In this paper, we present PuMoC, a CTL model checker for pushdown systems (PDSs) and sequential C/C++ and Java programs. PuMoC supports CTL model checking w.r.t. simple valuations, where the atomic propositions depend on the control locations of the PDSs, and w.r.t. regular valuations, where atomic propositions are regular predicates over the stack content. Our tool was used to (1) check 500 randomly generated PDSs against several CTL formulas; (2) check around 1461 versions of 30 Windows drivers taken from SLAM benchmarks; (3) check several C and Java programs; and (4) perform data flow analysis of real-world Java programs. Our results show the efficiency and applicability of our tool.

Weave Droid: Aspect-Oriented Programming on Android Devices: Fully Embedded or in the Cloud
Yliès Falcone and Sebastian Currea
(UJF-Grenoble 1, France; LIG, France)
Weave Droid is an Android application that makes Aspect-Oriented Programming (AOP) on Android devices possible and user-friendly. It allows users to retrieve applications and aspects and to weave them together in several ways. Applications and aspects can be loaded from Google Play, personal repositories, or the local memory of a device. Two complementary weaving modes are then provided: local or remote, using the embedded aspect compiler or the compiler in the cloud, respectively. This provides flexibility and preserves the mobility of the target devices. Weave Droid opens up a world of possible applications, not only by benefiting from the existing uses of AOP on standard machines, but also through the various uses specific to mobile devices. The effectiveness of Weave Droid is demonstrated by weaving aspects with off-the-shelf applications from Google Play.

Caprice: A Tool for Engineering Adaptive Privacy
Inah Omoronyia, Liliana Pasquale, Mazeiar Salehie, Luca Cavallaro, Gavin Doherty, and Bashar Nuseibeh
(Lero, Ireland; University of Limerick, Ireland; Trinity College Dublin, Ireland; Open University, UK)
In a dynamic environment where context changes frequently, users’ privacy requirements can also change. To satisfy such changing requirements, there is a need for continuous analysis to discover new threats and possible mitigation actions. A frequently changing context can also blur the boundary between public and personal space, making it difficult for users to discover and mitigate emerging privacy threats. This challenge necessitates some degree of self-adaptive privacy management in software applications.
This paper presents Caprice, a tool that enables software engineers to design systems that discover and mitigate context-sensitive privacy threats. The tool uses privacy policies, together with associated domain and software behavioural models, to reason over the contexts that threaten privacy. Based on the severity of a discovered threat, adaptation actions are then suggested to the designer. We present the Caprice architecture and demonstrate, through an example, that the tool enables designers to focus on specific privacy threats that arise from changing context and on plausible categories of adaptation action, such as ignoring, preventing, reacting to, and terminating interactions that threaten privacy.


Tool Demonstrations 2

JStereoCode: Automatically Identifying Method and Class Stereotypes in Java Code
Laura Moreno and Andrian Marcus
(Wayne State University, USA)
Object-Oriented (OO) code stereotypes are low-level patterns that reveal the design intent of a source code artifact, such as a method or a class. They are orthogonal to the problem domain of the software and reflect the role of a method or class from the OO problem-solving point of view. However, the research community in automated reverse engineering has focused more on higher-level design information, such as design patterns. Existing work on reverse engineering code stereotypes is scarce and focused on C++ code, and no tools are freely available as of today. We present JStereoCode, a tool that automatically identifies the stereotypes of methods and classes in Java systems. The tool is integrated with Eclipse and, for a given Java project, classifies each method and class in the system based on their stereotypes. Applications of JStereoCode include program comprehension, defect prediction, etc.
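To make the notion of a method stereotype concrete, here is a toy classifier over simplified facts about a method's signature and behavior. The rules and stereotype names are illustrative assumptions only, not JStereoCode's actual taxonomy.

```python
def method_stereotype(return_type, reads_state, writes_state):
    """Toy classifier for a few common OO method stereotypes, based on
    simplified facts about a method; illustrative only, not
    JStereoCode's actual rules."""
    if return_type == "boolean" and reads_state and not writes_state:
        return "predicate"   # reports a boolean computed from object state
    if return_type != "void" and reads_state and not writes_state:
        return "accessor"    # returns object state without modifying it
    if return_type == "void" and writes_state:
        return "mutator"     # changes object state
    return "unclassified"
```

A real classifier derives facts like `reads_state`/`writes_state` from static analysis of the method body rather than taking them as inputs.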

CHESS: A Model-Driven Engineering Tool Environment for Aiding the Development of Complex Industrial Systems
Antonio Cicchetti, Federico Ciccozzi, Silvia Mazzini, Stefano Puri, Marco Panunzio, Alessandro Zovi, and Tullio Vardanega
(Mälardalen University, Sweden; Intecs, Italy; University of Padova, Italy)
Modern software systems require advanced design support, especially support capable of mastering rising complexity and of automating as many development tasks as possible. Model-Driven Engineering (MDE) is earning consideration as a solid response to those challenges on account of its support for abstraction and domain specialisation. However, MDE adoption often clashes with industrial practice, because its novelty conflicts with the need to preserve vast legacies and to retain the skills matured in pre-MDE or alternative development solutions. This work presents the CHESS tool environment, a novel approach to cross-domain modelling of complex industrial systems. It leverages UML profiling and separation of concerns, realised through the specification of well-defined design views, each of which addresses a particular aspect of the problem. In this way, extra-functional, functional, and deployment descriptions of the system can be given in a focused manner, preventing issues pertaining to distinct concerns from interfering with one another.

SYMake: A Build Code Analysis and Refactoring Tool for Makefiles
Ahmed Tamrawi, Hoan Anh Nguyen, Hung Viet Nguyen, and Tien N. Nguyen
(Iowa State University, USA)
Software building is an important task during software development. However, program analysis support for build code is still limited, especially for build code written in a dynamic language such as Make. We introduce SYMake, a novel program analysis and refactoring tool for build code in Makefiles. SYMake is capable of detecting several types of code smells and errors, such as cyclic dependencies, rule inclusion, duplicate prerequisites, and recursive variable loops. It also supports automatic build code refactoring, e.g., rule extraction/removal, target creation, target/variable renaming, and prerequisite extraction. In addition, SYMake provides analyses of defined rules, targets, prerequisites, and associated information to help developers better understand build code in a Makefile and its included files.
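To give a flavor of one of the smells mentioned above, the sketch below flags targets that list the same prerequisite more than once. It is a simplified line-based scan for illustration; SYMake itself builds a symbolic model of the Makefile rather than matching lines.

```python
from collections import Counter
import re

def duplicate_prerequisites(makefile_text):
    """Report targets that list the same prerequisite more than once.
    Simplified line-based scan, not SYMake's symbolic analysis:
    ignores recipes (tab-indented), comments, and := assignments."""
    smells = {}
    for line in makefile_text.splitlines():
        m = re.match(r"^([^\t#:=]+):(?!=)(.*)$", line)
        if not m:
            continue  # not a rule line
        target = m.group(1).strip()
        prereqs = m.group(2).split()
        dupes = [p for p, n in Counter(prereqs).items() if n > 1]
        if dupes:
            smells[target] = dupes
    return smells
```

For example, the rule `all: foo.o bar.o foo.o` would be reported with the duplicated prerequisite `foo.o`.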

Quokka: Visualising Interactions of Enterprise Software Environment Emulators
Cameron Hine, Jean-Guy Schneider, Jun Han, and Steve Versteeg
(Swinburne University of Technology, Australia; CA Labs, Australia)
Enterprise software systems operate in large-scale, heterogeneous, distributed environments, which makes assessing non-functional properties of those systems, such as scalability and robustness, particularly challenging. Enterprise environment emulators can provide test-beds representative of real environments using only a few physical hosts, thereby allowing assessment of the non-functional properties of enterprise software systems. To date, analysing the outcomes of these tests has been an ad hoc and somewhat tedious affair, largely based on manual and/or script-assisted inspection of interaction logs. Quokka visualises emulations, significantly aiding analysis and comprehension. Emulated interactions can be viewed live (in real time) as well as replayed at a later stage. Furthermore, basic charts are used to aggregate and summarise emulations, helping to identify performance and scalability issues.

Communicating Continuous Integration Servers for Increasing Effectiveness of Automated Testing
Stefan Dösinger, Richard Mordinyi, and Stefan Biffl
(TU Vienna, Austria)
Automated testing and continuous integration are established concepts in today’s software engineering landscape, but they operate in relative isolation, as they do not fully take into account the complexity of dependencies between code artifacts in different projects. In this paper, we demonstrate the Continuous Change Impact Analysis Process (CCIP), which breaks up this isolation by actively taking project dependencies into account. The implemented CCIP approach extends the traditional continuous integration (CI) process by enforcing communication between CI servers whenever new artifact updates are available. We show that the exchange of CI process results contributes to improving the effectiveness of automated testing.
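The communication idea can be sketched as dependency-triggered notification: when a project's build succeeds with new artifacts, builds of its transitive dependents are triggered so cross-project breakage surfaces early. The function and data shapes below are hypothetical illustrations, not the paper's actual implementation.

```python
def notify_dependents(changed_project, dependencies, trigger_build):
    """Trigger builds of every project that (transitively) depends on
    `changed_project`. `dependencies` maps each project to the set of
    projects it depends on; `trigger_build` stands in for the CI-server
    notification (e.g., an HTTP call). Hypothetical sketch, not the
    paper's API."""
    triggered, frontier = set(), [changed_project]
    while frontier:
        current = frontier.pop()
        for project, deps in dependencies.items():
            if current in deps and project not in triggered:
                trigger_build(project)  # notify the dependent project's CI server
                triggered.add(project)
                frontier.append(project)  # its own dependents must rebuild too
    return triggered
```

The `triggered` set doubles as a visited set, so cyclic dependency graphs cannot cause repeated notifications.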

GZoltar: An Eclipse Plug-In for Testing and Debugging
José Campos, André Riboira, Alexandre Perez, and Rui Abreu
(University of Porto, Portugal)
Testing and debugging make up the most expensive, error-prone phase of the software development life cycle. Automated testing and diagnosis of software faults can drastically improve the efficiency of this phase, thereby improving the overall quality of the software. In this paper we present a toolset for automatic testing and fault localization, dubbed GZoltar, which hosts techniques for (regression) test suite minimization and automatic fault diagnosis (namely, spectrum-based fault localization). The toolset provides the infrastructure to automatically instrument the source code of software programs to produce runtime data. This data is subsequently analyzed to both minimize the test suite and return a ranked list of diagnosis candidates. The toolset is a plug-and-play plug-in for the Eclipse IDE, to ease world-wide adoption.
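Spectrum-based fault localization can be illustrated with the widely used Ochiai coefficient: elements covered mostly by failing tests rank as more suspicious. This is a generic sketch of the technique, under assumed data shapes, not GZoltar's code.

```python
import math

def ochiai_ranking(spectra, failing):
    """Rank program elements by Ochiai suspiciousness.
    spectra: {test_name: set of elements the test covered};
    failing:  set of names of failing tests.
    Generic sketch of spectrum-based fault localization."""
    elements = set().union(*spectra.values())
    total_fail = len(failing)
    scores = {}
    for e in elements:
        ef = sum(1 for t in failing if e in spectra[t])  # failing tests covering e
        ep = sum(1 for t in spectra                       # passing tests covering e
                 if t not in failing and e in spectra[t])
        denom = math.sqrt(total_fail * (ef + ep))
        scores[e] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

An element covered by every failing test and no passing test gets the maximal score of 1.0, placing it at the top of the diagnosis list.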

Semantic Patch Inference
Jesper Andersen, Anh Cuong Nguyen, David Lo, Julia L. Lawall, and Siau-Cheng Khoo
(University of Copenhagen, Denmark; National University of Singapore, Singapore; Singapore Management University, Singapore; INRIA, France)
We propose a tool for inferring transformation specifications from a few examples of original and updated code. These transformation specifications may contain multiple code fragments from within a single function, all of which must be present for the transformation to apply. This makes the inferred transformations context sensitive. Our algorithm is based on depth-first search, with pruning. Because it is applied locally to a collection of functions that contain related changes, it is efficient in practice. We illustrate the approach on an example drawn from recent changes to the Linux kernel.

REInDetector: A Framework for Knowledge-Based Requirements Engineering
Tuong Huan Nguyen, Bao Quoc Vo, Markus Lumpe, and John Grundy
(Swinburne University of Technology, Australia)
Requirements engineering (RE) is a coordinated effort to allow clients, users, and software engineers to jointly formulate assumptions, constraints, and goals about a software solution. However, one of the most challenging aspects of RE is the detection of inconsistencies between requirements. To address this issue, we have developed REInDetector, a knowledge-based requirements engineering tool, supporting automatic detection of a range of inconsistencies. It provides facilities to elicit, structure, and manage requirements with distinguished capabilities for capturing the domain knowledge and the semantics of requirements. This permits an automatic analysis of both consistency and realizability of requirements. REInDetector finds implicit consequences of explicit requirements and offers all stakeholders an additional means to identify problems in a more timely fashion than existing RE tools. In this paper, we describe the Description Logic used to capture requirements, the REInDetector tool, its support for inconsistency detection, and its efficacy as applied to several RE examples. An important feature of REInDetector is also its ability to generate comprehensive explanations to provide more insights into the detected inconsistencies.


Doctoral Symposium

Formal Verification Techniques for Model Transformations Specified By-Demonstration
Sebastian Gabmeyer
(TU Vienna, Austria)
Model transformations play an essential role in many aspects of model-driven development. By-demonstration approaches provide a user-friendly means of specifying reusable model transformations: a modeler performs the model transformation only once by hand, and an executable transformation is automatically derived. Such a transformation is characterized by the set of pre- and postconditions that are required to hold prior to and after its execution. However, the automatically derived conditions are usually too restrictive or incomplete and need to be refined manually to obtain the intended model transformation.
As model transformations may be specified improperly despite the use of by-demonstration development approaches, we propose to employ formal verification techniques to detect inconsistent and erroneous transformations. In particular, we conjecture that methods drawn from software model checking and theorem proving might be employed to verify certain correctness properties of model transformations.

A Model-Driven Parser Generator with Reference Resolution Support
Luis Quesada
(University of Granada, Spain)
ModelCC is a model-based parser generator. Model-based parser generators decouple language specification from language processing. This model-driven approach avoids the limitations imposed by parser generators whose language specifications must conform to specific grammar constraints. Moreover, ModelCC supports reference resolution within the language specification. Therefore, it does not just produce parse trees: it can also efficiently deal with abstract syntax graphs. These graphs can even include cycles (i.e., they are not constrained to be directed acyclic graphs).

Property-Preserving Program Refinement
Yosuke Yamamoto
(University of Saskatchewan, Canada)
During the development and maintenance process, a program changes form, often being refined as specifications and implementation decisions are realized. A correctness proof built in parallel with an original program can be extended to a proof of the refined program by showing equivalences between the original and refined programs. This paper illustrates two examples of property-preserving refinement, partial evaluation and generalization, and explores the correctness-preserving equivalences underpinning those refinement techniques. We plan to explore ways in which the informal reasoning behind these and similar program refinement tasks may be captured to extend the proof of an original program into a proof of the refined program.
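A minimal example of the partial-evaluation flavor of refinement: specializing a general power function to a known exponent yields a straight-line residual program, and the equivalence `power(x, n) == power_n(x)` for all x is exactly the property a correctness proof can carry over. This is an illustrative sketch, not the paper's framework.

```python
def power(x, n):
    """General program: x**n by repeated multiplication."""
    result = 1
    for _ in range(n):
        result *= x
    return result

def specialize_power(n):
    """Partially evaluate `power` at a known n: unroll the loop into a
    straight-line residual program. The residual is equivalent to the
    original for every x, which is the equivalence a proof can reuse."""
    body = "x * " * (n - 1) + "x" if n > 0 else "1"
    src = f"def power_n(x):\n    return {body}\n"
    env = {}
    exec(src, env)  # compile the residual program
    return env["power_n"], src
```

For n = 3 the residual body is simply `x * x * x`, with the loop and counter refined away.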

Predicting Software Complexity by Means of Evolutionary Testing
Ana Filipa Nogueira
(University of Coimbra, Portugal)
One characteristic that impedes software from achieving good levels of maintainability is its increasing complexity. Empirical observations have shown that, typically, the more complex the software is, the bigger its test suite is. Hence, a relevant question, which became the main research topic of our work, has arisen: "Is there a way to correlate the complexity of the test cases used to test a software product with the complexity of the software under test?". This work presents a new approach to infer software complexity based on the characteristics of automatically generated test cases. From these characteristics, we expect to create a test case profile for a software product, which will then be correlated with the complexity, as well as other characteristics, of the software under test. This research is expected to provide developers and software architects with means to support and validate their decisions, as well as to observe the evolution of a software product during its life cycle. Our work focuses on object-oriented software, and the corresponding test suites will be automatically generated through an emergent approach for creating test data known as Evolutionary Testing.

Identifying Refactoring Sequences for Improving Software Maintainability
Panita Meananeatra
(Thammasat University, Thailand; National Electronics and Computer Technology Center, Thailand)
Refactoring is a well-known technique that preserves software behavior while improving its bad structures, or bad smells. In most cases, more than one bad smell is found in a program; consequently, developers frequently apply refactorings more than once. By applying an appropriate refactoring sequence, an ordered list of refactorings, developers can remove bad smells as well as reduce improvement time and produce highly maintainable software. According to our 2011 survey, developers consider four main criteria when selecting an optimal refactoring sequence: 1) the number of removed bad smells, 2) maintainability, 3) the size of the refactoring sequence, and 4) the number of modified program elements. A refactoring sequence that satisfies these four criteria produces code without bad smells and with higher maintainability, requires the least improvement effort and time, and provides more traceability. Some existing works suggest a list of refactorings without ordering, and others suggest refactoring sequences; however, these works do not consider the four criteria discussed earlier. Therefore, our research proposes an approach to identify an optimal refactoring sequence that meets these criteria. In addition, we expect the findings to reduce maintenance time and cost, increase maintainability, and enhance software quality.

