36th International Conference on Software Engineering (ICSE Companion 2014),
May 31 – June 7, 2014,
Hyderabad, India
Formal Demonstrations
Automated Programming Support
Thu, Jun 5, 10:30 - 12:30, MR.G.1-3 (Chair: Andrew Begel)
ImpactMiner: A Tool for Change Impact Analysis
Bogdan Dit, Michael Wagner, Shasha Wen, Weilin Wang, Mario Linares-Vásquez, Denys Poshyvanyk, and Huzefa Kagdi
(College of William and Mary, USA; Wichita State University, USA)
Developers are often faced with a natural language change request (such as a bug report) and tasked with identifying all code elements that must be modified in order to fulfill the request (e.g., fix a bug or implement a new feature). In order to accomplish this task, developers frequently and routinely perform change impact analysis. This formal demonstration paper presents ImpactMiner, a tool that implements an integrated approach to software change impact analysis. The proposed approach estimates an impact set using an adaptive combination of static textual analysis, dynamic execution tracing, and mining software repositories techniques. ImpactMiner is available from our online appendix http://www.cs.wm.edu/semeru/ImpactMiner/
@InProceedings{ICSECompanion14p540,
author = {Bogdan Dit and Michael Wagner and Shasha Wen and Weilin Wang and Mario Linares-Vásquez and Denys Poshyvanyk and Huzefa Kagdi},
title = {ImpactMiner: A Tool for Change Impact Analysis},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {540--543},
doi = {},
year = {2014},
}
Migrating Code with Statistical Machine Translation
Anh Tuan Nguyen, Tung Thanh Nguyen, and Tien N. Nguyen
(Iowa State University, USA; Utah State University, USA)
In the era of mobile computing, developers often need to migrate code written for one platform in a programming language to another language for a different platform, e.g., from Java for Android to C# for Windows Phone. The migration process is often performed manually or semi-automatically, requiring developers to manually define translation rules and API mappings. This paper presents semSMT, an automatic tool to migrate code written in Java to C#. semSMT utilizes statistical machine translation to automatically infer translation rules from existing migrated code, and thus requires no manual definition of rules. The video demonstration of semSMT can be found on YouTube at http://www.youtube.com/watch?v=aRSnl5-7vNo.
@InProceedings{ICSECompanion14p544,
author = {Anh Tuan Nguyen and Tung Thanh Nguyen and Tien N. Nguyen},
title = {Migrating Code with Statistical Machine Translation},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {544--547},
doi = {},
year = {2014},
}
LTSA-PCA: Tool Support for Compositional Reliability Analysis
Pedro Rodrigues, Emil Lupu, and Jeff Kramer
(Imperial College London, UK)
Software systems are often constructed by combining new and existing services and components. Models of such systems should therefore be compositional in order to reflect the architectural structure. We present herein an extension of the LTSA model checker. It supports the specification, visualisation and failure analysis of composable, probabilistic behaviour of component-based systems, modelled as Probabilistic Component Automata (PCA). To evaluate aspects such as the probability of system failure, a DTMC model can be automatically constructed from the composition of the PCA representations of each component and analysed in tools such as PRISM. Before composition, we reduce each PCA to its interface behaviour in order to mitigate state explosion associated with composite representations. Moreover, existing behavioural analysis techniques in LTSA can be applied to PCA representations to verify the compatibility of interface behaviour between components with matching provided-required interfaces. A video highlighting the main features of the tool can be found at: http://youtu.be/moIkx8JHE7o.
@InProceedings{ICSECompanion14p548,
author = {Pedro Rodrigues and Emil Lupu and Jeff Kramer},
title = {LTSA-PCA: Tool Support for Compositional Reliability Analysis},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {548--551},
doi = {},
year = {2014},
}
DASHboards: Enhancing Developer Situational Awareness
Oleksii Kononenko, Olga Baysal, Reid Holmes, and Michael W. Godfrey
(University of Waterloo, Canada)
Issue trackers monitor the progress of software development "issues", such as bug fixes and discussions about features. Typically, developers subscribe to issues they are interested in through the tracker, and are informed of changes and new developments via automated email. In practice, however, this approach does not scale well, as developers may receive large volumes of messages that they must sort through using their mail client; over time, it becomes increasingly challenging for them to maintain awareness of the issues that are relevant to their activities and tasks. To address this problem, we present a tool called DASH that is implemented in the form of personalized views of issues; developers indicate issues of interest and DASH presents customized views of their progress and informs them of changes as they occur.
Video: http://youtu.be/Jka_MsZet20
@InProceedings{ICSECompanion14p552,
author = {Oleksii Kononenko and Olga Baysal and Reid Holmes and Michael W. Godfrey},
title = {DASHboards: Enhancing Developer Situational Awareness},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {552--555},
doi = {},
year = {2014},
}
Product Assignment Recommender
Jialiang Xie, Qimu Zheng, Minghui Zhou, and Audris Mockus
(Peking University, China; Avaya Labs Research, USA)
The effectiveness of a software development process depends on the accuracy of data in its supporting tools. In particular, a customer issue assigned to the wrong product team takes much longer to resolve (negatively affecting user-perceived quality) and wastes developer effort. In both Open Source Software (OSS) and commercial projects, values in issue-tracking systems (ITS) or Customer Relationship Management (CRM) systems are often assigned by non-developers, for whom the assignment task is difficult. We propose PAR (Product Assignment Recommender) to estimate the odds that a value in the ITS is incorrect. PAR learns from past activities in the ITS and performs prediction using a logistic regression model. Our demonstrations show how PAR helps developers focus on fixing real problems, and how it can be used to improve data accuracy in the ITS by crowd-sourcing the verification and correction of low-accuracy data to non-developers.
http://youtu.be/IuykbzSTj8s
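The logistic-regression step described above can be illustrated with a minimal pure-Python sketch. The features (whether the reporter is a non-developer, how often the issue was reassigned) and the toy data are hypothetical stand-ins, not the model PAR actually learns from ITS history:

```python
import math

def sigmoid(z):
    """Logistic function mapping a linear score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights by plain stochastic gradient descent; x[0] == 1 is the bias term."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            for j in range(len(w)):
                w[j] += lr * (yi - p) * xi[j]
    return w

def odds_incorrect(w, x):
    """Odds p/(1-p) that the recorded product value is wrong."""
    p = sigmoid(sum(wj * xj for wj, xj in zip(w, x)))
    return p / (1.0 - p)

# Hypothetical features per issue: [bias, reporter is a non-developer, times reassigned]
X = [[1, 0, 0], [1, 0, 1], [1, 1, 2], [1, 1, 3]]
y = [0, 0, 1, 1]  # 1 = the product field later turned out to be incorrect
w = train_logistic(X, y)
```

Issues whose odds exceed some threshold would then be routed to humans for verification, matching the crowd-sourcing workflow the abstract describes.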
@InProceedings{ICSECompanion14p556,
author = {Jialiang Xie and Qimu Zheng and Minghui Zhou and Audris Mockus},
title = {Product Assignment Recommender},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {556--559},
doi = {},
year = {2014},
}
Verily: A Web Framework for Creating More Reasonable Web Applications
John L. Singleton and Gary T. Leavens
(University of Central Florida, USA)
The complexity of web application construction is increasing at an astounding rate. Developing for the web typically crosses multiple application tiers in a variety of languages, which can result in disjoint code bases. This lack of standardization introduces new challenges for reasoning.
In this paper we introduce Verily, a new web framework for Java that supports the development of verifiable web applications. Rather than requiring that programs be verified in a separate a posteriori analysis, Verily supports construction via a series of Recipes, which are properties of an application that are enforced at compile time. In addition to introducing the Verily framework, we also present two Recipes: the Core Recipe, an application architecture for web applications designed to replace traditional server-side Model View Controller, and the Global Mutable State Recipe, which enables developers to use sessions within their applications without resorting to unrestricted global mutable state. Demo Video: http://www.youtube.com/watch?v=TjRF7E4um3c
@InProceedings{ICSECompanion14p560,
author = {John L. Singleton and Gary T. Leavens},
title = {Verily: A Web Framework for Creating More Reasonable Web Applications},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {560--563},
doi = {},
year = {2014},
}
VeriWS: A Tool for Verification of Combined Functional and Non-functional Requirements of Web Service Composition
Manman Chen, Tian Huat Tan, Jun Sun, Yang Liu, and Jin Song Dong
(National University of Singapore, Singapore; Singapore University of Technology and Design, Singapore; Nanyang Technological University, Singapore)
Web service composition is an emerging technique for developing Web applications by composing existing Web services. Web service composition is subject to two important classes of requirements, i.e., functional and non-functional requirements. Both are crucial to Web service composition; therefore, it is desirable to verify combined functional and non-functional requirements for Web service composition.
We present VeriWS, a tool to verify combined functional and non-functional requirements of Web service composition. VeriWS captures the semantics of Web service composition and verifies it directly based on these semantics. We also show how to describe Web service compositions and properties using VeriWS. The YouTube video demonstrating VeriWS is available at https://sites.google.com/site/veriwstool/.
@InProceedings{ICSECompanion14p564,
author = {Manman Chen and Tian Huat Tan and Jun Sun and Yang Liu and Jin Song Dong},
title = {VeriWS: A Tool for Verification of Combined Functional and Non-functional Requirements of Web Service Composition},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {564--567},
doi = {},
year = {2014},
}
Software Understanding for Programmers and Researchers
Thu, Jun 5, 14:00 - 16:00, MR.G.1-3 (Chair: Tim Menzies)
SEWordSim: Software-Specific Word Similarity Database
Yuan Tian, David Lo, and Julia Lawall
(Singapore Management University, Singapore; INRIA, France; LIP6, France)
Measuring the similarity of words is important in accurately representing and comparing documents, and thus improves the results of many natural language processing (NLP) tasks. The NLP community has proposed various measurements based on WordNet, a lexical database that contains relationships between many pairs of words. Recently, a number of techniques have been proposed to address software engineering issues such as code search and fault localization that require understanding natural language documents, and a measure of word similarity could improve their results. However, WordNet only contains information about word senses in general-purpose conversation, which often differ from word senses in a software-engineering context, and the software-specific word similarity resources that have been developed rely on data sources containing only a limited range of words and word uses.
In recent work, we have proposed a word similarity resource based on information collected automatically from StackOverflow. We have found that, on a 3-point Likert scale, the results of this resource are given scores over 50% higher than those of a resource based on WordNet. In this demo paper, we review our data collection methodology and propose a Java API to make the resulting word similarity resource useful in practice.
The SEWordSim database and related information can be found at http://goo.gl/BVEAs8. Demo video is available at http://goo.gl/dyNwyb.
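As a rough illustration of how a corpus-derived word-similarity resource can work, the sketch below scores similarity as the cosine of word co-occurrence vectors. This is a generic technique under invented toy data, not SEWordSim's actual method, API, or StackOverflow corpus:

```python
from collections import Counter
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """Build a context-count vector for every word from tokenized sentences."""
    vecs = {}
    for toks in sentences:
        for i, w in enumerate(toks):
            ctx = toks[max(0, i - window):i] + toks[i + 1:i + 1 + window]
            vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented miniature corpus standing in for StackOverflow sentences.
corpus = [
    "catch the exception and log the error".split(),
    "catch the error and log the exception".split(),
    "parse the json and return the string".split(),
]
vecs = cooccurrence_vectors(corpus)
```

Words used in similar contexts ("exception" and "error" above) end up with similar vectors and hence a high similarity score.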
@InProceedings{ICSECompanion14p568,
author = {Yuan Tian and David Lo and Julia Lawall},
title = {SEWordSim: Software-Specific Word Similarity Database},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {568--571},
doi = {},
year = {2014},
}
BOAT: An Experimental Platform for Researchers to Comparatively and Reproducibly Evaluate Bug Localization Techniques
Xinyu Wang, David Lo, Xin Xia, Xingen Wang, Pavneet Singh Kochhar, Yuan Tian, Xiaohu Yang, Shanping Li, Jianling Sun, and Bo Zhou
(Zhejiang University, China; Singapore Management University, Singapore)
Bug localization refers to the process of identifying the source code files that contain defects from descriptions of these defects, which are typically contained in bug reports. Many bug localization techniques have been proposed in the literature. However, it is often hard to compare these techniques since different evaluation datasets are used. At times the datasets are not made publicly available, and thus it is difficult to reproduce reported results. Furthermore, some techniques are only evaluated on small datasets, and thus it is not clear whether the results are generalizable. Thus, there is a need for a platform that allows various techniques to be compared with one another on a common pool containing a large number of bug reports with known defective source code files. In this paper, we address this need by proposing our Bug lOcalization experimental plATform (BOAT). BOAT is an extensible web application that contains thousands of bug reports with known defective source code files. Researchers can create accounts in BOAT, upload executables of their bug localization techniques, and see how these techniques perform in comparison with techniques uploaded by other researchers, with respect to some standard evaluation measures. BOAT is already preloaded with several bug localization techniques, so researchers can directly compare their newly proposed techniques against these existing ones. BOAT has been available online since October 2013, and researchers can access the platform at: http://www.vlis.zju.edu.cn/blp.
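Standard evaluation measures for bug localization commonly include top-k accuracy and mean reciprocal rank (MRR). The sketch below shows how such measures are computed; the file names are invented, and the paper should be consulted for the measures BOAT actually reports:

```python
def top_k_accuracy(ranked_lists, buggy_files, k=10):
    """Fraction of bug reports whose ranked file list hits a buggy file in the top k."""
    hits = sum(
        any(f in buggy for f in ranked[:k])
        for ranked, buggy in zip(ranked_lists, buggy_files)
    )
    return hits / len(ranked_lists)

def mean_reciprocal_rank(ranked_lists, buggy_files):
    """Average of 1/rank of the first buggy file retrieved (0 if never retrieved)."""
    total = 0.0
    for ranked, buggy in zip(ranked_lists, buggy_files):
        for rank, f in enumerate(ranked, start=1):
            if f in buggy:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

# Two hypothetical bug reports: each technique output is a ranked file list,
# and the ground truth is the set of files actually fixed for that report.
ranked = [["a.java", "b.java", "c.java"], ["x.java", "y.java"]]
truth = [{"b.java"}, {"z.java"}]
```

A platform like BOAT can run every uploaded technique over the same report pool and compare the resulting scores directly.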
@InProceedings{ICSECompanion14p572,
author = {Xinyu Wang and David Lo and Xin Xia and Xingen Wang and Pavneet Singh Kochhar and Yuan Tian and Xiaohu Yang and Shanping Li and Jianling Sun and Bo Zhou},
title = {BOAT: An Experimental Platform for Researchers to Comparatively and Reproducibly Evaluate Bug Localization Techniques},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {572--575},
doi = {},
year = {2014},
}
VMVM: Unit Test Virtualization for Java
Jonathan Bell and Gail Kaiser
(Columbia University, USA)
As software evolves and grows, its regression test suites tend to grow as well. When these test suites become too large, they can eventually reach a point where they are too lengthy to execute regularly. Previous work in Test Suite Minimization has reduced the number of tests in such suites by attempting to identify those that are redundant (e.g., by a coverage metric). Our approach to ameliorating the runtime of these large test suites is complementary, focusing instead on reducing the overhead of running each test, an approach that we call Unit Test Virtualization. This Tool Demonstration presents our implementation of Unit Test Virtualization, VMVM (pronounced "vroom-vroom"), and summarizes an evaluation of our implementation on 20 real-world Java applications, showing that it reduces test suite execution time by up to 97% (on average, 62%). A companion video to this demonstration is available online at https://www.youtube.com/watch?v=sRpqF3rJERI.
@InProceedings{ICSECompanion14p576,
author = {Jonathan Bell and Gail Kaiser},
title = {VMVM: Unit Test Virtualization for Java},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {576--579},
doi = {},
year = {2014},
}
ViVA: A Visualization and Analysis Tool for Distributed Event-Based Systems
Youn Kyu Lee, Jae young Bang, Joshua Garcia, and Nenad Medvidovic
(University of Southern California, USA)
Distributed event-based (DEB) systems are characterized by highly decoupled components that communicate by exchanging messages. This form of communication enables flexible and scalable system composition, but also reduces understandability and maintainability due to the indirect manner in which DEB components communicate. To tackle this problem, we present Visualizer for eVent-based Architectures (ViVA), a tool that effectively visualizes the large number of messages and dependencies that can be exchanged between components and the order in which the message exchanges occur. In this paper, we describe the design, implementation, and key features of ViVA. (Demo video at http://youtu.be/jHVwuR5AYgA)
@InProceedings{ICSECompanion14p580,
author = {Youn Kyu Lee and Jae young Bang and Joshua Garcia and Nenad Medvidovic},
title = {ViVA: A Visualization and Analysis Tool for Distributed Event-Based Systems},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {580--583},
doi = {},
year = {2014},
}
Cookbook: In Situ Code Completion using Edit Recipes Learned from Examples
John Jacobellis, Na Meng, and Miryung Kim
(University of Texas at Austin, USA)
Existing code completion engines leverage only pre-defined templates or match a set of user-defined APIs to complete the rest of a change. We propose a new code completion technique, called Cookbook, in which developers can define custom edit recipes (reusable templates of complex edit operations) by specifying change examples. It generates an abstract edit recipe that describes the most specific generalization of the demonstrated example program transformations. Given a library of edit recipes, it matches a developer's edit stream to recommend a suitable recipe that is capable of filling out the rest of the change, customized to the target. We evaluate Cookbook using 68 systematically changed methods drawn from the version history of Eclipse SWT. Cookbook is able to narrow down to the most suitable recipe in 75% of the cases. It takes 120 milliseconds on average to find the correct recipe, and the edits produced by the selected recipe are on average 82% similar to the developer's hand edits. This shows Cookbook's potential to speed up manual editing and to minimize developers' errors. Our demo video is available at https://www.youtube.com/watch?v=y4BNc8FT4RU.
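The idea of a "most specific generalization" of example transformations can be illustrated at the token level: keep what the examples agree on and abstract what differs into holes. This is a drastic simplification of Cookbook, which operates on structured edit operations; the hole marker and helper names are invented:

```python
def generalize(tokens_a, tokens_b, hole="?"):
    """Most specific generalization of two equal-length token sequences:
    tokens that agree are kept, disagreements become holes."""
    assert len(tokens_a) == len(tokens_b)
    return [a if a == b else hole for a, b in zip(tokens_a, tokens_b)]

def matches(recipe, tokens, hole="?"):
    """True if the token sequence is an instance of the recipe."""
    return len(recipe) == len(tokens) and all(
        r == hole or r == t for r, t in zip(recipe, tokens)
    )

# Two demonstrated example edits that differ only in the argument name.
ex1 = "list . add ( item )".split()
ex2 = "list . add ( element )".split()
recipe = generalize(ex1, ex2)  # the argument position becomes a hole
```

Matching a developer's in-progress edit against a recipe library then amounts to finding the recipes whose concrete parts agree with the edit stream.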
@InProceedings{ICSECompanion14p584,
author = {John Jacobellis and Na Meng and Miryung Kim},
title = {Cookbook: In Situ Code Completion using Edit Recipes Learned from Examples},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {584--587},
doi = {},
year = {2014},
}
Atlas: A New Way to Explore Software, Build Analysis Tools
Tom Deering, Suresh Kothari, Jeremias Sauceda, and Jon Mathews
(Iowa State University, USA; EnSoft, USA)
Atlas is a new software analysis platform from EnSoft Corp. Atlas decouples the domain-specific analysis goal from its underlying mechanism by splitting analysis into two distinct phases. In the first phase, polynomial-time static analyzers index the software AST, building a rich graph database. In the second phase, users can explore the graph directly or run custom analysis scripts written using a convenient API. These features make Atlas ideal for both interaction and automation. In this paper, we describe the motivation, design, and use of Atlas. We present validation case studies, including the verification of safe synchronization of the Linux kernel, and the detection of malware in Android applications. Our ICSE 2014 demo explores the comprehension and malware detection use cases.
Video: http://youtu.be/cZOWlJ-IO0k
@InProceedings{ICSECompanion14p588,
author = {Tom Deering and Suresh Kothari and Jeremias Sauceda and Jon Mathews},
title = {Atlas: A New Way to Explore Software, Build Analysis Tools},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {588--591},
doi = {},
year = {2014},
}
Teamscale: Software Quality Control in Real-Time
Lars Heinemann, Benjamin Hummel, and Daniela Steidl
(CQSE, Germany)
When large software systems evolve, the quality of source code is essential for successful maintenance. Controlling code quality continuously requires adequate tool support. Current quality analysis tools operate in batch-mode and run up to several hours for large systems, which hampers the integration of quality control into daily development. In this paper, we present the incremental quality analysis tool Teamscale, providing feedback to developers within seconds after a commit and thus enabling real-time software quality control. We evaluated the tool within a development team of a German insurance company. A video demonstrates our tool: http://www.youtube.com/watch?v=nnuqplu75Cg.
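The incremental-analysis idea, reanalyzing only what a commit actually changed, can be sketched by caching findings per file-content hash. This is an illustrative simplification, not Teamscale's implementation; the over-long-line check is a stand-in for real quality analyses:

```python
import hashlib

def analyze_file(source):
    """Toy analysis: flag line numbers longer than 80 characters."""
    return [i for i, line in enumerate(source.splitlines(), 1) if len(line) > 80]

class IncrementalAnalyzer:
    def __init__(self):
        self.cache = {}  # content hash -> cached findings

    def analyze_commit(self, files):
        """files: {path: source}. Returns (findings per path, files reanalyzed);
        only content not seen before is analyzed again."""
        results, reanalyzed = {}, 0
        for path, source in files.items():
            key = hashlib.sha1(source.encode()).hexdigest()
            if key not in self.cache:
                self.cache[key] = analyze_file(source)
                reanalyzed += 1
            results[path] = self.cache[key]
        return results, reanalyzed

analyzer = IncrementalAnalyzer()
_, n1 = analyzer.analyze_commit({"A.java": "x = 1;", "B.java": "y" * 100})
findings2, n2 = analyzer.analyze_commit({"A.java": "x = 2;", "B.java": "y" * 100})
```

Because a typical commit touches only a few files, the per-commit cost stays proportional to the change rather than to the whole system, which is what makes feedback within seconds plausible.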
@InProceedings{ICSECompanion14p592,
author = {Lars Heinemann and Benjamin Hummel and Daniela Steidl},
title = {Teamscale: Software Quality Control in Real-Time},
booktitle = {Proc.\ ICSE Companion},
publisher = {ACM},
pages = {592--595},
doi = {},
year = {2014},
}