
33rd International Conference on Software Engineering, May 21–28, 2011, Waikiki, Honolulu, HI, USA

ICSE 2011 – Proceedings







Technical / Research Track

Testing the Waters I

A Practical Guide for Using Statistical Tests to Assess Randomized Algorithms in Software Engineering
Andrea Arcuri and Lionel C. Briand
(Simula Research Laboratory, Norway)
Randomized algorithms have been used to successfully address many different types of software engineering problems. This type of algorithm employs a degree of randomness as part of its logic. Randomized algorithms are useful for difficult problems where a precise solution cannot be derived in a deterministic way within reasonable time. However, randomized algorithms produce different results on every run when applied to the same problem instance. It is hence important to assess the effectiveness of randomized algorithms by collecting data from a large enough number of runs. The use of rigorous statistical tests is then essential to provide support to the conclusions derived by analyzing such data. In this paper, we provide a systematic review of the use of randomized algorithms in selected software engineering venues in 2009. Its goal is not to perform a complete survey but to obtain a representative snapshot of current practice in software engineering research. We show that randomized algorithms are used in a significant percentage of papers but that, in most cases, randomness is not properly accounted for. This casts doubt on the validity of most empirical results assessing randomized algorithms. There are numerous statistical tests, based on different assumptions, and it is not always clear when and how to use these tests. We hence provide practical guidelines to support empirical research on randomized algorithms in software engineering.
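The kind of guideline this paper gives can be made concrete with a small sketch. The Vargha-Delaney A12 statistic is a standard non-parametric effect size for comparing two sets of runs of randomized algorithms; the class and method names below are ours, a minimal illustration rather than code from the paper:

```java
import java.util.Arrays;

// Illustrative computation of the Vargha-Delaney A12 effect size for
// comparing two randomized algorithms. A12 estimates the probability
// that a run of algorithm A yields a larger value than a run of B
// (0.5 means no difference).
class EffectSize {

    // A12 = (R1/m - (m+1)/2) / n, where R1 is the rank sum of the first
    // sample in the combined ranking (ties receive average ranks),
    // m = |a| and n = |b|.
    static double a12(double[] a, double[] b) {
        int m = a.length, n = b.length;
        double[] sorted = new double[m + n];
        System.arraycopy(a, 0, sorted, 0, m);
        System.arraycopy(b, 0, sorted, m, n);
        Arrays.sort(sorted);
        double r1 = 0.0;
        for (double v : a) {
            // average 1-based rank of v in the combined sample
            int lo = lowerBound(sorted, v), hi = upperBound(sorted, v);
            r1 += (lo + 1 + hi) / 2.0;
        }
        return (r1 / m - (m + 1) / 2.0) / n;
    }

    private static int lowerBound(double[] xs, double v) {
        int i = 0;
        while (i < xs.length && xs[i] < v) i++;
        return i;
    }

    private static int upperBound(double[] xs, double v) {
        int i = 0;
        while (i < xs.length && xs[i] <= v) i++;
        return i;
    }

    public static void main(String[] args) {
        // B strictly dominates A, so A12 = 0; the reverse gives 1.
        System.out.println(a12(new double[]{1, 2, 3}, new double[]{4, 5, 6})); // 0.0
        System.out.println(a12(new double[]{4, 5, 6}, new double[]{1, 2, 3})); // 1.0
    }
}
```

An effect size like this is typically reported alongside a significance test (e.g. a Mann-Whitney U test) over a large number of runs, rather than comparing single runs.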
aComment: Mining Annotations from Comments and Code to Detect Interrupt Related Concurrency Bugs
Lin Tan, Yuanyuan Zhou, and Yoann Padioleau
(University of Waterloo, Canada; UC San Diego, USA; Facebook Inc., USA)
Concurrency bugs in an operating system (OS) are detrimental as they can cause the OS to fail and affect all applications running on top of the OS. Detecting OS concurrency bugs is challenging due to the complexity of OS synchronization, particularly in the presence of the OS-specific interrupt context. Existing dynamic concurrency bug detection techniques are designed for user-level applications and cannot be applied to operating systems. To detect OS concurrency bugs, we propose a new type of annotation – interrupt-related annotations – and generate 96,821 such annotations for the Linux kernel with little manual effort. These annotations have been used to automatically detect 9 real OS concurrency bugs (7 of which were previously unknown). Two of the key techniques that make the above contributions possible are: (1) using a hybrid approach to extract annotations from both code and comments written in natural language to achieve better coverage and accuracy in annotation extraction and bug detection; and (2) automatically propagating annotations to caller functions to improve annotation coverage and bug detection. These two techniques are general and can be applied to non-OS code, to code written in other programming languages such as Java, and to extracting other types of specifications.
Camouflage: Automated Anonymization of Field Data
James Clause and Alessandro Orso
(University of Delaware, USA; Georgia Institute of Technology, USA)
Privacy and security concerns have adversely affected the usefulness of many types of techniques that leverage information gathered from deployed applications. To address this issue, we present an approach for automatically anonymizing failure-inducing inputs that builds on a previously developed technique. Given an input I that causes a failure f, our approach generates an anonymized input I' that is different from I but still causes f. I' can thus be sent to developers to enable them to debug f without having to know I. We implemented our approach in a prototype tool, Camouflage, and performed an extensive empirical evaluation where we applied Camouflage to a large set of failure-inducing inputs for several real applications. The results of the evaluation are promising, as they show that Camouflage is both practical and effective at generating anonymized inputs; for the inputs that we considered, I and I' shared no sensitive information. The results also show that our approach can outperform the general technique it extends.

Surfing the Dependability Wave

A Lightweight Code Analysis and its Role in Evaluation of a Dependability Case
Joseph P. Near, Aleksandar Milicevic, Eunsuk Kang, and Daniel Jackson
(Massachusetts Institute of Technology, USA)
A dependability case is an explicit, end-to-end argument, based on concrete evidence, that a system satisfies a critical property. We report on a case study constructing a dependability case for the control software of a medical device. The key novelty of our approach is a lightweight code analysis that generates a list of side conditions that correspond to assumptions to be discharged about the code and the environment in which it executes. This represents an unconventional trade-off between, at one extreme, more ambitious analyses that attempt to discharge all conditions automatically (but which cannot even in principle handle environmental assumptions), and at the other, flow- or context-insensitive analyses that require more user involvement. The results of the analysis suggested a variety of ways in which the dependability of the system might be improved.
Towards Quantitative Software Reliability Assessment in Incremental Development Processes
Tadashi Dohi and Takaji Fujiwara
(Hiroshima University, Japan; Fujitsu Quality Laboratory, Japan)
Iterative and incremental development is becoming a major development process model in industry, and allows for a good deal of parallelism between development and testing. In this paper we develop a quantitative software reliability assessment method for incremental development processes, based on the familiar non-homogeneous Poisson processes. More specifically, we utilize the software metrics observed in each incremental development and testing phase, and estimate the associated software reliability measures. In a numerical example with real data from an incremental development project, it is shown that the estimate of software reliability with a specific model can take a realistic value, and that the reliability growth phenomenon can be observed even in the incremental development scheme.
The Impact of Fault Models on Software Robustness Evaluations
Stefan Winter, Constantin Sârbu, Neeraj Suri, and Brendan Murphy
(TU Darmstadt, Germany; Microsoft Research, UK)
Following the design and in-lab testing of software, the evaluation of its resilience to actual operational perturbations in the field is a key validation need. Software-implemented fault injection (SWIFI) is a widely used approach for evaluating the robustness of software components. Recent research [24, 18] indicates that the selection of the applied fault model has considerable influence on the results of SWIFI-based evaluations, thereby raising the question of how to select appropriate fault models (i.e., ones that provide justified robustness evidence). This paper proposes several metrics for comparatively evaluating fault models' abilities to reveal robustness vulnerabilities. It demonstrates their application in the context of OS device drivers by investigating the influence (and relative utility) of four commonly used fault models, i.e., bit flips (in function parameters and in binaries), data-type-dependent parameter corruptions, and parameter fuzzing. We assess the efficiency of these models at detecting robustness vulnerabilities during the SWIFI evaluation of a real embedded operating system kernel and discuss application guidelines for our metrics alongside.

Refactoring Your Lei I

Transformation for Class Immutability
Fredrik Kjolstad, Danny Dig, Gabriel Acevedo, and Marc Snir
(University of Illinois at Urbana-Champaign, USA)
It is common for object-oriented programs to have both mutable and immutable classes. Immutable classes simplify programming because the programmer does not have to reason about side-effects. Sometimes programmers write immutable classes from scratch; at other times they transform mutable classes into immutable ones. To transform a mutable class, programmers must find all methods that mutate its transitive state and all objects that can enter or escape the state of the class. These analyses are non-trivial and the rewriting is tedious. Fortunately, this process can be automated. We present an algorithm and a tool, Immutator, that enables the programmer to safely transform a mutable class into an immutable class. Two case studies and one controlled experiment show that Immutator is useful. It (i) reduces the burden of making classes immutable, (ii) is fast enough to be used interactively, and (iii) is much safer than manual transformations.
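As a hedged illustration of the kind of rewrite such a tool automates (the class and method names are ours, not taken from the paper), a mutable class becomes immutable by making its fields final, removing mutators, and replacing them with methods that return a new instance:

```java
// Hypothetical "after" state of an immutability transformation.
// The mutable original might have looked like:
//
//   class Point { double x, y; void moveBy(double dx, double dy) { x += dx; y += dy; } }
//
// The immutable version has final fields, no setters, and the mutator
// moveBy becomes a functional counterpart that returns a fresh object.
final class ImmutablePoint {
    private final double x;
    private final double y;

    ImmutablePoint(double x, double y) {
        this.x = x;
        this.y = y;
    }

    double x() { return x; }
    double y() { return y; }

    // Instead of mutating this object, construct and return a new one.
    ImmutablePoint movedBy(double dx, double dy) {
        return new ImmutablePoint(x + dx, y + dy);
    }
}
```

The non-trivial part the abstract alludes to is verifying that no mutable internal state (e.g. an array or collection field) can escape; for such fields a transformation must also introduce defensive copies.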
Refactoring Java Programs for Flexible Locking
Max Schäfer, Manu Sridharan, Julian Dolby, and Frank Tip
(Oxford University, UK; IBM Research Watson, USA)
Recent versions of the Java standard library offer flexible locking constructs that go beyond the language’s built-in monitor locks in terms of features, and that can be fine-tuned to suit specific application scenarios. Under certain conditions, the use of these constructs can improve performance significantly, by reducing lock contention. However, the code transformations needed to convert between locking constructs are non-trivial, and great care must be taken to update lock usage throughout the program consistently. We present Relocker, an automated tool that assists programmers with refactoring synchronized blocks into ReentrantLocks and ReadWriteLocks, to make exploring the performance tradeoffs among these constructs easier. In experiments on a collection of real-world Java applications, Relocker was able to refactor over 80% of built-in monitors into ReentrantLocks. Additionally, in most cases the tool could automatically infer the same ReadWriteLock usage that programmers had previously introduced manually.
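The refactoring in question can be sketched as follows; this is our own minimal example of the transformation's shape, not code from the paper. A class whose methods were all `synchronized` is rewritten to use a `ReentrantReadWriteLock`, so that read-only operations no longer contend on a single monitor:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Before the refactoring, both methods would have been declared
// synchronized, serializing readers and writers alike. After it,
// readers share the read lock and only writers take the write lock.
class RefactoredCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    String get(String key) {
        lock.readLock().lock();
        try {
            return map.get(key);
        } finally {
            lock.readLock().unlock();   // unlock in finally, on all paths
        }
    }

    void put(String key, String value) {
        lock.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

The care the abstract mentions shows up even in this toy example: every exit path, including exceptional ones, must release the lock, which is why the try/finally shape is mandatory once the implicit monitor is replaced by an explicit lock object.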
Refactoring Pipe-like Mashups for End-User Programmers
Kathryn T. Stolee and Sebastian Elbaum
(University of Nebraska-Lincoln, USA)
Mashups are becoming increasingly popular as end users are able to easily access, manipulate, and compose data from many web sources. We have observed, however, that mashups tend to suffer from deficiencies that propagate as mashups are reused. To address these deficiencies, we would like to bring some of the benefits of software engineering techniques to the end users creating these programs. In this work, we focus on identifying code smells indicative of the deficiencies we observed in web mashups programmed in the popular Yahoo! Pipes environment. Through an empirical study, we explore the impact of those smells on end-user programmers and observe that users generally prefer mashups without smells. We then introduce refactorings targeting those smells, reducing the complexity of the mashup programs, increasing their abstraction, updating broken data sources and dated components, and standardizing their structures to fit the community development patterns. Our assessment of a large sample of mashups shows that smells are present in 81% of them and that the proposed refactorings can reduce the number of smelly mashups to 16%, illustrating the potential of refactoring to support the thousands of end users programming mashups.

Comprehending the Drift I

Mining Message Sequence Graphs
Sandeep Kumar, Siau Cheng Khoo, Abhik Roychoudhury, and David Lo
(National University of Singapore, Singapore; Singapore Management University, Singapore)
Dynamic specification mining involves discovering software behavior from traces for the purpose of program comprehension and bug detection. However, mining program behavior from execution traces is difficult for concurrent/distributed programs. Specifically, the inherent partial order relationships among events occurring across processes pose a big challenge to specification mining. In this paper, we propose a framework for mining partial orders so as to understand concurrent program behavior. Our miner takes in a set of concurrent program traces, and produces a message sequence graph (MSG) to represent the concurrent program behavior. An MSG is a graph whose nodes are partial orders, represented as Message Sequence Charts. Mining an MSG allows us to understand concurrent program behaviors since the nodes of the MSG depict important “phases” or “interaction snippets” involving several concurrently executing processes. To demonstrate the power of this technique, we conducted experiments on mining behaviors of several fairly complex distributed systems. We show that our miner can produce the corresponding MSGs with both high precision and recall.
Automatically Detecting and Describing High Level Actions within Methods
Giriprasad Sridhara, Lori Pollock, and K. Vijay-Shanker
(University of Delaware, USA)
One approach to easing program comprehension is to reduce the amount of code that a developer has to read. Describing the high level abstract algorithmic actions associated with code fragments using succinct natural language phrases potentially enables a newcomer to focus on fewer and more abstract concepts when trying to understand a given method. Unfortunately, such descriptions are typically missing because it is tedious to create them manually. We present an automatic technique for identifying code fragments that implement high level abstractions of actions and expressing them as a natural language description. Our studies of 1000 Java programs indicate that our heuristics for identifying code fragments implementing high level actions are widely applicable. Judgements of our generated descriptions by 15 experienced Java programmers strongly suggest that indeed they view the fragments that we identify as representing high level actions and our synthesized descriptions accurately express the abstraction.
Portfolio: Finding Relevant Functions and Their Usages
Collin McMillan, Mark Grechanik, Denys Poshyvanyk, Qing Xie, and Chen Fu
(College of William and Mary, USA; Accenture Technology Lab, USA)
Different studies show that programmers are more interested in finding definitions of functions and their uses than variables, statements, or arbitrary code fragments [30, 29, 31]. Therefore, programmers require support in finding relevant functions and determining how those functions are used. Unfortunately, existing code search engines do not provide enough of this support to developers, thus reducing the effectiveness of code reuse. We provide this support to programmers in a code search system called Portfolio that retrieves and visualizes relevant functions and their usages. We have built Portfolio using a combination of models that address the surfing behavior of programmers and the sharing of related concepts among functions. We conducted an experiment with 49 professional programmers to compare Portfolio to Google Code Search and Koders using a standard methodology. The results show with strong statistical significance that users find more relevant functions with higher precision with Portfolio than with Google Code Search and Koders.

Debugging the Surf

Angelic Debugging
Satish Chandra, Emina Torlak, Shaon Barman, and Rastislav Bodik
(IBM Research, USA; UC Berkeley, USA)
Software ships with known bugs because it is expensive to pinpoint and fix the bug exposed by a failing test. To reduce the cost of bug identification, we locate expressions that are likely causes of bugs and thus candidates for repair. Our symbolic method approximates an ideal approach to fixing bugs mechanically, which is to search the space of all edits to the program for one that repairs the failing test without breaking any passing test. We approximate the expensive ideal of exploring syntactic edits by instead computing the set of values whose substitution for the expression corrects the execution. We observe that an expression is a repair candidate if it can be replaced with a value that fixes a failing test and in each passing test, its value can be changed to another value without breaking the test. The latter condition makes the expression flexible in that it permits multiple values. The key observation is that the repair of a flexible expression is less likely to break a passing test. The method is called angelic debugging because the values are computed by angelically nondeterministic statements. We implemented the method on top of the Java PathFinder model checker. Our experiments with this technique show promise of its applicability in speeding up program debugging.
Static Extraction of Program Configuration Options
Ariel S. Rabkin and Randy Katz
(UC Berkeley, USA)
Many programs use a key-value model for configuration options. We examined how this model is used in seven open source Java projects totaling over a million lines of code. We present a static analysis that extracts a list of configuration options for a program. Our analysis finds 95% of the options read by the programs in our sample, making it more complete than existing documentation. Most configuration options we saw fall into a small number of types. A dozen types cover 90% of options. We present a second analysis that exploits this fact, inferring a type for most options. Together, these analyses enable more visibility into program configuration, helping reduce the burden of configuration documentation and configuration debugging.
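The key-value configuration idiom the analysis targets looks roughly like the sketch below (the class and option names are ours, used only for illustration). Because options are read by string name through a common API, a static analysis can recover the option names from the call sites, and often a type from how the returned string is used:

```java
import java.util.Properties;

// The kind of code the paper's extraction analysis examines: each
// getProperty call site names an option as a string constant, and the
// use of the result hints at its type (numeric, boolean, plain string).
class ServerConfig {
    private final Properties props;

    ServerConfig(Properties props) { this.props = props; }

    // An analysis would extract the option name "server.port" here and
    // could infer a numeric type from the Integer.parseInt on the result.
    int port() {
        return Integer.parseInt(props.getProperty("server.port", "8080"));
    }

    // Extracted as option "server.host", with a plain string type.
    String host() {
        return props.getProperty("server.host", "localhost");
    }
}
```

A concrete payoff of such extraction is documentation completeness: the set of string constants flowing into the read calls is, by construction, the set of options the program can actually consume, including ones the hand-written manual missed.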
An Empirical Study of Build Maintenance Effort
Shane McIntosh, Bram Adams, Thanh H. D. Nguyen, Yasutaka Kamei, and Ahmed E. Hassan
(Queen's University, Canada)
The build system of a software project is responsible for transforming source code and other development artifacts into executable programs and deliverables. Similar to source code, build system specifications require maintenance to cope with newly implemented features, changes to imported Application Program Interfaces (APIs), and source code restructuring. In this paper, we mine the version histories of one proprietary and nine open source projects of different sizes and domains to analyze the overhead that build maintenance imposes on developers. We split our analysis into two dimensions: (1) Build Coupling, i.e., how frequently source code changes require build changes, and (2) Build Ownership, i.e., the proportion of developers responsible for build maintenance. Our results indicate that, despite the difference in scale, the build system churn rate is comparable to that of the source code, and build changes induce more relative churn on the build system than source code changes induce on the source code. Furthermore, build maintenance yields up to a 27% overhead on source code development and a 44% overhead on test development. Up to 79% of source code developers and 89% of test code developers are significantly impacted by build maintenance, yet investment in build experts can reduce the proportion of impacted developers to 22% of source code developers and 24% of test code developers.

Empirical Luau I

An Empirical Investigation into the Role of API-Level Refactorings during Software Evolution
Miryung Kim, Dongxiang Cai, and Sunghun Kim
(University of Texas at Austin, USA; Hong Kong University of Science and Technology, China)
It is widely believed that refactoring improves software quality and programmer productivity by making it easier to maintain and understand software systems. However, the role of refactorings has not been systematically investigated using fine-grained evolution history. We quantitatively and qualitatively studied API-level refactorings and bug fixes in three large open source projects, totaling 26,523 revisions of evolution. The study found several surprising results: One, there is an increase in the number of bug fixes after API-level refactorings. Two, the time taken to fix bugs is shorter after API-level refactorings than before. Three, a large number of refactoring revisions include bug fixes at the same time or are related to later bug fix revisions. Four, API-level refactorings occur more frequently before than after major software releases. These results call for re-thinking refactoring’s true benefits. Furthermore, frequent floss refactoring mistakes observed in this study call for new software engineering tools to support safe application of refactoring and behavior modifying edits together.
Factors Leading to Integration Failures in Global Feature-Oriented Development: An Empirical Analysis
Marcelo Cataldo and James D. Herbsleb
Feature-driven software development is a novel approach that has grown in popularity over the past decade. Researchers and practitioners alike have argued that numerous benefits could be garnered from adopting a feature-driven development approach. However, those persuasive arguments have not been matched with supporting empirical evidence. Moreover, developing software systems around features involves new technical and organizational elements that could have significant implications for outcomes such as software quality. This paper presents an empirical analysis of a large-scale project that implemented 1195 features in a software system. We examined the impact that technical attributes of product features, attributes of the feature teams and cross-feature interactions have on software integration failures. Our results show that technical factors such as the nature of component dependencies and organizational factors such as the geographic dispersion of the feature teams and the role of the feature owners had complementary impact suggesting their independent and important role in terms of software quality. Furthermore, our analyses revealed that cross-feature interactions, measured as the number of architectural dependencies between two product features, are a major driver of integration failures. The research and practical implications of our results are discussed.
Assessing Programming Language Impact on Development and Maintenance: A Study on C and C++
Pamela Bhattacharya and Iulian Neamtiu
(UC Riverside, USA)
Billions of dollars are spent every year for building and maintaining software. To reduce these costs we must identify the key factors that lead to better software and more productive development. One such key factor, and the focus of our paper, is the choice of programming language. Existing studies that analyze the impact of choice of programming language suffer from several deficiencies with respect to methodology and the applications they consider. For example, they consider applications built by different teams in different languages, hence fail to control for developer competence, or they consider small-sized, infrequently-used, short-lived projects. We propose a novel methodology which controls for development process and developer competence, and quantifies how the choice of programming language impacts software quality and developer productivity. We conduct a study and statistical analysis on a set of long-lived, widely-used, open source projects—Firefox, Blender, VLC, and MySQL. The key novelties of our study are: (1) we only consider projects which have considerable portions of development in two languages, C and C++, and (2) a majority of developers in these projects contribute to both C and C++ code bases. We found that using C++ instead of C results in improved software quality and reduced maintenance effort, and that code bases are shifting from C to C++. Our methodology lays a solid foundation for future studies on comparative advantages of particular programming languages.

Far-out Surfware Engineering

On-demand Feature Recommendations Derived from Mining Public Product Descriptions
Horatiu Dumitru, Marek Gibiec, Negar Hariri, Jane Cleland-Huang, Bamshad Mobasher, Carlos Castro-Herrera, and Mehdi Mirakhorli
(DePaul University, USA)
We present a recommender system that models and recommends product features for a given domain. Our approach mines product descriptions from publicly available online specifications, utilizes text mining and a novel incremental diffusive clustering algorithm to discover domain-specific features, generates a probabilistic feature model that represents commonalities, variants, and cross-category features, and then uses association rule mining and the k-Nearest Neighbor machine learning strategy to generate product-specific feature recommendations. Our recommender system supports the relatively labor-intensive task of domain analysis, potentially increasing opportunities for re-use, reducing time-to-market, and delivering more competitive software products. The approach is empirically validated against 20 different product categories using thousands of product descriptions mined from a repository of free software applications.
Inferring Better Contracts
Yi Wei, Carlo A. Furia, Nikolay Kazmin, and Bertrand Meyer
(ETH Zurich, Switzerland)
Considerable progress has been made towards automatic support for one of the principal techniques available to enhance program reliability: equipping programs with extensive contracts. The results of current contract inference tools are still often unsatisfactory in practice, especially for programmers who already apply some kind of basic Design by Contract discipline, since the inferred contracts tend to be simple assertions—the very ones that programmers find easy to write. We present new, completely automatic inference techniques and a supporting tool, which take advantage of the presence of simple programmer-written contracts in the code to infer sophisticated assertions, involving for example implication and universal quantification. Applied to a production library of classes covering standard data structures such as linked lists, arrays, stacks, queues and hash tables, the tool is able, entirely automatically, to infer 75% of the complete contracts—contracts yielding the full formal specification of the classes—with very few redundant or irrelevant clauses.

Riding the Design Wave I

LIME: A Framework for Debugging Load Imbalance in Multi-threaded Execution
Jungju Oh, Christopher J. Hughes, Guru Venkataramani, and Milos Prvulovic
(Georgia Institute of Technology, USA; Intel Corporation, USA; George Washington University, USA)
With the ubiquity of multi-core processors, software must make effective use of multiple cores to obtain good performance on modern hardware. One of the biggest roadblocks to this is load imbalance, or the uneven distribution of work across cores. We propose LIME, a framework for analyzing parallel programs and reporting the cause of load imbalance in application source code. This framework uses statistical techniques to pinpoint load imbalance problems stemming from both control flow issues (e.g., unequal iteration counts) and interactions between the application and hardware (e.g., unequal cache miss counts). We evaluate LIME on applications from widely used parallel benchmark suites, and show that LIME accurately reports the causes of load imbalance, their nature and origin in the code, and their relative importance.
Synthesis of Live Behaviour Models for Fallible Domains
Nicolás D'Ippolito, Víctor Braberman, Nir Piterman, and Sebastián Uchitel
(Imperial College London, UK; Universidad de Buenos Aires, Argentina; University of Leicester, UK)
We revisit synthesis of live controllers for event-based operational models. We remove one aspect of an idealised problem domain by allowing failures of controller actions to be integrated into the environment model. Classical treatment of failures through strong fairness leads to a very high computational complexity and may be insufficient for many interesting cases. We identify a realistic stronger fairness condition on the behaviour of failures. We show how to construct controllers satisfying liveness specifications under these fairness conditions. The resulting controllers exhibit the only possible behaviour in face of the given topology of failures: they keep retrying and never give up. We then identify well-structuredness conditions on the environment. These conditions ensure that the resulting controller will be eager to satisfy its goals. Furthermore, for environments that satisfy these conditions and have an underlying probabilistic behaviour, the measure of traces that satisfy our fairness condition is 1, giving a characterisation of the kind of domains in which the approach is applicable.
Coverage Guided Systematic Concurrency Testing
Chao Wang, Mahmoud Said, and Aarti Gupta
(NEC Laboratories America, USA; Western Michigan University, USA)
Shared-memory multi-threaded programs are notoriously difficult to test, and because of the often astronomically large number of thread schedules, testing all possible interleavings is practically infeasible. In this paper we propose a coverage-guided systematic testing framework, where we use dynamically learned ordering constraints over shared object accesses to select only high-risk interleavings for test execution. An interleaving is high-risk if it has not been covered by the ordering constraints, meaning that it contains concurrency scenarios that have not been tested. Our method consists of two components. First, we utilize dynamic information collected from good test runs to learn ordering constraints over the memory-accessing and synchronization statements. These ordering constraints are treated as likely invariants since they are respected by all the tested runs. Second, during the process of systematic testing, we use the learned ordering constraints to guide the selection of interleavings for future test execution. Our experiments on public-domain multithreaded C/C++ programs show that, by focusing on only the high-risk interleavings rather than enumerating all possible interleavings, our method can increase the coverage of important concurrency scenarios with a reasonable cost and detect most of the concurrency bugs in practice.

Program Surfing I

Inference of Field Initialization
Fausto Spoto and Michael D. Ernst
(Università di Verona, Italy; University of Washington, USA)
A raw object is partially initialized, with only some fields set to legal values. It may violate its object invariants, such as that a given field is non-null. Programs often manipulate partially-initialized objects, but they must do so with care. Furthermore, analyses must be aware of field initialization. For instance, proving the absence of null pointer dereferences or of division by zero, or proving that object invariants are satisfied, requires information about initialization. We present a static analysis that infers a safe over-approximation of the program variables, fields, and array elements that, at run time, might hold raw objects. Our formalization is flow-sensitive and interprocedural, and it considers the exception flow in the analyzed program. We have proved the analysis sound and implemented it in a tool called Julia that computes initialization and nullness information. We have evaluated Julia on over 160K lines of code. We have compared its output to manually-written initialization and nullness information, and to an independently-written type-checking tool that checks initialization and nullness. Julia's output is accurate and useful both to programmers and to static analyses.
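A classic way a raw object arises in Java, shown here as our own minimal example (not code from the paper), is a superclass constructor calling an overridable method: the subclass field initializers have not yet run, so the override observes a partially-initialized object:

```java
// During Base's constructor, the Derived part of the object is still
// uninitialized ("raw"): Derived's field initializers run only after
// the superclass constructor returns.
class Base {
    Base() { onInit(); }        // dynamic dispatch into a raw object
    void onInit() { }
}

class Derived extends Base {
    static String observedDuringConstruction;
    String name = "ready";      // runs only AFTER Base's constructor

    @Override
    void onInit() {
        observedDuringConstruction = name;  // name is still null here
    }
}
```

An initialization-aware analysis like the one described would report that `name` may be null inside `onInit`, even though the field is declared with a non-null initializer, precisely because `this` is raw at that program point.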
Taming Reflection: Aiding Static Analysis in the Presence of Reflection and Custom Class Loaders
Eric Bodden, Andreas Sewe, Jan Sinschek, Hela Oueslati, and Mira Mezini
(TU Darmstadt, Germany; Center for Advanced Security Research Darmstadt, Germany)
Static program analyses and transformations for Java face many problems when analyzing programs that use reflection or custom class loaders: How can a static analysis know which reflective calls the program will execute? How can it get hold of classes that the program loads from remote locations or even generates on the fly? And if the analysis transforms classes, how can these classes be re-inserted into a program that uses custom class loaders? In this paper, we present TamiFlex, a tool chain that offers a partial but often effective solution to these problems. With TamiFlex, programmers can use existing static-analysis tools to produce results that are sound at least with respect to a set of recorded program runs. TamiFlex inserts runtime checks into the program that warn the user in case the program executes reflective calls that the analysis did not take into account. TamiFlex further allows programmers to re-insert offline-transformed classes into a program. We evaluate TamiFlex in two scenarios: benchmarking with the DaCapo benchmark suite and analysing large-scale interactive applications. For the latter, TamiFlex significantly improves code coverage of the static analyses, while for the former our approach even appears complete: the inserted runtime checks issue no warning. Hence, for the first time, TamiFlex enables sound static whole-program analyses on DaCapo. During this process, TamiFlex usually incurs less than 10% runtime overhead.
Patching Vulnerabilities with Sanitization Synthesis
Fang Yu, Muath Alkhalaf, and Tevfik Bultan
(National Chengchi University, Taiwan; UC Santa Barbara, USA)
We present automata-based static string analysis techniques that automatically generate sanitization statements for patching vulnerable web applications. Our approach consists of three phases: Given an attack pattern we first conduct a vulnerability analysis to identify if strings that match the attack pattern can reach the security-sensitive functions. Next, we compute vulnerability signatures that characterize all input strings that can exploit the discovered vulnerability. Given the vulnerability signatures, we then construct sanitization statements that 1) check if a given input matches the vulnerability signature and 2) modify the input in a minimal way so that the modified input does not match the vulnerability signature. Our approach is capable of generating relational vulnerability signatures (and corresponding sanitization statements) for vulnerabilities that are due to more than one input.

Developer Waves

Configuring Global Software Teams: A Multi-Company Analysis of Project Productivity, Quality, and Profits
Narayan Ramasubbu, Marcelo Cataldo, Rajesh Krishna Balan, and James D. Herbsleb
(Singapore Management University, Singapore; CMU, USA)
In this paper, we examined the impact of project-level configurational choices of globally distributed software teams on project productivity, quality, and profits. Our analysis used data from 362 projects of four different firms. These projects spanned a wide range of programming languages, application domains, and process choices, with development sites spread over 15 countries and 5 continents. Our analysis revealed fundamental tradeoffs among configurations optimized for productivity, quality, and/or profits. In particular, achieving higher levels of productivity and quality requires diametrically opposed configurational choices. In addition, creating imbalances in the expertise and personnel distribution of project teams significantly helps increase profit margins. However, a profit-oriented imbalance could also significantly affect productivity and/or quality outcomes. Analyzing these complex tradeoffs, we provide actionable managerial insights that can help software firms and their clients choose configurations that achieve desired project outcomes in globally distributed software development.
Does the Initial Environment Impact the Future of Developers?
Minghui Zhou and Audris Mockus
(Peking University, China; Ministry of Education, China; Avaya Labs Research, USA)
Software developers need to develop technical and social skills to be successful in large projects. We model the relative sociality of a developer as the ratio between the size of her communication network and the number of tasks she participates in. We obtain both measures from problem tracking systems. We use her workflow peer network to represent her social learning, and the issues she has worked on to represent her technical learning. Using three open source and three traditional projects, we investigate how the project environment, reflected by the sociality measure at the time a developer joins, affects her future participation. We find: a) the probability that a new developer will become a long-term, productive developer is highest when the project sociality is low; b) times of high sociality are associated with a higher intensity of new contributors joining the project; c) there are significant differences between the social learning trajectories of developers who join in low and in high sociality environments; d) open source and commercial projects exhibit a different relationship between a developer’s tenure and the project’s environment at the time she joins. These findings point out the importance of the initial environment in determining the future of developers and may lead to better training and learning strategies in software organizations.
Socio-Technical Developer Networks: Should We Trust Our Measurements?
Andrew Meneely and Laurie Williams
(North Carolina State University, USA)
Software development teams must be properly structured to provide effective collaboration to produce quality software. Over the last several years, social network analysis (SNA) has emerged as a popular method for studying the collaboration and organization of people working in large software development teams. Researchers have been modeling networks of developers based on socio-technical connections found in software development artifacts. Using these developer networks, researchers have proposed several SNA metrics that can predict software quality factors and describe the team structure. But do SNA metrics measure what they purport to measure? The objective of this research is to investigate if SNA metrics represent socio-technical relationships by examining if developer networks can be corroborated with developer perceptions. To measure developer perceptions, we developed an online survey that is personalized to each developer of a development team based on that developer’s SNA metrics. Developers answered questions about other members of the team, such as identifying their collaborators and the project experts. A total of 124 developers responded to our survey from three popular open source projects: the Linux kernel, the PHP programming language, and the Wireshark network protocol analyzer. Our results indicate that connections in the developer network are statistically associated with the collaborators whom the developers named. Our results substantiate that SNA metrics represent socio-technical relationships in open source development projects, while also clarifying how the developer network can be interpreted by researchers and practitioners.

Outrigger Models and Clones

Model Projection: Simplifying Models in Response to Restricting the Environment
Kelly Androutsopoulos, David Binkley, David Clark, Nicolas Gold, Mark Harman, Kevin Lano, and Zheng Li
(University College London, UK; Loyola University Maryland, USA; King's College London, UK)
This paper introduces Model Projection. Finite state models such as Extended Finite State Machines are being used in an ever increasing number of software engineering activities. Model projection facilitates model development by specializing models for a specific operating environment. A projection is useful in many design-level applications including specification reuse and property verification. The applicability of model projection rests upon three critical concerns: correctness, effectiveness, and efficiency, all of which are addressed in this paper. We introduce four related algorithms for model projection and prove each correct. We also present an empirical study of effectiveness and efficiency using ten models, including widely-studied benchmarks as well as industrial models. Results show that a typical projection includes about half of the states and a third of the transitions from the original model.
MeCC: Memory Comparison-based Clone Detector
Heejung Kim, Yungbum Jung, Sunghun Kim, and Kwangkeun Yi
(Seoul National University, South Korea; Hong Kong University of Science and Technology, China)
In this paper, we propose a new semantic clone detection technique by comparing programs’ abstract memory states, which are computed by a semantic-based static analyzer. Our experimental study using three large-scale open source projects shows that our technique can detect semantic clones that existing syntactic- or semantic-based clone detectors miss. Our technique can help developers identify inconsistent clone changes, find refactoring candidates, and understand software evolution related to semantic clones.
Frequency and Risks of Changes to Clones
Nils Göde and Rainer Koschke
(University of Bremen, Germany)
Code Clones—duplicated source fragments—are said to increase maintenance effort and to facilitate problems caused by inconsistent changes to identical parts. While this is certainly true for some clones and certainly not true for others, it is unclear how many clones are real threats to the system’s quality and need to be taken care of. Our analysis of clone evolution in mature software projects shows that most clones are rarely changed and the number of unintentional inconsistent changes to clones is small. We thus have to carefully select the clones to be managed, to avoid spending unnecessary effort on clones with no risk potential.

Surfer Model Checking

Symbolic Model Checking of Software Product Lines
Andreas Classen, Patrick Heymans, Pierre-Yves Schobbens, and Axel Legay
(University of Namur, Belgium; IRISA/INRIA Rennes, France; University of Liège, Belgium)
We study the problem of model checking software product line (SPL) behaviours against temporal properties. This is more difficult than for single systems because an SPL with n features yields up to 2^n individual systems to verify. As each individual verification suffers from state explosion, it is crucial to propose efficient formalisms and heuristics. We recently proposed featured transition systems (FTS), a compact representation for SPL behaviour, and defined algorithms for model checking FTS against linear temporal properties. Although they have been shown to outperform individual system verifications, they still face a state explosion problem as they enumerate and visit system states one by one. In this paper, we tackle this latter problem by using symbolic representations of the state space. This led us to consider computation tree logic (CTL), which is supported by the industry-strength symbolic model checker NuSMV. We first lay the foundations for symbolic SPL model checking by defining a feature-oriented version of CTL and its dedicated algorithms. We then describe an implementation that adapts the NuSMV language and tool infrastructure. Finally, we propose theoretical and empirical evaluations of our results. The benchmarks show that for certain properties, our algorithm is over a hundred times faster than model checking each system with the standard algorithm.
Verifying Multi-threaded Software using SMT-based Context-Bounded Model Checking
Lucas Cordeiro and Bernd Fischer
(University of Southampton, UK)
We describe and evaluate three approaches to model check multi-threaded software with shared variables and locks using bounded model checking based on Satisfiability Modulo Theories (SMT) and our modelling of the synchronization primitives of the Pthread library. In the lazy approach, we generate all possible interleavings and call the SMT solver on each of them individually, until we either find a bug, or have systematically explored all interleavings. In the schedule recording approach, we encode all possible interleavings into one single formula and then exploit the high speed of the SMT solvers. In the underapproximation and widening approach, we reduce the state space by abstracting the number of interleavings from the proofs of unsatisfiability generated by the SMT solvers. In all three approaches, we bound the number of context switches allowed among threads in order to reduce the number of interleavings explored. We implemented these approaches in ESBMC, our SMT-based bounded model checker for ANSI-C programs. Our experiments show that ESBMC can analyze larger problems and substantially reduce the verification time compared to state-of-the-art techniques that use iterative context-bounding algorithms or counter-example guided abstraction refinement.
Run-Time Efficient Probabilistic Model Checking
Antonio Filieri, Carlo Ghezzi, and Giordano Tamburrelli
(Politecnico di Milano, Italy)
Unpredictable changes continuously affect software systems and may have a severe impact on their quality of service, potentially jeopardizing the system’s ability to meet the desired requirements. Changes may occur in critical components of the system, clients’ operational profiles, requirements, or deployment environments. The adoption of software models and model checking techniques at run time may support automatic reasoning about such changes, detect harmful configurations, and potentially enable appropriate (self-)reactions. However, traditional model checking techniques and tools may not be simply applied as they are at run time, since they hardly meet the constraints imposed by on-the-fly analysis, in terms of execution time and memory occupation. This paper precisely addresses this issue and focuses on reliability models, given in terms of Discrete Time Markov Chains, and probabilistic model checking. It develops a mathematical framework for run-time probabilistic model checking that, given a reliability model and a set of requirements, statically generates a set of expressions, which can be efficiently used at run-time to verify system requirements. An experimental comparison of our approach with existing probabilistic model checkers shows its practical applicability in run-time verification.

Comprehending the Drift II

Non-Essential Changes in Version Histories
David Kawrykow and Martin P. Robillard
(McGill University, Canada)
Numerous techniques involve mining change data captured in software archives to assist engineering efforts, for example to identify components that tend to evolve together. We observed that important changes to software artifacts are sometimes accompanied by numerous non-essential modifications, such as local variable refactorings, or textual differences induced as part of a rename refactoring. We developed a tool-supported technique for detecting non-essential code differences in the revision histories of software systems. We used our technique to investigate code changes in over 24 000 change sets gathered from the change histories of seven long-lived open-source systems. We found that up to 15.5% of a system’s method updates were due solely to non-essential differences. We also report on numerous observations on the distribution of non-essential differences in change history and their potential impact on change-based analyses.
Aspect Recommendation for Evolving Software
Tung Thanh Nguyen, Hung Viet Nguyen, Hoan Anh Nguyen, and Tien N. Nguyen
(Iowa State University, USA)
Cross-cutting concerns are unavoidable and create difficulties in the development and maintenance of large-scale systems. In this paper, we present a novel approach that identifies certain groups of code units that potentially share some cross-cutting concerns and recommends them for creating and updating aspects. Those code units, called concern peers, are detected based on their similar interactions (similar calling relations in similar contexts, either internally or externally). The recommendation is applicable to both the aspectization of non-aspect-oriented programs (i.e. for aspect creation), and the evolution of aspect-oriented programs (i.e. for aspect updating). The empirical evaluation on several real-world software systems shows that our approach is scalable and provides useful recommendations.
Identifying Program, Test, and Environmental Changes That Affect Behaviour
Reid Holmes and David Notkin
(University of Waterloo, Canada; University of Washington, USA)
Developers evolve a software system by changing the program source code, by modifying its context by updating libraries or changing its configuration, and by improving its test suite. Any of these changes can cause differences in program behaviour. In general, program paths may appear or disappear between executions of two subsequent versions of a system. Some of these behavioural differences are expected by a developer; for example, executing new program paths is often precisely what is intended when adding a new test. Other behavioural differences may or may not be expected or benign. For example, changing an XML configuration file may cause a previously-executed path to disappear, which may or may not be expected and could be problematic. Furthermore, the degree to which a behavioural change might be problematic may only become apparent over time as the new behaviour interacts with other changes. We present an approach to identify specific program call dependencies where the programmer’s changes to the program source code, its tests, or its environment are not apparent in the system’s behaviour, or vice versa. Using a static and a dynamic call graph from each of two program versions, we partition dependencies based on their presence in each of the four graphs. Particular partitions contain dependencies that help a programmer develop insights about often subtle behavioural changes.

Testing the Waters II

Program Abstractions for Behaviour Validation
Guido de Caso, Víctor Braberman, Diego Garbervetsky, and Sebastián Uchitel
(Universidad de Buenos Aires, Argentina; Imperial College London, UK)

Programs, Tests, and Oracles: The Foundations of Testing Revisited
Matt Staats, Michael W. Whalen, and Mats P. E. Heimdahl
(University of Minnesota, USA)
In previous decades, researchers have explored the formal foundations of program testing. By exploring the foundations of testing largely separate from any specific method of testing, these researchers provided a general discussion of the testing process, including the goals, the underlying problems, and the limitations of testing. Unfortunately, a common, rigorous foundation has not been widely adopted in empirical software testing research, making it difficult to generalize and compare empirical research. We continue this foundational work, providing a framework intended to serve as a guide for future discussions and empirical studies concerning software testing. Specifically, we extend Gourlay’s functional description of testing with the notion of a test oracle, an aspect of testing largely overlooked in previous foundational work and only lightly explored in general. We argue additional work exploring the interrelationship between programs, tests, and oracles should be performed, and use our extension to clarify concepts presented in previous work, present new concepts related to test oracles, and demonstrate that oracle selection must be considered when discussing the efficacy of a testing process.
RACEZ: A Lightweight and Non-Invasive Race Detection Tool for Production Applications
Tianwei Sheng, Neil Vachharajani, Stephane Eranian, Robert Hundt, Wenguang Chen, and Weimin Zheng
(Tsinghua University, China; Google Inc., USA)
Concurrency bugs, particularly data races, are notoriously difficult to debug and are a significant source of unreliability in multithreaded applications. Many tools to catch data races rely on program instrumentation to obtain memory instruction traces. Unfortunately, this instrumentation introduces significant runtime overhead, is extremely invasive, or has a limited domain of applicability, making these tools unsuitable for many production systems. Consequently, these tools are typically used during application testing where many data races go undetected. This paper proposes RACEZ, a novel race detection mechanism which uses a sampled memory trace collected by the hardware performance monitoring unit rather than invasive instrumentation. The approach introduces only a modest overhead making it usable in production environments. We validate RACEZ using two open source server applications and the PARSEC benchmarks. Our experiments show that RACEZ catches a set of known bugs with reasonable probability while introducing only 2.8% runtime slow down on average.

Riding the Design Wave II

Detecting Software Modularity Violations
Sunny Wong, Yuanfang Cai, Miryung Kim, and Michael Dalton
(Drexel University, USA; University of Texas at Austin, USA)
This paper presents Clio, an approach that detects modularity violations, which can cause software defects, modularity decay, or expensive refactorings. Clio computes the discrepancies between how components should change together based on the modular structure, and how components actually change together as revealed in version history. We evaluated Clio using 15 releases of Hadoop Common and 10 releases of Eclipse JDT. The results show that hundreds of violations identified using Clio were indeed recognized as design problems or refactored by the developers in later versions. The identified violations exhibit multiple symptoms of poor design, some of which are not easily detectable using existing approaches.
Feature Cohesion in Software Product Lines: An Exploratory Study
Sven Apel and Dirk Beyer
(University of Passau, Germany; Simon Fraser University, Canada)
Software product lines gain momentum in research and industry. Many product-line approaches use features as a central abstraction mechanism. Feature-oriented software development aims at encapsulating features in cohesive units to support program comprehension, variability, and reuse. Surprisingly, not much is known about the characteristics of cohesion in feature-oriented product lines, although proper cohesion is of special interest in product-line engineering due to its focus on variability and reuse. To fill this gap, we conduct an exploratory study on forty software product lines of different sizes and domains. A distinguishing property of our approach is that we use both classic software measures and novel measures that are based on distances in clustering layouts, which can be used also for visual exploration of product-line architectures. This way, we can draw a holistic picture of feature cohesion. In our exploratory study, we found several interesting correlations (e.g., between development process and feature cohesion) and we discuss insights and perspectives of investigating feature cohesion (e.g., regarding feature interfaces and programming style).
Leveraging Software Architectures to Guide and Verify the Development of Sense/Compute/Control Applications
Damien Cassou, Emilie Balland, Charles Consel, and Julia Lawall
(University of Bordeaux, France; INRIA, France; DIKU, Denmark; LIP6, France)
A software architecture describes the structure of a computing system by specifying software components and their interactions. Mapping a software architecture to an implementation is a well known challenge. A key element of this mapping is the architecture’s description of the data and control-flow interactions between components. The characterization of these interactions can be rather abstract or very concrete, providing more or less implementation guidance, programming support, and static verification. In this paper, we explore one point in the design space between abstract and concrete component interaction specifications. We introduce a notion of interaction contract that expresses the set of allowed interactions between components, describing both data and control-flow constraints. This declaration is part of the architecture description, allows generation of extensive programming support, and enables various verifications. We instantiate our approach in an architecture description language for Sense/Compute/Control applications, and describe associated compilation and verification strategies.

Refactoring Your Lei II

Refactoring to Role Objects
Friedrich Steimann and Fabian Urs Stolz
(Fernuniversität in Hagen, Germany; Volkswohl Bund Versicherungen, Germany)
Role objects are a widely recognized design pattern for representing objects that expose different properties in different contexts. By developing a tool that automatically refactors legacy code towards this pattern and by applying this tool to several programs, we have found not only that refactoring to role objects as currently defined produces code that is hard to read and to maintain, but also that the refactoring has preconditions so strong that it is rarely applicable in practice. We have therefore taken a fresh look at role objects and devised an alternative form that solves the exact same design problems, yet is much simpler to introduce and to maintain. We describe refactoring to this new, lightweight form of role objects in informal terms and report on the implementation of our refactoring tool for the Java programming language, presenting evidence of the refactoring’s increased applicability in several sample programs.
Supporting Professional Spreadsheet Users by Generating Leveled Dataflow Diagrams
Felienne Hermans, Martin Pinzger, and Arie van Deursen
(Delft University of Technology, Netherlands)
Thanks to their flexibility and intuitive programming model, spreadsheets are widely used in industry, often for business-critical applications. Similar to software developers, professional spreadsheet users demand support for maintaining and transferring their spreadsheets. In this paper, we first study the problems and information needs of professional spreadsheet users by means of a survey conducted at a large financial company. Based on these needs, we then present an approach that extracts this information from spreadsheets and presents it in a compact and easy to understand way, with leveled dataflow diagrams. Our approach comes with three different views on the dataflow that allow the user to analyze the dataflow diagrams in a top-down fashion. To evaluate the usefulness of the proposed approach, we conducted a series of interviews as well as nine case studies in an industrial setting. The results of the evaluation clearly indicate the demand for and usefulness of our approach in easing the understanding of spreadsheets.
Reverse Engineering Feature Models
Steven She, Rafael Lotufo, Thorsten Berger, Andrzej Wasowski, and Krzysztof Czarnecki
(University of Waterloo, Canada; University of Leipzig, Germany; IT University of Copenhagen, Denmark)
Feature models describe the common and variable characteristics of a product line. Their advantages are well recognized in product line methods. Unfortunately, creating a feature model for an existing project is time-consuming and requires substantial effort from a modeler. We present procedures for reverse engineering feature models based on a crucial heuristic for identifying parents—the major challenge of this task. We also automatically recover constructs such as feature groups, mandatory features, and implies/excludes edges. We evaluate the technique on two large-scale software product lines with existing reference feature models—the Linux and eCos kernels—and FreeBSD, a project without a feature model. Our heuristic is effective across all three projects, ranking the correct parent among the top results for a vast majority of features. The procedures effectively reduce the information a modeler has to consider from thousands of choices to typically five or fewer.

Empirical Luau II

Empirical Assessment of MDE in Industry
John Hutchinson, Jon Whittle, Mark Rouncefield, and Steinar Kristoffersen
(Lancaster University, UK; Østfold University College, Norway; Møreforskning Molde AS, Norway)
This paper presents some initial results from a twelve-month empirical research study of model driven engineering (MDE). Using largely qualitative questionnaire and interview methods we investigate and document a range of technical, organizational and social factors that apparently influence organizational responses to MDE: specifically, its perception as a successful or unsuccessful organizational intervention. We then outline a range of lessons learned. Whilst, as with all qualitative research, these lessons should be interpreted with care, they should also be seen as providing a greater understanding of MDE practice in industry, as well as shedding light on the varied, and occasionally surprising, social, technical and organizational factors that affect success and failure. We conclude by suggesting how the next phase of the research will attempt to investigate some of these issues from a different angle and in greater depth.
Dealing with Noise in Defect Prediction
Sunghun Kim, Hongyu Zhang, Rongxin Wu, and Liang Gong
(Hong Kong University of Science and Technology, China; Tsinghua University, China)
Many software defect prediction models have been built using historical defect data obtained by mining software repositories (MSR). Recent studies have discovered that data so collected contain noise, because current defect collection practices are based on optional bug fix keywords or bug report links in change logs; defect data collected automatically from change logs can therefore be noisy. This paper proposes approaches to deal with the noise in defect data. First, we measure the impact of noise on defect prediction models and provide guidelines for acceptable noise levels. We measure the noise resistance of two well-known defect prediction algorithms and find that in general, for large defect datasets, adding FP (false positive) or FN (false negative) noise alone does not lead to substantial performance differences. However, the prediction performance decreases significantly when the dataset contains 20%-35% of both FP and FN noise. Second, we propose a noise detection and elimination algorithm to address this problem. Our empirical study shows that our algorithm can identify noisy instances with reasonable accuracy. In addition, after eliminating the noise using our algorithm, defect prediction accuracy is improved.
Ownership, Experience and Defects: A Fine-Grained Study of Authorship
Foyzur Rahman and Premkumar Devanbu
(UC Davis, USA)
Recent research indicates that “people” factors such as ownership, experience, organizational structure, and geographic distribution have a big impact on software quality. Understanding these factors, and properly deploying people resources can help managers improve quality outcomes. This paper considers the impact of code ownership and developer experience on software quality. In a large project, a file might be entirely owned by a single developer, or worked on by many. Some previous research indicates that more developers working on a file might lead to more defects. Prior research considered this phenomenon at the level of modules or files, and thus does not tease apart and study the effect of contributions of different developers to each module or file. We exploit a modern version control system to examine this issue at a fine-grained level. Using version history, we examine contributions to code fragments that are actually repaired to fix bugs. Are these code fragments “implicated” in bugs the result of contributions from many? or from one? Does experience matter? What type of experience? We find that implicated code is more strongly associated with a single developer’s contribution; our findings also indicate that an author’s specialized experience in the target file is more important than general experience. Our findings suggest that quality control efforts could be profitably targeted at changes made by single developers with limited prior experience on that file.
Article Search

Program Surfing II

Interface Decomposition for Service Compositions
Domenico Bianculli, Dimitra Giannakopoulou, and Corina S. Păsăreanu
(University of Lugano, Switzerland; NASA Ames Research Center, USA; Carnegie Mellon Silicon Valley, USA)
Service-based applications can be realized by composing existing services into new, added-value composite services. The external services with which a service composition interacts are usually known only by means of their syntactical interface. However, an interface providing more information, such as a behavioral specification, could be more useful to a service integrator for assessing whether a certain external service can contribute to fulfilling the functional requirements of the composite application. Given the requirements specification of a composite service, we present a technique for obtaining the behavioral interfaces — in the form of labeled transition systems — of the external services, by decomposing the global interface specification that characterizes the environment of the service composition. The generated interfaces guarantee that the service composition fulfills its requirements during execution. Our approach has been implemented in the LTSA tool and has been applied to two case studies.
Article Search
Unifying Execution of Imperative and Declarative Code
Aleksandar Milicevic, Derek Rayside, Kuat Yessenov, and Daniel Jackson
(Massachusetts Institute of Technology, USA)
We present a unified environment for running declarative specifications in the context of an imperative object-oriented programming language. Specifications are Alloy-like, written in first-order relational logic with transitive closure, and the imperative language is Java. By being able to mix imperative code with executable declarative specifications, the user can easily express constraint problems in place, i.e., in terms of the existing data structures and objects on the heap. After a solution is found, the heap is updated to reflect the solution, so the user can continue to manipulate the program heap in the usual imperative way. We show that this approach is not only convenient, but, for certain problems, can also outperform a standard imperative implementation. We also present an optimization technique that allowed us to run our tool on heaps with almost 2000 objects.
Article Search
Always-Available Static and Dynamic Feedback
Michael Bayne, Richard Cook, and Michael D. Ernst
(University of Washington, USA)
Developers who write code in a statically typed language are denied the ability to obtain dynamic feedback by executing their code during periods when it fails the static type checker. They are further confined to the static typing discipline during times in the development process where it does not yield the highest productivity. If they opt instead to use a dynamic language, they forgo the many benefits of static typing, including machine-checked documentation, improved correctness and reliability, tool support (such as for refactoring), and better runtime performance. We present a novel approach to giving developers the benefits of both static and dynamic typing, throughout the development process, and without the burden of manually separating their program into statically- and dynamically-typed parts. Our approach, which is intended for temporary use during the development process, relaxes the static type system and provides a semantics for many type-incorrect programs. It defers type errors to run time, or suppresses them if they do not affect runtime semantics. We implemented our approach in a publicly available tool, DuctileJ, for the Java language. In case studies, DuctileJ conferred benefits both during prototyping and during the evolution of existing code.
Article Search

Comprehending the Drift III

Improving Requirements Quality using Essential Use Case Interaction Patterns
Massila Kamalrudin, John Hosking, and John Grundy
(University of Auckland, New Zealand; Swinburne University of Technology at Hawthorn, Australia)
Requirements specifications need to be checked against the 3C’s – Consistency, Completeness and Correctness – in order to achieve high quality. This is especially difficult when working with both natural language requirements and associated semi-formal modelling representations. We describe a technique and support tool that allow us to perform semi-automated checking of natural language and semi-formal requirements models, supporting not only consistency management between representations but also correctness and completeness analysis. We use a concept of essential use case interaction patterns to perform the correctness and completeness analysis on the semi-formal representation. We highlight potential inconsistencies, incompleteness and incorrectness using visual differencing in our support tool. We have evaluated our approach via an end user study, which focused on the tool’s usefulness, ease of use, ease of learning and user satisfaction, and which provided data for a cognitive dimensions of notations analysis of the tool.
Article Search
Understanding Broadcast Based Peer Review on Open Source Software Projects
Peter C. Rigby and Margaret-Anne Storey
(University of Victoria, Canada)
Software peer review has proven to be a successful technique in open source software (OSS) development. In contrast to industry, where reviews are typically assigned to specific individuals, changes are broadcast to hundreds of potentially interested stakeholders. Despite concerns that reviews may be ignored, or that discussions will deadlock because too many uninformed stakeholders are involved, we find that this approach works well in practice. In this paper, we describe an empirical study to investigate the mechanisms and behaviours that developers use to find code changes they are competent to review. We also explore how stakeholders interact with one another during the review process. We manually examine hundreds of reviews across five high profile OSS projects. Our findings provide insights into the simple, community-wide techniques that developers use to effectively manage large quantities of reviews. The themes that emerge from our study are enriched and validated by interviewing long-serving core developers.
Article Search
Software Systems as Cities: A Controlled Experiment
Richard Wettel, Michele Lanza, and Romain Robbes
(University of Lugano, Switzerland; University of Chile, Chile)
Software visualization is a popular program comprehension technique used in the context of software maintenance, reverse engineering, and software evolution analysis. While there is a broad range of software visualization approaches, only few have been empirically evaluated. This is detrimental to the acceptance of software visualization in both the academic and the industrial world. We present a controlled experiment for the empirical evaluation of a 3D software visualization approach based on a city metaphor and implemented in a tool called CodeCity. The goal is to provide experimental evidence of the viability of our approach in the context of program comprehension by having subjects perform tasks related to program comprehension. We designed our experiment based on lessons extracted from the current body of research. We conducted the experiment in four locations across three countries, involving 41 participants from both academia and industry. The experiment shows that CodeCity leads to a statistically significant increase in terms of task correctness and decrease in task completion time. We detail the experiment we performed, discuss its results and reflect on the many lessons learned.
Article Search

Web Surfing

Automated Cross-Browser Compatibility Testing
Ali Mesbah and Mukul R. Prasad
(University of British Columbia, Canada; Fujitsu Laboratories of America, USA)
With the advent of Web 2.0 applications and new browsers, the cross-browser compatibility issue is becoming increasingly important. Although the problem is widely recognized among web developers, no systematic approach to tackle it exists today. None of the current tools, which provide screenshots or emulation environments, specifies any notion of cross-browser compatibility, much less checks it automatically. In this paper, we pose the problem of cross-browser compatibility testing of modern web applications as a ‘functional consistency’ check of web application behavior across different web browsers and present an automated solution for it. Our approach consists of (1) automatically analyzing the given web application under different browser environments and capturing the behavior as a finite-state machine; and (2) formally comparing the generated models for equivalence on a pairwise basis and exposing any observed discrepancies. We validate our approach on several open-source and industrial case studies to demonstrate its effectiveness and real-world relevance.
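The pairwise model comparison this abstract describes might be sketched as follows (a simplified illustration, not the paper's algorithm): two deterministic state machines, one inferred per browser, are walked in lockstep, and any event enabled in one model but not the other is reported as a discrepancy. The dictionary encoding and the `init` start state are assumptions.

```python
from collections import deque

def compare_fsms(fsm_a, fsm_b, start="init"):
    """Walk two deterministic FSMs (state -> {event: next_state})
    in lockstep and report events enabled in one browser's model
    but not in the other's."""
    discrepancies = []
    seen = set()
    queue = deque([(start, start)])
    while queue:
        sa, sb = queue.popleft()
        if (sa, sb) in seen:
            continue
        seen.add((sa, sb))
        ea, eb = fsm_a.get(sa, {}), fsm_b.get(sb, {})
        for ev in set(ea) ^ set(eb):        # event enabled in only one model
            discrepancies.append((sa, sb, ev))
        for ev in set(ea) & set(eb):        # follow shared events in both
            queue.append((ea[ev], eb[ev]))
    return discrepancies
```

For two identical models the result is empty; an event such as `hover` present only in one browser's model surfaces as a discrepancy at the state pair where it diverges.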
Article Search
A Framework for Automated Testing of JavaScript Web Applications
Shay Artzi, Julian Dolby, Simon Holm Jensen, Anders Møller, and Frank Tip
(IBM Research, USA; Aarhus University, Denmark)
Current practice in testing JavaScript web applications requires manual construction of test cases, which is difficult and tedious. We present a framework for feedback-directed automated test generation for JavaScript in which execution is monitored to collect information that directs the test generator towards inputs that yield increased coverage. We implemented several instantiations of the framework, corresponding to variations on feedback-directed random testing, in a tool called Artemis. Experiments on a suite of JavaScript applications demonstrate that a simple instantiation of the framework that uses event handler registrations as feedback information produces surprisingly good coverage if enough tests are generated. By also using coverage information and read-write sets as feedback information, a slightly better level of coverage can be achieved, and sometimes with many fewer tests. The generated tests can be used for detecting HTML validity problems and other programming errors.
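A toy sketch of feedback-directed random testing in the spirit this abstract describes (not the Artemis implementation; the handler interface and weighting scheme are assumptions): each registered event handler reports the lines it covered, and handlers that keep uncovering new coverage are selected more often.

```python
import random

def generate_tests(handlers, rounds=50, seed=0):
    """Randomly trigger registered event handlers, using the set of
    lines covered so far as feedback: a handler that covered new
    lines gets a higher weight and is re-tried more often."""
    rng = random.Random(seed)
    covered = set()
    weights = {name: 1.0 for name in handlers}
    trace = []
    for _ in range(rounds):
        name = rng.choices(list(handlers),
                           weights=[weights[n] for n in handlers])[0]
        new_lines = handlers[name]() - covered  # handler reports executed lines
        if new_lines:
            weights[name] += len(new_lines)     # reward coverage growth
        covered |= new_lines
        trace.append(name)
    return covered, trace
```

Here `handlers` maps an event-handler name to a callable returning the set of line identifiers it executed; the returned `trace` is the generated event sequence and `covered` the total coverage achieved.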
Article Search
Coalescing Executions for Fast Uncertainty Analysis
William N. Sumner, Tao Bao, Xiangyu Zhang, and Sunil Prabhakar
(Purdue University, USA)
Uncertain data processing is critical in a wide range of applications, such as scientific computation handling data with inevitable errors and financial decision making relying on human-provided parameters. While increasingly studied in the area of databases, uncertain data processing is often carried out by software, and thus software-based solutions are attractive. In particular, Monte Carlo (MC) methods execute software with many samples from the uncertain inputs and observe the statistical behavior of the output. In this paper, we propose a technique to improve the cost-effectiveness of MC methods. Assuming only part of the input is uncertain, the certain part of the input always leads to the same execution across multiple sample runs. We remove such redundancy by coalescing multiple sample runs into a single run. In the coalesced run, the program operates on a vector of values if uncertainty is present and a single value otherwise. We handle cases where control flow and pointers are uncertain. Our results show that we can speed up the execution time of 30 sample runs by an average factor of 2.3 with no loss of precision, or by up to 3.4 with negligible loss of precision.
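The coalescing idea can be sketched as a value type that stays scalar while certain and becomes a per-sample vector once uncertainty enters (a minimal illustration, not the paper's implementation; a real coalesced run must also handle uncertain control flow and pointers, which this sketch ignores).

```python
class Coalesced:
    """A value that is a single scalar while certain, and a list of
    per-sample values once uncertain, so that N Monte Carlo sample
    runs share one execution of the certain parts of the program."""

    def __init__(self, value, uncertain=False):
        self.value = value
        self.uncertain = uncertain

    def __add__(self, other):
        return self._combine(other, lambda a, b: a + b)

    def __mul__(self, other):
        return self._combine(other, lambda a, b: a * b)

    def _combine(self, other, op):
        if not isinstance(other, Coalesced):
            other = Coalesced(other)            # plain constants are certain
        if not (self.uncertain or other.uncertain):
            return Coalesced(op(self.value, other.value))   # stays scalar
        # Broadcast any certain operand across the sample vector.
        n = max(len(c.value) for c in (self, other) if c.uncertain)
        lift = lambda c: c.value if c.uncertain else [c.value] * n
        return Coalesced([op(a, b) for a, b in zip(lift(self), lift(other))],
                         uncertain=True)
```

Certain subcomputations such as `Coalesced(2) + Coalesced(3)` are executed once; only once an uncertain input like `Coalesced([1.0, 2.0, 3.0], uncertain=True)` flows in does the computation widen to one value per sample.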
Article Search

Testing the Waters III

Mining Parametric Specifications
Choonghwan Lee, Feng Chen, and Grigore Roşu
(University of Illinois at Urbana-Champaign, USA)
Specifications carrying formal parameters that are bound to concrete data at runtime can effectively and elegantly capture multi-object behaviors or protocols. Unfortunately, parametric specifications are not easy to formulate by nonexperts and, consequently, are rarely available. This paper presents a general approach for mining parametric specifications from program executions, based on a strict separation of concerns: (1) a trace slicer first extracts sets of independent interactions from parametric execution traces; and (2) the resulting non-parametric trace slices are then passed to any conventional non-parametric property learner. The presented technique has been implemented in jMiner, which has been used to automatically mine many meaningful and non-trivial parametric properties of OpenJDK 6.
Article Search
Estimating Footprints of Model Operations
Cédric Jeanneret, Martin Glinz, and Benoit Baudry
(University of Zurich, Switzerland; IRISA, France)
When performed on a model, a set of operations (e.g., queries or model transformations) rarely uses all the information present in the model. Unintended underuse of a model can indicate various problems: the model may contain more detail than necessary or the operations may be immature or erroneous. Analyzing the footprints of the operations — i.e., the part of a model actually used by an operation — is a simple technique to diagnose and analyze such problems. However, precisely calculating the footprint of an operation is expensive, because it requires analyzing the operation’s execution trace. In this paper, we present an automated technique to estimate the footprint of an operation without executing it. We evaluate our approach by applying it to 75 models and five operations. Our technique provides software engineers with an efficient, yet precise, evaluation of the usage of their models.
Article Search
Precise Identification of Problems for Structural Test Generation
Xusheng Xiao, Tao Xie, Nikolai Tillmann, and Jonathan de Halleux
(North Carolina State University, USA; Microsoft Research, USA)
An important goal of software testing is to achieve high structural coverage. To reduce the manual effort of producing such high-covering test inputs, testers or developers can employ tools built on automated structural test-generation approaches. Although these tools can easily achieve high structural coverage for simple programs, when applied to complex programs in practice they face various problems, such as (1) the external-method-call problem (EMCP), where tools cannot deal with method calls to external libraries, and (2) the object-creation problem (OCP), where tools fail to generate method-call sequences that produce desirable object states. Since these tools are currently not powerful enough to deal with these problems when testing complex programs in practice, we propose cooperative developer testing, where developers provide guidance to help tools achieve higher structural coverage. To reduce the effort of developers in providing guidance to tools, in this paper we propose a novel approach, called Covana, which precisely identifies and reports problems that prevent the tools from achieving high structural coverage, primarily by determining whether branch statements containing not-covered branches have data dependencies on problem candidates. We provide two techniques to instantiate Covana to identify EMCPs and OCPs. Finally, we conduct evaluations on two open source projects to show the effectiveness of Covana in identifying EMCPs and OCPs.
Article Search


Interactivity, Continuity, Sketching, and Experience (Keynote Abstract)
Kumiyo Nakakoji
(Software Research Associates Inc., Japan)
The design and development of a software system deals with both the world of making and that of using. The world of making is concerned with molding, constructing and building. The world of using is concerned with engaging, experiencing, and interacting-with. The two worlds are structurally, semantically, and temporally intertwined through software. Designing the world of using requires painstaking effort to envision all the possible situations of use for all the possible types of users in all the possible contexts, in various temporal and situational levels of granularity, to create a coherent and convivial user experience. To frame this world of using, it is not sufficient to identify typical use scenarios or depict snapshots of crucial usage situations. Thorough analyses of the possible flows of interactions over a long period of time through the dynamism of user engagement and experience are essential in framing the world of using. Through the delineation of our work on designing and developing research prototype tools for creative knowledge activities, observing and analyzing interaction design processes, and directing user experience design teams for consumer products, this talk will address the expression, representation, communication, and assessment of the design of the world of using from the perspectives of interactivity, continuity, sketching and experience.
Article Search
Exciting New Trends in Design Thinking (Keynote Abstract)
Bill Dresselhaus
(DRESSELHAUSgroup Inc., USA/Korea)
Design and design thinking are becoming the hot topics and new business processes around the world—yes, business processes! Business schools are adding design thinking courses to their curricula and business professors are writing books on design thinking. Countries like Korea and Singapore are vying to be the leading Asian Design Nations. New, so-called Convergent courses, programs and schools are emerging globally that combine engineering, business and design disciplines and departments into integrated efforts. The Do-It-Yourself (DIY) Design Movement is gaining momentum and the personal discipline of Making things is coming back. DIY Prototyping and Manufacturing are gaining ground and opportunities with new technologies and innovations. User-Generated Design is becoming a common corporate process. Design process and design thinking are being applied cross-functionally to such global issues as clean water and alternative energy. And the old traditional view of design as art and decoration and styling is giving way to a broader and more comprehensive way of thinking and solving human-centered problems by other than just a few elite professionals. In light of all this and more, Bill is excited about the ideas of ubiquitous design education for everyone and DIY design as a universal human experience. He is passionate about an idea in what Victor Papanek said 40 years ago in his seminal book, Design for the Real World, “All that we do, almost all the time, is design, for design is basic to all human activity”. Just as all humans are inherently businesspeople in many ways at many times, we are also all designers in many ways at many times—it is time to believe this and make the best of it.
Article Search

Software Engineering in Practice

Empirical Software Engineering

A Case Study of Measuring Process Risk for Early Insights into Software Safety
Lucas Layman, Victor R. Basili, Marvin V. Zelkowitz, and Karen L. Fisher
(Fraunhofer CESE, USA; University of Maryland, USA; NASA Goddard Spaceflight Center, USA)
In this case study, we examine software safety risk in three flight hardware systems in NASA’s Constellation spaceflight program. We applied our Technical and Process Risk Measurement (TPRM) methodology to the Constellation hazard analysis process to quantify the technical and process risks involving software safety in the early design phase of these projects. We analyzed 154 hazard reports and collected metrics to measure the prevalence of software in hazards and the specificity of descriptions of software causes of hazardous conditions. We found that for 49-70% of the 154 hazardous conditions, software could cause the condition or was involved in its prevention. We also found that 12-17% of the 2013 hazard causes involved software, and that 23-29% of all causes had a software control. The application of the TPRM methodology identified process risks in the application of the hazard analysis process itself that may lead to software safety risk.
Article Search
Model-Driven Engineering Practices in Industry
John Hutchinson, Mark Rouncefield, and Jon Whittle
(Lancaster University, UK)
In this paper, we attempt to address the relative absence of empirical studies of model driven engineering through describing the practices of three commercial organizations as they adopted a model driven engineering approach to their software development. Using in-depth semi-structured interviewing we invited practitioners to reflect on their experiences and selected three to use as exemplars or case studies. In documenting some details of attempts to deploy model driven practices, we identify some ‘lessons learned’, in particular the importance of complex organizational, managerial and social factors – as opposed to simple technical factors – in the relative success, or failure, of the endeavour. As an example of organizational change management the successful deployment of model driven engineering appears to require: a progressive and iterative approach; transparent organizational commitment and motivation; integration with existing organizational processes and a clear business focus.
Article Search
SORASCS: A Case Study in SOA-based Platform Design for Socio-Cultural Analysis
Bradley Schmerl, David Garlan, Vishal Dwivedi, Michael W. Bigrigg, and Kathleen M. Carley
(Carnegie Mellon University, USA)
An increasingly important class of software-based systems is platforms that permit integration of third-party components, services, and tools. Service-Oriented Architecture (SOA) is one such platform that has been successful in providing integration and distribution in the business domain, and could be effective in other domains (e.g., scientific computing, healthcare, and complex decision making). In this paper, we discuss our application of SOA to provide an integration platform for socio-cultural analysis, a domain that, through models, tries to understand, analyze and predict relationships in large complex social systems. In developing this platform, called SORASCS, we had to overcome issues we believe are generally applicable to any application of SOA within a domain that involves technically naïve users and seeks to establish a sustainable software ecosystem based on a common integration platform. We discuss these issues, the lessons learned about the kinds of problems that occur, and pathways toward a solution.
Article Search

Industry Software Architecture

A Method for Selecting SOA Pilot Projects Including a Pilot Metrics Framework
Liam O'Brien, James Gibson, and Jon Gray
(CSIRO, Australia; ANU, Australia; NICTA, Australia)
Many organizations are introducing Service Oriented Architecture (SOA) as part of their business transformation projects, to take advantage of the proposed benefits associated with using SOA. However, in many cases organizations do not know for which projects introducing SOA would be of value and show real benefits. In this paper we outline a method and pilot metrics framework (PMF) to help organizations select, from a set of candidate projects, those most suitable for piloting SOA. The PMF is used as part of a method based on identifying a set of benefit and risk criteria, investigating each of the candidate projects, mapping them to the criteria and then selecting the most suitable project(s). The paper outlines a case study in which the PMF was applied in a large government organization to help it select pilot projects and develop an overall strategy for introducing SOA.
Article Search
Architecture Evaluation without an Architecture: Experience with the Smart Grid
Rick Kazman, Len Bass, James Ivers, and Gabriel A. Moreno
(SEI/CMU, USA; University of Hawaii, USA)
This paper describes an analysis of some of the challenges facing one portion of the Smart Grid in the United States—residential Demand Response (DR) systems. The purposes of this paper are twofold: 1) to discover risks to residential DR systems and 2) to illustrate an architecture-based analysis approach to uncovering risks that span a collection of technical and social concerns. The results presented here are specific to residential DR but the approach is general and it could be applied to other systems within the Smart Grid and other critical infrastructure domains. Our architecture-based analysis is different from most other approaches to analyzing complex systems in that it addresses multiple quality attributes simultaneously (e.g., performance, reliability, security, modifiability, usability, etc.) and it considers the architecture of a complex system from a socio-technical perspective where the actions of the people in the system are as important, from an analysis perspective, as the physical and computational elements of the system. This analysis can be done early in a system’s lifetime, before substantial resources have been committed to its construction or procurement, and so it provides extremely cost-effective risk analysis.
Article Search
Bringing Domain-Specific Languages to Digital Forensics
Jeroen van den Bos and Tijs van der Storm
(Netherlands Forensic Institute, Netherlands; Centrum Wiskunde en Informatica, Netherlands)
Digital forensics investigations often consist of analyzing large quantities of data. The software tools used for analyzing such data are constantly evolving to cope with a multiplicity of versions and variants of data formats. This process of customization is time consuming and error prone. To improve this situation we present Derric, a domain-specific language (DSL) for declaratively specifying data structures. This way, the specification of structure is separated from data processing. The resulting architecture encourages customization and facilitates reuse. It enables faster development through a division of labour between investigators and software engineers. We have performed an initial evaluation of Derric by constructing a data recovery tool. This so-called carver has been automatically derived from a declarative description of the structure of JPEG files. We compare it to existing carvers, and show it to be in the same league both with respect to recovered evidence, and runtime performance.
Article Search

Software Engineering at Large

Building and Using Pluggable Type-Checkers
Werner Dietl, Stephanie Dietzel, Michael D. Ernst, Kıvanç Muşlu, and Todd W. Schiller
(University of Washington, USA)
This paper describes practical experience building and using pluggable type-checkers. A pluggable type-checker refines (strengthens) the built-in type system of a programming language. This permits programmers to detect and prevent, at compile time, defects that would otherwise have been manifested as run-time errors. The prevented defects may be generally applicable to all programs, such as null pointer dereferences. Or, an application-specific pluggable type system may be designed for a single application. We built a series of pluggable type checkers using the Checker Framework, and evaluated them on 2 million lines of code, finding hundreds of bugs in the process. We also observed 28 first-year computer science students use a checker to eliminate null pointer errors in their course projects. Along with describing the checkers and characterizing the bugs we found, we report the insights we had throughout the process. Overall, we found that the type checkers were easy to write, easy for novices to productively use, and effective in finding real bugs and verifying program properties, even for widely tested and used open source projects.
Article Search
Deploying CogTool: Integrating Quantitative Usability Assessment into Real-World Software Development
Rachel Bellamy, Bonnie E. John, and Sandra Kogan
(IBM Research Watson, USA; CMU, USA; IBM Software Group, USA)
Usability concerns are often difficult to integrate into real-world software development processes. To remedy this situation, IBM research and development, partnering with Carnegie Mellon University, has begun to employ a repeatable and quantifiable usability analysis method, embodied in CogTool, in its development practice. CogTool analyzes tasks performed on an interactive system from a storyboard and a demonstration of tasks on that storyboard, and predicts the time a skilled user will take to perform those tasks. We discuss how IBM designers and UX professionals used CogTool in their existing practice for contract compliance, communication within a product team and between a product team and its customer, assigning appropriate personnel to fix customer complaints, and quantitatively assessing design ideas before a line of code is written. We then reflect on the lessons learned by both the development organizations and the researchers attempting this technology transfer from academic research to integration into real-world practice, and we point to future research to even better serve the needs of practice.
Article Search
Experiences with Text Mining Large Collections of Unstructured Systems Development Artifacts at JPL
Daniel Port, Allen Nikora, Jairus Hihn, and LiGuo Huang
(University of Hawaii, USA; Jet Propulsion Laboratory, USA; Southern Methodist University, USA)
Repositories of systems engineering artifacts at NASA’s Jet Propulsion Laboratory (JPL) are often so large and poorly structured that they have outgrown our capability to effectively process their contents manually and extract useful information. Sophisticated text mining methods and tools seem a quick, low-effort approach to automating our limited manual efforts. Our experiences exploring such methods in three areas (historical risk analysis, defect identification based on requirements analysis, and over-time analysis of system anomalies at JPL) have shown that obtaining useful results requires substantial unanticipated effort, from preprocessing the data to transforming the output for practical applications. We have not observed any quick “wins” or realized benefit from short-term effort avoidance through automation in this area. Surprisingly, we have realized a number of unexpected long-term benefits from the process of applying text mining to our repositories. This paper elaborates on these benefits and on the important lessons learned from preparing and applying text mining to large unstructured system artifacts at JPL, with the aim of benefiting future text mining applications in similar problem domains and, we hope, in broader areas of application.
Article Search

Software Metrics

An Evaluation of the Internal Quality of Business Applications: Does Size Matter?
Bill Curtis, Jay Sappidi, and Jitendra Subramanyam
This paper summarizes the results of a study of the internal, structural quality of 288 business applications comprising 108 million lines of code, collected from 75 companies in 8 industry segments. These applications were submitted to a static analysis that evaluates quality within and across application components that may be coded in different languages. The analysis evaluates each application against a repository of over 900 rules of good architectural and coding practice. Results are presented for measures of security, performance, and changeability. The effect of size on quality is evaluated, and the results suggest that modularity can reduce the impact of size.
Article Search
Characterizing the Differences Between Pre- and Post- Release Versions of Software
Paul Luo Li, Ryan Kivett, Zhiyuan Zhan, Sung-eok Jeon, Nachiappan Nagappan, Brendan Murphy, and Andrew J. Ko
(Microsoft Inc., USA; University of Washington, USA; Microsoft Research, USA)
Many software producers utilize beta programs to predict post-release quality and to ensure that their products meet the quality expectations of users. Prior work indicates that software producers need to adjust predictions to account for differences in usage environments and usage scenarios between beta populations and post-release populations. However, little is known about how usage characteristics relate to field quality and how usage characteristics differ between beta and post-release. In this study, we examine application crash, application hang, system crash, and usage information from millions of Windows® users to 1) examine the effects of usage characteristic differences on field quality (e.g., which usage characteristics impact quality), 2) examine usage characteristic differences between beta and post-release (e.g., do the impactful usage characteristics differ), and 3) report experiences adjusting field quality predictions for Windows. Among the 18 usage characteristics that we examined, the five most important were: the number of applications executed, whether the machine was pre-installed by the original equipment manufacturer, two sub-populations (two language/geographic locales), and whether Windows was 64-bit (not 32-bit). We found each of these usage characteristics to differ between beta and post-release, and by adjusting for the differences, the accuracy of field quality predictions for Windows improved by ~59%.
Why Software Quality Improvement Fails (and How to Succeed Nevertheless)
Jonathan Streit and Markus Pizka
(itestra GmbH, Germany)
Quality improvement is the key to enormous cost reduction in the IT business. However, improvement projects often fail in practice. In many cases, the improvement is inhibited by stakeholders who fear, e.g., a loss of power, or who do not recognize its benefits. Systematic change management and an economic perspective help to overcome these issues, but are little known and seldom applied. This industrial experience report presents the main challenges in software quality improvement projects as well as practices for tackling them. The authors have performed over 50 quality analyses and quality improvement projects in mission-critical software systems of European banking, insurance, and automotive companies.

Software Testing and Analysis

Code Coverage Analysis in Practice for Large Systems
Yoram Adler, Noam Behar, Orna Raz, Onn Shehory, Nadav Steindler, Shmuel Ur, and Aviad Zlotnick
(IBM Research Haifa, Israel; Microsoft, Israel; Shmuel Ur Innovation, Israel)
Large systems generate immense quantities of code coverage data. A user faced with the task of analyzing this data, for example, to decide on test areas to improve, faces a ‘needle in a haystack’ problem. In earlier studies we introduced substring hole analysis, a technique for presenting large quantities of coverage data in a succinct way. Here we demonstrate the successful use of substring hole analysis on large-scale data from industrial software systems. To this end we augment substring hole analysis by introducing a workflow and tool support for practical code coverage analysis. We conduct real-data experiments indicating that augmented substring hole analysis enables code coverage analysis where it was previously impractical, correctly identifies functionality that is missing from existing tests, and can increase the probability of finding bugs. These results facilitate cost-effective code coverage analysis.
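The abstract describes substring hole analysis only at a high level. As an illustrative sketch, one might summarize uncovered code by grouping function names on shared name tokens, reporting only groups that are entirely uncovered; token-based grouping here is an assumed simplification standing in for the paper's substring search, and the example names are hypothetical:

```python
import re
from collections import defaultdict

def substring_holes(coverage, min_size=2):
    """Report name tokens whose functions are entirely uncovered --
    a succinct 'hole' summary instead of a raw uncovered-function list."""
    groups = defaultdict(list)
    for name in coverage:
        for token in re.findall(r"[A-Za-z0-9]+", name):
            groups[token.lower()].append(name)
    holes = {}
    for token, names in groups.items():
        # A token is a "hole" if it groups several functions, none executed.
        if len(names) >= min_size and not any(coverage[n] for n in names):
            holes[token] = sorted(names)
    return holes

coverage = {                      # function name -> was it ever executed?
    "ipv6_route_add": False, "ipv6_route_del": False,
    "ipv4_route_add": True,  "cache_flush": False,
}
print(substring_holes(coverage))  # the 'ipv6' group surfaces as a hole
```

Instead of listing every uncovered function, the analyst sees one compact hole ("ipv6") that names a whole untested feature area.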
Practical Change Impact Analysis Based on Static Program Slicing for Industrial Software Systems
Mithun Acharya and Brian Robinson
(ABB Corporate Research, USA)
Change impact analysis, i.e., knowing the potential consequences of a software change, is critical for the risk analysis, developer effort estimation, and regression testing of evolving software. Static program slicing is an attractive option for enabling routine change impact analysis for newly committed changesets during daily software builds. For small programs with a few thousand lines of code, static program slicing scales well and can assist precise change impact analysis. However, as we demonstrate in this paper, static program slicing faces unique challenges when applied routinely to large and evolving industrial software systems. Despite recent advances in static program slicing, to our knowledge, there have been no studies of static change impact analysis applied to large and evolving industrial software systems. In this paper, we share our experiences in designing a static change impact analysis framework for such software systems. We have implemented our framework as a tool called Imp and have applied Imp to an industrial codebase with over a million lines of C/C++ code, with promising empirical results.
Value-Based Program Characterization and Its Application to Software Plagiarism Detection
Yoon-Chan Jhi, Xinran Wang, Xiaoqi Jia, Sencun Zhu, Peng Liu, and Dinghao Wu
(Pennsylvania State University, USA; Chinese Academy of Sciences, China)
Identifying similar or identical code fragments becomes much more challenging in code theft cases, where plagiarizers can use various automated code transformation techniques to hide stolen code from detection. Previous work in this field is largely limited in that (1) most approaches cannot handle advanced obfuscation techniques; (2) methods based on source code analysis are less practical, since the source code of suspicious programs is typically not available until strong evidence has been collected; and (3) approaches depending on the features of specific operating systems or programming languages have limited applicability. Based on the observation that some critical runtime values are hard to replace or eliminate with semantics-preserving transformation techniques, we introduce a novel approach to the dynamic characterization of executable programs. Leveraging such invariant values, our technique is resilient to various control and data obfuscation techniques. We show how the values can be extracted and refined to expose the critical values, and how we can apply this runtime property to help solve problems in software plagiarism detection. We have implemented a prototype with a dynamic taint analyzer atop a generic processor emulator. Our experimental results show that the value-based method successfully discriminates 34 plagiarisms obfuscated by SandMark, plagiarisms heavily obfuscated by KlassMaster, programs obfuscated by Thicket, and executables obfuscated by Loco/Diablo.
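The core idea, that critical runtime values survive obfuscation, can be sketched with a toy similarity measure. The paper's actual pipeline uses dynamic taint analysis on a processor emulator to extract and refine the value sequences; the longest-common-subsequence comparison below is an illustrative stand-in, not the authors' exact metric, and the example values are hypothetical:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two value sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def value_similarity(plaintiff_values, suspect_values):
    """Fraction of the plaintiff's core-value sequence found, in order,
    in the suspect's run; values near 1.0 suggest copied logic."""
    if not plaintiff_values:
        return 0.0
    return lcs_length(plaintiff_values, suspect_values) / len(plaintiff_values)

# Hypothetical runs of a checksum routine: the obfuscated copy interleaves
# noise values but still produces the same critical intermediate values.
original   = [17, 289, 4913, 83521, 255]
obfuscated = [3, 17, 289, 99, 4913, 83521, 255]
print(value_similarity(original, obfuscated))  # -> 1.0
```

Because control-flow and layout obfuscation rarely change the values a computation must produce, the ordered value subsequence remains a robust fingerprint.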

Tools and Environments

A Comparison of Model-based and Judgment-based Release Planning in Incremental Software Projects
Hans Christian Benestad and Jo E. Hannay
(Simula Research Laboratory, Norway)
Numerous factors are involved when deciding when to implement which features in incremental software development. To facilitate a rational and efficient planning process, release planning models make such factors explicit and compute release plan alternatives according to optimization principles. However, experience suggests that industrial use of such models is limited. To investigate the feasibility of model and tool support, we compared input factors assumed by release planning models with factors considered by expert planners. The former factors were cataloged by systematically surveying release planning models, while the latter were elicited through repertory grid interviews in three software organizations. The findings indicate a substantial overlap between the two approaches. However, a detailed analysis reveals that models focus on only select parts of a possibly larger space of relevant planning factors. Three concrete areas of mismatch were identified: (1) continuously evolving requirements and specifications, (2) continuously changing prioritization criteria, and (3) authority-based decision processes. With these results in mind, models, tools and guidelines can be adjusted to better address real-life development processes.
An Industrial Case Study on Quality Impact Prediction for Evolving Service-Oriented Software
Heiko Koziolek, Bastian Schlich, Carlos Bilich, Roland Weiss, Steffen Becker, Klaus Krogmann, Mircea Trifu, Raffaela Mirandola, and Anne Koziolek
(ABB Corporate Research, Germany; University of Paderborn, Germany; FZI, Germany; Politecnico di Milano, Italy; KIT, Germany)
Systematic decision support for architectural design decisions is a major concern for software architects of evolving service-oriented systems. In practice, architects often analyse the expected performance and reliability of design alternatives based on prototypes or former experience. Model-driven prediction methods claim to uncover the tradeoffs between different alternatives quantitatively while being more cost-effective and less error-prone. However, they often suffer from weak tool support and focus on single quality attributes. Furthermore, there is limited evidence on their effectiveness based on documented industrial case studies. Thus, we have applied a novel, model-driven prediction method called Q-ImPrESS on a large-scale process control system consisting of several million lines of code from the automation domain to evaluate its evolution scenarios. This paper reports our experiences with the method and lessons learned. Benefits of Q-ImPrESS are the good architectural decision support and comprehensive tool framework, while one drawback is the time-consuming data collection.
Enabling the Runtime Assertion Checking of Concurrent Contracts for the Java Modeling Language
Wladimir Araujo, Lionel C. Briand, and Yvan Labiche
(Juniper Networks, Canada; Simula Research Laboratory, Norway; University of Oslo, Norway; Carleton University, Canada)
Though there exists ample support for Design by Contract (DbC) for sequential programs, applying DbC to concurrent programs presents several challenges. In previous work, we extended the Java Modeling Language (JML) with constructs to specify concurrent contracts for Java programs. We present a runtime assertion checker (RAC) for the expanded JML capable of verifying assertions for concurrent Java programs. We systematically evaluate the validity of system testing results obtained via runtime assertion checking using actual concurrent and functional faults on a highly concurrent industrial system from the telecommunications domain.

New Ideas and Emerging Results


Perspectives of Delegation in Team-Based Distributed Software Development over the GENI Infrastructure (NIER Track)
Pierre F. Tiako
(Langston University, USA; Tiako University, USA)
Team-based distributed software development (TBDSD) is one of the biggest challenges facing software companies. The need to manage development efforts and resources in different locations increases the complexity and cost of modern-day software development. Current software development environments do not provide suitable support for delegating tasks among teams with appropriate directives. TBDSD is also limited by current Internet capabilities. One resulting problem is the difficulty of delegating and controlling tasks assigned among remote teams. This paper proposes (1) a new framework for delegation in TBDSD, and (2) perspectives for deploying Process-centered Software Engineering Environments (PSEE) over the Global Environment for Network Innovations (GENI) infrastructure. GENI, the “future Internet” that is taking shape in prototypes across the US, will allow software artifacts, resources, and tools to be securely accessed and shared, in the context of our study, as never before possible over the current Internet.
The Hidden Experts in Software-Engineering Communication (NIER Track)
Irwin Kwan and Daniela Damian
(University of Victoria, Canada)
Sharing knowledge in a timely fashion is important in distributed software development. However, because experts are difficult to locate, developers tend to broadcast information to find the right people, which leads to overload and to communication breakdowns. We study the context in which experts are included in an email discussion so that team members can identify experts sooner. In this paper, we conduct a case study examining why people emerge in discussions by examining email within a distributed team. We find that people emerge in the following four situations: when a crisis occurs, when they respond to explicit requests, when they are forwarded in announcements, and when discussants follow up on a previous event such as a meeting. We observe that emergent people respond not only in situations where developers are seeking expertise, but also emerge to execute routine tasks. Our findings have implications for expertise seeking and knowledge management processes.
How Do Programmers Ask and Answer Questions on the Web? (NIER Track)
Christoph Treude, Ohad Barzilay, and Margaret-Anne Storey
(University of Victoria, Canada; Tel-Aviv University, Israel)
Question and Answer (Q&A) websites, such as Stack Overflow, use social media to facilitate knowledge exchange between programmers and fill archives with millions of entries that contribute to the body of knowledge in software development. Understanding the role of Q&A websites in the documentation landscape will enable us to make recommendations on how individuals and companies can leverage this knowledge effectively. In this paper, we analyze data from Stack Overflow to categorize the kinds of questions that are asked, and to explore which questions are answered well and which ones remain unanswered. Our preliminary findings indicate that Q&A websites are particularly effective at code reviews and conceptual questions. We pose research questions and suggest future work to explore the motivations of programmers that contribute to Q&A websites, and to understand the implications of turning Q&A exchanges into technical mini-blogs through the editing of questions and answers.


Sketching Tools for Ideation (NIER Track)
Rachel Bellamy, Michael Desmond, Jacquelyn Martino, Paul Matchen, Harold Ossher, John Richards, and Cal Swart
(IBM Research Watson, USA)
Sketching facilitates design in the exploration of ideas about concrete objects and abstractions. In fact, throughout the software engineering process when grappling with new ideas, people reach for a pen and start sketching. While pen and paper work well, digital media can provide additional features to benefit the sketcher. Digital support will only be successful, however, if it does not detract from the core sketching experience. Based on research that defines characteristics of sketches and sketching, this paper offers three preliminary tool examples. Each example is intended to enable sketching while maintaining its characteristic experience.
Digitally Annexing Desk Space for Software Development (NIER Track)
John Hardy, Christopher Bull, Gerald Kotonya, and Jon Whittle
(Lancaster University, UK)
Software engineering is a team activity yet the programmer’s key tool, the IDE, is still largely that of a soloist. This paper describes the vision, implementation and initial evaluation of CoffeeTable – a fully featured research prototype resulting from our reflections on the software design process. CoffeeTable exchanges the traditional IDE for one built around a shared interactive desk. The proposed solution encourages smooth transitions between agile and traditional modes of working whilst helping to create a shared vision and common reference frame – key to sustaining a good design. This paper also presents early results from the evaluation of CoffeeTable and offers some insights from the lessons learned. In particular, it highlights the role of developer tools and the software constructions that are shaped by them.
Information Foraging as a Foundation for Code Navigation (NIER Track)
Nan Niu, Anas Mahmoud, and Gary Bradshaw
(Mississippi State University, USA)
A major software engineering challenge is to understand the fundamental mechanisms that underlie developers' code navigation behavior. We propose a novel and unified theory based on the premise that we can study developers' information-seeking strategies in light of the foraging principles that evolved to help our animal ancestors find food. Our preliminary study of code navigation graphs suggests that the tenets of information foraging provide valuable insight into software maintenance. Our research opens an avenue towards the development of ecologically valid tool support to augment developers' code search skills.

Tools & Languages

Identifying Method Friendships to Remove the Feature Envy Bad Smell (NIER Track)
Rocco Oliveto, Malcom Gethers, Gabriele Bavota, Denys Poshyvanyk, and Andrea De Lucia
(University of Molise, Italy; College of William and Mary, USA; University of Salerno, Italy)
We propose a novel approach to identify Move Method refactoring opportunities and remove the Feature Envy bad smell from source code. The proposed approach analyzes both structural and conceptual relationships between methods and uses Relational Topic Models (RTM) to identify sets of methods that share several responsibilities, i.e., "friend methods". The analysis of a given method's friendships can be used to pinpoint the target class (the envied class) to which the method should be moved. The results of a preliminary empirical evaluation indicate that the proposed approach provides accurate and meaningful refactoring opportunities.
The Code Orb -- Supporting Contextualized Coding via At-a-Glance Views (NIER Track)
Nicolas Lopez and André van der Hoek
(UC Irvine, USA)
While code is typically presented as a flat file to a developer who must change it, this flat file exists within a context that can drastically influence how a developer approaches changing it. While the developer clearly must be careful changing any code, they probably should be yet more careful in changing code that recently saw major changes, is barely covered by test cases, and was the source of a number of bugs. Contextualized coding refers to the ability of the developer to effectively use such contextual information while they work on some changes. In this paper, we introduce the Code Orb, a contextualized coding tool that builds upon existing mining and analysis techniques to warn developers on a line-by-line basis of the volatility of the code they are working on. The key insight behind the Code Orb is that it is neither desirable nor possible to always present the code’s context in its entirety; instead, it is necessary to provide an abstracted view of the context that informs the developer of which parts of the code they need to pay more attention to. This paper discusses the principles of and rationale behind contextualized coding, introduces the Code Orb, and illustrates its function with example code and context drawn from the Mylyn [11] project.
Permission-Based Programming Languages (NIER Track)
Jonathan Aldrich, Ronald Garcia, Mark Hahnenberg, Manuel Mohr, Karl Naden, Darpan Saini, Sven Stork, Joshua Sunshine, Éric Tanter, and Roger Wolff
(CMU, USA; Karlsruhe Institute of Technology, Germany; University of Chile, Chile)
Linear permissions have been proposed as a lightweight way to specify how an object may be aliased, and whether those aliases allow mutation. Prior work has demonstrated the value of permissions for addressing many software engineering concerns, including information hiding, protocol checking, concurrency, security, and memory management. We propose the concept of a permission-based programming language--a language whose object model, type system, and runtime are all co-designed with permissions in mind. This approach supports an object model in which the structure of an object can change over time, a type system that tracks changing structure in addition to addressing the other concerns above, and a runtime system that can dynamically check permission assertions and leverage permissions to parallelize code. We sketch the design of the permission-based programming language Plaid, and argue that the approach may provide significant software engineering benefits.


Toward a Better Understanding of Tool Usage (NIER Track)
Alberto Sillitti, Giancarlo Succi, and Jelena Vlasenko
(Free University of Bozen, Italy)
Developers use tools to develop software systems, and allegedly better tools are continually being produced and purchased. Still, there have been only limited studies of how people really use tools; these studies have used limited data, and the interactions between tools have not been properly elaborated. The advent of AISEMA (Automated In-Process Software Engineering Measurement and Analysis) systems [3] has enabled a more detailed collection of tool data. Our “new idea” is to take advantage of such data to build a simple model, based on an oriented graph, that enables a good understanding of how tools are used individually and collectively. We have empirically validated the model by analyzing an industrial team of 19 developers over a period of 10 months.
Characterizing Process Variation (NIER Track)
Borislava I. Simidchieva and Leon J. Osterweil
(University of Massachusetts at Amherst, USA)
A process model, namely a formal definition of the coordination of agents performing activities using resources and artifacts, can aid understanding of the real-world process it models. Moreover, analysis of the model can suggest improvements to the real-world process. Complex real-world processes, however, exhibit considerable amounts of variation that can be difficult or impossible to represent with a single process model. Such processes can often be modeled better, within the restrictions of a given modeling notation, by a family of models. This paper presents an approach to the formal characterization of some of these process families. A variety of needs for process variation are identified, and suggestions are made about how to meet some of these needs using different approaches. Some mappings of different needs for variability to approaches for meeting them are presented as case studies.
Blending Freeform and Managed Information in Tables (NIER Track)
Nicolas Mangano, Harold Ossher, Ian Simmonds, Matthew Callery, Michael Desmond, and Sophia Krasikov
(UC Irvine, USA; IBM Research Watson, USA)
Tables are an important tool used by business analysts engaged in early requirements activities (in fact it is safe to say that tables appeal to many other types of user, in a variety of activities and domains). Business analysts typically use the tables provided by office tools. These tables offer great flexibility, but no underlying model, and hence no consistency management, multiple views or other advantages familiar to the users of modeling tools. Modeling tools, however, are usually too rigid for business analysts. In this paper we present a flexible modeling approach to tables, which combines the advantages of both office and modeling tools. Freeform information can co-exist with information managed by an underlying model, and an incremental formalization approach allows each item of information to transition fluidly between freeform and managed. As the model evolves, it is used to guide the user in the process of formalizing any remaining freeform information. The model therefore helps users without restricting them. Early feedback is described, and the approach is analyzed briefly in terms of cognitive dimensions.
Design and Implementation of a Data Analytics Infrastructure in Support of Crisis Informatics Research (NIER Track)
Kenneth M. Anderson and Aaron Schram
(University of Colorado, USA)
Crisis informatics is an emerging research area that studies how information and communication technology (ICT) is used in emergency response. An important branch of this area includes investigations of how members of the public make use of ICT to aid them during mass emergencies. Data collection and analytics during crisis events is a critical prerequisite for performing such research, as the data generated during these events on social media networks are ephemeral and easily lost. We report on the current state of a crisis informatics data analytics infrastructure that we are developing in support of a broader, interdisciplinary research program. We also comment on the role that software engineering research plays in these increasingly common, highly interdisciplinary research efforts.


A Domain Specific Requirements Model for Scientific Computing (NIER Track)
Yang Li, Nitesh Narayan, Jonas Helming, and Maximilian Koegel
(TU München, Germany)
Requirements engineering is a core activity in software engineering. However, formal requirements engineering methodologies and documented requirements are often missing in scientific computing projects. We claim that there is a need for methodologies that capture requirements for scientific computing projects, because traditional requirements engineering methodologies are difficult to apply in this domain. We propose a novel domain-specific requirements model to meet this need. We conducted an exploratory experiment to evaluate the usage of this model in scientific computing projects. The results indicate that the proposed model facilitates communication across the boundary between the scientific computing domain and the software engineering domain, and efficiently supports requirements elicitation for such projects.
CREWW - Collaborative Requirements Engineering with Wii-Remotes (NIER Track)
Felix Bott, Stephan Diehl, and Rainer Lutz
(University of Trier, Germany)
In this paper, we present CREWW, a tool for co-located, collaborative CRC modeling and use case analysis. In CRC sessions role play is used to involve all stakeholders when determining whether the current software model completely and consistently captures the modeled use case. In this activity it quickly becomes difficult to keep track of which class is currently active or along which path the current state was reached. CREWW was designed to alleviate these and other weaknesses of the traditional approach.
Learning to Adapt Requirements Specifications of Evolving Systems (NIER Track)
Rafael V. Borges, Artur D'Avila Garcez, Luis C. Lamb, and Bashar Nuseibeh
(City University London, UK; UFRGS, Brazil; The Open University, UK; Lero, Ireland)
We propose a novel framework for adapting and evolving software requirements models. The framework uses model checking and machine learning techniques for verifying properties and evolving model descriptions. The paper offers two novel contributions and a preliminary evaluation and application of the ideas presented. First, the framework is capable of coping with errors in the specification process so that performance degrades gracefully. Second, the framework can also be used to re-engineer a model from examples only, when an initial model is not available. We provide a preliminary evaluation of our framework by applying it to a Pump System case study, and integrate our prototype tool with the NuSMV model checker. We show how the tool integrates verification and evolution of abstract models, and also how it is capable of re-engineering partial models given examples from an existing system.
Towards Overcoming Human Analyst Fallibility in the Requirements Tracing Process (NIER Track)
David Cuddeback, Alex Dekhtyar, Jane Huffman Hayes, Jeff Holden, and Wei-Keat Kong
(California Polytechnic State University, USA; University of Kentucky, USA)
Our research group recently discovered that human analysts, when asked to validate candidate traceability matrices, produce predictably imperfect results, in some cases less accurate than the starting candidate matrices. This discovery radically changes our understanding of how to design a fast, accurate and certifiable tracing process that can be implemented as part of software assurance activities. We present our vision for the new approach to achieving this goal. Further, we posit that human fallibility may impact other software engineering activities involving decision support tools.

Verification 1

Positive Effects of Utilizing Relationships Between Inconsistencies for More Effective Inconsistency Resolution (NIER Track)
Alexander Nöhrer, Alexander Reder, and Alexander Egyed
(Johannes Kepler University, Austria)
State-of-the-art modeling tools can help detect inconsistencies in software models. Some can even generate fixing actions for these inconsistencies. However, such approaches handle inconsistencies individually, assuming that each single inconsistency is a manifestation of an individual defect. We believe that inconsistencies are merely expressions of defects: inconsistencies highlight situations under which defects are observable. However, a single defect in a software model may result in many inconsistencies, and a single inconsistency may be the result of multiple defects. Inconsistencies may thus be related to other inconsistencies, and we believe that during fixing one should consider clusters of such related inconsistencies. This paper provides first evidence and emerging results showing that several inconsistencies can be linked to a single defect, and shows that with such knowledge only a subset of fixes needs to be considered during inconsistency resolution.
Matching Logic: A New Program Verification Approach (NIER Track)
Grigore Roşu and Andrei Ştefănescu
(University of Illinois at Urbana-Champaign, USA)
Matching logic is a new program verification logic, which builds upon operational semantics. Matching logic specifications are constrained symbolic program configurations, called patterns, which can be matched by concrete configurations. By building upon an operational semantics of the language and allowing specifications to directly refer to the structure of the configuration, matching logic has at least three benefits: (1) One's familiarity with the formalism reduces to one's familiarity with the operational semantics of the language, that is, with the language itself; (2) The verification process proceeds the same way as the program execution, making debugging failed proof attempts manageable because one can always see the “current configuration” and “what went wrong”, just as in a debugger; and (3) Nothing is lost in translation, that is, there is no gap between the language itself and its verifier. Moreover, direct access to the structure of the configuration facilitates defining sub-patterns that one may reason about, such as disjoint lists or trees in the heap, as well as supporting framing in various components of the configuration at no additional cost.


Model-based Performance Testing (NIER Track)
Cornel Barna, Marin Litoiu, and Hamoun Ghanbari
(York University, Canada)
In this paper, we present a method for performance testing of transactional systems. The method models the system under test, finds the software and hardware bottlenecks, and generates the workloads that saturate them. The framework is adaptive: the model and workloads are determined during performance test execution by measuring the system performance, fitting a performance model, and analytically computing the number and mix of users that will saturate the bottlenecks. We model the software system using a two-layer queuing model and use analytical techniques to find the workload mixes that change the bottlenecks in the system. Those workload mixes become stress vectors and initial starting points for the stress test cases. The remaining test cases are generated by a feedback loop that drives the software system towards its worst-case behaviour.
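The analytical step, computing which station saturates first from a fitted queuing model, can be illustrated with classic single-class Mean Value Analysis. The paper's two-layer model and adaptive fitting loop are more elaborate; this sketch and its service-demand numbers are illustrative assumptions only:

```python
def mva(demands, n_users, think_time=0.0):
    """Exact Mean Value Analysis for a closed, single-class queuing
    network: returns system throughput and per-station utilizations."""
    q = [0.0] * len(demands)          # mean queue length at each station
    x = 0.0                           # system throughput
    for n in range(1, n_users + 1):
        # Residence time at each station grows with its queue length.
        r = [d * (1 + qi) for d, qi in zip(demands, q)]
        x = n / (think_time + sum(r))
        q = [x * ri for ri in r]
    util = [x * d for d in demands]   # utilization law: U = X * D
    return x, util

# Hypothetical service demands (seconds/transaction) at CPU, disk, network:
demands = [0.02, 0.05, 0.01]
x, util = mva(demands, n_users=50)
print(max(range(len(util)), key=util.__getitem__))  # -> 1 (disk saturates first)
```

With a single workload class the bottleneck is simply the station with the largest demand; the paper's contribution is finding the workload *mixes* that shift which station that is.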
Tuple Density: A New Metric for Combinatorial Test Suites (NIER Track)
Baiqiang Chen and Jian Zhang
(Chinese Academy of Sciences, China)
We propose tuple density as a new metric for combinatorial test suites. It can be used to distinguish one test suite from another even if they have the same size and strength. Moreover, we also illustrate how a given test suite can be optimized based on this metric. The initial experimental results are encouraging.
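The abstract does not give the metric's formula. As an illustrative sketch under that caveat, the ratio of covered value tuples at a given strength distinguishes suites of equal size; the paper's exact definition of tuple density may differ, and the example suites below are hypothetical:

```python
from itertools import combinations, product

def covered_ratio(suite, levels, strength):
    """Fraction of all possible value tuples of the given strength,
    over fixed factor domains (levels[i] values each), that the
    suite's test rows cover."""
    k = len(levels)
    covered = total = 0
    for pos in combinations(range(k), strength):
        possible = set(product(*(range(levels[i]) for i in pos)))
        seen = {tuple(row[i] for i in pos) for row in suite}
        covered += len(seen & possible)
        total += len(possible)
    return covered / total

levels = [2, 2, 2]                               # three binary factors
suite_a = [(0,0,0), (0,1,1), (1,0,1), (1,1,0)]   # covers every 2-way tuple
suite_b = [(0,0,0), (0,0,1), (0,1,0), (0,1,1)]   # same size, fewer tuples
print(covered_ratio(suite_a, levels, 2))  # -> 1.0
print(covered_ratio(suite_b, levels, 2))  # 8 of 12 pairs covered
```

The two suites have identical size, yet the ratio separates them, which is the kind of tie-breaking the metric is meant to provide.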
Search-Enhanced Testing (NIER Track)
Colin Atkinson, Oliver Hummel, and Werner Janjic
(University of Mannheim, Germany)
The prime obstacle to automated defect testing has always been the generation of “correct” results against which to judge the behavior of the system under test – the “oracle problem”. So called “back-to-back” testing techniques that exploit the availability of multiple versions of a system to solve the oracle problem have mainly been restricted to very special, safety critical domains such as military and space applications since it is so expensive to manually develop the additional versions. However, a new generation of software search engines that can find multiple copies of software components at virtually zero cost promise to change this situation. They make it economically feasible to use the knowledge locked in reusable software components to dramatically improve the efficiency of the software testing process. In this paper we outline the basic ingredients of such an approach.
Article Search

Testing & Debugging

Fuzzy Set-based Automatic Bug Triaging (NIER Track)
Ahmed Tamrawi, Tung Thanh Nguyen, Jafar Al-Kofahi, and Tien N. Nguyen
(Iowa State University, USA)
Assigning a bug to the right developer is key to reducing the cost, time, and effort of bug fixing. This assignment process is often referred to as bug triaging. In this paper, we propose Bugzie, a novel approach for automatic bug triaging based on fuzzy set-based modeling of developers' bug-fixing expertise. Bugzie considers a system to have multiple technical aspects, each associated with technical terms. It then uses a fuzzy set to represent the developers who are capable of fixing the bugs relevant to each term. The membership function of a developer in a fuzzy set is calculated from the terms extracted from the bug reports that he or she has fixed, and the function is updated as newly fixed reports become available. For a new bug report, its terms are extracted and the corresponding fuzzy sets are combined via fuzzy union. Potential fixers are recommended based on their membership scores in the resulting union. Our preliminary results show that Bugzie achieves higher accuracy and efficiency than other state-of-the-art approaches.
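A minimal sketch of the idea follows; the membership formula (a developer's share of fixed reports mentioning a term) and the max-based union are simplifying assumptions, not Bugzie's exact definitions, and all names and counts are hypothetical:

```python
# history: term -> {developer: number of that developer's fixed reports
# containing the term}
def membership(history, term, dev):
    """Fuzzy membership of a developer in the term's fixer set."""
    counts = history.get(term, {})
    total = sum(counts.values())
    return counts.get(dev, 0) / total if total else 0.0

def rank_fixers(history, report_terms):
    """Union the per-term fuzzy sets (max t-conorm) and rank developers."""
    devs = {d for t in report_terms for d in history.get(t, {})}
    scores = {d: max(membership(history, t, d) for t in report_terms)
              for d in devs}
    return sorted(scores.items(), key=lambda kv: -kv[1])

history = {"socket": {"alice": 8, "bob": 2},
           "timeout": {"bob": 5, "carol": 5}}
ranking = rank_fixers(history, ["socket", "timeout"])  # alice first (0.8)
```

Updating `history` as newly fixed reports arrive is what keeps the membership function current.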
Article Search
Exploiting Hardware Advances for Software Testing and Debugging (NIER Track)
Mary Lou Soffa, Kristen R. Walcott, and Jason Mars
(University of Virginia, USA)
Despite the emerging ubiquity of hardware monitoring mechanisms and prior research in other fields, the applicability and usefulness of these mechanisms have not been fully scrutinized for software engineering. In this work, we identify several recently developed hardware mechanisms that lend themselves well to structural test coverage analysis and automated fault localization, and we explore their potential. We discuss key factors impacting the applicability of hardware monitoring mechanisms for these software engineering tasks, present novel online analyses leveraging these mechanisms, and provide preliminary results demonstrating the promise of this emerging hardware.
Article Search
Better Testing Through Oracle Selection (NIER Track)
Matt Staats, Michael W. Whalen, and Mats P. E. Heimdahl
(University of Minnesota, USA)
In software testing, the test oracle determines whether the application under test has performed an execution correctly. In current testing practice and research, significant effort and thought is placed on selecting test inputs, while the selection of test oracles is largely neglected. Here, we argue that improvements to the testing process can be made by considering the problem of oracle selection. In particular, we argue that selecting the test oracle and test inputs together, to complement one another, may yield improvements in testing effectiveness. We illustrate this using an example and present selected results from an ongoing study demonstrating the relationship between test suite selection, oracle selection, and fault finding.
Article Search

Program Analysis 1

Tracking Data Structures for Postmortem Analysis (NIER Track)
Xiao Xiao, Jinguo Zhou, and Charles Zhang
(Hong Kong University of Science and Technology, Hong Kong)
Analyzing the runtime behavior of data structures is important because it often relates to obscure program-performance and program-understanding issues. The runtime evolution history of data structures creates the possibility of building a lightweight, non-checkpointing solution for backward analysis that validates and mines both the temporal and stationary properties of data structures. We design and implement TAEDS, a framework that gathers the data-evolution history of a program at runtime and provides a virtual machine for programmers to examine the behavior of data structures back in time. We show that our approach facilitates many programming tasks, such as diagnosing memory problems and improving the design of the data structures themselves.
Article Search
Iterative Context-Aware Feature Location (NIER Track)
Xin Peng, Zhenchang Xing, Xi Tan, Yijun Yu, and Wenyun Zhao
(Fudan University, China; National University of Singapore, Singapore; The Open University, UK)
Locating the program element(s) relevant to a particular feature is an important step in efficient maintenance of a software system. Existing feature location techniques analyze each feature independently and perform a one-time analysis after being provided an initial input. As a result, these techniques are sensitive to the quality of the input, and they tend to miss the nonlocal interactions among features. In this paper, we propose to address the preceding two issues in feature location using an iterative context-aware approach. The underlying intuition is that features are not independent of each other, and that the structure of source code resembles the structure of features. The distinguishing characteristics of the proposed approach are: 1) it takes into account the structural similarity between a feature and a program element to determine their relevance; 2) it employs an iterative process to propagate the relevance of established mappings between a feature and a program element to the neighboring features and program elements. Our initial evaluation suggests the proposed approach is more robust and can significantly increase the recall of feature location with a slight decrease in precision.
Article Search
A Study of Ripple Effects in Software Ecosystems (NIER Track)
Romain Robbes and Mircea Lungu
(University of Chile, Chile; University of Bern, Switzerland)
When the Application Programming Interface (API) of a framework or library changes, its clients must be adapted. This change propagation—known as a ripple effect—is a problem that has garnered interest: several approaches have been proposed in the literature to react to these changes. Although studies of ripple effects exist at the single-system level, no study has been performed on the actual extent and impact of these API changes in practice, on an entire software ecosystem associated with a community of developers. This paper reports on early results of such an empirical study of API changes that led to ripple effects across an entire ecosystem. Our case study subject is the development community gravitating around the Squeak and Pharo software ecosystems: six years of evolution, nearly 3,000 contributors, and close to 2,500 distinct systems.
Article Search

Design Traceability

Tracing Architectural Concerns in High Assurance Systems (NIER Track)
Mehdi Mirakhorli and Jane Cleland-Huang
(DePaul University, USA)
Software architecture is shaped by a diverse set of interacting and competing quality concerns, each of which may have broad-reaching impacts across multiple architectural views. Without traceability support, it is easy for developers to inadvertently change critical architectural elements during ongoing system maintenance and evolution, leading to architectural erosion. Unfortunately, existing traceability practices tend to result in the proliferation of traceability links, which can be difficult to create, maintain, and understand. We therefore present a decision-centric approach that focuses traceability links around the architectural decisions that have shaped the delivered system. Our approach, which is informed through an extensive investigation of architectural decisions made in real-world safety-critical and performance-critical applications, provides enhanced support for advanced software engineering tasks.
Article Search
A Combination Approach for Enhancing Automated Traceability (NIER Track)
Xiaofan Chen, John Hosking, and John Grundy
(University of Auckland, New Zealand; Swinburne University of Technology at Melbourne, Australia)
Tracking a variety of traceability links between artifacts assists software developers in comprehension, efficient development, and effective management of a system. Traceability systems to date based on various Information Retrieval (IR) techniques have been faced with a major open research challenge: how to extract these links with both high precision and high recall. In this paper we describe an experimental approach that combines Regular Expression, Key Phrases, and Clustering with IR techniques to enhance the performance of IR for traceability link recovery between documents and source code. Our preliminary experimental results show that our combination technique improves the performance of IR, increases the precision of retrieved links, and recovers more true links than IR alone.
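The IR baseline that such a combination builds on can be sketched as term-overlap cosine similarity; the requirement text and file names below are invented, and real systems typically use TF-IDF weighting rather than raw counts:

```python
import math
import re

def tokens(text):
    """Lowercased alphabetic tokens of a document or identifier string."""
    return [t.lower() for t in re.findall(r"[A-Za-z]+", text)]

def cosine(a, b):
    """Cosine similarity between two token lists using raw term counts."""
    ca, cb = {}, {}
    for t in a:
        ca[t] = ca.get(t, 0) + 1
    for t in b:
        cb[t] = cb.get(t, 0) + 1
    dot = sum(ca[t] * cb.get(t, 0) for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

req = "The system shall encrypt user passwords before storage"
code = {"PasswordEncryptor.java": "encrypt password hash store user",
        "ReportPrinter.java": "print report page layout"}
links = sorted(code, key=lambda f: -cosine(tokens(req), tokens(code[f])))
```

The paper's combination then filters and boosts such candidate links with regular expressions, key phrases, and clustering to raise precision and recall over this baseline.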
Article Search
Capturing Tacit Architectural Knowledge Using the Repertory Grid Technique (NIER Track)
Dan Tofan, Matthias Galster, and Paris Avgeriou
(University of Groningen, Netherlands)
Knowledge about the architecture of a software-intensive system tends to vaporize easily. This leads to increased maintenance costs. We explore a new idea: utilizing the repertory grid technique to capture tacit architectural knowledge. In particular, we investigate the elicitation of design decision alternatives and their characteristics. To study the applicability of this idea, we performed an exploratory study in which seven independent subjects applied the repertory grid technique to document a design decision they had made in previous projects. We then interviewed each subject to understand their perception of the technique. We identified advantages and disadvantages of using the technique: the main advantage is the reasoning support it provides; the main disadvantage is the additional effort it requires. Applying the technique also depends on the context of the project. Overall, the repertory grid technique is a promising approach for fighting architectural knowledge vaporization.
Article Search

Modeling (or not)

Flexible Generators for Software Reuse and Evolution (NIER Track)
Stanislaw Jarzabek and Ha Duy Trung
(National University of Singapore, Singapore)
Developers tend to use models and generators during initial development, but often abandon them later in software evolution and reuse. One reason is that code generated from models (e.g., UML) is often manually modified, and changes cannot be easily propagated back to models. Once models become out of sync with code, any future re-generation of code overrides manual modifications. We propose a flexible generator solution that alleviates the above problem. The idea is to let developers weave arbitrary manual modifications into the generation process, rather than modify already generated code. A flexible generator stores specifications of manual modifications in executable form, so that weaving can be automatically re-done any time code is regenerated from modified models. In this way, models and manual modifications can evolve independently but in sync with each other, and the generated code is never directly changed. As a proof of concept, we have built a flexible generator prototype by merging a conventional generation system with a variability technique to handle manual modifications. We believe the flexible generator approach alleviates an important problem that hinders widespread adoption of MDD in software practice.
Article Search
The Lazy Initialization Multilayered Modeling Framework (NIER Track)
Fahad R. Golra and Fabien Dagnat
(Université Européenne de Bretagne, France; Institut Télécom, France)
Lazy Initialization Multilayer Modeling (LIMM) is an object-oriented modeling language targeted at the declarative definition of Domain Specific Languages (DSLs) for Model Driven Engineering. It focuses on the precise definition of modeling frameworks spanning multiple layers. In particular, it follows a two-dimensional architecture instead of the linear architecture followed by many other modeling frameworks. The novelty of our approach is to use lazy initialization to define the mappings between different modeling abstractions, within and across multiple layers, hence providing the basis for exploiting the potential of metamodeling.
Article Search
Towards Architectural Information in Implementation (NIER Track)
Henrik Bærbak Christensen and Klaus Marius Hansen
(Aarhus University, Denmark; University of Copenhagen, Denmark)
Agile development methods favor speed and feature-producing iterations. Software architecture, on the other hand, is ripe with techniques that are slow and not oriented directly towards implementing customers’ needs. Thus, there is a major challenge in retaining architectural information in a fast-paced agile project. We propose to embed as much architectural information as possible in the central artefact of the agile universe, the code. We argue that valuable architectural information is thereby retained for (automatic) documentation, validation, and further analysis, based on a relatively small investment of effort. We outline some preliminary examples of architectural annotations in Java and Python and their applicability in practice.
Article Search

Empirical SE

Topic-based Defect Prediction (NIER Track)
Tung Thanh Nguyen, Tien N. Nguyen, and Tu Minh Phuong
(Iowa State University, USA; Posts and Telecommunications Institute of Technology, Vietnam)
Defects are unavoidable in software development, and fixing them is costly and resource-intensive. To build defect prediction models, researchers have investigated a number of factors related to the defect-proneness of source code, such as code complexity, change complexity, or socio-technical factors. In this paper, we propose a new approach that emphasizes the technical concerns/functionality of a system. In our approach, a software system is viewed as a collection of software artifacts that describe different technical concerns/aspects. Those concerns are assumed to have different levels of defect-proneness and thus cause different levels of defect-proneness in the relevant software artifacts. We use topic modeling to measure the concerns in source code and use them as the input for machine learning-based defect prediction models. Preliminary results on Eclipse JDT show that the topic-based metrics correlate highly with the number of bugs (defect-proneness), and that our topic-based defect prediction has better predictive performance than existing state-of-the-art approaches.
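The shape of the pipeline can be sketched as follows; the two hand-written "topics" stand in for topics an LDA model would learn, and the file contents and bug counts are invented:

```python
import math

# Stand-in for learned topics: topic name -> characteristic words.
TOPICS = {"ui": {"widget", "render", "layout"},
          "net": {"socket", "timeout", "retry"}}

def topic_vector(file_tokens):
    """Per-file topic proportions (a stand-in for inferred LDA proportions)."""
    counts = {t: sum(file_tokens.count(w) for w in words)
              for t, words in TOPICS.items()}
    total = sum(counts.values()) or 1
    return {t: c / total for t, c in counts.items()}

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

files = [["socket", "timeout", "socket"], ["widget", "render"],
         ["socket", "retry", "widget"]]
bugs = [5, 1, 3]
net_share = [topic_vector(f)["net"] for f in files]
r = pearson(net_share, bugs)  # the "net" concern tracks the bug counts
```

In the actual approach, the per-file topic proportions would feed a trained prediction model rather than a single correlation coefficient.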
Article Search
Automated Usability Evaluation of Parallel Programming Constructs (NIER Track)
Victor Pankratius
(Karlsruhe Institute of Technology, Germany)
Multicore computers are ubiquitous, and proposals to extend existing languages with parallel constructs mushroom. While everyone claims to make parallel programming easier and less error-prone, empirical language usability evaluations are rarely done in-the-field with many users and real programs. Key obstacles are costs and a lack of appropriate environments to gather enough data for representative conclusions. This paper discusses the idea of automating the usability evaluation of parallel language constructs by gathering subjective and objective data directly in every software engineer’s IDE. The paper presents an Eclipse prototype suite that can aggregate such data from potentially hundreds of thousands of programmers. Mismatch detection in subjective and objective feedback as well as construct usage mining can improve language design at an early stage, thus reducing the risk of developing and maintaining inappropriate constructs. New research directions arising from this idea are outlined for software repository mining, debugging, and software economics.
Article Search
Data Analytics for Game Development (NIER Track)
Kenneth Hullett, Nachiappan Nagappan, Eric Schuh, and John Hopson
(UC Santa Cruz, USA; Microsoft Research, USA; Microsoft Game Studios, USA; Bungie Studios, USA)
The software engineering community has produced seminal papers on data analysis for software productivity, quality, reliability, performance, etc. Analyses have involved software systems ranging from desktop software to telecommunication switching systems, but little work has been done on the emerging digital game industry. In this paper we explore how data can drive game design and production decisions in game development. We define a mixture of qualitative and quantitative data sources, broken down into three broad categories: internal testing, external testing, and subjective evaluations. We present preliminary results of a case study of how data collected from users of a released game can inform subsequent development.
Article Search

Program Analysis 2

Mining Service Abstractions (NIER Track)
Dionysis Athanasopoulos, Apostolos V. Zarras, Panos Vassiliadis, and Valerie Issarny
(University of Ioannina, Greece; INRIA-Paris, France)
Several lines of research rely on the concept of service abstractions to enable the organization, composition, and adaptation of services. However, what is still missing is a systematic approach for extracting service abstractions out of the vast amount of services that are available all over the Web. To deal with this issue, we propose an approach for mining service abstractions, based on an agglomerative clustering algorithm. Our experimental findings suggest that the approach is promising and can serve as a basis for future research.
Article Search
A Software Behaviour Analysis Framework Based on the Human Perception Systems (NIER Track)
Heidar Pirzadeh and Abdelwahab Hamou-Lhadj
(Concordia University, Canada)
Understanding software behaviour can help in a variety of software engineering tasks if one can develop effective techniques for analyzing the information generated from a system's run. These techniques often rely on tracing. Traces, however, can be considerably large and complex to process. In this paper, we present an innovative approach for trace analysis inspired by the way the human brain and perception systems operate. The idea is to mimic the psychological processes that have been developed over the years to explain how our perception system deals with huge volumes of visual data. We show how similar mechanisms can be applied to the abstraction and simplification of large traces. Some preliminary results are also presented.
Article Search
Dynamic Shape Analysis of Program Heap using Graph Spectra (NIER Track)
Muhammad Zubair Malik
(University of Texas at Austin, USA)
Programs written in languages such as Java and C# maintain most of their state on the heap. The size and complexity of these programs pose a challenge in understanding and maintaining them; heap analysis, which summarizes the state of heap graphs, assists programmers in these tasks. In this paper we present a novel dynamic heap analysis technique that uses the spectra of heap graphs to summarize them. These summaries capture the shape of recursive data structures as dynamic invariants, i.e., likely properties of these structures that must be preserved after any destructive update. Initial experiments show that this approach can generate meaningful summaries for a range of subject structures.
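As a minimal stand-in for graph spectra, even the spectral radius of a heap's adjacency matrix distinguishes shapes; the power-iteration estimate and the toy graphs below are illustrative assumptions, not the paper's summaries:

```python
def spectral_radius(adj, iters=100):
    """Estimate the largest-magnitude eigenvalue of a symmetric 0/1 adjacency
    matrix by power iteration (ratio of vector norms per step)."""
    n = len(adj)
    v = [1.0] * n
    r = 0.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm_w = sum(x * x for x in w) ** 0.5
        norm_v = sum(x * x for x in v) ** 0.5
        r = norm_w / norm_v
        v = [x / norm_w for x in w]
    return r

# A 3-node chain (e.g., a singly linked list) vs. a 3-cycle (a corrupted,
# circular version of the same list):
chain = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # radius sqrt(2)
cycle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # radius 2
```

A destructive update that turns the chain into the cycle changes the spectrum, which is the kind of shape-invariant violation such summaries can flag.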
Article Search
Program Analysis: From Qualitative Analysis to Quantitative Analysis (NIER Track)
Sheng Liu and Jian Zhang
(Chinese Academy of Sciences, China)
We propose to combine symbolic execution with volume computation to compute the exact execution frequency of program paths and branches. Given a path, we use symbolic execution to obtain the path condition which is a set of constraints; then we use volume computation to obtain the size of the solution space for the constraints. With such a methodology and supporting tools, we can decide which paths in a program are executed more often than the others. We can also generate certain test cases that are related to the execution frequency, e.g., those covering cold paths.
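For small bounded integer inputs, the volume computation reduces to counting lattice points; the path condition below is a made-up example of what symbolic execution would collect, not one of the paper's benchmarks:

```python
from itertools import product

def frequency(path_condition, domains):
    """Fraction of the input space satisfying a path condition, computed by
    exhaustively counting lattice points (a stand-in for analytic volume
    computation, which scales to large domains)."""
    points = list(product(*domains))
    hits = sum(1 for p in points if path_condition(*p))
    return hits / len(points)

# Path condition of the "then" path of:  if x > y and x + y < 10: ...
cond = lambda x, y: x > y and x + y < 10
freq = frequency(cond, [range(10), range(10)])  # 25/100 = 0.25
```

Ranking paths by this frequency identifies hot paths, while inputs drawn from low-frequency conditions exercise the cold paths mentioned in the abstract.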
Article Search

Verification 2

Diagnosing New Faults Using Mutants and Prior Faults (NIER Track)
Syed Shariyar Murtaza, Nazim Madhavji, Mechelle Gittens, and Zude Li
(University of Western Ontario, Canada; University of West Indies, Barbados)
Literature indicates that 20% of a program’s code is responsible for 80% of the faults, and that 50-90% of field failures are rediscoveries of previous faults. Despite this, identifying faulty code can consume 30-40% of error-correction time. Previous fault-discovery techniques focusing on field failures either require many pass-fail traces, discover only crashing failures, or identify only coarse-grained faulty “files” as the origin of a fault. In our earlier work (the F007 approach), we identify finer-grained faulty “functions” in a field trace by using earlier resolved traces of the same release, which limits F007 to already-known faulty functions. This paper overcomes this limitation by proposing a new strategy to identify both new and old faulty functions using F007. This strategy uses failed traces of mutants (artificial faults) and failed traces of prior releases to identify faulty functions in the traces of a succeeding release. Our results on two UNIX utilities (Flex and Gzip) show that faulty functions in the traces of the majority (60-85%) of failures of a new software release can be identified by reviewing only 20% of the code. Compared with prior techniques, this is a notable improvement in terms of the contextual knowledge required and the accuracy of discovering finer-grained fault origins.
Article Search
Empirical Results on the Study of Software Vulnerabilities (NIER Track)
Yan Wu, Harvey Siy, and Robin Gandhi
(University of Nebraska at Omaha, USA)
While the software development community has put significant effort into capturing the artifacts related to a discovered vulnerability in organized repositories, much of this information is not amenable to meaningful analysis and requires deep, manual inspection. The software assurance community has developed a body of knowledge that provides an enumeration of common weaknesses, but it is not readily usable for the study of vulnerabilities in specific projects and user environments. We propose organizing the information in project repositories around semantic templates. In this paper, we present preliminary results of an experiment conducted to evaluate the effectiveness of using semantic templates as an aid to studying software vulnerabilities.
Article Search

Different Angles

Multifractal Aspects of Software Development (NIER Track)
Abram Hindle, Michael W. Godfrey, and Richard C. Holt
(UC Davis, USA; University of Waterloo, Canada)
Software development is difficult to model, particularly the noisy, non-stationary signals of changes per time unit extracted from version control systems (VCSs). Researchers currently model these signals with time-series analysis tools such as ARIMA. Unfortunately, current approaches are not well suited to the underlying power-law distributions of this kind of signal. We propose modeling changes per time unit using multifractal analysis, which applies when a signal exhibits multiscale self-similarity, as in the case of complex data drawn from power-law distributions. Specifically, we use multifractal analysis to demonstrate that software development is multifractal: the signal is a fractal composed of multiple fractal dimensions along a range of Hurst exponents. Thus we show that software development has multiscale self-similarity. We also pose questions that we hope multifractal analysis can answer.
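A single-window rescaled-range (R/S) estimate of the Hurst exponent illustrates the kind of self-similarity measurement involved; real multifractal analysis estimates a whole spectrum of exponents across many window sizes, and the two toy change series below are invented:

```python
import math

def hurst_rs(increments):
    """Crude one-window rescaled-range estimate of the Hurst exponent."""
    n = len(increments)
    mean = sum(increments) / n
    dev = [x - mean for x in increments]
    cum, s = [], 0.0
    for d in dev:
        s += d
        cum.append(s)
    r = max(cum) - min(cum)                     # range of cumulative deviations
    std = (sum(d * d for d in dev) / n) ** 0.5  # population std of increments
    return math.log(r / std) / math.log(n) if r and std else 0.0

h_anti = hurst_rs([1, -1] * 32)           # anti-persistent churn: 0.0
h_trend = hurst_rs([1] * 32 + [-1] * 32)  # one long persistent run: ~0.83
```

A multifractal signal is one where such exponents vary across scales and moments rather than collapsing to a single H.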
Article Search
The American Law Institute's Principles on Software Contracts and their Ramifications for Software Engineering Research (NIER Track)
James B. Williams and Jens H. Weber-Jahnke
(University of Victoria, Canada)
The American Law Institute has recently published principles of software contracts that may have profound impact on changing the software industry. One of the principles implies a nondisclaimable liability of software vendors for any hidden material defects. In this paper, we describe the new principle, first from a legal and then from a software engineering point of view. We point out potential ramifications and research directions for the software engineering community.
Article Search
Toward Sustainable Software Engineering (NIER Track)
Nadine Amsel, Zaid Ibrahim, Amir Malik, and Bill Tomlinson
(UC Irvine, USA)
Current software engineering practices have significant effects on the environment. Examples include e-waste from computers made obsolete due to software upgrades, and changes in the power demands of new versions of software. Sustainable software engineering aims to create reliable, long-lasting software that meets the needs of users while reducing environmental impacts. We conducted three related research efforts to explore this area. First, we investigated the extent to which users thought about the environmental impact of their software usage. Second, we created a tool called GreenTracker, which measures the energy consumption of software in order to raise awareness about the environmental impact of software usage. Finally, we explored the indirect environmental effects of software in order to understand how software affects sustainability beyond its own power consumption. The relationship between environmental sustainability and software engineering is complex; understanding both direct and indirect effects is critical to helping humans live more sustainably.
Article Search

Research Demonstrations

DemoSurf: Software Analysis and Model Evolution

MT-Scribe: An End-User Approach to Automate Software Model Evolution
Yu Sun, Jeff Gray, and Jules White
(University of Alabama at Birmingham, USA; University of Alabama, USA; Virginia Tech, USA)
Model evolution is an essential activity in software system modeling, traditionally supported by manual editing or by writing model transformation rules. However, the current state of practice in model evolution presents challenges to those unfamiliar with model transformation languages or metamodel definitions. This research demo presents a demonstration-based approach that assists end-users by automating model evolution tasks (e.g., refactoring, model scaling, and aspect weaving).
Article Search Video
Inconsistent Path Detection for XML IDEs
Pierre Genevès and Nabil Layaïda
(CNRS, France; INRIA, France)
We present the first IDE augmented with static detection of inconsistent paths for simplifying the development and debugging of any application involving XPath expressions.
Article Search Video
Automated Security Hardening for Evolving UML Models
Jan Jürjens
(TU Dortmund, Germany; Fraunhofer ISST, Germany)
Developing security-critical software correctly and securely is difficult. To address this problem, there has been a significant amount of work over the last 10 years on model-based development approaches, based on the Unified Modeling Language, that aim to raise the trustworthiness of security-critical systems; some of them include tools that let the user check whether a UML model satisfies the relevant security requirements. However, when the requirements are not satisfied by a given model, it can be challenging for the user to determine which changes to make to the model so that it does satisfy them. Moreover, software continues to evolve on an ongoing basis, even after the implementation has been shipped to the customer, which increases the challenge: in principle, the software has to be re-verified after each modification, requiring significant effort. We present work on automated tool support that exploits recent work on secure software evolution in the Secure Change project in order to support the security hardening of evolving UML models (within the context of the UML security extension UMLsec).
Article Search Video

DemoSun: Dynamic Software Updates and Analysis

JavAdaptor: Unrestricted Dynamic Software Updates for Java
Mario Pukall, Alexander Grebhahn, Reimar Schröter, Christian Kästner, Walter Cazzola, and Sebastian Götz
(University of Magdeburg, Germany; Philipps-University Marburg, Germany; University of Milano, Italy; University of Dresden, Germany)
Dynamic software updates (DSU) are one of the top features requested by developers and users. As a result, DSU is already standard in many dynamic programming languages, but not in statically typed languages such as Java. Even though it ranks third on Oracle’s current request-for-enhancement (RFE) list, DSU support in Java is very limited. Over the years, many different DSU approaches for Java have therefore been proposed. Nevertheless, DSU for Java is still an active field of research, because most of the existing approaches are too restrictive: some have shortcomings in terms of flexibility or performance, whereas others are platform dependent or dictate the program’s architecture. With JavAdaptor, we present the first DSU approach that comes without these restrictions. We will demonstrate JavAdaptor on the well-known arcade game Snake, which we will update stepwise at runtime.
Article Search Video
DyTa: Dynamic Symbolic Execution Guided with Static Verification Results
Xi Ge, Kunal Taneja, Tao Xie, and Nikolai Tillmann
(North Carolina State University, USA; Microsoft Research, USA)
Software-defect detection is an increasingly important research topic in software engineering. To detect defects in a program, static verification and dynamic test generation are two important proposed techniques. However, both of these techniques face their respective issues. Static verification produces false positives, and on the other hand, dynamic test generation is often time consuming. To address the limitations of static verification and dynamic test generation, we present an automated defect-detection tool, called DyTa, that combines both static verification and dynamic test generation. DyTa consists of a static phase and a dynamic phase. The static phase detects potential defects with a static checker; the dynamic phase generates test inputs through dynamic symbolic execution to confirm these potential defects. DyTa reduces the number of false positives compared to static verification and performs more efficiently compared to dynamic test generation.
Article Search Video
Identifying Opaque Behavioural Changes
Reid Holmes and David Notkin
(University of Waterloo, Canada; University of Washington, USA)
Developers modify their systems by changing source code, updating test suites, and altering their system’s execution context. When they make these modifications, they have an understanding of the behavioural changes they expect to happen when the system is executed; when the system does not conform to their expectations, developers try to ensure their modification did not introduce some unexpected or undesirable behavioural change. We present an approach that integrates with existing continuous integration systems to help developers identify situations whereby their changes may have introduced unexpected behavioural consequences. In this research demonstration, we show how our approach can help developers identify and investigate unanticipated behavioural changes.
Article Search Video
FireDetective: Understanding Ajax Client/Server Interactions
Nick Matthijssen and Andy Zaidman
(Delft University of Technology, Netherlands)
Ajax-enabled web applications are a new breed of highly interactive, highly dynamic web applications. Although Ajax allows developers to create rich web applications, Ajax applications can be difficult to comprehend and thus to maintain. FireDetective aims to facilitate the understanding of Ajax applications. It uses dynamic analysis at both the client (browser) and server side and subsequently connects both traces for further analysis.
Article Search Video

DemoSky: Software Testing and Quality Assessment

BQL: Capturing and Reusing Debugging Knowledge
Zhongxian Gu, Earl T. Barr, and Zhendong Su
(UC Davis, USA)
When fixing a bug, programmers tend to search for similar bugs that have been resolved in the past. A fix for a similar bug may help them fix their bug, or at least understand it. We designed and implemented the Bug Query Language (BQL) and its accompanying tools to help users search for similar bugs to aid debugging. This paper demonstrates the main features of the BQL infrastructure. We populated BQL with bugs collected from open-source projects and show that BQL could have helped users fix real-world bugs.
Article Search Video
Covana: Precise Identification of Problems in Pex
Xusheng Xiao, Tao Xie, Nikolai Tillmann, and Jonathan de Halleux
(North Carolina State University, USA; Microsoft Research, USA)
Achieving high structural coverage is an important goal of software testing. Instead of manually producing test inputs that achieve high structural coverage, testers or developers can employ tools built on automated test-generation approaches, such as Pex, to generate such inputs automatically. Although these tools can easily generate test inputs that achieve high structural coverage for simple programs, when applied to complex programs in practice they face various problems, such as dealing with method calls to external libraries or generating method-call sequences to produce desired object states. Since these tools are currently not powerful enough to overcome these problems in testing complex programs, we propose cooperative developer testing, where developers provide guidance to help tools achieve higher structural coverage. In this demo, we present Covana, a tool that precisely identifies and reports problems that prevent Pex from achieving high structural coverage. Covana identifies problems primarily by determining whether branch statements containing not-covered branches have data dependencies on problem candidates.
Article Search Video
The Quamoco Tool Chain for Quality Modeling and Assessment
Florian Deissenboeck, Lars Heinemann, Markus Herrmannsdoerfer, Klaus Lochmann, and Stefan Wagner
(TU München, Germany)
Continuous quality assessment is crucial for the long-term success of evolving software. On the one hand, code analysis tools automatically supply quality indicators, but do not provide a complete overview of software quality. On the other hand, quality models define abstract characteristics that influence quality, but are not operationalized. Currently, no tool chain exists that integrates code analysis tools with quality models. To alleviate this, the Quamoco project provides a tool chain to both define and assess software quality. The tool chain consists of a quality model editor and an integration with the quality assessment toolkit ConQAT. Using the editor, we can define quality models ranging from abstract characteristics down to operationalized measures. From the quality model, a ConQAT configuration can be generated that can be used to automatically assess the quality of a software system.
Article Search Video
ReAssert: A Tool for Repairing Broken Unit Tests
Brett Daniel, Danny Dig, Tihomir Gvero, Vilas Jagannath, Johnston Jiaa, Damion Mitchell, Jurand Nogiec, Shin Hwei Tan, and Darko Marinov
(University of Illinois at Urbana-Champaign, USA; EPFL, Switzerland)
Successful software systems continuously change their requirements and thus code. When this happens, some existing tests get broken because they no longer reflect the intended behavior, and thus they need to be updated. Repairing broken tests can be time-consuming and difficult. We present ReAssert, a tool that can automatically suggest repairs for broken unit tests. Examples include replacing literal values in tests, changing assertion methods, or replacing one assertion with several. Our experiments show that ReAssert can repair many common test failures and that its suggested repairs match developers’ expectations.
Article Search Video
AutoBlackTest: A Tool for Automatic Black-Box Testing
Leonardo Mariani, Mauro Pezzè, Oliviero Riganelli, and Mauro Santoro
(University of Milano Bicocca, Italy; University of Lugano, Switzerland)
In this paper we present AutoBlackTest, a tool for the automatic generation of test cases for interactive applications. AutoBlackTest interacts with the application through its GUI, and uses reinforcement learning techniques to understand the interaction modalities and to generate relevant testing scenarios. Early results show that the tool has the potential to automatically discover bugs and to generate useful system and regression test suites.
Article Search Video

DemoSand: Computer Supported Cooperative Work and Software Engineering

Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications
Elad Fein, Natalia Razinkov, Shlomit Shachor, Pietro Mazzoleni, Sweefen Goh, Richard Goodwin, Manisha Bhandar, Shyh-Kwei Chen, Juhnyoung Lee, Vibha Singhal Sinha, Senthil Mani, Debdoot Mukherjee, Biplav Srivastava, and Pankaj Dhoolia
(IBM Research Haifa, Israel; IBM Research Watson, USA; IBM Research, India)
The complex process of adapting pre-packaged applications, such as Oracle or SAP, to an organization’s needs is full of challenges. Although detailed, structured, and well-documented methods govern this process, the consulting team implementing the method must spend a huge amount of manual effort to make sure its guidelines are followed as the method author intended. MATCON breaks down the method content, documents, templates, and work products into reusable objects, and enables them to be cataloged and indexed so they can be easily found and reused on subsequent projects. By modeling and meta-modeling the reusable methods, we automatically produce a CASE tool that applies these methods, thereby guiding consultants through this complex process. The resulting tool helps consultants create the method deliverables for the initial phases of large customization projects. Our MATCON output, referred to as Consultant Assistant, has shown significant savings in training costs, a 20–30% improvement in productivity, and positive results in large Oracle and SAP implementations.
Article Search Video
SEREBRO: Facilitating Student Project Team Collaboration
Noah M. Jorgenson, Matthew L. Hale, and Rose F. Gamble
(University of Tulsa, USA)
In this demonstration, we show SEREBRO, a lightweight courseware developed for student team collaboration in a software engineering class. SEREBRO couples an idea forum with software project management tools to maintain cohesive interaction between team discussion and the resulting work products, such as tasking, documentation, and version control. SEREBRO has been used for two consecutive years of software engineering classes. Student input and experiments on student use in these classes have directed SEREBRO to its current functionality.
Article Search Video
StakeSource2.0: Using Social Networks of Stakeholders to Identify and Prioritise Requirements
Soo Ling Lim, Daniela Damian, and Anthony Finkelstein
(University College London, UK; University of Victoria, Canada)
Software projects typically rely on system analysts to conduct requirements elicitation, an approach potentially costly for large projects with many stakeholders and requirements. This paper describes StakeSource2.0, a web-based tool that uses social networks and collaborative filtering, a “crowdsourcing” approach, to identify and prioritise stakeholders and their requirements.
Article Search Video
Miler: A Toolset for Exploring Email Data
Alberto Bacchelli, Michele Lanza, and Marco D'Ambros
(University of Lugano, Switzerland)
Source code is the target and final outcome of software development. By focusing our research and analysis on source code only, we risk forgetting that software is the product of human efforts, where communication plays a pivotal role. One of the most widely used means of communication is email, which has become vital for any distributed development project. Analyzing email archives is non-trivial, due to the noisy and unstructured nature of emails, the vast amounts of information, the unstandardized storage systems, and the gap with development tools. We present Miler, a toolset that allows the exploration of this form of communication, in the context of software maintenance and evolution. With Miler we can retrieve data from mailing list repositories in different formats, model emails as first-class entities, and transparently store them in databases. Miler offers tools and support for navigating the content, manually labelling emails with discussed source code entities, automatically linking emails to source code, measuring code entities’ popularity in mailing lists, exposing structure in the unstructured content, and integrating email communication in an IDE.
Article Search Video
A Demonstration of a Distributed Software Design Sketching Tool
Nicolas Mangano, Mitch Dempsey, Nicolas Lopez, and André van der Hoek
(UC Irvine, USA)
Software designers frequently sketch when they design, particularly during the early phases of exploration of a design problem and its solution. In so doing, they shun formal design tools, the reason being that such tools impose conformity and precision prematurely. Sketching, on the other hand, is a highly fluid and flexible way of expressing oneself. In this paper, we present Calico, a sketch-based distributed software design tool that supports software designers with a variety of features that improve over the use of just pen-and-paper or a regular whiteboard, and are tailored specifically for software design. Calico is meant to be used on electronic whiteboards or tablets, and provides for rapid creation and manipulation of design content by sets of developers who can collaborate in a distributed manner.
Article Search Video

DemoShore: Software Development and Maintenance

View Infinity: A Zoomable Interface for Feature-Oriented Software Development
Michael Stengel, Janet Feigenspan, Mathias Frisch, Christian Kästner, Sven Apel, and Raimund Dachselt
(University of Magdeburg, Germany; University of Marburg, Germany; University of Passau, Germany)
Software product line engineering provides efficient means to develop variable software. To support program comprehension of software product lines (SPLs), we developed View Infinity, a tool that provides seamless and semantic zooming of different abstraction layers of an SPL. First results of a qualitative study with experienced SPL developers are promising and indicate that View Infinity is useful and intuitive to use.
Article Search Video
CodeTopics: Which Topic am I Coding Now?
Malcom Gethers, Trevor Savage, Massimiliano Di Penta, Rocco Oliveto, Denys Poshyvanyk, and Andrea De Lucia
(College of William and Mary, USA; CMU, USA; University of Sannio, Italy; University of Molise, Italy; University of Salerno, Italy)
Recent studies indicated that showing the similarity between the source code being developed and related high-level artifacts (HLAs), such as requirements, helps developers improve the quality of source code identifiers. In this paper, we present CodeTopics, an Eclipse plug-in that, in addition to showing the similarity between source code and HLAs, also highlights to what extent the code under development covers the topics described in HLAs. Such views complement the information derived by showing only the similarity between source code and HLAs, helping (i) developers to identify functionality that is not yet implemented and (ii) newcomers to comprehend source code artifacts by showing them the topics these artifacts relate to.
Article Search Video
JDeodorant: Identification and Application of Extract Class Refactorings
Marios Fokaefs, Nikolaos Tsantalis, Eleni Stroulia, and Alexander Chatzigeorgiou
(University of Alberta, Canada; University of Macedonia, Greece)
Evolutionary changes in object-oriented systems can result in large, complex classes, known as “God Classes”. In this paper, we present a tool, developed as part of the JDeodorant Eclipse plugin, that can recognize opportunities for extracting cohesive classes from “God Classes” and automatically apply the refactoring chosen by the developer.
Article Search Video
Evolve: Tool Support for Architecture Evolution
Andrew McVeigh, Jeff Kramer, and Jeff Magee
(Imperial College London, UK)
Incremental change is intrinsic to both the initial development and subsequent evolution of large complex software systems. Evolve is a graphical design tool that captures this incremental change in the definition of software architecture. It supports a principled and manageable way of dealing with unplanned change and extension. In addition, Evolve supports decentralized evolution in which software is extended and evolved by multiple independent developers. Evolve supports a model-driven approach in that architecture definition is used to directly construct both initial implementations and extensions to these implementations. The tool implements Backbone, an architectural description language (ADL) with both a textual and a UML2-based graphical representation. The demonstration focuses on the graphical representation.
Article Search Video
Portfolio: A Search Engine for Finding Functions and Their Usages
Collin McMillan, Mark Grechanik, Denys Poshyvanyk, Qing Xie, and Chen Fu
(College of William and Mary, USA; University of Illinois at Chicago, USA; Accenture Technology Labs, USA)
In this demonstration, we present a code search system called Portfolio that retrieves and visualizes relevant functions and their usages. We will show how chains of relevant functions and their usages can be visualized to users in response to their queries.
Article Search Video

Impact Project Focus Area

Impact of Process Simulation on Software Practice: An Initial Report
He Zhang, Ross Jeffery, Dan Houston, LiGuo Huang, and Liming Zhu
(NICTA, Australia; University of New South Wales, Australia; The Aerospace Corporation, USA; Southern Methodist University, USA)
Process simulation has become a powerful technology in support of software project management and process improvement over the past decades. This research, inspired by the Impact Project, investigates the transfer of software process simulation technology to industrial settings, and further identifies the best practices for releasing its full potential in software practice. We collected reported applications of process simulation in the software industry and identified its wide adoption in organizations delivering various software-intensive systems. This paper, as an initial report of the research, gives a historical perspective on the impact upon practice based on the documented evidence, and also elaborates the research-practice transition by examining one detailed case study. It is shown that research has had a significant impact on practice in this area. The analysis of impact traces also reveals that the success of software process simulation in practice relies strongly on its association with other software process techniques and practices, and on close collaboration between researchers and practitioners.
Article Search
Impact of Software Resource Estimation Research on Practice: A Preliminary Report on Achievements, Synergies, and Challenges
Barry W. Boehm and Ricardo Valerdi
(University of Southern California, USA; MIT, USA)
This paper is a contribution to the Impact Project in the area of software resource estimation. The objective of the Impact Project has been to analyze the impact of software engineering research investments on software engineering practice. The paper begins by summarizing the motivation and context for analyzing software resource estimation; and by summarizing the study’s purpose, scope, and approach. The approach includes analyses of the literature; interviews of leading software resource estimation researchers, practitioners, and users; and value/impact surveys of estimators and users. The study concludes that research in software resource estimation has had a significant impact on the practice of software engineering, but also faces significant challenges in addressing likely future software trends.
Article Search
Symbolic Execution for Software Testing in Practice -- Preliminary Assessment
Cristian Cadar, Patrice Godefroid, Sarfraz Khurshid, Corina S. Păsăreanu, Koushik Sen, Nikolai Tillmann, and Willem Visser
(Imperial College London, UK; Microsoft Research, USA; University of Texas at Austin, USA; CMU, USA; NASA Ames Research Center, USA; UC Berkeley, USA; Stellenbosch University, South Africa)
We present results for the “Impact Project Focus Area” on the topic of symbolic execution as used in software testing. Symbolic execution is a program analysis technique introduced in the 70s that has received renewed interest in recent years, due to algorithmic advances and increased availability of computational power and constraint solving technology. We review classical symbolic execution and some modern extensions such as generalized symbolic execution and dynamic test generation. We also give a preliminary assessment of its use in academia, research labs, and industry.
Article Search

Technical Briefings

ICSE 2011 Technical Briefings
Gail C. Murphy and Andreas Zeller
(University of British Columbia, Canada; Saarland University, Germany)
The better we meet the interests of our community, the better we can help bring ourselves up to date with the latest and greatest in and around software engineering. To this purpose, ICSE 2011 for the first time featured technical briefings, an all-day venue for communicating the state of topics related to software engineering, thus providing an exchange of ideas as well as an introduction to the main conference itself.
Article Search

Doctoral Symposium

Mature Phase Extended Abstracts

Exploring, Exposing, and Exploiting Emails to Include Human Factors in Software Engineering
Alberto Bacchelli
(University of Lugano, Switzerland)
Researchers mine software repositories to support software maintenance and evolution. The analysis of the structured data, mainly source code and changes, has several benefits and offers precise results. This data, however, leaves communication in the background, and does not permit a deep investigation of the human factor, which is crucial in software engineering. Software repositories also archive documents, such as emails or comments, that are used to exchange knowledge among people, which we call "people-centric information." By covering this data, we include the human factor in our analysis, yet its unstructured nature means it is currently under-exploited. Our work, by focusing on email communication and by implementing the necessary tools, investigates methods for exploring, exposing, and exploiting unstructured data. We believe it is possible to close the gap between development and communication, extract opinions, habits, and views of developers, and link implementation to its rationale; we foresee a future where software analysis and development is routinely augmented with people-centric information.
Article Search
GATE: Game-based Testing Environment
Ning Chen
(Hong Kong University of Science and Technology, China)
In this paper, we propose a game-based public testing mechanism called GATE. The purpose of GATE is to make use of the rich human resources on the Internet to help increase effectiveness in software testing and improve test adequacy. GATE facilitates public testing in three main steps: 1) decompose the test-criterion satisfaction problem into many smaller sub-model satisfaction problems; 2) construct a game for each individual sub-model and present the games to the public through web servers; 3) collect and convert public users’ action-sequence data into real test cases that are guaranteed to cover inadequately tested elements. A preliminary study on the apache-commons-math library shows that 44% of the branches have not been adequately tested by state-of-the-art automatic test generation techniques. Among these branches, at least 42% are decomposable by GATE into smaller sub-problems. These elements naturally become potential targets for GATE’s public game-based testing.
Article Search
Reuse vs. Maintainability: Revealing the Impact of Composition Code Properties
Francisco Dantas
(PUC-Rio, Brazil)
Over the last few years, several composition mechanisms have emerged to improve program modularity. Even though these mechanisms vary widely in their notation and semantics, they all promote a shift in the way programs are structured: they provide expressive means to define the composition of two or more reusable modules. However, given the complexity of composition code, its actual effects on software quality are not well understood. This PhD research aims at investigating the impact of emerging composition mechanisms on the simultaneous satisfaction of software reuse and maintainability. To perform this analysis, we intend to define a set of composition-driven metrics and compare their efficacy with traditional modularity metrics. Finally, we plan to derive guidelines on how to use new composition mechanisms to maximize reuse and stability of software modules.
Article Search
Specification Mining in Concurrent and Distributed Systems
Sandeep Kumar
(National University of Singapore, Singapore)
Distributed systems contain several interacting components that perform complex computational tasks. Formal specification of the interaction protocols is crucial to the understanding of these systems. When a formal specification is not available, dynamically mining specifications from traces of actual interactions during execution can play a useful role in verification and comprehension. We propose a framework for behavioral specification mining in distributed systems. Concurrency and complexity in distributed models raise special challenges for specification mining in such systems.
Article Search
Detecting Architecturally-Relevant Code Smells in Evolving Software Systems
Isela Macia Bertran
(PUC Rio, Brazil)
Refactoring tends to prevent the early deviation of a program from its intended architecture design. However, there is little knowledge about whether the manifestation of code smells in evolving software is an indicator of architectural deviations. A fundamental difficulty in this process is that developers are equipped only with static analysis techniques for the source code, which do not exploit traceable architectural information. This work addresses this problem by: (1) identifying a family of architecturally-relevant code smells; (2) providing empirical evidence about the correlation of code smell patterns and architectural degeneration; (3) proposing a set of metrics and detection strategies that exploit traceable architectural information in smell detection; and (4) conceiving a technique to support the early identification of architecture degeneration symptoms by reasoning about code smell patterns.
Article Search
Pragmatic Reuse in Web Application Development
Josip Maras
(University of Split, Croatia)
Highly interactive web applications that offer the user experience and responsiveness of desktop applications are becoming increasingly popular. They are often composed of visually distinctive user-interface (UI) elements that encapsulate certain behavior, so-called UI controls. Similar controls are often used in a large number of web pages, and facilitating their reuse would offer considerable benefits. Unfortunately, because of very short time-to-market and the fast pace of technology development, preparing controls for reuse is usually not a primary concern. The focus of my research is to circumvent this limitation by developing a method, and an accompanying tool, for supporting web UI control reuse.
Article Search
Inconsistency Management Framework for Model-Based Development
Alexander Reder
(Johannes Kepler University, Austria)
In contrast to programming environments, model-based development tools lack efficient support for detecting and repairing design errors. However, inconsistencies must eventually be resolved, and detecting them is of little use if it is not known how to act on this information. Many approaches exist for detecting inconsistencies. Only some of them provide solutions for repairing individual inconsistencies, and none of them investigate the repair problem comprehensively, in particular the side effects that might occur when applying a repair. My PhD thesis focuses on resolving inconsistencies, the different strategies one can follow, and how to deal with the critical problem of side effects. My approach detects inconsistencies incrementally and combines runtime analysis of the design rules’ evaluation behavior with static analysis of the design rules’ structure. The main contribution is an efficient and effective inconsistency management framework that can be applied generically to (most) modeling and design rule languages.
Article Search
Mental Models and Parallel Program Maintenance
Caitlin Sadowski
(UC Santa Cruz, USA)
Parallel programs are difficult to write, test, and debug. This thesis explores how programmers build mental models about parallel programs, and demonstrates, through user evaluations, that maintenance activities can be improved by incorporating theories based on such models. By doing so, this work aims to increase the reliability and performance of today’s information technology infrastructure by improving the practice of maintaining and testing parallel software.
Article Search
Pragmatic Prioritization of Software Quality Assurance Efforts
Emad Shihab
(Queen's University, Canada)
A plethora of recent work leverages historical data to help practitioners better prioritize their software quality assurance efforts. However, the adoption of this prior work in practice remains low. In our work, we identify a set of challenges that need to be addressed to make previous work on quality assurance prioritization more pragmatic. We outline four guidelines that address these challenges to make prior work on software quality assurance more pragmatic: 1) Focused Granularity (i.e., small prioritization units), 2) Timely Feedback (i.e., results can be acted on in a timely fashion), 3) Estimate Effort (i.e., estimate the time it will take to complete tasks), and 4) Evaluate Generality (i.e., evaluate findings across multiple projects and multiple domains). We present two approaches, at the code and change level, that demonstrate how prior approaches can be more pragmatic.
Article Search
Directed Test Suite Augmentation
Zhihong Xu
(University of Nebraska-Lincoln, USA)
Test suite augmentation techniques are used in regression testing to identify code elements affected by changes and to generate test cases to cover those elements. Whereas methods and techniques to find affected elements have been extensively researched in regression testing, how to generate new test cases to cover these elements cost-effectively has rarely been studied. Because generating test cases is very expensive, we focus on this second step, and we believe that reusing existing test cases will help us achieve this task. This research intends to provide a framework for test suite augmentation techniques that reuse existing test cases to automatically generate new test cases covering as many affected elements as possible, cost-effectively.
Article Search
Reengineering Legacy Software Products into Software Product Line Based on Automatic Variability Analysis
Yinxing Xue
(National University of Singapore, Singapore)
To deliver varied software products to customers with short time-to-market, the paradigm of Software Product Lines (SPLs) represents a new endeavor in software development. To migrate a family of legacy software products into an SPL for effective reuse, one has to understand the commonality and variability among the existing product variants. Existing techniques rely on manual identification and modeling of variability, and the analysis based on those techniques is performed at several mutually independent levels of abstraction. We propose a sandwich approach that consolidates feature knowledge from top-down domain analysis with bottom-up analysis of code similarities in the subject software products. Our proposed method integrates model differencing, clone detection, and information retrieval techniques, providing a systematic means to reengineer legacy software products into an SPL based on automatic variability analysis.
Article Search
1.x-Way Architecture-Implementation Mapping
Yongjie Zheng
(UC Irvine, USA)
A new architecture-implementation mapping approach, 1.x-way mapping, is presented to address architecture-implementation conformance. It targets maintaining conformance of structure and behavior, providing a solution to architecture changes, and protecting architecture-prescribed code from being manually changed. Technologies developed in this work include deep separation of generated and non-generated code, an architecture change model, architecture-based code regeneration, and architecture change notification.
Article Search

Early Phase Extended Abstracts

Using Software Evolution History to Facilitate Development and Maintenance
Pamela Bhattacharya
(UC Riverside, USA)
Much research in software engineering has focused on improving software quality and automating the maintenance process to reduce software costs and mitigate complications associated with the evolution process. Despite all these efforts, software bugs and software maintenance still incur high cost and effort, software continues to be unreliable, and software bugs can wreak havoc on software producers and consumers alike. My dissertation aims to advance the state of the art in software evolution research by designing tools that can measure and predict software quality, and by creating integrated frameworks that help improve software maintenance and research that involves mining software repositories.
Article Search
Searching, Selecting, and Synthesizing Source Code
Collin McMillan
(College of William and Mary, USA)
As a programmer writes new software, he or she may instinctively sense that certain functionality is general or widely applicable enough to have been implemented before. Unfortunately, programmers face major challenges when attempting to reuse this functionality: First, developers must search for source code relevant to the high-level task at hand. Second, they must select specific components from the relevant code to reuse. Third, they must synthesize these components into their own software projects. Techniques exist to address specific instances of these three challenges, but they do not support programmers throughout the reuse process. The goal of this research is to create a unified approach to searching, selecting, and synthesizing source code. We believe that this approach will provide programmers with crucial insight into how high-level functionality present in existing software can be reused.
Article Search
Tracing Architecturally Significant Requirements: A Decision-Centric Approach
Mehdi Mirakhorli
(DePaul University, USA)
This thesis describes a Decision-Centric Traceability (DCT) framework that supports software engineering activities such as architectural preservation, impact analysis, and visualization of design intent. Existing traceability meta-models [2] for tracing architecturally significant requirements (ASRs) can lead to the proliferation of a large number of unmanageable traceability links. In comparison, DCT traces between ASRs and architectural components with fewer, semantically rich traceability links, and is designed to help developers ensure the long-term integrity of system-level qualities. We present a set of traceability patterns, derived from studying real-world architectural designs in high-assurance and high-performance systems. We further present a trace-retrieval approach that reverse engineers design decisions and their associated traceability links by training a classifier to recognize fragments of design decisions and then using the traceability patterns to reconstitute the decisions from their individual parts.
Article Search
Predictable Dynamic Deployment of Components in Embedded Systems
Ana Petričić
(Mälardalen University, Sweden)
Dynamic reconfiguration – the ability to hot swap a component or to introduce a new component into a system – is essential to supporting evolutionary change in long-lived and highly available systems. A major issue related to this process is ensuring application consistency and performance after reconfiguration. This task is especially challenging for embedded systems, which run with limited resources and have specific dependability requirements. We focus on checking resource constraints and propose that component compliance checking be performed during deployment. Our main objective is to preserve system integrity during and after reconfiguration by developing a resource-efficient dynamic deployment mechanism that includes component validation with respect to available system resources and performance requirements.
Article Search
A Declarative Approach to Enable Flexible and Dynamic Service Compositions
Leandro Sales Pinto
(Politecnico di Milano, Italy)
Service Oriented Architecture (SOA) is considered the best solution for developing distributed applications that compose existing services to provide new added-value services for their users. However, we claim that the full potential of SOA has yet to be reached, and we believe this is caused by the very tools used to build service orchestrations. To overcome this situation, we are developing a new declarative language for service orchestration, enhanced with an innovative runtime system that simplifies the development of flexible and robust service compositions.
Article Search
A Framework for the Integration of User Centered Design and Agile Software Development Processes
Dina Salah
(University of York, UK)
Agile and user centered design integration (AUCDI) is of significant interest to researchers who want to achieve synergy between the two and eliminate the limitations of each. Currently, there are no clear principles or guidelines for practitioners to achieve successful integration. In addition, substantial differences exist between agile and UCD approaches, which pose challenges to integration attempts. As a result, practitioners have developed individual integration strategies. However, the evaluation of the success of current AUCDI attempts has been anecdotal. Moreover, AUCDI attempts cannot be generalized to provide guidance and assistance to other projects or organizations with different needs. My thesis aims to provide a Software Process Improvement (SPI) framework for AUCDI by providing generic guidelines and practices for organizations aspiring to achieve AUCDI, in order to address AUCDI challenges including: introducing systematicity and structure into AUCDI, assessing AUCDI processes, and accommodating project and organizational characteristics.
Article Search
Improving Open Source Software Patch Contribution Process: Methods and Tools
Bhuricha Deen Sethanandha
(Portland State University, USA)
The patch contribution process (PCP) is very important to the sustainability of OSS projects. Nevertheless, there are several issues with patch contribution in mature OSS projects, including a time-consuming process, lost and ignored patches, and a slow review process. These issues are recognized by researchers and OSS projects, but have not been addressed. In this dissertation, I apply the Kanban method to guide process improvement and tool development to reduce PCP cycle time.
Article Search
Systematizing Security Test Case Planning Using Functional Requirements Phrases
Ben Smith
(North Carolina State University, USA)
Security experts use their knowledge to attempt attacks on an application in an exploratory and opportunistic way in a process known as penetration testing. However, building security into a product is the responsibility of the whole team, not just the security experts who are often only involved in the final phases of testing. Through the development of a black box security test plan, software testers who are not necessarily security experts can work proactively with the developers early in the software development lifecycle. The team can then establish how security will be evaluated such that the product can be designed and implemented with security in mind. The goal of this research is to improve the security of applications by introducing a methodology that uses the software system's requirements specification statements to systematically generate a set of black box security tests. We used our methodology on a public requirements specification to create 137 tests and executed these tests on five electronic health record systems. The tests revealed 253 successful attacks on these five systems, which are used to manage the clinical records for approximately 59 million patients, collectively. If non-expert testers can surface the more common vulnerabilities present in an application, security experts can attempt more devious, novel attacks.
Article Search
Mining Software Repositories Using Topic Models
Stephen W. Thomas
(Queen's University, Canada)
Software repositories, such as source code, email archives, and bug databases, contain unstructured and unlabeled text that is difficult to analyze with traditional techniques. We propose the use of statistical topic models to automatically discover structure in these textual repositories. This discovered structure has the potential to be used in software engineering tasks, such as bug prediction and traceability link recovery. Our research goal is to address the challenges of applying topic models to software repositories.
Article Search

ACM Student Research Competition

Test Blueprint: An Effective Visual Support for Test Coverage
Vanessa Peña Araya
(University of Chile, Chile)
Test coverage is about assessing the relevance of unit tests against the tested application. It is widely acknowledged that software with “good” test coverage is more robust against unanticipated execution, thus lowering the maintenance cost. However, ensuring good-quality coverage is challenging, especially since most available test coverage tools do not discriminate software components that require “strong” coverage from components that require less attention from the unit tests. Hapao is an innovative test coverage tool, implemented in the Pharo Smalltalk programming language. It employs an effective and intuitive graphical representation to visually assess the quality of the coverage. A combination of appropriate metrics and relations visually shapes methods and classes, indicating to the programmer whether more testing effort is required. This paper presents the essence of Hapao using a real-world case study.
Article Search
A Formal Approach to Software Synthesis for Architectural Platforms
Hamid Bagheri
(University of Virginia, USA)
Software-intensive systems today often rely on middleware platforms as major building blocks. As such, the architectural choices of such systems are being driven to a significant extent by such platforms. However, the diversity and rapid evolution of these platforms lead to architectural choices quickly becoming obsolete. Yet architectural choices are among the most difficult to change. This paper presents a novel and formal approach to end-to-end transformation of application models into architecturally correct code, averting the problem of mapping application models to such architectural platforms.
Article Search
Detecting Cross-browser Issues in Web Applications
Shauvik Roy Choudhary
(Georgia Institute of Technology, USA)
Cross-browser issues are prevalent in web applications. However, existing tools require considerable manual effort from developers to detect such issues. Our technique and prototype tool, WEBDIFF, detects such issues automatically and reports them to the developer. Along with each issue reported, the tool also provides details about the affected HTML element, thereby helping the developer fix the issue. WEBDIFF is the first technique to apply concepts from computer vision and graph theory to identify cross-browser issues in web applications. Our results show that WEBDIFF is practical and can find issues in real-world web applications.
Article Search
Measuring Subversions: Security and Legal Risk in Reused Software Artifacts
Julius Davies
(University of Victoria, Canada)
A software system often includes a set of library dependencies and other software artifacts necessary for the system's proper operation. However, long-term maintenance problems related to reused software can gradually emerge over the lifetime of the deployed system. In our exploratory study we propose a manual technique to locate documented security and legal problems in a set of reused software artifacts. We evaluate our technique with a case study of 81 Java libraries found in a proprietary e-commerce web application. Using our approach we discovered both a potential legal problem with one library, and a second library that was affected by a known security vulnerability. These results support our larger thesis: software reuse entails long-term maintenance costs. In future work we strive to develop automated techniques by which developers, managers, and other software stakeholders can measure, address, and minimize these costs over the lifetimes of their software assets.
Article Search
Building Domain Specific Software Architectures from Software Architectural Design Patterns
Julie Street Fant
(George Mason University, USA)
Software design patterns are best practice solutions to common software problems. However, applying design patterns in practice can be difficult since design pattern descriptions are general and can be applied at multiple levels of abstraction. In order to address the aforementioned issue, this research focuses on creating a systematic approach to designing domain specific distributed, real-time and embedded (DRE) software from software architectural design patterns. To address variability across a DRE domain, software product line concepts are used to categorize and organize the features and design patterns. The software architectures produced are also validated through design time simulation. This research is applied and validated using the space flight software (FSW) domain.
Article Search
Using Impact Analysis in Industry
Robert Goeritzer
(University of Klagenfurt, Austria)
Software is subjected to continuous change, and with increasing size and complexity performing changes becomes more critical. Impact analysis assists in estimating the consequences of a change, and is an important research topic. Nevertheless, until now researchers have not applied and evaluated those techniques in industry. This paper contributes an approach suitable for an industrial setting, and an evaluation of its application in a large software system.
Article Search
A Decision Support System for the Classification of Software Coding Faults: A Research Abstract
Billy Kidwell
(University of Kentucky, USA)
A decision support system for fault classification is presented. The fault classification scheme is developed to provide guidance in process improvement and fault-based testing. The research integrates results in fault classification, source code analysis, and fault-based testing research. Initial results indicate that existing change type and fault classification schemes are insufficient for this purpose. Development of sufficient schemes and their evaluation are discussed.
Article Search
Specification Mining in Concurrent and Distributed Systems
Sandeep Kumar
(National University of Singapore, Singapore)
Dynamic specification mining involves discovering software behavior from traces for the purpose of program comprehension and bug detection. However, in concurrent/distributed programs, the inherent partial order relationships among events occurring across processes pose a big challenge to specification mining. A framework for mining partial orders that takes in a set of concurrent program traces, and produces a message sequence graph (MSG) is proposed. Mining an MSG allows one to understand concurrent behaviors since the nodes of the MSG depict important “phases” or “interaction snippets” involving several concurrently executing processes. Experiments on mining behaviors of fairly complex distributed systems show that the proposed miner can produce the corresponding MSGs with both high precision and high recall.
Article Search
A Case Study on Refactoring in Haskell Programs
Da Young Lee
(North Carolina State University, USA)
Programmers use refactoring to improve the design of existing code without changing external behavior. Current research does not empirically answer the question, “Why and how do programmers refactor functional programs?” In order to answer the question, I conducted a case study on three open source projects in Haskell. I investigated changed portions of code in 55 successive versions of a given project to classify how programmers refactor. I found a total of 143 refactorings classified by 12 refactoring types. I also found 5 new refactoring types and propose two new refactoring tools that would be useful for developers.
Article Search
Build System Maintenance
Shane McIntosh
(Queen's University, Canada)
The build system, i.e., the infrastructure that converts source code into deliverables, plays a critical role in the development of a software project. For example, developers rely upon the build system to test and run their source code changes. Without a working build system, development progress grinds to a halt, as the source code is rendered useless. Based on experiences reported by developers, we conjecture that build maintenance for large software systems is considerable, yet this maintenance is not well understood. A firm understanding of build maintenance is essential for project managers to allocate personnel and resources to build maintenance tasks effectively, and reduce the build maintenance overhead on regular development tasks, such as fixing defects and adding new features. In our work, we empirically study build maintenance in one proprietary and nine open source projects of different sizes and domains. Our case studies thus far show that: (1) similar to Lehman's first law of software evolution, build system specifications tend to grow unless effort is invested into restructuring them, (2) the build system accounts for up to 31% of the code files in a project, and (3) up to 27% of development tasks that change the source code also require build maintenance. Currently, we are working on identifying concrete measures that projects can take to reduce the build maintenance overhead.
Article Search
Finding Relevant Functions in Millions of Lines of Code
Collin McMillan
(College of William and Mary, USA)
Source code search engines locate and display fragments of code relevant to user queries. These fragments are often isolated and detached from one another. Programmers need to see how source code interacts in order to understand the concepts implemented in that code, however. In this paper, we present Portfolio, a source code search engine that retrieves and visualizes relevant functions as chains of function invocations. We evaluated Portfolio against Google Code Search and Koders in a case study with 49 professional programmers. Portfolio outperforms both of these engines in terms of relevance and visualization of the returned results.
Article Search
Requirements Tracing: Discovering Related Documents through Artificial Pheromones and Term Proximity
Hakim Sultanov
(University of Kentucky, USA)
Requirements traceability is an important undertaking as part of ensuring the quality of software in the early stages of the Software Development Life Cycle. This paper demonstrates the applicability of swarm intelligence to the requirements tracing problem using pheromone communication and a focus on the common text around linking terms or words in order to find related textual documents. Through the actions and contributions of each individual member of the swarm, the swarm as a whole exposes relationships between documents in a collective manner. Two techniques have been examined, simple swarm and pheromone swarm. The techniques have been validated using two real-world datasets from two problem domains.
Article Search
An End-User Demonstration Approach to Support Aspect-Oriented Modeling
Yu Sun
(University of Alabama at Birmingham, USA)
Aspect-oriented modeling (AOM) is a technique to separate concerns that crosscut the modularity boundaries of a modeling hierarchy. AOM is traditionally supported by manual editing or writing model transformation rules that refine a base model. However, traditional model weaving approaches present challenges to those who are unfamiliar with a model transformation language or metamodel definitions. This poster describes an approach to weave aspect models by recording and analyzing demonstrated operations by end-users.
Article Search
Problem Identification for Structural Test Generation: First Step Towards Cooperative Developer Testing
Xusheng Xiao
(North Carolina State University, USA)
Achieving high structural coverage is an important goal of software testing. Instead of manually producing high-covering test inputs that achieve high structural coverage, testers or developers can employ tools built based on automated test-generation approaches to automatically generate such test inputs. Although these tools can easily generate high-covering test inputs for simple programs, when applied on complex programs in practice, these tools face various problems, such as the problems of dealing with method calls to external libraries, generating method-call sequences to produce desired object states, and exceeding defined boundaries of resources due to loops. Since these tools currently are not powerful enough to deal with these various problems in testing complex programs, we propose cooperative developer testing, where developers provide guidance to help tools achieve higher structural coverage. To reduce the efforts of developers in providing guidance to the tools, we propose a novel approach, called Covana. Covana precisely identifies and reports problems that prevent the tools from achieving high structural coverage primarily by determining whether branch statements containing not-covered branches have data dependencies on problem candidates.
Article Search
Palus: A Hybrid Automated Test Generation Tool for Java
Sai Zhang
(University of Washington, USA)
In object-oriented programs, a unit test often consists of a sequence of method calls that create and mutate objects. It is challenging to automatically generate sequences that are legal and behaviorally-diverse, that is, reaching as many different program states as possible. This paper proposes a combined static and dynamic test generation approach to address these problems, for code without a formal specification. Our approach first uses dynamic analysis to infer a call sequence model from a sample execution, then uses static analysis to identify method dependence relations based on the fields they may read or write. Finally, both the dynamically-inferred model (which tends to be accurate but incomplete) and the statically-identified dependence information (which tends to be conservative) guide a random test generator to create legal and behaviorally-diverse tests. Our Palus tool implements this approach. We compared it with a pure random approach, a dynamic-random approach (without a static phase), and a static-random approach (without a dynamic phase) on six popular open-source Java programs. Tests generated by Palus achieved 35% higher structural coverage on average. Palus is also internally used in Google, and has found 22 new bugs in four well-tested products.
Article Search
Scalable Automatic Linearizability Checking
Shao Jie Zhang
(National University of Singapore, Singapore)
Concurrent data structures are widely used but notoriously difficult to implement correctly. Linearizability is a main correctness criterion for concurrent data structure algorithms. It guarantees that a concurrent data structure appears to its users as a sequential one. Unfortunately, linearizability is challenging to verify, since a subtle bug may manifest in only a small portion of numerous thread interleavings. Model checking is therefore a primary candidate. However, current approaches to model checking linearizability suffer from a severe state-space explosion problem and are thus restricted to handling few threads and/or operations. This paper describes a scalable, fully automatic and general linearizability checking method based on [16] that incorporates symmetry and partial order reduction techniques. Our insights emerged from the observation that the similarity of threads using concurrent data structures causes model checking to generate large redundant equivalent portions of the state space, and the loose coupling of threads causes it to explore many unnecessary transition execution orders. We prove that symmetry reduction and partial order reduction can be combined in our approach and integrate them into the model checking algorithm. We demonstrate its efficiency on a number of real-world concurrent data structure algorithms.
Article Search

Workshop Summaries

Workshop on Cooperative and Human Aspects of Software Engineering (CHASE 2011)
Marcelo Cataldo, Cleidson de Souza, Yvonne Dittrich, Rashina Hoda, and Helen Sharp
(Robert Bosch Research, USA; IBM Research, Brazil; IT University of Copenhagen, Denmark; Victoria University of Wellington, New Zealand; The Open University, UK)
Software is created by people, for people, working in varied environments and under various conditions. Thus, understanding the cooperative and human aspects of software development is crucial to comprehending how methods and tools are used, and thereby improving the creation and maintenance of software. Over the years, both researchers and practitioners have recognized the need to study and understand these aspects. Despite this recognition, researchers in cooperative and human aspects have no clear place to meet and are dispersed across different research conferences and areas. The goal of this workshop is to provide a forum for discussing high-quality research on human and cooperative aspects of software engineering. We aim to provide both a meeting place for the growing community and the possibility for researchers interested in joining the field to present their work in progress and get an overview of the field.
Article Search
Fourth International Workshop on Multicore Software Engineering (IWMSE 2011)
Victor Pankratius and Michael Philippsen
(Karlsruhe Institute of Technology, Germany; University of Erlangen-Nuremberg, Germany)
This paper summarizes the highlights of the Fourth International Workshop on Multicore Software Engineering (IWMSE 2011). The workshop addresses the software engineering and parallel programming challenges that come with the wide availability of multicore processors. Researchers and practitioners have come together to present and discuss new work on programming techniques, refactoring, performance engineering, and applications.
Article Search
Workshop on Flexible Modeling Tools (FlexiTools 2011)
Harold Ossher, André van der Hoek, Margaret-Anne Storey, John Grundy, Rachel Bellamy, and Marian Petre
(IBM Research Watson, USA; UC Irvine, USA; University of Victoria, Canada; Swinburne University of Technology at Hawthorn, Australia; The Open University, UK)
Modeling tools are often not used for tasks during the software lifecycle for which they should be more helpful; instead, free-form approaches, such as office tools and whiteboards, are frequently used. Prior workshops explored why this is the case and what might be done about it. The goal of this workshop is to continue those discussions and also to form an initial set of challenge problems and research challenges that researchers and developers of flexible modeling tools should address.
Article Search
Workshop on Games and Software Engineering (GAS 2011)
Jim Whitehead and Chris Lewis
(UC Santa Cruz, USA)
At the core of video games are complex interactions leading to emergent behaviors. This complexity creates difficulties architecting components, predicting their behaviors and testing the results. The Workshop on Games and Software Engineering (GAS 2011) provides an opportunity for software engineering researchers and practitioners who work with games to come together and discuss how these two areas can be intertwined.
Article Search
Workshop on Software Engineering for Cloud Computing (SECLOUD 2011)
Chris A. Mattmann, Nenad Medvidovic, T. S. Mohan, and Owen O'Malley
(Jet Propulsion Laboratory, USA; University of Southern California, USA; Infosys Technologies, India; Yahoo Inc., USA)
Cloud computing is emerging not simply as a technology platform but as a software engineering paradigm for the future. Hordes of cloud computing technologies, techniques, and integration approaches are being widely adopted, taught at the university level, and expected as key skills in the job market. The principles and practices of the software engineering and software architecture community can serve to help guide this emerging domain. The fundamental goal of the ICSE 2011 Software Engineering for Cloud Workshop is to bring together the diverse communities of cloud computing and of software engineering and architecture research with the hope of sharing and disseminating key tribal knowledge between these rich areas. We expect as the workshop output a set of identified key software engineering challenges and important issues in the domain of cloud computing, specifically focused on how software engineering practice and research can play a role in shaping the next five years of research and practice for clouds. Furthermore, we expect to share “war stories”, best practices, and lessons learned between leaders in the software engineering and cloud computing communities.
Article Search
Second International Workshop on Software Engineering for Sensor Network Applications (SESENA 2011)
Kurt Geihs, Luca Mottola, Gian Pietro Picco, and Kay Römer
(University of Kassel, Germany; Swedish Institute of Computer Science, Sweden; University of Trento, Italy; University of Lübeck, Germany)
We describe the motivation, focus, and organization of SESENA11, the 2nd International Workshop on Software Engineering for Sensor Network Applications. The workshop took place under the umbrella of ICSE 2011, the 33rd ACM/IEEE International Conference on Software Engineering, in Honolulu, Hawaii, on May 22, 2011. The aim was to attract researchers belonging to the Software Engineering (SE) and Wireless Sensor Network (WSN) communities, not only to exchange their recent research results on the topic, but also to stimulate discussion on the core open problems and define a shared research agenda. More information can be found at the workshop website: http://www.sesena.info.
Article Search
Seventh International Workshop on Software Engineering for Secure Systems (SESS 2011)
Seok-Won Lee, Mattia Monga, and Jan Jürjens
(University of Nebraska-Lincoln, USA; Università degli Studi di Milano, Italy; TU Dortmund, Germany)
The 7th edition of the SESS workshop aims at providing a venue for software engineers and security researchers to exchange ideas and techniques. In fact, software is at the core of most business transactions, and its smart integration in an industrial setting may be the competitive advantage even when the core competence is outside the ICT field. As a result, the revenues of a firm depend directly on several complex software-based systems. Thus, stakeholders and users should be able to trust these systems to provide data and computations with a degree of confidentiality, integrity, and availability compatible with their needs. Moreover, the pervasiveness of software products in the creation of critical infrastructures has raised the value of trustworthiness, and new efforts should be dedicated to achieving it. However, nowadays almost every application has some kind of security requirement, even if its use is not considered critical.
Article Search
Fourth Workshop on Refactoring Tools (WRT 2011)
Danny Dig and Don Batory
(University of Illinois at Urbana-Champaign, USA; University of Texas at Austin, USA)
Refactoring is the process of applying behavior-preserving transformations to a program with the objective of improving the program’s design. A specific refactoring is identified by a name (e.g., Extract Method), a set of preconditions, and a set of transformations that need to be performed. Tool support for refactoring is essential because checking the preconditions of refactoring often requires nontrivial program analysis, and applying transformations may affect many locations throughout a program. In recent years, the emergence of light-weight programming methodologies such as Extreme Programming has generated a great amount of interest in refactoring, and refactoring support has become a required feature in today’s IDEs. This workshop is a continuation of a series of previous workshops (ECOOP 2007, OOPSLA 2008 and 2009 – see http://refactoring.info/WRT) where researchers and developers of refactoring tools can meet, discuss recent ideas and work, and view tool demonstrations.
Article Search
Second International Workshop on Product Line Approaches in Software Engineering (PLEASE 2011)
Julia Rubin, Goetz Botterweck, Andreas Pleuss, and David M. Weiss
(IBM Research Haifa, Israel; Lero, Ireland; University of Limerick, Ireland; Iowa State University, USA)
The PLEASE workshop series focuses on exploring the present and future of Software Product Line Engineering techniques. The main goal of PLEASE 2011 is to bring together industrial practitioners and software product line researchers in order to couple real-life industrial problems with concrete solutions developed by the community. We plan an interactive workshop, where participants can apply their expertise to current industrial problems, while those who face challenges in the area of product line engineering can benefit from the suggested solutions. We also intend to establish ongoing, long-lasting relationships between industrial and research participants, to the mutual benefit of both. The second edition of PLEASE is held in conjunction with the 33rd International Conference on Software Engineering (May 21-28, 2011, Honolulu, Hawaii).
Article Search
Third International Workshop on Software Engineering in Healthcare (SEHC 2011)
Eleni Stroulia and Kevin Sullivan
(University of Alberta, Canada; University of Virginia, USA)
This paper briefly describes the 3rd International Workshop on Software Engineering in Healthcare, held at the 2011 International Conference on Software Engineering.
Article Search
Collaborative Teaching of Globally Distributed Software Development: Community Building Workshop (CTGDSD 2011)
Stuart Faulk, Michal Young, David M. Weiss, and Lian Yu
(University of Oregon, USA; Iowa State University, USA; Peking University, China)
Software engineering project courses where student teams are geographically distributed can effectively simulate the problems of globally distributed software development (DSD). However, this pedagogical model has proven difficult to adopt or sustain. It requires significant pedagogical resources and collaboration infrastructure. Institutionalizing such courses also requires compatible and reliable teaching partners. The purpose of this workshop is to foster a community of international faculty and institutions committed to developing, supporting, and teaching DSD. Foundational materials presented will include pedagogical materials and infrastructure developed and used in teaching DSD courses along with results and lessons learned. Long-range goals include: lowering adoption barriers by providing common pedagogical materials, validated collaboration infrastructure, and a pool of potential teaching partners from around the globe.
Article Search
Fifth International Workshop on Software Clones (IWSC 2011)
James R. Cordy, Katsuro Inoue, Stanislaw Jarzabek, and Rainer Koschke
(Queen's University, Canada; Osaka University, Japan; National University of Singapore, Singapore; University of Bremen, Germany)
Software clones are identical or similar pieces of code, design or other artifacts. Clones are known to be closely related to various issues in software engineering, such as software quality, complexity, architecture, refactoring, evolution, licensing, plagiarism, and so on. Various characteristics of software systems can be uncovered through clone analysis, and system restructuring can be performed by merging clones. The goals of this workshop are to bring together researchers and practitioners from around the world to evaluate the current state of research and applications, discuss common problems, discover new opportunities for collaboration, exchange ideas, envision new areas of research and applications, and explore synergies with similarity analysis in other areas and disciplines.
Article Search
Second International Workshop on Managing Technical Debt (MTD 2011)
Ipek Ozkaya, Philippe Kruchten, Robert L. Nord, and Nanette Brown
(SEI/CMU, USA; University of British Columbia, Canada)
The technical debt metaphor is gaining significant traction in the software development community as a way to understand and communicate issues of intrinsic quality, value, and cost. The idea is that developers sometimes accept compromises in a system in one dimension (e.g., modularity) to meet an urgent demand in some other dimension (e.g., a deadline), and that such compromises incur a “debt” on which “interest” has to be paid and which should be repaid at some point for the long-term health of the project. Little is known about technical debt, beyond feelings and opinions. The software engineering research community has an opportunity to study this phenomenon and improve the way it is handled. We can offer software engineers a foundation for managing such trade-offs based on models of their economic impacts. The goal of this second workshop is to discuss managing technical debt as a part of the research agenda for the software engineering field.
Article Search
Sixth International Workshop on Traceability in Emerging Forms of Software Engineering (TEFSE 2011)
Denys Poshyvanyk, Massimiliano Di Penta, and Huzefa Kagdi
(College of William and Mary, USA; University of Sannio, Italy; Winston-Salem State University, USA)
The Sixth International Workshop on Traceability in Emerging Forms of Software Engineering (TEFSE 2011) will bring together researchers and practitioners to examine the challenges of recovering and maintaining traceability for the myriad forms of software engineering artifacts, ranging from user needs to models to source code. The objective of the 6th edition of TEFSE is to build on the work the traceability research community has completed in identifying the open traceability challenges. In particular, it is intended to be a working event focused on discussing the main problems related to software artifact traceability and proposing possible solutions to those problems. Moreover, the workshop also aims at identifying key issues concerning the importance of maintaining traceability information during software development, to further improve the cooperation between academia and industry and to facilitate technology transfer.
Article Search
Sixth International Workshop on Automation of Software Test (AST 2011)
Howard Foster, Antonia Bertolino, and J. Jenny Li
(City University London, UK; ISTI-CNR, Italy; Avaya Research Labs, USA)
The Sixth International Workshop on Automation of Software Test (AST 2011) is associated with the 33rd International Conference on Software Engineering (ICSE 2011). This edition of AST focuses on the special theme of Software Design and the Automation of Software Test, and authors were encouraged to submit work in this area. The workshop spans two days with presentations of regular research papers, industrial case studies, and experience reports. The workshop also aims to have extensive discussions on collaborative solutions in the form of charrette sessions. This paper summarizes the organization of the workshop, the special theme, and the sessions.
Article Search
Third International Workshop on Principles of Engineering Service-Oriented Systems (PESOS 2011)
Manuel Carro, Dimka Karastoyanova, Grace A. Lewis, and Anna Liu
(Universidad Politécnica de Madrid, Spain; University of Stuttgart, Germany; CMU, USA; NICTA, Australia)
Service-oriented systems have attracted great interest from industry and research communities worldwide. Service integrators, developers, and providers are collaborating to address the various challenges in the field. PESOS 2011 is a forum for all these communities to present and discuss a wide range of topics related to service-oriented systems. The goal of PESOS is to bring together researchers from academia and industry, as well as practitioners working in the areas of software engineering and service-oriented systems, to discuss research challenges, recent developments, novel applications, as well as methods, techniques, experiences, and tools to support the engineering of service-oriented systems.
Article Search
Workshop on SHAring and Reusing architectural Knowledge (SHARK 2011)
Paris Avgeriou, Patricia Lago, and Philippe Kruchten
(University of Groningen, Netherlands; VU University Amsterdam, Netherlands; University of British Columbia, Canada)
Architectural Knowledge (AK) is defined as the integrated representation of the software architecture of a software-intensive system or family of systems, along with architectural decisions and their rationale, external influences, and the development environment. The SHARK workshop series focuses on current methods, languages, and tools that can be used to extract, represent, share, apply, and reuse AK, and on the experimentation with and/or exploitation thereof. This sixth edition of SHARK will discuss, among other topics, approaches to AK personalization, where knowledge is not codified through templates or annotations but is instead exchanged through discussion among the different stakeholders.
Article Search
Second International Workshop on Web 2.0 for Software Engineering (Web2SE 2011)
Christoph Treude, Margaret-Anne Storey, Arie van Deursen, Andrew Begel, and Sue Black
(University of Victoria, Canada; Delft University of Technology, Netherlands; Microsoft Research, USA; University College London, UK)
Social software is built around an "architecture of participation" where user data is aggregated as a side-effect of using Web 2.0 applications. Web 2.0 implies that processes and tools are socially open, and that content can be used in several different contexts. Web 2.0 tools and technologies support interactive information sharing, data interoperability, and user-centered design. For instance, wikis, blogs, tags, and feeds help us organize, manage, and categorize content in an informal and collaborative way. Some of these technologies have made their way into collaborative software development processes and development platforms. These processes and environments are just scratching the surface of what can be done by incorporating Web 2.0 approaches and technologies into collaborative software development. Web 2.0 opens up new opportunities for developers to form teams and collaborate, but it also comes with challenges for developers and researchers. Web2SE aims to improve our understanding of how Web 2.0, manifested in technologies such as mashups or dashboards, can change the culture of collaborative software development.
Article Search
Workshop on Emerging Trends in Software Metrics (WETSoM 2011)
Giulio Concas, Massimiliano Di Penta, Ewan Tempero, and Hongyu Zhang
(University of Cagliari, Italy; University of Sannio, Italy; University of Auckland, New Zealand; Tsinghua University, China)
The Workshop on Emerging Trends in Software Metrics aims at bringing together researchers and practitioners to discuss the progress of software metrics. The motivation for this workshop is the low impact that software metrics have on current software development. The goals of this workshop are to critically examine the evidence for the effectiveness of existing metrics and to identify new directions for the development of software metrics.
Article Search
Fourth International Workshop on Software Engineering for Computational Science and Engineering (SE-CSE 2011)
Jeffrey C. Carver, Roscoe Bartlett, Ian Gorton, Lorin Hochstein, Diane Kelly, and Judith Segal
(University of Alabama, USA; Sandia National Laboratories, USA; Pacific Northwest National Laboratory, USA; USC-ISI, USA; Royal Military College, Canada; The Open University, UK)
Computational Science and Engineering (CSE) software supports a wide variety of domains including nuclear physics, crash simulation, satellite data processing, fluid dynamics, climate modeling, bioinformatics, and vehicle development. The increase in the importance of CSE software motivates the need to identify and understand appropriate software engineering (SE) practices for CSE. Because of the uniqueness of CSE software development, existing SE tools and techniques developed for the business/IT community are often not efficient or effective. Appropriate SE solutions must account for the salient characteristics of the CSE development environment. This situation creates an opportunity for members of the SE community to interact with members of the CSE community to address this need. This workshop facilitates that collaboration by bringing together members of the SE community and the CSE community to share perspectives and present findings from research and practice relevant to CSE software. A significant portion of the workshop is devoted to focused interaction among the participants with the goal of generating a research agenda to improve tools, techniques, and experimental methods for studying CSE software engineering.
Article Search
Third International Workshop on Search-Driven Development: Users, Infrastructure, Tools, and Evaluation (SUITE 2011)
Sushil Bajracharya, Adrian Kuhn, and Yunwen Ye
(Black Duck Software, USA; University of Bern, Switzerland; Software Research Associates Inc., Japan)
SUITE is a workshop that focuses on exploring the notion of search as a fundamental activity during software development. The first two editions of SUITE, held at ICSE 2009 and ICSE 2010, focused on building a research community of researchers and practitioners interested in the areas that SUITE addresses. While this third workshop continues that community-building effort, it focuses more directly on some of the urgent issues identified by the previous two workshops, encouraging researchers to contribute to and take advantage of the common datasets that we have started assembling for SUITE research.
Article Search
First Workshop on Developing Tools as Plug-ins (TOPI 2011)
Judith Bishop, David Notkin, and Karin K. Breitman
(Microsoft Research, USA; University of Washington, USA; PUC-Rio, Brazil)
Our knowledge of how to solve software engineering problems is increasingly being encapsulated in tools. These tools are at their strongest when they operate in a pre-existing development environment that provides integration with existing elements such as compilers, debuggers, profilers, and visualizers. The first Workshop on Developing Tools as Plug-ins is a new forum in which to address research, ongoing work, ideas, concepts, and critical questions related to the engineering of software tools and plug-ins.
Article Search

SCORE Student Contest

SCORE 2011, the Second Student Contest on Software Engineering
Matteo Rossi and Michal Young
(Politecnico di Milano, Italy; University of Oregon, USA)
SCORE 2011 is the second iteration of a team-oriented software engineering contest that attracts student teams from around the world, culminating in a final round of competition and awards at ICSE. Each team has responded to one of the project proposals provided by the SCORE program committee, usually in the context of a software engineering project course. In this second iteration we have built on the success of SCORE 2009, greatly expanding the number and geographical distribution of student teams, including many of very high quality.
Article Search
