ICSE 2011 – Author Index |
Acevedo, Gabriel |
ICSE '11: "Transformation for Class Immutability ..."
Transformation for Class Immutability
Fredrik Kjolstad, Danny Dig, Gabriel Acevedo, and Marc Snir (University of Illinois at Urbana-Champaign, USA) It is common for object-oriented programs to have both mutable and immutable classes. Immutable classes simplify programming because the programmer does not have to reason about side-effects. Sometimes programmers write immutable classes from scratch; other times they transform mutable classes into immutable ones. To transform a mutable class, programmers must find all methods that mutate its transitive state and all objects that can enter or escape the state of the class. The analyses are non-trivial and the rewriting is tedious. Fortunately, this can be automated. We present an algorithm and a tool, Immutator, that enables the programmer to safely transform a mutable class into an immutable class. Two case studies and one controlled experiment show that Immutator is useful. It (i) reduces the burden of making classes immutable, (ii) is fast enough to be used interactively, and (iii) is much safer than manual transformations. @InProceedings{ICSE11p61, author = {Fredrik Kjolstad and Danny Dig and Gabriel Acevedo and Marc Snir}, title = {Transformation for Class Immutability}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {61--70}, doi = {}, year = {2011}, } |
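A minimal Java sketch, for illustration only (not Immutator's output), of the kind of rewrite described above: the mutator becomes a factory method returning a fresh instance, and objects that could enter or escape the state are defensively copied. All names are hypothetical.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Immutable counterpart of a hypothetical mutable history class.
    final class ImmutableHistory {
        private final List<Integer> values;

        ImmutableHistory(List<Integer> values) {
            // Copy-in: the caller keeps no alias into our internal state.
            this.values = Collections.unmodifiableList(new ArrayList<>(values));
        }

        // The former mutator add(int) becomes a "functional" variant.
        ImmutableHistory withValue(int v) {
            List<Integer> copy = new ArrayList<>(values);
            copy.add(v);
            return new ImmutableHistory(copy);
        }

        List<Integer> values() { return values; } // safe: unmodifiable view of a private copy
    }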
|
Acharya, Mithun |
ICSE '11-SEIP: "Practical Change Impact Analysis ..."
Practical Change Impact Analysis Based on Static Program Slicing for Industrial Software Systems
Mithun Acharya and Brian Robinson (ABB Corporate Research, USA) Change impact analysis, i.e., knowing the potential consequences of a software change, is critical for the risk analysis, developer effort estimation, and regression testing of evolving software. Static program slicing is an attractive option for enabling routine change impact analysis for newly committed changesets during the daily software build. For small programs with a few thousand lines of code, static program slicing scales well and can assist precise change impact analysis. However, as we demonstrate in this paper, static program slicing faces unique challenges when applied routinely on large and evolving industrial software systems. Despite recent advances in static program slicing, to our knowledge, there have been no studies of static change impact analysis applied on large and evolving industrial software systems. In this paper, we share our experiences in designing a static change impact analysis framework for such software systems. We have implemented our framework as a tool called Imp and have applied Imp on an industrial codebase of over a million lines of C/C++ code, with promising empirical results. @InProceedings{ICSE11p746, author = {Mithun Acharya and Brian Robinson}, title = {Practical Change Impact Analysis Based on Static Program Slicing for Industrial Software Systems}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {746--755}, doi = {}, year = {2011}, } |
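For flavour, a toy version of the underlying idea (not the Imp tool itself): given a dependence graph whose edges point from a statement to its dependents, the impact set of a changeset is everything transitively reachable from it. The graph encoding below is a hypothetical simplification.

    import java.util.*;

    // Toy forward "slice": the impact set of a changeset is the set of
    // statements transitively reachable along dependence edges.
    class ImpactAnalysis {
        static Set<String> impactSet(Map<String, List<String>> dependents, Set<String> changed) {
            Set<String> impacted = new HashSet<>(changed);
            Deque<String> work = new ArrayDeque<>(changed);
            while (!work.isEmpty()) {
                for (String d : dependents.getOrDefault(work.pop(), List.of())) {
                    if (impacted.add(d)) work.push(d); // visit each statement once
                }
            }
            return impacted;
        }
    }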
|
Adams, Bram |
ICSE '11: "An Empirical Study of Build ..."
An Empirical Study of Build Maintenance Effort
Shane McIntosh, Bram Adams, Thanh H. D. Nguyen, Yasutaka Kamei, and Ahmed E. Hassan (Queen's University, Canada) The build system of a software project is responsible for transforming source code and other development artifacts into executable programs and deliverables. Similar to source code, build system specifications require maintenance to cope with newly implemented features, changes to imported Application Program Interfaces (APIs), and source code restructuring. In this paper, we mine the version histories of one proprietary and nine open source projects of different sizes and domains to analyze the overhead that build maintenance imposes on developers. We split our analysis into two dimensions: (1) Build Coupling, i.e., how frequently source code changes require build changes, and (2) Build Ownership, i.e., the proportion of developers responsible for build maintenance. Our results indicate that, despite the difference in scale, the build system churn rate is comparable to that of the source code, and build changes induce more relative churn on the build system than source code changes induce on the source code. Furthermore, build maintenance yields up to a 27% overhead on source code development and a 44% overhead on test development. Up to 79% of source code developers and 89% of test code developers are significantly impacted by build maintenance, yet investment in build experts can reduce the proportion of impacted developers to 22% of source code developers and 24% of test code developers. @InProceedings{ICSE11p141, author = {Shane McIntosh and Bram Adams and Thanh H. D. Nguyen and Yasutaka Kamei and Ahmed E. Hassan}, title = {An Empirical Study of Build Maintenance Effort}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {141--150}, doi = {}, year = {2011}, } |
|
Adler, Yoram |
ICSE '11-SEIP: "Code Coverage Analysis in ..."
Code Coverage Analysis in Practice for Large Systems
Yoram Adler, Noam Behar, Orna Raz, Onn Shehory, Nadav Steindler, Shmuel Ur, and Aviad Zlotnick (IBM Research Haifa, Israel; Microsoft, Israel; Shmuel Ur Innovation, Israel) Large systems generate immense quantities of code coverage data. A user faced with the task of analyzing this data, for example, to decide on test areas to improve, faces a ‘needle in a haystack’ problem. In earlier studies we introduced substring hole analysis, a technique for presenting large quantities of coverage data in a succinct way. Here we demonstrate the successful use of substring hole analysis on large-scale data from industrial software systems. To this end, we augment substring hole analysis by introducing a workflow and tool support for practical code coverage analysis. We conduct real data experiments indicating that augmented substring hole analysis enables code coverage analysis where it was previously impractical, correctly identifies functionality that is missing from existing tests, and can increase the probability of finding bugs. These facilitate cost-effective code coverage analysis. @InProceedings{ICSE11p736, author = {Yoram Adler and Noam Behar and Orna Raz and Onn Shehory and Nadav Steindler and Shmuel Ur and Aviad Zlotnick}, title = {Code Coverage Analysis in Practice for Large Systems}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {736--745}, doi = {}, year = {2011}, } |
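A rough sketch of the intuition (assumed here to mean grouping uncovered entities by a shared name substring so thousands of holes collapse into a few named areas; the published technique is more elaborate):

    import java.util.*;
    import java.util.stream.Collectors;

    // Toy "substring hole" report over uncovered function names.
    class SubstringHoles {
        static Map<String, List<String>> holes(List<String> uncovered) {
            // Crude stand-in: the first underscore-delimited token is the shared substring.
            return uncovered.stream().collect(Collectors.groupingBy(name -> name.split("_")[0]));
        }

        public static void main(String[] args) {
            System.out.println(holes(List.of(
                "ipv6_route_add", "ipv6_route_del", "acl_check", "acl_reload")));
            // e.g. {acl=[acl_check, acl_reload], ipv6=[ipv6_route_add, ipv6_route_del]}
        }
    }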
|
Aldrich, Jonathan |
ICSE '11-NIER: "Permission-Based Programming ..."
Permission-Based Programming Languages (NIER Track)
Jonathan Aldrich, Ronald Garcia, Mark Hahnenberg, Manuel Mohr, Karl Naden, Darpan Saini, and Roger Wolff (CMU, USA; Karlsruhe Institute of Technology, Germany; University of Chile, Chile) Linear permissions have been proposed as a lightweight way to specify how an object may be aliased, and whether those aliases allow mutation. Prior work has demonstrated the value of permissions for addressing many software engineering concerns, including information hiding, protocol checking, concurrency, security, and memory management. We propose the concept of a permission-based programming language--a language whose object model, type system, and runtime are all co-designed with permissions in mind. This approach supports an object model in which the structure of an object can change over time, a type system that tracks changing structure in addition to addressing the other concerns above, and a runtime system that can dynamically check permission assertions and leverage permissions to parallelize code. We sketch the design of the permission-based programming language Plaid, and argue that the approach may provide significant software engineering benefits. @InProceedings{ICSE11p828, author = {Jonathan Aldrich and Ronald Garcia and Mark Hahnenberg and Manuel Mohr and Karl Naden and Darpan Saini and Roger Wolff}, title = {Permission-Based Programming Languages (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {828--831}, doi = {}, year = {2011}, } |
|
Alkhalaf, Muath |
ICSE '11: "Patching Vulnerabilities with ..."
Patching Vulnerabilities with Sanitization Synthesis
Fang Yu, Muath Alkhalaf, and Tevfik Bultan (National Chengchi University, Taiwan; UC Santa Barbara, USA) We present automata-based static string analysis techniques that automatically generate sanitization statements for patching vulnerable web applications. Our approach consists of three phases: Given an attack pattern we first conduct a vulnerability analysis to identify if strings that match the attack pattern can reach the security-sensitive functions. Next, we compute vulnerability signatures that characterize all input strings that can exploit the discovered vulnerability. Given the vulnerability signatures, we then construct sanitization statements that 1) check if a given input matches the vulnerability signature and 2) modify the input in a minimal way so that the modified input does not match the vulnerability signature. Our approach is capable of generating relational vulnerability signatures (and corresponding sanitization statements) for vulnerabilities that are due to more than one input. @InProceedings{ICSE11p251, author = {Fang Yu and Muath Alkhalaf and Tevfik Bultan}, title = {Patching Vulnerabilities with Sanitization Synthesis}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {251--260}, doi = {}, year = {2011}, } |
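The general shape of a generated sanitizer, sketched in Java for illustration (the signature pattern below is a hand-picked stand-in, not a computed vulnerability signature):

    import java.util.regex.Pattern;

    // Reject-or-repair sanitizer: pass inputs that cannot match the
    // vulnerability signature; minimally modify the ones that can.
    class Sanitizer {
        private static final Pattern SIGNATURE = Pattern.compile("[<>\"']"); // stand-in signature

        static String sanitize(String input) {
            if (!SIGNATURE.matcher(input).find()) {
                return input; // cannot exploit the vulnerability: pass through unchanged
            }
            // Minimal modification: remove exactly the offending characters.
            return input.replaceAll("[<>\"']", "");
        }
    }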
|
Al-Kofahi, Jafar |
ICSE '11-NIER: "Fuzzy Set-based Automatic ..."
Fuzzy Set-based Automatic Bug Triaging (NIER Track)
Ahmed Tamrawi, Tung Thanh Nguyen, Jafar Al-Kofahi, and Tien N. Nguyen (Iowa State University, USA) Assigning a bug to the right developer is key to reducing the cost, time, and effort of the bug-fixing process. This assignment process is often referred to as bug triaging. In this paper, we propose Bugzie, a novel approach for automatic bug triaging based on fuzzy set-based modeling of the bug-fixing expertise of developers. Bugzie considers a system to have multiple technical aspects, each associated with technical terms. It then uses a fuzzy set to represent the developers who are capable of fixing the bugs relevant to each term. The membership function of a developer in a fuzzy set is calculated from the terms extracted from the bug reports that (s)he has fixed, and the function is updated as newly fixed reports become available. For a new bug report, its terms are extracted and the corresponding fuzzy sets are unioned. Potential fixers are recommended based on their membership scores in the unioned fuzzy set. Our preliminary results show that Bugzie achieves higher accuracy and efficiency than other state-of-the-art approaches. @InProceedings{ICSE11p884, author = {Ahmed Tamrawi and Tung Thanh Nguyen and Jafar Al-Kofahi and Tien N. Nguyen}, title = {Fuzzy Set-based Automatic Bug Triaging (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {884--887}, doi = {}, year = {2011}, } |
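A minimal sketch of the fuzzy-set mechanics described above, assuming membership scores are kept per term and the fuzzy union takes the maximum membership (a common choice; Bugzie's exact formulas may differ):

    import java.util.*;

    // term -> (developer -> membership score in that term's fuzzy set)
    class Triager {
        Map<String, Map<String, Double>> membership = new HashMap<>();

        List<String> recommend(Collection<String> reportTerms, int k) {
            Map<String, Double> union = new HashMap<>();
            for (String term : reportTerms)
                membership.getOrDefault(term, Map.of()).forEach(
                    (dev, score) -> union.merge(dev, score, Math::max)); // fuzzy union via max
            return union.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .limit(k).map(Map.Entry::getKey).toList(); // top-k candidate fixers
        }
    }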
|
Amsel, Nadine |
ICSE '11-NIER: "Toward Sustainable Software ..."
Toward Sustainable Software Engineering (NIER Track)
Nadine Amsel, Zaid Ibrahim, Amir Malik, and Bill Tomlinson (UC Irvine, USA) Current software engineering practices have significant effects on the environment. Examples include e-waste from computers made obsolete due to software upgrades, and changes in the power demands of new versions of software. Sustainable software engineering aims to create reliable, long-lasting software that meets the needs of users while reducing environmental impacts. We conducted three related research efforts to explore this area. First, we investigated the extent to which users thought about the environmental impact of their software usage. Second, we created a tool called GreenTracker, which measures the energy consumption of software in order to raise awareness about the environmental impact of software usage. Finally, we explored the indirect environmental effects of software in order to understand how software affects sustainability beyond its own power consumption. The relationship between environmental sustainability and software engineering is complex; understanding both direct and indirect effects is critical to helping humans live more sustainably. @InProceedings{ICSE11p976, author = {Nadine Amsel and Zaid Ibrahim and Amir Malik and Bill Tomlinson}, title = {Toward Sustainable Software Engineering (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {976--979}, doi = {}, year = {2011}, } |
|
Anderson, Kenneth M. |
ICSE '11-NIER: "Design and Implementation ..."
Design and Implementation of a Data Analytics Infrastructure in Support of Crisis Informatics Research (NIER Track)
Kenneth M. Anderson and Aaron Schram (University of Colorado, USA) Crisis informatics is an emerging research area that studies how information and communication technology (ICT) is used in emergency response. An important branch of this area includes investigations of how members of the public make use of ICT to aid them during mass emergencies. Data collection and analytics during crisis events is a critical prerequisite for performing such research, as the data generated during these events on social media networks are ephemeral and easily lost. We report on the current state of a crisis informatics data analytics infrastructure that we are developing in support of a broader, interdisciplinary research program. We also comment on the role that software engineering research plays in these increasingly common, highly interdisciplinary research efforts. @InProceedings{ICSE11p844, author = {Kenneth M. Anderson and Aaron Schram}, title = {Design and Implementation of a Data Analytics Infrastructure in Support of Crisis Informatics Research (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {844--847}, doi = {}, year = {2011}, } |
|
Androutsopoulos, Kelly |
ICSE '11: "Model Projection: Simplifying ..."
Model Projection: Simplifying Models in Response to Restricting the Environment
Kelly Androutsopoulos, David Binkley, David Clark, Nicolas Gold, Mark Harman, Kevin Lano, and Zheng Li (University College London, UK; Loyola University Maryland, USA; King's College London, UK) This paper introduces Model Projection. Finite state models such as Extended Finite State Machines are being used in an ever increasing number of software engineering activities. Model projection facilitates model development by specializing models for a specific operating environment. A projection is useful in many design-level applications including specification reuse and property verification. The applicability of model projection rests upon three critical concerns: correctness, effectiveness, and efficiency, all of which are addressed in this paper. We introduce four related algorithms for model projection and prove each correct. We also present an empirical study of effectiveness and efficiency using ten models, including widely studied benchmarks as well as industrial models. Results show that a typical projection includes about half of the states and a third of the transitions from the original model. @InProceedings{ICSE11p291, author = {Kelly Androutsopoulos and David Binkley and David Clark and Nicolas Gold and Mark Harman and Kevin Lano and Zheng Li}, title = {Model Projection: Simplifying Models in Response to Restricting the Environment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {291--300}, doi = {}, year = {2011}, } |
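To make "specializing a model for a restricted environment" concrete, a toy Java sketch (not one of the paper's four algorithms): keep only transitions whose triggering events the environment can still produce, then keep only the states that remain reachable.

    import java.util.*;

    class Projector {
        record Transition(String from, String event, String to) {}

        // States that survive projection: reachable from the start state
        // using only transitions the restricted environment can trigger.
        static Set<String> reachable(List<Transition> ts, Set<String> env, String start) {
            Set<String> seen = new HashSet<>(Set.of(start));
            Deque<String> work = new ArrayDeque<>(seen);
            while (!work.isEmpty()) {
                String s = work.pop();
                for (Transition t : ts)
                    if (t.from().equals(s) && env.contains(t.event()) && seen.add(t.to()))
                        work.push(t.to());
            }
            return seen;
        }
    }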
|
Apel, Sven |
ICSE '11: "Feature Cohesion in Software ..."
Feature Cohesion in Software Product Lines: An Exploratory Study
Sven Apel and Dirk Beyer (University of Passau, Germany; Simon Fraser University, Canada) Software product lines are gaining momentum in research and industry. Many product-line approaches use features as a central abstraction mechanism. Feature-oriented software development aims at encapsulating features in cohesive units to support program comprehension, variability, and reuse. Surprisingly, not much is known about the characteristics of cohesion in feature-oriented product lines, although proper cohesion is of special interest in product-line engineering due to its focus on variability and reuse. To fill this gap, we conduct an exploratory study on forty software product lines of different sizes and domains. A distinguishing property of our approach is that we use both classic software measures and novel measures that are based on distances in clustering layouts, which can also be used for visual exploration of product-line architectures. This way, we can draw a holistic picture of feature cohesion. In our exploratory study, we found several interesting correlations (e.g., between development process and feature cohesion) and we discuss insights and perspectives of investigating feature cohesion (e.g., regarding feature interfaces and programming style). @InProceedings{ICSE11p421, author = {Sven Apel and Dirk Beyer}, title = {Feature Cohesion in Software Product Lines: An Exploratory Study}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {421--430}, doi = {}, year = {2011}, } ICSE '11-DEMOS: "View Infinity: A Zoomable ..." View Infinity: A Zoomable Interface for Feature-Oriented Software Development Michael Stengel, Janet Feigenspan, Mathias Frisch, Christian Kästner, Sven Apel, and Raimund Dachselt (University of Magdeburg, Germany; University of Marburg, Germany; University of Passau, Germany) Software product line engineering provides efficient means to develop variable software. To support program comprehension of software product lines (SPLs), we developed View Infinity, a tool that provides seamless and semantic zooming of different abstraction layers of an SPL. First results of a qualitative study with experienced SPL developers are promising and indicate that View Infinity is useful and intuitive to use. @InProceedings{ICSE11p1031, author = {Michael Stengel and Janet Feigenspan and Mathias Frisch and Christian Kästner and Sven Apel and Raimund Dachselt}, title = {View Infinity: A Zoomable Interface for Feature-Oriented Software Development}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1031--1033}, doi = {}, year = {2011}, } |
|
Araujo, Wladimir |
ICSE '11-SEIP: "Enabling the Runtime Assertion ..."
Enabling the Runtime Assertion Checking of Concurrent Contracts for the Java Modeling Language
Wladimir Araujo, Lionel C. Briand, and Yvan Labiche (Juniper Networks, Canada; Simula Research Laboratory, Norway; University of Oslo, Norway; Carleton University, Canada) Though there exists ample support for Design by Contract (DbC) for sequential programs, applying DbC to concurrent programs presents several challenges. In previous work, we extended the Java Modeling Language (JML) with constructs to specify concurrent contracts for Java programs. We present a runtime assertion checker (RAC) for the expanded JML capable of verifying assertions for concurrent Java programs. We systematically evaluate the validity of system testing results obtained via runtime assertion checking using actual concurrent and functional faults on a highly concurrent industrial system from the telecommunications domain. @InProceedings{ICSE11p786, author = {Wladimir Araujo and Lionel C. Briand and Yvan Labiche}, title = {Enabling the Runtime Assertion Checking of Concurrent Contracts for the Java Modeling Language}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {786--795}, doi = {}, year = {2011}, } |
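For readers unfamiliar with JML, a standard sequential contract looks like the following; the paper's constructs for concurrent contracts extend this notation and are not reproduced here.

    // A sequential JML contract; a runtime assertion checker turns these
    // specification comments into checks executed during testing.
    class Account {
        private int balance;

        //@ requires amount > 0;
        //@ ensures balance == \old(balance) + amount;
        void deposit(int amount) { balance += amount; }
    }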
|
Araya, Vanessa Peña |
ICSE '11-SRC: "Test Blueprint: An Effective ..."
Test Blueprint: An Effective Visual Support for Test Coverage
Vanessa Peña Araya (University of Chile, Chile) Test coverage is about assessing the relevance of unit tests against the tested application. It is widely acknowledged that software with “good” test coverage is more robust against unanticipated execution, thus lowering the maintenance cost. However, ensuring good-quality coverage is challenging, especially since most of the available test coverage tools do not discriminate software components that require a “strong” coverage from the components that require less attention from the unit tests. Hapao is an innovative test coverage tool, implemented in the Pharo Smalltalk programming language. It employs an effective and intuitive graphical representation to visually assess the quality of the coverage. A combination of appropriate metrics and relations visually shapes methods and classes, which indicates to the programmer whether more effort on testing is required. This paper presents the essence of Hapao using a real-world case study. @InProceedings{ICSE11p1140, author = {Vanessa Peña Araya}, title = {Test Blueprint: An Effective Visual Support for Test Coverage}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1140--1142}, doi = {}, year = {2011}, } |
|
Arcuri, Andrea |
ICSE '11: "A Practical Guide for Using ..."
A Practical Guide for Using Statistical Tests to Assess Randomized Algorithms in Software Engineering
Andrea Arcuri and Lionel C. Briand (Simula Research Laboratory, Norway) Randomized algorithms have been used to successfully address many different types of software engineering problems. Such algorithms employ a degree of randomness as part of their logic. Randomized algorithms are useful for difficult problems where a precise solution cannot be derived in a deterministic way within reasonable time. However, randomized algorithms produce different results on every run when applied to the same problem instance. It is hence important to assess the effectiveness of randomized algorithms by collecting data from a large enough number of runs. The use of rigorous statistical tests is then essential to provide support to the conclusions derived by analyzing such data. In this paper, we provide a systematic review of the use of randomized algorithms in selected software engineering venues in 2009. Its goal is not to perform a complete survey but to get a representative snapshot of current practice in software engineering research. We show that randomized algorithms are used in a significant percentage of papers but that, in most cases, randomness is not properly accounted for. This casts doubts on the validity of most empirical results assessing randomized algorithms. There are numerous statistical tests, based on different assumptions, and it is not always clear when and how to use these tests. We hence provide practical guidelines to support empirical research on randomized algorithms in software engineering. @InProceedings{ICSE11p1, author = {Andrea Arcuri and Lionel C. Briand}, title = {A Practical Guide for Using Statistical Tests to Assess Randomized Algorithms in Software Engineering}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1--10}, doi = {}, year = {2011}, } |
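One statistic commonly used in such assessments is the Vargha-Delaney A12 effect size: the probability that a run of algorithm A beats a run of algorithm B (0.5 means no difference). A small self-contained implementation, for illustration:

    // A12 over two samples of run results (higher value = better run).
    class EffectSize {
        static double a12(double[] a, double[] b) {
            double wins = 0;
            for (double x : a)
                for (double y : b)
                    wins += x > y ? 1 : (x == y ? 0.5 : 0); // ties count half
            return wins / (a.length * (double) b.length);
        }
    }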
|
Artzi, Shay |
ICSE '11: "A Framework for Automated ..."
A Framework for Automated Testing of JavaScript Web Applications
Shay Artzi, Julian Dolby, Simon Holm Jensen, Anders Møller, and Frank Tip (IBM Research, USA; Aarhus University, Denmark) Current practice in testing JavaScript web applications requires manual construction of test cases, which is difficult and tedious. We present a framework for feedback-directed automated test generation for JavaScript in which execution is monitored to collect information that directs the test generator towards inputs that yield increased coverage. We implemented several instantiations of the framework, corresponding to variations on feedback-directed random testing, in a tool called Artemis. Experiments on a suite of JavaScript applications demonstrate that a simple instantiation of the framework that uses event handler registrations as feedback information produces surprisingly good coverage if enough tests are generated. By also using coverage information and read-write sets as feedback information, a slightly better level of coverage can be achieved, and sometimes with many fewer tests. The generated tests can be used for detecting HTML validity problems and other programming errors. @InProceedings{ICSE11p571, author = {Shay Artzi and Julian Dolby and Simon Holm Jensen and Anders Møller and Frank Tip}, title = {A Framework for Automated Testing of JavaScript Web Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {571--580}, doi = {}, year = {2011}, } |
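A toy rendering of feedback-directed event testing (not Artemis itself, and in Java rather than JavaScript): handlers are abstracted as suppliers of covered-branch ids, and an event is kept in the generated test only when it yields new coverage.

    import java.util.*;
    import java.util.function.Supplier;

    class EventFuzzer {
        static List<String> generate(Map<String, Supplier<Set<String>>> handlers, int steps) {
            Set<String> covered = new HashSet<>();
            List<String> test = new ArrayList<>();
            List<String> events = new ArrayList<>(handlers.keySet());
            Random rnd = new Random(42); // fixed seed: reproducible generation
            for (int i = 0; i < steps; i++) {
                String event = events.get(rnd.nextInt(events.size()));
                if (covered.addAll(handlers.get(event).get())) // feedback: new coverage?
                    test.add(event);
            }
            return test;
        }
    }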
|
Athanasopoulos, Dionysis |
ICSE '11-NIER: "Mining Service Abstractions ..."
Mining Service Abstractions (NIER Track)
Dionysis Athanasopoulos, Apostolos V. Zarras, Panos Vassiliadis, and Valerie Issarny (University of Ioannina, Greece; INRIA-Paris, France) Several lines of research rely on the concept of service abstractions to enable the organization, the composition and the adaptation of services. However, what is still missing is a systematic approach for extracting service abstractions out of the vast amount of services that are available all over the Web. To deal with this issue, we propose an approach for mining service abstractions, based on an agglomerative clustering algorithm. Our experimental findings suggest that the approach is promising and can serve as a basis for future research. @InProceedings{ICSE11p944, author = {Dionysis Athanasopoulos and Apostolos V. Zarras and Panos Vassiliadis and Valerie Issarny}, title = {Mining Service Abstractions (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {944--947}, doi = {}, year = {2011}, } |
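A compact sketch of agglomerative clustering over service term sets, assuming Jaccard similarity as the merging criterion (the paper's actual similarity measure may differ):

    import java.util.*;

    class ServiceClustering {
        // Each cluster carries the union of its services' interface terms;
        // repeatedly merge the most similar pair until no pair is similar enough.
        static List<Set<String>> cluster(List<Set<String>> termSets, double minSim) {
            List<Set<String>> clusters = new ArrayList<>();
            for (Set<String> t : termSets) clusters.add(new HashSet<>(t));
            while (clusters.size() > 1) {
                int bi = 0, bj = 1; double best = -1;
                for (int i = 0; i < clusters.size(); i++)
                    for (int j = i + 1; j < clusters.size(); j++) {
                        double s = jaccard(clusters.get(i), clusters.get(j));
                        if (s > best) { best = s; bi = i; bj = j; }
                    }
                if (best < minSim) break; // desired abstraction level reached
                clusters.get(bi).addAll(clusters.remove(bj)); // merge closest pair
            }
            return clusters;
        }

        static double jaccard(Set<String> a, Set<String> b) {
            Set<String> inter = new HashSet<>(a); inter.retainAll(b);
            Set<String> union = new HashSet<>(a); union.addAll(b);
            return union.isEmpty() ? 1.0 : (double) inter.size() / union.size();
        }
    }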
|
Atkinson, Colin |
ICSE '11-NIER: "Search-Enhanced Testing (NIER ..."
Search-Enhanced Testing (NIER Track)
Colin Atkinson, Oliver Hummel, and Werner Janjic (University of Mannheim, Germany) The prime obstacle to automated defect testing has always been the generation of “correct” results against which to judge the behavior of the system under test – the “oracle problem”. So-called “back-to-back” testing techniques that exploit the availability of multiple versions of a system to solve the oracle problem have mainly been restricted to very special, safety-critical domains such as military and space applications since it is so expensive to manually develop the additional versions. However, a new generation of software search engines that can find multiple copies of software components at virtually zero cost promise to change this situation. They make it economically feasible to use the knowledge locked in reusable software components to dramatically improve the efficiency of the software testing process. In this paper we outline the basic ingredients of such an approach. @InProceedings{ICSE11p880, author = {Colin Atkinson and Oliver Hummel and Werner Janjic}, title = {Search-Enhanced Testing (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {880--883}, doi = {}, year = {2011}, } |
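The core of back-to-back testing fits in a few lines: run one input through several independently obtained implementations and flag any disagreement, so the alternative versions act as a stand-in oracle. A minimal sketch:

    import java.util.List;
    import java.util.function.Function;

    class BackToBack {
        // True if all retrieved implementations agree on this input;
        // disagreement marks the input as revealing a potential defect.
        static <I, O> boolean agree(List<Function<I, O>> versions, I input) {
            O expected = versions.get(0).apply(input);
            return versions.stream().allMatch(v -> expected.equals(v.apply(input)));
        }
    }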
|
Avgeriou, Paris |
ICSE '11-NIER: "Capturing Tacit Architectural ..."
Capturing Tacit Architectural Knowledge Using the Repertory Grid Technique (NIER Track)
Dan Tofan, Matthias Galster, and Paris Avgeriou (University of Groningen, Netherlands) Knowledge about the architecture of a software-intensive system tends to vaporize easily. This leads to increased maintenance costs. We explore a new idea: utilizing the repertory grid technique to capture tacit architectural knowledge. Particularly, we investigate the elicitation of design decision alternatives and their characteristics. To study the applicability of this idea, we performed an exploratory study. Seven independent subjects applied the repertory grid technique to document a design decision they had to take in previous projects. Then, we interviewed each subject to understand their perception about the technique. We identified advantages and disadvantages of using the technique. The main advantage is the reasoning support it provides; the main disadvantage is the additional effort it requires. Also, applying the technique depends on the context of the project. Using the repertory grid technique is a promising approach for fighting architectural knowledge vaporization. @InProceedings{ICSE11p916, author = {Dan Tofan and Matthias Galster and Paris Avgeriou}, title = {Capturing Tacit Architectural Knowledge Using the Repertory Grid Technique (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {916--919}, doi = {}, year = {2011}, } ICSE '11-WORKSHOPS: "Workshop on SHAring and Reusing ..." Workshop on SHAring and Reusing architectural Knowledge (SHARK 2011) Paris Avgeriou, Patricia Lago, and Philippe Kruchten (University of Groningen, Netherlands; VU University Amsterdam, Netherlands; University of British Columbia, Canada) Architectural Knowledge (AK) is defined as the integrated representation of the software architecture of a software-intensive system or family of systems along with architectural decisions and their rationale, external influence and the development environment. The SHARK workshop series focuses on current methods, languages, and tools that can be used to extract, represent, share, apply, and reuse AK, and the experimentation and/or exploitation thereof. This sixth edition of SHARK will discuss, among other topics, the approaches for AK personalization, where knowledge is not codified through templates or annotations, but it is exchanged through the discussion between the different stakeholders. @InProceedings{ICSE11p1220, author = {Paris Avgeriou and Patricia Lago and Philippe Kruchten}, title = {Workshop on SHAring and Reusing architectural Knowledge (SHARK 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1220--1221}, doi = {}, year = {2011}, } |
|
Bacchelli, Alberto |
ICSE '11-DEMOS: "Miler: A Toolset for Exploring ..."
Miler: A Toolset for Exploring Email Data
Alberto Bacchelli, Michele Lanza, and Marco D'Ambros (University of Lugano, Switzerland) Source code is the target and final outcome of software development. By focusing our research and analysis on source code only, we risk forgetting that software is the product of human efforts, where communication plays a pivotal role. One of the most widely used means of communication is email, which has become vital for any distributed development project. Analyzing email archives is non-trivial, due to the noisy and unstructured nature of emails, the vast amounts of information, the unstandardized storage systems, and the gap with development tools. We present Miler, a toolset that allows the exploration of this form of communication, in the context of software maintenance and evolution. With Miler we can retrieve data from mailing list repositories in different formats, model emails as first-class entities, and transparently store them in databases. Miler offers tools and support for navigating the content, manually labelling emails with discussed source code entities, automatically linking emails to source code, measuring code entities’ popularity in mailing lists, exposing structured content in the unstructured content, and integrating email communication in an IDE. @InProceedings{ICSE11p1025, author = {Alberto Bacchelli and Michele Lanza and Marco D'Ambros}, title = {Miler: A Toolset for Exploring Email Data}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1025--1027}, doi = {}, year = {2011}, } ICSE '11-DOCTORALPRESENT: "Exploring, Exposing, and Exploiting ..." Exploring, Exposing, and Exploiting Emails to Include Human Factors in Software Engineering Alberto Bacchelli (University of Lugano, Switzerland) Researchers mine software repositories to support software maintenance and evolution. The analysis of the structured data, mainly source code and changes, has several benefits and offers precise results. This data, however, leaves communication in the background, and does not permit a deep investigation of the human factor, which is crucial in software engineering. Software repositories also archive documents, such as emails or comments, that are used to exchange knowledge among people--we call it "people-centric information." By covering this data, we include the human factor in our analysis, yet its unstructured nature makes it currently sub-exploited. Our work, by focusing on email communication and by implementing the necessary tools, investigates methods for exploring, exposing, and exploiting unstructured data. We believe it is possible to close the gap between development and communication, extract opinions, habits, and views of developers, and link implementation to its rationale; we foresee a future where software analysis and development are routinely augmented with people-centric information. @InProceedings{ICSE11p1074, author = {Alberto Bacchelli}, title = {Exploring, Exposing, and Exploiting Emails to Include Human Factors in Software Engineering}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1074--1077}, doi = {}, year = {2011}, } |
|
Bagheri, Hamid |
ICSE '11-SRC: "A Formal Approach to Software ..."
A Formal Approach to Software Synthesis for Architectural Platforms
Hamid Bagheri (University of Virginia, USA) Software-intensive systems today often rely on middleware platforms as major building blocks. As such, the architectural choices of such systems are being driven to a significant extent by such platforms. However, the diversity and rapid evolution of these platforms lead to architectural choices quickly becoming obsolete. Yet architectural choices are among the most difficult to change. This paper presents a novel and formal approach to end-to-end transformation of application models into architecturally correct code, averting the problem of mapping application models to such architectural platforms. @InProceedings{ICSE11p1143, author = {Hamid Bagheri}, title = {A Formal Approach to Software Synthesis for Architectural Platforms}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1143--1145}, doi = {}, year = {2011}, } |
|
Bajracharya, Sushil |
ICSE '11-WORKSHOPS: "Third International Workshop ..."
Third International Workshop on Search-Driven Development: Users, Infrastructure, Tools, and Evaluation (SUITE 2011)
Sushil Bajracharya, Adrian Kuhn, and Yunwen Ye (Black Duck Software, USA; University of Bern, Switzerland; Software Research Associates Inc., Japan) SUITE is a workshop that focuses on exploring the notion of search as a fundamental activity during software development. The first two editions of SUITE were held at ICSE 2009 and 2010 and focused on building a research community that brings together researchers and practitioners interested in the research areas that SUITE addresses. While this third workshop continues the community-building effort, it puts more focus on directly addressing some of the urgent issues identified by the previous two workshops, encouraging researchers to contribute to and take advantage of the common datasets that we have started assembling for SUITE research. @InProceedings{ICSE11p1228, author = {Sushil Bajracharya and Adrian Kuhn and Yunwen Ye}, title = {Third International Workshop on Search-Driven Development: Users, Infrastructure, Tools, and Evaluation (SUITE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1228--1229}, doi = {}, year = {2011}, } |
|
Balan, Rajesh Krishna |
ICSE '11: "Configuring Global Software ..."
Configuring Global Software Teams: A Multi-Company Analysis of Project Productivity, Quality, and Profits
Narayan Ramasubbu, Marcelo Cataldo, Rajesh Krishna Balan, and James D. Herbsleb (Singapore Management University, Singapore; CMU, USA) In this paper, we examined the impact of project-level configurational choices of globally distributed software teams on project productivity, quality, and profits. Our analysis used data from 362 projects of four different firms. These projects spanned a wide range of programming languages, application domains, process choices, and development sites spread over 15 countries and 5 continents. Our analysis revealed fundamental tradeoffs in choosing configurational choices that are optimized for productivity, quality, and/or profits. In particular, achieving higher levels of productivity and quality require diametrically opposed configurational choices. In addition, creating imbalances in the expertise and personnel distribution of project teams significantly helps increase profit margins. However, a profit-oriented imbalance could also significantly affect productivity and/or quality outcomes. Analyzing these complex tradeoffs, we provide actionable managerial insights that can help software firms and their clients choose configurations that achieve desired project outcomes in globally distributed software development. @InProceedings{ICSE11p261, author = {Narayan Ramasubbu and Marcelo Cataldo and Rajesh Krishna Balan and James D. Herbsleb}, title = {Configuring Global Software Teams: A Multi-Company Analysis of Project Productivity, Quality, and Profits}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {261--270}, doi = {}, year = {2011}, } |
|
Balland, Emilie |
ICSE '11: "Leveraging Software Architectures ..."
Leveraging Software Architectures to Guide and Verify the Development of Sense/Compute/Control Applications
Damien Cassou, Emilie Balland, Charles Consel, and Julia Lawall (University of Bordeaux, France; INRIA, France; DIKU, Denmark; LIP6, France) A software architecture describes the structure of a computing system by specifying software components and their interactions. Mapping a software architecture to an implementation is a well known challenge. A key element of this mapping is the architecture’s description of the data and control-flow interactions between components. The characterization of these interactions can be rather abstract or very concrete, providing more or less implementation guidance, programming support, and static verification. In this paper, we explore one point in the design space between abstract and concrete component interaction specifications. We introduce a notion of interaction contract that expresses the set of allowed interactions between components, describing both data and control-flow constraints. This declaration is part of the architecture description, allows generation of extensive programming support, and enables various verifications. We instantiate our approach in an architecture description language for Sense/Compute/Control applications, and describe associated compilation and verification strategies. @InProceedings{ICSE11p431, author = {Damien Cassou and Emilie Balland and Charles Consel and Julia Lawall}, title = {Leveraging Software Architectures to Guide and Verify the Development of Sense/Compute/Control Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {431--440}, doi = {}, year = {2011}, } |
|
Bao, Tao |
ICSE '11: "Coalescing Executions for ..."
Coalescing Executions for Fast Uncertainty Analysis
William N. Sumner, Tao Bao, Xiangyu Zhang, and Sunil Prabhakar (Purdue University, USA) Uncertain data processing is critical in a wide range of applications such as scientific computation handling data with inevitable errors and financial decision making relying on human-provided parameters. While increasingly studied in the area of databases, uncertain data processing is often carried out by software, and thus software based solutions are attractive. In particular, Monte Carlo (MC) methods execute software with many samples from the uncertain inputs and observe the statistical behavior of the output. In this paper, we propose a technique to improve the cost-effectiveness of MC methods. Assuming only part of the input is uncertain, the certain part of the input always leads to the same execution across multiple sample runs. We remove such redundancy by coalescing multiple sample runs in a single run. In the coalesced run, the program operates on a vector of values if uncertainty is present and a single value otherwise. We handle cases where control flow and pointers are uncertain. Our results show that we can speed up the execution time of 30 sample runs by an average factor of 2.3 without precision loss, or by up to 3.4 with negligible precision loss. @InProceedings{ICSE11p581, author = {William N. Sumner and Tao Bao and Xiangyu Zhang and Sunil Prabhakar}, title = {Coalescing Executions for Fast Uncertainty Analysis}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {581--590}, doi = {}, year = {2011}, } |
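The coalescing idea in miniature (illustrative only; the real technique also handles uncertain control flow and pointers): the uncertain input becomes a vector of samples processed in one run, while certain inputs stay scalar and are used once.

    // One coalesced run replaces 30 scalar Monte Carlo runs.
    class Coalesced {
        static double[] tax(double[] incomeSamples, double flatDeduction) {
            double[] out = new double[incomeSamples.length];
            for (int i = 0; i < out.length; i++) {
                // flatDeduction is certain: shared across all samples.
                out[i] = Math.max(0, incomeSamples[i] - flatDeduction) * 0.3;
            }
            return out;
        }
    }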
|
Barman, Shaon |
ICSE '11: "Angelic Debugging ..."
Angelic Debugging
Satish Chandra, Emina Torlak, Shaon Barman, and Rastislav Bodik (IBM Research, USA; UC Berkeley, USA) Software ships with known bugs because it is expensive to pinpoint and fix the bug exposed by a failing test. To reduce the cost of bug identification, we locate expressions that are likely causes of bugs and thus candidates for repair. Our symbolic method approximates an ideal approach to fixing bugs mechanically, which is to search the space of all edits to the program for one that repairs the failing test without breaking any passing test. We approximate the expensive ideal of exploring syntactic edits by instead computing the set of values whose substitution for the expression corrects the execution. We observe that an expression is a repair candidate if it can be replaced with a value that fixes a failing test and in each passing test, its value can be changed to another value without breaking the test. The latter condition makes the expression flexible in that it permits multiple values. The key observation is that the repair of a flexible expression is less likely to break a passing test. The method is called angelic debugging because the values are computed by angelically nondeterministic statements. We implemented the method on top of the Java PathFinder model checker. Our experiments with this technique show promise of its applicability in speeding up program debugging. @InProceedings{ICSE11p121, author = {Satish Chandra and Emina Torlak and Shaon Barman and Rastislav Bodik}, title = {Angelic Debugging}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {121--130}, doi = {}, year = {2011}, } |
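A toy encoding of the candidacy criterion stated above, with the expression abstracted as a single integer value that may be overwritten and a deliberately tiny value domain (the paper computes such values symbolically):

    import java.util.function.IntPredicate;

    class Angelic {
        // An expression site is a repair candidate if some value fixes the
        // failing test, and every passing test also passes under some value
        // other than the original (the expression is "flexible").
        static boolean repairCandidate(IntPredicate failingTest, IntPredicate[] passingTests, int original) {
            boolean fixesFailing = false;
            for (int v = -10; v <= 10; v++) if (failingTest.test(v)) fixesFailing = true;
            if (!fixesFailing) return false;
            for (IntPredicate t : passingTests) {
                boolean flexible = false;
                for (int v = -10; v <= 10; v++)
                    if (v != original && t.test(v)) flexible = true;
                if (!flexible) return false;
            }
            return true;
        }
    }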
|
Barna, Cornel |
ICSE '11-NIER: "Model-based Performance Testing ..."
Model-based Performance Testing (NIER Track)
Cornel Barna, Marin Litoiu, and Hamoun Ghanbari (York University, Canada) In this paper, we present a method for performance testing of transactional systems. The method models the system under test, finds the software and hardware bottlenecks, and generates the workloads that saturate them. The framework is adaptive: the model and workloads are determined during the performance test execution by measuring the system performance, fitting a performance model, and analytically computing the number and mix of users that will saturate the bottlenecks. We model the software system using a two-layer queuing model and use analytical techniques to find the workload mixes that change the bottlenecks in the system. Those workload mixes become stress vectors and initial starting points for the stress test cases. The rest of the test cases are generated by a feedback loop that drives the software system towards the worst-case behaviour. @InProceedings{ICSE11p872, author = {Cornel Barna and Marin Litoiu and Hamoun Ghanbari}, title = {Model-based Performance Testing (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {872--875}, doi = {}, year = {2011}, } |
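The analytical step can be illustrated with the utilization law: given service demands D[r][c] (time one class-c request spends at resource r) and class arrival rates lambda[c], the utilization of resource r is the sum over c of lambda[c]*D[r][c], and the resource whose utilization approaches 1 first is the bottleneck. A minimal sketch (a simplification of the paper's two-layer model):

    class Bottleneck {
        // Returns the index of the resource that saturates first
        // under the candidate workload mix.
        static int bottleneck(double[][] demand, double[] lambda) {
            int worst = 0;
            double max = -1;
            for (int r = 0; r < demand.length; r++) {
                double u = 0;
                for (int c = 0; c < lambda.length; c++) u += lambda[c] * demand[r][c];
                if (u > max) { max = u; worst = r; }
            }
            return worst;
        }
    }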
|
Barr, Earl T. |
ICSE '11-DEMOS: "BQL: Capturing and Reusing ..."
BQL: Capturing and Reusing Debugging Knowledge
Zhongxian Gu, Earl T. Barr, and Zhendong Su (UC Davis, USA) When fixing a bug, a programmer tends to search for similar bugs that have been resolved in the past. A fix for a similar bug may help him fix his bug or at least understand his bug. We designed and implemented the Bug Query Language (BQL) and its accompanying tools to help users search for similar bugs to aid debugging. This paper demonstrates the main features of the BQL infrastructure. We populated BQL with bugs collected from open-source projects and show that BQL could have helped users to fix real-world bugs. @InProceedings{ICSE11p1001, author = {Zhongxian Gu and Earl T. Barr and Zhendong Su}, title = {BQL: Capturing and Reusing Debugging Knowledge}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1001--1003}, doi = {}, year = {2011}, } |
|
Bartlett, Roscoe |
ICSE '11-WORKSHOPS: "Fourth International Workshop ..."
Fourth International Workshop on Software Engineering for Computational Science and Engineering (SE-CSE 2011)
Jeffrey C. Carver, Roscoe Bartlett, Ian Gorton, Lorin Hochstein, Diane Kelly, and Judith Segal (University of Alabama, USA; Sandia National Laboratories, USA; Pacific Northwest National Laboratory, USA; USC-ISI, USA; Royal Military College, Canada; The Open University, UK) Computational Science and Engineering (CSE) software supports a wide variety of domains including nuclear physics, crash simulation, satellite data processing, fluid dynamics, climate modeling, bioinformatics, and vehicle development. The increase in the importance of CSE software motivates the need to identify and understand appropriate software engineering (SE) practices for CSE. Because of the uniqueness of CSE software development, existing SE tools and techniques developed for the business/IT community are often not efficient or effective. Appropriate SE solutions must account for the salient characteristics of the CSE development environment. This situation creates an opportunity for members of the SE community to interact with members of the CSE community to address this need. This workshop facilitates that collaboration by bringing together members of the SE community and the CSE community to share perspectives and present findings from research and practice relevant to CSE software. A significant portion of the workshop is devoted to focused interaction among the participants with the goal of generating a research agenda to improve tools, techniques, and experimental methods for studying CSE software engineering. @InProceedings{ICSE11p1226, author = {Jeffrey C. Carver and Roscoe Bartlett and Ian Gorton and Lorin Hochstein and Diane Kelly and Judith Segal}, title = {Fourth International Workshop on Software Engineering for Computational Science and Engineering (SE-CSE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1226--1227}, doi = {}, year = {2011}, } |
|
Barzilay, Ohad |
ICSE '11-NIER: "How Do Programmers Ask and ..."
How Do Programmers Ask and Answer Questions on the Web? (NIER Track)
Christoph Treude, Ohad Barzilay, and Margaret-Anne Storey (University of Victoria, Canada; Tel-Aviv University, Israel) Question and Answer (Q&A) websites, such as Stack Overflow, use social media to facilitate knowledge exchange between programmers and fill archives with millions of entries that contribute to the body of knowledge in software development. Understanding the role of Q&A websites in the documentation landscape will enable us to make recommendations on how individuals and companies can leverage this knowledge effectively. In this paper, we analyze data from Stack Overflow to categorize the kinds of questions that are asked, and to explore which questions are answered well and which ones remain unanswered. Our preliminary findings indicate that Q&A websites are particularly effective at code reviews and conceptual questions. We pose research questions and suggest future work to explore the motivations of programmers that contribute to Q&A websites, and to understand the implications of turning Q&A exchanges into technical mini-blogs through the editing of questions and answers. @InProceedings{ICSE11p804, author = {Christoph Treude and Ohad Barzilay and Margaret-Anne Storey}, title = {How Do Programmers Ask and Answer Questions on the Web? (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {804--807}, doi = {}, year = {2011}, } |
|
Basili, Victor R. |
ICSE '11-SEIP: "A Case Study of Measuring ..."
A Case Study of Measuring Process Risk for Early Insights into Software Safety
Lucas Layman, Victor R. Basili, Marvin V. Zelkowitz, and Karen L. Fisher (Fraunhofer CESE, USA; University of Maryland, USA; NASA Goddard Spaceflight Center, USA) In this case study, we examine software safety risk in three flight hardware systems in NASA’s Constellation spaceflight program. We applied our Technical and Process Risk Measurement (TPRM) methodology to the Constellation hazard analysis process to quantify the technical and process risks involving software safety in the early design phase of these projects. We analyzed 154 hazard reports and collected metrics to measure the prevalence of software in hazards and the specificity of descriptions of software causes of hazardous conditions. We found that 49-70% of 154 hazardous conditions could be caused by software or software was involved in the prevention of the hazardous condition. We also found that 12-17% of the 2013 hazard causes involved software, and that 23-29% of all causes had a software control. The application of the TPRM methodology identified process risks in the application of the hazard analysis process itself that may lead to software safety risk. @InProceedings{ICSE11p623, author = {Lucas Layman and Victor R. Basili and Marvin V. Zelkowitz and Karen L. Fisher}, title = {A Case Study of Measuring Process Risk for Early Insights into Software Safety}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {623--632}, doi = {}, year = {2011}, } |
|
Bass, Len |
ICSE '11-SEIP: "Architecture Evaluation without ..."
Architecture Evaluation without an Architecture: Experience with the Smart Grid
Rick Kazman, Len Bass, James Ivers, and Gabriel A. Moreno (SEI/CMU, USA; University of Hawaii, USA) This paper describes an analysis of some of the challenges facing one portion of the Smart Grid in the United States—residential Demand Response (DR) systems. The purposes of this paper are twofold: 1) to discover risks to residential DR systems and 2) to illustrate an architecture-based analysis approach to uncovering risks that span a collection of technical and social concerns. The results presented here are specific to residential DR but the approach is general and it could be applied to other systems within the Smart Grid and other critical infrastructure domains. Our architecture-based analysis is different from most other approaches to analyzing complex systems in that it addresses multiple quality attributes simultaneously (e.g., performance, reliability, security, modifiability, usability, etc.) and it considers the architecture of a complex system from a socio-technical perspective where the actions of the people in the system are as important, from an analysis perspective, as the physical and computational elements of the system. This analysis can be done early in a system’s lifetime, before substantial resources have been committed to its construction or procurement, and so it provides extremely cost-effective risk analysis. @InProceedings{ICSE11p663, author = {Rick Kazman and Len Bass and James Ivers and Gabriel A. Moreno}, title = {Architecture Evaluation without an Architecture: Experience with the Smart Grid}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {663--670}, doi = {}, year = {2011}, } |
|
Batory, Don |
ICSE '11-WORKSHOPS: "Fourth Workshop on Refactoring ..."
Fourth Workshop on Refactoring Tools (WRT 2011)
Danny Dig and Don Batory (University of Illinois at Urbana-Champaign, USA; University of Texas at Austin, USA) Refactoring is the process of applying behavior-preserving transformations to a program with the objective of improving the program’s design. A specific refactoring is identified by a name (e.g., Extract Method), a set of preconditions, and a set of transformations that need to be performed. Tool support for refactoring is essential because checking the preconditions of refactoring often requires nontrivial program analysis, and applying transformations may affect many locations throughout a program. In recent years, the emergence of light-weight programming methodologies such as Extreme Programming has generated a great amount of interest in refactoring, and refactoring support has become a required feature in today’s IDEs. This workshop is a continuation of a series of previous workshops (ECOOP 2007, OOPSLA 2008 and 2009 – see http://refactoring.info/WRT) where researchers and developers of refactoring tools can meet, discuss recent ideas and work, and view tool demonstrations. @InProceedings{ICSE11p1202, author = {Danny Dig and Don Batory}, title = {Fourth Workshop on Refactoring Tools (WRT 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1202--1203}, doi = {}, year = {2011}, } |
|
Baudry, Benoit |
ICSE '11: "Estimating Footprints of Model ..."
Estimating Footprints of Model Operations
Cédric Jeanneret, Martin Glinz, and Benoit Baudry (University of Zurich, Switzerland; IRISA, France) When performed on a model, a set of operations (e.g., queries or model transformations) rarely uses all the information present in the model. Unintended underuse of a model can indicate various problems: the model may contain more detail than necessary or the operations may be immature or erroneous. Analyzing the footprints of the operations — i.e., the part of a model actually used by an operation — is a simple technique to diagnose and analyze such problems. However, precisely calculating the footprint of an operation is expensive, because it requires analyzing the operation’s execution trace. In this paper, we present an automated technique to estimate the footprint of an operation without executing it. We evaluate our approach by applying it to 75 models and five operations. Our technique provides software engineers with an efficient, yet precise, evaluation of the usage of their models. @InProceedings{ICSE11p601, author = {Cédric Jeanneret and Martin Glinz and Benoit Baudry}, title = {Estimating Footprints of Model Operations}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {601--610}, doi = {}, year = {2011}, } |
|
Bavota, Gabriele |
ICSE '11-NIER: "Identifying Method Friendships ..."
Identifying Method Friendships to Remove the Feature Envy Bad Smell (NIER Track)
Rocco Oliveto, Malcom Gethers, Gabriele Bavota, Denys Poshyvanyk, and Andrea De Lucia (University of Molise, Italy; College of William and Mary, USA; University of Salerno, Italy) We propose a novel approach to identify Move Method refactoring opportunities and remove the Feature Envy bad smell from source code. The proposed approach analyzes both structural and conceptual relationships between methods and uses Relational Topic Models (RTM) to identify sets of methods that share several responsibilities, i.e., "friend methods". The analysis of method friendships of a given method can be used to pinpoint the target class (envied class) where the method should be moved in. The results of a preliminary empirical evaluation indicate that the proposed approach provides accurate and meaningful refactoring opportunities. @InProceedings{ICSE11p820, author = {Rocco Oliveto and Malcom Gethers and Gabriele Bavota and Denys Poshyvanyk and Andrea De Lucia}, title = {Identifying Method Friendships to Remove the Feature Envy Bad Smell (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {820--823}, doi = {}, year = {2011}, } |
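Feature Envy and its Move Method cure in miniature (illustrative Java, not the approach's output): a method that mostly reads another class's data is relocated to that "friend" class.

    // After the move: report() lives with the data it envied.
    class Customer {
        String name;
        String tier;
        String report() { return name + " (" + tier + ")"; }
    }

    class Order {
        Customer customer;
        // Before the move, this method read customer.name and customer.tier
        // directly (Feature Envy); now it simply delegates.
        String report() { return customer.report(); }
    }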
|
Bayne, Michael |
ICSE '11: "Always-Available Static and ..."
Always-Available Static and Dynamic Feedback
Michael Bayne, Richard Cook, and Michael D. Ernst (University of Washington, USA) Developers who write code in a statically typed language are denied the ability to obtain dynamic feedback by executing their code during periods when it fails the static type checker. They are further confined to the static typing discipline during times in the development process where it does not yield the highest productivity. If they opt instead to use a dynamic language, they forgo the many benefits of static typing, including machine-checked documentation, improved correctness and reliability, tool support (such as for refactoring), and better runtime performance. We present a novel approach to giving developers the benefits of both static and dynamic typing, throughout the development process, and without the burden of manually separating their program into statically- and dynamically-typed parts. Our approach, which is intended for temporary use during the development process, relaxes the static type system and provides a semantics for many type-incorrect programs. It defers type errors to run time, or suppresses them if they do not affect runtime semantics. We implemented our approach in a publicly available tool, DuctileJ, for the Java language. In case studies, DuctileJ conferred benefits both during prototyping and during the evolution of existing code. @InProceedings{ICSE11p521, author = {Michael Bayne and Richard Cook and Michael D. Ernst}, title = {Always-Available Static and Dynamic Feedback}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {521--530}, doi = {}, year = {2011}, } |
|
Becker, Steffen |
ICSE '11-SEIP: "An Industrial Case Study on ..."
An Industrial Case Study on Quality Impact Prediction for Evolving Service-Oriented Software
Heiko Koziolek, Bastian Schlich, Carlos Bilich, Roland Weiss, Steffen Becker, Klaus Krogmann, Mircea Trifu, Raffaela Mirandola, and Anne Koziolek (ABB Corporate Research, Germany; University of Paderborn, Germany; FZI, Germany; Politecnico di Milano, Italy; KIT, Germany) Systematic decision support for architectural design decisions is a major concern for software architects of evolving service-oriented systems. In practice, architects often analyse the expected performance and reliability of design alternatives based on prototypes or former experience. Model-driven prediction methods claim to uncover the tradeoffs between different alternatives quantitatively while being more cost-effective and less error-prone. However, they often suffer from weak tool support and focus on single quality attributes. Furthermore, there is limited evidence on their effectiveness based on documented industrial case studies. Thus, we have applied a novel, model-driven prediction method called Q-ImPrESS on a large-scale process control system consisting of several million lines of code from the automation domain to evaluate its evolution scenarios. This paper reports our experiences with the method and lessons learned. Benefits of Q-ImPrESS are the good architectural decision support and comprehensive tool framework, while one drawback is the time-consuming data collection. @InProceedings{ICSE11p776, author = {Heiko Koziolek and Bastian Schlich and Carlos Bilich and Roland Weiss and Steffen Becker and Klaus Krogmann and Mircea Trifu and Raffaela Mirandola and Anne Koziolek}, title = {An Industrial Case Study on Quality Impact Prediction for Evolving Service-Oriented Software}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {776--785}, doi = {}, year = {2011}, } |
|
Begel, Andrew |
ICSE '11-WORKSHOPS: "Second International Workshop ..."
Second International Workshop on Web 2.0 for Software Engineering (Web2SE 2011)
Christoph Treude, Margaret-Anne Storey, Arie van Deursen, Andrew Begel, and Sue Black (University of Victoria, Canada; Delft University of Technology, Netherlands; Microsoft Research, USA; University College London, UK) Social software is built around an "architecture of participation" where user data is aggregated as a side-effect of using Web 2.0 applications. Web 2.0 implies that processes and tools are socially open, and that content can be used in several different contexts. Web 2.0 tools and technologies support interactive information sharing, data interoperability and user centered design. For instance, wikis, blogs, tags and feeds help us organize, manage and categorize content in an informal and collaborative way. Some of these technologies have made their way into collaborative software development processes and development platforms. These processes and environments are just scratching the surface of what can be done by incorporating Web 2.0 approaches and technologies into collaborative software development. Web 2.0 opens up new opportunities for developers to form teams and collaborate, but it also comes with challenges for developers and researchers. Web2SE aims to improve our understanding of how Web 2.0, manifested in technologies such as mashups or dashboards, can change the culture of collaborative software development. @InProceedings{ICSE11p1222, author = {Christoph Treude and Margaret-Anne Storey and Arie van Deursen and Andrew Begel and Sue Black}, title = {Second International Workshop on Web 2.0 for Software Engineering (Web2SE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1222--1223}, doi = {}, year = {2011}, } |
|
Behar, Noam |
ICSE '11-SEIP: "Code Coverage Analysis in ..."
Code Coverage Analysis in Practice for Large Systems
Yoram Adler, Noam Behar, Orna Raz, Onn Shehory, Nadav Steindler, Shmuel Ur, and Aviad Zlotnick (IBM Research Haifa, Israel; Microsoft, Israel; Shmuel Ur Innovation, Israel) Large systems generate immense quantities of code coverage data. A user faced with the task of analyzing this data, for example, to decide on test areas to improve, faces a ‘needle in a haystack’ problem. In earlier studies we introduced substring hole analysis, a technique for presenting large quantities of coverage data in a succinct way. Here we demonstrate the successful use of substring hole analysis on large-scale data from industrial software systems. To this end, we augment substring hole analysis by introducing a work flow and tool support for practical code coverage analysis. We conduct real data experiments indicating that augmented substring hole analysis enables code coverage analysis where it was previously impractical, correctly identifies functionality that is missing from existing tests, and can increase the probability of finding bugs. These facilitate cost-effective code coverage analysis. @InProceedings{ICSE11p736, author = {Yoram Adler and Noam Behar and Orna Raz and Onn Shehory and Nadav Steindler and Shmuel Ur and Aviad Zlotnick}, title = {Code Coverage Analysis in Practice for Large Systems}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {736--745}, doi = {}, year = {2011}, } |
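A heavily simplified rendering of the substring-hole idea: cluster the names of uncovered functions by shared name fragments, so that one hole (say, a whole ipv6_* subsystem) summarizes many individually uncovered functions. The function names here are hypothetical and the real analysis is considerably more elaborate.

    from collections import defaultdict

    def substring_holes(uncovered_names, min_size=2):
        """Group uncovered function names by shared name tokens; groups of
        min_size or more are candidate coverage 'holes'."""
        groups = defaultdict(set)
        for name in uncovered_names:
            for token in name.lower().split("_"):
                groups[token].add(name)
        return {t: fs for t, fs in groups.items() if len(fs) >= min_size}

    print(substring_holes(["ipv6_init", "ipv6_route", "ipv6_close", "log_flush"]))
    # {'ipv6': {'ipv6_init', 'ipv6_route', 'ipv6_close'}}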
|
Bellamy, Rachel |
ICSE '11-SEIP: "Deploying CogTool: Integrating ..."
Deploying CogTool: Integrating Quantitative Usability Assessment into Real-World Software Development
Rachel Bellamy, Bonnie E. John, and Sandra Kogan (IBM Research Watson, USA; CMU, USA; IBM Software Group, USA) Usability concerns are often difficult to integrate into real-world software development processes. To remedy this situation, IBM research and development, partnering with Carnegie Mellon University, has begun to employ a repeatable and quantifiable usability analysis method, embodied in CogTool, in its development practice. CogTool analyzes tasks performed on an interactive system from a storyboard and a demonstration of tasks on that storyboard, and predicts the time a skilled user will take to perform those tasks. We discuss how IBM designers and UX professionals used CogTool in their existing practice for contract compliance, communication within a product team and between a product team and its customer, assigning appropriate personnel to fix customer complaints, and quantitatively assessing design ideas before a line of code is written. We then reflect on the lessons learned by both the development organizations and the researchers attempting this technology transfer from academic research to integration into real-world practice, and we point to future research to even better serve the needs of practice. @InProceedings{ICSE11p691, author = {Rachel Bellamy and Bonnie E. John and Sandra Kogan}, title = {Deploying CogTool: Integrating Quantitative Usability Assessment into Real-World Software Development}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {691--700}, doi = {}, year = {2011}, } ICSE '11-NIER: "Sketching Tools for Ideation ..." Sketching Tools for Ideation (NIER Track) Rachel Bellamy, Michael Desmond, Jacquelyn Martino, Paul Matchen, Harold Ossher, John Richards, and Cal Swart (IBM Research Watson, USA) Sketching facilitates design in the exploration of ideas about concrete objects and abstractions. In fact, throughout the software engineering process when grappling with new ideas, people reach for a pen and start sketching. While pen and paper work well, digital media can provide additional features to benefit the sketcher. Digital support will only be successful, however, if it does not detract from the core sketching experience. Based on research that defines characteristics of sketches and sketching, this paper offers three preliminary tool examples. Each example is intended to enable sketching while maintaining its characteristic experience. @InProceedings{ICSE11p808, author = {Rachel Bellamy and Michael Desmond and Jacquelyn Martino and Paul Matchen and Harold Ossher and John Richards and Cal Swart}, title = {Sketching Tools for Ideation (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {808--811}, doi = {}, year = {2011}, } ICSE '11-WORKSHOPS: "Workshop on Flexible Modeling ..." Workshop on Flexible Modeling Tools (FlexiTools 2011) Harold Ossher, André van der Hoek, Margaret-Anne Storey, John Grundy, Rachel Bellamy, and Marian Petre (IBM Research Watson, USA; UC Irvine, USA; University of Victoria, Canada; Swinburne University of Technology at Hawthorn, Australia; The Open University, UK) Modeling tools are often not used for tasks during the software lifecycle for which they should be more helpful; instead free-form approaches, such as office tools and white boards, are frequently used. Prior workshops explored why this is the case and what might be done about it. 
The goal of this workshop is to continue those discussions and also to form an initial set of challenge problems and research challenges that researchers and developers of flexible modeling tools should address. @InProceedings{ICSE11p1192, author = {Harold Ossher and André van der Hoek and Margaret-Anne Storey and John Grundy and Rachel Bellamy and Marian Petre}, title = {Workshop on Flexible Modeling Tools (FlexiTools 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1192--1193}, doi = {}, year = {2011}, } |
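The time predictions in the CogTool entry above rest on keystroke-level cognitive modeling. As a rough illustration of that style of estimate (CogTool's actual engine builds on the ACT-R cognitive architecture, not this simple sum), a classic keystroke-level model adds published average operator times:

    # Back-of-the-envelope keystroke-level model (KLM): sum standard
    # operator times for keystroke, pointing, homing, and mental prep.
    KLM_SECONDS = {"K": 0.28, "P": 1.10, "H": 0.40, "M": 1.35}

    def predict_task_time(operators):
        return sum(KLM_SECONDS[op] for op in operators)

    # e.g., think, point at a field, home to keyboard, type 5 characters:
    print(predict_task_time(["M", "P", "H"] + ["K"] * 5))  # ~4.25 s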
|
Benestad, Hans Christian |
ICSE '11-SEIP: "A Comparison of Model-based ..."
A Comparison of Model-based and Judgment-based Release Planning in Incremental Software Projects
Hans Christian Benestad and Jo E. Hannay (Simula Research Laboratory, Norway) Numerous factors are involved when deciding when to implement which features in incremental software development. To facilitate a rational and efficient planning process, release planning models make such factors explicit and compute release plan alternatives according to optimization principles. However, experience suggests that industrial use of such models is limited. To investigate the feasibility of model and tool support, we compared input factors assumed by release planning models with factors considered by expert planners. The former factors were cataloged by systematically surveying release planning models, while the latter were elicited through repertory grid interviews in three software organizations. The findings indicate a substantial overlap between the two approaches. However, a detailed analysis reveals that models focus on only select parts of a possibly larger space of relevant planning factors. Three concrete areas of mismatch were identified: (1) continuously evolving requirements and specifications, (2) continuously changing prioritization criteria, and (3) authority-based decision processes. With these results in mind, models, tools and guidelines can be adjusted to better address real-life development processes. @InProceedings{ICSE11p766, author = {Hans Christian Benestad and Jo E. Hannay}, title = {A Comparison of Model-based and Judgment-based Release Planning in Incremental Software Projects}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {766--775}, doi = {}, year = {2011}, } |
|
Berger, Thorsten |
ICSE '11: "Reverse Engineering Feature ..."
Reverse Engineering Feature Models
Steven She, Rafael Lotufo, Thorsten Berger, Andrzej Wasowski, and Krzysztof Czarnecki (University of Waterloo, Canada; University of Leipzig, Germany; IT University of Copenhagen, Denmark) Feature models describe the common and variable characteristics of a product line. Their advantages are well recognized in product line methods. Unfortunately, creating a feature model for an existing project is time-consuming and requires substantial effort from a modeler. We present procedures for reverse engineering feature models based on a crucial heuristic for identifying parents—the major challenge of this task. We also automatically recover constructs such as feature groups, mandatory features, and implies/excludes edges. We evaluate the technique on two large-scale software product lines with existing reference feature models—the Linux and eCos kernels—and FreeBSD, a project without a feature model. Our heuristic is effective across all three projects by ranking the correct parent among the top results for a vast majority of features. The procedures effectively reduce the information a modeler has to consider from thousands of choices to typically five or fewer. @InProceedings{ICSE11p461, author = {Steven She and Rafael Lotufo and Thorsten Berger and Andrzej Wasowski and Krzysztof Czarnecki}, title = {Reverse Engineering Feature Models}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {461--470}, doi = {}, year = {2011}, } |
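A deliberately naive version of parent ranking, for flavor only (the paper's heuristic is more sophisticated): across known configurations, a candidate parent p for feature f is scored by how often p is selected whenever f is, since a child can only be selected together with its parent. All feature names are invented.

    def rank_parents(configs, feature):
        """configs: list of sets of selected features. Returns candidate
        parents ranked by P(candidate selected | feature selected)."""
        with_f = [c for c in configs if feature in c]
        counts = {}
        for c in with_f:
            for candidate in c - {feature}:
                counts[candidate] = counts.get(candidate, 0) + 1
        return sorted(((n / len(with_f), cand) for cand, n in counts.items()),
                      reverse=True) if with_f else []

    configs = [{"kernel", "net", "ipv6"}, {"kernel", "net"}, {"kernel", "usb"}]
    print(rank_parents(configs, "ipv6"))  # 'net' and 'kernel' both score 1.0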
|
Bertolino, Antonia |
ICSE '11-WORKSHOPS: "Sixth International Workshop ..."
Sixth International Workshop on Automation of Software Test (AST 2011)
Howard Foster, Antonia Bertolino, and J. Jenny Li (City University London, UK; ISTI-CNR, Italy; Avaya Research Labs, USA) The Sixth International Workshop on Automation of Software Test (AST 2011) is associated with the 33rd International Conference on Software Engineering (ICSE 2011). This edition of AST was focused on the special theme of Software Design and the Automation of Software Test and authors were encouraged to submit work in this area. The workshop covers two days with presentations of regular research papers, industrial case studies and experience reports. The workshop also aims to have extensive discussions on collaborative solutions in the form of charette sessions. This paper summarizes the organization of the workshop, the special theme, as well as the sessions. @InProceedings{ICSE11p1216, author = {Howard Foster and Antonia Bertolino and J. Jenny Li}, title = {Sixth International Workshop on Automation of Software Test (AST 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1216--1217}, doi = {}, year = {2011}, } |
|
Bertran, Isela Macia |
ICSE '11-DOCTORALPRESENT: "Detecting Architecturally-Relevant ..."
Detecting Architecturally-Relevant Code Smells in Evolving Software Systems
Isela Macia Bertran (PUC Rio, Brazil) Refactoring tends to avoid the early deviation of a program from its intended architecture design. However, there is little knowledge about whether the manifestation of code smells in evolving software is an indicator of architectural deviations. A fundamental difficulty in this process is that developers are only equipped with static analysis techniques for the source code, which do not exploit traceable architectural information. This work addresses this problem by: (1) identifying a family of architecturally-relevant code smells; (2) providing empirical evidence about the correlation of code smell patterns and architectural degeneration; (3) proposing a set of metrics and detection strategies that exploit traceable architectural information in smell detection; and (4) conceiving a technique to support the early identification of architecture degeneration symptoms by reasoning about code smell patterns. @InProceedings{ICSE11p1090, author = {Isela Macia Bertran}, title = {Detecting Architecturally-Relevant Code Smells in Evolving Software Systems}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1090--1093}, doi = {}, year = {2011}, } |
|
Beyer, Dirk |
ICSE '11: "Feature Cohesion in Software ..."
Feature Cohesion in Software Product Lines: An Exploratory Study
Sven Apel and Dirk Beyer (University of Passau, Germany; Simon Fraser University, Canada) Software product lines gain momentum in research and industry. Many product-line approaches use features as a central abstraction mechanism. Feature-oriented software development aims at encapsulating features in cohesive units to support program comprehension, variability, and reuse. Surprisingly, not much is known about the characteristics of cohesion in feature-oriented product lines, although proper cohesion is of special interest in product-line engineering due to its focus on variability and reuse. To fill this gap, we conduct an exploratory study on forty software product lines of different sizes and domains. A distinguishing property of our approach is that we use both classic software measures and novel measures that are based on distances in clustering layouts, which can be used also for visual exploration of product-line architectures. This way, we can draw a holistic picture of feature cohesion. In our exploratory study, we found several interesting correlations (e.g., between development process and feature cohesion) and we discuss insights and perspectives of investigating feature cohesion (e.g., regarding feature interfaces and programming style). @InProceedings{ICSE11p421, author = {Sven Apel and Dirk Beyer}, title = {Feature Cohesion in Software Product Lines: An Exploratory Study}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {421--430}, doi = {}, year = {2011}, } |
|
Bhandar, Manisha |
ICSE '11-DEMOS: "Using MATCON to Generate CASE ..."
Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications
Elad Fein, Natalia Razinkov, Shlomit Shachor, Pietro Mazzoleni, Sweefen Goh, Richard Goodwin, Manisha Bhandar, Shyh-Kwei Chen, Juhnyoung Lee, Vibha Singhal Sinha, Senthil Mani, Debdoot Mukherjee, Biplav Srivastava, and Pankaj Dhoolia (IBM Research Haifa, Israel; IBM Research Watson, USA; IBM Research, India) The complex process of adapting pre-packaged applications, such as Oracle or SAP, to an organization’s needs is full of challenges. Although detailed, structured, and well-documented methods govern this process, the consulting team implementing the method must spend a huge amount of manual effort to make sure the guidelines of the method are followed as intended by the method author. MATCON breaks down the method content, documents, templates, and work products into reusable objects, and enables them to be cataloged and indexed so these objects can be easily found and reused on subsequent projects. By using models and meta-modeling the reusable methods, we automatically produce a CASE tool to apply these methods, thereby guiding consultants through this complex process. The resulting tool helps consultants create the method deliverables for the initial phases of large customization projects. Our MATCON output, referred to as Consultant Assistant, has shown significant savings in training costs, a 20–30% improvement in productivity, and positive results in large Oracle and SAP implementations. @InProceedings{ICSE11p1016, author = {Elad Fein and Natalia Razinkov and Shlomit Shachor and Pietro Mazzoleni and Sweefen Goh and Richard Goodwin and Manisha Bhandar and Shyh-Kwei Chen and Juhnyoung Lee and Vibha Singhal Sinha and Senthil Mani and Debdoot Mukherjee and Biplav Srivastava and Pankaj Dhoolia}, title = {Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1016--1018}, doi = {}, year = {2011}, } |
|
Bhattacharya, Pamela |
ICSE '11: "Assessing Programming Language ..."
Assessing Programming Language Impact on Development and Maintenance: A Study on C and C++
Pamela Bhattacharya and Iulian Neamtiu (UC Riverside, USA) Billions of dollars are spent every year for building and maintaining software. To reduce these costs we must identify the key factors that lead to better software and more productive development. One such key factor, and the focus of our paper, is the choice of programming language. Existing studies that analyze the impact of choice of programming language suffer from several deficiencies with respect to methodology and the applications they consider. For example, they consider applications built by different teams in different languages, hence fail to control for developer competence, or they consider small-sized, infrequently-used, short-lived projects. We propose a novel methodology which controls for development process and developer competence, and quantifies how the choice of programming language impacts software quality and developer productivity. We conduct a study and statistical analysis on a set of long-lived, widely-used, open source projects—Firefox, Blender, VLC, and MySQL. The key novelties of our study are: (1) we only consider projects which have considerable portions of development in two languages, C and C++, and (2) a majority of developers in these projects contribute to both C and C++ code bases. We found that using C++ instead of C results in improved software quality and reduced maintenance effort, and that code bases are shifting from C to C++. Our methodology lays a solid foundation for future studies on comparative advantages of particular programming languages. @InProceedings{ICSE11p171, author = {Pamela Bhattacharya and Iulian Neamtiu}, title = {Assessing Programming Language Impact on Development and Maintenance: A Study on C and C++}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {171--180}, doi = {}, year = {2011}, } ICSE '11-DOCTORALPOSTER: "Using Software Evolution History ..." Using Software Evolution History to Facilitate Development and Maintenance Pamela Bhattacharya (UC Riverside, USA) Much research in software engineering has focused on improving software quality and automating the maintenance process to reduce software costs and mitigate complications associated with the evolution process. Despite all these efforts, there is still high cost and effort associated with software bugs and software maintenance, software still continues to be unreliable, and software bugs can wreak havoc on software producers and consumers alike. My dissertation aims to advance the state of the art in software evolution research by designing tools that can measure and predict software quality and by creating integrated frameworks that help improve software maintenance and research that involves mining software repositories. @InProceedings{ICSE11p1122, author = {Pamela Bhattacharya}, title = {Using Software Evolution History to Facilitate Development and Maintenance}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1122--1123}, doi = {}, year = {2011}, } |
|
Bianculli, Domenico |
ICSE '11: "Interface Decomposition for ..."
Interface Decomposition for Service Compositions
Domenico Bianculli, Dimitra Giannakopoulou, and Corina S. Păsăreanu (University of Lugano, Switzerland; NASA Ames Research Center, USA; Carnegie Mellon Silicon Valley, USA) Service-based applications can be realized by composing existing services into new, added-value composite services. The external services with which a service composition interacts are usually known by means of their syntactical interface. However, an interface providing more information, such as a behavioral specification, could be more useful to a service integrator for assessing that a certain external service can contribute to fulfill the functional requirements of the composite application. Given the requirements specification of a composite service, we present a technique for obtaining the behavioral interfaces — in the form of labeled transition systems — of the external services, by decomposing the global interface specification that characterizes the environment of the service composition. The generated interfaces guarantee that the service composition fulfills its requirements during the execution. Our approach has been implemented in the LTSA tool and has been applied to two case studies. @InProceedings{ICSE11p501, author = {Domenico Bianculli and Dimitra Giannakopoulou and Corina S. Păsăreanu}, title = {Interface Decomposition for Service Compositions}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {501--510}, doi = {}, year = {2011}, } |
|
Bigrigg, Michael W. |
ICSE '11-SEIP: "SORASCS: A Case Study in SOA-based ..."
SORASCS: A Case Study in SOA-based Platform Design for Socio-Cultural Analysis
Bradley Schmerl, David Garlan, Vishal Dwivedi, Michael W. Bigrigg, and Kathleen M. Carley (CMU, USA) An increasingly important class of software-based systems is platforms that permit integration of third-party components, services, and tools. Service-Oriented Architecture (SOA) is one such platform that has been successful in providing integration and distribution in the business domain, and could be effective in other domains (e.g., scientific computing, healthcare, and complex decision making). In this paper, we discuss our application of SOA to provide an integration platform for socio-cultural analysis, a domain that, through models, tries to understand, analyze and predict relationships in large complex social systems. In developing this platform, called SORASCS, we had to overcome issues we believe are generally applicable to any application of SOA within a domain that involves technically naïve users and seeks to establish a sustainable software ecosystem based on a common integration platform. We discuss these issues, the lessons learned about the kinds of problems that occur, and pathways toward a solution. @InProceedings{ICSE11p643, author = {Bradley Schmerl and David Garlan and Vishal Dwivedi and Michael W. Bigrigg and Kathleen M. Carley}, title = {SORASCS: A Case Study in SOA-based Platform Design for Socio-Cultural Analysis}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {643--652}, doi = {}, year = {2011}, } |
|
Bilich, Carlos |
ICSE '11-SEIP: "An Industrial Case Study on ..."
An Industrial Case Study on Quality Impact Prediction for Evolving Service-Oriented Software
Heiko Koziolek, Bastian Schlich, Carlos Bilich, Roland Weiss, Steffen Becker, Klaus Krogmann, Mircea Trifu, Raffaela Mirandola, and Anne Koziolek (ABB Corporate Research, Germany; University of Paderborn, Germany; FZI, Germany; Politecnico di Milano, Italy; KIT, Germany) Systematic decision support for architectural design decisions is a major concern for software architects of evolving service-oriented systems. In practice, architects often analyse the expected performance and reliability of design alternatives based on prototypes or former experience. Model-driven prediction methods claim to uncover the tradeoffs between different alternatives quantitatively while being more cost-effective and less error-prone. However, they often suffer from weak tool support and focus on single quality attributes. Furthermore, there is limited evidence on their effectiveness based on documented industrial case studies. Thus, we have applied a novel, model-driven prediction method called Q-ImPrESS on a large-scale process control system consisting of several million lines of code from the automation domain to evaluate its evolution scenarios. This paper reports our experiences with the method and lessons learned. Benefits of Q-ImPrESS are the good architectural decision support and comprehensive tool framework, while one drawback is the time-consuming data collection. @InProceedings{ICSE11p776, author = {Heiko Koziolek and Bastian Schlich and Carlos Bilich and Roland Weiss and Steffen Becker and Klaus Krogmann and Mircea Trifu and Raffaela Mirandola and Anne Koziolek}, title = {An Industrial Case Study on Quality Impact Prediction for Evolving Service-Oriented Software}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {776--785}, doi = {}, year = {2011}, } |
|
Binkley, David |
ICSE '11: "Model Projection: Simplifying ..."
Model Projection: Simplifying Models in Response to Restricting the Environment
Kelly Androutsopoulos, David Binkley, David Clark, Nicolas Gold, Mark Harman, Kevin Lano, and Zheng Li (University College London, UK; Loyola University Maryland, USA; King's College London, UK) This paper introduces Model Projection. Finite state models such as Extended Finite State Machines are being used in an ever increasing number of software engineering activities. Model projection facilitates model development by specializing models for a specific operating environment. A projection is useful in many design-level applications including specification reuse and property verification. The applicability of model projection rests upon three critical concerns: correctness, effectiveness, and efficiency, all of which are addressed in this paper. We introduce four related algorithms for model projection and prove each correct. We also present an empirical study of effectiveness and efficiency using ten models, including widely-studied benchmarks as well as industrial models. Results show that a typical projection includes about half of the states and a third of the transitions from the original model. @InProceedings{ICSE11p291, author = {Kelly Androutsopoulos and David Binkley and David Clark and Nicolas Gold and Mark Harman and Kevin Lano and Zheng Li}, title = {Model Projection: Simplifying Models in Response to Restricting the Environment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {291--300}, doi = {}, year = {2011}, } |
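The simplest ingredient of projection can be sketched in a few lines: drop transitions on events the restricted environment can never produce, then prune states left unreachable. This is a simplification assuming plain (non-extended) state machines; the paper's four algorithms additionally handle the guards and updates of EFSMs.

    def project(transitions, initial, allowed_events):
        """transitions: iterable of (src, event, dst) triples; keep only
        transitions on allowed events, then prune unreachable states."""
        kept = [(s, e, d) for s, e, d in transitions if e in allowed_events]
        reachable, frontier = {initial}, [initial]
        while frontier:
            s = frontier.pop()
            for src, _, dst in kept:
                if src == s and dst not in reachable:
                    reachable.add(dst)
                    frontier.append(dst)
        return [(s, e, d) for s, e, d in kept if s in reachable]

    ts = [("idle", "start", "run"), ("run", "fail", "error"), ("run", "stop", "idle")]
    print(project(ts, "idle", {"start", "stop"}))  # the 'error' state disappears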
|
Bishop, Judith |
ICSE '11-WORKSHOPS: "First Workshop on Developing ..."
First Workshop on Developing Tools as Plug-ins (TOPI 2011)
Judith Bishop, David Notkin, and Karin K. Breitman (Microsoft Research, USA; University of Washington, USA; PUC-Rio, Brazil) Our knowledge as to how to solve software engineering problems is increasingly being encapsulated in tools. These tools are at their strongest when they operate in a pre-existing development environment that can provide integration with existing elements such as compilers, debuggers, profilers and visualizers. The first Workshop on Developing Tools as Plug-ins is a new forum in which to address research, ongoing work, ideas, concepts, and critical questions related to the engineering of software tools and plug-ins. @InProceedings{ICSE11p1230, author = {Judith Bishop and David Notkin and Karin K. Breitman}, title = {First Workshop on Developing Tools as Plug-ins (TOPI 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1230--1231}, doi = {}, year = {2011}, } |
|
Black, Sue |
ICSE '11-WORKSHOPS: "Second International Workshop ..."
Second International Workshop on Web 2.0 for Software Engineering (Web2SE 2011)
Christoph Treude, Margaret-Anne Storey, Arie van Deursen, Andrew Begel, and Sue Black (University of Victoria, Canada; Delft University of Technology, Netherlands; Microsoft Research, USA; University College London, UK) Social software is built around an "architecture of participation" where user data is aggregated as a side-effect of using Web 2.0 applications. Web 2.0 implies that processes and tools are socially open, and that content can be used in several different contexts. Web 2.0 tools and technologies support interactive information sharing, data interoperability and user centered design. For instance, wikis, blogs, tags and feeds help us organize, manage and categorize content in an informal and collaborative way. Some of these technologies have made their way into collaborative software development processes and development platforms. These processes and environments are just scratching the surface of what can be done by incorporating Web 2.0 approaches and technologies into collaborative software development. Web 2.0 opens up new opportunities for developers to form teams and collaborate, but it also comes with challenges for developers and researchers. Web2SE aims to improve our understanding of how Web 2.0, manifested in technologies such as mashups or dashboards, can change the culture of collaborative software development. @InProceedings{ICSE11p1222, author = {Christoph Treude and Margaret-Anne Storey and Arie van Deursen and Andrew Begel and Sue Black}, title = {Second International Workshop on Web 2.0 for Software Engineering (Web2SE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1222--1223}, doi = {}, year = {2011}, } |
|
Bodden, Eric |
ICSE '11: "Taming Reflection: Aiding ..."
Taming Reflection: Aiding Static Analysis in the Presence of Reflection and Custom Class Loaders
Eric Bodden, Andreas Sewe, Jan Sinschek, Hela Oueslati, and Mira Mezini (TU Darmstadt, Germany; Center for Advanced Security Research Darmstadt, Germany) Static program analyses and transformations for Java face many problems when analyzing programs that use reflection or custom class loaders: How can a static analysis know which reflective calls the program will execute? How can it get hold of classes that the program loads from remote locations or even generates on the fly? And if the analysis transforms classes, how can these classes be re-inserted into a program that uses custom class loaders? In this paper, we present TamiFlex, a tool chain that offers a partial but often effective solution to these problems. With TamiFlex, programmers can use existing static-analysis tools to produce results that are sound at least with respect to a set of recorded program runs. TamiFlex inserts runtime checks into the program that warn the user in case the program executes reflective calls that the analysis did not take into account. TamiFlex further allows programmers to re-insert offline-transformed classes into a program. We evaluate TamiFlex in two scenarios: benchmarking with the DaCapo benchmark suite and analysing large-scale interactive applications. For the latter, TamiFlex significantly improves code coverage of the static analyses, while for the former our approach even appears complete: the inserted runtime checks issue no warning. Hence, for the first time, TamiFlex enables sound static whole-program analyses on DaCapo. During this process, TamiFlex usually incurs less than 10% runtime overhead. @InProceedings{ICSE11p241, author = {Eric Bodden and Andreas Sewe and Jan Sinschek and Hela Oueslati and Mira Mezini}, title = {Taming Reflection: Aiding Static Analysis in the Presence of Reflection and Custom Class Loaders}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {241--250}, doi = {}, year = {2011}, } |
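A Python analogy of the record-and-check idea (TamiFlex itself instruments Java reflection and class loading): record every dynamically resolved target during training runs, hand the log to the static analysis, and warn at run time when a target outside the recorded set shows up. The helper and its interface are invented for illustration.

    import warnings

    RECORDED = set()  # stands in for TamiFlex's log of reflective targets

    def reflective_call(obj, method_name, *args, record=True):
        if record:
            RECORDED.add((type(obj).__name__, method_name))
        elif (type(obj).__name__, method_name) not in RECORDED:
            warnings.warn(f"unseen reflective target: {method_name}")
        return getattr(obj, method_name)(*args)

    reflective_call("abc", "upper")                 # training run: recorded
    reflective_call("abc", "lower", record=False)   # checked run: warns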
|
Bodik, Rastislav |
ICSE '11: "Angelic Debugging ..."
Angelic Debugging
Satish Chandra, Emina Torlak, Shaon Barman, and Rastislav Bodik (IBM Research, USA; UC Berkeley, USA) Software ships with known bugs because it is expensive to pinpoint and fix the bug exposed by a failing test. To reduce the cost of bug identification, we locate expressions that are likely causes of bugs and thus candidates for repair. Our symbolic method approximates an ideal approach to fixing bugs mechanically, which is to search the space of all edits to the program for one that repairs the failing test without breaking any passing test. We approximate the expensive ideal of exploring syntactic edits by instead computing the set of values whose substitution for the expression corrects the execution. We observe that an expression is a repair candidate if it can be replaced with a value that fixes a failing test and in each passing test, its value can be changed to another value without breaking the test. The latter condition makes the expression flexible in that it permits multiple values. The key observation is that the repair of a flexible expression is less likely to break a passing test. The method is called angelic debugging because the values are computed by angelically nondeterministic statements. We implemented the method on top of the Java PathFinder model checker. Our experiments with this technique show promise of its applicability in speeding up program debugging. @InProceedings{ICSE11p121, author = {Satish Chandra and Emina Torlak and Shaon Barman and Rastislav Bodik}, title = {Angelic Debugging}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {121--130}, doi = {}, year = {2011}, } |
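A toy rendering of the paper's "flexible expression" criterion, with all machinery invented for illustration: treat the suspect expression as a hole that can be filled with candidate values; it is a repair candidate if some value fixes the failing test while every passing test also tolerates at least one alternative value.

    def is_repair_candidate(run_with, failing, passing, candidates):
        """run_with(value, test) -> bool: outcome of the test when the
        suspect expression evaluates to the given value."""
        if not any(run_with(v, failing) for v in candidates):
            return False  # no value repairs the failing test
        # each passing test must tolerate at least two distinct values
        return all(sum(run_with(v, t) for v in candidates) >= 2 for t in passing)

    # Hole: the bound in a buggy comparison `x < bound`; tests probe 10 and 5.
    run = lambda bound, test: test(lambda x: x < bound)
    failing = lambda f: f(10) is False   # expects x=10 to be rejected
    passing = lambda f: f(5) is True     # expects x=5 to be accepted
    print(is_repair_candidate(run, failing, [passing], [6, 10, 11]))  # True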
|
Boehm, Barry W. |
ICSE '11-IMPACT: "Impact of Software Resource ..."
Impact of Software Resource Estimation Research on Practice: A Preliminary Report on Achievements, Synergies, and Challenges
Barry W. Boehm and Ricardo Valerdi (University of Southern California, USA; MIT, USA) This paper is a contribution to the Impact Project in the area of software resource estimation. The objective of the Impact Project has been to analyze the impact of software engineering research investments on software engineering practice. The paper begins by summarizing the motivation and context for analyzing software resource estimation; and by summarizing the study’s purpose, scope, and approach. The approach includes analyses of the literature; interviews of leading software resource estimation researchers, practitioners, and users; and value/impact surveys of estimators and users. The study concludes that research in software resource estimation has had a significant impact on the practice of software engineering, but also faces significant challenges in addressing likely future software trends. @InProceedings{ICSE11p1057, author = {Barry W. Boehm and Ricardo Valerdi}, title = {Impact of Software Resource Estimation Research on Practice: A Preliminary Report on Achievements, Synergies, and Challenges}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1057--1065}, doi = {}, year = {2011}, } |
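For readers outside the estimation community, most of the parametric models this line of research has produced share the canonical form Effort = A * Size^E, with size in KSLOC and an exponent above one capturing diseconomies of scale. The constants below are placeholders for illustration, not calibrated values from any specific model.

    # Canonical parametric effort model, with illustrative constants only.
    def effort_person_months(ksloc, a=2.94, e=1.10):
        return a * ksloc ** e

    print(round(effort_person_months(100), 1))  # ~466 PM for a 100-KSLOC system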
|
Borges, Rafael V. |
ICSE '11-NIER: "Learning to Adapt Requirements ..."
Learning to Adapt Requirements Specifications of Evolving Systems (NIER Track)
Rafael V. Borges, Artur d'Avila Garcez, Luis C. Lamb, and Bashar Nuseibeh (City University London, UK; UFRGS, Brazil; The Open University, UK; Lero, Ireland) We propose a novel framework for adapting and evolving software requirements models. The framework uses model checking and machine learning techniques for verifying properties and evolving model descriptions. The paper offers two novel contributions and a preliminary evaluation and application of the ideas presented. First, the framework is capable of coping with errors in the specification process so that performance degrades gracefully. Second, the framework can also be used to re-engineer a model from examples only, when an initial model is not available. We provide a preliminary evaluation of our framework by applying it to a Pump System case study, and integrate our prototype tool with the NuSMV model checker. We show how the tool integrates verification and evolution of abstract models, and also how it is capable of re-engineering partial models given examples from an existing system. @InProceedings{ICSE11p856, author = {Rafael V. Borges and Artur d'Avila Garcez and Luis C. Lamb and Bashar Nuseibeh}, title = {Learning to Adapt Requirements Specifications of Evolving Systems (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {856--859}, doi = {}, year = {2011}, } |
|
Bos, Jeroen van den |
ICSE '11-SEIP: "Bringing Domain-Specific Languages ..."
Bringing Domain-Specific Languages to Digital Forensics
Jeroen van den Bos and Tijs van der Storm (Netherlands Forensic Institute, Netherlands; Centrum Wiskunde en Informatica, Netherlands) Digital forensics investigations often consist of analyzing large quantities of data. The software tools used for analyzing such data are constantly evolving to cope with a multiplicity of versions and variants of data formats. This process of customization is time consuming and error prone. To improve this situation we present Derric, a domain-specific language (DSL) for declaratively specifying data structures. This way, the specification of structure is separated from data processing. The resulting architecture encourages customization and facilitates reuse. It enables faster development through a division of labour between investigators and software engineers. We have performed an initial evaluation of Derric by constructing a data recovery tool. This so-called carver has been automatically derived from a declarative description of the structure of JPEG files. We compare it to existing carvers and show it to be in the same league with respect to both recovered evidence and runtime performance. @InProceedings{ICSE11p671, author = {Jeroen van den Bos and Tijs van der Storm}, title = {Bringing Domain-Specific Languages to Digital Forensics}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {671--680}, doi = {}, year = {2011}, } |
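The separation Derric enables can be suggested in invented notation (this is not Derric's own syntax): the file-format structure lives in data, and one generic matcher interprets it, so an investigator can adjust the description without touching the recovery code. JPEG streams begin with the marker FF D8 and end with FF D9.

    # Declarative structure description, interpreted by a generic carver.
    JPEG = {"header": b"\xff\xd8", "footer": b"\xff\xd9"}

    def carve(blob, fmt):
        """Yield candidate fragments between header/footer markers."""
        start = blob.find(fmt["header"])
        while start != -1:
            end = blob.find(fmt["footer"], start + len(fmt["header"]))
            if end == -1:
                return
            yield blob[start:end + len(fmt["footer"])]
            start = blob.find(fmt["header"], end)

    data = b"junk\xff\xd8imagedata\xff\xd9junk"
    print(list(carve(data, JPEG)))  # [b'\xff\xd8imagedata\xff\xd9']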
|
Bott, Felix |
ICSE '11-NIER: "CREWW - Collaborative Requirements ..."
CREWW - Collaborative Requirements Engineering with Wii-Remotes (NIER Track)
Felix Bott, Stephan Diehl, and Rainer Lutz (University of Trier, Germany) In this paper, we present CREWW, a tool for co-located, collaborative CRC modeling and use case analysis. In CRC sessions, role play is used to involve all stakeholders when determining whether the current software model completely and consistently captures the modeled use case. In this activity it quickly becomes difficult to keep track of which class is currently active or along which path the current state was reached. CREWW was designed to alleviate these and other weaknesses of the traditional approach. @InProceedings{ICSE11p852, author = {Felix Bott and Stephan Diehl and Rainer Lutz}, title = {CREWW - Collaborative Requirements Engineering with Wii-Remotes (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {852--855}, doi = {}, year = {2011}, } |
|
Botterweck, Goetz |
ICSE '11-WORKSHOPS: "Second International Workshop ..."
Second International Workshop on Product Line Approaches in Software Engineering (PLEASE 2011)
Julia Rubin, Goetz Botterweck, Andreas Pleuss, and David M. Weiss (IBM Research Haifa, Israel; Lero, Ireland; University of Limerick, Ireland; Iowa State University, USA) The PLEASE workshop series focuses on exploring the present and the future of Software Product Line Engineering techniques. The main goal of PLEASE 2011 is to bring together industrial practitioners and software product line researchers in order to couple real-life industrial problems with concrete solutions developed by the community. We plan for an interactive workshop, where participants can apply their expertise to current industrial problems, while those who face challenges in the area of product line engineering can benefit from the suggested solutions. We also intend to establish ongoing, long-lasting relationships between industrial and research participants to the mutual benefit of both. The second edition of PLEASE is held in conjunction with the 33rd International Conference in Software Engineering (May 21-28, 2011, Honolulu, Hawaii). @InProceedings{ICSE11p1204, author = {Julia Rubin and Goetz Botterweck and Andreas Pleuss and David M. Weiss}, title = {Second International Workshop on Product Line Approaches in Software Engineering (PLEASE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1204--1205}, doi = {}, year = {2011}, } |
|
Braberman, Víctor |
ICSE '11: "Program Abstractions for Behaviour ..."
Program Abstractions for Behaviour Validation
Guido de Caso, Víctor Braberman, Diego Garbervetsky, and Sebastián Uchitel (Universidad de Buenos Aires, Argentina; Imperial College London, UK) @InProceedings{ICSE11p381, author = {Guido de Caso and Víctor Braberman and Diego Garbervetsky and Sebastián Uchitel}, title = {Program Abstractions for Behaviour Validation}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {381--390}, doi = {}, year = {2011}, } ICSE '11: "Synthesis of Live Behaviour ..." Synthesis of Live Behaviour Models for Fallible Domains Nicolás D'Ippolito, Víctor Braberman, Nir Piterman, and Sebastián Uchitel (Imperial College London, UK; Universidad de Buenos Aires, Argentina; University of Leicester, UK) We revisit synthesis of live controllers for event-based operational models. We remove one aspect of an idealised problem domain by allowing failures of controller actions to be integrated into the environment model. Classical treatment of failures through strong fairness leads to a very high computational complexity and may be insufficient for many interesting cases. We identify a realistic stronger fairness condition on the behaviour of failures. We show how to construct controllers satisfying liveness specifications under these fairness conditions. The resulting controllers exhibit the only possible behaviour in the face of the given topology of failures: they keep retrying and never give up. We then identify some well-structured conditions on the environment. These conditions ensure that the resulting controller will be eager to satisfy its goals. Furthermore, for environments that satisfy these conditions and have an underlying probabilistic behaviour, the measure of traces that satisfy our fairness condition is 1, giving a characterisation of the kind of domains in which the approach is applicable. @InProceedings{ICSE11p211, author = {Nicolás D'Ippolito and Víctor Braberman and Nir Piterman and Sebastián Uchitel}, title = {Synthesis of Live Behaviour Models for Fallible Domains}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {211--220}, doi = {}, year = {2011}, } |
|
Bradshaw, Gary |
ICSE '11-NIER: "Information Foraging as a ..."
Information Foraging as a Foundation for Code Navigation (NIER Track)
Nan Niu, Anas Mahmoud, and Gary Bradshaw (Mississippi State University, USA) A major software engineering challenge is to understand the fundamental mechanisms that underlie the developer’s code navigation behavior. We propose a novel and unified theory based on the premise that we can study developer’s information seeking strategies in light of the foraging principles that evolved to help our animal ancestors to find food. Our preliminary study on code navigation graphs suggests that the tenets of information foraging provide valuable insight into software maintenance. Our research opens an avenue toward the development of ecologically valid tool support to augment developers’ code search skills. @InProceedings{ICSE11p816, author = {Nan Niu and Anas Mahmoud and Gary Bradshaw}, title = {Information Foraging as a Foundation for Code Navigation (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {816--819}, doi = {}, year = {2011}, } |
|
Breitman, Karin K. |
ICSE '11-WORKSHOPS: "First Workshop on Developing ..."
First Workshop on Developing Tools as Plug-ins (TOPI 2011)
Judith Bishop, David Notkin, and Karin K. Breitman (Microsoft Research, USA; University of Washington, USA; PUC-Rio, Brazil) Our knowledge as to how to solve software engineering problems is increasingly being encapsulated in tools. These tools are at their strongest when they operate in a pre-existing development environment that can provide integration with existing elements such as compilers, debuggers, profilers and visualizers. The first Workshop on Developing Tools as Plug-ins is a new forum in which to address research, ongoing work, ideas, concepts, and critical questions related to the engineering of software tools and plug-ins. @InProceedings{ICSE11p1230, author = {Judith Bishop and David Notkin and Karin K. Breitman}, title = {First Workshop on Developing Tools as Plug-ins (TOPI 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1230--1231}, doi = {}, year = {2011}, } |
|
Briand, Lionel C. |
ICSE '11: "A Practical Guide for Using ..."
A Practical Guide for Using Statistical Tests to Assess Randomized Algorithms in Software Engineering
Andrea Arcuri and Lionel C. Briand (Simula Research Laboratory, Norway) Randomized algorithms have been used to successfully address many different types of software engineering problems. This type of algorithm employs a degree of randomness as part of its logic. Randomized algorithms are useful for difficult problems where a precise solution cannot be derived in a deterministic way within reasonable time. However, randomized algorithms produce different results on every run when applied to the same problem instance. It is hence important to assess the effectiveness of randomized algorithms by collecting data from a large enough number of runs. The use of rigorous statistical tests is then essential to provide support to the conclusions derived by analyzing such data. In this paper, we provide a systematic review of the use of randomized algorithms in selected software engineering venues in 2009. Its goal is not to perform a complete survey but to get a representative snapshot of current practice in software engineering research. We show that randomized algorithms are used in a significant percentage of papers but that, in most cases, randomness is not properly accounted for. This casts doubts on the validity of most empirical results assessing randomized algorithms. There are numerous statistical tests, based on different assumptions, and it is not always clear when and how to use these tests. We hence provide practical guidelines to support empirical research on randomized algorithms in software engineering. @InProceedings{ICSE11p1, author = {Andrea Arcuri and Lionel C. Briand}, title = {A Practical Guide for Using Statistical Tests to Assess Randomized Algorithms in Software Engineering}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1--10}, doi = {}, year = {2011}, } ICSE '11-SEIP: "Enabling the Runtime Assertion ..." Enabling the Runtime Assertion Checking of Concurrent Contracts for the Java Modeling Language Wladimir Araujo, Lionel C. Briand, and Yvan Labiche (Juniper Networks, Canada; Simula Research Laboratory, Norway; University of Oslo, Norway; Carleton University, Canada) Though there exists ample support for Design by Contract (DbC) for sequential programs, applying DbC to concurrent programs presents several challenges. In previous work, we extended the Java Modeling Language (JML) with constructs to specify concurrent contracts for Java programs. We present a runtime assertion checker (RAC) for the expanded JML capable of verifying assertions for concurrent Java programs. We systematically evaluate the validity of system testing results obtained via runtime assertion checking using actual concurrent and functional faults on a highly concurrent industrial system from the telecommunications domain. @InProceedings{ICSE11p786, author = {Wladimir Araujo and Lionel C. Briand and Yvan Labiche}, title = {Enabling the Runtime Assertion Checking of Concurrent Contracts for the Java Modeling Language}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {786--795}, doi = {}, year = {2011}, } |
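In the spirit of these guidelines, a minimal analysis of two randomized algorithms compares many runs with a non-parametric test plus an effect size rather than eyeballing two means. The sketch below (with made-up coverage numbers) uses the Mann-Whitney U test and the Vargha-Delaney A12 statistic:

    from scipy.stats import mannwhitneyu

    def a12(x, y):
        """Vargha-Delaney A12: probability that a run of X beats a run of Y."""
        wins = sum((xi > yi) + 0.5 * (xi == yi) for xi in x for yi in y)
        return wins / (len(x) * len(y))

    runs_a = [0.91, 0.87, 0.93, 0.90, 0.88]   # e.g., branch coverage per run
    runs_b = [0.85, 0.86, 0.84, 0.89, 0.83]
    stat, p = mannwhitneyu(runs_a, runs_b, alternative="two-sided")
    print(f"p={p:.3f}, A12={a12(runs_a, runs_b):.2f}")

In practice one would use far more than five runs per algorithm; the point is the pairing of a significance test with an effect size.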
|
Brown, Nanette |
ICSE '11-WORKSHOPS: "Second International Workshop ..."
Second International Workshop on Managing Technical Debt (MTD 2011)
Ipek Ozkaya, Philippe Kruchten, Robert L. Nord, and Nanette Brown (SEI/CMU, USA; University of British Columbia, Canada) The technical debt metaphor is gaining significant traction in the software development community as a way to understand and communicate issues of intrinsic quality, value, and cost. The idea is that developers sometimes accept compromises in a system in one dimension (e.g., modularity) to meet an urgent demand in some other dimension (e.g., a deadline), and that such compromises incur a “debt” on which “interest” has to be paid and which should be repaid at some point for the long-term health of the project. Little is known about technical debt, beyond feelings and opinions. The software engineering research community has an opportunity to study this phenomenon and improve the way it is handled. We can offer software engineers a foundation for managing such trade-offs based on models of their economic impacts. The goal of this second workshop is to discuss managing technical debt as a part of the research agenda for the software engineering field. @InProceedings{ICSE11p1212, author = {Ipek Ozkaya and Philippe Kruchten and Robert L. Nord and Nanette Brown}, title = {Second International Workshop on Managing Technical Debt (MTD 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1212--1213}, doi = {}, year = {2011}, } |
|
Bull, Christopher |
ICSE '11-NIER: "Digitally Annexing Desk Space ..."
Digitally Annexing Desk Space for Software Development (NIER Track)
John Hardy, Christopher Bull, Gerald Kotonya, and Jon Whittle (Lancaster University, UK) Software engineering is a team activity yet the programmer’s key tool, the IDE, is still largely that of a soloist. This paper describes the vision, implementation and initial evaluation of CoffeeTable – a fully featured research prototype resulting from our reflections on the software design process. CoffeeTable exchanges the traditional IDE for one built around a shared interactive desk. The proposed solution encourages smooth transitions between agile and traditional modes of working whilst helping to create a shared vision and common reference frame – key to sustaining a good design. This paper also presents early results from the evaluation of CoffeeTable and offers some insights from the lessons learned. In particular, it highlights the role of developer tools and the software constructions that are shaped by them. @InProceedings{ICSE11p812, author = {John Hardy and Christopher Bull and Gerald Kotonya and Jon Whittle}, title = {Digitally Annexing Desk Space for Software Development (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {812--815}, doi = {}, year = {2011}, } |
|
Bultan, Tevfik |
ICSE '11: "Patching Vulnerabilities with ..."
Patching Vulnerabilities with Sanitization Synthesis
Fang Yu, Muath Alkhalaf, and Tevfik Bultan (National Chengchi University, Taiwan; UC Santa Barbara, USA) We present automata-based static string analysis techniques that automatically generate sanitization statements for patching vulnerable web applications. Our approach consists of three phases: Given an attack pattern we first conduct a vulnerability analysis to identify if strings that match the attack pattern can reach the security-sensitive functions. Next, we compute vulnerability signatures that characterize all input strings that can exploit the discovered vulnerability. Given the vulnerability signatures, we then construct sanitization statements that 1) check if a given input matches the vulnerability signature and 2) modify the input in a minimal way so that the modified input does not match the vulnerability signature. Our approach is capable of generating relational vulnerability signatures (and corresponding sanitization statements) for vulnerabilities that are due to more than one input. @InProceedings{ICSE11p251, author = {Fang Yu and Muath Alkhalaf and Tevfik Bultan}, title = {Patching Vulnerabilities with Sanitization Synthesis}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {251--260}, doi = {}, year = {2011}, } |
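The shape of the synthesized guard-and-sanitize code can be suggested with a toy example in which a regular expression stands in for the automaton-derived vulnerability signature; the pattern and inputs are invented.

    import re

    SIGNATURE = re.compile(r"<\s*script", re.IGNORECASE)  # toy XSS signature

    def sanitize(user_input):
        # 1) check whether the input matches the vulnerability signature
        if not SIGNATURE.search(user_input):
            return user_input
        # 2) modify the input so that it no longer matches the signature
        return SIGNATURE.sub("", user_input)

    print(sanitize("hello"))                      # unchanged
    print(sanitize("<script>alert(1)</script>"))  # matching prefix removed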
|
Cadar, Cristian |
ICSE '11-IMPACT: "Symbolic Execution for Software ..."
Symbolic Execution for Software Testing in Practice -- Preliminary Assessment
Cristian Cadar, Patrice Godefroid, Sarfraz Khurshid, Corina S. Păsăreanu, Koushik Sen, Nikolai Tillmann, and Willem Visser (Imperial College London, UK; Microsoft Research, USA; University of Texas at Austin, USA; CMU, USA; NASA Ames Research Center, USA; UC Berkeley, USA; Stellenbosch University, South Africa) We present results for the “Impact Project Focus Area” on the topic of symbolic execution as used in software testing. Symbolic execution is a program analysis technique introduced in the 70s that has received renewed interest in recent years, due to algorithmic advances and increased availability of computational power and constraint solving technology. We review classical symbolic execution and some modern extensions such as generalized symbolic execution and dynamic test generation. We also give a preliminary assessment of the use in academia, research labs, and industry. @InProceedings{ICSE11p1066, author = {Cristian Cadar and Patrice Godefroid and Sarfraz Khurshid and Corina S. Păsăreanu and Koushik Sen and Nikolai Tillmann and Willem Visser}, title = {Symbolic Execution for Software Testing in Practice -- Preliminary Assessment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1066--1071}, doi = {}, year = {2011}, } |
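The classic textbook demonstration of the technique fits in a few lines: treat inputs as symbols, collect the branch conditions along a path, and ask a constraint solver for a concrete input that drives execution down that path. The sketch below uses the Z3 solver's Python bindings on an invented two-branch program.

    from z3 import Int, Solver, sat

    # Program under test:  if (2 * x == y) { if (y > 10) BUG(); }
    x, y = Int("x"), Int("y")
    path_to_bug = [2 * x == y, y > 10]   # path condition for reaching BUG()

    s = Solver()
    s.add(*path_to_bug)
    if s.check() == sat:
        m = s.model()
        print("input reaching BUG():", m[x], m[y])  # e.g., x=6, y=12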
|
Cai, Dongxiang |
ICSE '11: "An Empirical Investigation ..."
An Empirical Investigation into the Role of API-Level Refactorings during Software Evolution
Miryung Kim, Dongxiang Cai, and Sunghun Kim (University of Texas at Austin, USA; Hong Kong University of Science and Technology, China) It is widely believed that refactoring improves software quality and programmer productivity by making it easier to maintain and understand software systems. However, the role of refactorings has not been systematically investigated using fine-grained evolution history. We quantitatively and qualitatively studied API-level refactorings and bug fixes in three large open source projects, totaling 26523 revisions of evolution. The study found several surprising results: One, there is an increase in the number of bug fixes after API-level refactorings. Two, the time taken to fix bugs is shorter after API-level refactorings than before. Three, a large number of refactoring revisions include bug fixes at the same time or are related to later bug fix revisions. Four, API-level refactorings occur more frequently before than after major software releases. These results call for re-thinking refactoring’s true benefits. Furthermore, frequent floss refactoring mistakes observed in this study call for new software engineering tools to support safe application of refactoring and behavior modifying edits together. @InProceedings{ICSE11p151, author = {Miryung Kim and Dongxiang Cai and Sunghun Kim}, title = {An Empirical Investigation into the Role of API-Level Refactorings during Software Evolution}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {151--160}, doi = {}, year = {2011}, } |
|
Cai, Yuanfang |
ICSE '11: "Detecting Software Modularity ..."
Detecting Software Modularity Violations
Sunny Wong, Yuanfang Cai, Miryung Kim, and Michael Dalton (Drexel University, USA; University of Texas at Austin, USA) This paper presents Clio, an approach that detects modularity violations, which can cause software defects, modularity decay, or expensive refactorings. Clio computes the discrepancies between how components should change together based on the modular structure, and how components actually change together as revealed in version history. We evaluated Clio using 15 releases of Hadoop Common and 10 releases of Eclipse JDT. The results show that hundreds of violations identified using Clio were indeed recognized as design problems or refactored by the developers in later versions. The identified violations exhibit multiple symptoms of poor design, some of which are not easily detectable using existing approaches. @InProceedings{ICSE11p411, author = {Sunny Wong and Yuanfang Cai and Miryung Kim and Michael Dalton}, title = {Detecting Software Modularity Violations}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {411--420}, doi = {}, year = {2011}, } |
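A minimal sketch of the co-change comparison at the heart of this kind of analysis, on invented data: file pairs that repeatedly change together in version history but are not expected to co-change under the modular structure are flagged. Clio's actual derivation of expected co-change from the modular structure is more sophisticated than the hand-written set used here.

```python
from itertools import combinations
from collections import Counter

# Version history: each commit is the set of files it changed (toy data).
commits = [
    {"Parser.java", "Lexer.java"},
    {"Parser.java", "Cache.java"},
    {"Parser.java", "Cache.java"},
    {"Cache.java"},
]

# Co-change pairs implied by the modular structure (hand-written here).
expected = {frozenset({"Parser.java", "Lexer.java"})}

# How components actually change together, mined from the history.
actual = Counter(
    frozenset(pair)
    for changed in commits
    for pair in combinations(sorted(changed), 2)
)

# Discrepancy: frequent co-change that the structure does not predict.
for pair, n in actual.items():
    if n >= 2 and pair not in expected:
        print("possible modularity violation:", set(pair), "co-changed", n, "times")
```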
|
Callery, Matthew |
ICSE '11-NIER: "Blending Freeform and Managed ..."
Blending Freeform and Managed Information in Tables (NIER Track)
Nicolas Mangano, Harold Ossher, Ian Simmonds, Matthew Callery, Michael Desmond, and Sophia Krasikov (UC Irvine, USA; IBM Research Watson, USA) Tables are an important tool used by business analysts engaged in early requirements activities (in fact it is safe to say that tables appeal to many other types of user, in a variety of activities and domains). Business analysts typically use the tables provided by office tools. These tables offer great flexibility, but no underlying model, and hence no consistency management, multiple views or other advantages familiar to the users of modeling tools. Modeling tools, however, are usually too rigid for business analysts. In this paper we present a flexible modeling approach to tables, which combines the advantages of both office and modeling tools. Freeform information can co-exist with information managed by an underlying model, and an incremental formalization approach allows each item of information to transition fluidly between freeform and managed. As the model evolves, it is used to guide the user in the process of formalizing any remaining freeform information. The model therefore helps users without restricting them. Early feedback is described, and the approach is analyzed briefly in terms of cognitive dimensions. @InProceedings{ICSE11p840, author = {Nicolas Mangano and Harold Ossher and Ian Simmonds and Matthew Callery and Michael Desmond and Sophia Krasikov}, title = {Blending Freeform and Managed Information in Tables (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {840--843}, doi = {}, year = {2011}, } |
|
Carley, Kathleen M. |
ICSE '11-SEIP: "SORASCS: A Case Study in SOA-based ..."
SORASCS: A Case Study in SOA-based Platform Design for Socio-Cultural Analysis
Bradley Schmerl, David Garlan, Vishal Dwivedi, Michael W. Bigrigg, and Kathleen M. Carley (CMU, USA) An increasingly important class of software-based systems is platforms that permit integration of third-party components, services, and tools. Service-Oriented Architecture (SOA) is one such platform that has been successful in providing integration and distribution in the business domain, and could be effective in other domains (e.g., scientific computing, healthcare, and complex decision making). In this paper, we discuss our application of SOA to provide an integration platform for socio-cultural analysis, a domain that, through models, tries to understand, analyze and predict relationships in large complex social systems. In developing this platform, called SORASCS, we had to overcome issues we believe are generally applicable to any application of SOA within a domain that involves technically naïve users and seeks to establish a sustainable software ecosystem based on a common integration platform. We discuss these issues, the lessons learned about the kinds of problems that occur, and pathways toward a solution. @InProceedings{ICSE11p643, author = {Bradley Schmerl and David Garlan and Vishal Dwivedi and Michael W. Bigrigg and Kathleen M. Carley}, title = {SORASCS: A Case Study in SOA-based Platform Design for Socio-Cultural Analysis}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {643--652}, doi = {}, year = {2011}, } |
|
Carro, Manuel |
ICSE '11-WORKSHOPS: "Third International Workshop ..."
Third International Workshop on Principles of Engineering Service-Oriented Systems (PESOS 2011)
Manuel Carro, Dimka Karastoyanova, Grace A. Lewis, and Anna Liu (Universidad Politécnica de Madrid, Spain; University of Stuttgart, Germany; CMU, USA; NICTA, Australia) Service-oriented systems have attracted great interest from industry and research communities worldwide. Service integrators, developers, and providers are collaborating to address the various challenges in the field. PESOS 2011 is a forum for all these communities to present and discuss a wide range of topics related to service-oriented systems. The goal of PESOS is to bring together researchers from academia and industry, as well as practitioners working in the areas of software engineering and service-oriented systems to discuss research challenges, recent developments, novel applications, as well as methods, techniques, experiences, and tools to support the engineering of service-oriented systems. @InProceedings{ICSE11p1218, author = {Manuel Carro and Dimka Karastoyanova and Grace A. Lewis and Anna Liu}, title = {Third International Workshop on Principles of Engineering Service-Oriented Systems (PESOS 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1218--1219}, doi = {}, year = {2011}, } |
|
Carver, Jeffrey C. |
ICSE '11-WORKSHOPS: "Fourth International Workshop ..."
Fourth International Workshop on Software Engineering for Computational Science and Engineering (SE-CSE 2011)
Jeffrey C. Carver, Roscoe Bartlett, Ian Gorton, Lorin Hochstein, Diane Kelly, and Judith Segal (University of Alabama, USA; Sandia National Laboratories, USA; Pacific Northwest National Laboratory, USA; USC-ISI, USA; Royal Military College, Canada; The Open University, UK) Computational Science and Engineering (CSE) software supports a wide variety of domains including nuclear physics, crash simulation, satellite data processing, fluid dynamics, climate modeling, bioinformatics, and vehicle development. The increase in the importance of CSE software motivates the need to identify and understand appropriate software engineering (SE) practices for CSE. Because of the uniqueness of CSE software development, existing SE tools and techniques developed for the business/IT community are often not efficient or effective. Appropriate SE solutions must account for the salient characteristics of the CSE development environment. This situation creates an opportunity for members of the SE community to interact with members of the CSE community to address this need. This workshop facilitates that collaboration by bringing together members of the SE community and the CSE community to share perspectives and present findings from research and practice relevant to CSE software. A significant portion of the workshop is devoted to focused interaction among the participants with the goal of generating a research agenda to improve tools, techniques, and experimental methods for studying CSE software engineering. @InProceedings{ICSE11p1226, author = {Jeffrey C. Carver and Roscoe Bartlett and Ian Gorton and Lorin Hochstein and Diane Kelly and Judith Segal}, title = {Fourth International Workshop on Software Engineering for Computational Science and Engineering (SE-CSE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1226--1227}, doi = {}, year = {2011}, } |
|
Cassou, Damien |
ICSE '11: "Leveraging Software Architectures ..."
Leveraging Software Architectures to Guide and Verify the Development of Sense/Compute/Control Applications
Damien Cassou, Emilie Balland, Charles Consel, and Julia Lawall (University of Bordeaux, France; INRIA, France; DIKU, Denmark; LIP6, France) A software architecture describes the structure of a computing system by specifying software components and their interactions. Mapping a software architecture to an implementation is a well known challenge. A key element of this mapping is the architecture’s description of the data and control-flow interactions between components. The characterization of these interactions can be rather abstract or very concrete, providing more or less implementation guidance, programming support, and static verification. In this paper, we explore one point in the design space between abstract and concrete component interaction specifications. We introduce a notion of interaction contract that expresses the set of allowed interactions between components, describing both data and control-flow constraints. This declaration is part of the architecture description, allows generation of extensive programming support, and enables various verifications. We instantiate our approach in an architecture description language for Sense/Compute/Control applications, and describe associated compilation and verification strategies. @InProceedings{ICSE11p431, author = {Damien Cassou and Emilie Balland and Charles Consel and Julia Lawall}, title = {Leveraging Software Architectures to Guide and Verify the Development of Sense/Compute/Control Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {431--440}, doi = {}, year = {2011}, } |
|
Castro-Herrera, Carlos |
ICSE '11: "On-demand Feature Recommendations ..."
On-demand Feature Recommendations Derived from Mining Public Product Descriptions
Horatiu Dumitru, Marek Gibiec, Negar Hariri, Jane Cleland-Huang, Bamshad Mobasher, Carlos Castro-Herrera, and Mehdi Mirakhorli (DePaul University, USA) We present a recommender system that models and recommends product features for a given domain. Our approach mines product descriptions from publicly available online specifications, utilizes text mining and a novel incremental diffusive clustering algorithm to discover domain-specific features, generates a probabilistic feature model that represents commonalities, variants, and cross-category features, and then uses association rule mining and the k-Nearest Neighbor machine learning strategy to generate product-specific feature recommendations. Our recommender system supports the relatively labor-intensive task of domain analysis, potentially increasing opportunities for re-use, reducing time-to-market, and delivering more competitive software products. The approach is empirically validated against 20 different product categories using thousands of product descriptions mined from a repository of free software applications. @InProceedings{ICSE11p181, author = {Horatiu Dumitru and Marek Gibiec and Negar Hariri and Jane Cleland-Huang and Bamshad Mobasher and Carlos Castro-Herrera and Mehdi Mirakhorli}, title = {On-demand Feature Recommendations Derived from Mining Public Product Descriptions}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {181--190}, doi = {}, year = {2011}, } |
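The final recommendation step can be pictured with a small k-nearest-neighbour sketch over a binary product-by-feature matrix. The product and feature names are invented, Jaccard similarity is our assumption, and the pipeline's mining, clustering, and association-rule stages are omitted entirely.

```python
# Toy product-by-feature data standing in for the mined matrix.
products = {
    "editorA": {"syntax-highlight", "spell-check", "tabs"},
    "editorB": {"syntax-highlight", "tabs", "macros"},
    "editorC": {"spell-check", "macros"},
}

def jaccard(a, b):
    """Similarity between two binary feature sets."""
    return len(a & b) / len(a | b)

def recommend(partial, k=2):
    """k-nearest-neighbour step: pool the features of the k most
    similar products and suggest those the profile lacks."""
    neighbours = sorted(products.values(),
                        key=lambda f: jaccard(partial, f), reverse=True)[:k]
    return sorted(set().union(*neighbours) - partial)

print(recommend({"syntax-highlight"}))  # ['macros', 'spell-check', 'tabs']
```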
|
Cataldo, Marcelo |
ICSE '11: "Configuring Global Software ..."
Configuring Global Software Teams: A Multi-Company Analysis of Project Productivity, Quality, and Profits
Narayan Ramasubbu, Marcelo Cataldo, Rajesh Krishna Balan, and James D. Herbsleb (Singapore Management University, Singapore; CMU, USA) In this paper, we examined the impact of project-level configurational choices of globally distributed software teams on project productivity, quality, and profits. Our analysis used data from 362 projects of four different firms. These projects spanned a wide range of programming languages, application domains, process choices, and development sites spread over 15 countries and 5 continents. Our analysis revealed fundamental tradeoffs in choosing configurations that are optimized for productivity, quality, and/or profits. In particular, achieving higher levels of productivity and quality requires diametrically opposed configurational choices. In addition, creating imbalances in the expertise and personnel distribution of project teams significantly helps increase profit margins. However, a profit-oriented imbalance could also significantly affect productivity and/or quality outcomes. Analyzing these complex tradeoffs, we provide actionable managerial insights that can help software firms and their clients choose configurations that achieve desired project outcomes in globally distributed software development. @InProceedings{ICSE11p261, author = {Narayan Ramasubbu and Marcelo Cataldo and Rajesh Krishna Balan and James D. Herbsleb}, title = {Configuring Global Software Teams: A Multi-Company Analysis of Project Productivity, Quality, and Profits}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {261--270}, doi = {}, year = {2011}, }
ICSE '11: "Factors Leading to Integration ..."
Factors Leading to Integration Failures in Global Feature-Oriented Development: An Empirical Analysis
Marcelo Cataldo and James D. Herbsleb (CMU, USA) Feature-driven software development is a novel approach that has grown in popularity over the past decade. Researchers and practitioners alike have argued that numerous benefits could be garnered from adopting a feature-driven development approach. However, those persuasive arguments have not been matched with supporting empirical evidence. Moreover, developing software systems around features involves new technical and organizational elements that could have significant implications for outcomes such as software quality. This paper presents an empirical analysis of a large-scale project that implemented 1195 features in a software system. We examined the impact that technical attributes of product features, attributes of the feature teams and cross-feature interactions have on software integration failures. Our results show that technical factors such as the nature of component dependencies and organizational factors such as the geographic dispersion of the feature teams and the role of the feature owners had complementary impact, suggesting their independent and important role in terms of software quality. Furthermore, our analyses revealed that cross-feature interactions, measured as the number of architectural dependencies between two product features, are a major driver of integration failures. The research and practical implications of our results are discussed. @InProceedings{ICSE11p161, author = {Marcelo Cataldo and James D. Herbsleb}, title = {Factors Leading to Integration Failures in Global Feature-Oriented Development: An Empirical Analysis}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {161--170}, doi = {}, year = {2011}, }
ICSE '11-WORKSHOPS: "Workshop on Cooperative and ..."
Workshop on Cooperative and Human Aspects of Software Engineering (CHASE 2011)
Marcelo Cataldo, Cleidson de Souza, Yvonne Dittrich, Rashina Hoda, and Helen Sharp (Robert Bosch Research, USA; IBM Research, Brazil; IT University of Copenhagen, Denmark; Victoria University of Wellington, New Zealand; The Open University, UK) Software is created by people for people working in varied environments, under various conditions. Thus understanding cooperative and human aspects of software development is crucial to comprehend how methods and tools are used, and thereby improve the creation and maintenance of software. Over the years, both researchers and practitioners have recognized the need to study and understand these aspects. Despite recognizing this, researchers in cooperative and human aspects have no clear place to meet and are dispersed in different research conferences and areas. The goal of this workshop is to provide a forum for discussing high-quality research on human and cooperative aspects of software engineering. We aim at providing both a meeting place for the growing community and the possibility for researchers interested in joining the field to present their work in progress and get an overview of the field. @InProceedings{ICSE11p1188, author = {Marcelo Cataldo and Cleidson de Souza and Yvonne Dittrich and Rashina Hoda and Helen Sharp}, title = {Workshop on Cooperative and Human Aspects of Software Engineering (CHASE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1188--1189}, doi = {}, year = {2011}, } |
|
Cazzola, Walter |
ICSE '11-DEMOS: "JavAdaptor: Unrestricted Dynamic ..."
JavAdaptor: Unrestricted Dynamic Software Updates for Java
Mario Pukall, Alexander Grebhahn, Reimar Schröter, Christian Kästner, Walter Cazzola, and Sebastian Götz (University of Magdeburg, Germany; Philipps-University Marburg, Germany; University of Milano, Italy; University of Dresden, Germany) Dynamic software updates (DSU) are one of the top-most features requested by developers and users. As a result, DSU is already standard in many dynamic programming languages. But it is not standard in statically typed languages such as Java. Even though it ranks third on Oracle’s current request for enhancement (RFE) list, DSU support in Java is very limited. Therefore, over the years many different DSU approaches for Java have been proposed. Nevertheless, DSU for Java is still an active field of research, because most of the existing approaches are too restrictive. Some of the approaches have shortcomings either in terms of flexibility or performance, whereas others are platform dependent or dictate the program’s architecture. With JavAdaptor, we present the first DSU approach which comes without those restrictions. We will demonstrate JavAdaptor based on the well-known arcade game Snake which we will update stepwise at runtime. @InProceedings{ICSE11p989, author = {Mario Pukall and Alexander Grebhahn and Reimar Schröter and Christian Kästner and Walter Cazzola and Sebastian Götz}, title = {JavAdaptor: Unrestricted Dynamic Software Updates for Java}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {989--991}, doi = {}, year = {2011}, } |
|
Chandra, Satish |
ICSE '11: "Angelic Debugging ..."
Angelic Debugging
Satish Chandra, Emina Torlak, Shaon Barman, and Rastislav Bodik (IBM Research, USA; UC Berkeley, USA) Software ships with known bugs because it is expensive to pinpoint and fix the bug exposed by a failing test. To reduce the cost of bug identification, we locate expressions that are likely causes of bugs and thus candidates for repair. Our symbolic method approximates an ideal approach to fixing bugs mechanically, which is to search the space of all edits to the program for one that repairs the failing test without breaking any passing test. We approximate the expensive ideal of exploring syntactic edits by instead computing the set of values whose substitution for the expression corrects the execution. We observe that an expression is a repair candidate if it can be replaced with a value that fixes a failing test and in each passing test, its value can be changed to another value without breaking the test. The latter condition makes the expression flexible in that it permits multiple values. The key observation is that the repair of a flexible expression is less likely to break a passing test. The method is called angelic debugging because the values are computed by angelically nondeterministic statements. We implemented the method on top of the Java PathFinder model checker. Our experiments with this technique show promise of its applicability in speeding up program debugging. @InProceedings{ICSE11p121, author = {Satish Chandra and Emina Torlak and Shaon Barman and Rastislav Bodik}, title = {Angelic Debugging}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {121--130}, doi = {}, year = {2011}, } |
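A much-simplified sketch of the underlying search: treat one expression as the suspect, substitute candidate replacements for it, and keep those that fix the failing test without breaking the passing ones. The real method computes sets of angelic values symbolically on top of Java PathFinder rather than enumerating hand-written syntactic candidates as done here; the `clamp` program and its tests are invented.

```python
# The suspect expression is the guard in `clamp`; we substitute
# candidate predicates for it and re-run the whole test suite.
def clamp(x, lo, hi, suspect):
    if suspect(x, hi):          # original buggy guard was: x > hi + 1
        return hi
    return lo if x < lo else x

failing = [((11, 0, 10), 10)]                       # clamp(11,0,10) should be 10
passing = [((5, 0, 10), 5), ((10, 0, 10), 10), ((-3, 0, 10), 0)]

candidates = {
    "x > hi + 1": lambda x, hi: x > hi + 1,         # the original guard
    "x > hi":     lambda x, hi: x > hi,
    "x >= hi":    lambda x, hi: x >= hi,
}
# Several candidates may survive; like the paper's repair candidates,
# they are reported to the programmer rather than applied blindly.
for name, pred in candidates.items():
    if all(clamp(*args, suspect=pred) == want
           for args, want in failing + passing):
        print("repair candidate:", name)
```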
|
Chatzigeorgiou, Alexander |
ICSE '11-DEMOS: "JDeodorant: Identification ..."
JDeodorant: Identification and Application of Extract Class Refactorings
Marios Fokaefs, Nikolaos Tsantalis, Eleni Stroulia, and Alexander Chatzigeorgiou (University of Alberta, Canada; University of Macedonia, Greece) Evolutionary changes in object-oriented systems can result in large, complex classes, known as “God Classes”. In this paper, we present a tool, developed as part of the JDeodorant Eclipse plugin, that can recognize opportunities for extracting cohesive classes from “God Classes” and automatically apply the refactoring chosen by the developer. @InProceedings{ICSE11p1037, author = {Marios Fokaefs and Nikolaos Tsantalis and Eleni Stroulia and Alexander Chatzigeorgiou}, title = {JDeodorant: Identification and Application of Extract Class Refactorings}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1037--1039}, doi = {}, year = {2011}, } |
|
Chen, Baiqiang |
ICSE '11-NIER: "Tuple Density: A New Metric ..."
Tuple Density: A New Metric for Combinatorial Test Suites (NIER Track)
Baiqiang Chen and Jian Zhang (Chinese Academy of Sciences, China) We propose tuple density as a new metric for combinatorial test suites. It can be used to distinguish one test suite from another even if they have the same size and strength. Moreover, we also illustrate how a given test suite can be optimized based on this metric. The initial experimental results are encouraging. @InProceedings{ICSE11p876, author = {Baiqiang Chen and Jian Zhang}, title = {Tuple Density: A New Metric for Combinatorial Test Suites (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {876--879}, doi = {}, year = {2011}, } |
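The abstract does not spell out the metric's definition, so the sketch below assumes one plausible reading, distinct 2-way tuples covered per test, purely to make the idea concrete; the paper's exact definition may differ.

```python
from itertools import combinations

# A pairwise test suite over three parameters, each with two values.
suite = [
    ("a1", "b1", "c1"),
    ("a1", "b2", "c2"),
    ("a2", "b1", "c2"),
    ("a2", "b2", "c1"),
]

def covered_pairs(tests):
    """Collect every distinct (parameter, value) pair combination covered."""
    pairs = set()
    for t in tests:
        for (i, vi), (j, vj) in combinations(enumerate(t), 2):
            pairs.add(((i, vi), (j, vj)))
    return pairs

# Assumed reading of tuple density: distinct covered tuples per test.
density = len(covered_pairs(suite)) / len(suite)
print(len(covered_pairs(suite)), "distinct pairs; density:", density)  # 12, 3.0
```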
|
Chen, Feng |
ICSE '11: "Mining Parametric Specifications ..."
Mining Parametric Specifications
Choonghwan Lee, Feng Chen, and Grigore Roşu (University of Illinois at Urbana-Champaign, USA) Specifications carrying formal parameters that are bound to concrete data at runtime can effectively and elegantly capture multi-object behaviors or protocols. Unfortunately, parametric specifications are not easy to formulate by nonexperts and, consequently, are rarely available. This paper presents a general approach for mining parametric specifications from program executions, based on a strict separation of concerns: (1) a trace slicer first extracts sets of independent interactions from parametric execution traces; and (2) the resulting non-parametric trace slices are then passed to any conventional non-parametric property learner. The presented technique has been implemented in jMiner, which has been used to automatically mine many meaningful and non-trivial parametric properties of OpenJDK 6. @InProceedings{ICSE11p591, author = {Choonghwan Lee and Feng Chen and Grigore Roşu}, title = {Mining Parametric Specifications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {591--600}, doi = {}, year = {2011}, } |
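The separation of concerns can be illustrated with a tiny trace slicer on invented event and parameter names: events are grouped by compatible parameter bindings, and each resulting slice is an ordinary non-parametric trace ready for any conventional learner. The real slicer handles far more general binding patterns.

```python
from collections import defaultdict

# A parametric trace: events with concrete parameter bindings
# (collection c, iterator i); all names here are illustrative.
trace = [
    ("create", {"c": "col1", "i": "it1"}),
    ("create", {"c": "col2", "i": "it2"}),
    ("update", {"c": "col1"}),
    ("next",   {"i": "it1"}),
    ("next",   {"i": "it2"}),
]

# One slice per (c, i) instance introduced by a "create" event; an
# event joins a slice when its binding is a sub-map of that instance.
slices = defaultdict(list)
for event, binding in trace:
    if event == "create":
        slices[tuple(sorted(binding.items()))] = []
for full in list(slices):
    inst = dict(full)
    for event, binding in trace:
        if all(inst.get(k) == v for k, v in binding.items()):
            slices[full].append(event)

for full, events in slices.items():
    print(dict(full), "->", events)   # each slice feeds a non-parametric learner
```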
|
Chen, Ning |
ICSE '11-DOCTORALPRESENT: "GATE: Game-based Testing Environment ..."
GATE: Game-based Testing Environment
Ning Chen (Hong Kong University of Science and Technology, China) In this paper, we propose a game-based public testing mechanism called GATE. The purpose of GATE is to make use of the rich human resources on the Internet to help increase effectiveness in software testing and improve test adequacy. GATE facilitates public testing in three main steps: 1) decompose the test criterion satisfaction problem into many smaller sub-model satisfaction problems; 2) construct games for each individual sub-model and present the games to the public through web servers; 3) collect and convert public users’ action sequence data into real test cases that are guaranteed to cover elements that have not been adequately tested. A preliminary study on the apache-commons-math library shows that 44% of the branches have not been adequately tested by state-of-the-art automatic test generation techniques. Among these branches, at least 42% are decomposable by GATE into smaller sub-problems. These elements naturally become the potential targets of GATE for public game-based testing. @InProceedings{ICSE11p1078, author = {Ning Chen}, title = {GATE: Game-based Testing Environment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1078--1081}, doi = {}, year = {2011}, } |
|
Chen, Shyh-Kwei |
ICSE '11-DEMOS: "Using MATCON to Generate CASE ..."
Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications
Elad Fein, Natalia Razinkov, Shlomit Shachor, Pietro Mazzoleni, Sweefen Goh, Richard Goodwin, Manisha Bhandar, Shyh-Kwei Chen, Juhnyoung Lee, Vibha Singhal Sinha, Senthil Mani, Debdoot Mukherjee, Biplav Srivastava, and Pankaj Dhoolia (IBM Research Haifa, Israel; IBM Research Watson, USA; IBM Research, India) The complex process of adapting pre-packaged applications, such as Oracle or SAP, to an organization’s needs is full of challenges. Although detailed, structured, and well-documented methods govern this process, the consulting team implementing the method must spend a huge amount of manual effort to make sure the guidelines of the method are followed as intended by the method author. MATCON breaks down the method content, documents, templates, and work products into reusable objects, and enables them to be cataloged and indexed so these objects can be easily found and reused on subsequent projects. By using models and meta-modeling the reusable methods, we automatically produce a CASE tool to apply these methods, thereby guiding consultants through this complex process. The resulting tool helps consultants create the method deliverables for the initial phases of large customization projects. Our MATCON output, referred to as Consultant Assistant, has shown significant savings in training costs, a 20–30% improvement in productivity, and positive results in large Oracle and SAP implementations. @InProceedings{ICSE11p1016, author = {Elad Fein and Natalia Razinkov and Shlomit Shachor and Pietro Mazzoleni and Sweefen Goh and Richard Goodwin and Manisha Bhandar and Shyh-Kwei Chen and Juhnyoung Lee and Vibha Singhal Sinha and Senthil Mani and Debdoot Mukherjee and Biplav Srivastava and Pankaj Dhoolia}, title = {Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1016--1018}, doi = {}, year = {2011}, } |
|
Chen, Wenguang |
ICSE '11: "RACEZ: A Lightweight and Non-Invasive ..."
RACEZ: A Lightweight and Non-Invasive Race Detection Tool for Production Applications
Tianwei Sheng, Neil Vachharajani, Stephane Eranian, Robert Hundt, Wenguang Chen, and Weimin Zheng (Tsinghua University, China; Google Inc., USA) Concurrency bugs, particularly data races, are notoriously difficult to debug and are a significant source of unreliability in multithreaded applications. Many tools to catch data races rely on program instrumentation to obtain memory instruction traces. Unfortunately, this instrumentation introduces significant runtime overhead, is extremely invasive, or has a limited domain of applicability making these tools unsuitable for many production systems. Consequently, these tools are typically used during application testing where many data races go undetected. This paper proposes RACEZ, a novel race detection mechanism which uses a sampled memory trace collected by the hardware performance monitoring unit rather than invasive instrumentation. The approach introduces only a modest overhead making it usable in production environments. We validate RACEZ using two open source server applications and the PARSEC benchmarks. Our experiments show that RACEZ catches a set of known bugs with reasonable probability while introducing only 2.8% runtime slowdown on average. @InProceedings{ICSE11p401, author = {Tianwei Sheng and Neil Vachharajani and Stephane Eranian and Robert Hundt and Wenguang Chen and Weimin Zheng}, title = {RACEZ: A Lightweight and Non-Invasive Race Detection Tool for Production Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {401--410}, doi = {}, year = {2011}, } |
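To make the offline analysis concrete, here is a lockset-style sketch over invented samples: two sampled accesses are suspicious if they come from different threads, at least one is a write, and their locksets are disjoint. RACEZ's actual sampling machinery and checking algorithm are more involved than this.

```python
from collections import defaultdict

# Sampled memory accesses, as if collected by the hardware PMU:
# (thread, address, op, locks held at the time of the access).
samples = [
    ("T1", 0x10, "write", frozenset({"L1"})),
    ("T2", 0x10, "read",  frozenset({"L1"})),
    ("T1", 0x20, "write", frozenset({"L1"})),
    ("T2", 0x20, "write", frozenset()),
]

by_addr = defaultdict(list)
for s in samples:
    by_addr[s[1]].append(s)

# Lockset-style check on each sampled address.
for addr, accs in by_addr.items():
    for i in range(len(accs)):
        for j in range(i + 1, len(accs)):
            t1, _, op1, ls1 = accs[i]
            t2, _, op2, ls2 = accs[j]
            if t1 != t2 and "write" in (op1, op2) and not (ls1 & ls2):
                print(f"potential race on {hex(addr)} between {t1} and {t2}")
```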
|
Chen, Xiaofan |
ICSE '11-NIER: "A Combination Approach for ..."
A Combination Approach for Enhancing Automated Traceability (NIER Track)
Xiaofan Chen, John Hosking, and John Grundy (University of Auckland, New Zealand; Swinburne University of Technology at Melbourne, Australia) Tracking a variety of traceability links between artifacts assists software developers in comprehension, efficient development, and effective management of a system. Traceability systems to date based on various Information Retrieval (IR) techniques have been faced with a major open research challenge: how to extract these links with both high precision and high recall. In this paper we describe an experimental approach that combines Regular Expression, Key Phrases, and Clustering with IR techniques to enhance the performance of IR for traceability link recovery between documents and source code. Our preliminary experimental results show that our combination technique improves the performance of IR, increases the precision of retrieved links, and recovers more true links than IR alone. @InProceedings{ICSE11p912, author = {Xiaofan Chen and John Hosking and John Grundy}, title = {A Combination Approach for Enhancing Automated Traceability (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {912--915}, doi = {}, year = {2011}, } |
|
Choudhary, Shauvik Roy |
ICSE '11-SRC: "Detecting Cross-browser Issues ..."
Detecting Cross-browser Issues in Web Applications
Shauvik Roy Choudhary (Georgia Institute of Technology, USA) Cross-browser issues are prevalent in web applications. However, existing tools require considerable manual effort from developers to detect such issues. Our technique, implemented in the prototype tool WEBDIFF, detects such issues automatically and reports them to the developer. Along with each issue reported, the tool also provides details about the affected HTML element, thereby helping the developer to fix the issue. WEBDIFF is the first technique to apply concepts from computer vision and graph theory to identify cross-browser issues in web applications. Our results show that WEBDIFF is practical and can find issues in real-world web applications. @InProceedings{ICSE11p1146, author = {Shauvik Roy Choudhary}, title = {Detecting Cross-browser Issues in Web Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1146--1148}, doi = {}, year = {2011}, } |
|
Christensen, Henrik Bærbak |
ICSE '11-NIER: "Towards Architectural Information ..."
Towards Architectural Information in Implementation (NIER Track)
Henrik Bærbak Christensen and Klaus Marius Hansen (Aarhus University, Denmark; University of Copenhagen, Denmark) Agile development methods favor speed and feature-producing iterations. Software architecture, on the other hand, is rife with techniques that are slow and not oriented directly towards implementation of customers’ needs. Thus, there is a major challenge in retaining architectural information in a fast-paced agile project. We propose to embed as much architectural information as possible in the central artefact of the agile universe, the code. We argue that thereby valuable architectural information is retained for (automatic) documentation, validation, and further analysis, based on a relatively small investment of effort. We outline some preliminary examples of architectural annotations in Java and Python and their applicability in practice. @InProceedings{ICSE11p928, author = {Henrik Bærbak Christensen and Klaus Marius Hansen}, title = {Towards Architectural Information in Implementation (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {928--931}, doi = {}, year = {2011}, } |
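In the spirit of the Python annotations the authors outline, here is a minimal sketch in which a decorator records each class's architectural layer and a checker validates declared dependencies against permitted layer edges. The layer names, the `uses` convention, and the ALLOWED rules are all invented for illustration, not the authors' notation.

```python
ALLOWED = {("ui", "domain"), ("domain", "storage")}   # assumed layer rules
REGISTRY = {}

def layer(name):
    """Class decorator embedding architectural layer information in code."""
    def tag(cls):
        REGISTRY[cls.__name__] = name
        return cls
    return tag

@layer("storage")
class Repository:
    uses = []

@layer("domain")
class OrderService:
    uses = ["Repository"]

@layer("ui")
class OrderView:
    uses = ["OrderService", "Repository"]   # ui -> storage: not allowed

def check():
    """Validate every declared dependency against the layer rules."""
    for name, lyr in REGISTRY.items():
        for dep in globals()[name].uses:
            if (lyr, REGISTRY[dep]) not in ALLOWED:
                print("layer violation:", name, "->", dep)

check()   # prints: layer violation: OrderView -> Repository
```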
|
Clark, David |
ICSE '11: "Model Projection: Simplifying ..."
Model Projection: Simplifying Models in Response to Restricting the Environment
Kelly Androutsopoulos, David Binkley, David Clark, Nicolas Gold, Mark Harman, Kevin Lano, and Zheng Li (University College London, UK; Loyola University Maryland, USA; King's College London, UK) This paper introduces Model Projection. Finite state models such as Extended Finite State Machines are being used in an ever increasing number of software engineering activities. Model projection facilitates model development by specializing models for a specific operating environment. A projection is useful in many design-level applications including specification reuse and property verification. The applicability of model projection rests upon three critical concerns: correctness, effectiveness, and efficiency, all of which are addressed in this paper. We introduce four related algorithms for model projection and prove each correct. We also present an empirical study of effectiveness and efficiency using ten models, including widely-studied benchmarks as well as industrial models. Results show that a typical projection includes about half of the states and a third of the transitions from the original model. @InProceedings{ICSE11p291, author = {Kelly Androutsopoulos and David Binkley and David Clark and Nicolas Gold and Mark Harman and Kevin Lano and Zheng Li}, title = {Model Projection: Simplifying Models in Response to Restricting the Environment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {291--300}, doi = {}, year = {2011}, } |
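The core idea can be sketched on a plain FSM, under the assumption that projection means dropping transitions whose inputs the restricted environment can never issue and then dropping states that become unreachable. The paper treats extended finite state machines and proves its four algorithms correct; the machine below is invented.

```python
# An FSM as (src, input, dst) transitions (toy model).
transitions = [
    ("off", "power", "on"),
    ("on", "power", "off"),
    ("on", "diag", "service"),
    ("service", "reset", "off"),
]

def project(transitions, allowed, start="off"):
    """Specialize the model for an environment restricted to `allowed` inputs."""
    kept = [t for t in transitions if t[1] in allowed]
    reachable, frontier = {start}, [start]
    while frontier:                     # drop states the environment can no longer reach
        s = frontier.pop()
        for src, inp, dst in kept:
            if src == s and dst not in reachable:
                reachable.add(dst)
                frontier.append(dst)
    return [t for t in kept if t[0] in reachable]

print(project(transitions, allowed={"power"}))
# [('off', 'power', 'on'), ('on', 'power', 'off')]: a smaller projected model
```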
|
Classen, Andreas |
ICSE '11: "Symbolic Model Checking of ..."
Symbolic Model Checking of Software Product Lines
Andreas Classen, Patrick Heymans, Pierre-Yves Schobbens, and Axel Legay (University of Namur, Belgium; IRISA/INRIA Rennes, France; University of Liège, Belgium) We study the problem of model checking software product line (SPL) behaviours against temporal properties. This is more difficult than for single systems because an SPL with n features yields up to 2^n individual systems to verify. As each individual verification suffers from state explosion, it is crucial to propose efficient formalisms and heuristics. We recently proposed featured transition systems (FTS), a compact representation for SPL behaviour, and defined algorithms for model checking FTS against linear temporal properties. Although they were shown to outperform individual system verifications, they still face a state explosion problem as they enumerate and visit system states one by one. In this paper, we tackle this latter problem by using symbolic representations of the state space. This led us to consider computation tree logic (CTL), which is supported by the industry-strength symbolic model checker NuSMV. We first lay the foundations for symbolic SPL model checking by defining a feature-oriented version of CTL and its dedicated algorithms. We then describe an implementation that adapts the NuSMV language and tool infrastructure. Finally, we propose theoretical and empirical evaluations of our results. The benchmarks show that for certain properties, our algorithm is over a hundred times faster than model checking each system with the standard algorithm. @InProceedings{ICSE11p321, author = {Andreas Classen and Patrick Heymans and Pierre-Yves Schobbens and Axel Legay}, title = {Symbolic Model Checking of Software Product Lines}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {321--330}, doi = {}, year = {2011}, } |
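For contrast with the symbolic algorithm, the sketch below shows the naive product-by-product check it improves upon, on an invented featured transition system with one optional feature; the state names and the reachability property are illustrative only.

```python
# Featured transition system: each transition is guarded by a feature.
fts = [
    ("idle", "running", "base"),
    ("running", "locked", "lock"),
    ("locked", "running", "lock"),
]

# Two products: 'base' alone, and 'base' plus the optional 'lock'.
products = [{"base"}, {"base", "lock"}]

def reachable(product, start="idle"):
    """States reachable in the single system induced by one product."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for src, dst, feature in fts:
            if src == s and feature in product and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

# Enumerative check of one property per product; the paper's symbolic
# algorithm avoids exactly this product-by-product loop.
for p in products:
    print(sorted(p), "-> 'locked' reachable:", "locked" in reachable(p))
```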
|
Clause, James |
ICSE '11: "Camouflage: Automated Anonymization ..."
Camouflage: Automated Anonymization of Field Data
James Clause and Alessandro Orso (University of Delaware, USA; Georgia Institute of Technology, USA) Privacy and security concerns have adversely affected the usefulness of many types of techniques that leverage information gathered from deployed applications. To address this issue, we present an approach for automatically anonymizing failure-inducing inputs that builds on a previously developed technique. Given an input I that causes a failure f, our approach generates an anonymized input I' that is different from I but still causes f. I' can thus be sent to developers to enable them to debug f without having to know I. We implemented our approach in a prototype tool, Camouflage, and performed an extensive empirical evaluation where we applied Camouflage to a large set of failure-inducing inputs for several real applications. The results of the evaluation are promising, as they show that Camouflage is both practical and effective at generating anonymized inputs; for the inputs that we considered, I and I' shared no sensitive information. The results also show that our approach can outperform the general technique it extends. @InProceedings{ICSE11p21, author = {James Clause and Alessandro Orso}, title = {Camouflage: Automated Anonymization of Field Data}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {21--30}, doi = {}, year = {2011}, } |
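A crude sketch of the anonymization loop described above, on an invented failure condition that depends only on input length, so every character can be safely replaced while the failure is preserved. Camouflage instead reasons about path conditions with symbolic execution rather than re-running the program per character as done here.

```python
import string

def fails(candidate: str) -> bool:
    """Stand-in for re-running the program: the toy failure triggers
    on any 16-character input (an assumption for this sketch)."""
    return len(candidate) == 16

def anonymize(failing_input: str) -> str:
    """Replace each character whenever the failure is preserved, so
    the result shares as little content as possible with the original."""
    out = list(failing_input)
    for i, ch in enumerate(out):
        for repl in string.ascii_lowercase:
            if repl != ch:
                trial = out[:]
                trial[i] = repl
                if fails("".join(trial)):
                    out = trial
                    break
    return "".join(out)

secret = "alice@secret.org"          # 16 characters, triggers the failure
anon = anonymize(secret)
print(anon, fails(anon))             # still failing, but content replaced
```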
|
Cleland-Huang, Jane |
ICSE '11: "On-demand Feature Recommendations ..."
On-demand Feature Recommendations Derived from Mining Public Product Descriptions
Horatiu Dumitru, Marek Gibiec, Negar Hariri, Jane Cleland-Huang, Bamshad Mobasher, Carlos Castro-Herrera, and Mehdi Mirakhorli (DePaul University, USA) We present a recommender system that models and recommends product features for a given domain. Our approach mines product descriptions from publicly available online specifications, utilizes text mining and a novel incremental diffusive clustering algorithm to discover domain-specific features, generates a probabilistic feature model that represents commonalities, variants, and cross-category features, and then uses association rule mining and the k-Nearest Neighbor machine learning strategy to generate product-specific feature recommendations. Our recommender system supports the relatively labor-intensive task of domain analysis, potentially increasing opportunities for re-use, reducing time-to-market, and delivering more competitive software products. The approach is empirically validated against 20 different product categories using thousands of product descriptions mined from a repository of free software applications. @InProceedings{ICSE11p181, author = {Horatiu Dumitru and Marek Gibiec and Negar Hariri and Jane Cleland-Huang and Bamshad Mobasher and Carlos Castro-Herrera and Mehdi Mirakhorli}, title = {On-demand Feature Recommendations Derived from Mining Public Product Descriptions}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {181--190}, doi = {}, year = {2011}, }
ICSE '11-NIER: "Tracing Architectural Concerns ..."
Tracing Architectural Concerns in High Assurance Systems (NIER Track)
Mehdi Mirakhorli and Jane Cleland-Huang (DePaul University, USA) Software architecture is shaped by a diverse set of interacting and competing quality concerns, each of which may have broad-reaching impacts across multiple architectural views. Without traceability support, it is easy for developers to inadvertently change critical architectural elements during ongoing system maintenance and evolution, leading to architectural erosion. Unfortunately, existing traceability practices tend to result in the proliferation of traceability links, which can be difficult to create, maintain, and understand. We therefore present a decision-centric approach that focuses traceability links around the architectural decisions that have shaped the delivered system. Our approach, which is informed through an extensive investigation of architectural decisions made in real-world safety-critical and performance-critical applications, provides enhanced support for advanced software engineering tasks. @InProceedings{ICSE11p908, author = {Mehdi Mirakhorli and Jane Cleland-Huang}, title = {Tracing Architectural Concerns in High Assurance Systems (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {908--911}, doi = {}, year = {2011}, } |
|
Concas, Giulio |
ICSE '11-WORKSHOPS: "Workshop on Emerging Trends ..."
Workshop on Emerging Trends in Software Metrics (WETSoM 2011)
Giulio Concas, Massimiliano Di Penta, Ewan Tempero, and Hongyu Zhang (University of Cagliari, Italy; University of Sannio, Italy; University of Auckland, New Zealand; Tsinghua University, China) The Workshop on Emerging Trends in Software Metrics aims at bringing together researchers and practitioners to discuss the progress of software metrics. The motivation for this workshop is the low impact that software metrics has on current software development. The goals of this workshop are to critically examine the evidence for the effectiveness of existing metrics and to identify new directions for development of software metrics. @InProceedings{ICSE11p1224, author = {Giulio Concas and Massimiliano Di Penta and Ewan Tempero and Hongyu Zhang}, title = {Workshop on Emerging Trends in Software Metrics (WETSoM 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1224--1225}, doi = {}, year = {2011}, } |
|
Consel, Charles |
ICSE '11: "Leveraging Software Architectures ..."
Leveraging Software Architectures to Guide and Verify the Development of Sense/Compute/Control Applications
Damien Cassou, Emilie Balland, Charles Consel, and Julia Lawall (University of Bordeaux, France; INRIA, France; DIKU, Denmark; LIP6, France) A software architecture describes the structure of a computing system by specifying software components and their interactions. Mapping a software architecture to an implementation is a well known challenge. A key element of this mapping is the architecture’s description of the data and control-flow interactions between components. The characterization of these interactions can be rather abstract or very concrete, providing more or less implementation guidance, programming support, and static verification. In this paper, we explore one point in the design space between abstract and concrete component interaction specifications. We introduce a notion of interaction contract that expresses the set of allowed interactions between components, describing both data and control-flow constraints. This declaration is part of the architecture description, allows generation of extensive programming support, and enables various verifications. We instantiate our approach in an architecture description language for Sense/Compute/Control applications, and describe associated compilation and verification strategies. @InProceedings{ICSE11p431, author = {Damien Cassou and Emilie Balland and Charles Consel and Julia Lawall}, title = {Leveraging Software Architectures to Guide and Verify the Development of Sense/Compute/Control Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {431--440}, doi = {}, year = {2011}, } |
|
Cook, Richard |
ICSE '11: "Always-Available Static and ..."
Always-Available Static and Dynamic Feedback
Michael Bayne, Richard Cook, and Michael D. Ernst (University of Washington, USA) Developers who write code in a statically typed language are denied the ability to obtain dynamic feedback by executing their code during periods when it fails the static type checker. They are further confined to the static typing discipline during times in the development process where it does not yield the highest productivity. If they opt instead to use a dynamic language, they forgo the many benefits of static typing, including machine-checked documentation, improved correctness and reliability, tool support (such as for refactoring), and better runtime performance. We present a novel approach to giving developers the benefits of both static and dynamic typing, throughout the development process, and without the burden of manually separating their program into statically- and dynamically-typed parts. Our approach, which is intended for temporary use during the development process, relaxes the static type system and provides a semantics for many type-incorrect programs. It defers type errors to run time, or suppresses them if they do not affect runtime semantics. We implemented our approach in a publicly available tool, DuctileJ, for the Java language. In case studies, DuctileJ conferred benefits both during prototyping and during the evolution of existing code. @InProceedings{ICSE11p521, author = {Michael Bayne and Richard Cook and Michael D. Ernst}, title = {Always-Available Static and Dynamic Feedback}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {521--530}, doi = {}, year = {2011}, } |
|
Cordeiro, Lucas |
ICSE '11: "Verifying Multi-threaded Software ..."
Verifying Multi-threaded Software using SMT-based Context-Bounded Model Checking
Lucas Cordeiro and Bernd Fischer (University of Southampton, UK) We describe and evaluate three approaches to model check multi-threaded software with shared variables and locks using bounded model checking based on Satisfiability Modulo Theories (SMT) and our modelling of the synchronization primitives of the Pthread library. In the lazy approach, we generate all possible interleavings and call the SMT solver on each of them individually, until we either find a bug, or have systematically explored all interleavings. In the schedule recording approach, we encode all possible interleavings into one single formula and then exploit the high speed of the SMT solvers. In the underapproximation and widening approach, we reduce the state space by abstracting the number of interleavings from the proofs of unsatisfiability generated by the SMT solvers. In all three approaches, we bound the number of context switches allowed among threads in order to reduce the number of interleavings explored. We implemented these approaches in ESBMC, our SMT-based bounded model checker for ANSI-C programs. Our experiments show that ESBMC can analyze larger problems and substantially reduce the verification time compared to state-of-the-art techniques that use iterative context-bounding algorithms or counter-example guided abstraction refinement. @InProceedings{ICSE11p331, author = {Lucas Cordeiro and Bernd Fischer}, title = {Verifying Multi-threaded Software using SMT-based Context-Bounded Model Checking}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {331--340}, doi = {}, year = {2011}, } |
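The effect of context bounding can be seen in a small enumerator of two-thread interleavings (the thread programs are illustrative): raising the bound admits more schedules, which is exactly the blow-up the three encodings try to tame.

```python
def interleavings(t1, t2, bound, last=None, switches=0):
    """Enumerate schedules of two straight-line threads with at most
    `bound` context switches (t1/t2 are the threads' remaining ops)."""
    if not t1 and not t2:
        yield []
        return
    for tid, mine, other in (("T1", t1, t2), ("T2", t2, t1)):
        if not mine:
            continue
        s = switches + (0 if last in (None, tid) else 1)
        if s > bound:
            continue   # prune schedules exceeding the context-switch bound
        rest = mine[1:]
        sub = (interleavings(rest, other, bound, tid, s) if tid == "T1"
               else interleavings(other, rest, bound, tid, s))
        for tail in sub:
            yield [(tid, mine[0])] + tail

t1 = ["x = 1", "r1 = y"]
t2 = ["y = 1", "r2 = x"]
print(sum(1 for _ in interleavings(t1, t2, bound=1)))  # 2 schedules
print(sum(1 for _ in interleavings(t1, t2, bound=3)))  # all 6 schedules
```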
|
Cordy, James R. |
ICSE '11-WORKSHOPS: "Fifth International Workshop ..."
Fifth International Workshop on Software Clones (IWSC 2011)
James R. Cordy, Katsuro Inoue, Stanislaw Jarzabek, and Rainer Koschke (Queen's University, Canada; Osaka University, Japan; National University of Singapore, Singapore; University of Bremen, Germany) Software clones are identical or similar pieces of code, design or other artifacts. Clones are known to be closely related to various issues in software engineering, such as software quality, complexity, architecture, refactoring, evolution, licensing, plagiarism, and so on. Various characteristics of software systems can be uncovered through clone analysis, and system restructuring can be performed by merging clones. The goals of this workshop are to bring together researchers and practitioners from around the world to evaluate the current state of research and applications, discuss common problems, discover new opportunities for collaboration, exchange ideas, envision new areas of research and applications, and explore synergies with similarity analysis in other areas and disciplines. @InProceedings{ICSE11p1210, author = {James R. Cordy and Katsuro Inoue and Stanislaw Jarzabek and Rainer Koschke}, title = {Fifth International Workshop on Software Clones (IWSC 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1210--1211}, doi = {}, year = {2011}, } |
|
Cuddeback, David |
ICSE '11-NIER: "Towards Overcoming Human Analyst ..."
Towards Overcoming Human Analyst Fallibility in the Requirements Tracing Process (NIER Track)
David Cuddeback, Alex Dekhtyar, Jane Huffman Hayes, Jeff Holden, and Wei-Keat Kong (California Polytechnic State University, USA; University of Kentucky, USA) Our research group recently discovered that human analysts, when asked to validate candidate traceability matrices, produce predictably imperfect results, in some cases less accurate than the starting candidate matrices. This discovery radically changes our understanding of how to design a fast, accurate and certifiable tracing process that can be implemented as part of software assurance activities. We present our vision for the new approach to achieving this goal. Further, we posit that human fallibility may impact other software engineering activities involving decision support tools. @InProceedings{ICSE11p860, author = {David Cuddeback and Alex Dekhtyar and Jane Huffman Hayes and Jeff Holden and Wei-Keat Kong}, title = {Towards Overcoming Human Analyst Fallibility in the Requirements Tracing Process (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {860--863}, doi = {}, year = {2011}, } |
|
Curtis, Bill |
ICSE '11-SEIP: "An Evaluation of the Internal ..."
An Evaluation of the Internal Quality of Business Applications: Does Size Matter?
Bill Curtis, Jay Sappidi, and Jitendra Subramanyam (CAST, USA) This paper summarizes a study of the internal, structural quality of 288 business applications comprising 108 million lines of code, collected from 75 companies in 8 industry segments. These applications were submitted to a static analysis that evaluates quality within and across application components that may be coded in different languages. The analysis consists of evaluating the application against a repository of over 900 rules of good architectural and coding practice. Results are presented for measures of security, performance, and changeability. The effect of size on quality is evaluated, and the ability of modularity to reduce the impact of size is suggested by the results. @InProceedings{ICSE11p711, author = {Bill Curtis and Jay Sappidi and Jitendra Subramanyam}, title = {An Evaluation of the Internal Quality of Business Applications: Does Size Matter?}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {711--715}, doi = {}, year = {2011}, } |
|
Czarnecki, Krzysztof |
ICSE '11: "Reverse Engineering Feature ..."
Reverse Engineering Feature Models
Steven She, Rafael Lotufo, Thorsten Berger, Andrzej Wasowski, and Krzysztof Czarnecki (University of Waterloo, Canada; University of Leipzig, Germany; IT University of Copenhagen, Denmark) Feature models describe the common and variable characteristics of a product line. Their advantages are well recognized in product line methods. Unfortunately, creating a feature model for an existing project is time-consuming and requires substantial effort from a modeler. We present procedures for reverse engineering feature models based on a crucial heuristic for identifying parents—the major challenge of this task. We also automatically recover constructs such as feature groups, mandatory features, and implies/excludes edges. We evaluate the technique on two large-scale software product lines with existing reference feature models—the Linux and eCos kernels—and FreeBSD, a project without a feature model. Our heuristic is effective across all three projects by ranking the correct parent among the top results for a vast majority of features. The procedures effectively reduce the information a modeler has to consider from thousands of choices to typically five or less. @InProceedings{ICSE11p461, author = {Steven She and Rafael Lotufo and Thorsten Berger and Andrzej Wasowski and Krzysztof Czarnecki}, title = {Reverse Engineering Feature Models}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {461--470}, doi = {}, year = {2011}, } |
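One ingredient of such a parent-ranking heuristic can be sketched as an implication score over existing configurations. The data is invented and the paper's actual heuristic combines more evidence than this single signal; the sketch only shows why a child feature's true parent tends to score highly.

```python
# Toy feature-to-configuration data: which features each known
# configuration enables (names are illustrative).
configs = [
    {"kernel", "net", "ipv6"},
    {"kernel", "net"},
    {"kernel", "usb"},
]

def implication(child, parent):
    """P(parent | child): how often the candidate parent is present
    whenever the child is; a child always implies its true parent."""
    both = sum(1 for c in configs if child in c and parent in c)
    has_child = sum(1 for c in configs if child in c)
    return both / has_child if has_child else 0.0

features = sorted(set().union(*configs))
for child in features:
    score, parent = max((implication(child, p), p)
                        for p in features if p != child)
    print(f"{child}: best parent candidate {parent} (score {score:.2f})")
```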
|
Dachselt, Raimund |
ICSE '11-DEMOS: "View Infinity: A Zoomable ..."
View Infinity: A Zoomable Interface for Feature-Oriented Software Development
Michael Stengel, Janet Feigenspan, Mathias Frisch, Christian Kästner, Sven Apel, and Raimund Dachselt (University of Magdeburg, Germany; University of Marburg, Germany; University of Passau, Germany) Software product line engineering provides efficient means to develop variable software. To support program comprehension of software product lines (SPLs), we developed View Infinity, a tool that provides seamless and semantic zooming of different abstraction layers of an SPL. First results of a qualitative study with experienced SPL developers are promising and indicate that View Infinity is useful and intuitive to use. @InProceedings{ICSE11p1031, author = {Michael Stengel and Janet Feigenspan and Mathias Frisch and Christian Kästner and Sven Apel and Raimund Dachselt}, title = {View Infinity: A Zoomable Interface for Feature-Oriented Software Development}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1031--1033}, doi = {}, year = {2011}, } |
|
Dagnat, Fabien |
ICSE '11-NIER: "The Lazy Initialization Multilayered ..."
The Lazy Initialization Multilayered Modeling Framework (NIER Track)
Fahad R. Golra and Fabien Dagnat (Université Européenne de Bretagne, France; Institut Télécom, France) Lazy Initialization Multilayer Modeling (LIMM) is an object oriented modeling language targeted to the declarative definition of Domain Specific Languages (DSLs) for Model Driven Engineering. It focuses on the precise definition of modeling frameworks spanning over multiple layers. In particular, it follows a two dimensional architecture instead of the linear architecture followed by many other modeling frameworks. The novelty of our approach is to use lazy initialization for the definition of mapping between different modeling abstractions, within and across multiple layers, hence providing the basis for exploiting the potential of metamodeling. @InProceedings{ICSE11p924, author = {Fahad R. Golra and Fabien Dagnat}, title = {The Lazy Initialization Multilayered Modeling Framework (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {924--927}, doi = {}, year = {2011}, } |
|
Dalton, Michael |
ICSE '11: "Detecting Software Modularity ..."
Detecting Software Modularity Violations
Sunny Wong, Yuanfang Cai, Miryung Kim, and Michael Dalton (Drexel University, USA; University of Texas at Austin, USA) This paper presents Clio, an approach that detects modularity violations, which can cause software defects, modularity decay, or expensive refactorings. Clio computes the discrepancies between how components should change together based on the modular structure, and how components actually change together as revealed in version history. We evaluated Clio using 15 releases of Hadoop Common and 10 releases of Eclipse JDT. The results show that hundreds of violations identified using Clio were indeed recognized as design problems or refactored by the developers in later versions. The identified violations exhibit multiple symptoms of poor design, some of which are not easily detectable using existing approaches. @InProceedings{ICSE11p411, author = {Sunny Wong and Yuanfang Cai and Miryung Kim and Michael Dalton}, title = {Detecting Software Modularity Violations}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {411--420}, doi = {}, year = {2011}, } |
|
D'Ambros, Marco |
ICSE '11-DEMOS: "Miler: A Toolset for Exploring ..."
Miler: A Toolset for Exploring Email Data
Alberto Bacchelli, Michele Lanza, and Marco D'Ambros (University of Lugano, Switzerland) Source code is the target and final outcome of software development. By focusing our research and analysis on source code only, we risk forgetting that software is the product of human efforts, where communication plays a pivotal role. One of the most widely used means of communication is email, which has become vital for any distributed development project. Analyzing email archives is non-trivial, due to the noisy and unstructured nature of emails, the vast amounts of information, the unstandardized storage systems, and the gap with development tools. We present Miler, a toolset that allows the exploration of this form of communication, in the context of software maintenance and evolution. With Miler we can retrieve data from mailing list repositories in different formats, model emails as first-class entities, and transparently store them in databases. Miler offers tools and support for navigating the content, manually labelling emails with discussed source code entities, automatically linking emails to source code, measuring code entities’ popularity in mailing lists, exposing structured content in the unstructured content, and integrating email communication in an IDE. @InProceedings{ICSE11p1025, author = {Alberto Bacchelli and Michele Lanza and Marco D'Ambros}, title = {Miler: A Toolset for Exploring Email Data}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1025--1027}, doi = {}, year = {2011}, } |
|
Damian, Daniela |
ICSE '11-DEMOS: "StakeSource2.0: Using Social ..."
StakeSource2.0: Using Social Networks of Stakeholders to Identify and Prioritise Requirements
Soo Ling Lim, Daniela Damian, and Anthony Finkelstein (University College London, UK; University of Victoria, Canada) Software projects typically rely on system analysts to conduct requirements elicitation, an approach potentially costly for large projects with many stakeholders and requirements. This paper describes StakeSource2.0, a web-based tool that uses social networks and collaborative filtering, a “crowdsourcing” approach, to identify and prioritise stakeholders and their requirements. @InProceedings{ICSE11p1022, author = {Soo Ling Lim and Daniela Damian and Anthony Finkelstein}, title = {StakeSource2.0: Using Social Networks of Stakeholders to Identify and Prioritise Requirements}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1022--1024}, doi = {}, year = {2011}, }
ICSE '11-NIER: "The Hidden Experts in Software-Engineering ..."
The Hidden Experts in Software-Engineering Communication (NIER Track)
Irwin Kwan and Daniela Damian (University of Victoria, Canada) Sharing knowledge in a timely fashion is important in distributed software development. However, because experts are difficult to locate, developers tend to broadcast information to find the right people, which leads to overload and to communication breakdowns. We study the context in which experts are included in an email discussion so that team members can identify experts sooner. In this paper, we conduct a case study examining why people emerge in discussions by analyzing email within a distributed team. We find that people emerge in the following four situations: when a crisis occurs, when they respond to explicit requests, when they are forwarded in announcements, and when discussants follow up on a previous event such as a meeting. We observe that emergent people respond not only to situations where developers are seeking expertise, but also to execute routine tasks. Our findings have implications for expertise seeking and knowledge management processes. @InProceedings{ICSE11p800, author = {Irwin Kwan and Daniela Damian}, title = {The Hidden Experts in Software-Engineering Communication (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {800--803}, doi = {}, year = {2011}, } |
|
Daniel, Brett |
ICSE '11-DEMOS: "ReAssert: A Tool for Repairing ..."
ReAssert: A Tool for Repairing Broken Unit Tests
Brett Daniel, Danny Dig, Tihomir Gvero, Vilas Jagannath, Johnston Jiaa, Damion Mitchell, Jurand Nogiec, Shin Hwei Tan, and Darko Marinov (University of Illinois at Urbana-Champaign, USA; EPFL, Switzerland) Successful software systems continuously change their requirements and thus code. When this happens, some existing tests get broken because they no longer reflect the intended behavior, and thus they need to be updated. Repairing broken tests can be time-consuming and difficult. We present ReAssert, a tool that can automatically suggest repairs for broken unit tests. Examples include replacing literal values in tests, changing assertion methods, or replacing one assertion with several. Our experiments show that ReAssert can repair many common test failures and that its suggested repairs match developers’ expectations. @InProceedings{ICSE11p1010, author = {Brett Daniel and Danny Dig and Tihomir Gvero and Vilas Jagannath and Johnston Jiaa and Damion Mitchell and Jurand Nogiec and Shin Hwei Tan and Darko Marinov}, title = {ReAssert: A Tool for Repairing Broken Unit Tests}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1010--1012}, doi = {}, year = {2011}, } |
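To make the repair categories above concrete, here is a minimal, hypothetical JUnit 4 example of the literal-replacement repair the abstract mentions; the class, method, and expected values are invented for illustration and are not taken from the paper or the tool's output.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical code under test: a formatter whose output changed
// from "1/5/2011" to "2011-01-05" between releases, breaking the test.
class DateFormatter {
    String format(int year, int month, int day) {
        return String.format("%04d-%02d-%02d", year, month, day); // new behavior
    }
}

public class DateFormatterTest {
    @Test
    public void formatsDate() {
        DateFormatter f = new DateFormatter();
        // Before repair the assertion was: assertEquals("1/5/2011", ...)
        // A literal-replacement repair updates the expected value so the
        // test again reflects the intended behavior:
        assertEquals("2011-01-05", f.format(2011, 1, 5));
    }
}
```

Whether such a repair is correct remains a judgment call, since it encodes the new behavior as the intended one; this is consistent with the abstract's framing of ReAssert as suggesting repairs for developers to review rather than applying them silently.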
|
Dantas, Francisco |
ICSE '11-DOCTORALPRESENT: "Reuse vs. Maintainability: ..."
Reuse vs. Maintainability: Revealing the Impact of Composition Code Properties
Francisco Dantas (PUC-Rio, Brazil) Over the last few years, several composition mechanisms have emerged to improve program modularity. Even though these mechanisms vary widely in their notation and semantics, they all promote a shift in the way programs are structured. They promote expressive means to define the composition of two or more reusable modules. However, given the complexity of the composition code, its actual effects on software quality are not well understood. This PhD research aims at investigating the impact of emerging composition mechanisms on the simultaneous satisfaction of software reuse and maintainability. In order to perform this analysis, we intend to define a set of composition-driven metrics and compare their efficacy with traditional modularity metrics. Finally, we plan to derive guidelines on how to use new composition mechanisms to maximize reuse and stability of software modules. @InProceedings{ICSE11p1082, author = {Francisco Dantas}, title = {Reuse vs. Maintainability: Revealing the Impact of Composition Code Properties}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1082--1085}, doi = {}, year = {2011}, } |
|
Davies, Julius |
ICSE '11-SRC: "Measuring Subversions: Security ..."
Measuring Subversions: Security and Legal Risk in Reused Software Artifacts
Julius Davies (University of Victoria, Canada) A software system often includes a set of library dependencies and other software artifacts necessary for the system's proper operation. However, long-term maintenance problems related to reused software can gradually emerge over the lifetime of the deployed system. In our exploratory study we propose a manual technique to locate documented security and legal problems in a set of reused software artifacts. We evaluate our technique with a case study of 81 Java libraries found in a proprietary e-commerce web application. Using our approach we discovered both a potential legal problem with one library, and a second library that was affected by a known security vulnerability. These results support our larger thesis: software reuse entails long-term maintenance costs. In future work we will strive to develop automated techniques by which developers, managers, and other software stakeholders can measure, address, and minimize these costs over the lifetimes of their software assets. @InProceedings{ICSE11p1149, author = {Julius Davies}, title = {Measuring Subversions: Security and Legal Risk in Reused Software Artifacts}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1149--1151}, doi = {}, year = {2011}, } |
|
D'Avila Garcez, Artur |
ICSE '11-NIER: "Learning to Adapt Requirements ..."
Learning to Adapt Requirements Specifications of Evolving Systems (NIER Track)
Rafael V. Borges, Artur d'Avila Garcez, Luis C. Lamb, and Bashar Nuseibeh (City University London, UK; UFRGS, Brazil; The Open University, UK; Lero, Ireland) We propose a novel framework for adapting and evolving software requirements models. The framework uses model checking and machine learning techniques for verifying properties and evolving model descriptions. The paper offers two novel contributions and a preliminary evaluation and application of the ideas presented. First, the framework is capable of coping with errors in the specification process so that performance degrades gracefully. Second, the framework can also be used to re-engineer a model from examples only, when an initial model is not available. We provide a preliminary evaluation of our framework by applying it to a Pump System case study, and integrate our prototype tool with the NuSMV model checker. We show how the tool integrates verification and evolution of abstract models, and also how it is capable of re-engineering partial models given examples from an existing system. @InProceedings{ICSE11p856, author = {Rafael V. Borges and Artur d'Avila Garcez and Luis C. Lamb and Bashar Nuseibeh}, title = {Learning to Adapt Requirements Specifications of Evolving Systems (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {856--859}, doi = {}, year = {2011}, } |
|
De Caso, Guido |
ICSE '11: "Program Abstractions for Behaviour ..."
Program Abstractions for Behaviour Validation
Guido de Caso, Víctor Braberman, Diego Garbervetsky, and Sebastián Uchitel (Universidad de Buenos Aires, Argentina; Imperial College London, UK) @InProceedings{ICSE11p381, author = {Guido de Caso and Víctor Braberman and Diego Garbervetsky and Sebastián Uchitel}, title = {Program Abstractions for Behaviour Validation}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {381--390}, doi = {}, year = {2011}, } |
|
De Halleux, Jonathan |
ICSE '11: "Precise Identification of ..."
Precise Identification of Problems for Structural Test Generation
Xusheng Xiao, Tao Xie, Nikolai Tillmann, and Jonathan de Halleux (North Carolina State University, USA; Microsoft Research, USA) An important goal of software testing is to achieve high structural coverage. To reduce the manual effort of producing such high-covering test inputs, testers or developers can employ tools built on automated structural test-generation approaches. Although these tools can easily achieve high structural coverage for simple programs, when they are applied on complex programs in practice, these tools face various problems, such as (1) the external-method-call problem (EMCP), where tools cannot deal with method calls to external libraries; and (2) the object-creation problem (OCP), where tools fail to generate method-call sequences that produce desirable object states. Since these tools are currently not powerful enough to deal with these problems in testing complex programs in practice, we propose cooperative developer testing, where developers provide guidance to help tools achieve higher structural coverage. To reduce the effort of developers in providing guidance to tools, in this paper, we propose a novel approach, called Covana, which precisely identifies and reports problems that prevent the tools from achieving high structural coverage, primarily by determining whether branch statements containing not-covered branches have data dependencies on problem candidates. We provide two techniques to instantiate Covana to identify EMCPs and OCPs. Finally, we conduct evaluations on two open source projects to show the effectiveness of Covana in identifying EMCPs and OCPs. @InProceedings{ICSE11p611, author = {Xusheng Xiao and Tao Xie and Nikolai Tillmann and Jonathan de Halleux}, title = {Precise Identification of Problems for Structural Test Generation}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {611--620}, doi = {}, year = {2011}, } ICSE '11-DEMOS: "Covana: Precise Identification ..." Covana: Precise Identification of Problems in Pex Xusheng Xiao, Tao Xie, Nikolai Tillmann, and Jonathan de Halleux (North Carolina State University, USA; Microsoft Research, USA) Achieving high structural coverage is an important goal of software testing. Instead of manually producing high-covering test inputs that achieve high structural coverage, testers or developers can employ tools built based on automated test-generation approaches, such as Pex, to automatically generate such test inputs. Although these tools can easily generate test inputs that achieve high structural coverage for simple programs, when applied on complex programs in practice, these tools face various problems, such as the problems of dealing with method calls to external libraries or generating method-call sequences to produce desired object states. Since these tools are currently not powerful enough to deal with these various problems in testing complex programs, we propose cooperative developer testing, where developers provide guidance to help tools achieve higher structural coverage. In this demo, we present Covana, a tool that precisely identifies and reports problems that prevent Pex from achieving high structural coverage. Covana identifies problems primarily by determining whether branch statements containing not-covered branches have data dependencies on problem candidates. 
@InProceedings{ICSE11p1004, author = {Xusheng Xiao and Tao Xie and Nikolai Tillmann and Jonathan de Halleux}, title = {Covana: Precise Identification of Problems in Pex}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1004--1006}, doi = {}, year = {2011}, } |
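As a concrete illustration of the external-method-call problem (EMCP) described in the two entries above, consider the following hypothetical Java sketch (all names invented for illustration): the not-covered branch has a data dependency on the return value of a call into the file system, which a structural test generator cannot model or control.

```java
import java.io.File;

public class ConfigLoader {
    // The branch below depends on File.exists(), a call whose result is
    // determined by the external environment rather than by the method's
    // inputs. A Covana-style analysis would report an EMCP here because
    // the not-covered branch is data-dependent on the external call.
    public String load(String path) {
        File f = new File(path);
        if (f.exists()) {        // hard for a test generator to cover
            return "loaded:" + path;
        }
        return "missing";
    }
}
```

Developer guidance, in the cooperative-testing sense above, could then take the form of a mock or a factory method that makes the environment-dependent condition controllable.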
|
Deissenboeck, Florian |
ICSE '11-DEMOS: "The Quamoco Tool Chain for ..."
The Quamoco Tool Chain for Quality Modeling and Assessment
Florian Deissenboeck, Lars Heinemann, Markus Herrmannsdoerfer, Klaus Lochmann, and Stefan Wagner (TU München, Germany) Continuous quality assessment is crucial for the long-term success of evolving software. On the one hand, code analysis tools automatically supply quality indicators, but do not provide a complete overview of software quality. On the other hand, quality models define abstract characteristics that influence quality, but are not operationalized. Currently, no tool chain exists that integrates code analysis tools with quality models. To alleviate this, the Quamoco project provides a tool chain to both define and assess software quality. The tool chain consists of a quality model editor and an integration with the quality assessment toolkit ConQAT. Using the editor, we can define quality models ranging from abstract characteristics down to operationalized measures. From the quality model, a ConQAT configuration can be generated that can be used to automatically assess the quality of a software system. @InProceedings{ICSE11p1007, author = {Florian Deissenboeck and Lars Heinemann and Markus Herrmannsdoerfer and Klaus Lochmann and Stefan Wagner}, title = {The Quamoco Tool Chain for Quality Modeling and Assessment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1007--1009}, doi = {}, year = {2011}, } |
|
Dekhtyar, Alex |
ICSE '11-NIER: "Towards Overcoming Human Analyst ..."
Towards Overcoming Human Analyst Fallibility in the Requirements Tracing Process (NIER Track)
David Cuddeback, Alex Dekhtyar, Jane Huffman Hayes, Jeff Holden, and Wei-Keat Kong (California Polytechnic State University, USA; University of Kentucky, USA) Our research group recently discovered that human analysts, when asked to validate candidate traceability matrices, produce predictably imperfect results, in some cases less accurate than the starting candidate matrices. This discovery radically changes our understanding of how to design a fast, accurate and certifiable tracing process that can be implemented as part of software assurance activities. We present our vision for the new approach to achieving this goal. Further, we posit that human fallibility may impact other software engineering activities involving decision support tools. @InProceedings{ICSE11p860, author = {David Cuddeback and Alex Dekhtyar and Jane Huffman Hayes and Jeff Holden and Wei-Keat Kong}, title = {Towards Overcoming Human Analyst Fallibility in the Requirements Tracing Process (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {860--863}, doi = {}, year = {2011}, } |
|
De Lucia, Andrea |
ICSE '11-DEMOS: "CodeTopics: Which Topic am ..."
CodeTopics: Which Topic am I Coding Now?
Malcom Gethers, Trevor Savage, Massimiliano Di Penta, Rocco Oliveto, Denys Poshyvanyk, and Andrea De Lucia (College of William and Mary, USA; CMU, USA; University of Sannio, Italy; University of Molise, Italy; University of Salerno, Italy) Recent studies indicated that showing the similarity between the source code being developed and related high-level artifacts (HLAs), such as requirements, helps developers improve the quality of source code identifiers. In this paper, we present CodeTopics, an Eclipse plug-in that in addition to showing the similarity between source code and HLAs also highlights to what extent the code under development covers topics described in HLAs. Such views complement information derived by showing only the similarity between source code and HLAs, helping (i) developers identify functionality that is not yet implemented and (ii) newcomers comprehend source code artifacts by showing them the topics that these artifacts relate to. @InProceedings{ICSE11p1034, author = {Malcom Gethers and Trevor Savage and Massimiliano Di Penta and Rocco Oliveto and Denys Poshyvanyk and Andrea De Lucia}, title = {CodeTopics: Which Topic am I Coding Now?}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1034--1036}, doi = {}, year = {2011}, } ICSE '11-NIER: "Identifying Method Friendships ..." Identifying Method Friendships to Remove the Feature Envy Bad Smell (NIER Track) Rocco Oliveto, Malcom Gethers, Gabriele Bavota, Denys Poshyvanyk, and Andrea De Lucia (University of Molise, Italy; College of William and Mary, USA; University of Salerno, Italy) We propose a novel approach to identify Move Method refactoring opportunities and remove the Feature Envy bad smell from source code. The proposed approach analyzes both structural and conceptual relationships between methods and uses Relational Topic Models (RTM) to identify sets of methods that share several responsibilities, i.e., "friend methods". The analysis of method friendships of a given method can be used to pinpoint the target class (envied class) to which the method should be moved. The results of a preliminary empirical evaluation indicate that the proposed approach provides accurate and meaningful refactoring opportunities. @InProceedings{ICSE11p820, author = {Rocco Oliveto and Malcom Gethers and Gabriele Bavota and Denys Poshyvanyk and Andrea De Lucia}, title = {Identifying Method Friendships to Remove the Feature Envy Bad Smell (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {820--823}, doi = {}, year = {2011}, } |
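CodeTopics presupposes a way to measure textual similarity between source code and high-level artifacts. As background only, a common IR primitive for such comparisons is cosine similarity over term-frequency vectors; the generic Java sketch below is an assumption for illustration, not the plug-in's actual internals.

```java
import java.util.Map;

// Generic cosine similarity over term-frequency maps, the kind of IR
// building block often used to compare source code text against
// high-level artifacts. Illustrative only.
public class Cosine {
    static double similarity(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, normA = 0, normB = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
            normA += e.getValue() * e.getValue();
        }
        for (int v : b.values()) normB += v * v;
        return (normA == 0 || normB == 0)
                ? 0
                : dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}
```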
|
Dempsey, Mitch |
ICSE '11-DEMOS: "A Demonstration of a Distributed ..."
A Demonstration of a Distributed Software Design Sketching Tool
Nicolas Mangano, Mitch Dempsey, Nicolas Lopez, and André van der Hoek (UC Irvine, USA) Software designers frequently sketch when they design, particularly during the early phases of exploration of a design problem and its solution. In so doing, they shun formal design tools, the reason being that such tools impose conformity and precision prematurely. Sketching on the other hand is a highly fluid and flexible way of expressing oneself. In this paper, we present Calico, a sketch-based distributed software design tool that supports software designers with a variety of features that improve over the use of just pen-and-paper or a regular whiteboard, and are tailored specifically for software design. Calico is meant to be used on electronic whiteboards or tablets, and provides for rapid creation and manipulation of design content by sets of developers who can collaborate in a distributed manner. @InProceedings{ICSE11p1028, author = {Nicolas Mangano and Mitch Dempsey and Nicolas Lopez and André van der Hoek}, title = {A Demonstration of a Distributed Software Design Sketching Tool}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1028--1030}, doi = {}, year = {2011}, } |
|
Desmond, Michael |
ICSE '11-NIER: "Sketching Tools for Ideation ..."
Sketching Tools for Ideation (NIER Track)
Rachel Bellamy, Michael Desmond, Jacquelyn Martino, Paul Matchen, Harold Ossher, John Richards, and Cal Swart (IBM Research Watson, USA) Sketching facilitates design in the exploration of ideas about concrete objects and abstractions. In fact, throughout the software engineering process when grappling with new ideas, people reach for a pen and start sketching. While pen and paper work well, digital media can provide additional features to benefit the sketcher. Digital support will only be successful, however, if it does not detract from the core sketching experience. Based on research that defines characteristics of sketches and sketching, this paper offers three preliminary tool examples. Each example is intended to enable sketching while maintaining its characteristic experience. @InProceedings{ICSE11p808, author = {Rachel Bellamy and Michael Desmond and Jacquelyn Martino and Paul Matchen and Harold Ossher and John Richards and Cal Swart}, title = {Sketching Tools for Ideation (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {808--811}, doi = {}, year = {2011}, } ICSE '11-NIER: "Blending Freeform and Managed ..." Blending Freeform and Managed Information in Tables (NIER Track) Nicolas Mangano, Harold Ossher, Ian Simmonds, Matthew Callery, Michael Desmond, and Sophia Krasikov (UC Irvine, USA; IBM Research Watson, USA) Tables are an important tool used by business analysts engaged in early requirements activities (in fact it is safe to say that tables appeal to many other types of user, in a variety of activities and domains). Business analysts typically use the tables provided by office tools. These tables offer great flexibility, but no underlying model, and hence no consistency management, multiple views or other advantages familiar to the users of modeling tools. Modeling tools, however, are usually too rigid for business analysts. In this paper we present a flexible modeling approach to tables, which combines the advantages of both office and modeling tools. Freeform information can co-exist with information managed by an underlying model, and an incremental formalization approach allows each item of information to transition fluidly between freeform and managed. As the model evolves, it is used to guide the user in the process of formalizing any remaining freeform information. The model therefore helps users without restricting them. Early feedback is described, and the approach is analyzed briefly in terms of cognitive dimensions. @InProceedings{ICSE11p840, author = {Nicolas Mangano and Harold Ossher and Ian Simmonds and Matthew Callery and Michael Desmond and Sophia Krasikov}, title = {Blending Freeform and Managed Information in Tables (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {840--843}, doi = {}, year = {2011}, } |
|
De Souza, Cleidson |
ICSE '11-WORKSHOPS: "Workshop on Cooperative and ..."
Workshop on Cooperative and Human Aspects of Software Engineering (CHASE 2011)
Marcelo Cataldo, Cleidson de Souza, Yvonne Dittrich, Rashina Hoda, and Helen Sharp (Robert Bosch Research, USA; IBM Research, Brazil; IT University of Copenhagen, Denmark; Victoria University of Wellington, New Zealand; The Open University, UK) Software is created by people for people working in varied environments, under various conditions. Thus understanding cooperative and human aspects of software development is crucial to comprehend how methods and tools are used, and thereby improve the creation and maintenance of software. Over the years, both researchers and practitioners have recognized the need to study and understand these aspects. Despite recognizing this, researchers in cooperative and human aspects have no clear place to meet and are dispersed in different research conferences and areas. The goal of this workshop is to provide a forum for discussing high quality research on human and cooperative aspects of software engineering. We aim at providing both a meeting place for the growing community and the possibility for researchers interested in joining the field to present their work in progress and get an overview of the field. @InProceedings{ICSE11p1188, author = {Marcelo Cataldo and Cleidson de Souza and Yvonne Dittrich and Rashina Hoda and Helen Sharp}, title = {Workshop on Cooperative and Human Aspects of Software Engineering (CHASE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1188--1189}, doi = {}, year = {2011}, } |
|
Deursen, Arie van |
ICSE '11: "Supporting Professional Spreadsheet ..."
Supporting Professional Spreadsheet Users by Generating Leveled Dataflow Diagrams
Felienne Hermans, Martin Pinzger, and Arie van Deursen (Delft University of Technology, Netherlands) Thanks to their flexibility and intuitive programming model, spreadsheets are widely used in industry, often for business-critical applications. Similar to software developers, professional spreadsheet users demand support for maintaining and transferring their spreadsheets. In this paper, we first study the problems and information needs of professional spreadsheet users by means of a survey conducted at a large financial company. Based on these needs, we then present an approach that extracts this information from spreadsheets and presents it in a compact and easy-to-understand way, with leveled dataflow diagrams. Our approach comes with three different views on the dataflow that allow the user to analyze the dataflow diagrams in a top-down fashion. To evaluate the usefulness of the proposed approach, we conducted a series of interviews as well as nine case studies in an industrial setting. The results of the evaluation clearly indicate the demand for and usefulness of our approach in easing the understanding of spreadsheets. @InProceedings{ICSE11p451, author = {Felienne Hermans and Martin Pinzger and Arie van Deursen}, title = {Supporting Professional Spreadsheet Users by Generating Leveled Dataflow Diagrams}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {451--460}, doi = {}, year = {2011}, } ICSE '11-WORKSHOPS: "Second International Workshop ..." Second International Workshop on Web 2.0 for Software Engineering (Web2SE 2011) Christoph Treude, Margaret-Anne Storey, Arie van Deursen, Andrew Begel, and Sue Black (University of Victoria, Canada; Delft University of Technology, Netherlands; Microsoft Research, USA; University College London, UK) Social software is built around an "architecture of participation" where user data is aggregated as a side-effect of using Web 2.0 applications. Web 2.0 implies that processes and tools are socially open, and that content can be used in several different contexts. Web 2.0 tools and technologies support interactive information sharing, data interoperability and user centered design. For instance, wikis, blogs, tags and feeds help us organize, manage and categorize content in an informal and collaborative way. Some of these technologies have made their way into collaborative software development processes and development platforms. These processes and environments are just scratching the surface of what can be done by incorporating Web 2.0 approaches and technologies into collaborative software development. Web 2.0 opens up new opportunities for developers to form teams and collaborate, but it also comes with challenges for developers and researchers. Web2SE aims to improve our understanding of how Web 2.0, manifested in technologies such as mashups or dashboards, can change the culture of collaborative software development. @InProceedings{ICSE11p1222, author = {Christoph Treude and Margaret-Anne Storey and Arie van Deursen and Andrew Begel and Sue Black}, title = {Second International Workshop on Web 2.0 for Software Engineering (Web2SE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1222--1223}, doi = {}, year = {2011}, } |
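The starting point for dataflow extraction of the kind described above is recovering cell-level dependencies from spreadsheet formulas, which can then be lifted to worksheet-level (leveled) diagrams. The toy Java sketch below illustrates only that first step under simplifying assumptions (single-sheet references, no range expansion); it is not the authors' implementation.

```java
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts cell references from a formula string. Note that a range like
// A1:A3 yields only its endpoints here; a real extractor would expand it.
public class FormulaDeps {
    private static final Pattern CELL = Pattern.compile("[A-Z]+[0-9]+");

    static Set<String> referencedCells(String formula) {
        Set<String> deps = new TreeSet<>();
        Matcher m = CELL.matcher(formula);
        while (m.find()) deps.add(m.group());
        return deps;
    }

    public static void main(String[] args) {
        // B1 = SUM(A1:A2)*C3 induces dataflow edges A1->B1, A2->B1, C3->B1.
        System.out.println(referencedCells("=SUM(A1:A2)*C3")); // [A1, A2, C3]
    }
}
```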
|
Devanbu, Premkumar |
ICSE '11: "Ownership, Experience and ..."
Ownership, Experience and Defects: A Fine-Grained Study of Authorship
Foyzur Rahman and Premkumar Devanbu (UC Davis, USA) Recent research indicates that “people” factors such as ownership, experience, organizational structure, and geographic distribution have a big impact on software quality. Understanding these factors and properly deploying people resources can help managers improve quality outcomes. This paper considers the impact of code ownership and developer experience on software quality. In a large project, a file might be entirely owned by a single developer, or worked on by many. Some previous research indicates that more developers working on a file might lead to more defects. Prior research considered this phenomenon at the level of modules or files, and thus does not tease apart and study the effect of contributions of different developers to each module or file. We exploit a modern version control system to examine this issue at a fine-grained level. Using version history, we examine contributions to code fragments that are actually repaired to fix bugs. Are these code fragments “implicated” in bugs the result of contributions from many developers, or from one? Does experience matter? What type of experience? We find that implicated code is more strongly associated with a single developer’s contribution; our findings also indicate that an author’s specialized experience in the target file is more important than general experience. Our findings suggest that quality control efforts could be profitably targeted at changes made by single developers with limited prior experience on that file. @InProceedings{ICSE11p491, author = {Foyzur Rahman and Premkumar Devanbu}, title = {Ownership, Experience and Defects: A Fine-Grained Study of Authorship}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {491--500}, doi = {}, year = {2011}, } |
|
Dhoolia, Pankaj |
ICSE '11-DEMOS: "Using MATCON to Generate CASE ..."
Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications
Elad Fein, Natalia Razinkov, Shlomit Shachor, Pietro Mazzoleni, Sweefen Goh, Richard Goodwin, Manisha Bhandar, Shyh-Kwei Chen, Juhnyoung Lee, Vibha Singhal Sinha, Senthil Mani, Debdoot Mukherjee, Biplav Srivastava, and Pankaj Dhoolia (IBM Research Haifa, Israel; IBM Research Watson, USA; IBM Research, India) The complex process of adapting pre-packaged applications, such as Oracle or SAP, to an organization’s needs is full of challenges. Although detailed, structured, and well-documented methods govern this process, the consulting team implementing the method must spend a huge amount of manual effort to make sure the guidelines of the method are followed as intended by the method author. MATCON breaks down the method content, documents, templates, and work products into reusable objects, and enables them to be cataloged and indexed so these objects can be easily found and reused on subsequent projects. By using models and meta-modeling the reusable methods, we automatically produce a CASE tool to apply these methods, thereby guiding consultants through this complex process. The resulting tool helps consultants create the method deliverables for the initial phases of large customization projects. Our MATCON output, referred to as Consultant Assistant, has shown significant savings in training costs, a 20–30% improvement in productivity, and positive results in large Oracle and SAP implementations. @InProceedings{ICSE11p1016, author = {Elad Fein and Natalia Razinkov and Shlomit Shachor and Pietro Mazzoleni and Sweefen Goh and Richard Goodwin and Manisha Bhandar and Shyh-Kwei Chen and Juhnyoung Lee and Vibha Singhal Sinha and Senthil Mani and Debdoot Mukherjee and Biplav Srivastava and Pankaj Dhoolia}, title = {Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1016--1018}, doi = {}, year = {2011}, } |
|
Diehl, Stephan |
ICSE '11-NIER: "CREWW - Collaborative Requirements ..."
CREWW - Collaborative Requirements Engineering with Wii-Remotes (NIER Track)
Felix Bott, Stephan Diehl, and Rainer Lutz (University of Trier, Germany) In this paper, we present CREWW, a tool for co-located, collaborative CRC modeling and use case analysis. In CRC sessions role play is used to involve all stakeholders when determining whether the current software model completely and consistently captures the modeled use case. In this activity it quickly becomes difficult to keep track of which class is currently active or along which path the current state was reached. CREWW was designed to alleviate these and other weaknesses of the traditional approach. @InProceedings{ICSE11p852, author = {Felix Bott and Stephan Diehl and Rainer Lutz}, title = {CREWW - Collaborative Requirements Engineering with Wii-Remotes (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {852--855}, doi = {}, year = {2011}, } |
|
Dietl, Werner |
ICSE '11-SEIP: "Building and Using Pluggable ..."
Building and Using Pluggable Type-Checkers
Werner Dietl, Stephanie Dietzel, Michael D. Ernst, Kıvanç Muşlu, and Todd W. Schiller (University of Washington, USA) This paper describes practical experience building and using pluggable type-checkers. A pluggable type-checker refines (strengthens) the built-in type system of a programming language. This permits programmers to detect and prevent, at compile time, defects that would otherwise have been manifested as run-time errors. The prevented defects may be generally applicable to all programs, such as null pointer dereferences. Or, an application-specific pluggable type system may be designed for a single application. We built a series of pluggable type checkers using the Checker Framework, and evaluated them on 2 million lines of code, finding hundreds of bugs in the process. We also observed 28 first-year computer science students use a checker to eliminate null pointer errors in their course projects. Along with describing the checkers and characterizing the bugs we found, we report the insights we had throughout the process. Overall, we found that the type checkers were easy to write, easy for novices to productively use, and effective in finding real bugs and verifying program properties, even for widely tested and used open source projects. @InProceedings{ICSE11p681, author = {Werner Dietl and Stephanie Dietzel and Michael D. Ernst and Kıvanç Muşlu and Todd W. Schiller}, title = {Building and Using Pluggable Type-Checkers}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {681--690}, doi = {}, year = {2011}, } |
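For a flavor of what a pluggable nullness checker enforces, here is a small example using the Checker Framework's nullness annotations. The package names shown are the framework's current ones (an assumption; the releases contemporary with this paper used different package paths).

```java
import org.checkerframework.checker.nullness.qual.NonNull;
import org.checkerframework.checker.nullness.qual.Nullable;

public class Greeter {
    // The Nullness Checker strengthens Java's type system: passing a
    // possibly-null value where @NonNull is expected is rejected at
    // compile time, instead of failing with a NullPointerException at run time.
    static String greet(@NonNull String name) {
        return "Hello, " + name;
    }

    static void demo(@Nullable String input) {
        // Calling greet(input) directly here would be a checker error;
        // the null test refines input's type and makes the call legal.
        if (input != null) {
            System.out.println(greet(input));
        }
    }
}
```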
|
Dietzel, Stephanie |
ICSE '11-SEIP: "Building and Using Pluggable ..."
Building and Using Pluggable Type-Checkers
Werner Dietl, Stephanie Dietzel, Michael D. Ernst, Kıvanç Muşlu, and Todd W. Schiller (University of Washington, USA) This paper describes practical experience building and using pluggable type-checkers. A pluggable type-checker refines (strengthens) the built-in type system of a programming language. This permits programmers to detect and prevent, at compile time, defects that would otherwise have been manifested as run-time errors. The prevented defects may be generally applicable to all programs, such as null pointer dereferences. Or, an application-specific pluggable type system may be designed for a single application. We built a series of pluggable type checkers using the Checker Framework, and evaluated them on 2 million lines of code, finding hundreds of bugs in the process. We also observed 28 first-year computer science students use a checker to eliminate null pointer errors in their course projects. Along with describing the checkers and characterizing the bugs we found, we report the insights we had throughout the process. Overall, we found that the type checkers were easy to write, easy for novices to productively use, and effective in finding real bugs and verifying program properties, even for widely tested and used open source projects. @InProceedings{ICSE11p681, author = {Werner Dietl and Stephanie Dietzel and Michael D. Ernst and Kıvanç Muşlu and Todd W. Schiller}, title = {Building and Using Pluggable Type-Checkers}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {681--690}, doi = {}, year = {2011}, } |
|
Dig, Danny |
ICSE '11: "Transformation for Class Immutability ..."
Transformation for Class Immutability
Fredrik Kjolstad, Danny Dig, Gabriel Acevedo, and Marc Snir (University of Illinois at Urbana-Champaign, USA) It is common for object-oriented programs to have both mutable and immutable classes. Immutable classes simplify programming because the programmer does not have to reason about side-effects. Sometimes programmers write immutable classes from scratch, other times they transform mutable into immutable classes. To transform a mutable class, programmers must find all methods that mutate its transitive state and all objects that can enter or escape the state of the class. The analyses are non-trivial and the rewriting is tedious. Fortunately, this can be automated. We present an algorithm and a tool, Immutator, that enables the programmer to safely transform a mutable class into an immutable class. Two case studies and one controlled experiment show that Immutator is useful. It (i) reduces the burden of making classes immutable, (ii) is fast enough to be used interactively, and (iii) is much safer than manual transformations. @InProceedings{ICSE11p61, author = {Fredrik Kjolstad and Danny Dig and Gabriel Acevedo and Marc Snir}, title = {Transformation for Class Immutability}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {61--70}, doi = {}, year = {2011}, } ICSE '11-DEMOS: "ReAssert: A Tool for Repairing ..." ReAssert: A Tool for Repairing Broken Unit Tests Brett Daniel, Danny Dig, Tihomir Gvero, Vilas Jagannath, Johnston Jiaa, Damion Mitchell, Jurand Nogiec, Shin Hwei Tan, and Darko Marinov (University of Illinois at Urbana-Champaign, USA; EPFL, Switzerland) Successful software systems continuously change their requirements and thus code. When this happens, some existing tests get broken because they no longer reflect the intended behavior, and thus they need to be updated. Repairing broken tests can be time-consuming and difficult. We present ReAssert, a tool that can automatically suggest repairs for broken unit tests. Examples include replacing literal values in tests, changing assertion methods, or replacing one assertion with several. Our experiments show that ReAssert can repair many common test failures and that its suggested repairs match developers’ expectations. @InProceedings{ICSE11p1010, author = {Brett Daniel and Danny Dig and Tihomir Gvero and Vilas Jagannath and Johnston Jiaa and Damion Mitchell and Jurand Nogiec and Shin Hwei Tan and Darko Marinov}, title = {ReAssert: A Tool for Repairing Broken Unit Tests}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1010--1012}, doi = {}, year = {2011}, } ICSE '11-WORKSHOPS: "Fourth Workshop on Refactoring ..." Fourth Workshop on Refactoring Tools (WRT 2011) Danny Dig and Don Batory (University of Illinois at Urbana-Champaign, USA; University of Texas at Austin, USA) Refactoring is the process of applying behavior-preserving transformations to a program with the objective of improving the program’s design. A specific refactoring is identified by a name (e.g., Extract Method), a set of preconditions, and a set of transformations that need to be performed. Tool support for refactoring is essential because checking the preconditions of refactoring often requires nontrivial program analysis, and applying transformations may affect many locations throughout a program. In recent years, the emergence of light-weight programming methodologies such as Extreme Programming has generated a great amount of interest in refactoring, and refactoring support has become a required feature in today’s IDEs. 
This workshop is a continuation of a series of previous workshops (ECOOP 2007, OOPSLA 2008 and 2009 – see http://refactoring.info/WRT) where researchers and developers of refactoring tools can meet, discuss recent ideas and work, and view tool demonstrations. @InProceedings{ICSE11p1202, author = {Danny Dig and Don Batory}, title = {Fourth Workshop on Refactoring Tools (WRT 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1202--1203}, doi = {}, year = {2011}, } |
|
Di Penta, Massimiliano |
ICSE '11-DEMOS: "CodeTopics: Which Topic am ..."
CodeTopics: Which Topic am I Coding Now?
Malcom Gethers, Trevor Savage, Massimiliano Di Penta, Rocco Oliveto, Denys Poshyvanyk, and Andrea De Lucia (College of William and Mary, USA; CMU, USA; University of Sannio, Italy; University of Molise, Italy; University of Salerno, Italy) Recent studies indicated that showing the similarity between the source code being developed and related high-level artifacts (HLAs), such as requirements, helps developers improve the quality of source code identifiers. In this paper, we present CodeTopics, an Eclipse plug-in that in addition to showing the similarity between source code and HLAs also highlights to what extent the code under development covers topics described in HLAs. Such views complement information derived by showing only the similarity between source code and HLAs, helping (i) developers identify functionality that is not yet implemented and (ii) newcomers comprehend source code artifacts by showing them the topics that these artifacts relate to. @InProceedings{ICSE11p1034, author = {Malcom Gethers and Trevor Savage and Massimiliano Di Penta and Rocco Oliveto and Denys Poshyvanyk and Andrea De Lucia}, title = {CodeTopics: Which Topic am I Coding Now?}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1034--1036}, doi = {}, year = {2011}, } ICSE '11-WORKSHOPS: "Workshop on Emerging Trends ..." Workshop on Emerging Trends in Software Metrics (WETSoM 2011) Giulio Concas, Massimiliano Di Penta, Ewan Tempero, and Hongyu Zhang (University of Cagliari, Italy; University of Sannio, Italy; University of Auckland, New Zealand; Tsinghua University, China) The Workshop on Emerging Trends in Software Metrics aims at bringing together researchers and practitioners to discuss the progress of software metrics. The motivation for this workshop is the low impact that software metrics has on current software development. The goals of this workshop are to critically examine the evidence for the effectiveness of existing metrics and to identify new directions for development of software metrics. @InProceedings{ICSE11p1224, author = {Giulio Concas and Massimiliano Di Penta and Ewan Tempero and Hongyu Zhang}, title = {Workshop on Emerging Trends in Software Metrics (WETSoM 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1224--1225}, doi = {}, year = {2011}, } ICSE '11-WORKSHOPS: "Sixth International Workshop ..." Sixth International Workshop on Traceability in Emerging Forms of Software Engineering (TEFSE 2011) Denys Poshyvanyk, Massimiliano Di Penta, and Huzefa Kagdi (College of William and Mary, USA; University of Sannio, Italy; Winston-Salem State University, USA) The Sixth International Workshop on Traceability in Emerging Forms of Software Engineering (TEFSE 2011) will bring together researchers and practitioners to examine the challenges of recovering and maintaining traceability for the myriad forms of software engineering artifacts, ranging from user needs to models to source code. The objective of the 6th edition of TEFSE is to build on the work the traceability research community has completed in identifying the open traceability challenges. In particular, it is intended to be a working event focused on discussing the main problems related to software artifact traceability and proposing possible solutions for such problems. 
Moreover, the workshop also aims at identifying key issues concerning the importance of maintaining the traceability information during software development, to further improve the cooperation between academia and industry and to facilitate technology transfer. @InProceedings{ICSE11p1214, author = {Denys Poshyvanyk and Massimiliano Di Penta and Huzefa Kagdi}, title = {Sixth International Workshop on Traceability in Emerging Forms of Software Engineering (TEFSE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1214--1215}, doi = {}, year = {2011}, } |
|
D'Ippolito, Nicolás |
ICSE '11: "Synthesis of Live Behaviour ..."
Synthesis of Live Behaviour Models for Fallible Domains
Nicolás D'Ippolito, Víctor Braberman, Nir Piterman, and Sebastián Uchitel (Imperial College London, UK; Universidad de Buenos Aires, Argentina; University of Leicester, UK) We revisit synthesis of live controllers for event-based operational models. We remove one aspect of an idealised problem domain by allowing failures of controller actions to be integrated into the environment model. Classical treatment of failures through strong fairness leads to a very high computational complexity and may be insufficient for many interesting cases. We identify a realistic stronger fairness condition on the behaviour of failures. We show how to construct controllers satisfying liveness specifications under these fairness conditions. The resulting controllers exhibit the only possible behaviour in the face of the given topology of failures: they keep retrying and never give up. We then identify some well-structuredness conditions on the environment. These conditions ensure that the resulting controller will be eager to satisfy its goals. Furthermore, for environments that satisfy these conditions and have an underlying probabilistic behaviour, the measure of traces that satisfy our fairness condition is 1, giving a characterisation of the kind of domains in which the approach is applicable. @InProceedings{ICSE11p211, author = {Nicolás D'Ippolito and Víctor Braberman and Nir Piterman and Sebastián Uchitel}, title = {Synthesis of Live Behaviour Models for Fallible Domains}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {211--220}, doi = {}, year = {2011}, } |
|
Dittrich, Yvonne |
ICSE '11-WORKSHOPS: "Workshop on Cooperative and ..."
Workshop on Cooperative and Human Aspects of Software Engineering (CHASE 2011)
Marcelo Cataldo, Cleidson de Souza, Yvonne Dittrich, Rashina Hoda, and Helen Sharp (Robert Bosch Research, USA; IBM Research, Brazil; IT University of Copenhagen, Denmark; Victoria University of Wellington, New Zealand; The Open University, UK) Software is created by people for people working in varied environments, under various conditions. Thus understanding cooperative and human aspects of software development is crucial to comprehend how methods and tools are used, and thereby improve the creation and maintenance of software. Over the years, both researchers and practitioners have recognized the need to study and understand these aspects. Despite recognizing this, researchers in cooperative and human aspects have no clear place to meet and are dispersed in different research conferences and areas. The goal of this workshop is to provide a forum for discussing high quality research on human and cooperative aspects of software engineering. We aim at providing both a meeting place for the growing community and the possibility for researchers interested in joining the field to present their work in progress and get an overview of the field. @InProceedings{ICSE11p1188, author = {Marcelo Cataldo and Cleidson de Souza and Yvonne Dittrich and Rashina Hoda and Helen Sharp}, title = {Workshop on Cooperative and Human Aspects of Software Engineering (CHASE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1188--1189}, doi = {}, year = {2011}, } |
|
Dohi, Tadashi |
ICSE '11: "Towards Quantitative Software ..."
Towards Quantitative Software Reliability Assessment in Incremental Development Processes
Tadashi Dohi and Takaji Fujiwara (Hiroshima University, Japan; Fujitsu Quality Laboratory, Japan) Iterative and incremental development is becoming a major development process model in industry, and allows for a good deal of parallelism between development and testing. In this paper we develop a quantitative software reliability assessment method for incremental development processes, based on the familiar non-homogeneous Poisson processes. More specifically, we utilize the software metrics observed in each incremental development and testing, and estimate the associated software reliability measures. In a numerical example with real data from an incremental development project, it is shown that the estimate of software reliability with a specific model can take a realistic value, and that the reliability growth phenomenon can be observed even in the incremental development scheme. @InProceedings{ICSE11p41, author = {Tadashi Dohi and Takaji Fujiwara}, title = {Towards Quantitative Software Reliability Assessment in Incremental Development Processes}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {41--50}, doi = {}, year = {2011}, } |
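For context on the model family: an NHPP software reliability model is characterized by a mean value function m(t), the expected cumulative number of faults detected by test time t. One standard instance, shown here purely as background (the paper's chosen model may differ), is the Goel-Okumoto model:

```latex
% Goel-Okumoto NHPP model: a is the expected total number of faults,
% b is the per-fault detection rate.
\[
  m(t) = a\left(1 - e^{-bt}\right), \qquad
  \lambda(t) = \frac{dm(t)}{dt} = a\,b\,e^{-bt}
\]
% Software reliability over a mission interval of length x after test time t:
\[
  R(x \mid t) = \exp\!\bigl[-\bigl(m(t+x) - m(t)\bigr)\bigr]
\]
```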
|
Dolby, Julian |
ICSE '11: "Refactoring Java Programs ..."
Refactoring Java Programs for Flexible Locking
Max Schäfer, Manu Sridharan, Julian Dolby, and Frank Tip (Oxford University, UK; IBM Research Watson, USA) @InProceedings{ICSE11p71, author = {Max Schäfer and Manu Sridharan and Julian Dolby and Frank Tip}, title = {Refactoring Java Programs for Flexible Locking}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {71--80}, doi = {}, year = {2011}, } ICSE '11: "A Framework for Automated ..." A Framework for Automated Testing of JavaScript Web Applications Shay Artzi, Julian Dolby, Simon Holm Jensen, Anders Møller, and Frank Tip (IBM Research, USA; Aarhus University, Denmark) Current practice in testing JavaScript web applications requires manual construction of test cases, which is difficult and tedious. We present a framework for feedback-directed automated test generation for JavaScript in which execution is monitored to collect information that directs the test generator towards inputs that yield increased coverage. We implemented several instantiations of the framework, corresponding to variations on feedback-directed random testing, in a tool called Artemis. Experiments on a suite of JavaScript applications demonstrate that a simple instantiation of the framework that uses event handler registrations as feedback information produces surprisingly good coverage if enough tests are generated. By also using coverage information and read-write sets as feedback information, a slightly better level of coverage can be achieved, and sometimes with many fewer tests. The generated tests can be used for detecting HTML validity problems and other programming errors. @InProceedings{ICSE11p571, author = {Shay Artzi and Julian Dolby and Simon Holm Jensen and Anders Møller and Frank Tip}, title = {A Framework for Automated Testing of JavaScript Web Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {571--580}, doi = {}, year = {2011}, } |
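The feedback-directed loop the Artemis abstract describes can be reduced to a generic skeleton: generate an input, execute while monitoring coverage, and keep the input only when it contributes new coverage. The Java sketch below is an illustrative reduction with invented names, not the tool's API (Artemis itself targets JavaScript and uses richer feedback such as event-handler registrations and read-write sets).

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class FeedbackDirectedGen {
    // Abstraction of the system under test: running an input reports
    // which branch ids were covered during execution.
    interface Sut { Set<Integer> run(int input); }

    static List<Integer> generate(Sut sut, int budget, Random rnd) {
        Set<Integer> covered = new HashSet<>();
        List<Integer> suite = new ArrayList<>();
        for (int i = 0; i < budget; i++) {
            int input = rnd.nextInt(1000);          // random generation
            Set<Integer> hit = sut.run(input);      // monitored execution
            if (!covered.containsAll(hit)) {        // feedback: new coverage?
                covered.addAll(hit);
                suite.add(input);                   // keep only useful inputs
            }
        }
        return suite;
    }
}
```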
|
Dresselhaus, Bill |
ICSE '11-KEYNOTES: "Exciting New Trends in Design ..."
Exciting New Trends in Design Thinking (Keynote Abstract)
Bill Dresselhaus (DRESSELHAUSgroup Inc., USA/Korea) Design and design thinking are becoming the hot topics and new business processes around the world—yes, business processes! Business schools are adding design thinking courses to their curricula and business professors are writing books on design thinking. Countries like Korea and Singapore are vying to be the leading Asian Design Nations. New, so-called Convergent courses, programs and schools are emerging globally that combine engineering, business and design disciplines and departments into integrated efforts. The Do-It-Yourself (DIY) Design Movement is gaining momentum and the personal discipline of Making things is coming back. DIY Prototyping and Manufacturing are gaining ground and opportunities with new technologies and innovations. User-Generated Design is becoming a common corporate process. Design process and design thinking are being applied cross-functionally to such global issues as clean water and alternative energy. And the old traditional view of design as art and decoration and styling is giving way to a broader and more comprehensive way of thinking and solving human-centered problems by other than just a few elite professionals. In light of all this and more, Bill is excited about the ideas of ubiquitous design education for everyone and DIY design as a universal human experience. He is passionate about an idea Victor Papanek expressed 40 years ago in his seminal book, Design for the Real World: “All that we do, almost all the time, is design, for design is basic to all human activity”. Just as all humans are inherently businesspeople in many ways at many times, we are also all designers in many ways at many times—it is time to believe this and make the best of it. @InProceedings{ICSE11p622, author = {Bill Dresselhaus}, title = {Exciting New Trends in Design Thinking (Keynote Abstract)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {622--622}, doi = {}, year = {2011}, } |
|
Dumitru, Horatiu |
ICSE '11: "On-demand Feature Recommendations ..."
On-demand Feature Recommendations Derived from Mining Public Product Descriptions
Horatiu Dumitru, Marek Gibiec, Negar Hariri, Jane Cleland-Huang, Bamshad Mobasher, Carlos Castro-Herrera, and Mehdi Mirakhorli (DePaul University, USA) We present a recommender system that models and recommends product features for a given domain. Our approach mines product descriptions from publicly available online specifications, utilizes text mining and a novel incremental diffusive clustering algorithm to discover domain-specific features, generates a probabilistic feature model that represents commonalities, variants, and cross-category features, and then uses association rule mining and the k-Nearest Neighbor machine learning strategy to generate product-specific feature recommendations. Our recommender system supports the relatively labor-intensive task of domain analysis, potentially increasing opportunities for re-use, reducing time-to-market, and delivering more competitive software products. The approach is empirically validated against 20 different product categories using thousands of product descriptions mined from a repository of free software applications. @InProceedings{ICSE11p181, author = {Horatiu Dumitru and Marek Gibiec and Negar Hariri and Jane Cleland-Huang and Bamshad Mobasher and Carlos Castro-Herrera and Mehdi Mirakhorli}, title = {On-demand Feature Recommendations Derived from Mining Public Product Descriptions}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {181--190}, doi = {}, year = {2011}, } |
|
Dwivedi, Vishal |
ICSE '11-SEIP: "SORASCS: A Case Study in SOA-based ..."
SORASCS: A Case Study in SOA-based Platform Design for Socio-Cultural Analysis
Bradley Schmerl, David Garlan, Vishal Dwivedi, Michael W. Bigrigg, and Kathleen M. Carley (CMU, USA) An increasingly important class of software-based systems is platforms that permit integration of third-party components, services, and tools. Service-Oriented Architecture (SOA) is one such platform that has been successful in providing integration and distribution in the business domain, and could be effective in other domains (e.g., scientific computing, healthcare, and complex decision making). In this paper, we discuss our application of SOA to provide an integration platform for socio-cultural analysis, a domain that, through models, tries to understand, analyze and predict relationships in large complex social systems. In developing this platform, called SORASCS, we had to overcome issues we believe are generally applicable to any application of SOA within a domain that involves technically naïve users and seeks to establish a sustainable software ecosystem based on a common integration platform. We discuss these issues, the lessons learned about the kinds of problems that occur, and pathways toward a solution. @InProceedings{ICSE11p643, author = {Bradley Schmerl and David Garlan and Vishal Dwivedi and Michael W. Bigrigg and Kathleen M. Carley}, title = {SORASCS: A Case Study in SOA-based Platform Design for Socio-Cultural Analysis}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {643--652}, doi = {}, year = {2011}, } |
|
Egyed, Alexander |
ICSE '11-NIER: "Positive Effects of Utilizing ..."
Positive Effects of Utilizing Relationships Between Inconsistencies for more Effective Inconsistency Resolution (NIER Track)
Alexander Nöhrer, Alexander Reder, and Alexander Egyed (Johannes Kepler University, Austria) @InProceedings{ICSE11p864, author = {Alexander Nöhrer and Alexander Reder and Alexander Egyed}, title = {Positive Effects of Utilizing Relationships Between Inconsistencies for more Effective Inconsistency Resolution (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {864--867}, doi = {}, year = {2011}, } |
|
Elbaum, Sebastian |
ICSE '11: "Refactoring Pipe-like Mashups ..."
Refactoring Pipe-like Mashups for End-User Programmers
Kathryn T. Stolee and Sebastian Elbaum (University of Nebraska-Lincoln, USA) Mashups are becoming increasingly popular as end users are able to easily access, manipulate, and compose data from many web sources. We have observed, however, that mashups tend to suffer from deficiencies that propagate as mashups are reused. To address these deficiencies, we would like to bring some of the benefits of software engineering techniques to the end users creating these programs. In this work, we focus on identifying code smells indicative of the deficiencies we observed in web mashups programmed in the popular Yahoo! Pipes environment. Through an empirical study, we explore the impact of those smells on end-user programmers and observe that users generally prefer mashups without smells. We then introduce refactorings targeting those smells, reducing the complexity of the mashup programs, increasing their abstraction, updating broken data sources and dated components, and standardizing their structures to fit the community development patterns. Our assessment of a large sample of mashups shows that smells are present in 81% of them and that the proposed refactorings can reduce the number of smelly mashups to 16%, illustrating the potential of refactoring to support the thousands of end users programming mashups. @InProceedings{ICSE11p81, author = {Kathryn T. Stolee and Sebastian Elbaum}, title = {Refactoring Pipe-like Mashups for End-User Programmers}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {81--90}, doi = {}, year = {2011}, } |
|
Eranian, Stephane |
ICSE '11: "RACEZ: A Lightweight and Non-Invasive ..."
RACEZ: A Lightweight and Non-Invasive Race Detection Tool for Production Applications
Tianwei Sheng, Neil Vachharajani, Stephane Eranian, Robert Hundt, Wenguang Chen, and Weimin Zheng (Tsinghua University, China; Google Inc., USA) Concurrency bugs, particularly data races, are notoriously difficult to debug and are a significant source of unreliability in multithreaded applications. Many tools to catch data races rely on program instrumentation to obtain memory instruction traces. Unfortunately, this instrumentation introduces significant runtime overhead, is extremely invasive, or has a limited domain of applicability, making these tools unsuitable for many production systems. Consequently, these tools are typically used during application testing where many data races go undetected. This paper proposes RACEZ, a novel race detection mechanism which uses a sampled memory trace collected by the hardware performance monitoring unit rather than invasive instrumentation. The approach introduces only a modest overhead making it usable in production environments. We validate RACEZ using two open source server applications and the PARSEC benchmarks. Our experiments show that RACEZ catches a set of known bugs with reasonable probability while introducing only 2.8% runtime slowdown on average. @InProceedings{ICSE11p401, author = {Tianwei Sheng and Neil Vachharajani and Stephane Eranian and Robert Hundt and Wenguang Chen and Weimin Zheng}, title = {RACEZ: A Lightweight and Non-Invasive Race Detection Tool for Production Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {401--410}, doi = {}, year = {2011}, } |
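To illustrate the bug class RACEZ targets, below is a minimal data race: two threads perform unsynchronized read-modify-write updates on a shared counter, so increments can be lost. RACEZ itself samples memory accesses of native binaries via the hardware performance monitoring unit; Java is used here only to keep this document's sketches in a single language.

```java
public class RacyCounter {
    static int counter = 0; // shared, unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter++; // racy read-modify-write
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Frequently prints less than 2000000 because updates are lost.
        System.out.println(counter);
    }
}
```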
|
Ernst, Michael D. |
ICSE '11: "Inference of Field Initialization ..."
Inference of Field Initialization
Fausto Spoto and Michael D. Ernst (Università di Verona, Italy; University of Washington, USA) A raw object is partially initialized, with only some fields set to legal values. It may violate its object invariants, such as that a given field is non-null. Programs often manipulate partially-initialized objects, but they must do so with care. Furthermore, analyses must be aware of field initialization. For instance, proving the absence of null pointer dereferences or of division by zero, or proving that object invariants are satisfied, requires information about initialization. We present a static analysis that infers a safe over-approximation of the program variables, fields, and array elements that, at run time, might hold raw objects. Our formalization is flow-sensitive and interprocedural, and it considers the exception flow in the analyzed program. We have proved the analysis sound and implemented it in a tool called Julia that computes initialization and nullness information. We have evaluated Julia on over 160K lines of code. We have compared its output to manually-written initialization and nullness information, and to an independently-written type-checking tool that checks initialization and nullness. Julia's output is accurate and useful both to programmers and to static analyses. @InProceedings{ICSE11p231, author = {Fausto Spoto and Michael D. Ernst}, title = {Inference of Field Initialization}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {231--240}, doi = {}, year = {2011}, } ICSE '11: "Always-Available Static and ..." Always-Available Static and Dynamic Feedback Michael Bayne, Richard Cook, and Michael D. Ernst (University of Washington, USA) Developers who write code in a statically typed language are denied the ability to obtain dynamic feedback by executing their code during periods when it fails the static type checker. They are further confined to the static typing discipline during times in the development process where it does not yield the highest productivity. If they opt instead to use a dynamic language, they forgo the many benefits of static typing, including machine-checked documentation, improved correctness and reliability, tool support (such as for refactoring), and better runtime performance. We present a novel approach to giving developers the benefits of both static and dynamic typing, throughout the development process, and without the burden of manually separating their program into statically- and dynamically-typed parts. Our approach, which is intended for temporary use during the development process, relaxes the static type system and provides a semantics for many type-incorrect programs. It defers type errors to run time, or suppresses them if they do not affect runtime semantics. We implemented our approach in a publicly available tool, DuctileJ, for the Java language. In case studies, DuctileJ conferred benefits both during prototyping and during the evolution of existing code. @InProceedings{ICSE11p521, author = {Michael Bayne and Richard Cook and Michael D. Ernst}, title = {Always-Available Static and Dynamic Feedback}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {521--530}, doi = {}, year = {2011}, } ICSE '11-SEIP: "Building and Using Pluggable ..." Building and Using Pluggable Type-Checkers Werner Dietl, Stephanie Dietzel, Michael D. Ernst, Kıvanç Muşlu, and Todd W. Schiller (University of Washington, USA) This paper describes practical experience building and using pluggable type-checkers. 
A pluggable type-checker refines (strengthens) the built-in type system of a programming language. This permits programmers to detect and prevent, at compile time, defects that would otherwise have been manifested as run-time errors. The prevented defects may be generally applicable to all programs, such as null pointer dereferences. Or, an application-specific pluggable type system may be designed for a single application. We built a series of pluggable type checkers using the Checker Framework, and evaluated them on 2 million lines of code, finding hundreds of bugs in the process. We also observed 28 first-year computer science students use a checker to eliminate null pointer errors in their course projects. Along with describing the checkers and characterizing the bugs we found, we report the insights we had throughout the process. Overall, we found that the type checkers were easy to write, easy for novices to productively use, and effective in finding real bugs and verifying program properties, even for widely tested and used open source projects. @InProceedings{ICSE11p681, author = {Werner Dietl and Stephanie Dietzel and Michael D. Ernst and Kıvanç Muşlu and Todd W. Schiller}, title = {Building and Using Pluggable Type-Checkers}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {681--690}, doi = {}, year = {2011}, } |
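To make the notion of a raw object concrete, consider this minimal Java sketch (an illustration, not an example from the paper): the constructor leaks this before the field is assigned, so another method observes the object while its field is still null.

    // Hypothetical example of a "raw" (partially-initialized) object.
    public class Account {
        private final String owner;

        public Account(String owner, Registry registry) {
            registry.register(this);  // leaks a raw 'this': owner is still null here
            this.owner = owner;
        }

        int ownerLength() {
            return owner.length();    // NullPointerException when invoked on a raw object
        }

        public static void main(String[] args) {
            new Account("alice", new Registry());  // throws NullPointerException
        }
    }

    class Registry {
        void register(Account a) {
            a.ownerLength();          // observes the partially-initialized object
        }
    }

And as a sketch of a pluggable type-checker in use, the following assumes the Checker Framework's Nullness Checker (the annotation and processor names come from that framework; the program itself is hypothetical). Compiling with javac -processor org.checkerframework.checker.nullness.NullnessChecker rejects the commented-out dereference at compile time:

    import org.checkerframework.checker.nullness.qual.Nullable;

    public class Greeter {
        static @Nullable String lookup(String key) {
            return key.equals("name") ? "world" : null;
        }

        public static void main(String[] args) {
            @Nullable String s = lookup("name");
            // System.out.println(s.length());   // checker error: s may be null
            if (s != null)
                System.out.println(s.length());  // accepted: s is refined to non-null here
        }
    }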
|
Fant, Julie Street |
ICSE '11-SRC: "Building Domain Specific Software ..."
Building Domain Specific Software Architectures from Software Architectural Design Patterns
Julie Street Fant (George Mason University, USA) Software design patterns are best practice solutions to common software problems. However, applying design patterns in practice can be difficult since design pattern descriptions are general and can be applied at multiple levels of abstraction. In order to address the aforementioned issue, this research focuses on creating a systematic approach to designing domain specific distributed, real-time and embedded (DRE) software from software architectural design patterns. To address variability across a DRE domain, software product line concepts are used to categorize and organize the features and design patterns. The software architectures produced are also validated through design time simulation. This research is applied and validated in the space flight software (FSW) domain. @InProceedings{ICSE11p1152, author = {Julie Street Fant}, title = {Building Domain Specific Software Architectures from Software Architectural Design Patterns}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1152--1154}, doi = {}, year = {2011}, } |
|
Faulk, Stuart |
ICSE '11-WORKSHOPS: "Collaborative Teaching of ..."
Collaborative Teaching of Globally Distributed Software Development: Community Building Workshop (CTGDSD 2011)
Stuart Faulk, Michal Young, David M. Weiss, and Lian Yu (University of Oregon, USA; Iowa State University, USA; Peking University, China) Software engineering project courses where student teams are geographically distributed can effectively simulate the problems of globally distributed software development (DSD). However, this pedagogical model has proven difficult to adopt or sustain. It requires significant pedagogical resources and collaboration infrastructure. Institutionalizing such courses also requires compatible and reliable teaching partners. The purpose of this workshop is to foster a community of international faculty and institutions committed to developing, supporting, and teaching DSD. Foundational materials presented will include pedagogical materials and infrastructure developed and used in teaching DSD courses along with results and lessons learned. Long-range goals include: lowering adoption barriers by providing common pedagogical materials, validated collaboration infrastructure, and a pool of potential teaching partners from around the globe. @InProceedings{ICSE11p1208, author = {Stuart Faulk and Michal Young and David M. Weiss and Lian Yu}, title = {Collaborative Teaching of Globally Distributed Software Development: Community Building Workshop (CTGDSD 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1208--1209}, doi = {}, year = {2011}, } |
|
Feigenspan, Janet |
ICSE '11-DEMOS: "View Infinity: A Zoomable ..."
View Infinity: A Zoomable Interface for Feature-Oriented Software Development
Michael Stengel, Janet Feigenspan, Mathias Frisch, Christian Kästner, Sven Apel, and Raimund Dachselt (University of Magdeburg, Germany; University of Marburg, Germany; University of Passau, Germany) Software product line engineering provides efficient means to develop variable software. To support program comprehension of software product lines (SPLs), we developed View Infinity, a tool that provides seamless and semantic zooming of different abstraction layers of an SPL. First results of a qualitative study with experienced SPL developers are promising and indicate that View Infinity is useful and intuitive to use. @InProceedings{ICSE11p1031, author = {Michael Stengel and Janet Feigenspan and Mathias Frisch and Christian Kästner and Sven Apel and Raimund Dachselt}, title = {View Infinity: A Zoomable Interface for Feature-Oriented Software Development}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1031--1033}, doi = {}, year = {2011}, } |
|
Fein, Elad |
ICSE '11-DEMOS: "Using MATCON to Generate CASE ..."
Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications
Elad Fein, Natalia Razinkov, Shlomit Shachor, Pietro Mazzoleni, Sweefen Goh, Richard Goodwin, Manisha Bhandar, Shyh-Kwei Chen, Juhnyoung Lee, Vibha Singhal Sinha, Senthil Mani, Debdoot Mukherjee, Biplav Srivastava, and Pankaj Dhoolia (IBM Research Haifa, Israel; IBM Research Watson, USA; IBM Research, India) The complex process of adapting pre-packaged applications, such as Oracle or SAP, to an organization’s needs is full of challenges. Although detailed, structured, and well-documented methods govern this process, the consulting team implementing the method must spend a huge amount of manual effort to make sure the guidelines of the method are followed as intended by the method author. MATCON breaks down the method content, documents, templates, and work products into reusable objects, and enables them to be cataloged and indexed so these objects can be easily found and reused on subsequent projects. By using models and meta-modeling the reusable methods, we automatically produce a CASE tool to apply these methods, thereby guiding consultants through this complex process. The resulting tool helps consultants create the method deliverables for the initial phases of large customization projects. Our MATCON output, referred to as Consultant Assistant, has shown significant savings in training costs, a 20–30% improvement in productivity, and positive results in large Oracle and SAP implementations. @InProceedings{ICSE11p1016, author = {Elad Fein and Natalia Razinkov and Shlomit Shachor and Pietro Mazzoleni and Sweefen Goh and Richard Goodwin and Manisha Bhandar and Shyh-Kwei Chen and Juhnyoung Lee and Vibha Singhal Sinha and Senthil Mani and Debdoot Mukherjee and Biplav Srivastava and Pankaj Dhoolia}, title = {Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1016--1018}, doi = {}, year = {2011}, } |
|
Filieri, Antonio |
ICSE '11: "Run-Time Efficient Probabilistic ..."
Run-Time Efficient Probabilistic Model Checking
Antonio Filieri, Carlo Ghezzi, and Giordano Tamburrelli (Politecnico di Milano, Italy) Unpredictable changes continuously affect software systems and may have a severe impact on their quality of service, potentially jeopardizing the system’s ability to meet the desired requirements. Changes may occur in critical components of the system, clients’ operational profiles, requirements, or deployment environments. The adoption of software models and model checking techniques at run time may support automatic reasoning about such changes, detect harmful configurations, and potentially enable appropriate (self-)reactions. However, traditional model checking techniques and tools may not be simply applied as they are at run time, since they hardly meet the constraints imposed by on-the-fly analysis, in terms of execution time and memory occupation. This paper precisely addresses this issue and focuses on reliability models, given in terms of Discrete Time Markov Chains, and probabilistic model checking. It develops a mathematical framework for run-time probabilistic model checking that, given a reliability model and a set of requirements, statically generates a set of expressions, which can be efficiently used at run-time to verify system requirements. An experimental comparison of our approach with existing probabilistic model checkers shows its practical applicability in run-time verification. @InProceedings{ICSE11p341, author = {Antonio Filieri and Carlo Ghezzi and Giordano Tamburrelli}, title = {Run-Time Efficient Probabilistic Model Checking}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {341--350}, doi = {}, year = {2011}, } |
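A worked toy example of the idea, in standard DTMC notation (an illustration, not taken from the paper): suppose a monitored service succeeds with probability p, is retried with probability r, and fails otherwise. The probability of eventually reaching the success state satisfies a linear equation that can be solved symbolically once, at design time:

\[
  \Pr(\lozenge\,\mathit{succ}) \;=\; p + r \cdot \Pr(\lozenge\,\mathit{succ})
  \quad\Longrightarrow\quad
  \Pr(\lozenge\,\mathit{succ}) \;=\; \frac{p}{1-r}, \qquad p, r \ge 0,\; p + r \le 1 .
\]

At run time, re-checking a requirement such as Pr(eventually succ) >= 0.999 then reduces to plugging freshly monitored estimates of p and r into the precomputed expression p/(1-r), instead of re-running a model checker.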
|
Finkelstein, Anthony |
ICSE '11-DEMOS: "StakeSource2.0: Using Social ..."
StakeSource2.0: Using Social Networks of Stakeholders to Identify and Prioritise Requirements
Soo Ling Lim, Daniela Damian, and Anthony Finkelstein (University College London, UK; University of Victoria, Canada) Software projects typically rely on system analysts to conduct requirements elicitation, an approach potentially costly for large projects with many stakeholders and requirements. This paper describes StakeSource2.0, a web-based tool that uses social networks and collaborative filtering, a “crowdsourcing” approach, to identify and prioritise stakeholders and their requirements. @InProceedings{ICSE11p1022, author = {Soo Ling Lim and Daniela Damian and Anthony Finkelstein}, title = {StakeSource2.0: Using Social Networks of Stakeholders to Identify and Prioritise Requirements}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1022--1024}, doi = {}, year = {2011}, } |
|
Fischer, Bernd |
ICSE '11: "Verifying Multi-threaded Software ..."
Verifying Multi-threaded Software using SMT-based Context-Bounded Model Checking
Lucas Cordeiro and Bernd Fischer (University of Southampton, UK) We describe and evaluate three approaches to model check multi-threaded software with shared variables and locks using bounded model checking based on Satisfiability Modulo Theories (SMT) and our modelling of the synchronization primitives of the Pthread library. In the lazy approach, we generate all possible interleavings and call the SMT solver on each of them individually, until we either find a bug, or have systematically explored all interleavings. In the schedule recording approach, we encode all possible interleavings into one single formula and then exploit the high speed of the SMT solvers. In the underapproximation and widening approach, we reduce the state space by abstracting the number of interleavings from the proofs of unsatisfiability generated by the SMT solvers. In all three approaches, we bound the number of context switches allowed among threads in order to reduce the number of interleavings explored. We implemented these approaches in ESBMC, our SMT-based bounded model checker for ANSI-C programs. Our experiments show that ESBMC can analyze larger problems and substantially reduce the verification time compared to state-of-the-art techniques that use iterative context-bounding algorithms or counterexample-guided abstraction refinement. @InProceedings{ICSE11p331, author = {Lucas Cordeiro and Bernd Fischer}, title = {Verifying Multi-threaded Software using SMT-based Context-Bounded Model Checking}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {331--340}, doi = {}, year = {2011}, } |
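To illustrate just the context-bounding idea (ESBMC itself works on ANSI-C and encodes interleavings into SMT formulas; this Java sketch is only a hand-rolled illustration of how a switch bound prunes interleavings):

    import java.util.ArrayList;
    import java.util.List;

    // Enumerate interleavings of two straight-line threads, pruning any schedule
    // that needs more than 'bound' context switches. With two 2-step threads and
    // bound = 2, only 4 of the 6 possible interleavings are explored.
    public class ContextBounded {
        static void interleave(List<String> a, int i, List<String> b, int j,
                               int last, int switches, int bound,
                               List<String> cur, List<List<String>> out) {
            if (i == a.size() && j == b.size()) { out.add(new ArrayList<>(cur)); return; }
            if (i < a.size()) step(a, i, b, j, 1, last, switches, bound, cur, out);
            if (j < b.size()) step(a, i, b, j, 2, last, switches, bound, cur, out);
        }

        static void step(List<String> a, int i, List<String> b, int j, int thread,
                         int last, int switches, int bound,
                         List<String> cur, List<List<String>> out) {
            int s = (last != 0 && last != thread) ? switches + 1 : switches;
            if (s > bound) return;  // prune: schedule exceeds the context-switch bound
            cur.add(thread == 1 ? a.get(i) : b.get(j));
            interleave(a, thread == 1 ? i + 1 : i, b, thread == 2 ? j + 1 : j,
                       thread, s, bound, cur, out);
            cur.remove(cur.size() - 1);
        }

        public static void main(String[] args) {
            List<List<String>> out = new ArrayList<>();
            interleave(List.of("a1", "a2"), 0, List.of("b1", "b2"), 0, 0, 0, 2,
                       new ArrayList<>(), out);
            out.forEach(System.out::println);
        }
    }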
|
Fisher, Karen L. |
ICSE '11-SEIP: "A Case Study of Measuring ..."
A Case Study of Measuring Process Risk for Early Insights into Software Safety
Lucas Layman, Victor R. Basili, Marvin V. Zelkowitz, and Karen L. Fisher (Fraunhofer CESE, USA; University of Maryland, USA; NASA Goddard Spaceflight Center, USA) In this case study, we examine software safety risk in three flight hardware systems in NASA’s Constellation spaceflight program. We applied our Technical and Process Risk Measurement (TPRM) methodology to the Constellation hazard analysis process to quantify the technical and process risks involving software safety in the early design phase of these projects. We analyzed 154 hazard reports and collected metrics to measure the prevalence of software in hazards and the specificity of descriptions of software causes of hazardous conditions. We found that for 49-70% of the 154 hazardous conditions, software was a potential cause or was involved in the prevention of the hazardous condition. We also found that 12-17% of the 2013 hazard causes involved software, and that 23-29% of all causes had a software control. The application of the TPRM methodology identified process risks in the application of the hazard analysis process itself that may lead to software safety risk. @InProceedings{ICSE11p623, author = {Lucas Layman and Victor R. Basili and Marvin V. Zelkowitz and Karen L. Fisher}, title = {A Case Study of Measuring Process Risk for Early Insights into Software Safety}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {623--632}, doi = {}, year = {2011}, } |
|
Fokaefs, Marios |
ICSE '11-DEMOS: "JDeodorant: Identification ..."
JDeodorant: Identification and Application of Extract Class Refactorings
Marios Fokaefs, Nikolaos Tsantalis, Eleni Stroulia, and Alexander Chatzigeorgiou (University of Alberta, Canada; University of Macedonia, Greece) Evolutionary changes in object-oriented systems can result in large, complex classes, known as “God Classes”. In this paper, we present a tool, developed as part of the JDeodorant Eclipse plugin, that can recognize opportunities for extracting cohesive classes from “God Classes” and automatically apply the refactoring chosen by the developer. @InProceedings{ICSE11p1037, author = {Marios Fokaefs and Nikolaos Tsantalis and Eleni Stroulia and Alexander Chatzigeorgiou}, title = {JDeodorant: Identification and Application of Extract Class Refactorings}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1037--1039}, doi = {}, year = {2011}, } |
|
Foster, Howard |
ICSE '11-WORKSHOPS: "Sixth International Workshop ..."
Sixth International Workshop on Automation of Software Test (AST 2011)
Howard Foster, Antonia Bertolino, and J. Jenny Li (City University London, UK; ISTI-CNR, Italy; Avaya Research Labs, USA) The Sixth International Workshop on Automation of Software Test (AST 2011) is associated with the 33rd International Conference on Software Engineering (ICSE 2011). This edition of AST was focused on the special theme of Software Design and the Automation of Software Test and authors were encouraged to submit work in this area. The workshop covers two days with presentations of regular research papers, industrial case studies and experience reports. The workshop also aims to have extensive discussions on collaborative solutions in the form of charette sessions. This paper summarizes the organization of the workshop, the special theme, as well as the sessions. @InProceedings{ICSE11p1216, author = {Howard Foster and Antonia Bertolino and J. Jenny Li}, title = {Sixth International Workshop on Automation of Software Test (AST 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1216--1217}, doi = {}, year = {2011}, } |
|
Frisch, Mathias |
ICSE '11-DEMOS: "View Infinity: A Zoomable ..."
View Infinity: A Zoomable Interface for Feature-Oriented Software Development
Michael Stengel, Janet Feigenspan, Mathias Frisch, Christian Kästner, Sven Apel, and Raimund Dachselt (University of Magdeburg, Germany; University of Marburg, Germany; University of Passau, Germany) Software product line engineering provides efficient means to develop variable software. To support program comprehension of software product lines (SPLs), we developed View Infinity, a tool that provides seamless and semantic zooming of different abstraction layers of an SPL. First results of a qualitative study with experienced SPL developers are promising and indicate that View Infinity is useful and intuitive to use. @InProceedings{ICSE11p1031, author = {Michael Stengel and Janet Feigenspan and Mathias Frisch and Christian Kästner and Sven Apel and Raimund Dachselt}, title = {View Infinity: A Zoomable Interface for Feature-Oriented Software Development}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1031--1033}, doi = {}, year = {2011}, } |
|
Fu, Chen |
ICSE '11: "Portfolio: Finding Relevant ..."
Portfolio: Finding Relevant Functions and Their Usages
Collin McMillan, Mark Grechanik, Denys Poshyvanyk, Qing Xie, and Chen Fu (College of William and Mary, USA; Accenture Technology Lab, USA) Different studies show that programmers are more interested in finding definitions of functions and their uses than variables, statements, or arbitrary code fragments [30, 29, 31]. Therefore, programmers require support in finding relevant functions and determining how those functions are used. Unfortunately, existing code search engines do not provide enough of this support to developers, thus reducing the effectiveness of code reuse. We provide this support to programmers in a code search system called Portfolio that retrieves and visualizes relevant functions and their usages. We have built Portfolio using a combination of models that address the surfing behavior of programmers and the sharing of related concepts among functions. We conducted an experiment with 49 professional programmers to compare Portfolio to Google Code Search and Koders using a standard methodology. The results show with strong statistical significance that users find more relevant functions with higher precision with Portfolio than with Google Code Search and Koders. @InProceedings{ICSE11p111, author = {Collin McMillan and Mark Grechanik and Denys Poshyvanyk and Qing Xie and Chen Fu}, title = {Portfolio: Finding Relevant Functions and Their Usages}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {111--120}, doi = {}, year = {2011}, } ICSE '11-DEMOS: "Portfolio: A Search Engine ..." Portfolio: A Search Engine for Finding Functions and Their Usages Collin McMillan, Mark Grechanik, Denys Poshyvanyk, Qing Xie, and Chen Fu (College of William and Mary, USA; University of Illinois at Chicago, USA; Accenture Technology Labs, USA) In this demonstration, we present a code search system called Portfolio that retrieves and visualizes relevant functions and their usages. We will show how chains of relevant functions and their usages can be visualized to users in response to their queries. @InProceedings{ICSE11p1043, author = {Collin McMillan and Mark Grechanik and Denys Poshyvanyk and Qing Xie and Chen Fu}, title = {Portfolio: A Search Engine for Finding Functions and Their Usages}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1043--1045}, doi = {}, year = {2011}, } |
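The surfing-behavior model the abstract mentions suggests a random-walk ranking over the call graph; PageRank is the textbook instance of such a model. The sketch below is an assumption-laden illustration (toy graph, hypothetical function names), not Portfolio's actual ranking model:

    import java.util.*;

    // PageRank-style ranking of functions in a toy call graph. Rank mass that
    // flows into the sink 'tokenize' is simply dropped (a common simplification).
    public class CallGraphRank {
        public static void main(String[] args) {
            Map<String, List<String>> calls = Map.of(
                "main", List.of("parse", "render"),
                "parse", List.of("tokenize"),
                "render", List.of("tokenize"),
                "tokenize", List.of());
            double d = 0.85;  // damping factor of the random surfer
            Map<String, Double> rank = new HashMap<>();
            for (String f : calls.keySet()) rank.put(f, 1.0 / calls.size());
            for (int iter = 0; iter < 50; iter++) {
                Map<String, Double> next = new HashMap<>();
                for (String f : calls.keySet()) next.put(f, (1 - d) / calls.size());
                for (Map.Entry<String, List<String>> e : calls.entrySet())
                    for (String callee : e.getValue())
                        next.merge(callee, d * rank.get(e.getKey()) / e.getValue().size(),
                                   Double::sum);
                rank = next;
            }
            rank.forEach((f, r) -> System.out.printf("%-9s %.3f%n", f, r));
        }
    }

Heavily called functions such as tokenize accumulate the most rank, which matches the intuition that a "surfing" programmer lands on them most often.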
|
Fujiwara, Takaji |
ICSE '11: "Towards Quantitative Software ..."
Towards Quantitative Software Reliability Assessment in Incremental Development Processes
Tadashi Dohi and Takaji Fujiwara (Hiroshima University, Japan; Fujitsu Quality Laboratory, Japan) Iterative and incremental development is becoming a major development process model in industry, and allows for a good deal of parallelism between development and testing. In this paper we develop a quantitative software reliability assessment method in incremental development processes, based on the familiar non-homogeneous Poisson processes. More specifically, we utilize the software metrics observed in each incremental development and testing, and estimate the associated software reliability measures. In a numerical example with real data from an incremental development project, it is shown that the estimate of software reliability with a specific model can take a realistic value, and that the reliability growth phenomenon can be observed even in the incremental development scheme. @InProceedings{ICSE11p41, author = {Tadashi Dohi and Takaji Fujiwara}, title = {Towards Quantitative Software Reliability Assessment in Incremental Development Processes}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {41--50}, doi = {}, year = {2011}, } |
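For concreteness, one classical NHPP instance is the Goel-Okumoto model (an illustration of the model family; the paper evaluates several such models rather than prescribing this one). The expected number of faults detected by testing time t, and the resulting reliability over a prediction horizon x given fault-free operation up to t, are

\[
  m(t) \;=\; a\,\bigl(1 - e^{-bt}\bigr), \qquad a, b > 0,
\]
\[
  R(x \mid t) \;=\; \exp\!\bigl(-\,(m(t+x) - m(t))\bigr),
\]

where a is the expected total number of faults and b the per-fault detection rate; in the incremental setting, the metrics observed in each increment's development and testing feed the estimation of parameters such as a and b.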
|
Furia, Carlo A. |
ICSE '11: "Inferring Better Contracts ..."
Inferring Better Contracts
Yi Wei, Carlo A. Furia, Nikolay Kazmin, and Bertrand Meyer (ETH Zurich, Switzerland) Considerable progress has been made towards automatic support for one of the principal techniques available to enhance program reliability: equipping programs with extensive contracts. The results of current contract inference tools are still often unsatisfactory in practice, especially for programmers who already apply some kind of basic Design by Contract discipline, since the inferred contracts tend to be simple assertions—the very ones that programmers find easy to write. We present new, completely automatic inference techniques and a supporting tool, which take advantage of the presence of simple programmer-written contracts in the code to infer sophisticated assertions, involving for example implication and universal quantification. Applied to a production library of classes covering standard data structures such as linked lists, arrays, stacks, queues and hash tables, the tool is able, entirely automatically, to infer 75% of the complete contracts—contracts yielding the full formal specification of the classes—with very few redundant or irrelevant clauses. @InProceedings{ICSE11p191, author = {Yi Wei and Carlo A. Furia and Nikolay Kazmin and Bertrand Meyer}, title = {Inferring Better Contracts}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {191--200}, doi = {}, year = {2011}, } |
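As a flavor of the difference between the simple assertions programmers write and the quantified contracts the paper infers, here is a Java rendering with runtime assertions (the tool targets Design by Contract classes; this hypothetical snippet mimics the shape of such contracts rather than the tool's actual output):

    import java.util.Arrays;

    public class IntStack {
        private int[] items = new int[8];
        private int count = 0;

        public void push(int x) {
            int[] old = Arrays.copyOf(items, count);  // snapshot of the old state
            if (count == items.length) items = Arrays.copyOf(items, 2 * count);
            items[count++] = x;

            assert count == old.length + 1;  // the easy contract programmers write
            assert items[count - 1] == x;    // inferred: the new top is the pushed value
            boolean preserved = true;        // inferred, with universal quantification:
            for (int i = 0; i < old.length; i++) preserved &= items[i] == old[i];
            assert preserved;                // forall i < old count: items[i] == old items[i]
        }

        public static void main(String[] args) {
            IntStack s = new IntStack();
            for (int i = 0; i < 20; i++) s.push(i);  // run with java -ea to check contracts
            System.out.println("all contracts held");
        }
    }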
|
Galster, Matthias |
ICSE '11-NIER: "Capturing Tacit Architectural ..."
Capturing Tacit Architectural Knowledge Using the Repertory Grid Technique (NIER Track)
Dan Tofan, Matthias Galster, and Paris Avgeriou (University of Groningen, Netherlands) Knowledge about the architecture of a software-intensive system tends to vaporize easily. This leads to increased maintenance costs. We explore a new idea: utilizing the repertory grid technique to capture tacit architectural knowledge. Particularly, we investigate the elicitation of design decision alternatives and their characteristics. To study the applicability of this idea, we performed an exploratory study. Seven independent subjects applied the repertory grid technique to document a design decision they had to take in previous projects. Then, we interviewed each subject to understand their perception about the technique. We identified advantages and disadvantages of using the technique. The main advantage is the reasoning support it provides; the main disadvantage is the additional effort it requires. Also, applying the technique depends on the context of the project. Using the repertory grid technique is a promising approach for fighting architectural knowledge vaporization. @InProceedings{ICSE11p916, author = {Dan Tofan and Matthias Galster and Paris Avgeriou}, title = {Capturing Tacit Architectural Knowledge Using the Repertory Grid Technique (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {916--919}, doi = {}, year = {2011}, } |
|
Gamble, Rose F. |
ICSE '11-DEMOS: "SEREBRO: Facilitating Student ..."
SEREBRO: Facilitating Student Project Team Collaboration
Noah M. Jorgenson, Matthew L. Hale, and Rose F. Gamble (University of Tulsa, USA) In this demonstration, we show SEREBRO, a lightweight courseware developed for student team collaboration in a software engineering class. SEREBRO couples an idea forum with software project management tools to maintain cohesive interaction between team discussion and resulting work products, such as tasking, documentation, and version control. SEREBRO has been used consecutively for two years of software engineering classes. Student input and experiments on student use in these classes have directed SEREBRO to its current functionality. @InProceedings{ICSE11p1019, author = {Noah M. Jorgenson and Matthew L. Hale and Rose F. Gamble}, title = {SEREBRO: Facilitating Student Project Team Collaboration}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1019--1021}, doi = {}, year = {2011}, } |
|
Gandhi, Robin |
ICSE '11-NIER: "Empirical Results on the Study ..."
Empirical Results on the Study of Software Vulnerabilities (NIER Track)
Yan Wu, Harvey Siy, and Robin Gandhi (University of Nebraska at Omaha, USA) While the software development community has put significant effort into capturing the artifacts related to a discovered vulnerability in organized repositories, much of this information is not amenable to meaningful analysis and requires a deep and manual inspection. In the software assurance community a body of knowledge that provides an enumeration of common weaknesses has been developed, but it is not readily usable for the study of vulnerabilities in specific projects and user environments. We propose organizing the information in project repositories around semantic templates. In this paper, we present preliminary results of an experiment conducted to evaluate the effectiveness of using semantic templates as an aid to studying software vulnerabilities. @InProceedings{ICSE11p964, author = {Yan Wu and Harvey Siy and Robin Gandhi}, title = {Empirical Results on the Study of Software Vulnerabilities (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {964--967}, doi = {}, year = {2011}, } |
|
Garbervetsky, Diego |
ICSE '11: "Program Abstractions for Behaviour ..."
Program Abstractions for Behaviour Validation
Guido de Caso, Víctor Braberman, Diego Garbervetsky, and Sebastián Uchitel (Universidad de Buenos Aires, Argentina; Imperial College London, UK) @InProceedings{ICSE11p381, author = {Guido de Caso and Víctor Braberman and Diego Garbervetsky and Sebastián Uchitel}, title = {Program Abstractions for Behaviour Validation}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {381--390}, doi = {}, year = {2011}, } |
|
Garcia, Ronald |
ICSE '11-NIER: "Permission-Based Programming ..."
Permission-Based Programming Languages (NIER Track)
Jonathan Aldrich, Ronald Garcia, Mark Hahnenberg, Manuel Mohr, Karl Naden, Darpan Saini, and Roger Wolff (CMU, USA; Karlsruhe Institute of Technology, Germany; University of Chile, Chile) Linear permissions have been proposed as a lightweight way to specify how an object may be aliased, and whether those aliases allow mutation. Prior work has demonstrated the value of permissions for addressing many software engineering concerns, including information hiding, protocol checking, concurrency, security, and memory management. We propose the concept of a permission-based programming language--a language whose object model, type system, and runtime are all co-designed with permissions in mind. This approach supports an object model in which the structure of an object can change over time, a type system that tracks changing structure in addition to addressing the other concerns above, and a runtime system that can dynamically check permission assertions and leverage permissions to parallelize code. We sketch the design of the permission-based programming language Plaid, and argue that the approach may provide significant software engineering benefits. @InProceedings{ICSE11p828, author = {Jonathan Aldrich and Ronald Garcia and Mark Hahnenberg and Manuel Mohr and Karl Naden and Darpan Saini and Roger Wolff}, title = {Permission-Based Programming Languages (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {828--831}, doi = {}, year = {2011}, } |
|
Garlan, David |
ICSE '11-SEIP: "SORASCS: A Case Study in SOA-based ..."
SORASCS: A Case Study in SOA-based Platform Design for Socio-Cultural Analysis
Bradley Schmerl, David Garlan, Vishal Dwivedi, Michael W. Bigrigg, and Kathleen M. Carley (CMU, USA) An increasingly important class of software-based systems is platforms that permit integration of third-party components, services, and tools. Service-Oriented Architecture (SOA) is one such platform that has been successful in providing integration and distribution in the business domain, and could be effective in other domains (e.g., scientific computing, healthcare, and complex decision making). In this paper, we discuss our application of SOA to provide an integration platform for socio-cultural analysis, a domain that, through models, tries to understand, analyze and predict relationships in large complex social systems. In developing this platform, called SORASCS, we had to overcome issues we believe are generally applicable to any application of SOA within a domain that involves technically naïve users and seeks to establish a sustainable software ecosystem based on a common integration platform. We discuss these issues, the lessons learned about the kinds of problems that occur, and pathways toward a solution. @InProceedings{ICSE11p643, author = {Bradley Schmerl and David Garlan and Vishal Dwivedi and Michael W. Bigrigg and Kathleen M. Carley}, title = {SORASCS: A Case Study in SOA-based Platform Design for Socio-Cultural Analysis}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {643--652}, doi = {}, year = {2011}, } |
|
Ge, Xi |
ICSE '11-DEMOS: "DyTa: Dynamic Symbolic Execution ..."
DyTa: Dynamic Symbolic Execution Guided with Static Verification Results
Xi Ge, Kunal Taneja, Tao Xie, and Nikolai Tillmann (North Carolina State University, USA; Microsoft Research, USA) Software-defect detection is an increasingly important research topic in software engineering. To detect defects in a program, static verification and dynamic test generation are two important proposed techniques. However, both of these techniques face their respective issues. Static verification produces false positives, and on the other hand, dynamic test generation is often time consuming. To address the limitations of static verification and dynamic test generation, we present an automated defect-detection tool, called DyTa, that combines both static verification and dynamic test generation. DyTa consists of a static phase and a dynamic phase. The static phase detects potential defects with a static checker; the dynamic phase generates test inputs through dynamic symbolic execution to confirm these potential defects. DyTa reduces the number of false positives compared to static verification and performs more efficiently compared to dynamic test generation. @InProceedings{ICSE11p992, author = {Xi Ge and Kunal Taneja and Tao Xie and Nikolai Tillmann}, title = {DyTa: Dynamic Symbolic Execution Guided with Static Verification Results}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {992--994}, doi = {}, year = {2011}, } |
|
Geihs, Kurt |
ICSE '11-WORKSHOPS: "Second International Workshop ..."
Second International Workshop on Software Engineering for Sensor Network Applications (SESENA 2011)
Kurt Geihs, Luca Mottola, Gian Pietro Picco, and Kay Römer (University of Kassel, Germany; Swedish Institute of Computer Science, Sweden; University of Trento, Italy; University of Lübeck, Germany) We describe the motivation, focus, and organization of SESENA11, the 2nd International Workshop on Software Engineering for Sensor Network Applications. The workshop took place under the umbrella of ICSE 2011, the 33rd ACM/IEEE International Conference on Software Engineering, in Honolulu, Hawaii, on May 22, 2011. The aim was to attract researchers belonging to the Software Engineering (SE) and Wireless Sensor Network (WSN) communities, not only to exchange their recent research results on the topic, but also to stimulate discussion on the core open problems and define a shared research agenda. More information can be found at the workshop website: http://www.sesena.info. @InProceedings{ICSE11p1198, author = {Kurt Geihs and Luca Mottola and Gian Pietro Picco and Kay Römer}, title = {Second International Workshop on Software Engineering for Sensor Network Applications (SESENA 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1198--1199}, doi = {}, year = {2011}, } |
|
Genevès, Pierre |
ICSE '11-DEMOS: "Inconsistent Path Detection ..."
Inconsistent Path Detection for XML IDEs
Pierre Genevès and Nabil Layaïda (CNRS, France; INRIA, France) We present the first IDE augmented with static detection of inconsistent paths for simplifying the development and debugging of any application involving XPath expressions. @InProceedings{ICSE11p983, author = {Pierre Genevès and Nabil Layaïda}, title = {Inconsistent Path Detection for XML IDEs}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {983--985}, doi = {}, year = {2011}, } |
|
Gethers, Malcom |
ICSE '11-DEMOS: "CodeTopics: Which Topic am ..."
CodeTopics: Which Topic am I Coding Now?
Malcom Gethers, Trevor Savage, Massimiliano Di Penta, Rocco Oliveto, Denys Poshyvanyk, and Andrea De Lucia (College of William and Mary, USA; CMU, USA; University of Sannio, Italy; University of Molise, Italy; University of Salerno, Italy) Recent studies indicated that showing the similarity between the source code being developed and related high-level artifacts (HLAs), such as requirements, helps developers improve the quality of source code identifiers. In this paper, we present CodeTopics, an Eclipse plug-in that in addition to showing the similarity between source code and HLAs also highlights to what extent the code under development covers topics described in HLAs. Such views complement information derived by showing only the similarity between source code and HLAs, helping (i) developers identify functionality that is not yet implemented or (ii) newcomers comprehend source code artifacts by showing them the topics that these artifacts relate to. @InProceedings{ICSE11p1034, author = {Malcom Gethers and Trevor Savage and Massimiliano Di Penta and Rocco Oliveto and Denys Poshyvanyk and Andrea De Lucia}, title = {CodeTopics: Which Topic am I Coding Now?}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1034--1036}, doi = {}, year = {2011}, } ICSE '11-NIER: "Identifying Method Friendships ..." Identifying Method Friendships to Remove the Feature Envy Bad Smell (NIER Track) Rocco Oliveto, Malcom Gethers, Gabriele Bavota, Denys Poshyvanyk, and Andrea De Lucia (University of Molise, Italy; College of William and Mary, USA; University of Salerno, Italy) We propose a novel approach to identify Move Method refactoring opportunities and remove the Feature Envy bad smell from source code. The proposed approach analyzes both structural and conceptual relationships between methods and uses Relational Topic Models (RTM) to identify sets of methods that share several responsibilities, i.e., "friend methods". The analysis of method friendships of a given method can be used to pinpoint the target class (envied class) to which the method should be moved. The results of a preliminary empirical evaluation indicate that the proposed approach provides accurate and meaningful refactoring opportunities. @InProceedings{ICSE11p820, author = {Rocco Oliveto and Malcom Gethers and Gabriele Bavota and Denys Poshyvanyk and Andrea De Lucia}, title = {Identifying Method Friendships to Remove the Feature Envy Bad Smell (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {820--823}, doi = {}, year = {2011}, } |
|
Ghanbari, Hamoun |
ICSE '11-NIER: "Model-based Performance Testing ..."
Model-based Performance Testing (NIER Track)
Cornel Barna, Marin Litoiu, and Hamoun Ghanbari (York University, Canada) In this paper, we present a method for performance testing of transactional systems. The method models the system under test, finds the software and hardware bottlenecks, and generates the workloads that saturate them. The framework is adaptive: the model and workloads are determined during performance test execution by measuring the system performance, fitting a performance model, and analytically computing the number and mix of users that will saturate the bottlenecks. We model the software system using a two-layer queuing model and use analytical techniques to find the workload mixes that change the bottlenecks in the system. Those workload mixes become stress vectors and initial starting points for the stress test cases. The rest of the test cases are generated based on a feedback loop that drives the software system towards its worst-case behaviour. @InProceedings{ICSE11p872, author = {Cornel Barna and Marin Litoiu and Hamoun Ghanbari}, title = {Model-based Performance Testing (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {872--875}, doi = {}, year = {2011}, } |
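The bottleneck computation can be pictured with the standard utilization law of operational analysis (standard queueing notation, an illustration rather than the paper's exact formulation): with per-class throughput X_c and service demand D_{c,k} of class c at resource k,

\[
  U_k \;=\; \sum_{c} X_c \, D_{c,k},
\]

a workload mix saturates resource k when U_k approaches 1 while the other utilizations stay below 1. Varying the mix (the shares of the user classes) moves which resource saturates first, which is what yields distinct stress vectors.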
|
Ghezzi, Carlo |
ICSE '11: "Run-Time Efficient Probabilistic ..."
Run-Time Efficient Probabilistic Model Checking
Antonio Filieri, Carlo Ghezzi, and Giordano Tamburrelli (Politecnico di Milano, Italy) Unpredictable changes continuously affect software systems and may have a severe impact on their quality of service, potentially jeopardizing the system’s ability to meet the desired requirements. Changes may occur in critical components of the system, clients’ operational profiles, requirements, or deployment environments. The adoption of software models and model checking techniques at run time may support automatic reasoning about such changes, detect harmful configurations, and potentially enable appropriate (self-)reactions. However, traditional model checking techniques and tools may not be simply applied as they are at run time, since they hardly meet the constraints imposed by on-the-fly analysis, in terms of execution time and memory occupation. This paper precisely addresses this issue and focuses on reliability models, given in terms of Discrete Time Markov Chains, and probabilistic model checking. It develops a mathematical framework for run-time probabilistic model checking that, given a reliability model and a set of requirements, statically generates a set of expressions, which can be efficiently used at run-time to verify system requirements. An experimental comparison of our approach with existing probabilistic model checkers shows its practical applicability in run-time verification. @InProceedings{ICSE11p341, author = {Antonio Filieri and Carlo Ghezzi and Giordano Tamburrelli}, title = {Run-Time Efficient Probabilistic Model Checking}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {341--350}, doi = {}, year = {2011}, } |
|
Giannakopoulou, Dimitra |
ICSE '11: "Interface Decomposition for ..."
Interface Decomposition for Service Compositions
Domenico Bianculli, Dimitra Giannakopoulou, and Corina S. Păsăreanu (University of Lugano, Switzerland; NASA Ames Research Center, USA; Carnegie Mellon Silicon Valley, USA) Service-based applications can be realized by composing existing services into new, added-value composite services. The external services with which a service composition interacts are usually known by means of their syntactical interface. However, an interface providing more information, such as a behavioral specification, could be more useful to a service integrator for assessing that a certain external service can contribute to fulfill the functional requirements of the composite application. Given the requirements specification of a composite service, we present a technique for obtaining the behavioral interfaces — in the form of labeled transition systems — of the external services, by decomposing the global interface specification that characterizes the environment of the service composition. The generated interfaces guarantee that the service composition fulfills its requirements during the execution. Our approach has been implemented in the LTSA tool and has been applied to two case studies. @InProceedings{ICSE11p501, author = {Domenico Bianculli and Dimitra Giannakopoulou and Corina S. Păsăreanu}, title = {Interface Decomposition for Service Compositions}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {501--510}, doi = {}, year = {2011}, } |
|
Gibiec, Marek |
ICSE '11: "On-demand Feature Recommendations ..."
On-demand Feature Recommendations Derived from Mining Public Product Descriptions
Horatiu Dumitru, Marek Gibiec, Negar Hariri, Jane Cleland-Huang, Bamshad Mobasher, Carlos Castro-Herrera, and Mehdi Mirakhorli (DePaul University, USA) We present a recommender system that models and recommends product features for a given domain. Our approach mines product descriptions from publicly available online specifications, utilizes text mining and a novel incremental diffusive clustering algorithm to discover domain-specific features, generates a probabilistic feature model that represents commonalities, variants, and cross-category features, and then uses association rule mining and the k-Nearest Neighbor machine learning strategy to generate product specific feature recommendations. Our recommender system supports the relatively labor-intensive task of domain analysis, potentially increasing opportunities for re-use, reducing time-to-market, and delivering more competitive software products. The approach is empirically validated against 20 different product categories using thousands of product descriptions mined from a repository of free software applications. @InProceedings{ICSE11p181, author = {Horatiu Dumitru and Marek Gibiec and Negar Hariri and Jane Cleland-Huang and Bamshad Mobasher and Carlos Castro-Herrera and Mehdi Mirakhorli}, title = {On-demand Feature Recommendations Derived from Mining Public Product Descriptions}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {181--190}, doi = {}, year = {2011}, } |
|
Gibson, James |
ICSE '11-SEIP: "A Method for Selecting SOA ..."
A Method for Selecting SOA Pilot Projects Including a Pilot Metrics Framework
Liam O'Brien, James Gibson, and Jon Gray (CSIRO, Australia; ANU, Australia; NICTA, Australia) Many organizations are introducing Service Oriented Architecture (SOA) as part of their business transformation projects to take advantage of the proposed benefits associated with using SOA. However, in many cases organizations don’t necessarily know for which projects introducing SOA would be of value and show real benefits to the organization. In this paper we outline a method and pilot metrics framework (PMF) to help organizations select from a set of candidate projects those which would be most suitable for piloting SOA. The PMF is used as part of a method based on identifying a set of benefit and risk criteria, investigating each of the candidate projects, mapping them to the criteria and then selecting the most suitable project(s). The paper outlines a case study where the PMF was applied in a large government organization to help them select pilot projects and develop an overall strategy for introducing SOA into their organization. @InProceedings{ICSE11p653, author = {Liam O'Brien and James Gibson and Jon Gray}, title = {A Method for Selecting SOA Pilot Projects Including a Pilot Metrics Framework}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {653--662}, doi = {}, year = {2011}, } |
|
Gittens, Mechelle |
ICSE '11-NIER: "Diagnosing New Faults Using ..."
Diagnosing New Faults Using Mutants and Prior Faults (NIER Track)
Syed Shariyar Murtaza, Nazim Madhavji, Mechelle Gittens, and Zude Li (University of Western Ontario, Canada; University of West Indies, Barbados) Literature indicates that 20% of a program’s code is responsible for 80% of the faults, and 50-90% of the field failures are rediscoveries of previous faults. Despite this, identification of faulty code can consume 30-40% of error correction time. Previous fault-discovery techniques focusing on field failures either require many pass-fail traces, discover only crashing failures, or identify faulty “files” (which are of large granularity) as the origin of the fault in the source code. In our earlier work (the F007 approach), we identify faulty “functions” (which are of small granularity) in a field trace by using earlier resolved traces of the same release, which limits it to the known faulty functions. This paper overcomes this limitation by proposing a new “strategy” to identify new and old faulty functions using F007. This strategy uses failed traces of mutants (artificial faults) and failed traces of prior releases to identify faulty functions in the traces of a succeeding release. Our results on two UNIX utilities (i.e., Flex and Gzip) show that faulty functions in the traces of the majority (60-85%) of failures of a new software release can be identified by reviewing only 20% of the code. Compared against prior techniques, this is a notable improvement in terms of the contextual knowledge required and the accuracy in discovering finer-grain fault origins. @InProceedings{ICSE11p960, author = {Syed Shariyar Murtaza and Nazim Madhavji and Mechelle Gittens and Zude Li}, title = {Diagnosing New Faults Using Mutants and Prior Faults (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {960--963}, doi = {}, year = {2011}, } |
|
Glinz, Martin |
ICSE '11: "Estimating Footprints of Model ..."
Estimating Footprints of Model Operations
Cédric Jeanneret, Martin Glinz, and Benoit Baudry (University of Zurich, Switzerland; IRISA, France) When performed on a model, a set of operations (e.g., queries or model transformations) rarely uses all the information present in the model. Unintended underuse of a model can indicate various problems: the model may contain more detail than necessary or the operations may be immature or erroneous. Analyzing the footprints of the operations — i.e., the part of a model actually used by an operation — is a simple technique to diagnose and analyze such problems. However, precisely calculating the footprint of an operation is expensive, because it requires analyzing the operation’s execution trace. In this paper, we present an automated technique to estimate the footprint of an operation without executing it. We evaluate our approach by applying it to 75 models and five operations. Our technique provides software engineers with an efficient, yet precise, evaluation of the usage of their models. @InProceedings{ICSE11p601, author = {Cédric Jeanneret and Martin Glinz and Benoit Baudry}, title = {Estimating Footprints of Model Operations}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {601--610}, doi = {}, year = {2011}, } |
|
Göde, Nils |
ICSE '11: "Frequency and Risks of Changes ..."
Frequency and Risks of Changes to Clones
Nils Göde and Rainer Koschke (University of Bremen, Germany) @InProceedings{ICSE11p311, author = {Nils Göde and Rainer Koschke}, title = {Frequency and Risks of Changes to Clones}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {311--320}, doi = {}, year = {2011}, } |
|
Godefroid, Patrice |
ICSE '11-IMPACT: "Symbolic Execution for Software ..."
Symbolic Execution for Software Testing in Practice -- Preliminary Assessment
Cristian Cadar, Patrice Godefroid, Sarfraz Khurshid, Corina S. Păsăreanu, Koushik Sen, Nikolai Tillmann, and Willem Visser (Imperial College London, UK; Microsoft Research, USA; University of Texas at Austin, USA; CMU, USA; NASA Ames Research Center, USA; UC Berkeley, USA; Stellenbosch University, South Africa) We present results for the “Impact Project Focus Area” on the topic of symbolic execution as used in software testing. Symbolic execution is a program analysis technique introduced in the 70s that has received renewed interest in recent years, due to algorithmic advances and increased availability of computational power and constraint solving technology. We review classical symbolic execution and some modern extensions such as generalized symbolic execution and dynamic test generation. We also give a preliminary assessment of its use in academia, research labs, and industry. @InProceedings{ICSE11p1066, author = {Cristian Cadar and Patrice Godefroid and Sarfraz Khurshid and Corina S. Păsăreanu and Koushik Sen and Nikolai Tillmann and Willem Visser}, title = {Symbolic Execution for Software Testing in Practice -- Preliminary Assessment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1066--1071}, doi = {}, year = {2011}, } |
|
Godfrey, Michael W. |
ICSE '11-NIER: "Multifractal Aspects of Software ..."
Multifractal Aspects of Software Development (NIER Track)
Abram Hindle, Michael W. Godfrey, and Richard C. Holt (UC Davis, USA; University of Waterloo, Canada) Software development is difficult to model, particularly the noisy, non-stationary signals of changes per time unit, extracted from version control systems (VCSs). Currently researchers are utilizing time-series analysis tools such as ARIMA to model these signals extracted from a project's VCS. Unfortunately current approaches are not very amenable to the underlying power-law distributions of this kind of signal. We propose modeling changes per time unit using multifractal analysis. This analysis can be used when a signal exhibits multiscale self-similarity, as in the case of complex data drawn from power-law distributions. Specifically we utilize multifractal analysis to demonstrate that software development is multifractal, that is, the signal is a fractal composed of multiple fractal dimensions along a range of Hurst exponents. Thus we show that software development has multi-scale self-similarity, i.e., that it is multifractal. We also pose questions that we hope multifractal analysis can answer. @InProceedings{ICSE11p968, author = {Abram Hindle and Michael W. Godfrey and Richard C. Holt}, title = {Multifractal Aspects of Software Development (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {968--971}, doi = {}, year = {2011}, } |
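In standard notation (textbook definitions, not symbols taken from the paper), multifractality of a changes-per-time-unit signal X(t) shows up in the scaling of its structure functions:

\[
  S_q(\tau) \;=\; \bigl\langle\, \lvert X(t+\tau) - X(t) \rvert^{q} \,\bigr\rangle \;\sim\; \tau^{\,q H(q)} .
\]

A monofractal signal has a single Hurst exponent, H(q) = H for every q; the signal is multifractal precisely when H(q) varies with q, i.e., different moments scale with different exponents, which is the property the authors demonstrate for development activity.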
|
Goeritzer, Robert |
ICSE '11-SRC: "Using Impact Analysis in Industry ..."
Using Impact Analysis in Industry
Robert Goeritzer (University of Klagenfurt, Austria) Software is subjected to continuous change, and with increasing size and complexity performing changes becomes more critical. Impact analysis assists in estimating the consequences of a change, and is an important research topic. Nevertheless, until now researchers have not applied and evaluated those techniques in industry. This paper contributes an approach suitable for an industrial setting, and an evaluation of its application in a large software system. @InProceedings{ICSE11p1155, author = {Robert Goeritzer}, title = {Using Impact Analysis in Industry}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1155--1157}, doi = {}, year = {2011}, } |
|
Goh, Sweefen |
ICSE '11-DEMOS: "Using MATCON to Generate CASE ..."
Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications
Elad Fein, Natalia Razinkov, Shlomit Shachor, Pietro Mazzoleni, Sweefen Goh, Richard Goodwin, Manisha Bhandar, Shyh-Kwei Chen, Juhnyoung Lee, Vibha Singhal Sinha, Senthil Mani, Debdoot Mukherjee, Biplav Srivastava, and Pankaj Dhoolia (IBM Research Haifa, Israel; IBM Research Watson, USA; IBM Research, India) The complex process of adapting pre-packaged applications, such as Oracle or SAP, to an organization’s needs is full of challenges. Although detailed, structured, and well-documented methods govern this process, the consulting team implementing the method must spend a huge amount of manual effort to make sure the guidelines of the method are followed as intended by the method author. MATCON breaks down the method content, documents, templates, and work products into reusable objects, and enables them to be cataloged and indexed so these objects can be easily found and reused on subsequent projects. By using models and meta-modeling the reusable methods, we automatically produce a CASE tool to apply these methods, thereby guiding consultants through this complex process. The resulting tool helps consultants create the method deliverables for the initial phases of large customization projects. Our MATCON output, referred to as Consultant Assistant, has shown significant savings in training costs, a 20–30% improvement in productivity, and positive results in large Oracle and SAP implementations. @InProceedings{ICSE11p1016, author = {Elad Fein and Natalia Razinkov and Shlomit Shachor and Pietro Mazzoleni and Sweefen Goh and Richard Goodwin and Manisha Bhandar and Shyh-Kwei Chen and Juhnyoung Lee and Vibha Singhal Sinha and Senthil Mani and Debdoot Mukherjee and Biplav Srivastava and Pankaj Dhoolia}, title = {Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1016--1018}, doi = {}, year = {2011}, } |
|
Gold, Nicolas |
ICSE '11: "Model Projection: Simplifying ..."
Model Projection: Simplifying Models in Response to Restricting the Environment
Kelly Androutsopoulos, David Binkley, David Clark, Nicolas Gold, Mark Harman, Kevin Lano, and Zheng Li (University College London, UK; Loyola University Maryland, USA; King's College London, UK) This paper introduces Model Projection. Finite state models such as Extended Finite State Machines are being used in an ever increasing number of software engineering activities. Model projection facilitates model development by specializing models for a specific operating environment. A projection is useful in many design-level applications including specification reuse and property verification. The applicability of model projection rests upon three critical concerns: correctness, effectiveness, and efficiency, all of which are addressed in this paper. We introduce four related algorithms for model projection and prove each correct. We also present an empirical study of effectiveness and efficiency using ten models, including widely-studied benchmarks as well as industrial models. Results show that a typical projection includes about half of the states and a third of the transitions from the original model. @InProceedings{ICSE11p291, author = {Kelly Androutsopoulos and David Binkley and David Clark and Nicolas Gold and Mark Harman and Kevin Lano and Zheng Li}, title = {Model Projection: Simplifying Models in Response to Restricting the Environment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {291--300}, doi = {}, year = {2011}, } |
|
Golra, Fahad R. |
ICSE '11-NIER: "The Lazy Initialization Multilayered ..."
The Lazy Initialization Multilayered Modeling Framework (NIER Track)
Fahad R. Golra and Fabien Dagnat (Université Européenne de Bretagne, France; Institut Télécom, France) Lazy Initialization Multilayer Modeling (LIMM) is an object-oriented modeling language targeted at the declarative definition of Domain Specific Languages (DSLs) for Model Driven Engineering. It focuses on the precise definition of modeling frameworks spanning multiple layers. In particular, it follows a two-dimensional architecture instead of the linear architecture followed by many other modeling frameworks. The novelty of our approach is to use lazy initialization for the definition of mappings between different modeling abstractions, within and across multiple layers, hence providing the basis for exploiting the potential of metamodeling. @InProceedings{ICSE11p924, author = {Fahad R. Golra and Fabien Dagnat}, title = {The Lazy Initialization Multilayered Modeling Framework (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {924--927}, doi = {}, year = {2011}, } |
|
Gong, Liang |
ICSE '11: "Dealing with Noise in Defect ..."
Dealing with Noise in Defect Prediction
Sunghun Kim, Hongyu Zhang, Rongxin Wu, and Liang Gong (Hong Kong University of Science and Technology, China; Tsinghua University, China) Many software defect prediction models have been built using historical defect data obtained by mining software repositories (MSR). Recent studies have discovered that data collected in this way contain noise, because current defect collection practices rely on optional bug-fix keywords or bug report links in change logs; defect data collected automatically from change logs can therefore include noise. This paper proposes approaches to dealing with the noise in defect data. First, we measure the impact of noise on defect prediction models and provide guidelines for acceptable noise levels. We measure the noise resistance of two well-known defect prediction algorithms and find that, in general, for large defect datasets, adding FP (false positive) or FN (false negative) noise alone does not lead to substantial performance differences. However, prediction performance decreases significantly when the dataset contains 20%-35% of both FP and FN noise. Second, we propose a noise detection and elimination algorithm to address this problem. Our empirical study shows that our algorithm can identify noisy instances with reasonable accuracy, and that defect prediction accuracy improves after the noise is eliminated. @InProceedings{ICSE11p481, author = {Sunghun Kim and Hongyu Zhang and Rongxin Wu and Liang Gong}, title = {Dealing with Noise in Defect Prediction}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {481--490}, doi = {}, year = {2011}, } |
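As a minimal illustration of the FP/FN noise discussed above, the following Java sketch shows one way label noise of the kind the paper measures could be injected into a defect dataset; the DefectInstance and NoiseInjector names are hypothetical and are not taken from the paper's implementation.

import java.util.List;
import java.util.Random;

// Illustrative sketch only: inject FP/FN label noise into a defect dataset.
class DefectInstance {
    final double[] metrics; // e.g., code-complexity features
    boolean defective;      // label mined from the change log
    DefectInstance(double[] metrics, boolean defective) {
        this.metrics = metrics;
        this.defective = defective;
    }
}

class NoiseInjector {
    private final Random rng = new Random(42); // fixed seed for repeatability

    // Flips clean labels to defective (FP noise) and defective labels to
    // clean (FN noise) at the given rates, mimicking the mislabeling that
    // change-log-based data collection can introduce.
    void inject(List<DefectInstance> data, double fpRate, double fnRate) {
        for (DefectInstance d : data) {
            if (!d.defective && rng.nextDouble() < fpRate) {
                d.defective = true;   // FP: clean instance marked defective
            } else if (d.defective && rng.nextDouble() < fnRate) {
                d.defective = false;  // FN: defective instance marked clean
            }
        }
    }
}

A study such as the one described would train a predictor on the perturbed copy and compare its performance against a clean baseline.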
|
Goodwin, Richard |
ICSE '11-DEMOS: "Using MATCON to Generate CASE ..."
Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications
Elad Fein, Natalia Razinkov, Shlomit Shachor, Pietro Mazzoleni, Sweefen Goh, Richard Goodwin, Manisha Bhandar, Shyh-Kwei Chen, Juhnyoung Lee, Vibha Singhal Sinha, Senthil Mani, Debdoot Mukherjee, Biplav Srivastava, and Pankaj Dhoolia (IBM Research Haifa, Israel; IBM Research Watson, USA; IBM Research, India) The complex process of adapting pre-packaged applications, such as Oracle or SAP, to an organization’s needs is full of challenges. Although detailed, structured, and well-documented methods govern this process, the consulting team implementing the method must spend a huge amount of manual effort to make sure the guidelines of the method are followed as intended by the method author. MATCON breaks down the method content, documents, templates, and work products into reusable objects, and enables them to be cataloged and indexed so these objects can be easily found and reused on subsequent projects. By using models and meta-modeling the reusable methods, we automatically produce a CASE tool to apply these methods, thereby guiding consultants through this complex process. The resulting tool helps consultants create the method deliverables for the initial phases of large customization projects. Our MATCON output, referred to as Consultant Assistant, has shown significant savings in training costs, a 20–30% improvement in productivity, and positive results in large Oracle and SAP implementations. @InProceedings{ICSE11p1016, author = {Elad Fein and Natalia Razinkov and Shlomit Shachor and Pietro Mazzoleni and Sweefen Goh and Richard Goodwin and Manisha Bhandar and Shyh-Kwei Chen and Juhnyoung Lee and Vibha Singhal Sinha and Senthil Mani and Debdoot Mukherjee and Biplav Srivastava and Pankaj Dhoolia}, title = {Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1016--1018}, doi = {}, year = {2011}, } |
|
Gorton, Ian |
ICSE '11-WORKSHOPS: "Fourth International Workshop ..."
Fourth International Workshop on Software Engineering for Computational Science and Engineering (SE-CSE 2011)
Jeffrey C. Carver, Roscoe Bartlett, Ian Gorton, Lorin Hochstein, Diane Kelly, and Judith Segal (University of Alabama, USA; Sandia National Laboratories, USA; Pacific Northwest National Laboratory, USA; USC-ISI, USA; Royal Military College, Canada; The Open University, UK) Computational Science and Engineering (CSE) software supports a wide variety of domains including nuclear physics, crash simulation, satellite data processing, fluid dynamics, climate modeling, bioinformatics, and vehicle development. The increase in the importance of CSE software motivates the need to identify and understand appropriate software engineering (SE) practices for CSE. Because of the uniqueness of CSE software development, existing SE tools and techniques developed for the business/IT community are often not efficient or effective. Appropriate SE solutions must account for the salient characteristics of the CSE development environment. This situation creates an opportunity for members of the SE community to interact with members of the CSE community to address this need. This workshop facilitates that collaboration by bringing together members of the SE community and the CSE community to share perspectives and present findings from research and practice relevant to CSE software. A significant portion of the workshop is devoted to focused interaction among the participants with the goal of generating a research agenda to improve tools, techniques, and experimental methods for studying CSE software engineering. @InProceedings{ICSE11p1226, author = {Jeffrey C. Carver and Roscoe Bartlett and Ian Gorton and Lorin Hochstein and Diane Kelly and Judith Segal}, title = {Fourth International Workshop on Software Engineering for Computational Science and Engineering (SE-CSE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1226--1227}, doi = {}, year = {2011}, } |
|
Götz, Sebastian |
ICSE '11-DEMOS: "JavAdaptor: Unrestricted Dynamic ..."
JavAdaptor: Unrestricted Dynamic Software Updates for Java
Mario Pukall, Alexander Grebhahn, Reimar Schröter, Christian Kästner, Walter Cazzola, and Sebastian Götz (University of Magdeburg, Germany; Philipps-University Marburg, Germany; University of Milano, Italy; University of Dresden, Germany) Dynamic software updates (DSU) are among the features most requested by developers and users. As a result, DSU is already standard in many dynamic programming languages, but it is not standard in statically typed languages such as Java. Even though it sits at number three on Oracle’s current request for enhancement (RFE) list, DSU support in Java is very limited. Therefore, many different DSU approaches for Java have been proposed over the years. Nevertheless, DSU for Java is still an active field of research, because most of the existing approaches are too restrictive: some fall short in flexibility or performance, whereas others are platform dependent or dictate the program’s architecture. With JavAdaptor, we present the first DSU approach that comes without those restrictions. We demonstrate JavAdaptor on the well-known arcade game Snake, which we update stepwise at runtime. @InProceedings{ICSE11p989, author = {Mario Pukall and Alexander Grebhahn and Reimar Schröter and Christian Kästner and Walter Cazzola and Sebastian Götz}, title = {JavAdaptor: Unrestricted Dynamic Software Updates for Java}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {989--991}, doi = {}, year = {2011}, } |
|
Gray, Jeff |
ICSE '11-DEMOS: "MT-Scribe: An End-User Approach ..."
MT-Scribe: An End-User Approach to Automate Software Model Evolution
Yu Sun, Jeff Gray, and Jules White (University of Alabama at Birmingham, USA; University of Alabama, USA; Virginia Tech, USA) Model evolution is an essential activity in software system modeling, which is traditionally supported by manual editing or writing model transformation rules. However, the current state of practice for model evolution presents challenges to those who are unfamiliar with model transformation languages or metamodel definitions. This demonstration presents a demonstration-based approach that assists end-users through automation of model evolution tasks (e.g., refactoring, model scaling, and aspect weaving). @InProceedings{ICSE11p980, author = {Yu Sun and Jeff Gray and Jules White}, title = {MT-Scribe: An End-User Approach to Automate Software Model Evolution}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {980--982}, doi = {}, year = {2011}, } |
|
Gray, Jon |
ICSE '11-SEIP: "A Method for Selecting SOA ..."
A Method for Selecting SOA Pilot Projects Including a Pilot Metrics Framework
Liam O'Brien, James Gibson, and Jon Gray (CSIRO, Australia; ANU, Australia; NICTA, Australia) Many organizations are introducing Service Oriented Architecture (SOA) as part of their business transformation projects to take advantage of the benefits associated with using SOA. However, in many cases organizations don’t necessarily know for which projects introducing SOA would add value and show real benefits to the organization. In this paper we outline a method and pilot metrics framework (PMF) to help organizations select, from a set of candidate projects, those which would be most suitable for piloting SOA. The PMF is used as part of a method based on identifying a set of benefit and risk criteria, investigating each of the candidate projects, mapping them to the criteria, and then selecting the most suitable project(s). The paper outlines a case study where the PMF was applied in a large government organization to help it select pilot projects and develop an overall strategy for introducing SOA into the organization. @InProceedings{ICSE11p653, author = {Liam O'Brien and James Gibson and Jon Gray}, title = {A Method for Selecting SOA Pilot Projects Including a Pilot Metrics Framework}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {653--662}, doi = {}, year = {2011}, } |
|
Grebhahn, Alexander |
ICSE '11-DEMOS: "JavAdaptor: Unrestricted Dynamic ..."
JavAdaptor: Unrestricted Dynamic Software Updates for Java
Mario Pukall, Alexander Grebhahn, Reimar Schröter, Christian Kästner, Walter Cazzola, and Sebastian Götz (University of Magdeburg, Germany; Philipps-University Marburg, Germany; University of Milano, Italy; University of Dresden, Germany) Dynamic software updates (DSU) are among the features most requested by developers and users. As a result, DSU is already standard in many dynamic programming languages, but it is not standard in statically typed languages such as Java. Even though it sits at number three on Oracle’s current request for enhancement (RFE) list, DSU support in Java is very limited. Therefore, many different DSU approaches for Java have been proposed over the years. Nevertheless, DSU for Java is still an active field of research, because most of the existing approaches are too restrictive: some fall short in flexibility or performance, whereas others are platform dependent or dictate the program’s architecture. With JavAdaptor, we present the first DSU approach that comes without those restrictions. We demonstrate JavAdaptor on the well-known arcade game Snake, which we update stepwise at runtime. @InProceedings{ICSE11p989, author = {Mario Pukall and Alexander Grebhahn and Reimar Schröter and Christian Kästner and Walter Cazzola and Sebastian Götz}, title = {JavAdaptor: Unrestricted Dynamic Software Updates for Java}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {989--991}, doi = {}, year = {2011}, } |
|
Grechanik, Mark |
ICSE '11: "Portfolio: Finding Relevant ..."
Portfolio: Finding Relevant Functions and Their Usages
Collin McMillan, Mark Grechanik, Denys Poshyvanyk, Qing Xie, and Chen Fu (College of William and Mary, USA; Accenture Technology Lab, USA) Different studies show that programmers are more interested in finding definitions of functions and their uses than variables, statements, or arbitrary code fragments [30, 29, 31]. Therefore, programmers require support in finding relevant functions and determining how those functions are used. Unfortunately, existing code search engines do not provide enough of this support to developers, thus reducing the effectiveness of code reuse. We provide this support to programmers in a code search system called Portfolio that retrieves and visualizes relevant functions and their usages. We have built Portfolio using a combination of models that address the surfing behavior of programmers and the sharing of related concepts among functions. We conducted an experiment with 49 professional programmers to compare Portfolio to Google Code Search and Koders using a standard methodology. The results show with strong statistical significance that users find more relevant functions with higher precision with Portfolio than with Google Code Search and Koders. @InProceedings{ICSE11p111, author = {Collin McMillan and Mark Grechanik and Denys Poshyvanyk and Qing Xie and Chen Fu}, title = {Portfolio: Finding Relevant Functions and Their Usages}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {111--120}, doi = {}, year = {2011}, } ICSE '11-DEMOS: "Portfolio: A Search Engine ..." Portfolio: A Search Engine for Finding Functions and Their Usages Collin McMillan, Mark Grechanik, Denys Poshyvanyk, Qing Xie, and Chen Fu (College of William and Mary, USA; University of Illinois at Chicago, USA; Accenture Technology Labs, USA) In this demonstration, we present a code search system called Portfolio that retrieves and visualizes relevant functions and their usages. We will show how chains of relevant functions and their usages can be visualized to users in response to their queries. @InProceedings{ICSE11p1043, author = {Collin McMillan and Mark Grechanik and Denys Poshyvanyk and Qing Xie and Chen Fu}, title = {Portfolio: A Search Engine for Finding Functions and Their Usages}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1043--1045}, doi = {}, year = {2011}, } |
|
Grundy, John |
ICSE '11: "Improving Requirements Quality ..."
Improving Requirements Quality using Essential Use Case Interaction Patterns
Massila Kamalrudin, John Hosking, and John Grundy (University of Auckland, New Zealand; Swinburne University of Technology at Hawthorn, Australia) Requirements specifications need to be checked against the 3C’s (Consistency, Completeness, and Correctness) in order to achieve high quality. This is especially difficult when working with both natural language requirements and associated semi-formal modelling representations. We describe a technique and support tool that allows us to perform semi-automated checking of natural language and semi-formal requirements models, supporting not only consistency management between representations but also correctness and completeness analysis. We use a concept of essential use case interaction patterns to perform the correctness and completeness analysis on the semi-formal representation. We highlight potential inconsistencies, incompleteness and incorrectness using visual differencing in our support tool. We have evaluated our approach via an end-user study which focused on the tool’s usefulness, ease of use, ease of learning and user satisfaction, and which provided data for a cognitive dimensions of notations analysis of the tool. @InProceedings{ICSE11p531, author = {Massila Kamalrudin and John Hosking and John Grundy}, title = {Improving Requirements Quality using Essential Use Case Interaction Patterns}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {531--540}, doi = {}, year = {2011}, } ICSE '11-NIER: "A Combination Approach for ..." A Combination Approach for Enhancing Automated Traceability (NIER Track) Xiaofan Chen, John Hosking, and John Grundy (University of Auckland, New Zealand; Swinburne University of Technology at Melbourne, Australia) Tracking a variety of traceability links between artifacts assists software developers in comprehension, efficient development, and effective management of a system. Traceability systems to date, based on various Information Retrieval (IR) techniques, have faced a major open research challenge: how to extract these links with both high precision and high recall. In this paper we describe an experimental approach that combines Regular Expression, Key Phrases, and Clustering with IR techniques to enhance the performance of IR for traceability link recovery between documents and source code. Our preliminary experimental results show that our combination technique improves the performance of IR, increases the precision of retrieved links, and recovers more true links than IR alone. @InProceedings{ICSE11p912, author = {Xiaofan Chen and John Hosking and John Grundy}, title = {A Combination Approach for Enhancing Automated Traceability (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {912--915}, doi = {}, year = {2011}, } ICSE '11-WORKSHOPS: "Workshop on Flexible Modeling ..." Workshop on Flexible Modeling Tools (FlexiTools 2011) Harold Ossher, André van der Hoek, Margaret-Anne Storey, John Grundy, Rachel Bellamy, and Marian Petre (IBM Research Watson, USA; UC Irvine, USA; University of Victoria, Canada; Swinburne University of Technology at Hawthorn, Australia; The Open University, UK) Modeling tools are often not used for tasks during the software lifecycle for which they should be more helpful; instead free-form approaches, such as office tools and white boards, are frequently used. Prior workshops explored why this is the case and what might be done about it.
The goal of this workshop is to continue those discussions and also to form an initial set of challenge problems and research challenges that researchers and developers of flexible modeling tools should address. @InProceedings{ICSE11p1192, author = {Harold Ossher and André van der Hoek and Margaret-Anne Storey and John Grundy and Rachel Bellamy and Marian Petre}, title = {Workshop on Flexible Modeling Tools (FlexiTools 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1192--1193}, doi = {}, year = {2011}, } |
|
Gu, Zhongxian |
ICSE '11-DEMOS: "BQL: Capturing and Reusing ..."
BQL: Capturing and Reusing Debugging Knowledge
Zhongxian Gu, Earl T. Barr, and Zhendong Su (UC Davis, USA) When fixing a bug, a programmer tends to search for similar bugs that have been resolved in the past. A fix for a similar bug may help them fix, or at least understand, their own bug. We designed and implemented the Bug Query Language (BQL) and its accompanying tools to help users search for similar bugs to aid debugging. This paper demonstrates the main features of the BQL infrastructure. We populated BQL with bugs collected from open-source projects and show that BQL could have helped users to fix real-world bugs. @InProceedings{ICSE11p1001, author = {Zhongxian Gu and Earl T. Barr and Zhendong Su}, title = {BQL: Capturing and Reusing Debugging Knowledge}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1001--1003}, doi = {}, year = {2011}, } |
|
Gupta, Aarti |
ICSE '11: "Coverage Guided Systematic ..."
Coverage Guided Systematic Concurrency Testing
Chao Wang, Mahmoud Said, and Aarti Gupta (NEC Laboratories America, USA; Western Michigan University, USA) Shared-memory multi-threaded programs are notoriously difficult to test, and because of the often astronomically large number of thread schedules, testing all possible interleavings is practically infeasible. In this paper we propose a coverage-guided systematic testing framework, where we use dynamically learned ordering constraints over shared object accesses to select only high-risk interleavings for test execution. An interleaving is high-risk if it is not covered by the ordering constraints, meaning that it contains concurrency scenarios that have not been tested. Our method consists of two components. First, we utilize dynamic information collected from good test runs to learn ordering constraints over the memory-accessing and synchronization statements. These ordering constraints are treated as likely invariants, since they are respected by all the tested runs. Second, during systematic testing, we use the learned ordering constraints to guide the selection of interleavings for future test execution. Our experiments on public-domain multithreaded C/C++ programs show that, by focusing on only the high-risk interleavings rather than enumerating all possible interleavings, our method can increase the coverage of important concurrency scenarios at a reasonable cost and detect most of the concurrency bugs in practice. @InProceedings{ICSE11p221, author = {Chao Wang and Mahmoud Said and Aarti Gupta}, title = {Coverage Guided Systematic Concurrency Testing}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {221--230}, doi = {}, year = {2011}, } |
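A rough Java sketch of the coverage-guided idea described above, assuming statements can be identified by string IDs; the actual framework instruments compiled C/C++ programs and uses richer constraint forms, so all names here are simplifying assumptions.

import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: learn "a happened before b" pairs from passing runs, then flag
// interleavings that exercise an order never seen in any tested run.
class OrderingConstraints {
    private final Set<String> observedBefore = new HashSet<>();

    private static String pair(String a, String b) { return a + "->" + b; }

    // Record the pairwise statement orders seen in one good test run.
    void learnFromRun(List<String> trace) {
        for (int i = 0; i < trace.size(); i++)
            for (int j = i + 1; j < trace.size(); j++)
                observedBefore.add(pair(trace.get(i), trace.get(j)));
    }

    // A candidate interleaving is high-risk if it contains an order that
    // no tested run has exhibited, i.e., an untested concurrency scenario.
    boolean isHighRisk(List<String> interleaving) {
        for (int i = 0; i < interleaving.size(); i++)
            for (int j = i + 1; j < interleaving.size(); j++)
                if (!observedBefore.contains(pair(interleaving.get(i), interleaving.get(j))))
                    return true;
        return false;
    }
}

Only interleavings for which isHighRisk returns true would be scheduled for execution, which is what prunes the enumeration of all possible interleavings.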
|
Gvero, Tihomir |
ICSE '11-DEMOS: "ReAssert: A Tool for Repairing ..."
ReAssert: A Tool for Repairing Broken Unit Tests
Brett Daniel, Danny Dig, Tihomir Gvero, Vilas Jagannath, Johnston Jiaa, Damion Mitchell, Jurand Nogiec, Shin Hwei Tan, and Darko Marinov (University of Illinois at Urbana-Champaign, USA; EPFL, Switzerland) Successful software systems continuously change their requirements and thus code. When this happens, some existing tests get broken because they no longer reflect the intended behavior, and thus they need to be updated. Repairing broken tests can be time-consuming and difficult. We present ReAssert, a tool that can automatically suggest repairs for broken unit tests. Examples include replacing literal values in tests, changing assertion methods, or replacing one assertion with several. Our experiments show that ReAssert can repair many common test failures and that its suggested repairs match developers’ expectations. @InProceedings{ICSE11p1010, author = {Brett Daniel and Danny Dig and Tihomir Gvero and Vilas Jagannath and Johnston Jiaa and Damion Mitchell and Jurand Nogiec and Shin Hwei Tan and Darko Marinov}, title = {ReAssert: A Tool for Repairing Broken Unit Tests}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1010--1012}, doi = {}, year = {2011}, } |
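For a flavor of the kind of repair the tool suggests, consider this hypothetical JUnit 4 test; the Order class, the values, and the repaired literal are all invented for illustration and are not ReAssert output.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceTest {
    @Test
    public void totalIncludesTax() {
        Order order = new Order(100.0);
        // Before repair (broken after the tax rate in the code changed):
        // assertEquals(105.0, order.total(), 0.001);
        // After a suggested literal-replacement repair:
        assertEquals(110.0, order.total(), 0.001);
    }

    // Minimal stand-in for the class under test.
    static class Order {
        private final double net;
        Order(double net) { this.net = net; }
        double total() { return net * 1.10; } // tax raised from 5% to 10%
    }
}

Such a repair is only a suggestion; the developer must still confirm that the new literal reflects intended behavior rather than a genuine bug.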
|
Hahnenberg, Mark |
ICSE '11-NIER: "Permission-Based Programming ..."
Permission-Based Programming Languages (NIER Track)
Jonathan Aldrich, Ronald Garcia, Mark Hahnenberg, Manuel Mohr, Karl Naden, Darpan Saini, and Roger Wolff (CMU, USA; Karlsruhe Institute of Technology, Germany; University of Chile, Chile) Linear permissions have been proposed as a lightweight way to specify how an object may be aliased, and whether those aliases allow mutation. Prior work has demonstrated the value of permissions for addressing many software engineering concerns, including information hiding, protocol checking, concurrency, security, and memory management. We propose the concept of a permission-based programming language--a language whose object model, type system, and runtime are all co-designed with permissions in mind. This approach supports an object model in which the structure of an object can change over time, a type system that tracks changing structure in addition to addressing the other concerns above, and a runtime system that can dynamically check permission assertions and leverage permissions to parallelize code. We sketch the design of the permission-based programming language Plaid, and argue that the approach may provide significant software engineering benefits. @InProceedings{ICSE11p828, author = {Jonathan Aldrich and Ronald Garcia and Mark Hahnenberg and Manuel Mohr and Karl Naden and Darpan Saini and Roger Wolff}, title = {Permission-Based Programming Languages (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {828--831}, doi = {}, year = {2011}, } |
|
Hale, Matthew L. |
ICSE '11-DEMOS: "SEREBRO: Facilitating Student ..."
SEREBRO: Facilitating Student Project Team Collaboration
Noah M. Jorgenson, Matthew L. Hale, and Rose F. Gamble (University of Tulsa, USA) In this demonstration, we show SEREBRO, a lightweight courseware developed for student team collaboration in a software engineering class. SEREBRO couples an idea forum with software project management tools to maintain cohesive interaction between team discussion and the resulting work products, such as tasking, documentation, and version control. SEREBRO has been used for two consecutive years of software engineering classes. Student input and experiments on student use in these classes have directed SEREBRO to its current functionality. @InProceedings{ICSE11p1019, author = {Noah M. Jorgenson and Matthew L. Hale and Rose F. Gamble}, title = {SEREBRO: Facilitating Student Project Team Collaboration}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1019--1021}, doi = {}, year = {2011}, } |
|
Hamou-Lhadj, Abdelwahab |
ICSE '11-NIER: "A Software Behaviour Analysis ..."
A Software Behaviour Analysis Framework Based on the Human Perception Systems (NIER Track)
Heidar Pirzadeh and Abdelwahab Hamou-Lhadj (Concordia University, Canada) Understanding software behaviour can help in a variety of software engineering tasks if one can develop effective techniques for analyzing the information generated from a system's run. These techniques often rely on tracing. Traces, however, can be very large and complex to process. In this paper, we present an innovative approach for trace analysis inspired by the way the human brain and perception systems operate. The idea is to mimic the psychological processes that have been developed over the years to explain how our perception system deals with huge volumes of visual data. We show how similar mechanisms can be applied to the abstraction and simplification of large traces. Some preliminary results are also presented. @InProceedings{ICSE11p948, author = {Heidar Pirzadeh and Abdelwahab Hamou-Lhadj}, title = {A Software Behaviour Analysis Framework Based on the Human Perception Systems (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {948--951}, doi = {}, year = {2011}, } |
|
Hannay, Jo E. |
ICSE '11-SEIP: "A Comparison of Model-based ..."
A Comparison of Model-based and Judgment-based Release Planning in Incremental Software Projects
Hans Christian Benestad and Jo E. Hannay (Simula Research Laboratory, Norway) Numerous factors are involved when deciding when to implement which features in incremental software development. To facilitate a rational and efficient planning process, release planning models make such factors explicit and compute release plan alternatives according to optimization principles. However, experience suggests that industrial use of such models is limited. To investigate the feasibility of model and tool support, we compared input factors assumed by release planning models with factors considered by expert planners. The former factors were cataloged by systematically surveying release planning models, while the latter were elicited through repertory grid interviews in three software organizations. The findings indicate a substantial overlap between the two approaches. However, a detailed analysis reveals that models focus on only select parts of a possibly larger space of relevant planning factors. Three concrete areas of mismatch were identified: (1) continuously evolving requirements and specifications, (2) continuously changing prioritization criteria, and (3) authority-based decision processes. With these results in mind, models, tools, and guidelines can be adjusted to better address real-life development processes. @InProceedings{ICSE11p766, author = {Hans Christian Benestad and Jo E. Hannay}, title = {A Comparison of Model-based and Judgment-based Release Planning in Incremental Software Projects}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {766--775}, doi = {}, year = {2011}, } |
|
Hansen, Klaus Marius |
ICSE '11-NIER: "Towards Architectural Information ..."
Towards Architectural Information in Implementation (NIER Track)
Henrik Bærbak Christensen and Klaus Marius Hansen (Aarhus University, Denmark; University of Copenhagen, Denmark) Agile development methods favor speed and feature-producing iterations. Software architecture, on the other hand, is ripe with techniques that are slow and not oriented directly towards implementation of customers’ needs. Thus, there is a major challenge in retaining architectural information in a fast-paced agile project. We propose to embed as much architectural information as possible in the central artefact of the agile universe, the code. We argue that valuable architectural information is thereby retained for (automatic) documentation, validation, and further analysis, based on a relatively small investment of effort. We outline some preliminary examples of architectural annotations in Java and Python and their applicability in practice. @InProceedings{ICSE11p928, author = {Henrik Bærbak Christensen and Klaus Marius Hansen}, title = {Towards Architectural Information in Implementation (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {928--931}, doi = {}, year = {2011}, } |
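As one hedged illustration of what such embedded architectural information could look like in Java (the annotation vocabulary below is an assumption of ours, not necessarily the authors' own):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// A hypothetical annotation recording a class's architectural role in code.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface ArchitecturalRole {
    String layer();              // e.g., "persistence", "domain", "ui"
    String pattern() default ""; // e.g., "Repository", "Facade"
}

@ArchitecturalRole(layer = "persistence", pattern = "Repository")
class UserRepository {
    // A checker could reflect over such annotations to validate, for
    // example, that ui-layer classes never reference persistence types.
}

Because the annotations are retained at runtime, documentation and conformance checks can be generated from the code itself, which is the kind of payoff the authors argue for.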
|
Hardy, John |
ICSE '11-NIER: "Digitally Annexing Desk Space ..."
Digitally Annexing Desk Space for Software Development (NIER Track)
John Hardy, Christopher Bull, Gerald Kotonya, and Jon Whittle (Lancaster University, UK) Software engineering is a team activity yet the programmer’s key tool, the IDE, is still largely that of a soloist. This paper describes the vision, implementation and initial evaluation of CoffeeTable – a fully featured research prototype resulting from our reflections on the software design process. CoffeeTable exchanges the traditional IDE for one built around a shared interactive desk. The proposed solution encourages smooth transitions between agile and traditional modes of working whilst helping to create a shared vision and common reference frame – key to sustaining a good design. This paper also presents early results from the evaluation of CoffeeTable and offers some insights from the lessons learned. In particular, it highlights the role of developer tools and the software constructions that are shaped by them. @InProceedings{ICSE11p812, author = {John Hardy and Christopher Bull and Gerald Kotonya and Jon Whittle}, title = {Digitally Annexing Desk Space for Software Development (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {812--815}, doi = {}, year = {2011}, } |
|
Hariri, Negar |
ICSE '11: "On-demand Feature Recommendations ..."
On-demand Feature Recommendations Derived from Mining Public Product Descriptions
Horatiu Dumitru, Marek Gibiec, Negar Hariri, Jane Cleland-Huang, Bamshad Mobasher, Carlos Castro-Herrera, and Mehdi Mirakhorli (DePaul University, USA) We present a recommender system that models and recommends product features for a given domain. Our approach mines product descriptions from publicly available online specifications, utilizes text mining and a novel incremental diffusive clustering algorithm to discover domain-specific features, generates a probabilistic feature model that represents commonalities, variants, and cross-category features, and then uses association rule mining and the k-NearestNeighbor machine learning strategy to generate product specific feature recommendations. Our recommender system supports the relatively labor-intensive task of domain analysis, potentially increasing opportunities for re-use, reducing time-to-market, and delivering more competitive software products. The approach is empirically validated against 20 different product categories using thousands of product descriptions mined from a repository of free software applications. @InProceedings{ICSE11p181, author = {Horatiu Dumitru and Marek Gibiec and Negar Hariri and Jane Cleland-Huang and Bamshad Mobasher and Carlos Castro-Herrera and Mehdi Mirakhorli}, title = {On-demand Feature Recommendations Derived from Mining Public Product Descriptions}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {181--190}, doi = {}, year = {2011}, } |
|
Harman, Mark |
ICSE '11: "Model Projection: Simplifying ..."
Model Projection: Simplifying Models in Response to Restricting the Environment
Kelly Androutsopoulos, David Binkley, David Clark, Nicolas Gold, Mark Harman, Kevin Lano, and Zheng Li (University College London, UK; Loyola University Maryland, USA; King's College London, UK) This paper introduces Model Projection. Finite state models such as Extended Finite State Machines are being used in an ever-increasing number of software engineering activities. Model projection facilitates model development by specializing models for a specific operating environment. A projection is useful in many design-level applications including specification reuse and property verification. The applicability of model projection rests upon three critical concerns: correctness, effectiveness, and efficiency, all of which are addressed in this paper. We introduce four related algorithms for model projection and prove each correct. We also present an empirical study of effectiveness and efficiency using ten models, including widely-studied benchmarks as well as industrial models. Results show that a typical projection includes about half of the states and a third of the transitions from the original model. @InProceedings{ICSE11p291, author = {Kelly Androutsopoulos and David Binkley and David Clark and Nicolas Gold and Mark Harman and Kevin Lano and Zheng Li}, title = {Model Projection: Simplifying Models in Response to Restricting the Environment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {291--300}, doi = {}, year = {2011}, } |
|
Hassan, Ahmed E. |
ICSE '11: "An Empirical Study of Build ..."
An Empirical Study of Build Maintenance Effort
Shane McIntosh, Bram Adams, Thanh H. D. Nguyen, Yasutaka Kamei, and Ahmed E. Hassan (Queen's University, Canada) The build system of a software project is responsible for transforming source code and other development artifacts into executable programs and deliverables. Similar to source code, build system specifications require maintenance to cope with newly implemented features, changes to imported Application Program Interfaces (APIs), and source code restructuring. In this paper, we mine the version histories of one proprietary and nine open source projects of different sizes and domain to analyze the overhead that build maintenance imposes on developers. We split our analysis into two dimensions: (1) Build Coupling, i.e., how frequently source code changes require build changes, and (2) Build Ownership, i.e., the proportion of developers responsible for build maintenance. Our results indicate that, despite the difference in scale, the build system churn rate is comparable to that of the source code, and build changes induce more relative churn on the build system than source code changes induce on the source code. Furthermore, build maintenance yields up to a 27% overhead on source code development and a 44% overhead on test development. Up to 79% of source code developers and 89% of test code developers are significantly impacted by build maintenance, yet investment in build experts can reduce the proportion of impacted developers to 22% of source code developers and 24% of test code developers. @InProceedings{ICSE11p141, author = {Shane McIntosh and Bram Adams and Thanh H. D. Nguyen and Yasutaka Kamei and Ahmed E. Hassan}, title = {An Empirical Study of Build Maintenance Effort}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {141--150}, doi = {}, year = {2011}, } |
|
Hayes, Jane Huffman |
ICSE '11-NIER: "Towards Overcoming Human Analyst ..."
Towards Overcoming Human Analyst Fallibility in the Requirements Tracing Process (NIER Track)
David Cuddeback, Alex Dekhtyar, Jane Huffman Hayes, Jeff Holden, and Wei-Keat Kong (California Polytechnic State University, USA; University of Kentucky, USA) Our research group recently discovered that human analysts, when asked to validate candidate traceability matrices, produce predictably imperfect results, in some cases less accurate than the starting candidate matrices. This discovery radically changes our understanding of how to design a fast, accurate and certifiable tracing process that can be implemented as part of software assurance activities. We present our vision for the new approach to achieving this goal. Further, we posit that human fallibility may impact other softare engineering activities involving decision support tools. @InProceedings{ICSE11p860, author = {David Cuddeback and Alex Dekhtyar and Jane Huffman Hayes and Jeff Holden and Wei-Keat Kong}, title = {Towards Overcoming Human Analyst Fallibility in the Requirements Tracing Process (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {860--863}, doi = {}, year = {2011}, } |
|
Heimdahl, Mats P. E. |
ICSE '11: "Programs, Tests, and Oracles: ..."
Programs, Tests, and Oracles: The Foundations of Testing Revisited
Matt Staats, Michael W. Whalen, and Mats P. E. Heimdahl (University of Minnesota, USA) In previous decades, researchers have explored the formal foundations of program testing. By exploring the foundations of testing largely separate from any specific method of testing, these researchers provided a general discussion of the testing process, including the goals, the underlying problems, and the limitations of testing. Unfortunately, a common, rigorous foundation has not been widely adopted in empirical software testing research, making it difficult to generalize and compare empirical research. We continue this foundational work, providing a framework intended to serve as a guide for future discussions and empirical studies concerning software testing. Specifically, we extend Gourlay’s functional description of testing with the notion of a test oracle, an aspect of testing largely overlooked in previous foundational work and only lightly explored in general. We argue that additional work exploring the interrelationship between programs, tests, and oracles should be performed, and use our extension to clarify concepts presented in previous work, present new concepts related to test oracles, and demonstrate that oracle selection must be considered when discussing the efficacy of a testing process. @InProceedings{ICSE11p391, author = {Matt Staats and Michael W. Whalen and Mats P. E. Heimdahl}, title = {Programs, Tests, and Oracles: The Foundations of Testing Revisited}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {391--400}, doi = {}, year = {2011}, } ICSE '11-NIER: "Better Testing Through Oracle ..." Better Testing Through Oracle Selection (NIER Track) Matt Staats, Michael W. Whalen, and Mats P. E. Heimdahl (University of Minnesota, USA) In software testing, the test oracle determines if the application under test has performed an execution correctly. In current testing practice and research, significant effort and thought are placed on selecting test inputs, with the selection of test oracles largely neglected. Here, we argue that improvements to the testing process can be made by considering the problem of oracle selection. In particular, we argue that selecting the test oracle and test inputs together, to complement one another, may yield improvements in testing effectiveness. We illustrate this using an example and present selected results from an ongoing study demonstrating the relationship between test suite selection, oracle selection, and fault finding. @InProceedings{ICSE11p892, author = {Matt Staats and Michael W. Whalen and Mats P. E. Heimdahl}, title = {Better Testing Through Oracle Selection (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {892--895}, doi = {}, year = {2011}, } |
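A minimal sketch of the separation the authors argue for, treating the oracle as a first-class, selectable component of a test; the interface and helper names here are hypothetical, not the paper's formalism.

import java.util.function.Predicate;

// A test pairs an input with an oracle; what the test can detect depends
// on the oracle chosen, not just on the input.
interface Oracle<O> {
    boolean accepts(O observed);
}

class TestCase<I, O> {
    final I input;
    final Oracle<O> oracle;
    TestCase(I input, Oracle<O> oracle) {
        this.input = input;
        this.oracle = oracle;
    }
}

class Oracles {
    // Strong oracle: compares the full observed output to an expectation.
    static <O> Oracle<O> expected(O want) {
        return got -> got.equals(want);
    }

    // Weaker property oracle: checks only an invariant of the output.
    static <O> Oracle<O> property(Predicate<O> invariant) {
        return invariant::test;
    }
}

Two test suites with identical inputs but different oracles can have very different fault-finding ability, which is why input and oracle selection are best considered together.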
|
Heinemann, Lars |
ICSE '11-DEMOS: "The Quamoco Tool Chain for ..."
The Quamoco Tool Chain for Quality Modeling and Assessment
Florian Deissenboeck, Lars Heinemann, Markus Herrmannsdoerfer, Klaus Lochmann, and Stefan Wagner (TU München, Germany) Continuous quality assessment is crucial for the long-term success of evolving software. On the one hand, code analysis tools automatically supply quality indicators, but do not provide a complete overview of software quality. On the other hand, quality models define abstract characteristics that influence quality, but are not operationalized. Currently, no tool chain exists that integrates code analysis tools with quality models. To alleviate this, the Quamoco project provides a tool chain to both define and assess software quality. The tool chain consists of a quality model editor and an integration with the quality assessment toolkit ConQAT. Using the editor, we can define quality models ranging from abstract characteristics down to operationalized measures. From the quality model, a ConQAT configuration can be generated that can be used to automatically assess the quality of a software system. @InProceedings{ICSE11p1007, author = {Florian Deissenboeck and Lars Heinemann and Markus Herrmannsdoerfer and Klaus Lochmann and Stefan Wagner}, title = {The Quamoco Tool Chain for Quality Modeling and Assessment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1007--1009}, doi = {}, year = {2011}, } |
|
Helming, Jonas |
ICSE '11-NIER: "A Domain Specific Requirements ..."
A Domain Specific Requirements Model for Scientific Computing (NIER Track)
Yang Li, Nitesh Narayan, Jonas Helming, and Maximilian Koegel (TU München, Germany) Requirements engineering is a core activity in software engineering. However, formal requirements engineering methodologies and documented requirements are often missing in scientific computing projects. We claim that there is a need for methodologies that capture requirements for scientific computing projects, because traditional requirements engineering methodologies are difficult to apply in this domain. We propose a novel domain-specific requirements model to meet this need. We conducted an exploratory experiment to evaluate the usage of this model in scientific computing projects. The results indicate that the proposed model facilitates communication across the boundary between the scientific computing domain and the software engineering domain, and that it supports requirements elicitation for such projects efficiently. @InProceedings{ICSE11p848, author = {Yang Li and Nitesh Narayan and Jonas Helming and Maximilian Koegel}, title = {A Domain Specific Requirements Model for Scientific Computing (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {848--851}, doi = {}, year = {2011}, } |
|
Herbsleb, James D. |
ICSE '11: "Configuring Global Software ..."
Configuring Global Software Teams: A Multi-Company Analysis of Project Productivity, Quality, and Profits
Narayan Ramasubbu, Marcelo Cataldo, Rajesh Krishna Balan, and James D. Herbsleb (Singapore Management University, Singapore; CMU, USA) In this paper, we examined the impact of project-level configurational choices of globally distributed software teams on project productivity, quality, and profits. Our analysis used data from 362 projects of four different firms. These projects spanned a wide range of programming languages, application domains, process choices, and development sites spread over 15 countries and 5 continents. Our analysis revealed fundamental tradeoffs among configurational choices optimized for productivity, quality, and/or profits. In particular, achieving higher levels of productivity and quality requires diametrically opposed configurational choices. In addition, creating imbalances in the expertise and personnel distribution of project teams significantly helps increase profit margins. However, a profit-oriented imbalance could also significantly affect productivity and/or quality outcomes. Analyzing these complex tradeoffs, we provide actionable managerial insights that can help software firms and their clients choose configurations that achieve desired project outcomes in globally distributed software development. @InProceedings{ICSE11p261, author = {Narayan Ramasubbu and Marcelo Cataldo and Rajesh Krishna Balan and James D. Herbsleb}, title = {Configuring Global Software Teams: A Multi-Company Analysis of Project Productivity, Quality, and Profits}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {261--270}, doi = {}, year = {2011}, } ICSE '11: "Factors Leading to Integration ..." Factors Leading to Integration Failures in Global Feature-Oriented Development: An Empirical Analysis Marcelo Cataldo and James D. Herbsleb (CMU, USA) Feature-driven software development is a novel approach that has grown in popularity over the past decade. Researchers and practitioners alike have argued that numerous benefits could be garnered from adopting a feature-driven development approach. However, those persuasive arguments have not been matched with supporting empirical evidence. Moreover, developing software systems around features involves new technical and organizational elements that could have significant implications for outcomes such as software quality. This paper presents an empirical analysis of a large-scale project that implemented 1195 features in a software system. We examined the impact that technical attributes of product features, attributes of the feature teams and cross-feature interactions have on software integration failures. Our results show that technical factors such as the nature of component dependencies and organizational factors such as the geographic dispersion of the feature teams and the role of the feature owners had complementary impact suggesting their independent and important role in terms of software quality. Furthermore, our analyses revealed that cross-feature interactions, measured as the number of architectural dependencies between two product features, are a major driver of integration failures. The research and practical implications of our results are discussed. @InProceedings{ICSE11p161, author = {Marcelo Cataldo and James D. Herbsleb}, title = {Factors Leading to Integration Failures in Global Feature-Oriented Development: An Empirical Analysis}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {161--170}, doi = {}, year = {2011}, } |
|
Hermans, Felienne |
ICSE '11: "Supporting Professional Spreadsheet ..."
Supporting Professional Spreadsheet Users by Generating Leveled Dataflow Diagrams
Felienne Hermans, Martin Pinzger, and Arie van Deursen (Delft University of Technology, Netherlands) Thanks to their flexibility and intuitive programming model, spreadsheets are widely used in industry, often for business-critical applications. Similar to software developers, professional spreadsheet users demand support for maintaining and transferring their spreadsheets. In this paper, we first study the problems and information needs of professional spreadsheet users by means of a survey conducted at a large financial company. Based on these needs, we then present an approach that extracts this information from spreadsheets and presents it in a compact and easy-to-understand way, with leveled dataflow diagrams. Our approach comes with three different views on the dataflow that allow the user to analyze the dataflow diagrams in a top-down fashion. To evaluate the usefulness of the proposed approach, we conducted a series of interviews as well as nine case studies in an industrial setting. The results of the evaluation clearly indicate the demand for, and usefulness of, our approach in easing the understanding of spreadsheets. @InProceedings{ICSE11p451, author = {Felienne Hermans and Martin Pinzger and Arie van Deursen}, title = {Supporting Professional Spreadsheet Users by Generating Leveled Dataflow Diagrams}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {451--460}, doi = {}, year = {2011}, } |
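As a toy illustration of the extraction step such an approach depends on, the sketch below recovers raw cell-to-cell dataflow edges from formulas; the real tool goes much further and aggregates such edges into leveled diagrams, and all names here are assumptions of ours.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Builds a raw dataflow graph: each cell maps to the cells its formula reads.
class SpreadsheetDataflow {
    private static final Pattern CELL_REF = Pattern.compile("[A-Z]+[0-9]+");

    static Map<String, List<String>> edges(Map<String, String> formulas) {
        Map<String, List<String>> graph = new HashMap<>();
        for (Map.Entry<String, String> e : formulas.entrySet()) {
            List<String> inputs = new ArrayList<>();
            Matcher m = CELL_REF.matcher(e.getValue());
            while (m.find()) inputs.add(m.group()); // each referenced cell
            graph.put(e.getKey(), inputs);
        }
        return graph;
    }
}

For example, edges(Map.of("C1", "=A1+B1")) yields {C1=[A1, B1]}; a leveled diagram would then group such cells into higher-level blocks.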
|
Herrmannsdoerfer, Markus |
ICSE '11-DEMOS: "The Quamoco Tool Chain for ..."
The Quamoco Tool Chain for Quality Modeling and Assessment
Florian Deissenboeck, Lars Heinemann, Markus Herrmannsdoerfer, Klaus Lochmann, and Stefan Wagner (TU München, Germany) Continuous quality assessment is crucial for the long-term success of evolving software. On the one hand, code analysis tools automatically supply quality indicators, but do not provide a complete overview of software quality. On the other hand, quality models define abstract characteristics that influence quality, but are not operationalized. Currently, no tool chain exists that integrates code analysis tools with quality models. To alleviate this, the Quamoco project provides a tool chain to both define and assess software quality. The tool chain consists of a quality model editor and an integration with the quality assessment toolkit ConQAT. Using the editor, we can define quality models ranging from abstract characteristics down to operationalized measures. From the quality model, a ConQAT configuration can be generated that can be used to automatically assess the quality of a software system. @InProceedings{ICSE11p1007, author = {Florian Deissenboeck and Lars Heinemann and Markus Herrmannsdoerfer and Klaus Lochmann and Stefan Wagner}, title = {The Quamoco Tool Chain for Quality Modeling and Assessment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1007--1009}, doi = {}, year = {2011}, } |
|
Heymans, Patrick |
ICSE '11: "Symbolic Model Checking of ..."
Symbolic Model Checking of Software Product Lines
Andreas Classen, Patrick Heymans, Pierre-Yves Schobbens, and Axel Legay (University of Namur, Belgium; IRISA/INRIA Rennes, France; University of Liège, Belgium) We study the problem of model checking software product line (SPL) behaviours against temporal properties. This is more difficult than for single systems because an SPL with n features yields up to 2^n individual systems to verify. As each individual verification suffers from state explosion, it is crucial to propose efficient formalisms and heuristics. We recently proposed featured transition systems (FTS), a compact representation for SPL behaviour, and defined algorithms for model checking FTS against linear temporal properties. Although they were shown to outperform individual system verifications, they still face a state explosion problem as they enumerate and visit system states one by one. In this paper, we tackle this latter problem by using symbolic representations of the state space. This led us to consider computation tree logic (CTL), which is supported by the industry-strength symbolic model checker NuSMV. We first lay the foundations for symbolic SPL model checking by defining a feature-oriented version of CTL and its dedicated algorithms. We then describe an implementation that adapts the NuSMV language and tool infrastructure. Finally, we propose theoretical and empirical evaluations of our results. The benchmarks show that for certain properties, our algorithm is over a hundred times faster than model checking each system with the standard algorithm. @InProceedings{ICSE11p321, author = {Andreas Classen and Patrick Heymans and Pierre-Yves Schobbens and Axel Legay}, title = {Symbolic Model Checking of Software Product Lines}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {321--330}, doi = {}, year = {2011}, } |
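To convey the featured-transition-system idea in code, here is a simplified, explicit-state Java sketch; the paper's algorithms are symbolic, and the type and method names below are hypothetical.

import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

// Each transition carries a feature guard; one product's behaviour is the
// subset of transitions whose guards its feature selection satisfies.
class FTS {
    record Transition(String from, String to, Predicate<Set<String>> guard) {}

    private final List<Transition> transitions;
    FTS(List<Transition> transitions) { this.transitions = transitions; }

    // Derive the behaviour of a single product from the family model.
    List<Transition> project(Set<String> selectedFeatures) {
        return transitions.stream()
                .filter(t -> t.guard().test(selectedFeatures))
                .toList();
    }
}

A guard such as features -> features.contains("Backup") keeps its transition only in products where the Backup feature is selected; the symbolic algorithms avoid enumerating all 2^n such products one by one.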
|
Hihn, Jairus |
ICSE '11-SEIP: "Experiences with Text Mining ..."
Experiences with Text Mining Large Collections of Unstructured Systems Development Artifacts at JPL
Daniel Port, Allen Nikora, Jairus Hihn, and LiGuo Huang (University of Hawaii, USA; Jet Propulsion Laboratory, USA; Southern Methodist University, USA) Often repositories of systems engineering artifacts at NASA’s Jet Propulsion Laboratory (JPL) are so large and poorly structured that they have outgrown our capability to effectively manually process their contents to extract useful information. Sophisticated text mining methods and tools seem a quick, low-effort approach to automating our limited manual efforts. Our experiences of exploring such methods, mainly in three areas - historical risk analysis, defect identification based on requirements analysis, and over-time analysis of system anomalies at JPL - have shown that obtaining useful results requires substantial unanticipated effort, from preprocessing the data to transforming the output for practical applications. We have not observed any quick “wins” or realized benefit from short-term effort avoidance through automation in this area. Surprisingly, we have realized a number of unexpected long-term benefits from the process of applying text mining to our repositories. This paper elaborates some of these benefits and the important lessons we learned from preparing and applying text mining to large unstructured system artifacts at JPL, aiming to benefit future text mining applications in similar problem domains and, we hope, in broader areas of application. @InProceedings{ICSE11p701, author = {Daniel Port and Allen Nikora and Jairus Hihn and LiGuo Huang}, title = {Experiences with Text Mining Large Collections of Unstructured Systems Development Artifacts at JPL}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {701--710}, doi = {}, year = {2011}, } |
|
Hindle, Abram |
ICSE '11-NIER: "Multifractal Aspects of Software ..."
Multifractal Aspects of Software Development (NIER Track)
Abram Hindle, Michael W. Godfrey, and Richard C. Holt (UC Davis, USA; University of Waterloo, Canada) Software development is difficult to model, particularly the noisy, non-stationary signals of changes per time unit extracted from version control systems (VCSs). Currently researchers are utilizing time-series analysis tools such as ARIMA to model these signals extracted from a project's VCS. Unfortunately, current approaches are not very amenable to the underlying power-law distributions of this kind of signal. We propose modeling changes per time unit using multifractal analysis. This analysis can be used when a signal exhibits multiscale self-similarity, as in the case of complex data drawn from power-law distributions. Specifically, we utilize multifractal analysis to demonstrate that software development is multifractal; that is, the signal is a fractal composed of multiple fractal dimensions along a range of Hurst exponents. Thus we show that software development has multi-scale self-similarity. We also pose questions that we hope multifractal analysis can answer. @InProceedings{ICSE11p968, author = {Abram Hindle and Michael W. Godfrey and Richard C. Holt}, title = {Multifractal Aspects of Software Development (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {968--971}, doi = {}, year = {2011}, } |
|
Hochstein, Lorin |
ICSE '11-WORKSHOPS: "Fourth International Workshop ..."
Fourth International Workshop on Software Engineering for Computational Science and Engineering (SE-CSE 2011)
Jeffrey C. Carver, Roscoe Bartlett, Ian Gorton, Lorin Hochstein, Diane Kelly, and Judith Segal (University of Alabama, USA; Sandia National Laboratories, USA; Pacific Northwest National Laboratory, USA; USC-ISI, USA; Royal Military College, Canada; The Open University, UK) Computational Science and Engineering (CSE) software supports a wide variety of domains including nuclear physics, crash simulation, satellite data processing, fluid dynamics, climate modeling, bioinformatics, and vehicle development. The increase in the importance of CSE software motivates the need to identify and understand appropriate software engineering (SE) practices for CSE. Because of the uniqueness of CSE software development, existing SE tools and techniques developed for the business/IT community are often not efficient or effective. Appropriate SE solutions must account for the salient characteristics of the CSE development environment. This situation creates an opportunity for members of the SE community to interact with members of the CSE community to address this need. This workshop facilitates that collaboration by bringing together members of the SE community and the CSE community to share perspectives and present findings from research and practice relevant to CSE software. A significant portion of the workshop is devoted to focused interaction among the participants with the goal of generating a research agenda to improve tools, techniques, and experimental methods for studying CSE software engineering. @InProceedings{ICSE11p1226, author = {Jeffrey C. Carver and Roscoe Bartlett and Ian Gorton and Lorin Hochstein and Diane Kelly and Judith Segal}, title = {Fourth International Workshop on Software Engineering for Computational Science and Engineering (SE-CSE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1226--1227}, doi = {}, year = {2011}, } |
|
Hoda, Rashina |
ICSE '11-WORKSHOPS: "Workshop on Cooperative and ..."
Workshop on Cooperative and Human Aspects of Software Engineering (CHASE 2011)
Marcelo Cataldo, Cleidson de Souza, Yvonne Dittrich, Rashina Hoda, and Helen Sharp (Robert Bosch Research, USA; IBM Research, Brazil; IT University of Copenhagen, Denmark; Victoria University of Wellington, New Zealand; The Open University, UK) Software is created by people for people working in varied environments, under various conditions. Thus, understanding the cooperative and human aspects of software development is crucial to comprehending how methods and tools are used, and thereby to improving the creation and maintenance of software. Over the years, both researchers and practitioners have recognized the need to study and understand these aspects. Despite this recognition, researchers in cooperative and human aspects have no clear place to meet and are dispersed across different research conferences and areas. The goal of this workshop is to provide a forum for discussing high-quality research on human and cooperative aspects of software engineering. We aim to provide both a meeting place for the growing community and an opportunity for researchers interested in joining the field to present their work in progress and gain an overview of the field. @InProceedings{ICSE11p1188, author = {Marcelo Cataldo and Cleidson de Souza and Yvonne Dittrich and Rashina Hoda and Helen Sharp}, title = {Workshop on Cooperative and Human Aspects of Software Engineering (CHASE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1188--1189}, doi = {}, year = {2011}, } |
|
Hoek, André van der |
ICSE '11-DEMOS: "A Demonstration of a Distributed ..."
A Demonstration of a Distributed Software Design Sketching Tool
Nicolas Mangano, Mitch Dempsey, Nicolas Lopez, and André van der Hoek (UC Irvine, USA) Software designers frequently sketch when they design, particularly during the early phases of exploration of a design problem and its solution. In so doing, they shun formal design tools, the reason being that such tools impose conformity and precision prematurely. Sketching, on the other hand, is a highly fluid and flexible way of expressing oneself. In this paper, we present Calico, a sketch-based distributed software design tool that supports software designers with a variety of features that improve over the use of just pen-and-paper or a regular whiteboard, and are tailored specifically for software design. Calico is meant to be used on electronic whiteboards or tablets, and provides for rapid creation and manipulation of design content by sets of developers who can collaborate in a distributed setting. @InProceedings{ICSE11p1028, author = {Nicolas Mangano and Mitch Dempsey and Nicolas Lopez and André van der Hoek}, title = {A Demonstration of a Distributed Software Design Sketching Tool}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1028--1030}, doi = {}, year = {2011}, } ICSE '11-NIER: "The Code Orb -- Supporting ..." The Code Orb -- Supporting Contextualized Coding via At-a-Glance Views (NIER Track) Nicolas Lopez and André van der Hoek (UC Irvine, USA) While code is typically presented as a flat file to a developer who must change it, this flat file exists within a context that can drastically influence how a developer approaches changing it. While the developer clearly must be careful changing any code, they should probably be even more careful in changing code that recently saw major changes, is barely covered by test cases, and was the source of a number of bugs. Contextualized coding refers to the ability of the developer to effectively use such contextual information while they work on changes. In this paper, we introduce the Code Orb, a contextualized coding tool that builds upon existing mining and analysis techniques to warn developers on a line-by-line basis of the volatility of the code they are working on. The key insight behind the Code Orb is that it is neither desirable nor possible to always present the code’s context in its entirety; instead, it is necessary to provide an abstracted view of the context that informs the developer of which parts of the code they need to pay more attention to. This paper discusses the principles of and rationale behind contextualized coding, introduces the Code Orb, and illustrates its function with example code and context drawn from the Mylyn [11] project. @InProceedings{ICSE11p824, author = {Nicolas Lopez and André van der Hoek}, title = {The Code Orb -- Supporting Contextualized Coding via At-a-Glance Views (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {824--827}, doi = {}, year = {2011}, } ICSE '11-WORKSHOPS: "Workshop on Flexible Modeling ..." Workshop on Flexible Modeling Tools (FlexiTools 2011) Harold Ossher, André van der Hoek, Margaret-Anne Storey, John Grundy, Rachel Bellamy, and Marian Petre (IBM Research Watson, USA; UC Irvine, USA; University of Victoria, Canada; Swinburne University of Technology at Hawthorn, Australia; The Open University, UK) Modeling tools are often not used for tasks in the software lifecycle where they could be helpful; instead, free-form approaches, such as office tools and whiteboards, are frequently used.
Prior workshops explored why this is the case and what might be done about it. The goal of this workshop is to continue those discussions and also to form an initial set of challenge problems and research challenges that researchers and developers of flexible modeling tools should address. @InProceedings{ICSE11p1192, author = {Harold Ossher and André van der Hoek and Margaret-Anne Storey and John Grundy and Rachel Bellamy and Marian Petre}, title = {Workshop on Flexible Modeling Tools (FlexiTools 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1192--1193}, doi = {}, year = {2011}, } |
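The Code Orb entry above collapses mined context into a per-line volatility warning. A minimal sketch of that aggregation step, assuming the churn, coverage, and bug-fix counts have already been mined; the weights and field names are invented for illustration.

    # Hypothetical per-line context mined from a VCS and coverage reports.
    line_context = {
        42: {"recent_changes": 5, "covering_tests": 0, "past_bug_fixes": 3},
        43: {"recent_changes": 0, "covering_tests": 7, "past_bug_fixes": 0},
    }

    def volatility(ctx, w_churn=0.4, w_cov=0.3, w_bugs=0.3):
        """Collapse several context signals into one 0..1 'pay attention here' score."""
        churn = min(ctx["recent_changes"] / 10.0, 1.0)
        uncovered = 1.0 if ctx["covering_tests"] == 0 else 1.0 / (1 + ctx["covering_tests"])
        buggy = min(ctx["past_bug_fixes"] / 5.0, 1.0)
        return w_churn * churn + w_cov * uncovered + w_bugs * buggy

    for line, ctx in sorted(line_context.items()):
        print(line, round(volatility(ctx), 2))   # line 42 scores high, line 43 low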
|
Holden, Jeff |
ICSE '11-NIER: "Towards Overcoming Human Analyst ..."
Towards Overcoming Human Analyst Fallibility in the Requirements Tracing Process (NIER Track)
David Cuddeback, Alex Dekhtyar, Jane Huffman Hayes, Jeff Holden, and Wei-Keat Kong (California Polytechnic State University, USA; University of Kentucky, USA) Our research group recently discovered that human analysts, when asked to validate candidate traceability matrices, produce predictably imperfect results, in some cases less accurate than the starting candidate matrices. This discovery radically changes our understanding of how to design a fast, accurate and certifiable tracing process that can be implemented as part of software assurance activities. We present our vision for the new approach to achieving this goal. Further, we posit that human fallibility may impact other software engineering activities involving decision support tools. @InProceedings{ICSE11p860, author = {David Cuddeback and Alex Dekhtyar and Jane Huffman Hayes and Jeff Holden and Wei-Keat Kong}, title = {Towards Overcoming Human Analyst Fallibility in the Requirements Tracing Process (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {860--863}, doi = {}, year = {2011}, } |
|
Holmes, Reid |
ICSE '11: "Identifying Program, Test, ..."
Identifying Program, Test, and Environmental Changes That Affect Behaviour
Reid Holmes and David Notkin (University of Waterloo, Canada; University of Washington, USA) Developers evolve a software system by changing the program source code, by modifying its context by updating libraries or changing its configuration, and by improving its test suite. Any of these changes can cause differences in program behaviour. In general, program paths may appear or disappear between executions of two subsequent versions of a system. Some of these behavioural differences are expected by a developer; for example, executing new program paths is often precisely what is intended when adding a new test. Other behavioural differences may or may not be expected or benign. For example, changing an XML configuration file may cause a previously-executed path to disappear, which may or may not be expected and could be problematic. Furthermore, the degree to which a behavioural change might be problematic may only become apparent over time as the new behaviour interacts with other changes. We present an approach to identify specific program call dependencies where the programmer’s changes to the program source code, its tests, or its environment are not apparent in the system’s behaviour, or vice versa. Using a static and a dynamic call graph from each of two program versions, we partition dependencies based on their presence in each of the four graphs. Particular partitions contain dependencies that help a programmer develop insights about often subtle behavioural changes. @InProceedings{ICSE11p371, author = {Reid Holmes and David Notkin}, title = {Identifying Program, Test, and Environmental Changes That Affect Behaviour}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {371--380}, doi = {}, year = {2011}, } ICSE '11-DEMOS: "Identifying Opaque Behavioural ..." Identifying Opaque Behavioural Changes Reid Holmes and David Notkin (University of Waterloo, Canada; University of Washington, USA) Developers modify their systems by changing source code, updating test suites, and altering their system’s execution context. When they make these modifications, they have an understanding of the behavioural changes they expect to happen when the system is executed; when the system does not conform to their expectations, developers try to ensure their modification did not introduce some unexpected or undesirable behavioural change. We present an approach that integrates with existing continuous integration systems to help developers identify situations whereby their changes may have introduced unexpected behavioural consequences. In this research demonstration, we show how our approach can help developers identify and investigate unanticipated behavioural changes. @InProceedings{ICSE11p995, author = {Reid Holmes and David Notkin}, title = {Identifying Opaque Behavioural Changes}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {995--997}, doi = {}, year = {2011}, } |
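The four-graph partitioning described in the first entry above is easy to sketch once the graphs are extracted. Assuming each call graph is available as a set of caller-callee pairs, every dependency can be bucketed by a presence bit-vector; the example edges are made up.

    from itertools import product

    def partition(static_old, dyn_old, static_new, dyn_new):
        """Bucket each call dependency by its presence in the four call graphs."""
        graphs = [static_old, dyn_old, static_new, dyn_new]
        buckets = {bits: set() for bits in product((0, 1), repeat=4)}
        for dep in set().union(*graphs):
            bits = tuple(int(dep in g) for g in graphs)
            buckets[bits].add(dep)
        return buckets

    # Hypothetical edges: ("a", "b") means method a calls method b.
    buckets = partition(
        static_old={("a", "b"), ("a", "c")}, dyn_old={("a", "b")},
        static_new={("a", "b"), ("a", "c")}, dyn_new={("a", "c")},
    )
    # (1, 1, 1, 0): statically present in both versions but exercised only before
    # the change: a behavioural difference worth a closer look.
    print(buckets[(1, 1, 1, 0)])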
|
Holt, Richard C. |
ICSE '11-NIER: "Multifractal Aspects of Software ..."
Multifractal Aspects of Software Development (NIER Track)
Abram Hindle, Michael W. Godfrey, and Richard C. Holt (UC Davis, USA; University of Waterloo, Canada) Software development is difficult to model, particularly the noisy, non-stationary signals of changes per time unit extracted from version control systems (VCSs). Currently, researchers use time-series analysis tools such as ARIMA to model these signals extracted from a project's VCS. Unfortunately, current approaches are not well suited to the underlying power-law distributions of this kind of signal. We propose modeling changes per time unit using multifractal analysis. This analysis can be used when a signal exhibits multi-scale self-similarity, as in the case of complex data drawn from power-law distributions. Specifically, we use multifractal analysis to demonstrate that software development is multifractal; that is, the signal is a fractal composed of multiple fractal dimensions along a range of Hurst exponents. Thus we show that software development exhibits multi-scale self-similarity, i.e., that it is multifractal. We also pose questions that we hope multifractal analysis can answer. @InProceedings{ICSE11p968, author = {Abram Hindle and Michael W. Godfrey and Richard C. Holt}, title = {Multifractal Aspects of Software Development (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {968--971}, doi = {}, year = {2011}, } |
|
Hopson, John |
ICSE '11-NIER: "Data Analytics for Game Development ..."
Data Analytics for Game Development (NIER Track)
Kenneth Hullett, Nachiappan Nagappan, Eric Schuh, and John Hopson (UC Santa Cruz, USA; Microsoft Research, USA; Microsoft Game Studios, USA; Bungie Studios, USA) The software engineering community has had seminal papers on data analysis for software productivity, quality, reliability, performance, etc. Analyses have involved software systems ranging from desktop software to telecommunication switching systems. Little work has been done on the emerging digital game industry. In this paper we explore how data can drive game design and production decisions in game development. We define a mixture of qualitative and quantitative data sources, broken down into three broad categories: internal testing, external testing, and subjective evaluations. We present preliminary results of a case study of how data collected from users of a released game can inform subsequent development. @InProceedings{ICSE11p940, author = {Kenneth Hullett and Nachiappan Nagappan and Eric Schuh and John Hopson}, title = {Data Analytics for Game Development (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {940--943}, doi = {}, year = {2011}, } |
|
Hosking, John |
ICSE '11: "Improving Requirements Quality ..."
Improving Requirements Quality using Essential Use Case Interaction Patterns
Massila Kamalrudin, John Hosking, and John Grundy (University of Auckland, New Zealand; Swinburne University of Technology at Hawthorn, Australia) Requirements specifications need to be checked against the 3C’s (Consistency, Completeness, and Correctness) in order to achieve high quality. This is especially difficult when working with both natural language requirements and associated semi-formal modelling representations. We describe a technique and support tool that allow us to perform semi-automated checking of natural-language and semi-formal requirements models, supporting not only consistency management between representations but also correctness and completeness analysis. We use a concept of essential use case interaction patterns to perform the correctness and completeness analysis on the semi-formal representation. We highlight potential inconsistencies, incompleteness, and incorrectness using visual differencing in our support tool. We have evaluated our approach via an end-user study which focused on the tool’s usefulness, ease of use, ease of learning, and user satisfaction, and which provided data for a cognitive-dimensions-of-notations analysis of the tool. @InProceedings{ICSE11p531, author = {Massila Kamalrudin and John Hosking and John Grundy}, title = {Improving Requirements Quality using Essential Use Case Interaction Patterns}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {531--540}, doi = {}, year = {2011}, } ICSE '11-NIER: "A Combination Approach for ..." A Combination Approach for Enhancing Automated Traceability (NIER Track) Xiaofan Chen, John Hosking, and John Grundy (University of Auckland, New Zealand; Swinburne University of Technology at Melbourne, Australia) Tracking a variety of traceability links between artifacts assists software developers in comprehension, efficient development, and effective management of a system. Traceability systems to date, based on various Information Retrieval (IR) techniques, have faced a major open research challenge: how to extract these links with both high precision and high recall. In this paper we describe an experimental approach that combines Regular Expression, Key Phrases, and Clustering with IR techniques to enhance the performance of IR for traceability link recovery between documents and source code. Our preliminary experimental results show that our combination technique improves the performance of IR, increases the precision of retrieved links, and recovers more true links than IR alone. @InProceedings{ICSE11p912, author = {Xiaofan Chen and John Hosking and John Grundy}, title = {A Combination Approach for Enhancing Automated Traceability (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {912--915}, doi = {}, year = {2011}, } |
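A toy version of the combination idea in the second entry above: a baseline TF-IDF/cosine IR score between a requirement and code artifacts, boosted when a regular expression over key phrases matches. The corpus, the boost, and the threshold are invented, and the paper's clustering step is omitted.

    import re
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    requirement = "the user shall reset the account password"
    code_docs = {
        "PasswordResetService": "reset password account token email user",
        "ReportGenerator": "render monthly report chart export pdf",
    }

    # Baseline IR: TF-IDF vectors, cosine similarity of requirement vs. each artifact.
    vec = TfidfVectorizer()
    matrix = vec.fit_transform([requirement] + list(code_docs.values()))
    ir_scores = cosine_similarity(matrix[0], matrix[1:])[0]

    # Regular-expression boost for a shared key phrase (invented weight).
    key_phrase = re.compile(r"reset.*password|password.*reset", re.IGNORECASE)
    for (name, text), ir in zip(code_docs.items(), ir_scores):
        boost = 0.2 if key_phrase.search(text) else 0.0
        print(name, round(min(ir + boost, 1.0), 2))   # PasswordResetService ranks first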
|
Houston, Dan |
ICSE '11-IMPACT: "Impact of Process Simulation ..."
Impact of Process Simulation on Software Practice: An Initial Report
He Zhang, Ross Jeffery, Dan Houston, LiGuo Huang, and Liming Zhu (NICTA, Australia; University of New South Wales, Australia; The Aerospace Corporation, USA; Southern Methodist University, USA) Process simulation has become a powerful technology in support of software project management and process improvement over the past decades. This research, inspired by the Impact Project, investigates the technology transfer of software process simulation into industrial settings and further identifies the best practices for realizing its full potential in software practice. We collected the reported applications of process simulation in the software industry and identified its wide adoption in organizations delivering various software-intensive systems. This paper, an initial report of the research, gives a brief historical perspective on the impact upon practice based on the documented evidence, and elaborates the research-practice transition by examining one detailed case study. It is shown that research has a significant impact on practice in this area. The analysis of the impact trace also reveals that the success of software process simulation in practice relies heavily on its association with other software process techniques or practices and on close collaboration between researchers and practitioners. @InProceedings{ICSE11p1046, author = {He Zhang and Ross Jeffery and Dan Houston and LiGuo Huang and Liming Zhu}, title = {Impact of Process Simulation on Software Practice: An Initial Report}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1046--1056}, doi = {}, year = {2011}, } |
|
Huang, LiGuo |
ICSE '11-IMPACT: "Impact of Process Simulation ..."
Impact of Process Simulation on Software Practice: An Initial Report
He Zhang, Ross Jeffery, Dan Houston, LiGuo Huang, and Liming Zhu (NICTA, Australia; University of New South Wales, Australia; The Aerospace Corporation, USA; Southern Methodist University, USA) Process simulation has become a powerful technology in support of software project management and process improvement over the past decades. This research, inspired by the Impact Project, investigates the technology transfer of software process simulation into industrial settings and further identifies the best practices for realizing its full potential in software practice. We collected the reported applications of process simulation in the software industry and identified its wide adoption in organizations delivering various software-intensive systems. This paper, an initial report of the research, gives a brief historical perspective on the impact upon practice based on the documented evidence, and elaborates the research-practice transition by examining one detailed case study. It is shown that research has a significant impact on practice in this area. The analysis of the impact trace also reveals that the success of software process simulation in practice relies heavily on its association with other software process techniques or practices and on close collaboration between researchers and practitioners. @InProceedings{ICSE11p1046, author = {He Zhang and Ross Jeffery and Dan Houston and LiGuo Huang and Liming Zhu}, title = {Impact of Process Simulation on Software Practice: An Initial Report}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1046--1056}, doi = {}, year = {2011}, } ICSE '11-SEIP: "Experiences with Text Mining ..." Experiences with Text Mining Large Collections of Unstructured Systems Development Artifacts at JPL Daniel Port, Allen Nikora, Jairus Hihn, and LiGuo Huang (University of Hawaii, USA; Jet Propulsion Laboratory, USA; Southern Methodist University, USA) Repositories of systems engineering artifacts at NASA’s Jet Propulsion Laboratory (JPL) are often so large and poorly structured that they have outgrown our capability to manually process their contents and extract useful information. Sophisticated text mining methods and tools seem a quick, low-effort approach to automating our limited manual efforts. Our experiences exploring such methods in three main areas at JPL (historical risk analysis, defect identification based on requirements analysis, and over-time analysis of system anomalies) have shown that obtaining useful results requires substantial unanticipated effort, from preprocessing the data to transforming the output for practical applications. We have not observed any quick “wins” or realized benefit from short-term effort avoidance through automation in this area. Surprisingly, we have realized a number of unexpected long-term benefits from the process of applying text mining to our repositories. This paper elaborates some of these benefits and the important lessons we learned while preparing and applying text mining to large unstructured system artifacts at JPL, aiming to benefit future text mining (TM) applications in similar problem domains and, we hope, broader areas of application. @InProceedings{ICSE11p701, author = {Daniel Port and Allen Nikora and Jairus Hihn and LiGuo Huang}, title = {Experiences with Text Mining Large Collections of Unstructured Systems Development Artifacts at JPL}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {701--710}, doi = {}, year = {2011}, } |
|
Hughes, Christopher J. |
ICSE '11: "LIME: A Framework for Debugging ..."
LIME: A Framework for Debugging Load Imbalance in Multi-threaded Execution
Jungju Oh, Christopher J. Hughes, Guru Venkataramani, and Milos Prvulovic (Georgia Institute of Technology, USA; Intel Corporation, USA; George Washington University, USA) With the ubiquity of multi-core processors, software must make effective use of multiple cores to obtain good performance on modern hardware. One of the biggest roadblocks to this is load imbalance, or the uneven distribution of work across cores. We propose LIME, a framework for analyzing parallel programs and reporting the cause of load imbalance in application source code. This framework uses statistical techniques to pinpoint load imbalance problems stemming from both control flow issues (e.g., unequal iteration counts) and interactions between the application and hardware (e.g., unequal cache miss counts). We evaluate LIME on applications from widely used parallel benchmark suites, and show that LIME accurately reports the causes of load imbalance, their nature and origin in the code, and their relative importance. @InProceedings{ICSE11p201, author = {Jungju Oh and Christopher J. Hughes and Guru Venkataramani and Milos Prvulovic}, title = {LIME: A Framework for Debugging Load Imbalance in Multi-threaded Execution}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {201--210}, doi = {}, year = {2011}, } |
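A hedged illustration of the arithmetic underlying load-imbalance reports such as LIME's: the common (max minus mean, over mean) metric applied to made-up per-thread cycle counts. LIME's statistical attribution of imbalance to source lines goes well beyond this.

    def imbalance(work_per_thread):
        """Classic load-imbalance percentage: time wasted waiting on the slowest thread."""
        mean = sum(work_per_thread) / len(work_per_thread)
        return 100.0 * (max(work_per_thread) - mean) / mean

    # Hypothetical cycles spent in one parallel region by four threads.
    cycles = [9.1e9, 9.0e9, 9.2e9, 14.8e9]
    print(f"{imbalance(cycles):.1f}% imbalance")   # thread 3 is the straggler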
|
Hullett, Kenneth |
ICSE '11-NIER: "Data Analytics for Game Development ..."
Data Analytics for Game Development (NIER Track)
Kenneth Hullett, Nachiappan Nagappan, Eric Schuh, and John Hopson (UC Santa Cruz, USA; Microsoft Research, USA; Microsoft Game Studios, USA; Bungie Studios, USA) The software engineering community has had seminal papers on data analysis for software productivity, quality, reliability, performance, etc. Analyses have involved software systems ranging from desktop software to telecommunication switching systems. Little work has been done on the emerging digital game industry. In this paper we explore how data can drive game design and production decisions in game development. We define a mixture of qualitative and quantitative data sources, broken down into three broad categories: internal testing, external testing, and subjective evaluations. We present preliminary results of a case study of how data collected from users of a released game can inform subsequent development. @InProceedings{ICSE11p940, author = {Kenneth Hullett and Nachiappan Nagappan and Eric Schuh and John Hopson}, title = {Data Analytics for Game Development (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {940--943}, doi = {}, year = {2011}, } |
|
Hummel, Oliver |
ICSE '11-NIER: "Search-Enhanced Testing (NIER ..."
Search-Enhanced Testing (NIER Track)
Colin Atkinson, Oliver Hummel, and Werner Janjic (University of Mannheim, Germany) The prime obstacle to automated defect testing has always been the generation of “correct” results against which to judge the behavior of the system under test – the “oracle problem”. So-called “back-to-back” testing techniques, which exploit the availability of multiple versions of a system to solve the oracle problem, have mainly been restricted to very special, safety-critical domains such as military and space applications, since it is so expensive to manually develop the additional versions. However, a new generation of software search engines that can find multiple copies of software components at virtually zero cost promises to change this situation. They make it economically feasible to use the knowledge locked in reusable software components to dramatically improve the efficiency of the software testing process. In this paper we outline the basic ingredients of such an approach. @InProceedings{ICSE11p880, author = {Colin Atkinson and Oliver Hummel and Werner Janjic}, title = {Search-Enhanced Testing (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {880--883}, doi = {}, year = {2011}, } |
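A minimal back-to-back harness in the spirit of the entry above, assuming code search has already produced several candidate implementations of one interface; the candidates here are invented stand-ins, and a majority vote plays the oracle's role.

    import random
    from collections import Counter

    # Stand-ins for independently developed "median" components found by search.
    def median_a(xs):
        s, n = sorted(xs), len(xs)
        return (s[n // 2 - 1] + s[n // 2]) / 2 if n % 2 == 0 else s[n // 2]

    def median_b(xs):
        return sorted(xs)[len(xs) // 2]          # subtly wrong for even-length input

    def median_c(xs):
        s, n = sorted(xs), len(xs)
        return (s[(n - 1) // 2] + s[n // 2]) / 2

    candidates = (median_a, median_b, median_c)
    random.seed(1)
    disagreements = Counter()
    for _ in range(200):
        xs = [random.randint(0, 9) for _ in range(random.randint(1, 6))]
        votes = Counter(f(list(xs)) for f in candidates)
        expected, support = votes.most_common(1)[0]
        if support >= 2:                         # the majority output acts as the oracle
            for f in candidates:
                if f(list(xs)) != expected:
                    disagreements[f.__name__] += 1
    print(disagreements)                         # median_b stands out as likely defective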
|
Hundt, Robert |
ICSE '11: "RACEZ: A Lightweight and Non-Invasive ..."
RACEZ: A Lightweight and Non-Invasive Race Detection Tool for Production Applications
Tianwei Sheng, Neil Vachharajani, Stephane Eranian, Robert Hundt, Wenguang Chen, and Weimin Zheng (Tsinghua University, China; Google Inc., USA) Concurrency bugs, particularly data races, are notoriously difficult to debug and are a significant source of unreliability in multithreaded applications. Many tools to catch data races rely on program instrumentation to obtain memory instruction traces. Unfortunately, this instrumentation introduces significant runtime overhead, is extremely invasive, or has a limited domain of applicability, making these tools unsuitable for many production systems. Consequently, these tools are typically used during application testing where many data races go undetected. This paper proposes RACEZ, a novel race detection mechanism which uses a sampled memory trace collected by the hardware performance monitoring unit rather than invasive instrumentation. The approach introduces only a modest overhead making it usable in production environments. We validate RACEZ using two open source server applications and the PARSEC benchmarks. Our experiments show that RACEZ catches a set of known bugs with reasonable probability while introducing only 2.8% runtime slowdown on average. @InProceedings{ICSE11p401, author = {Tianwei Sheng and Neil Vachharajani and Stephane Eranian and Robert Hundt and Wenguang Chen and Weimin Zheng}, title = {RACEZ: A Lightweight and Non-Invasive Race Detection Tool for Production Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {401--410}, doi = {}, year = {2011}, } |
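Not RACEZ itself, but a sketch of the kind of offline check one can run over a sampled access trace: two accesses to the same address from different threads, at least one a write, with no lock in common, are flagged. The trace and its fields are invented; RACEZ's PMU sampling and statistical filtering are not modeled here.

    from collections import namedtuple
    from itertools import combinations

    Access = namedtuple("Access", "thread addr is_write locks")

    # Hypothetical sampled accesses: (thread id, address, write?, locks held).
    trace = [
        Access(1, 0xbeef, True,  frozenset({"L1"})),
        Access(2, 0xbeef, True,  frozenset()),        # unsynchronized write
        Access(1, 0xcafe, False, frozenset({"L2"})),
        Access(2, 0xcafe, True,  frozenset({"L2"})),  # common lock: no report
    ]

    for a, b in combinations(trace, 2):
        if (a.addr == b.addr and a.thread != b.thread
                and (a.is_write or b.is_write) and not (a.locks & b.locks)):
            print(f"potential race on {hex(a.addr)}: threads {a.thread} and {b.thread}")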
|
Hutchinson, John |
ICSE '11: "Empirical Assessment of MDE ..."
Empirical Assessment of MDE in Industry
John Hutchinson, Jon Whittle, Mark Rouncefield, and Steinar Kristoffersen (Lancaster University, UK; Østfold University College, Norway; Møreforskning Molde AS, Norway) This paper presents some initial results from a twelve-month empirical research study of model driven engineering (MDE). Using largely qualitative questionnaire and interview methods we investigate and document a range of technical, organizational and social factors that apparently influence organizational responses to MDE: specifically, its perception as a successful or unsuccessful organizational intervention. We then outline a range of lessons learned. Whilst, as with all qualitative research, these lessons should be interpreted with care, they should also be seen as providing a greater understanding of MDE practice in industry, as well as shedding light on the varied, and occasionally surprising, social, technical and organizational factors that affect success and failure. We conclude by suggesting how the next phase of the research will attempt to investigate some of these issues from a different angle and in greater depth. @InProceedings{ICSE11p471, author = {John Hutchinson and Jon Whittle and Mark Rouncefield and Steinar Kristoffersen}, title = {Empirical Assessment of MDE in Industry}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {471--480}, doi = {}, year = {2011}, } ICSE '11-SEIP: "Model-Driven Engineering Practices ..." Model-Driven Engineering Practices in Industry John Hutchinson, Mark Rouncefield, and Jon Whittle (Lancaster University, UK) In this paper, we attempt to address the relative absence of empirical studies of model driven engineering through describing the practices of three commercial organizations as they adopted a model driven engineering approach to their software development. Using in-depth semi-structured interviewing we invited practitioners to reflect on their experiences and selected three to use as exemplars or case studies. In documenting some details of attempts to deploy model driven practices, we identify some ‘lessons learned’, in particular the importance of complex organizational, managerial and social factors – as opposed to simple technical factors – in the relative success, or failure, of the endeavour. As an example of organizational change management the successful deployment of model driven engineering appears to require: a progressive and iterative approach; transparent organizational commitment and motivation; integration with existing organizational processes and a clear business focus. @InProceedings{ICSE11p633, author = {John Hutchinson and Mark Rouncefield and Jon Whittle}, title = {Model-Driven Engineering Practices in Industry}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {633--642}, doi = {}, year = {2011}, } |
|
Ibrahim, Zaid |
ICSE '11-NIER: "Toward Sustainable Software ..."
Toward Sustainable Software Engineering (NIER Track)
Nadine Amsel, Zaid Ibrahim, Amir Malik, and Bill Tomlinson (UC Irvine, USA) Current software engineering practices have significant effects on the environment. Examples include e-waste from computers made obsolete due to software upgrades, and changes in the power demands of new versions of software. Sustainable software engineering aims to create reliable, long-lasting software that meets the needs of users while reducing environmental impacts. We conducted three related research efforts to explore this area. First, we investigated the extent to which users thought about the environmental impact of their software usage. Second, we created a tool called GreenTracker, which measures the energy consumption of software in order to raise awareness about the environmental impact of software usage. Finally, we explored the indirect environmental effects of software in order to understand how software affects sustainability beyond its own power consumption. The relationship between environmental sustainability and software engineering is complex; understanding both direct and indirect effects is critical to helping humans live more sustainably. @InProceedings{ICSE11p976, author = {Nadine Amsel and Zaid Ibrahim and Amir Malik and Bill Tomlinson}, title = {Toward Sustainable Software Engineering (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {976--979}, doi = {}, year = {2011}, } |
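The measurement at the heart of a tool like GreenTracker can be approximated on Linux machines that expose Intel RAPL counters. The sysfs path below is a common but machine-specific assumption, the counter covers the whole package rather than one process, and counter wraparound is ignored, so treat this as an assumption-laden sketch.

    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"   # assumed RAPL counter path

    def read_uj():
        with open(RAPL) as f:
            return int(f.read())

    def measure(fn, *args):
        """Roughly attribute package energy to one call (whole-system counter)."""
        before_e, before_t = read_uj(), time.time()
        fn(*args)
        joules = (read_uj() - before_e) / 1e6   # counter reports microjoules
        return joules, time.time() - before_t

    joules, secs = measure(sum, range(50_000_000))
    print(f"{joules:.2f} J over {secs:.2f} s (~{joules / secs:.1f} W)")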
|
Inoue, Katsuro |
ICSE '11-WORKSHOPS: "Fifth International Workshop ..."
Fifth International Workshop on Software Clones (IWSC 2011)
James R. Cordy, Katsuro Inoue, Stanislaw Jarzabek, and Rainer Koschke (Queen's University, Canada; Osaka University, Japan; National University of Singapore, Singapore; University of Bremen, Germany) Software clones are identical or similar pieces of code, design or other artifacts. Clones are known to be closely related to various issues in software engineering, such as software quality, complexity, architecture, refactoring, evolution, licensing, plagiarism, and so on. Various characteristics of software systems can be uncovered through clone analysis, and system restructuring can be performed by merging clones. The goals of this workshop are to bring together researchers and practitioners from around the world to evaluate the current state of research and applications, discuss common problems, discover new opportunities for collaboration, exchange ideas, envision new areas of research and applications, and explore synergies with similarity analysis in other areas and disciplines. @InProceedings{ICSE11p1210, author = {James R. Cordy and Katsuro Inoue and Stanislaw Jarzabek and Rainer Koschke}, title = {Fifth International Workshop on Software Clones (IWSC 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1210--1211}, doi = {}, year = {2011}, } |
|
Issarny, Valerie |
ICSE '11-NIER: "Mining Service Abstractions ..."
Mining Service Abstractions (NIER Track)
Dionysis Athanasopoulos, Apostolos V. Zarras, Panos Vassiliadis, and Valerie Issarny (University of Ioannina, Greece; INRIA-Paris, France) Several lines of research rely on the concept of service abstractions to enable the organization, the composition and the adaptation of services. However, what is still missing is a systematic approach for extracting service abstractions from the vast number of services available all over the Web. To deal with this issue, we propose an approach for mining service abstractions, based on an agglomerative clustering algorithm. Our experimental findings suggest that the approach is promising and can serve as a basis for future research. @InProceedings{ICSE11p944, author = {Dionysis Athanasopoulos and Apostolos V. Zarras and Panos Vassiliadis and Valerie Issarny}, title = {Mining Service Abstractions (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {944--947}, doi = {}, year = {2011}, } |
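A compact stand-in for the mining step above: average-linkage agglomerative clustering over pairwise service distances, via SciPy. The services, the Jaccard distance over operation names, and the cut threshold are placeholders for whatever interface-level features the approach actually uses.

    from scipy.cluster.hierarchy import linkage, fcluster

    services = {
        "WeatherByCity": {"getTemperature", "getForecast"},
        "CityForecast":  {"getForecast", "getHumidity"},
        "StockTicker":   {"getQuote", "getHistory"},
    }
    names = list(services)

    def jaccard_dist(a, b):
        ops_a, ops_b = services[a], services[b]
        return 1.0 - len(ops_a & ops_b) / len(ops_a | ops_b)

    # Condensed distance matrix in the pair order SciPy expects.
    dists = [jaccard_dist(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    labels = fcluster(linkage(dists, method="average"), t=0.8, criterion="distance")
    for name, label in zip(names, labels):
        print(label, name)   # the two weather-ish services end up in one cluster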
|
Ivers, James |
ICSE '11-SEIP: "Architecture Evaluation without ..."
Architecture Evaluation without an Architecture: Experience with the Smart Grid
Rick Kazman, Len Bass, James Ivers, and Gabriel A. Moreno (SEI/CMU, USA; University of Hawaii, USA) This paper describes an analysis of some of the challenges facing one portion of the Smart Grid in the United States—residential Demand Response (DR) systems. The purposes of this paper are twofold: 1) to discover risks to residential DR systems and 2) to illustrate an architecture-based analysis approach to uncovering risks that span a collection of technical and social concerns. The results presented here are specific to residential DR but the approach is general and it could be applied to other systems within the Smart Grid and other critical infrastructure domains. Our architecture-based analysis is different from most other approaches to analyzing complex systems in that it addresses multiple quality attributes simultaneously (e.g., performance, reliability, security, modifiability, usability, etc.) and it considers the architecture of a complex system from a socio-technical perspective where the actions of the people in the system are as important, from an analysis perspective, as the physical and computational elements of the system. This analysis can be done early in a system’s lifetime, before substantial resources have been committed to its construction or procurement, and so it provides extremely cost-effective risk analysis. @InProceedings{ICSE11p663, author = {Rick Kazman and Len Bass and James Ivers and Gabriel A. Moreno}, title = {Architecture Evaluation without an Architecture: Experience with the Smart Grid}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {663--670}, doi = {}, year = {2011}, } |
|
Jackson, Daniel |
ICSE '11: "A Lightweight Code Analysis ..."
A Lightweight Code Analysis and its Role in Evaluation of a Dependability Case
Joseph P. Near, Aleksandar Milicevic, Eunsuk Kang, and Daniel Jackson (Massachusetts Institute of Technology, USA) A dependability case is an explicit, end-to-end argument, based on concrete evidence, that a system satisfies a critical property. We report on a case study constructing a dependability case for the control software of a medical device. The key novelty of our approach is a lightweight code analysis that generates a list of side conditions that correspond to assumptions to be discharged about the code and the environment in which it executes. This represents an unconventional trade-off between, at one extreme, more ambitious analyses that attempt to discharge all conditions automatically (but which cannot even in principle handle environmental assumptions), and at the other, flow- or context-insensitive analyses that require more user involvement. The results of the analysis suggested a variety of ways in which the dependability of the system might be improved. @InProceedings{ICSE11p31, author = {Joseph P. Near and Aleksandar Milicevic and Eunsuk Kang and Daniel Jackson}, title = {A Lightweight Code Analysis and its Role in Evaluation of a Dependability Case}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {31--40}, doi = {}, year = {2011}, } ICSE '11: "Unifying Execution of Imperative ..." Unifying Execution of Imperative and Declarative Code Aleksandar Milicevic, Derek Rayside, Kuat Yessenov, and Daniel Jackson (Massachusetts Institute of Technology, USA) We present a unified environment for running declarative specifications in the context of an imperative object-oriented programming language. Specifications are Alloy-like, written in first-order relational logic with transitive closure, and the imperative language is Java. By being able to mix imperative code with executable declarative specifications, the user can easily express constraint problems in place, i.e., in terms of the existing data structures and objects on the heap. After a solution is found, the heap is updated to reflect the solution, so the user can continue to manipulate the program heap in the usual imperative way. We show that this approach is not only convenient but, for certain problems, can also outperform a standard imperative implementation. We also present an optimization technique that allowed us to run our tool on heaps with almost 2000 objects. @InProceedings{ICSE11p511, author = {Aleksandar Milicevic and Derek Rayside and Kuat Yessenov and Daniel Jackson}, title = {Unifying Execution of Imperative and Declarative Code}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {511--520}, doi = {}, year = {2011}, } |
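The programming model in the second entry above (imperative code that hands a declarative constraint to a solver, then resumes on the solved heap) can be approximated in Python with the z3-solver package; this is an analogy under stated assumptions, not the authors' Java/Alloy system.

    from z3 import Ints, Solver, sat

    class Task:
        def __init__(self, name):
            self.name, self.start = name, None   # 'start' to be filled declaratively

    a, b = Task("a"), Task("b")

    # Declarative part: constraints stated over fields of the existing heap objects.
    sa, sb = Ints("sa sb")
    s = Solver()
    s.add(sa >= 0, sb >= 0, sb >= sa + 3, sb <= 10)   # b starts >= 3 ticks after a

    if s.check() == sat:
        m = s.model()
        a.start, b.start = m[sa].as_long(), m[sb].as_long()   # write solution to the heap

    # Back to ordinary imperative code over the solved objects.
    print(a.name, a.start, b.name, b.start)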
|
Jagannath, Vilas |
ICSE '11-DEMOS: "ReAssert: A Tool for Repairing ..."
ReAssert: A Tool for Repairing Broken Unit Tests
Brett Daniel, Danny Dig, Tihomir Gvero, Vilas Jagannath, Johnston Jiaa, Damion Mitchell, Jurand Nogiec, Shin Hwei Tan, and Darko Marinov (University of Illinois at Urbana-Champaign, USA; EPFL, Switzerland) Successful software systems continuously change their requirements and thus code. When this happens, some existing tests get broken because they no longer reflect the intended behavior, and thus they need to be updated. Repairing broken tests can be time-consuming and difficult. We present ReAssert, a tool that can automatically suggest repairs for broken unit tests. Examples include replacing literal values in tests, changing assertion methods, or replacing one assertion with several. Our experiments show that ReAssert can repair many common test failures and that its suggested repairs match developers’ expectations. @InProceedings{ICSE11p1010, author = {Brett Daniel and Danny Dig and Tihomir Gvero and Vilas Jagannath and Johnston Jiaa and Damion Mitchell and Jurand Nogiec and Shin Hwei Tan and Darko Marinov}, title = {ReAssert: A Tool for Repairing Broken Unit Tests}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1010--1012}, doi = {}, year = {2011}, } |
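A before/after illustration of the simplest repair the ReAssert entry mentions, recast in Python's unittest idiom rather than JUnit: a stale literal is replaced with the value the code now produces. The names are invented, and ReAssert suggests such edits for developer review rather than applying them blindly.

    import unittest

    def tax(amount):      # behaviour changed in the new release: rate 0.08 -> 0.10
        return round(amount * 0.10, 2)

    class TaxTest(unittest.TestCase):
        def test_tax_old(self):
            self.assertEqual(tax(100), 8.00)    # broken: expectation predates the change

        def test_tax_repaired(self):
            # Suggested repair: the literal is updated to the currently observed value.
            self.assertEqual(tax(100), 10.00)

    if __name__ == "__main__":
        unittest.main()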
|
Janjic, Werner |
ICSE '11-NIER: "Search-Enhanced Testing (NIER ..."
Search-Enhanced Testing (NIER Track)
Colin Atkinson, Oliver Hummel, and Werner Janjic (University of Mannheim, Germany) The prime obstacle to automated defect testing has always been the generation of “correct” results against which to judge the behavior of the system under test – the “oracle problem”. So-called “back-to-back” testing techniques, which exploit the availability of multiple versions of a system to solve the oracle problem, have mainly been restricted to very special, safety-critical domains such as military and space applications, since it is so expensive to manually develop the additional versions. However, a new generation of software search engines that can find multiple copies of software components at virtually zero cost promises to change this situation. They make it economically feasible to use the knowledge locked in reusable software components to dramatically improve the efficiency of the software testing process. In this paper we outline the basic ingredients of such an approach. @InProceedings{ICSE11p880, author = {Colin Atkinson and Oliver Hummel and Werner Janjic}, title = {Search-Enhanced Testing (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {880--883}, doi = {}, year = {2011}, } |
|
Jarzabek, Stanislaw |
ICSE '11-NIER: "Flexible Generators for Software ..."
Flexible Generators for Software Reuse and Evolution (NIER Track)
Stanislaw Jarzabek and Ha Duy Trung (National University of Singapore, Singapore) Developers tend to use models and generators during initial development, but often abandon them later in software evolution and reuse. One reason is that code generated from models (e.g., UML) is often manually modified, and the changes cannot easily be propagated back to the models. Once models become out of sync with code, any future re-generation of code overrides manual modifications. We propose a flexible generator solution that alleviates the above problem. The idea is to let developers weave arbitrary manual modifications into the generation process, rather than modify already generated code. A flexible generator stores specifications of manual modifications in executable form, so that weaving can be automatically redone any time code is regenerated from modified models. In that way, models and manual modifications can evolve independently but in sync with each other, and the generated code never gets directly changed. As a proof of concept, we have built a flexible generator prototype by combining a conventional generation system with a variability technique for handling manual modifications. We believe a flexible generator approach alleviates an important problem that hinders widespread adoption of MDD in software practice. @InProceedings{ICSE11p920, author = {Stanislaw Jarzabek and Ha Duy Trung}, title = {Flexible Generators for Software Reuse and Evolution (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {920--923}, doi = {}, year = {2011}, } ICSE '11-WORKSHOPS: "Fifth International Workshop ..." Fifth International Workshop on Software Clones (IWSC 2011) James R. Cordy, Katsuro Inoue, Stanislaw Jarzabek, and Rainer Koschke (Queen's University, Canada; Osaka University, Japan; National University of Singapore, Singapore; University of Bremen, Germany) Software clones are identical or similar pieces of code, design or other artifacts. Clones are known to be closely related to various issues in software engineering, such as software quality, complexity, architecture, refactoring, evolution, licensing, plagiarism, and so on. Various characteristics of software systems can be uncovered through clone analysis, and system restructuring can be performed by merging clones. The goals of this workshop are to bring together researchers and practitioners from around the world to evaluate the current state of research and applications, discuss common problems, discover new opportunities for collaboration, exchange ideas, envision new areas of research and applications, and explore synergies with similarity analysis in other areas and disciplines. @InProceedings{ICSE11p1210, author = {James R. Cordy and Katsuro Inoue and Stanislaw Jarzabek and Rainer Koschke}, title = {Fifth International Workshop on Software Clones (IWSC 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1210--1211}, doi = {}, year = {2011}, } |
|
Jeanneret, Cédric |
ICSE '11: "Estimating Footprints of Model ..."
Estimating Footprints of Model Operations
Cédric Jeanneret, Martin Glinz, and Benoit Baudry (University of Zurich, Switzerland; IRISA, France) When performed on a model, a set of operations (e.g., queries or model transformations) rarely uses all the information present in the model. Unintended underuse of a model can indicate various problems: the model may contain more detail than necessary or the operations may be immature or erroneous. Analyzing the footprints of the operations — i.e., the part of a model actually used by an operation — is a simple technique to diagnose and analyze such problems. However, precisely calculating the footprint of an operation is expensive, because it requires analyzing the operation’s execution trace. In this paper, we present an automated technique to estimate the footprint of an operation without executing it. We evaluate our approach by applying it to 75 models and five operations. Our technique provides software engineers with an efficient, yet precise, evaluation of the usage of their models. @InProceedings{ICSE11p601, author = {Cédric Jeanneret and Martin Glinz and Benoit Baudry}, title = {Estimating Footprints of Model Operations}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {601--610}, doi = {}, year = {2011}, } |
|
Jeffery, Ross |
ICSE '11-IMPACT: "Impact of Process Simulation ..."
Impact of Process Simulation on Software Practice: An Initial Report
He Zhang, Ross Jeffery, Dan Houston, LiGuo Huang, and Liming Zhu (NICTA, Australia; University of New South Wales, Australia; The Aerospace Corporation, USA; Southern Methodist University, USA) Process simulation has become a powerful technology in support of software project management and process improvement over the past decades. This research, inspired by the Impact Project, investigates the technology transfer of software process simulation into industrial settings and further identifies the best practices for realizing its full potential in software practice. We collected the reported applications of process simulation in the software industry and identified its wide adoption in organizations delivering various software-intensive systems. This paper, an initial report of the research, gives a brief historical perspective on the impact upon practice based on the documented evidence, and elaborates the research-practice transition by examining one detailed case study. It is shown that research has a significant impact on practice in this area. The analysis of the impact trace also reveals that the success of software process simulation in practice relies heavily on its association with other software process techniques or practices and on close collaboration between researchers and practitioners. @InProceedings{ICSE11p1046, author = {He Zhang and Ross Jeffery and Dan Houston and LiGuo Huang and Liming Zhu}, title = {Impact of Process Simulation on Software Practice: An Initial Report}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1046--1056}, doi = {}, year = {2011}, } |
|
Jensen, Simon Holm |
ICSE '11: "A Framework for Automated ..."
A Framework for Automated Testing of JavaScript Web Applications
Shay Artzi, Julian Dolby, Simon Holm Jensen, Anders Møller, and Frank Tip (IBM Research, USA; Aarhus University, Denmark) Current practice in testing JavaScript web applications requires manual construction of test cases, which is difficult and tedious. We present a framework for feedback-directed automated test generation for JavaScript in which execution is monitored to collect information that directs the test generator towards inputs that yield increased coverage. We implemented several instantiations of the framework, corresponding to variations on feedback-directed random testing, in a tool called Artemis. Experiments on a suite of JavaScript applications demonstrate that a simple instantiation of the framework that uses event handler registrations as feedback information produces surprisingly good coverage if enough tests are generated. By also using coverage information and read-write sets as feedback information, a slightly better level of coverage can be achieved, and sometimes with many fewer tests. The generated tests can be used for detecting HTML validity problems and other programming errors. @InProceedings{ICSE11p571, author = {Shay Artzi and Julian Dolby and Simon Holm Jensen and Anders Møller and Frank Tip}, title = {A Framework for Automated Testing of JavaScript Web Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {571--580}, doi = {}, year = {2011}, } |
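The control structure of a feedback-directed generation loop like the one described above, reduced to a sketch: execute a candidate event sequence, observe coverage, and extend only the sequences that discovered something new. The execute function standing in for a browser/JavaScript harness is hypothetical.

    import random
    random.seed(0)

    EVENTS = ["click:#save", "click:#load", "input:#name", "submit:#form"]

    def execute(sequence):
        """Hypothetical harness: runs the events, returns the set of covered branches."""
        covered = {e.split(":")[0] for e in sequence}
        if "input:#name" in sequence and "submit:#form" in sequence:
            covered.add("validated-submit")       # a branch needing a 2-event combo
        return covered

    covered_total, worklist = set(), [[e] for e in EVENTS]
    while worklist:
        seq = worklist.pop(random.randrange(len(worklist)))
        new = execute(seq) - covered_total
        covered_total |= new
        if new and len(seq) < 4:                  # feedback: extend fruitful sequences
            worklist += [seq + [e] for e in EVENTS]
    print(len(covered_total), "branches covered")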
|
Jeon, Sung-eok |
ICSE '11-SEIP: "Characterizing the Differences ..."
Characterizing the Differences Between Pre- and Post- Release Versions of Software
Paul Luo Li, Ryan Kivett, Zhiyuan Zhan, Sung-eok Jeon, Nachiappan Nagappan, Brendan Murphy, and Andrew J. Ko (Microsoft Inc., USA; University of Washington, USA; Microsoft Research, USA) Many software producers utilize beta programs to predict post-release quality and to ensure that their products meet quality expectations of users. Prior work indicates that software producers need to adjust predictions to account for usage environment and usage scenario differences between beta populations and post-release populations. However, little is known about how usage characteristics relate to field quality and how usage characteristics differ between beta and post-release. In this study, we examine application crash, application hang, system crash, and usage information from millions of Windows® users to 1) examine the effects of usage characteristics differences on field quality (e.g. which usage characteristics impact quality), 2) examine usage characteristics differences between beta and post-release (e.g. do impactful usage characteristics differ), and 3) report experiences adjusting field quality predictions for Windows. Among the 18 usage characteristics that we examined, the five most important were: the number of applications executed, whether the machine was pre-installed by the original equipment manufacturer, two sub-populations (two language/geographic locales), and whether Windows was 64-bit (rather than 32-bit). We found each of these usage characteristics to differ between beta and post-release, and by adjusting for the differences, the accuracy of field quality predictions for Windows improved by ~59%. @InProceedings{ICSE11p716, author = {Paul Luo Li and Ryan Kivett and Zhiyuan Zhan and Sung-eok Jeon and Nachiappan Nagappan and Brendan Murphy and Andrew J. Ko}, title = {Characterizing the Differences Between Pre- and Post- Release Versions of Software}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {716--725}, doi = {}, year = {2011}, } |
|
Jhi, Yoon-Chan |
ICSE '11-SEIP: "Value-Based Program Characterization ..."
Value-Based Program Characterization and Its Application to Software Plagiarism Detection
Yoon-Chan Jhi, Xinran Wang, Xiaoqi Jia, Sencun Zhu, Peng Liu, and Dinghao Wu (Pennsylvania State University, USA; Chinese Academy of Sciences, China) Identifying similar or identical code fragments becomes much more challenging in code theft cases where plagiarizers can use various automated code transformation techniques to hide stolen code from being detected. Previous works in this field are largely limited in that (1) most of them cannot handle advanced obfuscation techniques; (2) the methods based on source code analysis are less practical since the source code of suspicious programs is typically not available until strong evidence is collected; and (3) those depending on the features of specific operating systems or programming languages have limited applicability. Based on the observation that some critical runtime values are hard to replace or eliminate with semantics-preserving transformation techniques, we introduce a novel approach to dynamic characterization of executable programs. Leveraging such invariant values, our technique is resilient to various control and data obfuscation techniques. We show how the values can be extracted and refined to expose the critical values and how we can apply this runtime property to help solve problems in software plagiarism detection. We have implemented a prototype with a dynamic taint analyzer atop a generic processor emulator. Our experimental results show that the value-based method successfully discriminates 34 plagiarisms obfuscated by SandMark, plagiarisms heavily obfuscated by KlassMaster, programs obfuscated by Thicket, and executables obfuscated by Loco/Diablo. @InProceedings{ICSE11p756, author = {Yoon-Chan Jhi and Xinran Wang and Xiaoqi Jia and Sencun Zhu and Peng Liu and Dinghao Wu}, title = {Value-Based Program Characterization and Its Application to Software Plagiarism Detection}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {756--765}, doi = {}, year = {2011}, } |
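A toy rendering of the entry's core idea: if critical runtime values survive semantics-preserving transformation, the value sequences of a program and its obfuscated copy remain similar. The traces below are fabricated, and the real system refines taint-derived traces before comparing.

    def lcs_len(a, b):
        """Length of the longest common subsequence of two value traces."""
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
        return dp[-1][-1]

    def similarity(a, b):
        return lcs_len(a, b) / max(len(a), len(b))

    original   = [17, 305, 42, 99, 7, 305]       # core values observed in the original run
    plagiarism = [17, 305, 42, 1, 99, 7, 305]    # obfuscation inserts noise values
    unrelated  = [8, 64, 512, 9, 3]
    print(similarity(original, plagiarism))      # high: suspicious
    print(similarity(original, unrelated))       # low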
|
Jia, Xiaoqi |
ICSE '11-SEIP: "Value-Based Program Characterization ..."
Value-Based Program Characterization and Its Application to Software Plagiarism Detection
Yoon-Chan Jhi, Xinran Wang, Xiaoqi Jia, Sencun Zhu, Peng Liu, and Dinghao Wu (Pennsylvania State University, USA; Chinese Academy of Sciences, China) Identifying similar or identical code fragments becomes much more challenging in code theft cases where plagiarizers can use various automated code transformation techniques to hide stolen code from being detected. Previous works in this field are largely limited in that (1) most of them cannot handle advanced obfuscation techniques; (2) the methods based on source code analysis are less practical since the source code of suspicious programs is typically not available until strong evidence is collected; and (3) those depending on the features of specific operating systems or programming languages have limited applicability. Based on the observation that some critical runtime values are hard to replace or eliminate with semantics-preserving transformation techniques, we introduce a novel approach to dynamic characterization of executable programs. Leveraging such invariant values, our technique is resilient to various control and data obfuscation techniques. We show how the values can be extracted and refined to expose the critical values and how we can apply this runtime property to help solve problems in software plagiarism detection. We have implemented a prototype with a dynamic taint analyzer atop a generic processor emulator. Our experimental results show that the value-based method successfully discriminates 34 plagiarisms obfuscated by SandMark, plagiarisms heavily obfuscated by KlassMaster, programs obfuscated by Thicket, and executables obfuscated by Loco/Diablo. @InProceedings{ICSE11p756, author = {Yoon-Chan Jhi and Xinran Wang and Xiaoqi Jia and Sencun Zhu and Peng Liu and Dinghao Wu}, title = {Value-Based Program Characterization and Its Application to Software Plagiarism Detection}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {756--765}, doi = {}, year = {2011}, } |
|
Jiaa, Johnston |
ICSE '11-DEMOS: "ReAssert: A Tool for Repairing ..."
ReAssert: A Tool for Repairing Broken Unit Tests
Brett Daniel, Danny Dig, Tihomir Gvero, Vilas Jagannath, Johnston Jiaa, Damion Mitchell, Jurand Nogiec, Shin Hwei Tan, and Darko Marinov (University of Illinois at Urbana-Champaign, USA; EPFL, Switzerland) Successful software systems continuously change their requirements and thus code. When this happens, some existing tests get broken because they no longer reflect the intended behavior, and thus they need to be updated. Repairing broken tests can be time-consuming and difficult. We present ReAssert, a tool that can automatically suggest repairs for broken unit tests. Examples include replacing literal values in tests, changing assertion methods, or replacing one assertion with several. Our experiments show that ReAssert can repair many common test failures and that its suggested repairs match developers’ expectations. @InProceedings{ICSE11p1010, author = {Brett Daniel and Danny Dig and Tihomir Gvero and Vilas Jagannath and Johnston Jiaa and Damion Mitchell and Jurand Nogiec and Shin Hwei Tan and Darko Marinov}, title = {ReAssert: A Tool for Repairing Broken Unit Tests}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1010--1012}, doi = {}, year = {2011}, } |
|
John, Bonnie E. |
ICSE '11-SEIP: "Deploying CogTool: Integrating ..."
Deploying CogTool: Integrating Quantitative Usability Assessment into Real-World Software Development
Rachel Bellamy, Bonnie E. John, and Sandra Kogan (IBM Research Watson, USA; CMU, USA; IBM Software Group, USA) Usability concerns are often difficult to integrate into real-world software development processes. To remedy this situation, IBM research and development, partnering with Carnegie Mellon University, has begun to employ a repeatable and quantifiable usability analysis method, embodied in CogTool, in its development practice. CogTool analyzes tasks performed on an interactive system from a storyboard and a demonstration of tasks on that storyboard, and predicts the time a skilled user will take to perform those tasks. We discuss how IBM designers and UX professionals used CogTool in their existing practice for contract compliance, communication within a product team and between a product team and its customer, assigning appropriate personnel to fix customer complaints, and quantitatively assessing design ideas before a line of code is written. We then reflect on the lessons learned by both the development organizations and the researchers attempting this technology transfer from academic research to integration into real-world practice, and we point to future research to even better serve the needs of practice. @InProceedings{ICSE11p691, author = {Rachel Bellamy and Bonnie E. John and Sandra Kogan}, title = {Deploying CogTool: Integrating Quantitative Usability Assessment into Real-World Software Development}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {691--700}, doi = {}, year = {2011}, } |
|
Jorgenson, Noah M. |
ICSE '11-DEMOS: "SEREBRO: Facilitating Student ..."
SEREBRO: Facilitating Student Project Team Collaboration
Noah M. Jorgenson, Matthew L. Hale, and Rose F. Gamble (University of Tulsa, USA) In this demonstration, we show SEREBRO, a lightweight courseware developed for student team collaboration in a software engineering class. SEREBRO couples an idea forum with software project management tools to maintain cohesive interaction between team discussion and resulting work products, such as tasking, documentation, and version control. SEREBRO has been used consecutively for two years of software engineering classes. Student input and experiments on student use in these classes have directed SEREBRO to its current functionality. @InProceedings{ICSE11p1019, author = {Noah M. Jorgenson and Matthew L. Hale and Rose F. Gamble}, title = {SEREBRO: Facilitating Student Project Team Collaboration}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1019--1021}, doi = {}, year = {2011}, } |
|
Jung, Yungbum |
ICSE '11: "MeCC: Memory Comparison-based ..."
MeCC: Memory Comparison-based Clone Detector
Heejung Kim, Yungbum Jung, Sunghun Kim, and Kwangkeun Yi (Seoul National University, South Korea; Hong Kong University of Science and Technology, China) In this paper, we propose a new semantic clone detection technique by comparing programs’ abstract memory states, which are computed by a semantic-based static analyzer. Our experimental study using three large-scale open source projects shows that our technique can detect semantic clones that existing syntactic- or semantic-based clone detectors miss. Our technique can help developers identify inconsistent clone changes, find refactoring candidates, and understand software evolution related to semantic clones. @InProceedings{ICSE11p301, author = {Heejung Kim and Yungbum Jung and Sunghun Kim and Kwangkeun Yi}, title = {MeCC: Memory Comparison-based Clone Detector}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {301--310}, doi = {}, year = {2011}, } |
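As a rough illustration of the comparison step only (the names below are hypothetical, and computing the abstract memory states is the static analyzer's job, elided here): each procedure is summarized as a map from abstract locations to symbolic values, and two procedures are reported as clones when the maps largely agree.

    import java.util.HashMap;
    import java.util.Map;

    public class AbstractMemoryCompare {
        // Fraction of matching (location, symbolic value) entries over the
        // union of both summaries; 1.0 means identical abstract memory.
        static double similarity(Map<String, String> m1, Map<String, String> m2) {
            int match = 0;
            for (Map.Entry<String, String> e : m1.entrySet())
                if (e.getValue().equals(m2.get(e.getKey()))) match++;
            int union = m1.size() + m2.size() - match;
            return union == 0 ? 1.0 : (double) match / union;
        }

        public static void main(String[] args) {
            Map<String, String> sum1 = new HashMap<>();
            sum1.put("ret", "a[0]+...+a[n-1]");   // symbolic result of a loop
            sum1.put("i", "n");
            // A syntactically different implementation (say, a while loop)
            // can yield the same abstract state: a semantic clone.
            Map<String, String> sum2 = new HashMap<>(sum1);
            System.out.println(similarity(sum1, sum2)); // 1.0
        }
    }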
|
Jürjens, Jan |
ICSE '11-DEMOS: "Automated Security Hardening ..."
Automated Security Hardening for Evolving UML Models
Jan Jürjens (TU Dortmund, Germany; Fraunhofer ISST, Germany) @InProceedings{ICSE11p986, author = {Jan Jürjens}, title = {Automated Security Hardening for Evolving UML Models}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {986--988}, doi = {}, year = {2011}, } ICSE '11-WORKSHOPS: "Seventh International Workshop ..." Seventh International Workshop on Software Engineering for Secure Systems (SESS 2011) Seok-Won Lee, Mattia Monga, and Jan Jürjens (University of Nebraska-Lincoln, USA; Università degli Studi di Milano, Italy; TU Dortmund, Germany) The 7th edition of the SESS workshop aims at providing a venue for software engineers and security researchers to exchange ideas and techniques. In fact, software is at the core of most business transactions, and its smart integration in an industrial setting may be the competitive advantage even when the core competence is outside the ICT field. As a result, the revenues of a firm depend directly on several complex software-based systems. Thus, stakeholders and users should be able to trust these systems to provide data and elaborations with a degree of confidentiality, integrity, and availability compatible with their needs. Moreover, the pervasiveness of software products in the creation of critical infrastructures has raised the value of trustworthiness, and new efforts should be dedicated to achieving it. However, nowadays almost every application has some kind of security requirement, even if its use is not considered critical. @InProceedings{ICSE11p1200, author = {Seok-Won Lee and Mattia Monga and Jan Jürjens}, title = {Seventh International Workshop on Software Engineering for Secure Systems (SESS 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1200--1201}, doi = {}, year = {2011}, } |
|
Kagdi, Huzefa |
ICSE '11-WORKSHOPS: "Sixth International Workshop ..."
Sixth International Workshop on Traceability in Emerging Forms of Software Engineering (TEFSE 2011)
Denys Poshyvanyk, Massimiliano Di Penta, and Huzefa Kagdi (College of William and Mary, USA; University of Sannio, Italy; Winston-Salem State University, USA) The Sixth International Workshop on Traceability in Emerging Forms of Software Engineering (TEFSE 2011) will bring together researchers and practitioners to examine the challenges of recovering and maintaining traceability for the myriad forms of software engineering artifacts, ranging from user needs to models to source code. The objective of the 6th edition of TEFSE is to build on the work the traceability research community has completed in identifying the open traceability challenges. In particular, it is intended to be a working event focused on discussing the main problems related to software artifact traceability and proposing possible solutions for such problems. Moreover, the workshop also aims at identifying key issues concerning the importance of maintaining traceability information during software development, at further improving the cooperation between academia and industry, and at facilitating technology transfer. @InProceedings{ICSE11p1214, author = {Denys Poshyvanyk and Massimiliano Di Penta and Huzefa Kagdi}, title = {Sixth International Workshop on Traceability in Emerging Forms of Software Engineering (TEFSE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1214--1215}, doi = {}, year = {2011}, } |
|
Kamalrudin, Massila |
ICSE '11: "Improving Requirements Quality ..."
Improving Requirements Quality using Essential Use Case Interaction Patterns
Massila Kamalrudin, John Hosking, and John Grundy (University of Auckland, New Zealand; Swinburne University of Technology at Hawthorn, Australia) Requirements specifications need to be checked against the 3C’s – Consistency, Completeness and Correctness – in order to achieve high quality. This is especially difficult when working with both natural language requirements and associated semi-formal modelling representations. We describe a technique and support tool that allows us to perform semi-automated checking of natural language and semi-formal requirements models, supporting both consistency management between representations and correctness and completeness analysis. We use a concept of essential use case interaction patterns to perform the correctness and completeness analysis on the semi-formal representation. We highlight potential inconsistencies, incompleteness and incorrectness using visual differencing in our support tool. We have evaluated our approach via an end user study which focused on the tool’s usefulness, ease of use, ease of learning and user satisfaction, and provided data for a cognitive dimensions of notations analysis of the tool. @InProceedings{ICSE11p531, author = {Massila Kamalrudin and John Hosking and John Grundy}, title = {Improving Requirements Quality using Essential Use Case Interaction Patterns}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {531--540}, doi = {}, year = {2011}, } |
|
Kamei, Yasutaka |
ICSE '11: "An Empirical Study of Build ..."
An Empirical Study of Build Maintenance Effort
Shane McIntosh, Bram Adams, Thanh H. D. Nguyen, Yasutaka Kamei, and Ahmed E. Hassan (Queen's University, Canada) The build system of a software project is responsible for transforming source code and other development artifacts into executable programs and deliverables. Similar to source code, build system specifications require maintenance to cope with newly implemented features, changes to imported Application Program Interfaces (APIs), and source code restructuring. In this paper, we mine the version histories of one proprietary and nine open source projects of different sizes and domain to analyze the overhead that build maintenance imposes on developers. We split our analysis into two dimensions: (1) Build Coupling, i.e., how frequently source code changes require build changes, and (2) Build Ownership, i.e., the proportion of developers responsible for build maintenance. Our results indicate that, despite the difference in scale, the build system churn rate is comparable to that of the source code, and build changes induce more relative churn on the build system than source code changes induce on the source code. Furthermore, build maintenance yields up to a 27% overhead on source code development and a 44% overhead on test development. Up to 79% of source code developers and 89% of test code developers are significantly impacted by build maintenance, yet investment in build experts can reduce the proportion of impacted developers to 22% of source code developers and 24% of test code developers. @InProceedings{ICSE11p141, author = {Shane McIntosh and Bram Adams and Thanh H. D. Nguyen and Yasutaka Kamei and Ahmed E. Hassan}, title = {An Empirical Study of Build Maintenance Effort}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {141--150}, doi = {}, year = {2011}, } |
|
Kang, Eunsuk |
ICSE '11: "A Lightweight Code Analysis ..."
A Lightweight Code Analysis and its Role in Evaluation of a Dependability Case
Joseph P. Near, Aleksandar Milicevic, Eunsuk Kang, and Daniel Jackson (Massachusetts Institute of Technology, USA) A dependability case is an explicit, end-to-end argument, based on concrete evidence, that a system satisfies a critical property. We report on a case study constructing a dependability case for the control software of a medical device. The key novelty of our approach is a lightweight code analysis that generates a list of side conditions that correspond to assumptions to be discharged about the code and the environment in which it executes. This represents an unconventional trade-off between, at one extreme, more ambitious analyses that attempt to discharge all conditions automatically (but which cannot even in principle handle environmental assumptions), and at the other, flow- or contextinsensitive analyses that require more user involvement. The results of the analysis suggested a variety of ways in which the dependability of the system might be improved. @InProceedings{ICSE11p31, author = {Joseph P. Near and Aleksandar Milicevic and Eunsuk Kang and Daniel Jackson}, title = {A Lightweight Code Analysis and its Role in Evaluation of a Dependability Case}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {31--40}, doi = {}, year = {2011}, } |
|
Karastoyanova, Dimka |
ICSE '11-WORKSHOPS: "Third International Workshop ..."
Third International Workshop on Principles of Engineering Service-Oriented Systems (PESOS 2011)
Manuel Carro, Dimka Karastoyanova, Grace A. Lewis, and Anna Liu (Universidad Politécnica de Madrid, Spain; University of Stuttgart, Germany; CMU, USA; NICTA, Australia) Service-oriented systems have attracted great interest from industry and research communities worldwide. Service integrators, developers, and providers are collaborating to address the various challenges in the field. PESOS 2011 is a forum for all these communities to present and discuss a wide range of topics related to service-oriented systems. The goal of PESOS is to bring together researchers from academia and industry, as well as practitioners working in the areas of software engineering and service-oriented systems to discuss research challenges, recent developments, novel applications, as well as methods, techniques, experiences, and tools to support the engineering of service-oriented systems. @InProceedings{ICSE11p1218, author = {Manuel Carro and Dimka Karastoyanova and Grace A. Lewis and Anna Liu}, title = {Third International Workshop on Principles of Engineering Service-Oriented Systems (PESOS 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1218--1219}, doi = {}, year = {2011}, } |
|
Kästner, Christian |
ICSE '11-DEMOS: "View Infinity: A Zoomable ..."
View Infinity: A Zoomable Interface for Feature-Oriented Software Development
Michael Stengel, Janet Feigenspan, Mathias Frisch, Christian Kästner, Sven Apel, and Raimund Dachselt (University of Magdeburg, Germany; University of Marburg, Germany; University of Passau, Germany) Software product line engineering provides efficient means to develop variable software. To support program comprehension of software product lines (SPLs), we developed View Infinity, a tool that provides seamless and semantic zooming of different abstraction layers of an SPL. First results of a qualitative study with experienced SPL developers are promising and indicate that View Infinity is useful and intuitive to use. @InProceedings{ICSE11p1031, author = {Michael Stengel and Janet Feigenspan and Mathias Frisch and Christian Kästner and Sven Apel and Raimund Dachselt}, title = {View Infinity: A Zoomable Interface for Feature-Oriented Software Development}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1031--1033}, doi = {}, year = {2011}, } ICSE '11-DEMOS: "JavAdaptor: Unrestricted Dynamic ..." JavAdaptor: Unrestricted Dynamic Software Updates for Java Mario Pukall, Alexander Grebhahn, Reimar Schröter, Christian Kästner, Walter Cazzola, and Sebastian Götz (University of Magdeburg, Germany; Philipps-University Marburg, Germany; University of Milano, Italy; University of Dresden, Germany) Dynamic software updates (DSU) are one of the top-most features requested by developers and users. As a result, DSU is already standard in many dynamic programming languages, but it is not standard in statically typed languages such as Java. Even though it ranks third on Oracle’s current request for enhancement (RFE) list, DSU support in Java is very limited. Therefore, over the years many different DSU approaches for Java have been proposed. Nevertheless, DSU for Java is still an active field of research, because most of the existing approaches are too restrictive. Some of the approaches have shortcomings either in terms of flexibility or performance, whereas others are platform dependent or dictate the program’s architecture. With JavAdaptor, we present the first DSU approach that comes without those restrictions. We will demonstrate JavAdaptor based on the well-known arcade game Snake, which we will update stepwise at runtime. @InProceedings{ICSE11p989, author = {Mario Pukall and Alexander Grebhahn and Reimar Schröter and Christian Kästner and Walter Cazzola and Sebastian Götz}, title = {JavAdaptor: Unrestricted Dynamic Software Updates for Java}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {989--991}, doi = {}, year = {2011}, } |
|
Katz, Randy |
ICSE '11: "Static Extraction of Program ..."
Static Extraction of Program Configuration Options
Ariel S. Rabkin and Randy Katz (UC Berkeley, USA) Many programs use a key-value model for configuration options. We examined how this model is used in seven open source Java projects totaling over a million lines of code. We present a static analysis that extracts a list of configuration options for a program. Our analysis finds 95% of the options read by the programs in our sample, making it more complete than existing documentation. Most configuration options we saw fall into a small number of types. A dozen types cover 90% of options. We present a second analysis that exploits this fact, inferring a type for most options. Together, these analyses enable more visibility into program configuration, helping reduce the burden of configuration documentation and configuration debugging. @InProceedings{ICSE11p131, author = {Ariel S. Rabkin and Randy Katz}, title = {Static Extraction of Program Configuration Options}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {131--140}, doi = {}, year = {2011}, } |
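For readers unfamiliar with the key-value model, the sketch below shows the read pattern such an analysis targets, together with a deliberately naive, regex-based stand-in for it. The option names are Hadoop-style examples chosen by us, and the paper's technique is a static dataflow analysis over the program, not a text scan.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class OptionScan {
        // Matches reads such as conf.get("key") or conf.getInt("key", 10).
        static final Pattern READ =
                Pattern.compile("\\.get(?:Int|Boolean|Long)?\\(\\s*\"([^\"]+)\"");

        public static void main(String[] args) {
            String src = "int n = conf.getInt(\"io.sort.factor\", 10);\n"
                       + "String dir = conf.get(\"mapred.local.dir\");";
            Matcher m = READ.matcher(src);
            while (m.find())
                System.out.println("option: " + m.group(1));
            // A real static analysis must also track key strings that flow
            // through constants, fields, and string concatenation, which is
            // precisely what a regex cannot do.
        }
    }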
|
Kawrykow, David |
ICSE '11: "Non-Essential Changes in Version ..."
Non-Essential Changes in Version Histories
David Kawrykow and Martin P. Robillard (McGill University, Canada) Numerous techniques involve mining change data captured in software archives to assist engineering efforts, for example to identify components that tend to evolve together. We observed that important changes to software artifacts are sometimes accompanied by numerous non-essential modifications, such as local variable refactorings, or textual differences induced as part of a rename refactoring. We developed a tool-supported technique for detecting nonessential code differences in the revision histories of software systems. We used our technique to investigate code changes in over 24 000 change sets gathered from the change histories of seven long-lived open-source systems. We found that up to 15.5% of a system’s method updates were due solely to non-essential differences. We also report on numerous observations on the distribution of non-essential differences in change history and their potential impact on change-based analyses. @InProceedings{ICSE11p351, author = {David Kawrykow and Martin P. Robillard}, title = {Non-Essential Changes in Version Histories}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {351--360}, doi = {}, year = {2011}, } |
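One ingredient of such a detector can be sketched as follows; this is our own hypothetical simplification, not the paper's algorithm. Two statement versions differ only non-essentially under a rename when their token streams match once identifiers are canonicalized.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class RenameOnlyDiff {
        // Replace each distinct word-like token with a canonical name in
        // order of first appearance; literals and operators pass through.
        static List<String> canon(String[] tokens) {
            Map<String, String> names = new HashMap<>();
            List<String> out = new ArrayList<>();
            for (String t : tokens) {
                if (t.matches("[A-Za-z_]\\w*"))
                    out.add(names.computeIfAbsent(t, k -> "id" + names.size()));
                else
                    out.add(t);
            }
            return out;
        }

        public static void main(String[] args) {
            String[] before = {"total", "=", "total", "+", "price", ";"};
            String[] after  = {"sum",   "=", "sum",   "+", "price", ";"};
            // true: the diff is a rename refactoring, hence non-essential.
            System.out.println(canon(before).equals(canon(after)));
        }
    }

A production tool would additionally distinguish keywords from identifiers and respect scoping; the point here is only the canonicalize-then-compare idea.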
|
Kazman, Rick |
ICSE '11-SEIP: "Architecture Evaluation without ..."
Architecture Evaluation without an Architecture: Experience with the Smart Grid
Rick Kazman, Len Bass, James Ivers, and Gabriel A. Moreno (SEI/CMU, USA; University of Hawaii, USA) This paper describes an analysis of some of the challenges facing one portion of the Smart Grid in the United States—residential Demand Response (DR) systems. The purposes of this paper are twofold: 1) to discover risks to residential DR systems and 2) to illustrate an architecture-based analysis approach to uncovering risks that span a collection of technical and social concerns. The results presented here are specific to residential DR but the approach is general and it could be applied to other systems within the Smart Grid and other critical infrastructure domains. Our architecture-based analysis is different from most other approaches to analyzing complex systems in that it addresses multiple quality attributes simultaneously (e.g., performance, reliability, security, modifiability, usability, etc.) and it considers the architecture of a complex system from a socio-technical perspective where the actions of the people in the system are as important, from an analysis perspective, as the physical and computational elements of the system. This analysis can be done early in a system’s lifetime, before substantial resources have been committed to its construction or procurement, and so it provides extremely cost-effective risk analysis. @InProceedings{ICSE11p663, author = {Rick Kazman and Len Bass and James Ivers and Gabriel A. Moreno}, title = {Architecture Evaluation without an Architecture: Experience with the Smart Grid}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {663--670}, doi = {}, year = {2011}, } |
|
Kazmin, Nikolay |
ICSE '11: "Inferring Better Contracts ..."
Inferring Better Contracts
Yi Wei, Carlo A. Furia, Nikolay Kazmin, and Bertrand Meyer (ETH Zurich, Switzerland) Considerable progress has been made towards automatic support for one of the principal techniques available to enhance program reliability: equipping programs with extensive contracts. The results of current contract inference tools are still often unsatisfactory in practice, especially for programmers who already apply some kind of basic Design by Contract discipline, since the inferred contracts tend to be simple assertions—the very ones that programmers find easy to write. We present new, completely automatic inference techniques and a supporting tool, which take advantage of the presence of simple programmer-written contracts in the code to infer sophisticated assertions, involving for example implication and universal quantification. Applied to a production library of classes covering standard data structures such as linked lists, arrays, stacks, queues and hash tables, the tool is able, entirely automatically, to infer 75% of the complete contracts—contracts yielding the full formal specification of the classes—with very few redundant or irrelevant clauses. @InProceedings{ICSE11p191, author = {Yi Wei and Carlo A. Furia and Nikolay Kazmin and Bertrand Meyer}, title = {Inferring Better Contracts}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {191--200}, doi = {}, year = {2011}, } |
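The tool targets Eiffel, but the flavor of an inferred contract involving universal quantification can be rendered in Java assertions as a sketch; the class and clauses below are invented for illustration. After push(v), the inferred postcondition says the new top is v, the size grew by one, and every old element is preserved.

    import java.util.ArrayList;
    import java.util.List;

    public class CheckedStack {
        private final List<Integer> items = new ArrayList<>();

        void push(int v) {
            List<Integer> old = new ArrayList<>(items); // snapshot of "old" state
            items.add(v);
            // Programmer-written simple contract:
            assert !items.isEmpty();
            // Inferred, richer contract:
            assert items.get(items.size() - 1) == v;   // top == v
            assert items.size() == old.size() + 1;     // size grew by one
            for (int i = 0; i < old.size(); i++)       // forall i < old.size
                assert items.get(i).equals(old.get(i)); // elements preserved
        }

        public static void main(String[] args) {
            CheckedStack s = new CheckedStack();
            s.push(3);
            s.push(7);  // run with assertions enabled: java -ea CheckedStack
            System.out.println("contracts hold");
        }
    }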
|
Kelly, Diane |
ICSE '11-WORKSHOPS: "Fourth International Workshop ..."
Fourth International Workshop on Software Engineering for Computational Science and Engineering (SE-CSE 2011)
Jeffrey C. Carver, Roscoe Bartlett, Ian Gorton, Lorin Hochstein, Diane Kelly, and Judith Segal (University of Alabama, USA; Sandia National Laboratories, USA; Pacific Northwest National Laboratory, USA; USC-ISI, USA; Royal Military College, Canada; The Open University, UK) Computational Science and Engineering (CSE) software supports a wide variety of domains including nuclear physics, crash simulation, satellite data processing, fluid dynamics, climate modeling, bioinformatics, and vehicle development. The increase in the importance of CSE software motivates the need to identify and understand appropriate software engineering (SE) practices for CSE. Because of the uniqueness of CSE software development, existing SE tools and techniques developed for the business/IT community are often not efficient or effective. Appropriate SE solutions must account for the salient characteristics of the CSE development environment. This situation creates an opportunity for members of the SE community to interact with members of the CSE community to address this need. This workshop facilitates that collaboration by bringing together members of the SE community and the CSE community to share perspectives and present findings from research and practice relevant to CSE software. A significant portion of the workshop is devoted to focused interaction among the participants with the goal of generating a research agenda to improve tools, techniques, and experimental methods for studying CSE software engineering. @InProceedings{ICSE11p1226, author = {Jeffrey C. Carver and Roscoe Bartlett and Ian Gorton and Lorin Hochstein and Diane Kelly and Judith Segal}, title = {Fourth International Workshop on Software Engineering for Computational Science and Engineering (SE-CSE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1226--1227}, doi = {}, year = {2011}, } |
|
Khoo, Siau Cheng |
ICSE '11: "Mining Message Sequence Graphs ..."
Mining Message Sequence Graphs
Sandeep Kumar, Siau Cheng Khoo, Abhik Roychoudhury, and David Lo (National University of Singapore, Singapore; Singapore Management University, Singapore) Dynamic specification mining involves discovering software behavior from traces for the purpose of program comprehension and bug detection. However, mining program behavior from execution traces is difficult for concurrent/distributed programs. Specifically, the inherent partial order relationships among events occurring across processes pose a big challenge to specification mining. In this paper, we propose a framework for mining partial orders so as to understand concurrent program behavior. Our miner takes in a set of concurrent program traces, and produces a message sequence graph (MSG) to represent the concurrent program behavior. An MSG represents a graph where the nodes of the graph are partial orders, represented as Message Sequence Charts. Mining an MSG allows us to understand concurrent program behaviors since the nodes of the MSG depict important “phases” or “interaction snippets” involving several concurrently executing processes. To demonstrate the power of this technique, we conducted experiments on mining behaviors of several fairly complex distributed systems. We show that our miner can produce the corresponding MSGs with both high precision and recall. @InProceedings{ICSE11p91, author = {Sandeep Kumar and Siau Cheng Khoo and Abhik Roychoudhury and David Lo}, title = {Mining Message Sequence Graphs}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {91--100}, doi = {}, year = {2011}, } |
|
Khurshid, Sarfraz |
ICSE '11-IMPACT: "Symbolic Execution for Software ..."
Symbolic Execution for Software Testing in Practice -- Preliminary Assessment
Cristian Cadar, Patrice Godefroid, Sarfraz Khurshid, Corina S. Păsăreanu, Koushik Sen, Nikolai Tillmann, and Willem Visser (Imperial College London, UK; Microsoft Research, USA; University of Texas at Austin, USA; CMU, USA; NASA Ames Research Center, USA; UC Berkeley, USA; Stellenbosch University, South Africa) We present results for the “Impact Project Focus Area” on the topic of symbolic execution as used in software testing. Symbolic execution is a program analysis technique introduced in the 70s that has received renewed interest in recent years, due to algorithmic advances and increased availability of computational power and constraint solving technology. We review classical symbolic execution and some modern extensions such as generalized symbolic execution and dynamic test generation. We also give a preliminary assessment of its use in academia, research labs, and industry. @InProceedings{ICSE11p1066, author = {Cristian Cadar and Patrice Godefroid and Sarfraz Khurshid and Corina S. Păsăreanu and Koushik Sen and Nikolai Tillmann and Willem Visser}, title = {Symbolic Execution for Software Testing in Practice -- Preliminary Assessment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1066--1071}, doi = {}, year = {2011}, } |
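As a reminder of the classical technique the paper surveys, consider this micro-example of ours: executing f on symbolic inputs X and Y explores three paths, and each path condition can be handed to a constraint solver to obtain one concrete test per path.

    public class SymbolicPaths {
        static int f(int x, int y) {
            if (x > y) {            // path 1: X > Y
                return x - y;
            } else if (x == y) {    // path 2: X <= Y and X == Y
                return 0;
            } else {                // path 3: X <= Y and X != Y, i.e. X < Y
                return y - x;
            }
        }

        public static void main(String[] args) {
            // Concrete inputs a solver might return, one per path condition:
            System.out.println(f(5, 2));  // satisfies X > Y
            System.out.println(f(4, 4));  // satisfies X == Y
            System.out.println(f(1, 9));  // satisfies X < Y
        }
    }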
|
Kidwell, Billy |
ICSE '11-SRC: "A Decision Support System ..."
A Decision Support System for the Classification of Software Coding Faults: A Research Abstract
Billy Kidwell (University of Kentucky, USA) A decision support system for fault classification is presented. The fault classification scheme is developed to provide guidance in process improvement and fault-based testing. The research integrates results in fault classification, source code analysis, and fault-based testing research. Initial results indicate that existing change type and fault classification schemes are insufficient for this purpose. Development of sufficient schemes and their evaluation are discussed. @InProceedings{ICSE11p1158, author = {Billy Kidwell}, title = {A Decision Support System for the Classification of Software Coding Faults: A Research Abstract}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1158--1160}, doi = {}, year = {2011}, } |
|
Kim, Heejung |
ICSE '11: "MeCC: Memory Comparison-based ..."
MeCC: Memory Comparison-based Clone Detector
Heejung Kim, Yungbum Jung, Sunghun Kim, and Kwangkeun Yi (Seoul National University, South Korea; Hong Kong University of Science and Technology, China) In this paper, we propose a new semantic clone detection technique by comparing programs’ abstract memory states, which are computed by a semantic-based static analyzer. Our experimental study using three large-scale open source projects shows that our technique can detect semantic clones that existing syntactic- or semantic-based clone detectors miss. Our technique can help developers identify inconsistent clone changes, find refactoring candidates, and understand software evolution related to semantic clones. @InProceedings{ICSE11p301, author = {Heejung Kim and Yungbum Jung and Sunghun Kim and Kwangkeun Yi}, title = {MeCC: Memory Comparison-based Clone Detector}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {301--310}, doi = {}, year = {2011}, } |
|
Kim, Miryung |
ICSE '11: "Detecting Software Modularity ..."
Detecting Software Modularity Violations
Sunny Wong, Yuanfang Cai, Miryung Kim, and Michael Dalton (Drexel University, USA; University of Texas at Austin, USA) This paper presents Clio, an approach that detects modularity violations, which can cause software defects, modularity decay, or expensive refactorings. Clio computes the discrepancies between how components should change together based on the modular structure, and how components actually change together as revealed in version history. We evaluated Clio using 15 releases of Hadoop Common and 10 releases of Eclipse JDT. The results show that hundreds of violations identified using Clio were indeed recognized as design problems or refactored by the developers in later versions. The identified violations exhibit multiple symptoms of poor design, some of which are not easily detectable using existing approaches. @InProceedings{ICSE11p411, author = {Sunny Wong and Yuanfang Cai and Miryung Kim and Michael Dalton}, title = {Detecting Software Modularity Violations}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {411--420}, doi = {}, year = {2011}, } ICSE '11: "An Empirical Investigation ..." An Empirical Investigation into the Role of API-Level Refactorings during Software Evolution Miryung Kim, Dongxiang Cai, and Sunghun Kim (University of Texas at Austin, USA; Hong Kong University of Science and Technology, China) It is widely believed that refactoring improves software quality and programmer productivity by making it easier to maintain and understand software systems. However, the role of refactorings has not been systematically investigated using fine-grained evolution history. We quantitatively and qualitatively studied API-level refactorings and bug fixes in three large open source projects, totaling 26523 revisions of evolution. The study found several surprising results: One, there is an increase in the number of bug fixes after API-level refactorings. Two, the time taken to fix bugs is shorter after API-level refactorings than before. Three, a large number of refactoring revisions include bug fixes at the same time or are related to later bug fix revisions. Four, API-level refactorings occur more frequently before than after major software releases. These results call for re-thinking refactoring’s true benefits. Furthermore, frequent floss refactoring mistakes observed in this study call for new software engineering tools to support safe application of refactoring and behavior modifying edits together. @InProceedings{ICSE11p151, author = {Miryung Kim and Dongxiang Cai and Sunghun Kim}, title = {An Empirical Investigation into the Role of API-Level Refactorings during Software Evolution}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {151--160}, doi = {}, year = {2011}, } |
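The core discrepancy check can be pictured with a toy computation; the component names and the 0.8 threshold are invented, and Clio's actual analysis is considerably richer. The idea: flag pairs that co-change with high confidence in the history although the declared modular structure records no dependency between them.

    import java.util.List;
    import java.util.Set;

    public class ModularityViolationCheck {
        public static void main(String[] args) {
            // Each commit = the set of components it changed.
            List<Set<String>> history = List.of(
                    Set.of("Parser", "Lexer"),
                    Set.of("Parser", "Lexer", "Ast"),
                    Set.of("Parser", "Lexer"),
                    Set.of("Ast"));
            Set<String> declaredDeps = Set.of("Parser->Ast"); // structural coupling

            int together = 0, parserChanges = 0;
            for (Set<String> commit : history) {
                if (commit.contains("Parser")) parserChanges++;
                if (commit.contains("Parser") && commit.contains("Lexer")) together++;
            }
            double confidence = (double) together / parserChanges; // 3/3 = 1.0
            boolean structurallyCoupled = declaredDeps.contains("Parser->Lexer");
            // High co-change confidence with no declared dependency: suspect.
            if (confidence > 0.8 && !structurallyCoupled)
                System.out.println("candidate modularity violation: Parser & Lexer");
        }
    }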
|
Kim, Sunghun |
ICSE '11: "MeCC: Memory Comparison-based ..."
MeCC: Memory Comparison-based Clone Detector
Heejung Kim, Yungbum Jung, Sunghun Kim, and Kwangkeun Yi (Seoul National University, South Korea; Hong Kong University of Science and Technology, China) In this paper, we propose a new semantic clone detection technique by comparing programs’ abstract memory states, which are computed by a semantic-based static analyzer. Our experimental study using three large-scale open source projects shows that our technique can detect semantic clones that existing syntactic- or semantic-based clone detectors miss. Our technique can help developers identify inconsistent clone changes, find refactoring candidates, and understand software evolution related to semantic clones. @InProceedings{ICSE11p301, author = {Heejung Kim and Yungbum Jung and Sunghun Kim and Kwangkeun Yi}, title = {MeCC: Memory Comparison-based Clone Detector}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {301--310}, doi = {}, year = {2011}, } ICSE '11: "Dealing with Noise in Defect ..." Dealing with Noise in Defect Prediction Sunghun Kim, Hongyu Zhang, Rongxin Wu, and Liang Gong (Hong Kong University of Science and Technology, China; Tsinghua University, China) Many software defect prediction models have been built using historical defect data obtained by mining software repositories (MSR). Recent studies have discovered that data so collected contain noise because current defect collection practices are based on optional bug fix keywords or bug report links in change logs. Automatically collected defect data based on the change logs could include noise. This paper proposes approaches to deal with the noise in defect data. First, we measure the impact of noise on defect prediction models and provide guidelines for acceptable noise levels. We measure the noise resistance of two well-known defect prediction algorithms and find that in general, for large defect datasets, adding FP (false positive) or FN (false negative) noise alone does not lead to substantial performance differences. However, the prediction performance decreases significantly when the dataset contains 20%-35% of both FP and FN noise. Second, we propose a noise detection and elimination algorithm to address this problem. Our empirical study shows that our algorithm can identify noisy instances with reasonable accuracy. In addition, after eliminating the noise using our algorithm, defect prediction accuracy is improved. @InProceedings{ICSE11p481, author = {Sunghun Kim and Hongyu Zhang and Rongxin Wu and Liang Gong}, title = {Dealing with Noise in Defect Prediction}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {481--490}, doi = {}, year = {2011}, } ICSE '11: "An Empirical Investigation ..." An Empirical Investigation into the Role of API-Level Refactorings during Software Evolution Miryung Kim, Dongxiang Cai, and Sunghun Kim (University of Texas at Austin, USA; Hong Kong University of Science and Technology, China) It is widely believed that refactoring improves software quality and programmer productivity by making it easier to maintain and understand software systems. However, the role of refactorings has not been systematically investigated using fine-grained evolution history. We quantitatively and qualitatively studied API-level refactorings and bug fixes in three large open source projects, totaling 26523 revisions of evolution. The study found several surprising results: One, there is an increase in the number of bug fixes after API-level refactorings. Two, the time taken to fix bugs is shorter after API-level refactorings than before.
Three, a large number of refactoring revisions include bug fixes at the same time or are related to later bug fix revisions. Four, API-level refactorings occur more frequently before than after major software releases. These results call for re-thinking refactoring’s true benefits. Furthermore, frequent floss refactoring mistakes observed in this study call for new software engineering tools to support safe application of refactoring and behavior modifying edits together. @InProceedings{ICSE11p151, author = {Miryung Kim and Dongxiang Cai and Sunghun Kim}, title = {An Empirical Investigation into the Role of API-Level Refactorings during Software Evolution}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {151--160}, doi = {}, year = {2011}, } |
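For the "Dealing with Noise in Defect Prediction" study above, the noise-injection step used to probe a predictor's noise resistance can be sketched as follows; this is a minimal sketch under our own assumptions, not the authors' code. A chosen fraction of clean labels is flipped to buggy (FP noise) and of buggy labels to clean (FN noise) before retraining the predictor.

    import java.util.Random;

    public class LabelNoise {
        // Flip labels: clean marked buggy = FP noise; buggy marked clean = FN noise.
        static boolean[] inject(boolean[] buggy, double fpRate, double fnRate, long seed) {
            Random rnd = new Random(seed);
            boolean[] noisy = buggy.clone();
            for (int i = 0; i < noisy.length; i++) {
                if (!buggy[i] && rnd.nextDouble() < fpRate) noisy[i] = true;
                if (buggy[i] && rnd.nextDouble() < fnRate) noisy[i] = false;
            }
            return noisy;
        }

        public static void main(String[] args) {
            boolean[] labels = {true, false, false, true, false, false, false, true};
            boolean[] noisy = inject(labels, 0.25, 0.25, 42); // 25% FP + 25% FN noise
            for (boolean b : noisy) System.out.print(b ? "B" : ".");
            System.out.println();
        }
    }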
|
Kivett, Ryan |
ICSE '11-SEIP: "Characterizing the Differences ..."
Characterizing the Differences Between Pre- and Post- Release Versions of Software
Paul Luo Li, Ryan Kivett, Zhiyuan Zhan, Sung-eok Jeon, Nachiappan Nagappan, Brendan Murphy, and Andrew J. Ko (Microsoft Inc., USA; University of Washington, USA; Microsoft Research, USA) Many software producers utilize beta programs to predict post-release quality and to ensure that their products meet quality expectations of users. Prior work indicates that software producers need to adjust predictions to account for differences in usage environments and usage scenarios between beta populations and post-release populations. However, little is known about how usage characteristics relate to field quality and how usage characteristics differ between beta and post-release. In this study, we examine application crash, application hang, system crash, and usage information from millions of Windows® users to 1) examine the effects of usage characteristics differences on field quality (e.g. which usage characteristics impact quality), 2) examine usage characteristics differences between beta and post-release (e.g. do impactful usage characteristics differ), and 3) report experiences adjusting field quality predictions for Windows. Among the 18 usage characteristics that we examined, the five most important were: the number of applications executed, whether the machine was pre-installed by the original equipment manufacturer, two sub-populations (two language/geographic locales), and whether Windows was 64-bit (not 32-bit). We found each of these usage characteristics to differ between beta and post-release, and by adjusting for the differences, accuracy of field quality predictions for Windows improved by ~59%. @InProceedings{ICSE11p716, author = {Paul Luo Li and Ryan Kivett and Zhiyuan Zhan and Sung-eok Jeon and Nachiappan Nagappan and Brendan Murphy and Andrew J. Ko}, title = {Characterizing the Differences Between Pre- and Post- Release Versions of Software}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {716--725}, doi = {}, year = {2011}, } |
|
Kjolstad, Fredrik |
ICSE '11: "Transformation for Class Immutability ..."
Transformation for Class Immutability
Fredrik Kjolstad, Danny Dig, Gabriel Acevedo, and Marc Snir (University of Illinois at Urbana-Champaign, USA) It is common for object-oriented programs to have both mutable and immutable classes. Immutable classes simplify programming because the programmer does not have to reason about side-effects. Sometimes programmers write immutable classes from scratch, other times they transform mutable into immutable classes. To transform a mutable class, programmers must find all methods that mutate its transitive state and all objects that can enter or escape the state of the class. The analyses are non-trivial and the rewriting is tedious. Fortunately, this can be automated. We present an algorithm and a tool, Immutator, that enables the programmer to safely transform a mutable class into an immutable class. Two case studies and one controlled experiment show that Immutator is useful. It (i) reduces the burden of making classes immutable, (ii) is fast enough to be used interactively, and (iii) is much safer than manual transformations. @InProceedings{ICSE11p61, author = {Fredrik Kjolstad and Danny Dig and Gabriel Acevedo and Marc Snir}, title = {Transformation for Class Immutability}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {61--70}, doi = {}, year = {2011}, } |
|
Ko, Andrew J. |
ICSE '11-SEIP: "Characterizing the Differences ..."
Characterizing the Differences Between Pre- and Post- Release Versions of Software
Paul Luo Li, Ryan Kivett, Zhiyuan Zhan, Sung-eok Jeon, Nachiappan Nagappan, Brendan Murphy, and Andrew J. Ko (Microsoft Inc., USA; University of Washington, USA; Microsoft Research, USA) Many software producers utilize beta programs to predict post-release quality and to ensure that their products meet quality expectations of users. Prior work indicates that software producers need to adjust predictions to account for differences in usage environments and usage scenarios between beta populations and post-release populations. However, little is known about how usage characteristics relate to field quality and how usage characteristics differ between beta and post-release. In this study, we examine application crash, application hang, system crash, and usage information from millions of Windows® users to 1) examine the effects of usage characteristics differences on field quality (e.g. which usage characteristics impact quality), 2) examine usage characteristics differences between beta and post-release (e.g. do impactful usage characteristics differ), and 3) report experiences adjusting field quality predictions for Windows. Among the 18 usage characteristics that we examined, the five most important were: the number of applications executed, whether the machine was pre-installed by the original equipment manufacturer, two sub-populations (two language/geographic locales), and whether Windows was 64-bit (not 32-bit). We found each of these usage characteristics to differ between beta and post-release, and by adjusting for the differences, accuracy of field quality predictions for Windows improved by ~59%. @InProceedings{ICSE11p716, author = {Paul Luo Li and Ryan Kivett and Zhiyuan Zhan and Sung-eok Jeon and Nachiappan Nagappan and Brendan Murphy and Andrew J. Ko}, title = {Characterizing the Differences Between Pre- and Post- Release Versions of Software}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {716--725}, doi = {}, year = {2011}, } |
|
Koegel, Maximilian |
ICSE '11-NIER: "A Domain Specific Requirements ..."
A Domain Specific Requirements Model for Scientific Computing (NIER Track)
Yang Li, Nitesh Narayan, Jonas Helming, and Maximilian Koegel (TU München, Germany) Requirements engineering is a core activity in software engineering. However, formal requirements engineering methodologies and documented requirements are often missing in scientific computing projects. We claim that there is a need for methodologies that capture requirements for scientific computing projects, because traditional requirements engineering methodologies are difficult to apply in this domain. We propose a novel domain specific requirements model to meet this need. We conducted an exploratory experiment to evaluate the usage of this model in scientific computing projects. The results indicate that the proposed model facilitates the communication across the domain boundary, which is between the scientific computing domain and the software engineering domain. It supports requirements elicitation for the projects efficiently. @InProceedings{ICSE11p848, author = {Yang Li and Nitesh Narayan and Jonas Helming and Maximilian Koegel}, title = {A Domain Specific Requirements Model for Scientific Computing (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {848--851}, doi = {}, year = {2011}, } |
|
Kogan, Sandra |
ICSE '11-SEIP: "Deploying CogTool: Integrating ..."
Deploying CogTool: Integrating Quantitative Usability Assessment into Real-World Software Development
Rachel Bellamy, Bonnie E. John, and Sandra Kogan (IBM Research Watson, USA; CMU, USA; IBM Software Group, USA) Usability concerns are often difficult to integrate into real-world software development processes. To remedy this situation, IBM research and development, partnering with Carnegie Mellon University, has begun to employ a repeatable and quantifiable usability analysis method, embodied in CogTool, in its development practice. CogTool analyzes tasks performed on an interactive system from a storyboard and a demonstration of tasks on that storyboard, and predicts the time a skilled user will take to perform those tasks. We discuss how IBM designers and UX professionals used CogTool in their existing practice for contract compliance, communication within a product team and between a product team and its customer, assigning appropriate personnel to fix customer complaints, and quantitatively assessing design ideas before a line of code is written. We then reflect on the lessons learned by both the development organizations and the researchers attempting this technology transfer from academic research to integration into real-world practice, and we point to future research to even better serve the needs of practice. @InProceedings{ICSE11p691, author = {Rachel Bellamy and Bonnie E. John and Sandra Kogan}, title = {Deploying CogTool: Integrating Quantitative Usability Assessment into Real-World Software Development}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {691--700}, doi = {}, year = {2011}, } |
|
Kong, Wei-Keat |
ICSE '11-NIER: "Towards Overcoming Human Analyst ..."
Towards Overcoming Human Analyst Fallibility in the Requirements Tracing Process (NIER Track)
David Cuddeback, Alex Dekhtyar, Jane Huffman Hayes, Jeff Holden, and Wei-Keat Kong (California Polytechnic State University, USA; University of Kentucky, USA) Our research group recently discovered that human analysts, when asked to validate candidate traceability matrices, produce predictably imperfect results, in some cases less accurate than the starting candidate matrices. This discovery radically changes our understanding of how to design a fast, accurate and certifiable tracing process that can be implemented as part of software assurance activities. We present our vision for the new approach to achieving this goal. Further, we posit that human fallibility may impact other software engineering activities involving decision support tools. @InProceedings{ICSE11p860, author = {David Cuddeback and Alex Dekhtyar and Jane Huffman Hayes and Jeff Holden and Wei-Keat Kong}, title = {Towards Overcoming Human Analyst Fallibility in the Requirements Tracing Process (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {860--863}, doi = {}, year = {2011}, } |
|
Koschke, Rainer |
ICSE '11: "Frequency and Risks of Changes ..."
Frequency and Risks of Changes to Clones
Nils Göde and Rainer Koschke (University of Bremen, Germany) @InProceedings{ICSE11p311, author = {Nils Göde and Rainer Koschke}, title = {Frequency and Risks of Changes to Clones}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {311--320}, doi = {}, year = {2011}, } ICSE '11-WORKSHOPS: "Fifth International Workshop ..." Fifth International Workshop on Software Clones (IWSC 2011) James R. Cordy, Katsuro Inoue, Stanislaw Jarzabek, and Rainer Koschke (Queen's University, Canada; Osaka University, Japan; National University of Singapore, Singapore; University of Bremen, Germany) Software clones are identical or similar pieces of code, design or other artifacts. Clones are known to be closely related to various issues in software engineering, such as software quality, complexity, architecture, refactoring, evolution, licensing, plagiarism, and so on. Various characteristics of software systems can be uncovered through clone analysis, and system restructuring can be performed by merging clones. The goals of this workshop are to bring together researchers and practitioners from around the world to evaluate the current state of research and applications, discuss common problems, discover new opportunities for collaboration, exchange ideas, envision new areas of research and applications, and explore synergies with similarity analysis in other areas and disciplines. @InProceedings{ICSE11p1210, author = {James R. Cordy and Katsuro Inoue and Stanislaw Jarzabek and Rainer Koschke}, title = {Fifth International Workshop on Software Clones (IWSC 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1210--1211}, doi = {}, year = {2011}, } |
|
Kotonya, Gerald |
ICSE '11-NIER: "Digitally Annexing Desk Space ..."
Digitally Annexing Desk Space for Software Development (NIER Track)
John Hardy, Christopher Bull, Gerald Kotonya, and Jon Whittle (Lancaster University, UK) Software engineering is a team activity yet the programmer’s key tool, the IDE, is still largely that of a soloist. This paper describes the vision, implementation and initial evaluation of CoffeeTable – a fully featured research prototype resulting from our reflections on the software design process. CoffeeTable exchanges the traditional IDE for one built around a shared interactive desk. The proposed solution encourages smooth transitions between agile and traditional modes of working whilst helping to create a shared vision and common reference frame – key to sustaining a good design. This paper also presents early results from the evaluation of CoffeeTable and offers some insights from the lessons learned. In particular, it highlights the role of developer tools and the software constructions that are shaped by them. @InProceedings{ICSE11p812, author = {John Hardy and Christopher Bull and Gerald Kotonya and Jon Whittle}, title = {Digitally Annexing Desk Space for Software Development (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {812--815}, doi = {}, year = {2011}, } |
|
Koziolek, Anne |
ICSE '11-SEIP: "An Industrial Case Study on ..."
An Industrial Case Study on Quality Impact Prediction for Evolving Service-Oriented Software
Heiko Koziolek, Bastian Schlich, Carlos Bilich, Roland Weiss, Steffen Becker, Klaus Krogmann, Mircea Trifu, Raffaela Mirandola, and Anne Koziolek (ABB Corporate Research, Germany; University of Paderborn, Germany; FZI, Germany; Politecnico di Milano, Italy; KIT, Germany) Systematic decision support for architectural design decisions is a major concern for software architects of evolving service-oriented systems. In practice, architects often analyse the expected performance and reliability of design alternatives based on prototypes or former experience. Model-driven prediction methods claim to uncover the tradeoffs between different alternatives quantitatively while being more cost-effective and less error-prone. However, they often suffer from weak tool support and focus on single quality attributes. Furthermore, there is limited evidence on their effectiveness based on documented industrial case studies. Thus, we have applied a novel, model-driven prediction method called Q-ImPrESS on a large-scale process control system consisting of several million lines of code from the automation domain to evaluate its evolution scenarios. This paper reports our experiences with the method and lessons learned. Benefits of Q-ImPrESS are the good architectural decision support and comprehensive tool framework, while one drawback is the time-consuming data collection. @InProceedings{ICSE11p776, author = {Heiko Koziolek and Bastian Schlich and Carlos Bilich and Roland Weiss and Steffen Becker and Klaus Krogmann and Mircea Trifu and Raffaela Mirandola and Anne Koziolek}, title = {An Industrial Case Study on Quality Impact Prediction for Evolving Service-Oriented Software}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {776--785}, doi = {}, year = {2011}, } |
|
Koziolek, Heiko |
ICSE '11-SEIP: "An Industrial Case Study on ..."
An Industrial Case Study on Quality Impact Prediction for Evolving Service-Oriented Software
Heiko Koziolek, Bastian Schlich, Carlos Bilich, Roland Weiss, Steffen Becker, Klaus Krogmann, Mircea Trifu, Raffaela Mirandola, and Anne Koziolek (ABB Corporate Research, Germany; University of Paderborn, Germany; FZI, Germany; Politecnico di Milano, Italy; KIT, Germany) Systematic decision support for architectural design decisions is a major concern for software architects of evolving service-oriented systems. In practice, architects often analyse the expected performance and reliability of design alternatives based on prototypes or former experience. Model-driven prediction methods claim to uncover the tradeoffs between different alternatives quantitatively while being more cost-effective and less error-prone. However, they often suffer from weak tool support and focus on single quality attributes. Furthermore, there is limited evidence on their effectiveness based on documented industrial case studies. Thus, we have applied a novel, model-driven prediction method called Q-ImPrESS on a large-scale process control system consisting of several million lines of code from the automation domain to evaluate its evolution scenarios. This paper reports our experiences with the method and lessons learned. Benefits of Q-ImPrESS are the good architectural decision support and comprehensive tool framework, while one drawback is the time-consuming data collection. @InProceedings{ICSE11p776, author = {Heiko Koziolek and Bastian Schlich and Carlos Bilich and Roland Weiss and Steffen Becker and Klaus Krogmann and Mircea Trifu and Raffaela Mirandola and Anne Koziolek}, title = {An Industrial Case Study on Quality Impact Prediction for Evolving Service-Oriented Software}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {776--785}, doi = {}, year = {2011}, } |
|
Kramer, Jeff |
ICSE '11-DEMOS: "Evolve: Tool Support for Architecture ..."
Evolve: Tool Support for Architecture Evolution
Andrew McVeigh, Jeff Kramer, and Jeff Magee (Imperial College London, UK) Incremental change is intrinsic to both the initial development and subsequent evolution of large complex software systems. Evolve is a graphical design tool that captures this incremental change in the definition of software architecture. It supports a principled and manageable way of dealing with unplanned change and extension. In addition, Evolve supports decentralized evolution in which software is extended and evolved by multiple independent developers. Evolve supports a model-driven approach in that architecture definition is used to directly construct both initial implementations and extensions to these implementations. The tool implements Backbone - an architectural description language (ADL), which has both a textual and a UML2-based graphical representation. The demonstration focuses on the graphical representation. @InProceedings{ICSE11p1040, author = {Andrew McVeigh and Jeff Kramer and Jeff Magee}, title = {Evolve: Tool Support for Architecture Evolution}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1040--1042}, doi = {}, year = {2011}, } |
|
Krasikov, Sophia |
ICSE '11-NIER: "Blending Freeform and Managed ..."
Blending Freeform and Managed Information in Tables (NIER Track)
Nicolas Mangano, Harold Ossher, Ian Simmonds, Matthew Callery, Michael Desmond, and Sophia Krasikov (UC Irvine, USA; IBM Research Watson, USA) Tables are an important tool used by business analysts engaged in early requirements activities (in fact it is safe to say that tables appeal to many other types of user, in a variety of activities and domains). Business analysts typically use the tables provided by office tools. These tables offer great flexibility, but no underlying model, and hence no consistency management, multiple views or other advantages familiar to the users of modeling tools. Modeling tools, however, are usually too rigid for business analysts. In this paper we present a flexible modeling approach to tables, which combines the advantages of both office and modeling tools. Freeform information can co-exist with information managed by an underlying model, and an incremental formalization approach allows each item of information to transition fluidly between freeform and managed. As the model evolves, it is used to guide the user in the process of formalizing any remaining freeform information. The model therefore helps users without restricting them. Early feedback is described, and the approach is analyzed briefly in terms of cognitive dimensions. @InProceedings{ICSE11p840, author = {Nicolas Mangano and Harold Ossher and Ian Simmonds and Matthew Callery and Michael Desmond and Sophia Krasikov}, title = {Blending Freeform and Managed Information in Tables (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {840--843}, doi = {}, year = {2011}, } |
|
Kristoffersen, Steinar |
ICSE '11: "Empirical Assessment of MDE ..."
Empirical Assessment of MDE in Industry
John Hutchinson, Jon Whittle, Mark Rouncefield, and Steinar Kristoffersen (Lancaster University, UK; Østfold University College, Norway; Møreforskning Molde AS, Norway) This paper presents some initial results from a twelve-month empirical research study of model driven engineering (MDE). Using largely qualitative questionnaire and interview methods we investigate and document a range of technical, organizational and social factors that apparently influence organizational responses to MDE: specifically, its perception as a successful or unsuccessful organizational intervention. We then outline a range of lessons learned. Whilst, as with all qualitative research, these lessons should be interpreted with care, they should also be seen as providing a greater understanding of MDE practice in industry, as well as shedding light on the varied, and occasionally surprising, social, technical and organizational factors that affect success and failure. We conclude by suggesting how the next phase of the research will attempt to investigate some of these issues from a different angle and in greater depth. @InProceedings{ICSE11p471, author = {John Hutchinson and Jon Whittle and Mark Rouncefield and Steinar Kristoffersen}, title = {Empirical Assessment of MDE in Industry}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {471--480}, doi = {}, year = {2011}, } |
|
Krogmann, Klaus |
ICSE '11-SEIP: "An Industrial Case Study on ..."
An Industrial Case Study on Quality Impact Prediction for Evolving Service-Oriented Software
Heiko Koziolek, Bastian Schlich, Carlos Bilich, Roland Weiss, Steffen Becker, Klaus Krogmann, Mircea Trifu, Raffaela Mirandola, and Anne Koziolek (ABB Corporate Research, Germany; University of Paderborn, Germany; FZI, Germany; Politecnico di Milano, Italy; KIT, Germany) Systematic decision support for architectural design decisions is a major concern for software architects of evolving service-oriented systems. In practice, architects often analyse the expected performance and reliability of design alternatives based on prototypes or former experience. Model-driven prediction methods claim to uncover the tradeoffs between different alternatives quantitatively while being more cost-effective and less error-prone. However, they often suffer from weak tool support and focus on single quality attributes. Furthermore, there is limited evidence on their effectiveness based on documented industrial case studies. Thus, we have applied a novel, model-driven prediction method called Q-ImPrESS on a large-scale process control system consisting of several million lines of code from the automation domain to evaluate its evolution scenarios. This paper reports our experiences with the method and lessons learned. Benefits of Q-ImPrESS are the good architectural decision support and comprehensive tool framework, while one drawback is the time-consuming data collection. @InProceedings{ICSE11p776, author = {Heiko Koziolek and Bastian Schlich and Carlos Bilich and Roland Weiss and Steffen Becker and Klaus Krogmann and Mircea Trifu and Raffaela Mirandola and Anne Koziolek}, title = {An Industrial Case Study on Quality Impact Prediction for Evolving Service-Oriented Software}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {776--785}, doi = {}, year = {2011}, } |
|
Kruchten, Philippe |
ICSE '11-WORKSHOPS: "Second International Workshop ..."
Second International Workshop on Managing Technical Debt (MTD 2011)
Ipek Ozkaya, Philippe Kruchten, Robert L. Nord, and Nanette Brown (SEI/CMU, USA; University of British Columbia, Canada) The technical debt metaphor is gaining significant traction in the software development community as a way to understand and communicate issues of intrinsic quality, value, and cost. The idea is that developers sometimes accept compromises in a system in one dimension (e.g., modularity) to meet an urgent demand in some other dimension (e.g., a deadline), and that such compromises incur a “debt” on which “interest” has to be paid and which should be repaid at some point for the long-term health of the project. Little is known about technical debt, beyond feelings and opinions. The software engineering research community has an opportunity to study this phenomenon and improve the way it is handled. We can offer software engineers a foundation for managing such trade-offs based on models of their economic impacts. The goal of this second workshop is to discuss managing technical debt as a part of the research agenda for the software engineering field. @InProceedings{ICSE11p1212, author = {Ipek Ozkaya and Philippe Kruchten and Robert L. Nord and Nanette Brown}, title = {Second International Workshop on Managing Technical Debt (MTD 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1212--1213}, doi = {}, year = {2011}, } ICSE '11-WORKSHOPS: "Workshop on SHAring and Reusing ..." Workshop on SHAring and Reusing architectural Knowledge (SHARK 2011) Paris Avgeriou, Patricia Lago, and Philippe Kruchten (University of Groningen, Netherlands; VU University Amsterdam, Netherlands; University of British Columbia, Canada) Architectural Knowledge (AK) is defined as the integrated representation of the software architecture of a software-intensive system or family of systems along with architectural decisions and their rationale, external influence and the development environment. The SHARK workshop series focuses on current methods, languages, and tools that can be used to extract, represent, share, apply, and reuse AK, and the experimentation and/or exploitation thereof. This sixth edition of SHARK will discuss, among other topics, the approaches for AK personalization, where knowledge is not codified through templates or annotations, but is exchanged through discussion among the different stakeholders. @InProceedings{ICSE11p1220, author = {Paris Avgeriou and Patricia Lago and Philippe Kruchten}, title = {Workshop on SHAring and Reusing architectural Knowledge (SHARK 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1220--1221}, doi = {}, year = {2011}, } |
|
Kuhn, Adrian |
ICSE '11-WORKSHOPS: "Third International Workshop ..."
Third International Workshop on Search-Driven Development: Users, Infrastructure, Tools, and Evaluation (SUITE 2011)
Sushil Bajracharya, Adrian Kuhn, and Yunwen Ye (Black Duck Software, USA; University of Bern, Switzerland; Software Research Associates Inc., Japan) SUITE is a workshop that focuses on exploring the notion of search as a fundamental activity during software development. The first two editions of SUITE were held at ICSE 2009/2010, and they focused on building a research community that brings together researchers and practitioners who are interested in the research areas that SUITE addresses. While this third workshop continues the effort of community building, it puts more focus on directly addressing some of the urgent issues identified by the previous two workshops, encouraging researchers to contribute to and take advantage of common datasets that we have started assembling for SUITE research. @InProceedings{ICSE11p1228, author = {Sushil Bajracharya and Adrian Kuhn and Yunwen Ye}, title = {Third International Workshop on Search-Driven Development: Users, Infrastructure, Tools, and Evaluation (SUITE 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1228--1229}, doi = {}, year = {2011}, } |
|
Kumar, Sandeep |
ICSE '11: "Mining Message Sequence Graphs ..."
Mining Message Sequence Graphs
Sandeep Kumar, Siau Cheng Khoo, Abhik Roychoudhury, and David Lo (National University of Singapore, Singapore; Singapore Management University, Singapore) Dynamic specification mining involves discovering software behavior from traces for the purpose of program comprehension and bug detection. However, mining program behavior from execution traces is difficult for concurrent/distributed programs. Specifically, the inherent partial order relationships among events occurring across processes pose a big challenge to specification mining. In this paper, we propose a framework for mining partial orders so as to understand concurrent program behavior. Our miner takes in a set of concurrent program traces, and produces a message sequence graph (MSG) to represent the concurrent program behavior. An MSG is a graph whose nodes are partial orders, represented as Message Sequence Charts. Mining an MSG allows us to understand concurrent program behaviors since the nodes of the MSG depict important “phases” or “interaction snippets” involving several concurrently executing processes. To demonstrate the power of this technique, we conducted experiments on mining behaviors of several fairly complex distributed systems. We show that our miner can produce the corresponding MSGs with both high precision and recall. @InProceedings{ICSE11p91, author = {Sandeep Kumar and Siau Cheng Khoo and Abhik Roychoudhury and David Lo}, title = {Mining Message Sequence Graphs}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {91--100}, doi = {}, year = {2011}, } ICSE '11-DOCTORALPRESENT: "Specification Mining in Concurrent ..." Specification Mining in Concurrent and Distributed Systems Sandeep Kumar (National University of Singapore, Singapore) Distributed systems contain several interacting components that perform complex computational tasks. Formal specification of the interaction protocols is crucial to the understanding of these systems. Dynamic specification mining from traces containing information about actual interactions during execution of distributed systems can play a useful role in verification and comprehension when formal specification is not available. A framework for behavioral specification mining in distributed systems is proposed. Concurrency and complexity in the distributed models raise special challenges to specification mining in such systems. @InProceedings{ICSE11p1086, author = {Sandeep Kumar}, title = {Specification Mining in Concurrent and Distributed Systems}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1086--1089}, doi = {}, year = {2011}, } ICSE '11-SRC: "Specification Mining in Concurrent ..." Specification Mining in Concurrent and Distributed Systems Sandeep Kumar (National University of Singapore, Singapore) Dynamic specification mining involves discovering software behavior from traces for the purpose of program comprehension and bug detection. However, in concurrent/distributed programs, the inherent partial order relationships among events occurring across processes pose a big challenge to specification mining. A framework is proposed for mining partial orders that takes in a set of concurrent program traces and produces a message sequence graph (MSG). Mining an MSG allows one to understand concurrent behaviors since the nodes of the MSG depict important “phases” or “interaction snippets” involving several concurrently executing processes. 
Experiments on mining behaviors of fairly complex distributed systems show that the proposed miner can produce the corresponding MSGs with both high precision and high recall. @InProceedings{ICSE11p1161, author = {Sandeep Kumar}, title = {Specification Mining in Concurrent and Distributed Systems}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1161--1163}, doi = {}, year = {2011}, } |
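To make the shape of the mining pipeline concrete, here is a toy sketch: cut each concurrent trace into windows, canonicalize each window as a per-process projection (a crude stand-in for an MSC), deduplicate windows into MSG nodes, and connect nodes that occur consecutively. The real miner infers partial orders rather than fixed-size windows; all names and events here are illustrative:

```java
import java.util.*;

public class MsgMinerSketch {
    record Event(String process, String label) {}

    // Canonical form of a window: each process's event sequence, keyed by process.
    static String canonical(List<Event> window) {
        Map<String, StringBuilder> perProc = new TreeMap<>();
        for (Event e : window)
            perProc.computeIfAbsent(e.process(), p -> new StringBuilder())
                   .append(e.label()).append(';');
        return perProc.toString();
    }

    public static void main(String[] args) {
        List<Event> trace = List.of(
            new Event("client", "connect"), new Event("server", "accept"),
            new Event("client", "send"),    new Event("server", "recv"),
            new Event("client", "send"),    new Event("server", "recv"));
        int windowSize = 2;
        Map<String, Integer> nodeIds = new LinkedHashMap<>();  // MSG nodes
        List<int[]> edges = new ArrayList<>();                 // MSG edges
        Integer prev = null;
        for (int i = 0; i + windowSize <= trace.size(); i += windowSize) {
            String key = canonical(trace.subList(i, i + windowSize));
            Integer id = nodeIds.get(key);
            if (id == null) { id = nodeIds.size(); nodeIds.put(key, id); }
            if (prev != null) edges.add(new int[]{prev, id});
            prev = id;
        }
        System.out.println("MSG nodes: " + nodeIds);           // 2 distinct snippets
        System.out.println("MSG edges: " + edges.size());
    }
}
```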
|
Kwan, Irwin |
ICSE '11-NIER: "The Hidden Experts in Software-Engineering ..."
The Hidden Experts in Software-Engineering Communication (NIER Track)
Irwin Kwan and Daniela Damian (University of Victoria, Canada) Sharing knowledge in a timely fashion is important in distributed software development. However, because experts are difficult to locate, developers tend to broadcast information to find the right people, which leads to overload and to communication breakdowns. We study the context in which experts are included in an email discussion so that team members can identify experts sooner. In this paper, we conduct a case study examining why people emerge in discussions by examining email within a distributed team. We find that people emerge in the following four situations: when a crisis occurs, when they respond to explicit requests, when they are forwarded in announcements, and when discussants follow up on a previous event such as a meeting. We observe that emergent people not only respond to situations where developers are seeking expertise, but also emerge to execute routine tasks. Our findings have implications for expertise seeking and knowledge management processes. @InProceedings{ICSE11p800, author = {Irwin Kwan and Daniela Damian}, title = {The Hidden Experts in Software-Engineering Communication (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {800--803}, doi = {}, year = {2011}, } |
|
Labiche, Yvan |
ICSE '11-SEIP: "Enabling the Runtime Assertion ..."
Enabling the Runtime Assertion Checking of Concurrent Contracts for the Java Modeling Language
Wladimir Araujo, Lionel C. Briand, and Yvan Labiche (Juniper Networks, Canada; Simula Research Laboratory, Norway; University of Oslo, Norway; Carleton University, Canada) Though there exists ample support for Design by Contract (DbC) for sequential programs, applying DbC to concurrent programs presents several challenges. In previous work, we extended the Java Modeling Language (JML) with constructs to specify concurrent contracts for Java programs. We present a runtime assertion checker (RAC) for the expanded JML capable of verifying assertions for concurrent Java programs. We systematically evaluate the validity of system testing results obtained via runtime assertion checking using actual concurrent and functional faults on a highly concurrent industrial system from the telecommunications domain. @InProceedings{ICSE11p786, author = {Wladimir Araujo and Lionel C. Briand and Yvan Labiche}, title = {Enabling the Runtime Assertion Checking of Concurrent Contracts for the Java Modeling Language}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {786--795}, doi = {}, year = {2011}, } |
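For readers unfamiliar with JML, the sketch below shows the flavor of a contract a runtime assertion checker would verify; JML lives in comments, so this compiles as plain Java. The requires/ensures clauses are standard JML; the paper's extension adds concurrency-specific clauses (e.g., about locks and atomicity) whose exact syntax is not reproduced here:

```java
public class BoundedCounter {
    private /*@ spec_public @*/ int value;
    private /*@ spec_public @*/ final int max;

    //@ requires max > 0;
    public BoundedCounter(int max) { this.max = max; }

    // Standard JML pre/postcondition; the paper's extension would additionally
    // constrain concurrent access (e.g., which lock must be held).
    //@ requires value < max;
    //@ ensures value == \old(value) + 1;
    public synchronized void increment() { value++; }

    //@ ensures \result == value;
    public synchronized int get() { return value; }

    public static void main(String[] args) throws InterruptedException {
        BoundedCounter c = new BoundedCounter(10);
        Thread t = new Thread(c::increment);
        t.start();
        t.join();
        System.out.println(c.get()); // 1
    }
}
```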
|
Lago, Patricia |
ICSE '11-WORKSHOPS: "Workshop on SHAring and Reusing ..."
Workshop on SHAring and Reusing architectural Knowledge (SHARK 2011)
Paris Avgeriou, Patricia Lago, and Philippe Kruchten (University of Groningen, Netherlands; VU University Amsterdam, Netherlands; University of British Columbia, Canada) Architectural Knowledge (AK) is defined as the integrated representation of the software architecture of a software-intensive system or family of systems along with architectural decisions and their rationale, external influence and the development environment. The SHARK workshop series focuses on current methods, languages, and tools that can be used to extract, represent, share, apply, and reuse AK, and the experimentation and/or exploitation thereof. This sixth edition of SHARK will discuss, among other topics, the approaches for AK personalization, where knowledge is not codified through templates or annotations, but is exchanged through discussion among the different stakeholders. @InProceedings{ICSE11p1220, author = {Paris Avgeriou and Patricia Lago and Philippe Kruchten}, title = {Workshop on SHAring and Reusing architectural Knowledge (SHARK 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1220--1221}, doi = {}, year = {2011}, } |
|
Lamb, Luis C. |
ICSE '11-NIER: "Learning to Adapt Requirements ..."
Learning to Adapt Requirements Specifications of Evolving Systems (NIER Track)
Rafael V. Borges, Artur d'Avila Garcez, Luis C. Lamb, and Bashar Nuseibeh (City University London, UK; UFRGS, Brazil; The Open University, UK; Lero, Ireland) We propose a novel framework for adapting and evolving software requirements models. The framework uses model checking and machine learning techniques for verifying properties and evolving model descriptions. The paper offers two novel contributions and a preliminary evaluation and application of the ideas presented. First, the framework is capable of coping with errors in the specification process so that performance degrades gracefully. Second, the framework can also be used to re-engineer a model from examples only, when an initial model is not available. We provide a preliminary evaluation of our framework by applying it to a Pump System case study, and integrate our prototype tool with the NuSMV model checker. We show how the tool integrates verification and evolution of abstract models, and also how it is capable of re-engineering partial models given examples from an existing system. @InProceedings{ICSE11p856, author = {Rafael V. Borges and Artur d'Avila Garcez and Luis C. Lamb and Bashar Nuseibeh}, title = {Learning to Adapt Requirements Specifications of Evolving Systems (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {856--859}, doi = {}, year = {2011}, } |
|
Lano, Kevin |
ICSE '11: "Model Projection: Simplifying ..."
Model Projection: Simplifying Models in Response to Restricting the Environment
Kelly Androutsopoulos, David Binkley, David Clark, Nicolas Gold, Mark Harman, Kevin Lano, and Zheng Li (University College London, UK; Loyola University Maryland, USA; King's College London, UK) This paper introduces Model Projection. Finite state models such as Extended Finite State Machines are being used in an ever increasing number of software engineering activities. Model projection facilitates model development by specializing models for a specific operating environment. A projection is useful in many design-level applications including specification reuse and property verification. The applicability of model projection rests upon three critical concerns: correctness, effectiveness, and efficiency, all of which are addressed in this paper. We introduce four related algorithms for model projection and prove each correct. We also present an empirical study of effectiveness and efficiency using ten models, including widely-studied benchmarks as well as industrial models. Results show that a typical projection includes about half of the states and a third of the transitions from the original model. @InProceedings{ICSE11p291, author = {Kelly Androutsopoulos and David Binkley and David Clark and Nicolas Gold and Mark Harman and Kevin Lano and Zheng Li}, title = {Model Projection: Simplifying Models in Response to Restricting the Environment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {291--300}, doi = {}, year = {2011}, } |
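The core projection idea can be illustrated independently of the paper's four algorithms: restrict the environment to a set of allowed inputs, drop transitions that can no longer fire, then drop transitions leaving states that become unreachable. A toy sketch with invented state and input names:

```java
import java.util.*;

public class ProjectionSketch {
    record Transition(String from, String input, String to) {}

    static Set<String> reachable(String start, List<Transition> ts) {
        Set<String> seen = new HashSet<>(List.of(start));
        Deque<String> work = new ArrayDeque<>(seen);
        while (!work.isEmpty()) {
            String s = work.pop();
            for (Transition t : ts)
                if (t.from().equals(s) && seen.add(t.to())) work.push(t.to());
        }
        return seen;
    }

    public static void main(String[] args) {
        List<Transition> model = List.of(
            new Transition("Idle", "insertCard", "Auth"),
            new Transition("Auth", "pin", "Menu"),
            new Transition("Menu", "withdraw", "Dispense"),
            new Transition("Menu", "deposit", "Accept"));  // environment never deposits
        Set<String> allowedInputs = Set.of("insertCard", "pin", "withdraw");

        // Step 1: keep only transitions the restricted environment can trigger.
        List<Transition> kept = model.stream()
            .filter(t -> allowedInputs.contains(t.input())).toList();
        // Step 2: keep only transitions leaving still-reachable states.
        Set<String> live = reachable("Idle", kept);
        kept = kept.stream().filter(t -> live.contains(t.from())).toList();
        System.out.println("projected transitions: " + kept);
    }
}
```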
|
Lanza, Michele |
ICSE '11: "Software Systems as Cities: ..."
Software Systems as Cities: A Controlled Experiment
Richard Wettel, Michele Lanza, and Romain Robbes (University of Lugano, Switzerland; University of Chile, Chile) Software visualization is a popular program comprehension technique used in the context of software maintenance, reverse engineering, and software evolution analysis. While there is a broad range of software visualization approaches, only a few have been empirically evaluated. This is detrimental to the acceptance of software visualization in both the academic and the industrial world. We present a controlled experiment for the empirical evaluation of a 3D software visualization approach based on a city metaphor and implemented in a tool called CodeCity. The goal is to provide experimental evidence of the viability of our approach in the context of program comprehension by having subjects perform tasks related to program comprehension. We designed our experiment based on lessons extracted from the current body of research. We conducted the experiment in four locations across three countries, involving 41 participants from both academia and industry. The experiment shows that CodeCity leads to a statistically significant increase in terms of task correctness and decrease in task completion time. We detail the experiment we performed, discuss its results and reflect on the many lessons learned. @InProceedings{ICSE11p551, author = {Richard Wettel and Michele Lanza and Romain Robbes}, title = {Software Systems as Cities: A Controlled Experiment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {551--560}, doi = {}, year = {2011}, } ICSE '11-DEMOS: "Miler: A Toolset for Exploring ..." Miler: A Toolset for Exploring Email Data Alberto Bacchelli, Michele Lanza, and Marco D'Ambros (University of Lugano, Switzerland) Source code is the target and final outcome of software development. By focusing our research and analysis on source code only, we risk forgetting that software is the product of human efforts, where communication plays a pivotal role. One of the most used communication means is email, which has become vital for any distributed development project. Analyzing email archives is non-trivial, due to the noisy and unstructured nature of emails, the vast amounts of information, the unstandardized storage systems, and the gap with development tools. We present Miler, a toolset that allows the exploration of this form of communication, in the context of software maintenance and evolution. With Miler we can retrieve data from mailing list repositories in different formats, model emails as first-class entities, and transparently store them in databases. Miler offers tools and support for navigating the content, manually labelling emails with discussed source code entities, automatically linking emails to source code, measuring code entities’ popularity in mailing lists, exposing structured content in the unstructured content, and integrating email communication in an IDE. @InProceedings{ICSE11p1025, author = {Alberto Bacchelli and Michele Lanza and Marco D'Ambros}, title = {Miler: A Toolset for Exploring Email Data}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1025--1027}, doi = {}, year = {2011}, } |
|
Lawall, Julia |
ICSE '11: "Leveraging Software Architectures ..."
Leveraging Software Architectures to Guide and Verify the Development of Sense/Compute/Control Applications
Damien Cassou, Emilie Balland, Charles Consel, and Julia Lawall (University of Bordeaux, France; INRIA, France; DIKU, Denmark; LIP6, France) A software architecture describes the structure of a computing system by specifying software components and their interactions. Mapping a software architecture to an implementation is a well known challenge. A key element of this mapping is the architecture’s description of the data and control-flow interactions between components. The characterization of these interactions can be rather abstract or very concrete, providing more or less implementation guidance, programming support, and static verification. In this paper, we explore one point in the design space between abstract and concrete component interaction specifications. We introduce a notion of interaction contract that expresses the set of allowed interactions between components, describing both data and control-flow constraints. This declaration is part of the architecture description, allows generation of extensive programming support, and enables various verifications. We instantiate our approach in an architecture description language for Sense/Compute/Control applications, and describe associated compilation and verification strategies. @InProceedings{ICSE11p431, author = {Damien Cassou and Emilie Balland and Charles Consel and Julia Lawall}, title = {Leveraging Software Architectures to Guide and Verify the Development of Sense/Compute/Control Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {431--440}, doi = {}, year = {2011}, } |
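A rough sketch of what an interaction contract constrains, written in plain Java rather than the paper's ADL; the component and data names are invented for illustration:

```java
import java.util.List;

// A contract declaring the allowed interactions of a controller component:
// what may activate it, what data it may pull, and what it must emit.
record InteractionContract(String activatedBy, List<String> mayPull, String mustEmit) {
    void check(String trigger, List<String> pulls, String emitted) {
        if (!activatedBy.equals(trigger))
            throw new IllegalStateException("undeclared activation: " + trigger);
        if (!mayPull.containsAll(pulls))
            throw new IllegalStateException("undeclared data pull: " + pulls);
        if (!mustEmit.equals(emitted))
            throw new IllegalStateException("missing required emission");
    }
}

public class ContractDemo {
    public static void main(String[] args) {
        // "When motion is sensed, the controller may read the light level and
        //  must order the lamp": both data and control flow are constrained.
        InteractionContract c =
            new InteractionContract("MotionSensor", List.of("LightLevel"), "LampCommand");
        c.check("MotionSensor", List.of("LightLevel"), "LampCommand"); // passes
    }
}
```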
|
Layaïda, Nabil |
ICSE '11-DEMOS: "Inconsistent Path Detection ..."
Inconsistent Path Detection for XML IDEs
Pierre Genevès and Nabil Layaïda (CNRS, France; INRIA, France) We present the first IDE augmented with static detection of inconsistent paths for simplifying the development and debugging of any application involving XPath expressions. @InProceedings{ICSE11p983, author = {Pierre Genevès and Nabil Layaïda}, title = {Inconsistent Path Detection for XML IDEs}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {983--985}, doi = {}, year = {2011}, } |
|
Layman, Lucas |
ICSE '11-SEIP: "A Case Study of Measuring ..."
A Case Study of Measuring Process Risk for Early Insights into Software Safety
Lucas Layman, Victor R. Basili, Marvin V. Zelkowitz, and Karen L. Fisher (Fraunhofer CESE, USA; University of Maryland, USA; NASA Goddard Spaceflight Center, USA) In this case study, we examine software safety risk in three flight hardware systems in NASA’s Constellation spaceflight program. We applied our Technical and Process Risk Measurement (TPRM) methodology to the Constellation hazard analysis process to quantify the technical and process risks involving software safety in the early design phase of these projects. We analyzed 154 hazard reports and collected metrics to measure the prevalence of software in hazards and the specificity of descriptions of software causes of hazardous conditions. We found that 49-70% of the 154 hazardous conditions could be caused by software or software was involved in the prevention of the hazardous condition. We also found that 12-17% of the 2013 hazard causes involved software, and that 23-29% of all causes had a software control. The application of the TPRM methodology identified process risks in the application of the hazard analysis process itself that may lead to software safety risk. @InProceedings{ICSE11p623, author = {Lucas Layman and Victor R. Basili and Marvin V. Zelkowitz and Karen L. Fisher}, title = {A Case Study of Measuring Process Risk for Early Insights into Software Safety}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {623--632}, doi = {}, year = {2011}, } |
|
Lee, Choonghwan |
ICSE '11: "Mining Parametric Specifications ..."
Mining Parametric Specifications
Choonghwan Lee, Feng Chen, and Grigore Roşu (University of Illinois at Urbana-Champaign, USA) Specifications carrying formal parameters that are bound to concrete data at runtime can effectively and elegantly capture multi-object behaviors or protocols. Unfortunately, parametric specifications are not easy to formulate by nonexperts and, consequently, are rarely available. This paper presents a general approach for mining parametric specifications from program executions, based on a strict separation of concerns: (1) a trace slicer first extracts sets of independent interactions from parametric execution traces; and (2) the resulting non-parametric trace slices are then passed to any conventional non-parametric property learner. The presented technique has been implemented in jMiner, which has been used to automatically mine many meaningful and non-trivial parametric properties of OpenJDK 6. @InProceedings{ICSE11p591, author = {Choonghwan Lee and Feng Chen and Grigore Roşu}, title = {Mining Parametric Specifications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {591--600}, doi = {}, year = {2011}, } |
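The first stage, trace slicing, can be illustrated with a single-parameter simplification (real parametric slicing handles tuples of parameter bindings); the event names below are invented:

```java
import java.util.*;

public class TraceSlicerSketch {
    record Event(String name, String param) {}  // e.g. next(i1) -> ("next", "i1")

    public static void main(String[] args) {
        List<Event> trace = List.of(
            new Event("create", "i1"), new Event("create", "i2"),
            new Event("hasNext", "i1"), new Event("next", "i1"),
            new Event("next", "i2"));            // i2 skipped hasNext

        // Group events by their parameter binding: each slice is then a plain,
        // non-parametric trace that any conventional property learner can consume.
        Map<String, List<String>> slices = new LinkedHashMap<>();
        for (Event e : trace)
            slices.computeIfAbsent(e.param(), p -> new ArrayList<>()).add(e.name());

        // i1 -> [create, hasNext, next], i2 -> [create, next]
        slices.forEach((p, s) -> System.out.println(p + " -> " + s));
    }
}
```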
|
Lee, Da Young |
ICSE '11-SRC: "A Case Study on Refactoring ..."
A Case Study on Refactoring in Haskell Programs
Da Young Lee (North Carolina State University, USA) Programmers use refactoring to improve the design of existing code without changing external behavior. Current research does not empirically answer the question, “Why and how do programmers refactor functional programs?” In order to answer the question, I conducted a case study on three open source projects in Haskell. I investigated changed portions of code in 55 successive versions of a given project to classify how programmers refactor. I found a total of 143 refactorings classified by 12 refactoring types. I also found 5 new refactoring types and propose two new refactoring tools that would be useful for developers. @InProceedings{ICSE11p1164, author = {Da Young Lee}, title = {A Case Study on Refactoring in Haskell Programs}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1164--1166}, doi = {}, year = {2011}, } |
|
Lee, Juhnyoung |
ICSE '11-DEMOS: "Using MATCON to Generate CASE ..."
Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications
Elad Fein, Natalia Razinkov, Shlomit Shachor, Pietro Mazzoleni, Sweefen Goh, Richard Goodwin, Manisha Bhandar, Shyh-Kwei Chen, Juhnyoung Lee, Vibha Singhal Sinha, Senthil Mani, Debdoot Mukherjee, Biplav Srivastava, and Pankaj Dhoolia (IBM Research Haifa, Israel; IBM Research Watson, USA; IBM Research, India) The complex process of adapting pre-packaged applications, such as Oracle or SAP, to an organization’s needs is full of challenges. Although detailed, structured, and well-documented methods govern this process, the consulting team implementing the method must spend a huge amount of manual effort to make sure the guidelines of the method are followed as intended by the method author. MATCON breaks down the method content, documents, templates, and work products into reusable objects, and enables them to be cataloged and indexed so these objects can be easily found and reused on subsequent projects. By using models and meta-modeling the reusable methods, we automatically produce a CASE tool to apply these methods, thereby guiding consultants through this complex process. The resulting tool helps consultants create the method deliverables for the initial phases of large customization projects. Our MATCON output, referred to as Consultant Assistant, has shown significant savings in training costs, a 20–30% improvement in productivity, and positive results in large Oracle and SAP implementations. @InProceedings{ICSE11p1016, author = {Elad Fein and Natalia Razinkov and Shlomit Shachor and Pietro Mazzoleni and Sweefen Goh and Richard Goodwin and Manisha Bhandar and Shyh-Kwei Chen and Juhnyoung Lee and Vibha Singhal Sinha and Senthil Mani and Debdoot Mukherjee and Biplav Srivastava and Pankaj Dhoolia}, title = {Using MATCON to Generate CASE Tools That Guide Deployment of Pre-Packaged Applications}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1016--1018}, doi = {}, year = {2011}, } |
|
Lee, Seok-Won |
ICSE '11-WORKSHOPS: "Seventh International Workshop ..."
Seventh International Workshop on Software Engineering for Secure Systems (SESS 2011)
Seok-Won Lee, Mattia Monga, and Jan Jürjens (University of Nebraska-Lincoln, USA; Università degli Studi di Milano, Italy; TU Dortmund, Germany) The 7th edition of the SESS workshop aims at providing a venue for software engineers and security researchers to exchange ideas and techniques. In fact, software is at the core of most business transactions, and its smart integration in an industrial setting may be a competitive advantage even when the core competence is outside the ICT field. As a result, the revenues of a firm depend directly on several complex software-based systems. Thus, stakeholders and users should be able to trust these systems to provide data and elaborations with a degree of confidentiality, integrity, and availability compatible with their needs. Moreover, the pervasiveness of software products in the creation of critical infrastructures has raised the value of trustworthiness, and new efforts should be dedicated to achieving it. However, nowadays almost every application has some kind of security requirement even if its use is not to be considered critical. @InProceedings{ICSE11p1200, author = {Seok-Won Lee and Mattia Monga and Jan Jürjens}, title = {Seventh International Workshop on Software Engineering for Secure Systems (SESS 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1200--1201}, doi = {}, year = {2011}, } |
|
Legay, Axel |
ICSE '11: "Symbolic Model Checking of ..."
Symbolic Model Checking of Software Product Lines
Andreas Classen, Patrick Heymans, Pierre-Yves Schobbens, and Axel Legay (University of Namur, Belgium; IRISA/INRIA Rennes, France; University of Liège, Belgium) We study the problem of model checking software product line (SPL) behaviours against temporal properties. This is more difficult than for single systems because an SPL with n features yields up to 2^n individual systems to verify. As each individual verification suffers from state explosion, it is crucial to propose efficient formalisms and heuristics. We recently proposed featured transition systems (FTS), a compact representation for SPL behaviour, and defined algorithms for model checking FTS against linear temporal properties. Although they were shown to outperform individual system verifications, they still face a state explosion problem as they enumerate and visit system states one by one. In this paper, we tackle this latter problem by using symbolic representations of the state space. This led us to consider computation tree logic (CTL), which is supported by the industry-strength symbolic model checker NuSMV. We first lay the foundations for symbolic SPL model checking by defining a feature-oriented version of CTL and its dedicated algorithms. We then describe an implementation that adapts the NuSMV language and tool infrastructure. Finally, we propose theoretical and empirical evaluations of our results. The benchmarks show that for certain properties, our algorithm is over a hundred times faster than model checking each system with the standard algorithm. @InProceedings{ICSE11p321, author = {Andreas Classen and Patrick Heymans and Pierre-Yves Schobbens and Axel Legay}, title = {Symbolic Model Checking of Software Product Lines}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {321--330}, doi = {}, year = {2011}, } |
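The featured-transition-system idea can be sketched by tracking, per state, the set of products that can reach it, so a single fixpoint covers all products at once. The sketch below keeps product sets as explicit bit sets; the paper's contribution is to keep them symbolic (via NuSMV). Feature names are invented:

```java
import java.util.*;
import java.util.function.Predicate;

public class FtsSketch {
    record Trans(int from, int to, Predicate<Set<String>> guard) {}

    public static void main(String[] args) {
        List<Set<String>> products = List.of(
            Set.of("Base"), Set.of("Base", "Vibra"),
            Set.of("Base", "Camera"), Set.of("Base", "Vibra", "Camera"));
        List<Trans> fts = List.of(
            new Trans(0, 1, p -> true),                   // unguarded transition
            new Trans(1, 2, p -> p.contains("Camera")));  // feature-guarded

        // reach[s] = products that can reach state s (one fixpoint for all products).
        List<BitSet> reach = new ArrayList<>();
        for (int s = 0; s < 3; s++) reach.add(new BitSet());
        reach.get(0).set(0, products.size());             // all products start in state 0
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Trans t : fts)
                for (int p = reach.get(t.from()).nextSetBit(0); p >= 0;
                     p = reach.get(t.from()).nextSetBit(p + 1))
                    if (t.guard().test(products.get(p)) && !reach.get(t.to()).get(p)) {
                        reach.get(t.to()).set(p);
                        changed = true;
                    }
        }
        System.out.println("products reaching state 2: " + reach.get(2)); // {2, 3}
    }
}
```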
|
Lewis, Chris |
ICSE '11-WORKSHOPS: "Workshop on Games and Software ..."
Workshop on Games and Software Engineering (GAS 2011)
Jim Whitehead and Chris Lewis (UC Santa Cruz, USA) At the core of video games are complex interactions leading to emergent behaviors. This complexity creates difficulties architecting components, predicting their behaviors and testing the results. The Workshop on Games and Software Engineering (GAS 2011) provides an opportunity for software engineering researchers and practitioners who work with games to come together and discuss how these two areas can be intertwined. @InProceedings{ICSE11p1194, author = {Jim Whitehead and Chris Lewis}, title = {Workshop on Games and Software Engineering (GAS 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1194--1195}, doi = {}, year = {2011}, } |
|
Lewis, Grace A. |
ICSE '11-WORKSHOPS: "Third International Workshop ..."
Third International Workshop on Principles of Engineering Service-Oriented Systems (PESOS 2011)
Manuel Carro, Dimka Karastoyanova, Grace A. Lewis, and Anna Liu (Universidad Politécnica de Madrid, Spain; University of Stuttgart, Germany; CMU, USA; NICTA, Australia) Service-oriented systems have attracted great interest from industry and research communities worldwide. Service integrators, developers, and providers are collaborating to address the various challenges in the field. PESOS 2011 is a forum for all these communities to present and discuss a wide range of topics related to service-oriented systems. The goal of PESOS is to bring together researchers from academia and industry, as well as practitioners working in the areas of software engineering and service-oriented systems to discuss research challenges, recent developments, novel applications, as well as methods, techniques, experiences, and tools to support the engineering of service-oriented systems. @InProceedings{ICSE11p1218, author = {Manuel Carro and Dimka Karastoyanova and Grace A. Lewis and Anna Liu}, title = {Third International Workshop on Principles of Engineering Service-Oriented Systems (PESOS 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1218--1219}, doi = {}, year = {2011}, } |
|
Li, J. Jenny |
ICSE '11-WORKSHOPS: "Sixth International Workshop ..."
Sixth International Workshop on Automation of Software Test (AST 2011)
Howard Foster, Antonia Bertolino, and J. Jenny Li (City University London, UK; ISTI-CNR, Italy; Avaya Research Labs, USA) The Sixth International Workshop on Automation of Software Test (AST 2011) is associated with the 33rd International Conference on Software Engineering (ICSE 2011). This edition of AST was focused on the special theme of Software Design and the Automation of Software Test and authors were encouraged to submit work in this area. The workshop covers two days with presentations of regular research papers, industrial case studies and experience reports. The workshop also aims to have extensive discussions on collaborative solutions in the form of charette sessions. This paper summarizes the organization of the workshop, the special theme, as well as the sessions. @InProceedings{ICSE11p1216, author = {Howard Foster and Antonia Bertolino and J. Jenny Li}, title = {Sixth International Workshop on Automation of Software Test (AST 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1216--1217}, doi = {}, year = {2011}, } |
|
Li, Paul Luo |
ICSE '11-SEIP: "Characterizing the Differences ..."
Characterizing the Differences Between Pre- and Post- Release Versions of Software
Paul Luo Li, Ryan Kivett, Zhiyuan Zhan, Sung-eok Jeon, Nachiappan Nagappan, Brendan Murphy, and Andrew J. Ko (Microsoft Inc., USA; University of Washington, USA; Microsoft Research, USA) Many software producers utilize beta programs to predict post-release quality and to ensure that their products meet quality expectations of users. Prior work indicates that software producers need to adjust predictions to account for differences in usage environments and usage scenarios between beta populations and post-release populations. However, little is known about how usage characteristics relate to field quality and how usage characteristics differ between beta and post-release. In this study, we examine application crash, application hang, system crash, and usage information from millions of Windows® users to 1) examine the effects of usage characteristics differences on field quality (e.g. which usage characteristics impact quality), 2) examine usage characteristics differences between beta and post-release (e.g. do impactful usage characteristics differ), and 3) report experiences adjusting field quality predictions for Windows. Among the 18 usage characteristics that we examined, the five most important were: the number of applications executed, whether the machine was pre-installed by the original equipment manufacturer, two sub-populations (two language/geographic locales), and whether Windows was 64-bit (not 32-bit). We found each of these usage characteristics to differ between beta and post-release, and by adjusting for the differences, accuracy of field quality predictions for Windows improved by ~59%. @InProceedings{ICSE11p716, author = {Paul Luo Li and Ryan Kivett and Zhiyuan Zhan and Sung-eok Jeon and Nachiappan Nagappan and Brendan Murphy and Andrew J. Ko}, title = {Characterizing the Differences Between Pre- and Post- Release Versions of Software}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {716--725}, doi = {}, year = {2011}, } |
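The kind of adjustment described resembles post-stratification: reweight per-stratum beta failure rates by the usage mix expected after release. A toy illustration with invented numbers (not Windows data):

```java
public class AdjustedPrediction {
    public static void main(String[] args) {
        String[] strata   = {"OEM-preinstalled", "retail-upgrade"};
        double[] betaRate = {0.020, 0.050};  // failures per machine-week in beta
        double[] betaMix  = {0.30, 0.70};    // beta population shares
        double[] postMix  = {0.80, 0.20};    // expected post-release shares

        double naive = 0, adjusted = 0;
        for (int i = 0; i < strata.length; i++) {
            naive    += betaRate[i] * betaMix[i];  // ignores the population shift
            adjusted += betaRate[i] * postMix[i];  // reweighted to post-release mix
        }
        System.out.printf("naive beta estimate: %.4f%n", naive);     // 0.0410
        System.out.printf("reweighted estimate: %.4f%n", adjusted);  // 0.0260
    }
}
```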
|
Li, Yang |
ICSE '11-NIER: "A Domain Specific Requirements ..."
A Domain Specific Requirements Model for Scientific Computing (NIER Track)
Yang Li, Nitesh Narayan, Jonas Helming, and Maximilian Koegel (TU München, Germany) Requirements engineering is a core activity in software engineering. However, formal requirements engineering methodologies and documented requirements are often missing in scientific computing projects. We claim that there is a need for methodologies that capture requirements for scientific computing projects, because traditional requirements engineering methodologies are difficult to apply in this domain. We propose a novel domain specific requirements model to meet this need. We conducted an exploratory experiment to evaluate the usage of this model in scientific computing projects. The results indicate that the proposed model facilitates the communication across the domain boundary, which is between the scientific computing domain and the software engineering domain. It supports requirements elicitation for the projects efficiently. @InProceedings{ICSE11p848, author = {Yang Li and Nitesh Narayan and Jonas Helming and Maximilian Koegel}, title = {A Domain Specific Requirements Model for Scientific Computing (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {848--851}, doi = {}, year = {2011}, } |
|
Li, Zheng |
ICSE '11: "Model Projection: Simplifying ..."
Model Projection: Simplifying Models in Response to Restricting the Environment
Kelly Androutsopoulos, David Binkley, David Clark, Nicolas Gold, Mark Harman, Kevin Lano, and Zheng Li (University College London, UK; Loyola University Maryland, USA; King's College London, UK) This paper introduces Model Projection. Finite state models such as Extended Finite State Machines are being used in an ever increasing number of software engineering activities. Model projection facilitates model development by specializing models for a specific operating environment. A projection is useful in many design-level applications including specification reuse and property verification. The applicability of model projection rests upon three critical concerns: correctness, effectiveness, and efficiency, all of which are addressed in this paper. We introduce four related algorithms for model projection and prove each correct. We also present an empirical study of effectiveness and efficiency using ten models, including widely-studied benchmarks as well as industrial models. Results show that a typical projection includes about half of the states and a third of the transitions from the original model. @InProceedings{ICSE11p291, author = {Kelly Androutsopoulos and David Binkley and David Clark and Nicolas Gold and Mark Harman and Kevin Lano and Zheng Li}, title = {Model Projection: Simplifying Models in Response to Restricting the Environment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {291--300}, doi = {}, year = {2011}, } |
|
Li, Zude |
ICSE '11-NIER: "Diagnosing New Faults Using ..."
Diagnosing New Faults Using Mutants and Prior Faults (NIER Track)
Syed Shariyar Murtaza, Nazim Madhavji, Mechelle Gittens, and Zude Li (University of Western Ontario, Canada; University of West Indies, Barbados) Literature indicates that 20% of a program’s code is responsible for 80% of the faults, and 50-90% of the field failures are rediscoveries of previous faults. Despite this, identification of faulty code can consume 30-40% of error-correction time. Previous fault-discovery techniques focusing on field failures either require many pass-fail traces, discover only crashing failures, or identify faulty “files” (which are of large granularity) as the fault origin in the source code. In our earlier work (the F007 approach), we identify faulty “functions” (which are of small granularity) in a field trace by using earlier resolved traces of the same release, which limits it to the known faulty functions. This paper overcomes this limitation by proposing a new “strategy” to identify new and old faulty functions using F007. This strategy uses failed traces of mutants (artificial faults) and failed traces of prior releases to identify faulty functions in the traces of the succeeding release. Our results on two UNIX utilities (i.e., Flex and Gzip) show that faulty functions in the traces of the majority (60-85%) of failures of a new software release can be identified by reviewing only 20% of the code. Compared against prior techniques, this is a notable improvement in terms of the contextual knowledge required and the accuracy in discovering the finer-grain fault origin. @InProceedings{ICSE11p960, author = {Syed Shariyar Murtaza and Nazim Madhavji and Mechelle Gittens and Zude Li}, title = {Diagnosing New Faults Using Mutants and Prior Faults (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {960--963}, doi = {}, year = {2011}, } |
|
Lim, Soo Ling |
ICSE '11-DEMOS: "StakeSource2.0: Using Social ..."
StakeSource2.0: Using Social Networks of Stakeholders to Identify and Prioritise Requirements
Soo Ling Lim, Daniela Damian, and Anthony Finkelstein (University College London, UK; University of Victoria, Canada) Software projects typically rely on system analysts to conduct requirements elicitation, an approach potentially costly for large projects with many stakeholders and requirements. This paper describes StakeSource2.0, a web-based tool that uses social networks and collaborative filtering, a “crowdsourcing” approach, to identify and prioritise stakeholders and their requirements. @InProceedings{ICSE11p1022, author = {Soo Ling Lim and Daniela Damian and Anthony Finkelstein}, title = {StakeSource2.0: Using Social Networks of Stakeholders to Identify and Prioritise Requirements}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1022--1024}, doi = {}, year = {2011}, } |
|
Litoiu, Marin |
ICSE '11-NIER: "Model-based Performance Testing ..."
Model-based Performance Testing (NIER Track)
Cornel Barna, Marin Litoiu, and Hamoun Ghanbari (York University, Canada) In this paper, we present a method for performance testing of transactional systems. The method models the system under test, finds the software and hardware bottlenecks, and generates the workloads that saturate them. The framework is adaptive: the model and workloads are determined during performance test execution by measuring the system performance, fitting a performance model, and analytically computing the number and mix of users that will saturate the bottlenecks. We model the software system using a two-layer queuing model and use analytical techniques to find the workload mixes that change the bottlenecks in the system. Those workload mixes become stress vectors and initial starting points for the stress test cases. The rest of the test cases are generated based on a feedback loop that drives the software system towards the worst-case behaviour. @InProceedings{ICSE11p872, author = {Cornel Barna and Marin Litoiu and Hamoun Ghanbari}, title = {Model-based Performance Testing (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {872--875}, doi = {}, year = {2011}, } |
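The bottleneck-shifting idea can be illustrated with the utilization law, U_r = sum over classes c of X_c * D_rc (class throughput times service demand): sweeping the class mix reveals which resource saturates first and at what throughput. The demand values below are invented:

```java
public class BottleneckSweep {
    public static void main(String[] args) {
        // Service demands (seconds) of two request classes at two resources.
        double[][] D = { {0.010, 0.002},    // CPU:  browse, buy
                         {0.001, 0.012} };  // disk: browse, buy
        String[] res = {"CPU", "disk"};
        for (double buyShare = 0.0; buyShare <= 1.0; buyShare += 0.25) {
            // The resource with the largest per-request demand saturates first;
            // total throughput there is capped at 1 / demand.
            double worst = 0;
            int bottleneck = 0;
            for (int r = 0; r < D.length; r++) {
                double perReq = (1 - buyShare) * D[r][0] + buyShare * D[r][1];
                if (perReq > worst) { worst = perReq; bottleneck = r; }
            }
            System.out.printf("buy share %.2f -> bottleneck %s, saturates at %.0f req/s%n",
                buyShare, res[bottleneck], 1.0 / worst);
        }
    }
}
```

Running the sweep shows the bottleneck shifting from the CPU to the disk as the buy share grows, which is exactly the kind of mix the method turns into a stress vector.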
|
Liu, Anna |
ICSE '11-WORKSHOPS: "Third International Workshop ..."
Third International Workshop on Principles of Engineering Service-Oriented Systems (PESOS 2011)
Manuel Carro, Dimka Karastoyanova, Grace A. Lewis, and Anna Liu (Universidad Politécnica de Madrid, Spain; University of Stuttgart, Germany; CMU, USA; NICTA, Australia) Service-oriented systems have attracted great interest from industry and research communities worldwide. Service integrators, developers, and providers are collaborating to address the various challenges in the field. PESOS 2011 is a forum for all these communities to present and discuss a wide range of topics related to service-oriented systems. The goal of PESOS is to bring together researchers from academia and industry, as well as practitioners working in the areas of software engineering and service-oriented systems to discuss research challenges, recent developments, novel applications, as well as methods, techniques, experiences, and tools to support the engineering of service-oriented systems. @InProceedings{ICSE11p1218, author = {Manuel Carro and Dimka Karastoyanova and Grace A. Lewis and Anna Liu}, title = {Third International Workshop on Principles of Engineering Service-Oriented Systems (PESOS 2011)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1218--1219}, doi = {}, year = {2011}, } |
|
Liu, Peng |
ICSE '11-SEIP: "Value-Based Program Characterization ..."
Value-Based Program Characterization and Its Application to Software Plagiarism Detection
Yoon-Chan Jhi, Xinran Wang, Xiaoqi Jia, Sencun Zhu, Peng Liu, and Dinghao Wu (Pennsylvania State University, USA; Chinese Academy of Sciences, China) Identifying similar or identical code fragments becomes much more challenging in code theft cases where plagiarizers can use various automated code transformation techniques to hide stolen code from being detected. Previous work in this field is largely limited in that (1) most of it cannot handle advanced obfuscation techniques; (2) the methods based on source code analysis are less practical since the source code of suspicious programs is typically not available until strong evidence is collected; and (3) those depending on the features of specific operating systems or programming languages have limited applicability. Based on an observation that some critical runtime values are hard to replace or eliminate by semantics-preserving transformation techniques, we introduce a novel approach to dynamic characterization of executable programs. Leveraging such invariant values, our technique is resilient to various control and data obfuscation techniques. We show how the values can be extracted and refined to expose the critical values and how we can apply this runtime property to help solve problems in software plagiarism detection. We have implemented a prototype with a dynamic taint analyzer atop a generic processor emulator. Our experimental results show that the value-based method successfully discriminates 34 plagiarisms obfuscated by SandMark, plagiarisms heavily obfuscated by KlassMaster, programs obfuscated by Thicket, and executables obfuscated by Loco/Diablo. @InProceedings{ICSE11p756, author = {Yoon-Chan Jhi and Xinran Wang and Xiaoqi Jia and Sencun Zhu and Peng Liu and Dinghao Wu}, title = {Value-Based Program Characterization and Its Application to Software Plagiarism Detection}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {756--765}, doi = {}, year = {2011}, } |
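One way to compare refined runtime value sequences, shown purely as an illustration (the paper does not prescribe this exact metric), is a longest-common-subsequence similarity over the extracted core values. The values below, SHA-256 initialization constants, stand in for the invariant values such a tracer might capture:

```java
public class ValueSequenceSimilarity {
    // Classic LCS dynamic program over two value sequences.
    static int lcs(long[] a, long[] b) {
        int[][] t = new int[a.length + 1][b.length + 1];
        for (int i = 1; i <= a.length; i++)
            for (int j = 1; j <= b.length; j++)
                t[i][j] = a[i - 1] == b[j - 1]
                        ? t[i - 1][j - 1] + 1
                        : Math.max(t[i - 1][j], t[i][j - 1]);
        return t[a.length][b.length];
    }

    public static void main(String[] args) {
        long[] original   = {0x6a09e667L, 0xbb67ae85L, 0x3c6ef372L, 0xa54ff53aL};
        long[] suspicious = {0x6a09e667L, 0x1234L, 0xbb67ae85L, 0x3c6ef372L, 0xa54ff53aL};
        // Obfuscation may insert noise values, but the invariant core survives.
        double sim = (double) lcs(original, suspicious)
                   / Math.min(original.length, suspicious.length);
        System.out.printf("value-sequence similarity: %.2f%n", sim); // 1.00
    }
}
```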
|
Liu, Sheng |
ICSE '11-NIER: "Program Analysis: From Qualitative ..."
Program Analysis: From Qualitative Analysis to Quantitative Analysis (NIER Track)
Sheng Liu and Jian Zhang (Chinese Academy of Sciences, China) We propose to combine symbolic execution with volume computation to compute the exact execution frequency of program paths and branches. Given a path, we use symbolic execution to obtain the path condition which is a set of constraints; then we use volume computation to obtain the size of the solution space for the constraints. With such a methodology and supporting tools, we can decide which paths in a program are executed more often than the others. We can also generate certain test cases that are related to the execution frequency, e.g., those covering cold paths. @InProceedings{ICSE11p956, author = {Sheng Liu and Jian Zhang}, title = {Program Analysis: From Qualitative Analysis to Quantitative Analysis (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {956--959}, doi = {}, year = {2011}, } |
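A toy version of the proposed pipeline: take the path condition produced by symbolic execution and measure the fraction of a bounded input domain that satisfies it. Brute-force enumeration stands in for real volume computation only to keep the sketch self-contained:

```java
public class PathFrequency {
    public static void main(String[] args) {
        int lo = 0, hi = 63, domain = 0, hits = 0;
        for (int x = lo; x <= hi; x++)
            for (int y = lo; y <= hi; y++) {
                domain++;
                // Path condition collected by symbolic execution for one branch:
                // (x + y < 50) && (x > y). Counting its solutions gives the
                // "volume" of the path within the input domain.
                if (x + y < 50 && x > y) hits++;
            }
        System.out.printf("path frequency: %d / %d = %.3f%n",
            hits, domain, (double) hits / domain);
    }
}
```

Paths whose frequency comes out near zero are the cold paths for which the authors propose generating dedicated test cases.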
|
Lo, David |
ICSE '11: "Mining Message Sequence Graphs ..."
Mining Message Sequence Graphs
Sandeep Kumar, Siau Cheng Khoo, Abhik Roychoudhury, and David Lo (National University of Singapore, Singapore; Singapore Management University, Singapore) Dynamic specification mining involves discovering software behavior from traces for the purpose of program comprehension and bug detection. However, mining program behavior from execution traces is difficult for concurrent/distributed programs. Specifically, the inherent partial order relationships among events occurring across processes pose a big challenge to specification mining. In this paper, we propose a framework for mining partial orders so as to understand concurrent program behavior. Our miner takes in a set of concurrent program traces, and produces a message sequence graph (MSG) to represent the concurrent program behavior. An MSG is a graph whose nodes are partial orders, represented as Message Sequence Charts. Mining an MSG allows us to understand concurrent program behaviors since the nodes of the MSG depict important “phases” or “interaction snippets” involving several concurrently executing processes. To demonstrate the power of this technique, we conducted experiments on mining behaviors of several fairly complex distributed systems. We show that our miner can produce the corresponding MSGs with both high precision and recall. @InProceedings{ICSE11p91, author = {Sandeep Kumar and Siau Cheng Khoo and Abhik Roychoudhury and David Lo}, title = {Mining Message Sequence Graphs}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {91--100}, doi = {}, year = {2011}, } |
|
Lochmann, Klaus |
ICSE '11-DEMOS: "The Quamoco Tool Chain for ..."
The Quamoco Tool Chain for Quality Modeling and Assessment
Florian Deissenboeck, Lars Heinemann, Markus Herrmannsdoerfer, Klaus Lochmann, and Stefan Wagner (TU München, Germany) Continuous quality assessment is crucial for the long-term success of evolving software. On the one hand, code analysis tools automatically supply quality indicators, but do not provide a complete overview of software quality. On the other hand, quality models define abstract characteristics that influence quality, but are not operationalized. Currently, no tool chain exists that integrates code analysis tools with quality models. To alleviate this, the Quamoco project provides a tool chain to both define and assess software quality. The tool chain consists of a quality model editor and an integration with the quality assessment toolkit ConQAT. Using the editor, we can define quality models ranging from abstract characteristics down to operationalized measures. From the quality model, a ConQAT configuration can be generated that can be used to automatically assess the quality of a software system. @InProceedings{ICSE11p1007, author = {Florian Deissenboeck and Lars Heinemann and Markus Herrmannsdoerfer and Klaus Lochmann and Stefan Wagner}, title = {The Quamoco Tool Chain for Quality Modeling and Assessment}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1007--1009}, doi = {}, year = {2011}, } |
|
Lopez, Nicolas |
ICSE '11-DEMOS: "A Demonstration of a Distributed ..."
A Demonstration of a Distributed Software Design Sketching Tool
Nicolas Mangano, Mitch Dempsey, Nicolas Lopez, and André van der Hoek (UC Irvine, USA) Software designers frequently sketch when they design, particularly during the early phases of exploration of a design problem and its solution. In so doing, they shun formal design tools, the reason being that such tools impose conformity and precision prematurely. Sketching on the other hand is a highly fluid and flexible way of expressing oneself. In this paper, we present Calico, a sketch-based distributed software design tool that supports software designers with a variety of features that improve over the use of just pen-and-paper or a regular whiteboard, and are tailored specifically for software design. Calico is meant to be used on electronic whiteboards or tablets, and provides for rapid creation and manipulation of design content by sets of developers who can collaborate in a distributed fashion. @InProceedings{ICSE11p1028, author = {Nicolas Mangano and Mitch Dempsey and Nicolas Lopez and André van der Hoek}, title = {A Demonstration of a Distributed Software Design Sketching Tool}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {1028--1030}, doi = {}, year = {2011}, } ICSE '11-NIER: "The Code Orb -- Supporting ..." The Code Orb -- Supporting Contextualized Coding via At-a-Glance Views (NIER Track) Nicolas Lopez and André van der Hoek (UC Irvine, USA) While code is typically presented as a flat file to a developer who must change it, this flat file exists within a context that can drastically influence how a developer approaches changing it. While the developer clearly must be careful changing any code, they probably should be yet more careful in changing code that recently saw major changes, is barely covered by test cases, and was the source of a number of bugs. Contextualized coding refers to the ability of the developer to effectively use such contextual information while they work on some changes. In this paper, we introduce the Code Orb, a contextualized coding tool that builds upon existing mining and analysis techniques to warn developers on a line-by-line basis of the volatility of the code they are working on. The key insight behind the Code Orb is that it is neither desired nor possible to always present a code’s context in its entirety; instead, it is necessary to provide an abstracted view of the context that informs the developer of which parts of the code they need to pay more attention to. This paper discusses the principles of and rationale behind contextualized coding, introduces the Code Orb, and illustrates its function with example code and context drawn from the Mylyn [11] project. @InProceedings{ICSE11p824, author = {Nicolas Lopez and André van der Hoek}, title = {The Code Orb -- Supporting Contextualized Coding via At-a-Glance Views (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {824--827}, doi = {}, year = {2011}, } |
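A volatility indicator of the kind the Code Orb surfaces line by line might blend recent churn, test coverage, and past bug fixes into one warning level. The weights and signals below are illustrative guesses, not the tool's actual model:

```java
public class VolatilitySketch {
    // Blend per-line signals into a single 0..1 warning level.
    static double volatility(int recentEdits, double coverage, int pastBugFixes) {
        double churn = Math.min(1.0, recentEdits / 10.0);  // saturate at 10 edits
        double uncovered = 1.0 - coverage;                 // coverage in [0, 1]
        double buggy = Math.min(1.0, pastBugFixes / 5.0);  // saturate at 5 fixes
        return 0.4 * churn + 0.3 * uncovered + 0.3 * buggy;
    }

    public static void main(String[] args) {
        System.out.printf("stable, tested line: %.2f%n", volatility(0, 0.9, 0)); // 0.03
        System.out.printf("hot, untested line:  %.2f%n", volatility(8, 0.1, 3)); // 0.77
    }
}
```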
|
Lotufo, Rafael |
ICSE '11: "Reverse Engineering Feature ..."
Reverse Engineering Feature Models
Steven She, Rafael Lotufo, Thorsten Berger, Andrzej Wasowski, and Krzysztof Czarnecki (University of Waterloo, Canada; University of Leipzig, Germany; IT University of Copenhagen, Denmark) Feature models describe the common and variable characteristics of a product line. Their advantages are well recognized in product line methods. Unfortunately, creating a feature model for an existing project is time-consuming and requires substantial effort from a modeler. We present procedures for reverse engineering feature models based on a crucial heuristic for identifying parents—the major challenge of this task. We also automatically recover constructs such as feature groups, mandatory features, and implies/excludes edges. We evaluate the technique on two large-scale software product lines with existing reference feature models—the Linux and eCos kernels—and FreeBSD, a project without a feature model. Our heuristic is effective across all three projects by ranking the correct parent among the top results for a vast majority of features. The procedures effectively reduce the information a modeler has to consider from thousands of choices to typically five or less. @InProceedings{ICSE11p461, author = {Steven She and Rafael Lotufo and Thorsten Berger and Andrzej Wasowski and Krzysztof Czarnecki}, title = {Reverse Engineering Feature Models}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {461--470}, doi = {}, year = {2011}, } |
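One plausible parent-ranking signal (not necessarily the paper's exact heuristic) scores a candidate parent p for a feature f by how often configurations that select f also select p, i.e. by the strength of the implication f => p. Feature names are invented:

```java
import java.util.*;

public class ParentRanking {
    public static void main(String[] args) {
        List<Set<String>> configs = List.of(
            Set.of("kernel", "net"), Set.of("kernel", "net", "wifi"),
            Set.of("kernel", "usb"), Set.of("kernel", "net", "wifi", "wpa"));
        String f = "wifi";
        long withF = configs.stream().filter(c -> c.contains(f)).count();
        Map<String, Double> score = new TreeMap<>();
        for (String p : List.of("kernel", "net", "usb"))
            score.put(p, configs.stream()
                .filter(c -> c.contains(f) && c.contains(p)).count() / (double) withF);
        System.out.println("P(parent | " + f + ") = " + score);
        // kernel and net both score 1.0; ties would be broken by further heuristics.
    }
}
```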
|
Lungu, Mircea |
ICSE '11-NIER: "A Study of Ripple Effects ..."
A Study of Ripple Effects in Software Ecosystems (NIER Track)
Romain Robbes and Mircea Lungu (University of Chile, Chile; University of Bern, Switzerland) When the Application Programming Interface (API) of a framework or library changes, its clients must be adapted. This change propagation—known as a ripple effect—is a problem that has garnered interest: several approaches have been proposed in the literature to react to these changes. Although studies of ripple effects exist at the single system level, no study has been performed on the actual extent and impact of these API changes in practice, on an entire software ecosystem associated with a community of developers. This paper reports on early results of such an empirical study of API changes that led to ripple effects across an entire ecosystem. Our case study subject is the development community gravitating around the Squeak and Pharo software ecosystems: six years of evolution, nearly 3,000 contributors, and close to 2,500 distinct systems. @InProceedings{ICSE11p904, author = {Romain Robbes and Mircea Lungu}, title = {A Study of Ripple Effects in Software Ecosystems (NIER Track)}, booktitle = {Proc.\ ICSE}, publisher = {ACM}, pages = {904--907}, doi = {}, year = {2011}, } |
|
Lutz, Rainer |
ICSE '11-NIER: "CREWW - Collaborative Requirements ..."
CREWW - Collaborative Requirements Engineering with Wii-Remotes (NIER Track)
Felix Bott, Stephan Diehl, and Rainer Lutz (University of Trier, Germany) In this paper, we present CREWW, a tool for co-located, collaborative CRC modeling and use case analysis. In CRC sessions, role play is used to involve all stakeholders when determining whether the current software model completely and consistently captures the modeled use case. In this activity, it quickly becomes difficult to keep track of which class is currently active or along which path the current state was reached. CREWW was designed to alleviate these and other weaknesses of the traditional approach. |