ICSE 2012 – Author Index
Anckaerts, Guy

ICSE '12-SEIP: "Efficient Reuse of Domain-Specific ..."
Efficient Reuse of Domain-Specific Test Knowledge: An Industrial Case in the Smart Card Domain

Nicolas Devos, Christophe Ponsard, Jean-Christophe Deprez, Renaud Bauvin, Benedicte Moriau, and Guy Anckaerts (CETIC, Belgium; STMicroelectronics, Belgium)

While testing is heavily used and largely automated in software development projects, the reuse of test practices across similar projects in a given domain is seldom systematized and supported by adequate methods and tools. This paper presents a practical approach that emerged from a concrete industrial case in the smart card domain at STMicroelectronics Belgium in order to better address this kind of challenge. The central concept is a test knowledge repository organized as a collection of specific patterns named QPatterns. A systematic process was followed: first to gather, structure, and abstract the test practices; then to produce and validate an initial repository; and finally to make it evolve later on. Testers can then rely on this repository to produce high-quality test plans identifying all the functional and non-functional aspects that have to be addressed, as well as the concrete tests that have to be developed within the context of a new project. Tool support was also developed and integrated in a traceable way into the existing industrial test environment. The approach was validated and is currently under deployment at STMicroelectronics Belgium.

@InProceedings{ICSE12p1122,
  author    = {Nicolas Devos and Christophe Ponsard and Jean-Christophe Deprez and Renaud Bauvin and Benedicte Moriau and Guy Anckaerts},
  title     = {Efficient Reuse of Domain-Specific Test Knowledge: An Industrial Case in the Smart Card Domain},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1122--1131},
  doi       = {},
  year      = {2012},
}
Andronick, June

ICSE '12-SEIP: "Large-Scale Formal Verification ..."
Large-Scale Formal Verification in Practice: A Process Perspective

June Andronick, Ross Jeffery, Gerwin Klein, Rafal Kolanski, Mark Staples, He Zhang, and Liming Zhu (NICTA, Australia; UNSW, Australia)

The L4.verified project was a rare success in large-scale, formal verification: it provided a formal, machine-checked, code-level proof of the full functional correctness of the seL4 microkernel. In this paper we report on the development process and management issues of this project, highlighting key success factors. We formulate a detailed descriptive model of its middle-out development process, and analyze the evolution and dependencies of code and proof artifacts. We compare our key findings on verification and re-verification with insights from other verification efforts in the literature. Our analysis of the project is based on complete access to project logs, meeting notes, and version control data over its entire history, including its long-term, ongoing maintenance phase. The aim of this work is to aid understanding of how to successfully run large-scale formal software verification projects.

@InProceedings{ICSE12p1001,
  author    = {June Andronick and Ross Jeffery and Gerwin Klein and Rafal Kolanski and Mark Staples and He Zhang and Liming Zhu},
  title     = {Large-Scale Formal Verification in Practice: A Process Perspective},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1001--1010},
  doi       = {},
  year      = {2012},
}
Bauvin, Renaud

ICSE '12-SEIP: "Efficient Reuse of Domain-Specific ..."
Efficient Reuse of Domain-Specific Test Knowledge: An Industrial Case in the Smart Card Domain

Nicolas Devos, Christophe Ponsard, Jean-Christophe Deprez, Renaud Bauvin, Benedicte Moriau, and Guy Anckaerts (CETIC, Belgium; STMicroelectronics, Belgium)

While testing is heavily used and largely automated in software development projects, the reuse of test practices across similar projects in a given domain is seldom systematized and supported by adequate methods and tools. This paper presents a practical approach that emerged from a concrete industrial case in the smart card domain at STMicroelectronics Belgium in order to better address this kind of challenge. The central concept is a test knowledge repository organized as a collection of specific patterns named QPatterns. A systematic process was followed: first to gather, structure, and abstract the test practices; then to produce and validate an initial repository; and finally to make it evolve later on. Testers can then rely on this repository to produce high-quality test plans identifying all the functional and non-functional aspects that have to be addressed, as well as the concrete tests that have to be developed within the context of a new project. Tool support was also developed and integrated in a traceable way into the existing industrial test environment. The approach was validated and is currently under deployment at STMicroelectronics Belgium.

@InProceedings{ICSE12p1122,
  author    = {Nicolas Devos and Christophe Ponsard and Jean-Christophe Deprez and Renaud Bauvin and Benedicte Moriau and Guy Anckaerts},
  title     = {Efficient Reuse of Domain-Specific Test Knowledge: An Industrial Case in the Smart Card Domain},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1122--1131},
  doi       = {},
  year      = {2012},
}
Bianculli, Domenico

ICSE '12-SEIP: "Specification Patterns from ..."
Specification Patterns from Research to Industry: A Case Study in Service-Based Applications

Domenico Bianculli, Carlo Ghezzi, Cesare Pautasso, and Patrick Senti (University of Lugano, Switzerland; Politecnico di Milano, Italy; Credit Suisse, Switzerland)

Specification patterns have proven to help developers state precise system requirements, as well as formalize them by means of dedicated specification languages. Most of the past work has focused on the specification of concurrent and real-time systems, and has been limited to a research setting. In this paper we present the results of our study on specification patterns for service-based applications (SBAs). The study focuses on industrial SBAs in the banking domain. We started by performing an extensive analysis of the usage of specification patterns in published research case studies --- representing almost ten years of research in the area of specification, verification, and validation of SBAs. We then compared these patterns with a large body of specifications written by our industrial partner over a similar time period. The paper discusses the outcome of this comparison, indicating that some needs of the industry, especially in the area of requirements specification languages, are not fully met by current software engineering research.

@InProceedings{ICSE12p967,
  author    = {Domenico Bianculli and Carlo Ghezzi and Cesare Pautasso and Patrick Senti},
  title     = {Specification Patterns from Research to Industry: A Case Study in Service-Based Applications},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {967--975},
  doi       = {},
  year      = {2012},
}
Bnayahu, Jonathan

ICSE '12-SEIP: "Making Sense of Healthcare ..."
Making Sense of Healthcare Benefits

Jonathan Bnayahu, Maayan Goldstein, Mordechai Nisenson, and Yahalomit Simionovici (IBM Research, Israel)

A key piece of information in healthcare is a patient's benefit plan. It details which treatments and procedures are covered by the health insurer (or payer), and under which conditions. While the most accurate and complete implementation of the plan resides in the payer's claims adjudication systems, the inherent complexity of these systems forces payers to maintain multiple repositories of benefit information for other service and regulatory needs. In this paper we present a technology that deals with this complexity. We show how a large US health payer benefited from using the visualization, search, summarization and other capabilities of the technology. We argue that this technology can be used to improve productivity and reduce error rate in the benefits administration workflow, leading to lower administrative overhead and cost for health payers, which benefits both payers and patients.

@InProceedings{ICSE12p1033,
  author    = {Jonathan Bnayahu and Maayan Goldstein and Mordechai Nisenson and Yahalomit Simionovici},
  title     = {Making Sense of Healthcare Benefits},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1033--1042},
  doi       = {},
  year      = {2012},
}
Bragdon, Andrew

ICSE '12-SEIP: "Debugger Canvas: Industrial ..."
Debugger Canvas: Industrial Experience with the Code Bubbles Paradigm

Robert DeLine, Andrew Bragdon, Kael Rowan, Jens Jacobsen, and Steven P. Reiss (Microsoft Research, USA; Brown University, USA)

At ICSE 2010, the Code Bubbles team from Brown University and the Code Canvas team from Microsoft Research presented similar ideas for new user experiences for an integrated development environment. Since then, the two teams formed a collaboration, along with the Microsoft Visual Studio team, to release Debugger Canvas, an industrial version of the Code Bubbles paradigm. With Debugger Canvas, a programmer debugs her code as a collection of code bubbles, annotated with call paths and variable values, on a two-dimensional pan-and-zoom surface. In this experience report, we describe new user interface ideas, describe the rationale behind our design choices, evaluate the performance overhead of the new design, and provide user feedback based on lab participants, post-release usage data, and a user survey and interviews. We conclude that the code bubbles paradigm does scale to existing customer code bases, is best implemented as a mode in the existing user experience rather than a replacement, and is most useful when the user has long or complex call paths, a large or unfamiliar code base, or complex control patterns, like factories or dynamic linking.

@InProceedings{ICSE12p1063,
  author    = {Robert DeLine and Andrew Bragdon and Kael Rowan and Jens Jacobsen and Steven P. Reiss},
  title     = {Debugger Canvas: Industrial Experience with the Code Bubbles Paradigm},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1063--1072},
  doi       = {},
  year      = {2012},
}
Braithwaite, Keith

ICSE '12-SEIP: "Software as an Engineering ..."
Software as an Engineering Material: How the Affordances of Programming Have Changed and What to Do about It (Invited Industrial Talk)

Keith Braithwaite (Zühlke Engineering, UK)

A contemporary programmer has astonishingly abundant processing power under their fingers. That power increases much faster than research into and published results about programming techniques can change. Meanwhile, practitioners still have to make a living by adding value in capital-constrained environments. How have practitioners taken advantage of the relative cheapness of processing power to add value more quickly, to reduce cost, manage risk and please customers and themselves? And are there any signposts for where they might go next?

@InProceedings{ICSE12p997,
  author    = {Keith Braithwaite},
  title     = {Software as an Engineering Material: How the Affordances of Programming Have Changed and What to Do about It (Invited Industrial Talk)},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {997--997},
  doi       = {},
  year      = {2012},
}
Buse, Raymond P. L.

ICSE '12-SEIP: "Information Needs for Software ..."
Information Needs for Software Development Analytics

Raymond P. L. Buse and Thomas Zimmermann (University of Virginia, USA; Microsoft Research, USA)

Software development is a data-rich activity with many sophisticated metrics. Yet engineers often lack the tools and techniques necessary to leverage these potentially powerful information resources toward decision making. In this paper, we present the data and analysis needs of professional software engineers, which we identified among 110 developers and managers in a survey. We asked about their decision making process, their needs for artifacts and indicators, and scenarios in which they would use analytics. The survey responses led us to propose several guidelines for analytics tools in software development, including: engineers do not necessarily have much expertise in data analysis, thus tools should be easy to use, fast, and produce concise output; engineers have diverse analysis needs and consider most indicators to be important, thus tools should at the same time support many different types of artifacts and many indicators. In addition, engineers want to drill down into data based on time, organizational structure, and system architecture.

@InProceedings{ICSE12p986,
  author    = {Raymond P. L. Buse and Thomas Zimmermann},
  title     = {Information Needs for Software Development Analytics},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {986--995},
  doi       = {},
  year      = {2012},
}
Chapman, Clovis

ICSE '12-SEIP: "Towards a Federated Cloud ..."
Towards a Federated Cloud Ecosystem (Invited Industrial Talk)

Clovis Chapman (Dell, Ireland)

Cloud computing has today become a widespread practice for the provisioning of IT services. Cloud infrastructures provide the means to lease computational resources on demand, typically on a pay-per-use or subscription model and without the need for significant capital investment into hardware. With enterprises seeking to migrate their services to the cloud to save on deployment costs, cater for rapid growth or generally relieve themselves from the responsibility of maintaining their own computing infrastructures, a diverse range of services is required to help fulfil business processes. In this talk, we discuss some of the challenges involved in deploying and managing an ecosystem of loosely coupled cloud services that may be accessed through and integrate with a wide range of devices and third party applications. In particular, we focus on how projects such as OpenStack are accelerating the evolution towards a federated cloud service ecosystem. We also examine how the portfolio of existing and emerging standards such as OAuth and the Simple Cloud Identity Management framework can be exploited to seamlessly incorporate cloud services into business processes and solve the problem of identity and access management when dealing with applications exploiting services across organisational boundaries.

@InProceedings{ICSE12p966,
  author    = {Clovis Chapman},
  title     = {Towards a Federated Cloud Ecosystem (Invited Industrial Talk)},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {966--966},
  doi       = {},
  year      = {2012},
}
Cirilo, Elder

ICSE '12-SEIP: "On the Proactive and Interactive ..."
On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation

Renato Novais, Camila Nunes, Caio Lima, Elder Cirilo, Francisco Dantas, Alessandro Garcia, and Manoel Mendonça (Federal University of Bahia, Brazil; Federal Institute of Bahia, Brazil; PUC-Rio, Brazil)

Program comprehension is a key activity throughout the maintenance and evolution of large-scale software systems. The understanding of a program often requires the evolution analysis of individual functionalities, so-called features. The comprehension of evolving features is not trivial as their implementations are often tangled and scattered through many modules. Even worse, existing techniques are limited in providing developers with direct means for visualizing the evolution of features' code. This work presents a proactive and interactive visualization strategy to enable feature evolution analysis. It proactively identifies code elements of evolving features and provides multiple views to present their structure under different perspectives. The novel visualization strategy was compared to a lightweight visualization strategy based on a tree-structure. We ran a controlled experiment with industry developers, who performed feature evolution comprehension tasks on industrial-strength software. The results showed that the use of the proposed strategy presented significant gains in terms of correctness and execution time for feature evolution comprehension tasks.

@InProceedings{ICSE12p1043,
  author    = {Renato Novais and Camila Nunes and Caio Lima and Elder Cirilo and Francisco Dantas and Alessandro Garcia and Manoel Mendonça},
  title     = {On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1043--1052},
  doi       = {},
  year      = {2012},
}
Dang, Yingnong

ICSE '12-SEIP: "ReBucket: A Method for Clustering ..."
ReBucket: A Method for Clustering Duplicate Crash Reports Based on Call Stack Similarity

Yingnong Dang, Rongxin Wu, Hongyu Zhang, Dongmei Zhang, and Peter Nobel (Microsoft Research, China; Tsinghua University, China; Microsoft, USA)

Software often crashes. Once a crash happens, a crash report can be sent to software developers for investigation upon user permission. To facilitate efficient handling of crashes, crash reports received by Microsoft's Windows Error Reporting (WER) system are organized into a set of "buckets". Each bucket contains duplicate crash reports that are deemed manifestations of the same bug. The bucket information is important for prioritizing efforts to resolve crashing bugs. To improve the accuracy of bucketing, we propose ReBucket, a method for clustering crash reports based on call stack matching. ReBucket measures the similarities of call stacks in crash reports and then assigns the reports to appropriate buckets based on the similarity values. We evaluate ReBucket using crash data collected from five widely-used Microsoft products. The results show that ReBucket achieves better overall performance than the existing methods. On average, the F-measure obtained by ReBucket is about 0.88.

@InProceedings{ICSE12p1083,
  author    = {Yingnong Dang and Rongxin Wu and Hongyu Zhang and Dongmei Zhang and Peter Nobel},
  title     = {ReBucket: A Method for Clustering Duplicate Crash Reports Based on Call Stack Similarity},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1083--1092},
  doi       = {},
  year      = {2012},
}
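The bucketing idea in the ReBucket abstract (measure the similarity of call stacks, then place each report into a sufficiently similar bucket) can be sketched in a few lines of Python. This is an illustrative toy using a normalized longest-common-subsequence similarity and greedy assignment; the paper's actual similarity measure and clustering procedure differ, and the `stack_similarity`/`bucket` names are hypothetical:

```python
def stack_similarity(a, b):
    # Normalized longest-common-subsequence similarity between two
    # call stacks given as lists of frame (function) names.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n] / max(m, n) if max(m, n) else 1.0

def bucket(stacks, threshold=0.6):
    # Greedy bucketing: put each crash stack into the first bucket
    # whose representative (first) stack is similar enough,
    # otherwise open a new bucket for it.
    buckets = []
    for stack in stacks:
        for b in buckets:
            if stack_similarity(stack, b[0]) >= threshold:
                b.append(stack)
                break
        else:
            buckets.append([stack])
    return buckets
```

For example, two reports whose stacks share a `main -> parse -> read` prefix land in one bucket, while a `main -> render` crash opens a second one.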
Dantas, Francisco

ICSE '12-SEIP: "On the Proactive and Interactive ..."
On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation

Renato Novais, Camila Nunes, Caio Lima, Elder Cirilo, Francisco Dantas, Alessandro Garcia, and Manoel Mendonça (Federal University of Bahia, Brazil; Federal Institute of Bahia, Brazil; PUC-Rio, Brazil)

Program comprehension is a key activity throughout the maintenance and evolution of large-scale software systems. The understanding of a program often requires the evolution analysis of individual functionalities, so-called features. The comprehension of evolving features is not trivial as their implementations are often tangled and scattered through many modules. Even worse, existing techniques are limited in providing developers with direct means for visualizing the evolution of features' code. This work presents a proactive and interactive visualization strategy to enable feature evolution analysis. It proactively identifies code elements of evolving features and provides multiple views to present their structure under different perspectives. The novel visualization strategy was compared to a lightweight visualization strategy based on a tree-structure. We ran a controlled experiment with industry developers, who performed feature evolution comprehension tasks on industrial-strength software. The results showed that the use of the proposed strategy presented significant gains in terms of correctness and execution time for feature evolution comprehension tasks.

@InProceedings{ICSE12p1043,
  author    = {Renato Novais and Camila Nunes and Caio Lima and Elder Cirilo and Francisco Dantas and Alessandro Garcia and Manoel Mendonça},
  title     = {On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1043--1052},
  doi       = {},
  year      = {2012},
}
DeLine, Robert

ICSE '12-SEIP: "Debugger Canvas: Industrial ..."
Debugger Canvas: Industrial Experience with the Code Bubbles Paradigm

Robert DeLine, Andrew Bragdon, Kael Rowan, Jens Jacobsen, and Steven P. Reiss (Microsoft Research, USA; Brown University, USA)

At ICSE 2010, the Code Bubbles team from Brown University and the Code Canvas team from Microsoft Research presented similar ideas for new user experiences for an integrated development environment. Since then, the two teams formed a collaboration, along with the Microsoft Visual Studio team, to release Debugger Canvas, an industrial version of the Code Bubbles paradigm. With Debugger Canvas, a programmer debugs her code as a collection of code bubbles, annotated with call paths and variable values, on a two-dimensional pan-and-zoom surface. In this experience report, we describe new user interface ideas, describe the rationale behind our design choices, evaluate the performance overhead of the new design, and provide user feedback based on lab participants, post-release usage data, and a user survey and interviews. We conclude that the code bubbles paradigm does scale to existing customer code bases, is best implemented as a mode in the existing user experience rather than a replacement, and is most useful when the user has long or complex call paths, a large or unfamiliar code base, or complex control patterns, like factories or dynamic linking.

@InProceedings{ICSE12p1063,
  author    = {Robert DeLine and Andrew Bragdon and Kael Rowan and Jens Jacobsen and Steven P. Reiss},
  title     = {Debugger Canvas: Industrial Experience with the Code Bubbles Paradigm},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1063--1072},
  doi       = {},
  year      = {2012},
}
Deprez, Jean-Christophe

ICSE '12-SEIP: "Efficient Reuse of Domain-Specific ..."
Efficient Reuse of Domain-Specific Test Knowledge: An Industrial Case in the Smart Card Domain

Nicolas Devos, Christophe Ponsard, Jean-Christophe Deprez, Renaud Bauvin, Benedicte Moriau, and Guy Anckaerts (CETIC, Belgium; STMicroelectronics, Belgium)

While testing is heavily used and largely automated in software development projects, the reuse of test practices across similar projects in a given domain is seldom systematized and supported by adequate methods and tools. This paper presents a practical approach that emerged from a concrete industrial case in the smart card domain at STMicroelectronics Belgium in order to better address this kind of challenge. The central concept is a test knowledge repository organized as a collection of specific patterns named QPatterns. A systematic process was followed: first to gather, structure, and abstract the test practices; then to produce and validate an initial repository; and finally to make it evolve later on. Testers can then rely on this repository to produce high-quality test plans identifying all the functional and non-functional aspects that have to be addressed, as well as the concrete tests that have to be developed within the context of a new project. Tool support was also developed and integrated in a traceable way into the existing industrial test environment. The approach was validated and is currently under deployment at STMicroelectronics Belgium.

@InProceedings{ICSE12p1122,
  author    = {Nicolas Devos and Christophe Ponsard and Jean-Christophe Deprez and Renaud Bauvin and Benedicte Moriau and Guy Anckaerts},
  title     = {Efficient Reuse of Domain-Specific Test Knowledge: An Industrial Case in the Smart Card Domain},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1122--1131},
  doi       = {},
  year      = {2012},
}
Devos, Nicolas

ICSE '12-SEIP: "Efficient Reuse of Domain-Specific ..."
Efficient Reuse of Domain-Specific Test Knowledge: An Industrial Case in the Smart Card Domain

Nicolas Devos, Christophe Ponsard, Jean-Christophe Deprez, Renaud Bauvin, Benedicte Moriau, and Guy Anckaerts (CETIC, Belgium; STMicroelectronics, Belgium)

While testing is heavily used and largely automated in software development projects, the reuse of test practices across similar projects in a given domain is seldom systematized and supported by adequate methods and tools. This paper presents a practical approach that emerged from a concrete industrial case in the smart card domain at STMicroelectronics Belgium in order to better address this kind of challenge. The central concept is a test knowledge repository organized as a collection of specific patterns named QPatterns. A systematic process was followed: first to gather, structure, and abstract the test practices; then to produce and validate an initial repository; and finally to make it evolve later on. Testers can then rely on this repository to produce high-quality test plans identifying all the functional and non-functional aspects that have to be addressed, as well as the concrete tests that have to be developed within the context of a new project. Tool support was also developed and integrated in a traceable way into the existing industrial test environment. The approach was validated and is currently under deployment at STMicroelectronics Belgium.

@InProceedings{ICSE12p1122,
  author    = {Nicolas Devos and Christophe Ponsard and Jean-Christophe Deprez and Renaud Bauvin and Benedicte Moriau and Guy Anckaerts},
  title     = {Efficient Reuse of Domain-Specific Test Knowledge: An Industrial Case in the Smart Card Domain},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1122--1131},
  doi       = {},
  year      = {2012},
}
Eder, Sebastian

ICSE '12-SEIP: "How Much Does Unused Code ..."
How Much Does Unused Code Matter for Maintenance?

Sebastian Eder, Maximilian Junker, Elmar Jürgens, Benedikt Hauptmann, Rudolf Vaas, and Karl-Heinz Prommer (TU Munich, Germany; Munich Re, Germany)

Software systems contain unnecessary code. Its maintenance causes unnecessary costs. We present tool support that employs dynamic analysis of deployed software to detect unused code as an approximation of unnecessary code, and static analysis to reveal its changes during maintenance. We present a case study on maintenance of unused code in an industrial software system over the course of two years. It quantifies the amount of code that is unused and the amount of maintenance activity that went into it, and makes explicit the potential benefit of tool support that informs maintainers who are about to modify unused code.

@InProceedings{ICSE12p1101,
  author    = {Sebastian Eder and Maximilian Junker and Elmar Jürgens and Benedikt Hauptmann and Rudolf Vaas and Karl-Heinz Prommer},
  title     = {How Much Does Unused Code Matter for Maintenance?},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1101--1110},
  doi       = {},
  year      = {2012},
}
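The dynamic-analysis idea in the abstract above (observe which functions actually execute, and treat never-executed ones as candidates for unused code) can be sketched in Python with a trace hook. This is a rough illustration under assumed names (`trace_executed`, `unused`), not the authors' tooling, which analyzes deployed industrial systems rather than a single entry point:

```python
import sys

def trace_executed(entry):
    # Run `entry` under a trace hook, recording the name of every
    # Python function called while it executes: a crude dynamic
    # usage profile of the program.
    executed = set()
    def tracer(frame, event, arg):
        if event == "call":
            executed.add(frame.f_code.co_name)
        return tracer
    sys.settrace(tracer)
    try:
        entry()
    finally:
        sys.settrace(None)
    return executed

def unused(defined, entry):
    # Functions that are defined but never ran: an approximation
    # of unused code, in the spirit of the paper's dynamic analysis.
    return set(defined) - trace_executed(entry)
```

In practice the "defined" set would come from static analysis of the code base, and the usage profile from long-running production deployments rather than one run.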
Esteve, Marie-Aude

ICSE '12-SEIP: "Formal Correctness, Safety, ..."
Formal Correctness, Safety, Dependability, and Performance Analysis of a Satellite

Marie-Aude Esteve, Joost-Pieter Katoen, Viet Yen Nguyen, Bart Postma, and Yuri Yushtein (European Space Agency, Netherlands; RWTH Aachen University, Germany; University of Twente, Netherlands)

This paper reports on the usage of a broad palette of formal modeling and analysis techniques on a regular industrial-size design of an ultra-modern satellite platform. These efforts were carried out in parallel with the conventional software development of the satellite platform. The model itself is expressed in a formalized dialect of AADL. Its formal nature enables rigorous and automated analysis, for which the recently developed COMPASS toolset was used. The whole effort revealed numerous inconsistencies in the early design documents, and the use of formal analyses provided additional insight on discrete system behavior (comprising nearly 50 million states), on hybrid system behavior involving discrete and continuous variables, and enabled the automated generation of large fault trees (66 nodes) for safety analysis that typically are constructed by hand. The model's size pushed the computational tractability of the algorithms underlying the formal analyses, and revealed bottlenecks for future theoretical research. Additionally, the effort led to newly learned practices from which subsequent formal modeling and analysis efforts shall benefit, especially when they are injected in the conventional software development lifecycle. The case demonstrates the feasibility of fully capturing a system-level design as a single comprehensive formal model and analyze it automatically using a toolset based on (probabilistic) model checkers.

@InProceedings{ICSE12p1021,
  author    = {Marie-Aude Esteve and Joost-Pieter Katoen and Viet Yen Nguyen and Bart Postma and Yuri Yushtein},
  title     = {Formal Correctness, Safety, Dependability, and Performance Analysis of a Satellite},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1021--1030},
  doi       = {},
  year      = {2012},
}
Garcia, Alessandro

ICSE '12-SEIP: "On the Proactive and Interactive ..."
On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation

Renato Novais, Camila Nunes, Caio Lima, Elder Cirilo, Francisco Dantas, Alessandro Garcia, and Manoel Mendonça (Federal University of Bahia, Brazil; Federal Institute of Bahia, Brazil; PUC-Rio, Brazil)

Program comprehension is a key activity throughout the maintenance and evolution of large-scale software systems. The understanding of a program often requires the evolution analysis of individual functionalities, so-called features. The comprehension of evolving features is not trivial as their implementations are often tangled and scattered through many modules. Even worse, existing techniques are limited in providing developers with direct means for visualizing the evolution of features' code. This work presents a proactive and interactive visualization strategy to enable feature evolution analysis. It proactively identifies code elements of evolving features and provides multiple views to present their structure under different perspectives. The novel visualization strategy was compared to a lightweight visualization strategy based on a tree-structure. We ran a controlled experiment with industry developers, who performed feature evolution comprehension tasks on industrial-strength software. The results showed that the use of the proposed strategy presented significant gains in terms of correctness and execution time for feature evolution comprehension tasks.

@InProceedings{ICSE12p1043,
  author    = {Renato Novais and Camila Nunes and Caio Lima and Elder Cirilo and Francisco Dantas and Alessandro Garcia and Manoel Mendonça},
  title     = {On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {1043--1052},
  doi       = {},
  year      = {2012},
}
Ghezzi, Carlo

ICSE '12-SEIP: "Specification Patterns from ..."
Specification Patterns from Research to Industry: A Case Study in Service-Based Applications

Domenico Bianculli, Carlo Ghezzi, Cesare Pautasso, and Patrick Senti (University of Lugano, Switzerland; Politecnico di Milano, Italy; Credit Suisse, Switzerland)

Specification patterns have proven to help developers state precise system requirements, as well as formalize them by means of dedicated specification languages. Most of the past work has focused on the specification of concurrent and real-time systems, and has been limited to a research setting. In this paper we present the results of our study on specification patterns for service-based applications (SBAs). The study focuses on industrial SBAs in the banking domain. We started by performing an extensive analysis of the usage of specification patterns in published research case studies --- representing almost ten years of research in the area of specification, verification, and validation of SBAs. We then compared these patterns with a large body of specifications written by our industrial partner over a similar time period. The paper discusses the outcome of this comparison, indicating that some needs of the industry, especially in the area of requirements specification languages, are not fully met by current software engineering research.

@InProceedings{ICSE12p967,
  author    = {Domenico Bianculli and Carlo Ghezzi and Cesare Pautasso and Patrick Senti},
  title     = {Specification Patterns from Research to Industry: A Case Study in Service-Based Applications},
  booktitle = {Proc.\ ICSE},
  publisher = {IEEE},
  pages     = {967--975},
  doi       = {},
  year      = {2012},
}
Glaser, Axel |
ICSE '12-SEIP: "Methodology for Migration ..."
Methodology for Migration of Long Running Process Instances in a Global Large Scale BPM Environment in Credit Suisse's SOA Landscape
Tarmo Ploom, Stefan Scheit, and Axel Glaser (Credit Suisse, Switzerland) Research on process instance migration mainly covers changes in process models during process evolution and their effects on the same runtime environment. But what if the runtime environment - a legacy Business Process Execution (BPE) platform - has to be replaced with a new solution? Several migration aspects must be taken into account. (1) Process models from the old BPE platform have to be converted to the target process definition language on the target BPE platform. (2) Existing Business Process Management (BPM) applications must be integrated via the new BPE platform's interfaces. (3) Process instances and process instance data state must be migrated. For each of these points an appropriate migration strategy must be chosen. This paper describes the migration methodology that was applied for the BPE platform renewal at Credit Suisse. @InProceedings{ICSE12p976, author = {Tarmo Ploom and Stefan Scheit and Axel Glaser}, title = {Methodology for Migration of Long Running Process Instances in a Global Large Scale BPM Environment in Credit Suisse's SOA Landscape}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {976--985}, doi = {}, year = {2012}, } |
|
Goeb, Andreas |
ICSE '12-SEIP: "The Quamoco Product Quality ..."
The Quamoco Product Quality Modelling and Assessment Approach
Stefan Wagner, Klaus Lochmann, Lars Heinemann, Michael Kläs, Adam Trendowicz, Reinhold Plösch, Andreas Seidl, Andreas Goeb, and Jonathan Streit (University of Stuttgart, Germany; TU Munich, Germany; Fraunhofer IESE, Germany; JKU Linz, Austria; Capgemini, Germany; SAP, Germany; itestra, Germany) Published software quality models either provide abstract quality attributes or concrete quality assessments; no models seamlessly integrate both aspects. In the Quamoco project, we built a comprehensive approach with the aim of closing this gap. For this, we developed, in several iterations, a meta quality model specifying general concepts, a quality base model covering the most important quality factors, and a quality assessment approach. The meta model introduces the new concept of a product factor, which bridges the gap between concrete measurements and abstract quality aspects. Product factors have measures and instruments to operationalise quality by measurements from manual inspection and tool analysis. The base model uses the ISO 25010 quality attributes, which we refine into 200 factors and 600 measures for Java and C# systems. In several empirical validations, we found that the assessment results match the expectations of experts for the corresponding systems. The empirical analyses also showed that several of the correlations are statistically significant and that the maintainability part of the base model has the highest correlation, which fits the fact that this part is the most comprehensive. Although we still see room for extending and improving the base model, it shows a high correspondence with expert opinions and hence can form the basis for repeatable and understandable quality assessments in practice.
@InProceedings{ICSE12p1132, author = {Stefan Wagner and Klaus Lochmann and Lars Heinemann and Michael Kläs and Adam Trendowicz and Reinhold Plösch and Andreas Seidl and Andreas Goeb and Jonathan Streit}, title = {The Quamoco Product Quality Modelling and Assessment Approach}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1132--1141}, doi = {}, year = {2012}, } |
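The hierarchical roll-up the abstract describes (measures feeding product factors, product factors feeding ISO 25010 quality aspects) can be sketched as a weighted aggregation. This is an invented toy illustration: the measure names, factor names, and weights are hypothetical, not taken from the Quamoco base model.

```python
# Toy quality-model aggregation: measures -> product factors -> quality aspects.
# All names and weights below are invented for illustration only.
measures = {"clone_coverage": 0.8, "comment_ratio": 0.6, "nesting_depth": 0.9}

# Product factors bridge concrete measures and abstract quality aspects.
factors = {
    "duplication":   {"clone_coverage": 1.0},
    "documentation": {"comment_ratio": 1.0},
    "complexity":    {"nesting_depth": 1.0},
}

# An ISO 25010-style quality aspect aggregates product factors.
aspects = {"maintainability": {"duplication": 0.5,
                               "documentation": 0.2,
                               "complexity": 0.3}}

def weighted_score(weights, values):
    """Normalized weighted average of the referenced values."""
    total = sum(weights.values())
    return sum(w * values[name] for name, w in weights.items()) / total

factor_scores = {f: weighted_score(w, measures) for f, w in factors.items()}
aspect_scores = {a: weighted_score(w, factor_scores) for a, w in aspects.items()}
print(aspect_scores["maintainability"])  # ~0.79 with the weights above
```

The same two-level scheme extends to any number of factors and aspects; only the weight tables grow.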
|
Goldstein, Maayan |
ICSE '12-SEIP: "Making Sense of Healthcare ..."
Making Sense of Healthcare Benefits
Jonathan Bnayahu, Maayan Goldstein, Mordechai Nisenson, and Yahalomit Simionovici (IBM Research, Israel) A key piece of information in healthcare is a patient's benefit plan. It details which treatments and procedures are covered by the health insurer (or payer), and under which conditions. While the most accurate and complete implementation of the plan resides in the payer's claims adjudication systems, the inherent complexity of these systems forces payers to maintain multiple repositories of benefit information for other service and regulatory needs. In this paper we present a technology that deals with this complexity. We show how a large US health payer benefited from the visualization, search, summarization, and other capabilities of the technology. We argue that this technology can improve productivity and reduce the error rate in the benefits administration workflow, leading to lower administrative overhead and cost for health payers, which benefits both payers and patients. @InProceedings{ICSE12p1033, author = {Jonathan Bnayahu and Maayan Goldstein and Mordechai Nisenson and Yahalomit Simionovici}, title = {Making Sense of Healthcare Benefits}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1033--1042}, doi = {}, year = {2012}, } |
|
Guo, Philip J. |
ICSE '12-SEIP: "Characterizing and Predicting ..."
Characterizing and Predicting Which Bugs Get Reopened
Thomas Zimmermann, Nachiappan Nagappan, Philip J. Guo, and Brendan Murphy (Microsoft Research, USA; Stanford University, USA; Microsoft Research, UK) Fixing bugs is an important part of the software development process. An underlying aspect is the effectiveness of fixes: if a fair number of fixed bugs are reopened, it could indicate instability in the software system. To the best of our knowledge, there has been little prior work on understanding the dynamics of bug reopens. Towards that end, in this paper we characterize when bug reports are reopened, using the Microsoft Windows operating system project as an empirical case study. Our analysis is based on a mixed-methods approach. First, we categorize the primary reasons for reopens based on a survey of 358 Microsoft employees. We then reinforce these results with a large-scale quantitative study of Windows bug reports, focusing on factors related to bug report edits and relationships between the people involved in handling the bug. Finally, we build statistical models to describe the impact of various metrics on reopening bugs, ranging from the reputation of the opener to how the bug was found. @InProceedings{ICSE12p1073, author = {Thomas Zimmermann and Nachiappan Nagappan and Philip J. Guo and Brendan Murphy}, title = {Characterizing and Predicting Which Bugs Get Reopened}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1073--1082}, doi = {}, year = {2012}, } |
|
Hauptmann, Benedikt |
ICSE '12-SEIP: "How Much Does Unused Code ..."
How Much Does Unused Code Matter for Maintenance?
Sebastian Eder, Maximilian Junker, Elmar Jürgens, Benedikt Hauptmann, Rudolf Vaas, and Karl-Heinz Prommer (TU Munich, Germany; Munich Re, Germany) Software systems contain unnecessary code, and its maintenance causes unnecessary costs. We present tool support that employs dynamic analysis of deployed software to detect unused code as an approximation of unnecessary code, and static analysis to reveal changes to that code during maintenance. We present a case study on the maintenance of unused code in an industrial software system over the course of two years. It quantifies the amount of code that is unused and the maintenance activity that went into it, and makes explicit the potential benefit of tool support that informs maintainers who are about to modify unused code. @InProceedings{ICSE12p1101, author = {Sebastian Eder and Maximilian Junker and Elmar Jürgens and Benedikt Hauptmann and Rudolf Vaas and Karl-Heinz Prommer}, title = {How Much Does Unused Code Matter for Maintenance?}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1101--1110}, doi = {}, year = {2012}, } |
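The core dynamic-analysis idea in the abstract above, recording which functions actually execute and diffing against the set of defined functions, can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper's tooling (which instruments deployed industrial software rather than a script); all function names are invented.

```python
# Minimal sketch: trace function calls at runtime, then report defined
# functions that were never executed as candidate "unused code".
import sys

executed = set()

def tracer(frame, event, arg):
    # Record the name of every function invoked while tracing is active.
    if event == "call":
        executed.add(frame.f_code.co_name)
    return tracer

def shown_feature():
    return "rendered"

def legacy_feature():
    return "never invoked at runtime"

sys.settrace(tracer)
shown_feature()          # only this feature is exercised
sys.settrace(None)

defined = {"shown_feature", "legacy_feature"}
unused = defined - executed
print(unused)  # {'legacy_feature'}
```

In practice such data is an approximation: code unused during the observation window may still be needed (error handlers, rare configurations), which is why the paper treats unused code as an approximation of unnecessary code.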
|
Heinemann, Lars |
ICSE '12-SEIP: "The Quamoco Product Quality ..."
The Quamoco Product Quality Modelling and Assessment Approach
Stefan Wagner, Klaus Lochmann, Lars Heinemann, Michael Kläs, Adam Trendowicz, Reinhold Plösch, Andreas Seidl, Andreas Goeb, and Jonathan Streit (University of Stuttgart, Germany; TU Munich, Germany; Fraunhofer IESE, Germany; JKU Linz, Austria; Capgemini, Germany; SAP, Germany; itestra, Germany) Published software quality models either provide abstract quality attributes or concrete quality assessments; no models seamlessly integrate both aspects. In the Quamoco project, we built a comprehensive approach with the aim of closing this gap. For this, we developed, in several iterations, a meta quality model specifying general concepts, a quality base model covering the most important quality factors, and a quality assessment approach. The meta model introduces the new concept of a product factor, which bridges the gap between concrete measurements and abstract quality aspects. Product factors have measures and instruments to operationalise quality by measurements from manual inspection and tool analysis. The base model uses the ISO 25010 quality attributes, which we refine into 200 factors and 600 measures for Java and C# systems. In several empirical validations, we found that the assessment results match the expectations of experts for the corresponding systems. The empirical analyses also showed that several of the correlations are statistically significant and that the maintainability part of the base model has the highest correlation, which fits the fact that this part is the most comprehensive. Although we still see room for extending and improving the base model, it shows a high correspondence with expert opinions and hence can form the basis for repeatable and understandable quality assessments in practice.
@InProceedings{ICSE12p1132, author = {Stefan Wagner and Klaus Lochmann and Lars Heinemann and Michael Kläs and Adam Trendowicz and Reinhold Plösch and Andreas Seidl and Andreas Goeb and Jonathan Streit}, title = {The Quamoco Product Quality Modelling and Assessment Approach}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1132--1141}, doi = {}, year = {2012}, } |
|
Iwama, Futoshi |
ICSE '12-SEIP: "Constructing Parser for Industrial ..."
Constructing Parser for Industrial Software Specifications Containing Formal and Natural Language Description
Futoshi Iwama, Taiga Nakamura, and Hironori Takeuchi (IBM Research, Japan) This paper describes a novel framework for creating a parser to process and analyze texts written in a "partially structured" natural language. In many projects, the contents of document artifacts tend to be described as a mixture of formal parts (i.e., text constructs that follow specific conventions) and parts written in arbitrary free text. Formal parsers, typically defined and used to process descriptions with rigidly defined syntax such as program source code, are very precise and efficient at processing the formal part, while parsers developed for natural language processing (NLP) are good at robustly interpreting the free-text part. Combining these parsers with their different characteristics therefore allows for more flexible and practical processing of various project documents. Unfortunately, conventional approaches to constructing a parser from multiple smaller parsers have been studied extensively only for formal language parsers and are not directly applicable to NLP parsers, due to differences in the way the input text is extracted and evaluated. We propose a method to configure and generate a combined parser by extending an approach based on parser combinators, operators for composing multiple formal parsers, to support both NLP and formal parsers. The resulting text parser is based on Parsing Expression Grammars, and it benefits from the strengths of both parser types. We demonstrate an application of such a combined parser in practical situations and show that the proposed approach can efficiently construct a parser for analyzing project-specific industrial specification documents.
@InProceedings{ICSE12p1011, author = {Futoshi Iwama and Taiga Nakamura and Hironori Takeuchi}, title = {Constructing Parser for Industrial Software Specifications Containing Formal and Natural Language Description}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1011--1020}, doi = {}, year = {2012}, } |
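The combinator composition the abstract describes can be sketched with a PEG-style ordered choice that tries a precise formal parser first and falls back to a robust free-text parser. This is a minimal illustration of the general technique, not the paper's framework; the grammar and all names are invented.

```python
# Sketch of PEG-style parser combinators mixing a formal and a robust parser.
import re

def token(pattern):
    """Formal parser: match a regex at the current position, or fail (None)."""
    rx = re.compile(pattern)
    def parse(text, pos):
        m = rx.match(text, pos)
        return (m.group(), m.end()) if m else None
    return parse

def choice(*parsers):
    """PEG ordered choice: try each parser in turn, keep the first success."""
    def parse(text, pos):
        for p in parsers:
            result = p(text, pos)
            if result is not None:
                return result
        return None
    return parse

def free_text(text, pos):
    """Fallback 'NLP-style' parser: robustly consume the rest of the line."""
    end = text.find('\n', pos)
    end = len(text) if end == -1 else end
    return (text[pos:end], end)

# A specification line is either a formal requirement ID or arbitrary prose.
requirement = choice(token(r'REQ-\d+'), free_text)
```

Because ordered choice is deterministic, the formal parser always gets the first attempt, so rigid constructs are parsed precisely while everything else degrades gracefully to free text.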
|
Jacobsen, Jens |
ICSE '12-SEIP: "Debugger Canvas: Industrial ..."
Debugger Canvas: Industrial Experience with the Code Bubbles Paradigm
Robert DeLine, Andrew Bragdon, Kael Rowan, Jens Jacobsen, and Steven P. Reiss (Microsoft Research, USA; Brown University, USA) At ICSE 2010, the Code Bubbles team from Brown University and the Code Canvas team from Microsoft Research presented similar ideas for new user experiences for an integrated development environment. Since then, the two teams formed a collaboration, along with the Microsoft Visual Studio team, to release Debugger Canvas, an industrial version of the Code Bubbles paradigm. With Debugger Canvas, a programmer debugs her code as a collection of code bubbles, annotated with call paths and variable values, on a two-dimensional pan-and-zoom surface. In this experience report, we describe new user interface ideas, describe the rationale behind our design choices, evaluate the performance overhead of the new design, and provide user feedback based on lab participants, post-release usage data, and a user survey and interviews. We conclude that the code bubbles paradigm does scale to existing customer code bases, is best implemented as a mode in the existing user experience rather than as a replacement, and is most useful when the user has long or complex call paths, a large or unfamiliar code base, or complex control patterns, like factories or dynamic linking. @InProceedings{ICSE12p1063, author = {Robert DeLine and Andrew Bragdon and Kael Rowan and Jens Jacobsen and Steven P. Reiss}, title = {Debugger Canvas: Industrial Experience with the Code Bubbles Paradigm}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1063--1072}, doi = {}, year = {2012}, } |
|
Jang, Yoonkyu |
ICSE '12-SEIP: "Industrial Application of ..."
Industrial Application of Concolic Testing Approach: A Case Study on libexif by Using CREST-BV and KLEE
Yunho Kim, Moonzoo Kim, YoungJoo Kim, and Yoonkyu Jang (KAIST, South Korea; Samsung Electronics, South Korea) As smartphones become popular, manufacturers such as Samsung Electronics are quickly developing smartphones with rich functionality such as cameras and photo editing, which accelerates the adoption of open source applications in smartphone platforms. However, developers often do not know the details of open source applications, because they did not develop the applications themselves. It is therefore challenging to test open source applications effectively in a short time. This paper reports our experience of applying the concolic testing technique to libexif, an open source library for manipulating EXIF information in image files. We demonstrate that concolic testing is effective and efficient at detecting bugs with modest effort in an industrial setting. We also compare two concolic testing tools, CREST-BV and KLEE, in this testing project. Furthermore, we compare the concolic testing results with the analysis results of the Coverity Prevent static analyzer. Through concolic testing, we detected a memory access bug, a null pointer dereference bug, and four divide-by-zero bugs in libexif, none of which were detected by Coverity Prevent. @InProceedings{ICSE12p1142, author = {Yunho Kim and Moonzoo Kim and YoungJoo Kim and Yoonkyu Jang}, title = {Industrial Application of Concolic Testing Approach: A Case Study on libexif by Using CREST-BV and KLEE}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1142--1151}, doi = {}, year = {2012}, } |
|
Jeffery, Ross |
ICSE '12-SEIP: "Large-Scale Formal Verification ..."
Large-Scale Formal Verification in Practice: A Process Perspective
June Andronick, Ross Jeffery, Gerwin Klein, Rafal Kolanski, Mark Staples, He Zhang, and Liming Zhu (NICTA, Australia; UNSW, Australia) The L4.verified project was a rare success in large-scale, formal verification: it provided a formal, machine-checked, code-level proof of the full functional correctness of the seL4 microkernel. In this paper we report on the development process and management issues of this project, highlighting key success factors. We formulate a detailed descriptive model of its middle-out development process, and analyze the evolution and dependencies of code and proof artifacts. We compare our key findings on verification and re-verification with insights from other verification efforts in the literature. Our analysis of the project is based on complete access to project logs, meeting notes, and version control data over its entire history, including its long-term, ongoing maintenance phase. The aim of this work is to aid understanding of how to successfully run large-scale formal software verification projects. @InProceedings{ICSE12p1001, author = {June Andronick and Ross Jeffery and Gerwin Klein and Rafal Kolanski and Mark Staples and He Zhang and Liming Zhu}, title = {Large-Scale Formal Verification in Practice: A Process Perspective}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1001--1010}, doi = {}, year = {2012}, } |
|
Junker, Maximilian |
ICSE '12-SEIP: "How Much Does Unused Code ..."
How Much Does Unused Code Matter for Maintenance?
Sebastian Eder, Maximilian Junker, Elmar Jürgens, Benedikt Hauptmann, Rudolf Vaas, and Karl-Heinz Prommer (TU Munich, Germany; Munich Re, Germany) Software systems contain unnecessary code, and its maintenance causes unnecessary costs. We present tool support that employs dynamic analysis of deployed software to detect unused code as an approximation of unnecessary code, and static analysis to reveal changes to that code during maintenance. We present a case study on the maintenance of unused code in an industrial software system over the course of two years. It quantifies the amount of code that is unused and the maintenance activity that went into it, and makes explicit the potential benefit of tool support that informs maintainers who are about to modify unused code. @InProceedings{ICSE12p1101, author = {Sebastian Eder and Maximilian Junker and Elmar Jürgens and Benedikt Hauptmann and Rudolf Vaas and Karl-Heinz Prommer}, title = {How Much Does Unused Code Matter for Maintenance?}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1101--1110}, doi = {}, year = {2012}, } |
|
Jürgens, Elmar |
ICSE '12-SEIP: "How Much Does Unused Code ..."
How Much Does Unused Code Matter for Maintenance?
Sebastian Eder, Maximilian Junker, Elmar Jürgens, Benedikt Hauptmann, Rudolf Vaas, and Karl-Heinz Prommer (TU Munich, Germany; Munich Re, Germany) Software systems contain unnecessary code, and its maintenance causes unnecessary costs. We present tool support that employs dynamic analysis of deployed software to detect unused code as an approximation of unnecessary code, and static analysis to reveal changes to that code during maintenance. We present a case study on the maintenance of unused code in an industrial software system over the course of two years. It quantifies the amount of code that is unused and the maintenance activity that went into it, and makes explicit the potential benefit of tool support that informs maintainers who are about to modify unused code. @InProceedings{ICSE12p1101, author = {Sebastian Eder and Maximilian Junker and Elmar Jürgens and Benedikt Hauptmann and Rudolf Vaas and Karl-Heinz Prommer}, title = {How Much Does Unused Code Matter for Maintenance?}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1101--1110}, doi = {}, year = {2012}, } |
|
Katoen, Joost-Pieter |
ICSE '12-SEIP: "Formal Correctness, Safety, ..."
Formal Correctness, Safety, Dependability, and Performance Analysis of a Satellite
Marie-Aude Esteve, Joost-Pieter Katoen, Viet Yen Nguyen, Bart Postma, and Yuri Yushtein (European Space Agency, Netherlands; RWTH Aachen University, Germany; University of Twente, Netherlands) This paper reports on the usage of a broad palette of formal modeling and analysis techniques on a regular industrial-size design of an ultra-modern satellite platform. These efforts were carried out in parallel with the conventional software development of the satellite platform. The model itself is expressed in a formalized dialect of AADL. Its formal nature enables rigorous and automated analysis, for which the recently developed COMPASS toolset was used. The whole effort revealed numerous inconsistencies in the early design documents, and the use of formal analyses provided additional insight into discrete system behavior (comprising nearly 50 million states) and into hybrid system behavior involving discrete and continuous variables, and enabled the automated generation of large fault trees (66 nodes) for safety analysis that are typically constructed by hand. The model's size pushed the limits of the computational tractability of the algorithms underlying the formal analyses and revealed bottlenecks for future theoretical research. Additionally, the effort led to newly learned practices from which subsequent formal modeling and analysis efforts shall benefit, especially when they are injected into the conventional software development lifecycle. The case demonstrates the feasibility of fully capturing a system-level design as a single comprehensive formal model and analyzing it automatically using a toolset based on (probabilistic) model checkers. @InProceedings{ICSE12p1021, author = {Marie-Aude Esteve and Joost-Pieter Katoen and Viet Yen Nguyen and Bart Postma and Yuri Yushtein}, title = {Formal Correctness, Safety, Dependability, and Performance Analysis of a Satellite}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1021--1030}, doi = {}, year = {2012}, } |
|
Kim, Moonzoo |
ICSE '12-SEIP: "Industrial Application of ..."
Industrial Application of Concolic Testing Approach: A Case Study on libexif by Using CREST-BV and KLEE
Yunho Kim, Moonzoo Kim, YoungJoo Kim, and Yoonkyu Jang (KAIST, South Korea; Samsung Electronics, South Korea) As smartphones become popular, manufacturers such as Samsung Electronics are quickly developing smartphones with rich functionality such as cameras and photo editing, which accelerates the adoption of open source applications in smartphone platforms. However, developers often do not know the details of open source applications, because they did not develop the applications themselves. It is therefore challenging to test open source applications effectively in a short time. This paper reports our experience of applying the concolic testing technique to libexif, an open source library for manipulating EXIF information in image files. We demonstrate that concolic testing is effective and efficient at detecting bugs with modest effort in an industrial setting. We also compare two concolic testing tools, CREST-BV and KLEE, in this testing project. Furthermore, we compare the concolic testing results with the analysis results of the Coverity Prevent static analyzer. Through concolic testing, we detected a memory access bug, a null pointer dereference bug, and four divide-by-zero bugs in libexif, none of which were detected by Coverity Prevent. @InProceedings{ICSE12p1142, author = {Yunho Kim and Moonzoo Kim and YoungJoo Kim and Yoonkyu Jang}, title = {Industrial Application of Concolic Testing Approach: A Case Study on libexif by Using CREST-BV and KLEE}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1142--1151}, doi = {}, year = {2012}, } |
|
Kim, YoungJoo |
ICSE '12-SEIP: "Industrial Application of ..."
Industrial Application of Concolic Testing Approach: A Case Study on libexif by Using CREST-BV and KLEE
Yunho Kim, Moonzoo Kim, YoungJoo Kim, and Yoonkyu Jang (KAIST, South Korea; Samsung Electronics, South Korea) As smartphones become popular, manufacturers such as Samsung Electronics are quickly developing smartphones with rich functionality such as cameras and photo editing, which accelerates the adoption of open source applications in smartphone platforms. However, developers often do not know the details of open source applications, because they did not develop the applications themselves. It is therefore challenging to test open source applications effectively in a short time. This paper reports our experience of applying the concolic testing technique to libexif, an open source library for manipulating EXIF information in image files. We demonstrate that concolic testing is effective and efficient at detecting bugs with modest effort in an industrial setting. We also compare two concolic testing tools, CREST-BV and KLEE, in this testing project. Furthermore, we compare the concolic testing results with the analysis results of the Coverity Prevent static analyzer. Through concolic testing, we detected a memory access bug, a null pointer dereference bug, and four divide-by-zero bugs in libexif, none of which were detected by Coverity Prevent. @InProceedings{ICSE12p1142, author = {Yunho Kim and Moonzoo Kim and YoungJoo Kim and Yoonkyu Jang}, title = {Industrial Application of Concolic Testing Approach: A Case Study on libexif by Using CREST-BV and KLEE}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1142--1151}, doi = {}, year = {2012}, } |
|
Kim, Yunho |
ICSE '12-SEIP: "Industrial Application of ..."
Industrial Application of Concolic Testing Approach: A Case Study on libexif by Using CREST-BV and KLEE
Yunho Kim, Moonzoo Kim, YoungJoo Kim, and Yoonkyu Jang (KAIST, South Korea; Samsung Electronics, South Korea) As smartphones become popular, manufacturers such as Samsung Electronics are quickly developing smartphones with rich functionality such as cameras and photo editing, which accelerates the adoption of open source applications in smartphone platforms. However, developers often do not know the details of open source applications, because they did not develop the applications themselves. It is therefore challenging to test open source applications effectively in a short time. This paper reports our experience of applying the concolic testing technique to libexif, an open source library for manipulating EXIF information in image files. We demonstrate that concolic testing is effective and efficient at detecting bugs with modest effort in an industrial setting. We also compare two concolic testing tools, CREST-BV and KLEE, in this testing project. Furthermore, we compare the concolic testing results with the analysis results of the Coverity Prevent static analyzer. Through concolic testing, we detected a memory access bug, a null pointer dereference bug, and four divide-by-zero bugs in libexif, none of which were detected by Coverity Prevent. @InProceedings{ICSE12p1142, author = {Yunho Kim and Moonzoo Kim and YoungJoo Kim and Yoonkyu Jang}, title = {Industrial Application of Concolic Testing Approach: A Case Study on libexif by Using CREST-BV and KLEE}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1142--1151}, doi = {}, year = {2012}, } |
|
Kläs, Michael |
ICSE '12-SEIP: "The Quamoco Product Quality ..."
The Quamoco Product Quality Modelling and Assessment Approach
Stefan Wagner, Klaus Lochmann, Lars Heinemann, Michael Kläs, Adam Trendowicz, Reinhold Plösch, Andreas Seidl, Andreas Goeb, and Jonathan Streit (University of Stuttgart, Germany; TU Munich, Germany; Fraunhofer IESE, Germany; JKU Linz, Austria; Capgemini, Germany; SAP, Germany; itestra, Germany) Published software quality models either provide abstract quality attributes or concrete quality assessments; no models seamlessly integrate both aspects. In the Quamoco project, we built a comprehensive approach with the aim of closing this gap. For this, we developed, in several iterations, a meta quality model specifying general concepts, a quality base model covering the most important quality factors, and a quality assessment approach. The meta model introduces the new concept of a product factor, which bridges the gap between concrete measurements and abstract quality aspects. Product factors have measures and instruments to operationalise quality by measurements from manual inspection and tool analysis. The base model uses the ISO 25010 quality attributes, which we refine into 200 factors and 600 measures for Java and C# systems. In several empirical validations, we found that the assessment results match the expectations of experts for the corresponding systems. The empirical analyses also showed that several of the correlations are statistically significant and that the maintainability part of the base model has the highest correlation, which fits the fact that this part is the most comprehensive. Although we still see room for extending and improving the base model, it shows a high correspondence with expert opinions and hence can form the basis for repeatable and understandable quality assessments in practice.
@InProceedings{ICSE12p1132, author = {Stefan Wagner and Klaus Lochmann and Lars Heinemann and Michael Kläs and Adam Trendowicz and Reinhold Plösch and Andreas Seidl and Andreas Goeb and Jonathan Streit}, title = {The Quamoco Product Quality Modelling and Assessment Approach}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1132--1141}, doi = {}, year = {2012}, } |
|
Klein, Gerwin |
ICSE '12-SEIP: "Large-Scale Formal Verification ..."
Large-Scale Formal Verification in Practice: A Process Perspective
June Andronick, Ross Jeffery, Gerwin Klein, Rafal Kolanski, Mark Staples, He Zhang, and Liming Zhu (NICTA, Australia; UNSW, Australia) The L4.verified project was a rare success in large-scale, formal verification: it provided a formal, machine-checked, code-level proof of the full functional correctness of the seL4 microkernel. In this paper we report on the development process and management issues of this project, highlighting key success factors. We formulate a detailed descriptive model of its middle-out development process, and analyze the evolution and dependencies of code and proof artifacts. We compare our key findings on verification and re-verification with insights from other verification efforts in the literature. Our analysis of the project is based on complete access to project logs, meeting notes, and version control data over its entire history, including its long-term, ongoing maintenance phase. The aim of this work is to aid understanding of how to successfully run large-scale formal software verification projects. @InProceedings{ICSE12p1001, author = {June Andronick and Ross Jeffery and Gerwin Klein and Rafal Kolanski and Mark Staples and He Zhang and Liming Zhu}, title = {Large-Scale Formal Verification in Practice: A Process Perspective}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1001--1010}, doi = {}, year = {2012}, } |
|
Kolanski, Rafal |
ICSE '12-SEIP: "Large-Scale Formal Verification ..."
Large-Scale Formal Verification in Practice: A Process Perspective
June Andronick, Ross Jeffery, Gerwin Klein, Rafal Kolanski, Mark Staples, He Zhang, and Liming Zhu (NICTA, Australia; UNSW, Australia) The L4.verified project was a rare success in large-scale, formal verification: it provided a formal, machine-checked, code-level proof of the full functional correctness of the seL4 microkernel. In this paper we report on the development process and management issues of this project, highlighting key success factors. We formulate a detailed descriptive model of its middle-out development process, and analyze the evolution and dependencies of code and proof artifacts. We compare our key findings on verification and re-verification with insights from other verification efforts in the literature. Our analysis of the project is based on complete access to project logs, meeting notes, and version control data over its entire history, including its long-term, ongoing maintenance phase. The aim of this work is to aid understanding of how to successfully run large-scale formal software verification projects. @InProceedings{ICSE12p1001, author = {June Andronick and Ross Jeffery and Gerwin Klein and Rafal Kolanski and Mark Staples and He Zhang and Liming Zhu}, title = {Large-Scale Formal Verification in Practice: A Process Perspective}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1001--1010}, doi = {}, year = {2012}, } |
|
Lima, Caio |
ICSE '12-SEIP: "On the Proactive and Interactive ..."
On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation
Renato Novais, Camila Nunes, Caio Lima, Elder Cirilo, Francisco Dantas, Alessandro Garcia, and Manoel Mendonça (Federal University of Bahia, Brazil; Federal Institute of Bahia, Brazil; PUC-Rio, Brazil) Program comprehension is a key activity throughout the maintenance and evolution of large-scale software systems. Understanding a program often requires analyzing the evolution of individual functionalities, so-called features. The comprehension of evolving features is not trivial, as their implementations are often tangled and scattered through many modules. Even worse, existing techniques are limited in providing developers with direct means for visualizing the evolution of features’ code. This work presents a proactive and interactive visualization strategy to enable feature evolution analysis. It proactively identifies code elements of evolving features and provides multiple views to present their structure under different perspectives. The novel visualization strategy was compared to a lightweight visualization strategy based on a tree structure. We ran a controlled experiment with industry developers, who performed feature evolution comprehension tasks on an industrial-strength software system. The results showed that the use of the proposed strategy yielded significant gains in correctness and execution time for feature evolution comprehension tasks. @InProceedings{ICSE12p1043, author = {Renato Novais and Camila Nunes and Caio Lima and Elder Cirilo and Francisco Dantas and Alessandro Garcia and Manoel Mendonça}, title = {On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1043--1052}, doi = {}, year = {2012}, } |
|
Lochmann, Klaus |
ICSE '12-SEIP: "The Quamoco Product Quality ..."
The Quamoco Product Quality Modelling and Assessment Approach
Stefan Wagner, Klaus Lochmann, Lars Heinemann, Michael Kläs, Adam Trendowicz, Reinhold Plösch, Andreas Seidl, Andreas Goeb, and Jonathan Streit (University of Stuttgart, Germany; TU Munich, Germany; Fraunhofer IESE, Germany; JKU Linz, Austria; Capgemini, Germany; SAP, Germany; itestra, Germany) Published software quality models either provide abstract quality attributes or concrete quality assessments. There are no models that seamlessly integrate both aspects. In the project Quamoco, we built a comprehensive approach with the aim of closing this gap. To this end, we developed, in several iterations, a meta quality model specifying general concepts, a quality base model covering the most important quality factors, and a quality assessment approach. The meta model introduces the new concept of a product factor, which bridges the gap between concrete measurements and abstract quality aspects. Product factors have measures and instruments to operationalise quality by measurements from manual inspection and tool analysis. The base model uses the ISO 25010 quality attributes, which we refine by 200 factors and 600 measures for Java and C# systems. We found in several empirical validations that the assessment results fit the expectations of experts for the corresponding systems. The empirical analyses also showed that several of the correlations are statistically significant and that the maintainability part of the base model has the highest correlation, which is consistent with the fact that this part is the most comprehensive. Although we still see room for extending and improving the base model, it shows a high correspondence with expert opinions and hence is able to form the basis for repeatable and understandable quality assessments in practice. 
@InProceedings{ICSE12p1132, author = {Stefan Wagner and Klaus Lochmann and Lars Heinemann and Michael Kläs and Adam Trendowicz and Reinhold Plösch and Andreas Seidl and Andreas Goeb and Jonathan Streit}, title = {The Quamoco Product Quality Modelling and Assessment Approach}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1132--1141}, doi = {}, year = {2012}, } |
|
Mendes, Emilia |
ICSE '12-SEIP: "Using Knowledge Elicitation ..."
Using Knowledge Elicitation to Improve Web Effort Estimation: Lessons from Six Industrial Case Studies
Emilia Mendes (Zayed University, United Arab Emirates) This paper details our experience building and validating six different expert-based Web effort estimation models for ICT companies in New Zealand and Brazil. All models were created using Bayesian networks, via eliciting knowledge from domain experts, and validated using data from past finished projects. Post-mortem interviews with the participating companies showed that they found the entire process extremely beneficial and worthwhile, and that all the models created remained in use by those companies. @InProceedings{ICSE12p1111, author = {Emilia Mendes}, title = {Using Knowledge Elicitation to Improve Web Effort Estimation: Lessons from Six Industrial Case Studies}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1111--1120}, doi = {}, year = {2012}, } |
|
Mendonça, Manoel |
ICSE '12-SEIP: "On the Proactive and Interactive ..."
On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation
Renato Novais, Camila Nunes, Caio Lima, Elder Cirilo, Francisco Dantas, Alessandro Garcia, and Manoel Mendonça (Federal University of Bahia, Brazil; Federal Institute of Bahia, Brazil; PUC-Rio, Brazil) Program comprehension is a key activity throughout the maintenance and evolution of large-scale software systems. Understanding a program often requires analyzing the evolution of individual functionalities, so-called features. The comprehension of evolving features is not trivial, as their implementations are often tangled and scattered through many modules. Even worse, existing techniques are limited in providing developers with direct means for visualizing the evolution of features’ code. This work presents a proactive and interactive visualization strategy to enable feature evolution analysis. It proactively identifies code elements of evolving features and provides multiple views to present their structure under different perspectives. The novel visualization strategy was compared to a lightweight visualization strategy based on a tree structure. We ran a controlled experiment with industry developers, who performed feature evolution comprehension tasks on an industrial-strength software system. The results showed that the use of the proposed strategy yielded significant gains in correctness and execution time for feature evolution comprehension tasks. @InProceedings{ICSE12p1043, author = {Renato Novais and Camila Nunes and Caio Lima and Elder Cirilo and Francisco Dantas and Alessandro Garcia and Manoel Mendonça}, title = {On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1043--1052}, doi = {}, year = {2012}, } |
|
Menzies, Tim |
ICSE '12-SEIP: "Goldfish Bowl Panel: Software ..."
Goldfish Bowl Panel: Software Development Analytics
Tim Menzies and Thomas Zimmermann (West Virginia University, USA; Microsoft Research, USA) Gaming companies now routinely apply data mining to their user data in order to plan the next release of their software. We predict that such software development analytics will become commonplace, in the near future. For example, as large software systems migrate to the cloud, they are divided and sold as dozens of smaller apps; when shopping inside the cloud, users are free to mix and match their apps from multiple vendors (e.g. Google Docs’ word processor with Zoho’s slide manager); to extend, or even retain, market share cloud vendors must mine their user data in order to understand what features best attract their clients. This panel will address the open issues with analytics. Issues addressed will include the following. What is the potential for software development analytics? What are the strengths and weaknesses of the current generation of analytics tools? How best can we mature those tools? @InProceedings{ICSE12p1031, author = {Tim Menzies and Thomas Zimmermann}, title = {Goldfish Bowl Panel: Software Development Analytics}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1031--1032}, doi = {}, year = {2012}, } |
|
Moriau, Benedicte |
ICSE '12-SEIP: "Efficient Reuse of Domain-Specific ..."
Efficient Reuse of Domain-Specific Test Knowledge: An Industrial Case in the Smart Card Domain
Nicolas Devos, Christophe Ponsard, Jean-Christophe Deprez, Renaud Bauvin, Benedicte Moriau, and Guy Anckaerts (CETIC, Belgium; STMicroelectronics, Belgium) While testing is heavily used and largely automated in software development projects, the reuse of test practices across similar projects in a given domain is seldom systematized and supported by adequate methods and tools. This paper presents a practical approach that emerged from a concrete industrial case in the smart card domain at STMicroelectronics Belgium to better address this kind of challenge. The central concept is a test knowledge repository organized as a collection of specific patterns named QPatterns. A systematic process was followed: first to gather, structure, and abstract the test practices; then to produce and validate an initial repository; and finally to make it evolve later on. Testers can then rely on this repository to produce high-quality test plans identifying all the functional and non-functional aspects that have to be addressed, as well as the concrete tests that have to be developed within the context of a new project. Tool support was also developed and integrated in a traceable way into the existing industrial test environment. The approach was validated and is currently under deployment at STMicroelectronics Belgium. @InProceedings{ICSE12p1122, author = {Nicolas Devos and Christophe Ponsard and Jean-Christophe Deprez and Renaud Bauvin and Benedicte Moriau and Guy Anckaerts}, title = {Efficient Reuse of Domain-Specific Test Knowledge: An Industrial Case in the Smart Card Domain}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1122--1131}, doi = {}, year = {2012}, } |
|
Murphy, Brendan |
ICSE '12-SEIP: "Characterizing and Predicting ..."
Characterizing and Predicting Which Bugs Get Reopened
Thomas Zimmermann, Nachiappan Nagappan, Philip J. Guo, and Brendan Murphy (Microsoft Research, USA; Stanford University, USA; Microsoft Research, UK) Fixing bugs is an important part of the software development process. An underlying aspect is the effectiveness of fixes: if a fair number of fixed bugs are reopened, it could indicate instability in the software system. To the best of our knowledge there has been little prior work on understanding the dynamics of bug reopens. Toward that end, in this paper we characterize when bug reports are reopened, using the Microsoft Windows operating system project as an empirical case study. Our analysis is based on a mixed-methods approach. First, we categorize the primary reasons for reopens based on a survey of 358 Microsoft employees. We then reinforce these results with a large-scale quantitative study of Windows bug reports, focusing on factors related to bug report edits and relationships between people involved in handling the bug. Finally, we build statistical models to describe the impact of various metrics on reopening bugs ranging from the reputation of the opener to how the bug was found. @InProceedings{ICSE12p1073, author = {Thomas Zimmermann and Nachiappan Nagappan and Philip J. Guo and Brendan Murphy}, title = {Characterizing and Predicting Which Bugs Get Reopened}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1073--1082}, doi = {}, year = {2012}, } |
|
Nagappan, Nachiappan |
ICSE '12-SEIP: "Characterizing and Predicting ..."
Characterizing and Predicting Which Bugs Get Reopened
Thomas Zimmermann, Nachiappan Nagappan, Philip J. Guo, and Brendan Murphy (Microsoft Research, USA; Stanford University, USA; Microsoft Research, UK) Fixing bugs is an important part of the software development process. An underlying aspect is the effectiveness of fixes: if a fair number of fixed bugs are reopened, it could indicate instability in the software system. To the best of our knowledge there has been little prior work on understanding the dynamics of bug reopens. Toward that end, in this paper we characterize when bug reports are reopened, using the Microsoft Windows operating system project as an empirical case study. Our analysis is based on a mixed-methods approach. First, we categorize the primary reasons for reopens based on a survey of 358 Microsoft employees. We then reinforce these results with a large-scale quantitative study of Windows bug reports, focusing on factors related to bug report edits and relationships between people involved in handling the bug. Finally, we build statistical models to describe the impact of various metrics on reopening bugs ranging from the reputation of the opener to how the bug was found. @InProceedings{ICSE12p1073, author = {Thomas Zimmermann and Nachiappan Nagappan and Philip J. Guo and Brendan Murphy}, title = {Characterizing and Predicting Which Bugs Get Reopened}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1073--1082}, doi = {}, year = {2012}, } |
|
Nakamura, Taiga |
ICSE '12-SEIP: "Constructing Parser for Industrial ..."
Constructing Parser for Industrial Software Specifications Containing Formal and Natural Language Description
Futoshi Iwama, Taiga Nakamura, and Hironori Takeuchi (IBM Research, Japan) This paper describes a novel framework for creating a parser to process and analyze texts written in a ``partially structured'' natural language. In many projects, the contents of document artifacts tend to be described as a mixture of formal parts (i.e., text constructs that follow specific conventions) and parts written in arbitrary free text. Formal parsers, typically defined and used to process descriptions with rigidly defined syntax such as program source code, are very precise and efficient in processing the formal part, while parsers developed for natural language processing (NLP) are good at robustly interpreting the free-text part. Therefore, combining these parsers with their different characteristics allows for more flexible and practical processing of various project documents. Unfortunately, conventional approaches to constructing a parser from multiple smaller parsers were studied extensively only for formal language parsers and are not directly applicable to NLP parsers, due to differences in the way the input text is extracted and evaluated. We propose a method to configure and generate a combined parser by extending an approach based on parser combinators, the operators for composing multiple formal parsers, to support both NLP and formal parsers. The resulting text parser is based on Parsing Expression Grammars, and it benefits from the strengths of both parser types. We demonstrate an application of such a combined parser in practical situations and show that the proposed approach can efficiently construct a parser for analyzing project-specific industrial specification documents. 
@InProceedings{ICSE12p1011, author = {Futoshi Iwama and Taiga Nakamura and Hironori Takeuchi}, title = {Constructing Parser for Industrial Software Specifications Containing Formal and Natural Language Description}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1011--1020}, doi = {}, year = {2012}, } |
|
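The parser-combinator idea in the abstract above can be illustrated with a minimal PEG-style sketch in Python. This is hypothetical code, not the authors' framework: the grammar, the `REQ-`/`BUG-` identifiers, and all function names are illustrative. A strict "formal" regex parser and a permissive "free-text" parser are composed with sequence and ordered-choice combinators.

```python
import re

# A parser is a function (text, pos) -> (value, new_pos), or None on failure.

def token(pattern):
    """'Formal' parser: match a regular expression at the current position."""
    rx = re.compile(pattern)
    def parse(text, pos):
        m = rx.match(text, pos)
        return (m.group(0), m.end()) if m else None
    return parse

def seq(*parsers):
    """PEG sequence: run parsers in order; succeed only if all succeed."""
    def parse(text, pos):
        values = []
        for p in parsers:
            r = p(text, pos)
            if r is None:
                return None
            value, pos = r
            values.append(value)
        return values, pos
    return parse

def choice(*parsers):
    """PEG ordered choice: the first parser that succeeds wins."""
    def parse(text, pos):
        for p in parsers:
            r = p(text, pos)
            if r is not None:
                return r
        return None
    return parse

def free_text():
    """'NLP-like' fallback: robustly consume the rest of the line as-is."""
    def parse(text, pos):
        end = text.find("\n", pos)
        end = len(text) if end == -1 else end
        return text[pos:end], end
    return parse

# A spec line such as "REQ-12: the system shall ..." mixes a formal
# identifier with an arbitrary natural-language description.
spec_line = seq(choice(token(r"REQ-\d+"), token(r"BUG-\d+")),
                token(r":\s*"), free_text())

value, _ = spec_line("REQ-12: the system shall log every access", 0)
print(value[0], "->", value[2])  # REQ-12 -> the system shall log every access
```

A real framework of this kind would replace `free_text` with a call into an NLP pipeline; the point of the sketch is only that formal and robust parsers can share one combinator interface.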
Nguyen, Viet Yen |
ICSE '12-SEIP: "Formal Correctness, Safety, ..."
Formal Correctness, Safety, Dependability, and Performance Analysis of a Satellite
Marie-Aude Esteve, Joost-Pieter Katoen, Viet Yen Nguyen, Bart Postma, and Yuri Yushtein (European Space Agency, Netherlands; RWTH Aachen University, Germany; University of Twente, Netherlands) This paper reports on the usage of a broad palette of formal modeling and analysis techniques on a regular industrial-size design of an ultra-modern satellite platform. These efforts were carried out in parallel with the conventional software development of the satellite platform. The model itself is expressed in a formalized dialect of AADL. Its formal nature enables rigorous and automated analysis, for which the recently developed COMPASS toolset was used. The whole effort revealed numerous inconsistencies in the early design documents, and the use of formal analyses provided additional insight on discrete system behavior (comprising nearly 50 million states), on hybrid system behavior involving discrete and continuous variables, and enabled the automated generation of large fault trees (66 nodes) for safety analysis that typically are constructed by hand. The model's size pushed the limits of computational tractability of the algorithms underlying the formal analyses, and revealed bottlenecks for future theoretical research. Additionally, the effort led to newly learned practices from which subsequent formal modeling and analysis efforts shall benefit, especially when they are injected into the conventional software development lifecycle. The case demonstrates the feasibility of fully capturing a system-level design as a single comprehensive formal model and analyzing it automatically using a toolset based on (probabilistic) model checkers. @InProceedings{ICSE12p1021, author = {Marie-Aude Esteve and Joost-Pieter Katoen and Viet Yen Nguyen and Bart Postma and Yuri Yushtein}, title = {Formal Correctness, Safety, Dependability, and Performance Analysis of a Satellite}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1021--1030}, doi = {}, year = {2012}, } |
|
Nisenson, Mordechai |
ICSE '12-SEIP: "Making Sense of Healthcare ..."
Making Sense of Healthcare Benefits
Jonathan Bnayahu, Maayan Goldstein, Mordechai Nisenson, and Yahalomit Simionovici (IBM Research, Israel) A key piece of information in healthcare is a patient's benefit plan. It details which treatments and procedures are covered by the health insurer (or payer), and under which conditions. While the most accurate and complete implementation of the plan resides in the payer’s claims adjudication systems, the inherent complexity of these systems forces payers to maintain multiple repositories of benefit information for other service and regulatory needs. In this paper we present a technology that deals with this complexity. We show how a large US health payer benefited from using the visualization, search, summarization and other capabilities of the technology. We argue that this technology can be used to improve productivity and reduce the error rate in the benefits administration workflow, leading to lower administrative overhead and cost for health payers, which benefits both payers and patients. @InProceedings{ICSE12p1033, author = {Jonathan Bnayahu and Maayan Goldstein and Mordechai Nisenson and Yahalomit Simionovici}, title = {Making Sense of Healthcare Benefits}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1033--1042}, doi = {}, year = {2012}, } |
|
Nobel, Peter |
ICSE '12-SEIP: "ReBucket: A Method for Clustering ..."
ReBucket: A Method for Clustering Duplicate Crash Reports Based on Call Stack Similarity
Yingnong Dang, Rongxin Wu, Hongyu Zhang, Dongmei Zhang, and Peter Nobel (Microsoft Research, China; Tsinghua University, China; Microsoft, USA) Software often crashes. Once a crash happens, a crash report could be sent to software developers for investigation upon user permission. To facilitate efficient handling of crashes, crash reports received by Microsoft's Windows Error Reporting (WER) system are organized into a set of "buckets". Each bucket contains duplicate crash reports that are deemed manifestations of the same bug. The bucket information is important for prioritizing efforts to resolve crashing bugs. To improve the accuracy of bucketing, we propose ReBucket, a method for clustering crash reports based on call stack matching. ReBucket measures the similarities of call stacks in crash reports and then assigns the reports to appropriate buckets based on the similarity values. We evaluate ReBucket using crash data collected from five widely-used Microsoft products. The results show that ReBucket achieves better overall performance than the existing methods. On average, the F-measure obtained by ReBucket is about 0.88. @InProceedings{ICSE12p1083, author = {Yingnong Dang and Rongxin Wu and Hongyu Zhang and Dongmei Zhang and Peter Nobel}, title = {ReBucket: A Method for Clustering Duplicate Crash Reports Based on Call Stack Similarity}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1083--1092}, doi = {}, year = {2012}, } |
|
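The kind of position-weighted call-stack similarity described in the abstract above can be sketched in a few lines of Python. This is a simplified, hypothetical illustration, not the exact ReBucket formulation: the constants, frame names, and the dynamic program are illustrative. Frames near the top of the stack count more, and matched frames sitting at very different depths in the two stacks are penalized.

```python
from math import exp

def stack_similarity(s1, s2, c=0.5, o=0.5):
    """Similarity in [0, 1]: matched frame at depths (i-1, j-1) contributes
    e^{-c*min(i-1, j-1)} (top frames weigh more) times e^{-o*|i-j|}
    (alignment-offset penalty), normalized by a perfect-match score."""
    n, m = len(s1), len(s2)
    # DP over alignments of the two stacks, maximizing total match weight.
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 0.0
            if s1[i - 1] == s2[j - 1]:
                match = dp[i - 1][j - 1] + exp(-c * min(i - 1, j - 1)) * exp(-o * abs(i - j))
            dp[i][j] = max(dp[i - 1][j], dp[i][j - 1], match)
    norm = sum(exp(-c * k) for k in range(min(n, m))) or 1.0
    return dp[n][m] / norm

a = ["crash_fn", "parse", "read_file", "main"]
b = ["crash_fn", "parse", "load", "main"]
print(round(stack_similarity(a, a), 3))  # identical stacks -> 1.0
```

Bucketing then follows naturally: a new report joins an existing bucket when its stack's similarity to that bucket's reports exceeds a threshold, and otherwise opens a new bucket.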
Novais, Renato |
ICSE '12-SEIP: "On the Proactive and Interactive ..."
On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation
Renato Novais, Camila Nunes, Caio Lima, Elder Cirilo, Francisco Dantas, Alessandro Garcia, and Manoel Mendonça (Federal University of Bahia, Brazil; Federal Institute of Bahia, Brazil; PUC-Rio, Brazil) Program comprehension is a key activity throughout the maintenance and evolution of large-scale software systems. Understanding a program often requires analyzing the evolution of individual functionalities, so-called features. The comprehension of evolving features is not trivial, as their implementations are often tangled and scattered through many modules. Even worse, existing techniques are limited in providing developers with direct means for visualizing the evolution of features’ code. This work presents a proactive and interactive visualization strategy to enable feature evolution analysis. It proactively identifies code elements of evolving features and provides multiple views to present their structure under different perspectives. The novel visualization strategy was compared to a lightweight visualization strategy based on a tree structure. We ran a controlled experiment with industry developers, who performed feature evolution comprehension tasks on an industrial-strength software system. The results showed that the use of the proposed strategy yielded significant gains in correctness and execution time for feature evolution comprehension tasks. @InProceedings{ICSE12p1043, author = {Renato Novais and Camila Nunes and Caio Lima and Elder Cirilo and Francisco Dantas and Alessandro Garcia and Manoel Mendonça}, title = {On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1043--1052}, doi = {}, year = {2012}, } |
|
Nunes, Camila |
ICSE '12-SEIP: "On the Proactive and Interactive ..."
On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation
Renato Novais, Camila Nunes, Caio Lima, Elder Cirilo, Francisco Dantas, Alessandro Garcia, and Manoel Mendonça (Federal University of Bahia, Brazil; Federal Institute of Bahia, Brazil; PUC-Rio, Brazil) Program comprehension is a key activity throughout the maintenance and evolution of large-scale software systems. Understanding a program often requires analyzing the evolution of individual functionalities, so-called features. The comprehension of evolving features is not trivial, as their implementations are often tangled and scattered through many modules. Even worse, existing techniques are limited in providing developers with direct means for visualizing the evolution of features’ code. This work presents a proactive and interactive visualization strategy to enable feature evolution analysis. It proactively identifies code elements of evolving features and provides multiple views to present their structure under different perspectives. The novel visualization strategy was compared to a lightweight visualization strategy based on a tree structure. We ran a controlled experiment with industry developers, who performed feature evolution comprehension tasks on an industrial-strength software system. The results showed that the use of the proposed strategy yielded significant gains in correctness and execution time for feature evolution comprehension tasks. @InProceedings{ICSE12p1043, author = {Renato Novais and Camila Nunes and Caio Lima and Elder Cirilo and Francisco Dantas and Alessandro Garcia and Manoel Mendonça}, title = {On the Proactive and Interactive Visualization for Feature Evolution Comprehension: An Industrial Investigation}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1043--1052}, doi = {}, year = {2012}, } |
|
Pautasso, Cesare |
ICSE '12-SEIP: "Specification Patterns from ..."
Specification Patterns from Research to Industry: A Case Study in Service-Based Applications
Domenico Bianculli, Carlo Ghezzi, Cesare Pautasso, and Patrick Senti (University of Lugano, Switzerland; Politecnico di Milano, Italy; Credit Suisse, Switzerland) Specification patterns have proven to help developers state precise system requirements, as well as formalize them by means of dedicated specification languages. Most of the past work has focused on the specification of concurrent and real-time systems, and has been limited to research settings. In this paper we present the results of our study on specification patterns for service-based applications (SBAs). The study focuses on industrial SBAs in the banking domain. We started by performing an extensive analysis of the usage of specification patterns in published research case studies --- representing almost ten years of research in the area of specification, verification, and validation of SBAs. We then compared these patterns with a large body of specifications written by our industrial partner over a similar time period. The paper discusses the outcome of this comparison, indicating that some needs of the industry, especially in the area of requirements specification languages, are not fully met by current software engineering research. @InProceedings{ICSE12p967, author = {Domenico Bianculli and Carlo Ghezzi and Cesare Pautasso and Patrick Senti}, title = {Specification Patterns from Research to Industry: A Case Study in Service-Based Applications}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {967--975}, doi = {}, year = {2012}, } |
|
Penix, John |
ICSE '12-SEIP: "Large-Scale Test Automation ..."
Large-Scale Test Automation in the Cloud (Invited Industrial Talk)
John Penix (Google, USA) Software development at Google is big and fast. The code base receives 20+ code changes per minute and 50% of the files change every month! Each product is developed and released from ‘head’ relying on automated tests verifying the product behavior. Release frequency varies from multiple times per day to once every few weeks, depending on the product team. With such a huge, fast-moving codebase, it is possible for teams to get stuck spending a lot of time just keeping their build ‘green’. A continuous integration system should help by providing the exact change at which a test started failing, instead of a range of suspect changes or doing a lengthy binary-search for the offending change. We have built a system that uses dependency analysis to determine all the tests a change transitively affects and then runs only those tests for every change. The system is built on top of Google’s cloud computing infrastructure enabling many builds to be executed concurrently, allowing the system to run affected tests as soon as a change is submitted. The use of smart tools and cloud computing infrastructure in the continuous integration system enables quick, effective feedback to development teams. @InProceedings{ICSE12p1121, author = {John Penix}, title = {Large-Scale Test Automation in the Cloud (Invited Industrial Talk)}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1121--1121}, doi = {}, year = {2012}, } |
|
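The change-based test selection described in the talk abstract above amounts to a reverse-dependency reachability computation over the build graph. The sketch below is hypothetical (the toy build graph, target names, and the `_test` naming convention are illustrative, not Google's actual infrastructure): invert the dependency edges, then walk outward from the changed targets to collect every test that transitively depends on them.

```python
from collections import defaultdict, deque

# Toy build graph: target -> targets it depends on (illustrative names).
deps = {
    "app": ["lib_a", "lib_b"],
    "app_test": ["app"],
    "lib_a_test": ["lib_a"],
    "lib_b_test": ["lib_b"],
}

def affected_tests(changed, deps):
    """Return every *_test target that transitively depends on a changed target."""
    rdeps = defaultdict(set)  # target -> targets that directly depend on it
    for tgt, ds in deps.items():
        for d in ds:
            rdeps[d].add(tgt)
    # Breadth-first walk over reverse dependencies from the changed targets.
    seen, queue = set(changed), deque(changed)
    while queue:
        for dependent in rdeps[queue.popleft()]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return sorted(t for t in seen if t.endswith("_test"))

print(affected_tests({"lib_a"}, deps))  # ['app_test', 'lib_a_test']
```

Running only the affected set is what makes per-change continuous integration affordable: a change to `lib_b` here triggers two test targets rather than all four.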
Ploom, Tarmo |
ICSE '12-SEIP: "Methodology for Migration ..."
Methodology for Migration of Long Running Process Instances in a Global Large Scale BPM Environment in Credit Suisse's SOA Landscape
Tarmo Ploom, Stefan Scheit, and Axel Glaser (Credit Suisse, Switzerland) Research about process instance migration covers mainly changes in process models during the process evolution and their effects on the same runtime environment. But what if the runtime environment - a legacy Business Process Execution (BPE) platform - had to be replaced with a new solution? Several migration aspects must be taken into account. (1) Process models from the old BPE platform have to be converted to the target process definition language on the target BPE platform. (2) Existing Business Process Management (BPM) applications must be integrated via new BPE platform interfaces. (3) Process instances and process instance data state must be migrated. For each of these points an appropriate migration strategy must be chosen. This paper describes the migration methodology which was applied for the BPE platform renewal in Credit Suisse. @InProceedings{ICSE12p976, author = {Tarmo Ploom and Stefan Scheit and Axel Glaser}, title = {Methodology for Migration of Long Running Process Instances in a Global Large Scale BPM Environment in Credit Suisse's SOA Landscape}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {976--985}, doi = {}, year = {2012}, } |
|
Plösch, Reinhold |
ICSE '12-SEIP: "The Quamoco Product Quality ..."
The Quamoco Product Quality Modelling and Assessment Approach
Stefan Wagner, Klaus Lochmann, Lars Heinemann, Michael Kläs, Adam Trendowicz, Reinhold Plösch, Andreas Seidl, Andreas Goeb, and Jonathan Streit (University of Stuttgart, Germany; TU Munich, Germany; Fraunhofer IESE, Germany; JKU Linz, Austria; Capgemini, Germany; SAP, Germany; itestra, Germany) Published software quality models either provide abstract quality attributes or concrete quality assessments. There are no models that seamlessly integrate both aspects. In the project Quamoco, we built a comprehensive approach with the aim of closing this gap. To this end, we developed, in several iterations, a meta quality model specifying general concepts, a quality base model covering the most important quality factors, and a quality assessment approach. The meta model introduces the new concept of a product factor, which bridges the gap between concrete measurements and abstract quality aspects. Product factors have measures and instruments to operationalise quality by measurements from manual inspection and tool analysis. The base model uses the ISO 25010 quality attributes, which we refine by 200 factors and 600 measures for Java and C# systems. We found in several empirical validations that the assessment results fit the expectations of experts for the corresponding systems. The empirical analyses also showed that several of the correlations are statistically significant and that the maintainability part of the base model has the highest correlation, which is consistent with the fact that this part is the most comprehensive. Although we still see room for extending and improving the base model, it shows a high correspondence with expert opinions and hence is able to form the basis for repeatable and understandable quality assessments in practice. 
@InProceedings{ICSE12p1132, author = {Stefan Wagner and Klaus Lochmann and Lars Heinemann and Michael Kläs and Adam Trendowicz and Reinhold Plösch and Andreas Seidl and Andreas Goeb and Jonathan Streit}, title = {The Quamoco Product Quality Modelling and Assessment Approach}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1132--1141}, doi = {}, year = {2012}, } |
|
Podgurski, Andy |
ICSE '12-SEIP: "Extending Static Analysis ..."
Extending Static Analysis by Mining Project-Specific Rules
Boya Sun, Gang Shu, Andy Podgurski, and Brian Robinson (Case Western Reserve University, USA; ABB Research, USA) Commercial static program analysis tools can be used to detect many defects that are common across applications. However, such tools currently have limited ability to reveal defects that are specific to individual projects, unless specialized checkers are devised and implemented by tool users. Developers do not typically exploit this capability. By contrast, defect mining tools developed by researchers can discover project-specific defects, but they require specialized expertise to employ and they may not be robust enough for general use. We present a hybrid approach in which a sophisticated dependence-based rule mining tool is used to discover project-specific programming rules, which are then transformed automatically into checkers that a commercial static analysis tool can run against a code base to reveal defects. We also present the results of an empirical study in which this approach was applied successfully to two large industrial code bases. Finally, we analyze the potential implications of this approach for software development practice. @InProceedings{ICSE12p1053, author = {Boya Sun and Gang Shu and Andy Podgurski and Brian Robinson}, title = {Extending Static Analysis by Mining Project-Specific Rules}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1053--1062}, doi = {}, year = {2012}, } |
|
Ponsard, Christophe |
ICSE '12-SEIP: "Efficient Reuse of Domain-Specific ..."
Efficient Reuse of Domain-Specific Test Knowledge: An Industrial Case in the Smart Card Domain
Nicolas Devos, Christophe Ponsard, Jean-Christophe Deprez, Renaud Bauvin, Benedicte Moriau, and Guy Anckaerts (CETIC, Belgium; STMicroelectronics, Belgium) While testing is heavily used and largely automated in software development projects, the reuse of test practices across similar projects in a given domain is seldom systematized and supported by adequate methods and tools. This paper presents a practical approach that emerged from a concrete industrial case in the smart card domain at STMicroelectronics Belgium in order to better address this kind of challenge. The central concept is a test knowledge repository organized as a collection of specific patterns named QPatterns. A systematic process was followed, first to gather, structure and abstract the test practices, then to produce and validate an initial repository, and finally to make it evolve later on. Testers can then rely on this repository to produce high-quality test plans identifying all the functional and non-functional aspects that have to be addressed, as well as the concrete tests that have to be developed within the context of a new project. Tool support was also developed and integrated in a traceable way into the existing industrial test environment. The approach was validated and is currently under deployment at STMicroelectronics Belgium. @InProceedings{ICSE12p1122, author = {Nicolas Devos and Christophe Ponsard and Jean-Christophe Deprez and Renaud Bauvin and Benedicte Moriau and Guy Anckaerts}, title = {Efficient Reuse of Domain-Specific Test Knowledge: An Industrial Case in the Smart Card Domain}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1122--1131}, doi = {}, year = {2012}, } |
|
Postma, Bart |
ICSE '12-SEIP: "Formal Correctness, Safety, ..."
Formal Correctness, Safety, Dependability, and Performance Analysis of a Satellite
Marie-Aude Esteve, Joost-Pieter Katoen, Viet Yen Nguyen, Bart Postma, and Yuri Yushtein (European Space Agency, Netherlands; RWTH Aachen University, Germany; University of Twente, Netherlands) This paper reports on the usage of a broad palette of formal modeling and analysis techniques on a regular industrial-size design of an ultra-modern satellite platform. These efforts were carried out in parallel with the conventional software development of the satellite platform. The model itself is expressed in a formalized dialect of AADL. Its formal nature enables rigorous and automated analysis, for which the recently developed COMPASS toolset was used. The whole effort revealed numerous inconsistencies in the early design documents, and the use of formal analyses provided additional insight into discrete system behavior (comprising nearly 50 million states) and into hybrid system behavior involving discrete and continuous variables, and enabled the automated generation of large fault trees (66 nodes) for safety analysis that are typically constructed by hand. The model's size pushed the limits of computational tractability of the algorithms underlying the formal analyses, and revealed bottlenecks for future theoretical research. Additionally, the effort led to newly learned practices from which subsequent formal modeling and analysis efforts shall benefit, especially when they are injected in the conventional software development lifecycle. The case demonstrates the feasibility of fully capturing a system-level design as a single comprehensive formal model and of analyzing it automatically using a toolset based on (probabilistic) model checkers. @InProceedings{ICSE12p1021, author = {Marie-Aude Esteve and Joost-Pieter Katoen and Viet Yen Nguyen and Bart Postma and Yuri Yushtein}, title = {Formal Correctness, Safety, Dependability, and Performance Analysis of a Satellite}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1021--1030}, doi = {}, year = {2012}, } |
|
Prommer, Karl-Heinz |
ICSE '12-SEIP: "How Much Does Unused Code ..."
How Much Does Unused Code Matter for Maintenance?
Sebastian Eder, Maximilian Junker, Elmar Jürgens, Benedikt Hauptmann, Rudolf Vaas, and Karl-Heinz Prommer (TU Munich, Germany; Munich Re, Germany) Software systems contain unnecessary code. Its maintenance causes unnecessary costs. We present tool support that employs dynamic analysis of deployed software to detect unused code as an approximation of unnecessary code, and static analysis to reveal its changes during maintenance. We present a case study on the maintenance of unused code in an industrial software system over the course of two years. It quantifies the amount of code that is unused and the amount of maintenance activity that went into it, and it makes explicit the potential benefit of tool support that informs maintainers who are about to modify unused code. @InProceedings{ICSE12p1101, author = {Sebastian Eder and Maximilian Junker and Elmar Jürgens and Benedikt Hauptmann and Rudolf Vaas and Karl-Heinz Prommer}, title = {How Much Does Unused Code Matter for Maintenance?}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1101--1110}, doi = {}, year = {2012}, } |
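The dynamic-analysis idea in the abstract, record which functions actually execute and report the rest as unused-code candidates, can be sketched in miniature. The module and workload below are made up for the example; the paper's tooling instruments deployed systems over months of production use, not a single run.

```python
# Minimal sketch: trace a workload and report functions that never ran
# as candidates for unused code. All function names are hypothetical.
import sys

def find_unused(module_funcs, workload):
    """Run `workload` under a trace and return functions never called."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "call":
            executed.add(frame.f_code.co_name)
        return None  # no per-line tracing needed

    sys.settrace(tracer)
    try:
        workload()
    finally:
        sys.settrace(None)
    return sorted(set(module_funcs) - executed)

# Hypothetical "deployed" code and a workload exercising part of it.
def load(): return 42
def save(x): return x
def legacy_export(): return "never called"

print(find_unused({"load", "save", "legacy_export"},
                  lambda: save(load())))  # prints ['legacy_export']
```

A single run only approximates unused code, which is why the paper combines long-term dynamic data with static analysis of subsequent changes.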
|
Reiss, Steven P. |
ICSE '12-SEIP: "Debugger Canvas: Industrial ..."
Debugger Canvas: Industrial Experience with the Code Bubbles Paradigm
Robert DeLine, Andrew Bragdon, Kael Rowan, Jens Jacobsen, and Steven P. Reiss (Microsoft Research, USA; Brown University, USA) At ICSE 2010, the Code Bubbles team from Brown University and the Code Canvas team from Microsoft Research presented similar ideas for new user experiences for an integrated development environment. Since then, the two teams formed a collaboration, along with the Microsoft Visual Studio team, to release Debugger Canvas, an industrial version of the Code Bubbles paradigm. With Debugger Canvas, a programmer debugs her code as a collection of code bubbles, annotated with call paths and variable values, on a two-dimensional pan-and-zoom surface. In this experience report, we describe new user interface ideas, describe the rationale behind our design choices, evaluate the performance overhead of the new design, and provide user feedback based on lab participants, post-release usage data, and a user survey and interviews. We conclude that the code bubbles paradigm does scale to existing customer code bases, is best implemented as a mode in the existing user experience rather than a replacement, and is most useful when the user has long or complex call paths, a large or unfamiliar code base, or complex control patterns, like factories or dynamic linking. @InProceedings{ICSE12p1063, author = {Robert DeLine and Andrew Bragdon and Kael Rowan and Jens Jacobsen and Steven P. Reiss}, title = {Debugger Canvas: Industrial Experience with the Code Bubbles Paradigm}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1063--1072}, doi = {}, year = {2012}, } |
|
Robinson, Brian |
ICSE '12-SEIP: "Extending Static Analysis ..."
Extending Static Analysis by Mining Project-Specific Rules
Boya Sun, Gang Shu, Andy Podgurski, and Brian Robinson (Case Western Reserve University, USA; ABB Research, USA) Commercial static program analysis tools can be used to detect many defects that are common across applications. However, such tools currently have limited ability to reveal defects that are specific to individual projects, unless specialized checkers are devised and implemented by tool users. Developers do not typically exploit this capability. By contrast, defect mining tools developed by researchers can discover project-specific defects, but they require specialized expertise to employ and they may not be robust enough for general use. We present a hybrid approach in which a sophisticated dependence-based rule mining tool is used to discover project-specific programming rules, which are then transformed automatically into checkers that a commercial static analysis tool can run against a code base to reveal defects. We also present the results of an empirical study in which this approach was applied successfully to two large industrial code bases. Finally, we analyze the potential implications of this approach for software development practice. @InProceedings{ICSE12p1053, author = {Boya Sun and Gang Shu and Andy Podgurski and Brian Robinson}, title = {Extending Static Analysis by Mining Project-Specific Rules}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1053--1062}, doi = {}, year = {2012}, } |
|
Rowan, Kael |
ICSE '12-SEIP: "Debugger Canvas: Industrial ..."
Debugger Canvas: Industrial Experience with the Code Bubbles Paradigm
Robert DeLine, Andrew Bragdon, Kael Rowan, Jens Jacobsen, and Steven P. Reiss (Microsoft Research, USA; Brown University, USA) At ICSE 2010, the Code Bubbles team from Brown University and the Code Canvas team from Microsoft Research presented similar ideas for new user experiences for an integrated development environment. Since then, the two teams formed a collaboration, along with the Microsoft Visual Studio team, to release Debugger Canvas, an industrial version of the Code Bubbles paradigm. With Debugger Canvas, a programmer debugs her code as a collection of code bubbles, annotated with call paths and variable values, on a two-dimensional pan-and-zoom surface. In this experience report, we describe new user interface ideas, describe the rationale behind our design choices, evaluate the performance overhead of the new design, and provide user feedback based on lab participants, post-release usage data, and a user survey and interviews. We conclude that the code bubbles paradigm does scale to existing customer code bases, is best implemented as a mode in the existing user experience rather than a replacement, and is most useful when the user has long or complex call paths, a large or unfamiliar code base, or complex control patterns, like factories or dynamic linking. @InProceedings{ICSE12p1063, author = {Robert DeLine and Andrew Bragdon and Kael Rowan and Jens Jacobsen and Steven P. Reiss}, title = {Debugger Canvas: Industrial Experience with the Code Bubbles Paradigm}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1063--1072}, doi = {}, year = {2012}, } |
|
Scheit, Stefan |
ICSE '12-SEIP: "Methodology for Migration ..."
Methodology for Migration of Long Running Process Instances in a Global Large Scale BPM Environment in Credit Suisse's SOA Landscape
Tarmo Ploom, Stefan Scheit, and Axel Glaser (Credit Suisse, Switzerland) Research about process instance migration covers mainly changes in process models during the process evolution and their effects on the same runtime environment. But what if the runtime environment - a legacy Business Process Execution (BPE) platform - had to be replaced with a new solution? Several migration aspects must be taken into account. (1) Process models from the old BPE platform have to be converted to the target process definition language on the target BPE platform. (2) Existing Business Process Management (BPM) applications must be integrated via new BPE platform interfaces. (3) Process instances and process instance data state must be migrated. For each of these points an appropriate migration strategy must be chosen. This paper describes the migration methodology which was applied for the BPE platform renewal in Credit Suisse. @InProceedings{ICSE12p976, author = {Tarmo Ploom and Stefan Scheit and Axel Glaser}, title = {Methodology for Migration of Long Running Process Instances in a Global Large Scale BPM Environment in Credit Suisse's SOA Landscape}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {976--985}, doi = {}, year = {2012}, } |
|
Schulte, Wolfram |
ICSE '12-SEIP: "Ten Years of Automated Code ..."
Ten Years of Automated Code Analysis at Microsoft (Invited Industrial Talk)
Wolfram Schulte (Microsoft Research, USA) Automated code analysis is technology aimed at locating, describing and repairing areas of weakness in code. Code weaknesses range from security vulnerabilities, logic errors, concurrency violations, to improper resource usage, violations of architectures or coding guidelines. Common to all code analysis techniques is that they build abstractions of code and then check those abstractions for properties of interest. For instance, a type checker computes how types are used, abstract interpreters and symbolic evaluators check how values flow, model checkers analyze how state evolves. Building modern program analysis tools thus requires a multi-pronged approach to find a variety of weaknesses. In this talk I will discuss and compare several program analysis tools that MSR built during the last ten years. They include theorem provers, program verifiers, bug finders, malware scanners, and test case generators. I will describe the need for their development, their innovation, and application. Many of these tools had considerable impact on Microsoft's development practices, as well as on the research community. Some of them are being shipped in products such as the Static Driver Verifier or as part of Visual Studio. Performing program analysis as part of quality assurance is meanwhile standard practice in many software development companies. However, several challenges have not yet been resolved. Thus, I will conclude with a set of open challenges in program analysis which will hopefully inspire new directions in our joint quest of delivering predictable software that is free from defects and vulnerabilities. @InProceedings{ICSE12p1000, author = {Wolfram Schulte}, title = {Ten Years of Automated Code Analysis at Microsoft (Invited Industrial Talk)}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1000--1000}, doi = {}, year = {2012}, } |
|
Seidl, Andreas |
ICSE '12-SEIP: "The Quamoco Product Quality ..."
The Quamoco Product Quality Modelling and Assessment Approach
Stefan Wagner, Klaus Lochmann, Lars Heinemann, Michael Kläs, Adam Trendowicz, Reinhold Plösch, Andreas Seidl, Andreas Goeb, and Jonathan Streit (University of Stuttgart, Germany; TU Munich, Germany; Fraunhofer IESE, Germany; JKU Linz, Austria; Capgemini, Germany; SAP, Germany; itestra, Germany) Published software quality models either provide abstract quality attributes or concrete quality assessments. There are no models that seamlessly integrate both aspects. In the project Quamoco, we built a comprehensive approach with the aim of closing this gap. For this, we developed in several iterations a meta quality model specifying general concepts, a quality base model covering the most important quality factors, and a quality assessment approach. The meta model introduces the new concept of a product factor, which bridges the gap between concrete measurements and abstract quality aspects. Product factors have measures and instruments to operationalise quality by measurements from manual inspection and tool analysis. The base model uses the ISO 25010 quality attributes, which we refine by 200 factors and 600 measures for Java and C# systems. We found in several empirical validations that the assessment results match the expectations of experts for the corresponding systems. The empirical analyses also showed that several of the correlations are statistically significant and that the maintainability part of the base model has the highest correlation, which fits the fact that this part is the most comprehensive. Although we still see room for extending and improving the base model, it shows a high correspondence with expert opinions and hence is able to form the basis for repeatable and understandable quality assessments in practice.
@InProceedings{ICSE12p1132, author = {Stefan Wagner and Klaus Lochmann and Lars Heinemann and Michael Kläs and Adam Trendowicz and Reinhold Plösch and Andreas Seidl and Andreas Goeb and Jonathan Streit}, title = {The Quamoco Product Quality Modelling and Assessment Approach}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1132--1141}, doi = {}, year = {2012}, } |
|
Senti, Patrick |
ICSE '12-SEIP: "Specification Patterns from ..."
Specification Patterns from Research to Industry: A Case Study in Service-Based Applications
Domenico Bianculli, Carlo Ghezzi, Cesare Pautasso, and Patrick Senti (University of Lugano, Switzerland; Politecnico di Milano, Italy; Credit Suisse, Switzerland) Specification patterns have proven to help developers to state precise system requirements, as well as formalize them by means of dedicated specification languages. Most past work has focused on the specification of concurrent and real-time systems, and has been limited to a research setting. In this paper we present the results of our study on specification patterns for service-based applications (SBAs). The study focuses on industrial SBAs in the banking domain. We started by performing an extensive analysis of the usage of specification patterns in published research case studies --- representing almost ten years of research in the area of specification, verification, and validation of SBAs. We then compared these patterns with a large body of specifications written by our industrial partner over a similar time period. The paper discusses the outcome of this comparison, indicating that some needs of the industry, especially in the area of requirements specification languages, are not fully met by current software engineering research. @InProceedings{ICSE12p967, author = {Domenico Bianculli and Carlo Ghezzi and Cesare Pautasso and Patrick Senti}, title = {Specification Patterns from Research to Industry: A Case Study in Service-Based Applications}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {967--975}, doi = {}, year = {2012}, } |
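To make the notion of a specification pattern concrete: classic property patterns such as "absence" and "response" can be checked over a finite event trace, as in the sketch below. The trace and event names are invented; the paper studies which such patterns actually occur in industrial service-based specifications, not this particular encoding.

```python
# Two classic specification patterns checked over a finite event trace.
# Event names and the trace are hypothetical.

def absence(trace, p):
    """Pattern: event p never occurs."""
    return p not in trace

def response(trace, p, s):
    """Pattern: every occurrence of p is eventually followed by s."""
    pending = False
    for event in trace:
        if event == p:
            pending = True
        elif event == s:
            pending = False
    return not pending

trace = ["request", "authorize", "request", "reply"]
print(response(trace, "request", "reply"))   # prints True
print(absence(trace, "timeout"))             # prints True
```

In temporal-logic terms, `response` corresponds to G(p -> F s) on finite traces, which is one of the most frequently used patterns in the published case studies the paper surveys.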
|
Shu, Gang |
ICSE '12-SEIP: "Extending Static Analysis ..."
Extending Static Analysis by Mining Project-Specific Rules
Boya Sun, Gang Shu, Andy Podgurski, and Brian Robinson (Case Western Reserve University, USA; ABB Research, USA) Commercial static program analysis tools can be used to detect many defects that are common across applications. However, such tools currently have limited ability to reveal defects that are specific to individual projects, unless specialized checkers are devised and implemented by tool users. Developers do not typically exploit this capability. By contrast, defect mining tools developed by researchers can discover project-specific defects, but they require specialized expertise to employ and they may not be robust enough for general use. We present a hybrid approach in which a sophisticated dependence-based rule mining tool is used to discover project-specific programming rules, which are then transformed automatically into checkers that a commercial static analysis tool can run against a code base to reveal defects. We also present the results of an empirical study in which this approach was applied successfully to two large industrial code bases. Finally, we analyze the potential implications of this approach for software development practice. @InProceedings{ICSE12p1053, author = {Boya Sun and Gang Shu and Andy Podgurski and Brian Robinson}, title = {Extending Static Analysis by Mining Project-Specific Rules}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1053--1062}, doi = {}, year = {2012}, } |
|
Sillitti, Alberto |
ICSE '12-SEIP: "Understanding the Impact of ..."
Understanding the Impact of Pair Programming on Developers Attention: A Case Study on a Large Industrial Experimentation
Alberto Sillitti, Giancarlo Succi, and Jelena Vlasenko (Free University of Bolzano, Italy) Pair Programming is one of the most studied and debated development techniques. However, at present, we do not have a clear, objective, and quantitative understanding of the claimed benefits of such a development approach. All the available studies focus on the analysis of the effects of Pair Programming (e.g., code quality, development speed, etc.) with different findings and limited replicability of the experiments. This paper adopts a different approach that can be replicated more easily: it investigates how Pair Programming affects the way developers write code and interact with their development machine. In particular, the paper focuses on the effects that Pair Programming has on developers’ attention and productivity. The study was performed on a professional development team observed for ten months, and it finds that Pair Programming helps developers to eliminate distracting activities and to focus on productive activities. @InProceedings{ICSE12p1093, author = {Alberto Sillitti and Giancarlo Succi and Jelena Vlasenko}, title = {Understanding the Impact of Pair Programming on Developers Attention: A Case Study on a Large Industrial Experimentation}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1093--1100}, doi = {}, year = {2012}, } |
|
Simionovici, Yahalomit |
ICSE '12-SEIP: "Making Sense of Healthcare ..."
Making Sense of Healthcare Benefits
Jonathan Bnayahu, Maayan Goldstein, Mordechai Nisenson, and Yahalomit Simionovici (IBM Research, Israel) A key piece of information in healthcare is a patient's benefit plan. It details which treatments and procedures are covered by the health insurer (or payer), and under which conditions. While the most accurate and complete implementation of the plan resides in the payer’s claims adjudication systems, the inherent complexity of these systems forces payers to maintain multiple repositories of benefit information for other service and regulatory needs. In this paper we present a technology that deals with this complexity. We show how a large US health payer benefited from using the visualization, search, summarization and other capabilities of the technology. We argue that this technology can be used to improve productivity and reduce error rate in the benefits administration workflow, leading to lower administrative overhead and cost for health payers, which benefits both payers and patients. @InProceedings{ICSE12p1033, author = {Jonathan Bnayahu and Maayan Goldstein and Mordechai Nisenson and Yahalomit Simionovici}, title = {Making Sense of Healthcare Benefits}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1033--1042}, doi = {}, year = {2012}, } |
|
Sprenger, Tom |
ICSE '12-SEIP: "How Software Engineering Can ..."
How Software Engineering Can Benefit from Traditional Industries - A Practical Experience Report (Invited Industrial Talk)
Tom Sprenger (AdNovum Informatik, Switzerland) To be competitive in today's market, the IT industry faces many challenges in the development and maintenance of enterprise information systems. Engineering these large-scale systems efficiently requires making decisions about a number of issues. In addition, customers' expectations imply continuous software delivery of predictable quality. Operating such systems demands transparency of the software with regard to lifecycle, change and incident management, as well as cost efficiency. Addressing these challenges, we learned how to benefit from traditional industries. Although the IT business gladly calls itself an industry, the industrialization of software engineering in most cases remains at a rather modest level. Industrialization means not only building a solution or product on top of managed and well-defined processes, but also having access to structured information about the current conditions of manufacturing at any time. As with the test series and assembly lines of the automobile industry, each individual component and each step from the beginning of manufacturing up to the final product should be equipped with measuring points for quality and progress. Going one step further, the product itself, after it has left the factory, should be able to continuously provide analytic data for diagnostic reasons. Information is automatically collected and forms the basis for process control, optimization and continuous improvement of the software engineering process. This presentation shows by means of a practical experience report how AdNovum managed to build its software engineering based on a well-balanced system of processes, continuous measurement and control - as well as a healthy portion of pragmatism. We implemented an efficient and predictable software delivery pipeline based on five cornerstones that enables us to ship more than 1500 customer deliveries per year.
@InProceedings{ICSE12p999, author = {Tom Sprenger}, title = {How Software Engineering Can Benefit from Traditional Industries - A Practical Experience Report (Invited Industrial Talk)}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {999--999}, doi = {}, year = {2012}, } |
|
Staples, Mark |
ICSE '12-SEIP: "Large-Scale Formal Verification ..."
Large-Scale Formal Verification in Practice: A Process Perspective
June Andronick, Ross Jeffery, Gerwin Klein, Rafal Kolanski, Mark Staples, He Zhang, and Liming Zhu (NICTA, Australia; UNSW, Australia) The L4.verified project was a rare success in large-scale, formal verification: it provided a formal, machine-checked, code-level proof of the full functional correctness of the seL4 microkernel. In this paper we report on the development process and management issues of this project, highlighting key success factors. We formulate a detailed descriptive model of its middle-out development process, and analyze the evolution and dependencies of code and proof artifacts. We compare our key findings on verification and re-verification with insights from other verification efforts in the literature. Our analysis of the project is based on complete access to project logs, meeting notes, and version control data over its entire history, including its long-term, ongoing maintenance phase. The aim of this work is to aid understanding of how to successfully run large-scale formal software verification projects. @InProceedings{ICSE12p1001, author = {June Andronick and Ross Jeffery and Gerwin Klein and Rafal Kolanski and Mark Staples and He Zhang and Liming Zhu}, title = {Large-Scale Formal Verification in Practice: A Process Perspective}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1001--1010}, doi = {}, year = {2012}, } |
|
Streit, Jonathan |
ICSE '12-SEIP: "The Quamoco Product Quality ..."
The Quamoco Product Quality Modelling and Assessment Approach
Stefan Wagner, Klaus Lochmann, Lars Heinemann, Michael Kläs, Adam Trendowicz, Reinhold Plösch, Andreas Seidl, Andreas Goeb, and Jonathan Streit (University of Stuttgart, Germany; TU Munich, Germany; Fraunhofer IESE, Germany; JKU Linz, Austria; Capgemini, Germany; SAP, Germany; itestra, Germany) Published software quality models either provide abstract quality attributes or concrete quality assessments. There are no models that seamlessly integrate both aspects. In the project Quamoco, we built a comprehensive approach with the aim of closing this gap. For this, we developed in several iterations a meta quality model specifying general concepts, a quality base model covering the most important quality factors, and a quality assessment approach. The meta model introduces the new concept of a product factor, which bridges the gap between concrete measurements and abstract quality aspects. Product factors have measures and instruments to operationalise quality by measurements from manual inspection and tool analysis. The base model uses the ISO 25010 quality attributes, which we refine by 200 factors and 600 measures for Java and C# systems. We found in several empirical validations that the assessment results match the expectations of experts for the corresponding systems. The empirical analyses also showed that several of the correlations are statistically significant and that the maintainability part of the base model has the highest correlation, which fits the fact that this part is the most comprehensive. Although we still see room for extending and improving the base model, it shows a high correspondence with expert opinions and hence is able to form the basis for repeatable and understandable quality assessments in practice.
@InProceedings{ICSE12p1132, author = {Stefan Wagner and Klaus Lochmann and Lars Heinemann and Michael Kläs and Adam Trendowicz and Reinhold Plösch and Andreas Seidl and Andreas Goeb and Jonathan Streit}, title = {The Quamoco Product Quality Modelling and Assessment Approach}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1132--1141}, doi = {}, year = {2012}, } |
|
Succi, Giancarlo |
ICSE '12-SEIP: "Understanding the Impact of ..."
Understanding the Impact of Pair Programming on Developers Attention: A Case Study on a Large Industrial Experimentation
Alberto Sillitti, Giancarlo Succi, and Jelena Vlasenko (Free University of Bolzano, Italy) Pair Programming is one of the most studied and debated development techniques. However, at present, we do not have a clear, objective, and quantitative understanding of the claimed benefits of such a development approach. All the available studies focus on the analysis of the effects of Pair Programming (e.g., code quality, development speed, etc.) with different findings and limited replicability of the experiments. This paper adopts a different approach that can be replicated more easily: it investigates how Pair Programming affects the way developers write code and interact with their development machine. In particular, the paper focuses on the effects that Pair Programming has on developers’ attention and productivity. The study was performed on a professional development team observed for ten months, and it finds that Pair Programming helps developers to eliminate distracting activities and to focus on productive activities. @InProceedings{ICSE12p1093, author = {Alberto Sillitti and Giancarlo Succi and Jelena Vlasenko}, title = {Understanding the Impact of Pair Programming on Developers Attention: A Case Study on a Large Industrial Experimentation}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1093--1100}, doi = {}, year = {2012}, } |
|
Sun, Boya |
ICSE '12-SEIP: "Extending Static Analysis ..."
Extending Static Analysis by Mining Project-Specific Rules
Boya Sun, Gang Shu, Andy Podgurski, and Brian Robinson (Case Western Reserve University, USA; ABB Research, USA) Commercial static program analysis tools can be used to detect many defects that are common across applications. However, such tools currently have limited ability to reveal defects that are specific to individual projects, unless specialized checkers are devised and implemented by tool users. Developers do not typically exploit this capability. By contrast, defect mining tools developed by researchers can discover project-specific defects, but they require specialized expertise to employ and they may not be robust enough for general use. We present a hybrid approach in which a sophisticated dependence-based rule mining tool is used to discover project-specific programming rules, which are then transformed automatically into checkers that a commercial static analysis tool can run against a code base to reveal defects. We also present the results of an empirical study in which this approach was applied successfully to two large industrial code bases. Finally, we analyze the potential implications of this approach for software development practice. @InProceedings{ICSE12p1053, author = {Boya Sun and Gang Shu and Andy Podgurski and Brian Robinson}, title = {Extending Static Analysis by Mining Project-Specific Rules}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1053--1062}, doi = {}, year = {2012}, } |
|
Takeuchi, Hironori |
ICSE '12-SEIP: "Constructing Parser for Industrial ..."
Constructing Parser for Industrial Software Specifications Containing Formal and Natural Language Description
Futoshi Iwama, Taiga Nakamura, and Hironori Takeuchi (IBM Research, Japan) This paper describes a novel framework for creating a parser to process and analyze texts written in a ``partially structured'' natural language. In many projects, the contents of document artifacts tend to be described as a mixture of formal parts (i.e. the text constructs follow specific conventions) and parts written in arbitrary free text. Formal parsers, typically defined and used to process a description with rigidly defined syntax, such as program source code, are very precise and efficient in processing the formal part, while parsers developed for natural language processing (NLP) are good at robustly interpreting the free-text part. Therefore, combining these parsers with different characteristics can allow for more flexible and practical processing of various project documents. Unfortunately, conventional approaches to constructing a parser from multiple small parsers were studied extensively only for formal language parsers and are not directly applicable to NLP parsers due to the differences in the way the input text is extracted and evaluated. We propose a method to configure and generate a combined parser by extending an approach based on parser combinators, operators for composing multiple formal parsers, to support both NLP and formal parsers. The resulting text parser is based on Parsing Expression Grammars, and it benefits from the strengths of both parser types. We demonstrate an application of such a combined parser in practical situations and show that the proposed approach can efficiently construct a parser for analyzing project-specific industrial specification documents.
@InProceedings{ICSE12p1011, author = {Futoshi Iwama and Taiga Nakamura and Hironori Takeuchi}, title = {Constructing Parser for Industrial Software Specifications Containing Formal and Natural Language Description}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1011--1020}, doi = {}, year = {2012}, } |
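The combination of formal and NLP parsers that the paper describes can be sketched with a minimal parser-combinator core, assuming PEG-style semantics (match at the current position; a free-text parser consumes prose up to a stop marker). The `REQ-\d+` pattern and the spec-line shape are invented for illustration; the paper's actual framework is far richer.

```python
import re

# A parser is a function (text, pos) -> (value, new_pos), or None on failure.

def token(pattern):
    """Formal parser: match a regex at the current position (PEG-style)."""
    rx = re.compile(pattern)
    def parse(text, pos):
        m = rx.match(text, pos)
        return (m.group(0), m.end()) if m else None
    return parse

def free_text(stop_pattern):
    """NLP-style fallback: greedily consume arbitrary prose up to a stop marker."""
    rx = re.compile(stop_pattern)
    def parse(text, pos):
        m = rx.search(text, pos)
        end = m.start() if m else len(text)
        return (text[pos:end].strip(), end)
    return parse

def seq(*parsers):
    """Combinator: run parsers in order, collecting their results."""
    def parse(text, pos):
        values = []
        for p in parsers:
            r = p(text, pos)
            if r is None:
                return None
            v, pos = r
            values.append(v)
        return (values, pos)
    return parse

# Hypothetical spec line: a formal requirement ID, a colon, then free-form prose.
spec_line = seq(token(r"REQ-\d+"), token(r":\s*"), free_text(r"$"))
```

For example, `spec_line("REQ-42: the card shall reject expired keys", 0)` parses the formal ID with the regex parser and hands the remainder to the free-text parser, which is the division of labor the paper advocates.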
|
Trendowicz, Adam |
ICSE '12-SEIP: "The Quamoco Product Quality ..."
The Quamoco Product Quality Modelling and Assessment Approach
Stefan Wagner, Klaus Lochmann, Lars Heinemann, Michael Kläs, Adam Trendowicz, Reinhold Plösch, Andreas Seidl, Andreas Goeb, and Jonathan Streit (University of Stuttgart, Germany; TU Munich, Germany; Fraunhofer IESE, Germany; JKU Linz, Austria; Capgemini, Germany; SAP, Germany; itestra, Germany) Published software quality models either provide abstract quality attributes or concrete quality assessments; no models seamlessly integrate both aspects. In the Quamoco project, we built a comprehensive approach with the aim of closing this gap. Over several iterations, we developed a meta quality model specifying general concepts, a quality base model covering the most important quality factors, and a quality assessment approach. The meta model introduces the new concept of a product factor, which bridges the gap between concrete measurements and abstract quality aspects. Product factors have measures and instruments that operationalise quality through measurements from manual inspection and tool analysis. The base model uses the ISO 25010 quality attributes, which we refine with 200 factors and 600 measures for Java and C# systems. In several empirical validations, we found that the assessment results match the expectations of experts for the corresponding systems. The empirical analyses also showed that several of the correlations are statistically significant and that the maintainability part of the base model has the highest correlation, consistent with the fact that this part is the most comprehensive. Although we still see room for extending and improving the base model, it shows a high correspondence with expert opinions and hence can form the basis for repeatable and understandable quality assessments in practice. 
@InProceedings{ICSE12p1132, author = {Stefan Wagner and Klaus Lochmann and Lars Heinemann and Michael Kläs and Adam Trendowicz and Reinhold Plösch and Andreas Seidl and Andreas Goeb and Jonathan Streit}, title = {The Quamoco Product Quality Modelling and Assessment Approach}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1132--1141}, doi = {}, year = {2012}, } |
|
Vaas, Rudolf |
ICSE '12-SEIP: "How Much Does Unused Code ..."
How Much Does Unused Code Matter for Maintenance?
Sebastian Eder, Maximilian Junker, Elmar Jürgens, Benedikt Hauptmann, Rudolf Vaas, and Karl-Heinz Prommer (TU Munich, Germany; Munich Re, Germany) Software systems contain unnecessary code, and its maintenance causes unnecessary costs. We present tool support that employs dynamic analysis of deployed software to detect unused code as an approximation of unnecessary code, and static analysis to reveal changes to it during maintenance. We present a case study on the maintenance of unused code in an industrial software system over the course of two years. It quantifies the amount of code that is unused and the amount of maintenance activity that went into it, and it makes explicit the potential benefit of tool support that warns maintainers who are about to modify unused code. @InProceedings{ICSE12p1101, author = {Sebastian Eder and Maximilian Junker and Elmar Jürgens and Benedikt Hauptmann and Rudolf Vaas and Karl-Heinz Prommer}, title = {How Much Does Unused Code Matter for Maintenance?}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1101--1110}, doi = {}, year = {2012}, } |
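The detection idea, approximating unnecessary code by code that deployed instances never execute, can be sketched as a simple set computation over profiling and version-control data. The function and data names here are hypothetical, not taken from the authors' tooling.

```python
def unused_code_report(all_functions, executed_functions, modified_functions):
    """Approximate unused code as functions defined but never executed in
    production, and flag maintenance effort spent on such functions.

    all_functions:      names of all functions found by static analysis
    executed_functions: names observed executing by dynamic (usage) profiling
    modified_functions: names touched by commits during the study period
    """
    unused = set(all_functions) - set(executed_functions)
    maintained_unused = unused & set(modified_functions)
    return {
        "unused": sorted(unused),
        "maintained_unused": sorted(maintained_unused),
        "unused_ratio": len(unused) / len(all_functions) if all_functions else 0.0,
    }
```

A tool built on this idea would warn a maintainer when a function in `maintained_unused` is about to be changed again, which is the intervention the paper's case study quantifies the benefit of.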
|
Vlasenko, Jelena |
ICSE '12-SEIP: "Understanding the Impact of ..."
Understanding the Impact of Pair Programming on Developers Attention: A Case Study on a Large Industrial Experimentation
Alberto Sillitti, Giancarlo Succi, and Jelena Vlasenko (Free University of Bolzano, Italy) Pair Programming is one of the most studied and debated development techniques. However, at present, we do not have a clear, objective, and quantitative understanding of the claimed benefits of such a development approach. The available studies focus on analyzing the effects of Pair Programming (e.g., on code quality, development speed, etc.), with differing findings and limited replicability of the experiments. This paper adopts a different, more easily replicable approach: it investigates how Pair Programming affects the way developers write code and interact with their development machine. In particular, the paper focuses on the effects that Pair Programming has on developers’ attention and productivity. The study was performed on a professional development team observed for ten months, and it finds that Pair Programming helps developers eliminate distracting activities and focus on productive ones. @InProceedings{ICSE12p1093, author = {Alberto Sillitti and Giancarlo Succi and Jelena Vlasenko}, title = {Understanding the Impact of Pair Programming on Developers Attention: A Case Study on a Large Industrial Experimentation}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1093--1100}, doi = {}, year = {2012}, } |
|
Wagner, Stefan |
ICSE '12-SEIP: "The Quamoco Product Quality ..."
The Quamoco Product Quality Modelling and Assessment Approach
Stefan Wagner, Klaus Lochmann, Lars Heinemann, Michael Kläs, Adam Trendowicz, Reinhold Plösch, Andreas Seidl, Andreas Goeb, and Jonathan Streit (University of Stuttgart, Germany; TU Munich, Germany; Fraunhofer IESE, Germany; JKU Linz, Austria; Capgemini, Germany; SAP, Germany; itestra, Germany) Published software quality models either provide abstract quality attributes or concrete quality assessments; no models seamlessly integrate both aspects. In the Quamoco project, we built a comprehensive approach with the aim of closing this gap. Over several iterations, we developed a meta quality model specifying general concepts, a quality base model covering the most important quality factors, and a quality assessment approach. The meta model introduces the new concept of a product factor, which bridges the gap between concrete measurements and abstract quality aspects. Product factors have measures and instruments that operationalise quality through measurements from manual inspection and tool analysis. The base model uses the ISO 25010 quality attributes, which we refine with 200 factors and 600 measures for Java and C# systems. In several empirical validations, we found that the assessment results match the expectations of experts for the corresponding systems. The empirical analyses also showed that several of the correlations are statistically significant and that the maintainability part of the base model has the highest correlation, consistent with the fact that this part is the most comprehensive. Although we still see room for extending and improving the base model, it shows a high correspondence with expert opinions and hence can form the basis for repeatable and understandable quality assessments in practice. 
@InProceedings{ICSE12p1132, author = {Stefan Wagner and Klaus Lochmann and Lars Heinemann and Michael Kläs and Adam Trendowicz and Reinhold Plösch and Andreas Seidl and Andreas Goeb and Jonathan Streit}, title = {The Quamoco Product Quality Modelling and Assessment Approach}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1132--1141}, doi = {}, year = {2012}, } |
|
Wolff, Eberhard |
ICSE '12-SEIP: "Software Architecture - What ..."
Software Architecture - What Does It Mean in Industry? (Invited Industrial Talk)
Eberhard Wolff (adesso, Germany) Architecture is crucial for the success of software projects. However, the metaphor is taken from civil engineering - does it really fit building software? And ultimately software is code - how does architecture help to create the code? This talk gives practical advice based on several years of industry experience as an architect, coach, and trainer for software architecture. It presents core elements of software architectures, their relation to coding, and how architects are educated in industry. @InProceedings{ICSE12p998, author = {Eberhard Wolff}, title = {Software Architecture - What Does It Mean in Industry? (Invited Industrial Talk)}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {998--998}, doi = {}, year = {2012}, } |
|
Wu, Rongxin |
ICSE '12-SEIP: "ReBucket: A Method for Clustering ..."
ReBucket: A Method for Clustering Duplicate Crash Reports Based on Call Stack Similarity
Yingnong Dang, Rongxin Wu, Hongyu Zhang, Dongmei Zhang, and Peter Nobel (Microsoft Research, China; Tsinghua University, China; Microsoft, USA) Software often crashes. Once a crash happens, a crash report can be sent to software developers for investigation upon user permission. To facilitate efficient handling of crashes, crash reports received by Microsoft's Windows Error Reporting (WER) system are organized into a set of "buckets". Each bucket contains duplicate crash reports that are deemed manifestations of the same bug. The bucket information is important for prioritizing efforts to resolve crashing bugs. To improve the accuracy of bucketing, we propose ReBucket, a method for clustering crash reports based on call stack matching. ReBucket measures the similarities of call stacks in crash reports and then assigns the reports to appropriate buckets based on the similarity values. We evaluate ReBucket using crash data collected from five widely-used Microsoft products. The results show that ReBucket achieves better overall performance than the existing methods. On average, the F-measure obtained by ReBucket is about 0.88. @InProceedings{ICSE12p1083, author = {Yingnong Dang and Rongxin Wu and Hongyu Zhang and Dongmei Zhang and Peter Nobel}, title = {ReBucket: A Method for Clustering Duplicate Crash Reports Based on Call Stack Similarity}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1083--1092}, doi = {}, year = {2012}, } |
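A position-dependent call-stack similarity in the spirit of ReBucket can be sketched as a small dynamic program: matching frames near the top of the stack and with a small depth offset contribute more. This is a simplified illustration, not the paper's exact measure; the decay parameters `c` and `o` are assumed tuning knobs.

```python
import math

def stack_similarity(stack_a, stack_b, c=0.5, o=0.5):
    """Similarity in [0, 1] between two call stacks given top frame first.

    A matched frame pair at depths (i, j) contributes
    exp(-c * min(i, j)) * exp(-o * |i - j|): weight decays with depth (c)
    and with the offset between the two depths (o).
    """
    n, m = len(stack_a), len(stack_b)
    # dp[i][j] = best total weight aligning the first i frames of A
    # with the first j frames of B (classic weighted-LCS recurrence).
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            best = max(dp[i - 1][j], dp[i][j - 1])
            if stack_a[i - 1] == stack_b[j - 1]:
                w = math.exp(-c * min(i - 1, j - 1)) * math.exp(-o * abs(i - j))
                best = max(best, dp[i - 1][j - 1] + w)
            dp[i][j] = best
    # Normalize by the weight of a perfect top-aligned match.
    norm = sum(math.exp(-c * k) for k in range(min(n, m)))
    return dp[n][m] / norm if norm else 0.0
```

Bucketing would then assign a new crash report to the bucket whose representative stack maximizes this similarity, provided it exceeds a threshold; otherwise a new bucket is opened.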
|
Xie, Tao |
ICSE '12-SEIP: "Software Analytics in Practice: ..."
Software Analytics in Practice: Mini Tutorial
Dongmei Zhang and Tao Xie (Microsoft Research, China; North Carolina State University, USA) A huge wealth of data exists in the practice of software development. Further rich data are produced by modern software and services in operation, many of which tend to be data-driven and/or data-producing in nature. Hidden in the data is information about the quality of software and services and the dynamics of software development. Software analytics develops and applies data exploration and analysis technologies, such as pattern recognition, machine learning, and information visualization, to software data in order to obtain insightful and actionable information for modern software and services. This tutorial presents the latest research and practice on the principles, techniques, and applications of software analytics, highlighting success stories in industry, research achievements that have been transferred to industrial practice, and future research and practice directions in software analytics. Attendees can acquire the skills and knowledge needed to perform industrial research or practice in the field of software analytics and to integrate analytics into their own industrial research, practice, and training. @InProceedings{ICSE12p996, author = {Dongmei Zhang and Tao Xie}, title = {Software Analytics in Practice: Mini Tutorial}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {996--996}, doi = {}, year = {2012}, } |
|
Yushtein, Yuri |
ICSE '12-SEIP: "Formal Correctness, Safety, ..."
Formal Correctness, Safety, Dependability, and Performance Analysis of a Satellite
Marie-Aude Esteve, Joost-Pieter Katoen, Viet Yen Nguyen, Bart Postma, and Yuri Yushtein (European Space Agency, Netherlands; RWTH Aachen University, Germany; University of Twente, Netherlands) This paper reports on the usage of a broad palette of formal modeling and analysis techniques on a regular industrial-size design of an ultra-modern satellite platform. These efforts were carried out in parallel with the conventional software development of the satellite platform. The model itself is expressed in a formalized dialect of AADL. Its formal nature enables rigorous and automated analysis, for which the recently developed COMPASS toolset was used. The whole effort revealed numerous inconsistencies in the early design documents, and the use of formal analyses provided additional insight on discrete system behavior (comprising nearly 50 million states), on hybrid system behavior involving discrete and continuous variables, and enabled the automated generation of large fault trees (66 nodes) for safety analysis that typically are constructed by hand. The model's size pushed the computational tractability of the algorithms underlying the formal analyses, and revealed bottlenecks for future theoretical research. Additionally, the effort led to newly learned practices from which subsequent formal modeling and analysis efforts shall benefit, especially when they are injected in the conventional software development lifecycle. The case demonstrates the feasibility of fully capturing a system-level design as a single comprehensive formal model and analyze it automatically using a toolset based on (probabilistic) model checkers. @InProceedings{ICSE12p1021, author = {Marie-Aude Esteve and Joost-Pieter Katoen and Viet Yen Nguyen and Bart Postma and Yuri Yushtein}, title = {Formal Correctness, Safety, Dependability, and Performance Analysis of a Satellite}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1021--1030}, doi = {}, year = {2012}, } |
|
Zhang, Dongmei |
ICSE '12-SEIP: "ReBucket: A Method for Clustering ..."
ReBucket: A Method for Clustering Duplicate Crash Reports Based on Call Stack Similarity
Yingnong Dang, Rongxin Wu, Hongyu Zhang, Dongmei Zhang, and Peter Nobel (Microsoft Research, China; Tsinghua University, China; Microsoft, USA) Software often crashes. Once a crash happens, a crash report can be sent to software developers for investigation upon user permission. To facilitate efficient handling of crashes, crash reports received by Microsoft's Windows Error Reporting (WER) system are organized into a set of "buckets". Each bucket contains duplicate crash reports that are deemed manifestations of the same bug. The bucket information is important for prioritizing efforts to resolve crashing bugs. To improve the accuracy of bucketing, we propose ReBucket, a method for clustering crash reports based on call stack matching. ReBucket measures the similarities of call stacks in crash reports and then assigns the reports to appropriate buckets based on the similarity values. We evaluate ReBucket using crash data collected from five widely-used Microsoft products. The results show that ReBucket achieves better overall performance than the existing methods. On average, the F-measure obtained by ReBucket is about 0.88. @InProceedings{ICSE12p1083, author = {Yingnong Dang and Rongxin Wu and Hongyu Zhang and Dongmei Zhang and Peter Nobel}, title = {ReBucket: A Method for Clustering Duplicate Crash Reports Based on Call Stack Similarity}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1083--1092}, doi = {}, year = {2012}, } ICSE '12-SEIP: "Software Analytics in Practice: ..." Software Analytics in Practice: Mini Tutorial Dongmei Zhang and Tao Xie (Microsoft Research, China; North Carolina State University, USA) A huge wealth of data exists in the practice of software development. Further rich data are produced by modern software and services in operation, many of which tend to be data-driven and/or data-producing in nature. Hidden in the data is information about the quality of software and services and the dynamics of software development. 
Software analytics develops and applies data exploration and analysis technologies, such as pattern recognition, machine learning, and information visualization, to software data in order to obtain insightful and actionable information for modern software and services. This tutorial presents the latest research and practice on the principles, techniques, and applications of software analytics, highlighting success stories in industry, research achievements that have been transferred to industrial practice, and future research and practice directions in software analytics. Attendees can acquire the skills and knowledge needed to perform industrial research or practice in the field of software analytics and to integrate analytics into their own industrial research, practice, and training. @InProceedings{ICSE12p996, author = {Dongmei Zhang and Tao Xie}, title = {Software Analytics in Practice: Mini Tutorial}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {996--996}, doi = {}, year = {2012}, } |
|
Zhang, He |
ICSE '12-SEIP: "Large-Scale Formal Verification ..."
Large-Scale Formal Verification in Practice: A Process Perspective
June Andronick, Ross Jeffery, Gerwin Klein, Rafal Kolanski, Mark Staples, He Zhang, and Liming Zhu (NICTA, Australia; UNSW, Australia) The L4.verified project was a rare success in large-scale, formal verification: it provided a formal, machine-checked, code-level proof of the full functional correctness of the seL4 microkernel. In this paper we report on the development process and management issues of this project, highlighting key success factors. We formulate a detailed descriptive model of its middle-out development process, and analyze the evolution and dependencies of code and proof artifacts. We compare our key findings on verification and re-verification with insights from other verification efforts in the literature. Our analysis of the project is based on complete access to project logs, meeting notes, and version control data over its entire history, including its long-term, ongoing maintenance phase. The aim of this work is to aid understanding of how to successfully run large-scale formal software verification projects. @InProceedings{ICSE12p1001, author = {June Andronick and Ross Jeffery and Gerwin Klein and Rafal Kolanski and Mark Staples and He Zhang and Liming Zhu}, title = {Large-Scale Formal Verification in Practice: A Process Perspective}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1001--1010}, doi = {}, year = {2012}, } |
|
Zhang, Hongyu |
ICSE '12-SEIP: "ReBucket: A Method for Clustering ..."
ReBucket: A Method for Clustering Duplicate Crash Reports Based on Call Stack Similarity
Yingnong Dang, Rongxin Wu, Hongyu Zhang, Dongmei Zhang, and Peter Nobel (Microsoft Research, China; Tsinghua University, China; Microsoft, USA) Software often crashes. Once a crash happens, a crash report can be sent to software developers for investigation upon user permission. To facilitate efficient handling of crashes, crash reports received by Microsoft's Windows Error Reporting (WER) system are organized into a set of "buckets". Each bucket contains duplicate crash reports that are deemed manifestations of the same bug. The bucket information is important for prioritizing efforts to resolve crashing bugs. To improve the accuracy of bucketing, we propose ReBucket, a method for clustering crash reports based on call stack matching. ReBucket measures the similarities of call stacks in crash reports and then assigns the reports to appropriate buckets based on the similarity values. We evaluate ReBucket using crash data collected from five widely-used Microsoft products. The results show that ReBucket achieves better overall performance than the existing methods. On average, the F-measure obtained by ReBucket is about 0.88. @InProceedings{ICSE12p1083, author = {Yingnong Dang and Rongxin Wu and Hongyu Zhang and Dongmei Zhang and Peter Nobel}, title = {ReBucket: A Method for Clustering Duplicate Crash Reports Based on Call Stack Similarity}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1083--1092}, doi = {}, year = {2012}, } |
|
Zhu, Liming |
ICSE '12-SEIP: "Large-Scale Formal Verification ..."
Large-Scale Formal Verification in Practice: A Process Perspective
June Andronick, Ross Jeffery, Gerwin Klein, Rafal Kolanski, Mark Staples, He Zhang, and Liming Zhu (NICTA, Australia; UNSW, Australia) The L4.verified project was a rare success in large-scale, formal verification: it provided a formal, machine-checked, code-level proof of the full functional correctness of the seL4 microkernel. In this paper we report on the development process and management issues of this project, highlighting key success factors. We formulate a detailed descriptive model of its middle-out development process, and analyze the evolution and dependencies of code and proof artifacts. We compare our key findings on verification and re-verification with insights from other verification efforts in the literature. Our analysis of the project is based on complete access to project logs, meeting notes, and version control data over its entire history, including its long-term, ongoing maintenance phase. The aim of this work is to aid understanding of how to successfully run large-scale formal software verification projects. @InProceedings{ICSE12p1001, author = {June Andronick and Ross Jeffery and Gerwin Klein and Rafal Kolanski and Mark Staples and He Zhang and Liming Zhu}, title = {Large-Scale Formal Verification in Practice: A Process Perspective}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1001--1010}, doi = {}, year = {2012}, } |
|
Zimmermann, Thomas |
ICSE '12-SEIP: "Information Needs for Software ..."
Information Needs for Software Development Analytics
Raymond P. L. Buse and Thomas Zimmermann (University of Virginia, USA; Microsoft Research, USA) Software development is a data rich activity with many sophisticated metrics. Yet engineers often lack the tools and techniques necessary to leverage these potentially powerful information resources toward decision making. In this paper, we present the data and analysis needs of professional software engineers, which we identified among 110 developers and managers in a survey. We asked about their decision making process, their needs for artifacts and indicators, and scenarios in which they would use analytics. The survey responses lead us to propose several guidelines for analytics tools in software development including: Engineers do not necessarily have much expertise in data analysis; thus tools should be easy to use, fast, and produce concise output. Engineers have diverse analysis needs and consider most indicators to be important; thus tools should at the same time support many different types of artifacts and many indicators. In addition, engineers want to drill down into data based on time, organizational structure, and system architecture. @InProceedings{ICSE12p986, author = {Raymond P. L. Buse and Thomas Zimmermann}, title = {Information Needs for Software Development Analytics}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {986--995}, doi = {}, year = {2012}, } ICSE '12-SEIP: "Characterizing and Predicting ..." Characterizing and Predicting Which Bugs Get Reopened Thomas Zimmermann, Nachiappan Nagappan, Philip J. Guo, and Brendan Murphy (Microsoft Research, USA; Stanford University, USA; Microsoft Research, UK) Fixing bugs is an important part of the software development process. An underlying aspect is the effectiveness of fixes: if a fair number of fixed bugs are reopened, it could indicate instability in the software system. To the best of our knowledge there has been little prior work on understanding the dynamics of bug reopens. 
Towards that end, in this paper, we characterize when bug reports are reopened by using the Microsoft Windows operating system project as an empirical case study. Our analysis is based on a mixed-methods approach. First, we categorize the primary reasons for reopens based on a survey of 358 Microsoft employees. We then reinforce these results with a large-scale quantitative study of Windows bug reports, focusing on factors related to bug report edits and relationships between people involved in handling the bug. Finally, we build statistical models to describe the impact of various metrics on reopening bugs ranging from the reputation of the opener to how the bug was found. @InProceedings{ICSE12p1073, author = {Thomas Zimmermann and Nachiappan Nagappan and Philip J. Guo and Brendan Murphy}, title = {Characterizing and Predicting Which Bugs Get Reopened}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1073--1082}, doi = {}, year = {2012}, } ICSE '12-SEIP: "Goldfish Bowl Panel: Software ..." Goldfish Bowl Panel: Software Development Analytics Tim Menzies and Thomas Zimmermann (West Virginia University, USA; Microsoft Research, USA) Gaming companies now routinely apply data mining to their user data in order to plan the next release of their software. We predict that such software development analytics will become commonplace, in the near future. For example, as large software systems migrate to the cloud, they are divided and sold as dozens of smaller apps; when shopping inside the cloud, users are free to mix and match their apps from multiple vendors (e.g. Google Docs’ word processor with Zoho’s slide manager); to extend, or even retain, market share cloud vendors must mine their user data in order to understand what features best attract their clients. This panel will address the open issues with analytics. Issues addressed will include the following. What is the potential for software development analytics? 
What are the strengths and weaknesses of the current generation of analytics tools? How best can we mature those tools? @InProceedings{ICSE12p1031, author = {Tim Menzies and Thomas Zimmermann}, title = {Goldfish Bowl Panel: Software Development Analytics}, booktitle = {Proc.\ ICSE}, publisher = {IEEE}, pages = {1031--1032}, doi = {}, year = {2012}, } |
89 authors