2012 34th International Conference on Software Engineering (ICSE),
June 2–9, 2012,
Zurich, Switzerland
New Ideas and Emerging Results
NIER in Support of Software Engineers
Wed, Jun 6, 10:45 - 12:45
Automatically Detecting Developer Activities and Problems in Software Development Work
Tobias Roehm and Walid Maalej
(TU Munich, Germany)
Detecting developers' current activity and the problems they face is a prerequisite for context-aware assistance and for capturing developers' experiences during their work. We present an approach to detect the current activity of software developers and whether they are facing a problem. By observing developer actions such as changing code or searching the web, we detect whether developers are locating the cause of a problem, searching for a solution, or applying a solution.
We model development work as a recurring problem-solution cycle, detect developers' actions by instrumenting the IDE, translate those actions into observations using ontologies, and infer developer activities using Hidden Markov Models. In a preliminary evaluation, our approach correctly detected 72% of all activities. However, a broader, more reliable evaluation is still needed.
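The inference step described in this abstract can be illustrated with a minimal Viterbi decoder over a hand-built HMM. All state names, observation symbols, and probabilities below are invented for illustration; the paper's actual ontology-derived observations and trained model parameters are not reproduced here.

```python
# Toy HMM: latent developer activities, observed IDE/browser actions.
# Every number and name here is an illustrative assumption.
STATES = ["locate_cause", "search_solution", "apply_solution"]

start = {"locate_cause": 0.6, "search_solution": 0.2, "apply_solution": 0.2}
trans = {
    "locate_cause":    {"locate_cause": 0.5, "search_solution": 0.4, "apply_solution": 0.1},
    "search_solution": {"locate_cause": 0.2, "search_solution": 0.5, "apply_solution": 0.3},
    "apply_solution":  {"locate_cause": 0.3, "search_solution": 0.1, "apply_solution": 0.6},
}
emit = {
    "locate_cause":    {"read_code": 0.7, "web_search": 0.1, "edit_code": 0.2},
    "search_solution": {"read_code": 0.2, "web_search": 0.7, "edit_code": 0.1},
    "apply_solution":  {"read_code": 0.2, "web_search": 0.1, "edit_code": 0.7},
}

def viterbi(observations):
    """Return the most likely activity sequence for a list of observed actions."""
    # prob[s]: best probability of any path ending in state s; path[s]: that path.
    prob = {s: start[s] * emit[s][observations[0]] for s in STATES}
    path = {s: [s] for s in STATES}
    for o in observations[1:]:
        new_prob, new_path = {}, {}
        for s in STATES:
            best_prev = max(STATES, key=lambda p: prob[p] * trans[p][s])
            new_prob[s] = prob[best_prev] * trans[best_prev][s] * emit[s][o]
            new_path[s] = path[best_prev] + [s]
        prob, path = new_prob, new_path
    return path[max(STATES, key=lambda s: prob[s])]

# With these toy parameters, reading code, then searching the web, then
# editing code decodes to the paper's locate/search/apply cycle.
print(viterbi(["read_code", "web_search", "edit_code"]))
```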
@InProceedings{ICSE12p1260,
author = {Tobias Roehm and Walid Maalej},
title = {Automatically Detecting Developer Activities and Problems in Software Development Work},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1260--1263},
doi = {},
year = {2012},
}
Software Process Improvement through the Identification and Removal of Project-Level Knowledge Flow Obstacles
Susan M. Mitchell and Carolyn B. Seaman
(University of Maryland in Baltimore County, USA)
Uncontrollable costs, schedule overruns, and poor end-product quality continue to plague the software engineering field. This research investigates software process improvement (SPI) through the application of knowledge management (KM) at the software project level. A pilot study was conducted to investigate what types of obstacles to knowledge flow exist within a software development project, as well as the potential influence of their mitigation or removal on SPI. The KM technique of “knowledge mapping” was used as a research technique to characterize knowledge flow. Results show that such mitigation or removal was acknowledged by project team members as having the potential to lower project labor cost, improve schedule adherence, and enhance final product quality.
@InProceedings{ICSE12p1264,
author = {Susan M. Mitchell and Carolyn B. Seaman},
title = {Software Process Improvement through the Identification and Removal of Project-Level Knowledge Flow Obstacles},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1264--1267},
doi = {},
year = {2012},
}
Symbiotic General-Purpose and Domain-Specific Languages
Colin Atkinson, Ralph Gerbig, and Bastian Kennel
(University of Mannheim, Germany)
Domain-Specific Modeling Languages (DSMLs) have received great attention in recent years and are expected to play a big role in the future of software engineering as processes become more view-centric. However, they are a "two-edged sword". While they provide strong support for communication within communities, allowing experts to express themselves using concepts tailored to their exact needs, they are a poor vehicle for communication across communities because of their lack of common, transcending concepts. In contrast, General-Purpose Modeling Languages (GPMLs) have the opposite problem - they are poor at the former but good at the latter. The value of models in software engineering would therefore be significantly boosted if the advantages of DSMLs and GPMLs could be combined and models could be viewed in a domain-specific or general-purpose way depending on the needs of the user. In this paper we present an approach for achieving such a synergy based on the orthogonal classification architecture. In this architecture model elements have two classifiers: a linguistic one representing their "general-purpose" and an ontological one representing their "domain-specific" type. By associating visualization symbols with both classifiers it is possible to support two concrete syntaxes at the same time and allow the domain-specific and general-purpose notation to support each other - that is, to form a symbiotic relationship.
@InProceedings{ICSE12p1268,
author = {Colin Atkinson and Ralph Gerbig and Bastian Kennel},
title = {Symbiotic General-Purpose and Domain-Specific Languages},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1268--1271},
doi = {},
year = {2012},
}
Evaluating the Specificity of Text Retrieval Queries to Support Software Engineering Tasks
Sonia Haiduc, Gabriele Bavota, Rocco Oliveto, Andrian Marcus, and Andrea De Lucia
(Wayne State University, USA; University of Salerno, Italy; University of Molise, Italy)
Text retrieval approaches have been used to address many software engineering tasks. In most cases, their use involves issuing a textual query to retrieve a set of relevant software artifacts from the system. The performance of all these approaches depends on the quality of the given query (i.e., its ability to describe the information need in such a way that the relevant software artifacts are retrieved during the search). Currently, the only way to tell that a query failed to lead to the expected software artifacts is by investing time and effort in analyzing the search results. In addition, it is often very difficult to ascertain what part of the query leads to poor results. We propose a novel pre-retrieval metric, which reflects the quality of a query by measuring the specificity of its terms. We exemplify the use of the new specificity metric on the task of concept location in source code. A preliminary empirical study shows that our metric is a good effort predictor for text retrieval-based concept location, outperforming existing techniques from the field of natural language document retrieval.
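The paper's specificity metric is not spelled out in this abstract. A common pre-retrieval specificity measure from document retrieval is the average inverse document frequency of the query's terms, sketched below; this is a standard baseline of the kind the paper compares against, not necessarily the authors' own metric, and the corpus is a toy example.

```python
import math

def avg_idf(query_terms, documents):
    """Average (smoothed) inverse document frequency of the query's terms.
    Higher values mean rarer, hence more specific, terms. A standard
    pre-retrieval predictor; the paper's own metric may differ."""
    n = len(documents)
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in documents if t in d)
        score += math.log((n + 1) / (df + 1))  # smoothed to avoid log(n/0)
    return score / len(query_terms)

# Toy corpus: each "document" is the set of identifiers in a source file.
corpus = [
    {"parse", "xml", "file"},
    {"parse", "json", "config"},
    {"render", "html", "page"},
]

# "xml" occurs in one file, "parse" in two, so a query using the rarer
# term is rated more specific (and so, predictably, more effective).
assert avg_idf(["xml"], corpus) > avg_idf(["parse"], corpus)
```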
@InProceedings{ICSE12p1272,
author = {Sonia Haiduc and Gabriele Bavota and Rocco Oliveto and Andrian Marcus and Andrea De Lucia},
title = {Evaluating the Specificity of Text Retrieval Queries to Support Software Engineering Tasks},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1272--1275},
doi = {},
year = {2012},
}
Co-adapting Human Collaborations and Software Architectures
Christoph Dorn and Richard N. Taylor
(UC Irvine, USA)
Human collaboration has become an integral part of large-scale systems for massive online knowledge sharing, content distribution, and social networking. Maintenance of these complex systems, however, still relies on adaptation mechanisms that remain unaware of the prevailing user collaboration patterns. Consequently, a system cannot react to changes in the interaction behavior thereby impeding the collaboration's evolution. In this paper, we make the case for a human architecture model and its mapping onto software architecture elements as fundamental building blocks for system adaptation.
@InProceedings{ICSE12p1276,
author = {Christoph Dorn and Richard N. Taylor},
title = {Co-adapting Human Collaborations and Software Architectures},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1276--1279},
doi = {},
year = {2012},
}
Release Engineering Practices and Pitfalls
Hyrum K. Wright and Dewayne E. Perry
(University of Texas at Austin, USA)
The release and deployment phase of the software development process is often overlooked as part of broader software engineering research. In this paper, we discuss early results from a set of multiple semi-structured interviews with practicing release engineers. Subjects for the interviews are drawn from a number of different commercial software development organizations, and our interviews focus on why release process faults and failures occur, how organizations recover from them, and how they can be predicted, avoided or prevented in the future. Along the way, the interviews provide insight into the state of release engineering today, and interesting relationships between software architecture and release processes.
@InProceedings{ICSE12p1280,
author = {Hyrum K. Wright and Dewayne E. Perry},
title = {Release Engineering Practices and Pitfalls},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1280--1283},
doi = {},
year = {2012},
}
Augmented Intelligence - The New AI - Unleashing Human Capabilities in Knowledge Work
James M. Corrigan
(Stony Brook University, USA)
In this paper I describe a novel application of contemplative techniques to software engineering with the goal of augmenting the intellectual capabilities of knowledge workers within the field in four areas: flexibility, attention, creativity, and trust. The augmentation of software engineers’ intellectual capabilities is proposed as a third complement to the traditional focus of methodologies on the process and environmental factors of the software development endeavor. I argue that these capabilities have been shown to be open to improvement through the practices traditionally used in spiritual traditions, but now used increasingly in other fields of knowledge work, such as in the medical profession and the education field. Historically, the intellectual capabilities of software engineers have been treated as a given within any particular software development effort. This is argued to be an aspect ripe for inclusion within software development methodologies.
@InProceedings{ICSE12p1284,
author = {James M. Corrigan},
title = {Augmented Intelligence - The New AI - Unleashing Human Capabilities in Knowledge Work},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1284--1287},
doi = {},
year = {2012},
}
NIER for Mining Product and Process Data
Thu, Jun 7, 10:45 - 12:45
On How Often Code Is Cloned across Repositories
Niko Schwarz, Mircea Lungu, and Romain Robbes
(University of Bern, Switzerland; University of Chile, Chile)
Detecting code duplication in large code bases, or even across project boundaries, is problematic due to the massive amount of data involved. Large-scale clone detection also opens new challenges beyond asking for the provenance of a single clone fragment, such as assessing the prevalence of code clones on the entire code base, and their evolution.
We propose a set of lightweight techniques that may scale up to very large amounts of source code in the presence of multiple versions. The common idea behind these techniques is to use bad hashing to get a quick answer. We report on a case study, the Squeaksource ecosystem, which features thousands of software projects, with more than 40 million versions of methods, across more than seven years of evolution. We provide estimates for the prevalence of type-1, type-2, and type-3 clones in Squeaksource.
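The hashing idea can be sketched as follows: hash a normalized rendering of each method body so that clones land in the same bucket. This sketch detects only type-1 clones (identical up to layout and comments) with an exact hash after normalization; it is a simplification of the paper's "bad hashing" technique, which deliberately tolerates near-miss collisions, and all method names and sources below are invented.

```python
import hashlib
from collections import defaultdict

def normalize(method_source):
    """Strip line comments and collapse whitespace so that type-1 clones
    (identical up to layout and comments) normalize to the same string.
    The '#' comment rule is a toy stand-in for a real lexer."""
    lines = []
    for line in method_source.splitlines():
        line = line.split("#")[0]       # drop line comments (toy rule)
        line = " ".join(line.split())   # collapse runs of whitespace
        if line:
            lines.append(line)
    return "\n".join(lines)

def clone_index(methods):
    """Map digest -> ids of methods sharing a normalized body.
    One pass, constant work per method: this is what lets the
    approach scale to millions of method versions."""
    index = defaultdict(list)
    for method_id, source in methods.items():
        digest = hashlib.sha1(normalize(source).encode()).hexdigest()
        index[digest].append(method_id)
    return index

# Toy "ecosystem": the first two methods differ only in layout/comments.
methods = {
    "repoA/m1": "x = 1\ny   = x + 2  # note\n",
    "repoB/m7": "x = 1\ny = x + 2\n",
    "repoC/m3": "z = 42\n",
}
clones = [ids for ids in clone_index(methods).values() if len(ids) > 1]
```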
@InProceedings{ICSE12p1288,
author = {Niko Schwarz and Mircea Lungu and Romain Robbes},
title = {On How Often Code Is Cloned across Repositories},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1288--1291},
doi = {},
year = {2012},
}
Mining Input Sanitization Patterns for Predicting SQL Injection and Cross Site Scripting Vulnerabilities
Lwin Khin Shar and Hee Beng Kuan Tan
(Nanyang Technological University, Singapore)
Static code attributes such as lines of code and cyclomatic complexity have been shown to be useful indicators of defects in software modules. As web applications adopt input sanitization routines to prevent web security risks, static code attributes that represent the characteristics of these routines may be useful for predicting web application vulnerabilities. In this paper, we classify various input sanitization methods into different types and propose a set of static code attributes that represent these types. Then we use data mining methods to predict SQL injection and cross site scripting vulnerabilities in web applications. Preliminary experiments show that our proposed attributes are important indicators of such vulnerabilities.
@InProceedings{ICSE12p1292,
author = {Lwin Khin Shar and Hee Beng Kuan Tan},
title = {Mining Input Sanitization Patterns for Predicting SQL Injection and Cross Site Scripting Vulnerabilities},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1292--1295},
doi = {},
year = {2012},
}
Inferring Developer Expertise through Defect Analysis
Tung Thanh Nguyen, Tien N. Nguyen, Evelyn Duesterwald, Tim Klinger, and Peter Santhanam
(Iowa State University, USA; IBM Research, USA)
Fixing defects is an essential software development activity. For commercial software vendors, the time to repair defects in deployed business-critical software products or applications is a key quality metric for sustained customer satisfaction. In this paper, we report on the analysis of about 1,500 defect records from an IBM middleware product collected over a five-year period. The analysis includes a characterization of each repaired defect by topic and a ranking of developers by inferred expertise on each topic. We find clear evidence that defect resolution time is strongly influenced by the specific developer and his/her expertise in the defect's topic. To validate our approach, we conducted interviews with the product's manager, who provided us with his own ranking of developer expertise for comparison. We argue that our automated developer expertise ranking can be beneficial in the planning of a software project and is applicable beyond software support in the other phases of the software lifecycle.
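The core of such a ranking can be sketched very simply: group resolved defects by topic and order developers by mean resolution time. The records, names, and topics below are invented, and the paper's actual topic inference (from defect text) is not reproduced.

```python
from collections import defaultdict
from statistics import mean

def rank_developers(defects, topic):
    """Rank developers on one topic by mean defect-resolution time,
    shortest first (shorter = more inferred expertise). `defects` is a
    list of (developer, topic, days_to_fix) records -- a toy stand-in
    for the paper's topic-labeled defect records."""
    times = defaultdict(list)
    for dev, t, days in defects:
        if t == topic:
            times[dev].append(days)
    return sorted(times, key=lambda d: mean(times[d]))

# Hypothetical records: alice resolves messaging defects faster than bob.
records = [
    ("alice", "messaging", 2), ("alice", "messaging", 4),
    ("bob", "messaging", 10), ("bob", "storage", 1),
]
assert rank_developers(records, "messaging") == ["alice", "bob"]
```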
@InProceedings{ICSE12p1296,
author = {Tung Thanh Nguyen and Tien N. Nguyen and Evelyn Duesterwald and Tim Klinger and Peter Santhanam},
title = {Inferring Developer Expertise through Defect Analysis},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1296--1299},
doi = {},
year = {2012},
}
Green Mining: Investigating Power Consumption across Versions
Abram Hindle
(University of Alberta, Canada)
Power consumption is increasingly becoming a concern not only for electrical engineers but for software engineers as well, due to the growing popularity of power-limited contexts such as mobile computing, smartphones, and cloud computing. Software changes can alter software power consumption behaviour and can cause power performance regressions. By tracking software power consumption, we can build models that provide suggestions to avoid power regressions.
There is much research on software power consumption, but little focus on the relationship between software changes and power consumption. Most work measures the power consumption of a single software task; instead, we seek to extend this work across the history (revisions) of a project. We develop a set of tests for a well-established product and then run those tests across all versions of the product while recording the power usage of these tests. We provide and demonstrate a methodology that enables the analysis of power consumption performance for over 500 nightly builds of Firefox 3.6; we show that software change does induce changes in power consumption. This methodology and case study are a first step towards combining power measurement and mining software repositories research, thus enabling developers to avoid power regressions via power consumption awareness.
@InProceedings{ICSE12p1300,
author = {Abram Hindle},
title = {Green Mining: Investigating Power Consumption across Versions},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1300--1303},
doi = {},
year = {2012},
}
Multi-label Software Behavior Learning
Yang Feng and Zhenyu Chen
(Nanjing University, China)
Software behavior learning is an important task in software engineering. Software behavior is usually represented as a program execution, and similar executions are expected to exhibit similar behavior, i.e., reveal the same faults. Existing efforts use single-label learning to assign a single label (fault) to a failing execution. However, a failing execution may be caused by several faults simultaneously; hence, multiple labels need to be assigned to support software engineering tasks in practice. In this paper, we present multi-label software behavior learning. A well-known multi-label learning algorithm, ML-KNN, is used to achieve comprehensive learning of software behavior. We conducted a preliminary experiment on two industrial programs, flex and grep. The experimental results show that multi-label learning can produce more precise and complete results than single-label learning.
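The multi-label idea can be sketched with a simplified nearest-neighbour vote over execution profiles: assign every fault label supported by a majority of the k most similar past failing executions. ML-KNN proper additionally estimates Bayesian posteriors from label frequencies; that refinement, and the real coverage data, are omitted here, and all profiles and fault names below are invented.

```python
def multilabel_knn(query, examples, k=3, threshold=0.5):
    """Assign every label held by more than `threshold` of the k past
    executions nearest to `query`. Executions are binary coverage-like
    vectors; each is paired with the set of faults it revealed.
    (A majority-vote simplification of ML-KNN, not the full algorithm.)"""
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))  # Hamming distance
    nearest = sorted(examples, key=lambda e: dist(query, e[0]))[:k]
    candidates = {l for _, labels in nearest for l in labels}
    return sorted(l for l in candidates
                  if sum(l in labels for _, labels in nearest) / k > threshold)

# Toy history: (execution profile, set of faults blamed for it).
history = [
    ((1, 0, 1, 0), {"fault1"}),
    ((1, 0, 1, 1), {"fault1", "fault2"}),
    ((0, 1, 0, 1), {"fault2"}),
    ((1, 1, 1, 1), {"fault1", "fault2"}),
]

# A new failing execution near the multi-fault examples gets both labels.
assert multilabel_knn((1, 0, 1, 1), history) == ["fault1", "fault2"]
```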
@InProceedings{ICSE12p1304,
author = {Yang Feng and Zhenyu Chen},
title = {Multi-label Software Behavior Learning},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1304--1307},
doi = {},
year = {2012},
}
Trends in Object-Oriented Software Evolution: Investigating Network Properties
Alexander Chatzigeorgiou and George Melas
(University of Macedonia, Greece)
The rise of social networks and the accompanying interest to study their evolution has stimulated a number of research efforts to analyze their growth patterns by means of network analysis. The inherent graph-like structure of object-oriented systems calls for the application of the corresponding methods and tools to analyze software evolution. In this paper we investigate network properties of two open-source systems and observe interesting phenomena regarding their growth. Relating the observed evolutionary trends to principles and laws of software design enables a high-level assessment of tendencies in the underlying design quality.
@InProceedings{ICSE12p1308,
author = {Alexander Chatzigeorgiou and George Melas},
title = {Trends in Object-Oriented Software Evolution: Investigating Network Properties},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1308--1311},
doi = {},
year = {2012},
}
Exploring Techniques for Rationale Extraction from Existing Documents
Benjamin Rogers, James Gung, Yechen Qiao, and Janet E. Burge
(Miami University, USA)
The rationale for a software system captures the designers’ and developers’ intent behind the decisions made during its development. This information has many potential uses but is typically not captured explicitly. This paper describes an initial investigation into the use of text mining and parsing techniques for identifying rationale from existing documents. Initial results indicate that the use of linguistic features results in better precision but significantly lower recall than using text mining.
@InProceedings{ICSE12p1312,
author = {Benjamin Rogers and James Gung and Yechen Qiao and Janet E. Burge},
title = {Exploring Techniques for Rationale Extraction from Existing Documents},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1312--1315},
doi = {},
year = {2012},
}
NIER to Leverage Social Aspects
Fri, Jun 8, 08:45 - 10:15
Continuous Social Screencasting to Facilitate Software Tool Discovery
Emerson Murphy-Hill
(North Carolina State University, USA)
The wide variety of software development tools available today has great potential to improve the way developers make software, but that potential goes unfulfilled when developers are not aware of useful tools. In this paper, I introduce the idea of continuous social screencasting, a novel mechanism to help developers gain awareness of relevant tools by enabling them to learn remotely and asynchronously from their peers. The idea builds on the strengths of several existing techniques that developers already use for discovering new tools, including screencasts and online social networks.
@InProceedings{ICSE12p1316,
author = {Emerson Murphy-Hill},
title = {Continuous Social Screencasting to Facilitate Software Tool Discovery},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1316--1319},
doi = {},
year = {2012},
}
UDesignIt: Towards Social Media for Community-Driven Design
Phil Greenwood, Awais Rashid, and James Walkerdine
(Lancaster University, UK)
Online social networks are now commonplace in day-to-day life. They are also increasingly used to drive social action initiatives, whether led by government or by communities themselves (e.g., SeeClickFix, LoveLewisham.org, mumsnet). However, such initiatives are mainly used for crowdsourcing community views or coordinating activities. With the changing global economic and political landscape, there is an ever more pressing need to engage citizens on a large scale, not only in consultations about systems that affect them but also directly in the design of these very systems. In this paper we present the UDesignIt platform, which combines social media technologies with software engineering concepts to empower communities to discuss and extract high-level design features. It combines natural language processing, feature modelling, and visual overlays in the form of ``image clouds'' to enable communities and software engineers alike to unlock the knowledge contained in the unstructured and unfiltered content of social media where people discuss social problems and their solutions. By automatically extracting key themes and presenting them in a structured and organised manner in near real-time, the approach drives a shift towards large-scale engagement of community stakeholders in system design.
@InProceedings{ICSE12p1320,
author = {Phil Greenwood and Awais Rashid and James Walkerdine},
title = {UDesignIt: Towards Social Media for Community-Driven Design},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1320--1323},
doi = {},
year = {2012},
}
Influencing the Adoption of Software Engineering Methods Using Social Software
Leif Singer and Kurt Schneider
(Leibniz Universität Hannover, Germany)
Software engineering research and practice provide a wealth of methods that improve the quality of software and lower the cost of producing it. Yet even where processes mandate their use, these methods are not employed consistently; software developers and development organizations thus cannot fully benefit from them. We propose a method that, for a given software engineering method, provides instructions on how to improve its adoption using social software. This approach leverages the intrinsic motivation of software developers rather than prescribing behavior. As a result, we believe that software engineering methods will be applied better and more frequently.
@InProceedings{ICSE12p1324,
author = {Leif Singer and Kurt Schneider},
title = {Influencing the Adoption of Software Engineering Methods Using Social Software},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1324--1327},
doi = {},
year = {2012},
}
Toward Actionable, Broadly Accessible Contests in Software Engineering
Jane Cleland-Huang, Yonghee Shin, Ed Keenan, Adam Czauderna, Greg Leach, Evan Moritz, Malcom Gethers, Denys Poshyvanyk, Jane Huffman Hayes, and Wenbin Li
(DePaul University, USA; College of William and Mary, USA; University of Kentucky, USA)
Software Engineering challenges and contests are becoming increasingly popular for focusing researchers' efforts on particular problems. Such contests tend to follow either an exploratory model, in which the contest holders provide data and ask the contestants to discover ``interesting things'' they can do with it, or task-oriented contests in which contestants must perform a specific task on a provided dataset. Only occasionally do contests provide more rigorous evaluation mechanisms that precisely specify the task to be performed and the metrics that will be used to evaluate the results. In this paper, we propose actionable and crowd-sourced contests: actionable because the contest describes a precise task, datasets, and evaluation metrics, and also provides a downloadable operating environment for the contest; and crowd-sourced because providing these features creates accessibility to Information Technology hobbyists and students who are attracted by the challenge. Our proposed approach is illustrated using research challenges from the software traceability area as well as an experimental workbench named TraceLab.
@InProceedings{ICSE12p1328,
author = {Jane Cleland-Huang and Yonghee Shin and Ed Keenan and Adam Czauderna and Greg Leach and Evan Moritz and Malcom Gethers and Denys Poshyvanyk and Jane Huffman Hayes and Wenbin Li},
title = {Toward Actionable, Broadly Accessible Contests in Software Engineering},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1328--1331},
doi = {},
year = {2012},
}
CodeTimeline: Storytelling with Versioning Data
Adrian Kuhn and Mirko Stocker
(University of British Columbia, Canada; University of Applied Sciences Rapperswil, Switzerland)
Working with a software system typically requires knowledge of the system's history; however, this knowledge is often only the tribal memory of the development team. In past user studies we have observed that, when presented with collaboration views and word clouds from a system's history, engineers start sharing memories linked to those visualizations. In this paper we propose an approach based on a storytelling visualization designed to entice engineers to share and document their tribal memory. Sticky notes can be used to share memories of a system's lifetime events, such as past design rationales, but also more casual memories like pictures from an after-work beer or a hackathon. We present an early-stage prototype implementation and include two design studies created using that prototype.
@InProceedings{ICSE12p1332,
author = {Adrian Kuhn and Mirko Stocker},
title = {CodeTimeline: Storytelling with Versioning Data},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1332--1335},
doi = {},
year = {2012},
}
NIER for Verification and Evolution
Fri, Jun 8, 10:45 - 12:45
Analyzing Multi-agent Systems with Probabilistic Model Checking Approach
Songzheng Song, Jianye Hao, Yang Liu, Jun Sun, Ho-Fung Leung, and Jin Song Dong
(National University of Singapore, Singapore; Chinese University of Hong Kong, China; University of Technology and Design, Singapore)
Multi-agent systems, which are composed of autonomous agents, have been successfully employed as a modeling paradigm in many scenarios. However, it is challenging to guarantee the correctness of their behaviors due to the complex nature of the autonomous agents, especially when they have stochastic characteristics. In this work, we propose to apply probabilistic model checking to analyze multi-agent systems. A modeling language called PMA is defined to specify such systems, and LTL properties and the logic of knowledge, combined with probabilistic requirements, are supported to analyze system behaviors. An initial evaluation indicates the effectiveness of our current progress; meanwhile, some challenges and possible solutions are discussed as ongoing work.
@InProceedings{ICSE12p1336,
author = {Songzheng Song and Jianye Hao and Yang Liu and Jun Sun and Ho-Fung Leung and Jin Song Dong},
title = {Analyzing Multi-agent Systems with Probabilistic Model Checking Approach},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1336--1339},
doi = {},
year = {2012},
}
Brace: An Assertion Framework for Debugging Cyber-Physical Systems
Kevin Boos, Chien-Liang Fok, Christine Julien, and Miryung Kim
(University of Texas at Austin, USA)
Developing cyber-physical systems (CPS) is challenging because correctness depends on both logical and physical states, which are collectively difficult to observe. Developers often need to repeatedly rerun the system, observing its behavior and tweaking the hardware and software until it meets minimum requirements. This process is tedious, error-prone, and lacks rigor. To address this, we propose BRACE, a framework that simplifies the process by enabling developers to correlate cyber (i.e., logical) and physical properties of the system via assertions. This paper presents our initial investigation into the requirements and semantics of such assertions, which we call CPS assertions. We discuss our experience implementing and using the framework with a mobile robot, and highlight key future research challenges.
@InProceedings{ICSE12p1340,
author = {Kevin Boos and Chien-Liang Fok and Christine Julien and Miryung Kim},
title = {Brace: An Assertion Framework for Debugging Cyber-Physical Systems},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1340--1343},
doi = {},
year = {2012},
}
Augmenting Test Suites Effectiveness by Increasing Output Diversity
Nadia Alshahwan and Mark Harman
(University College London, UK)
The uniqueness (or otherwise) of test outputs ought to have a bearing on test effectiveness, yet it has not previously been studied. In this paper we introduce a novel test suite adequacy criterion based on output uniqueness. We propose four definitions of output uniqueness with varying degrees of strictness. We present a preliminary evaluation for web application testing which confirms that output uniqueness enhances fault-finding effectiveness: the approach outperforms random augmentation in fault-finding ability by an overall average of 280% in five medium-sized, real-world web applications.
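One plausible reading of the weakest form of this criterion can be sketched as the fraction of test cases whose output no other test in the suite reproduces. The paper defines four variants of differing strictness, none of which is reproduced exactly here, and the "web application" below is a toy stand-in.

```python
def output_uniqueness(suite, program):
    """Fraction of test cases whose output is produced by no other test
    in the suite. One illustrative formalization of output uniqueness;
    the paper defines four variants of differing strictness."""
    outputs = [program(t) for t in suite]
    return sum(outputs.count(o) == 1 for o in outputs) / len(outputs)

# Toy "web application": maps an input to a rendered page string.
def page(n):
    return "error" if n < 0 else f"result:{n % 3}"

dense = [0, 1, 2, -1]      # four distinct outputs: fully output-diverse
redundant = [0, 3, 6, 9]   # all four collapse to the page "result:0"
assert output_uniqueness(dense, page) == 1.0
assert output_uniqueness(redundant, page) == 0.0
```

A suite augmentation strategy would then prefer new test cases that raise this score rather than ones that merely add coverage.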
@InProceedings{ICSE12p1344,
author = {Nadia Alshahwan and Mark Harman},
title = {Augmenting Test Suites Effectiveness by Increasing Output Diversity},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1344--1347},
doi = {},
year = {2012},
}
Improving IDE Recommendations by Considering Global Implications of Existing Recommendations
Kıvanç Muşlu, Yuriy Brun, Reid Holmes, Michael D. Ernst, and David Notkin
(University of Washington, USA; University of Waterloo, Canada)
Modern integrated development environments (IDEs) offer recommendations to aid development, such as auto-completions, refactorings, and fixes for compilation errors. Recommendations for each code location are typically computed independently of the other locations. We propose that an IDE should consider the whole codebase, not just the local context, before offering recommendations for a particular location. We demonstrate the potential benefits of our technique by presenting four concrete scenarios in which the Eclipse IDE fails to provide proper Quick Fixes at relevant locations, even though it offers those fixes at other locations. We describe a technique that can augment an existing IDE’s recommendations to account for non-local information. For example, when some compilation errors depend on others, our technique helps the developer decide which errors to resolve first.
@InProceedings{ICSE12p1348,
author = {Kıvanç Muşlu and Yuriy Brun and Reid Holmes and Michael D. Ernst and David Notkin},
title = {Improving IDE Recommendations by Considering Global Implications of Existing Recommendations},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1348--1351},
doi = {},
year = {2012},
}
Towards Flexible Evolution of Dynamically Adaptive Systems
Gilles Perrouin, Brice Morin, Franck Chauvel, Franck Fleurey, Jacques Klein, Yves Le Traon, Olivier Barais, and Jean-Marc Jézéquel
(University of Namur, Belgium; SINTEF, Norway; University of Luxembourg, Luxembourg; IRISA, France)
Modern software systems need to be continuously available under varying conditions. Their ability to dynamically adapt to their execution context is thus increasingly seen as a key to their success. Recently, many approaches have been proposed to design and support the execution of Dynamically Adaptive Systems (DAS). However, the ability of a DAS to evolve is limited to the addition, update, or removal of adaptation rules or reconfiguration scripts. These artifacts are very specific to the control loop managing such a DAS, and runtime evolution of the DAS requirements may affect other parts of the DAS. In this paper, we argue for evolving all parts of the loop. We suggest leveraging recent advances in model-driven techniques to offer an approach that supports the evolution of both systems and their adaptation capabilities. The basic idea is to consider the control loop itself as an adaptive system.
@InProceedings{ICSE12p1352,
author = {Gilles Perrouin and Brice Morin and Franck Chauvel and Franck Fleurey and Jacques Klein and Yves Le Traon and Olivier Barais and Jean-Marc Jézéquel},
title = {Towards Flexible Evolution of Dynamically Adaptive Systems},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1352--1355},
doi = {},
year = {2012},
}
Towards Business Processes Orchestrating the Physical Enterprise with Wireless Sensor Networks
Fabio Casati, Florian Daniel, Guenadi Dantchev, Joakim Eriksson, Niclas Finne, Stamatis Karnouskos, Patricio Moreno Montero, Luca Mottola, Felix Jonathan Oppermann, Gian Pietro Picco, Antonio Quartulli, Kay Römer, Patrik Spiess, Stefano Tranquillini, and Thiemo Voigt
(University of Trento, Italy; SAP, Germany; Swedish Institute of Computer Science, Sweden; Acciona Infraestructuras, Spain; University of Lübeck, Germany)
The industrial adoption of wireless sensor networks (WSNs) is hampered by two main factors. First, there is a lack of integration of WSNs with business process modeling languages and back-ends. Second, programming WSNs is still challenging as it is mainly performed at the operating system level. To this end, we provide makeSense: a unified programming framework and a compilation chain that, from high-level business process specifications, generates code ready for deployment on WSN nodes.
@InProceedings{ICSE12p1356,
author = {Fabio Casati and Florian Daniel and Guenadi Dantchev and Joakim Eriksson and Niclas Finne and Stamatis Karnouskos and Patricio Moreno Montero and Luca Mottola and Felix Jonathan Oppermann and Gian Pietro Picco and Antonio Quartulli and Kay Römer and Patrik Spiess and Stefano Tranquillini and Thiemo Voigt},
title = {Towards Business Processes Orchestrating the Physical Enterprise with Wireless Sensor Networks},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1356--1359},
doi = {},
year = {2012},
}
Engineering and Verifying Requirements for Programmable Self-Assembling Nanomachines
Robyn Lutz, Jack Lutz, James Lathrop, Titus Klinge, Eric Henderson, Divita Mathur, and Dalia Abo Sheasha
(Iowa State University, USA; California Institute of Technology, USA)
We propose an extension of van Lamsweerde’s goal-oriented requirements engineering to the domain of programmable DNA nanotechnology. This is a domain in which individual devices (agents) are at most a few dozen nanometers in diameter. These devices are programmed to assemble themselves from molecular components and perform their assigned tasks. The devices carry out their tasks in the probabilistic world of chemical kinetics, so they are individually error-prone. However, the number of devices deployed is roughly on the order of a nanomole, and some goals are achieved when enough of these agents achieve their assigned subgoals. We show that it is useful in this setting to augment the AND/OR goal diagrams to allow goal refinements that are mediated by threshold functions, rather than ANDs or ORs. We illustrate this method by engineering requirements for a system of molecular detectors (DNA origami “pliers” that capture target molecules) invented by Kuzuya, Sakai, Yamazaki, Xu, and Komiyama (2011). We model this system in the Prism probabilistic symbolic model checker, and we use Prism to verify that requirements are satisfied. This gives prima facie evidence that software engineering methods can be used to make DNA nanotechnology more productive, predictable and safe.
@InProceedings{ICSE12p1360,
author = {Robyn Lutz and Jack Lutz and James Lathrop and Titus Klinge and Eric Henderson and Divita Mathur and Dalia Abo Sheasha},
title = {Engineering and Verifying Requirements for Programmable Self-Assembling Nanomachines},
booktitle = {Proc.\ ICSE},
publisher = {IEEE},
pages = {1360--1363},
doi = {},
year = {2012},
}