2012 20th IEEE International Requirements Engineering Conference (RE),
September 24–28, 2012,
Chicago, Illinois, USA
Main Research Track
Handling Uncertainty
Wed, Sep 26, 10:30 - 12:00
Managing Requirements Uncertainty with Partial Models
Rick Salay, Marsha Chechik, and Jennifer Horkoff
(University of Toronto, Canada)
Models are good at expressing information that is known, but typically do not support representing what a modeler does not yet know at a particular phase of the software development process. Partial models address this gap by precisely representing uncertainty about model content. In previous work, we developed a general approach for defining partial models and applied it to capturing uncertainty, including reasoning over design models containing uncertainty. In this paper, we show how to apply our approach to managing requirements uncertainty. In particular, we address the problems of specifying uncertainty within a requirements model, refining a model as uncertainty is reduced, and reasoning with traceability relations between models containing uncertainty. We illustrate our approach using the meeting scheduler example.
@InProceedings{RE12p1,
author = {Rick Salay and Marsha Chechik and Jennifer Horkoff},
title = {Managing Requirements Uncertainty with Partial Models},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {1--10},
doi = {},
year = {2012},
}
Speculative Requirements: Automatic Detection of Uncertainty in Natural Language Requirements
Hui Yang, Anne De Roeck, Vincenzo Gervasi, Alistair Willis, and Bashar Nuseibeh
(Open University, UK; University of Pisa, Italy; Lero, Ireland)
Stakeholders frequently use speculative language when they need to convey their requirements with some degree of uncertainty. Due to the intrinsic vagueness of speculative language, speculative requirements risk being misunderstood and their associated uncertainty overlooked, and may benefit from careful treatment in the requirements engineering process. In this paper, we present a linguistically oriented approach to automatic detection of uncertainty in natural language (NL) requirements. Our approach comprises two stages. First, we identify speculative sentences by applying a machine learning algorithm called Conditional Random Fields (CRFs) to identify uncertainty cues. The algorithm exploits a rich set of lexical and syntactic features extracted from requirements sentences. Second, we determine the scope of uncertainty. We use a rule-based approach that draws on a set of hand-crafted linguistic heuristics to determine the uncertainty scope with the help of dependency structures present in the sentence parse tree. We report on a series of experiments we conducted to evaluate the performance and usefulness of our system.
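The first, cue-identification stage can be illustrated with a minimal sketch. A hand-picked lexicon lookup stands in here for the paper's trained CRF model; the cue list, function names, and whitespace tokenization are all illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of the cue-identification stage; a hand-picked
# lexicon lookup stands in for the paper's trained CRF model.
SPECULATIVE_CUES = {"may", "might", "could", "possibly", "probably", "perhaps"}

def find_uncertainty_cues(sentence: str) -> list[str]:
    """Return the speculative cues found in a requirements sentence."""
    tokens = [t.strip(".,;") for t in sentence.lower().split()]
    return [t for t in tokens if t in SPECULATIVE_CUES]

def is_speculative(sentence: str) -> bool:
    return bool(find_uncertainty_cues(sentence))
```

For instance, `is_speculative("The server may drop idle connections.")` holds, while a plain "shall" requirement triggers no cue; the CRF earns its keep on cues this lexicon cannot disambiguate.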
@InProceedings{RE12p11,
author = {Hui Yang and Anne De Roeck and Vincenzo Gervasi and Alistair Willis and Bashar Nuseibeh},
title = {Speculative Requirements: Automatic Detection of Uncertainty in Natural Language Requirements},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {11--20},
doi = {},
year = {2012},
}
Resolving Uncertainty in Automotive Feature Interactions
Silky Arora, Prahladavaradan Sampath, and Ramesh S
(General Motors, India)
The modern automobile is a complex electronic system with a number of features providing functionalities for driver and passenger convenience, control of the vehicle, and safety of the occupants. As new features are developed and introduced into the automobile, they interact with already existing features, sometimes resulting in undesirable behaviours. These undesirable interactions are often detected very late in the development cycle, or sometimes even in the field. This introduces uncertainty in the system development process, as changes to address these interactions often result in a cascading series of changes whose scope is difficult to predict. This paper presents a method and algorithms for identifying and resolving feature interactions early in the development life-cycle by addressing the problem at the level of requirements specifications. We have applied this method successfully in the automotive domain and present a case study of detecting and resolving feature interactions.
@InProceedings{RE12p21,
author = {Silky Arora and Prahladavaradan Sampath and Ramesh S},
title = {Resolving Uncertainty in Automotive Feature Interactions},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {21--30},
doi = {},
year = {2012},
}
Requirements Processes
Wed, Sep 26, 10:30 - 12:00
Process Improvement for Traceability: A Study of Human Fallibility
Wei-Keat Kong, Jane Huffman Hayes, Alex Dekhtyar, and Olga Dekhtyar
(University of Kentucky, USA; Cal Poly, USA)
Human analysts working with results from automated traceability tools often make incorrect decisions that lead to lower quality final trace matrices. As the human must vet the results of trace tools for mission- and safety-critical systems, the hope of developing expedient and accurate tracing procedures lies in understanding how analysts work with trace matrices. This paper describes a study to understand when and why humans make correct and incorrect decisions during tracing tasks through logs of analyst actions. In addition to the traditional measures of recall and precision to describe the accuracy of the results, we introduce and study new measures that focus on analyst work quality: potential recall, sensitivity, and effort distribution. We use these measures to visualize analyst progress towards the final trace matrix, identifying factors that may influence their performance and determining how actual tracing strategies, derived from analyst logs, affect results.
@InProceedings{RE12p31,
author = {Wei-Keat Kong and Jane Huffman Hayes and Alex Dekhtyar and Olga Dekhtyar},
title = {Process Improvement for Traceability: A Study of Human Fallibility},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {31--40},
doi = {},
year = {2012},
}
How do Software Architects Consider Non-functional Requirements: An Exploratory Study
David Ameller, Claudia Ayala, Jordi Cabot, and Xavier Franch
(Universitat Politècnica de Catalunya, Spain; INRIA, France)
Dealing with non-functional requirements (NFRs) has posed a challenge to software engineers for many years. Over time, many methods and techniques have been proposed to improve their elicitation, documentation, and validation. Knowing more about the state of the practice on these topics may benefit both practitioners’ and researchers’ daily work. A few empirical studies have been conducted in the past, but none from the perspective of software architects, in spite of the great influence that NFRs have on architects’ daily practices. This paper presents some of the findings of an empirical study based on 13 interviews with software architects. It addresses questions such as: who decides the NFRs, what types of NFRs matter to architects, how NFRs are documented, and how NFRs are validated. The results are contextualized with existing previous work.
@InProceedings{RE12p41,
author = {David Ameller and Claudia Ayala and Jordi Cabot and Xavier Franch},
title = {How do Software Architects Consider Non-functional Requirements: An Exploratory Study},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {41--50},
doi = {},
year = {2012},
}
Evaluating the Software Product Management Maturity Matrix
Willem Bekkers, Sjaak Brinkkemper, Lucas van den Bemd, Frederik Mijnhardt, Christoph Wagner, and Inge van de Weerd
(Utrecht University, Netherlands; VU University Amsterdam, Netherlands)
Product managers play a pivotal role in maximizing value for software companies. To assist product managers in their activities the Software Product Management (SPM) Maturity Matrix has been created that enables product managers to benchmark their organization, assess individual processes and apply best practices to create an effective SPM environment. Although a number of case studies and expert evaluations have been performed, a large scale quantitative analysis has not yet been conducted to evaluate this instrument. This research evaluates and improves the SPM Maturity Matrix based on 62 case studies. The cases were analyzed to uncover anomalies: blocking questions, blocking levels, and undifferentiating questions. The anomalies were then discussed in a workgroup with experts which resulted in suggested improvements to address the anomalies. The suggestions of the workgroup will be used to improve the SPM Maturity Matrix. As an additional result, the case studies also provide valuable insight into the maturity of software companies in industry.
@InProceedings{RE12p51,
author = {Willem Bekkers and Sjaak Brinkkemper and Lucas van den Bemd and Frederik Mijnhardt and Christoph Wagner and Inge van de Weerd},
title = {Evaluating the Software Product Management Maturity Matrix},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {51--60},
doi = {},
year = {2012},
}
Requirements Management and Tracing 1
Wed, Sep 26, 13:30 - 15:00
Identifying Outdated Requirements Based on Source Code Changes
Eya Ben Charrada, Anne Koziolek, and Martin Glinz
(University of Zurich, Switzerland)
Keeping requirements specifications up-to-date when systems evolve is a manual and expensive task. Software engineers have to go through the whole requirements document and look for the requirements that are affected by a change. Consequently, engineers usually apply changes to the implementation directly and leave requirements unchanged.
In this paper, we propose an approach for automatically detecting outdated requirements based on changes in the code. Our approach first identifies the changes in the code that are likely to affect requirements. Then it extracts a set of keywords describing the changes. These keywords are traced to the requirements specification, using an existing automated traceability tool, to identify affected requirements.
Automatically identifying outdated requirements reduces the effort and time needed for the maintenance of requirements specifications significantly and thus helps preserve the knowledge contained in them.
We evaluated our approach in a case study where we analyzed two consecutive source code versions and were able to detect 12 requirements-related changes out of 14 with a precision of 79%. Then we traced a set of keywords we extracted from these changes to the requirements specification. In comparison to simply tracing changed classes to requirements, we got better results in most cases.
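The pipeline described above, extracting keywords from changed code and tracing them to requirements text, can be illustrated roughly as follows. The identifier-splitting rule and the simple keyword-overlap matcher are assumptions standing in for the existing automated traceability tool the authors rely on:

```python
import re

def keywords_from_identifiers(identifiers):
    """Split camelCase/snake_case identifiers from changed code into keywords."""
    words = set()
    for ident in identifiers:
        for part in re.split(r"_|(?<=[a-z])(?=[A-Z])", ident):
            if part:
                words.add(part.lower())
    return words

def trace_to_requirements(change_keywords, requirements, threshold=1):
    """Return ids of requirements whose text shares at least `threshold`
    keywords with the change; a stand-in for the IR-based tracing step."""
    affected = []
    for req_id, text in requirements.items():
        req_words = set(text.lower().split())
        if len(change_keywords & req_words) >= threshold:
            affected.append(req_id)
    return affected
```

A change to `sendEmailNotification` would thus surface a requirement mentioning "send" and "email" as potentially outdated, while unrelated requirements stay untouched.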
@InProceedings{RE12p61,
author = {Eya Ben Charrada and Anne Koziolek and Martin Glinz},
title = {Identifying Outdated Requirements Based on Source Code Changes},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {61--70},
doi = {},
year = {2012},
}
The Quest for Ubiquity: A Roadmap for Software and Systems Traceability Research
Orlena C. Z. Gotel, Jane Cleland-Huang, Jane Huffman Hayes, Andrea Zisman, Alexander Egyed, Paul Grünbacher, and Giuliano Antoniol
(DePaul University, USA; University of Kentucky, USA; City University London, UK; JKU Linz, Austria; École Polytechnique de Montréal, Canada)
Traceability underlies many important software and systems engineering activities, such as change impact analysis and regression testing. Despite important research advances, as in the automated creation and maintenance of trace links, traceability implementation and use are still not pervasive in industry. A community of traceability researchers and practitioners has been collaborating to understand the hurdles to making traceability ubiquitous. Over a series of years, workshops have been held to elicit and refine research challenges and related tasks to address these shortcomings. A continuing discussion within the community has resulted in the research roadmap of this paper. We present a brief view of the state of the art in traceability, the grand challenge for traceability, and future directions for the field.
@InProceedings{RE12p71,
author = {Orlena C. Z. Gotel and Jane Cleland-Huang and Jane Huffman Hayes and Andrea Zisman and Alexander Egyed and Paul Grünbacher and Giuliano Antoniol},
title = {The Quest for Ubiquity: A Roadmap for Software and Systems Traceability Research},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {71--80},
doi = {},
year = {2012},
}
Enhancing Candidate Link Generation for Requirements Tracing: The Cluster Hypothesis Revisited
Nan Niu and Anas Mahmoud
(Mississippi State University, USA)
Modern requirements tracing tools employ information retrieval methods to automatically generate candidate links. Due to the inherent trade-off between recall and precision, such methods cannot achieve a high coverage without also retrieving a great number of false positives, causing a significant drop in result accuracy. In this paper, we propose an approach to improving the quality of candidate link generation for the requirements tracing process. We base our research on the cluster hypothesis which suggests that correct and incorrect links can be grouped in high-quality and low-quality clusters respectively. Result accuracy can thus be enhanced by identifying and filtering out low-quality clusters. We describe our approach by investigating three open-source datasets, and further evaluate our work through an industrial study. The results show that our approach outperforms a baseline pruning strategy and that improvements are still possible.
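The cluster-hypothesis pruning step can be sketched in a few lines. Here clusters of candidate links, each link paired with its similarity score, are assumed to be precomputed by some clustering of the IR output; dropping whole clusters whose mean score falls below a threshold is a simplified stand-in for the authors' quality-based filtering:

```python
from statistics import mean

def filter_low_quality_clusters(clusters, quality_threshold):
    """Keep candidate links only from clusters whose mean similarity
    score meets the threshold; low-quality clusters are pruned
    wholesale, as the cluster hypothesis suggests."""
    kept = []
    for cluster in clusters:
        if mean(score for _, score in cluster) >= quality_threshold:
            kept.extend(link for link, _ in cluster)
    return kept
```

The point of clustering, rather than thresholding individual links, is that a correct link with a mediocre score survives when it sits in a strong cluster, which is how precision improves without sacrificing recall.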
@InProceedings{RE12p81,
author = {Nan Niu and Anas Mahmoud},
title = {Enhancing Candidate Link Generation for Requirements Tracing: The Cluster Hypothesis Revisited},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {81--90},
doi = {},
year = {2012},
}
Legal and Regulatory Requirements
Wed, Sep 26, 15:30 - 17:00
Reconciling Multi-jurisdictional Legal Requirements: A Case Study in Requirements Water Marking
David G. Gordon and Travis D. Breaux
(CMU, USA)
Companies that own, license, or maintain personal information face a daunting number of privacy and security regulations. Companies are subject to new regulations from one or more governing bodies, when companies introduce new or existing products into a jurisdiction, when regulations change, or when data is transferred across political borders. To address this problem, we developed a framework called "requirements water marking" that business analysts can use to align and reconcile requirements from multiple jurisdictions (municipalities, provinces, nations) to produce a single high or low standard of care. We evaluate the framework in an empirical case study conducted over a subset of U.S. data breach notification laws that require companies to secure their data and notify consumers in the event of data loss or theft. In this study, applying our framework reduced the number of requirements a company must comply with by 76% across 8 jurisdictions. We show how the framework surfaces critical requirements trade-offs and potential regulatory conflicts that companies must address during the reconciliation process. We summarize our results, including surveys of information technology law experts to contextualize our empirical results in legal practice.
@InProceedings{RE12p91,
author = {David G. Gordon and Travis D. Breaux},
title = {Reconciling Multi-jurisdictional Legal Requirements: A Case Study in Requirements Water Marking},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {91--100},
doi = {},
year = {2012},
}
Managing Changing Compliance Requirements by Predicting Regulatory Evolution: An Adaptability Framework
Jeremy C. Maxwell, Annie I. Antón, and Peter Swire
(North Carolina State University, USA; Allscripts Healthcare Solutions, USA; Georgia Tech, USA; Ohio State University, USA)
Over time, laws change to meet evolving social needs. Requirements engineers who develop software for regulated domains, such as healthcare or finance, must adapt their software as laws change to maintain legal compliance. In the United States, regulatory agencies will almost always release a proposed regulation, or rule, and accept comments from the public. The agency then considers these comments when drafting a final rule that will be binding on the regulated domain. Herein, we examine how these proposed rules evolve into final rules, and propose an Adaptability Framework. This framework can aid software engineers in predicting which areas of a proposed rule are most likely to evolve, allowing engineers to begin building towards the more stable sections of the rule. We develop the framework through a formative study using the Health Insurance Portability and Accountability Act (HIPAA) Security Rule and apply it in a summative study on the Health Information Technology: Initial Set of Standards, Implementation Specifications, and Certification Criteria for Electronic Health Record Technology.
@InProceedings{RE12p101,
author = {Jeremy C. Maxwell and Annie I. Antón and Peter Swire},
title = {Managing Changing Compliance Requirements by Predicting Regulatory Evolution: An Adaptability Framework},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {101--110},
doi = {},
year = {2012},
}
RE@Runtime
Thu, Sep 27, 10:30 - 12:00
Requirements-Driven Adaptive Security: Protecting Variable Assets at Runtime
Mazeiar Salehie, Liliana Pasquale, Inah Omoronyia, Raian Ali, and Bashar Nuseibeh
(Lero, Ireland; Bournemouth University, UK; Open University, UK)
Security is primarily concerned with protecting assets from harm. Identifying and evaluating assets are therefore key activities in any security engineering process -- from modeling threats and attacks, discovering existing vulnerabilities, to selecting appropriate countermeasures. However, despite their crucial role, assets are often neglected during the development of secure software systems. Indeed, many systems are designed with fixed security boundaries and assumptions, without the possibility to adapt when assets change unexpectedly, new threats arise, or undiscovered vulnerabilities are revealed. To handle such changes, systems must be capable of dynamically enabling different security countermeasures. This paper promotes assets as first-class entities in engineering secure software systems. An asset model is related to requirements, expressed through a goal model, and the objectives of an attacker, expressed through a threat model. These models are then used as input to build a causal network to analyze system security in different situations, and to enable, when necessary, a set of countermeasures to mitigate security threats. The causal network is conceived as a runtime entity that tracks relevant changes that may arise at runtime, and enables a new set of countermeasures. We illustrate and evaluate our proposed approach by applying it to a substantive example concerned with security of mobile phones.
@InProceedings{RE12p111,
author = {Mazeiar Salehie and Liliana Pasquale and Inah Omoronyia and Raian Ali and Bashar Nuseibeh},
title = {Requirements-Driven Adaptive Security: Protecting Variable Assets at Runtime},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {111--120},
doi = {},
year = {2012},
}
Stateful Requirements Monitoring for Self-Repairing Socio-Technical Systems
Lingxiao Fu, Xin Peng, Yijun Yu, John Mylopoulos, and Wenyun Zhao
(Fudan University, China; Open University, UK; University of Trento, Italy)
Socio-technical systems consist of human, hardware and software components that work in tandem to fulfill stakeholder requirements. By their very nature, such systems operate under uncertainty as components fail, humans act in unpredictable ways, and the environment of the system changes. Self-repair refers to the ability of such systems to restore fulfillment of their requirements by relying on monitoring, reasoning, and diagnosing on the current state of individual requirements. Self-repair is complicated by the multi-agent nature of socio-technical systems, which demands that requirements monitoring and self-repair be done in a decentralized fashion. In this paper, we propose a stateful requirements monitoring approach by maintaining an instance of a state machine for each requirement, represented as a goal, with runtime monitoring and compensation capabilities. By managing the interactions between the state machines, our approach supports hierarchical goal reasoning in both upward and downward directions. We have implemented a customizable Java framework that supports experimentation by simulating a socio-technical system. Results from our experiments suggest effective and precise support for a wide range of self-repairing decisions in a socio-technical setting.
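The idea of keeping one state-machine instance per requirement can be sketched as follows. The state names, events, and the compensation transition are hypothetical, chosen only to illustrate the monitor-and-compensate pattern, not the paper's actual state-machine design:

```python
# Hypothetical per-requirement state machine; states and events are
# illustrative, not the paper's exact design.
class RequirementMonitor:
    TRANSITIONS = {
        ("inactive", "activate"): "active",
        ("active", "fulfill"): "satisfied",
        ("active", "fail"): "failed",
        ("failed", "compensate"): "active",  # self-repair re-attempt
    }

    def __init__(self, goal):
        self.goal = goal
        self.state = "inactive"

    def on_event(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"event {event!r} invalid in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state
```

In the approach described above, such instances do not act in isolation: interactions between the machines of parent and child goals support hierarchical reasoning in both upward and downward directions.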
@InProceedings{RE12p121,
author = {Lingxiao Fu and Xin Peng and Yijun Yu and John Mylopoulos and Wenyun Zhao},
title = {Stateful Requirements Monitoring for Self-Repairing Socio-Technical Systems},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {121--130},
doi = {},
year = {2012},
}
Privacy Arguments: Analysing Selective Disclosure Requirements for Mobile Applications
Thein Than Tun, Arosha K. Bandara, Blaine A. Price, Yijun Yu, Charles Haley, Inah Omoronyia, and Bashar Nuseibeh
(Open University, UK; Frogfish Technologies, UK; Lero, Ireland)
Privacy requirements for mobile applications offer a distinct set of challenges for requirements engineering. First, they are highly dynamic, changing over time and locations, and across the different roles of agents involved and the kinds of information that may be disclosed. Second, although some general privacy requirements can be elicited a priori, users often refine them at runtime as they interact with the system and its environment. Selectively disclosing information to appropriate agents is therefore a key privacy management challenge, requiring carefully formulated privacy requirements amenable to systematic reasoning. In this paper, we introduce privacy arguments as a means of analysing privacy requirements in general and selective disclosure requirements (that are both content- and context-sensitive) in particular. Privacy arguments allow individual users to express personal preferences, which are then used to reason about privacy for each user under different contexts. At runtime, these arguments provide a way to reason about requirements satisfaction and diagnosis. Our proposed approach is demonstrated and evaluated using the privacy requirements of BuddyTracker, a mobile application we developed as part of our overall research programme.
@InProceedings{RE12p131,
author = {Thein Than Tun and Arosha K. Bandara and Blaine A. Price and Yijun Yu and Charles Haley and Inah Omoronyia and Bashar Nuseibeh},
title = {Privacy Arguments: Analysing Selective Disclosure Requirements for Mobile Applications},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {131--140},
doi = {},
year = {2012},
}
Feature Models
Thu, Sep 27, 10:30 - 12:00
Mining Binary Constraints in the Construction of Feature Models
Li Yi, Wei Zhang, Haiyan Zhao, Zhi Jin, and Hong Mei
(Peking University, China)
Feature models provide an effective way to organize and reuse requirements in a specific domain. A feature model consists of a feature tree and cross-tree constraints. Identifying features and then building a feature tree takes a lot of effort, and many semi-automated approaches have been proposed to ease this task. However, finding cross-tree constraints is often more challenging and still lacks automated support. In this paper, we propose an approach to mining cross-tree binary constraints in the construction of feature models. Binary constraints are the most basic kind of cross-tree constraints: they involve exactly two features and can be further classified into two sub-types, i.e., requires and excludes. Given these two sub-types, any pair of features in a feature model falls into one of the following classes: no constraint between them, a requires between them, or an excludes between them. We therefore perform a 3-class classification on feature pairs to mine binary constraints from features. We incorporate a support vector machine as the classifier and utilize a genetic algorithm to optimize it. We conduct a series of experiments on two feature models constructed by third parties to evaluate the effectiveness of our approach under different conditions that might occur in practical use. Results show that we can mine binary constraints at a high recall (near 100% in most cases), which is important because finding a missing constraint is very costly in real, often large, feature models.
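The 3-class classification over feature pairs can be sketched as follows. The paper trains an SVM optimized by a genetic algorithm; the toy threshold classifier and its single `description_similarity` feature below are illustrative assumptions that merely show the pair-enumeration and labelling set-up:

```python
from itertools import combinations

# Toy stand-in for the paper's GA-optimized SVM: one hypothetical
# feature and two hard-coded thresholds define the three classes.
def toy_classifier(features):
    sim = features["description_similarity"]
    if sim > 0.7:
        return "requires"
    if sim < 0.1:
        return "excludes"
    return "none"

def mine_binary_constraints(features_of_pair, feature_names, classify=toy_classifier):
    """Classify every feature pair as 'requires', 'excludes', or 'none',
    keeping only the pairs that carry a constraint."""
    constraints = {}
    for a, b in combinations(feature_names, 2):
        label = classify(features_of_pair(a, b))
        if label != "none":
            constraints[(a, b)] = label
    return constraints
```

Swapping `toy_classifier` for a trained model leaves the mining loop unchanged, which is the structural point of the 3-class formulation.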
@InProceedings{RE12p141,
author = {Li Yi and Wei Zhang and Haiyan Zhao and Zhi Jin and Hong Mei},
title = {Mining Binary Constraints in the Construction of Feature Models},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {141--150},
doi = {},
year = {2012},
}
A Feature-Oriented Requirements Modelling Language
Pourya Shaker, Joanne M. Atlee, and Shige Wang
(University of Waterloo, Canada; General Motors, USA)
In this paper, we present a feature-oriented requirements modelling language (FORML) for modelling the behavioural requirements of a software product line. FORML aims to support feature modularity and precise requirements modelling, and to ease the task of adding new features to a set of existing requirements. In particular, FORML decomposes a product line’s requirements into feature modules, and provides language support for specifying tightly-coupled features as model fragments that extend and override existing feature modules. We discuss how decisions in the design of FORML affect the evolvability of requirements models, and explicate the specification of intended interactions among related features. We applied FORML to the specification of two feature sets, automotive and telephony, and we discuss how well the case studies exercised the language and how the requirements models evolved over the course of the case studies.
@InProceedings{RE12p151,
author = {Pourya Shaker and Joanne M. Atlee and Shige Wang},
title = {A Feature-Oriented Requirements Modelling Language},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {151--160},
doi = {},
year = {2012},
}
Efficient Consistency Checking of Scenario-Based Product-Line Specifications
Joel Greenyer, Amir Molzam Sharifloo, Maxime Cordy, and Patrick Heymans
(Politecnico di Milano, Italy; University of Namur, Belgium)
Modern technical systems typically consist of multiple components and must provide many functions that are realized by the complex interaction of these components. Moreover, very often not only a single product, but a whole product line with different compositions of components and functions must be developed. To cope with this complexity, it is important that engineers have intuitive, but precise means for specifying the requirements for these systems and have tools for automatically finding inconsistencies within the requirements, because these could lead to costly iterations in the later development. We propose a technique for the scenario-based specification of component interactions based on Modal Sequence Diagrams. Moreover, we developed an efficient technique for automatically finding inconsistencies in the scenario-based specification of many variants at once by exploiting recent advances in the model-checking of product lines. Our evaluation shows benefits of this technique over performing individual consistency checking of each variant specification.
@InProceedings{RE12p161,
author = {Joel Greenyer and Amir Molzam Sharifloo and Maxime Cordy and Patrick Heymans},
title = {Efficient Consistency Checking of Scenario-Based Product-Line Specifications},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {161--170},
doi = {},
year = {2012},
}
Requirements Communication
Thu, Sep 27, 13:30 - 15:00
What You Need Is What You Get! The Vision of View-Based Requirements Specifications
Anne Gross and Joerg Doerr
(Fraunhofer IESE, Germany)
Software requirements specifications play a crucial role in software development projects. Especially in large projects, these specifications serve as a source of communication and information for a variety of roles involved in downstream activities like architecture, design, and testing. This vision paper argues that in order to create high-quality requirements specifications that fit the specific demands of successive document stakeholders, our research community needs to better understand the particular information needs of downstream development roles. In this paper, the authors introduce the idea of view-based requirements specifications. Two scenarios illustrate (1) current problems and challenges related to the research underlying the envisioned idea and (2) how these problems could be solved in the future. Based on these scenarios, challenges and research questions are outlined and supplemented with current results of exemplary user studies. Furthermore, potential future research is suggested, which the community should perform to answer the research questions as part of a research agenda.
@InProceedings{RE12p171,
author = {Anne Gross and Joerg Doerr},
title = {What You Need Is What You Get! The Vision of View-Based Requirements Specifications},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {171--180},
doi = {},
year = {2012},
}
The Impact of Domain Knowledge on the Effectiveness of Requirements Idea Generation during Requirements Elicitation
Ali Niknafs and Daniel M. Berry
(University of Waterloo, Canada)
It is believed that the effectiveness of requirements engineering activities depends at least partially on the individuals involved. One factor that seems to influence an individual’s effectiveness in requirements engineering activities is knowledge of the problem being solved, i.e., domain knowledge. While in-depth domain knowledge helps a requirements engineer understand the problem more easily, he or she can fall prey to tacit assumptions of the domain and might overlook issues that are obvious to domain experts. This paper describes a controlled experiment to test the hypothesis that adding requirements analysts who are ignorant of the domain to a requirements elicitation team for a computer-based system in that domain improves the effectiveness of the team. The results, although not conclusive, show some support for accepting the hypothesis. The results were also analyzed to determine the effect of creativity, industrial experience, and requirements engineering experience, and they suggest other hypotheses to be studied in the future.
@InProceedings{RE12p181,
author = {Ali Niknafs and Daniel M. Berry},
title = {The Impact of Domain Knowledge on the Effectiveness of Requirements Idea Generation during Requirements Elicitation},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {181--190},
doi = {},
year = {2012},
}
Using Collective Intelligence to Detect Pragmatic Ambiguities
Alessio Ferrari and Stefania Gnesi
(ISTI-CNR, Italy)
This paper presents a novel approach for pragmatic ambiguity detection in natural language (NL) requirements specifications defined for a specific application domain.
Starting from a requirements specification, we use a Web search engine to retrieve a set of documents focused on the same domain as the specification. From these domain-related documents, we extract different knowledge graphs, which are employed to analyse each requirement sentence looking for potential ambiguities. To this end, we developed an algorithm that takes the concepts expressed in the sentence and searches for corresponding "concept paths" within each graph.
The paths resulting from the traversal of each graph are compared and, if their overall similarity score is lower than a given threshold, the requirements specification sentence is considered ambiguous from the pragmatic point of view.
A proof of concept is given throughout the paper to illustrate the soundness of the proposed strategy.
@InProceedings{RE12p191,
author = {Alessio Ferrari and Stefania Gnesi},
title = {Using Collective Intelligence to Detect Pragmatic Ambiguities},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {191--200},
doi = {},
year = {2012},
}
Goal Modeling
Fri, Sep 28, 08:30 - 10:00
A Probabilistic Framework for Goal-Oriented Risk Analysis
Antoine Cailliau and Axel van Lamsweerde
(Université Catholique de Louvain, Belgium)
Requirements completeness is among the most critical and difficult software engineering challenges. Missing requirements often result from poor risk analysis at requirements engineering time. Obstacle analysis is a goal-oriented form of risk analysis aimed at anticipating exceptional conditions in which the software should behave adequately. In the identify-assess-control cycles of such analysis, the assessment step is not well supported by current techniques. This step is concerned with evaluating how likely the obstacles to goals are, and how likely and severe their consequences are. These key factors drive the selection of the most appropriate countermeasures to be integrated in the system goal model for increased completeness. Moreover, obstacles to probabilistic goals are currently not supported; such goals prescribe that some corresponding target property should be satisfied in at least X% of the cases.
The paper presents a probabilistic framework for goal specification and obstacle assessment. The specification language for goals and obstacles is extended with a probabilistic layer where probabilities have a precise semantics grounded on system-specific phenomena. The probability of a root obstacle to a goal is thereby computed by up-propagation of probabilities of finer-grained obstacles through the obstacle refinement tree. The probability and severity of obstacle consequences are in turn computed by up-propagation from the obstructed leaf goals through the goal refinement graph. The paper shows how the computed information can be used to prioritize obstacles for countermeasure selection towards a more complete and robust goal model. The framework is evaluated on a non-trivial carpooling support system.
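To make the up-propagation idea concrete, here is a minimal, illustrative sketch of how probabilities of finer-grained obstacles can be combined up a refinement tree. This is not the paper's precise semantics (which grounds probabilities on system-specific phenomena); it assumes independent sub-obstacles and the standard fault-tree-style combination: an AND-refined obstacle occurs when all sub-obstacles occur, an OR-refined one when at least one does. The tree encoding and example numbers are hypothetical.

```python
# Hedged sketch of up-propagation of obstacle probabilities through a
# refinement tree, assuming independent sub-obstacles (not the paper's
# exact probabilistic semantics).
from math import prod

def up_propagate(node):
    """node is ('leaf', p) | ('and', [children]) | ('or', [children])."""
    kind = node[0]
    if kind == 'leaf':
        return node[1]
    probs = [up_propagate(child) for child in node[1]]
    if kind == 'and':
        # All sub-obstacles must occur together.
        return prod(probs)
    # 'or': complement of "no sub-obstacle occurs".
    return 1.0 - prod(1.0 - p for p in probs)

# Hypothetical root obstacle, OR-refined into a leaf obstacle and an
# AND-refined pair of leaf obstacles.
tree = ('or', [('leaf', 0.1),
               ('and', [('leaf', 0.5), ('leaf', 0.4)])])
print(round(up_propagate(tree), 3))  # 1 - (1 - 0.1)(1 - 0.5*0.4) = 0.28
```

With the resulting root probabilities in hand, obstacles can be ranked for countermeasure selection, as the abstract describes.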
@InProceedings{RE12p201,
author = {Antoine Cailliau and Axel van Lamsweerde},
title = {A Probabilistic Framework for Goal-Oriented Risk Analysis},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {201--210},
doi = {},
year = {2012},
}
Requirements Analysis for a Product Family of DNA Nanodevices
Robyn R. Lutz, Jack H. Lutz, James I. Lathrop, Titus H. Klinge, Divita Mathur, D. M. Stull, Taylor G. Bergquist, and Eric R. Henderson
(Iowa State University, USA; Jet Propulsion Laboratory, USA)
DNA nanotechnology uses the information processing capabilities of nucleic acids to design self-assembling, programmable structures and devices at the nanoscale. Devices developed to date have been programmed to implement logic circuits and neural networks, capture or release specific molecules, and traverse molecular tracks and mazes.
Here we investigate the use of requirements engineering methods to make DNA nanotechnology more productive, predictable, and safe. We use goal-oriented requirements modeling to identify, specify, and analyze a product family of DNA nanodevices, and we use PRISM model checking to verify both common properties across the family and properties that are specific to individual products. Challenges to doing requirements engineering in this domain include the error-prone nature of nanodevices carrying out their tasks in the probabilistic world of chemical kinetics, the fact that roughly a nanomole (a 1 followed by 14 0s) of devices are typically deployed at once, and the difficulty of specifying and achieving modularity in a realm where devices have many opportunities to interfere with each other. Nevertheless, our results show that requirements engineering is useful in DNA nanotechnology and that leveraging the similarities among nanodevices in the product family improves the modeling and analysis by supporting reuse.
@InProceedings{RE12p211,
author = {Robyn R. Lutz and Jack H. Lutz and James I. Lathrop and Titus H. Klinge and Divita Mathur and D. M. Stull and Taylor G. Bergquist and Eric R. Henderson},
title = {Requirements Analysis for a Product Family of DNA Nanodevices},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {211--220},
doi = {},
year = {2012},
}
On Eliciting Contribution Measures in Goal Models
Sotirios Liaskos, Rina Jalman, and Jorge Aranda
(York University, Canada; University of Victoria, Canada)
Goal models have been found to be useful for supporting the decision making process in the early requirements phase. Through measuring contribution degrees of low-level decisions to the fulfilment of high-level quality goals and combining them with priority statements, it is possible to compare alternative solutions of the requirements problem against each other. But where do contribution measures come from and what is the right way to combine them in order to do such analysis? In this paper we describe how full application of the Analytic Hierarchy Process (AHP) can be used to quantitatively assess contribution relationships in goal models based on stakeholder input and how we can reason about the result in order to make informed decisions. An exploratory experiment shows that the proposed procedure is feasible and offers evidence that the resulting goal model is useful for guiding a decision. It also shows that situation-specific characteristics of the requirements problem at hand may influence stakeholder input in a variety of ways, a phenomenon that may need to be studied further in the context of eliciting such models.
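As a rough illustration of the AHP step the abstract refers to, the sketch below turns a stakeholder's pairwise comparison matrix (Saaty's 1-9 scale: "how much more does alternative i contribute than alternative j?") into normalized contribution weights. It uses the common geometric-mean approximation of the principal eigenvector rather than the authors' tooling, and the comparison matrix is entirely hypothetical.

```python
# Hedged sketch: deriving contribution weights from an AHP pairwise
# comparison matrix via the geometric-mean approximation of the
# principal eigenvector. Example matrix is hypothetical.
from math import prod

def ahp_weights(matrix):
    """Normalized geometric means of the rows of a reciprocal matrix."""
    gmeans = [prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# comparisons[i][j] = how many times more alternative i contributes
# to the quality goal than alternative j (reciprocal matrix).
comparisons = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
weights = ahp_weights(comparisons)
print([round(w, 2) for w in weights])  # first alternative dominates
```

In a goal-model setting, weights like these would quantify the contribution degrees of low-level decisions to a high-level quality goal, which can then be combined with priority statements to compare alternatives.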
@InProceedings{RE12p221,
author = {Sotirios Liaskos and Rina Jalman and Jorge Aranda},
title = {On Eliciting Contribution Measures in Goal Models},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {221--230},
doi = {},
year = {2012},
}
Requirements Management and Tracing 2
Fri, Sep 28, 10:30 - 12:00
Breaking the Big-Bang Practice of Traceability: Pushing Timely Trace Recommendations to Project Stakeholders
Jane Cleland-Huang, Patrick Mäder, Mehdi Mirakhorli, and Sorawit Amornborvornwong
(DePaul University, USA; JKU Linz, Austria)
In many software-intensive systems, traceability is used to support a variety of software engineering activities such as impact analysis, compliance verification, and requirements validation. However, in practice, traceability links are often created towards the end of the project specifically for approval or certification purposes. This practice can result in inaccurate and incomplete traces, and also means that traceability links are not available to support early development efforts. We address these problems by presenting a trace recommender system which pushes recommendations to project stakeholders as they create or modify traceable artifacts. We also introduce the novel concept of a trace obligation, which is used to track satisfaction relations between a target artifact and a set of source artifacts. We model traceability events and subsequent actions, including user recommendations, using the Business Process Modeling Notation (BPMN). We demonstrate and evaluate the efficacy of our approach through an illustrative example and a simulation conducted using the software engineering artifacts of a robotic system for supporting arm rehabilitation. Our results show that tracking trace obligations and generating trace recommendations throughout the active phases of a project can lead to early construction of traceability knowledge.
@InProceedings{RE12p231,
author = {Jane Cleland-Huang and Patrick Mäder and Mehdi Mirakhorli and Sorawit Amornborvornwong},
title = {Breaking the Big-Bang Practice of Traceability: Pushing Timely Trace Recommendations to Project Stakeholders},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {231--240},
doi = {},
year = {2012},
}
Characterization of Functional Software Requirements Space: The Law of Requirements Taxonomic Growth
Arbi Ghazarian
(Arizona State University, USA)
This paper reports on a large-scale empirical multiple-case study that aimed to characterize the requirements space in the domain of web-based Enterprise Systems (ES). Among other findings, the study showed that, on average, about 85% of all software functionalities in the studied domain are specified using a small core set of five requirements classes, even though the results hint at a larger set of nine requirements classes that should be covered. The study also uncovered a law describing the growth pattern of the emerging requirements classes in software domains. According to this law, the emergence of the classes in a requirements taxonomic scheme for a particular domain, independent of the order in which specifications of requirements in that domain are analyzed, includes a rapid initial growth phase, where the majority of the requirements classes are identified, followed by a rapid slow-down phase with periods of no growth (i.e., the stabilization phase).
@InProceedings{RE12p241,
author = {Arbi Ghazarian},
title = {Characterization of Functional Software Requirements Space: The Law of Requirements Taxonomic Growth},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {241--250},
doi = {},
year = {2012},
}
Detecting and Classifying Patterns of Requirements Clarifications
Eric Knauss, Daniela Damian, Germán Poo-Caamaño, and Jane Cleland-Huang
(University of Victoria, Canada; DePaul University, USA)
In current project environments, requirements often evolve throughout the project and are worked on by stakeholders in large and distributed teams. Such teams often use online tools such as mailing lists, bug tracking systems, or online discussion forums to communicate, clarify, or coordinate work on requirements. In this kind of environment, the expected evolution from initial idea, through clarification, to a stable requirement often stagnates. When project managers are not aware of underlying problems, development may proceed before requirements are fully understood and stabilized, leading to numerous implementation issues and often resulting in the need for early redesign and modification.
In this paper, we present an approach to analyzing online requirements communication and a method for the detection and classification of clarification events in requirement discussions. We used our approach to analyze online requirements communication in the IBM Rational Team Concert (RTC) project and identified a set of six clarification patterns. Since a predominance of clarifications throughout the lifetime of a requirement often indicates problematic requirements, our approach lends support to project managers to assess, in real time, the state of discussions around a requirement and promptly react to requirements problems.
@InProceedings{RE12p251,
author = {Eric Knauss and Daniela Damian and Germán Poo-Caamaño and Jane Cleland-Huang},
title = {Detecting and Classifying Patterns of Requirements Clarifications},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {251--260},
doi = {},
year = {2012},
}