2013 21st IEEE International Requirements Engineering Conference (RE),
July 15–19, 2013,
Rio de Janeiro, Brazil
Research Track
Legal and Privacy Requirements
Automated Text Mining for Requirements Analysis of Policy Documents
Aaron K. Massey, Jacob Eisenstein, Annie I. Antón, and Peter P. Swire
(Georgia Tech, USA; Ohio State University, USA)
Businesses and organizations in jurisdictions around the world are required by law to provide their customers and users with information about their business practices in the form of policy documents. Requirements engineers analyze these documents as sources of requirements, but this analysis is a time-consuming and mostly manual process. Moreover, policy documents contain legalese and present readability challenges to requirements engineers seeking to analyze them. In this paper, we perform a large-scale analysis of 2,061 policy documents, including policy documents from the Google Top 1000 most visited websites and the Fortune 500 companies, for three purposes: (1) to assess the readability of these policy documents for requirements engineers; (2) to determine if automated text mining can indicate whether a policy document contains requirements expressed as either privacy protections or vulnerabilities; and (3) to establish the generalizability of prior work in the identification of privacy protections and vulnerabilities from privacy policies to other policy documents. Our results suggest that this requirements analysis technique, developed on a small set of policy documents in two domains, may generalize to other domains.
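For context: readability assessments of the kind this abstract describes are conventionally based on standard metrics such as Flesch Reading Ease. The sketch below is illustrative only, not the authors' tooling, and uses a deliberately naive syllable heuristic:

```python
import re

def count_syllables(word):
    # Naive heuristic: count vowel groups; not dictionary-accurate.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # discount a silent trailing 'e'
    return max(n, 1)

def flesch_reading_ease(text):
    # Flesch Reading Ease: higher scores mean easier text.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

On such a scale, plain prose scores high while legalese-laden policy text scores low, which is the readability gap the paper quantifies.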
@InProceedings{RE13p4,
author = {Aaron K. Massey and Jacob Eisenstein and Annie I. Antón and Peter P. Swire},
title = {Automated Text Mining for Requirements Analysis of Policy Documents},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {4--13},
doi = {},
year = {2013},
}
Formal Analysis of Privacy Requirements Specifications for Multi-tier Applications
Travis D. Breaux and Ashwini Rao
(CMU, USA)
Companies require data from multiple sources to develop new information systems, such as social networking, ecommerce and location-based services. Systems rely on complex, multi-stakeholder data supply-chains to deliver value. These data supply-chains have complex privacy requirements: privacy policies affecting multiple stakeholders (e.g. user, developer, company, government) regulate the collection, use and sharing of data over multiple jurisdictions (e.g. California, United States, Europe). Increasingly, regulators expect companies to ensure consistency between company privacy policies and company data practices. To address this problem, we propose a methodology to map policy requirements in natural language to a formal representation in Description Logic. Using the formal representation, we reason about conflicting requirements within a single policy and among multiple policies in a data supply chain. Further, we enable tracing data flows within the supply chain. We derive our methodology from an exploratory case study of the Facebook platform policy. We demonstrate the feasibility of our approach in an evaluation involving Facebook, Zynga and AOL Advertising policies. Our results identify three conflicts that exist between the Facebook and Zynga policies, and one conflict within the AOL Advertising policy.
@InProceedings{RE13p14,
author = {Travis D. Breaux and Ashwini Rao},
title = {Formal Analysis of Privacy Requirements Specifications for Multi-tier Applications},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {14--23},
doi = {},
year = {2013},
}
An Empirical Investigation of Software Engineers' Ability to Classify Legal Cross-References
Jeremy C. Maxwell, Annie I. Antón, and Julie B. Earp
(North Carolina State University, USA; Georgia Tech, USA)
Requirements engineers often have to develop software for regulated domains. These regulations often contain cross-references to other laws. Cross-references can introduce exceptions or definitions, constrain existing requirements, or even conflict with other compliance requirements. To develop compliant software, requirements engineers must understand the impact these cross-references have on their software. In this paper, we present an empirical study in which we measure the ability of software practitioners to classify cross-references using our previously developed cross-reference taxonomy. We discover that software practitioners are not well equipped to understand the impact of cross-references on their software.
@InProceedings{RE13p24,
author = {Jeremy C. Maxwell and Annie I. Antón and Julie B. Earp},
title = {An Empirical Investigation of Software Engineers' Ability to Classify Legal Cross-References},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {24--31},
doi = {},
year = {2013},
}
Automated Traceability
Supporting Requirements Traceability through Refactoring
Anas Mahmoud and Nan Niu
(Mississippi State University, USA)
Modern traceability tools employ information retrieval (IR) methods to generate candidate traceability links. These methods track textual signs embedded in the system to establish relationships between software artifacts. However, as software systems evolve, new and inconsistent terminology finds its way into the system's taxonomy, corrupting its lexical structure and distorting its traceability tracks. In this paper, we argue that the distorted lexical tracks of the system can be systematically re-established through refactoring, a set of behavior-preserving transformations for keeping system quality under control during evolution. To test this novel hypothesis, we investigate the effect of integrating various types of refactoring on the performance of requirements-to-code automated tracing methods. In particular, we identify the problems of missing, misplaced, and duplicated signs in software artifacts, and then examine to what extent refactorings that restore, move, and remove textual information can overcome these problems, respectively. We conduct our experimental analysis using three datasets from different application domains. Results show that restoring textual information in the system has a positive impact on tracing. In contrast, refactorings that remove redundant information impact tracing negatively. Refactorings that move information among the system's modules are found to have no significant effect. Our findings address several issues related to code and requirements evolution, as well as refactoring as a mechanism to enhance the practicality of automated tracing tools.
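As background: the IR-based generation of candidate traceability links that this abstract builds on is conventionally a TF-IDF weighting plus cosine-similarity ranking. A generic sketch for context, not the authors' implementation; the artifact IDs and threshold are invented for illustration:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # docs: {artifact_id: list of tokens}; returns {artifact_id: {term: weight}}.
    df = Counter()
    for toks in docs.values():
        df.update(set(toks))          # document frequency per term
    n = len(docs)
    vecs = {}
    for doc_id, toks in docs.items():
        tf = Counter(toks)
        vecs[doc_id] = {t: tf[t] * math.log(n / df[t]) for t in tf}
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def candidate_links(reqs, code, threshold=0.1):
    # Rank requirement-to-code pairs by lexical similarity.
    vecs = tfidf_vectors({**reqs, **code})
    links = [(r, c, cosine(vecs[r], vecs[c]))
             for r in reqs for c in code
             if cosine(vecs[r], vecs[c]) >= threshold]
    return sorted(links, key=lambda x: -x[2])
```

The paper's point is visible even in this toy form: renaming a shared term in only one artifact (a "misplaced sign") drops the pair's similarity and the candidate link with it.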
@InProceedings{RE13p32,
author = {Anas Mahmoud and Nan Niu},
title = {Supporting Requirements Traceability through Refactoring},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {32--41},
doi = {},
year = {2013},
}
Foundations for an Expert System in Domain-Specific Traceability
Jin Guo, Jane Cleland-Huang, and Brian Berenbach
(DePaul University, USA; Siemens, USA)
Attempts to utilize information retrieval techniques to fully automate the creation of traceability links have been hindered by terminology mismatches between source and target artifacts. Therefore, current trace retrieval algorithms tend to produce imprecise and incomplete results. In this paper we address this mismatch by proposing an expert system which integrates a knowledge base of domain concepts and their relationships, a set of logic rules for defining relationships between artifacts based on these rules, and a process for mapping artifacts into a structure against which the rules can be applied. This paper lays down the core foundations needed to integrate an expert system into the automated tracing process. We construct a knowledge base and inference rules for part of a large industrial project in the transportation domain and empirically show that our approach significantly improves precision and recall of the generated trace links.
@InProceedings{RE13p42,
author = {Jin Guo and Jane Cleland-Huang and Brian Berenbach},
title = {Foundations for an Expert System in Domain-Specific Traceability},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {42--51},
doi = {},
year = {2013},
}
Application of Reinforcement Learning to Requirements Engineering: Requirements Tracing
Hakim Sultanov and Jane Huffman Hayes
(University of Kentucky, USA)
We posit that machine learning can be applied to effectively address requirements engineering problems. Specifically, we present a requirements traceability method based on the machine learning technique of reinforcement learning (RL). The RL method demonstrates a rather targeted generation of candidate links between textual requirements artifacts (for example, high-level requirements traced to low-level requirements). This work also presents the utilization of synonyms in the context of common textual segments. The technique has been validated using two real-world datasets from two problem domains. Our technique demonstrated statistically significantly better results than the information retrieval technique.
@InProceedings{RE13p52,
author = {Hakim Sultanov and Jane Huffman Hayes},
title = {Application of Reinforcement Learning to Requirements Engineering: Requirements Tracing},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {52--61},
doi = {},
year = {2013},
}
Formal Modeling
On Requirements Verification for Model Refinements
Carlo Ghezzi, Claudio Menghi, Amir Molzam Sharifloo, and Paola Spoletini
(Politecnico di Milano, Italy; Università dell’Insubria, Italy)
Conventional formal verification techniques rely on the assumption that a system's specification is completely available, so that the analysis can say whether or not a set of properties will be satisfied. In contrast, modern development lifecycles call for agile—incremental and iterative—approaches to tame the growing complexity of modern software systems and reduce development risks. We focus here on requirements verification performed in the early exploratory stages on high-level models, and we discuss how this can be integrated into an agile approach. We present a new technique to model-check incomplete high-level specifications against formally specified requirements. We do this in the context of incomplete hierarchical Statecharts, verified against qCTL properties. Our approach supports step-wise specification and refinement verification. Verification can be incremental: alternative refinements may be explored separately, and verification is replayed only for the modified parts. We present the formalisms, the model-checking algorithm, and the tool we have implemented.
@InProceedings{RE13p62,
author = {Carlo Ghezzi and Claudio Menghi and Amir Molzam Sharifloo and Paola Spoletini},
title = {On Requirements Verification for Model Refinements},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {62--71},
doi = {},
year = {2013},
}
Distributing Refinements of a System-Level Partial Behavior Model
Ivo Krka and Nenad Medvidovic
(University of Southern California, USA)
Early in a system's life cycle, its behavior is typically partially specified using scenarios, invariants, and temporal properties. These specifications prohibit or require certain behaviors, while leaving other behaviors uncategorized. Engineers refine the specification by eliciting more requirements, finally arriving at a complete behavioral description. Partial-behavior models have been utilized as a formal foundation for capturing partial system specifications. Mapping the requirements to partial behavior models enables automated analyses (e.g., requirements consistency checking) and helps to elicit new requirements. Under current practice, software systems are reasoned about and their behavior specified exclusively at the system level, disregarding the fact that a system typically consists of interacting components. However, exclusively refining a behavior specification at the system level runs the risk of arriving at an inconsistent specification, i.e., one that is not realizable as a composition of the system's components. To address this problem, we propose a framework that provides the missing support: a newly specified requirement implicitly refines the system's underlying partial behavior model; our framework maps the new requirement to components by automatically distributing the system model refinements to the components' underlying models. By doing so, our framework prevents requirements inconsistencies and helps to identify further necessary requirements. We discuss the framework's soundness and correctness, and demonstrate its features on a case study previously used in related literature.
@InProceedings{RE13p72,
author = {Ivo Krka and Nenad Medvidovic},
title = {Distributing Refinements of a System-Level Partial Behavior Model},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {72--81},
doi = {},
year = {2013},
}
A Mode-Based Pattern for Feature Requirements, and a Generic Feature Interface
David Dietrich and Joanne M. Atlee
(University of Waterloo, Canada)
In this paper, we propose a pattern for decomposing and structuring the model of a feature's behavioural requirements, based on modes of operation (e.g., Active, Inactive, Failed) that are common to features in multiple domains. Interestingly, the highest-level modes of the pattern can serve as a generic behavioural interface for all features that adhere to the pattern. We have applied the pattern in modelling the behavioural requirements of 19 automotive features that were specified in 5 production-grade requirements documents. We found that the pattern was applicable to all 19 features, and that our proposed generic feature interface was applicable to 50 out of 57 inter-feature references.
@InProceedings{RE13p82,
author = {David Dietrich and Joanne M. Atlee},
title = {A Mode-Based Pattern for Feature Requirements, and a Generic Feature Interface},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {82--91},
doi = {},
year = {2013},
}
Elicitation
Requirements Elicitation: Towards the Unknown Unknowns
Alistair Sutcliffe and Pete Sawyer
(University of Lancaster, UK)
Requirements elicitation research is reviewed using a framework categorising the relative ‘knowness’ of requirements specification and Common Ground discourse theory. The main contribution of this survey is to review requirements elicitation from the perspective of this framework and propose a road map of research to tackle outstanding elicitation problems involving tacit knowledge. Elicitation techniques (interviews, scenarios, prototypes, etc.) are investigated, followed by representations, models and support tools. The survey results suggest that elicitation techniques appear to be relatively mature, although new areas of creative requirements are emerging. Representations and models are also well established although there is potential for more sophisticated modelling of domain knowledge. While model-checking tools continue to become more elaborate, more growth is apparent in NL tools such as text mining and IR which help to categorize and disambiguate requirements. Social collaboration support is a relatively new area that facilitates categorisation, prioritisation and matching collections of requirements for product line versions. A road map for future requirements elicitation research is proposed investigating the prospects for techniques, models and tools in green-field domains where few solutions exist, contrasted with brown-field domains where collections of requirements and products already exist. The paper concludes with remarks on the possibility of elicitation tackling the most difficult question of ‘unknown unknown’ requirements.
@InProceedings{RE13p92,
author = {Alistair Sutcliffe and Pete Sawyer},
title = {Requirements Elicitation: Towards the Unknown Unknowns},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {92--104},
doi = {},
year = {2013},
}
How Cloud Providers Elicit Consumer Requirements: An Exploratory Study of Nineteen Companies
Irina Todoran, Norbert Seyff, and Martin Glinz
(University of Zurich, Switzerland)
Requirements elicitation is widely seen as a crucial step towards delivering successful software. In the context of emerging cloud systems, the question is whether and how the elicitation process differs from that used for traditional systems, and if the current methods suffice. We interviewed 19 cloud providers to gain an in-depth understanding of the state of practice with regard to the adoption and implementation of existing elicitation methods. The results of this exploratory study show that, whereas a few cloud providers try to implement and adapt traditional methods, the large majority uses ad-hoc approaches for identifying consumer needs. There are various causes for this situation, ranging from consumer reachability issues and previous failed attempts, to a complete lack of development strategy. The study suggests that only a small number of the current techniques can be applied successfully in cloud systems, hence showing a need to research new ways of supporting cloud providers. The main contribution of this work lies in revealing what elicitation methods are used by cloud providers and clarifying the challenges related to requirements elicitation posed by the cloud paradigm. Further, we identify some key features for cloud-specific elicitation methods.
@InProceedings{RE13p105,
author = {Irina Todoran and Norbert Seyff and Martin Glinz},
title = {How Cloud Providers Elicit Consumer Requirements: An Exploratory Study of Nineteen Companies},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {105--114},
doi = {},
year = {2013},
}
Requirements Sources
Visual Notation Design 2.0: Towards User Comprehensible Requirements Engineering Notations
Patrice Caire, Nicolas Genon, Patrick Heymans, and Daniel L. Moody
(University of Luxembourg, Luxembourg; University of Namur, Belgium; Ozemantics, Australia)
The success of requirements engineering depends critically on effective communication between business analysts and end users, yet empirical studies show that business stakeholders understand RE notations very poorly. This paper proposes a novel approach to designing RE visual notations that actively involves naïve users in the process. We use i*, one of the most influential RE notations, to demonstrate the approach, but the same approach could be applied to any RE notation. We present the results of 5 related empirical studies that show that novices outperform experts in designing symbols that are comprehensible to novices: the differences are both statistically significant and practically meaningful. Symbols designed by novices increased semantic transparency (their ability to be spontaneously interpreted by other novices) by almost 300% compared to the existing i* notation. The results challenge the conventional wisdom about visual notation design: that it should be conducted by a small group of experts; our research suggests that it should instead be conducted by large numbers of novices. The approach is consistent with Web 2.0, in that it harnesses the collective intelligence of end users and actively involves them in the notation design process as “prosumers” rather than passive consumers. We believe this approach has the potential to radically change the way visual notations are designed in the future.
@InProceedings{RE13p115,
author = {Patrice Caire and Nicolas Genon and Patrick Heymans and Daniel L. Moody},
title = {Visual Notation Design 2.0: Towards User Comprehensible Requirements Engineering Notations},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {115--124},
doi = {},
year = {2013},
}
User Feedback in the AppStore: An Empirical Study
Dennis Pagano and Walid Maalej
(TU Munich, Germany; University of Hamburg, Germany)
Application distribution platforms, or app stores, such as Google Play or the Apple AppStore allow users to submit feedback on downloaded applications in the form of ratings and reviews. In the last few years, these platforms have become very popular with both application developers and users. However, their real potential for and impact on requirements engineering processes are not yet well understood. This paper reports on an exploratory study, which analyzes over one million reviews from the Apple AppStore. We investigated how and when users provide feedback, inspected the feedback content, and analyzed its impact on the user community. We found that most of the feedback is provided shortly after new releases, with a quickly decreasing frequency over time. Reviews typically contain multiple topics, such as user experience, bug reports, and feature requests. The quality and constructiveness vary widely, from helpful advice and innovative ideas to insulting offenses. Feedback content has an impact on download numbers: positive messages usually lead to better ratings and vice versa. Negative feedback, such as reports of shortcomings, is typically destructive and misses context details and user experience. We discuss our findings and their impact on software and requirements engineering teams.
@InProceedings{RE13p125,
author = {Dennis Pagano and Walid Maalej},
title = {User Feedback in the AppStore: An Empirical Study},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {125--134},
doi = {},
year = {2013},
}
Handling Change
Learning from Evolution History to Predict Future Requirement Changes
Lin Shi, Qing Wang, and Mingshu Li
(ISCAS, China; UCAS, China)
Managing the costs and risks of evolution is a challenging problem in the RE community. The challenge lies in the difficulty of analyzing and assessing the proneness of requirements to change across multiple versions, especially when the scale of requirements is large. In this paper, we define a series of metrics to characterize historic evolution information, and propose a novel method for predicting, based on these metrics, which requirements are likely to evolve in the future. We apply the prediction method to analyze a product's update history in a case study. The empirical results show that this method offers a tradeoff solution: it narrows the scope of change analysis down to a small set of requirements, yet can still retrieve nearly half of the future changes. The results indicate that the defined metrics are sensitive to the history of requirements evolution, and that the prediction method can produce a valuable outcome for requirements engineers seeking to balance workload and risk.
@InProceedings{RE13p135,
author = {Lin Shi and Qing Wang and Mingshu Li},
title = {Learning from Evolution History to Predict Future Requirement Changes},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {135--144},
doi = {},
year = {2013},
}
Assessing Regulatory Change through Legal Requirements Coverage Modeling
David G. Gordon and Travis D. Breaux
(CMU, USA)
Developing global markets offer companies new opportunities to manufacture and sell information technology (IT) products in ways unforeseen by current laws and regulations. This innovation leads to changing requirements due to changes in product features, laws, or the locality where the product is sold or manufactured. To help developers rationalize these changes, we introduce a preliminary framework and method that can be used by requirements engineers and their legal teams to identify relevant legal requirements and trace changes in requirements coverage. The framework includes a method to translate IT regulations into a legal requirements coverage model used to make coverage assertions about existing or planned IT systems. We evaluated the framework in a case study using three IT laws: California’s Confidentiality of Medical Records Act, the U.S. Health Insurance Portability and Accountability Act (HIPAA) and amendments from the Health Information Technology for Economic and Clinical Health (HITECH) Act, and the India 2011 Information Technology Rules. Further, we demonstrate the framework using three scenarios: new product features are proposed; product-related services are outsourced abroad; and regulations change to address changes in the market.
@InProceedings{RE13p145,
author = {David G. Gordon and Travis D. Breaux},
title = {Assessing Regulatory Change through Legal Requirements Coverage Modeling},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {145--154},
doi = {},
year = {2013},
}
A Goal Model Elaboration for Localizing Changes in Software Evolution
Hiroyuki Nakagawa, Akihiko Ohsuga, and Shinichi Honiden
(University of Electro-Communications, Japan; NII, Japan)
Software evolution is an essential activity that adapts existing software to changes in requirements. Localizing the impact of changes is one of the most efficient strategies for successful evolution. We exploit requirements descriptions in order to extract loosely coupled components and localize changes for evolution. We define a process of elaboration for the goal model that extracts a set of control loops from the requirements descriptions as components that constitute extensible systems. We regard control loops to be independent components that prevent the impact of a change from spreading outside them. To support the elaboration, we introduce two patterns: one to extract control loops from the goal model and another to detect possible conflicts between control loops. We experimentally evaluated our approach in two types of software development and the results demonstrate that our elaboration technique helps us to analyze the impact of changes in the source code and prevent the complexity of the code from increasing.
@InProceedings{RE13p155,
author = {Hiroyuki Nakagawa and Akihiko Ohsuga and Shinichi Honiden},
title = {A Goal Model Elaboration for Localizing Changes in Software Evolution},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {155--164},
doi = {},
year = {2013},
}
Directions in Decentralized RE
Ongoing Software Development without Classical Requirements
Thomas A. Alspaugh and Walt Scacchi
(UC Irvine, USA)
Many prominent open source software (OSS) development projects produce systems without overt requirements artifacts or processes, contrary to expectations arising from classical software development experience and research. A growing number of critical software systems are evolved and sustained in this way, yet provide quality and rich functional capabilities to users and integrators who accept them without question. We examine data from several OSS projects to investigate this conundrum, and discuss the results of research into OSS outcomes that sheds light on the consequences of this approach to software requirements, in terms of the risk of development failure and the quality of the resulting system.
@InProceedings{RE13p165,
author = {Thomas A. Alspaugh and Walt Scacchi},
title = {Ongoing Software Development without Classical Requirements},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {165--174},
doi = {},
year = {2013},
}
Assumption-Based Risk Identification Method (ARM) in Dynamic Service Provisioning
Alireza Zarghami, Eelco Vriezekolk, Mohammad Zarifi Eslami, Marten van Sinderen, and Roel Wieringa
(University of Twente, Netherlands)
In this paper we consider service-oriented applications composed of component services provided by different, economically independent service providers. As in all composite applications, the component services are composed and configured to meet the requirements of the composite application. However, in a field experiment with composite service-oriented applications we found that, although the services as actually delivered by the service providers met their requirements, there was still a mismatch across service providers due to unstated assumptions, and that this mismatch caused an incorrect composite application to be delivered to end-users. Identifying and analyzing these initially unstated assumptions turns requirements engineering for service-oriented applications into risk analysis. In this paper, we describe a field experiment with an experimental service-oriented homecare system, in which unexpected behavior of the system turned up unstated assumptions about the contributing service providers. We then present an assumption-based risk identification method that can help identify these risks, and we show how we applied this method in the second iteration of the field experiment. The method adapts techniques from problem frame diagrams to identify relevant assumptions about service providers. The method is informal, and takes the view from nowhere in that it does not result in a specification of the component services but, for every component service, delivers a set of assumptions that the service must satisfy in order to contribute to the overall system requirements. We end the paper with a discussion of the generalizability of this method.
@InProceedings{RE13p175,
author = {Alireza Zarghami and Eelco Vriezekolk and Mohammad Zarifi Eslami and Marten van Sinderen and Roel Wieringa},
title = {Assumption-Based Risk Identification Method (ARM) in Dynamic Service Provisioning},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {175--184},
doi = {},
year = {2013},
}
Can Requirements Dependency Network Be Used as Early Indicator of Software Integration Bugs?
Junjie Wang, Juan Li, Qing Wang, Da Yang, He Zhang, and Mingshu Li
(ISCAS, China; UCAS, China; University of East London, UK)
Complexity, cohesion, and coupling have been recognized as prominent indicators of software quality. One characterization of software complexity is the existence of dependency relationships. Moreover, the degree of dependency reflects the cohesion and coupling between software elements. Dependencies in the design and implementation phases have been proven to be important predictors of software bugs. We empirically investigated how requirements dependencies correlate with and predict software integration bugs, which can provide an early estimate of software quality and therefore facilitate decision making early in the software lifecycle. We conducted network analysis on the requirements dependency networks of two commercial software projects. We then performed correlation analysis between network measures (e.g., degree, closeness) and the number of bugs. Afterwards, bug prediction models were built using these network measures. Significant correlation is observed between most of our network measures and the number of bugs. These network measures can predict the number of bugs with high accuracy and sensitivity. We further identified the significant predictors for bug prediction. In addition, the indication effect of network measures on bug number varies among different types of requirements dependency. These observations show that a requirements dependency network can be used as an early indicator of software integration bugs.
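The network measures the abstract names (degree, closeness) are standard graph centralities. For context, a minimal pure-Python sketch on an undirected dependency network; this is illustrative only, not the authors' analysis code, and the toy graph below is invented:

```python
from collections import deque

def degree(graph, node):
    # graph: {node: set of neighbours} for an undirected dependency network.
    return len(graph[node])

def closeness(graph, node):
    # Closeness centrality: (reachable nodes) / (sum of shortest-path distances),
    # computed via breadth-first search from `node`.
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0
```

In a study like this one, each requirement is a node, each dependency an edge, and such per-node measures become features for correlation and bug-prediction models.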
@InProceedings{RE13p185,
author = {Junjie Wang and Juan Li and Qing Wang and Da Yang and He Zhang and Mingshu Li},
title = {Can Requirements Dependency Network Be Used as Early Indicator of Software Integration Bugs?},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {185--194},
doi = {},
year = {2013},
}
Traceability in Practice
An Empirical Study on Project-Specific Traceability Strategies
Patrick Rempel, Patrick Mäder, and Tobias Kuschke
(TU Ilmenau, Germany)
Effective requirements traceability supports practitioners in reaching higher project maturity and better product quality. Researchers argue that effective traceability barely happens by chance or through ad-hoc efforts and that traceability should be explicitly defined upfront. However, in a previous study we found that practitioners rarely follow explicit traceability strategies. We were interested in the reason for this discrepancy. Are practitioners able to reach effective traceability without an explicit definition? More specifically, how suitable is requirements traceability that is not strategically planned for supporting a project's development process? Our interview study involved practitioners from 17 companies. These practitioners were familiar with the development process, the existing traceability, and the goals of the project they reported about. For each project, we first modeled a traceability strategy based on the gathered information. Second, we examined and modeled the applied software engineering processes of each project, focusing on executed tasks, involved actors, and pursued goals. Finally, we analyzed the quality and suitability of each project's traceability strategy. We report common problems across the analyzed traceability strategies and their possible causes. The overall quality of, and mismatches in, the analyzed traceability suggest that an upfront-defined traceability strategy is indeed required. Furthermore, we show that the decision for or against traceability relations between artifacts requires a detailed understanding of the project's engineering process and goals, emphasizing the need for a goal-oriented procedure to assess existing and define new traceability strategies.
@InProceedings{RE13p195,
author = {Patrick Rempel and Patrick Mäder and Tobias Kuschke},
title = {An Empirical Study on Project-Specific Traceability Strategies},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {195--204},
doi = {},
year = {2013},
}
Keeping Requirements on Track via Visual Analytics
Nan Niu, Sandeep Reddivari, and Zhangji Chen
(Mississippi State University, USA)
For many software projects, keeping requirements on track needs an effective and efficient path from data to decision. Visual analytics creates such a path that enables the human to extract insights by interacting with the relevant information. While various requirements visualization techniques exist, few have produced end-to-end values to practitioners. In this paper, we advance the literature on visual requirements analytics by characterizing its key components and relationships. This allows us to not only assess existing approaches, but also create tool enhancements in a principled manner. We evaluate our enhanced tool supports through a case study where massive, heterogeneous, and dynamic requirements are processed, visualized, and analyzed. In particular, our study illuminates how increased interactivity of requirements visualization could lead to actionable decisions.
@InProceedings{RE13p205,
author = {Nan Niu and Sandeep Reddivari and Zhangji Chen},
title = {Keeping Requirements on Track via Visual Analytics},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {205--214},
doi = {},
year = {2013},
}
RE@21: Keeping Requirements on Track
A History of the International Requirements Engineering Conference (RE) (RE@21)
Nancy R. Mead
(SEI, USA)
This paper traces the history of the International Requirements Engineering Conference from its beginnings to the present, with suggestions for future considerations. Other requirements engineering events and activities are also discussed. A timeline of major milestones is included, along with a brief discussion of requirements engineering research activities that occurred in parallel with the conference.
@InProceedings{RE13p215,
author = {Nancy R. Mead},
title = {A History of the International Requirements Engineering Conference (RE) (RE@21)},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {215--221},
doi = {},
year = {2013},
}
A Review of Traceability Research at the Requirements Engineering Conference (RE@21)
Sunil Nair, Jose Luis de la Vara, and Sagar Sen
(Simula Research Laboratory, Norway)
Traceability between development artefacts and mainly from and to requirements plays a major role in system lifecycle, supporting activities such as system validation, change impact analysis, and regulation compliance. Many researchers have been working on this topic and have published their work throughout editions of the Requirements Engineering Conference. This paper aims to analyse the research on traceability published in the past 20 years of this conference and provide insights into its contribution to the traceability area. We have selected and reviewed 70 papers in the proceedings of the conference and summarised several aspects of traceability that have been addressed and by whom. The paper also discusses the evolution of the topic at the conference, compares the results with those reported in other publications, and proposes aspects on which further research should be conducted.
@InProceedings{RE13p222,
author = {Sunil Nair and Jose Luis de la Vara and Sagar Sen},
title = {A Review of Traceability Research at the Requirements Engineering Conference (RE@21)},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {222--229},
doi = {},
year = {2013},
}
Models in the RE Series (RE@21)
Stephen J. Morris
(City University London, UK)
This paper reports on the use and importance of models in the RE series of conferences based on the results of an analysis of the use of the word 'model' and of other words with 'model…' as their stem in the main bodies of the texts of published papers. The 620 papers examined contained 18,066 instances of these words. The words identified were divided into 'general terms' for models (505), 'special names' for models (215) and names for the 'nature and characteristics' of models and modelling (120). The large numbers are a clear indicator of the overall importance which the model has as a dominant concept and as a still proliferating artifact in the practice of those participating in the series. The three groups of names represent social conventions adopted for communication and continuity, the third providing a pragmatically rather than theoretically based overview of the factors affecting models and modelling. The conclusions use evidence from the study to suggest questions that may improve general practice and form the basis of more specific model declaration.
@InProceedings{RE13p230,
author = {Stephen J. Morris},
title = {Models in the RE Series (RE@21)},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {230--237},
doi = {},
year = {2013},
}
A Vision for Generic Concern-Oriented Requirements Reuse (RE@21)
Gunter Mussbacher and Jörg Kienzle
(University of Ottawa, Canada; McGill University, Canada)
Reuse is a powerful tool for improving the productivity of software development. The paper puts forward arguments in favor of generic requirements reuse rooted in the vision that effectiveness requires a focus on coordinated composition of reusable artifacts across the whole software development life cycle. A survey of publications on requirements reuse from the International Requirements Engineering (RE) Conference series determines the research landscape in this area over the last twenty years, assessing the hypothesis that there is little or no research reported at RE about generic reuse of requirements models that spans the software development life cycle. The paper then outlines, for the RE community, a research agenda associated with the presented vision for such an approach to requirements reuse that builds on concern-orientation, i.e., the ability to modularize and compose important requirements concerns throughout the software development life cycle, and model-driven engineering principles. In addition, early research results are briefly presented that favorably illustrate the feasibility of such an approach.
@InProceedings{RE13p238,
author = {Gunter Mussbacher and Jörg Kienzle},
title = {A Vision for Generic Concern-Oriented Requirements Reuse (RE@21)},
booktitle = {Proc.\ RE},
publisher = {IEEE},
pages = {238--249},
doi = {},
year = {2013},
}