
2nd International Workshop on Artificial Intelligence for Requirements Engineering (AIRE 2015), August 24, 2015, Ottawa, ON, Canada

AIRE 2015 – Proceedings


Foreword
We would like to welcome you to the 2nd International Workshop on Artificial Intelligence for Requirements Engineering (AIRE 2015). In this interdisciplinary workshop we continue to explore and extend the synergies between Artificial Intelligence and Requirements Engineering. Our objective is to identify Requirements Engineering areas that may benefit from the application of AI tools and techniques. We aim to inspire a new and broad community for interdisciplinary discussion of novel research directions for Requirements Engineering and Artificial Intelligence.
Measuring Requirement Quality to Predict Testability
Jane Huffman Hayes, Wenbin Li, Tingting Yu, Xue Han, Mark Hays, and Clinton Woodson
(University of Kentucky, USA)
Software bugs contribute to the cost of ownership for consumers in a software-driven society and can lead to devastating failures. Software testing, including functional and structural testing, remains a common method for uncovering faults and assessing the dependability of software systems. To enhance testing effectiveness, the developed artifacts (requirements, code) must be designed to be testable. Prior work has developed many approaches to the testability of code for structural testing, but to date no work has considered assessing and predicting the testability of requirements to aid functional testing. In this work, we address requirement testability from the perspective of requirement understandability and quality, using machine learning and statistical analysis. We first use requirement measures to empirically investigate the relationship between each measure and requirement testability. We then assess which requirement measures are relevant for predicting requirement testability. We examined two datasets, each consisting of requirement and code artifacts. We found that several measures help distinguish testable from non-testable requirements, and found anecdotal evidence that a learned model of testability can guide the evaluation of requirements for other (non-trained) systems.
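The idea of scoring requirements on simple quality measures and thresholding them into testable/non-testable can be sketched as follows. This is an illustrative toy only: the measure names, the ambiguous-word list, and the decision rule are invented assumptions, not the paper's actual learned model.

```python
# Toy sketch of measure-based testability prediction. The measures and
# the hand-written decision rule are illustrative assumptions; the paper
# trains a model on labeled requirement/code datasets instead.

AMBIGUOUS_TERMS = {"fast", "user-friendly", "appropriate", "flexible", "etc"}

def measures(requirement: str) -> dict:
    """Compute a few simple quality measures for one requirement."""
    words = requirement.lower().split()
    return {
        "length": len(words),
        "ambiguity": sum(w.strip(".,") in AMBIGUOUS_TERMS for w in words),
        "has_modal": any(w in {"shall", "must"} for w in words),
    }

def predict_testable(requirement: str) -> bool:
    m = measures(requirement)
    # Hand-crafted stand-in for a learned classifier: short, unambiguous
    # requirements with a clear modal verb tend to be testable.
    return m["ambiguity"] == 0 and m["has_modal"] and m["length"] < 25

print(predict_testable("The system shall log each failed login attempt."))
print(predict_testable("The UI should be user-friendly and fast."))
```

A real pipeline would replace the hand-written rule with a classifier fitted to labeled requirements.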
DeNom: A Tool to Find Problematic Nominalizations using NLP
Mathias Landhäußer, Sven J. Körner, Walter F. Tichy, Jan Keim, and Jennifer Krisch
(KIT, Germany; Daimler, Germany)
Nominalizations in natural-language requirements specifications can lead to imprecision. For example, in the phrase "transportation of pallets" it is unclear who transports the pallets, from where, to where, and how. Guidelines for requirements specifications therefore recommend avoiding nominalizations. However, not all nominalizations are problematic. We present an industrial-strength text analysis tool called DeNom, which detects problematic nominalizations and reports them to the user for reformulation. DeNom uses Stanford's parser and the Cyc ontology. It classifies nominalizations as problematic or acceptable by first detecting all nominalizations in the specification and then subtracting those that are sufficiently specified within the sentence through word references, attributes, nominal-phrase constructions, etc. The remaining nominalizations are incompletely specified and therefore prone to conceal complex processes; these are deemed problematic. A thorough evaluation used 10 real-world requirements specifications from Daimler AG comprising 60,000 words. DeNom identified over 1,100 nominalizations and classified 129 of them as problematic. Only 45 of these were false positives, yielding a precision of 66%; recall was 88%. In contrast, a naive nominalization detector would overload the user with 1,100 warnings, a thousand of which would be false positives.
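The detect-then-subtract strategy described above can be sketched in a few lines. This is not the actual tool (DeNom relies on the Stanford parser and the Cyc ontology); the suffix list and the preposition cues used as a proxy for "sufficiently specified" are simplifying assumptions.

```python
# Illustrative sketch of DeNom's filtering idea: flag nominalizations
# unless the sentence supplies their arguments. Suffixes and cues are
# invented simplifications of the tool's linguistic analysis.

NOMINAL_SUFFIXES = ("tion", "ment", "ance", "ence")

def problematic(sentence: str) -> list:
    words = [w.strip(".,").lower() for w in sentence.split()]
    noms = [w for w in words if w.endswith(NOMINAL_SUFFIXES)]
    # Rough proxy for DeNom's reference/attribute analysis: a preposition
    # naming an agent or a source/target counts as "specified".
    if any(cue in words for cue in ("by", "from", "into", "onto")):
        return []
    return noms

print(problematic("Transportation of pallets is required."))
print(problematic("The transportation of pallets by forklift is scheduled."))
```

The first sentence hides the agent and is flagged; the second names one ("by forklift") and passes, mirroring the paper's problematic/acceptable split.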
Using Fuzzy Modeling for Consistent Definitions of Product Qualities in Requirements
Jean-Marc Davril, Maxime Cordy, Patrick Heymans, and Mathieu Acher
(University of Namur, Belgium; INRIA, France; IRISA, France; University of Rennes 1, France)
Companies increasingly rely on product differentiation and personalization strategies to provide their customers with an expansive catalog and tools to assist them in finding the product meeting their needs. These tools include product search facilities, recommender systems, and product configurators. They typically represent a product as a set of features, which refer to a large number of technical specifications (e.g., size, weight, battery life). However, customers usually communicate and reason about products in terms of their qualities (e.g., ease of use, portability, ergonomics). In this paper, we tackle the problem of formalizing product qualities in the requirements of product-centered applications. Our goal is to derive product qualities from technical features, so that customers can better perceive and evaluate the proposed products. To this end, we design a procedure for identifying segments of textual product documentation related to specific product qualities, and propose an approach based on fuzzy modeling to represent product qualities on top of technical specifications. Preliminary experiments on a catalog of cameras suggest that fuzzy modeling is an appropriate formalism for representing product qualities. We also illustrate how modeled qualities can support the design of product configurators that are centered on the customers' needs.
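A fuzzy mapping from a technical specification to a perceived quality, as the abstract describes for cameras, can be illustrated with a single membership function. The breakpoints below are invented for illustration; the paper derives such mappings from product documentation.

```python
# Toy fuzzy-modeling sketch: membership of a camera in the quality
# "portable" as a function of its weight. The 300 g / 900 g breakpoints
# are assumptions made up for this example.

def portability(weight_g: float) -> float:
    """Fuzzy membership in 'portable': 1.0 at or below 300 g, 0.0 at or
    above 900 g, decreasing linearly in between."""
    if weight_g <= 300:
        return 1.0
    if weight_g >= 900:
        return 0.0
    return (900 - weight_g) / 600

print(portability(250))   # fully portable
print(portability(600))   # partially portable
print(portability(1000))  # not portable
```

A configurator could then rank products by such membership degrees instead of raw specifications, matching how customers reason about qualities.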
From Natural Language Requirements to UML Class Diagrams
Richa Sharma, Pratyoush K. Srivastava, and Kanad K. Biswas
(IIT Delhi, India; MNNIT Allahabad, India)
The Unified Modeling Language (UML) is the most popular modeling language for the analysis, design, and development of software systems. There has been considerable research interest in generating UML models, especially class diagrams, automatically from natural-language requirements. The interest in class diagrams can be attributed to the fact that classes represent the abstractions present in the system to be developed. However, automated generation of UML class diagrams is challenging, as it often involves substantial pre-processing or manual intervention. In this paper, we present a dependency-analysis-based approach to derive UML class diagrams automatically from natural-language requirements. We transform the requirements statements into an intermediate frame-based structured representation using dependency analysis of the statements and Grammatical Knowledge Patterns. The knowledge stored in this frame-based representation is then used to derive class diagrams with a rule-based algorithm. Our approach generates class diagrams similar to those reported in earlier works based on linguistic analysis with either annotation or manual intervention. We report the effectiveness of our approach in terms of recall and precision for the case studies presented in earlier works.
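The rule-based step from frames to class-diagram elements can be sketched minimally. The frame fields and the two rules below are simplified assumptions; the paper builds its frames with dependency parsing and Grammatical Knowledge Patterns.

```python
# Minimal sketch of a frame-to-class-diagram rule, in the spirit of the
# paper's rule-based algorithm. Frame structure and rules are invented
# simplifications.

def frame_to_classes(frame: dict) -> dict:
    """Rule: subject and object of a frame become candidate classes;
    a 'has' verb instead turns the object into an attribute."""
    classes = {frame["subject"]: [], frame["object"]: []}
    if frame["verb"] == "has":
        classes[frame["subject"]].append(frame["object"])
        del classes[frame["object"]]  # attribute, not a class of its own
    return classes

# "A library has a catalogue." / "A member borrows a book."
print(frame_to_classes({"subject": "Library", "verb": "has", "object": "catalogue"}))
print(frame_to_classes({"subject": "Member", "verb": "borrows", "object": "Book"}))
```

A full pipeline would merge the per-sentence results and also record the non-"has" verbs as associations between the surviving classes.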
Representation of Rules for Relevant Recommendations to Online Social Networks Users
Sarah Bouraga, Ivan Jureta, and Stéphane Faulkner
(University of Namur, Belgium)
In prior work, we identified rules for use in recommendation algorithms on Online Social Networks (OSNs) in order to increase the relevance of content suggested to a user. The resulting recommendation algorithms filter and prioritize event types for OSN users (such as photo posts by friends, status posts, and shared content), and are thereby intended to reduce information overload. This paper proposes a representation of these rules in a requirements model of an OSN. This is interesting because recommendation rules influence user behavior, which in turn influences future requirements: if there is a recommendation algorithm, its behavior should also be represented in requirements models of the system. The paper makes two contributions. We define requirements that OSNs should satisfy in order to produce relevant recommendations of event types to users, and we investigate whether an existing requirements modeling language (namely, i*) can be used to model these requirements.
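One way to picture the kind of filter-and-prioritize rules the paper represents is a weighted ranking over event types. The event types and weights below are invented for illustration and are not the paper's actual rules.

```python
# Hypothetical encoding of event-type prioritization rules of the kind
# the paper captures in a requirements model. Types, sources, and
# weights are illustrative assumptions.

RULE_WEIGHTS = {
    ("photo", "friend"): 3,
    ("status", "friend"): 2,
    ("shared", "page"): 1,
}

def prioritize(events):
    """Order events by rule weight; drop event types no rule covers."""
    scored = [(RULE_WEIGHTS.get((e["type"], e["source"]), 0), e) for e in events]
    return [e for w, e in sorted(scored, key=lambda p: -p[0]) if w > 0]

feed = [{"type": "shared", "source": "page"},
        {"type": "photo", "source": "friend"}]
print([e["type"] for e in prioritize(feed)])
```

In an i* model, each such rule would appear as a (soft)goal contributing to the user's "relevant feed" goal rather than as executable code.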
