SANER 2018 Workshops
Workshops of the 2018 IEEE 25th International Conference on Software Analysis, Evolution, and Reengineering (SANER)

2018 IEEE 2nd International Workshop on Validation, Analysis and Evolution of Software Tests (VST), March 20, 2018, Campobasso, Italy

VST 2018 – Proceedings



Title Page

Message from the Chairs
Software projects accumulate large sets of test cases, encoding valuable expert knowledge about the software under test that represents many person-years of effort. Over time, the reliability of the tests decreases, and they become difficult to understand and maintain. Extra effort is required for repairing broken tests and for adapting test suites and models to evolving software systems.
VST is a unique event bringing together academics, industrial researchers, and practitioners to exchange experiences, solutions, and new ideas in applying methods, techniques, and tools from software analysis, evolution, and reengineering to advance the state of the art in test development and maintenance.

Summarization Techniques for Code, Change, Testing, and User Feedback (Invited Paper)
Sebastiano Panichella
(University of Zurich, Switzerland)
Most of today's industries, from engineering to agriculture to health, run on software. In such a context, ensuring software quality plays an important role in most current working environments and has a direct impact on any scientific and technical discipline. Software maintenance and testing have the crucial goal of discovering software bugs (or defects) as early as possible, enabling software quality assurance. However, software maintenance and testing are very expensive and time-consuming activities for developers. For this reason, in recent years, several researchers in the field of Software Engineering (SE) have devoted their efforts to conceiving tools that boost developer productivity during such development, maintenance, and testing tasks. In this talk, I will first discuss some empirical work we performed to understand the main socio-technical challenges developers face when joining a new software project. I will discuss how to address them with the use of appropriate recommender systems aimed at supporting developers during program comprehension and maintenance tasks. Then, I will show how summarization techniques are an ideal technology for supporting developers when performing testing and debugging activities. Finally, I will summarize the main research advances, the current open challenges and problems, and possible future directions for boosting developer productivity.

Detecting Duplicate Examples in Behaviour Driven Development Specifications
Leonard Peter Binamungu, Suzanne M. Embury, and Nikolaos Konstantinou
(University of Manchester, UK)
In Behaviour-Driven Development (BDD), the behaviour of the software to be built is specified as a set of example interactions with the system, expressed using a "Given-When-Then" structure. The examples are written using customer language, and are readable by end-users. They are also executable, and act as tests that determine whether or not the implementation matches the desired behaviour. This approach can be effective in building a common understanding of the requirements, but it can also face problems. When the suites of examples grow large, they can be difficult and expensive to change. Duplication can creep in, and can be challenging to detect manually. Current tools for detecting duplication in code are also not effective for BDD examples. Moreover, human concerns of readability and clarity can arise. We present an approach for detecting duplication in BDD suites that is based around dynamic tracing, and describe an evaluation based on three open source systems.
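The trace-based detection idea can be illustrated with a minimal, self-contained Python sketch (all names, the production code, and the trace representation below are hypothetical, not taken from the authors' implementation): two scenarios whose executions drive the production code through the same sequence of calls are flagged as duplicate candidates.

```python
import sys

def record_trace(fn):
    """Run fn and record the names of the functions it calls."""
    trace = []
    def tracer(frame, event, arg):
        if event == "call":
            trace.append(frame.f_code.co_name)
    sys.settrace(tracer)
    try:
        fn()
    finally:
        sys.settrace(None)
    return trace[1:]  # drop the call event for fn itself

# Hypothetical production code under test
def deposit(account, amount):
    account["balance"] += amount

def balance(account):
    return account["balance"]

# Two scenarios, written Given-When-Then style as plain functions
def scenario_a():
    account = {"balance": 0}        # Given an empty account
    deposit(account, 100)           # When 100 is deposited
    assert balance(account) == 100  # Then the balance is 100

def scenario_b():
    account = {"balance": 0}        # Given an account with no money
    deposit(account, 100)           # When the user pays in 100
    assert balance(account) == 100  # Then 100 is available

# Identical dynamic traces flag the pair as a duplicate candidate
print(record_trace(scenario_a) == record_trace(scenario_b))  # True
```

In the paper's setting the examples are executed through a BDD framework and the traces are richer than plain call sequences, but the comparison principle is the same: wording differences in the customer-facing text do not hide duplication that is visible in the runtime behaviour.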

Automated Generation of Requirements-Based Test Cases for an Adaptive Cruise Control System
Adina Aniculaesei, Falk Howar, Peer Denecke, and Andreas Rausch
(TU Clausthal, Germany; TU Dortmund, Germany)
Checking that a complex software system conforms to an extensive catalogue of requirements is an elaborate and costly task that can no longer be managed through manual testing alone. In this paper, we construct an academic case study in which we apply automated requirements-based test case generation to the prototype of an adaptive cruise control system. We focus on two main research goals with respect to our method: (1) how much code coverage can be obtained and (2) how many faults can be found using the generated test cases. We report on our results as well as on the lessons learned.

A Retrospective of Production and Test Code Co-evolution in an Industrial Project
Claus Klammer, Georg Buchgeher, and Albin Kern
(Software Competence Center Hagenberg, Austria; ENGEL AUSTRIA, Austria)
Production and test code co-evolution is known to result in high-quality, maintainable, and more sustainable software artifacts. This report discusses the challenges and experiences from the transformation of a traditional development process, where most testing was conducted manually and in a subsequent development step, to an agile development process that enforces a certain level of test code coverage through automated tests. Within an industrial project, we analyze deviations from the intended co-evolution path by means of customized visualizations, and we list and discuss the observed challenges.

Evaluating the Efficiency of Continuous Testing during Test-Driven Development
Serge Demeyer, Benoît Verhaeghe, Anne Etien, Nicolas Anquetil, and Stéphane Ducasse
(University of Antwerp, Belgium; Inria, France)
Continuous testing is a novel feature of modern programming environments, where unit tests constantly run in the background, providing early feedback about breaking changes. One of the more challenging aspects of such a continuous testing tool is choosing the heuristic that selects the tests to run based on the recently applied changes. To help tool builders select the most appropriate test selection heuristic, we assess the efficiency of these heuristics in a continuous testing context. We observe on two small but representative cases that a continuous testing tool yields significant reductions in the number of tests that need to be executed. Nevertheless, these heuristics sometimes produce false negatives and thus, on rare occasions, discard pertinent tests.
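A common family of such heuristics selects only the tests whose recorded coverage overlaps the changed code. The following Python sketch is illustrative (the test names, method names, and coverage map are hypothetical, not taken from the paper's tool):

```python
# Coverage map recorded during an earlier full run:
# for each test, the set of production methods it exercised.
coverage = {
    "test_login":  {"auth.check", "auth.hash"},
    "test_report": {"report.render"},
    "test_signup": {"auth.hash", "db.insert"},
}

def select_tests(changed_methods, coverage):
    """Select the tests whose recorded coverage touches a changed method."""
    return sorted(test for test, covered in coverage.items()
                  if covered & changed_methods)

# After a change to auth.hash, only two of the three tests are rerun.
print(select_tests({"auth.hash"}, coverage))  # ['test_login', 'test_signup']
```

The false negatives mentioned above correspond to a stale coverage map: a test that would now exercise a changed method, but did not when its coverage was last recorded, is wrongly discarded.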

