Workshop VST 2018 – Author Index
Aniculaesei, Adina
Adina Aniculaesei, Falk Howar, Peer Denecke, and Andreas Rausch (TU Clausthal, Germany; TU Dortmund, Germany) Checking that a complex software system conforms to an extensive catalogue of requirements is an elaborate and costly task which can no longer be managed through manual testing alone. In this paper, we construct an academic case study in which we apply automated requirements-based test case generation to the prototype of an adaptive cruise control system. We focus on two main research goals with respect to our method: (1) how much code coverage can be obtained and (2) how many faults can be found using the generated test cases. We report on our results as well as on the lessons learned.
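To make the requirements-to-tests step concrete, here is a minimal sketch that is not taken from the paper: a hypothetical requirement ("if the distance to the leading vehicle falls below a safety threshold, the system must decelerate") is turned into boundary-value test cases around that threshold and run against a toy controller. The function names and the threshold value are invented for illustration.

```python
# Illustrative sketch only: a hypothetical ACC requirement turned into
# generated boundary-value test cases. Names (acc_command, SAFE_DISTANCE)
# and values are invented and do not come from the paper.

SAFE_DISTANCE = 50.0  # metres; hypothetical safety threshold


def acc_command(distance_to_lead, own_speed, lead_speed):
    """Toy controller: decelerate when closer than the safety distance."""
    if distance_to_lead < SAFE_DISTANCE:
        return "decelerate"
    if own_speed < lead_speed:
        return "accelerate"
    return "hold"


def generate_boundary_tests(threshold):
    """Derive test inputs around the requirement's threshold (boundary values)."""
    epsilon = 0.1
    return [
        {"distance": threshold - epsilon, "expected": "decelerate"},
        {"distance": threshold, "expected_not": "decelerate"},
        {"distance": threshold + epsilon, "expected_not": "decelerate"},
    ]


if __name__ == "__main__":
    for case in generate_boundary_tests(SAFE_DISTANCE):
        actual = acc_command(case["distance"], own_speed=30.0, lead_speed=30.0)
        if "expected" in case:
            assert actual == case["expected"], case
        else:
            assert actual != case["expected_not"], case
    print("all generated requirement-based tests passed")
```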
Anquetil, Nicolas
Serge Demeyer, Benoît Verhaeghe, Anne Etien, Nicolas Anquetil, and Stéphane Ducasse (University of Antwerp, Belgium; Inria, France) Continuous testing is a novel feature within modern programming environments, where unit tests constantly run in the background, providing early feedback about breaking changes. One of the more challenging aspects of such a continuous testing tool is choosing the heuristic which selects the tests to run based on the changes recently applied. To help tool builders select the most appropriate test selection heuristic, we assess their efficiency in a continuous testing context. We observe on two small but representative cases that a continuous testing tool yields significant reductions in the number of tests that need to be executed. Nevertheless, these heuristics sometimes result in false negatives and thus, on rare occasions, discard pertinent tests.
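As an illustration of the kind of heuristic being compared (not the paper's actual implementation; the coverage map and class names are invented), the following sketch selects only the tests whose recorded coverage touches a recently changed production class.

```python
# Minimal sketch of one possible change-based test selection heuristic,
# assuming a precomputed coverage map from production classes to the tests
# that exercise them. All names are illustrative.

from typing import Dict, List, Set

# Hypothetical coverage map: production class -> tests that execute it.
COVERAGE_MAP: Dict[str, Set[str]] = {
    "Account": {"AccountTest.test_deposit", "AccountTest.test_withdraw"},
    "Transaction": {"TransactionTest.test_commit"},
    "ReportWriter": {"ReportTest.test_render"},
}


def select_tests(changed_classes: List[str]) -> Set[str]:
    """Select only the tests that cover at least one changed class."""
    selected: Set[str] = set()
    for cls in changed_classes:
        # A class missing from the map selects nothing, which is exactly the
        # kind of false negative the abstract warns about.
        selected |= COVERAGE_MAP.get(cls, set())
    return selected


if __name__ == "__main__":
    print(select_tests(["Account"]))
    # {'AccountTest.test_deposit', 'AccountTest.test_withdraw'}
```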
Binamungu, Leonard Peter
Leonard Peter Binamungu, Suzanne M. Embury, and Nikolaos Konstantinou (University of Manchester, UK) In Behaviour-Driven Development (BDD), the behaviour of the software to be built is specified as a set of example interactions with the system, expressed using a "Given-When-Then" structure. The examples are written in customer language, and are readable by end-users. They are also executable, and act as tests that determine whether the implementation matches the desired behaviour or not. This approach can be effective in building a common understanding of the requirements, but it can also face problems. When suites of examples grow large, they can be difficult and expensive to change. Duplication can creep in, and can be challenging to detect manually. Current tools for detecting duplication in code are also not effective for BDD examples. Moreover, human concerns of readability and clarity can arise. We present an approach for detecting duplication in BDD suites that is based on dynamic tracing, and describe an evaluation based on three open source systems.
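A minimal sketch of the general idea of trace-based duplicate detection follows (the scenario names and traces are invented, and the grouping criterion is a simplification rather than the authors' actual tool): scenarios whose executions invoke the same production methods in the same order are reported as duplication candidates.

```python
# Illustrative sketch: group BDD scenarios by their recorded dynamic trace;
# groups with more than one scenario are duplication candidates.
# Scenario names and traces are hypothetical.

from collections import defaultdict
from typing import Dict, List, Tuple

# Hypothetical dynamic traces: scenario -> ordered production methods invoked.
TRACES: Dict[str, Tuple[str, ...]] = {
    "Deposit into empty account": ("Account.open", "Account.deposit", "Account.balance"),
    "Deposit into new account": ("Account.open", "Account.deposit", "Account.balance"),
    "Withdraw more than balance": ("Account.open", "Account.withdraw"),
}


def duplicate_candidates(traces: Dict[str, Tuple[str, ...]]) -> List[List[str]]:
    """Group scenarios with identical traces; groups of size > 1 are candidates."""
    by_trace = defaultdict(list)
    for scenario, trace in traces.items():
        by_trace[trace].append(scenario)
    return [group for group in by_trace.values() if len(group) > 1]


if __name__ == "__main__":
    print(duplicate_candidates(TRACES))
    # [['Deposit into empty account', 'Deposit into new account']]
```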
Buchgeher, Georg
Claus Klammer, Georg Buchgeher, and Albin Kern (Software Competence Center Hagenberg, Austria; ENGEL AUSTRIA, Austria) Production and test code co-evolution is known to result in high-quality, maintainable, and more sustainable software artifacts. This report discusses the challenges and experiences gained in the transformation from a traditional development process, where most of the testing was conducted manually and in a subsequent development step, to an agile development process that enforces a certain level of test code coverage through automated tests. Within an industrial project, we analyze the deviations from the intended co-evolution path by means of customized visualizations, and we list and discuss the observed challenges.
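A minimal sketch of what enforcing a coverage level can look like in practice follows (the threshold, module names, and numbers are hypothetical, and this is not the project's actual tooling): a build gate that fails whenever a module's measured coverage drops below the agreed threshold.

```python
# Illustrative coverage gate, assuming coverage data is available as
# covered/total line counts per module. All names and values are invented.

from typing import Dict, Tuple

REQUIRED_COVERAGE = 0.80  # hypothetical project-wide threshold


def coverage_gate(per_module: Dict[str, Tuple[int, int]], threshold: float) -> bool:
    """Return True if every module meets the threshold; print offenders otherwise."""
    ok = True
    for module, (covered, total) in per_module.items():
        ratio = covered / total if total else 1.0
        if ratio < threshold:
            ok = False
            print(f"{module}: {ratio:.0%} < required {threshold:.0%}")
    return ok


if __name__ == "__main__":
    measurements = {"injection_unit": (420, 500), "clamping_unit": (300, 500)}
    if not coverage_gate(measurements, REQUIRED_COVERAGE):
        raise SystemExit(1)  # fail the build, as an agile pipeline gate would
```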
Demeyer, Serge
Serge Demeyer, Benoît Verhaeghe, Anne Etien, Nicolas Anquetil, and Stéphane Ducasse (University of Antwerp, Belgium; Inria, France) Continuous testing is a novel feature within modern programming environments, where unit tests constantly run in the background, providing early feedback about breaking changes. One of the more challenging aspects of such a continuous testing tool is choosing the heuristic which selects the tests to run based on the changes recently applied. To help tool builders select the most appropriate test selection heuristic, we assess their efficiency in a continuous testing context. We observe on two small but representative cases that a continuous testing tool yields significant reductions in the number of tests that need to be executed. Nevertheless, these heuristics sometimes result in false negatives and thus, on rare occasions, discard pertinent tests.
Denecke, Peer
Adina Aniculaesei, Falk Howar, Peer Denecke, and Andreas Rausch (TU Clausthal, Germany; TU Dortmund, Germany) Checking that a complex software system conforms to an extensive catalogue of requirements is an elaborate and costly task which can no longer be managed through manual testing alone. In this paper, we construct an academic case study in which we apply automated requirements-based test case generation to the prototype of an adaptive cruise control system. We focus on two main research goals with respect to our method: (1) how much code coverage can be obtained and (2) how many faults can be found using the generated test cases. We report on our results as well as on the lessons learned.
Ducasse, Stéphane
Serge Demeyer, Benoît Verhaeghe, Anne Etien, Nicolas Anquetil, and Stéphane Ducasse (University of Antwerp, Belgium; Inria, France) Continuous testing is a novel feature within modern programming environments, where unit tests constantly run in the background, providing early feedback about breaking changes. One of the more challenging aspects of such a continuous testing tool is choosing the heuristic which selects the tests to run based on the changes recently applied. To help tool builders select the most appropriate test selection heuristic, we assess their efficiency in a continuous testing context. We observe on two small but representative cases that a continuous testing tool yields significant reductions in the number of tests that need to be executed. Nevertheless, these heuristics sometimes result in false negatives and thus, on rare occasions, discard pertinent tests.
Embury, Suzanne M.
Leonard Peter Binamungu, Suzanne M. Embury, and Nikolaos Konstantinou (University of Manchester, UK) In Behaviour-Driven Development (BDD), the behaviour of the software to be built is specified as a set of example interactions with the system, expressed using a "Given-When-Then" structure. The examples are written in customer language, and are readable by end-users. They are also executable, and act as tests that determine whether the implementation matches the desired behaviour or not. This approach can be effective in building a common understanding of the requirements, but it can also face problems. When suites of examples grow large, they can be difficult and expensive to change. Duplication can creep in, and can be challenging to detect manually. Current tools for detecting duplication in code are also not effective for BDD examples. Moreover, human concerns of readability and clarity can arise. We present an approach for detecting duplication in BDD suites that is based on dynamic tracing, and describe an evaluation based on three open source systems.
Etien, Anne
Serge Demeyer, Benoît Verhaeghe, Anne Etien, Nicolas Anquetil, and Stéphane Ducasse (University of Antwerp, Belgium; Inria, France) Continuous testing is a novel feature within modern programming environments, where unit tests constantly run in the background, providing early feedback about breaking changes. One of the more challenging aspects of such a continuous testing tool is choosing the heuristic which selects the tests to run based on the changes recently applied. To help tool builders select the most appropriate test selection heuristic, we assess their efficiency in a continuous testing context. We observe on two small but representative cases that a continuous testing tool yields significant reductions in the number of tests that need to be executed. Nevertheless, these heuristics sometimes result in false negatives and thus, on rare occasions, discard pertinent tests.
Howar, Falk
Adina Aniculaesei, Falk Howar, Peer Denecke, and Andreas Rausch (TU Clausthal, Germany; TU Dortmund, Germany) Checking that a complex software system conforms to an extensive catalogue of requirements is an elaborate and costly task which can no longer be managed through manual testing alone. In this paper, we construct an academic case study in which we apply automated requirements-based test case generation to the prototype of an adaptive cruise control system. We focus on two main research goals with respect to our method: (1) how much code coverage can be obtained and (2) how many faults can be found using the generated test cases. We report on our results as well as on the lessons learned.
Kern, Albin
Claus Klammer, Georg Buchgeher, and Albin Kern (Software Competence Center Hagenberg, Austria; ENGEL AUSTRIA, Austria) Production and test code co-evolution is known to result in high-quality, maintainable, and more sustainable software artifacts. This report discusses the challenges and experiences gained in the transformation from a traditional development process, where most of the testing was conducted manually and in a subsequent development step, to an agile development process that enforces a certain level of test code coverage through automated tests. Within an industrial project, we analyze the deviations from the intended co-evolution path by means of customized visualizations, and we list and discuss the observed challenges.
Klammer, Claus
Claus Klammer, Georg Buchgeher, and Albin Kern (Software Competence Center Hagenberg, Austria; ENGEL AUSTRIA, Austria) Production and test code co-evolution is known to result in high-quality, maintainable, and more sustainable software artifacts. This report discusses the challenges and experiences gained in the transformation from a traditional development process, where most of the testing was conducted manually and in a subsequent development step, to an agile development process that enforces a certain level of test code coverage through automated tests. Within an industrial project, we analyze the deviations from the intended co-evolution path by means of customized visualizations, and we list and discuss the observed challenges.
Konstantinou, Nikolaos
Leonard Peter Binamungu, Suzanne M. Embury, and Nikolaos Konstantinou (University of Manchester, UK) In Behaviour-Driven Development (BDD), the behaviour of the software to be built is specified as a set of example interactions with the system, expressed using a "Given-When-Then" structure. The examples are written in customer language, and are readable by end-users. They are also executable, and act as tests that determine whether the implementation matches the desired behaviour or not. This approach can be effective in building a common understanding of the requirements, but it can also face problems. When suites of examples grow large, they can be difficult and expensive to change. Duplication can creep in, and can be challenging to detect manually. Current tools for detecting duplication in code are also not effective for BDD examples. Moreover, human concerns of readability and clarity can arise. We present an approach for detecting duplication in BDD suites that is based on dynamic tracing, and describe an evaluation based on three open source systems.
Panichella, Sebastiano
Sebastiano Panichella (University of Zurich, Switzerland) Most of today's industries, from engineering to agriculture to health, are run on software. In such a context, ensuring software quality plays an important role in most current working environments and has a direct impact on any scientific and technical discipline. Software maintenance and testing have the crucial goal of finding possible software bugs (or defects) as early as possible, enabling software quality assurance. However, software maintenance and testing are very expensive and time-consuming activities for developers. For this reason, in recent years several researchers in the field of Software Engineering (SE) have devoted their efforts to conceiving tools for boosting developer productivity during such development, maintenance, and testing tasks. In this talk, I will first discuss some empirical work we performed to understand the main socio-technical challenges developers face when joining a new software project. I will discuss how to address them with the use of appropriate recommender systems aimed at supporting developers during program comprehension and maintenance tasks. Then, I will show how summarization techniques are an ideal technology for supporting developers when performing testing and debugging activities. Finally, I will summarize the main research advances, the current open challenges, and possible future directions to exploit for boosting developer productivity.
Rausch, Andreas
Adina Aniculaesei, Falk Howar, Peer Denecke, and Andreas Rausch (TU Clausthal, Germany; TU Dortmund, Germany) Checking that a complex software system conforms to an extensive catalogue of requirements is an elaborate and costly task which can no longer be managed through manual testing alone. In this paper, we construct an academic case study in which we apply automated requirements-based test case generation to the prototype of an adaptive cruise control system. We focus on two main research goals with respect to our method: (1) how much code coverage can be obtained and (2) how many faults can be found using the generated test cases. We report on our results as well as on the lessons learned.
Verhaeghe, Benoît
Serge Demeyer, Benoît Verhaeghe, Anne Etien, Nicolas Anquetil, and Stéphane Ducasse (University of Antwerp, Belgium; Inria, France) Continuous testing is a novel feature within modern programming environments, where unit tests constantly run in the background, providing early feedback about breaking changes. One of the more challenging aspects of such a continuous testing tool is choosing the heuristic which selects the tests to run based on the changes recently applied. To help tool builders select the most appropriate test selection heuristic, we assess their efficiency in a continuous testing context. We observe on two small but representative cases that a continuous testing tool yields significant reductions in the number of tests that need to be executed. Nevertheless, these heuristics sometimes result in false negatives and thus, on rare occasions, discard pertinent tests.
16 authors