11th ACM SIGSOFT International Workshop on Automating TEST Case Design, Selection, and Evaluation (A-TEST 2020),
November 8-9, 2020,
Virtual, USA
11th ACM SIGSOFT International Workshop on Automating TEST Case Design, Selection, and Evaluation (A-TEST 2020)
Frontmatter
Welcome from the Chairs
On behalf of the Program Committee, we are pleased to present the proceedings of the 11th International Workshop on Automating TEST Case Design, Selection and Evaluation (A-TEST 2020).
For the sixth year in a row, A-TEST is co-located with
the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). The A-TEST workshop aims to provide a venue for researchers as well as industry practitioners to exchange and discuss trending views, ideas, state-of-the-art work in progress, and scientific results on automating test case design, selection, and evaluation.
This year, because of the COVID-19 pandemic, A-TEST (as part of ESEC/FSE) is held virtually, with a program adapted to still bring
together international researchers to exchange and discuss ideas about the latest progress in software testing.
Keynotes
SafeDNN: Understanding and Verifying Neural Networks (Keynote)
Corina S. Păsăreanu
(Carnegie Mellon Silicon Valley, USA; NASA Ames Research Center, USA)
The SafeDNN project at NASA Ames explores new techniques and tools to ensure that systems that use Deep Neural Networks (DNNs) are safe, robust, and interpretable. Research directions we are pursuing in this project include symbolic execution for DNN analysis, label-guided clustering to automatically identify input regions that are robust, parallel and compositional approaches to improve formal SMT-based verification, property inference and automated program repair for DNNs, adversarial training and detection, and probabilistic reasoning for DNNs. In this talk, I will highlight some of the research advances from SafeDNN.
@InProceedings{A-TEST20p1,
author = {Corina S. Păsăreanu},
title = {SafeDNN: Understanding and Verifying Neural Networks (Keynote)},
booktitle = {Proc.\ A-TEST},
publisher = {ACM},
pages = {1--1},
doi = {10.1145/3412452.3428119},
year = {2020},
}
Publisher's Version
The Effectiveness of Automated Software Testing Techniques (Keynote)
Aldeida Aleti
(Monash University, Australia)
With the rise of AI-based systems, such as self-driving cars, Google search, and automated decision-making systems, new challenges have emerged for the testing community. Verifying such software systems is becoming an extremely difficult and expensive task, often constituting up to 90% of software development costs. Software in a self-driving car, for example, must safely operate in an infinite number of scenarios, which makes it extremely hard to find bugs in such systems. In this talk, I will explore some of these challenges and introduce our work, which aims at improving the bug-detection capabilities of automated software testing. First, I will talk about a framework that maps the effectiveness of automated software testing techniques by identifying software features that impact the ability of these techniques to achieve high code coverage. Next, I will introduce our latest work that incorporates defect prediction information to improve the efficiency of search-based software testing in detecting software bugs.
@InProceedings{A-TEST20p2,
author = {Aldeida Aleti},
title = {The Effectiveness of Automated Software Testing Techniques (Keynote)},
booktitle = {Proc.\ A-TEST},
publisher = {ACM},
pages = {2--2},
doi = {10.1145/3412452.3428120},
year = {2020},
}
Publisher's Version
Papers
Navigation and Exploration in 3D-Game Automated Play Testing
I. S. W. B. Prasetya, Maurin Voshol, Tom Tanis, Adam Smits, Bram Smit, Jacco van Mourik, Menno Klunder, Frank Hoogmoed, Stijn Hinlopen, August van Casteren, Jesse van de Berg, Naraenda G.W.Y. Prasetya,
Samira Shirzadehhajimahmood, and
Saba Gholizadeh Ansari
(Utrecht University, Netherlands)
To enable automated software testing, the ability to automatically navigate to a state of interest and to explore all, or at least a sufficient number of, instances of such a state is fundamental. When testing a computer game, the problem has an extra dimension, namely the virtual world in which the game is played. This world often plays a dominant role in constraining which logical states are reachable, and how to reach them. So, any automated testing algorithm for computer games will inevitably need a layer that deals with navigation in a virtual world. Unlike navigating through the GUI of a typical web-based application, for example, navigating a virtual world is much more challenging.
This paper discusses how concepts from geometry and graph-based path finding can be applied in the context of game testing to solve the problem of automated navigation and exploration. As a proof of concept, the paper also briefly discusses the implementation of the proposed approach.
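To make the underlying idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of graph-based path finding over a game world: waypoints of a navigation graph carry 3D positions, and A* search with a straight-line heuristic returns a traversable route to a position of interest. All names (NavGraph, find_path) are illustrative assumptions.

```python
# Hypothetical sketch: A* path finding over a navigation graph of a 3D game world.
import heapq
import math

class NavGraph:
    def __init__(self):
        self.pos = {}     # node id -> (x, y, z) position in the virtual world
        self.edges = {}   # node id -> set of directly reachable neighbour ids

    def add_node(self, node, position):
        self.pos[node] = position
        self.edges.setdefault(node, set())

    def add_edge(self, a, b):
        # traversability is assumed symmetric here; a real game world may not be
        self.edges[a].add(b)
        self.edges[b].add(a)

    def distance(self, a, b):
        return math.dist(self.pos[a], self.pos[b])

def find_path(graph, start, goal):
    """A* search from start to goal; straight-line distance is the heuristic."""
    frontier = [(0.0, start)]
    came_from = {start: None}
    cost = {start: 0.0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return list(reversed(path))
        for nxt in graph.edges[current]:
            new_cost = cost[current] + graph.distance(current, nxt)
            if nxt not in cost or new_cost < cost[nxt]:
                cost[nxt] = new_cost
                came_from[nxt] = current
                heapq.heappush(frontier, (new_cost + graph.distance(nxt, goal), nxt))
    return None  # goal unreachable from start
```

In such a setting, a play-testing agent would call a routine like find_path repeatedly, steering toward unvisited waypoints for exploration or toward the location of a target game state.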
@InProceedings{A-TEST20p3,
author = {I. S. W. B. Prasetya and Maurin Voshol and Tom Tanis and Adam Smits and Bram Smit and Jacco van Mourik and Menno Klunder and Frank Hoogmoed and Stijn Hinlopen and August van Casteren and Jesse van de Berg and Naraenda G.W.Y. Prasetya and Samira Shirzadehhajimahmood and Saba Gholizadeh Ansari},
title = {Navigation and Exploration in 3D-Game Automated Play Testing},
booktitle = {Proc.\ A-TEST},
publisher = {ACM},
pages = {3--9},
doi = {10.1145/3412452.3423570},
year = {2020},
}
Publisher's Version
Comparing Transition Trees Test Suites Effectiveness for Different Mutation Operators
Hoda Khalil and
Yvan Labiche
(Carleton University, Canada)
Research has demonstrated that faults seeded using mutation operators can be representative of faults in real systems. In this paper, we study the relationship between the different operators used to insert mutants into the fault domain of the system under test and the effectiveness of different state machine test suites at killing those mutants. We are particularly interested in the effectiveness of two interrelated state machine testing strategies at finding different types of faults: the round-trip paths strategy and the transition tree strategy. Using empirical evaluation, we compare the effectiveness of more than two thousand unique test suites at killing mutants seeded using eight different mutation operators. We perform experiments on four experimental objects and provide a qualitative analysis of the results. We conclude that neither of the two studied strategies is more effective than the other at killing a certain type of mutant. However, the structure of the finite state machine and the nature of the system under test affect the type of faults detected by the different testing strategies.
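For readers unfamiliar with the metric, effectiveness at killing mutants is commonly summarized as a mutation score, the fraction of seeded mutants killed by a suite; the paper compares such scores across mutation operators. The sketch below only illustrates that computation; the data structures and function names are hypothetical and not the authors' tooling.

```python
# Illustrative only: mutation score of a test suite, overall and per mutation operator.
def mutation_score(test_suite, mutants, run_test):
    """run_test(test, mutant) -> True if the test's outcome differs on the
    mutant from the original program (i.e., the mutant is killed)."""
    killed = sum(
        1 for mutant in mutants
        if any(run_test(test, mutant) for test in test_suite)
    )
    return killed / len(mutants) if mutants else 0.0

def score_per_operator(test_suite, mutants_by_operator, run_test):
    # mutants_by_operator: dict mapping operator name -> list of seeded mutants
    return {op: mutation_score(test_suite, muts, run_test)
            for op, muts in mutants_by_operator.items()}
```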
@InProceedings{A-TEST20p10,
author = {Hoda Khalil and Yvan Labiche},
title = {Comparing Transition Trees Test Suites Effectiveness for Different Mutation Operators},
booktitle = {Proc.\ A-TEST},
publisher = {ACM},
pages = {10--16},
doi = {10.1145/3412452.3423571},
year = {2020},
}
Publisher's Version
Fuzz4B: A Front-End to AFL Not Only for Fuzzing Experts
Ryu Miyaki,
Norihiro Yoshida, Natsuki Tsuzuki, Ryota Yamamoto, and Hiroaki Takada
(Nagoya University, Japan)
In this tool demonstration paper, we propose a tool named Fuzz4B (Fuzzing for Beginner), a front-end to the representative fuzzer AFL for developers who are inexperienced in fuzz testing. Fuzz4B is not only a front-end: it also allows developers to reproduce a crash and to minimize a fuzzed input that causes the crash. As a usage example, we demonstrate the use of Fuzz4B to perform fuzz testing and discover a failure in the open source library librope. Fuzz4B and its video are available at: https://github.com/Ryu-Miyaki/Fuzz4B.
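As context for what such a front-end wraps, the sketch below shows one plausible way to drive AFL's command-line tools from Python: afl-fuzz to discover crashing inputs and afl-tmin to minimize one. This is an assumption about a typical AFL workflow, not Fuzz4B's actual implementation; the function names, paths, and time budget are illustrative.

```python
# Hypothetical sketch of the AFL workflow a beginner-oriented front-end might wrap.
import subprocess

def fuzz(target_binary, seed_dir, out_dir, budget_s=3600):
    # afl-fuzz reads seed inputs from -i and writes findings (including crashing
    # inputs under the crashes/ subdirectory) to -o; "@@" is replaced by the path
    # of the current test case. The target is assumed to be AFL-instrumented.
    try:
        subprocess.run(
            ["afl-fuzz", "-i", seed_dir, "-o", out_dir, "--", target_binary, "@@"],
            timeout=budget_s,
        )
    except subprocess.TimeoutExpired:
        pass  # stop fuzzing once the time budget is spent

def minimize_crash(target_binary, crash_file, minimized_file):
    # afl-tmin shrinks a crashing input while preserving the observed behaviour.
    subprocess.run(
        ["afl-tmin", "-i", crash_file, "-o", minimized_file, "--", target_binary, "@@"],
        check=True,
    )

def reproduce_crash(target_binary, crash_file):
    # Re-run the target directly on the crashing input; a negative return code
    # (killed by a signal) indicates the crash was reproduced.
    result = subprocess.run([target_binary, crash_file])
    return result.returncode < 0
```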
@InProceedings{A-TEST20p17,
author = {Ryu Miyaki and Norihiro Yoshida and Natsuki Tsuzuki and Ryota Yamamoto and Hiroaki Takada},
title = {Fuzz4B: A Front-End to AFL Not Only for Fuzzing Experts},
booktitle = {Proc.\ A-TEST},
publisher = {ACM},
pages = {17--20},
doi = {10.1145/3412452.3423572},
year = {2020},
}
Publisher's Version
Video
Info
Towards Automated Testing of RPA Implementations
Marina Cernat, Adelina Nicoleta Staicu, and
Alin Stefanescu
(University of Bucharest, Romania; Cegeka, Romania)
Robotic Process Automation (RPA) is a technology that has grown tremendously in recent years, due to its usefulness in the area of process automation. An essential part of any software development process is quality assurance, so testing will be very important for RPA processes. However, classical software testing techniques are not always suitable for RPA software robots, due to the mix of the graphical description of the robots and their implementations. In this short paper, we describe the state of the practice for testing software robots and propose some ideas for test automation using model-based testing.
@InProceedings{A-TEST20p21,
author = {Marina Cernat and Adelina Nicoleta Staicu and Alin Stefanescu},
title = {Towards Automated Testing of RPA Implementations},
booktitle = {Proc.\ A-TEST},
publisher = {ACM},
pages = {21--24},
doi = {10.1145/3412452.3423573},
year = {2020},
}
Publisher's Version