ESEC/FSE 2019 Workshops
27th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2019)

10th ACM SIGSOFT International Workshop on Automating TEST Case Design, Selection, and Evaluation (A-TEST 2019), August 26-27, 2019, Tallinn, Estonia

A-TEST 2019 – Proceedings



Frontmatter

Title Page


Welcome from the Chairs


Papers

Testing Extended Finite State Machines using NSGA-III
Ana Ţurlea
(University of Bucharest, Romania)
Finite state machines (FSMs) are widely used in test data generation approaches. An extended finite state machine (EFSM) extends the FSM with memory (context variables), guards on transitions, and assignment operations. While all paths in an FSM are feasible, the combination of context variables and guards in an EFSM can lead to infeasible paths, so test data generation for EFSMs must deal with path feasibility. This paper presents a test suite generation algorithm for EFSMs. The algorithm uses NSGA-III to produce a set of feasible transition paths (test cases) that cover all transitions. We also measure the similarities between the test cases in the generated test suite.

Publisher's Version
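The feasibility problem the abstract refers to can be illustrated with a minimal sketch. The model below is hypothetical (it is not the paper's implementation): each transition carries a guard over the context variables and an update (assignment), and a path is feasible only if every guard holds when the path is replayed from the initial context.

```python
# Minimal EFSM sketch to illustrate path feasibility (hypothetical model,
# not the paper's algorithm). A transition has a guard over the context
# variables and an update (assignment).

class Transition:
    def __init__(self, src, dst, guard, update):
        self.src, self.dst = src, dst
        self.guard = guard    # context -> bool
        self.update = update  # context -> new context

def path_is_feasible(path, ctx):
    """Replay a transition path; the path is infeasible as soon as one
    guard evaluates to False on the current context."""
    for t in path:
        if not t.guard(ctx):
            return False
        ctx = t.update(ctx)
    return True

# A small counter machine: t2 is guarded by x >= 2, so any path reaching
# s2 needs at least two increments first.
inc = lambda c: {**c, "x": c["x"] + 1}
t1 = Transition("s0", "s1", lambda c: True, inc)
t_loop = Transition("s1", "s1", lambda c: True, inc)
t2 = Transition("s1", "s2", lambda c: c["x"] >= 2, lambda c: c)

print(path_is_feasible([t1, t2], {"x": 0}))          # False: x == 1 at t2
print(path_is_feasible([t1, t_loop, t2], {"x": 0}))  # True:  x == 2 at t2
```

Both paths cover transition t2 structurally, yet only the second is executable; a search-based generator such as the NSGA-III approach above must steer toward paths of the second kind.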
Extending UTP 2 with Cascading Arbitration Specifications
Marc-Florian Wendland, Martin Schneider, and Andreas Hoffmann
(Fraunhofer FOKUS, Germany)
In testing, the term arbitration describes the process of calculating the verdict after the execution of a test case or test suite. The calculation follows a defined rule set that clearly specifies which verdict to produce under which conditions. In many situations, these rules simply follow the scheme that any deviation between the expected and the actual response of the system under test leads to a fail verdict. UTP 2 introduces the concept of arbitration specifications on various hierarchy levels to determine the final verdict of an arbitration target. It provides default arbitration specifications that adhere to the above-mentioned straightforward calculation of verdicts, but allows these defaults to be overridden with user-defined arbitration specifications. Unfortunately, this override mechanism adversely affects the maintainability of test cases and test actions because of its high degree of intrusion: arbitration targets, such as test sets, test cases, and procedural elements, are tightly coupled with their arbitration specifications, which precludes reusing these arbitration targets in a different context with different arbitration specifications. In this paper, we suggest replacing this highly intrusive override mechanism with a decoupled binding mechanism. On the one hand, this binding mechanism increases both the comprehensibility and the maintainability of test specifications, because arbitration targets remain independent of any potential arbitration specification. On the other hand, it offers the user a high degree of reusability and flexibility through a cascading override mechanism inspired by W3C Cascading Style Sheets.

Publisher's Version
Test Coverage Criteria for RESTful Web APIs
Alberto Martin-Lopez ORCID logo, Sergio Segura ORCID logo, and Antonio Ruiz-Cortés ORCID logo
(University of Seville, Spain)
Web APIs following the REST architectural style (so-called RESTful web APIs) have become the de facto standard for software integration. As RESTful APIs gain momentum, so does interest in testing them. However, there is a lack of mechanisms to assess the adequacy of testing approaches in this context, which makes it difficult to automatically measure and compare their effectiveness. In this paper, we first present a set of ten coverage criteria that determine the degree to which a test suite exercises the different inputs (i.e. requests) and outputs (i.e. responses) of a RESTful API. We then arrange the proposed criteria into eight Test Coverage Levels (TCLs), where TCL0 represents the weakest coverage level and TCL7 the strongest. This enables the automated assessment and comparison of testing techniques according to the overall coverage and the TCL achieved by their generated test suites. Our evaluation on two open-source APIs with real bugs shows that the proposed coverage levels correlate well with code coverage and fault detection measurements.

Publisher's Version
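To make the idea of an output-coverage criterion concrete, here is a small sketch in the spirit of the abstract. The function name and the choice of criterion (coverage of the status codes documented for an operation) are illustrative assumptions, not the authors' tool or their exact criteria.

```python
# Illustrative coverage-criterion sketch (hypothetical, in the spirit of the
# paper): the proportion of the status codes documented for one API
# operation that are actually observed among the responses of a test suite.

def status_code_coverage(documented, observed):
    """documented/observed: sets of HTTP status codes for one operation.
    Returns a value in [0, 1]; 1.0 means every documented code was seen."""
    if not documented:
        return 1.0
    return len(documented & observed) / len(documented)

documented = {200, 400, 404}   # codes listed in the API description
observed = {200, 404}          # codes seen in the test suite's responses
print(status_code_coverage(documented, observed))  # 2 of 3 -> 0.666...
```

A criterion like this is trivially automatable from an API description and a test-execution log, which is what makes ranking test suites by coverage level practical.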
Lessons Learned from Making the Transition to Model-Based GUI Testing
Rudolf Ramler ORCID logo, Claus Klammer ORCID logo, and Thomas Wetzlmaier
(Software Competence Center Hagenberg, Austria)
Model-based testing (MBT) has been proposed as an effective and versatile approach for testing graphical user interfaces (GUIs) by automatically generating executable test cases from a model of the GUI. Model-based GUI testing has received increasing attention in research, but it is still rarely applied in practice. In this paper, we present our experiences and share the lessons we learned from successfully introducing MBT for GUI testing in three industry projects. We describe the underlying modeling approach, the development of test models in joint workshops, the implementation of the test models in the form of model programs, and the integration of MBT into the test automation architecture. The findings distilled from the three cases are summarized as lessons learned to support the adoption of a model-based approach to GUI testing in practice.

Publisher's Version
Fragility of Layout-Based and Visual GUI Test Scripts: An Assessment Study on a Hybrid Mobile Application
Riccardo Coppola ORCID logo, Luca Ardito ORCID logo, and Marco Torchiano ORCID logo
(Politecnico di Torino, Italy)
Context: Although different approaches exist for automated GUI testing of hybrid mobile applications, the practice appears not to be widely adopted by developers. A possible reason for this low diffusion is the fragility of the techniques, i.e. the frequent need to maintain test cases when the GUI of the app changes.
Goal: In this paper, we assess the maintenance needed by test cases for a hybrid mobile app, and the related fragility causes.
Methods: We evaluated a small test suite with a layout-based testing tool (Appium) and a visual one (EyeAutomate) and observed the changes the tests needed as they co-evolved with the GUI of the app.
Results: We found that 20% of the layout-based test methods and 30% of the visual test methods had to be modified at least once, and that each release induced fragilities in 3-4% of the test methods.
Conclusion: The fragility of GUI tests can entail significant maintenance effort in the test suites of large applications. We identify several principal causes of fragility for the tested hybrid application and derive guidelines for developers from them.

Publisher's Version
A Platform for Diversity-Driven Test Amplification
Marcus Kessel and Colin Atkinson
(University of Mannheim, Germany)
Test amplification approaches take a manually written set of tests (input/output mappings) and enhance their effectiveness for some clearly defined engineering goal, such as detecting faults. Conceptually, they can achieve this either in a "black box" way, using only the initial "seed" tests, or in a "white box" way, utilizing additional inputs such as the source code or specification of the software under test. However, no fully black box approach to test amplification is currently available, even though such an approach could also be used to enhance white box approaches. In this paper, we introduce a new approach that uses the seed tests to search for existing redundant implementations of the software under test and leverages them as oracles in the generation and evaluation of new tests. The approach can therefore be used as a stand-alone black box test amplification method or in tandem with other methods. We explain the approach, describe its synergies with other approaches, and provide some evidence of its practical feasibility.

Publisher's Version
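The oracle idea behind this approach can be sketched as a small differential-testing loop. The implementations and the `amplify` helper below are toy stand-ins invented for illustration, not the authors' platform: several independently developed implementations of the same function are run on a mutated seed input, and their consensus output becomes the expected result of a new test.

```python
# Black-box sketch of using redundant implementations as oracles
# (toy stand-ins for illustration, not the paper's platform).

def impl_a(xs):
    # First implementation: the built-in sort.
    return sorted(xs)

def impl_b(xs):
    # An independently written "redundant" implementation: insertion sort.
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def amplify(seed_inputs, impls, mutate):
    """Mutate each seed input and keep (input, output) pairs on which all
    redundant implementations agree; their consensus acts as the oracle."""
    new_tests = []
    for seed in seed_inputs:
        candidate = mutate(seed)
        outputs = [f(candidate) for f in impls]
        if all(o == outputs[0] for o in outputs):
            new_tests.append((candidate, outputs[0]))
    return new_tests

tests = amplify([[3, 1, 2]], [impl_a, impl_b], mutate=lambda xs: xs + [0])
print(tests)  # [([3, 1, 2, 0], [0, 1, 2, 3])]
```

Disagreeing outputs would instead flag the input as interesting (a potential fault in one of the implementations), which is why the method needs no source code or specification of the software under test.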
