8th ACM SIGSOFT International Workshop on Automated Software Testing (A-TEST 2017),
September 4-5, 2017,
Paderborn, Germany
Message from the Chairs
We proudly present the 8th edition of the A-TEST workshop on Automated Software Testing. This year, A-TEST is again co-located with the ESEC/FSE conference in Paderborn, Germany, and takes place before the main conference, on the 4th and 5th of September 2017.
The A-TEST workshop aims to provide a venue for researchers and industry practitioners to exchange and discuss trending views, ideas, state-of-the-art work in progress, and scientific results on automated test case design, selection, and evaluation.
Dynamic Mutant Subsumption Analysis using LittleDarwin
Ali Parsai and
Serge Demeyer
(University of Antwerp, Belgium)
Many academic studies in the field of software testing rely on mutation testing as their comparison criterion. However, recent studies have shown that redundant mutants significantly affect the accuracy of their results. One solution to this problem is to use mutant subsumption to detect redundant mutants. Research in this field therefore needs a mutation testing tool capable of detecting redundant mutants. In this paper, we describe how we improved our tool, LittleDarwin, to fulfill this requirement.
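To make the notion concrete, here is a minimal sketch of dynamic mutant subsumption analysis in Python, assuming a kill matrix that maps each mutant to the set of tests killing it; the function and variable names are illustrative and are not LittleDarwin's actual API.

```python
def dynamically_subsumed(kill_matrix):
    """Return the mutants made redundant by dynamic subsumption.

    Mutant a dynamically subsumes mutant b when a is killed by at
    least one test and every test that kills a also kills b. A proper
    subset is used here so that duplicate mutants (equal kill sets)
    are not discarded arbitrarily.
    """
    subsumed = set()
    for a, kills_a in kill_matrix.items():
        if not kills_a:
            continue  # a live mutant subsumes nothing
        for b, kills_b in kill_matrix.items():
            if a != b and kills_a < kills_b:
                subsumed.add(b)  # killing a guarantees killing b
    return subsumed


# Hypothetical kill matrix: m2 is redundant because every test that
# kills m1 also kills m2; m3 is live and therefore left untouched.
kill_matrix = {
    "m1": {"t1"},
    "m2": {"t1", "t2"},
    "m3": set(),
}
print(dynamically_subsumed(kill_matrix))  # {'m2'}
```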
@InProceedings{A-TEST17p1,
author = {Ali Parsai and Serge Demeyer},
title = {Dynamic Mutant Subsumption Analysis using LittleDarwin},
booktitle = {Proc.\ A-TEST},
publisher = {ACM},
pages = {1--4},
doi = {},
year = {2017},
}
Hybrid Monkey Testing: Enhancing Automated GUI Tests with Random Test Generation
Thomas Wetzlmaier and
Rudolf Ramler
(Software Competence Center Hagenberg, Austria)
Many software projects maintain automated GUI tests that are repeatedly executed for regression testing. Every test run executes exactly the same fixed sequence of steps, confirming that the currently tested version shows precisely the same behavior as the last version. The confirmatory approach implemented by these tests limits their ability to find new defects. We therefore propose to combine existing automated regression tests with random test generation. Random test generation creates a rich variety of test steps that interact with the system under test in new, unexpected ways. Enhancing existing test cases with random test steps makes it possible to reveal new, hidden defects with little extra effort. In this paper, we describe our implementation of a hybrid approach that enhances existing GUI test cases with additional, randomly generated interactions. We conducted an experiment using a mature, widely used open-source application. On average, the added random interactions increased the number of visited application windows per test by 23.6% and code coverage by 12.9%. Running the enhanced tests revealed three new defects.
@InProceedings{A-TEST17p5,
author = {Thomas Wetzlmaier and Rudolf Ramler},
title = {Hybrid Monkey Testing: Enhancing Automated GUI Tests with Random Test Generation},
booktitle = {Proc.\ A-TEST},
publisher = {ACM},
pages = {5--10},
doi = {},
year = {2017},
}
Collaborative Economy for Testing Cost Reduction on Android Ecosystem
Kenyo Abadio Crosara Faria, Eduardo Noronha de Andrade Freitas, and Auri Marcelo Rizzo Vincenzi
(Federal Institute of Goias, Brazil; Federal University of Sao Carlos, Brazil)
Collaborative Economy (CE) is driving significant changes in several sectors around the world, as exemplified by companies such as Uber, Airbnb, and Turo. The general idea behind CE is the establishment of a win-win partnership between two agents: one agent needs a resource that is expensive to acquire or rent, while the other agent owns that resource and frequently leaves it idle.
Verifying software quality on the Android ecosystem is a hard task due to fragmentation among devices, i.e., the large number of device configurations.
In this scenario, compatibility testing demands the acquisition or rental of many different devices, which are expensive and quickly lose value as technology evolves.
On the other hand, there are many devices around the world with a high rate of idle time that could be used for testing, generating extra income for their owners.
In this sense, this paper advocates the principles of CE for supporting the testing of Android applications. We implemented a platform to apply and evaluate the practical usefulness and applicability of CE principles in the Android software testing context.
The platform makes it possible to run system tests on several geographically distributed devices simultaneously. The general idea is to record system tests using an extension of Espresso, Google's framework for user interface (UI) testing, and to execute the test cases on idle devices previously registered on the platform, according to the test requirements.
We carried out exploratory studies that demonstrate the potential of the proposed platform, its benefits, and its impact not only on the market but also on how testing can be run efficiently on the Android ecosystem.
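The device-matching step such a platform needs could look like the sketch below; the Device fields and the match_devices function are our own illustration, not the platform's real API.

```python
from dataclasses import dataclass


@dataclass
class Device:
    owner: str
    model: str
    api_level: int
    idle: bool


def match_devices(registry, min_api_level, models=None):
    """Select registered idle devices that satisfy a test run's requirements."""
    return [
        d for d in registry
        if d.idle
        and d.api_level >= min_api_level
        and (models is None or d.model in models)
    ]


registry = [
    Device("alice", "Pixel", 26, idle=True),
    Device("bob", "Galaxy S7", 24, idle=False),  # busy, never selected
]
print(match_devices(registry, min_api_level=25))  # alice's Pixel only
```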
@InProceedings{A-TEST17p11,
author = {Kenyo Abadio Crosara Faria and Eduardo Noronha de Andrade Freitas and Auri Marcelo Rizzo Vincenzi},
title = {Collaborative Economy for Testing Cost Reduction on Android Ecosystem},
booktitle = {Proc.\ A-TEST},
publisher = {ACM},
pages = {11--18},
doi = {},
year = {2017},
}
Evaluating Quality of Security Testing of the JDK
Padmanabhan Krishnan, Jerome Loh, Rebecca O'Donoghue, and Larissa Meinicke
(Oracle, Australia; University of Queensland, Australia)
In this position paper, we describe how mutation testing can be used to evaluate the quality of test suites from a security viewpoint. Our focus is on measuring the quality of the test suite associated with the Java Development Kit (JDK), because it provides the core security properties for all applications. We describe the challenges associated with identifying mutation operators that target security properties specific to the Java model, and with ensuring that our solution can be automated for large code bases like the JDK.
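As a hypothetical example of a security-specific mutation operator, the sketch below deletes single-line calls to SecurityManager-style permission checks in Java source text; a test suite that still passes on the resulting mutant never exercises that check. The regular expression and helper name are our own illustration, not the operators used in the paper.

```python
import re

# Matches single-line statements such as "sm.checkPermission(p);"
PERMISSION_CHECK = re.compile(r"^\s*\w+\.checkPermission\([^;]*\);\s*$")


def drop_permission_checks(java_source):
    """Yield one mutant per deleted permission check (one change each)."""
    lines = java_source.splitlines()
    for i, line in enumerate(lines):
        if PERMISSION_CHECK.match(line):
            yield "\n".join(lines[:i] + lines[i + 1:])
```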
@InProceedings{A-TEST17p19,
author = {Padmanabhan Krishnan and Jerome Loh and Rebecca O'Donoghue and Larissa Meinicke},
title = {Evaluating Quality of Security Testing of the JDK},
booktitle = {Proc.\ A-TEST},
publisher = {ACM},
pages = {19--20},
doi = {},
year = {2017},
}
Comparing Automated Visual GUI Testing Tools: An Industrial Case Study
Vahid Garousi,
Wasif Afzal, Adem Çağlar, İhsan Berk Işık, Berker Baydan, Seçkin Çaylak, Ahmet Zeki Boyraz, Burak Yolaçan, and Kadir Herkiloğlu
(University of Luxembourg, Luxembourg; Wageningen University, Netherlands; Mälardalen University, Sweden; Havelsan, Turkey)
Visual GUI testing (VGT) is a tool-driven technique that uses image recognition to interact with and assert the behaviour of the system under test. Motivated by a real industrial need, in the context of a large Turkish software and systems company providing solutions in the defense and IT sectors, we systematically planned and executed a VGT project. The goal of the initial phase of the project was to empirically evaluate two well-known VGT tools (Sikuli and JAutomate) to help the company select the better tool for a given testing project. Our results show that both tools suffer from similar test 'replay' problems, such as the inability to find smaller-sized images. The repeatability of test executions was better for JAutomate for one of the two software systems under test (SUT) and comparable for the other. In terms of test development effort, for both tools, effort correlated highly with the number of steps in the test suites; however, the effort is reduced if test code is reused. The study has already benefited the test engineers and managers in the company by increasing the company's know-how w.r.t. VGT and by identifying the challenges and their workarounds in using the tools. The industrial case study in this paper intends to add to the body of evidence on VGT and to help other researchers and practitioners.
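For readers unfamiliar with VGT, a SikuliX-style script drives the GUI purely through screenshots, roughly along the lines of the sketch below; the .png file names are placeholders, and the flow is a generic login scenario rather than a test from the study.

```python
# SikuliX script sketch: every locator is an image, so the test is
# independent of the GUI toolkit but sensitive to rendering changes.
wait("login_dialog.png", 10)       # block until the dialog is rendered
click("username_field.png")        # find the field by visual match
type("tester")                     # type into the focused widget
click("ok_button.png")
assert exists("main_window.png")   # visual assertion on the outcome
```

Because assertions like exists() match pixels rather than widget IDs, small rendering differences (scaling, fonts, themes) are a common source of the 'replay' problems reported above.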
@InProceedings{A-TEST17p21,
author = {Vahid Garousi and Wasif Afzal and Adem Çağlar and İhsan Berk Işık and Berker Baydan and Seçkin Çaylak and Ahmet Zeki Boyraz and Burak Yolaçan and Kadir Herkiloğlu},
title = {Comparing Automated Visual GUI Testing Tools: An Industrial Case Study},
booktitle = {Proc.\ A-TEST},
publisher = {ACM},
pages = {21--28},
doi = {},
year = {2017},
}