ISSTA 2016
25th International Symposium on Software Testing and Analysis (ISSTA)

2nd International Workshop on User Interface Test Automation (INTUITEST 2016), July 21, 2016, Saarbrücken, Germany

INTUITEST 2016 – Proceedings



Title Page

Message from the Chairs
Welcome to the 2nd International Workshop on User Interface Test Automation (INTUITEST 2016), organized at Saarland University in Saarbrücken, Germany, on 21 July 2016, as a workshop of the International Symposium on Software Testing and Analysis (ISSTA'16).
Code Coverage for Any Kind of Test in Any Kind of Transcompiled Cross-Platform Applications
Matthias Hirzel and Herbert Klaeren
(University of Tübingen, Germany)
Code coverage is a widely used measure of how thoroughly an application is tested, and many tools compute it for different languages. However, to the best of our knowledge, most of them focus on unit testing and ignore end-to-end tests such as UI or web tests. Furthermore, there is no support for determining the code coverage of transcompiled cross-platform applications: applications that are written in one language but compiled to and executed in a different programming language, possibly on a different platform. In this paper, we propose a new code coverage method that calculates the coverage of any kind of test (unit, integration, or UI/web test) for any type of (transcompiled) application (desktop, web, or mobile). Developers obtain information about which parts of the source code are not covered by tests. The basis of our approach is generic: it operates on an abstract syntax tree and may therefore be applied to numerous programming languages. We present our approach for applications developed in Java and evaluate our tool on a web application created with the Google Web Toolkit, on standard desktop applications, and on several small Java applications that use the Swing library for their user interfaces. Our results show that our tool is able to judge the code coverage of any kind of test; in particular, it is independent of the unit or UI/web test framework in use. The runtime performance is promising, although the tool is not as fast as existing tools in the area of unit testing.
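The general idea behind this style of source-level coverage measurement can be sketched with a small, purely hypothetical example (in Python, not the authors' Java tooling): a recording call is inserted before every statement at the AST level, so coverage is counted in terms of the original source no matter which test framework, or which compiled form of the program, ends up executing the code.

```python
import ast

# Hypothetical toy, not the paper's tool: insert a _cov(line) probe before
# every statement at the AST level, then run the program and report which
# statement lines were never reached.

HITS = set()

def _cov(lineno):
    HITS.add(lineno)

def instrument(tree):
    # Prefix every statement in every statement list with a _cov(...) call.
    for node in ast.walk(tree):
        for field in ("body", "orelse", "finalbody"):
            stmts = getattr(node, field, None)
            if isinstance(stmts, list) and stmts and isinstance(stmts[0], ast.stmt):
                new = []
                for s in stmts:
                    probe = ast.Expr(ast.Call(ast.Name("_cov", ast.Load()),
                                              [ast.Constant(s.lineno)], []))
                    new.extend([probe, s])
                setattr(node, field, new)
    return ast.fix_missing_locations(tree)

SOURCE = """\
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"
"""

tree = instrument(ast.parse(SOURCE))
env = {"_cov": _cov}
exec(compile(tree, "<demo>", "exec"), env)
env["classify"](5)                      # exercises only one branch

statement_lines = {1, 2, 3, 4}
print(sorted(statement_lines - HITS))   # -> [3]: the "negative" branch is untested
```

Because the probes are attached to the abstract syntax tree rather than to any particular test framework or target platform, the same mechanism works whether the code is driven by a unit test, an integration test, or a UI/web test — which is the property the paper's approach relies on for transcompiled applications.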
Automated Mobile UI Test Fragility: An Exploratory Assessment Study on Android
Riccardo Coppola, Emanuele Raffero, and Marco Torchiano
(Politecnico di Torino, Italy)
Automated UI testing suffers from fragility due to continuous, although minor, changes in the UI of applications. Such fragility has been shown especially in the web domain, whereas no clear evidence is available for mobile applications. Our goal is to perform an exploratory assessment of the extent and causes of the fragility of automated UI tests for mobile applications. For this purpose, we developed a small test suite for an Android application (K-9 Mail) using five different testing frameworks, and we observed the changes induced in the tests by the evolution of the UI. We found that up to 75% of code-based tests, and up to 100% of image-recognition tests, had to be adapted because of the changes introduced between two versions of the application. In addition, we identified the main causes of such fragility: changes of identifiers, text, or graphics; removal or relocation of elements; activity flow variation; execution time variation; and usage of physical buttons. This preliminary assessment shows that the fragility of UI tests can be a relevant issue for mobile applications as well. A few common causes emerged that can serve as the basis for guidelines on fragility avoidance and repair.
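One of the fragility causes identified in the study, changed identifiers, can be illustrated with a small, purely hypothetical sketch (in Python, not tied to any of the five frameworks used in the paper): an ID-based locator written against one version of the UI silently stops matching in the next version, even though the widget itself survives.

```python
# Hypothetical widget trees for two versions of an app screen. In v2 the
# developers renamed the button's identifier; the button and its visible
# label survive, but an ID-based locator no longer finds it.
v1 = [{"id": "btn_send", "text": "Send"}, {"id": "btn_cancel", "text": "Cancel"}]
v2 = [{"id": "btn_submit", "text": "Send"}, {"id": "btn_cancel", "text": "Cancel"}]

def find_by_id(widgets, wid):
    return next((w for w in widgets if w["id"] == wid), None)

def find_by_text(widgets, text):
    return next((w for w in widgets if w["text"] == text), None)

assert find_by_id(v1, "btn_send") is not None   # test passes against v1
assert find_by_id(v2, "btn_send") is None       # same locator breaks on v2
assert find_by_text(v2, "Send") is not None     # text-based lookup still works
print("id locator broke; text locator survived")
```

Note that the text-based fallback is itself fragile to another of the listed causes (changes of text), which is why the paper argues for guidelines covering several repair strategies rather than a single locator style.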
