ISSTA 2011 Workshop (co-located with the 2011 International Symposium on Software Testing and Analysis, ISSTA 2011)

2011 International Workshop on End-to-End Test Script Engineering (ETSE 2011), July 17, 2011, Toronto, ON, Canada

ETSE 2011 – Proceedings



Title Page


Foreword
In recent years, we have witnessed the growing importance of software test automation. In particular, the creation of automated test scripts has received increasing emphasis and interest in industry. Automated test scripts provide many benefits, such as repeatable, predictable, and efficient test executions. However, creating scripts from manual test cases can be tedious and can require significant up-front investment. Moreover, after the initial development, like any software system, test scripts require maintenance. Thus, test scripts have a lifecycle, involving different development, maintenance, and validation activities, and must co-evolve with the application. To perform such activities efficiently (especially when the test suite of an application grows over time to consist of hundreds of scripts), the development of appropriate tools, techniques, and methods that encompass the entire test-script lifecycle is essential.
The goal of this first workshop is to emphasize the end-to-end aspect of test-script engineering, and provide a forum for academics and practitioners to discuss the challenges, accomplishments, opportunities, and promising research directions related to the development and maintenance of automated test scripts.

Model based Approach to Assist Test Case Creation, Execution, and Maintenance for Test Automation
Priya Gupta and Prafullakumar Surve
(Tata Research Development and Design Center, India)
Applications, once developed, need to be maintained and tested as they undergo frequent changes. Test automation plays a significant role in testing activity, as it saves time and provides better utilization of resources. Test automation itself comes with many challenges, such as mapping user specifications to test cases, test-case generation, and maintenance of test cases and test scripts. In this paper, we propose a model-driven approach for test automation that provides end-to-end assistance in test-case generation and automation, with a focus on reusability and maintainability. Functional specifications of the system are mapped to test cases for traceability, which ensures a better test-automation process. The functional specifications are also used as input to design process models, which are used for the automatic generation of test cases. Process models consist of flows of different tasks in a specified sequence. By recording the individual tasks, test scripts for all the test cases are generated. The test cases and test scripts can be modified and maintained through a user-friendly user interface (UI), which gives the test designer better control and eases the tester's load. In this paper, we also present a case study performed on the JBilling application to evaluate our approach.
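As a rough illustration of how a process model can drive test-case generation (a sketch only, not the authors' tooling; all class and task names are hypothetical), the following Java snippet enumerates the task sequences of a toy process model, each complete sequence standing for one abstract test case.

// Illustrative sketch only: a toy "process model" of named tasks and the
// enumeration of task sequences (paths) as abstract test cases.
// All class and task names here are hypothetical, not taken from the paper.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ProcessModelSketch {

    // Each task may be followed by one or more alternative tasks.
    private final Map<String, List<String>> flows = new LinkedHashMap<>();

    void addFlow(String from, String to) {
        flows.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // Enumerate every task sequence from 'start' to a task with no successor.
    // Each complete sequence corresponds to one abstract test case.
    List<List<String>> generateTestCases(String start) {
        List<List<String>> testCases = new ArrayList<>();
        walk(start, new ArrayList<>(List.of(start)), testCases);
        return testCases;
    }

    private void walk(String task, List<String> path, List<List<String>> out) {
        List<String> next = flows.getOrDefault(task, List.of());
        if (next.isEmpty()) {            // end of flow: emit one test case
            out.add(new ArrayList<>(path));
            return;
        }
        for (String successor : next) {
            path.add(successor);
            walk(successor, path, out);
            path.remove(path.size() - 1);
        }
    }

    public static void main(String[] args) {
        ProcessModelSketch model = new ProcessModelSketch();
        // Hypothetical billing flow, loosely inspired by an invoicing scenario.
        model.addFlow("login", "createInvoice");
        model.addFlow("createInvoice", "applyDiscount");
        model.addFlow("createInvoice", "sendInvoice");
        model.addFlow("applyDiscount", "sendInvoice");
        model.generateTestCases("login").forEach(tc -> System.out.println(tc));
    }
}

In the approach described above, recorded script fragments for the individual tasks would then be combined along each sequence to obtain executable test scripts.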

Utilizing User Interface Models for Automated Instantiation and Execution of System Tests
Benedikt Hauptmann and Maximilian Junker
(TU München, Germany)
Scripts for automated system tests often contain technical knowledge about the user interface (UI). This makes test scripts brittle and hard to maintain, which leads to high maintenance costs. As a consequence, automation of system tests is often abandoned. We present a model-driven approach that separates UI knowledge from test scripts. Tests are defined on a higher level, abstracting from UI usage. During test instantiation, abstract tests are enriched with UI information and executed against the system. We demonstrate the application of our approach to graphical UIs (GUIs) such as rich clients and web applications. To show feasibility, we present a prototypical implementation that tests the open-source application Bugzilla.
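To make the separation concrete, here is a minimal sketch, assuming Selenium WebDriver and hypothetical locators, of a test written against logical field names with the UI knowledge isolated in a separate mapping; the paper's actual UI models and Bugzilla locators are not reproduced here.

// Illustrative sketch only: one way to keep UI knowledge out of the test itself.
// The abstract test below talks about logical fields; a separate UI mapping
// binds them to concrete locators. Locator values are hypothetical, not taken
// from the paper's Bugzilla case study. Assumes Selenium WebDriver on the classpath.
import java.util.Map;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

class UiMapping {
    // Logical name -> concrete locator; in the approach described above this
    // knowledge would come from a UI model rather than a hard-coded map.
    private final Map<String, By> locators = Map.of(
            "bug.summary", By.name("short_desc"),      // hypothetical
            "bug.description", By.name("comment"),     // hypothetical
            "bug.submit", By.id("commit"));            // hypothetical

    By resolve(String logicalName) {
        return locators.get(logicalName);
    }
}

class AbstractBugTest {
    private final WebDriver driver;
    private final UiMapping ui;

    AbstractBugTest(WebDriver driver, UiMapping ui) {
        this.driver = driver;
        this.ui = ui;
    }

    // The test is written against logical names only; if the GUI changes,
    // only the mapping (or the UI model behind it) has to be updated.
    void fileBug(String summary, String description) {
        driver.findElement(ui.resolve("bug.summary")).sendKeys(summary);
        driver.findElement(ui.resolve("bug.description")).sendKeys(description);
        driver.findElement(ui.resolve("bug.submit")).click();
    }
}

The design intent is that a GUI change then touches only the mapping (or the model from which it is derived), not the test logic itself.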

Automatic Test Concretization to Supply End-to-End MBT for Automotive Mechatronic Systems
Jonathan Lasalle, Fabien Peureux, and Jérôme Guillet
(UFC Besançon, France; UHA Mulhouse, France)
This paper presents an effective end-to-end Model-Based Testing approach to validate automotive mechatronic systems. The solution takes as input a UML/OCL model describing the stimuli of the environment that can excite the mechatronic System Under Test. It applies model coverage criteria to automatically generate test cases, and finally takes an offline approach to translate the generated test cases into executable test scripts that can be executed both on a simulation model and on a physical test bench. The mechatronic System Under Test is then tested against a Matlab/Simulink simulation model, which defines the test oracle. This tool-supported and automated approach has been successfully applied to a concrete case study on the validation of a vehicle front-axle unit. This experiment enabled us to validate our approach and showed its effectiveness in the validation process of mechatronic systems.
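One way to picture the dual execution target is a common bench interface with a simulation-backed and a hardware-backed implementation; the sketch below is purely illustrative, with hypothetical class and signal names rather than the paper's tool chain.

// Illustrative sketch only: a common execution interface so that the same
// generated test steps can be replayed either against a simulation model or a
// physical test bench. Interface, class, and signal names are hypothetical.
interface TestBench {
    void applyStimulus(String signal, double value);  // e.g. a steering input
    double observe(String signal);                    // e.g. a wheel angle
}

class SimulationBench implements TestBench {
    @Override public void applyStimulus(String signal, double value) {
        // would forward the stimulus to the simulation model
        System.out.printf("simulation: %s <- %.2f%n", signal, value);
    }
    @Override public double observe(String signal) {
        return 0.0; // placeholder: would read the simulated response
    }
}

class PhysicalBench implements TestBench {
    @Override public void applyStimulus(String signal, double value) {
        // would drive the actuator on the physical test bench
        System.out.printf("bench: %s <- %.2f%n", signal, value);
    }
    @Override public double observe(String signal) {
        return 0.0; // placeholder: would read the sensor on the bench
    }
}

class FrontAxleTestSketch {
    // One generated test case, replayable on either target.
    void run(TestBench target) {
        target.applyStimulus("steeringWheelAngle", 15.0);
        System.out.println("observed: " + target.observe("leftWheelAngle"));
    }

    public static void main(String[] args) {
        FrontAxleTestSketch test = new FrontAxleTestSketch();
        test.run(new SimulationBench());
        test.run(new PhysicalBench());
    }
}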

WATER: Web Application TEst Repair
Shauvik Roy Choudhary, Dan Zhao, Husayn Versee, and Alessandro Orso
(Georgia Tech, USA; Hunan University, China)
Web applications tend to evolve quickly, resulting in errors and failures in the test automation scripts that exercise them. Repairing such scripts to work on the updated application is essential for maintaining the quality of the test suite. Updating the scripts manually is a time-consuming task, which is often difficult and prone to errors if not performed carefully. In this paper, we propose a technique to automatically suggest repairs for such web application test scripts. Our technique is based on differential testing and compares the behavior of a test case on two successive versions of the web application: a first version in which the test script runs successfully and a second version in which the script results in an error or failure. By analyzing the difference between these two executions, our technique suggests repairs that can be applied to fix the scripts. To evaluate our technique, we implemented it in a tool called WATER and ran it on real web applications with test cases. Our experiments show that WATER can suggest meaningful repairs for practical test cases, many of which correspond to those later made by the developers themselves.
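As a simplified illustration of a differential-style repair hint (not the WATER algorithm itself), the following sketch, assuming Selenium WebDriver, checks whether a locator still resolves on the new version and, if not, searches for an element whose visible text matches what the element displayed on the old, passing version.

// Illustrative sketch only: a differential-style repair hint for one common
// breakage, a locator that no longer matches after a page change. The element's
// visible text on the old version is used to search the new version for a
// likely replacement. This is a simplification for illustration, not the
// WATER tool's algorithm. Assumes Selenium WebDriver on the classpath.
import java.util.List;
import java.util.Optional;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

class LocatorRepairHint {

    // oldText: the text the target element had when the script last passed.
    // newVersionDriver: a driver already pointed at the updated page.
    Optional<String> suggestRepair(WebDriver newVersionDriver,
                                   By brokenLocator, String oldText) {
        if (!newVersionDriver.findElements(brokenLocator).isEmpty()) {
            return Optional.empty(); // locator still works, nothing to repair
        }
        // Look on the new version for a link or button whose text matches what
        // the old version's element displayed, and propose it as a replacement.
        List<WebElement> candidates =
                newVersionDriver.findElements(By.xpath("//a | //button"));
        for (WebElement candidate : candidates) {
            if (oldText.equals(candidate.getText().trim())) {
                return Optional.of("replace " + brokenLocator
                        + " with a locator anchored at the element showing \""
                        + oldText + "\"");
            }
        }
        return Optional.empty();
    }
}

The actual technique compares richer information from the two executions; the text match above only stands in for that comparison.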

Test Harness and Script Design Principles for Automated Testing of Non-GUI or Web Based Applications
Tuli Nivas
(Sabre Holdings Inc., USA)
The scarcity of commercially available testing tools that support all native or application-specific message formats, or that cater to non-GUI or non-web-based backend applications, leads teams to create their own customized traffic generators or scripts. The test environment setup may also differ from one system to another: some environments use simulators or mocks to stub out complex software, while others are simply a scaled-down (in terms of number of servers) replica of the production environment. So what factors need to be considered when creating scripts that can be used for native request formats and for non-GUI or non-web-based applications? How do we design a script that is easy to maintain and extend when new test scenarios are added, so that the performance of an application can be accurately assessed? This paper provides (1) general design principles for a test script that can be used to generate traffic for any request format, as well as (2) specific factors to keep in mind when creating a script that will work in a test environment that uses a mock. In addition, the core activities of testing include not only traffic generation but also setting up the environment, verifying that both the hardware and software configurations are accurate before sending traffic, and creating a report at the end of the test. Therefore, the test script needs to be part of a complete harness that accomplishes these tasks. The paper also addresses (3) the design and properties of such a harness. It provides a simple framework that can easily be used to complete an end-to-end testing process: pre-test, traffic-generation, and post-test activities.
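A minimal sketch of such a harness, with hypothetical phase and class names (not the paper's framework), might wrap traffic generation between environment verification and reporting, for example via a fixed template method:

// Illustrative sketch only: the skeleton of a harness that wraps traffic
// generation with pre-test environment checks and post-test reporting, in the
// spirit of the end-to-end process described above. Names are hypothetical.
abstract class TestHarness {

    // Template for one complete test run: pre-test, traffic, post-test.
    public final void run() {
        verifyEnvironment();   // hardware/software configuration checks
        generateTraffic();     // send requests in the application's native format
        report();              // collect results and produce the test report
    }

    protected abstract void verifyEnvironment();
    protected abstract void generateTraffic();
    protected abstract void report();
}

class BackendMessageTest extends TestHarness {
    @Override protected void verifyEnvironment() {
        System.out.println("checking server configuration and mock endpoints...");
    }
    @Override protected void generateTraffic() {
        System.out.println("sending native-format request messages...");
    }
    @Override protected void report() {
        System.out.println("writing response-time and error-rate report...");
    }

    public static void main(String[] args) {
        new BackendMessageTest().run();
    }
}

Keeping the phases behind a fixed run() template is one way to let new traffic scenarios be added without touching the pre-test and post-test plumbing.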

Automated GUI Refactoring and Test Script Repair (Position Paper)
Brett Daniel, Qingzhou Luo, Mehdi Mirzaaghaei, Danny Dig, Darko Marinov, and Mauro Pezzè
(University of Illinois at Urbana-Champaign, USA; University of Lugano, Switzerland)
To improve the overall user experience, graphical user interfaces (GUIs) of successful software systems evolve continuously. While the evolution is beneficial for end users, it creates several problems for developers and testers. Developers need to manually change the GUI code. Testers need to manually inspect and repair test scripts because seemingly simple changes in the GUI often break existing automated GUI test scripts. This is time-consuming and error-prone.
The state-of-the-art tools for automatic GUI test repair use a black-box approach: they try to infer the changes between two GUI versions and then apply these changes to the test scripts. However, inferring these changes is challenging.
We propose a white-box approach where the GUI changes are automated and knowledge about them is reused to repair the test cases appropriately. We use GUI refactorings as a means to encode the evolution of the GUIs. We envision a smart IDE that will record these refactorings precisely as they happen and will use them to change the GUI code and to repair test cases. We illustrate through an example how our approach could work, discuss challenges that should be overcome to turn our vision into reality, and present a research agenda to address these challenges.
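To illustrate the general idea (not the authors' envisioned IDE support), the following sketch records a hypothetical widget-rename refactoring and replays it over the widget references of a simple test-step representation.

// Illustrative sketch only: replaying a recorded GUI refactoring over a test
// script's widget references, in the spirit of the white-box approach outlined
// above. The refactoring record and test-step representation are hypothetical.
import java.util.List;
import java.util.Map;

class RenameWidgetRefactoring {
    final String oldId;
    final String newId;

    RenameWidgetRefactoring(String oldId, String newId) {
        this.oldId = oldId;
        this.newId = newId;
    }

    // A test step that referred to the old widget id is rewritten to the new one.
    Map<String, String> repair(Map<String, String> testStep) {
        if (oldId.equals(testStep.get("widget"))) {
            return Map.of("widget", newId, "action", testStep.get("action"));
        }
        return testStep;
    }
}

class RefactoringReplayDemo {
    public static void main(String[] args) {
        // The IDE records the refactoring as it is performed on the GUI code...
        RenameWidgetRefactoring recorded =
                new RenameWidgetRefactoring("btnSubmit", "btnConfirm");
        // ...and later replays it over the steps of an existing test script.
        List<Map<String, String>> script = List.of(
                Map.of("widget", "btnSubmit", "action", "click"),
                Map.of("widget", "txtName", "action", "type:Alice"));
        script.forEach(step -> System.out.println(recorded.repair(step)));
    }
}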
