
2013 International Workshop on Testing the Cloud (TTC), July 15, 2013, Lugano, Switzerland

TTC 2013 – Proceedings



Title Page

On behalf of the workshop committee for Testing the Cloud (TTC 2013), it is our pleasure to welcome you to Lugano for this event, co-located with the International Symposium on Software Testing and Analysis (ISSTA 2013). The aim of the workshop is to provide a forum for researchers from both academia and industry to present high-quality results in the area of testing the Cloud. All submissions were evaluated by the PC members (typically three reviews per paper). For this first edition of TTC, the acceptance ratio was 50%. All accepted papers are presented in three oral sessions. The program also includes three keynote talks. We are very fortunate that three outstanding researchers and industry practitioners accepted our invitation to serve as keynote speakers: Ivona Brandic (Vienna University of Technology, Austria), Andreas Leitner (Google, Switzerland), and Ross Smith (Microsoft, USA).
We would like to thank the PC members for their hard work in delivering the reviews on time. Their high competence enabled us to prepare a high-quality final program for TTC 2013. We would also like to thank all the authors for their submissions. We wish to express our thanks to the organizers of the main ISSTA 2013 conference for their friendly support during the organization of TTC 2013. Finally, the support of IBM Dublin and the Irish Software Engineering Research Centre (Lero) is gratefully acknowledged.
TTC is the first event dedicated to testing the Cloud. Cloud computing is everywhere and seemingly inevitable: originally a layered abstraction over a heterogeneous environment, it has become the paradigm for large-scale, data-oriented systems. Despite its attractive features, testing its robustness and reliability is a major challenge. The Cloud is an intricate collection of interconnected and virtualised computers, connected services, and complex service-level agreements. From a testing perspective, the Cloud is thus a complex composition of complex systems.
Testing this large, network-based, dynamic composition of computers, virtual machines, servers, services, and SLAs appears particularly difficult. The problem is a perfect example of the shared concerns of academia and product companies, and it covers a broad range of topics, from software development and code analysis to performance monitoring and formal models for system testing.

Academic Session

On the Necessity of Model Checking NoSQL Database Schemas When Building SaaS Applications
Stefanie Scherzinger, Eduardo Cunha de Almeida, Felipe Ickert, and Marcos Didonet Del Fabro
(Regensburg University of Applied Sciences, Germany; UFPR, Brazil)
The design of the NoSQL schema has a direct impact on the scalability of web applications. Especially for developers with little experience in NoSQL stores, the risks inherent in poor schema design can be incalculable. Worse yet, the issues will only manifest once the application has been deployed, and the growing user base causes highly concurrent writes. In this paper, we present a model checking approach to reveal scalability bottlenecks in NoSQL schemas. Our approach draws on formal methods from tree automata theory to perform a conservative static analysis on both the schema and the expected write-behavior of users. We demonstrate the impact of schema-inherent bottlenecks for a popular NoSQL store, and show how concurrent writes can ultimately lead to a considerable share of failed transactions.

Enabling Large-Scale Testing of IaaS Cloud Platforms on the Grid'5000 Testbed
Sébastien Badia, Alexandra Carpen-Amarie, Adrien Lèbre, and Lucas Nussbaum
(INRIA, France; École des Mines de Nantes, France; Université de Lorraine, France)
Almost ten years after its inception, the Grid'5000 platform has become one of the most complete testbeds for designing and evaluating large-scale distributed systems. Initially dedicated to the study of High Performance Computing, the infrastructure has evolved to address wider concerns related to Desktop Computing, the Internet of Services and, more recently, the Cloud Computing paradigm. In this paper, we present the latest mechanisms we designed to enable the automated deployment of the major open-source IaaS cloudkits (i.e., Nimbus, OpenNebula, CloudStack, and OpenStack) on Grid'5000. Providing automatic, isolated and reproducible deployments of cloud environments lets end-users study and compare each solution or simply leverage one of them to perform higher-level cloud experiments (such as investigating Map/Reduce frameworks or applications).


Industrial Session I

Testing a Cloud Application: IBM SmartCloud iNotes: Methodologies and Tools
Michael Lynch, Thomas Cerqueus, and Christina Thorpe
(IBM, Ireland; Lero, Ireland; University College Dublin, Ireland)
IBM SmartCloud is a branded collection of Cloud products and solutions from IBM. It includes Infrastructure as a Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS) offered through public, private and hybrid cloud delivery models. This paper focuses on the software testing process employed for the SmartCloud iNotes SaaS application, providing details of the methodologies and tools developed to streamline testing. The new tools have enabled the testing team to meet the pace of the highly agile development team, enabling a more efficient software development lifecycle. Results indicate that the methodologies and tools used have increased the performance of the testing team: there was a decrease in the number of bugs present in the code (prior to release), and an overall increase in customer satisfaction.


Industrial Session II

Data Science in the Cloud: Analysis of Data from Testing in Production
Robert Musson and Ross Smith
(Microsoft, USA)
As global demographic, workforce, and technological trends alter the landscape of how software services are delivered, a shift towards testing in production and data science is changing the way organizations deliver high-quality experiences. Data science is the ability to find relevant relationships in data in order to make decisions regarding the quality or performance of software. This paper presents the landscape of testing in a global company and advocates a more generalized use of data science for testing the Cloud. It describes the collection and analysis of data and investigates the crucial questions of the profile and management of data scientists.

