
1st International Workshop on Quality-Aware DevOps (QUDOS 2015), September 1, 2015, Bergamo, Italy

QUDOS 2015 – Proceedings


Frontmatter

Title Page


Foreword
It is our great pleasure to welcome you to the first edition of the International Workshop on Quality-aware DevOps (QUDOS 2015), held on September 1, 2015 in Bergamo, Italy, co-located with the 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE 2015).
DevOps has emerged in recent years as a set of principles and practices for bridging the gap between development and operations, thus enabling faster release cycles for complex IT services. Common tools and methods used in DevOps include infrastructure as code, automation through deep modeling of systems, continuous integration, and continuous deployment. To date, software engineering research has explored these problems mainly from a functional perspective, trying to increase the benefits and generality of these methods for end users. However, the definition of methods and tools with which DevOps can assess, predict, and verify quality dimensions has lagged behind.
The QUDOS workshop focuses on how best to define and integrate quality assurance methods and tools in DevOps. Quality here covers a broad set of dimensions, including performance, reliability, safety, survivability, and cost of ownership. To address these challenges, the QUDOS workshop brings together experts from academia and industry working in areas such as quality assurance, agile software engineering, and model-based development. The goal is to identify and disseminate novel quality-aware approaches to DevOps.

Approaches for Quality-Aware DevOps

A DevOps Approach to Integration of Software Components in an EU Research Project
Mark Stillwell and Jose G. F. Coutinho
(Imperial College London, UK)
We present a description of the development and deployment infrastructure being created to support the integration effort of HARNESS, an EU FP7 project. HARNESS is a multi-partner research project intended to bring the power of heterogeneous resources to the cloud. It consists of a number of different services and technologies that interact with the OpenStack cloud computing platform at various levels. Many of these components are being developed independently by different teams at different locations across Europe, and keeping the work fully integrated is a challenge. We use a combination of Vagrant-based virtual machines, Docker containers, and Ansible playbooks to provide a consistent and up-to-date environment to each developer. The same playbooks used to configure local virtual machines are also used to manage a static testbed with heterogeneous compute and storage devices, and to automate ephemeral larger-scale deployments to Grid'5000. Access to internal projects is managed by GitLab, and automated testing of services within Docker-based environments and integrated deployments within virtual machines is provided by Buildbot.

DevOps Meets Formal Modelling in High-Criticality Complex Systems
Marta Olszewska and Marina Waldén
(Åbo Akademi University, Finland)
Quality is the cornerstone of high-criticality systems, since in case of failure not only major financial losses are at stake, but also human lives. Formal methods that support model-based development are one of the methodologies used to achieve correct-by-construction systems. However, these are often heavy-weight and need a dedicated development process. In our work we combine formal and agile software engineering approaches. In particular, we use Event-B and Scrum to assure quality while enabling more rapid and flexible development. Since a successful IT project has further prerequisites, we use DevOps to encompass development, quality assurance, and IT operations. In this paper we show how formal modelling can function within DevOps and thus promote various dimensions of quality and continuous delivery.

Modelling Multi-tier Enterprise Applications Behaviour with Design of Experiments Technique
Tatiana Ustinova and Pooyan Jamshidi
(Imperial College London, UK)
Queueing network models are commonly used for performance modelling. However, throughout the application development stage, analytical models might not be able to continuously reflect performance, for example due to performance bugs or minor changes in the application code that cannot be readily reflected in the queueing model. To cope with this problem, a measurement-based approach adopting the Design of Experiments (DoE) technique is proposed. The applicability of the proposed method is demonstrated on a complex 3-tier e-commerce application that is difficult to model with queueing networks.
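The core DoE idea the abstract refers to can be sketched in a few lines. The factors, levels, and measurements below are hypothetical illustrations, not data from the paper: a 2^2 full-factorial design over two made-up tuning parameters, followed by a standard main-effect analysis.

```python
from itertools import product

# Hypothetical 2-level full-factorial design over two illustrative
# factors: thread-pool size and connection-pool size, coded as
# "low" (-1) and "high" (+1).
levels = {"threads": (-1, +1), "connections": (-1, +1)}
design = list(product(*levels.values()))  # 2^2 = 4 experiment runs

# Assumed measured mean response times (ms), one per run, in design order.
response = [120.0, 95.0, 110.0, 70.0]

def main_effect(col):
    """Main effect of factor `col`: mean response at its high level
    minus mean response at its low level (classic 2^k analysis)."""
    high = [y for run, y in zip(design, response) if run[col] == +1]
    low = [y for run, y in zip(design, response) if run[col] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

effect_threads = main_effect(0)       # -17.5 ms: larger pool helps
effect_connections = main_effect(1)   # -32.5 ms: dominant factor
```

With only 2^k measured runs, such an analysis ranks which configuration parameters most affect a response-time metric, which is the kind of question the measurement-based approach targets.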

A Proactive Approach for Runtime Self-Adaptation Based on Queueing Network Fluid Analysis
Emilio Incerto, Mirco Tribastone, and Catia Trubiani
(Gran Sasso Science Institute, Italy; IMT Institute for Advanced Studies, Italy)
Complex software systems are required to adapt dynamically to changing workloads and scenarios while guaranteeing a set of performance objectives. This is not a trivial task, since run-time variability makes the process of devising the needed resources challenging for software designers. In this context, self-adaptation is a promising technique that works toward the specification of the most suitable system configuration, such that the system behavior is preserved while meeting performance requirements. In this paper we propose a proactive approach based on queueing networks that allows self-adaptation by predicting performance flaws and devising the most suitable system resource allocation. The queueing network model represents the system behavior and embeds the input parameters (e.g., workload) observed at run-time. We rely on fluid approximation to speed up the analysis of transient dynamics for performance indices. To support our approach we developed a tool that automatically generates simulation and fluid analysis code from a high-level description of the queueing network. An illustrative example demonstrates the effectiveness of our approach.
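The fluid approximation mentioned above replaces the stochastic queue dynamics with deterministic ODEs. A minimal sketch of the idea (not the authors' tool) for a closed network of two single-server stations, integrated with forward Euler:

```python
# Illustrative fluid model of a closed queueing network with two
# single-server stations and n_jobs circulating jobs. x[i] is the
# continuous approximation of the queue length at station i; the
# outflow rate of a single-server station is mu_i * min(x_i, 1).
def fluid_transient(n_jobs, mu, dt=0.001, horizon=50.0):
    x = [float(n_jobs), 0.0]  # all jobs start at station 0
    for _ in range(int(horizon / dt)):
        out0 = mu[0] * min(x[0], 1.0)  # departure rate, station 0
        out1 = mu[1] * min(x[1], 1.0)  # departure rate, station 1
        x[0] += dt * (out1 - out0)     # forward-Euler update
        x[1] += dt * (out0 - out1)
    return x

# With mu = (1.0, 2.0), station 0 is the bottleneck: the fluid limit
# leaves mu[0]/mu[1] = 0.5 jobs at station 1 and the rest at station 0.
queue_lengths = fluid_transient(n_jobs=10, mu=(1.0, 2.0))
```

Because the ODE system is integrated rather than simulated event by event, transient performance indices (queue lengths over time, and hence response times) are obtained far faster than by stochastic simulation, which is what makes this analysis usable at run-time.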

Tools for Quality-Aware DevOps

Model-Based Performance Evaluations in Continuous Delivery Pipelines
Markus Dlugi, Andreas Brunnert, and Helmut Krcmar
(fortiss, Germany; TU München, Germany)
To increase the frequency of software releases and to improve their quality, continuous integration (CI) systems have become widely used in recent years. Unfortunately, it is not easy to evaluate the performance of a software release in such systems. One of the main reasons for this difficulty is the frequent lack of a test environment comparable to the production system. Performance models can help in this scenario by eliminating the need for a production-sized environment. Building upon these capabilities of performance models, we introduced a model-based performance change detection process for continuous delivery pipelines in previous work. This work presents an implementation of that process as a plug-in for the CI system Jenkins.
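A performance gate of this kind can be reduced to a simple shape: compare the model-predicted metric of the new release against a baseline and fail the build on regression. The rule below is a deliberately simplified stand-in, not the plug-in's actual detection algorithm:

```python
# Hypothetical model-based performance gate as it might run in a CI
# step: a relative-change threshold on predicted mean response time.
# The detection rule and the numbers are illustrative assumptions.
def performance_gate(baseline_ms, predicted_ms, tolerance=0.10):
    """Fail the build if the model-predicted mean response time of the
    new release regresses by more than `tolerance` vs. the baseline."""
    change = (predicted_ms - baseline_ms) / baseline_ms
    return {"relative_change": change, "passed": change <= tolerance}

# Example: the model predicts 230 ms against a 200 ms baseline,
# a 15% regression, so the gate reports failure.
result = performance_gate(baseline_ms=200.0, predicted_ms=230.0)
```

The key property exploited here is that `predicted_ms` comes from a performance model rather than from load tests on a production-sized environment, so the check can run on every commit.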

Continuous Deployment of Multi-cloud Systems
Nicolas Ferry, Franck Chauvel, Hui Song, and Arnor Solberg
(SINTEF, Norway)
In this paper we present our mechanism and tooling for the continuous deployment and resource provisioning of multi-cloud applications. In order to facilitate collaboration between development and operation teams as promoted in the DevOps movement, our deployment and resource provisioning engine is based on the Models@Runtime principles. This enables applying the same concepts and language (i.e., CloudML) for deployment and resource provisioning at development- and operation-time.

SPACE4Cloud: A DevOps Environment for Multi-cloud Applications
Michele Guerriero, Michele Ciavotta, Giovanni Paolo Gibilisco, and Danilo Ardagna
(Politecnico di Milano, Italy)
Cloud computing has been a game changer in the design, development and management of modern applications, which have grown in scope and size, becoming distributed and service oriented. New methodologies have emerged to deal with this paradigm shift in software engineering. Consequently, new tools, devoted to easing the convergence between developers and other IT professionals, are required. Here, we present SPACE4Cloud, a DevOps integrated environment for model-driven design-time QoS assessment and optimization, and runtime capacity allocation for Cloud applications.

Filling the Gap: A Tool to Automate Parameter Estimation for Software Performance Models
Weikun Wang, Juan F. Pérez, and Giuliano Casale
(Imperial College London, UK)
Software performance engineering heavily relies on application and resource models that enable the prediction of Quality-of-Service metrics. Critical to these models is the accuracy of their parameters, the value of which can change with the application and the resources where it is deployed. In this paper we introduce the Filling-the-gap (FG) tool, which automates the parameter estimation of application performance models. This tool implements a set of statistical routines to estimate the parameters of performance models, which are automatically executed using monitoring information kept in a local database.
