ISSTA 2016
25th International Symposium on Software Testing and Analysis (ISSTA)

2nd International Workshop on Quality-Aware DevOps (QUDOS 2016), July 21, 2016, Saarbrücken, Germany

QUDOS 2016 – Proceedings





Message from the Chairs
It is our great pleasure to welcome you to the second edition of the International Workshop on Quality-aware DevOps (QUDOS 2016), held on July 21, 2016 in Saarbrücken, Germany, co-located with the International Symposium on Software Testing and Analysis 2016 (ISSTA '16).

Approaches for Quality-Aware DevOps

Coverage-Based Metrics for Cloud Adaptation
Yonit Magid, Rachel Tzoref-Brill, and Marcel Zalmanovici
(IBM Research, Israel)
This work introduces novel combinatorial coverage-based metrics for deciding on automated Cloud infrastructure adaptation. Our approach uses a Combinatorial Testing engine, traditionally applied during the development phase, to measure the load behavior of a system in production. We determine how much the load behavior measured at runtime differs from the behavior observed during testing. We further estimate the risk of encountering untested behavior in the current configuration of the system, as well as when transitioning to a new Cloud configuration through possible adaptation actions such as migration and scale-out. Based on our risk assessment, a Cloud adaptation engine may then decide on an adaptation action that moves the system to a configuration with a lower associated risk.
Our work is part of a larger project that deals with automated Cloud infrastructure adaptation. We introduce the overall approach for automated adaptation, as well as our coverage-based metrics for risk assessment and the algorithms to calculate them. We demonstrate our metrics on an example setting consisting of two sub-components with multiple instances, comprising a typical installation of a telephony application.
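As a rough illustration of the idea behind such metrics (a minimal sketch, not the authors' implementation; all names are hypothetical), one can compare the set of 2-way parameter-value interactions exercised during testing with those observed at runtime, and treat the fraction of runtime interactions never covered in testing as a risk signal:

```python
from itertools import combinations

def pairwise_combos(config):
    """All strength-2 interactions (pairs of parameter-value settings) in one configuration."""
    items = sorted(config.items())
    return {frozenset(p) for p in combinations(items, 2)}

def untested_fraction(tested_configs, runtime_configs):
    """Fraction of 2-way interactions seen at runtime that were never covered in testing."""
    tested = set()
    for c in tested_configs:
        tested |= pairwise_combos(c)
    runtime = set()
    for c in runtime_configs:
        runtime |= pairwise_combos(c)
    if not runtime:
        return 0.0
    return len(runtime - tested) / len(runtime)

# Hypothetical configurations: two tested, one observed in production.
tested = [{"vm": "small", "load": "low"}, {"vm": "large", "load": "high"}]
runtime = [{"vm": "small", "load": "high"}]
risk = untested_fraction(tested, runtime)  # the (small, high) pairing was never tested
```

A real engine would track higher interaction strengths and weight interactions by frequency, but the comparison of covered versus observed combinations is the core of coverage-based risk assessment.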

Management Challenges for DevOps Adoption within UK SMEs
Stephen Jones, Joost Noppen, and Fiona Lettice
(University of East Anglia, UK)
The DevOps phenomenon is gathering pace as more UK organisations seek to leverage the benefits it can potentially bring to software engineering functions. However, substantial organisational change is inherent to adopting DevOps, especially where prior, established methods exist. As part of a wider piece of doctoral research investigating the management challenges of DevOps adoption, we present early findings of a six-month qualitative diary study following the adoption of DevOps within a UK-based SME with over 200 employees. We find that within our case study organisation, the DevOps approach is being adopted for the development of a new system used both internally and by customers. Conceptually, DevOps appears to be generally well regarded, but in reality it is proving difficult to fully adopt. This difficulty stems from a combination of the need to maintain a legacy system, a lack of senior management buy-in, managerial structure, and resistance to change. Additionally, we are finding evidence of job crafting, especially among the software developers. Taken together, we argue that DevOps is an interdisciplinary topic which would greatly benefit from further management-oriented, and potentially psychology-oriented, research attention.

A Software Architecture Framework for Quality-Aware DevOps
Elisabetta Di Nitto, Pooyan Jamshidi, Michele Guerriero, Ilias Spais, and Damian A. Tamburri
(Politecnico di Milano, Italy; Imperial College, UK; ATC Athens, Greece)
DevOps is an emerging software engineering strategy entailing the joint efforts of development and operations people, their concerns, and their best practices, with the purpose of forming a coherent working group that increases software development and operations speed. To allow software architecture practitioners to enrich and properly elaborate their architecture specifications in a manner consistent with DevOps, we surveyed a number of DevOps stakeholders. We studied the concerns and challenges to be tackled in preparing a software architecture that is DevOps-ready, i.e., described in all the detail needed to enact DevOps scenarios. Subsequently, we introduce SQUID, which stands for Specification Quality In DevOps. SQUID is a software architecture framework that supports the model-based documentation of software architectures and their quality properties in DevOps scenarios, with the goal of providing DevOps-ready software architecture descriptions. We illustrate our framework in a case study in the Big Data domain.

Towards a UML Profile for Data Intensive Applications
Abel Gómez, José Merseguer, Elisabetta Di Nitto, and Damian A. Tamburri
(Universidad de Zaragoza, Spain; Politecnico di Milano, Italy)
Data intensive applications that leverage Big Data technologies are rapidly gaining market traction. However, their design and quality assurance are far from satisfying software engineers' needs. In fact, a CapGemini study shows that only 13% of organizations have achieved full-scale production for their Big Data implementations. We aim at an early design and quality evaluation of data intensive applications, our goal being to help software engineers assess quality metrics, such as the response time of the application. We address this goal by means of a quality analysis tool-chain. At the core of the tool, we are developing a Profile that turns the Unified Modeling Language into a domain-specific modeling language for the quality evaluation of data intensive applications.

A Systematic Approach for Performance Evaluation using Process Mining: The POSIDONIA Operations Case Study
Simona Bernardi, José Ignacio Requeno, Christophe Joubert, and Alberto Romeu
(Centro Universitario de la Defensa, Spain; Universidad de Zaragoza, Spain; Prodevelop, Spain)
Modelling plays an important role in the development of software applications, in particular for the assessment of non-functional requirements such as performance. The value of a model depends on its level of alignment with reality. In this paper, we propose a systematic approach to obtain a performance model that is a good representation of the system under analysis. From a UML-based system design we automatically derive a normative Petri net model, which formally represents the system's supposed behaviour, by applying model-to-model (M2M) transformation techniques. Then, a conformance checking technique is iteratively applied to align, from the qualitative point of view, the normative model and the data log until the required fitness threshold is reached. Finally, a trace-driven simulation technique is used to enrich the aligned model with timing specifications from the data log, thus obtaining the performance Generalized Stochastic Petri Net (GSPN) model. The proposed approach has been applied to a customizable Integrated Port Operations Management System, POSIDONIA Operations, where the performance model has been used to analyse the scalability of the product under different deployment configurations.

The M³ (Measure-Measure-Model) Tool-Chain for Performance Prediction of Multi-tier Applications
Devidas Gawali and Varsha Apte
(IIT Bombay, India)
Performance prediction of multi-tier applications is a critical step in the life-cycle of an application. However, the target hardware platform on which performance prediction is required is often different from the testbed on which the application's performance can be measured, and is usually unavailable for deployment and load testing of the application. In this paper, we present M³, our Measure-Measure-Model method, which uses a pipeline of three tools to solve this problem. The tool-chain starts with AutoPerf, which measures the CPU service demands of the application on the testbed. CloneGen then takes this, together with the number and size of network calls, as input and generates a clone whose CPU service demand matches the application's. This clone is then deployed on the target instead of the original application, since its code is simple, does not need a full database, and is thus easier to install. AutoPerf is used again to measure the CPU service demand of the clone on the target, under light load generation. Finally, this service demand is fed into PerfCenter, a multi-tier application performance modeling tool, which can then predict the application's performance on the target under any workload. We validated the predictions made using the M³ tool-chain against direct measurements made on two applications, DellDVD and RUBiS, on various combinations of testbed and target platforms (Intel and AMD servers) and found that in almost all cases the prediction error was less than 20%.


Tools for Quality-Aware DevOps

DICE Fault Injection Tool
Craig Sheridan, Darren Whigham, and Matej Artač
(Flexiant, UK; XLAB, Slovenia)
In this paper, we describe the motivation, innovation, design, running example, and future development of a Fault Injection Tool (FIT). This tool enables the controlled injection of cloud platform issues such as resource stress and service or VM outages, the purpose being to observe the subsequent effect on deployed applications. It is designed for use in a DevOps workflow, for tighter correlation between application design and cloud operation, although it is not limited to this usage, and helps improve resiliency for data intensive applications by bringing together fault tolerance, stress testing, and benchmarking in a single tool.

Datalution: A Tool for Continuous Schema Evolution in NoSQL-Backed Web Applications
Stefanie Scherzinger, Stephanie Sombach, Katharina Wiech, Meike Klettke, and Uta Störl
(Regensburg University of Applied Sciences, Germany; University of Rostock, Germany; Darmstadt University of Applied Sciences, Germany)
When an incremental release of a web application is deployed, the structure of data already persisted in the production database may no longer match what the application code expects. Traditionally, eager schema migration is called for, where all legacy data is migrated in one go. With the growing popularity of schema-flexible NoSQL data stores, lazy forms of data migration have emerged: legacy entities are migrated on the fly, one at a time, when they are loaded by the application. In this demo, we present Datalution, a tool demonstrating the merits of lazy data migration. Datalution can apply chains of pending schema changes, thanks to its Datalog-based internal representation. The Datalution approach thus ensures that schema evolution, as part of continuous deployment, is carried out correctly.
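The lazy-migration idea can be sketched in a few lines (a simplified illustration under assumed names, not Datalution's Datalog-based implementation): each entity records which schema version it is at, and any still-pending changes in the chain are applied the moment the application loads it.

```python
# Each pending schema change is a function that rewrites one entity in place.
def rename_field(old, new):
    def migrate(entity):
        if old in entity:
            entity[new] = entity.pop(old)
        return entity
    return migrate

def add_field(name, default):
    def migrate(entity):
        entity.setdefault(name, default)
        return entity
    return migrate

class LazyStore:
    """Entities are migrated one at a time, when the application loads them."""
    def __init__(self, data, migrations):
        self.data = data              # entity id -> dict, possibly at an old schema version
        self.migrations = migrations  # ordered chain of pending schema changes

    def load(self, key):
        entity = self.data[key]
        version = entity.get("_v", 0)
        for change in self.migrations[version:]:   # apply only what is still pending
            entity = change(entity)
        entity["_v"] = len(self.migrations)        # entity is now up to date
        self.data[key] = entity                    # write the migrated entity back
        return entity

store = LazyStore(
    {"u1": {"login": "ada"}},                      # legacy entity, schema version 0
    [rename_field("login", "username"), add_field("active", True)],
)
user = store.load("u1")                            # whole chain applied on first load
```

Because the version marker records how far each entity has been migrated, entities touched under different releases can coexist, and a second load applies nothing.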

Model-Driven Continuous Deployment for Quality DevOps
Matej Artač, Tadej Borovšak, Elisabetta Di Nitto, Michele Guerriero, and Damian A. Tamburri
(XLAB, Slovenia; Politecnico di Milano, Italy)
DevOps entails a series of software engineering strategies and tools that promise to deliver quality and speed at the same time, with little or no additional expense. In our work we strove to enable a DevOps way of working, combining Model-Driven Engineering tenets with the challenges of delivering a model-driven continuous deployment tool that allows quick (re-)deployment of cloud applications for the purpose of continuous improvement. This paper illustrates the DICER tool and elaborates on how it can deliver on the DevOps promise and enable quality-awareness.

PET: Continuous Performance Evaluation Tool
Johannes Kroß, Felix Willnecker, Thomas Zwickl, and Helmut Krcmar
(fortiss, Germany; TU Munich, Germany)
Performance measurements and simulations produce large amounts of data in a short period of time. Release cycles are getting shorter due to the DevOps movement and rely heavily on live data from production or test environments. In addition, performance simulations are becoming increasingly accurate, approaching exact predictions. Results from these simulations are reliable and can be compared with live data to detect deviations from expected behavior. In this work, we present a comprehensive tool that can quickly process and analyze both measurement and simulation data by utilizing big data technologies. Live measurement data and simulation results can be analyzed to detect performance problems and deviations from expected behavior, or simply to compare a performance model with real-world applications.

A Tool for Verification of Big-Data Applications
Marcello M. Bersani, Francesco Marconi, Matteo Rossi, and Madalina Erascu
(Politecnico di Milano, Italy; Institute e-Austria Timisoara, Romania; West University of Timisoara, Romania)
Quality-driven frameworks for developing data-intensive applications are becoming more and more popular, following the remarkable popularity of Big Data approaches. The DICE framework, designed within the DICE project, has the goal of offering a novel profile and tools for data-aware, quality-driven development. One of its tools is the DICE Verification Tool (D-VerT), which allows designers to evaluate their designs against safety properties, such as the reachability of undesired configurations of the system. This paper describes the first version of D-VerT, which is available open source.

TemPerf: Temporal Correlation between Performance Metrics and Source Code
Jürgen Cito, Genc Mazlami, and Philipp Leitner
(University of Zurich, Switzerland)
Today's rapidly evolving software systems continuously introduce new changes that can potentially degrade performance. Large-scale load testing prior to deployment is supposed to avoid performance regressions in production. However, due to the large input space in parameterized load testing, not all performance regressions can be prevented in practice. To support developers in identifying the change sets that had an impact on performance, we present TemPerf, a tool that correlates performance regressions with change sets by exploiting temporal constraints. It is implemented as an Eclipse IDE plugin that allows developers to visualize performance developments over time and display temporally correlated change sets retrieved from version control and continuous integration platforms.
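The temporal-constraint idea can be illustrated with a minimal sketch (assumed names and window size, not TemPerf's actual implementation): given the timestamp at which a regression was observed, the candidate change sets are those committed within a fixed window before it.

```python
from datetime import datetime, timedelta

def correlated_changes(regression_time, commits, window_hours=24):
    """Return commits whose timestamp falls within `window_hours` before the regression."""
    window = timedelta(hours=window_hours)
    return [c for c in commits
            if regression_time - window <= c["time"] <= regression_time]

# Hypothetical commit history and observed regression time.
commits = [
    {"sha": "a1", "time": datetime(2016, 7, 20, 9, 0)},
    {"sha": "b2", "time": datetime(2016, 7, 21, 8, 0)},
]
regression = datetime(2016, 7, 21, 10, 0)
suspects = correlated_changes(regression, commits)  # only "b2" lies inside the window
```

In practice the window would be anchored to deployment times reported by the continuous integration platform rather than a fixed number of hours, but the filtering principle is the same.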

