Workshop QUDOS 2015 – Author Index

Ardagna, Danilo
Michele Guerriero, Michele Ciavotta, Giovanni Paolo Gibilisco, and Danilo Ardagna (Politecnico di Milano, Italy)
Cloud computing has been a game changer in the design, development, and management of modern applications, which have grown in scope and size, becoming distributed and service-oriented. New methodologies have emerged to deal with this paradigm shift in software engineering; consequently, new tools devoted to easing the convergence between developers and other IT professionals are required. Here we present SPACE4Cloud, a DevOps-integrated environment for model-driven design-time QoS assessment and optimization, and runtime capacity allocation for Cloud applications.

Brunnert, Andreas
Markus Dlugi, Andreas Brunnert, and Helmut Krcmar (fortiss, Germany; TU München, Germany)
To increase the frequency of software releases and to improve their quality, continuous integration (CI) systems have become widely used in recent years. Unfortunately, it is not easy to evaluate the performance of a software release in such systems; one of the main reasons is often the lack of a test environment comparable to the production system. Performance models can help in this scenario by eliminating the need for a production-sized environment. Building on these capabilities of performance models, we introduced a model-based performance change detection process for continuous delivery pipelines in previous work. This work presents an implementation of that process as a plug-in for the CI system Jenkins.
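To make the idea of a model-based performance gate in a CI pipeline concrete, here is a minimal Python sketch (all function names, file formats, and the 10% threshold are illustrative assumptions, not the actual plug-in's API) that compares model-predicted response times of a new release against a stored baseline and fails the build on a regression.

```python
# Hypothetical sketch of a model-based performance gate for a CI job.
# predict_response_times() stands in for solving the release's performance
# model; the baseline file and the 10% threshold are illustrative assumptions.
import json
import sys

REGRESSION_THRESHOLD = 0.10  # fail if a predicted response time grows by >10%

def predict_response_times(model_path):
    """Placeholder: solve the performance model and return predicted
    mean response times per transaction (seconds)."""
    with open(model_path) as f:
        return json.load(f)  # e.g. {"login": 0.12, "checkout": 0.45}

def main(new_model, baseline_file):
    predicted = predict_response_times(new_model)
    with open(baseline_file) as f:
        baseline = json.load(f)

    regressions = {
        tx: (predicted[tx], baseline[tx])
        for tx in baseline
        if tx in predicted
        and predicted[tx] > baseline[tx] * (1 + REGRESSION_THRESHOLD)
    }
    if regressions:
        for tx, (new, old) in regressions.items():
            print(f"Regression predicted for '{tx}': {old:.3f}s -> {new:.3f}s")
        sys.exit(1)  # a non-zero exit marks the CI build step as failed
    print("No performance regressions predicted.")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```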

Casale, Giuliano
Weikun Wang, Juan F. Pérez, and Giuliano Casale (Imperial College London, UK)
Software performance engineering relies heavily on application and resource models that enable the prediction of Quality-of-Service metrics. Critical to these models is the accuracy of their parameters, whose values can change with the application and the resources on which it is deployed. In this paper we introduce the Filling-the-gap (FG) tool, which automates the parameter estimation of application performance models. The tool implements a set of statistical routines to estimate the parameters of performance models; these routines are executed automatically using monitoring information kept in a local database.
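One standard statistical routine of the kind the abstract refers to is demand estimation via the utilization law, U ≈ Σk Dk·Xk. The Python sketch below regresses measured utilization on per-class throughputs with non-negative least squares; the sample data is invented and this is a generic illustration, not the FG tool's implementation.

```python
# Generic sketch: estimate per-class service demands D_k from monitoring
# samples of throughput X_k and utilization U, using the utilization law
# U ~= sum_k D_k * X_k. The sample data below is invented for illustration.
import numpy as np
from scipy.optimize import nnls

# Each row: throughputs (req/s) of two request classes in one sample window.
X = np.array([
    [12.0,  3.0],
    [20.0,  5.0],
    [ 8.0, 10.0],
    [15.0,  7.0],
])
# Measured CPU utilization (0..1) in the same windows.
U = np.array([0.36, 0.60, 0.44, 0.53])

# Non-negative least squares keeps the estimated demands physically meaningful.
demands, residual = nnls(X, U)
print("Estimated service demands (s/request):", demands)
```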

Chauvel, Franck
Nicolas Ferry, Franck Chauvel, Hui Song, and Arnor Solberg (SINTEF, Norway)
In this paper we present our mechanism and tooling for the continuous deployment and resource provisioning of multi-cloud applications. To facilitate collaboration between development and operations teams, as promoted by the DevOps movement, our deployment and resource provisioning engine is based on the Models@Runtime principles. This makes it possible to apply the same concepts and language (i.e., CloudML) for deployment and resource provisioning at both development time and operation time.
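As a loose illustration of the Models@Runtime principle (independent of CloudML's actual syntax and API), the following Python sketch keeps a model of the running deployment, compares it with a target model, and derives the provisioning actions needed to reconcile the two; all component names and fields are hypothetical.

```python
# Hypothetical sketch of the models@runtime reconciliation step: compare the
# target deployment model with the model of what is currently running and
# derive provisioning actions. Component names and fields are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class VM:
    name: str
    provider: str
    size: str

def plan(current: set[VM], target: set[VM]) -> list[str]:
    """Return the provisioning actions turning `current` into `target`."""
    actions = [f"provision {vm.name} ({vm.size}) on {vm.provider}"
               for vm in sorted(target - current, key=lambda v: v.name)]
    actions += [f"terminate {vm.name} on {vm.provider}"
                for vm in sorted(current - target, key=lambda v: v.name)]
    return actions

current = {VM("frontend", "aws", "m3.medium")}
target = {VM("frontend", "aws", "m3.medium"), VM("worker", "flexiant", "large")}
print(plan(current, target))  # -> ['provision worker (large) on flexiant']
```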

Ciavotta, Michele
Michele Guerriero, Michele Ciavotta, Giovanni Paolo Gibilisco, and Danilo Ardagna (Politecnico di Milano, Italy)
Cloud computing has been a game changer in the design, development, and management of modern applications, which have grown in scope and size, becoming distributed and service-oriented. New methodologies have emerged to deal with this paradigm shift in software engineering; consequently, new tools devoted to easing the convergence between developers and other IT professionals are required. Here we present SPACE4Cloud, a DevOps-integrated environment for model-driven design-time QoS assessment and optimization, and runtime capacity allocation for Cloud applications.

Coutinho, Jose G. F.
Mark Stillwell and Jose G. F. Coutinho (Imperial College London, UK)
We describe the development and deployment infrastructure being created to support the integration effort of HARNESS, an EU FP7 project. HARNESS is a multi-partner research project intended to bring the power of heterogeneous resources to the cloud. It consists of a number of different services and technologies that interact with the OpenStack cloud computing platform at various levels. Many of these components are being developed independently by different teams at different locations across Europe, and keeping the work fully integrated is a challenge. We use a combination of Vagrant-based virtual machines, Docker containers, and Ansible playbooks to provide a consistent and up-to-date environment to each developer. The same playbooks used to configure local virtual machines are also used to manage a static testbed with heterogeneous compute and storage devices, and to automate ephemeral larger-scale deployments to Grid'5000. Access to internal projects is managed by GitLab, and automated testing of services within Docker-based environments and of integrated deployments within virtual machines is provided by Buildbot.
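A plausible shape for such a workflow, sketched here with assumed file names rather than the project's actual tooling, is a thin wrapper that drives the same Ansible playbook against either a local Vagrant inventory or the static testbed inventory, so both environments are configured identically.

```python
# Illustrative wrapper (assumed file names: site.yml and the two inventory
# files) showing how one playbook can target both a local Vagrant VM and a
# remote testbed, keeping the environments consistent.
import subprocess
import sys

PLAYBOOK = "site.yml"
INVENTORIES = {
    "local":   "inventories/vagrant.ini",   # hosts created by `vagrant up`
    "testbed": "inventories/testbed.ini",   # static heterogeneous machines
}

def provision(env: str) -> None:
    if env == "local":
        subprocess.run(["vagrant", "up"], check=True)   # boot local VMs first
    subprocess.run(
        ["ansible-playbook", "-i", INVENTORIES[env], PLAYBOOK],
        check=True,
    )

if __name__ == "__main__":
    provision(sys.argv[1] if len(sys.argv) > 1 else "local")
```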

Dlugi, Markus
Markus Dlugi, Andreas Brunnert, and Helmut Krcmar (fortiss, Germany; TU München, Germany)
To increase the frequency of software releases and to improve their quality, continuous integration (CI) systems have become widely used in recent years. Unfortunately, it is not easy to evaluate the performance of a software release in such systems; one of the main reasons is often the lack of a test environment comparable to the production system. Performance models can help in this scenario by eliminating the need for a production-sized environment. Building on these capabilities of performance models, we introduced a model-based performance change detection process for continuous delivery pipelines in previous work. This work presents an implementation of that process as a plug-in for the CI system Jenkins.

Ferry, Nicolas
Nicolas Ferry, Franck Chauvel, Hui Song, and Arnor Solberg (SINTEF, Norway)
In this paper we present our mechanism and tooling for the continuous deployment and resource provisioning of multi-cloud applications. To facilitate collaboration between development and operations teams, as promoted by the DevOps movement, our deployment and resource provisioning engine is based on the Models@Runtime principles. This makes it possible to apply the same concepts and language (i.e., CloudML) for deployment and resource provisioning at both development time and operation time.

Gibilisco, Giovanni Paolo
Michele Guerriero, Michele Ciavotta, Giovanni Paolo Gibilisco, and Danilo Ardagna (Politecnico di Milano, Italy)
Cloud computing has been a game changer in the design, development, and management of modern applications, which have grown in scope and size, becoming distributed and service-oriented. New methodologies have emerged to deal with this paradigm shift in software engineering; consequently, new tools devoted to easing the convergence between developers and other IT professionals are required. Here we present SPACE4Cloud, a DevOps-integrated environment for model-driven design-time QoS assessment and optimization, and runtime capacity allocation for Cloud applications.

Guerriero, Michele
Michele Guerriero, Michele Ciavotta, Giovanni Paolo Gibilisco, and Danilo Ardagna (Politecnico di Milano, Italy)
Cloud computing has been a game changer in the design, development, and management of modern applications, which have grown in scope and size, becoming distributed and service-oriented. New methodologies have emerged to deal with this paradigm shift in software engineering; consequently, new tools devoted to easing the convergence between developers and other IT professionals are required. Here we present SPACE4Cloud, a DevOps-integrated environment for model-driven design-time QoS assessment and optimization, and runtime capacity allocation for Cloud applications.

Incerto, Emilio
Emilio Incerto, Mirco Tribastone, and Catia Trubiani (Gran Sasso Science Institute, Italy; IMT Institute for Advanced Studies, Italy)
Complex software systems are required to adapt dynamically to changing workloads and scenarios while guaranteeing a set of performance objectives. This is not a trivial task, since run-time variability makes it challenging for software designers to devise the needed resources. In this context, self-adaptation is a promising technique that works towards the specification of the most suitable system configuration, such that the system behavior is preserved while performance requirements are met. In this paper we propose a proactive approach based on queueing networks that enables self-adaptation by predicting performance flaws and devising the most suitable allocation of system resources. The queueing network model represents the system behavior and embeds the input parameters (e.g., the workload) observed at run-time. We rely on fluid approximation to speed up the analysis of the transient dynamics of performance indices. To support our approach we developed a tool that automatically generates simulation and fluid-analysis code from a high-level description of the queueing network. An illustrative example demonstrates the effectiveness of our approach.
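The fluid-approximation idea can be illustrated on a toy model (this is not the paper's generated code): the Python sketch below integrates the ODEs of a closed network in which jobs cycle between a delay station and a multi-server queueing station, yielding transient queue-length and throughput estimates; the rates and population are invented.

```python
# Toy fluid approximation of a closed queueing network: N jobs cycle between
# a delay ("think") station and a queueing station with c servers. The ODEs
# track the expected number of jobs at each station; parameters are invented.
import numpy as np
from scipy.integrate import solve_ivp

N = 50             # total jobs in the closed network
think_rate = 1.0   # rate at which a thinking job submits a request (1/s)
mu = 4.0           # service rate of each server at the queueing station (1/s)
c = 10             # number of servers at the queueing station

def fluid_ode(t, x):
    x_think, x_queue = x
    arrivals = think_rate * x_think       # flow from the delay station to the queue
    completions = mu * min(x_queue, c)    # flow from the queue back to the delay station
    return [completions - arrivals, arrivals - completions]

sol = solve_ivp(fluid_ode, (0.0, 20.0), [N, 0.0])
x_think, x_queue = sol.y[:, -1]
print(f"queue length ~ {x_queue:.2f}, throughput ~ {mu * min(x_queue, c):.2f} req/s")
```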

Jamshidi, Pooyan
Tatiana Ustinova and Pooyan Jamshidi (Imperial College London, UK)
Queueing network models are commonly used for performance modelling. However, during the application development stage, analytical models may not be able to continuously reflect performance, for example due to performance bugs or minor changes in the application code that cannot readily be reflected in the queueing model. To cope with this problem, a measurement-based approach adopting the Design of Experiments (DoE) technique is proposed. The applicability of the proposed method is demonstrated on a complex 3-tier e-commerce application that is difficult to model with queueing networks.
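As a minimal, self-contained illustration of the Design of Experiments technique (with invented factors and response values, not the paper's actual experiments), the Python sketch below runs a 2³ full-factorial design and estimates the main effect of each factor on mean response time.

```python
# Minimal 2^3 full-factorial design with main-effect estimation. Factors and
# the measured responses are invented; in practice each run would drive a
# load test against the application and record the observed response time.
from itertools import product
import numpy as np

factors = ["thread_pool", "db_conn_pool", "cache_enabled"]
levels = [-1, +1]                      # coded low/high levels

design = np.array(list(product(levels, repeat=len(factors))))
# One (invented) mean response time in ms per design point, in design order.
response = np.array([210, 185, 190, 160, 140, 130, 135, 118])

for i, name in enumerate(factors):
    high = response[design[:, i] == +1].mean()
    low = response[design[:, i] == -1].mean()
    print(f"main effect of {name}: {high - low:+.1f} ms")
```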

Krcmar, Helmut
Markus Dlugi, Andreas Brunnert, and Helmut Krcmar (fortiss, Germany; TU München, Germany)
To increase the frequency of software releases and to improve their quality, continuous integration (CI) systems have become widely used in recent years. Unfortunately, it is not easy to evaluate the performance of a software release in such systems; one of the main reasons is often the lack of a test environment comparable to the production system. Performance models can help in this scenario by eliminating the need for a production-sized environment. Building on these capabilities of performance models, we introduced a model-based performance change detection process for continuous delivery pipelines in previous work. This work presents an implementation of that process as a plug-in for the CI system Jenkins.

Olszewska, Marta
Marta Olszewska and Marina Waldén (Åbo Akademi University, Finland)
Quality is the cornerstone of high-criticality systems, since in case of failure not only major financial losses are at stake, but also human lives. Formal methods that support model-based development are one of the methodologies used to achieve correct-by-construction systems. However, they are often heavyweight and require a dedicated development process. In our work we combine formal and agile software engineering approaches; in particular, we use Event-B and Scrum to assure quality and to enable more rapid and flexible development. Since we identified further prerequisites for a successful IT project, we use DevOps to embrace development, quality assurance, and IT operations. In this paper we show how formal modelling can function within DevOps and thus promote various dimensions of quality and continuous delivery.

Pérez, Juan F.
Weikun Wang, Juan F. Pérez, and Giuliano Casale (Imperial College London, UK)
Software performance engineering relies heavily on application and resource models that enable the prediction of Quality-of-Service metrics. Critical to these models is the accuracy of their parameters, whose values can change with the application and the resources on which it is deployed. In this paper we introduce the Filling-the-gap (FG) tool, which automates the parameter estimation of application performance models. The tool implements a set of statistical routines to estimate the parameters of performance models; these routines are executed automatically using monitoring information kept in a local database.

Solberg, Arnor
Nicolas Ferry, Franck Chauvel, Hui Song, and Arnor Solberg (SINTEF, Norway)
In this paper we present our mechanism and tooling for the continuous deployment and resource provisioning of multi-cloud applications. To facilitate collaboration between development and operations teams, as promoted by the DevOps movement, our deployment and resource provisioning engine is based on the Models@Runtime principles. This makes it possible to apply the same concepts and language (i.e., CloudML) for deployment and resource provisioning at both development time and operation time.

Song, Hui
Nicolas Ferry, Franck Chauvel, Hui Song, and Arnor Solberg (SINTEF, Norway)
In this paper we present our mechanism and tooling for the continuous deployment and resource provisioning of multi-cloud applications. To facilitate collaboration between development and operations teams, as promoted by the DevOps movement, our deployment and resource provisioning engine is based on the Models@Runtime principles. This makes it possible to apply the same concepts and language (i.e., CloudML) for deployment and resource provisioning at both development time and operation time.

Stillwell, Mark
Mark Stillwell and Jose G. F. Coutinho (Imperial College London, UK)
We describe the development and deployment infrastructure being created to support the integration effort of HARNESS, an EU FP7 project. HARNESS is a multi-partner research project intended to bring the power of heterogeneous resources to the cloud. It consists of a number of different services and technologies that interact with the OpenStack cloud computing platform at various levels. Many of these components are being developed independently by different teams at different locations across Europe, and keeping the work fully integrated is a challenge. We use a combination of Vagrant-based virtual machines, Docker containers, and Ansible playbooks to provide a consistent and up-to-date environment to each developer. The same playbooks used to configure local virtual machines are also used to manage a static testbed with heterogeneous compute and storage devices, and to automate ephemeral larger-scale deployments to Grid'5000. Access to internal projects is managed by GitLab, and automated testing of services within Docker-based environments and of integrated deployments within virtual machines is provided by Buildbot.

Tribastone, Mirco
Emilio Incerto, Mirco Tribastone, and Catia Trubiani (Gran Sasso Science Institute, Italy; IMT Institute for Advanced Studies, Italy)
Complex software systems are required to adapt dynamically to changing workloads and scenarios while guaranteeing a set of performance objectives. This is not a trivial task, since run-time variability makes it challenging for software designers to devise the needed resources. In this context, self-adaptation is a promising technique that works towards the specification of the most suitable system configuration, such that the system behavior is preserved while performance requirements are met. In this paper we propose a proactive approach based on queueing networks that enables self-adaptation by predicting performance flaws and devising the most suitable allocation of system resources. The queueing network model represents the system behavior and embeds the input parameters (e.g., the workload) observed at run-time. We rely on fluid approximation to speed up the analysis of the transient dynamics of performance indices. To support our approach we developed a tool that automatically generates simulation and fluid-analysis code from a high-level description of the queueing network. An illustrative example demonstrates the effectiveness of our approach.

Trubiani, Catia
Emilio Incerto, Mirco Tribastone, and Catia Trubiani (Gran Sasso Science Institute, Italy; IMT Institute for Advanced Studies, Italy)
Complex software systems are required to adapt dynamically to changing workloads and scenarios while guaranteeing a set of performance objectives. This is not a trivial task, since run-time variability makes it challenging for software designers to devise the needed resources. In this context, self-adaptation is a promising technique that works towards the specification of the most suitable system configuration, such that the system behavior is preserved while performance requirements are met. In this paper we propose a proactive approach based on queueing networks that enables self-adaptation by predicting performance flaws and devising the most suitable allocation of system resources. The queueing network model represents the system behavior and embeds the input parameters (e.g., the workload) observed at run-time. We rely on fluid approximation to speed up the analysis of the transient dynamics of performance indices. To support our approach we developed a tool that automatically generates simulation and fluid-analysis code from a high-level description of the queueing network. An illustrative example demonstrates the effectiveness of our approach.

Ustinova, Tatiana
Tatiana Ustinova and Pooyan Jamshidi (Imperial College London, UK)
Queueing network models are commonly used for performance modelling. However, during the application development stage, analytical models may not be able to continuously reflect performance, for example due to performance bugs or minor changes in the application code that cannot readily be reflected in the queueing model. To cope with this problem, a measurement-based approach adopting the Design of Experiments (DoE) technique is proposed. The applicability of the proposed method is demonstrated on a complex 3-tier e-commerce application that is difficult to model with queueing networks.

Waldén, Marina
Marta Olszewska and Marina Waldén (Åbo Akademi University, Finland)
Quality is the cornerstone of high-criticality systems, since in case of failure not only major financial losses are at stake, but also human lives. Formal methods that support model-based development are one of the methodologies used to achieve correct-by-construction systems. However, they are often heavyweight and require a dedicated development process. In our work we combine formal and agile software engineering approaches; in particular, we use Event-B and Scrum to assure quality and to enable more rapid and flexible development. Since we identified further prerequisites for a successful IT project, we use DevOps to embrace development, quality assurance, and IT operations. In this paper we show how formal modelling can function within DevOps and thus promote various dimensions of quality and continuous delivery.

Wang, Weikun
Weikun Wang, Juan F. Pérez, and Giuliano Casale (Imperial College London, UK)
Software performance engineering relies heavily on application and resource models that enable the prediction of Quality-of-Service metrics. Critical to these models is the accuracy of their parameters, whose values can change with the application and the resources on which it is deployed. In this paper we introduce the Filling-the-gap (FG) tool, which automates the parameter estimation of application performance models. The tool implements a set of statistical routines to estimate the parameters of performance models; these routines are executed automatically using monitoring information kept in a local database.

23 authors