
9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2014), June 2–3, 2014, Hyderabad, India

SEAMS 2014 – Proceedings


Frontmatter

Title Page

Preface
Welcome to the proceedings of the 9th edition of the International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2014). The symposium was held in Hyderabad, India, on June 2–3, 2014.

Keynotes

Genetic Improvement for Adaptive Software Engineering (Keynote)
Mark Harman, Yue Jia, William B. Langdon, Justyna Petke, Iman Hemati Moghadam, Shin Yoo, and Fan Wu
(University College London, UK)
This paper presents a brief outline of an approach to online genetic improvement. We argue that existing progress in genetic improvement can be exploited to support adaptivity. We illustrate our proposed approach with a 'dreaming smart device' example that combines online and offline machine learning and optimisation.
Adapting Our View of Software Adaptation: An Architectural Perspective (Keynote)
Nenad Medvidovic
(University of Southern California, USA)
Engineers frequently neglect to carefully consider the impact of adaptation on a software system. As a result, the software system's architectural design sooner, rather than later, begins to deviate from the original designers' intent and to decay through unplanned introduction of new and/or invalidation of existing design decisions. For systems that are intended to be (self-)adaptive, this problem can be even more pronounced. A solution that was proposed over a decade ago was to keep the architectures of (self-)adaptive systems in sync with their implementations through carefully engineered implementation frameworks, and to allow implementation-level adaptations only via carefully controlled architecture-level operations. However, many approaches to (self-)adaptive software do not explicitly consider the system's architecture as the starting point for adaptation and, more generally, developers change systems in seemingly arbitrary ways all the time. This begs the question: What is the impact of system changes on a system's architecture in a general case? This keynote talk presents the results of an on-going study that has tried to shed light on this question. To date, the study has involved around 30 open-source systems and, in several cases, large numbers of versions of a given system. The keynote discusses and illustrates the challenges in extracting the architecture of a system from its implementation artifacts, the concrete problems posed by architectural decay, the difficulties of tracking the architectural impact of implementation-level changes, and the occasional arbitrariness with which the adaptation of real, widely-used software systems is approached. The keynote then identifies several promising research opportunities that present themselves for dealing with these problems in (self-)adaptive systems.

Search-Based and Data-Mining Approaches

Designing Search Based Adaptive Systems: A Quantitative Approach
Parisa Zoghi, Mark Shtern, and Marin Litoiu
(York University, Canada)
Designing an adaptive system to meet its quality constraints in the face of environmental uncertainties can be a challenging task. In a cloud environment, a designer also has to consider and evaluate different control points, i.e., those variables that affect the quality of the software system. This paper presents a method for eliciting, evaluating, and ranking control points for web applications deployed in cloud environments. The proposed method consists of several phases that take high-level stakeholders' adaptation goals and transform them into lower-level MAPE-K loop control points. The MAPE-K loops are then activated at runtime using search-based algorithms. We conducted several experiments to evaluate the different phases of our methodology.
Towards Run-Time Adaptation of Test Cases for Self-Adaptive Systems in the Face of Uncertainty
Erik M. Fredericks, Byron DeVries, and Betty H. C. Cheng
(Michigan State University, USA)
Self-adaptive systems (SAS) may be subjected to conditions for which they were not explicitly designed. For those high-assurance SAS applications that must deliver critical services, techniques are needed to ensure that only acceptable behavior is provided. While testing an SAS at design time can validate its expected behaviors in known circumstances, testing at run time provides assurance that the SAS will continue to behave as expected in uncertain situations. This paper introduces Veritas, an approach for using utility functions to guide the test adaptation process as part of a run-time testing framework. Specifically, Veritas adapts test cases for an SAS at run time to ensure that the SAS continues to execute in a safe and correct manner when adapting to handle changing environmental conditions.
Automated Mining of Software Component Interactions for Self-Adaptation
Eric Yuan, Naeem Esfahani, and Sam Malek
(George Mason University, USA)
A self-adaptive software system should be able to monitor and analyze its runtime behavior and make adaptation decisions accordingly to meet certain desirable objectives. Traditional software adaptation techniques and recent "models@runtime" approaches usually require an a priori model for a system's dynamic behavior. Oftentimes the model is difficult to define and labor-intensive to maintain, and tends to get out of date due to adaptation and architecture decay. We propose an alternative approach that does not require defining the system's behavior model beforehand, but instead involves mining software component interactions from system execution traces to build a probabilistic usage model, which is in turn used to analyze, plan, and execute adaptations. Our preliminary evaluation of the approach against an Emergency Deployment System shows that the associations mining model can be used to effectively address a variety of adaptation needs, including (1) safely applying dynamic changes to a running software system without creating inconsistencies, (2) identifying potentially malicious (abnormal) behavior for self-protection, and (3) our ongoing research on improving deployment of software components in a distributed setting for performance self-optimization.
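The interaction-mining idea can be illustrated with a small sketch (not the authors' implementation; component names, thresholds, and traces are invented for illustration): treat each system transaction as the set of components it touched, then report pairwise associations whose support and confidence clear a threshold.

```python
from itertools import combinations

def mine_associations(traces, min_support=0.5, min_confidence=0.8):
    """Mine pairwise component associations from execution traces.

    Each trace is the set of components observed in one transaction.
    A rule (a -> b) is reported when the pair co-occurs often enough
    (support) and b reliably appears whenever a does (confidence).
    """
    n = len(traces)
    single, pair, rules = {}, {}, {}
    for t in traces:
        for c in t:
            single[c] = single.get(c, 0) + 1
        for a, b in combinations(sorted(t), 2):
            pair[(a, b)] = pair.get((a, b), 0) + 1
    for (a, b), cnt in pair.items():
        support = cnt / n
        if support < min_support:
            continue
        for x, y in ((a, b), (b, a)):
            confidence = cnt / single[x]
            if confidence >= min_confidence:
                rules[(x, y)] = (support, confidence)
    return rules

traces = [
    {"LoadBalancer", "AuthServer", "Database"},
    {"LoadBalancer", "AuthServer"},
    {"LoadBalancer", "Cache"},
    {"LoadBalancer", "AuthServer", "Database"},
]
rules = mine_associations(traces)
# AuthServer -> LoadBalancer holds with confidence 1.0: every observed
# AuthServer interaction also involved the LoadBalancer, so quiescing
# AuthServer without considering LoadBalancer would risk inconsistency.
```

Such a mined usage model could then inform, for example, which components must be quiesced together before a safe dynamic change.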

Security, Goals, and Requirements

Requirements-Driven Mediation for Collaborative Security
Amel Bennaceur, Arosha K. Bandara, Michael Jackson, Wei Liu, Lionel Montrieux, Thein Than Tun, Yijun Yu, and Bashar Nuseibeh
(Open University, UK; Wuhan Institute of Technology, China; Lero, Ireland)
Security is concerned with the protection of assets from intentional harm. Secure systems provide capabilities that enable such protection to satisfy some security requirements. In a world increasingly populated with mobile and ubiquitous computing technology, the scope and boundary of security systems can be uncertain and can change. A single functional component, or even multiple components individually, are often insufficient to satisfy complex security requirements on their own. Adaptive security aims to enable systems to vary their protection in the face of changes in their operational environment. Collaborative security, which we propose in this paper, aims to exploit the selection and deployment of multiple, potentially heterogeneous, software-intensive components to collaborate in order to meet security requirements in the face of changes in the environment, changes in assets under protection and their values, and the discovery of new threats and vulnerabilities. However, the components that need to collaborate may not have been designed and implemented to interact with one another collaboratively. To address this, we propose a novel framework for collaborative security that combines adaptive security, collaborative adaptation and an explicit representation of the capabilities of the software components that may be needed in order to achieve collaborative security. We elaborate on each of these framework elements, focusing in particular on the challenges and opportunities afforded by (1) the ability to capture, represent, and reason about the capabilities of different software components and their operational context, and (2) the ability of components to be selected and mediated at runtime in order to satisfy the security requirements. We illustrate our vision through a collaborative robotic implementation, and suggest some areas for future work.
Topology Aware Adaptive Security
Liliana Pasquale, Carlo Ghezzi, Claudio Menghi, Christos Tsigkanos, and Bashar Nuseibeh
(Lero, Ireland; University of Limerick, Ireland; Politecnico di Milano, Italy; Open University, UK)
Adaptive security systems aim to protect valuable assets in the face of changes in their operational environment. They do so by monitoring and analysing this environment, and deploying security functions that satisfy some protection (security, privacy, or forensic) requirements. In this paper, we suggest that a key characteristic for engineering adaptive security is the topology of the operational environment, which represents a physical and/or a digital space - including its structural relationships, such as containment, proximity, and reachability. For adaptive security, topology expresses a rich representation of context that can provide a system with both structural and semantic awareness of important contextual characteristics. These include the location of assets being protected or the proximity of potentially threatening agents that might harm them. Security-related actions, such as the physical movement of an actor from one room to another in a building, may be viewed as topological changes. The detection of a possible undesired topological change (such as an actor possessing a safe's key entering the room where the safe is located) may lead to the decision to deploy a particular security control to protect the relevant asset. This position paper advocates topology awareness for more effective engineering of adaptive security. By monitoring changes in topology at runtime one can identify new or changing threats and attacks, and deploy adequate security controls accordingly. The paper elaborates on the notion of topology and provides a vision and research agenda on its role for systematically engineering adaptive security systems.
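The abstract's own example (an actor holding a safe's key entering the safe's room) can be sketched as a containment check over a tiny topology model. This is an illustrative sketch only, not the authors' formalism; all entity and room names are invented.

```python
class Topology:
    """Minimal topology model: containment (who/what is in which room)
    plus which items each agent holds."""
    def __init__(self):
        self.location = {}   # entity -> room (containment relation)
        self.holds = {}      # agent -> set of held items

    def place(self, entity, room):
        self.location[entity] = room

    def give(self, agent, item):
        self.holds.setdefault(agent, set()).add(item)

    def threat_of_theft(self, asset, key):
        """Undesired topological state: an agent holding the asset's
        key is co-located (proximity) with the asset."""
        room = self.location.get(asset)
        return [a for a, items in self.holds.items()
                if key in items and self.location.get(a) == room]

t = Topology()
t.place("safe", "vault_room")
t.place("alice", "lobby")
t.give("alice", "safe_key")
assert t.threat_of_theft("safe", "safe_key") == []    # no threat yet
t.place("alice", "vault_room")                        # topological change
assert t.threat_of_theft("safe", "safe_key") == ["alice"]
```

A runtime monitor watching for such state transitions could then trigger the deployment of a security control (e.g., locking the vault door).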
Self-Adaptive Applications: On the Development of Personalized Web-Tasking Systems
Lorena Castañeda, Norha M. Villegas, and Hausi A. Müller
(University of Victoria, Canada; Icesi University, Colombia; IBM, Canada)
Personalized Web-Tasking (PWT) proposes the automation of user-centric and repetitive web interactions to assist users in the fulfilment of personal goals using internet systems. In PWT, both personal goals and internet systems are affected by unpredictable changes in user preferences, situations, system infrastructures and environments. Therefore, self-adaptation enhanced with dynamic context monitoring is required to guarantee the effectiveness of PWT systems that, despite context uncertainty, must guarantee the accomplishment of personal goals and deliver pleasant user experiences. This position paper describes our approach to the development of PWT systems, which relies on self-adaptation and its enabling technologies. In particular, it presents our runtime modelling approach that is comprised of our PWT Ontology and Goal-oriented Context-sensitive web-tasking (GCT) models, and the way we exploit previous SEAMS contributions developed in our research group, the DYNAMICO reference model and the SmarterContext Monitoring Infrastructure and Reasoning Engine. The main goal of this paper is to demonstrate how the most crucial challenges in the engineering of PWT systems can be addressed by implementing them as self-adaptive software.
Modelling and Analysing Contextual Failures for Dependability Requirements
Danilo F. Mendonça, Raian Ali, and Genaína N. Rodrigues
(University of Brasília, Brazil; Bournemouth University, UK)
The notion of Contextual Requirements refers to the inter-relation between the requirements of a system, both functional and non-functional (NFRs), and the dynamic environment in which the system operates. Dependability requirements are NFRs which could also be context-dependent. The meaning and the consequence of faults affecting dependability vary in relation to the context in which a fault occurs. In this paper, we elaborate on the need to consider the contextual nature of failures and dependability. Then, we extend a contextual requirements model, the contextual goal model, to capture contextual failures and utilize that to enrich the semantics of dependability requirements. We provide techniques to analyse and reason about the effects of contexts on failures and their consequences. This analysis helps evaluate the possible alternative configurations to reach goals from a dependability perspective and, hence, make adaptation decisions. Finally, we demonstrate the feasibility and applicability of our approach on a Mobile Personal Emergency Response system.

Analysis and Diagnosis

User-Centric Adaptation of Multi-tenant Services: Preference-Based Analysis for Service Reconfiguration
Jesús García-Galán, Liliana Pasquale, Pablo Trinidad, and Antonio Ruiz-Cortés
(University of Seville, Spain; Lero, Ireland; University of Limerick, Ireland)
Multi-tenancy is a key pillar of cloud services. It allows different tenants to share computing resources transparently and, at the same time, guarantees substantial cost savings for the providers. However, from a user perspective, one of the major drawbacks of multi-tenancy is the lack of configurability. Depending on the isolation degree, the same service instance and even the same service configuration may be shared among multiple tenants (i.e. a shared multi-tenant service). Moreover, tenants usually have different, and in most cases conflicting, configuration preferences. To overcome this limitation, this paper introduces a novel approach to support user-centric adaptation in shared multi-tenant services. The adaptation objective aims to maximise tenants' satisfaction, even when tenants and their preferences change during the service lifetime. This paper describes how to engineer the activities of the MAPE loop to support user-centric adaptation, and focuses on the analysis of tenants' preferences. In particular, we use a game theoretic analysis to identify a service configuration that maximises tenants' preferences satisfaction. We illustrate and motivate our approach by utilising a multi-tenant desktop scenario. Obtained experimental results demonstrate the feasibility of the proposed analysis.
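The core tension — one shared configuration, many conflicting tenant preferences — can be sketched with a brute-force search for the configuration maximising the sum of tenant scores. This is a simplistic stand-in for the paper's game-theoretic analysis; option names and scoring functions are invented for illustration.

```python
from itertools import product

def best_configuration(options, tenants):
    """Exhaustively pick the shared configuration that maximises the
    sum of per-tenant preference scores.

    options: dict mapping each configuration parameter to its values.
    tenants: list of functions scoring a candidate configuration.
    """
    best, best_score = None, float("-inf")
    for values in product(*options.values()):
        candidate = dict(zip(options.keys(), values))
        score = sum(t(candidate) for t in tenants)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

options = {"theme": ["light", "dark"], "autosave": [True, False]}
tenants = [
    # Tenant 1 strongly prefers the dark theme, mildly wants autosave.
    lambda c: (2 if c["theme"] == "dark" else 0) + (1 if c["autosave"] else 0),
    # Tenant 2 mildly prefers light, strongly wants autosave.
    lambda c: (1 if c["theme"] == "light" else 0) + (2 if c["autosave"] else 0),
]
config, score = best_configuration(options, tenants)
# The compromise {"theme": "dark", "autosave": True} scores 5 in total.
```

Re-running the search whenever tenants join, leave, or change preferences corresponds to the Analyze step of the MAPE loop the paper describes.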
Diagnosing Unobserved Components in Self-Adaptive Systems
Paulo Casanova, David Garlan, Bradley Schmerl, and Rui Abreu
(Carnegie Mellon University, USA; University of Porto, Portugal)
Availability is an increasingly important quality for today's software-based systems and it has been successfully addressed by the use of closed-loop control systems in self-adaptive systems. Probes are inserted into a running system to obtain information and the information is fed to a controller that, through provided interfaces, acts on the system to alter its behavior. When a failure is detected, pinpointing the source of the failure is a critical step for a repair action. However, information obtained from a running system is commonly incomplete due to probing costs or unavailability of probes. In this paper we address the problem of fault localization in the presence of incomplete system monitoring. We may not be able to directly observe a component but we may be able to infer its health state. We provide formal criteria to determine when health states of unobservable components can be inferred and establish formal theoretical bounds for accuracy when using any spectrum-based fault localization algorithm.
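Spectrum-based fault localization, which the paper builds its accuracy bounds on, ranks components by correlating their involvement in transactions with transaction failures. A common similarity metric is the Ochiai coefficient; the sketch below (component names and spectra are invented) shows the basic scoring, without the paper's inference over unobserved components.

```python
import math

def ochiai_ranking(spectra, outcomes):
    """Rank components by suspiciousness with the Ochiai coefficient.

    spectra[i][c] is 1 if component c was involved in transaction i;
    outcomes[i] is True when transaction i failed.
    """
    total_failed = sum(outcomes)
    scores = {}
    for c in spectra[0]:
        ef = sum(1 for row, failed in zip(spectra, outcomes)
                 if failed and row[c])       # involved in failing runs
        ep = sum(1 for row, failed in zip(spectra, outcomes)
                 if not failed and row[c])   # involved in passing runs
        denom = math.sqrt(total_failed * (ef + ep))
        scores[c] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

spectra = [
    {"A": 1, "B": 1, "C": 0},   # transaction 1
    {"A": 1, "B": 0, "C": 1},   # transaction 2
    {"A": 0, "B": 1, "C": 1},   # transaction 3
]
outcomes = [True, True, False]
ranking = ochiai_ranking(spectra, outcomes)
# A is involved in both failures and no passing run, so it ranks
# first with score 1.0; B and C tie at 0.5.
```

The paper's contribution addresses the harder case where some columns of such a spectrum are missing because the component cannot be probed directly.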

Cloud Computing

Symbiotic and Sensitivity-Aware Architecture for Globally-Optimal Benefit in Self-Adaptive Cloud
Tao Chen and Rami Bahsoon
(University of Birmingham, UK)
Due to the uncertain and dynamic demand for Quality of Service (QoS) in cloud-based systems, engineering self-adaptivity in cloud architectures requires novel approaches to support on-demand elasticity. The architecture should dynamically select an elastic strategy that optimizes the global benefit for QoS and cost objectives for all cloud-based services, and should also provide mechanisms for reaching the strategy with minimal overhead. However, the challenge in the cloud is that the nature of objectives (e.g., throughput and the required cost) and QoS interference can cause overlapping sensitivity amongst intra- and inter-service objectives, which leads to objective-dependency (i.e., conflicting or harmonic objectives) during optimization. In this paper, we propose a symbiotic and sensitivity-aware architecture for optimizing global benefit with reduced overhead in the cloud. The architecture dynamically partitions QoS and cost objectives into sensitivity-independent regions, where local optimums are achieved. In addition, the architecture realizes the concept of a symbiotic feedback loop: a bi-directional self-adaptive action that not only dynamically monitors and adapts the managed services by scaling them to their demand, but also adaptively consolidates the managing system by re-partitioning the regions based on observed symptoms. We implement the architecture as a prototype that extends a decentralized MAPE loop with an Adaptor component. We then experimentally analyze and evaluate our architecture using hypothetical scenarios. The results reveal that our symbiotic and sensitivity-aware architecture produces better global benefit with smaller overhead than non-sensitivity-aware architectures.
Autonomic Resource Provisioning for Cloud-Based Software
Pooyan Jamshidi, Aakash Ahmad, and Claus Pahl
(Dublin City University, Ireland; Lero, Ireland)
Cloud elasticity provides a software system with the ability to maintain optimal user experience by automatically acquiring and releasing resources, while paying only for what has been consumed. The mechanism for automatically adding or removing resources on the fly is referred to as auto-scaling. The state of the practice with respect to auto-scaling involves specifying threshold-based rules to implement elasticity policies for cloud-based applications. However, there are several shortcomings to this approach. Firstly, the elasticity rules must be specified precisely by quantitative values, which requires deep knowledge and expertise. Furthermore, existing approaches do not explicitly deal with uncertainty in cloud-based software, where noise and unexpected events are common. This paper exploits fuzzy logic to enable qualitative specification of elasticity rules for cloud-based software. In addition, this paper discusses a control-theoretical approach using type-2 fuzzy logic systems to reason about elasticity under uncertainty. We conduct several experiments to demonstrate that cloud-based software enhanced with such an elasticity controller can robustly handle unexpected spikes in the workload and provide acceptable user experience. This translates into increased profit for the cloud application owner.
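The contrast between crisp thresholds and qualitative fuzzy rules can be sketched in a few lines. This is a toy type-1 illustration (not the paper's type-2 controller); the membership breakpoints and rule weights are invented.

```python
def fuzzy_membership(x, low, high):
    """Degree (0..1) to which x belongs to a fuzzy set rising from
    low to high; a crisp threshold would jump from 0 to 1 at one point."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def scaling_decision(cpu_util):
    """Qualitative elasticity rules, roughly:
    'IF load is high THEN scale out' and 'IF load is low THEN scale in',
    defuzzified into an instance delta."""
    high = fuzzy_membership(cpu_util, 0.6, 0.9)
    low = 1.0 - fuzzy_membership(cpu_util, 0.2, 0.5)
    # Weighted (centroid-like) defuzzification: +2 instances for
    # fully "high" load, -1 instance for fully "low" load.
    return round(2 * high - 1 * low)

assert scaling_decision(0.95) == 2   # clearly overloaded: scale out
assert scaling_decision(0.10) == -1  # clearly idle: scale in
assert scaling_decision(0.55) == 0   # ambiguous region: hold steady
```

Unlike a single crisp threshold, the graded membership makes the decision robust to small, noisy fluctuations around the boundary.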
A Computational Field Framework for Collaborative Task Execution in Volunteer Clouds
Stefano Sebastio, Michele Amoretti, and Alberto Lluch Lafuente
(IMT Institute for Advanced Studies, Italy; University of Parma, Italy)
The increasing diffusion of cloud technologies offers new opportunities for distributed and collaborative computing. Volunteer clouds are a prominent example, where participants join and leave the platform and collaborate by sharing computational resources. The high complexity, dynamism and unpredictability of such scenarios call for decentralized self-* approaches. We present in this paper a framework for the design and evaluation of self-adaptive collaborative task execution strategies in volunteer clouds. As a byproduct, we propose a novel strategy based on the Ant Colony Optimization paradigm, that we validate through simulation-based statistical analysis over Google cluster data.

Verification

Efficient Runtime Quantitative Verification using Caching, Lookahead, and Nearly-Optimal Reconfiguration
Simos Gerasimou, Radu Calinescu, and Alec Banks
(University of York, UK; Dstl, UK)
Self-adaptive systems used in safety-critical and business-critical applications must continue to comply with strict non-functional requirements while evolving in order to adapt to changing workloads, environments, and goals. Runtime quantitative verification (RQV) has been proposed as an effective means of enhancing self-adaptive systems with this capability. However, RQV frequently fails to provide the fast response times and low computation overheads required by real-world self-adaptive systems. In this paper, we investigate how three techniques, namely caching, lookahead and nearly-optimal reconfiguration, and combinations thereof, can help address this limitation. Extensive experiments in a case study involving the RQV-driven self-adaptation of an unmanned underwater vehicle indicate that these techniques can lead to significant reductions in RQV response times and computation overheads.
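Of the three techniques, caching is the simplest to illustrate: identical verification queries are answered from a result cache instead of re-running the quantitative verifier. The sketch below is illustrative only; the configuration string, the PRISM-style property string, and the dummy verdict all stand in for a real model-checking call.

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=None)
def verify(config, requirement):
    """Stand-in for an expensive quantitative verification run
    (e.g., probabilistic model checking of a requirement)."""
    calls["count"] += 1
    return True  # dummy verdict for the sketch

# Two identical RQV queries: only the first runs the verifier; the
# second is answered from the cache, cutting response time.
assert verify("2-servers", "P>=0.99 [ F done ]") is True
assert verify("2-servers", "P>=0.99 [ F done ]") is True
assert calls["count"] == 1
```

Lookahead extends this idea by pre-populating the cache with verification results for configurations the system is likely to need next.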
ActivFORMS: Active Formal Models for Self-Adaptation
M. Usman Iftikhar and Danny Weyns
(Linnaeus University, Sweden)
Self-adaptation enables a software system to deal autonomously with uncertainties, such as dynamic operating conditions that are difficult to predict or changing goals. A common approach to realize self-adaptation is with a MAPE-K feedback loop that consists of four adaptation components: Monitor, Analyze, Plan, and Execute. These components share Knowledge models of the managed system, its goals and environment. To provide guarantees of the adaptation goals, state-of-the-art approaches propose using formal models of the knowledge. However, less attention is given to the formalization of the adaptation components themselves, which is important to provide guarantees of correctness of the adaptation behavior (e.g., does the execute component execute the plan correctly?). We propose Active FORmal Models for Self-adaptation (ActivFORMS), which uses an integrated formal model of the adaptation components and knowledge models. The formal model is directly executed by a virtual machine to realize adaptation, hence the name active model. The contributions of ActivFORMS are: (1) the approach assures that the adaptation goals that are verified offline are guaranteed at runtime, and (2) it supports dynamic adaptation of the active model to support changing goals. We show how we have applied ActivFORMS to a small-scale robotic system.
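For readers new to the area, the MAPE-K loop referenced throughout this session can be sketched as a skeleton (ActivFORMS itself executes formally verified models of these components in a virtual machine; the latency goal, probe, and actuator below are invented for illustration):

```python
class MapeK:
    """Minimal MAPE-K skeleton: four adaptation components sharing a
    Knowledge dictionary about the managed system and its goal."""
    def __init__(self, goal_latency_ms):
        self.knowledge = {"goal": goal_latency_ms, "servers": 1}

    def monitor(self, probe):
        # Monitor: collect data from the managed system.
        self.knowledge["latency"] = probe()

    def analyze(self):
        # Analyze: is the adaptation goal violated?
        return self.knowledge["latency"] > self.knowledge["goal"]

    def plan(self):
        # Plan: choose an adaptation (here, a fixed scale-out step).
        return {"add_servers": 1}

    def execute(self, plan, actuator):
        # Execute: apply the plan through the system's effectors.
        self.knowledge["servers"] += plan["add_servers"]
        actuator(self.knowledge["servers"])

loop = MapeK(goal_latency_ms=200)
loop.monitor(lambda: 350)        # observed latency violates the goal
if loop.analyze():
    loop.execute(loop.plan(), actuator=lambda n: None)
assert loop.knowledge["servers"] == 2
```

The guarantee question ActivFORMS targets is whether each of these four steps behaves as specified, not just whether the Knowledge models are correct.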
Run-Time Generation, Transformation, and Verification of Access Control Models for Self-Protection
Christopher Bailey, Lionel Montrieux, Rogério de Lemos, Yijun Yu, and Michel Wermelinger
(University of Kent, UK; Open University, UK; University of Coimbra, Portugal)
Self-adaptive access control, in which self-* properties are applied to protecting systems, is a promising solution for the handling of malicious user behaviour in complex infrastructures. A major challenge in self-adaptive access control is ensuring that chosen adaptations are valid, and produce a satisfiable model of access. The contribution of this paper is the generation, transformation and verification of Role Based Access Control (RBAC) models at run-time, as a means for providing assurances that the adaptations to be deployed are valid. The goal is to protect the system against insider threats by adapting at run-time the access control policies associated with system resources, and access rights assigned to users. Depending on the type of attack, and based on the models from the target system and its environment, the adapted access control models need to be evaluated against the RBAC metamodel, and the adaptation constraints related to the application. The feasibility of the proposed approach has been demonstrated in the context of a fully working prototype using malicious scenarios inspired by a well documented case of insider attack.

Decision-Making

A Prediction-Driven Adaptation Approach for Self-Adaptive Sensor Networks
Ivan Dario Paez Anaya, Viliam Simko, Johann Bourcier, Noël Plouzeau, and Jean-Marc Jézéquel
(IRISA, France; INRIA, France; University of Rennes 1, France; KIT, Germany)
Engineering self-adaptive software in unpredictable environments such as pervasive systems, where network availability, remaining battery power, and environmental conditions may vary over the lifetime of the system, is a very challenging task. Many current software engineering approaches leverage run-time architectural models to ease the design of the autonomic control loop of these self-adaptive systems. While these approaches perform well in reacting to various evolutions of the runtime environment, implementations based on reactive paradigms have a limited ability to anticipate problems, leading to transient unavailability of the system, costly unnecessary adaptations, or resource waste. In this paper, we follow a proactive self-adaptation approach that aims at overcoming this limitation of reactive approaches. Based on predictive analysis of internal and external context information, our approach regulates new architecture reconfigurations and deploys them using models at runtime. We evaluated our approach on a case study where we combined hourly temperature readings provided by the National Climatic Data Center (NCDC) with fire reports from the Moderate Resolution Imaging Spectroradiometer (MODIS) and simulated the behavior of multiple systems. The results confirm that our proactive approach outperforms a typical reactive system in scenarios with seasonal behavior.
Stochastic Game Analysis and Latency Awareness for Proactive Self-Adaptation
Javier Cámara, Gabriel A. Moreno, and David Garlan
(Carnegie Mellon University, USA)
Although different approaches to decision-making in self-adaptive systems have shown their effectiveness in the past by factoring in predictions about the system and its environment (e.g., resource availability), no proposal considers the latency associated with the execution of tactics upon the target system. However, different adaptation tactics can take different amounts of time until their effects can be observed. In reactive adaptation, ignoring adaptation tactic latency can lead to suboptimal adaptation decisions (e.g., activating a server that takes more time to boot than the transient spike in traffic that triggered its activation). In proactive adaptation, taking adaptation latency into account is necessary to get the system into the desired state to deal with an upcoming situation. In this paper, we introduce a formal analysis technique based on model checking of stochastic multiplayer games (SMGs) that enables us to quantify the potential benefits of employing different types of algorithms for self-adaptation. In particular, we apply this technique to show the potential benefit of considering adaptation tactic latency in proactive adaptation algorithms. Our results show that factoring tactic latency into decision making improves the outcome of adaptation. We also present an algorithm for proactive adaptation that considers tactic latency, and show that it achieves higher utility than an algorithm that is optimal under the assumption of no latency.
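The abstract's server-boot example can be reduced to a one-line latency-aware check. This is a deliberately minimal sketch of the intuition, not the paper's SMG analysis; the durations are invented.

```python
def should_activate_server(predicted_spike_ms, boot_latency_ms):
    """Latency-aware tactic selection: starting a server only pays off
    if it finishes booting before the predicted traffic spike ends."""
    return boot_latency_ms < predicted_spike_ms

# A transient 30 s spike is over before a 120 s boot completes, so the
# latency-aware decision is to not activate the server; a latency-blind
# controller would start it anyway and pay for an idle machine.
assert should_activate_server(30_000, 120_000) is False
assert should_activate_server(600_000, 120_000) is True
```

The paper's contribution is quantifying, via stochastic game analysis, how much utility such latency awareness buys across whole adaptation strategies rather than single decisions.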
Dealing with Multiple Failures in Zanshin: A Control-Theoretic Approach
Konstantinos Angelopoulos, Vítor E. Silva Souza, and John Mylopoulos
(University of Trento, Italy; Federal University of Espírito Santo, Brazil)
Adaptive software systems monitor the environment to ensure that their requirements are being fulfilled. When this is not the case, their adaptation mechanism proposes an adaptation (a change to the behaviour/configuration) that can lead to restored satisfaction of system requirements. Unfortunately, such adaptation mechanisms do not work very well in cases where there are multiple failures (divergence of system behaviour relative to several requirements). This paper proposes an adaptation mechanism that can handle multiple failures. The proposal extends the Qualia adaptation mechanism of Zanshin with features adopted from Control Theory. The proposed framework supports the definition of requirements for the adaptation process, prescribing how to deal at runtime with problems such as conflicting requirements and synchronization, enhancing the precision and effectiveness of the adaptation mechanism. The proposed mechanism, named Qualia+, is illustrated and evaluated with an example using the meeting scheduling exemplar.
