ICSE 2011 Workshops
33rd International Conference on Software Engineering

Seventh International Workshop on Software Engineering for Secure Systems (SESS 2011), May 22, 2011, Waikiki, Honolulu, HI, USA

SESS 2011 – Proceedings


Seventh International Workshop on Software Engineering for Secure Systems (SESS 2011)

Preface

Title Page


Foreword
The SESS workshop aims to provide a venue for software engineers and security researchers to exchange ideas and techniques. The theme of the seventh edition is "Soft and secure".

Full Papers

An Initial Study on the Use of Execution Complexity Metrics as Indicators of Software Vulnerabilities
Yonghee Shin and Laurie Williams
(DePaul University, USA; North Carolina State University, USA)
Allocating code inspection and testing resources to the most problematic code areas is important for reducing development time and cost. While complexity metrics collected statically from software artifacts are known to be helpful in finding vulnerable code locations, some complex code is rarely executed in practice, so its vulnerabilities have less chance of being detected. To augment the use of static complexity metrics, this study examines execution complexity metrics, collected during code execution, as indicators of vulnerable code locations. We conducted case studies on two large, widely used open-source projects: the Mozilla Firefox web browser and the Wireshark network protocol analyzer. Our results indicate that execution complexity metrics are better indicators of vulnerable code locations than the most commonly used static complexity metric, lines of source code. The ability of execution complexity metrics to discriminate vulnerable code locations from neutral ones and to predict vulnerable code locations varies across projects. However, vulnerability prediction models using execution complexity metrics are superior to models using static complexity metrics in reducing inspection effort.
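
As an illustrative sketch of the kind of metric-based prediction the study evaluates, the following fits a simple classifier over per-file metrics. The metric values, the logistic-regression choice, and the feature set are assumptions for illustration, not the authors' models or data.

```python
# Illustrative sketch only: flags likely-vulnerable files from complexity
# metrics. The data and the scikit-learn logistic-regression choice are
# assumptions, not the models or datasets used in the paper.
from sklearn.linear_model import LogisticRegression

# Hypothetical per-file records: (static LOC, executed-path complexity),
# labeled 1 if a vulnerability was later reported in the file.
X = [
    [1200, 340],   # large and heavily exercised
    [2500,  15],   # large but rarely executed
    [300,  280],   # small but hot
    [150,    5],   # small and cold
]
y = [1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Rank new files by predicted risk to prioritize inspection effort.
candidates = [[1800, 400], [2200, 10]]
for metrics, risk in zip(candidates, model.predict_proba(candidates)[:, 1]):
    print(metrics, f"risk={risk:.2f}")
```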

Security Policy Foundations in Context UNITY
M. Todd Gamble, Rose F. Gamble, and Matthew L. Hale
(University of Tulsa, USA)
Security certification includes assessing an information system to verify its compliance with diverse, pre-selected security controls. The goal of certification is to identify where controls are implemented correctly and where they are violated, creating potential vulnerability risks. Certification complexity is magnified in software composed of systems of systems, where there are few formal methodologies to express management policies for a given set of security control properties and to verify them against the interaction of the participating components and their individual security policy implementations. In this paper, we extend Context UNITY, a formal, distributed, and context-aware coordination language, to support policy controls. The new language features enforce security controls and provide a means to declare policy specifics in a manner similar to declaring variable types. We use these features in a specification to show how compliance with selected security controls, such as those found in NIST SP 800-53, can be verified.
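
The idea of checking declared policy specifics against selected controls can be pictured with a toy checker. The control identifiers below follow NIST SP 800-53 naming, but the data structures and predicates are invented simplifications, not Context UNITY syntax or the paper's specification.

```python
# Toy compliance checker (not Context UNITY): each component declares its
# policy settings, and each selected control is a predicate over them.
# The predicates are illustrative assumptions.

components = {
    "gateway":  {"encrypts_in_transit": True,  "audit_log": True},
    "archiver": {"encrypts_in_transit": False, "audit_log": True},
}

controls = {
    "SC-8 (transmission confidentiality)": lambda p: p["encrypts_in_transit"],
    "AU-2 (audit events)":                 lambda p: p["audit_log"],
}

# Report, per component, which selected controls are violated.
for name, policy in components.items():
    violated = [c for c, check in controls.items() if not check(policy)]
    print(name, "->", violated or "compliant")
```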

Preserving Security Properties under Refinement
Fabio Martinelli and Ilaria Matteucci
(IIT-CNR, Italy)
Communication is one of the cornerstones of our everyday life, and guaranteeing its security is an important challenge. In this paper, we propose a formal top-down approach for ensuring that security properties are preserved during the development of a complex, concurrent system, i.e., in the passage from the specification to the implementation of its components. We investigate the requirements a refinement function has to satisfy to preserve a class of properties that can be formalized as instances of a general scheme called Generalized Non Deducibility on Composition (GNDC). We then show that a system verified to be GNDC at a high level of abstraction remains GNDC at a lower one, without checking it again.
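
For reference, the GNDC scheme named in the abstract is usually written as below (notation as in the Focardi–Martinelli line of work; the exact symbols are our rendering, not copied from the paper):

```latex
% A process P satisfies GNDC^{\alpha}_{\unlhd} when, composed with any
% hostile environment X acting on the public channels C, its restricted
% behavior is still related (by \unlhd, e.g. trace inclusion) to the
% expected behavior \alpha(P).
P \in GNDC^{\alpha}_{\unlhd}
  \iff
  \forall X \in \mathcal{E}_C :\; (P \parallel X) \setminus C \;\unlhd\; \alpha(P)
```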

A Conceptual Meta-Model for Secured Information Systems
Nadira Lammari, Jean-Sylvain Bucumi, Jacky Akoka, and Isabelle Comyn-Wattiau
(CNAM, France)
Over the past years, research on specifying, designing, and developing secured information systems (IS) has been very active. Some contributions have focused on integrating security aspects, mainly access control mechanisms, at the implementation phase. Others pay particular attention to the capture and analysis of security requirements. However, to our knowledge, no method addresses the whole problem of specifying security requirements and transforming them through all the phases of the IS development life cycle. We argue that better secured IS can be obtained if security issues are taken into account at an early phase of the system life cycle and integrated with functional aspects along the whole life cycle. This paper is a step toward a comprehensive security conceptual meta-model encompassing the main security properties: availability, integrity, confidentiality, and accountability. It integrates functional and non-functional requirements, and it covers social and organizational as well as informational aspects. This meta-model is the backbone of our approach.
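
One plausible, purely illustrative way to picture the core of such a meta-model is a handful of linked entity types. The class and field names below are our assumptions, not the meta-model proposed in the paper.

```python
# Illustrative rendering of a security meta-model core; entity names and
# relationships are assumptions, not the authors' meta-model.
from dataclasses import dataclass, field
from enum import Enum

class SecurityProperty(Enum):
    AVAILABILITY = "availability"
    INTEGRITY = "integrity"
    CONFIDENTIALITY = "confidentiality"
    ACCOUNTABILITY = "accountability"

@dataclass
class FunctionalRequirement:
    name: str

@dataclass
class SecurityRequirement:
    name: str
    protects: SecurityProperty
    # Security requirements constrain functional ones, tying the
    # non-functional and functional views together.
    constrains: list[FunctionalRequirement] = field(default_factory=list)

order = FunctionalRequirement("place order")
req = SecurityRequirement("orders are attributable",
                          SecurityProperty.ACCOUNTABILITY, [order])
print(req.protects.value, "->", [f.name for f in req.constrains])
```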

Position Papers

Composition of Least Privilege Analysis Results in Software Architectures (Position Paper)
Koen Buyens, Riccardo Scandariato, and Wouter Joosen
(Katholieke Universiteit Leuven, Belgium)
Security principles are often neglected by software architects due to the lack of precise definitions, which results in potentially high-risk threats to systems. Our previous work tackled this by introducing formal foundations for the least privilege (LP) principle in software architectures and providing a technique to identify violations of this principle. This work shows that the technique can scale by composing the results obtained from analyzing the sub-parts of a larger system. The technique decomposes the system into independently described subsystems plus a description of the interactions between these subsystems. These descriptions are analyzed separately to obtain LP violations, which are then composed to obtain the violations of the overall system.
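
A least privilege violation can be read as a permission that is granted but never needed. The following toy sketch shows how per-subsystem results might compose once cross-subsystem interactions are accounted for; this framing is our assumption, not the paper's formal foundations.

```python
# Toy least-privilege composition: a violation is a (subject, permission)
# pair that is granted but not needed. Illustrative assumption only.

def lp_violations(granted, needed):
    return granted - needed

# Independently analyzed subsystems.
sub_a = lp_violations(
    granted={("web", "read_db"), ("web", "write_db")},
    needed={("web", "read_db")},
)
sub_b = lp_violations(
    granted={("batch", "write_db")},
    needed=set(),
)

# Interactions between subsystems can justify permissions that looked
# unnecessary locally, so subtract those before composing.
justified_by_interaction = {("batch", "write_db")}
system_violations = (sub_a | sub_b) - justified_by_interaction
print(system_violations)  # {('web', 'write_db')}
```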

Towards Transformation Guidelines from Secure Tropos to Misuse Cases (Position Paper)
Naved Ahmed and Raimundas Matulevičius
(University of Tartu, Estonia)
The growing interest in developing secure information systems (IS) requires that security concerns be properly articulated early in requirements engineering (RE), along with other functional and non-functional requirements. In this paper, based on the domain model for IS security risk management (SRM), we propose a set of transformation guidelines to translate Secure Tropos models into misuse case diagrams. We believe that such a model translation helps developers elicit real security needs by integrating security analysis from the early requirements stages through all stages of the development process. The translation aligns IS security concerns with functional requirements and maintains traceability of security decisions to their origin.
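
The flavor of such guidelines can be suggested by a concept-to-concept mapping. The pairs below are correspondences commonly drawn between the two notations in the literature, offered for illustration only; they are not the transformation guidelines defined by the authors.

```python
# Illustrative concept mapping from Secure Tropos to misuse case notation.
# These pairs are plausible correspondences, not the paper's guidelines.
SECURE_TROPOS_TO_MISUSE_CASE = {
    "actor":               "actor",
    "attacker":            "misuser",
    "threat":              "misuse case",
    "goal":                "use case",
    "security constraint": "security use case (mitigates a misuse case)",
}

for src, dst in SECURE_TROPOS_TO_MISUSE_CASE.items():
    print(f"{src:>20} -> {dst}")
```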

PEASOUP: Preventing Exploits Against Software of Uncertain Provenance (Position Paper)
Michele Co and Brian Mastropietro
(University of Virginia, USA; Grammatech Inc., USA; Georgia Institute of Technology, USA; Raytheon Inc., USA)
Because software provides many of the critical services of modern society, it is vitally important to provide methodologies and tools for building and deploying reliable software. While there have been many advances toward this goal, much research remains to be done. For example, a recent evaluation of five state-of-the-art C/C++ static analysis tools applied to a corpus of code containing common weaknesses revealed that 41% of the potential vulnerabilities were detected by no tool. The problem of deploying resilient software is further complicated because modern software is often assembled from components from many sources; consequently, it is difficult to know who built a particular component and what processes were used in its construction. Our research goal is to develop and demonstrate technology that provides comprehensive, automated techniques allowing end users to safely execute new software of uncertain provenance. This paper presents an overview of our vision for realizing these goals and outlines some of the challenging research problems that must be addressed. We call our vision PEASOUP and have begun implementing and evaluating these ideas.

Power Analysis Attack and Countermeasure on the Rabbit Stream Cipher (Position Paper)
KiSeok Bae, MahnKi Ahn, HoonJae Lee, JaeCheol Ha, and SangJae Moon
(Kyungpook National University, Korea; Defense Agency for Technology and Quality, Korea; Dongseo University, Korea; Hoseo University, Korea)
Recently, there has been extensive research on mobile devices and stream ciphers to increase security. The Rabbit stream cipher was selected for the final eSTREAM portfolio organized by EU ECRYPT and is one of the algorithms of ISO/IEC 18033-4, the ISO security standard on stream ciphers. Since a theoretical evaluation rated the complexity of side-channel analysis attacks on Rabbit as 'medium', this paper describes a correlation power analysis attack on the cipher and demonstrates the feasibility of a practical power analysis attack in experiments. We also propose a countermeasure based on random masking and hiding schemes for the linear operations. The countermeasure maintains high-speed performance at the cost of a 24% increase in operating time and a 12.3% increase in memory requirements. We implement our methods on an 8-bit RISC AVR microprocessor (ATmega128L) and show that the proposed method is secure against correlation power analysis attacks in practical experiments.
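
Boolean masking of an XOR-linear operation, the kind of masking the countermeasure's linear-operation component suggests, can be sketched as follows. The rotate-XOR function and word size are placeholders of our choosing, not Rabbit's actual next-state function.

```python
# Sketch of Boolean masking for an XOR-linear operation: because
# f(x ^ m) == f(x) ^ f(m) for XOR-linear f, the computation can run on
# masked data and be unmasked afterwards. The function f below is a
# placeholder, not part of the Rabbit cipher.
import secrets

MASK32 = 0xFFFFFFFF

def f(x):
    # XOR-linear placeholder: rotate left by 8, XOR a right shift.
    rot = ((x << 8) | (x >> 24)) & MASK32
    return rot ^ (x >> 3)

def masked_f(x):
    m = secrets.randbits(32)          # fresh random mask each call
    y_masked = f((x ^ m) & MASK32)    # device processes only masked data
    return y_masked ^ f(m)            # unmask using linearity of f

x = 0xDEADBEEF
assert masked_f(x) == f(x)
print(hex(masked_f(x)))
```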
