
2012 3rd International Workshop on Emerging Trends in Software Metrics (WETSoM), June 3, 2012, Zurich, Switzerland

WETSoM 2012 – Proceedings





Welcome to the 3rd International Workshop on Emerging Trends in Software Metrics (WETSoM 2012)

Welcome to WETSoM 2012, the 3rd International Workshop on Emerging Trends in Software Metrics. Since its start, WETSoM has attracted a blend of academic and industrial researchers, creating a stimulating atmosphere in which to discuss the progress of software metrics.

A key motivation for this workshop is to help overcome the low impact that software metrics have on current software development. This is pursued by critically examining the evidence for the effectiveness of existing metrics and by identifying new directions for metrics. Evidence for existing metrics includes how they have been used in practice and studies showing their effectiveness. New directions include metrics based on new theories, such as complex network theory.

We are pleased that this year WETSoM features 12 technical papers and an exciting keynote on mining developers' communication to assess software quality by Massimiliano Di Penta. The program of WETSoM 2012 is the result of hard work by many dedicated people; we especially thank the authors of submitted papers and the members of the program committee. Above all, the greatest richness of this workshop is its participants, who shape the discussion and point software metrics research and practice in new directions. We hope you will have a great time and an unforgettable experience at WETSoM 2012.

Giulio Concas, Gerardo Canfora, Ewan Tempero, Hongyu Zhang


Mining Developers' Communication to Assess Software Quality: Promises, Challenges, Perils
Massimiliano Di Penta
(University of Sannio, Italy)
In recent years, researchers have been building models that rely on a wide variety of data extracted from software repositories, concerning, for example, characteristics of source code changes, or related to bug introduction and fixing. Software repositories also contain a huge amount of unstructured information, often expressed in natural language, concerning communication between developers, as well as tags, commit notes, and comments that developers produce during their activities. This keynote illustrates, on the one hand, how explanatory or predictive models built upon software repositories can be enhanced by integrating them with the analysis of communication among developers. On the other hand, the keynote warns against the perils of doing so, due to the intrinsic imprecision and incompleteness of such textual information, and explains how these problems can, at least, be mitigated.

Session A

Measuring Metadata-Based Aspect-Oriented Code in Model-Driven Engineering
Sagar Sunkle, Vinay Kulkarni, and Suman Roychoudhury
(Tata Consultancy Services, India)
Metrics measurement for cost estimation in model-driven engineering (MDE) is complex because of the number of different artifacts that can potentially be generated. The complexity arises because auto-generated code, manually added code, and non-code artifacts must be sized separately for their contribution to the overall effort. In this paper, we address the measurement of a special kind of code artifact called metadata-based aspect-oriented code. Our MDE toolset delivers large database-centric, business-critical enterprise applications. We cater to the special needs of enterprises by providing support for customization along three concerns, namely design strategies, architecture, and technology platforms (<d, a, t>), in customer-specific applications. The code generated for these customizations is conditional in nature, in the sense that model-to-text transformation takes place differently based on the choices along these concerns. In our recent efforts to apply the Constructive Cost Model (COCOMO) II to our MDE practices, we discovered that while the measurement of the rest of the code and non-code artifacts can easily be automated, the product-line-like nature of code generation for the specifics of <d, a, t> requires special treatment. Our contribution is the use of feature models to capture variations in these dimensions and their mapping to code size estimates. Our initial implementation suggests that this approach scales well considering the size of our applications and takes a step forward in providing complete cost estimation for MDE applications using COCOMO II.
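As context for the abstract above, the size estimates derived from feature models ultimately feed a standard effort equation. The following is a minimal, hedged sketch of the basic COCOMO II post-architecture effort formula (PM = A · Size^E · ΠEM, with E = B + 0.01 · ΣSF, using the published COCOMO II.2000 calibration A = 2.94, B = 0.91); the scale-factor and effort-multiplier inputs are illustrative placeholders, not data from the paper.

```python
# Hedged sketch, not the paper's implementation: the basic COCOMO II
# post-architecture effort equation, PM = A * Size^E * prod(EM),
# with E = B + 0.01 * sum(SF), using the COCOMO II.2000 calibration
# A = 2.94, B = 0.91. Scale factors (SF) and effort multipliers (EM)
# below are illustrative placeholders.
A, B = 2.94, 0.91

def cocomo2_effort(ksloc, scale_factors, effort_multipliers):
    """Estimated effort in person-months for a size given in KSLOC."""
    exponent = B + 0.01 * sum(scale_factors)
    effort = A * ksloc ** exponent
    for em in effort_multipliers:
        effort *= em
    return effort

# A 100-KSLOC application with all scale factors at a nominal-like 3.72
# and all effort multipliers at 1.0 (purely illustrative inputs)
print(round(cocomo2_effort(100, [3.72] * 5, [1.0] * 7), 1))
```

Because the exponent E exceeds 1 for typical scale-factor values, effort grows faster than linearly in size, which is why sizing each generated and hand-written artifact separately matters.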
The 3C Approach for Agile Quality Assurance
André Janus, Andreas Schmietendorf, Reiner Dumke, and Jens Jäger
(André Janus - IT Consulting, Germany; HWR Berlin, Germany; University of Magdeburg, Germany; Jens Jäger Consulting, Germany)
Continuous Integration (CI) is an agile practice for continuously integrating new source code into the code base, including automated compilation, builds, and test runs. From traditional quality assurance, we know software metrics to be a very good approach to measuring software quality. Combining the two yields a promising approach to controlling and ensuring internal software quality. This paper introduces the 3C Approach, an extension of Continuous Integration: it adds Continuous Measurement and Continuous Improvement as subsequent activities and establishes metric-based quality gates for agile quality assurance. The approach was developed and proven in an agile maintenance and evolution project for the automotive industry at T-Systems International, a large German ICT company. Within the project, the approach was applied to a (legacy) Java-based web application using open source tools from the Java ecosystem. The approach is not limited to these technical boundaries, however, as similar tools are available for other technical platforms.
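A metric-based quality gate of the kind described above can be pictured as a simple threshold check run after continuous measurement. The sketch below is purely illustrative; the metric names and thresholds are assumptions, not the paper's actual tooling.

```python
# Illustrative sketch of a metric-based quality gate of the kind the 3C
# Approach runs after continuous measurement; metric names and thresholds
# are assumptions, not the paper's actual tooling.
THRESHOLDS = {
    "test_coverage": (">=", 0.70),               # fail the gate below 70%
    "avg_cyclomatic_complexity": ("<=", 10.0),   # fail above complexity 10
}

def quality_gate(metrics):
    """Return the list of violated gates; an empty list means the build passes."""
    violations = []
    for name, (op, limit) in THRESHOLDS.items():
        value = metrics[name]
        ok = value >= limit if op == ">=" else value <= limit
        if not ok:
            violations.append(f"{name}={value} violates {op} {limit}")
    return violations

# A build whose coverage is below the gate but whose complexity is fine
print(quality_gate({"test_coverage": 0.65, "avg_cyclomatic_complexity": 8.2}))
```

In a CI pipeline, a non-empty violation list would fail the build, turning measured internal quality into an enforced gate rather than a report.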
Size Estimation of Web Applications through Web CMF Object
Erika Corona, Michele L. Marchesi, Giulio Barabino, Daniele Grechi, and Laura Piccinno
(University of Cagliari, Italy; University of Genova, Italy; Datasiel s.p.a., Italy)
This work outlines a new methodology for estimating the size of Web applications developed with a Content Management Framework (CMF). We propose a new size estimation methodology because the RWO method, which we had recently developed, proved inadequate for estimating the effort of the latest Web applications. The size metric used in the RWO method was found not to be well suited to Web applications developed through a CMF. In this work, we present the new key elements for analysis and planning needed to define every important step in developing a Web application through a CMF. Using these elements, it is possible to obtain the size of such an application. We also present the experimental validation performed on a 7-project dataset provided by an Italian software company.
Functional versus Design Measures for Model-Driven Web Applications: A Case Study in the Context of Web Effort Estimation
Lucia De Marco, Filomena Ferrucci, Carmine Gravino, Federica Sarro, Silvia Abrahao, and Jaime Gomez
(University of Salerno, Italy; Universidad Politecnica de Valencia, Spain; University of Alicante, Spain)
In the literature we can identify two main approaches to sizing model-driven Web applications: one based on design measures and another based on functional measures. Design measures take into account the modeling primitives characterizing the models of the specific model-driven approach. Functional measures, on the other hand, are obtained by applying functional size measurement procedures specifically conceived to map the modeling primitives of the model-driven approach onto the concepts of a functional size measurement method. In this paper, we focus our attention on the Object-Oriented Hypermedia (OO-H) method, a model-driven approach to designing and developing Web applications. We report the results of an empirical study carried out to compare the ability of some design measures and of OO-HFP (a model-driven functional size measurement procedure) to predict the development effort of Web applications. To this end, we exploited a dataset of 31 Web projects developed using OO-H. The analysis highlighted that each design measure was positively correlated with Web application development effort. However, the best estimation model, obtained by manual stepwise regression, employed only the measure Internal Links (IL). Furthermore, the study highlighted that the estimates obtained with the IL-based prediction model were significantly better than those achieved using the OO-HFP-based prediction model. These results seem to confirm previous investigations suggesting that Function Point Analysis can fail to capture some specific features of Web applications.

Session B

The Evolving Structures of Software Systems
Kecia Aline Marques Ferreira, Roberta Coeli Neves Moreira, Mariza Andrade S. Bigonha, and Roberto S. Bigonha
(CEFET-MG, Brazil; UFMG, Brazil)
Software maintenance is an important problem because software is an evolving complex system. To make software maintenance viable, it is important to know the real nature of the systems we have to deal with. Little House is a model that provides a macroscopic view of software systems. According to Little House, a software system can be modeled as a graph with five components. The model is intended as an approach to improving the understanding and analysis of software structures. To achieve this aim, however, it is necessary to determine its characteristics and implications. This paper presents the results of an empirical study aiming to characterize software evolution by means of Little House and software metrics. We analyzed several versions of 13 open source software systems, which have been developed over nearly 10 years. The results show that two main components of Little House suffer substantial degradation as a software system evolves. This finding indicates that those components should be carefully taken into consideration when maintenance tasks are performed on the system.
Integrating Metrics in an Ontological Framework Supporting SW-FMEA
Irene Bicchierai, Giacomo Bucci, Carlo Nocentini, and Enrico Vicario
(Università di Firenze, Italy)
The development process of safety-critical systems benefits from the early identification of the failures affecting them. Several techniques have been designed to address this issue, among them Failure Mode Effect Analysis (FMEA). Although FMEA was originally conceived for hardware systems, the increasing responsibilities assigned to software (SW) have fostered its application to SW as well (SW-FMEA), exacerbating the complexity of the analysis. Ontologies have been proposed as a way to formalize the SW-FMEA process and to give precise semantics to the concepts and data involved. We present a framework, based on an ontological model, which, among other capabilities, supports the collection of SW metrics, enabling automatic identification of SW components that do not attain the required level of assurance.
Using Early Stage Project Data to Predict Change-Proneness
Claire Ingram and Steve Riddle
(Newcastle University, UK)
Several previous studies have suggested methods for predicting change-proneness based on software complexity metrics. We hypothesise that data from the early stages of a development project such as requirements and design could also be used to make such predictions. We define here a set of new metrics to capture data from the requirements and/or design stages, and derive values for these metrics using a case study project. We do find that significant differences in change-proneness can be detected between components with high or with low values for our metrics, suggesting that this is an area which would benefit from further study.
Modification and Developer Metrics at the Function Level: Metrics for the Study of the Evolution of a Software Project
Gregorio Robles, Israel Herraiz, Daniel M. Germán, and Daniel Izquierdo-Cortázar
(Universidad Rey Juan Carlos, Spain; TU Madrid, Spain; University of Victoria, Canada)
Software evolution, and particularly its growth, has mainly been studied at the file (sometimes also referred to as module) level. In this paper we propose to move from the physical level towards one that includes semantic information, by using functions or methods to measure the evolution of a software system. We point out that the use of function-based metrics has many advantages over the use of files or lines of code. We demonstrate our approach with an empirical study of two Free/Open Source projects: a community-driven project, Apache, and a company-led project, Novell Evolution. We discovered that most functions never change; that when they do change, the number of modifications is correlated with their size; and that each function is modified by very few authors. Finally, we show that the departure of a developer from a software project slows the evolution of the functions that she authored.

Session C

On the Statistical Distribution of Object-Oriented System Properties
Israel Herraiz, Daniel Rodriguez, and Rachel Harrison
(TU Madrid, Spain; University of Alcalá, Spain; Oxford Brookes University, UK)
The statistical distributions of different software properties have been thoroughly studied in the past, including software size, complexity and the number of defects. In the case of object-oriented systems, these distributions have been found to obey a power law, a common statistical distribution also found in many other fields. However, we have found that for some statistical properties, the behavior does not entirely follow a power law, but a mixture between a lognormal and a power law distribution. Our study is based on the Qualitas Corpus, a large compendium of diverse Java-based software projects. We have measured the Chidamber and Kemerer metrics suite for every file of every Java project in the corpus. Our results show that the range of high values for the different metrics follows a power law distribution, whereas the rest of the range follows a lognormal distribution. This is a pattern typical of so-called double Pareto distributions, also found in empirical studies for other software properties.
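The double Pareto pattern described in this abstract (a lognormal body with a power-law upper tail) can be probed on metric data by estimating the tail exponent of the top of the range. The following is an illustrative sketch on synthetic lognormal data using the Hill estimator; it is not the authors' fitting procedure, and the sample and decile cutoff are assumptions for demonstration.

```python
import math
import random

# Illustrative sketch (not the authors' fitting procedure): in a double
# Pareto pattern, the bulk of a metric's values looks lognormal while the
# upper tail follows a power law. We draw synthetic metric-like values
# from a lognormal and estimate the tail exponent of the top decile with
# the Hill estimator.
random.seed(42)
values = sorted(random.lognormvariate(1.0, 1.0) for _ in range(5000))

k = len(values) // 10          # treat the top 10% as the tail
tail = values[-k:]
x_min = tail[0]                # tail threshold
# Hill estimator: alpha = k / sum(ln(x_i / x_min)) over the tail
alpha = k / sum(math.log(x / x_min) for x in tail)
print(f"estimated tail exponent: {alpha:.2f}")
```

On real metric data, comparing the fit quality of a pure power law against a lognormal over the two ranges is what distinguishes the double Pareto shape from either distribution alone.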
PIVoT: Project Insights and Visualization Toolkit
Vibhu Saujanya Sharma and Vikrant Kaulgud
(Accenture Technology Labs, India)
An in-process view into a software development project's health is critical for its success. However, in services organizations, a typical software development team employs a heterogeneous set of tools, chosen according to client requirements, through the different phases of a software project. The use of disparate tools with incompatible outputs makes it very difficult to extract one coherent picture of the project's health and status. Existing project management tools either work at the process layer and rely on manually entered information, or are activity centric and lack a holistic view. In this paper, we present PIVoT, a metric-based framework for automated, non-invasive, in-process data collection and analysis in heterogeneous software project environments that provides rich, multi-dimensional insights into a project's health and trajectory. We introduce the different analyses, insights, and metrics, and discuss their usage in typical software projects.
Using Network Analysis Metrics to Discover Functionally Important Methods in Large-Scale Software Systems
Anjan Pakhira and Peter Andras
(Newcastle University, UK)
In large-scale software systems that integrate many components originating from different vendors, understanding the functional importance of the components is critical for the dependability of the system. In general, however, gaining such understanding is difficult. Here we describe the application of a combination of dynamic analysis and network analysis to large-scale software systems, with the aim of determining the methods of classes that are functionally important with respect to a given functionality of the software. We use Google Chrome as a test case and predict functionally important methods, in a weak sense, in the context of usage scenarios. We validate the predictions using mutation testing and evaluate the behavior of the software following each mutation. Our results indicate that network analysis metrics based on the measurement of structural integrity can be used to predict the methods of classes that are functionally important with respect to a given functionality of the software system.
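One common network analysis metric for ranking nodes by structural importance is betweenness centrality. The sketch below applies Brandes' algorithm to a toy call graph; the graph and the choice of metric are illustrative assumptions, not the paper's actual instrumentation of Chrome.

```python
from collections import defaultdict, deque

# Illustrative sketch on a toy call graph (not Chrome, not the paper's
# tooling): Brandes' algorithm for betweenness centrality, a structural
# metric of the kind used to rank methods by functional importance.
def betweenness(adj):
    """Betweenness centrality for a directed graph given as {node: [successors]}."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        stack, preds = [], defaultdict(list)
        sigma = dict.fromkeys(adj, 0); sigma[s] = 1   # shortest-path counts
        dist = dict.fromkeys(adj, -1); dist[s] = 0
        queue = deque([s])
        while queue:                                  # BFS from s
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = dict.fromkeys(adj, 0.0)               # dependency accumulation
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Toy call graph: main calls parse and load, both of which call render
calls = {"main": ["parse", "load"], "parse": ["render"], "load": ["render"], "render": []}
print(betweenness(calls))
```

Here parse and load each carry half of the shortest paths from main to render, so they score highest, which matches the intuition that methods mediating many execution paths are functionally important.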
Entropy of the Degree Distribution and Object-Oriented Software Quality
Ivana Turnu, Michele L. Marchesi, and Roberto Tonelli
(University of Cagliari, Italy)
The entropy of the degree distribution has been considered by many authors as a measure of a network's heterogeneity and, consequently, of its resilience to random failures. In this paper we propose the entropy of the degree distribution as a new measure of software quality. We present a study where software systems are considered as complex networks characterized by a heterogeneous distribution of links, and we compute the entropy of the degree distribution on these software networks. We analyzed various releases of the publicly available Eclipse and Netbeans software systems, calculating the entropy of the degree distribution for every release. Our results show a good correlation between the entropy of the degree distribution and the number of bugs for both Eclipse and Netbeans. Whereas complexity and quality metrics are in general computed for every system module, the entropy is a single scalar number that characterizes a whole system; this suggests that the entropy of the degree distribution could be considered a global quality metric for large software systems. Our results, however, still need to be confirmed on other large software systems.
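The quantity proposed here is the Shannon entropy of the degree distribution, H = -Σ_k p_k ln(p_k), where p_k is the fraction of nodes with degree k. A minimal sketch, on an assumed toy class-dependency graph (the class names and edge set are illustrative, not from Eclipse or Netbeans):

```python
import math
from collections import Counter

# Minimal sketch (class names are illustrative): Shannon entropy of the
# degree distribution of a software network, H = -sum_k p_k * ln(p_k),
# where p_k is the fraction of nodes with degree k.
def degree_entropy(edges):
    degree = Counter()
    for a, b in edges:                     # undirected dependency edges
        degree[a] += 1
        degree[b] += 1
    n = len(degree)
    by_degree = Counter(degree.values())   # k -> number of nodes with degree k
    return -sum((c / n) * math.log(c / n) for c in by_degree.values())

# Toy class-dependency graph: one hub class used by three others
edges = [("Hub", "A"), ("Hub", "B"), ("Hub", "C"), ("A", "B")]
print(round(degree_entropy(edges), 3))
```

A network where every class had the same degree would score zero; the more heterogeneous the degrees (hubs plus many low-degree classes), the higher the entropy, which is the property correlated with bug counts in the study.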
