Workshop WETSoM 2012 – Author Index |
Abrahao, Silvia |
WETSoM '12: "Functional versus Design Measures ..."
Functional versus Design Measures for Model-Driven Web Applications: A Case Study in the Context of Web Effort Estimation
Lucia De Marco, Filomena Ferrucci, Carmine Gravino, Federica Sarro, Silvia Abrahao, and Jaime Gomez (University of Salerno, Italy; Universidad Politecnica de Valencia, Spain; University of Alicante, Spain) In the literature we can identify two main approaches for sizing model-driven Web applications: one based on design measures and another based on functional measures. Design measures take into account the modeling primitives characterizing the models of the specific model-driven approach. On the other hand, the functional measures are obtained by applying functional size measurement procedures specifically conceived to map the modeling primitives of the model-driven approach into concepts of a functional size measurement method. In this paper, we focus our attention on the Object-Oriented Hypermedia (OO-H) method, a model-driven approach to design and develop Web applications. We report on the results of an empirical study carried out to compare the ability of some design measures and OO-HFP (a model-driven functional size measurement procedure) to predict the development effort of Web applications. To this aim, we exploited a dataset with 31 Web projects developed using OO-H. The analysis highlighted that each design measure was positively correlated with the Web application development effort. However, the best estimation model obtained by exploiting the Manual Stepwise Regression employed only the measure Internal Links (IL). Furthermore, the study highlighted that the estimates obtained with the IL based prediction model were significantly better than those achieved using the OO-HFP based prediction model. These results seem to confirm previous investigations suggesting that Function Point Analysis can fail to capture some specific features of Web applications. @InProceedings{WETSoM12p21, author = {Lucia De Marco and Filomena Ferrucci and Carmine Gravino and Federica Sarro and Silvia Abrahao and Jaime Gomez}, title = {Functional versus Design Measures for Model-Driven Web Applications: A Case Study in the Context of Web Effort Estimation}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {21--27}, doi = {}, year = {2012}, } |
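To make the kind of model compared above concrete, the following is a minimal, purely illustrative sketch (not the authors' analysis): it fits a single-variable effort model from a hypothetical Internal Links (IL) measure and reports MMRE and Pred(25), two accuracy indicators commonly used in effort-estimation studies. All numbers are invented.

    # Illustrative only: a simple one-variable effort model of the kind compared in the
    # paper (effort predicted from the Internal Links design measure). The numbers are
    # invented, not the OO-H dataset used by the authors.
    import numpy as np

    il     = np.array([12, 30, 45, 60, 80, 95, 120, 150])      # hypothetical Internal Links counts
    effort = np.array([55, 90, 130, 160, 210, 240, 300, 370])  # hypothetical effort (person-hours)

    # Fit log(effort) = a + b * log(IL), a common functional form in effort estimation.
    b, a = np.polyfit(np.log(il), np.log(effort), 1)
    pred = np.exp(a) * il ** b

    # MMRE and Pred(25): accuracy indicators typically reported in effort-estimation studies.
    mre = np.abs(effort - pred) / effort
    print("MMRE     =", mre.mean())
    print("Pred(25) =", np.mean(mre <= 0.25))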
|
Andras, Peter |
WETSoM '12: "Using Network Analysis Metrics ..."
Using Network Analysis Metrics to Discover Functionally Important Methods in Large-Scale Software Systems
Anjan Pakhira and Peter Andras (Newcastle University, UK) In large-scale software systems that integrate many components originating from different vendors, understanding the functional importance of the components is critical for the dependability of the system. However, in general, gaining such understanding is difficult. Here we describe the combined application of dynamic analysis and network analysis to large-scale software systems, with the aim of determining which methods of classes are functionally important with respect to a given functionality of the software. We use Google Chrome as a test case and predict functionally important methods, in a weak sense, in the context of usage scenarios. We validate the predictions using mutation testing and evaluate the behavior of the software following the mutation change. Our results indicate that network analysis metrics based on measurement of structural integrity can be used to predict methods of classes that are functionally important with respect to a given functionality of the software system. @InProceedings{WETSoM12p70, author = {Anjan Pakhira and Peter Andras}, title = {Using Network Analysis Metrics to Discover Functionally Important Methods in Large-Scale Software Systems}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {70--76}, doi = {}, year = {2012}, } |
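As a rough illustration of the general idea (not the authors' tooling or metrics), the sketch below builds a small method-level call graph from a hypothetical dynamic trace and ranks methods by betweenness centrality as one possible proxy for functional importance; the networkx library and all method names are assumptions.

    # Illustrative sketch (not the authors' implementation): rank methods of a traced
    # call graph by betweenness centrality as a rough proxy for functional importance.
    import networkx as nx

    # Hypothetical dynamic-analysis trace: (caller, callee) pairs observed during a usage scenario.
    calls = [("ui.render", "net.fetch"), ("ui.render", "cache.get"),
             ("net.fetch", "net.parse"), ("cache.get", "net.fetch"),
             ("net.parse", "dom.build"), ("dom.build", "ui.paint")]

    g = nx.DiGraph()
    g.add_edges_from(calls)

    # Methods that mediate many call paths score high and are candidates for
    # "functionally important" methods worth validating (e.g., via mutation testing).
    ranking = sorted(nx.betweenness_centrality(g).items(), key=lambda kv: -kv[1])
    for method, score in ranking:
        print(f"{method:12s} {score:.3f}")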
|
Barabino, Giulio |
WETSoM '12: "Size Estimation of Web Applications ..."
Size Estimation of Web Applications through Web CMF Object
Erika Corona, Michele L. Marchesi, Giulio Barabino, Daniele Grechi, and Laura Piccinno (University of Cagliari, Italy; University of Genova, Italy; Datasiel s.p.a., Italy) This work outlines a new methodology for estimating the size of Web applications developed with a Content Management Framework (CMF). The reason for proposing – through this work – a new methodology for size estimation is the realization of the inadequacy of the RWO method, which we had recently developed, in estimating the effort of the latest Web applications. The size metric used in the RWO method was found not to be well suited for Web applications developed through a CMF. In this work, we present the new key elements for analysis and planning, needed to define every important step in developing a Web application through a CMF. Using those elements, it is possible to obtain the size of such an application. We also present the experimental validation performed on a 7-project dataset, provided by an Italian software company. @InProceedings{WETSoM12p14, author = {Erika Corona and Michele L. Marchesi and Giulio Barabino and Daniele Grechi and Laura Piccinno}, title = {Size Estimation of Web Applications through Web CMF Object}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {14--20}, doi = {}, year = {2012}, } |
|
Bicchierai, Irene |
WETSoM '12: "Integrating Metrics in an ..."
Integrating Metrics in an Ontological Framework Supporting SW-FMEA
Irene Bicchierai, Giacomo Bucci, Carlo Nocentini, and Enrico Vicario (Università di Firenze, Italy) The development process of safety-critical systems benefits from the early identification of failures affecting them. Several techniques have been designed to address this issue, among them Failure Mode Effect Analysis (FMEA). Although FMEA was originally conceived for hardware systems, the increasing responsibilities assigned to software (SW) have fostered its application to SW as well (SW-FMEA), exacerbating the complexity of the analysis. Ontologies have been proposed as a way to formalize the SW-FMEA process and to give precise semantics to the involved concepts and data. We present a framework, based on an ontological model, which, among other capabilities, supports the collection of SW metrics, enabling automatic identification of SW components not attaining the required level of assurance. @InProceedings{WETSoM12p35, author = {Irene Bicchierai and Giacomo Bucci and Carlo Nocentini and Enrico Vicario}, title = {Integrating Metrics in an Ontological Framework Supporting SW-FMEA}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {35--41}, doi = {}, year = {2012}, } |
|
Bigonha, Mariza Andrade S. |
WETSoM '12: "The Evolving Structures of ..."
The Evolving Structures of Software Systems
Kecia Aline Marques Ferreira, Roberta Coeli Neves Moreira, Mariza Andrade S. Bigonha, and Roberto S. Bigonha (CEFET-MG, Brazil; UFMG, Brazil) Software maintenance is an important problem because software is an evolving complex system. To make software maintenance viable, it is important to know the real nature of the systems we have to deal with. Little House is a model that provides a macroscopic view of software systems. According to Little House, a software system can be modeled as a graph with five components. This model is intended to be an approach to improve the understanding and the analysis of software structures. However, to achieve this aim, it is necessary to determine its characteristics and its implications. This paper presents the results of an empirical study aiming to characterize software evolution by means of Little House and software metrics. We analyzed several versions of 13 open source software systems, which have been developed over nearly 10 years. The results of the study show that there are two main components of Little House which suffer substantial degradation as the software system evolves. This finding indicates that those components should be carefully taken in consideration when maintenance tasks are performed in the system. @InProceedings{WETSoM12p28, author = {Kecia Aline Marques Ferreira and Roberta Coeli Neves Moreira and Mariza Andrade S. Bigonha and Roberto S. Bigonha}, title = {The Evolving Structures of Software Systems}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {28--34}, doi = {}, year = {2012}, } |
|
Bigonha, Roberto S. |
WETSoM '12: "The Evolving Structures of ..."
The Evolving Structures of Software Systems
Kecia Aline Marques Ferreira, Roberta Coeli Neves Moreira, Mariza Andrade S. Bigonha, and Roberto S. Bigonha (CEFET-MG, Brazil; UFMG, Brazil) Software maintenance is an important problem because software is an evolving complex system. To make software maintenance viable, it is important to know the real nature of the systems we have to deal with. Little House is a model that provides a macroscopic view of software systems. According to Little House, a software system can be modeled as a graph with five components. This model is intended to be an approach to improve the understanding and the analysis of software structures. However, to achieve this aim, it is necessary to determine its characteristics and its implications. This paper presents the results of an empirical study aiming to characterize software evolution by means of Little House and software metrics. We analyzed several versions of 13 open source software systems, which have been developed over nearly 10 years. The results of the study show that there are two main components of Little House which suffer substantial degradation as the software system evolves. This finding indicates that those components should be carefully taken in consideration when maintenance tasks are performed in the system. @InProceedings{WETSoM12p28, author = {Kecia Aline Marques Ferreira and Roberta Coeli Neves Moreira and Mariza Andrade S. Bigonha and Roberto S. Bigonha}, title = {The Evolving Structures of Software Systems}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {28--34}, doi = {}, year = {2012}, } |
|
Bucci, Giacomo |
WETSoM '12: "Integrating Metrics in an ..."
Integrating Metrics in an Ontological Framework Supporting SW-FMEA
Irene Bicchierai, Giacomo Bucci, Carlo Nocentini, and Enrico Vicario (Università di Firenze, Italy) The development process of safety-critical systems benefits from the early identification of failures affecting them. Several techniques have been designed to address this issue, among them Failure Mode Effect Analysis (FMEA). Although FMEA was originally conceived for hardware systems, the increasing responsibilities assigned to software (SW) have fostered its application to SW as well (SW-FMEA), exacerbating the complexity of the analysis. Ontologies have been proposed as a way to formalize the SW-FMEA process and to give precise semantics to the involved concepts and data. We present a framework, based on an ontological model, which, among other capabilities, supports the collection of SW metrics, enabling automatic identification of SW components not attaining the required level of assurance. @InProceedings{WETSoM12p35, author = {Irene Bicchierai and Giacomo Bucci and Carlo Nocentini and Enrico Vicario}, title = {Integrating Metrics in an Ontological Framework Supporting SW-FMEA}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {35--41}, doi = {}, year = {2012}, } |
|
Corona, Erika |
WETSoM '12: "Size Estimation of Web Applications ..."
Size Estimation of Web Applications through Web CMF Object
Erika Corona, Michele L. Marchesi, Giulio Barabino, Daniele Grechi, and Laura Piccinno (University of Cagliari, Italy; University of Genova, Italy; Datasiel s.p.a., Italy) This work outlines a new methodology for estimating the size of Web applications developed with a Content Management Framework (CMF). The reason for proposing – through this work – a new methodology for size estimation is the realization of the inadequacy of the RWO method, which we had recently developed, in estimating the effort of the latest Web applications. The size metric used in the RWO method was found not to be well suited for Web applications developed through a CMF. In this work, we present the new key elements for analysis and planning, needed to define every important step in developing a Web application through a CMF. Using those elements, it is possible to obtain the size of such an application. We also present the experimental validation performed on a 7-project dataset, provided by an Italian software company. @InProceedings{WETSoM12p14, author = {Erika Corona and Michele L. Marchesi and Giulio Barabino and Daniele Grechi and Laura Piccinno}, title = {Size Estimation of Web Applications through Web CMF Object}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {14--20}, doi = {}, year = {2012}, } |
|
De Marco, Lucia |
WETSoM '12: "Functional versus Design Measures ..."
Functional versus Design Measures for Model-Driven Web Applications: A Case Study in the Context of Web Effort Estimation
Lucia De Marco, Filomena Ferrucci, Carmine Gravino, Federica Sarro, Silvia Abrahao, and Jaime Gomez (University of Salerno, Italy; Universidad Politecnica de Valencia, Spain; University of Alicante, Spain) In the literature we can identify two main approaches for sizing model-driven Web applications: one based on design measures and another based on functional measures. Design measures take into account the modeling primitives characterizing the models of the specific model-driven approach. On the other hand, the functional measures are obtained by applying functional size measurement procedures specifically conceived to map the modeling primitives of the model-driven approach into concepts of a functional size measurement method. In this paper, we focus our attention on the Object-Oriented Hypermedia (OO-H) method, a model-driven approach to design and develop Web applications. We report on the results of an empirical study carried out to compare the ability of some design measures and OO-HFP (a model-driven functional size measurement procedure) to predict the development effort of Web applications. To this aim, we exploited a dataset with 31 Web projects developed using OO-H. The analysis highlighted that each design measure was positively correlated with the Web application development effort. However, the best estimation model obtained by exploiting the Manual Stepwise Regression employed only the measure Internal Links (IL). Furthermore, the study highlighted that the estimates obtained with the IL based prediction model were significantly better than those achieved using the OO-HFP based prediction model. These results seem to confirm previous investigations suggesting that Function Point Analysis can fail to capture some specific features of Web applications. @InProceedings{WETSoM12p21, author = {Lucia De Marco and Filomena Ferrucci and Carmine Gravino and Federica Sarro and Silvia Abrahao and Jaime Gomez}, title = {Functional versus Design Measures for Model-Driven Web Applications: A Case Study in the Context of Web Effort Estimation}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {21--27}, doi = {}, year = {2012}, } |
|
Di Penta, Massimiliano |
WETSoM '12: "Mining Developers' Communication ..."
Mining Developers' Communication to Assess Software Quality: Promises, Challenges, Perils
Massimiliano Di Penta (University of Sannio, Italy) In recent years, researchers have been building models relying on a wide variety of data that can be extracted from software repositories, concerning, for example, characteristics of source code changes or data related to bug introduction and fixing. Software repositories also contain a huge amount of non-structured information, often expressed in natural language, concerning communication between developers, as well as tags, commit notes, or comments developers produce during their activities. This keynote illustrates, on the one hand, how explanatory or predictive models built upon software repositories could be enhanced by integrating them with the analysis of communication among developers. On the other hand, the keynote warns against perils in doing so, due to the intrinsic imprecision and incompleteness of such textual information, and explains how such problems could, at least, be mitigated. @InProceedings{WETSoM12p1, author = {Massimiliano Di Penta}, title = {Mining Developers' Communication to Assess Software Quality: Promises, Challenges, Perils}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {1--1}, doi = {}, year = {2012}, } |
|
Dumke, Reiner |
WETSoM '12: "The 3C Approach for Agile ..."
The 3C Approach for Agile Quality Assurance
André Janus, Andreas Schmietendorf, Reiner Dumke, and Jens Jäger (André Janus - IT Consulting, Germany; HWR Berlin, Germany; University of Magdeburg, Germany; Jens Jäger Consulting, Germany) Continuous Integration is an Agile Practice for the continuous integration of new Source Code into the Code Base including the automated compile, build and running of tests. From traditional Quality Assurance we know Software Metrics as a very good approach to measure Software Quality. Combining both there is a promising approach to control and ensure the internal Software Quality. This paper introduces the 3C Approach, which is an extension to the Agile Practice Continuous Integration: It adds Continuous Measurement and Continuous Improvement as subsequent Activities to CI and establishes Metric-based Quality-Gates for an Agile Quality Assurance. It was developed and proven in an Agile Maintenance and Evolution project for the Automotive Industry at T-Systems International – a large German ICT company. Within the project the approach was used for a (legacy) Java-based Web Application including the use of Open Source Tools from the Java Eco-System. But the approach is not limited to these technical boundaries as similar tools are available also for other technical platforms. @InProceedings{WETSoM12p9, author = {André Janus and Andreas Schmietendorf and Reiner Dumke and Jens Jäger}, title = {The 3C Approach for Agile Quality Assurance}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {9--13}, doi = {}, year = {2012}, } |
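A minimal sketch of what a metric-based quality gate added after Continuous Integration can look like; the metric names, values, and thresholds below are invented for illustration and are not taken from the 3C project.

    # Illustrative only: a metric-based quality gate of the kind the 3C approach adds
    # after Continuous Integration. All metric names, values and thresholds are invented.
    import sys

    metrics = {                         # would normally come from the Continuous Measurement step
        "test_coverage":        0.78,   # fraction of lines covered
        "avg_cyclomatic":       6.2,    # average cyclomatic complexity per method
        "duplicated_lines_pct": 4.1,    # percentage of duplicated lines
    }

    gates = {                           # quality gates checked on every build
        "test_coverage":        lambda v: v >= 0.75,
        "avg_cyclomatic":       lambda v: v <= 10.0,
        "duplicated_lines_pct": lambda v: v <= 5.0,
    }

    failed = [name for name, check in gates.items() if not check(metrics[name])]
    if failed:
        print("Quality gate failed for:", ", ".join(failed))
        sys.exit(1)                     # break the build, triggering Continuous Improvement work
    print("All quality gates passed.")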
|
Ferreira, Kecia Aline Marques |
WETSoM '12: "The Evolving Structures of ..."
The Evolving Structures of Software Systems
Kecia Aline Marques Ferreira, Roberta Coeli Neves Moreira, Mariza Andrade S. Bigonha, and Roberto S. Bigonha (CEFET-MG, Brazil; UFMG, Brazil) Software maintenance is an important problem because software is an evolving complex system. To make software maintenance viable, it is important to know the real nature of the systems we have to deal with. Little House is a model that provides a macroscopic view of software systems. According to Little House, a software system can be modeled as a graph with five components. This model is intended to be an approach to improve the understanding and the analysis of software structures. However, to achieve this aim, it is necessary to determine its characteristics and its implications. This paper presents the results of an empirical study aiming to characterize software evolution by means of Little House and software metrics. We analyzed several versions of 13 open source software systems, which have been developed over nearly 10 years. The results of the study show that there are two main components of Little House which suffer substantial degradation as the software system evolves. This finding indicates that those components should be carefully taken in consideration when maintenance tasks are performed in the system. @InProceedings{WETSoM12p28, author = {Kecia Aline Marques Ferreira and Roberta Coeli Neves Moreira and Mariza Andrade S. Bigonha and Roberto S. Bigonha}, title = {The Evolving Structures of Software Systems}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {28--34}, doi = {}, year = {2012}, } |
|
Ferrucci, Filomena |
WETSoM '12: "Functional versus Design Measures ..."
Functional versus Design Measures for Model-Driven Web Applications: A Case Study in the Context of Web Effort Estimation
Lucia De Marco, Filomena Ferrucci, Carmine Gravino, Federica Sarro, Silvia Abrahao, and Jaime Gomez (University of Salerno, Italy; Universidad Politecnica de Valencia, Spain; University of Alicante, Spain) In the literature we can identify two main approaches for sizing model-driven Web applications: one based on design measures and another based on functional measures. Design measures take into account the modeling primitives characterizing the models of the specific model-driven approach. On the other hand, the functional measures are obtained by applying functional size measurement procedures specifically conceived to map the modeling primitives of the model-driven approach into concepts of a functional size measurement method. In this paper, we focus our attention on the Object-Oriented Hypermedia (OO-H) method, a model-driven approach to design and develop Web applications. We report on the results of an empirical study carried out to compare the ability of some design measures and OO-HFP (a model-driven functional size measurement procedure) to predict the development effort of Web applications. To this aim, we exploited a dataset with 31 Web projects developed using OO-H. The analysis highlighted that each design measure was positively correlated with the Web application development effort. However, the best estimation model obtained by exploiting the Manual Stepwise Regression employed only the measure Internal Links (IL). Furthermore, the study highlighted that the estimates obtained with the IL based prediction model were significantly better than those achieved using the OO-HFP based prediction model. These results seem to confirm previous investigations suggesting that Function Point Analysis can fail to capture some specific features of Web applications. @InProceedings{WETSoM12p21, author = {Lucia De Marco and Filomena Ferrucci and Carmine Gravino and Federica Sarro and Silvia Abrahao and Jaime Gomez}, title = {Functional versus Design Measures for Model-Driven Web Applications: A Case Study in the Context of Web Effort Estimation}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {21--27}, doi = {}, year = {2012}, } |
|
Germán, Daniel M. |
WETSoM '12: "Modification and Developer ..."
Modification and Developer Metrics at the Function Level: Metrics for the Study of the Evolution of a Software Project
Gregorio Robles, Israel Herraiz, Daniel M. Germán, and Daniel Izquierdo-Cortázar (Universidad Rey Juan Carlos, Spain; TU Madrid, Spain; University of Victoria, Canada) Software evolution, and particularly its growth, has been mainly studied at the file (also sometimes referred to as module) level. In this paper we propose to move from the physical level towards a level that includes semantic information, by using functions or methods for measuring the evolution of a software system. We point out that the use of function-based metrics has many advantages over the use of files or lines of code. We demonstrate our approach with an empirical study of two Free/Open Source projects: a community-driven project, Apache, and a company-led project, Novell Evolution. We discovered that most functions never change; when they do, their number of modifications is correlated with their size, and very few authors modify each of them; finally, we show that the departure of a developer from a software project slows the evolution of the functions that she authored. @InProceedings{WETSoM12p49, author = {Gregorio Robles and Israel Herraiz and Daniel M. Germán and Daniel Izquierdo-Cortázar}, title = {Modification and Developer Metrics at the Function Level: Metrics for the Study of the Evolution of a Software Project}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {49--55}, doi = {}, year = {2012}, } |
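For illustration only, the sketch below shows how per-function modification counts could be aggregated and correlated with function size; the change records, function names, and the use of a Spearman correlation are assumptions, not the authors' mining pipeline.

    # Illustrative sketch of function-level evolution metrics: count how often each function
    # was modified and correlate that with its size. The change records are invented, not
    # mined from Apache or Evolution as in the paper.
    from collections import Counter
    from scipy.stats import spearmanr

    # Hypothetical mined records: (commit_id, function_name) for every function touched by a commit.
    changes = [("c1", "parse_request"), ("c2", "parse_request"), ("c2", "log_access"),
               ("c3", "parse_request"), ("c4", "send_reply"), ("c5", "log_access")]

    sloc = {"parse_request": 120, "log_access": 35, "send_reply": 60, "init_config": 25}

    mods = Counter(fn for _, fn in changes)                  # modifications per function
    rows = [(fn, sloc[fn], mods.get(fn, 0)) for fn in sloc]  # note: some functions never change

    rho, p = spearmanr([r[1] for r in rows], [r[2] for r in rows])
    print("function, size, modifications:", rows)
    print(f"Spearman correlation size vs. modifications: rho={rho:.2f} (p={p:.2f})")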
|
Gomez, Jaime |
WETSoM '12: "Functional versus Design Measures ..."
Functional versus Design Measures for Model-Driven Web Applications: A Case Study in the Context of Web Effort Estimation
Lucia De Marco, Filomena Ferrucci, Carmine Gravino, Federica Sarro, Silvia Abrahao, and Jaime Gomez (University of Salerno, Italy; Universidad Politecnica de Valencia, Spain; University of Alicante, Spain) In the literature we can identify two main approaches for sizing model-driven Web applications: one based on design measures and another based on functional measures. Design measures take into account the modeling primitives characterizing the models of the specific model-driven approach. On the other hand, the functional measures are obtained by applying functional size measurement procedures specifically conceived to map the modeling primitives of the model-driven approach into concepts of a functional size measurement method. In this paper, we focus our attention on the Object-Oriented Hypermedia (OO-H) method, a model-driven approach to design and develop Web applications. We report on the results of an empirical study carried out to compare the ability of some design measures and OO-HFP (a model-driven functional size measurement procedure) to predict the development effort of Web applications. To this aim, we exploited a dataset with 31 Web projects developed using OO-H. The analysis highlighted that each design measure was positively correlated with the Web application development effort. However, the best estimation model obtained by exploiting the Manual Stepwise Regression employed only the measure Internal Links (IL). Furthermore, the study highlighted that the estimates obtained with the IL based prediction model were significantly better than those achieved using the OO-HFP based prediction model. These results seem to confirm previous investigations suggesting that Function Point Analysis can fail to capture some specific features of Web applications. @InProceedings{WETSoM12p21, author = {Lucia De Marco and Filomena Ferrucci and Carmine Gravino and Federica Sarro and Silvia Abrahao and Jaime Gomez}, title = {Functional versus Design Measures for Model-Driven Web Applications: A Case Study in the Context of Web Effort Estimation}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {21--27}, doi = {}, year = {2012}, } |
|
Gravino, Carmine |
WETSoM '12: "Functional versus Design Measures ..."
Functional versus Design Measures for Model-Driven Web Applications: A Case Study in the Context of Web Effort Estimation
Lucia De Marco, Filomena Ferrucci, Carmine Gravino, Federica Sarro, Silvia Abrahao, and Jaime Gomez (University of Salerno, Italy; Universidad Politecnica de Valencia, Spain; University of Alicante, Spain) In the literature we can identify two main approaches for sizing model-driven Web applications: one based on design measures and another based on functional measures. Design measures take into account the modeling primitives characterizing the models of the specific model-driven approach. On the other hand, the functional measures are obtained by applying functional size measurement procedures specifically conceived to map the modeling primitives of the model-driven approach into concepts of a functional size measurement method. In this paper, we focus our attention on the Object-Oriented Hypermedia (OO-H) method, a model-driven approach to design and develop Web applications. We report on the results of an empirical study carried out to compare the ability of some design measures and OO-HFP (a model-driven functional size measurement procedure) to predict the development effort of Web applications. To this aim, we exploited a dataset with 31 Web projects developed using OO-H. The analysis highlighted that each design measure was positively correlated with the Web application development effort. However, the best estimation model obtained by exploiting the Manual Stepwise Regression employed only the measure Internal Links (IL). Furthermore, the study highlighted that the estimates obtained with the IL based prediction model were significantly better than those achieved using the OO-HFP based prediction model. These results seem to confirm previous investigations suggesting that Function Point Analysis can fail to capture some specific features of Web applications. @InProceedings{WETSoM12p21, author = {Lucia De Marco and Filomena Ferrucci and Carmine Gravino and Federica Sarro and Silvia Abrahao and Jaime Gomez}, title = {Functional versus Design Measures for Model-Driven Web Applications: A Case Study in the Context of Web Effort Estimation}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {21--27}, doi = {}, year = {2012}, } |
|
Grechi, Daniele |
WETSoM '12: "Size Estimation of Web Applications ..."
Size Estimation of Web Applications through Web CMF Object
Erika Corona, Michele L. Marchesi, Giulio Barabino, Daniele Grechi, and Laura Piccinno (University of Cagliari, Italy; University of Genova, Italy; Datasiel s.p.a., Italy) This work outlines a new methodology for estimating the size of Web applications developed with a Content Management Framework (CMF). The reason for proposing – through this work – a new methodology for size estimation is the realization of the inadequacy of the RWO method, which we had recently developed, in estimating the effort of the latest Web applications. The size metric used in the RWO method was found not to be well suited for Web applications developed through a CMF. In this work, we present the new key elements for analysis and planning, needed to define every important step in developing a Web application through a CMF. Using those elements, it is possible to obtain the size of such an application. We also present the experimental validation performed on a 7-project dataset, provided by an Italian software company. @InProceedings{WETSoM12p14, author = {Erika Corona and Michele L. Marchesi and Giulio Barabino and Daniele Grechi and Laura Piccinno}, title = {Size Estimation of Web Applications through Web CMF Object}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {14--20}, doi = {}, year = {2012}, } |
|
Harrison, Rachel |
WETSoM '12: "On the Statistical Distribution ..."
On the Statistical Distribution of Object-Oriented System Properties
Israel Herraiz, Daniel Rodriguez, and Rachel Harrison (TU Madrid, Spain; University of Alcalá, Spain; Oxford Brookes University, UK) The statistical distributions of different software properties have been thoroughly studied in the past, including software size, complexity and the number of defects. In the case of object-oriented systems, these distributions have been found to obey a power law, a common statistical distribution also found in many other fields. However, we have found that for some statistical properties, the behavior does not entirely follow a power law, but a mixture between a lognormal and a power law distribution. Our study is based on the Qualitas Corpus, a large compendium of diverse Java-based software projects. We have measured the Chidamber and Kemerer metrics suite for every file of every Java project in the corpus. Our results show that the range of high values for the different metrics follows a power law distribution, whereas the rest of the range follows a lognormal distribution. This is a pattern typical of so-called double Pareto distributions, also found in empirical studies for other software properties. @InProceedings{WETSoM12p56, author = {Israel Herraiz and Daniel Rodriguez and Rachel Harrison}, title = {On the Statistical Distribution of Object-Oriented System Properties}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {56--62}, doi = {}, year = {2012}, } |
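As a small, hypothetical illustration of checking a power-law tail (not the authors' analysis of the Qualitas Corpus), the sketch below applies the standard Hill/maximum-likelihood estimator for the tail exponent to an invented sample of a class-level metric, above an assumed tail threshold x_min.

    # Illustrative only: estimating a power-law tail exponent with the Hill/MLE estimator,
    # the kind of check behind "high values follow a power law". The metric values below
    # are invented, not taken from the Qualitas Corpus.
    import numpy as np

    wmc = np.array([1, 1, 2, 2, 3, 3, 4, 5, 5, 6, 8, 9, 12, 15, 20, 28, 40, 65, 110, 240])

    x_min = 8                                   # assumed start of the tail region
    tail = wmc[wmc >= x_min].astype(float)
    alpha = 1.0 + len(tail) / np.sum(np.log(tail / x_min))   # MLE for the continuous power law
    print(f"{len(tail)} tail observations, estimated exponent alpha = {alpha:.2f}")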
|
Herraiz, Israel |
WETSoM '12: "Modification and Developer ..."
Modification and Developer Metrics at the Function Level: Metrics for the Study of the Evolution of a Software Project
Gregorio Robles, Israel Herraiz, Daniel M. Germán, and Daniel Izquierdo-Cortázar (Universidad Rey Juan Carlos, Spain; TU Madrid, Spain; University of Victoria, Canada) Software evolution, and particularly its growth, has been mainly studied at the file (also sometimes referred to as module) level. In this paper we propose to move from the physical level towards a level that includes semantic information, by using functions or methods for measuring the evolution of a software system. We point out that the use of function-based metrics has many advantages over the use of files or lines of code. We demonstrate our approach with an empirical study of two Free/Open Source projects: a community-driven project, Apache, and a company-led project, Novell Evolution. We discovered that most functions never change; when they do, their number of modifications is correlated with their size, and very few authors modify each of them; finally, we show that the departure of a developer from a software project slows the evolution of the functions that she authored. @InProceedings{WETSoM12p49, author = {Gregorio Robles and Israel Herraiz and Daniel M. Germán and Daniel Izquierdo-Cortázar}, title = {Modification and Developer Metrics at the Function Level: Metrics for the Study of the Evolution of a Software Project}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {49--55}, doi = {}, year = {2012}, } WETSoM '12: "On the Statistical Distribution ..." On the Statistical Distribution of Object-Oriented System Properties Israel Herraiz, Daniel Rodriguez, and Rachel Harrison (TU Madrid, Spain; University of Alcalá, Spain; Oxford Brookes University, UK) The statistical distributions of different software properties have been thoroughly studied in the past, including software size, complexity and the number of defects. In the case of object-oriented systems, these distributions have been found to obey a power law, a common statistical distribution also found in many other fields. However, we have found that for some statistical properties, the behavior does not entirely follow a power law, but a mixture between a lognormal and a power law distribution. Our study is based on the Qualitas Corpus, a large compendium of diverse Java-based software projects. We have measured the Chidamber and Kemerer metrics suite for every file of every Java project in the corpus. Our results show that the range of high values for the different metrics follows a power law distribution, whereas the rest of the range follows a lognormal distribution. This is a pattern typical of so-called double Pareto distributions, also found in empirical studies for other software properties. @InProceedings{WETSoM12p56, author = {Israel Herraiz and Daniel Rodriguez and Rachel Harrison}, title = {On the Statistical Distribution of Object-Oriented System Properties}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {56--62}, doi = {}, year = {2012}, } |
|
Ingram, Claire |
WETSoM '12: "Using Early Stage Project ..."
Using Early Stage Project Data to Predict Change-Proneness
Claire Ingram and Steve Riddle (Newcastle University, UK) Several previous studies have suggested methods for predicting change-proneness based on software complexity metrics. We hypothesise that data from the early stages of a development project such as requirements and design could also be used to make such predictions. We define here a set of new metrics to capture data from the requirements and/or design stages, and derive values for these metrics using a case study project. We do find that significant differences in change-proneness can be detected between components with high or with low values for our metrics, suggesting that this is an area which would benefit from further study. @InProceedings{WETSoM12p42, author = {Claire Ingram and Steve Riddle}, title = {Using Early Stage Project Data to Predict Change-Proneness}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {42--48}, doi = {}, year = {2012}, } |
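A minimal sketch of the kind of comparison described above, assuming invented change counts for components grouped by a high or low early-stage metric value and using a Mann-Whitney U test; this is illustrative and not necessarily the statistical procedure reported in the paper.

    # Illustrative sketch: do components with high values of an early-stage metric
    # change more often than those with low values? All change counts are invented.
    from scipy.stats import mannwhitneyu

    changes_high_metric = [14, 9, 11, 17, 8, 13]   # change counts for components scoring high
    changes_low_metric  = [3, 5, 2, 6, 4, 1]       # change counts for components scoring low

    u, p = mannwhitneyu(changes_high_metric, changes_low_metric, alternative="greater")
    print(f"U={u}, one-sided p={p:.4f}")           # small p: high-metric group is more change-prone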
|
Izquierdo-Cortázar, Daniel |
WETSoM '12: "Modification and Developer ..."
Modification and Developer Metrics at the Function Level: Metrics for the Study of the Evolution of a Software Project
Gregorio Robles, Israel Herraiz, Daniel M. Germán, and Daniel Izquierdo-Cortázar (Universidad Rey Juan Carlos, Spain; TU Madrid, Spain; University of Victoria, Canada) Software evolution, and particularly its growth, has been mainly studied at the file (also sometimes referred to as module) level. In this paper we propose to move from the physical level towards a level that includes semantic information, by using functions or methods for measuring the evolution of a software system. We point out that the use of function-based metrics has many advantages over the use of files or lines of code. We demonstrate our approach with an empirical study of two Free/Open Source projects: a community-driven project, Apache, and a company-led project, Novell Evolution. We discovered that most functions never change; when they do, their number of modifications is correlated with their size, and very few authors modify each of them; finally, we show that the departure of a developer from a software project slows the evolution of the functions that she authored. @InProceedings{WETSoM12p49, author = {Gregorio Robles and Israel Herraiz and Daniel M. Germán and Daniel Izquierdo-Cortázar}, title = {Modification and Developer Metrics at the Function Level: Metrics for the Study of the Evolution of a Software Project}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {49--55}, doi = {}, year = {2012}, } |
|
Jäger, Jens |
WETSoM '12: "The 3C Approach for Agile ..."
The 3C Approach for Agile Quality Assurance
André Janus, Andreas Schmietendorf, Reiner Dumke, and Jens Jäger (André Janus - IT Consulting, Germany; HWR Berlin, Germany; University of Magdeburg, Germany; Jens Jäger Consulting, Germany) Continuous Integration is an Agile Practice for the continuous integration of new Source Code into the Code Base including the automated compile, build and running of tests. From traditional Quality Assurance we know Software Metrics as a very good approach to measure Software Quality. Combining both there is a promising approach to control and ensure the internal Software Quality. This paper introduces the 3C Approach, which is an extension to the Agile Practice Continuous Integration: It adds Continuous Measurement and Continuous Improvement as subsequent Activities to CI and establishes Metric-based Quality-Gates for an Agile Quality Assurance. It was developed and proven in an Agile Maintenance and Evolution project for the Automotive Industry at T-Systems International – a large German ICT company. Within the project the approach was used for a (legacy) Java-based Web Application including the use of Open Source Tools from the Java Eco-System. But the approach is not limited to these technical boundaries as similar tools are available also for other technical platforms. @InProceedings{WETSoM12p9, author = {André Janus and Andreas Schmietendorf and Reiner Dumke and Jens Jäger}, title = {The 3C Approach for Agile Quality Assurance}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {9--13}, doi = {}, year = {2012}, } |
|
Janus, André |
WETSoM '12: "The 3C Approach for Agile ..."
The 3C Approach for Agile Quality Assurance
André Janus, Andreas Schmietendorf, Reiner Dumke, and Jens Jäger (André Janus - IT Consulting, Germany; HWR Berlin, Germany; University of Magdeburg, Germany; Jens Jäger Consulting, Germany) Continuous Integration is an Agile Practice for the continuous integration of new Source Code into the Code Base including the automated compile, build and running of tests. From traditional Quality Assurance we know Software Metrics as a very good approach to measure Software Quality. Combining both there is a promising approach to control and ensure the internal Software Quality. This paper introduces the 3C Approach, which is an extension to the Agile Practice Continuous Integration: It adds Continuous Measurement and Continuous Improvement as subsequent Activities to CI and establishes Metric-based Quality-Gates for an Agile Quality Assurance. It was developed and proven in an Agile Maintenance and Evolution project for the Automotive Industry at T-Systems International – a large German ICT company. Within the project the approach was used for a (legacy) Java-based Web Application including the use of Open Source Tools from the Java Eco-System. But the approach is not limited to these technical boundaries as similar tools are available also for other technical platforms. @InProceedings{WETSoM12p9, author = {André Janus and Andreas Schmietendorf and Reiner Dumke and Jens Jäger}, title = {The 3C Approach for Agile Quality Assurance}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {9--13}, doi = {}, year = {2012}, } |
|
Kaulgud, Vikrant |
WETSoM '12: "PIVoT: Project Insights and ..."
PIVoT: Project Insights and Visualization Toolkit
Vibhu Saujanya Sharma and Vikrant Kaulgud (Accenture Technology Labs, India) An in-process view into a software development project's health is critical for its success. However, in services organizations, a typical software development team employs a heterogeneous set of tools based on client requirements through the different phases of the software project. The use of disparate tools with non-compatible outputs makes it very difficult to extract one coherent picture of the project's health and status. Existing project management tools either work at the process layer and rely on manually entered information, or are activity centric, without a holistic view. In this paper, we present PIVoT, a metric-based framework for automated, non-invasive, and in-process data collection and analysis in heterogeneous software project environments, that provides rich, multi-dimensional insights into the project's health and trajectory. Here, we introduce the different analyses, insights and metrics, and discuss their usage in typical software projects. @InProceedings{WETSoM12p63, author = {Vibhu Saujanya Sharma and Vikrant Kaulgud}, title = {PIVoT: Project Insights and Visualization Toolkit}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {63--69}, doi = {}, year = {2012}, } |
|
Kulkarni, Vinay |
WETSoM '12: "Measuring Metadata-Based Aspect-Oriented ..."
Measuring Metadata-Based Aspect-Oriented Code in Model-Driven Engineering
Sagar Sunkle, Vinay Kulkarni, and Suman Roychoudhury (Tata Consultancy Services, India) Metrics measurement for cost estimation in model-driven engineering (MDE) is complex because of the number of different artifacts that can potentially be generated. The complexity arises as auto-generated code, manually added code, and non-code artifacts must be sized separately for their contribution to overall effort. In this paper, we address the measurement of a special kind of code artifact called metadata-based aspect-oriented code. Our MDE toolset delivers large database-centric business-critical enterprise applications. We cater to the special needs of enterprises by providing support for customization along three concerns, namely design strategies, architecture, and technology platforms (<d, a, t>), in customer-specific applications. Code that is generated for these customizations is conditional in nature, in the sense that model-to-text transformation takes place differently based on choices along these concerns. In our recent efforts to apply the Constructive Cost Model (COCOMO) II to our MDE practices, we discovered that while the measurement of the rest of the code and non-code artifacts can be easily automated, the product-line-like nature of code generation for the specifics of <d, a, t> requires special treatment. Our contribution is the use of feature models to capture variations in these dimensions and their mapping to code size estimates. Our initial implementation suggests that this approach scales well considering the size of our applications and takes a step forward in providing complete cost estimation for MDE applications using COCOMO II. @InProceedings{WETSoM12p2, author = {Sagar Sunkle and Vinay Kulkarni and Suman Roychoudhury}, title = {Measuring Metadata-Based Aspect-Oriented Code in Model-Driven Engineering}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {2--8}, doi = {}, year = {2012}, } |
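For orientation, the sketch below evaluates the basic COCOMO II effort equation, PM = A * Size^E * (product of effort multipliers) with E = B + 0.01 * (sum of scale factors); the size, scale factor, and multiplier values are invented, and the constants A and B are the commonly cited COCOMO II.2000 calibration, not values from the paper.

    # Illustrative only: the basic COCOMO II effort equation that size estimates feed into.
    # Scale factors and effort multipliers below are placeholder values; A and B are the
    # commonly published COCOMO II.2000 calibration constants.
    A, B = 2.94, 0.91

    size_ksloc = 45.0                                 # estimated size in thousands of SLOC (hypothetical)
    scale_factors = [3.72, 3.04, 4.24, 2.19, 4.68]    # five scale factor ratings (hypothetical)
    effort_multipliers = [1.10, 0.92, 1.00, 1.15]     # subset of effort multiplier ratings (hypothetical)

    E = B + 0.01 * sum(scale_factors)
    person_months = A * size_ksloc ** E
    for em in effort_multipliers:
        person_months *= em
    print(f"Estimated effort: {person_months:.1f} person-months (exponent E = {E:.3f})")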
|
Marchesi, Michele L. |
WETSoM '12: "Size Estimation of Web Applications ..."
Size Estimation of Web Applications through Web CMF Object
Erika Corona, Michele L. Marchesi, Giulio Barabino, Daniele Grechi, and Laura Piccinno (University of Cagliari, Italy; University of Genova, Italy; Datasiel s.p.a., Italy) This work outlines a new methodology for estimating the size of Web applications developed with a Content Management Framework (CMF). The reason for proposing – through this work – a new methodology for size estimation is the realization of the inadequacy of the RWO method, which we had recently developed, in estimating the effort of the latest Web applications. The size metric used in the RWO method was found not to be well suited for Web applications developed through a CMF. In this work, we present the new key elements for analysis and planning, needed to define every important step in developing a Web application through a CMF. Using those elements, it is possible to obtain the size of such an application. We also present the experimental validation performed on a 7-project dataset, provided by an Italian software company. @InProceedings{WETSoM12p14, author = {Erika Corona and Michele L. Marchesi and Giulio Barabino and Daniele Grechi and Laura Piccinno}, title = {Size Estimation of Web Applications through Web CMF Object}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {14--20}, doi = {}, year = {2012}, } WETSoM '12: "Entropy of the Degree Distribution ..." Entropy of the Degree Distribution and Object-Oriented Software Quality Ivana Turnu, Michele L. Marchesi, and Roberto Tonelli (University of Cagliari, Italy) The entropy of degree distribution has been considered by many authors as a measure of a network's heterogeneity and consequently of its resilience to random failures. In this paper we propose the entropy of degree distribution as a new measure of software quality. We present a study where software systems are considered as complex networks which are characterized by a heterogeneous distribution of links. On such complex software networks we computed the entropy of degree distribution. We analyzed various releases of the publicly available Eclipse and Netbeans software systems, calculating the entropy of degree distribution for every release analyzed. Our results display a good correlation between the entropy of degree distribution and the number of bugs for Eclipse and Netbeans. While complexity and quality metrics are in general computed on every system module, the entropy is just a scalar number that characterizes a whole system; this result suggests that the entropy of degree distribution could be considered as a global quality metric for large software systems. Our results, however, need to be confirmed for other large software systems. @InProceedings{WETSoM12p77, author = {Ivana Turnu and Michele L. Marchesi and Roberto Tonelli}, title = {Entropy of the Degree Distribution and Object-Oriented Software Quality}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {77--82}, doi = {}, year = {2012}, } |
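As a purely illustrative sketch of the metric proposed in the entropy paper above (not the authors' implementation), the code below computes the Shannon entropy of the degree distribution of a tiny, invented class-dependency graph; the networkx library and the graph itself are assumptions.

    # Illustrative sketch: Shannon entropy of the degree distribution of a class-dependency
    # graph, the scalar proposed above as a system-level quality indicator. Graph invented.
    import math
    from collections import Counter
    import networkx as nx

    g = nx.Graph()
    g.add_edges_from([("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"),
                      ("D", "E"), ("E", "F"), ("F", "A")])

    degrees = [d for _, d in g.degree()]
    counts = Counter(degrees)
    n = len(degrees)
    # H = -sum_k p_k * log(p_k), where p_k is the fraction of nodes with degree k.
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    print(f"degree distribution entropy H = {entropy:.3f} nats")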
|
Moreira, Roberta Coeli Neves |
WETSoM '12: "The Evolving Structures of ..."
The Evolving Structures of Software Systems
Kecia Aline Marques Ferreira, Roberta Coeli Neves Moreira, Mariza Andrade S. Bigonha, and Roberto S. Bigonha (CEFET-MG, Brazil; UFMG, Brazil) Software maintenance is an important problem because software is an evolving complex system. To make software maintenance viable, it is important to know the real nature of the systems we have to deal with. Little House is a model that provides a macroscopic view of software systems. According to Little House, a software system can be modeled as a graph with five components. This model is intended to be an approach to improve the understanding and the analysis of software structures. However, to achieve this aim, it is necessary to determine its characteristics and its implications. This paper presents the results of an empirical study aiming to characterize software evolution by means of Little House and software metrics. We analyzed several versions of 13 open source software systems, which have been developed over nearly 10 years. The results of the study show that there are two main components of Little House which suffer substantial degradation as the software system evolves. This finding indicates that those components should be carefully taken in consideration when maintenance tasks are performed in the system. @InProceedings{WETSoM12p28, author = {Kecia Aline Marques Ferreira and Roberta Coeli Neves Moreira and Mariza Andrade S. Bigonha and Roberto S. Bigonha}, title = {The Evolving Structures of Software Systems}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {28--34}, doi = {}, year = {2012}, } |
|
Nocentini, Carlo |
WETSoM '12: "Integrating Metrics in an ..."
Integrating Metrics in an Ontological Framework Supporting SW-FMEA
Irene Bicchierai, Giacomo Bucci, Carlo Nocentini, and Enrico Vicario (Università di Firenze, Italy) The development process of safety-critical systems benefits from the early identification of failures affecting them. Several techniques have been designed to address this issue, among them Failure Mode Effect Analysis (FMEA). Although FMEA was originally conceived for hardware systems, the increasing responsibilities assigned to software (SW) have fostered its application to SW as well (SW-FMEA), exacerbating the complexity of the analysis. Ontologies have been proposed as a way to formalize the SW-FMEA process and to give precise semantics to the involved concepts and data. We present a framework, based on an ontological model, which, among other capabilities, supports the collection of SW metrics, enabling automatic identification of SW components not attaining the required level of assurance. @InProceedings{WETSoM12p35, author = {Irene Bicchierai and Giacomo Bucci and Carlo Nocentini and Enrico Vicario}, title = {Integrating Metrics in an Ontological Framework Supporting SW-FMEA}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {35--41}, doi = {}, year = {2012}, } |
|
Pakhira, Anjan |
WETSoM '12: "Using Network Analysis Metrics ..."
Using Network Analysis Metrics to Discover Functionally Important Methods in Large-Scale Software Systems
Anjan Pakhira and Peter Andras (Newcastle University, UK) In large-scale software systems that integrate many components originating from different vendors, understanding the functional importance of the components is critical for the dependability of the system. However, in general, gaining such understanding is difficult. Here we describe the combined application of dynamic analysis and network analysis to large-scale software systems, with the aim of determining which methods of classes are functionally important with respect to a given functionality of the software. We use Google Chrome as a test case and predict functionally important methods, in a weak sense, in the context of usage scenarios. We validate the predictions using mutation testing and evaluate the behavior of the software following the mutation change. Our results indicate that network analysis metrics based on measurement of structural integrity can be used to predict methods of classes that are functionally important with respect to a given functionality of the software system. @InProceedings{WETSoM12p70, author = {Anjan Pakhira and Peter Andras}, title = {Using Network Analysis Metrics to Discover Functionally Important Methods in Large-Scale Software Systems}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {70--76}, doi = {}, year = {2012}, } |
|
Piccinno, Laura |
WETSoM '12: "Size Estimation of Web Applications ..."
Size Estimation of Web Applications through Web CMF Object
Erika Corona, Michele L. Marchesi, Giulio Barabino, Daniele Grechi, and Laura Piccinno (University of Cagliari, Italy; University of Genova, Italy; Datasiel s.p.a., Italy) This work outlines a new methodology for estimating the size of Web applications developed with a Content Management Framework (CMF). The reason for proposing – through this work – a new methodology for size estimation is the realization of the inadequacy of the RWO method, which we had recently developed, in estimating the effort of the latest Web applications. The size metric used in the RWO method was found not to be well suited for Web applications developed through a CMF. In this work, we present the new key elements for analysis and planning, needed to define every important step in developing a Web application through a CMF. Using those elements, it is possible to obtain the size of such an application. We also present the experimental validation performed on a 7-project dataset, provided by an Italian software company. @InProceedings{WETSoM12p14, author = {Erika Corona and Michele L. Marchesi and Giulio Barabino and Daniele Grechi and Laura Piccinno}, title = {Size Estimation of Web Applications through Web CMF Object}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {14--20}, doi = {}, year = {2012}, } |
|
Riddle, Steve |
WETSoM '12: "Using Early Stage Project ..."
Using Early Stage Project Data to Predict Change-Proneness
Claire Ingram and Steve Riddle (Newcastle University, UK) Several previous studies have suggested methods for predicting change-proneness based on software complexity metrics. We hypothesise that data from the early stages of a development project such as requirements and design could also be used to make such predictions. We define here a set of new metrics to capture data from the requirements and/or design stages, and derive values for these metrics using a case study project. We do find that significant differences in change-proneness can be detected between components with high or with low values for our metrics, suggesting that this is an area which would benefit from further study. @InProceedings{WETSoM12p42, author = {Claire Ingram and Steve Riddle}, title = {Using Early Stage Project Data to Predict Change-Proneness}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {42--48}, doi = {}, year = {2012}, } |
|
Robles, Gregorio |
WETSoM '12: "Modification and Developer ..."
Modification and Developer Metrics at the Function Level: Metrics for the Study of the Evolution of a Software Project
Gregorio Robles, Israel Herraiz, Daniel M. Germán, and Daniel Izquierdo-Cortázar (Universidad Rey Juan Carlos, Spain; TU Madrid, Spain; University of Victoria, Canada) Software evolution, and particularly its growth, has been mainly studied at the file (also sometimes referred to as module) level. In this paper we propose to move from the physical level towards a level that includes semantic information, by using functions or methods for measuring the evolution of a software system. We point out that the use of function-based metrics has many advantages over the use of files or lines of code. We demonstrate our approach with an empirical study of two Free/Open Source projects: a community-driven project, Apache, and a company-led project, Novell Evolution. We discovered that most functions never change; when they do, their number of modifications is correlated with their size, and very few authors modify each of them; finally, we show that the departure of a developer from a software project slows the evolution of the functions that she authored. @InProceedings{WETSoM12p49, author = {Gregorio Robles and Israel Herraiz and Daniel M. Germán and Daniel Izquierdo-Cortázar}, title = {Modification and Developer Metrics at the Function Level: Metrics for the Study of the Evolution of a Software Project}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {49--55}, doi = {}, year = {2012}, } |
|
Rodriguez, Daniel |
WETSoM '12: "On the Statistical Distribution ..."
On the Statistical Distribution of Object-Oriented System Properties
Israel Herraiz, Daniel Rodriguez, and Rachel Harrison (TU Madrid, Spain; University of Alcalá, Spain; Oxford Brookes University, UK) The statistical distributions of different software properties have been thoroughly studied in the past, including software size, complexity and the number of defects. In the case of object-oriented systems, these distributions have been found to obey a power law, a common statistical distribution also found in many other fields. However, we have found that for some statistical properties, the behavior does not entirely follow a power law, but a mixture between a lognormal and a power law distribution. Our study is based on the Qualitas Corpus, a large compendium of diverse Java-based software projects. We have measured the Chidamber and Kemerer metrics suite for every file of every Java project in the corpus. Our results show that the range of high values for the different metrics follows a power law distribution, whereas the rest of the range follows a lognormal distribution. This is a pattern typical of so-called double Pareto distributions, also found in empirical studies for other software properties. @InProceedings{WETSoM12p56, author = {Israel Herraiz and Daniel Rodriguez and Rachel Harrison}, title = {On the Statistical Distribution of Object-Oriented System Properties}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {56--62}, doi = {}, year = {2012}, } |
|
Roychoudhury, Suman |
WETSoM '12: "Measuring Metadata-Based Aspect-Oriented ..."
Measuring Metadata-Based Aspect-Oriented Code in Model-Driven Engineering
Sagar Sunkle, Vinay Kulkarni, and Suman Roychoudhury (Tata Consultancy Services, India) Metrics measurement for cost estimation in model-driven engineering (MDE) is complex because of the number of different artifacts that can potentially be generated. The complexity arises because auto-generated code, manually added code, and non-code artifacts must be sized separately for their contribution to the overall effort. In this paper, we address the measurement of a special kind of code artifact called metadata-based aspect-oriented code. Our MDE toolset delivers large database-centric, business-critical enterprise applications. We cater to the special needs of enterprises by providing support for customization along three concerns, namely design strategies, architecture, and technology platforms (<d, a, t>), in customer-specific applications. The code generated for these customizations is conditional in nature, in the sense that model-to-text transformation takes place differently depending on the choices along these concerns. In our recent efforts to apply the Constructive Cost Model (COCOMO) II to our MDE practices, we discovered that while the measurement of the remaining code and non-code artifacts can be easily automated, the product-line-like nature of code generation for specifics of <d, a, t> requires special treatment. Our contribution is the use of feature models to capture variations in these dimensions and their mapping to code size estimates. Our initial implementation suggests that this approach scales well considering the size of our applications and takes a step forward in providing complete cost estimation for MDE applications using COCOMO II. @InProceedings{WETSoM12p2, author = {Sagar Sunkle and Vinay Kulkarni and Suman Roychoudhury}, title = {Measuring Metadata-Based Aspect-Oriented Code in Model-Driven Engineering}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {2--8}, doi = {}, year = {2012}, } |
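The paper's size-mapping details are not given here; as a hedged illustration of the general idea, the sketch below sums assumed per-feature code-size contributions for a <d, a, t> selection and feeds the total into the nominal COCOMO II post-architecture effort equation. A = 2.94 and B = 0.91 are the published COCOMO II.2000 calibration constants; every other name and figure is made up.

# Hypothetical mapping from selected <d, a, t> features to generated-code size (KSLOC).
feature_sizes_ksloc = {
    "design:batch_processing": 4.5,
    "architecture:three_tier": 6.0,
    "platform:java_ee": 3.5,
}

selected = ["design:batch_processing", "architecture:three_tier", "platform:java_ee"]
size_ksloc = sum(feature_sizes_ksloc[f] for f in selected)

# COCOMO II effort: PM = A * Size^E * product(effort multipliers),
# with E = B + 0.01 * sum(scale factors).
A, B = 2.94, 0.91
scale_factors_sum = 16.0                  # assumed sum of the five scale factor ratings
effort_multipliers = [1.10, 0.95, 1.00]   # assumed cost driver ratings

E = B + 0.01 * scale_factors_sum
effort_pm = A * (size_ksloc ** E)
for em in effort_multipliers:
    effort_pm *= em

print(f"size = {size_ksloc:.1f} KSLOC, estimated effort = {effort_pm:.1f} person-months")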
|
Sarro, Federica |
WETSoM '12: "Functional versus Design Measures ..."
Functional versus Design Measures for Model-Driven Web Applications: A Case Study in the Context of Web Effort Estimation
Lucia De Marco, Filomena Ferrucci, Carmine Gravino, Federica Sarro, Silvia Abrahao, and Jaime Gomez (University of Salerno, Italy; Universidad Politecnica de Valencia, Spain; University of Alicante, Spain) In the literature we can identify two main approaches for sizing model-driven Web applications: one based on design measures and another based on functional measures. Design measures take into account the modeling primitives characterizing the models of the specific model-driven approach. On the other hand, the functional measures are obtained by applying functional size measurement procedures specifically conceived to map the modeling primitives of the model-driven approach into concepts of a functional size measurement method. In this paper, we focus our attention on the Object-Oriented Hypermedia (OO-H) method, a model-driven approach to design and develop Web applications. We report on the results of an empirical study carried out to compare the ability of some design measures and OO-HFP (a model-driven functional size measurement procedure) to predict the development effort of Web applications. To this aim, we exploited a dataset with 31 Web projects developed using OO-H. The analysis highlighted that each design measure was positively correlated with the Web application development effort. However, the best estimation model obtained by exploiting the Manual Stepwise Regression employed only the measure Internal Links (IL). Furthermore, the study highlighted that the estimates obtained with the IL based prediction model were significantly better than those achieved using the OO-HFP based prediction model. These results seem to confirm previous investigations suggesting that Function Point Analysis can fail to capture some specific features of Web applications. @InProceedings{WETSoM12p21, author = {Lucia De Marco and Filomena Ferrucci and Carmine Gravino and Federica Sarro and Silvia Abrahao and Jaime Gomez}, title = {Functional versus Design Measures for Model-Driven Web Applications: A Case Study in the Context of Web Effort Estimation}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {21--27}, doi = {}, year = {2012}, } |
|
Schmietendorf, Andreas |
WETSoM '12: "The 3C Approach for Agile ..."
The 3C Approach for Agile Quality Assurance
André Janus, Andreas Schmietendorf, Reiner Dumke, and Jens Jäger (André Janus - IT Consulting, Germany; HWR Berlin, Germany; University of Magdeburg, Germany; Jens Jäger Consulting, Germany) Continuous Integration is an Agile Practice for continuously integrating new Source Code into the Code Base, including automated compilation, building and running of tests. From traditional Quality Assurance we know Software Metrics as a very good approach for measuring Software Quality. Combining both yields a promising approach to control and ensure internal Software Quality. This paper introduces the 3C Approach, an extension of the Agile Practice Continuous Integration: it adds Continuous Measurement and Continuous Improvement as subsequent Activities to CI and establishes Metric-based Quality-Gates for Agile Quality Assurance. The approach was developed and proven in an Agile Maintenance and Evolution project for the Automotive Industry at T-Systems International, a large German ICT company. Within the project, the approach was used for a (legacy) Java-based Web Application, making use of Open Source Tools from the Java Eco-System. The approach is not limited to these technical boundaries, however, as similar tools are also available for other technical platforms. @InProceedings{WETSoM12p9, author = {André Janus and Andreas Schmietendorf and Reiner Dumke and Jens Jäger}, title = {The 3C Approach for Agile Quality Assurance}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {9--13}, doi = {}, year = {2012}, } |
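The paper's concrete Quality-Gates are not listed in the abstract; as a minimal sketch of what a Metric-based Quality-Gate run after a Continuous Integration build could look like, the Python fragment below checks measured values against fixed thresholds. All metric names and limits are assumptions, not the thresholds used in the project.

# Assumed thresholds for a metric-based quality gate (illustrative values only).
GATES = {
    "test_coverage_pct": ("min", 80.0),
    "avg_cyclomatic_complexity": ("max", 10.0),
    "duplicated_lines_pct": ("max", 5.0),
}

def quality_gate(measurements: dict) -> bool:
    """Return True only if every measured metric satisfies its threshold."""
    ok = True
    for metric, (kind, limit) in GATES.items():
        value = measurements[metric]
        passed = value >= limit if kind == "min" else value <= limit
        print(f"{metric}: {value} ({'pass' if passed else 'FAIL'}, {kind} {limit})")
        ok = ok and passed
    return ok

# Example measurements as they might come out of a Continuous Measurement step.
build_ok = quality_gate({
    "test_coverage_pct": 83.2,
    "avg_cyclomatic_complexity": 7.4,
    "duplicated_lines_pct": 6.1,
})
print("quality gate", "passed" if build_ok else "failed")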
|
Sharma, Vibhu Saujanya |
WETSoM '12: "PIVoT: Project Insights and ..."
PIVoT: Project Insights and Visualization Toolkit
Vibhu Saujanya Sharma and Vikrant Kaulgud (Accenture Technology Labs, India) An in-process view into a software development project's health is critical for its success. However, in services organizations, a typical software development team employs a heterogeneous set of tools, chosen according to client requirements, through the different phases of the software project. The use of disparate tools with incompatible outputs makes it very difficult to extract one coherent picture of the project's health and status. Existing project management tools either work at the process layer and rely on manually entered information, or are activity-centric and lack a holistic view. In this paper, we present PIVoT, a metric-based framework for automated, non-invasive, and in-process data collection and analysis in heterogeneous software project environments, which provides rich, multi-dimensional insights into the project's health and trajectory. Here, we introduce the different analyses, insights and metrics, and discuss their usage in typical software projects. @InProceedings{WETSoM12p63, author = {Vibhu Saujanya Sharma and Vikrant Kaulgud}, title = {PIVoT: Project Insights and Visualization Toolkit}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {63--69}, doi = {}, year = {2012}, } |
|
Sunkle, Sagar |
WETSoM '12: "Measuring Metadata-Based Aspect-Oriented ..."
Measuring Metadata-Based Aspect-Oriented Code in Model-Driven Engineering
Sagar Sunkle, Vinay Kulkarni, and Suman Roychoudhury (Tata Consultancy Services, India) Metrics measurement for cost estimation in model-driven engineering (MDE) is complex because of the number of different artifacts that can potentially be generated. The complexity arises because auto-generated code, manually added code, and non-code artifacts must be sized separately for their contribution to the overall effort. In this paper, we address the measurement of a special kind of code artifact called metadata-based aspect-oriented code. Our MDE toolset delivers large database-centric, business-critical enterprise applications. We cater to the special needs of enterprises by providing support for customization along three concerns, namely design strategies, architecture, and technology platforms (<d, a, t>), in customer-specific applications. The code generated for these customizations is conditional in nature, in the sense that model-to-text transformation takes place differently depending on the choices along these concerns. In our recent efforts to apply the Constructive Cost Model (COCOMO) II to our MDE practices, we discovered that while the measurement of the remaining code and non-code artifacts can be easily automated, the product-line-like nature of code generation for specifics of <d, a, t> requires special treatment. Our contribution is the use of feature models to capture variations in these dimensions and their mapping to code size estimates. Our initial implementation suggests that this approach scales well considering the size of our applications and takes a step forward in providing complete cost estimation for MDE applications using COCOMO II. @InProceedings{WETSoM12p2, author = {Sagar Sunkle and Vinay Kulkarni and Suman Roychoudhury}, title = {Measuring Metadata-Based Aspect-Oriented Code in Model-Driven Engineering}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {2--8}, doi = {}, year = {2012}, } |
|
Tonelli, Roberto |
WETSoM '12: "Entropy of the Degree Distribution ..."
Entropy of the Degree Distribution and Object-Oriented Software Quality
Ivana Turnu, Michele L. Marchesi, and Roberto Tonelli (University of Cagliari, Italy) The entropy of the degree distribution has been considered by many authors as a measure of a network's heterogeneity and, consequently, of its resilience to random failures. In this paper we propose the entropy of the degree distribution as a new measure of software quality. We present a study where software systems are considered as complex networks characterized by a heterogeneous distribution of links. On such complex software networks we computed the entropy of the degree distribution. We analyzed various releases of the publicly available Eclipse and Netbeans software systems, calculating the entropy of the degree distribution for every release analyzed. Our results display a good correlation between the entropy of the degree distribution and the number of bugs for Eclipse and Netbeans. Whereas complexity and quality metrics are in general computed for every system module, the entropy is a single scalar that characterizes a whole system; this result suggests that the entropy of the degree distribution could be considered a global quality metric for large software systems. Our results, however, still need to be confirmed on other large software systems. @InProceedings{WETSoM12p77, author = {Ivana Turnu and Michele L. Marchesi and Roberto Tonelli}, title = {Entropy of the Degree Distribution and Object-Oriented Software Quality}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {77--82}, doi = {}, year = {2012}, } |
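For reference, the quantity in question is the Shannon entropy H = -sum_k p_k * log(p_k) of the degree distribution; the sketch below computes it for a toy dependency graph built with networkx. The edge list is invented, and the use of an undirected total degree is an assumption rather than a detail given in the abstract.

import math
from collections import Counter
import networkx as nx

# Toy dependency graph; in the paper's setting, nodes would be classes and
# edges their dependencies.
G = nx.Graph([("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")])

# Empirical degree distribution p_k = fraction of nodes with degree k.
degrees = [d for _, d in G.degree()]
counts = Counter(degrees)
n = len(degrees)

# Shannon entropy of the degree distribution.
entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
print(f"entropy of the degree distribution: {entropy:.3f} nats")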
|
Turnu, Ivana |
WETSoM '12: "Entropy of the Degree Distribution ..."
Entropy of the Degree Distribution and Object-Oriented Software Quality
Ivana Turnu, Michele L. Marchesi, and Roberto Tonelli (University of Cagliari, Italy) The entropy of the degree distribution has been considered by many authors as a measure of a network's heterogeneity and, consequently, of its resilience to random failures. In this paper we propose the entropy of the degree distribution as a new measure of software quality. We present a study where software systems are considered as complex networks characterized by a heterogeneous distribution of links. On such complex software networks we computed the entropy of the degree distribution. We analyzed various releases of the publicly available Eclipse and Netbeans software systems, calculating the entropy of the degree distribution for every release analyzed. Our results display a good correlation between the entropy of the degree distribution and the number of bugs for Eclipse and Netbeans. Whereas complexity and quality metrics are in general computed for every system module, the entropy is a single scalar that characterizes a whole system; this result suggests that the entropy of the degree distribution could be considered a global quality metric for large software systems. Our results, however, still need to be confirmed on other large software systems. @InProceedings{WETSoM12p77, author = {Ivana Turnu and Michele L. Marchesi and Roberto Tonelli}, title = {Entropy of the Degree Distribution and Object-Oriented Software Quality}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {77--82}, doi = {}, year = {2012}, } |
|
Vicario, Enrico |
WETSoM '12: "Integrating Metrics in an ..."
Integrating Metrics in an Ontological Framework Supporting SW-FMEA
Irene Bicchierai, Giacomo Bucci, Carlo Nocentini, and Enrico Vicario (Università di Firenze, Italy) The development process of safety-critical systems benefits from the early identification of the failures affecting them. Several techniques have been designed to address this issue, among them Failure Mode Effect Analysis (FMEA). Although FMEA was originally conceived mainly for hardware systems, the increasing responsibilities assigned to software (SW) have fostered its application to SW as well (SW-FMEA), exacerbating the complexity of the analysis. Ontologies have been proposed as a way to formalize the SW-FMEA process and to give precise semantics to the concepts and data involved. We present a framework, based on an ontological model, which, among other capabilities, supports the collection of SW metrics, enabling the automatic identification of SW components that do not attain the required level of assurance. @InProceedings{WETSoM12p35, author = {Irene Bicchierai and Giacomo Bucci and Carlo Nocentini and Enrico Vicario}, title = {Integrating Metrics in an Ontological Framework Supporting SW-FMEA}, booktitle = {Proc.\ WETSoM}, publisher = {IEEE}, pages = {35--41}, doi = {}, year = {2012}, } |
43 authors