14th International Workshop on Principles of Software Evolution (IWPSE 2015),
August 30, 2015,
Bergamo, Italy
Frontmatter
Foreword
Welcome to the 14th International Workshop on Principles of Software Evolution (IWPSE'15), co-located with ESEC/FSE 2015 and held on August 30, 2015 in Bergamo, Italy. IWPSE'15 brings together high-quality papers presenting experiments, surveys, approaches, techniques, and tools related to the evolution of software systems.
Refactoring and Testing
Localising Faults in Test Execution Traces
Gulsher Laghari, Alessandro Murgia, and Serge Demeyer
(University of Antwerp, Belgium)
With the advent of agile processes and their emphasis on continuous integration, automated tests have become the prominent driver of the development process. When one of the thousands of tests fails, the corresponding fault should be localised as quickly as possible, since development can only proceed once the fault is repaired. In this paper we propose a heuristic named SPEQTRA, which mines the execution traces of a series of passing and failing tests to localise the class which contains the fault. SPEQTRA produces a ranking of classes that indicates how likely each class is to be at fault. We compare our spectrum-based fault localisation heuristic with the state of the art (AMPLE) and demonstrate on a small yet representative case (NanoXML) that the ranking of classes proposed by SPEQTRA is significantly better than that of AMPLE.
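The abstract does not spell out SPEQTRA's scoring formula, so the sketch below illustrates generic spectrum-based fault localisation at class granularity, using the well-known Ochiai metric as a stand-in; the coverage data is invented for the example.

```python
import math

def ochiai_ranking(spectra, outcomes):
    """Rank classes by suspiciousness from per-test coverage spectra.

    spectra  -- list of sets: spectra[i] holds the classes executed by test i
    outcomes -- list of bools: outcomes[i] is True when test i passed
    """
    classes = set().union(*spectra)
    total_failed = sum(1 for passed in outcomes if not passed)
    scores = {}
    for cls in classes:
        ef = sum(1 for cov, ok in zip(spectra, outcomes) if cls in cov and not ok)
        ep = sum(1 for cov, ok in zip(spectra, outcomes) if cls in cov and ok)
        denom = math.sqrt(total_failed * (ef + ep))
        scores[cls] = ef / denom if denom else 0.0
    # Classes executed mostly in failing runs score highest.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy run: class "B" is only exercised by the failing test, so it ranks first.
spectra = [{"A", "B"}, {"A"}, {"A", "C"}]
outcomes = [False, True, True]
print(ochiai_ranking(spectra, outcomes))
```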
@InProceedings{IWPSE15p1,
author = {Gulsher Laghari and Alessandro Murgia and Serge Demeyer},
title = {Localising Faults in Test Execution Traces},
booktitle = {Proc.\ IWPSE},
publisher = {ACM},
pages = {1--8},
doi = {},
year = {2015},
}
Circumventing Refactoring Masking using Fine-Grained Change Recording
Quinten David Soetens, Javier Pérez, Serge Demeyer, and Andy Zaidman
(University of Antwerp, Belgium; Delft University of Technology, Netherlands)
Today, refactoring reconstruction techniques are snapshot-based: they compare two revisions from a source code management system and calculate the shortest path of edit operations to go from one to the other. An inherent risk with snapshot-based approaches is that a refactoring may be concealed by later edit operations acting on the same source code entity, a phenomenon we call refactoring masking. In this paper, we performed an experiment to find out at which point refactoring masking occurs and confirmed that a snapshot-based technique misses refactorings when several edit operations are performed on the same source code entity. We present a way of reconstructing refactorings using fine-grained changes that are recorded live from an integrated development environment and demonstrate on two cases (PMD and CruiseControl) that, in a significant number of situations, our approach is more accurate than the state-of-the-art snapshot-based technique RefFinder.
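To make the masking phenomenon concrete, here is a minimal, hypothetical illustration (the method names and operation log are invented, not taken from the paper): a rename followed by a body edit on the same method leaves a snapshot diff ambiguous, while a recorded change log preserves both operations.

```python
# Snapshot view: only the first and last revisions are visible, so a
# snapshot-based tool must infer what happened between the two strings.
revision_1 = "int total(int[] xs) { return sum(xs); }"
revision_2 = "int totalOf(int[] xs) { return sum(xs) + offset; }"
# The later body edit can hide (mask) the rename refactoring.

# Fine-grained view: the IDE recorded each edit operation as it happened.
change_log = [
    ("rename_method", "total", "totalOf"),                 # the refactoring...
    ("edit_body", "totalOf", "return sum(xs) + offset;"),  # ...then the mask
]

# With the log, recovering the refactoring is a trivial scan.
print([op for op in change_log if op[0] == "rename_method"])
```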
@InProceedings{IWPSE15p9,
author = {Quinten David Soetens and Javier Pérez and Serge Demeyer and Andy Zaidman},
title = {Circumventing Refactoring Masking using Fine-Grained Change Recording},
booktitle = {Proc.\ IWPSE},
publisher = {ACM},
pages = {9--18},
doi = {},
year = {2015},
}
Hierarchical Categorization of Edit Operations for Separately Committing Large Refactoring Results
Jumpei Matsuda, Shinpei Hayashi, and Motoshi Saeki
(Tokyo Institute of Technology, Japan)
In software configuration management using a version control system, developers have to follow the commit policy of the project. However, preparing changes according to the policy is sometimes cumbersome and time-consuming, in particular when applying a large refactoring consisting of multiple primitive refactoring instances. In this paper, we propose a technique for re-organizing changes by recording the editing operations performed on the source code. Editing operations, including refactoring operations, are managed hierarchically based on the operation types provided by an integrated development environment. Using the obtained hierarchy, developers can easily configure the granularity of changes and obtain change sets at the configured granularity. We confirmed the feasibility of the technique by applying it to the changes recorded during a large refactoring process.
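The abstract does not describe the hierarchy's exact shape, so the sketch below shows one plausible reading: recorded edit operations grouped under their (invented) refactoring types, flattened into commit-sized change sets at a chosen depth.

```python
# Hypothetical hierarchy of recorded edit operations; all names are invented.
hierarchy = {
    "Extract Superclass": {                     # composite refactoring
        "Extract Class": ["create class Base", "move field x"],
        "Pull Up Method": ["move method foo", "adjust visibility"],
    },
    "Fix Typo": ["edit comment"],               # unrelated primitive change
}

def leaves(node):
    """Collect all primitive edits under a node."""
    if isinstance(node, list):
        return node
    return [edit for child in node.values() for edit in leaves(child)]

def change_sets(node, depth):
    """One change set per subtree at the requested depth (granularity)."""
    if isinstance(node, list) or depth == 0:
        return [leaves(node)]
    return [cs for child in node.values() for cs in change_sets(child, depth - 1)]

print(change_sets(hierarchy, 1))  # coarse: one commit per top-level entry
print(change_sets(hierarchy, 2))  # fine: one commit per primitive refactoring
```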
@InProceedings{IWPSE15p19,
author = {Jumpei Matsuda and Shinpei Hayashi and Motoshi Saeki},
title = {Hierarchical Categorization of Edit Operations for Separately Committing Large Refactoring Results},
booktitle = {Proc.\ IWPSE},
publisher = {ACM},
pages = {19--27},
doi = {},
year = {2015},
}
APIs and Human Factors
The Driving Forces of API Evolution
William Granli, John Burchell, Imed Hammouda, and Eric Knauss
(University of Gothenburg, Sweden; Chalmers University of Technology, Sweden)
Evolving an Application Programming Interface (API) is a delicate activity, as modifications to it can significantly impact its users. The increasing use of APIs means that software development organisations must take an empirical and scientific approach to the way they manage the evolution of their APIs. If no attempt at analysing or quantifying the evolution of an API is made, there will be a diminished understanding of the evolution, and possible improvements to the maintenance strategy will be difficult to identify. We believe that long-standing software evolution theories can provide additional insight into the field of APIs, and can be of great use to companies maintaining APIs. In this case study, we conduct a qualitative investigation to understand what drives the evolution of a company's existing API, by examining two versions of its interface. The changes were analysed based on two software evolution theories, and the extent to which we could reverse engineer the change decisions was determined by interviewing an architect of the API. The results of this analysis show that the largest driving force of the API's evolution was the desire for new functionality. Our finding that changes happen sporadically, rather than continuously, suggests that the law of Conservation of Organisational Stability was not a considerable factor in the evolution of the API. We also found that it is possible to reverse engineer change decisions and, in doing so, identified the API's feedback loop as an important area for improvement.
@InProceedings{IWPSE15p28,
author = {William Granli and John Burchell and Imed Hammouda and Eric Knauss},
title = {The Driving Forces of API Evolution},
booktitle = {Proc.\ IWPSE},
publisher = {ACM},
pages = {28--37},
doi = {},
year = {2015},
}
The Impact of Developer Team Sizes on the Structural Attributes of Software
Ahmmad Youssef and Andrea Capiluppi
(Brunel University, UK)
It is established that the internal quality of software is a key determinant of the total cost of ownership of that software. The objective of this research is to determine the impact that the development team's size has on the internal structural attributes of a codebase and, in doing so, we consider the impact that the team's size may have on the internal quality of the software that they produce. In this paper we leverage the wealth of data available in the open-source domain by mining detailed data from 1000 projects in GoogleCode and, coupled with one of the most established of object-oriented metric suites, we isolate and identify the effect that the development team size has on the internal structural attributes of the software produced. We find that some measures of functional decomposition are enhanced when we compare projects authored by fewer developers against those authored by a larger number of developers, while measures of cohesion and complexity are degraded.
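The kind of cohort comparison the abstract describes can be outlined with a standard non-parametric test; the metric values below are invented placeholders for object-oriented metric-suite measurements (e.g., a cohesion metric) mined per project.

```python
# Sketch of comparing a structural metric across team-size cohorts.
from scipy.stats import mannwhitneyu

# Hypothetical per-project cohesion values (placeholder data, not the paper's).
small_teams = [0.42, 0.35, 0.51, 0.38, 0.47]  # projects with few developers
large_teams = [0.58, 0.63, 0.49, 0.71, 0.66]  # projects with many developers

# Two-sided Mann-Whitney U test: do the cohorts differ in distribution?
stat, p = mannwhitneyu(small_teams, large_teams, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")  # a small p suggests team size matters
```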
@InProceedings{IWPSE15p38,
author = {Ahmmad Youssef and Andrea Capiluppi},
title = {The Impact of Developer Team Sizes on the Structural Attributes of Software},
booktitle = {Proc.\ IWPSE},
publisher = {ACM},
pages = {38--45},
doi = {},
year = {2015},
}
Revisiting the Applicability of the Pareto Principle to Core Development Teams in Open Source Software Projects
Kazuhiro Yamashita, Shane McIntosh, Yasutaka Kamei, Ahmed E. Hassan, and Naoyasu Ubayashi
(Kyushu University, Japan; McGill University, Canada; Queen's University, Canada)
It is often observed that the majority of the development work of an Open Source Software (OSS) project is contributed by a core team, i.e., a small subset of the pool of active developers. In fact, recent work has found that core development teams follow the Pareto principle: roughly 80% of the code contributions are produced by 20% of the active developers. However, those findings are based on samples of between one and nine studied systems. In this paper, we revisit prior studies about core developers using 2,496 projects hosted on GitHub. We find that even when we vary the heuristic for detecting core developers, and when we control for system size, team size, and project age: (1) the Pareto principle does not seem to apply for 40%-87% of GitHub projects; and (2) more than 88% of GitHub projects have fewer than 16 core developers. Moreover, we find that when we control for the quantity of contributions, bug fixing accounts for a similar proportion of the contributions of both core (18%-20%) and non-core developers (21%-22%). Our findings suggest that the Pareto principle is not compatible with the core teams of many GitHub projects. In fact, several of the studied GitHub projects are susceptible to the “bus factor,” where the impact of a core developer leaving would be quite harmful.
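Checking the 80/20 rule on a repository reduces to a few lines; the commit counts below are invented for illustration (real input would be contributions per author mined from a project's history).

```python
def pareto_share(commits_per_dev, top_fraction=0.20):
    """Fraction of all contributions made by the top `top_fraction` of devs."""
    counts = sorted(commits_per_dev.values(), reverse=True)
    k = max(1, round(len(counts) * top_fraction))
    return sum(counts[:k]) / sum(counts)

# Hypothetical commit counts per developer.
commits = {"alice": 420, "bob": 37, "carol": 25, "dave": 11, "eve": 7}
share = pareto_share(commits)
print(f"top 20% of developers made {share:.0%} of the commits")
print("Pareto principle holds" if share >= 0.80 else "Pareto principle does not hold")
```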
@InProceedings{IWPSE15p46,
author = {Kazuhiro Yamashita and Shane McIntosh and Yasutaka Kamei and Ahmed E. Hassan and Naoyasu Ubayashi},
title = {Revisiting the Applicability of the Pareto Principle to Core Development Teams in Open Source Software Projects},
booktitle = {Proc.\ IWPSE},
publisher = {ACM},
pages = {46--55},
doi = {},
year = {2015},
}
Analysis Techniques
Software Evolution and Time Series Volatility: An Empirical Exploration
Jukka Ruohonen, Sami Hyrynsalmi, and Ville Leppänen
(University of Turku, Finland)
The paper presents the first empirical study to examine econometric time series volatility modeling in the software evolution context. The econometric volatility concept is related to the conditional variance of a time series rather than the conditional mean targeted in conventional regression analysis. The software evolution context is motivated by relating these variance characteristics to the proximity of operating system releases, the theoretical hypothesis being that volatile characteristics increase near new milestone releases. The empirical experiment is done with a case study of FreeBSD. The analysis is carried out with 12 time series related to bug tracking, development activity, and communication. A historical period from 1995 to 2011 is covered at a daily sampling frequency. According to the results, the time series dataset contains visible volatility characteristics, but these cannot be explained by the time windows around the six observed major FreeBSD releases. The paper consequently contributes to the software evolution research field with new methodological ideas, as well as with both positive and negative empirical results.
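The abstract does not name a specific model family; a GARCH(1,1) fit, the workhorse of econometric volatility modeling, might look like the sketch below using the third-party `arch` package, with simulated noise standing in for a differenced daily activity series.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
series = rng.standard_normal(1000)  # placeholder for e.g. daily commit deltas

# GARCH(1,1): today's conditional variance depends on yesterday's squared
# shock and yesterday's variance, so volatility clusters over time.
model = arch_model(series, vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
print(result.summary())

# result.conditional_volatility could then be inspected inside time windows
# around release dates to test the paper's hypothesis.
```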
@InProceedings{IWPSE15p56,
author = {Jukka Ruohonen and Sami Hyrynsalmi and Ville Leppänen},
title = {Software Evolution and Time Series Volatility: An Empirical Exploration},
booktitle = {Proc.\ IWPSE},
publisher = {ACM},
pages = {56--65},
doi = {},
year = {2015},
}
Estimating Product Evolution Graph using Kolmogorov Complexity
Yasuhiro Hayase, Tetsuya Kanda, and Takashi Ishio
(University of Tsukuba, Japan; Osaka University, Japan)
This paper proposes a method of estimating a product evolution graph based on Kolmogorov complexity. The method, EEGL, applies lossless compression to the source code of products and then presumes a derivation relationship between two products when the increase of information between them is small. An evaluation experiment confirms that EEGL and an existing method, PRET, tend to produce different errors when estimating evolution graphs.
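The abstract does not give EEGL's exact formula; the classic Normalized Compression Distance (NCD) sketched below captures the same compression-based idea: if compressing two products together adds little beyond compressing the larger one alone, the products likely share a derivation relationship.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: values near 0 suggest derivation."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Invented example "products": v2 derives from v1, v3 is unrelated.
v1 = b"int main() { return 0; }" * 50
v2 = v1 + b"void helper() {}"
v3 = b"class Unrelated {}" * 50

print(ncd(v1, v2))  # small: little new information between the versions
print(ncd(v1, v3))  # larger: the products share little information
```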
@InProceedings{IWPSE15p66,
author = {Yasuhiro Hayase and Tetsuya Kanda and Takashi Ishio},
title = {Estimating Product Evolution Graph using Kolmogorov Complexity},
booktitle = {Proc.\ IWPSE},
publisher = {ACM},
pages = {66--72},
doi = {},
year = {2015},
}
Using Control Flow Analysis to Improve the Effectiveness of Incremental Mutation Testing
Luke Bajada, Mark Micallef, and Christian Colombo
(University of Malta, Malta)
Incremental Mutation Testing attempts to make mutation testing less expensive by applying it incrementally to a system as it evolves. This approach fits current trends of iterative software development, the main idea being that by carrying out mutation analysis in frequent bite-sized chunks focused on areas of the code which have changed, one can build confidence in the adequacy of a test suite incrementally. Yet this depends on how precisely one can characterise the effects of a change to a program. The original technique uses a naive approach whereby changes are characterised only syntactically. In this paper we propose bolstering incremental mutation testing by using control flow analysis to identify the semantic repercussions which a syntactic change will have on a system. Our initial results, based on two case studies, demonstrate that numerous relevant mutants are now generated which would otherwise not have been considered under the naive approach. However, the cost of identifying these mutants is significant when compared to the naive approach, although the technique remains advantageous compared to traditional mutation testing so long as the increment is sufficiently small.
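One simple reading of the control-flow idea is reachability over a call graph: mutants are generated not only for the changed methods but for everything their behaviour can reach. The graph and method names below are invented for illustration, not taken from the paper.

```python
from collections import deque

call_graph = {                       # caller -> callees (hypothetical system)
    "parse": ["tokenize", "buildTree"],
    "buildTree": ["addNode"],
    "tokenize": [],
    "addNode": [],
    "report": [],
}

def affected_by(changed, graph):
    """All methods reachable from the changed set via control flow."""
    seen, queue = set(changed), deque(changed)
    while queue:
        for callee in graph.get(queue.popleft(), []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

# Mutate the changed method plus every method it reaches, rather than
# restricting mutation to the syntactically changed lines alone.
print(affected_by({"parse"}, call_graph))  # excludes the untouched "report"
```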
@InProceedings{IWPSE15p73,
author = {Luke Bajada and Mark Micallef and Christian Colombo},
title = {Using Control Flow Analysis to Improve the Effectiveness of Incremental Mutation Testing},
booktitle = {Proc.\ IWPSE},
publisher = {ACM},
pages = {73--78},
doi = {},
year = {2015},
}