2015 IEEE International Conference on Software Maintenance and Evolution (ICSME),
September 29 – October 1, 2015,
Bremen, Germany
Frontmatter
Keynote
Fri, Oct 2, 09:20 - 10:30, GW2 B2880 (Chair: Andreas Winter)
Migrating from Legacy to SoA (Invited Talk)
Harry M. Sneed
(SoRing, Hungary; TU Dresden, Germany)
This presentation discusses a strategy for migrating to a service-oriented architecture. The starting point is legacy code in a procedural or object-oriented language. The result is a set of web services that can be accessed in a private or public cloud. The technique used is to cut out selected portions of code and to wrap them behind a service interface. The code itself can be left in the original language. The service interface is in WSDL. The speaker describes how to go about selecting code for reuse and how to extract that code from its current environment. Case studies are given for the languages COBOL and Java. The presentation then goes on to describe how to test the services using a web service testing tool which generates artificial requests from the service interface definition and validates the responses against the assertions provided by the tester.
@InProceedings{MESOCA15p1,
author = {Harry M. Sneed},
title = {Migrating from Legacy to SoA (Invited Talk)},
booktitle = {Proc.\ MESOCA},
publisher = {IEEE},
pages = {1--6},
doi = {},
year = {2015},
}
Migration and Evolution in Cloud Environments
Fri, Oct 2, 15:45 - 16:45, GW2 B2880 (Chair: Jens Borchers)
Challenges and Assessment in Migrating IT Legacy Applications to the Cloud
Patrizia Scandurra, Giuseppe Psaila, Rafael Capilla, and Raffaela Mirandola
(University of Bergamo, Italy; Rey Juan Carlos University, Spain; Politecnico di Milano, Italy)
The ongoing trend of redesigning legacy systems with a service-centric engineering approach brings new challenges for software architects and developers. Today, engineering and deploying software as a service requires specific Internet protocols, middleware, and languages that often complicate the interoperability of software at all levels. Moreover, cloud computing demands stringent quality requirements, such as security, scalability, and interoperability among others, to provide services and data across networks more efficiently. As software engineers must face the problem of redesigning and redeploying systems as services, in this paper we explore the challenges found during the migration of an existing system to a cloud solution, based on a set of quality requirements that includes the vendor lock-in factor. We also present a set of assessment activities and guidelines to support migration to the Cloud by adopting SOA and Cloud modeling standards and tools.
@InProceedings{MESOCA15p7,
author = {Patrizia Scandurra and Giuseppe Psaila and Rafael Capilla and Raffaela Mirandola},
title = {Challenges and Assessment in Migrating IT Legacy Applications to the Cloud},
booktitle = {Proc.\ MESOCA},
publisher = {IEEE},
pages = {7--14},
doi = {},
year = {2015},
}
Modeling and Evaluation of Mixed Redundancy Strategy with Instant Switching in Cloud-Based Systems
Pan He, Chun Tan, Xueliang Zhao, and Zhihao Zheng
(Chinese Academy of Sciences, China)
Mixed redundancy strategies are generally used in cloud-based systems, with a node-switching mechanism that differs from the traditional mixed strategy. However, related research often concentrates on the traditional mixed redundancy strategy, in which cold standby components start working only after all active nodes fail. We therefore develop a model to evaluate the reliability and performance of a cloud-based degraded system subject to a mixed active and cold standby redundancy strategy with a continual monitoring and detection mechanism. It is assumed that the node-switching process is triggered once some active nodes fail and standby nodes are available. A continuous-time Markov chain is built on top of the state transition process, and both transient and steady-state availability and the expected job completion rate are used to evaluate system metrics with or without repair facilities. A numerical method is used to solve the model, and sensitivity analysis is conducted on different redundancy strategies. Illustrative examples using real-world data are presented to explain the process of calculating the probability of each state and the different kinds of availability and performance. The comparison with the traditional mixed redundancy strategy shows that system behavior differs across the kinds of mixed strategy and that the analysis model for the traditional strategy is not suitable for strategies in cloud-based systems.
@InProceedings{MESOCA15p15,
author = {Pan He and Chun Tan and Xueliang Zhao and Zhihao Zheng},
title = {Modeling and Evaluation of Mixed Redundancy Strategy with Instant Switching in Cloud-Based Systems},
booktitle = {Proc.\ MESOCA},
publisher = {IEEE},
pages = {15--22},
doi = {},
year = {2015},
}
Evaluating Cluster Configurations for Big Data Processing: An Exploratory Study
Roni Sandel, Mark Shtern, Marios Fokaefs, and Marin Litoiu
(York University, Canada)
As data continues to grow rapidly, NoSQL clusters have been increasingly adopted to address the storage and processing demands of these large amounts of data. In parallel, cloud computing is also increasingly being adopted due to its flexibility, cost efficiency and scalability. However, evaluating and modelling NoSQL clusters present many challenges. In this work, we explore these challenges by performing a series of experiments with various configurations. The intuition is that this process is laborious and expensive, and the goal of our experiments is to confirm this intuition and to identify the factors that impact the performance of a Big Data cluster. Our experiments mostly focus on three factors: data compression, data schema and cluster topology. We performed a number of experiments based on these factors and measured and compared the response times of the resulting configurations. Eventually, the outcomes of our study are encapsulated in a performance model that predicts the cluster's response time as a function of the incoming workload and evaluates the cluster's performance faster and at lower cost. This systematic and low-effort evaluation method will facilitate the selection of, and migration to, a better cluster as performance and budget goals change. We use HBase as the large data processing cluster and we conduct our experiments on traffic data from a large city and on a distributed community cloud infrastructure.
@InProceedings{MESOCA15p23,
author = {Roni Sandel and Mark Shtern and Marios Fokaefs and Marin Litoiu},
title = {Evaluating Cluster Configurations for Big Data Processing: An Exploratory Study},
booktitle = {Proc.\ MESOCA},
publisher = {IEEE},
pages = {23--30},
doi = {},
year = {2015},
}
Emerging Ideas in Cloud Computing Migration, Evolution, and Management (Short Papers)
Fri, Oct 2, 14:00 - 15:20, GW2 B2880 (Chair: Marin Litoiu)
Sustainability Forecast for Cloud Migration
Alifah Aida Lope Abdul Rahman and Shareeful Islam
(National Audit Department, Malaysia; University of East London, UK)
In this paper, a sustainability-driven approach is proposed to measure the viability of cloud migration. The decision on cloud migration is based on sustainability dimensions, i.e., economic, environmental, social and technology, and the risks associated with these dimensions. We use the Analytic Hierarchy Process and a fuzzy scale to prioritize the sustainability dimensions based on a migration context and to calculate a Total Sustainability Index (TSI). The TSI is then used to determine the viability of cloud migration according to three different scales, i.e., convincing, moderate, and ineffective. Finally, we use a practical migration use case from the Ministry of Health (MoH), Malaysia, to demonstrate the applicability of our work. The results from the studied context indicate that economic factors and business continuity are the key influential concerns for a sustainable cloud migration.
@InProceedings{MESOCA15p31,
author = {Alifah Aida Lope Abdul Rahman and Shareeful Islam},
title = {Sustainability Forecast for Cloud Migration},
booktitle = {Proc.\ MESOCA},
publisher = {IEEE},
pages = {31--35},
doi = {},
year = {2015},
}
Architectural Run-Time Models for Operator-in-the-Loop Adaptation of Cloud Applications
Robert Heinrich, Reiner Jung, Eric Schmieders, Andreas Metzger, Wilhelm Hasselbring, Ralf Reussner, and Klaus Pohl
(KIT, Germany; Kiel University, Germany; University of Duisburg-Essen, Germany)
Building software systems by composing third-party cloud services promises many benefits. However, the increased complexity, heterogeneity, and limited observability of cloud services bring fully automatic adaptation to its limits. We propose architectural run-time models as a means for combining automatic and operator-in-the-loop adaptations of cloud services.
@InProceedings{MESOCA15p36,
author = {Robert Heinrich and Reiner Jung and Eric Schmieders and Andreas Metzger and Wilhelm Hasselbring and Ralf Reussner and Klaus Pohl},
title = {Architectural Run-Time Models for Operator-in-the-Loop Adaptation of Cloud Applications},
booktitle = {Proc.\ MESOCA},
publisher = {IEEE},
pages = {36--40},
doi = {},
year = {2015},
}
Cloud Compliant Applications: A Reference Framework to Assess the Maturity of Software Applications with Respect to Cloud
Juncal Alonso, Leire Orue-Echevarria, and Marisa Escalante
(Tecnalia, Spain)
Over the last years, several standards and reports have been published and released (ISO CCRA, TOSCA) that describe best practices for Cloud-based application design, development, and deployment. Other frameworks, such as ITIL or EFQM, provide recommendations for identifying, planning, delivering, and supporting IT services to the business through adaptation of business models and processes. All these best practices are scattered across different sources, and there is no single set of criteria covering all the aspects. This paper proposes a maturity assessment approach, supported by tools, based on standards widely adopted in the industry. It covers best practices in the three dimensions and ranks the possible solutions in terms of the most suitable alternatives for cloud-based solutions. The maturity assessment is also complemented with functionality for capturing user information and transforming it into useful information, in terms of reports and files, for the migration process.
@InProceedings{MESOCA15p41,
author = {Juncal Alonso and Leire Orue-Echevarria and Marisa Escalante},
title = {Cloud Compliant Applications: A Reference Framework to Assess the Maturity of Software Applications with Respect to Cloud},
booktitle = {Proc.\ MESOCA},
publisher = {IEEE},
pages = {41--45},
doi = {},
year = {2015},
}
MonSLAR: A Middleware for Monitoring SLA for RESTFUL Services in Cloud Computing
Shaymaa Al-Shammari and Adil Al-Yasiri
(University of Salford, UK)
Measuring the quality of cloud computing provision from the client's point of view is important in order to ensure that the service conforms to the level specified in the service level agreement (SLA). With a view to avoiding SLA violations, the main parameters should be determined in the agreement and then used to evaluate the fulfillment of the SLA terms at the client's side. Current studies in cloud monitoring handle only the monitoring of provider resources, with little or no consideration of the client's side. This paper presents MonSLAR, a user-centric middleware for Monitoring SLA for RESTful services in SaaS cloud computing environments. MonSLAR uses a distributed architecture that allows SLA parameters and the monitored data to be embedded in the requests and responses of the REST protocol.
@InProceedings{MESOCA15p46,
author = {Shaymaa Al-Shammari and Adil Al-Yasiri},
title = {MonSLAR: A Middleware for Monitoring SLA for RESTFUL Services in Cloud Computing},
booktitle = {Proc.\ MESOCA},
publisher = {IEEE},
pages = {46--50},
doi = {},
year = {2015},
}
Migration and Evolution in Service-Oriented Software
Fri, Oct 2, 11:00 - 12:30, GW2 B2880 (Chair: Jan Jelschen)
Service-Oriented Toolchains for Software Evolution
Jan Jelschen
(University of Oldenburg, Germany)
Software evolution projects need to be supported by integrated toolchains, yet can suffer from inadequate tool interoperability. Practitioners are forced to deal with technical integration issues, instead of focusing on their projects' actual objectives. Lacking integration support, the resulting toolchains are rigid and inflexible, impeding project progress. This paper presents SENSEI, a service-oriented support framework for toolchain-building, that clearly separates software evolution needs from implementing tools and interoperability issues. It aims to improve interoperability using component-based principles, and provides model-driven code generation to partly automate the integration process. The approach has been prototypically implemented, and was applied in the context of the Q-MIG project, to build parts of an integrated software migration and quality assessment toolchain.
@InProceedings{MESOCA15p51,
author = {Jan Jelschen},
title = {Service-Oriented Toolchains for Software Evolution},
booktitle = {Proc.\ MESOCA},
publisher = {IEEE},
pages = {51--58},
doi = {},
year = {2015},
}
Measuring Test Coverage of SoA Services
Harry M. Sneed and Chris Verhoef
(SoRing, Hungary; TU Dresden, Germany; VU University Amsterdam, Netherlands)
One of the challenges of testing in a SoA environment is that testers do not have access to the source code of the services they are testing. Therefore they are not able to measure test coverage at the code level, as is done in conventional white-box testing. They are compelled to measure test coverage in other ways which satisfy the constraints of black-box testing. We propose some alternate means of measuring test coverage by focusing on the structure and content of the service interface without regarding the code. The result is a new way of measuring test coverage which can apply to testing in a SoA environment.
@InProceedings{MESOCA15p59,
author = {Harry M. Sneed and Chris Verhoef},
title = {Measuring Test Coverage of SoA Services},
booktitle = {Proc.\ MESOCA},
publisher = {IEEE},
pages = {59--66},
doi = {},
year = {2015},
}
Frontmatter
Message from the Chairs
Welcome to the Seventh International Workshop on Managing Technical Debt (MTD 2015), co-located with the 31st International Conference on Software Maintenance and Evolution (ICSME) in Bremen, Germany. MTD is co-located with ICSME for the second time. Technical debt is a metaphor that software developers and managers increasingly use to communicate key trade-offs related to time planning and quality issues. The Managing Technical Debt workshop series has, since 2010, brought together practitioners and researchers to discuss and define issues related to technical debt and how they can be studied. Workshop participants have reiterated the usefulness of the metaphor each year, shared emerging practices used in software development organizations, and emphasized the need for more research and better means for sharing emerging practices and results.
Tools and Technical Debt
Fri, Oct 2, 11:00 - 12:30, GW2 B2900 (Chair: Robert Nord)
Towards an Open-Source Tool for Measuring and Visualizing the Interest of Technical Debt
Davide Falessi and Andreas Reichel
(California Polytechnic State University, USA; Mannheim University of Applied Sciences, Germany)
Current tools for managing technical debt are able to report the principal of the debt, i.e., the amount of effort required to fix all the quality rules violated in a project. However, they do not report the interest, i.e., the disadvantages the project had or will have due to quality rule violations. As a consequence, the user lacks support in understanding how much the principal should be reduced and why. We claim that information about the interest is at least as important as information about the principal; the interest should be quantified and treated as a first-class entity like the principal. In this paper we aim to advance the state of the art in how the interest is measured and visualized. The goal of the paper is to describe MIND, an open-source tool which is, to the best of our knowledge, the first tool supporting the quantification and visualization of the interest. MIND, by analyzing historical data coming from Redmine and Git repositories, reports the interest incurred in a software project in terms of how many extra defects occurred, or will occur, due to quality rule violations. We evaluated MIND by using it to analyze a software project stored in a dataset of more than a million lines of code. Results suggest that MIND accurately measures the interest of technical debt.
@InProceedings{MTD15p1,
author = {Davide Falessi and Andreas Reichel},
title = {Towards an Open-Source Tool for Measuring and Visualizing the Interest of Technical Debt},
booktitle = {Proc.\ MTD},
publisher = {IEEE},
pages = {1--8},
doi = {},
year = {2015},
}
Detecting and Quantifying Different Types of Self-Admitted Technical Debt
Everton da S. Maldonado and Emad Shihab
(Concordia University, Canada)
Technical debt is a term that has been used to express non-optimal solutions during the development of software projects. These non-optimal solutions are often shortcuts that allow the project to move faster in the short term, at the cost of increased maintenance in the future. To help alleviate the impact of technical debt, a number of studies have focused on its detection. More recently, our work has shown that one possible source for detecting technical debt is source code comments, also referred to as self-admitted technical debt. However, what types of technical debt can be detected using source code comments remains an open question. Therefore, in this paper we examine code comments to determine the different types of technical debt. First, we propose four simple filtering heuristics to eliminate comments that are not likely to contain technical debt. Second, we read through more than 33K comments, and we find that self-admitted technical debt can be classified into five main types: design debt, defect debt, documentation debt, requirement debt, and test debt. The most common type of self-admitted technical debt is design debt, making up between 42% and 84% of the classified comments. Lastly, we make the classified dataset of more than 33K comments publicly available to the community as a way to encourage future research and the evolution of the technical debt landscape.
@InProceedings{MTD15p9,
author = {Everton da S. Maldonado and Emad Shihab},
title = {Detecting and Quantifying Different Types of Self-Admitted Technical Debt},
booktitle = {Proc.\ MTD},
publisher = {IEEE},
pages = {9--15},
doi = {},
year = {2015},
}
Towards a Prioritization of Code Debt: A Code Smell Intensity Index
Francesca Arcelli Fontana, Vincenzo Ferme, Marco Zanoni, and Riccardo Roveda
(University of Milano-Bicocca, Italy; University of Lugano, Switzerland)
Code smells can be used to capture symptoms of code decay and potential maintenance problems that can be avoided by applying the right refactoring. They can be seen as a source of technical debt. However, tools for code smell detection often provide far too many and diverse results, and identify many false positive code smell instances. In fact, these tools are rooted in initial and rather informal code smell definitions, which leaves their results open to different interpretations. In this paper, we provide an Intensity Index, to be used as an estimator to determine the most critical instances, prioritizing the examination of smells and, potentially, their removal. We apply the Intensity Index to the detection of six well-known and common smells and report their intensity distribution from an analysis performed on 74 systems of the Qualitas Corpus, showing how intensity could be used to prioritize code smell inspection.
@InProceedings{MTD15p16,
author = {Francesca Arcelli Fontana and Vincenzo Ferme and Marco Zanoni and Riccardo Roveda},
title = {Towards a Prioritization of Code Debt: A Code Smell Intensity Index},
booktitle = {Proc.\ MTD},
publisher = {IEEE},
pages = {16--24},
doi = {},
year = {2015},
}
A Contextualized Vocabulary Model for Identifying Technical Debt on Code Comments
Mário André de Freitas Farias, André Batista da Silva, Manoel Gomes de Mendonça Neto, and Rodrigo Oliveira Spínola
(Federal Institute of Sergipe, Brazil; Federal University of Bahia, Brazil; Federal University of Sergipe, Brazil; FPC-UFBA, Brazil; Salvador University, Brazil)
Context: The identification of technical debt (TD) is an important step to effectively manage it. In this context, a set of indicators has been used by automated approaches to identify TD items, but some debt may not be directly identified using only metrics collected from the source code. Goal: In this work we propose CVM-TD, a model to support the identification of technical debt through code comment analysis. Method: We performed an exploratory study on two large open source projects with the goal of characterizing the feasibility of the proposed model to support the detection of TD through code comment analysis. Results: The results indicate that (1) developers use the dimensions considered by CVM-TD when writing code comments, (2) CVM-TD provides a vocabulary that may be used to detect TD items, and (3) the proposed model needs to be calibrated in order to reduce the difference between comments returned by the vocabulary and those that may indicate a TD item. Conclusion: Code comment analysis can be used to detect TD in software projects, and CVM-TD may support the development team in performing this task.
@InProceedings{MTD15p25,
author = {Mário André de Freitas Farias and André Batista da Silva and Manoel Gomes de Mendonça Neto and Rodrigo Oliveira Spínola},
title = {A Contextualized Vocabulary Model for Identifying Technical Debt on Code Comments},
booktitle = {Proc.\ MTD},
publisher = {IEEE},
pages = {25--32},
doi = {},
year = {2015},
}
Identifying and Visualizing Architectural Debt and Its Efficiency Interest in the Automotive Domain: A Case Study
Ulf Eliasson, Antonio Martini, Robert Kaufmann, and Sam Odeh
(Volvo, Sweden; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden)
Architectural Technical Debt has recently received the attention of the scientific community, as a suitable metaphor for describing sub-optimal architectural solutions having short-term benefits but causing a long-term negative impact. We study such phenomenon in the context of Volvo Car Group, where the development of modern cars includes complex systems with mechanical components, electronics and software working together in a complicated network to perform an increasing number of functions and meet the demands of many customers. This puts high requirements on having an architecture and design that can handle these demands. Therefore, it is of utmost importance to manage Architecture Technical Debt, in order to make sure that the advantages of sub-optimal solutions do not lead to the payment of a large interest. We conducted a case study at Volvo Car Group and we discovered that architectural violations in the detailed design had an impact on the efficiency of the communication between components, which is an essential quality in cars and other embedded systems. Such interest is not studied in literature, which usually focuses on the maintainability aspects of Technical Debt. To explore how this Architectural Technical Debt and its interest could be communicated to stakeholders, we developed a visual tool. We found that not only was the Architectural Debt highly interesting for the architects and other stakeholders at VCG, but the proposed visualization was useful in increasing the awareness of the impact that Architectural Technical Debt had on efficiency.
@InProceedings{MTD15p33,
author = {Ulf Eliasson and Antonio Martini and Robert Kaufmann and Sam Odeh},
title = {Identifying and Visualizing Architectural Debt and Its Efficiency Interest in the Automotive Domain: A Case Study},
booktitle = {Proc.\ MTD},
publisher = {IEEE},
pages = {33--40},
doi = {},
year = {2015},
}
Validating and Prioritizing Quality Rules for Managing Technical Debt: An Industrial Case Study
Davide Falessi and Alexander Voegele
(California Polytechnic State University, USA; Elsevier, Germany)
One major problem in using static analyzers to manage, monitor, control, and reason about technical debt is that industrial projects have a huge amount of technical debt, which reflects hundreds of quality rule violations (e.g., a highly complex module or low comment density). Moreover, the negative impact of violating quality rules (i.e., technical debt interest) may vary across rules or even across contexts. Thus, without a context-specific validation and prioritization of quality rules, developers cannot effectively manage technical debt. This paper reports on a case study aimed at exploring the interest associated with violating quality rules; i.e., we investigate if and which quality rules are important for software developers. Our empirical method consists of a survey and a quantitative analysis of the historical data of a CMMI Level 5 software company. The main result of the quantitative analysis is that classes violating several quality rules are five times more defect prone than classes not violating any rule. The main result of the survey is that some rules are perceived by developers as more important than others; however, there are no false positives (i.e., incorrect rules or rules with null interest). These results pave the way to a better practical use of quality rules to manage technical debt and describe new research directions for building a scientific foundation for the technical debt metaphor.
@InProceedings{MTD15p41,
author = {Davide Falessi and Alexander Voegele},
title = {Validating and Prioritizing Quality Rules for Managing Technical Debt: An Industrial Case Study},
booktitle = {Proc.\ MTD},
publisher = {IEEE},
pages = {41--48},
doi = {},
year = {2015},
}
Emerging Ideas in Technical Debt
Fri, Oct 2, 14:00 - 15:30, GW2 B2900 (Chair: Alexander Serebrenik)
Technical Debt in Automated Production Systems
Birgit Vogel-Heuser, Susanne Rösch, Antonio Martini, and Matthias Tichy
(TU München, Germany; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden; University of Ulm, Germany)
The term technical debt, borrowed from financial debt, describes the long-term negative effects of sub-optimal solutions used to achieve short-term benefits. It has so far been widely studied in pure software systems. However, there is a lack of studies on technical debt in technical systems, which contain mechanical, electrical and software parts. Automated Production Systems are such technical systems. In this position paper, we introduce technical debt for Automated Production Systems and give examples from the different disciplines. Based on that description, we outline future research directions on technical debt in this field.
@InProceedings{MTD15p49,
author = {Birgit Vogel-Heuser and Susanne Rösch and Antonio Martini and Matthias Tichy},
title = {Technical Debt in Automated Production Systems},
booktitle = {Proc.\ MTD},
publisher = {IEEE},
pages = {49--52},
doi = {},
year = {2015},
}
Estimating the Breaking Point for Technical Debt
Alexander Chatzigeorgiou, Apostolos Ampatzoglou, Areti Ampatzoglou, and Theodoros Amanatidis
(University of Macedonia, Greece; University of Groningen, Netherlands)
In classic economics, when borrowing an amount of money that creates a debt for the issuer, the interest does not usually become larger than the principal. In the context of technical debt, however, accumulated interest can in some cases quickly sum up to an amount that at some point becomes larger than the effort required to repay the initial amount of technical debt. In this paper we propose an approach for estimating this breaking point. Anticipating how late the breaking point is expected to come can support decision making with respect to investments in improving quality. The approach is based on a search-based optimization tool that is capable of identifying the distance of an actual object-oriented design from the corresponding optimum one.
@InProceedings{MTD15p53,
author = {Alexander Chatzigeorgiou and Apostolos Ampatzoglou and Areti Ampatzoglou and Theodoros Amanatidis},
title = {Estimating the Breaking Point for Technical Debt},
booktitle = {Proc.\ MTD},
publisher = {IEEE},
pages = {53--56},
doi = {},
year = {2015},
}
Technical Debt of Standardized Test Software
Kristóf Szabados and Attila Kovács
(Eötvös Loránd University, Hungary)
Recently, technical debt investigations have become more and more important in the software development industry. In this paper we show that the same challenges apply to automated test systems. We present an internal quality analysis of standardized test software developed by ETSI and 3GPP, performed on the systems publicly available at www.ttcn-3.org.
@InProceedings{MTD15p57,
author = {Kristóf Szabados and Attila Kovács},
title = {Technical Debt of Standardized Test Software},
booktitle = {Proc.\ MTD},
publisher = {IEEE},
pages = {57--60},
doi = {},
year = {2015},
}
Decision-Making Framework for Refactoring
Marko Leppänen, Samuel Lahtinen, Kati Kuusinen, Simo Mäkinen, Tomi Männistö, Juha Itkonen, Jesse Yli-Huumo, and Timo Lehtonen
(Tampere University of Technology, Finland; University of Helsinki, Finland; Aalto University, Finland; Lappeenranta University of Technology, Finland; Solita, Finland)
Refactoring has been defined as improving code quality without affecting its functionality. When refactoring is overlooked in daily development, the likelihood of larger refactorings increases with time. Disadvantages of larger refactorings include that they disrupt daily work, require additional planning effort, and often need to be justified to stakeholders. In this paper, we investigate through interviews how professionals make refactoring decisions. As a result, we present a decision-making framework for larger refactoring operations, describing the key stages in a refactoring workflow. Furthermore, one actual industry case of refactoring decision making is presented in detail.
@InProceedings{MTD15p61,
author = {Marko Leppänen and Samuel Lahtinen and Kati Kuusinen and Simo Mäkinen and Tomi Männistö and Juha Itkonen and Jesse Yli-Huumo and Timo Lehtonen},
title = {Decision-Making Framework for Refactoring},
booktitle = {Proc.\ MTD},
publisher = {IEEE},
pages = {61--68},
doi = {},
year = {2015},
}
A Framework to Aid in Decision Making for Technical Debt Management
Carlos Fernández-Sánchez, Juan Garbajosa, and Agustín Yagüe
(Technical University of Madrid, Spain)
Current technical debt management approaches mainly address specific types of technical debt. This paper introduces a framework to aid in decision making for technical debt management; it includes the elements considered in technical debt management in the available literature, which are classified into three groups and mapped onto three stakeholders' points of view. The research method was a systematic mapping study. In contrast to current approaches, the framework is not constrained to a concrete type of technical debt. Using this framework, it will be possible to build specific models to assist in decision making for technical debt management.
@InProceedings{MTD15p69,
author = {Carlos Fernández-Sánchez and Juan Garbajosa and Agustín Yagüe},
title = {A Framework to Aid in Decision Making for Technical Debt Management},
booktitle = {Proc.\ MTD},
publisher = {IEEE},
pages = {69--76},
doi = {},
year = {2015},
}
Working Session
Fri, Oct 2, 15:45 - 17:30, GW2 B2900 (Chair: Neil Ernst)
Restructuring and Refinancing Technical Debt
Raul Zablah and Christian Murphy
(University of Pennsylvania, USA)
Given the increasing importance of software to society, the issue of technical debt is becoming more pervasive in software development. Its implications range from incurring small amounts of technical debt to speed up development (a positive) to stalling and making development no longer possible (a huge negative). In this paper, we present a framework that attempts to refine the understanding of technical debt by tracing more links to the financial metaphor, specifically focusing on the concepts of restructuring and refinancing technical debt. This paper looks at technical debt as a leverage product that is contingent upon the liquidity of the debtor. From this perspective, it is then possible to assess the incurrence of technical debt more effectively and to strategize the use of leverage in software development more effectively, accounting for the respective risks and benefits it provides.
@InProceedings{MTD15p77,
author = {Raul Zablah and Christian Murphy},
title = {Restructuring and Refinancing Technical Debt},
booktitle = {Proc.\ MTD},
publisher = {IEEE},
pages = {77--80},
doi = {},
year = {2015},
}
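The restructuring/refinancing metaphor above can be made concrete with a toy arithmetic sketch. The model and numbers below are invented for illustration and are not taken from the paper: "principal" stands for the one-off cost of fixing a debt item now, and "interest" for the recurring extra maintenance cost of carrying it.

```python
# Toy numeric illustration of the financial metaphor for technical debt;
# all names and figures here are hypothetical, not the authors' model.

def cumulative_cost(principal, interest_rate, periods):
    """Total cost of carrying a debt item for `periods`, then repaying it."""
    return principal + principal * interest_rate * periods

def refinance(principal, old_rate, new_rate, restructure_cost, periods):
    """Cost of paying a one-off restructuring fee to lower the interest rate,
    then carrying and repaying the debt at the new rate."""
    return restructure_cost + cumulative_cost(principal, new_rate, periods)

# Carrying a 10-unit debt at 20% interest for 8 sprints...
carry = cumulative_cost(10, 0.20, 8)    # 10 + 16 = 26
# ...versus spending 4 units now to cut the rate to 5%.
refi = refinance(10, 0.20, 0.05, 4, 8)  # 4 + 10 + 4 = 18
print(carry, refi)
```

As with real refinancing, the trade-off depends on the planning horizon: over a short horizon the restructuring fee dominates, over a long one the lower interest rate wins.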
Frontmatter
Message from the Chairs
Welcome to MUD 2015, the 5th Workshop on Mining Unstructured Data. The workshop is co-located with the 31st International Conference on Software Maintenance and Evolution (ICSME 2015) and is taking place in Bremen, Germany.
Paper Presentations and Group Discussion
Mon, Sep 28, 14:00 - 15:30, GW2 B2900
Heuristic-Based Part-of-Speech Tagging of Source Code Identifiers and Comments
Reem S. AlSuhaibani, Christian D. Newman,
Michael L. Collard, and
Jonathan I. Maletic
(Kent State University, USA; University of Akron, USA)
An approach for using heuristics and static program analysis information to mark up part-of-speech for program identifiers is presented. It does not use a natural language part-of-speech tagger for identifiers within the code. Instead, a set of heuristics is defined based on how identifiers are used in code, akin to natural-language usage. Additionally, method stereotype information, which is automatically derived, is used in the tagging process. The approach is built using the srcML infrastructure and adds part-of-speech information directly into the srcML markup.
@InProceedings{MUD15p1,
author = {Reem S. AlSuhaibani and Christian D. Newman and Michael L. Collard and Jonathan I. Maletic},
title = {Heuristic-Based Part-of-Speech Tagging of Source Code Identifiers and Comments},
booktitle = {Proc.\ MUD},
publisher = {IEEE},
pages = {1--6},
doi = {},
year = {2015},
}
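The heuristic idea in the abstract above can be sketched in a few lines: split an identifier into its constituent words, then assign tags from positional rules. The specific prefixes and rules below are invented for illustration and are far simpler than the authors' srcML-based approach.

```python
import re

# Hypothetical positional heuristics for identifier POS tagging;
# the rule set is illustrative, not the paper's actual implementation.

VERB_PREFIXES = {"get", "set", "is", "has", "add", "remove", "create", "init"}

def split_identifier(name):
    """Split a camelCase or snake_case identifier into lowercase words."""
    parts = re.split(r"_|(?<=[a-z0-9])(?=[A-Z])", name)
    return [p.lower() for p in parts if p]

def tag_identifier(name, is_method=False):
    """Assign a part of speech to each word using simple positional rules."""
    words = split_identifier(name)
    tags = []
    for i, w in enumerate(words):
        if is_method and i == 0 and w in VERB_PREFIXES:
            tags.append((w, "verb"))       # method names often start with an action
        elif i == len(words) - 1:
            tags.append((w, "noun"))       # the last word is typically the head noun
        else:
            tags.append((w, "adjective"))  # interior words usually modify the head
    return tags

print(tag_identifier("getUserName", is_method=True))
# [('get', 'verb'), ('user', 'adjective'), ('name', 'noun')]
```

The point of such heuristics is that identifier grammar is far more regular than natural-language grammar, so purpose-built rules can outperform a general-purpose tagger.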
SODA: The Stack Overflow Dataset Almanac
Nicolas Latorre,
Roberto Minelli,
Andrea Mocci,
Luca Ponzanelli, and Michele Lanza
(University of Lugano, Switzerland)
Stack Overflow has become a fundamental resource for developers: it is the de facto Question and Answer (Q&A) website and one of the standard unstructured data sources for software engineering research to mine knowledge about development. We present Soda, the Stack Overflow Dataset Almanac, a tool that helps researchers and developers to better understand the trends of discussion topics in Stack Overflow, based on the available tagging system. Soda provides an effective visualization to support the analysis of topics in different time intervals and frames, leveraging single or co-occurring tags. We show, through simple usage scenarios, how Soda can be used to find peculiar moments in the evolution of Stack Overflow discussions that closely match specific recent events in the area of software development. Soda is available at http://rio.inf.usi.ch/soda/.
@InProceedings{MUD15p7,
author = {Nicolas Latorre and Roberto Minelli and Andrea Mocci and Luca Ponzanelli and Michele Lanza},
title = {SODA: The Stack Overflow Dataset Almanac},
booktitle = {Proc.\ MUD},
publisher = {IEEE},
pages = {7--11},
doi = {},
year = {2015},
}
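The tag-trend analysis the abstract describes boils down to counting single tags and tag pairs per time interval. The sketch below is an invented minimal version of that idea (the data, function names, and interval scheme are illustrative, not part of Soda):

```python
from collections import Counter
from itertools import combinations

# Hypothetical mini-dataset: (interval, tags) per question.
posts = [
    ("2014-Q1", ["java", "android"]),
    ("2014-Q1", ["javascript", "node.js"]),
    ("2014-Q2", ["java", "android"]),
    ("2014-Q2", ["swift", "ios"]),  # Swift tags appear after its 2014 release
]

def tag_counts_by_interval(posts):
    """Count single-tag occurrences per time interval."""
    counts = {}
    for interval, tags in posts:
        counts.setdefault(interval, Counter()).update(tags)
    return counts

def cooccurrence_by_interval(posts):
    """Count co-occurring tag pairs per time interval."""
    counts = {}
    for interval, tags in posts:
        pairs = combinations(sorted(set(tags)), 2)
        counts.setdefault(interval, Counter()).update(pairs)
    return counts

print(tag_counts_by_interval(posts)["2014-Q2"]["swift"])
```

A sudden jump in a tag's per-interval count (as with "swift" here) is exactly the kind of peculiar moment, tied to a real-world event, that the tool surfaces visually.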
Matching Machine-Code Functions in Executables within One Product Line via Bioinformatic Sequence Alignment
Arne Wichmann and
Sibylle Schupp
(TU Hamburg, Germany)
In this paper we evaluate whether different executables from the same software product line have similar sequences of machine-code functions. We provide a method for matching machine-code functions using alignment techniques known from bioinformatics. We map, per function, vectors of code metrics to symbols from an alphabet using machine learning techniques, and construct sequence alignments using off-the-shelf alignment tools. Our evaluation of alignments of glibc versions, musl optimizations, different RedBoot platforms and architectures, and the Linux kernel shows that the above statement holds in all cases except for differing architectures. Our method can therefore be used to match functions in executables for most variations within one product line.
@InProceedings{MUD15p12,
author = {Arne Wichmann and Sibylle Schupp},
title = {Matching Machine-Code Functions in Executables within One Product Line via Bioinformatic Sequence Alignment},
booktitle = {Proc.\ MUD},
publisher = {IEEE},
pages = {12--16},
doi = {},
year = {2015},
}
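The two-stage pipeline described in the abstract above can be sketched as follows: discretize per-function metric vectors into alphabet symbols, then align the resulting symbol sequences globally. The threshold-bucketing discretization and the textbook Needleman-Wunsch scoring below are simplified stand-ins for the machine-learning and off-the-shelf alignment tools the paper actually uses.

```python
# Simplified sketch of the pipeline; details are assumptions, not the
# paper's implementation.

def discretize(metric_vectors, thresholds):
    """Map each function's metric vector to one symbol by bucketing a
    single summary metric (a crude stand-in for learned clustering)."""
    alphabet = "ABCD"
    symbols = []
    for v in metric_vectors:
        score = sum(v)
        bucket = sum(score > t for t in thresholds)
        symbols.append(alphabet[min(bucket, len(alphabet) - 1)])
    return "".join(symbols)

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Classic global sequence alignment; returns the alignment score."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # match/mismatch
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[n][m]

# Two executables whose functions map to mostly identical symbol
# sequences align with a high score.
print(needleman_wunsch("ABCA", "ABCA"))  # 4
print(needleman_wunsch("ABCA", "ABDA"))  # 2
```

Gap positions in the backtraced alignment would then correspond to functions present in one executable but not the other, which is what makes the bioinformatic framing useful for product-line variation.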