2nd ACM SIGSOFT International Workshop on Software Qualities and Their Dependencies (SQUADE 2019),
August 26, 2019,
Tallinn, Estonia
Frontmatter
Welcome from the Chairs
Welcome to the second international workshop on Software Qualities and their Dependencies (SQUADE’19). This year the workshop is co-located with ESEC/FSE in Tallinn, Estonia, and is held on August 26, 2019. SQUADE focuses on increasing the understanding of the nature of Software Qualities (SQs), also known as -ilities, non-functional properties, or extra-functional requirements (e.g., reliability, security, maintainability), and their interrelationships, with the aim of bringing them into balance in the practice of software engineering.
Papers
A Heuristic Fuzz Test Generator for Java Native Interface
Jinjing Zhao,
Yan Wen,
Xiang Li,
Ling Pang,
Xiaohui Kuang, and
Dongxia Wang
(National Key Laboratory of Science and Technology on Information System Security, China; Beijing Linzhuo Information Technology, China)
It is well known that once a Java application uses native C/C++ methods through the Java Native Interface (JNI), any security guarantees provided by Java might be invalidated by the native methods. Any vulnerability in this trusted native code can therefore compromise the security of the Java program. Fuzz testing is an approach to software testing whereby the system under test is bombarded with inputs generated by another program. When using a fuzzer to test JNI programs, accurately reaching the JNI functions and running through them to find the sensitive system APIs is a precondition of the test. In this paper, we present a heuristic fuzz generation method for JNI vulnerability detection based on the branch predicate information of the program. Our experimental results show that the method needs fewer fuzzing iterations to reach more sensitive Windows APIs in Java native code.
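The branch-guided generation idea (mutate inputs and prefer those that push execution through new branches toward the target functions) can be illustrated with a minimal coverage-guided loop. This is only a sketch under assumed names, not the authors' implementation: `fuzz` and its coverage callback are hypothetical, and in the paper's setting the branch feedback would come from instrumented JNI native code.

```python
import random

def fuzz(target, seed: bytes, rounds: int = 1000):
    """Minimal coverage-guided fuzz loop: keep mutants that reach new branches.

    `target` maps an input to the set of branch IDs it exercises; here it is
    a plain callable standing in for instrumented native-code feedback.
    """
    corpus, seen = [seed], set(target(seed))
    for _ in range(rounds):
        data = bytearray(random.choice(corpus))
        for _ in range(random.randint(1, 4)):          # flip a few bytes
            data[random.randrange(len(data))] ^= random.randrange(1, 256)
        covered = set(target(bytes(data)))
        if covered - seen:                             # new branch reached: keep it
            seen |= covered
            corpus.append(bytes(data))
    return corpus, seen
```

Keeping only inputs that add coverage is what lets such a loop reach deep branch conditions with fewer iterations than blind random generation.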
@InProceedings{SQUADE19p1,
author = {Jinjing Zhao and Yan Wen and Xiang Li and Ling Pang and Xiaohui Kuang and Dongxia Wang},
title = {A Heuristic Fuzz Test Generator for Java Native Interface},
booktitle = {Proc.\ SQUADE},
publisher = {ACM},
pages = {1--7},
doi = {10.1145/3340495.3342749},
year = {2019},
}
A Comparative Study of FAQs for Software Development
Mathias Ellmann and
Irmo Timmann
(DIPLOMA University of Applied Sciences at Bad Sooden-Allendorf, Germany)
Developers use FAQs (Frequently Asked Questions) to access and share knowledge about software libraries, APIs, and platforms. This paper studies 2,660 questions from 43 FAQ websites. We analyzed accessibility metrics, such as the number of steps from the main documentation page, tagging, or multilingualism, as well as structure and readability metrics, such as code-to-text ratio, number of links, and Flesch Reading-Ease. In addition, we compared these FAQs to 69,548 Stack Overflow (SO) posts that cover the same topics and have been posted by developers at least twice (i.e., duplicates). Our results reveal that different software vendors give different importance to their FAQs, e.g., by investing more or less effort in structuring and presenting them. We found that the studied FAQs include more references (e.g., to corresponding API documentation) and are more verbose and difficult to read than corresponding SO duplicates. We also found that FAQs cover additional topics compared to corresponding duplicate posts.
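The readability side of such an analysis is easy to reproduce. As a sketch, the Flesch Reading-Ease score can be computed with a naive vowel-group syllable heuristic; the paper does not specify its implementation, so scores from this approximation are only rough.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading-Ease: higher scores mean easier-to-read text.

    Syllables are approximated by counting vowel groups per word, so the
    result is a rough estimate rather than a dictionary-accurate score.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
```

Short sentences of common words score high on the usual 0-100 scale, while long polysyllabic sentences score low (the formula is unbounded below), which is how verbose FAQ prose ends up rated harder to read than terse SO answers.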
@InProceedings{SQUADE19p8,
author = {Mathias Ellmann and Irmo Timmann},
title = {A Comparative Study of FAQs for Software Development},
booktitle = {Proc.\ SQUADE},
publisher = {ACM},
pages = {8--11},
doi = {10.1145/3340495.3342750},
year = {2019},
}
Hospitality of Chatbot Building Platforms
Saurabh Srivastava and
T.V. Prabhakar
(IIT Kanpur, India)
The temptation to talk to a machine is not new. Recent advancements in the field of Natural Language Understanding have made it possible to build conversational components that can be plugged into an application, similar to other components. These components, called chatbots, can be created from scratch or with the help of commercially available platforms. These platforms make it easier to build and deploy chatbots, often without writing a single line of code. However, like any other software component, chatbots also have quality concerns. Despite significant contributions in the field, an architectural perspective on building chatbots with desired quality requirements is missing from the literature.
In the current work, we highlight the impact of features provided by these platforms (along with their quality) on the application design process and overall quality attributes. We propose a methodological framework to evaluate the support provided by a chatbot platform towards achieving quality in the application. The framework, called the Hospitality Framework, is based on the software architecture body of knowledge, especially architectural tactics. The framework produces a metric, called the Hospitality Index, which is useful for making various design decisions for the overall application. We present the use of our framework on a simple use case to highlight the phases of evaluation. We showcase the process by picking three popular chatbot platforms (Watson Assistant, DialogFlow, and Lex) and four quality attributes: Modifiability, Security & Privacy, Interoperability, and Reliability. Our results show that different platforms provide different support for these four quality attributes.
@InProceedings{SQUADE19p12,
author = {Saurabh Srivastava and T.V. Prabhakar},
title = {Hospitality of Chatbot Building Platforms},
booktitle = {Proc.\ SQUADE},
publisher = {ACM},
pages = {12--19},
doi = {10.1145/3340495.3342751},
year = {2019},
}
Integrating Runtime Data with Development Data to Monitor External Quality: Challenges from Practice
Aytaj Aghabayli,
Dietmar Pfahl,
Silverio Martínez-Fernández, and
Adam Trendowicz
(University of Tartu, Estonia; Fraunhofer IESE, Germany)
The use of software analytics in software development companies has grown in recent years. Still, there is little support for such companies to obtain integrated, insightful, and actionable information at the right time. This research explores the integration of runtime and development data to analyze to what extent external quality is related to internal quality, based on real project data. Over the course of more than three months, we collected and analyzed data of a software product following the CRISP-DM process. We studied the integration possibilities between runtime and development data and implemented two integrations. The number of bugs found in code has a weak positive correlation with code quality measures and a moderate negative correlation with the number of rule violations found. Other types of correlations require more data cleaning and higher-quality data for their exploration. During our study, we encountered several challenges in exploiting data gathered both at runtime and during development. Lessons learned from integrating external and internal data in software projects may be useful for practitioners and researchers alike.
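Once runtime and development data are joined, the correlation analysis reduces to comparing two per-module series. A minimal sketch with the Pearson coefficient follows; the paper does not state which correlation measure it used, and any data fed to this function would be hypothetical.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series.

    Returns a value in [-1, 1]: positive when the series move together,
    negative when one rises as the other falls.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A "weak positive" or "moderate negative" finding corresponds to coefficients of small or middling magnitude, which is why data quality and cleaning matter: noise in either series drags the magnitude toward zero.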
@InProceedings{SQUADE19p20,
author = {Aytaj Aghabayli and Dietmar Pfahl and Silverio Martínez-Fernández and Adam Trendowicz},
title = {Integrating Runtime Data with Development Data to Monitor External Quality: Challenges from Practice},
booktitle = {Proc.\ SQUADE},
publisher = {ACM},
pages = {20--26},
doi = {10.1145/3340495.3342752},
year = {2019},
}
Predicting Reliability by Severity and Priority of Defects
Camelia Serban and
Andreea Vescan
(Babes-Bolyai University, Romania)
The quality of software systems continues to be an important subject of investigation. Quality attributes of object-oriented designs are assessed and predicted using software metrics, since a good internal structure of a software system greatly influences its external quality attributes.
This study presents an empirical investigation of software reliability. The goal is to identify the applicability of object-oriented design metrics for reliability prediction. First, reliability is estimated: we propose a new reliability metric at the class level that considers two perspectives on the failures/bugs found, i.e., priority and severity. The estimated reliability value then helps us to predict the reliability of other software projects based on their internal structure. The prediction value for reliability can thus be obtained earlier in the software development life cycle.
For prediction, the approach uses a statistical method, multiple linear regression, with the bug count of a class (reflected in the newly proposed metric) as the dependent variable and the values of the Chidamber and Kemerer (CK) metrics as independent variables. The results indicate that the most influential CK metrics in predicting reliability are WMC (Weighted Methods per Class) and CBO (Coupling Between Object classes), and that the RFC (Response For Class) and LCOM (Lack of Cohesion of Methods) metrics have no impact on the value of reliability. The root mean square error is used to validate the proposed regression equation, considering data from the other four projects.
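The regression and validation steps can be sketched as follows. The per-class metric values and bug counts below are hypothetical placeholders, not the paper's dataset, and the least-squares fit stands in for whatever statistical tooling the authors used.

```python
import numpy as np

# Hypothetical per-class data: columns are the CK metrics WMC, CBO, RFC, LCOM
X = np.array([[10,  4, 20,  2],
              [25,  9, 35,  6],
              [ 7,  2, 12,  1],
              [40, 12, 50, 10],
              [18,  6, 28,  4],
              [30,  8, 40,  7]], dtype=float)
y = np.array([1, 4, 0, 7, 3, 5], dtype=float)   # bug count per class

# Multiple linear regression: bug count ~ intercept + CK metric values
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Root mean square error, the validation measure named in the abstract
pred = A @ coef
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
```

With a fitted `coef`, the same `A @ coef` expression predicts bug counts for classes of a new project from their CK metric values alone, which is what allows the reliability estimate to be made early in the life cycle.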
@InProceedings{SQUADE19p27,
author = {Camelia Serban and Andreea Vescan},
title = {Predicting Reliability by Severity and Priority of Defects},
booktitle = {Proc.\ SQUADE},
publisher = {ACM},
pages = {27--34},
doi = {10.1145/3340495.3342753},
year = {2019},
}
Contributors’ Impact on a FOSS Project’s Quality
Thomas Schranz,
Christian Schindler,
Matthias Müller, and
Wolfgang Slany
(Graz University of Technology, Austria)
Engaging contributors in a Free and Open Source Software (FOSS) project can be challenging. Finding an appropriate task to start with is a common entrance barrier for newcomers. Poor code quality contributes to difficulties in the onboarding process and limits contributor satisfaction in general. In turn, dissatisfied developers tend to exacerbate problems with system integrity. Poorly designed systems are difficult to maintain and extend. Users can often directly experience these issues as instabilities in system behavior. Thus, code quality is a key issue for users and contributors in FOSS. We present a case study on the interactions between code quality and contributor experience in the real-world FOSS project Catrobat. We describe the implications of a refactoring process in terms of code metrics and benefits for developers and users.
@InProceedings{SQUADE19p35,
author = {Thomas Schranz and Christian Schindler and Matthias Müller and Wolfgang Slany},
title = {Contributors’ Impact on a FOSS Project’s Quality},
booktitle = {Proc.\ SQUADE},
publisher = {ACM},
pages = {35--38},
doi = {10.1145/3340495.3342754},
year = {2019},
}