ASE 2017

2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE 2017), October 30 – November 3, 2017, Urbana-Champaign, IL, USA


Program Comprehension
Technical Research
Automatically Assessing Code Understandability: How Far Are We?
Simone Scalabrino, Gabriele Bavota, Christopher Vendome, Mario Linares-Vásquez, Denys Poshyvanyk, and Rocco Oliveto
(University of Molise, Italy; University of Lugano, Switzerland; College of William and Mary, USA; Universidad de los Andes, Colombia)
Abstract: Program understanding plays a pivotal role in software maintenance and evolution: a deep understanding of code is the stepping stone for most software-related activities, such as bug fixing or testing. Being able to measure the understandability of a piece of code might help in estimating the effort required for a maintenance activity, in comparing the quality of alternative implementations, or even in predicting bugs. Unfortunately, there are no existing metrics specifically designed to assess the understandability of a given code snippet. In this paper, we take a first step in this direction by studying the extent to which several types of metrics computed on code, documentation, and developers correlate with code understandability. To perform such an investigation, we ran a study with 46 participants who were asked to understand eight code snippets each. We collected a total of 324 evaluations aiming at assessing the perceived understandability, the actual level of understanding, and the time needed to understand a code snippet. Our results demonstrate that none of the (existing and new) metrics we considered is able to capture code understandability, not even the ones assumed to assess quality attributes strongly related to it, such as code readability and complexity.
