Workshop NL4SE 2018 – Author Index
Ahmed, Iftekhar
NL4SE '18: "Towards Understanding Code ..."
Towards Understanding Code Readability and Its Impact on Design Quality
Umme Ayda Mannan, Iftekhar Ahmed, and Anita Sarma (Oregon State University, USA; University of California at Irvine, USA) Readability of code is commonly believed to impact the overall quality of software. Poor readability not only hinders developers from understanding what the code is doing but can also cause developers to make sub-optimal changes and introduce bugs. Developers recognize this risk and list readability among their top information needs. Researchers have modeled readability scores; however, thus far, no one has investigated how readability evolves over time and how that impacts the design quality of software. We perform a large-scale study of 49 open source Java projects, spanning 8296 commits and 1766 files. We find that readability is high in open source projects and, unlike design quality, does not fluctuate over a project’s lifetime. Readability also has a non-significant correlation of 0.151 (Kendall’s τ) with code smell count (an indicator of design quality). Since the current readability measure is unable to capture the increased difficulty of reading code caused by degraded design quality, our results hint at the need for better measurement and modeling of code readability. @InProceedings{NL4SE18p18, author = {Umme Ayda Mannan and Iftekhar Ahmed and Anita Sarma}, title = {Towards Understanding Code Readability and Its Impact on Design Quality}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {18--21}, doi = {10.1145/3283812.3283820}, year = {2018}, }
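As context for the statistic quoted above, a minimal sketch of how a Kendall’s τ correlation between per-file readability scores and code smell counts might be computed. The sample data and variable names are hypothetical; this is not the paper’s pipeline:

    # Correlate readability scores with code smell counts via Kendall's tau.
    # Sample data is invented for illustration; assumes SciPy is installed.
    from scipy.stats import kendalltau

    readability = [0.91, 0.87, 0.95, 0.78, 0.88]  # hypothetical per-file scores
    smell_count = [3, 5, 1, 9, 4]                 # hypothetical smells per file

    tau, p_value = kendalltau(readability, smell_count)
    print(f"Kendall's tau = {tau:.3f}, p = {p_value:.3f}")
    # A small tau with a large p-value, like the paper's 0.151, indicates a
    # weak, non-significant association.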
Bowers, Kate M.
NL4SE '18: "3CAP: Categorizing the Cognitive ..."
3CAP: Categorizing the Cognitive Capabilities of Alzheimer’s Patients in a Smart Home Environment
Kate M. Bowers, Reihaneh H. Hariri, and Katey A. Price (Oakland University, USA; Albion College, USA) Alzheimer’s disease is a progressive illness, with no effective cure or treatment, that affects more than 5.5 million people in the United States. Symptoms of the disease include declines in memory and speech abilities and increases in aggression and insomnia. Recent research suggests that NLP techniques can detect early cognitive decline as well as monitor the rate of decline over time. The processed data can be used in a smart home environment to enhance the level of home care for Alzheimer’s patients. This paper proposes early-stage research in software engineering and natural language processing for quantifying and evaluating a patient’s cognitive state to determine the required level of support in a smart home. @InProceedings{NL4SE18p34, author = {Kate M. Bowers and Reihaneh H. Hariri and Katey A. Price}, title = {3CAP: Categorizing the Cognitive Capabilities of Alzheimer’s Patients in a Smart Home Environment}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {34--37}, doi = {10.1145/3283812.3283824}, year = {2018}, }
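As one illustration of the kind of language signal such a system might monitor, a minimal sketch of the type-token ratio, a simple lexical diversity measure used in some cognitive decline studies. This is an assumed example, not the 3CAP method:

    # Type-token ratio as a crude lexical diversity signal. A declining ratio
    # over time may accompany cognitive decline; illustrative only.
    import re

    def type_token_ratio(transcript: str) -> float:
        """Ratio of distinct words to total words in a speech transcript."""
        words = re.findall(r"[a-z']+", transcript.lower())
        return len(set(words)) / len(words) if words else 0.0

    print(type_token_ratio("the cat sat on the mat and the cat slept"))  # 0.7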
Brockschmidt, Marc
NL4SE '18: "Learning from Code with Graphs ..."
Learning from Code with Graphs (Keynote)
Marc Brockschmidt (Microsoft Research, UK) Learning from large corpora of source code ("Big Code") has seen increasing interest over the past few years. A first wave of work has focused on leveraging off-the-shelf methods from other machine learning fields such as natural language processing. While these techniques have succeeded in showing the feasibility of learning from code, and led to some initial practical solutions, they forego explicit use of known program semantics. In a range of recent work, we have tried to address this issue by integrating deep learning techniques with graph-based program analysis methods. Graphs are a convenient, general formalism for modeling entities and their relationships, and are seeing increasing interest from machine learning researchers as well. In this talk, I present two applications of graph-based learning to understanding and generating programs, and discuss a range of future directions building on this success. @InProceedings{NL4SE18p1, author = {Marc Brockschmidt}, title = {Learning from Code with Graphs (Keynote)}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {1--1}, doi = {10.1145/3283812.3283813}, year = {2018}, }
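To make the graph formalism concrete, a minimal sketch that turns a Python function into a graph of AST nodes connected by child edges. Real "Big Code" models add many more edge types (data flow, last use, etc.); this simplified construction is an illustration, not the techniques from the talk:

    # Build a simple program graph: AST nodes plus parent-to-child edges.
    import ast

    def ast_to_graph(source: str):
        """Return (nodes, edges); edges link each AST node to its children."""
        tree = ast.parse(source)
        nodes, edges = [], []
        for parent in ast.walk(tree):
            nodes.append(type(parent).__name__)
            for child in ast.iter_child_nodes(parent):
                edges.append((id(parent), id(child)))
        return nodes, edges

    nodes, edges = ast_to_graph("def add(a, b):\n    return a + b")
    print(len(nodes), "nodes,", len(edges), "edges")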
Cojocar, Grigoreta Sofia
NL4SE '18: "Mining Monitoring Concerns ..."
Mining Monitoring Concerns Implementation in Java-Based Software Systems
Grigoreta Sofia Cojocar and Adriana-Mihaela Guran (Babes-Bolyai University, Romania) In this paper we describe a new approach for automatic identification of monitoring concerns implementation in Java-based software systems. We also present the results obtained by using our approach on 21 Java-based systems, ranging from small to very large systems. @InProceedings{NL4SE18p22, author = {Grigoreta Sofia Cojocar and Adriana-Mihaela Guran}, title = {Mining Monitoring Concerns Implementation in Java-Based Software Systems}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {22--25}, doi = {10.1145/3283812.3283821}, year = {2018}, }
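The abstract does not detail the identification approach; as a rough illustration of the task itself, a sketch that flags lines in Java source that invoke common logging/monitoring APIs. The pattern list is an assumption, not the authors’ technique:

    # Naive detection of monitoring-concern code in Java sources. The API
    # patterns are guesses for illustration, not the paper's approach.
    import re

    MONITORING_CALL = re.compile(
        r"\b(logger?|log)\s*\.\s*(trace|debug|info|warn|error|fatal)\s*\(",
        re.IGNORECASE,
    )

    def monitoring_lines(java_source: str):
        """Yield (line_number, line) pairs that look like logging calls."""
        for i, line in enumerate(java_source.splitlines(), start=1):
            if MONITORING_CALL.search(line):
                yield i, line.strip()

    src = 'int x = f();\nlog.info("x = " + x);\n'
    print(list(monitoring_lines(src)))  # [(2, 'log.info("x = " + x);')]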
Ellmann, Mathias
NL4SE '18: "Two Perspectives on Software ..."
Two Perspectives on Software Documentation Quality in Stack Overflow
Mathias Ellmann and Marko Schnecke (University of Hamburg, Germany) This paper studies software documentation quality in Stack Overflow from two perspectives: that of the questioners, who accept answers, and that of the community, which votes for answers. We show what developers can do to increase the chance that their questions or answers are accepted by the community or by the questioners. We found differing expectations about what information, such as code or images, should be included in a question or an answer. We evaluated six quality indicators (such as Flesch Reading Ease or images) that a developer should consider before posting a question or an answer. In addition, we found different quality indicators for different types of questions, in particular error, discrepancy, and how-to questions. Finally, we use a supervised machine-learning algorithm to predict when an answer will be accepted or voted for. @InProceedings{NL4SE18p6, author = {Mathias Ellmann and Marko Schnecke}, title = {Two Perspectives on Software Documentation Quality in Stack Overflow}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {6--9}, doi = {10.1145/3283812.3283816}, year = {2018}, }
NL4SE '18: "Natural Language Processing ..."
Natural Language Processing (NLP) Applied on Issue Trackers
Mathias Ellmann (University of Hamburg, Germany) In the domain of software engineering, NLP techniques are needed to find and reuse duplicate or similar development knowledge that is stored in development documentation such as development tasks. To understand duplicate and similar development documentation, we discuss different NLP techniques such as descriptive statistics, topic analysis, and similarity algorithms such as n-grams, Jaccard, or LSI, as well as machine learning algorithms such as decision trees or support vector machines (SVMs). These techniques are used to reach a better understanding of the characteristics, the lexical relations (syntactic and semantic), and the classification and prediction of duplicate development tasks. We found that duplicate tasks share conceptual information and tend to be created by inexperienced developers. By tuning different features to predict development tasks with a gradient or a fidelity loss function, a system can identify duplicate tasks with 100% accuracy. @InProceedings{NL4SE18p38, author = {Mathias Ellmann}, title = {Natural Language Processing (NLP) Applied on Issue Trackers}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {38--41}, doi = {10.1145/3283812.3283825}, year = {2018}, }
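For the first entry above, one of the quality indicators, Flesch Reading Ease, is a fixed formula over sentence and word lengths: 206.835 − 1.015·(words/sentences) − 84.6·(syllables/words). A minimal sketch; the vowel-group syllable counter is a crude stand-in, as the paper does not specify its implementation:

    # Flesch Reading Ease with a rough syllable heuristic (count vowel groups).
    import re

    def count_syllables(word: str) -> int:
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text: str) -> float:
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 206.835 - 1.015 * (len(words) / sentences) \
                       - 84.6 * (syllables / len(words))

    # Higher scores mean easier text; short words and sentences score high.
    print(round(flesch_reading_ease("The code works. It is fast and simple."), 1))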
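For the second entry, the Jaccard index named among the similarity algorithms is easy to state: |A ∩ B| / |A ∪ B| over the two task descriptions’ word sets. A minimal sketch; the whitespace tokenization is a simplification:

    # Jaccard similarity between two issue-tracker task descriptions.
    def jaccard(a: str, b: str) -> float:
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    t1 = "crash when saving file on windows"
    t2 = "app crash on windows when saving a file"
    print(round(jaccard(t1, t2), 2))  # 0.75: high overlap, possible duplicate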
Gonzalez, Danielle
NL4SE '18: "A Fine-Grained Approach for ..."
A Fine-Grained Approach for Automated Conversion of JUnit Assertions to English
Danielle Gonzalez, Suzanne Prentice, and Mehdi Mirakhorli (Rochester Institute of Technology, USA; University of South Carolina, USA) Converting source or unit test code to English has been shown to improve the maintainability, understandability, and analysis of software and tests. Code summarizers identify 'important' statements in the source/tests and convert them to easily understood English sentences using static analysis and NLP techniques. However, current test summarization approaches handle only a subset of the variation and customization allowed in the JUnit assert API (a critical component of test cases), which may affect the accuracy of conversions. In this paper, we present our work towards improving JUnit test summarization with a detailed process for converting a total of 45 unique JUnit assertions to English, including 37 previously-unhandled variations of the assertThat method. This process has also been implemented and released as the AssertConvert tool. Initial evaluations have shown that this tool generates English conversions that accurately represent a wide variety of assertion statements which could be used for code summarization or other NLP analyses. @InProceedings{NL4SE18p14, author = {Danielle Gonzalez and Suzanne Prentice and Mehdi Mirakhorli}, title = {A Fine-Grained Approach for Automated Conversion of JUnit Assertions to English}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {14--17}, doi = {10.1145/3283812.3283819}, year = {2018}, }
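To give a flavor of such conversions, a template-based sketch for a few assertion forms. The templates and the naive argument split are illustrative simplifications, not AssertConvert’s rules:

    # Template-based conversion of a few JUnit assertions to English.
    import re

    TEMPLATES = {
        "assertEquals":  "assert that {1} equals {0}",
        "assertTrue":    "assert that {0} is true",
        "assertNotNull": "assert that {0} is not null",
    }

    def assertion_to_english(stmt: str) -> str:
        m = re.match(r"\s*(\w+)\((.*)\);?\s*$", stmt)
        if not m or m.group(1) not in TEMPLATES:
            return stmt  # unhandled assertion: leave unchanged
        args = [a.strip() for a in m.group(2).split(",")]  # breaks on nested commas
        return TEMPLATES[m.group(1)].format(*args)

    print(assertion_to_english("assertEquals(expected, actual);"))
    # -> assert that actual equals expected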
Gordon, Colin S.
NL4SE '18: "Generating Comments from Source ..."
Generating Comments from Source Code with CCGs
Sergey Matskevich and Colin S. Gordon (Drexel University, USA) Good comments help developers understand software faster and maintain it better. However, comments are often missing, inaccurate, or out of date. Many of these problems can be avoided by automatic comment generation. This paper presents a method to generate informative comments directly from the source code using general-purpose techniques from natural language processing. We generate comments using an existing natural language model that couples words with their individual logical meaning and grammar rules, allowing comment generation to proceed by search from declarative descriptions of program text. We evaluate our algorithm on several classic algorithms implemented in Python. @InProceedings{NL4SE18p26, author = {Sergey Matskevich and Colin S. Gordon}, title = {Generating Comments from Source Code with CCGs}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {26--29}, doi = {10.1145/3283812.3283822}, year = {2018}, }
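The CCG machinery itself is too involved to reproduce here; as a deliberately simplified stand-in for deriving a comment from program text, a fixed-template sketch over a Python assignment. The real approach composes word meanings under grammar rules rather than filling templates:

    # Fixed-template comment generation: NOT the paper's CCG-based composition.
    import ast

    def comment_for(source: str) -> str:
        node = ast.parse(source).body[0]
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.BinOp):
            op = {ast.Add: "sum", ast.Mult: "product"}.get(type(node.value.op), "result")
            return f"# store the {op} of the operands in {node.targets[0].id}"
        return "# (no template for this statement)"

    print(comment_for("total = a + b"))  # -> # store the sum of the operands in total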
Guran, Adriana-Mihaela
NL4SE '18: "Mining Monitoring Concerns ..."
Mining Monitoring Concerns Implementation in Java-Based Software Systems
Grigoreta Sofia Cojocar and Adriana-Mihaela Guran (Babes-Bolyai University, Romania) In this paper we describe a new approach for automatic identification of monitoring concerns implementation in Java-based software systems. We also present the results obtained by using our approach on 21 Java-based systems, ranging from small to very large systems. @InProceedings{NL4SE18p22, author = {Grigoreta Sofia Cojocar and Adriana-Mihaela Guran}, title = {Mining Monitoring Concerns Implementation in Java-Based Software Systems}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {22--25}, doi = {10.1145/3283812.3283821}, year = {2018}, }
Hariri, Reihaneh H.
NL4SE '18: "3CAP: Categorizing the Cognitive ..."
3CAP: Categorizing the Cognitive Capabilities of Alzheimer’s Patients in a Smart Home Environment
Kate M. Bowers, Reihaneh H. Hariri, and Katey A. Price (Oakland University, USA; Albion College, USA) Alzheimer’s disease is a progressive illness, with no effective cure or treatment, that affects more than 5.5 million people in the United States. Symptoms of the disease include declines in memory and speech abilities and increases in aggression and insomnia. Recent research suggests that NLP techniques can detect early cognitive decline as well as monitor the rate of decline over time. The processed data can be used in a smart home environment to enhance the level of home care for Alzheimer’s patients. This paper proposes early-stage research in software engineering and natural language processing for quantifying and evaluating a patient’s cognitive state to determine the required level of support in a smart home. @InProceedings{NL4SE18p34, author = {Kate M. Bowers and Reihaneh H. Hariri and Katey A. Price}, title = {3CAP: Categorizing the Cognitive Capabilities of Alzheimer’s Patients in a Smart Home Environment}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {34--37}, doi = {10.1145/3283812.3283824}, year = {2018}, }
Krinke, Jens
NL4SE '18: "TestNMT: Function-to-Test ..."
TestNMT: Function-to-Test Neural Machine Translation
Robert White and Jens Krinke (University College London, UK) Test generation can have a large impact on the software engineering process by decreasing the amount of time and effort required to maintain a high level of test coverage. This increases the quality of the resulting software while decreasing the associated effort. In this paper, we present TestNMT, an experimental approach to test generation using neural machine translation. TestNMT aims to learn to translate from functions to tests, allowing a developer to generate an approximate test for a given function, which can then be adapted to produce the final desired test. We also present a preliminary quantitative and qualitative evaluation of TestNMT in both cross-project and within-project scenarios. This evaluation shows that TestNMT is potentially useful in the within-project scenario, where it achieves a maximum BLEU score of 21.2 and a maximum ROUGE-L score of 38.67, and is capable of generating approximate tests that are easy to adapt into working tests. @InProceedings{NL4SE18p30, author = {Robert White and Jens Krinke}, title = {TestNMT: Function-to-Test Neural Machine Translation}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {30--33}, doi = {10.1145/3283812.3283823}, year = {2018}, }
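For reference, the BLEU score reported above measures n-gram overlap between a generated test and a reference test. A minimal sketch using NLTK; the token sequences are invented:

    # Score a generated test against a reference test with BLEU.
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    reference = ["assertEquals", "(", "expected", ",", "add", "(", "a", ",", "b", ")", ")"]
    generated = ["assertEquals", "(", "expected", ",", "add", "(", "x", ",", "y", ")", ")"]

    score = sentence_bleu([reference], generated,
                          smoothing_function=SmoothingFunction().method1)
    print(f"BLEU = {100 * score:.1f}")  # reported on a 0-100 scale, as in the paper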
Leng, Yue
NL4SE '18: "LinkSO: A Dataset for Learning ..."
LinkSO: A Dataset for Learning to Retrieve Similar Question Answer Pairs on Software Development Forums
Xueqing Liu, Chi Wang, Yue Leng, and ChengXiang Zhai (University of Illinois at Urbana-Champaign, USA; Microsoft, USA) We present LinkSO, a dataset for learning to rank similar questions on Stack Overflow. Stack Overflow contains a massive amount of high-quality, crowd-sourced question links, which provides a great opportunity for evaluating retrieval algorithms for community-based question answering (cQA) archives and for learning to rank over such archives. However, due to the existence of missing links, one question is whether question links can be readily used as relevance judgments for evaluation. We study this question by measuring the closeness between question links and relevance judgments, and we find their agreement rates range from 80% to 88%. We conduct an empirical study of the performance of existing work on LinkSO. While existing work focuses on non-learning approaches, our results reveal that learning-based approaches have great potential to further improve retrieval performance. @InProceedings{NL4SE18p2, author = {Xueqing Liu and Chi Wang and Yue Leng and ChengXiang Zhai}, title = {LinkSO: A Dataset for Learning to Retrieve Similar Question Answer Pairs on Software Development Forums}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {2--5}, doi = {10.1145/3283812.3283815}, year = {2018}, }
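As an example of the kind of non-learning baseline such a study evaluates, a minimal TF-IDF retrieval sketch that ranks candidate questions by cosine similarity to a query question. This is a generic baseline chosen for illustration, not one of the paper’s systems:

    # Rank Stack Overflow questions by TF-IDF cosine similarity (scikit-learn).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        "how do I sort a dict by value in python",
        "sorting a python dictionary by its values",
        "how to read a file line by line in java",
    ]
    query = ["sort dictionary by value python"]

    vectorizer = TfidfVectorizer()
    doc_vecs = vectorizer.fit_transform(corpus)
    sims = cosine_similarity(vectorizer.transform(query), doc_vecs)[0]
    for score, question in sorted(zip(sims, corpus), reverse=True):
        print(f"{score:.2f}  {question}")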
Liu, Xueqing
NL4SE '18: "LinkSO: A Dataset for Learning ..."
LinkSO: A Dataset for Learning to Retrieve Similar Question Answer Pairs on Software Development Forums
Xueqing Liu, Chi Wang, Yue Leng, and ChengXiang Zhai (University of Illinois at Urbana-Champaign, USA; Microsoft, USA) We present LinkSO, a dataset for learning to rank similar questions on Stack Overflow. Stack Overflow contains a massive amount of high-quality, crowd-sourced question links, which provides a great opportunity for evaluating retrieval algorithms for community-based question answering (cQA) archives and for learning to rank over such archives. However, due to the existence of missing links, one question is whether question links can be readily used as relevance judgments for evaluation. We study this question by measuring the closeness between question links and relevance judgments, and we find their agreement rates range from 80% to 88%. We conduct an empirical study of the performance of existing work on LinkSO. While existing work focuses on non-learning approaches, our results reveal that learning-based approaches have great potential to further improve retrieval performance. @InProceedings{NL4SE18p2, author = {Xueqing Liu and Chi Wang and Yue Leng and ChengXiang Zhai}, title = {LinkSO: A Dataset for Learning to Retrieve Similar Question Answer Pairs on Software Development Forums}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {2--5}, doi = {10.1145/3283812.3283815}, year = {2018}, }
Mannan, Umme Ayda
NL4SE '18: "Towards Understanding Code ..."
Towards Understanding Code Readability and Its Impact on Design Quality
Umme Ayda Mannan, Iftekhar Ahmed, and Anita Sarma (Oregon State University, USA; University of California at Irvine, USA) Readability of code is commonly believed to impact the overall quality of software. Poor readability not only hinders developers from understanding what the code is doing but can also cause developers to make sub-optimal changes and introduce bugs. Developers recognize this risk and list readability among their top information needs. Researchers have modeled readability scores; however, thus far, no one has investigated how readability evolves over time and how that impacts the design quality of software. We perform a large-scale study of 49 open source Java projects, spanning 8296 commits and 1766 files. We find that readability is high in open source projects and, unlike design quality, does not fluctuate over a project’s lifetime. Readability also has a non-significant correlation of 0.151 (Kendall’s τ) with code smell count (an indicator of design quality). Since the current readability measure is unable to capture the increased difficulty of reading code caused by degraded design quality, our results hint at the need for better measurement and modeling of code readability. @InProceedings{NL4SE18p18, author = {Umme Ayda Mannan and Iftekhar Ahmed and Anita Sarma}, title = {Towards Understanding Code Readability and Its Impact on Design Quality}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {18--21}, doi = {10.1145/3283812.3283820}, year = {2018}, }
Matskevich, Sergey
NL4SE '18: "Generating Comments from Source ..."
Generating Comments from Source Code with CCGs
Sergey Matskevich and Colin S. Gordon (Drexel University, USA) Good comments help developers understand software faster and maintain it better. However, comments are often missing, inaccurate, or out of date. Many of these problems can be avoided by automatic comment generation. This paper presents a method to generate informative comments directly from the source code using general-purpose techniques from natural language processing. We generate comments using an existing natural language model that couples words with their individual logical meaning and grammar rules, allowing comment generation to proceed by search from declarative descriptions of program text. We evaluate our algorithm on several classic algorithms implemented in Python. @InProceedings{NL4SE18p26, author = {Sergey Matskevich and Colin S. Gordon}, title = {Generating Comments from Source Code with CCGs}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {26--29}, doi = {10.1145/3283812.3283822}, year = {2018}, }
Menzies, Tim
NL4SE '18: "Total Recall, Language Processing, ..."
Total Recall, Language Processing, and Software Engineering
Zhe Yu and Tim Menzies (North Carolina State University, USA) A broad class of software engineering problems can be generalized as the "total recall problem". This short paper claims that identifying and exploring total recall problems in software engineering is an important task with wide applicability. To make that case, we show that by applying and adapting state-of-the-art active learning and natural language processing algorithms for solving the total recall problem, two important software engineering tasks can also be addressed: (a) supporting large literature reviews and (b) identifying software security vulnerabilities. Furthermore, we conjecture that (c) test case prioritization and (d) static warning identification can also be generalized as, and benefit from, the total recall problem. The widespread applicability of "total recall" to software engineering suggests that there exists some underlying framework that encompasses not just natural language processing but a wide range of important software engineering tasks. @InProceedings{NL4SE18p10, author = {Zhe Yu and Tim Menzies}, title = {Total Recall, Language Processing, and Software Engineering}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {10--13}, doi = {10.1145/3283812.3283818}, year = {2018}, }
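To make the total recall setup concrete, a minimal sketch of the active learning loop it relies on: train on the labels gathered so far, then have a human review the unlabeled document the model ranks most likely relevant. The model, features, and data are generic stand-ins, not the authors’ tool:

    # Active-learning loop for high-recall review (generic stand-in).
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    docs = ["buffer overflow in parser", "update readme wording",
            "sql injection via login form", "rename internal variable",
            "heap corruption on malformed input"]
    labels = {0: 1, 1: 0}  # doc index -> 1 if relevant (e.g., a vulnerability)

    X = TfidfVectorizer().fit_transform(docs)
    RELEVANT = ("injection", "overflow", "corruption")  # simulated human oracle
    while len(labels) < len(docs):
        model = LogisticRegression().fit(X[list(labels)], list(labels.values()))
        unlabeled = [i for i in range(len(docs)) if i not in labels]
        probs = model.predict_proba(X[unlabeled])[:, 1]
        pick = unlabeled[int(np.argmax(probs))]  # most-likely-relevant first
        labels[pick] = int(any(w in docs[pick] for w in RELEVANT))
    print("found", sum(labels.values()), "relevant docs")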
Mirakhorli, Mehdi
NL4SE '18: "A Fine-Grained Approach for ..."
A Fine-Grained Approach for Automated Conversion of JUnit Assertions to English
Danielle Gonzalez, Suzanne Prentice, and Mehdi Mirakhorli (Rochester Institute of Technology, USA; University of South Carolina, USA) Converting source or unit test code to English has been shown to improve the maintainability, understandability, and analysis of software and tests. Code summarizers identify 'important' statements in the source/tests and convert them to easily understood English sentences using static analysis and NLP techniques. However, current test summarization approaches handle only a subset of the variation and customization allowed in the JUnit assert API (a critical component of test cases), which may affect the accuracy of conversions. In this paper, we present our work towards improving JUnit test summarization with a detailed process for converting a total of 45 unique JUnit assertions to English, including 37 previously-unhandled variations of the assertThat method. This process has also been implemented and released as the AssertConvert tool. Initial evaluations have shown that this tool generates English conversions that accurately represent a wide variety of assertion statements which could be used for code summarization or other NLP analyses. @InProceedings{NL4SE18p14, author = {Danielle Gonzalez and Suzanne Prentice and Mehdi Mirakhorli}, title = {A Fine-Grained Approach for Automated Conversion of JUnit Assertions to English}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {14--17}, doi = {10.1145/3283812.3283819}, year = {2018}, }
Prentice, Suzanne
NL4SE '18: "A Fine-Grained Approach for ..."
A Fine-Grained Approach for Automated Conversion of JUnit Assertions to English
Danielle Gonzalez, Suzanne Prentice, and Mehdi Mirakhorli (Rochester Institute of Technology, USA; University of South Carolina, USA) Converting source or unit test code to English has been shown to improve the maintainability, understandability, and analysis of software and tests. Code summarizers identify 'important' statements in the source/tests and convert them to easily understood English sentences using static analysis and NLP techniques. However, current test summarization approaches handle only a subset of the variation and customization allowed in the JUnit assert API (a critical component of test cases), which may affect the accuracy of conversions. In this paper, we present our work towards improving JUnit test summarization with a detailed process for converting a total of 45 unique JUnit assertions to English, including 37 previously-unhandled variations of the assertThat method. This process has also been implemented and released as the AssertConvert tool. Initial evaluations have shown that this tool generates English conversions that accurately represent a wide variety of assertion statements which could be used for code summarization or other NLP analyses. @InProceedings{NL4SE18p14, author = {Danielle Gonzalez and Suzanne Prentice and Mehdi Mirakhorli}, title = {A Fine-Grained Approach for Automated Conversion of JUnit Assertions to English}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {14--17}, doi = {10.1145/3283812.3283819}, year = {2018}, }
Price, Katey A.
NL4SE '18: "3CAP: Categorizing the Cognitive ..."
3CAP: Categorizing the Cognitive Capabilities of Alzheimer’s Patients in a Smart Home Environment
Kate M. Bowers, Reihaneh H. Hariri, and Katey A. Price (Oakland University, USA; Albion College, USA) Alzheimer’s disease is a progressive illness, with no effective cure or treatment, that affects more than 5.5 million people in the United States. Symptoms of the disease include declines in memory and speech abilities and increases in aggression and insomnia. Recent research suggests that NLP techniques can detect early cognitive decline as well as monitor the rate of decline over time. The processed data can be used in a smart home environment to enhance the level of home care for Alzheimer’s patients. This paper proposes early-stage research in software engineering and natural language processing for quantifying and evaluating a patient’s cognitive state to determine the required level of support in a smart home. @InProceedings{NL4SE18p34, author = {Kate M. Bowers and Reihaneh H. Hariri and Katey A. Price}, title = {3CAP: Categorizing the Cognitive Capabilities of Alzheimer’s Patients in a Smart Home Environment}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {34--37}, doi = {10.1145/3283812.3283824}, year = {2018}, }
Sarma, Anita
NL4SE '18: "Towards Understanding Code ..."
Towards Understanding Code Readability and Its Impact on Design Quality
Umme Ayda Mannan, Iftekhar Ahmed, and Anita Sarma (Oregon State University, USA; University of California at Irvine, USA) Readability of code is commonly believed to impact the overall quality of software. Poor readability not only hinders developers from understanding what the code is doing but can also cause developers to make sub-optimal changes and introduce bugs. Developers recognize this risk and list readability among their top information needs. Researchers have modeled readability scores; however, thus far, no one has investigated how readability evolves over time and how that impacts the design quality of software. We perform a large-scale study of 49 open source Java projects, spanning 8296 commits and 1766 files. We find that readability is high in open source projects and, unlike design quality, does not fluctuate over a project’s lifetime. Readability also has a non-significant correlation of 0.151 (Kendall’s τ) with code smell count (an indicator of design quality). Since the current readability measure is unable to capture the increased difficulty of reading code caused by degraded design quality, our results hint at the need for better measurement and modeling of code readability. @InProceedings{NL4SE18p18, author = {Umme Ayda Mannan and Iftekhar Ahmed and Anita Sarma}, title = {Towards Understanding Code Readability and Its Impact on Design Quality}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {18--21}, doi = {10.1145/3283812.3283820}, year = {2018}, }
Schnecke, Marko
NL4SE '18: "Two Perspectives on Software ..."
Two Perspectives on Software Documentation Quality in Stack Overflow
Mathias Ellmann and Marko Schnecke (University of Hamburg, Germany) This paper studies software documentation quality in Stack Overflow from two perspectives: that of the questioners, who accept answers, and that of the community, which votes for answers. We show what developers can do to increase the chance that their questions or answers are accepted by the community or by the questioners. We found differing expectations about what information, such as code or images, should be included in a question or an answer. We evaluated six quality indicators (such as Flesch Reading Ease or images) that a developer should consider before posting a question or an answer. In addition, we found different quality indicators for different types of questions, in particular error, discrepancy, and how-to questions. Finally, we use a supervised machine-learning algorithm to predict when an answer will be accepted or voted for. @InProceedings{NL4SE18p6, author = {Mathias Ellmann and Marko Schnecke}, title = {Two Perspectives on Software Documentation Quality in Stack Overflow}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {6--9}, doi = {10.1145/3283812.3283816}, year = {2018}, }
Wang, Chi
NL4SE '18: "LinkSO: A Dataset for Learning ..."
LinkSO: A Dataset for Learning to Retrieve Similar Question Answer Pairs on Software Development Forums
Xueqing Liu, Chi Wang, Yue Leng, and ChengXiang Zhai (University of Illinois at Urbana-Champaign, USA; Microsoft, USA) We present LinkSO, a dataset for learning to rank similar questions on Stack Overflow. Stack Overflow contains a massive amount of high-quality, crowd-sourced question links, which provides a great opportunity for evaluating retrieval algorithms for community-based question answering (cQA) archives and for learning to rank over such archives. However, due to the existence of missing links, one question is whether question links can be readily used as relevance judgments for evaluation. We study this question by measuring the closeness between question links and relevance judgments, and we find their agreement rates range from 80% to 88%. We conduct an empirical study of the performance of existing work on LinkSO. While existing work focuses on non-learning approaches, our results reveal that learning-based approaches have great potential to further improve retrieval performance. @InProceedings{NL4SE18p2, author = {Xueqing Liu and Chi Wang and Yue Leng and ChengXiang Zhai}, title = {LinkSO: A Dataset for Learning to Retrieve Similar Question Answer Pairs on Software Development Forums}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {2--5}, doi = {10.1145/3283812.3283815}, year = {2018}, }
White, Robert
NL4SE '18: "TestNMT: Function-to-Test ..."
TestNMT: Function-to-Test Neural Machine Translation
Robert White and Jens Krinke (University College London, UK) Test generation can have a large impact on the software engineering process by decreasing the amount of time and effort required to maintain a high level of test coverage. This increases the quality of the resulting software while decreasing the associated effort. In this paper, we present TestNMT, an experimental approach to test generation using neural machine translation. TestNMT aims to learn to translate from functions to tests, allowing a developer to generate an approximate test for a given function, which can then be adapted to produce the final desired test. We also present a preliminary quantitative and qualitative evaluation of TestNMT in both cross-project and within-project scenarios. This evaluation shows that TestNMT is potentially useful in the within-project scenario, where it achieves a maximum BLEU score of 21.2 and a maximum ROUGE-L score of 38.67, and is capable of generating approximate tests that are easy to adapt into working tests. @InProceedings{NL4SE18p30, author = {Robert White and Jens Krinke}, title = {TestNMT: Function-to-Test Neural Machine Translation}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {30--33}, doi = {10.1145/3283812.3283823}, year = {2018}, }
Yu, Zhe
NL4SE '18: "Total Recall, Language Processing, ..."
Total Recall, Language Processing, and Software Engineering
Zhe Yu and Tim Menzies (North Carolina State University, USA) A broad class of software engineering problems can be generalized as the "total recall problem". This short paper claims that identifying and exploring total recall problems in software engineering is an important task with wide applicability. To make that case, we show that by applying and adapting state-of-the-art active learning and natural language processing algorithms for solving the total recall problem, two important software engineering tasks can also be addressed: (a) supporting large literature reviews and (b) identifying software security vulnerabilities. Furthermore, we conjecture that (c) test case prioritization and (d) static warning identification can also be generalized as, and benefit from, the total recall problem. The widespread applicability of "total recall" to software engineering suggests that there exists some underlying framework that encompasses not just natural language processing but a wide range of important software engineering tasks. @InProceedings{NL4SE18p10, author = {Zhe Yu and Tim Menzies}, title = {Total Recall, Language Processing, and Software Engineering}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {10--13}, doi = {10.1145/3283812.3283818}, year = {2018}, }
Zhai, ChengXiang
NL4SE '18: "LinkSO: A Dataset for Learning ..."
LinkSO: A Dataset for Learning to Retrieve Similar Question Answer Pairs on Software Development Forums
Xueqing Liu, Chi Wang, Yue Leng, and ChengXiang Zhai (University of Illinois at Urbana-Champaign, USA; Microsoft, USA) We present LinkSO, a dataset for learning to rank similar questions on Stack Overflow. Stack Overflow contains a massive amount of high-quality, crowd-sourced question links, which provides a great opportunity for evaluating retrieval algorithms for community-based question answering (cQA) archives and for learning to rank over such archives. However, due to the existence of missing links, one question is whether question links can be readily used as relevance judgments for evaluation. We study this question by measuring the closeness between question links and relevance judgments, and we find their agreement rates range from 80% to 88%. We conduct an empirical study of the performance of existing work on LinkSO. While existing work focuses on non-learning approaches, our results reveal that learning-based approaches have great potential to further improve retrieval performance. @InProceedings{NL4SE18p2, author = {Xueqing Liu and Chi Wang and Yue Leng and ChengXiang Zhai}, title = {LinkSO: A Dataset for Learning to Retrieve Similar Question Answer Pairs on Software Development Forums}, booktitle = {Proc.\ NL4SE}, publisher = {ACM}, pages = {2--5}, doi = {10.1145/3283812.3283815}, year = {2018}, }
24 authors