Workshop AISTA 2021 – Author Index

Bodden, Eric
AISTA '21: "Automated Cell Header Generator ..."
Automated Cell Header Generator for Jupyter Notebooks
Ashwin Prasad Shivarpatna Venkatesh and Eric Bodden (University of Paderborn, Germany; Fraunhofer IEM, Germany)

Jupyter notebooks are now widely adopted by data analysts, as they provide a convenient environment for presenting computational results in a literate-programming document that combines code snippets, rich text, and inline visualizations. Literate-programming documents are intended to be computational narratives supplemented with self-explanatory text, but recent studies have shown that this is lacking in practice. Efforts in the software engineering community to increase code comprehension in literate programming are limited. To address this, as a first step, this paper presents a prototype Jupyter notebook annotator, HeaderGen, that automatically creates a narrative structure in notebooks by classifying and annotating code cells according to the machine learning workflow. HeaderGen generates a markdown cell header for each code cell by statically analyzing the notebook and, in addition, links these cell headers to a clickable table of contents for easier navigation. Further, we discuss our vision and the opportunities opened up by this prototype.

@InProceedings{AISTA21p17,
  author    = {Ashwin Prasad Shivarpatna Venkatesh and Eric Bodden},
  title     = {Automated Cell Header Generator for Jupyter Notebooks},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {17--20},
  doi       = {10.1145/3464968.3468410},
  year      = {2021},
}
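As an illustration of the kind of annotation the abstract describes, the following is a minimal sketch, not the authors' implementation: it classifies code cells with a simple keyword table instead of static analysis, inserts a markdown header above each cell, and prepends a clickable table of contents using the nbformat library. The phase/keyword table and all function names are assumptions made for this example.

```python
# Minimal HeaderGen-style annotation sketch (illustrative only): keyword-based
# cell classification stands in for the static analysis described in the paper.
import nbformat
from nbformat.v4 import new_markdown_cell

# Hypothetical mapping from ML-workflow phase to tell-tale keywords.
PHASES = {
    "Data Loading": ("read_csv", "load_", "open("),
    "Preprocessing": ("dropna", "fillna", "train_test_split", "StandardScaler"),
    "Model Training": ("fit(", "compile(", "Sequential("),
    "Evaluation": ("score(", "predict(", "confusion_matrix"),
    "Visualization": ("plt.", "plot(", "sns."),
}

def classify(source: str) -> str:
    """Return the first phase whose keywords occur in the cell source."""
    for phase, keywords in PHASES.items():
        if any(k in source for k in keywords):
            return phase
    return "Other"

def annotate(path_in: str, path_out: str) -> None:
    nb = nbformat.read(path_in, as_version=4)
    cells, toc = [], ["# Table of Contents"]
    for cell in nb.cells:
        if cell.cell_type == "code":
            phase = classify(cell.source)
            toc.append(f"- [{phase}](#{phase.lower().replace(' ', '-')})")
            cells.append(new_markdown_cell(f"## {phase}"))  # cell header
        cells.append(cell)
    nb.cells = [new_markdown_cell("\n".join(toc))] + cells
    nbformat.write(nb, path_out)

# Example: annotate("analysis.ipynb", "analysis_annotated.ipynb")
```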

Chisalita-Cretu, Camelia
AISTA '21: "On the Use of Evolutionary ..."
On the Use of Evolutionary Algorithms for Test Case Prioritization in Regression Testing Considering Requirements Dependencies
Andreea Vescan, Camelia Chisalita-Cretu, Camelia Serban, and Laura Diosan (Babes-Bolyai University, Romania)

Software systems undergo repeated modifications in order to satisfy changing business requirements. To ensure that these changes do not affect the system's proper functioning, the parts affected by the changes need to be retested, minimizing the negative impact of the modifications on other parts of the software. In this research, we investigate how different optimization techniques (with various criteria) can improve the effectiveness of the testing activity, in particular the effectiveness of test case prioritization. The most efficient test schedules are identified by using either a Greedy algorithm or a Genetic Algorithm, optimizing a quality function that incorporates one or more criteria. Both functional requirements (with the existing dependencies between them) and non-functional requirements (i.e., quality attributes of test cases) are integrated into the quality assessment of a test order. The experiments considered various combinations of criteria (faults, costs, and number of test cases) and were applied to both theoretical case studies and a real-world benchmark. They show that the Genetic Algorithm outperforms the Greedy algorithm on all considered criteria.

@InProceedings{AISTA21p1,
  author    = {Andreea Vescan and Camelia Chisalita-Cretu and Camelia Serban and Laura Diosan},
  title     = {On the Use of Evolutionary Algorithms for Test Case Prioritization in Regression Testing Considering Requirements Dependencies},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {1--8},
  doi       = {10.1145/3464968.3468407},
  year      = {2021},
}
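To make the optimization setup concrete, here is a toy sketch of both strategies on invented data. The quality function (rewarding early, cheap fault detection) and the data layout are assumptions for illustration, not the paper's exact multi-criteria function or its requirement-dependency model.

```python
# Toy test case prioritization: greedy vs. a simple Genetic Algorithm.
import random

# Hypothetical test suite: faults each test detects and its execution cost.
tests = {
    "t1": {"faults": {1, 3}, "cost": 2.0},
    "t2": {"faults": {2}, "cost": 1.0},
    "t3": {"faults": {1, 2, 4}, "cost": 4.0},
    "t4": {"faults": {5}, "cost": 0.5},
}

def fitness(order):
    """Reward orders that detect new faults early and cheaply (illustrative)."""
    detected, elapsed, score = set(), 0.0, 0.0
    for name in order:
        t = tests[name]
        elapsed += t["cost"]
        new = t["faults"] - detected
        detected |= new
        score += len(new) / elapsed
    return score

def greedy_order():
    """Greedy: always pick the test with the best new-faults-per-cost ratio."""
    remaining, detected, order = set(tests), set(), []
    while remaining:
        best = max(remaining,
                   key=lambda n: len(tests[n]["faults"] - detected) / tests[n]["cost"])
        order.append(best)
        detected |= tests[best]["faults"]
        remaining.remove(best)
    return order

def genetic_order(pop_size=20, generations=50):
    """Genetic Algorithm over permutations, using swap mutation only."""
    names = list(tests)
    population = [random.sample(names, len(names)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]      # elitist selection
        children = []
        for parent in parents:
            child = parent[:]
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print("greedy :", greedy_order())
print("genetic:", genetic_order())
```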

Chouchane, Amine
AISTA '21: "Impact of Programming Languages ..."
Impact of Programming Languages on Machine Learning Bugs
Sebastian Sztwiertnia, Maximilian Grübel, Amine Chouchane, Daniel Sokolowski, Krishna Narasimhan, and Mira Mezini (TU Darmstadt, Germany)

Machine learning (ML) is becoming ubiquitous in modern software. Still, its use is challenging for software developers. So far, research has focused on ML libraries to find and mitigate these challenges. However, there is initial evidence that programming languages also contribute to the challenges, identifiable in different distributions of bugs in ML programs. To fill this research gap, we propose the first empirical study on the impact of programming languages on bugs in ML programs. We plan to analyze software from GitHub and related discussions in GitHub issues and Stack Overflow for bug distributions in ML programs, aiming to identify correlations with the chosen programming language, its features, and the application domain. The study's results will enable better-targeted use of available programming language technology in ML programs, preventing bugs, reducing errors, and speeding up development.

@InProceedings{AISTA21p9,
  author    = {Sebastian Sztwiertnia and Maximilian Grübel and Amine Chouchane and Daniel Sokolowski and Krishna Narasimhan and Mira Mezini},
  title     = {Impact of Programming Languages on Machine Learning Bugs},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {9--12},
  doi       = {10.1145/3464968.3468408},
  year      = {2021},
}
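A hedged sketch of the kind of mining step such a study could start from: counting bug-labelled GitHub issues per repository language via the public GitHub search API. The query terms, the language list, and treating the "bug" label as ground truth are illustrative assumptions; the abstract does not describe the authors' methodology at this level of detail.

```python
# Illustrative GitHub issue mining: count bug-labelled issues per repository
# language. Unauthenticated requests are heavily rate-limited by GitHub.
import requests

LANGUAGES = ["Python", "R", "Julia", "Java"]  # assumed language set

def bug_issue_count(language: str) -> int:
    url = "https://api.github.com/search/issues"
    query = f'label:bug language:{language} "machine learning"'
    resp = requests.get(url, params={"q": query, "per_page": 1}, timeout=10)
    resp.raise_for_status()
    return resp.json()["total_count"]

for lang in LANGUAGES:
    print(lang, bug_issue_count(lang))
```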

Diosan, Laura
AISTA '21: "On the Use of Evolutionary ..."
On the Use of Evolutionary Algorithms for Test Case Prioritization in Regression Testing Considering Requirements Dependencies
Andreea Vescan, Camelia Chisalita-Cretu, Camelia Serban, and Laura Diosan (Babes-Bolyai University, Romania)

Software systems undergo repeated modifications in order to satisfy changing business requirements. To ensure that these changes do not affect the system's proper functioning, the parts affected by the changes need to be retested, minimizing the negative impact of the modifications on other parts of the software. In this research, we investigate how different optimization techniques (with various criteria) can improve the effectiveness of the testing activity, in particular the effectiveness of test case prioritization. The most efficient test schedules are identified by using either a Greedy algorithm or a Genetic Algorithm, optimizing a quality function that incorporates one or more criteria. Both functional requirements (with the existing dependencies between them) and non-functional requirements (i.e., quality attributes of test cases) are integrated into the quality assessment of a test order. The experiments considered various combinations of criteria (faults, costs, and number of test cases) and were applied to both theoretical case studies and a real-world benchmark. They show that the Genetic Algorithm outperforms the Greedy algorithm on all considered criteria.

@InProceedings{AISTA21p1,
  author    = {Andreea Vescan and Camelia Chisalita-Cretu and Camelia Serban and Laura Diosan},
  title     = {On the Use of Evolutionary Algorithms for Test Case Prioritization in Regression Testing Considering Requirements Dependencies},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {1--8},
  doi       = {10.1145/3464968.3468407},
  year      = {2021},
}

Grübel, Maximilian
AISTA '21: "Impact of Programming Languages ..."
Impact of Programming Languages on Machine Learning Bugs
Sebastian Sztwiertnia, Maximilian Grübel, Amine Chouchane, Daniel Sokolowski, Krishna Narasimhan, and Mira Mezini (TU Darmstadt, Germany)

Machine learning (ML) is becoming ubiquitous in modern software. Still, its use is challenging for software developers. So far, research has focused on ML libraries to find and mitigate these challenges. However, there is initial evidence that programming languages also contribute to the challenges, identifiable in different distributions of bugs in ML programs. To fill this research gap, we propose the first empirical study on the impact of programming languages on bugs in ML programs. We plan to analyze software from GitHub and related discussions in GitHub issues and Stack Overflow for bug distributions in ML programs, aiming to identify correlations with the chosen programming language, its features, and the application domain. The study's results will enable better-targeted use of available programming language technology in ML programs, preventing bugs, reducing errors, and speeding up development.

@InProceedings{AISTA21p9,
  author    = {Sebastian Sztwiertnia and Maximilian Grübel and Amine Chouchane and Daniel Sokolowski and Krishna Narasimhan and Mira Mezini},
  title     = {Impact of Programming Languages on Machine Learning Bugs},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {9--12},
  doi       = {10.1145/3464968.3468408},
  year      = {2021},
}

Jafarinejad, Foad
AISTA '21: "NerdBug: Automated Bug Detection ..."
NerdBug: Automated Bug Detection in Neural Networks
Foad Jafarinejad, Krishna Narasimhan, and Mira Mezini (TU Darmstadt, Germany)

Despite the exponential growth of deep learning software during the last decade, there is a lack of tools to test and debug issues in deep learning programs. Current static analysis tools do not address the challenges specific to deep learning observed by past research on bugs in this area. Existing deep learning bug detection tools focus on specific issues such as shape mismatches. In this paper, we present a vision for an abstraction-based approach to detecting deep learning bugs, along with our plan to evaluate the approach. The motivation behind the abstraction-based approach is to build an intermediate version of the neural network that can be analyzed at development time, providing the kind of live feedback programmers are used to for other kinds of bugs.

@InProceedings{AISTA21p13,
  author    = {Foad Jafarinejad and Krishna Narasimhan and Mira Mezini},
  title     = {NerdBug: Automated Bug Detection in Neural Networks},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {13--16},
  doi       = {10.1145/3464968.3468409},
  year      = {2021},
}
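To illustrate what an analyzable intermediate abstraction of a network could look like, here is a small sketch: a list of layer records with declared input and output dimensions, checked for shape consistency without running the model. The record format and the check are invented for this example and are not NerdBug's actual abstraction.

```python
# Toy "shape abstraction": propagate declared tensor dimensions through a list
# of layer records without executing the network, so mismatches can be flagged
# at development time. Illustrative only; not NerdBug's design.

layers = [
    {"name": "dense_1", "in": 784, "out": 128},
    {"name": "dense_2", "in": 128, "out": 64},
    {"name": "dense_3", "in": 32, "out": 10},  # bug: expects 32, receives 64
]

def check_shapes(layers, input_dim):
    current = input_dim
    for layer in layers:
        if layer["in"] != current:
            print(f'{layer["name"]}: expected input of size {layer["in"]}, '
                  f'but the previous layer produces {current}')
        current = layer["out"]

check_shapes(layers, input_dim=784)
# -> dense_3: expected input of size 32, but the previous layer produces 64
```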

Mezini, Mira
AISTA '21: "NerdBug: Automated Bug Detection ..."
NerdBug: Automated Bug Detection in Neural Networks
Foad Jafarinejad, Krishna Narasimhan, and Mira Mezini (TU Darmstadt, Germany)

Despite the exponential growth of deep learning software during the last decade, there is a lack of tools to test and debug issues in deep learning programs. Current static analysis tools do not address the challenges specific to deep learning observed by past research on bugs in this area. Existing deep learning bug detection tools focus on specific issues such as shape mismatches. In this paper, we present a vision for an abstraction-based approach to detecting deep learning bugs, along with our plan to evaluate the approach. The motivation behind the abstraction-based approach is to build an intermediate version of the neural network that can be analyzed at development time, providing the kind of live feedback programmers are used to for other kinds of bugs.

@InProceedings{AISTA21p13,
  author    = {Foad Jafarinejad and Krishna Narasimhan and Mira Mezini},
  title     = {NerdBug: Automated Bug Detection in Neural Networks},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {13--16},
  doi       = {10.1145/3464968.3468409},
  year      = {2021},
}

AISTA '21: "Impact of Programming Languages ..."
Impact of Programming Languages on Machine Learning Bugs
Sebastian Sztwiertnia, Maximilian Grübel, Amine Chouchane, Daniel Sokolowski, Krishna Narasimhan, and Mira Mezini (TU Darmstadt, Germany)

Machine learning (ML) is becoming ubiquitous in modern software. Still, its use is challenging for software developers. So far, research has focused on ML libraries to find and mitigate these challenges. However, there is initial evidence that programming languages also contribute to the challenges, identifiable in different distributions of bugs in ML programs. To fill this research gap, we propose the first empirical study on the impact of programming languages on bugs in ML programs. We plan to analyze software from GitHub and related discussions in GitHub issues and Stack Overflow for bug distributions in ML programs, aiming to identify correlations with the chosen programming language, its features, and the application domain. The study's results will enable better-targeted use of available programming language technology in ML programs, preventing bugs, reducing errors, and speeding up development.

@InProceedings{AISTA21p9,
  author    = {Sebastian Sztwiertnia and Maximilian Grübel and Amine Chouchane and Daniel Sokolowski and Krishna Narasimhan and Mira Mezini},
  title     = {Impact of Programming Languages on Machine Learning Bugs},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {9--12},
  doi       = {10.1145/3464968.3468408},
  year      = {2021},
}

Narasimhan, Krishna
AISTA '21: "NerdBug: Automated Bug Detection ..."
NerdBug: Automated Bug Detection in Neural Networks
Foad Jafarinejad, Krishna Narasimhan, and Mira Mezini (TU Darmstadt, Germany)

Despite the exponential growth of deep learning software during the last decade, there is a lack of tools to test and debug issues in deep learning programs. Current static analysis tools do not address the challenges specific to deep learning observed by past research on bugs in this area. Existing deep learning bug detection tools focus on specific issues such as shape mismatches. In this paper, we present a vision for an abstraction-based approach to detecting deep learning bugs, along with our plan to evaluate the approach. The motivation behind the abstraction-based approach is to build an intermediate version of the neural network that can be analyzed at development time, providing the kind of live feedback programmers are used to for other kinds of bugs.

@InProceedings{AISTA21p13,
  author    = {Foad Jafarinejad and Krishna Narasimhan and Mira Mezini},
  title     = {NerdBug: Automated Bug Detection in Neural Networks},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {13--16},
  doi       = {10.1145/3464968.3468409},
  year      = {2021},
}

AISTA '21: "Impact of Programming Languages ..."
Impact of Programming Languages on Machine Learning Bugs
Sebastian Sztwiertnia, Maximilian Grübel, Amine Chouchane, Daniel Sokolowski, Krishna Narasimhan, and Mira Mezini (TU Darmstadt, Germany)

Machine learning (ML) is becoming ubiquitous in modern software. Still, its use is challenging for software developers. So far, research has focused on ML libraries to find and mitigate these challenges. However, there is initial evidence that programming languages also contribute to the challenges, identifiable in different distributions of bugs in ML programs. To fill this research gap, we propose the first empirical study on the impact of programming languages on bugs in ML programs. We plan to analyze software from GitHub and related discussions in GitHub issues and Stack Overflow for bug distributions in ML programs, aiming to identify correlations with the chosen programming language, its features, and the application domain. The study's results will enable better-targeted use of available programming language technology in ML programs, preventing bugs, reducing errors, and speeding up development.

@InProceedings{AISTA21p9,
  author    = {Sebastian Sztwiertnia and Maximilian Grübel and Amine Chouchane and Daniel Sokolowski and Krishna Narasimhan and Mira Mezini},
  title     = {Impact of Programming Languages on Machine Learning Bugs},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {9--12},
  doi       = {10.1145/3464968.3468408},
  year      = {2021},
}

Serban, Camelia
AISTA '21: "On the Use of Evolutionary ..."
On the Use of Evolutionary Algorithms for Test Case Prioritization in Regression Testing Considering Requirements Dependencies
Andreea Vescan, Camelia Chisalita-Cretu, Camelia Serban, and Laura Diosan (Babes-Bolyai University, Romania)

Software systems undergo repeated modifications in order to satisfy changing business requirements. To ensure that these changes do not affect the system's proper functioning, the parts affected by the changes need to be retested, minimizing the negative impact of the modifications on other parts of the software. In this research, we investigate how different optimization techniques (with various criteria) can improve the effectiveness of the testing activity, in particular the effectiveness of test case prioritization. The most efficient test schedules are identified by using either a Greedy algorithm or a Genetic Algorithm, optimizing a quality function that incorporates one or more criteria. Both functional requirements (with the existing dependencies between them) and non-functional requirements (i.e., quality attributes of test cases) are integrated into the quality assessment of a test order. The experiments considered various combinations of criteria (faults, costs, and number of test cases) and were applied to both theoretical case studies and a real-world benchmark. They show that the Genetic Algorithm outperforms the Greedy algorithm on all considered criteria.

@InProceedings{AISTA21p1,
  author    = {Andreea Vescan and Camelia Chisalita-Cretu and Camelia Serban and Laura Diosan},
  title     = {On the Use of Evolutionary Algorithms for Test Case Prioritization in Regression Testing Considering Requirements Dependencies},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {1--8},
  doi       = {10.1145/3464968.3468407},
  year      = {2021},
}

Sokolowski, Daniel
AISTA '21: "Impact of Programming Languages ..."
Impact of Programming Languages on Machine Learning Bugs
Sebastian Sztwiertnia, Maximilian Grübel, Amine Chouchane, Daniel Sokolowski, Krishna Narasimhan, and Mira Mezini (TU Darmstadt, Germany)

Machine learning (ML) is becoming ubiquitous in modern software. Still, its use is challenging for software developers. So far, research has focused on ML libraries to find and mitigate these challenges. However, there is initial evidence that programming languages also contribute to the challenges, identifiable in different distributions of bugs in ML programs. To fill this research gap, we propose the first empirical study on the impact of programming languages on bugs in ML programs. We plan to analyze software from GitHub and related discussions in GitHub issues and Stack Overflow for bug distributions in ML programs, aiming to identify correlations with the chosen programming language, its features, and the application domain. The study's results will enable better-targeted use of available programming language technology in ML programs, preventing bugs, reducing errors, and speeding up development.

@InProceedings{AISTA21p9,
  author    = {Sebastian Sztwiertnia and Maximilian Grübel and Amine Chouchane and Daniel Sokolowski and Krishna Narasimhan and Mira Mezini},
  title     = {Impact of Programming Languages on Machine Learning Bugs},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {9--12},
  doi       = {10.1145/3464968.3468408},
  year      = {2021},
}

Sztwiertnia, Sebastian
AISTA '21: "Impact of Programming Languages ..."
Impact of Programming Languages on Machine Learning Bugs
Sebastian Sztwiertnia, Maximilian Grübel, Amine Chouchane, Daniel Sokolowski, Krishna Narasimhan, and Mira Mezini (TU Darmstadt, Germany)

Machine learning (ML) is becoming ubiquitous in modern software. Still, its use is challenging for software developers. So far, research has focused on ML libraries to find and mitigate these challenges. However, there is initial evidence that programming languages also contribute to the challenges, identifiable in different distributions of bugs in ML programs. To fill this research gap, we propose the first empirical study on the impact of programming languages on bugs in ML programs. We plan to analyze software from GitHub and related discussions in GitHub issues and Stack Overflow for bug distributions in ML programs, aiming to identify correlations with the chosen programming language, its features, and the application domain. The study's results will enable better-targeted use of available programming language technology in ML programs, preventing bugs, reducing errors, and speeding up development.

@InProceedings{AISTA21p9,
  author    = {Sebastian Sztwiertnia and Maximilian Grübel and Amine Chouchane and Daniel Sokolowski and Krishna Narasimhan and Mira Mezini},
  title     = {Impact of Programming Languages on Machine Learning Bugs},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {9--12},
  doi       = {10.1145/3464968.3468408},
  year      = {2021},
}

Venkatesh, Ashwin Prasad Shivarpatna
AISTA '21: "Automated Cell Header Generator ..."
Automated Cell Header Generator for Jupyter Notebooks
Ashwin Prasad Shivarpatna Venkatesh and Eric Bodden (University of Paderborn, Germany; Fraunhofer IEM, Germany)

Jupyter notebooks are now widely adopted by data analysts, as they provide a convenient environment for presenting computational results in a literate-programming document that combines code snippets, rich text, and inline visualizations. Literate-programming documents are intended to be computational narratives supplemented with self-explanatory text, but recent studies have shown that this is lacking in practice. Efforts in the software engineering community to increase code comprehension in literate programming are limited. To address this, as a first step, this paper presents a prototype Jupyter notebook annotator, HeaderGen, that automatically creates a narrative structure in notebooks by classifying and annotating code cells according to the machine learning workflow. HeaderGen generates a markdown cell header for each code cell by statically analyzing the notebook and, in addition, links these cell headers to a clickable table of contents for easier navigation. Further, we discuss our vision and the opportunities opened up by this prototype.

@InProceedings{AISTA21p17,
  author    = {Ashwin Prasad Shivarpatna Venkatesh and Eric Bodden},
  title     = {Automated Cell Header Generator for Jupyter Notebooks},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {17--20},
  doi       = {10.1145/3464968.3468410},
  year      = {2021},
}

Vescan, Andreea
AISTA '21: "On the Use of Evolutionary ..."
On the Use of Evolutionary Algorithms for Test Case Prioritization in Regression Testing Considering Requirements Dependencies
Andreea Vescan, Camelia Chisalita-Cretu, Camelia Serban, and Laura Diosan (Babes-Bolyai University, Romania)

Software systems undergo repeated modifications in order to satisfy changing business requirements. To ensure that these changes do not affect the system's proper functioning, the parts affected by the changes need to be retested, minimizing the negative impact of the modifications on other parts of the software. In this research, we investigate how different optimization techniques (with various criteria) can improve the effectiveness of the testing activity, in particular the effectiveness of test case prioritization. The most efficient test schedules are identified by using either a Greedy algorithm or a Genetic Algorithm, optimizing a quality function that incorporates one or more criteria. Both functional requirements (with the existing dependencies between them) and non-functional requirements (i.e., quality attributes of test cases) are integrated into the quality assessment of a test order. The experiments considered various combinations of criteria (faults, costs, and number of test cases) and were applied to both theoretical case studies and a real-world benchmark. They show that the Genetic Algorithm outperforms the Greedy algorithm on all considered criteria.

@InProceedings{AISTA21p1,
  author    = {Andreea Vescan and Camelia Chisalita-Cretu and Camelia Serban and Laura Diosan},
  title     = {On the Use of Evolutionary Algorithms for Test Case Prioritization in Regression Testing Considering Requirements Dependencies},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {1--8},
  doi       = {10.1145/3464968.3468407},
  year      = {2021},
}
13 authors