Workshop CHESE 2016 – Author Index
Combéfis, Sébastien

Sébastien Combéfis and Arnaud Schils (École Centrale des Arts et Métiers, Belgium; Université Catholique de Louvain, Belgium)
Online platforms for learning programming are very popular nowadays. These platforms must automatically assess the code submitted by learners and provide good-quality feedback to support their learning. Classical techniques for producing useful feedback include unit testing frameworks, which run systematic functional tests against submitted code, and code quality assessment tools. This paper explores how to automatically identify error classes by clustering a set of submitted programs, using code plagiarism detection tools to measure the similarity between them. The proposed approach and analysis framework are presented in the paper, along with a first experiment on the Code Hunt dataset.

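The clustering idea summarized in this abstract can be sketched in a few lines. The sketch below is an illustration only, not the authors' method: it substitutes Python's standard-library `difflib.SequenceMatcher` for a real plagiarism-detection tool, and the submissions, names, and threshold are all hypothetical.

```python
from difflib import SequenceMatcher

# Hypothetical submissions for one exercise (all names and code are
# illustrative; a real platform would pull these from its database).
submissions = {
    "s1": "def mirror(s):\n    return s[::-1]",
    "s2": "def mirror(t):\n    return t[::-1]",
    "s3": "def mirror(s):\n    out = ''\n    for c in s:\n        out = c + out\n    return out",
    "s4": "def mirror(s):\n    out = ''\n    for ch in s:\n        out = ch + out\n    return out",
}

def similarity(a, b):
    """Ratio in [0, 1] of matching characters between two programs
    (stand-in for a plagiarism-detection similarity score)."""
    return SequenceMatcher(None, a, b).ratio()

def cluster(subs, threshold=0.8):
    """Greedy single-link clustering: a submission joins the first
    cluster containing a program it resembles above the threshold,
    otherwise it starts a new cluster."""
    clusters = []
    for name, code in subs.items():
        for group in clusters:
            if any(similarity(code, subs[m]) >= threshold for m in group):
                group.append(name)
                break
        else:
            clusters.append([name])
    return clusters
```

Here `cluster(submissions)` groups the two slicing-based solutions together and the two loop-based solutions together; each cluster can then be inspected as a candidate error (or solution) class.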
Isiaka, Faramola

Pierre McCauley, Brandon Nsiah-Ababio, Joshua Reed, Faramola Isiaka, and Tao Xie (University of Illinois at Urbana-Champaign, USA)
Code Hunt (https://www.codehunt.com/) from Microsoft Research is a web-based serious gaming platform widely used for programming contests. In this paper, we present a preliminary statistical analysis of a Code Hunt data set that contains only the programs written by students worldwide during a 48-hour contest. There are 259 users, 24 puzzles (organized into 4 sectors), and about 13,000 programs submitted by these users. Our analysis results can help improve the creation of puzzles in a future contest.

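The kind of descriptive statistics this abstract mentions (counts of users, puzzles, and submitted programs) can be sketched as follows. The record shape is an assumption for illustration; the real Code Hunt dataset has richer fields.

```python
from collections import Counter

# Hypothetical submission records (field names are assumptions,
# not the actual Code Hunt dataset schema).
records = [
    {"user": "u1", "sector": 1, "puzzle": "1.01"},
    {"user": "u1", "sector": 1, "puzzle": "1.02"},
    {"user": "u2", "sector": 1, "puzzle": "1.01"},
    {"user": "u2", "sector": 2, "puzzle": "2.01"},
    {"user": "u3", "sector": 2, "puzzle": "2.01"},
]

def summarize(recs):
    """Basic descriptive statistics: distinct users, distinct puzzles,
    total programs, and submissions per puzzle (a rough proxy for how
    much effort each puzzle demanded)."""
    users = {r["user"] for r in recs}
    per_puzzle = Counter(r["puzzle"] for r in recs)
    return {
        "users": len(users),
        "puzzles": len(per_puzzle),
        "programs": len(recs),
        "per_puzzle": per_puzzle,
    }
```

Applied to the full dataset, `summarize` would report the figures quoted above (259 users, 24 puzzles, about 13,000 programs), and `per_puzzle` would highlight which puzzles attracted the most attempts.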
McCauley, Pierre

Pierre McCauley, Brandon Nsiah-Ababio, Joshua Reed, Faramola Isiaka, and Tao Xie (University of Illinois at Urbana-Champaign, USA)
Code Hunt (https://www.codehunt.com/) from Microsoft Research is a web-based serious gaming platform widely used for programming contests. In this paper, we present a preliminary statistical analysis of a Code Hunt data set that contains only the programs written by students worldwide during a 48-hour contest. There are 259 users, 24 puzzles (organized into 4 sectors), and about 13,000 programs submitted by these users. Our analysis results can help improve the creation of puzzles in a future contest.

Nsiah-Ababio, Brandon

Pierre McCauley, Brandon Nsiah-Ababio, Joshua Reed, Faramola Isiaka, and Tao Xie (University of Illinois at Urbana-Champaign, USA)
Code Hunt (https://www.codehunt.com/) from Microsoft Research is a web-based serious gaming platform widely used for programming contests. In this paper, we present a preliminary statistical analysis of a Code Hunt data set that contains only the programs written by students worldwide during a 48-hour contest. There are 259 users, 24 puzzles (organized into 4 sectors), and about 13,000 programs submitted by these users. Our analysis results can help improve the creation of puzzles in a future contest.

Reed, Joshua

Pierre McCauley, Brandon Nsiah-Ababio, Joshua Reed, Faramola Isiaka, and Tao Xie (University of Illinois at Urbana-Champaign, USA)
Code Hunt (https://www.codehunt.com/) from Microsoft Research is a web-based serious gaming platform widely used for programming contests. In this paper, we present a preliminary statistical analysis of a Code Hunt data set that contains only the programs written by students worldwide during a 48-hour contest. There are 259 users, 24 puzzles (organized into 4 sectors), and about 13,000 programs submitted by these users. Our analysis results can help improve the creation of puzzles in a future contest.

Schils, Arnaud

Sébastien Combéfis and Arnaud Schils (École Centrale des Arts et Métiers, Belgium; Université Catholique de Louvain, Belgium)
Online platforms for learning programming are very popular nowadays. These platforms must automatically assess the code submitted by learners and provide good-quality feedback to support their learning. Classical techniques for producing useful feedback include unit testing frameworks, which run systematic functional tests against submitted code, and code quality assessment tools. This paper explores how to automatically identify error classes by clustering a set of submitted programs, using code plagiarism detection tools to measure the similarity between them. The proposed approach and analysis framework are presented in the paper, along with a first experiment on the Code Hunt dataset.

Xie, Tao

Pierre McCauley, Brandon Nsiah-Ababio, Joshua Reed, Faramola Isiaka, and Tao Xie (University of Illinois at Urbana-Champaign, USA)
Code Hunt (https://www.codehunt.com/) from Microsoft Research is a web-based serious gaming platform widely used for programming contests. In this paper, we present a preliminary statistical analysis of a Code Hunt data set that contains only the programs written by students worldwide during a 48-hour contest. There are 259 users, 24 puzzles (organized into 4 sectors), and about 13,000 programs submitted by these users. Our analysis results can help improve the creation of puzzles in a future contest.

7 authors