1st International Code Hunt Workshop on Educational Software Engineering (CHESE 2015)
July 14, 2015, Baltimore, MD, USA
Message from the Chairs
Two of the backbones of software engineering are programming and testing. Both require many hours of practice to acquire mastery. To encourage students to put in these hours of practice, educators often incorporate an element of fun, generally by setting engaging assignments that emphasize the visual, audio, mobile, and social world in which students now live. However, a common complaint in second or third year is that “students cannot program”, which is usually interpreted as meaning they cannot readily produce code for fundamental tasks such as reading a file or searching a list. Recruiters in industry are well known for requiring applicants to write such code on the spot. There is thus a tension: how to maintain students' self-motivation to practice coding skills while at the same time focusing on core algorithmic problems.
Experience with Constructing Code Hunt Contests
R. Nigel Horspool, Judith Bishop,
Jonathan de Halleux, and Nikolai Tillmann
(University of Victoria, Canada; Microsoft Research, USA)
Puzzles are the basic building block of Code Hunt contests. Creating puzzles and choosing suitable ones from the puzzle bank turns out to be a complex task requiring skill and experience. Constructing a varied and interesting mix of puzzles depends on several factors. The major factor is the difficulty of each puzzle, so that the contest can build up from easier puzzles to more difficult ones. For a successful and fun contest matched to the expected abilities of the contestants, other factors include the language features needed to solve a puzzle, the clues to provide when the puzzle is presented to the player, and the test cases to seed into the Code Hunt engine. We describe our experience with contest construction over a period of a year and provide guidelines for choosing and adjusting puzzles so that a Code Hunt contest gives contestants a satisfying, trouble-free experience.
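As an illustration only (not taken from the paper), the selection factors named in the abstract could be captured as per-puzzle metadata that a contest builder filters by required language features and orders by estimated difficulty; all names and fields below are hypothetical.

from dataclasses import dataclass

@dataclass
class Puzzle:
    # Hypothetical record mirroring the factors named in the abstract.
    name: str
    difficulty: int               # estimated difficulty, e.g. 1 (easy) to 5 (hard)
    language_features: frozenset  # language features a solver must already know
    clues: tuple = ()             # hints shown when the puzzle is presented
    seed_tests: tuple = ()        # test cases seeded into the Code Hunt engine

def build_contest(bank, allowed_features, length):
    # Keep only puzzles whose required features suit the audience,
    # then order them from easiest to hardest and truncate.
    usable = [p for p in bank if p.language_features <= allowed_features]
    return sorted(usable, key=lambda p: p.difficulty)[:length]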
@InProceedings{CHESE15p1,
author = {R. Nigel Horspool and Judith Bishop and Jonathan de Halleux and Nikolai Tillmann},
title = {Experience with Constructing Code Hunt Contests},
booktitle = {Proc.\ CHESE},
publisher = {ACM},
pages = {1--4},
doi = {},
year = {2015},
}
Pythia Reloaded: An Intelligent Unit Testing-Based Code Grader for Education
Sébastien Combéfis and Alexis Paques
(École Centrale des Arts et Métiers, Belgium; Computer Science and IT in Education ASBL, Belgium)
Automatic assessment of code to support education is an important feature of many programming learning platforms. Unit testing frameworks can be used to perform systematic functional testing of code; they are mainly used by developers. Competition graders can be used to safely execute code in sandboxed environments; they are mainly used for programming contests. This paper proposes a platform that combines the advantages of unit testing and competition graders to provide a unit testing-based grader. The proposed platform assesses code and produces relevant and "intelligent" feedback to support learning. The paper presents the architecture of the platform and how the unit tests are designed.
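Purely as a generic sketch of the idea (not Pythia's actual interface), a unit testing-based grader can run a test suite against a submission and turn the results into a score plus readable feedback; the sandboxed execution the paper relies on is omitted here, and all names are illustrative.

import unittest

def grade(test_case_cls):
    # Run all tests in the given unittest.TestCase subclass (assumed to
    # exercise the student's submitted code) and summarise the outcome
    # as a score and a list of feedback messages.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(test_case_cls)
    result = unittest.TestResult()
    suite.run(result)
    total = result.testsRun
    failed = result.failures + result.errors
    feedback = [f"{test.id()}: {trace.strip().splitlines()[-1]}"
                for test, trace in failed]
    score = (total - len(failed)) / total if total else 0.0
    return {"score": score, "feedback": feedback}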
@InProceedings{CHESE15p5,
author = {Sébastien Combéfis and Alexis Paques},
title = {Pythia Reloaded: An Intelligent Unit Testing-Based Code Grader for Education},
booktitle = {Proc.\ CHESE},
publisher = {ACM},
pages = {5--8},
doi = {},
year = {2015},
}
Code Hunt as Platform for Gamification of Cybersecurity Training
Sandro Fouché and Andrew H. Mangle
(Towson University, USA)
The nation needs more cybersecurity professionals. Beyond a general shortage, women, African Americans, and Latino Americans are underrepresented in the field. This not only contributes to the scarcity of qualified cybersecurity professionals, but the absence of diversity also leads to a lack of perspective and differing viewpoints. Part of the problem is that cybersecurity suffers from barriers to entry that include expensive training, an exclusionary culture, and the need for costly infrastructure. For students to start learning about cybersecurity, access to training, infrastructure, and subject matter experts is imperative. The existing Code Hunt framework, used to help students master programming, could be a springboard to reduce the challenges facing students interested in cybersecurity. Code Hunt offers gamification, community-supported development, and a cloud infrastructure that provides an on-ramp to immediate learning. Leveraging Code Hunt's structured gaming model can address these barriers and make cybersecurity training more accessible to those without the means or inclination to participate in more traditional cybersecurity competitions.
@InProceedings{CHESE15p9,
author = {Sandro Fouché and Andrew H. Mangle},
title = {Code Hunt as Platform for Gamification of Cybersecurity Training},
booktitle = {Proc.\ CHESE},
publisher = {ACM},
pages = {9--11},
doi = {},
year = {2015},
}
TeamWATCH Demonstration: A Web-Based 3D Software Source Code Visualization for Education
Minyuan Gao and Chang Liu
(Ohio University, USA)
TeamWATCH is a three-dimensional source code visualization and team collaboration tool. The Web-based version of TeamWATCH can be used to visualize properties of files and revisions in a source code version control repository, helping project managers and developers quickly assess the progress of software projects. In an educational setting, instructors and teaching assistants can use it to monitor student team projects. This paper describes a demonstration of TeamWATCH and discusses how it can be integrated with Code Hunt for instructor use.
@InProceedings{CHESE15p12,
author = {Minyuan Gao and Chang Liu},
title = {TeamWATCH Demonstration: A Web-Based 3D Software Source Code Visualization for Education},
booktitle = {Proc.\ CHESE},
publisher = {ACM},
pages = {12--15},
doi = {},
year = {2015},
}