SPLASH-E 2020 – Author Index
Baniassad, Elisa
Lucas Zamprogno, Reid Holmes, and Elisa Baniassad (University of British Columbia, Canada)
Automated assessment tools are widely used to provide formative feedback to undergraduate students in computer science courses while simultaneously helping those courses scale to meet student demand. While formative feedback is a laudable goal, we have observed many students trying to debug their solutions into existence using only the feedback given, losing sight of the learning goals intended by the course staff. In this paper, we detail two case studies from second- and third-year undergraduate software engineering courses indicating that giving students only nudges about where to focus their efforts can improve how they act on generated feedback. By carefully reasoning about the errors uncovered by our automated assessment approaches, we have been able to create feedback that helps students revisit the learning outcomes for the assignment or course. This approach has been applied both to multiple-choice feedback in an online quiz-taking system and to the automated assessment of student programming tasks. We have found that student performance has not suffered and that students reflect positively on how they investigate automated assessment failures.
Holmes, Reid
Lucas Zamprogno, Reid Holmes, and Elisa Baniassad (University of British Columbia, Canada)
Automated assessment tools are widely used to provide formative feedback to undergraduate students in computer science courses while simultaneously helping those courses scale to meet student demand. While formative feedback is a laudable goal, we have observed many students trying to debug their solutions into existence using only the feedback given, losing sight of the learning goals intended by the course staff. In this paper, we detail two case studies from second- and third-year undergraduate software engineering courses indicating that giving students only nudges about where to focus their efforts can improve how they act on generated feedback. By carefully reasoning about the errors uncovered by our automated assessment approaches, we have been able to create feedback that helps students revisit the learning outcomes for the assignment or course. This approach has been applied both to multiple-choice feedback in an online quiz-taking system and to the automated assessment of student programming tasks. We have found that student performance has not suffered and that students reflect positively on how they investigate automated assessment failures.
Reichenbach, Christoph
Christoph Reichenbach (Lund University, Sweden)
The semantics of programming languages comprise many concepts that are alternatives to each other, such as by-reference and by-value parameter passing. To help teach these concepts, Diwan et al. introduced the programming language Mystery, which has a fixed syntax but configurable semantics, and described how this language enables new approaches to teaching programming-language concepts. In this paper, we reproduce the studies by Diwan et al. in a Swedish setting, describe extensions to the original system, and introduce a new technique for evaluating the utility of student experiments. We largely confirm the earlier findings and show how our evaluation technique helps us understand student experiments.
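The Mystery language itself is not reproduced in this index; as a purely illustrative sketch of the semantic alternative the abstract names, the C++ fragment below (function names are hypothetical, not taken from the paper) contrasts by-value and by-reference parameter passing.

```cpp
#include <iostream>

// Illustrative only (not from the paper): the same increment routine under
// two parameter-passing semantics, the kind of alternative that a
// configurable-semantics language such as Mystery lets students compare.
void incByValue(int x)      { x += 1; }  // argument is copied; caller sees no change
void incByReference(int &x) { x += 1; }  // argument is aliased; caller sees the change

int main() {
    int a = 1, b = 1;
    incByValue(a);                        // a stays 1
    incByReference(b);                    // b becomes 2
    std::cout << a << " " << b << "\n";   // prints "1 2"
    return 0;
}
```

Per the abstract, Mystery fixes the surface syntax, so the same program text could exhibit either of these behaviours depending on which semantics the instructor configures.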
Stolley, Karl
Karl Stolley (Illinois Institute of Technology, USA)
This course experience report details the teaching of Cascading Style Sheets (CSS) constrained by the rules of objective typography. The approach guides students in applying those rules to a subset of fewer than a dozen CSS properties. Students learn how to determine and reason about rule-governed values and ratios according to typographic principles. When successful, students produce typeset text that is accessible and readable across the range of screens and user-set preferences on web-enabled devices. Students learn to visually and mathematically verify the execution of their designs, and to apply the rules of objective typography to other areas of CSS, such as grid-based page layout. Experiential evidence suggests that these techniques do transfer to other aspects of CSS, but formal study is needed.
Zamprogno, Lucas
Lucas Zamprogno, Reid Holmes, and Elisa Baniassad (University of British Columbia, Canada)
Automated assessment tools are widely used to provide formative feedback to undergraduate students in computer science courses while simultaneously helping those courses scale to meet student demand. While formative feedback is a laudable goal, we have observed many students trying to debug their solutions into existence using only the feedback given, losing sight of the learning goals intended by the course staff. In this paper, we detail two case studies from second- and third-year undergraduate software engineering courses indicating that giving students only nudges about where to focus their efforts can improve how they act on generated feedback. By carefully reasoning about the errors uncovered by our automated assessment approaches, we have been able to create feedback that helps students revisit the learning outcomes for the assignment or course. This approach has been applied both to multiple-choice feedback in an online quiz-taking system and to the automated assessment of student programming tasks. We have found that student performance has not suffered and that students reflect positively on how they investigate automated assessment failures.
5 authors