2025 ACM SIGPLAN International Symposium on SPLASH-E (SPLASH-E 2025), October 12–18, 2025,
Singapore, Singapore
Frontmatter
Welcome from the Chairs
Welcome to SPLASH-E 2025, the SPLASH community’s annual symposium
on software engineering and programming languages in education! Programming
languages are fundamental tools used throughout computer science curricula, and
so it is fruitful to encourage regular dialog between the programming language
community and the CS education community. This intersection can involve teaching
practice, teaching languages, teaching philosophy and teaching practicalities. We
are happy that SPLASH-E continues to exist as a space for these conversations.
Papers
Continuations for All: Language Design Considerations for Accessible Continuations
Youyou Cong,
Filip Strömbäck, and
Kazuki Ikemori
(Institute of Science Tokyo, Japan; Linköping University, Sweden)
Continuations are a useful concept to master, but their support in programming languages is not necessarily accessible to beginners. We aim to improve the learning experience of continuations, in particular delimited continuations, by finding beginner-friendly designs of continuation support. For this purpose, we develop a pedagogical language, Pyret-cont, and teach a graduate course focused on continuations. Our experience provides insights to developers and educators wishing to support and teach continuations.
@InProceedings{SPLASH-E25p1,
author = {Youyou Cong and Filip Strömbäck and Kazuki Ikemori},
title = {Continuations for All: Language Design Considerations for Accessible Continuations},
booktitle = {Proc.\ SPLASH-E},
publisher = {ACM},
pages = {1--11},
doi = {10.1145/3758317.3759677},
year = {2025},
}
Derivation Visualization for Context-Free Grammar Design: Helping Students Understand Context-Free Grammars
Marco T. Morazán,
Andrés M. Garced, and
Tijana Minić
(Seton Hall University, USA; University of Washington, USA)
In Formal Languages and Automata Theory courses, students are exposed to context-free grammars. They are expected to learn how to develop grammars and how to derive words. Despite multiple classroom and textbook examples, some students find grammar design and word derivation difficult due to nondeterminism. A modern pedagogy uses a programming-based approach to introduce students to context-free grammars. Using this methodology, students can design, implement, validate, and verify context-free grammars. However, they find the design and development task challenging. This article presents a novel dynamic visualization tool to help students with grammar design, implementation, and verification. The tool presents the user with the stepwise construction of a derivation tree and with support to visualize whether nonterminal invariants hold. Empirical data from a small formative study suggest that students find the visualization tool useful for understanding word derivation, debugging grammars, and developing correctness proofs.
@InProceedings{SPLASH-E25p12,
author = {Marco T. Morazán and Andrés M. Garced and Tijana Minić},
title = {Derivation Visualization for Context-Free Grammar Design: Helping Students Understand Context-Free Grammars},
booktitle = {Proc.\ SPLASH-E},
publisher = {ACM},
pages = {12--23},
doi = {10.1145/3758317.3759678},
year = {2025},
}
Interactive Theorem Provers for Proof Education
Romina Mahinpei,
Manoel Horta Ribeiro, and
Mae Milano
(Princeton University, USA)
Proof techniques are a core component of computer science (CS) education, yet many CS students struggle to engage with and understand proof-based material. Interactive Theorem Provers (ITPs), originally developed for formal verification, have emerged as promising educational tools that offer structured feedback and align well with CS students’ existing technical skills. While recent work has begun to explore the use of ITPs in educational settings, a notable gap remains: little is known about how well these tools support the range of proof techniques taught in CS curricula or how they might be improved to do so. We specifically seek to close this gap and contribute to the growing literature on ITPs in education through three concrete efforts: a user study of student experiences with the Coq proof assistant, a case study comparing proof development across Coq, Lean, and traditional methods, and a heuristic evaluation of each ITP using Nielsen’s usability heuristics.
Our results show that while the two ITPs can support the development of proof skills, they also present usability challenges --- such as complex syntax and unclear error messaging --- that can hinder learning. We also find that formalized ITP proofs tend to be more explicit and verbose than pen-and-paper proofs, which can affect students’ perception of proof difficulty. Our heuristic evaluation highlights specific areas for improvement in Coq and Lean, including clearer error messages, support for example-driven learning, and expanding proof libraries to better align with educational needs. Together, these findings provide actionable insights for instructors considering ITPs for education and highlight both the pedagogical benefits and current limitations of these systems, as well as opportunities for researchers to improve ITPs as educational tools.
@InProceedings{SPLASH-E25p24,
author = {Romina Mahinpei and Manoel Horta Ribeiro and Mae Milano},
title = {Interactive Theorem Provers for Proof Education},
booktitle = {Proc.\ SPLASH-E},
publisher = {ACM},
pages = {24--41},
doi = {10.1145/3758317.3759679},
year = {2025},
}
An Exploration of How Generative AI Affects Workflow and Collaboration in a Software Engineering Course
Marie Salomon,
Kyle D. Chin,
Reid Holmes,
Thomas Fritz, and
Gail C. Murphy
(University of British Columbia, Canada; University of Zurich, Switzerland)
How does Generative AI (GenAI) impact how students work and collaborate in a software engineering course? To explore this question, we conducted an exploratory study in a project-based course where students developed three versions of a system across agile sprints, with unrestricted access to GenAI tools. From survey responses of 349 students, we found that the technology was used extensively with 84% of students reporting use and 90% of them finding the technology useful. Through semi-structured interviews with 24 of the students, we delved deeper, learning that students used GenAI pervasively, not only to generate code but also to validate work retrospectively, such as checking alignment with requirements and design after implementation had begun. Students often turned to GenAI as their first point of contact, even before consulting teammates, which reduced direct interpersonal collaboration. These results suggest the need for new pedagogical strategies that address not just individual tool use, but also design reasoning and collaborative practices in GenAI-augmented teams.
@InProceedings{SPLASH-E25p42,
author = {Marie Salomon and Kyle D. Chin and Reid Holmes and Thomas Fritz and Gail C. Murphy},
title = {An Exploration of How Generative AI Affects Workflow and Collaboration in a Software Engineering Course},
booktitle = {Proc.\ SPLASH-E},
publisher = {ACM},
pages = {42--54},
doi = {10.1145/3758317.3759680},
year = {2025},
}
Daisy: An Exercise Environment for Learning Information Modeling
Jessica Belicia Cahyono,
Youyou Cong, and
Hidehiko Masuhara
(Institute of Science Tokyo, Japan)
When solving a problem through programming, we start with information modeling, i.e., representing the information in the problem description as data in the programming language. Information modeling plays an essential role in program development, but it can be challenging for novice programmers. The main obstacles include the lack of clear instructions on the process and the need for knowledge of programming language syntax.
We aim to support novices in acquiring information modeling skills. To achieve this goal, we define information modeling as a three-step process and develop Daisy, a block-based exercise environment for practicing information modeling. We also conducted a preliminary user study to investigate students' behavior and performance in information modeling, as well as to collect their opinions on Daisy's design. Overall, we received positive reactions, along with insights and suggestions for future improvements.
@InProceedings{SPLASH-E25p55,
author = {Jessica Belicia Cahyono and Youyou Cong and Hidehiko Masuhara},
title = {Daisy: An Exercise Environment for Learning Information Modeling},
booktitle = {Proc.\ SPLASH-E},
publisher = {ACM},
pages = {55--65},
doi = {10.1145/3758317.3759681},
year = {2025},
}
An Interactive Learning Environment for Program Design
Kouta Kumamoto,
Youyou Cong, and
Hidehiko Masuhara
(Institute of Science Tokyo, Japan)
Program design skills can be effectively acquired by following an explicit guideline that systematizes the process of designing programs. While such guidelines exist, novice students often struggle to follow them without instructor support. We propose an interactive learning environment for practicing program design based on the design recipe. Backed by an LLM, the environment guides the student through the design process via conversations in natural language, while automatically generating code and feedback based on the student’s responses. In this paper, we describe the design of the environment and report the results of a user study, through which we observed both the flexibility and challenges of our LLM-based approach.
@InProceedings{SPLASH-E25p66,
author = {Kouta Kumamoto and Youyou Cong and Hidehiko Masuhara},
title = {An Interactive Learning Environment for Program Design},
booktitle = {Proc.\ SPLASH-E},
publisher = {ACM},
pages = {66--74},
doi = {10.1145/3758317.3759682},
year = {2025},
}
Porpoise: An LLM-Based Sandbox for Novices to Practice Writing Purpose Statements
Shriram Krishnamurthi,
Thore Thießen, and
Jan Vahrenhold
(Brown University, USA; University of Münster, Germany)
Software developers have long emphasized the need for clear textual descriptions of programs, through documentation and comments. Similarly, curricula often expect students to write purpose statements that articulate in prose what program components are expected to do. Unfortunately, it is difficult to motivate students to do this and to evaluate student work at scale.
We leverage a large language model for this purpose. Specifically, we describe a tool, Porpoise, that presents students with problem descriptions, passes their textual descriptions to a large language model to generate code, evaluates the result against tests, and gives students feedback. Essentially, it gives students practice writing quality purpose statements while simultaneously becoming familiar with zero-shot prompting in a controlled manner.
We present the tool’s design as well as the experience of deploying it at two universities. This includes asking students to reflect on trade-offs between programming and zero-shot prompting, and seeing what difference it makes to give students different formats of problem descriptions. We also examine affective and load aspects of using the tool. Our findings are somewhat positive but mixed.
@InProceedings{SPLASH-E25p75,
author = {Shriram Krishnamurthi and Thore Thießen and Jan Vahrenhold},
title = {Porpoise: An LLM-Based Sandbox for Novices to Practice Writing Purpose Statements},
booktitle = {Proc.\ SPLASH-E},
publisher = {ACM},
pages = {75--89},
doi = {10.1145/3758317.3759683},
year = {2025},
}
Personalization of Programming Education: An NLP-Based Bi-dimensional Classification of Programming Exercises
Tommie Lombarts,
Gijs Walravens,
Mazyar Seraj,
Lina Ochoa, and
Mark van den Brand
(Eindhoven University of Technology, Netherlands)
The need for scalable and personalized content in programming education is driving interest in automating exercise generation. This requires a clear understanding of existing exercises. Our research addresses this by classifying existing exercises by topic and difficulty level. We combine a lexicon-based analysis with machine learning and advanced natural language processing techniques, providing a foundation for AI-assisted content generation. Specifically, we utilize BERTopic for topic modeling and five machine learning models to predict difficulty levels in programming exercises. Our dataset includes 106 programming exercise descriptions from three introductory courses, plus performance data from up to 189 learners. The results demonstrate that lexicon-based approaches significantly improve topic modeling accuracy and coherence compared to the baseline, with reduced variance and more consistent cluster stability. Although difficulty prediction remains challenging due to the complexity of defining ground truth, lexicon integration leads to modest yet consistent performance gains. This work lays an essential groundwork for scalable and resource-efficient solutions for the classification and generation of personalized programming exercises.
@InProceedings{SPLASH-E25p90,
author = {Tommie Lombarts and Gijs Walravens and Mazyar Seraj and Lina Ochoa and Mark van den Brand},
title = {Personalization of Programming Education: An NLP-Based Bi-dimensional Classification of Programming Exercises},
booktitle = {Proc.\ SPLASH-E},
publisher = {ACM},
pages = {90--101},
doi = {10.1145/3758317.3759684},
year = {2025},
}