Workshop Dedicated to Olivier Danvy on the Occasion of His 64th Birthday (OLIVIERFEST 2025), October 12–18, 2025, Singapore, Singapore
Welcome from the Chairs
This festschrift brings together a collection of papers in celebration of the career and accomplishments of Olivier Danvy on the occasion of his 64th birthday. Olivier is a visionary in the field of programming languages and is well known for his seminal work on control operators, partial evaluation, inter-derivation of program semantics, and applications of mechanized theorem provers in computer science research and education. Olivier’s research on abstract machines, interpreters, and program manipulation techniques has been instrumental in building a common understanding of how programs compute, transform, and interact. His contributions have inspired several generations of researchers to study the connections between programs and data, syntax and semantics, and interpreters and compilers. Known for being an extraordinarily kind and approachable person, Olivier has been a supportive mentor and a role model for his students, collaborators, and colleagues. Last but not least, as a dedicated educator, Olivier has authored many influential courses on understanding and designing programs, and a number of papers on elegant and instructive uses of functional programming and proof assistants.
Controlling Copatterns: There and Back Again
Paul Downen
(University of Massachusetts at Lowell, USA)
Copatterns give functional programs a flexible mechanism for responding to their context, and composition can greatly enhance their expressiveness. However, that same expressive power makes it harder to precisely specify the behavior of programs. Using Danvy's functional and syntactic correspondences between different semantic artifacts, we derive a full suite of semantics for copatterns, twice. First, a calculus of monolithic copatterns is taken on a journey from small-step operational semantics to abstract machine to continuation-passing style. Then, within continuation-passing style, we refactor the semantics to derive a more general calculus of compositional copatterns, and take the return journey to derive the other semantic artifacts in reverse order.
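To convey the flavor of definition by observations, here is a minimal illustrative OCaml sketch (not taken from the paper, whose monolithic and compositional calculi are far richer): an infinite stream is given by its answers to the two observations head and tail.
(* illustrative sketch, not from the paper *)
type 'a stream = { head : 'a; tail : unit -> 'a stream }

(* "from n" is defined by what it answers to each observation *)
let rec from n = { head = n; tail = (fun () -> from (n + 1)) }

(* observe the first k elements *)
let rec take k s = if k = 0 then [] else s.head :: take (k - 1) (s.tail ())

let _ = take 3 (from 5)   (* [5; 6; 7] *)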
@InProceedings{OLIVIERFEST25p1,
author = {Paul Downen},
title = {Controlling Copatterns: There and Back Again},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {1-13},
doi = {10.1145/3759427.3760362},
year = {2025},
}
Publisher's Version
Published Artifact
Info
Artifacts Available
Danvy’s Mystery Functions in Slang
Stefan Hallerstede,
Robby, and
John Hatcliff
(Aarhus University, Denmark; Kansas State University, USA)
Danvy introduced the concept of "mystery functions" as a way to teach multiple programming-related concepts that are often treated separately in common computer science curricula, in particular specification, implementation, testing, recursion, and induction. He presented these concepts using the Rocq Prover. As part of a course on software correctness, we have recast Danvy's pedagogical themes using Slang, a verification-enabled subset of the Scala programming language, and its companion verification tool Logika. Together, Slang and Logika allow developers to program in a rich programming language and apply a variety of specification and verification techniques, all phrased directly in Scala/Slang syntax that is easily accessible to programmers. This synergistic combination of programming and proof features enables us to convincingly encourage students with the mantra that "doing proofs is programming" (and vice versa). As we teach concepts using Danvy's mystery functions with Slang/Logika in a full programming language that blends programs and proofs, we add another dimension to the students' learning. In this paper, we describe mystery functions in Slang and demonstrate the advances we have made in teaching the above-mentioned concepts in programming classes.
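As a rough illustration of the exercise format only (a hypothetical example in plain OCaml, not one of Danvy's actual mystery functions and not the Slang/Logika encoding): a function is specified solely by equations, a candidate implementation is tested against that specification, and the same equations later drive a proof by induction.
(* hypothetical illustration, not from the paper *)
(* specification: f 0 = 0 and f (n + 1) = f n + 2n + 1, so f should be squaring *)
let spec_base f = f 0 = 0
let spec_step f n = f (n + 1) = f n + 2 * n + 1

let candidate n = n * n

(* testing the candidate against the specification on small inputs *)
let () =
  assert (spec_base candidate);
  assert (List.for_all (spec_step candidate) (List.init 100 (fun i -> i)))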
@InProceedings{OLIVIERFEST25p14,
author = {Stefan Hallerstede and Robby and John Hatcliff},
title = {Danvy’s Mystery Functions in Slang},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {14-25},
doi = {10.1145/3759427.3760363},
year = {2025},
}
Publisher's Version
Defining Algebraic Effects and Handlers via Trails and Metacontinuations
Kenichi Asai and
Maika Fujii
(Ochanomizu University, Japan)
This paper investigates algebraic effects and handlers from the viewpoint of well-established work on the four delimited-continuation constructs known as shift, control, shift0, and control0. The whole development follows what Danvy and his colleagues have laid out. We start by defining two definitional interpreters for algebraic effects and handlers, one based on trails as introduced for the dynamic continuation-passing style translation, and one based on metacontinuations as introduced for a hierarchy of control operators. Once the definitional interpreters are obtained, we transform them using well-known meaning-preserving program translations to obtain abstract machines for algebraic effects and handlers. We also derive monomorphic type systems for algebraic effects and handlers by assigning types to the definitional interpreters, ones similar in spirit to the type system for shift and reset with answer-type modification. The resulting semantic artifacts are close to those for delimited-continuation constructs, enabling cross-fertilization between delimited-continuation constructs and algebraic effects and handlers.
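For readers unfamiliar with the starting point, here is the kind of definitional CPS interpreter such developments begin from, in a minimal OCaml sketch for shift and reset only (an illustration, not the paper's trail- or metacontinuation-based interpreters, and with the answer type fixed to values).
(* illustrative sketch, not from the paper *)
type term =
  | Var of string
  | Lam of string * term
  | App of term * term
  | Shift of string * term        (* shift, binding the captured continuation *)
  | Reset of term

type value = Fun of (value -> (value -> value) -> value)

let rec lookup x env =
  match env with
  | [] -> failwith ("unbound variable " ^ x)
  | (y, v) :: rest -> if x = y then v else lookup x rest

(* eval : term -> (string * value) list -> (value -> value) -> value *)
let rec eval t env k =
  match t with
  | Var x -> k (lookup x env)
  | Lam (x, b) -> k (Fun (fun v k' -> eval b ((x, v) :: env) k'))
  | App (f, a) ->
      eval f env (fun (Fun g) -> eval a env (fun av -> g av k))
  | Reset body ->
      (* reset delimits: the body runs with the identity continuation *)
      k (eval body env (fun v -> v))
  | Shift (x, body) ->
      (* shift captures the current delimited continuation as a function value *)
      let captured = Fun (fun v k' -> k' (k v)) in
      eval body ((x, captured) :: env) (fun v -> v)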
@InProceedings{OLIVIERFEST25p26,
author = {Kenichi Asai and Maika Fujii},
title = {Defining Algebraic Effects and Handlers via Trails and Metacontinuations},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {26-39},
doi = {10.1145/3759427.3760364},
year = {2025},
}
Publisher's Version
Simple Closure Analysis Revisited
Fritz Henglein
(University of Copenhagen, Denmark)
Simple closure analysis (SCA) distills the essence of the machinery underlying guaranteed (near-)linear time constraint-based program analyses that emerged in the early 1990s. It involves three key steps. First, reduction to a constraint problem that captures abstract value flow in higher-order programming languages. Second, constraint normalization by instrumented unification closure. Finally, interpretation of the normalized constraints in the program analysis problem at hand.
In the present paper we revisit SCA and explain its original rationale of combining excellent asymptotic and practical run-time efficiency with guaranteed semantic robustness properties. We point out that SCA models induced flows bidirectionally, but keeps directional surface flow information. This preserves valuable directional flow information while breaking through the inherent quasi-cubic complexity of full closure analysis (0CFA), in which induced flows are also directional. We review some of the subsequent applications, extensions, and results on semantic robustness of SCA. We suggest that its modularity and the efficiency of its core data structure, a contracted and closed flow graph on abstract closures and structured data, be kept in mind for algorithmically efficient and semantically robust program analyses on large code bases.
This paper contains the original technical report on Simple Closure Analysis, with a few typographical changes and added explanations for clarification. It was posted on an FTP server at DIKU in 1992 and influenced several subsequent works, but was never submitted for publication (until now). It is a (very) late response to Olivier Danvy's original invitation to submit it to Higher-Order and Symbolic Computation, with my heartfelt thanks for his encouragement.
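A minimal OCaml sketch, far simpler than the analysis itself and not taken from the paper, of the core machinery the abstract alludes to: a union-find structure over flow variables whose representatives carry the abstract closures known to flow to them, so that unifying two variables merges their closure sets in near-constant amortized time.
(* illustrative sketch, not from the paper; union by rank omitted for brevity *)
module IntSet = Set.Make (Int)

type node = { mutable parent : int; mutable closures : IntSet.t }

let make n = Array.init n (fun i -> { parent = i; closures = IntSet.empty })

let rec find uf i =
  if uf.(i).parent = i then i
  else begin
    let r = find uf uf.(i).parent in
    uf.(i).parent <- r;            (* path compression *)
    r
  end

(* record that abstract closure c may flow to flow variable i *)
let add_closure uf i c =
  let r = find uf i in
  uf.(r).closures <- IntSet.add c uf.(r).closures

(* unify two flow variables, merging what flows to them *)
let unify uf i j =
  let ri = find uf i and rj = find uf j in
  if ri <> rj then begin
    uf.(rj).parent <- ri;
    uf.(ri).closures <- IntSet.union uf.(ri).closures uf.(rj).closures
  end

let uf = make 4
let () = add_closure uf 0 1; unify uf 0 2
let _ = IntSet.elements uf.(find uf 2).closures   (* [1] *)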
@InProceedings{OLIVIERFEST25p40,
author = {Fritz Henglein},
title = {Simple Closure Analysis Revisited},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {40-49},
doi = {10.1145/3759427.3760365},
year = {2025},
}
Publisher's Version
Redundancy Checking in Reversible Flowcharts via Logic-Based Operational Semantics
Robert Glück and
Maurizio Proietti
(University of Copenhagen, Denmark; IASI-CNR, Italy)
This study addresses the problem of detecting redundant assertions in reversible programs. We investigate an interpretive method for the automatic verification of assertion redundancy by checking satisfiability of Constrained Horn Clauses (CHCs), a fragment of first-order logic. By formalizing a transition semantics of reversible flowcharts in CHC and using well-known examples, including Dijkstra’s permutation-to-code algorithm, we show that standard techniques and tools developed in the field of CHC-based verification appear suitable for addressing this unconventional optimization problem. The scalability of the approach may be achieved through techniques such as CHC specialization and abstract interpretation.
@InProceedings{OLIVIERFEST25p50,
author = {Robert Glück and Maurizio Proietti},
title = {Redundancy Checking in Reversible Flowcharts via Logic-Based Operational Semantics},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {50-61},
doi = {10.1145/3759427.3760366},
year = {2025},
}
Publisher's Version
Continuations in Music
Youyou Cong
(Institute of Science Tokyo, Japan)
Continuations are useful in both programming languages and
natural language.
Given successful applications in both kinds of languages,
it is tempting to explore the usefulness of continuations in
other forms of languages, such as music.
We identify common patterns in music that can be described as
manipulation of continuations.
The patterns guide us in how to design and implement musical
continuations.
@InProceedings{OLIVIERFEST25p62,
author = {Youyou Cong},
title = {Continuations in Music},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {62-64},
doi = {10.1145/3759427.3760367},
year = {2025},
}
Publisher's Version
On the Structure of Abstract Interpreters
Wonyeol Lee,
Matthieu Lemerre,
Xavier Rival, and
Hongseok Yang
(POSTECH, Republic of Korea; Université Paris-Saclay - CEA List, France; Inria - CNRS - École Normale Supérieure de Paris - PSL University, France; KAIST, Republic of Korea)
Static analysis aims at computing semantic properties of programs. Abstract interpretation provides a framework to design static analyses and allows one to divide the construction of a static analysis into the definition of abstract domains, which describe families of logical predicates with operations to reason on them, and the semantics-guided formalisation of abstract interpreters. The latter relies on the abstract domains, which describe semantic properties, and on the concrete semantics. A large part of the research on static analysis focuses on the design of novel abstract domains, with ever more expressive and/or efficient computer representations for semantic properties. In this short paper, we consider more specifically the core of the abstract interpreters (also called the abstract iterators) and discuss several techniques, inspired by functional programming, for building them. First, we briefly discuss common iteration techniques based on control flow graphs, which have often been used for program analyses aimed at computing state properties. Second, we consider iteration techniques that borrow principles from denotational semantics and are typically defined by induction over the syntax of programs. Last, we extend the latter family of techniques to relational abstractions that capture relations between program inputs and outputs.
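As a small illustration of the second family of techniques (an OCaml sketch, not taken from the paper): an abstract iterator defined by induction over the syntax of a toy imperative language, computing a sign abstraction of a single variable, with a naive fixpoint iteration for loops.
(* illustrative sketch, not from the paper *)
type sign = Bot | Neg | Zero | Pos | Top

type stmt =
  | Assign of int         (* x := constant *)
  | Incr                  (* x := x + 1 *)
  | Seq of stmt * stmt
  | If of stmt * stmt     (* branch on an unknown condition *)
  | While of stmt         (* loop on an unknown condition *)

let join a b = match a, b with
  | Bot, s | s, Bot -> s
  | _ when a = b -> a
  | _ -> Top

let abs_incr = function Bot -> Bot | Zero | Pos -> Pos | Neg | Top -> Top

(* the abstract iterator: a structural fold over statements *)
let rec exec (s : stmt) (a : sign) : sign =
  match s with
  | Assign n -> if n > 0 then Pos else if n = 0 then Zero else Neg
  | Incr -> abs_incr a
  | Seq (s1, s2) -> exec s2 (exec s1 a)
  | If (s1, s2) -> join (exec s1 a) (exec s2 a)
  | While body ->
      (* fixpoint iteration on the loop invariant; the lattice is finite *)
      let rec fix inv =
        let inv' = join inv (exec body inv) in
        if inv' = inv then inv else fix inv'
      in
      fix a

let _ = exec (Seq (Assign 1, While Incr)) Top   (* Pos: x stays positive *)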
@InProceedings{OLIVIERFEST25p65,
author = {Wonyeol Lee and Matthieu Lemerre and Xavier Rival and Hongseok Yang},
title = {On the Structure of Abstract Interpreters},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {65-71},
doi = {10.1145/3759427.3760368},
year = {2025},
}
Publisher's Version
A Compositional Semantics for eval in Scheme
Peter D. Mosses
(Delft University of Technology, Netherlands; Swansea University, UK)
The Scheme programming language was introduced in 1975 as a lexically-scoped dialect of Lisp. It has subsequently been improved and extended through many rounds of standardization, documented by the Scheme Reports.
In Lisp, eval is a function taking a single argument; when the value of the argument represents an expression, eval proceeds to evaluate it in the current environment. The Scheme procedure eval is similar, but takes an extra argument that determines the evaluation environment.
The so-called classic standards for Scheme and its latest modern standard all include a denotational semantics of core expressions and of some required procedures. However, the semantics does not cover eval. When Muchnick and Pleban compared the denotational semantics of Lisp and Scheme in 1980, they wrote that it was not possible to give a structural semantics of eval expressions. Similarly, in 1984, Clinger pointed out that the semantics of eval violates compositionality; he suggested defining eval using a non-compositional copy of the semantic function.
This paper adapts Clinger’s suggestion by letting the semantic function take an extra argument, then defining that argument by an explicit fixed point. The resulting semantics of eval expressions is fully compositional. The well-formedness of the semantics has been tested using a lightweight Agda formalization; verifying that the denotations have the expected properties is work in progress.
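A rough programming analogue of the construction (an OCaml sketch, not the paper's denotational definitions or its Agda formalization): the semantic function receives the evaluator to be used by eval as an extra argument, and that argument is supplied afterwards by an explicit fixed point.
(* illustrative sketch, not from the paper *)
type exp = Num of int | Add of exp * exp | Quote of exp | Eval of exp
type value = Int of int | Expr of exp

(* the semantic function takes the evaluator used for Eval as an extra argument *)
let rec sem (ev : exp -> value) (e : exp) : value =
  match e with
  | Num n -> Int n
  | Add (e1, e2) ->
      (match sem ev e1, sem ev e2 with
       | Int a, Int b -> Int (a + b)
       | _ -> failwith "type error")
  | Quote e' -> Expr e'
  | Eval e' ->
      (match sem ev e' with
       | Expr prog -> ev prog
       | _ -> failwith "eval expects a quoted expression")

(* tie the knot with an explicit fixed point *)
let rec eval (e : exp) : value = sem eval e

let _ = eval (Eval (Quote (Add (Num 1, Num 2))))   (* Int 3 *)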
@InProceedings{OLIVIERFEST25p72,
author = {Peter D. Mosses},
title = {A Compositional Semantics for eval in Scheme},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {72-81},
doi = {10.1145/3759427.3760369},
year = {2025},
}
Publisher's Version
Published Artifact
Artifacts Available
How to Fold a Tree: Programming Exercises on Calder’s Mobiles
Thibaut Balabonski
(Université Paris-Saclay - CNRS - ENS Paris-Saclay - LMF, France)
We explore several ways of folding over a binary tree, and illustrate their use to solve programming exercises on Calder's mobiles, focusing on algorithms that perform a single tree traversal.
Dedicated to Olivier Danvy on the occasion of his 64th birthday.
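One of the simplest instances of the theme, as an OCaml sketch (an illustration, not one of the paper's exercises): a mobile as a binary tree with weights at the leaves, and a single-traversal check, written with one generic fold, that every sub-mobile balances, returning the total weight when it does.
(* illustrative sketch, not from the paper; "balanced" here means equal weights on both sides *)
type mobile = Leaf of int | Node of mobile * mobile

(* the generic fold over binary trees *)
let rec fold leaf node m =
  match m with
  | Leaf w -> leaf w
  | Node (l, r) -> node (fold leaf node l) (fold leaf node r)

(* single traversal: total weight if every sub-mobile balances, None otherwise *)
let balanced : mobile -> int option =
  fold
    (fun w -> Some w)
    (fun l r -> match l, r with
       | Some wl, Some wr when wl = wr -> Some (wl + wr)
       | _ -> None)

let _ = balanced (Node (Leaf 1, Leaf 1))                  (* Some 2 *)
let _ = balanced (Node (Node (Leaf 1, Leaf 1), Leaf 3))   (* None *)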
@InProceedings{OLIVIERFEST25p82,
author = {Thibaut Balabonski},
title = {How to Fold a Tree: Programming Exercises on Calder’s Mobiles},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {82-93},
doi = {10.1145/3759427.3760370},
year = {2025},
}
Publisher's Version
Understanding Linux Kernel Code through Formal Verification: A Case Study of the Task-Scheduler Function select_idle_core
Julia Lawall,
Keisuke Nishimura, and
Jean-Pierre Lozi
(Inria, France)
On the one hand, the Linux kernel task scheduler is critical to all application performance. On the other hand, it is widely agreed that its code complexity is spiraling out of control, and only a tiny handful of kernel developers understand it. We are exploring the opportunities and challenges in applying formal verification to Linux kernel task-scheduler code. Building on a previous work focusing on the evolution of the function should_we_balance, we here consider a version of the key task-placement function select_idle_core and the evolution of the iterators on which it relies.
@InProceedings{OLIVIERFEST25p94,
author = {Julia Lawall and Keisuke Nishimura and Jean-Pierre Lozi},
title = {Understanding Linux Kernel Code through Formal Verification: A Case Study of the Task-Scheduler Function select_idle_core},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {94-105},
doi = {10.1145/3759427.3760371},
year = {2025},
}
Publisher's Version
Published Artifact
Artifacts Available
Verified Nanopasses for Compiling Conditionals
Jeremy G. Siek
(Indiana University, USA)
We present a proof of correctness in Agda for four nanopasses that translate a source language with let binding, integer arithmetic, conditional expressions, and Booleans into an x86-flavored register transfer language. The most interesting of these four nanopasses is a translation of conditional expressions into goto-style control flow that uses the continuation-oriented approach of Olivier Danvy's one-pass transformation into monadic normal form (2003).
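A minimal OCaml sketch, roughly in the spirit of the continuation-oriented translation the abstract mentions (not the paper's Agda development): nested conditionals are flattened into labelled blocks, and the code for each outcome is turned into a block whose label acts as a continuation, so nested tests share it rather than duplicate it.
(* illustrative sketch, not from the paper *)
type exp = Bool of bool | Var of string | If of exp * exp * exp

type tail =
  | Return of exp
  | Goto of string
  | IfGoto of string * string * string   (* test variable, then label, else label *)

let blocks : (string * tail) list ref = ref []
let fresh = let n = ref 0 in fun p -> incr n; p ^ "_" ^ string_of_int !n
let add_block t = let l = fresh "block" in blocks := (l, t) :: !blocks; l

(* compile an expression in tail position *)
let rec explicate_tail (e : exp) : tail =
  match e with
  | Bool _ | Var _ -> Return e
  | If (c, t, f) -> explicate_pred c (explicate_tail t) (explicate_tail f)

(* compile a condition, given the code for its two outcomes; the outcome code
   plays the role of a continuation, made shareable as a labelled block *)
and explicate_pred (c : exp) (thn : tail) (els : tail) : tail =
  match c with
  | Bool true  -> thn
  | Bool false -> els
  | Var x -> IfGoto (x, add_block thn, add_block els)
  | If (c', t', f') ->
      let lt = add_block thn and lf = add_block els in
      explicate_pred c' (explicate_pred t' (Goto lt) (Goto lf))
                        (explicate_pred f' (Goto lt) (Goto lf))

let _ = explicate_tail (If (If (Var "a", Var "b", Bool true), Bool false, Bool true))
(* tests "a", jumping either to a block that tests "b" or to a block that goes
   straight to the outer then-branch; the outcome blocks are shared, not copied *)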
@InProceedings{OLIVIERFEST25p106,
author = {Jeremy G. Siek},
title = {Verified Nanopasses for Compiling Conditionals},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {106-116},
doi = {10.1145/3759427.3760372},
year = {2025},
}
Publisher's Version
Published Artifact
Artifacts Available
What I Always Wanted to Know about Second Class Values
Peter Thiemann
(University of Freiburg, Germany)
Second-class values are allocated on the run-time stack and may contain pointers to other values on the stack. They were first discussed in connection with the infamous funarg problem, but were then forgotten as heap-allocated closures were discovered. Recent work has revived interest in second-class values, as they allow us to safely allocate some data structures (e.g., closures) on the run-time stack. This approach has the advantage of avoiding the cost and the unpredictable timing of garbage collection for these structures, as their deallocation takes no extra time when the stack is popped. A system with qualified types ensures that second-class values do not escape across stack pops.
We take a second look at this work and extend and modify its formal framework with first-class and second-class references in the style of ML. Along the way, we answer a question that left us mystified in the previous work: What, exactly, is the meaning of a type qualifier?
@InProceedings{OLIVIERFEST25p117,
author = {Peter Thiemann},
title = {What I Always Wanted to Know about Second Class Values},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {117-127},
doi = {10.1145/3759427.3760373},
year = {2025},
}
Publisher's Version
Published Artifact
Artifacts Available
Untyped Logical Relations at Work: Control Operators, Contextual Equivalence and Full Abstraction
Patrycja Balik,
Dariusz Biernacki, and
Piotr Polesiuk
(University of Wrocław, Poland)
While syntactic logical relations have been defined and applied primarily for calculi with types, it has been occasionally demonstrated that the technique can be employed in an untyped context as well, by replacing the type structure with step-indexing. In this article, we show how untyped step-indexed logical relations can be used both to prove contextual equivalence of expressions within a single calculus and to establish strong properties of program translations from one calculus to another.
To this end, we focus on the untyped call-by-value λ-calculus extended with the standard abortive control operators call/cc and abort, for which we define biorthogonal step-indexed binary logical relations. In contrast to the existing untyped logical relations, which are defined intensionally, i.e., based on constructors, our definition is extensional, i.e., based on eliminators. Such an approach scales better to richer languages, but it poses certain technical challenges that we show how to overcome. Using these relations we verify some interesting program equivalences, one of which is known to be hard to prove with existing techniques such as bisimulations. We then define untyped inter-language logical relations in order to present an elegant proof of the full abstraction theorem for a standard CPS translation from the calculus with control operators to (a fragment of) the pure λ-calculus.
@InProceedings{OLIVIERFEST25p128,
author = {Patrycja Balik and Dariusz Biernacki and Piotr Polesiuk},
title = {Untyped Logical Relations at Work: Control Operators, Contextual Equivalence and Full Abstraction},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {128-141},
doi = {10.1145/3759427.3760374},
year = {2025},
}
Publisher's Version
Published Artifact
Artifacts Available
Encoding Product Types
Sam Lindley
(University of Edinburgh, UK)
Can product types be encoded in simply-typed lambda calculus with base types and function types? It depends.
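For orientation (an illustration, not the paper's development), the encoding in question is the Church encoding of pairs; already in OCaml one can see that the answer type of the projections is where the subtlety lies.
(* illustrative sketch, not from the paper *)
let pair x y = fun k -> k x y      (* 'a -> 'b -> ('a -> 'b -> 'c) -> 'c *)
let first p  = p (fun x _ -> x)
let second p = p (fun _ y -> y)

let _ = first (pair 1 true)        (* 1 *)
let _ = second (pair 1 true)       (* true *)

(* In simply-typed lambda calculus the answer type 'c must be fixed per use;
   even here, projecting one let-bound encoded pair at two different result
   types already requires polymorphism. *)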
@InProceedings{OLIVIERFEST25p142,
author = {Sam Lindley},
title = {Encoding Product Types},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {142-151},
doi = {10.1145/3759427.3760375},
year = {2025},
}
Publisher's Version
A Pair of tricks
Oleg Kiselyov
(Tohoku University, Japan)
One way to ascertain the quality of generated code -- its performance, hygiene, memory safety, deadlock freedom, etc. -- is to decorate it with annotations that explicate some aspect of the code: e.g., the count of particular operations, its free variables, the sequence of IO or memory-allocation operations, or correctness proof obligations. The annotations are built along with the generated code, and may be used to detect problems like scope extrusion or deadlock -- well before the whole program is generated, let alone executed. Annotations are a worthy alternative to fancy types.
Generating code along with annotations is almost straightforward. The challenge comes from generating functions or other variable-binding forms with the often-used so-called higher-order abstract syntax (HOAS). That was the challenge posed by Olivier Danvy.
The present paper describes a new, simpler, and more general solution to the challenge -- actually, to its general form: the pairing of HOAS generators, or the direct product of algebra-like structures with HOAS operations. The solution also overcomes the hitherto unsolved HOAS problem: latent effects.
@InProceedings{OLIVIERFEST25p152,
author = {Oleg Kiselyov},
title = {A Pair of tricks},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {152-165},
doi = {10.1145/3759427.3760376},
year = {2025},
}
Publisher's Version
Info
Generic Reduction-Based Interpreters
Casper Bach
(University of Southern Denmark, Denmark)
Reduction-based interpreters are traditionally defined in terms of a one-step reduction function which systematically decomposes a term into a potential redex and a context, contracts the redex, and recomposes to construct the new term to be further reduced. While implementing such interpreters follows a systematic recipe, they often require interpreter engineers to write a substantial amount of code---much of it boilerplate. In this paper, we apply well-known techniques from generic programming to reduce boilerplate code in reduction-based interpreters.
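A minimal OCaml sketch of the recipe, for arithmetic expressions only and without the generic-programming machinery the paper contributes: decompose into a redex and its context, contract, recompose, repeat.
(* illustrative sketch, not from the paper *)
type exp = Num of int | Add of exp * exp

(* one-hole contexts as a list of frames, innermost frame first *)
type frame = AddL of exp        (* [.] + e *)
           | AddR of int        (* n + [.] *)
type ctx = frame list

type redex = RAdd of int * int

let plug (c : ctx) (e : exp) : exp =
  List.fold_left (fun e f -> match f with
    | AddL r -> Add (e, r)
    | AddR n -> Add (Num n, e)) e c

(* find the leftmost-innermost redex, or None if the term is a value *)
let rec decompose (e : exp) (c : ctx) : (redex * ctx) option =
  match e with
  | Num _ -> None
  | Add (Num m, Num n) -> Some (RAdd (m, n), c)
  | Add (Num m, r) -> decompose r (AddR m :: c)
  | Add (l, r) -> decompose l (AddL r :: c)

let contract (RAdd (m, n)) = Num (m + n)

(* drive the reduction sequence: decompose, contract, recompose, repeat *)
let rec eval (e : exp) : exp =
  match decompose e [] with
  | None -> e
  | Some (r, c) -> eval (plug c (contract r))

let _ = eval (Add (Add (Num 1, Num 2), Num 3))   (* Num 6 *)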
@InProceedings{OLIVIERFEST25p166,
author = {Casper Bach},
title = {Generic Reduction-Based Interpreters},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {166-179},
doi = {10.1145/3759427.3760377},
year = {2025},
}
Publisher's Version
Published Artifact
Artifacts Available
Property-Based Testing of OCaml 5’s Runtime System: Fun and Segfaults with Interpreters and State Transition Functions
Jan Midtgaard
(Independent, Denmark)
We describe our efforts in using property-based testing to test the OCaml 5 multicore runtime system. In particular, we cover three case studies of increasing complexity that utilize a model-based state-machine framework: (a) testing the Array module, (b) testing weak hash sets, and (c) testing the garbage collector, with the latter two behaving non-deterministically from the point of view of the black-box testing process. We evaluate the approach empirically by analyzing the bugs found, and discuss both limitations of the approach and challenges we have met along the way.
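The first case study can be approximated by a hand-rolled OCaml sketch using only the standard library (the paper uses a model-based state-machine framework and also targets concurrent, non-deterministic behaviour, which this sequential illustration does not): generate a random command sequence, run it against both Array and a simple list model, and compare observations.
(* illustrative sketch, not the paper's framework *)
type cmd = Get of int | Set of int * int

let gen_cmds len n =
  List.init n (fun _ ->
    if Random.bool () then Get (Random.int len)
    else Set (Random.int len, Random.int 100))

(* the model: lists with functional update *)
let model_set l i v = List.mapi (fun j x -> if j = i then v else x) l

(* run one command against the real array and the model, checking agreement *)
let run_cmd arr model cmd =
  match cmd with
  | Get i -> (Array.get arr i = List.nth model i, model)
  | Set (i, v) -> Array.set arr i v; (true, model_set model i v)

let check len n =
  let arr = Array.make len 0 and model = List.init len (fun _ -> 0) in
  List.fold_left
    (fun (ok, m) c -> let ok', m' = run_cmd arr m c in (ok && ok', m'))
    (true, model) (gen_cmds len n)
  |> fst

let () = Random.self_init (); assert (check 8 1000)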
@InProceedings{OLIVIERFEST25p180,
author = {Jan Midtgaard},
title = {Property-Based Testing of OCaml 5’s Runtime System: Fun and Segfaults with Interpreters and State Transition Functions},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {180-193},
doi = {10.1145/3759427.3760378},
year = {2025},
}
Publisher's Version
Safe-for-Space Linked Environments
Matthew Flatt and
Robert Bruce Findler
(University of Utah, USA; Northwestern University, USA)
Techniques for implementing a call-by-value λ-calculus are well known, including Reynolds's definitional interpreter and techniques that Danvy developed for moving between reduction rules and abstract machines. Most techniques focus on ensuring that an implementation produces the same evaluation result as a model, but time and space properties are also within reach. Proper tail-call handling falls out of Reynolds's approach, for example, but the stronger property of space safety is less readily available, particularly if worst-case time complexity matters. In this paper, we explore an approach to space safety that is realized through the garbage collector, instead of the interpreter loop, in the hope of finding a convenient implementation technique that matches the asymptotic time bounds of fast evaluation models and the asymptotic space bounds of compact evaluation models. We arrive within a size-of-source factor of achieving those bounds. Our implementation technique is comparable in complexity to some other interpreter variants; specifically, it requires some library functions for binary trees, specialized environment traversals in the garbage collector, and a compilation pass to gather the free variables of each expression and to rewrite each variable reference to its binding depth.
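The last ingredient mentioned in the abstract can be shown in a few lines of OCaml (an illustration, not the paper's implementation): a compilation pass that rewrites each variable reference to its binding depth, which environment lookups, and in the paper's setting the collector's environment traversals, can then use as an index.
(* illustrative sketch, not from the paper *)
type exp  = Var of string | Lam of string * exp | App of exp * exp
type dexp = DVar of int   | DLam of dexp        | DApp of dexp * dexp

let rec index x i = function
  | [] -> failwith ("unbound " ^ x)
  | y :: ys -> if x = y then i else index x (i + 1) ys

let rec to_depth (env : string list) : exp -> dexp = function
  | Var x -> DVar (index x 0 env)
  | Lam (x, b) -> DLam (to_depth (x :: env) b)
  | App (f, a) -> DApp (to_depth env f, to_depth env a)

let _ = to_depth [] (Lam ("x", Lam ("y", App (Var "x", Var "y"))))
(* DLam (DLam (DApp (DVar 1, DVar 0))) *)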
@InProceedings{OLIVIERFEST25p194,
author = {Matthew Flatt and Robert Bruce Findler},
title = {Safe-for-Space Linked Environments},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {194-206},
doi = {10.1145/3759427.3760379},
year = {2025},
}
Publisher's Version
Towards Metaprogramming Defunctionalization in Rocq
Chantal Keller and
Camille Noûs
(LMF - University Paris-Saclay, France; Laboratoire Cogitamus - Université Publique, France)
Defunctionalization, a process to transform higher-order programs into first-order ones, has been heavily studied for its structure, its use cases, its logical semantics, etc. This extended abstract is an invitation to study defunctionalization through the prism of a shallow embedding in an interactive theorem prover. We use metaprogramming techniques to automate program transformation and proof generation, with a starting implementation in Rocq. The long-term goals are to define defunctionalization for a large subset of Type Theory, to prove and generate proofs of properties such as preservation of semantics and termination, and more generally to use metaprogramming with strong guarantees to gain a better understanding of this program transformation.
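For readers new to the transformation itself, here is the standard textbook sketch in OCaml (an illustration, not the Rocq metaprogramming the abstract describes): the function argument of a higher-order map is replaced by a first-order data type, one constructor per abstraction that can occur, together with an apply function.
(* illustrative sketch, not from the paper *)
(* higher-order original *)
let rec map f = function [] -> [] | x :: xs -> f x :: map f xs

(* defunctionalized: one constructor per function that can be passed to map *)
type func = Succ | MulBy of int

let apply f x = match f with Succ -> x + 1 | MulBy k -> k * x

let rec map_def f = function [] -> [] | x :: xs -> apply f x :: map_def f xs

let _ = map_def Succ [1; 2; 3]         (* [2; 3; 4] *)
let _ = map_def (MulBy 10) [1; 2; 3]   (* [10; 20; 30] *)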
@InProceedings{OLIVIERFEST25p207,
author = {Chantal Keller and Camille Noûs},
title = {Towards Metaprogramming Defunctionalization in Rocq},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {207-212},
doi = {10.1145/3759427.3760380},
year = {2025},
}
Publisher's Version
Invertible Syntax without the Tuples (Functional Pearl)
Mathieu Boespflug and
Arnaud Spiwack
(Tweag, France)
In the seminal paper "Functional Unparsing", Olivier Danvy used continuation passing to reanalyse printf-like format strings as combinators. In the intervening decades, the conversation shifted towards a concurrent line of work --- applicative, monadic, or arrow-based combinator libraries --- in an effort to find combinators for invertible syntax descriptions that simultaneously determine a parser as well as a printer, with more expressive power, able to handle inductive structures such as lists and trees. Along the way, continuation passing got lost. This paper argues that Danvy's insight remains as relevant to the general setting as it was to the restricted setting of his original paper. Like him, we present three solutions that exploit continuation-passing style as an alternative to both dependent types and monoidal aggregation via nested pairs, in our case to parse and print structured data with increasing expressive power.
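The starting point being revisited can be recalled in a few lines (an OCaml transcription of the well-known construction; the paper goes much further, to invertible syntax descriptions): printf-like format directives as combinators in continuation-passing style, threading an accumulator and a continuation instead of dependent types or nested tuples.
(* illustrative sketch of the classic construction, not from the paper *)
let lit s k acc = k (acc ^ s)
let str   k acc s = k (acc ^ s)
let int   k acc n = k (acc ^ string_of_int n)
let ( ^^ ) f g k = f (g k)            (* compose two directives *)
let sprintf fmt = fmt (fun s -> s) ""

let _ = sprintf (lit "x = " ^^ int ^^ lit ", y = " ^^ str) 42 "hi"
(* "x = 42, y = hi" *)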
@InProceedings{OLIVIERFEST25p213,
author = {Mathieu Boespflug and Arnaud Spiwack},
title = {Invertible Syntax without the Tuples (Functional Pearl)},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {213-225},
doi = {10.1145/3759427.3760381},
year = {2025},
}
Publisher's Version
Functional Programming and Computational Quantum Structures
Jerzy Karczmarczuk
(University of Caen, France)
We present a simulation of quantum systems where the state vectors and observables are polymorphic Haskell functions. The model is not restricted to qubits: it admits state spaces of infinite dimension, but with a finite number of degrees of freedom. (So the subject is quantum mechanics, not quantum field theory.)
Since the state vectors and operators are not represented or manipulated as symbolic formulae, they are opaque, like all functions in Haskell, and they cannot be treated as classical data. Their contents can be unveiled only by applying them.
We expect the reader to have a moderate familiarity with elementary quantum theory, but the part about quantum computing is kept at a basic level.
This article is accompanied by the sources of the relevant Haskell programs.
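To give a flavour of the representation, here is a tiny sketch in OCaml (an illustration only; the paper's Haskell development is polymorphic and far more general): a state vector is a function from basis labels to amplitudes, and an operator is a function on such states, opaque until applied.
(* illustrative sketch, not from the paper *)
type basis = int                           (* e.g. 0 for |0>, 1 for |1> *)
type state = basis -> Complex.t
type op    = state -> state

let ket (b : basis) : state = fun b' -> if b = b' then Complex.one else Complex.zero

(* a Hadamard-like operator on a qubit, as a function on states *)
let h : op = fun psi b ->
  let s = 1.0 /. sqrt 2.0 in
  Complex.mul { Complex.re = s; im = 0.0 }
    (if b = 0 then Complex.add (psi 0) (psi 1)
     else Complex.sub (psi 0) (psi 1))

let _ = (h (ket 0)) 1   (* amplitude of |1> in H|0>: 1/sqrt 2 *)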
@InProceedings{OLIVIERFEST25p226,
author = {Jerzy Karczmarczuk},
title = {Functional Programming and Computational Quantum Structures},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {226-238},
doi = {10.1145/3759427.3760382},
year = {2025},
}
Publisher's Version
Published Artifact
Artifacts Available
A Tale of Two Zippers
Philip Wadler,
Ramsay Taylor, and
Jacco O.G. Krijnen
(IOG, UK; University of Edinburgh, UK; Utrecht University, Netherlands)
We apply the zipper construct of Huet to prove correct an optimiser for a simply-typed lambda calculus with force and delay. The work here is used as the basis for a certifying optimising compiler for the Plutus smart contract language on the Cardano blockchain.
The paper is an executable literate Agda script, and its source may be found in the file Zippers.lagda.md, available as an artifact associated with this paper.
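For readers who have not met Huet's construct, here is a minimal OCaml sketch of a zipper for binary trees (an illustration of the general idea only, not the typed zipper for the paper's calculus): a focused subtree paired with its inverted context, supporting constant-time focus movement and local replacement.
(* illustrative sketch, not from the paper *)
type tree = Leaf of int | Node of tree * tree
type ctx  = Top | L of ctx * tree | R of tree * ctx   (* the path back to the root *)
type loc  = tree * ctx

let down_left  (t, c) = match t with Node (l, r) -> (l, L (c, r)) | Leaf _ -> (t, c)
let down_right (t, c) = match t with Node (l, r) -> (r, R (l, c)) | Leaf _ -> (t, c)
let up (t, c) = match c with
  | Top -> (t, c)
  | L (c', r) -> (Node (t, r), c')
  | R (l, c') -> (Node (l, t), c')
let replace (_, c) t = (t, c)

let rec root (t, c) = match c with Top -> t | _ -> root (up (t, c))

(* focus on the left child, rewrite it, and rebuild the whole tree *)
let _ = root (replace (down_left (Node (Leaf 1, Leaf 2), Top)) (Leaf 9))
(* Node (Leaf 9, Leaf 2) *)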
@InProceedings{OLIVIERFEST25p239,
author = {Philip Wadler and Ramsay Taylor and Jacco O.G. Krijnen},
title = {A Tale of Two Zippers},
booktitle = {Proc.\ OLIVIERFEST},
publisher = {ACM},
pages = {239-250},
doi = {10.1145/3759427.3760383},
year = {2025},
}
Publisher's Version
Published Artifact
Artifacts Available