Proceedings of the ACM on Programming Languages, Volume 8, Number ICFP,
September 2–7, 2024,
Milan, Italy
Frontmatter
Editorial Message
The Proceedings of the ACM series presents the highest-quality research conducted in diverse areas of computer science, as represented by the ACM Special Interest Groups (SIGs).
Proceedings of the ACM on Programming Languages (PACMPL) focuses on research on all aspects of programming languages, from design to implementation and from mathematical formalisms to empirical studies. The journal operates in close collaboration with the Special Interest Group on Programming Languages (SIGPLAN) and is committed to making high-quality peer-reviewed scientific research in programming languages free of restrictions on both access and use.
This issue of the PACMPL journal publishes 35 articles
that were submitted in response to a call for papers seeking
contributions on all practical and theoretical investigations of
programming languages, systems, and environments.
The scope includes all languages that encourage functional
programming, including both purely applicative and imperative
languages, as well as languages with objects, concurrency, or
parallelism. Some submissions were marked as experience reports or
functional pearls, to be evaluated as elegant essays that need not
give new research results and may present known ideas in illuminating
new ways.
Papers
How to Bake a Quantum Π
Jacques Carette,
Chris Heunen,
Robin Kaarsgaard, and
Amr Sabry
(McMaster University, Canada; University of Edinburgh, United Kingdom; University of Southern Denmark, Denmark; Indiana University, USA)
We construct a computationally universal quantum programming language Quantum Π from two copies of Π, the internal language of rig groupoids. The first step constructs a pure (measurement-free) term language by interpreting each copy of Π in a generalisation of the category Unitary in which every morphism is “rotated” by a particular angle, and the two copies are amalgamated using a free categorical construction expressed as a computational effect. The amalgamated language only exhibits quantum behaviour for specific values of the rotation angles, a property which is enforced by imposing a small number of equations on the resulting category. The second step in the construction introduces measurements by layering an additional computational effect.
@Article{ICFP24p236,
author = {Jacques Carette and Chris Heunen and Robin Kaarsgaard and Amr Sabry},
title = {How to Bake a Quantum Π},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {236},
numpages = {29},
doi = {10.1145/3674625},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
Story of Your Lazy Function’s Life: A Bidirectional Demand Semantics for Mechanized Cost Analysis of Lazy Programs
Li-yao Xia,
Laura Israel,
Maite Kramarz,
Nicholas Coltharp,
Koen Claessen,
Stephanie Weirich, and
Yao Li
(Unaffiliated, France; Portland State University, USA; University of Toronto, Canada; Chalmers University of Technology, Sweden; University of Pennsylvania, USA)
Lazy evaluation is a powerful tool that enables better compositionality and potentially better performance in functional programming, but it is challenging to analyze its computation cost. Existing approaches either require manually annotating sharing or rely on separation logic to reason about heaps of mutable cells. In this paper, we propose a bidirectional demand semantics that allows for extrinsic reasoning about the computation cost of lazy programs without relying on special program logics. To show the effectiveness of our approach, we apply the demand semantics to a variety of case studies including insertion sort, selection sort, Okasaki's banker's queue, and the implicit queue. We formally prove that the banker's queue and the implicit queue are both amortized and persistent using the Rocq Prover (formerly known as Coq). We also propose the reverse physicist's method, a novel variant of the classical physicist's method, which enables mechanized, modular and compositional reasoning about amortization and persistence with the demand semantics.
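The banker's queue case study is easy to state in miniature. Below is the standard textbook structure in OCaml, simplified so that the front is a single suspension rather than Okasaki's incremental stream; it is meant only to show where suspended cost lives and is unrelated to the authors' Rocq development.

(* A simplified banker's queue: the rear list is strict, the front is
   lazy, and the invariant lenr <= lenf ensures the suspended reversal
   is paid for by earlier operations. *)
type 'a queue = {
  front : 'a list Lazy.t;   (* lazily evaluated front *)
  lenf  : int;
  rear  : 'a list;          (* strict rear, kept reversed *)
  lenr  : int;
}

let empty = { front = lazy []; lenf = 0; rear = []; lenr = 0 }

(* Restore the invariant; the expensive List.rev is suspended inside
   the lazy front, which is exactly the cost a demand semantics must
   account for. *)
let check q =
  if q.lenr <= q.lenf then q
  else
    { front = lazy (Lazy.force q.front @ List.rev q.rear);
      lenf = q.lenf + q.lenr;
      rear = [];
      lenr = 0 }

let snoc q x = check { q with rear = x :: q.rear; lenr = q.lenr + 1 }

let uncons q =
  match Lazy.force q.front with
  | [] -> None
  | x :: f -> Some (x, check { q with front = lazy f; lenf = q.lenf - 1 })

let () =
  let q = List.fold_left snoc empty [1; 2; 3] in
  match uncons q with
  | Some (x, _) -> assert (x = 1)
  | None -> assert false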
@Article{ICFP24p237,
author = {Li-yao Xia and Laura Israel and Maite Kramarz and Nicholas Coltharp and Koen Claessen and Stephanie Weirich and Yao Li},
title = {Story of Your Lazy Function’s Life: A Bidirectional Demand Semantics for Mechanized Cost Analysis of Lazy Programs},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {237},
numpages = {34},
doi = {10.1145/3674626},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
Compiled, Extensible, Multi-language DSLs (Functional Pearl)
Michael Ballantyne,
Mitch Gamburg, and
Jason Hemann
(Northeastern University, USA; Unaffiliated, USA; Seton Hall University, USA)
Implementations of domain-specific languages should offer both extensibility and performance optimizations. With the new syntax-spec metalanguage in Racket, programmers can easily create DSL implementations that are both automatically macro-extensible and subject to conventional compiler optimizations. This pearl illustrates this approach through a new implementation of miniKanren, a widely used relational programming DSL. The miniKanren community has explored, in separate implementations, optimization techniques and a wide range of extensions. We demonstrate how our new miniKanren implementation with syntax-spec reconciles these features in a single implementation that comes with both an optimizing compiler and an extension mechanism. Furthermore, programmers using the new implementation benefit from the same seamless integration between Racket and miniKanren as in existing shallow embeddings.
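For readers who have not met miniKanren: the relational core that such implementations provide fits on a page. The sketch below transcribes the microKanren design of Hemann and Friedman into OCaml under our own naming; the pearl's contribution is not this core but its macro-extensible, optimizing embedding built with syntax-spec in Racket.

type term = Var of int | Sym of string | Pair of term * term

module M = Map.Make (Int)

(* A substitution maps logic variables to terms. *)
let rec walk t (s : term M.t) =
  match t with
  | Var v -> (match M.find_opt v s with Some t' -> walk t' s | None -> t)
  | _ -> t

let rec unify u v s =
  match walk u s, walk v s with
  | Var a, Var b when a = b -> Some s
  | Var a, t | t, Var a -> Some (M.add a t s)
  | Pair (u1, u2), Pair (v1, v2) ->
      Option.bind (unify u1 v1 s) (unify u2 v2)
  | Sym a, Sym b when a = b -> Some s
  | _ -> None

(* A goal maps a state to a lazy stream of answer states. *)
type state = term M.t * int            (* substitution, fresh counter *)
type stream = Nil | Cons of state * (unit -> stream)
type goal = state -> stream

let eq u v : goal = fun (s, c) ->
  match unify u v s with
  | Some s' -> Cons ((s', c), fun () -> Nil)
  | None -> Nil

let fresh (f : term -> goal) : goal = fun (s, c) -> f (Var c) (s, c + 1)

(* Interleaving disjunction and conjunction over streams. *)
let rec mplus a b =
  match a with
  | Nil -> b ()
  | Cons (st, k) -> Cons (st, fun () -> mplus (b ()) k)

let rec mbind a g =
  match a with
  | Nil -> Nil
  | Cons (st, k) -> mplus (g st) (fun () -> mbind (k ()) g)

let disj g1 g2 : goal = fun st -> mplus (g1 st) (fun () -> g2 st)
let conj g1 g2 : goal = fun st -> mbind (g1 st) g2

(* (fresh x. x = a \/ x = b) yields exactly two answers. *)
let () =
  let g = fresh (fun x -> disj (eq x (Sym "a")) (eq x (Sym "b"))) in
  match g (M.empty, 0) with
  | Cons (_, k) ->
      (match k () with
       | Cons (_, k') -> assert (k' () = Nil)
       | Nil -> assert false)
  | Nil -> assert false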
@Article{ICFP24p238,
author = {Michael Ballantyne and Mitch Gamburg and Jason Hemann},
title = {Compiled, Extensible, Multi-language DSLs (Functional Pearl)},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {238},
numpages = {24},
doi = {10.1145/3674627},
year = {2024},
}
Publisher's Version
Double-Ended Bit-Stealing for Algebraic Data Types
Martin Elsman
(University of Copenhagen, Denmark)
Algebraic data types are central to the development and evaluation of most functional programs. It is therefore important for compilers to choose compact and efficient representations of such types, in particular to achieve good memory footprints for applications.
Algebraic data types are most often represented using blocks of memory where the first word is used as a so-called tag, carrying information about the constructor, and the following words are used for carrying the constructor's arguments. As an optimisation, lists are usually represented more compactly, using a technique called bit-stealing, which, in its simplest form, uses the word-alignment property of pointers to byte-addressed allocated memory to discriminate between the nil constructor (often represented as 0x1) and the cons constructor (aligned pointer to allocated pair). Because the representation allows all values to be held uniformly in one machine word, possibly pointing to blocks of memory, type erasure is upheld.
However, on today's 64-bit architectures, memory addresses (pointers) are represented using only a subset of the 64 bits available in a machine word, which leaves many bits unused. In this paper, we explore the use, not only of the least-significant bits of pointers, but also of the most-significant bits, for representing algebraic data types in a full ML compiler. It turns out that, with such a particular utilisation of otherwise unused bits, which we call double-ended bit-stealing, it is possible to choose unboxed representations for a large set of data types, while still not violating the principle of uniform data-representations. Examples include Patricia trees, union-find data structures, stream data types, internal language representations for types and expressions, and mutually recursive ASTs for full language definitions.
The double-ended bit-stealing technique is implemented in the MLKit compiler and speedup ranges from 0 to 26 percent on benchmarks that are influenced by the technique. For MLKit, which uses abstract data types extensively, compilation speedups of around 9 percent are achieved for compiling MLton (another Standard ML compiler) and for compiling MLKit itself.
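To make the layout concrete, here is a schematic of single- and double-ended bit-stealing using plain OCaml integers as stand-in machine words. The constants and helpers are illustrative assumptions (8-byte alignment, a 48-bit address space), not MLKit's actual encoding.

(* Low bits are freed by pointer alignment; high bits are freed because
   hardware uses well under 64 bits of address. *)
let low_tag_bits = 3
let high_tag_shift = 48

(* Steal low bits: distinguish nil (an immediate, 0b001) from cons
   (an 8-byte-aligned pointer, whose low three bits are zero). *)
let nil = 0b001
let is_nil w = w land ((1 lsl low_tag_bits) - 1) = 0b001
let is_cons w = w land ((1 lsl low_tag_bits) - 1) = 0b000

(* Double-ended: also steal high bits, e.g. to record which constructor
   a pointer belongs to, keeping the value unboxed in one word. *)
let set_high_tag w tag = w lor (tag lsl high_tag_shift)
let get_high_tag w = (w lsr high_tag_shift) land 0xF
let strip_tags w =
  w land ((1 lsl high_tag_shift) - 1) land (lnot ((1 lsl low_tag_bits) - 1))

let () =
  let fake_ptr = 0x5F30 in          (* an 8-byte-aligned "address" *)
  let tagged = set_high_tag fake_ptr 0x3 in
  assert (is_cons fake_ptr && is_nil nil);
  assert (get_high_tag tagged = 0x3);
  assert (strip_tags tagged = fake_ptr)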
@Article{ICFP24p239,
author = {Martin Elsman},
title = {Double-Ended Bit-Stealing for Algebraic Data Types},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {239},
numpages = {33},
doi = {10.1145/3674628},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
A Safe Low-Level Language for Computer Algebra and Its Formally Verified Compiler
Guillaume Melquiond and
Josué Moreau
(Université Paris-Saclay - CNRS - ENS Paris-Saclay - Inria, France)
This article describes a programming language for writing low-level libraries for computer algebra systems. Such libraries (GMP, BLAS/LAPACK, etc.) are usually written in C, Fortran, and assembly, and make heavy use of arrays and pointers. The proposed language, halfway between C and Rust, is designed to be safe and to ease the deductive verification of programs, while being low-level enough to be suitable for this kind of computationally intensive application. This article also describes a compiler for this language, based on CompCert. The safety of the language has been formally proved using the Coq proof assistant, and so has the property of semantics preservation for the compiler. While the language is not yet feature-complete, this article shows what it entails to design a new domain-specific programming language alongside its formally verified compiler.
@Article{ICFP24p240,
author = {Guillaume Melquiond and Josué Moreau},
title = {A Safe Low-Level Language for Computer Algebra and Its Formally Verified Compiler},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {240},
numpages = {26},
doi = {10.1145/3674629},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
On the Operational Theory of the CPS-Calculus: Towards a Theoretical Foundation for IRs
Paulo Torrens,
Dominic Orchard, and
Cristiano Vasconcellos
(University of Kent, United Kingdom; University of Cambridge, United Kingdom; Santa Catarina State University, Brazil)
The continuation-passing style translation often employed by compilers gives rise to a class of intermediate representation languages in which functions are no longer allowed to return. Though the primary use of these intermediate representation languages is to expose details about a program's control flow, they may be equipped with an equational theory in order to be seen as specialized calculi, which in turn may be related to the original languages by means of a factorization theorem. In this paper, we explore Thielecke's CPS-calculus, a small theory of continuations inspired by compiler implementations, and study its metatheory. We extend it with a sound reduction semantics that faithfully represents optimization rules used in actual compilers, and prove that it acts as a suitable theoretical foundation for the intermediate representation of Appel's and Kennedy's compilers by following the guidelines set out by Plotkin. Finally, we prove that the CPS-calculus is strongly normalizing in the simply typed setting by using a novel proof method for reasoning about reducibility at a distance, from which logical consistency follows. Taken together, these results close a gap in the existing literature, providing a formal theory for reasoning about intermediate representations.
@Article{ICFP24p241,
author = {Paulo Torrens and Dominic Orchard and Cristiano Vasconcellos},
title = {On the Operational Theory of the CPS-Calculus: Towards a Theoretical Foundation for IRs},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {241},
numpages = {30},
doi = {10.1145/3674630},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
The Functional, the Imperative, and the Sudoku: Getting Good, Bad, and Ugly to Get Along (Functional Pearl)
Manuel Serrano and
Robert Bruce Findler
(Inria, France; Université Côte d’Azur, France; Northwestern University, USA)
Conventional wisdom suggests that the benefits of functional programming no longer apply in the presence of even a small amount of imperative code, as if the addition of imperative code effectively subtracts. And yet, as we show in this paper, combining functional programming with the special imperative language Esterel provides a multiplicative improvement to the benefits of functional programming.
The key to the benefit of both Esterel and functional programming stems from a restriction that both share. As in functional programming, where only the inputs to a function determine its outputs, the state of an Esterel computation is fully determined by the program’s input and the state that the computation had in the previous time step, where the notion of a time step is explicit in the language. Esterel’s guarantee holds even though Esterel programs feature concurrent threads, mutable state, and the ability to create, suspend, and even terminate threads as the computation proceeds. This similarity is the root of the benefits that programmers accrue as they informally reason about their program’s behavior.
To illustrate these benefits, the bulk of this paper consists of an in-depth exploration of HipHop code (a mashup of JavaScript and Esterel) that implements a Sudoku solver, showing how it is possible to write code that is as easy to understand as if it were written in a pure functional programming style, even though it uses multiple threads, mutable state, thread preemption, and even thread abortion. Even better, concurrent composition and task canceling provide significant program structuring benefits that allow a clean decomposition and task separation in the solver.
@Article{ICFP24p242,
author = {Manuel Serrano and Robert Bruce Findler},
title = {The Functional, the Imperative, and the Sudoku: Getting Good, Bad, and Ugly to Get Along (Functional Pearl)},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {242},
numpages = {26},
doi = {10.1145/3674631},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
Almost-Sure Termination by Guarded Refinement
Simon Oddershede Gregersen,
Alejandro Aguirre,
Philipp G. Haselwarter,
Joseph Tassarotti, and
Lars Birkedal
(New York University, USA; Aarhus University, Denmark)
Almost-sure termination is an important correctness property for probabilistic programs, and a number of program logics have been developed for establishing it. However, these logics have mostly been developed for first-order programs written in languages with specific syntactic patterns for looping. In this paper, we consider almost-sure termination for higher-order probabilistic programs with general references. This combination of features allows for recursion and looping to be encoded through a variety of patterns. Therefore, rather than developing proof rules for reasoning about particular recursion patterns, we instead propose an approach based on proving refinement between a higher-order program and a simpler probabilistic model, in such a way that the refinement preserves termination behavior. By proving a refinement, almost-sure termination behavior of the program can then be established by analyzing the simpler model.
We present this approach in the form of Caliper, a higher-order separation logic for proving termination-preserving refinements. Caliper uses probabilistic couplings to carry out relational reasoning between a program and a model. To handle the range of recursion patterns found in higher-order programs, Caliper uses guarded recursion, in particular the principle of Löb induction. A technical novelty is that Caliper does not require the use of transfinite step indexing or other technical restrictions found in prior work on guarded recursion for termination-preserving refinement. We demonstrate the flexibility of this approach by proving almost-sure termination of several examples, including first-order loop constructs, a random list generator, treaps, and a sampler for Galton-Watson trees that uses higher-order store. All the results have been mechanized in the Coq proof assistant.
@Article{ICFP24p243,
author = {Simon Oddershede Gregersen and Alejandro Aguirre and Philipp G. Haselwarter and Joseph Tassarotti and Lars Birkedal},
title = {Almost-Sure Termination by Guarded Refinement},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {243},
numpages = {31},
doi = {10.1145/3674632},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
Functional Programming in Financial Markets (Experience Report)
Atze Dijkstra,
José Pedro Magalhães, and
Pierre Néron
(Standard Chartered Bank, United Kingdom; Standard Chartered Bank, Singapore)
We present a case study of using functional programming in the real world at a very large scale. At
Standard Chartered Bank, Haskell is used in a core software library supporting the entire
Markets division -- a business line with 3 billion USD operating income in 2023.
Typed functional programming is used across the entire tech stack, including foundational APIs and CLIs for deal
valuation and risk analysis, server-side components for long-running batches or sub-second RESTful services, and
end-user GUIs. Thousands of users across Markets interact with software built using functional programming,
and over one hundred write functional code.
In this experience report we focus on how we leverage functional programming to orchestrate type-driven large-scale
pricing workflows. The same API can be used to price one trade locally, or millions of trades across thousands of
cloud nodes. Different parts of the computation can be run and inspected individually, and recomputing one part
triggers recalculation of the dependent parts only. We build upon decades of research and experience in the
functional programming community, relying on concepts such as monads, lenses, datatype generics, and closure
serialisation. We conclude that the use of functional programming is one of the main drivers of the success
of our project, and we see no significant downsides from it.
@Article{ICFP24p244,
author = {Atze Dijkstra and José Pedro Magalhães and Pierre Néron},
title = {Functional Programming in Financial Markets (Experience Report)},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {244},
numpages = {15},
doi = {10.1145/3674633},
year = {2024},
}
Publisher's Version
The Long Way to Deforestation: A Type Inference and Elaboration Technique for Removing Intermediate Data Structures
Yijia Chen and
Lionel Parreaux
(Hong Kong University of Science and Technology, Hong Kong, China)
Deforestation is a compiler optimization that removes intermediate data structure allocations from functional programs to improve their efficiency. This is an old idea, but previous approaches have proved limited or impractical — they either only worked on compositions of predefined combinators (shortcut fusion), or involved the aggressive unfolding of recursive definitions until a depth limit was reached or a reoccurring pattern was found to tie the recursive knot, resulting in impractical algorithmic complexity and large amounts of code duplication. We present Lumberhack, a general-purpose deforestation approach for purely functional call-by-value programs. Lumberhack uses subtype inference to reason about data structure production and consumption and uses an elaboration pass to fuse the corresponding recursive definitions. It fuses large classes of mutually recursive definitions while avoiding much of the unproductive (and sometimes counter-productive) code duplication inherent in previous approaches. We prove the soundness of Lumberhack using logical relations and experimentally demonstrate significant speedups in the standard nofib benchmark suite. We manually adapted nofib programs to call-by-value semantics and compiled them using the OCaml compiler. The average speedup over the 38 benchmarked programs is 8.2% while the average code size increases by a factor of just about 1.79. In particular, 19 programs see their performance mostly unchanged, 17 programs improve significantly (by an average speedup of 16.6%), and only three programs visibly worsen (by an average slowdown of 1.8%). As a point of comparison, we measured that the well-proven but semi-manual list fusion technique of the Glasgow Haskell Compiler (GHC), which only works for call-by-need programs, had an average speedup of 6.5%. Our technique is still in its infancy and misses many deforestation opportunities. We are confident that further refinements to the core technique will yield greater performance improvements in the future.
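The kind of rewrite that deforestation automates can be shown in two lines of OCaml. This is the textbook map-fusion example, not Lumberhack's algorithm, which must discover such fusions across arbitrary mutually recursive definitions.

(* Before: List.map g allocates an intermediate list that List.map f
   immediately consumes. *)
let before g f xs = List.map f (List.map g xs)

(* After: one traversal; the intermediate list is never allocated. *)
let after g f xs = List.map (fun x -> f (g x)) xs

let () =
  assert (before succ (fun x -> x * 2) [1; 2; 3]
          = after succ (fun x -> x * 2) [1; 2; 3])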
@Article{ICFP24p245,
author = {Yijia Chen and Lionel Parreaux},
title = {The Long Way to Deforestation: A Type Inference and Elaboration Technique for Removing Intermediate Data Structures},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {245},
numpages = {35},
doi = {10.1145/3674634},
year = {2024},
}
Publisher's Version
Published Artifact
Info
Artifacts Available
Artifacts Functional
Error Credits: Resourceful Reasoning about Error Bounds for Higher-Order Probabilistic Programs
Alejandro Aguirre,
Philipp G. Haselwarter,
Markus de Medeiros,
Kwing Hei Li,
Simon Oddershede Gregersen,
Joseph Tassarotti, and
Lars Birkedal
(Aarhus University, Denmark; New York University, USA)
Probabilistic programs often trade accuracy for efficiency, and thus
may, with a small probability, return an incorrect result.
It is important to obtain precise bounds for
the probability of these errors, but existing verification approaches have limitations that
lead to error probability bounds that are excessively coarse, or only apply to first-order programs.
In this paper we present Eris, a higher-order separation logic for proving
error probability bounds for probabilistic programs written in an expressive higher-order language.
Our key novelty is the introduction of error credits, a separation logic resource
that tracks an upper bound on the probability that a program returns an erroneous result.
By representing error bounds as a resource,
we recover the benefits of separation logic, including compositionality,
modularity, and dependency between errors and program terms, allowing
for more precise specifications. Moreover, we enable novel reasoning principles
such as expectation-preserving error composition, amortized error reasoning,
and error induction.
We illustrate the advantages of our approach by proving amortized error
bounds on a range of examples, including collision probabilities
in hash functions, which allow us to write
more modular specifications for data structures that use them as clients. We
also use our logic to prove correctness and almost-sure termination of
rejection sampling algorithms. All of our results have been mechanized
in the Coq proof assistant using the Iris separation logic framework and
the Coquelicot real analysis library.
@Article{ICFP24p246,
author = {Alejandro Aguirre and Philipp G. Haselwarter and Markus de Medeiros and Kwing Hei Li and Simon Oddershede Gregersen and Joseph Tassarotti and Lars Birkedal},
title = {Error Credits: Resourceful Reasoning about Error Bounds for Higher-Order Probabilistic Programs},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {246},
numpages = {33},
doi = {10.1145/3674635},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
Example-Based Reasoning about the Realizability of Polymorphic Programs
Niek Mulleners,
Johan Jeuring, and
Bastiaan Heeren
(Utrecht University, Netherlands; Open Universiteit, Netherlands)
Parametricity states that polymorphic functions behave the same regardless of how they are instantiated. When developing polymorphic programs, Wadler’s free theorems can serve as free specifications, which can turn otherwise partial specifications into total ones, and can make otherwise realizable specifications unrealizable. This is of particular interest to the field of program synthesis, where the unrealizability of a specification can be used to prune the search space. In this paper, we focus on the interaction between parametricity, input-output examples, and sketches. Unfortunately, free theorems introduce universally quantified functions that make automated reasoning difficult. Container morphisms provide an alternative representation for polymorphic functions that captures parametricity in a more manageable way. By using a translation to the container setting, we show how reasoning about the realizability of polymorphic programs with input-output examples can be automated.
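The free theorem being exploited can be demonstrated in a few lines of OCaml (our sketch, unrelated to the paper's container encoding): any parametric f : 'a list -> 'a list commutes with List.map, and consequently every element it outputs must occur in its input.

let f : 'a list -> 'a list = List.rev   (* any parametric candidate *)

(* Naturality, checked on one instance: map g (f xs) = f (map g xs). *)
let () =
  let xs = [1; 2; 3] and g = string_of_int in
  assert (List.map g (f xs) = f (List.map g xs))

(* This is what makes some input-output examples unrealizable: no
   polymorphic f can satisfy f [0] = [1], since the output element 1
   does not occur in the input and so cannot be produced. *)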
@Article{ICFP24p247,
author = {Niek Mulleners and Johan Jeuring and Bastiaan Heeren},
title = {Example-Based Reasoning about the Realizability of Polymorphic Programs},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {247},
numpages = {21},
doi = {10.1145/3674636},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
Snapshottable Stores
Clément Allain,
Basile Clément,
Alexandre Moine, and
Gabriel Scherer
(Inria, France; OCamlPro, France; Université Paris Cité - Inria - CNRS, France)
We say that an imperative data structure is snapshottable or supports snapshots if we can efficiently capture its current state, and restore a previously captured state to become the current state again. This is useful, for example, to implement backtracking search processes that update the data structure during search. Inspired by a data structure proposed in 1978 by Baker, we present a snapshottable store, a bag of mutable references that supports snapshots. Instead of capturing and restoring an array, we can capture an arbitrary set of references (of any type) and restore all of them at once. This snapshottable store can be used as a building block to support snapshots for arbitrary data structures, by simply replacing all mutable references in the data structure by our store references. We present use-cases of a snapshottable store when implementing type-checkers and automated theorem provers. Our implementation is designed to provide a very low overhead over normal references, in the common case where the capture/restore operations are infrequent. Read and write in store references are essentially as fast as in plain references in most situations, thanks to a key optimisation we call record elision. In comparison, the common approach of replacing references by integer indices into a persistent map incurs a logarithmic overhead on reads and writes, and sophisticated algorithms typically impose much larger constant factors. The implementation, which is inspired by Baker’s and the OCaml implementation of persistent arrays by Conchon and Filliâtre, is both fairly short and very hard to understand: it relies on shared mutable state in subtle ways. We provide a mechanized proof of correctness of its core using the Iris framework for the Coq proof assistant.
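The interface is easy to state, so here is a deliberately naive journal-based version (our sketch, supporting only backward restores) to fix intuitions. The paper's contribution lies in making this fast: with record elision, reads and writes cost almost the same as plain refs, which this toy version does not attempt.

module Store : sig
  type t
  type 'a sref
  type snapshot
  val create : unit -> t
  val sref : t -> 'a -> 'a sref
  val get : 'a sref -> 'a
  val set : t -> 'a sref -> 'a -> unit
  val capture : t -> snapshot
  val restore : t -> snapshot -> unit
end = struct
  type t = { mutable journal : (unit -> unit) list }  (* undo log *)
  type 'a sref = 'a ref
  type snapshot = int            (* journal length at capture time *)
  let create () = { journal = [] }
  let sref _ v = ref v
  let get r = !r
  let set s r v =
    let old = !r in
    s.journal <- (fun () -> r := old) :: s.journal;   (* log the undo *)
    r := v
  let capture s = List.length s.journal
  let restore s snap =
    (* undo every write made since the snapshot, newest first *)
    for _ = 1 to List.length s.journal - snap do
      match s.journal with
      | undo :: rest -> undo (); s.journal <- rest
      | [] -> assert false
    done
end

let () =
  let s = Store.create () in
  let r = Store.sref s 0 in
  let snap = Store.capture s in
  Store.set s r 42;
  Store.restore s snap;
  assert (Store.get r = 0)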
@Article{ICFP24p248,
author = {Clément Allain and Basile Clément and Alexandre Moine and Gabriel Scherer},
title = {Snapshottable Stores},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {248},
numpages = {32},
doi = {10.1145/3674637},
year = {2024},
}
Publisher's Version
Beyond Trees: Calculating Graph-Based Compilers (Functional Pearl)
Patrick Bahr and
Graham Hutton
(IT University of Copenhagen, Denmark; University of Nottingham, United Kingdom)
Bahr and Hutton recently developed an approach to compiler calculation that
allows a wide range of compilers to be derived from specifications of their
correctness. However, a limitation of the approach is that it results in
compilers that produce tree-structured code. By contrast, realistic
compilers produce code that is essentially graph-structured, where the edges
in the graph represent jumps that transfer the flow of control to other
locations in the code. In this article, we show how their approach can
naturally be adapted to calculate compilers that produce graph-structured
code, without changing the underlying calculational methodology, by using a
higher-order abstract syntax representation of graphs.
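To see the contrast in code, here is a sketch in OCaml with illustrative constructors (the paper calculates its compilers from their correctness specifications; nothing below is taken from it):

(* Tree-structured code: each instruction owns its unique successor,
   so a conditional must duplicate everything that follows it. *)
type treecode =
  | PUSH of int * treecode
  | ADD of treecode
  | JPZ of treecode * treecode    (* both branches carry full copies *)
  | HALT

(* Graph-structured code: jumps target labels. Binding labels with a
   function (higher-order abstract syntax) keeps every jump well-scoped
   by construction. *)
type label = int

type graphcode =
  | GPUSH of int * graphcode
  | GADD of graphcode
  | GJPZ of label * graphcode     (* jump if zero, else fall through *)
  | GJMP of label
  | GBLOCK of graphcode * (label -> graphcode)
      (* GBLOCK (blk, rest): name blk with a fresh label, usable in rest *)
  | GHALT

(* One shared continuation, reached from two program points; a tree
   compiler would have to emit it twice. *)
let example : graphcode =
  GBLOCK (GPUSH (0, GHALT), fun finish ->
    GPUSH (1, GJPZ (finish, GPUSH (2, GJMP finish))))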
@Article{ICFP24p249,
author = {Patrick Bahr and Graham Hutton},
title = {Beyond Trees: Calculating Graph-Based Compilers (Functional Pearl)},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {249},
numpages = {25},
doi = {10.1145/3674638},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
Grokking the Sequent Calculus (Functional Pearl)
David Binder,
Marco Tzschentke,
Marius Müller, and
Klaus Ostermann
(University of Tübingen, Germany)
The sequent calculus is a proof system which was designed as a more symmetric alternative to natural deduction. The λμμ̃-calculus is a term assignment system for the sequent calculus and a great foundation for compiler intermediate languages due to its first-class representation of evaluation contexts. Unfortunately, only experts of the sequent calculus can appreciate its beauty. To remedy this, we present the first introduction to the λμμ̃-calculus which is not directed at type theorists or logicians but at compiler hackers and programming-language enthusiasts. We do this by writing a compiler from a small but interesting surface language to the λμμ̃-calculus as a compiler intermediate language.
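As a small taste for the same audience, here is a minimal OCaml rendering of the calculus's three syntactic categories and its function-call rule; the constructor names are ours, and the μ/μ̃ rules (which reduce by substitution) are omitted.

(* Producers build values, consumers are first-class evaluation
   contexts, and a statement cuts a producer against a consumer. *)
type producer =
  | Var of string
  | Lam of string * producer             (* λx.p *)
  | Mu of string * statement             (* μα.s: capture the current context *)

and consumer =
  | Covar of string                      (* α: a context variable *)
  | Stack of producer * consumer         (* p · c: argument on a call stack *)
  | MuTilde of string * statement        (* μ̃x.s: bind the incoming value *)

and statement =
  | Cut of producer * consumer           (* ⟨p | c⟩ *)

(* The call rule: ⟨λx.p | q · c⟩ steps to ⟨q | μ̃x.⟨p | c⟩⟩, i.e.
   evaluate the argument q, bind it to x, then run p against c. *)
let step : statement -> statement option = function
  | Cut (Lam (x, p), Stack (q, c)) -> Some (Cut (q, MuTilde (x, Cut (p, c))))
  | _ -> None

let () =
  let s = Cut (Lam ("x", Var "x"), Stack (Var "y", Covar "halt")) in
  assert (step s
          = Some (Cut (Var "y", MuTilde ("x", Cut (Var "x", Covar "halt")))))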
@Article{ICFP24p250,
author = {David Binder and Marco Tzschentke and Marius Müller and Klaus Ostermann},
title = {Grokking the Sequent Calculus (Functional Pearl)},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {250},
numpages = {31},
doi = {10.1145/3674639},
year = {2024},
}
Publisher's Version
Published Artifact
Info
Artifacts Available
Artifacts Reusable
Sound Borrow-Checking for Rust via Symbolic Semantics
Son Ho,
Aymeric Fromherz, and
Jonathan Protzenko
(Inria, France; Microsoft Azure Research, USA)
The Rust programming language continues to rise in popularity, and as such, warrants the close attention of the programming languages community. In this work, we present a new foundational contribution towards the theoretical understanding of Rust’s semantics. We prove that LLBC, a high-level, borrow-centric model previously proposed for Rust’s semantics and execution, is sound with respect to a low-level pointer-based language à la CompCert. Specifically, we prove the following: that LLBC is a correct view over a traditional model of execution; that LLBC’s symbolic semantics are a correct abstraction of LLBC programs; and that LLBC’s symbolic semantics act as a borrow-checker for LLBC, i.e., that symbolically checked LLBC programs do not get stuck when executed on a heap-and-addresses model of execution. To prove these results, we introduce a new proof style that considerably simplifies our proofs of simulation, which relies on a notion of hybrid states. Equipped with this reasoning framework, we show that a new addition to LLBC’s symbolic semantics, namely a join operation, preserves the abstraction and borrow-checking properties. This in turn allows us to add support for loops to the Aeneas framework; we show, using a series of examples and case studies, that this unlocks new expressive power for Aeneas.
@Article{ICFP24p251,
author = {Son Ho and Aymeric Fromherz and Jonathan Protzenko},
title = {Sound Borrow-Checking for Rust via Symbolic Semantics},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {251},
numpages = {29},
doi = {10.1145/3674640},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
Abstracting Effect Systems for Algebraic Effect Handlers
Takuma Yoshioka,
Taro Sekiyama, and
Atsushi Igarashi
(Kyoto University, Japan; National Institute of Informatics, Japan; SOKENDAI, Japan)
Many effect systems for algebraic effect handlers are designed to guarantee that all invoked effects are handled adequately. However, researchers have developed their own effect systems, which differ in how they represent the collections of effects that may happen. This situation blurs what is required for the representation and manipulation of effect collections in a safe effect system. In this work, we present a language λEA equipped with an effect system that abstracts the existing effect systems for algebraic effect handlers. The effect system of λEA is parameterized over effect algebras, which abstract the representation and manipulation of effect collections in safe effect systems. We prove the type-and-effect safety of λEA by assuming that a given effect algebra meets certain properties called safety conditions. As a result, we can obtain the safety properties of a concrete effect system by proving that an effect algebra corresponding to the concrete system meets the safety conditions. We also show that effect algebras meeting the safety conditions are expressive enough to accommodate some existing effect systems, each of which represents effect collections in a different style. Our framework can also differentiate the safety aspects of the effect collections of the existing effect systems. To this end, we extend λEA and the safety conditions to lift coercions and type-erasure semantics, propose other effect algebras including ones for which no effect system has been studied in the literature, and compare which effect algebra is safe and which is not for the extensions.
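The abstract has a natural reading in module-language terms: an effect algebra is an abstract carrier of effect collections together with the operations a safe effect system needs. The signature and set-based instance below are our guess at that shape, not the paper's actual definitions or safety conditions.

module type EFFECT_ALGEBRA = sig
  type t                           (* a collection of effects *)
  val pure : t                     (* the empty collection *)
  val add : string -> t -> t       (* an operation that may be performed *)
  val remove : string -> t -> t    (* discharge an effect at its handler *)
  val sub : t -> t -> bool         (* sub e1 e2: e1 permitted wherever e2 is *)
end

(* One instance: collections as sets of operation names. Row- or
   multiset-based instances would implement the same signature. *)
module SetAlgebra : EFFECT_ALGEBRA = struct
  module S = Set.Make (String)
  type t = S.t
  let pure = S.empty
  let add = S.add
  let remove = S.remove
  let sub = S.subset
end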
@Article{ICFP24p252,
author = {Takuma Yoshioka and Taro Sekiyama and Atsushi Igarashi},
title = {Abstracting Effect Systems for Algebraic Effect Handlers},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {252},
numpages = {30},
doi = {10.1145/3674641},
year = {2024},
}
Publisher's Version
Archive submitted (740 kB)
Oxidizing OCaml with Modal Memory Management
Anton Lorenzen,
Leo White,
Stephen Dolan,
Richard A. Eisenberg, and
Sam Lindley
(University of Edinburgh, United Kingdom; Jane Street, United Kingdom; Jane Street, USA)
Programmers can often improve the performance of their programs by reducing heap allocations: either by allocating on the stack or reusing existing memory in-place. However, without safety guarantees, these optimizations can easily lead to use-after-free errors and even type unsoundness. In this paper, we present a design based on modes which allows programmers to safely reduce allocations by using stack allocation and in-place updates of immutable structures. We focus on three mode axes: affinity, uniqueness and locality. Modes are fully backwards compatible with existing OCaml code and can be completely inferred. Our work makes manual memory management in OCaml safe and convenient and charts a path towards bringing the benefits of Rust to OCaml.
@Article{ICFP24p253,
author = {Anton Lorenzen and Leo White and Stephen Dolan and Richard A. Eisenberg and Sam Lindley},
title = {Oxidizing OCaml with Modal Memory Management},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {253},
numpages = {30},
doi = {10.1145/3674642},
year = {2024},
}
Publisher's Version
Archive submitted (890 kB)
Blame-Correct Support for Receiver Properties in Recursively-Structured Actor Contracts
Bram Vandenbogaerde,
Quentin Stiévenart, and
Coen De Roover
(Vrije Universiteit Brussel, Belgium; Université du Québec à Montréal, Canada)
Actor languages model concurrency as processes that communicate through asynchronous message sends. Unfortunately, as the complexity of these systems increases, it becomes more difficult to compose and integrate their components. This is because components make assumptions about their communication partners that may not be upheld when they remain implicit. In this paper, we bring design-by-contract programming to actor programs through a contract system that enables expressing constraints on receiver-related properties. Expressing properties about the expected receiver of a message, and about this receiver's communication behavior, requires two novel types of contracts. Through their recursive structure, these contracts can govern entire communication chains. We implement the contract system for an actor extension of Scheme, describe it formally, and show how to assign blame in case of a contract violation. Finally, we prove our contract system and its blame assignment correct by formulating and proving a blame correctness theorem.
@Article{ICFP24p254,
author = {Bram Vandenbogaerde and Quentin Stiévenart and Coen De Roover},
title = {Blame-Correct Support for Receiver Properties in Recursively-Structured Actor Contracts},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {254},
numpages = {29},
doi = {10.1145/3674643},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
Gradual Indexed Inductive Types
Mara Malewski,
Kenji Maillard,
Nicolas Tabareau, and
Éric Tanter
(University of Chile, Chile; Inria, France)
Indexed inductive types are essential in dependently-typed programming languages, enabling precise and expressive specifications of data structures and properties. Recognizing that programming and proving with dependent types could benefit from the smooth integration of static and dynamic checking that gradual typing offers, recent efforts have studied gradual dependent types. Gradualizing indexed inductive types however remains mostly unexplored: the standard encodings of indexed inductive types in intensional type theory, e.g., using type-level fixpoints or subset types, break in the presence of gradual features; and previous work on gradual dependent types focuses on very specific instances of indexed inductive types. This paper contributes a general framework, named PUNK, specifically designed for exploring the design space of gradual indexed inductive types. PUNK is a versatile framework, enabling the exploration of the space between eager and lazy cast reduction semantics that arise from the interaction between casts and the inductive eliminator, allowing them to coexist and interoperate in a single system. Our work provides significant insights into the intersection of dependent types and gradual typing, by proposing criteria for well-behaved gradual indexed inductive types, systematically addressing the outlined challenges of integrating these types. The contributions of this paper are a step forward in the quest for making gradual theorem proving and gradual dependently-typed programming a reality.
@Article{ICFP24p255,
author = {Mara Malewski and Kenji Maillard and Nicolas Tabareau and Éric Tanter},
title = {Gradual Indexed Inductive Types},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {255},
numpages = {29},
doi = {10.1145/3674644},
year = {2024},
}
Publisher's Version
Refinement Composition Logic
Youngju Song and
Dongjae Lee
(MPI-SWS, Germany; Seoul National University, South Korea)
One successful approach to verifying programs is refinement, where one establishes that the implementation (e.g., in C) behaves as specified in its mathematical specification. In this approach, the end result (a whole implementation refines a whole specification) is often established via composing multiple “small” refinements.
In this paper, we focus on the task of composing refinements. Our key observation is a novel correspondence between the task of composing refinements and the task of proving entailments in modern separation logic. This correspondence is useful. First, it unlocks tools and abstract constructs developed for separation logic, greatly streamlining the composition proof. Second, it uncovers a fundamentally new verification strategy. We address the key challenge in establishing the correspondence with a novel use of angelic non-determinism.
Guided by the correspondence, we develop RCL (Refinement Composition Logic), a logic dedicated to composing refinements. All our results are formalized in Coq.
@Article{ICFP24p256,
author = {Youngju Song and Dongjae Lee},
title = {Refinement Composition Logic},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {256},
numpages = {29},
doi = {10.1145/3674645},
year = {2024},
}
Publisher's Version
Abstract Interpreters: A Monadic Approach to Modular Verification
Sébastien Michelland,
Yannick Zakowski, and
Laure Gonnord
(Université Grenoble-Alpes - Grenoble INP - LCIS, France; Inria - ENS de Lyon - CNRS - UCBL1 - LIP - UMR 5668, France)
We argue that monadic interpreters built as layers of interpretations stacked atop the free monad constitute a promising way to implement and verify abstract interpreters in dependently-typed theories such as the one underlying the Coq proof assistant.
The approach enables modular proofs of soundness of the resulting interpreters. We provide generic abstract control flow combinators proven correct once and for all against their concrete counterpart. We demonstrate how to relate concrete handlers implementing effects to abstract variants of these handlers, essentially capturing the traditional soundness of transfer functions in the context of monadic interpreters. Finally, we provide generic results to lift soundness statements via the interpretation of stateful and failure effects.
We formalize all the aforementioned combinators and theories in Coq, and demonstrate their benefits by implementing and proving correct two illustrative abstract interpreters for a structured imperative language and a toy assembly.
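A toy version of the setup, for intuition: one effectful syntax, two handlers. The free-monad-style program below is parameterized by its value type, so the same term can be run by a concrete handler (values are integers) or an abstract one (values are intervals). This is our illustration only; the paper's development is built on layered monadic interpreters in Coq and proves the handlers related.

type ('v, 'a) prog =
  | Ret of 'a
  | Get of ('v -> ('v, 'a) prog)          (* read the single cell *)
  | Set of 'v * (unit -> ('v, 'a) prog)   (* write the single cell *)

(* A program written once against the signature; arithmetic on values
   is passed in, so the term stays polymorphic in 'v. *)
let incr_cell (add1 : 'v -> 'v) : ('v, unit) prog =
  Get (fun x -> Set (add1 x, fun () -> Ret ()))

(* Concrete handler: the state is an int. *)
let rec run_concrete (s : int) (p : (int, 'a) prog) : 'a * int =
  match p with
  | Ret x -> (x, s)
  | Get k -> run_concrete s (k s)
  | Set (v, k) -> run_concrete v (k ())

(* Abstract handler: the state is an interval. Same traversal, different
   domain; soundness would relate the two transfer functions. *)
type itv = int * int

let rec run_abstract (s : itv) (p : (itv, 'a) prog) : 'a * itv =
  match p with
  | Ret x -> (x, s)
  | Get k -> run_abstract s (k s)
  | Set (v, k) -> run_abstract v (k ())

let () =
  assert (snd (run_concrete 4 (incr_cell succ)) = 5);
  assert (snd (run_abstract (0, 10) (incr_cell (fun (l, h) -> (l + 1, h + 1))))
          = (1, 11))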
@Article{ICFP24p257,
author = {Sébastien Michelland and Yannick Zakowski and Laure Gonnord},
title = {Abstract Interpreters: A Monadic Approach to Modular Verification},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {257},
numpages = {28},
doi = {10.1145/3674646},
year = {2024},
}
Publisher's Version
Published Artifact
Info
Artifacts Available
Artifacts Reusable
Dependent Ghosts Have a Reflection for Free
Théo Winterhalter
(Inria, France)
We introduce ghost type theory (GTT), a dependent type theory extended with a new universe for ghost data that can safely be erased when running a program, but which, unlike a universe of (strict) propositions, is not proof irrelevant. Instead, ghost data carry information that can be used in proofs or to discard impossible cases in relevant computations. Casts can be used to replace ghost values by others that are propositionally equal, but crucially these casts can be ignored for conversion without compromising soundness. We provide a type-preserving erasure procedure which gets rid of all ghost data and proofs, a procedure which may serve as a first step towards program extraction. We give a syntactical model of GTT using a program translation akin to the parametricity translation and thus show consistency of the theory. Because it is a parametricity model, it can also be used to derive free theorems about programs using ghost code. We further extend GTT to support equality reflection and show that we can eliminate its use without the need for the usual extra axioms of function extensionality and uniqueness of identity proofs. In particular we validate the intuition that indices of inductive types—such as the length index of vectors—do not matter for computation and can safely be considered modulo theory. The results of the paper have been formalised in Coq.
@Article{ICFP24p258,
author = {Théo Winterhalter},
title = {Dependent Ghosts Have a Reflection for Free},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {258},
numpages = {29},
doi = {10.1145/3674647},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
Closure-Free Functional Programming in a Two-Level Type Theory
András Kovács
(University of Gothenburg, Sweden)
Many abstraction tools in functional programming rely heavily on general-purpose compiler optimization to achieve adequate performance. For example, monadic binding is a higher-order function which yields runtime closures in the absence of sufficient compile-time inlining and beta-reductions, thereby significantly degrading performance. In current systems such as the Glasgow Haskell Compiler, there is no strong guarantee that general-purpose optimization can eliminate abstraction overheads, and users only have indirect and fragile control over code generation through inlining directives and compiler options. We propose a two-stage language to simultaneously get strong guarantees about code generation and strong abstraction features. The object language is a simply-typed first-order language which can be compiled without runtime closures. The compile-time language is a dependent type theory. The two are integrated in a two-level type theory. We demonstrate two applications of the system. First, we develop monads and monad transformers. Here, abstraction overheads are eliminated by staging and we can reuse almost all definitions from the existing Haskell ecosystem. Second, we develop pull-based stream fusion. Here we make essential use of dependent types to give a concise definition of a concatMap operation with guaranteed fusion. We provide an Agda implementation and a typed Template Haskell implementation of these developments.
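The overhead at stake is visible in plain OCaml (our illustration; in the paper the object language is first-order and the guarantee holds by construction): a state-monad pipeline built from bind allocates a closure per step, while staging should always produce the flat residual program on the right.

type ('a, 's) state = 's -> 'a * 's

let return (x : 'a) : ('a, 's) state = fun s -> (x, s)

let bind (m : ('a, 's) state) (f : 'a -> ('b, 's) state) : ('b, 's) state =
  fun s -> let x, s' = m s in f x s'     (* one closure per bind *)

let get : ('s, 's) state = fun s -> (s, s)
let put (s : 's) : (unit, 's) state = fun _ -> ((), s)

(* Abstract but closure-heavy: *)
let tick : (int, int) state =
  bind get (fun n -> bind (put (n + 1)) (fun () -> return n))

(* The residual code guaranteed staging should produce: no closures. *)
let tick_staged : int -> int * int = fun n -> (n, n + 1)

let () = assert (tick 5 = tick_staged 5)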
@Article{ICFP24p259,
author = {András Kovács},
title = {Closure-Free Functional Programming in a Two-Level Type Theory},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {259},
numpages = {34},
doi = {10.1145/3674648},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
Staged Compilation with Module Functors
Tsung-Ju Chiang,
Jeremy Yallop,
Leo White, and
Ningning Xie
(University of Toronto, Canada; University of Cambridge, United Kingdom; Jane Street, United Kingdom)
Multi-stage programming has been used in a wide variety of domains to eliminate the tension between abstraction and performance. However, the interaction of multi-stage programming features with features for programming-in-the-large remains understudied, hindering the full integration of multi-stage programming support into existing languages, and limiting the effective use of staging in large programs. We take steps to remedy the situation by studying the extension of MacoCaml, a recent OCaml extension that supports compile-time code generation via macros and quotations, with module functors, the key mechanism in OCaml for assembling program components into larger units. We discuss design choices related to evaluation order, formalize our calculus via elaboration, and show that the design enjoys key metatheoretical properties: syntactic type soundness, elaboration soundness, and phase distinction. We believe that this study lays a foundation for the continued exploration and implementation of the OCaml macro system.
@Article{ICFP24p260,
author = {Tsung-Ju Chiang and Jeremy Yallop and Leo White and Ningning Xie},
title = {Staged Compilation with Module Functors},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {260},
numpages = {35},
doi = {10.1145/3674649},
year = {2024},
}
Publisher's Version
Deriving with Derivatives: Optimizing Incremental Fixpoints for Higher-Order Flow Analysis
Benjamin Quiring and
David Van Horn
(University of Maryland at College Park, USA; University of Maryland, USA)
At the heart of efficient program analysis implementations are incremental solutions to fixpoint problems. These solutions can be interpreted as the derivative of the underlying analysis function. Methods that describe how to systematically derive higher-order analyses from program semantics, such as Abstracting Abstract Machines, do not shed light on how to efficiently implement those analyses. In this paper, we explore complementary techniques to optimize the derivative computation towards deriving efficient implementations. In particular, we use static specializations (by partial evaluation and rewriting) and dynamic specializations (in the form of tracking dependencies during the fixpoint), yielding efficient incremental fixpoints. We present how these optimizations apply to an example analysis of continuation-passing-style λ-calculus, and describe how they pair particularly well with tunable and optimized workset-based fixpoint methods. We demonstrate the efficacy of this approach on a flow analysis for the Standard ML language, yielding an average speed-up of 56x over an existing fixpoint method for higher-order analyses from the literature.
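The phrase "tracking dependencies during the fixpoint" corresponds to the classic dependency-aware worklist solver sketched below in OCaml (a generic toy of ours, far simpler than the paper's optimized derivative computation): reads are recorded, and when a node's value changes only its recorded readers are re-scheduled.

module Solve (Key : Map.OrderedType) = struct
  module M = Map.Make (Key)

  (* [f lookup k] computes the value of node [k], reading other nodes
     only through [lookup]; [bottom] is the least value. *)
  let fixpoint (nodes : Key.t list)
      (f : (Key.t -> 'v) -> Key.t -> 'v) (bottom : 'v) : 'v M.t =
    let values = ref M.empty in
    let deps = ref M.empty in          (* node -> nodes that read it *)
    let work = Queue.create () in
    List.iter (fun k -> Queue.add k work) nodes;
    while not (Queue.is_empty work) do
      let k = Queue.pop work in
      let lookup k' =
        (* record that k depends on k', then read the current value *)
        deps := M.update k'
          (fun o -> Some (k :: Option.value o ~default:[])) !deps;
        Option.value (M.find_opt k' !values) ~default:bottom
      in
      let v = f lookup k in
      if M.find_opt k !values <> Some v then begin
        values := M.add k v !values;
        (* re-schedule only the recorded readers of k *)
        List.iter (fun d -> Queue.add d work)
          (Option.value (M.find_opt k !deps) ~default:[])
      end
    done;
    !values
end

(* Example: reachability from node 0 as a least fixpoint. *)
let () =
  let module S = Solve (Int) in
  let edges = [ (0, 1); (1, 2) ] in
  let f lookup n =
    n = 0 || List.exists (fun (a, b) -> b = n && lookup a) edges in
  assert (S.M.find 2 (S.fixpoint [0; 1; 2] f false))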
@Article{ICFP24p261,
author = {Benjamin Quiring and David Van Horn},
title = {Deriving with Derivatives: Optimizing Incremental Fixpoints for Higher-Order Flow Analysis},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {261},
numpages = {28},
doi = {10.1145/3674650},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
Parallel Algebraic Effect Handlers
Ningning Xie,
Daniel D. Johnson,
Dougal Maclaurin, and
Adam Paszke
(University of Toronto, Canada; Google DeepMind, Canada; Google DeepMind, USA; Google DeepMind, Germany)
Algebraic effect handlers support composable and structured control-flow abstraction. However, existing designs of algebraic effects often require effects to be executed sequentially. This paper studies parallel algebraic effect handlers. In particular, we formalize λₚ, a lambda calculus which models two key features, effect handlers and parallelizable computations, the latter of which takes the form of a for expression, inspired by the Dex programming language. We present various interesting examples expressible in our calculus. To show that our design can be implemented in a type-safe way, we present a higher-order polymorphic lambda calculus Fₚ that extends λₚ with a lightweight value dependent type system, and prove that Fₚ preserves the semantics of λₚ and enjoys syntactic type soundness. Lastly, we provide an implementation of the language design as a Haskell library, which mirrors both λₚ and Fₚ and reveals new connections to free applicative functors. All examples presented can be encoded in the Haskell implementation. We believe this paper is the first to study the combination of user-defined effect handlers and parallel computations, and it is our hope that it provides a basis for future designs and implementations of parallel algebraic effect handlers.
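The paper's artifact is a Haskell library; as background for the sequential half of the combination, this is what a user-defined effect and handler look like in OCaml 5's Effect module. The example is ours and illustrates effect handlers in general; the paper's contribution is running such handlers under a parallelizable for expression.

open Effect
open Effect.Deep

(* A user-defined effect: emit an integer to some enclosing handler. *)
type _ Effect.t += Emit : int -> unit Effect.t

let sum_emitted (f : unit -> unit) : int =
  let total = ref 0 in
  match_with f ()
    { retc = (fun () -> !total);
      exnc = raise;
      effc = (fun (type c) (eff : c Effect.t) ->
        match eff with
        | Emit n ->
            Some (fun (k : (c, _) continuation) ->
              total := !total + n;     (* interpret the effect *)
              continue k ())           (* resume after the perform *)
        | _ -> None) }

let () =
  assert (sum_emitted (fun () -> for i = 1 to 4 do perform (Emit i) done) = 10)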
@Article{ICFP24p262,
author = {Ningning Xie and Daniel D. Johnson and Dougal Maclaurin and Adam Paszke},
title = {Parallel Algebraic Effect Handlers},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {262},
numpages = {33},
doi = {10.1145/3674651},
year = {2024},
}
Publisher's Version
A Two-Phase Infinite/Finite Low-Level Memory Model: Reconciling Integer–Pointer Casts, Finite Space, and undef at the LLVM IR Level of Abstraction
Calvin Beck,
Irene Yoon,
Hanxi Chen,
Yannick Zakowski, and
Steve Zdancewic
(University of Pennsylvania, USA; Inria, France; Inria - ENS de Lyon - CNRS - UCBL1 - LIP - UMR 5668, France)
This paper provides a novel approach to reconciling complex low-level memory model features, such as pointer–integer casts, with desired refinements that are needed to justify the correctness of program transformations. The idea is to use a "two-phase" memory model, one with an unbounded memory and corresponding unbounded integer type, and one with a finite memory; the connection between the two levels is made explicit by a notion of refinement that handles out-of-memory behaviors. This approach allows for more optimizations to be performed and establishes a clear boundary between the idealized semantics of a program and the implementation of that program on finite hardware.
The two-phase memory model has been incorporated into an LLVM IR semantics, demonstrating its utility in practice in the context of a low-level language with features like undef and bitcast. This yields infinite and finite memory versions of the language semantics that are proven to be in refinement with respect to out-of-memory behaviors. Each semantics is accompanied by a verified executable reference interpreter. The semantics justify optimizations, such as dead-alloca-elimination, that were previously impossible or difficult to prove correct.
@Article{ICFP24p263,
author = {Calvin Beck and Irene Yoon and Hanxi Chen and Yannick Zakowski and Steve Zdancewic},
title = {A Two-Phase Infinite/Finite Low-Level Memory Model: Reconciling Integer–Pointer Casts, Finite Space, and undef at the LLVM IR Level of Abstraction},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {263},
numpages = {29},
doi = {10.1145/3674652},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
CCLemma: E-Graph Guided Lemma Discovery for Inductive Equational Proofs
Cole Kurashige,
Ruyi Ji,
Aditya Giridharan,
Mark Barbone,
Daniel Noor,
Shachar Itzhaky,
Ranjit Jhala, and
Nadia Polikarpova
(University of California at San Diego, USA; Peking University, China; Technion, Israel)
The problem of automatically proving the equality of terms over recursive functions and inductive data types is challenging, as such proofs often require auxiliary lemmas which must themselves be proven. Previous attempts at lemma discovery compromise on either efficiency or efficacy. Goal-directed approaches are fast but limited in expressiveness, as they can only discover auxiliary lemmas which entail their goals. Theory exploration approaches are expressive but inefficient, as they exhaustively enumerate candidate lemmas. We introduce e-graph guided lemma discovery, a new approach to finding equational proofs that makes theory exploration goal-directed. We accomplish this by using e-graphs and equality saturation to efficiently construct and compactly represent the space of all goal-oriented proofs. This allows us to explore only those auxiliary lemmas guaranteed to help make progress on some of these proofs. We implemented our method in a new prover called CCLemma and compared it with three state-of-the-art provers across a variety of benchmarks. CCLemma performs consistently well on two standard benchmarks and additionally solves 50% more problems than the next best tool on a new challenging set.
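A standard instance of the problem, transcribed into OCaml so both equations can at least be tested (our example, not taken from the paper's benchmark suites): the goal needs an auxiliary lemma that must itself be conjectured and proven by induction.

let rec rev = function [] -> [] | x :: xs -> rev xs @ [x]

(* Goal: rev (rev xs) = xs. Induction on xs gets stuck in the cons
   case at rev (rev xs @ [x]); progress needs the auxiliary lemma
   rev (ys @ [x]) = x :: rev ys, which is not an instance of the goal
   and must be discovered separately. *)
let goal xs = rev (rev xs) = xs
let lemma ys x = rev (ys @ [x]) = x :: rev ys

let () =
  assert (goal [1; 2; 3]);
  assert (lemma [1; 2] 3)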
@Article{ICFP24p264,
author = {Cole Kurashige and Ruyi Ji and Aditya Giridharan and Mark Barbone and Daniel Noor and Shachar Itzhaky and Ranjit Jhala and Nadia Polikarpova},
title = {CCLemma: E-Graph Guided Lemma Discovery for Inductive Equational Proofs},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {264},
numpages = {27},
doi = {10.1145/3674653},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
Call-by-Unboxed-Value
Paul Downen
(University of Massachusetts at Lowell, USA)
Call-By-Push-Value has famously subsumed both call-by-name and call-by-value by decomposing programs along the axis of "values" versus "computations." Here, we introduce Call-By-Unboxed-Value which further decomposes programs along an orthogonal axis separating "atomic" versus "complex." As the name suggests, these two dimensions make it possible to express the representations of values as boxed or unboxed, so that functions pass unboxed values as inputs and outputs. More importantly, Call-By-Unboxed-Value allows for an unrestricted mixture of polymorphism and unboxed types, giving a foundation for studying compilation techniques for polymorphism based on representation irrelevance. In this regard, we use Call-By-Unboxed-Value to formalize representation polymorphism independently of types; for the first time compiling untyped representation-polymorphic code, while nonetheless preserving types all the way to the machine.
@Article{ICFP24p265,
author = {Paul Downen},
title = {Call-by-Unboxed-Value},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {265},
numpages = {35},
doi = {10.1145/3674654},
year = {2024},
}
Publisher's Version
Contextual Typing
Xu Xue and
Bruno C. d. S. Oliveira
(University of Hong Kong, China)
Bidirectional typing is a simple, lightweight approach to type inference that propagates known type information during typing, and can scale up to many different type systems and features. It typically only requires a reasonable amount of annotations and eliminates the need for many obvious annotations. Nonetheless, the power of inference is still limited, and complications arise in the presence of more complex features. In this paper we present a generalization of bidirectional typing called contextual typing. In contextual typing not only known type information is propagated during typing, but also other known information about the surrounding context of a term. This information can be of various forms, such as other terms or record labels. Due to this richer notion of contextual information, fewer annotations are needed, while the approach remains lightweight and scalable. For type systems with subtyping, contextual typing subsumption is also more expressive than subsumption with bidirectional typing, since partially known contextual information can be exploited. To aid specifying type systems with contextual typing, we introduce Quantitative Type Assignment Systems (QTASs). A QTAS quantifies the amount of type information that a term needs in order to type check using counters. Thus, a counter in a QTAS generalizes modes in traditional bidirectional typing, which can only model an all (checking mode) or nothing (inference mode) approach. QTASs enable precise guidelines for the annotatability of contextual type systems, formalized as a theorem. We illustrate contextual typing first on a simply typed lambda calculus, and then on a richer calculus with subtyping, intersection types, records and overloading. All the metatheory is formalized in the Agda theorem prover.
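For reference, the bidirectional baseline that contextual typing generalizes can be sketched in a few lines of OCaml; this shows only the classic two modes (inference and checking) that a QTAS counter refines, and is our sketch rather than the paper's Agda development.

type ty = Int | Arrow of ty * ty

type tm =
  | Var of string
  | Lit of int
  | Lam of string * tm            (* unannotated: checkable only *)
  | App of tm * tm
  | Ann of tm * ty                (* annotation: switches to inference *)

let rec infer env = function
  | Var x -> List.assoc_opt x env
  | Lit _ -> Some Int
  | Ann (t, ty) -> if check env t ty then Some ty else None
  | App (f, a) ->
      (match infer env f with
       | Some (Arrow (dom, cod)) when check env a dom -> Some cod
       | _ -> None)
  | Lam _ -> None   (* cannot infer an unannotated lambda *)

and check env t ty =
  match t, ty with
  | Lam (x, body), Arrow (dom, cod) -> check ((x, dom) :: env) body cod
  | _ -> infer env t = Some ty     (* subsumption: infer, then compare *)

let () =
  (* (λx. x) checks against Int -> Int but cannot be inferred bare. *)
  assert (check [] (Lam ("x", Var "x")) (Arrow (Int, Int)));
  assert (infer [] (Lam ("x", Var "x")) = None)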
@Article{ICFP24p266,
author = {Xu Xue and Bruno C. d. S. Oliveira},
title = {Contextual Typing},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {266},
numpages = {29},
doi = {10.1145/3674655},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
Specification and Verification for Unrestricted Algebraic Effects and Handling
Yahui Song,
Darius Foo, and
Wei-Ngan Chin
(National University of Singapore, Singapore)
Programming with user-defined effects and effect handlers has many practical use cases involving imperative effects. Moreover, multi-shot effect handlers are a natural and powerful fit for non-deterministic or probabilistic programs that backtrack to compute a comprehensive outcome. Existing works for verifying effect handlers are restricted in one of three ways: i) permitting multi-shot continuations only in a pure setting; ii) allowing heap manipulation only for one-shot continuations; or iii) allowing multi-shot continuations with heap manipulation, but under a restricted frame rule.
This work proposes a novel calculus called Effectful Specification Logic (ESL) to support unrestricted effect handlers, where zero-shot, one-shot, and multi-shot continuations can co-exist with imperative effects and higher-order constructs. ESL captures behaviors in stages and provides precise models of invoked effects, handlers, and continuations. To show its feasibility, we prototype an automated verification system for this novel specification logic, prove its soundness, report on useful case studies, and present experimental results. With this proposal, we provide an extended specification logic capable of modeling arbitrary imperative higher-order programs with algebraic effects and continuation-enabled handlers.
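To make the multi-shot setting concrete, here is a minimal Haskell sketch (our illustration, not ESL itself) of a nondeterministic-choice effect whose handler resumes the captured continuation twice, collecting the comprehensive set of outcomes the abstract alludes to.

    -- A free-monad encoding of a single "Choose" effect; a sketch only.
    data Eff a = Pure a | Choose (Bool -> Eff a)

    instance Functor Eff where
      fmap f (Pure a)   = Pure (f a)
      fmap f (Choose k) = Choose (fmap f . k)

    instance Applicative Eff where
      pure = Pure
      Pure f   <*> x = fmap f x
      Choose k <*> x = Choose (\b -> k b <*> x)

    instance Monad Eff where
      Pure a   >>= f = f a
      Choose k >>= f = Choose (\b -> k b >>= f)

    choose :: Eff Bool
    choose = Choose Pure

    -- A multi-shot handler: it resumes the continuation k with both True
    -- and False, backtracking to enumerate every outcome.
    handleAll :: Eff a -> [a]
    handleAll (Pure a)   = [a]
    handleAll (Choose k) = handleAll (k True) ++ handleAll (k False)

    main :: IO ()
    main = print (handleAll (do x <- choose; y <- choose; pure (x, y)))
    -- [(True,True),(True,False),(False,True),(False,False)]

ESL's contribution is precisely that such multi-shot resumption may coexist with heap effects and higher-order code, which this pure sketch does not attempt.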
@Article{ICFP24p267,
author = {Yahui Song and Darius Foo and Wei-Ngan Chin},
title = {Specification and Verification for Unrestricted Algebraic Effects and Handling},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {267},
numpages = {29},
doi = {10.1145/3674656},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Functional
Synchronous Programming with Refinement Types
Jiawei Chen,
José Luiz Vargas de Mendonça,
Bereket Shimels Ayele,
Bereket Ngussie Bekele,
Shayan Jalili,
Pranjal Sharma,
Nicholas Wohlfeil,
Yicheng Zhang, and
Jean-Baptiste Jeannin
(University of Michigan at Ann Arbor, USA; Addis Ababa Institute of Technology, Ethiopia)
Cyber-Physical Systems (CPS) consist of software interacting with the physical world, such as robots, vehicles, and industrial processes. CPS are frequently responsible for the safety of lives, property, or the environment, so software correctness must be established with a high degree of certainty. To that end, simply testing a CPS is insufficient, as its interactions with the physical world may be difficult to predict, and unsafe conditions may not be immediately obvious. Formal verification can provide stronger safety guarantees, but it relies on the verified model accurately representing the real system. Bringing together verification and implementation can be challenging, as the languages typically used to implement CPS are not easy to formally verify, and languages that lend themselves well to verification often abstract away low-level implementation details. Translation between verification and implementation languages is possible, but it requires additional assurances in the translation process and increases software complexity; having both in a single language is desirable. This paper presents a formalization of MARVeLus, a CPS language that combines verification and implementation. We develop a metatheory for its synchronous refinement type system and demonstrate verified synchronous programs executing on real systems.
@Article{ICFP24p268,
author = {Jiawei Chen and José Luiz Vargas de Mendonça and Bereket Shimels Ayele and Bereket Ngussie Bekele and Shayan Jalili and Pranjal Sharma and Nicholas Wohlfeil and Yicheng Zhang and Jean-Baptiste Jeannin},
title = {Synchronous Programming with Refinement Types},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {268},
numpages = {35},
doi = {10.1145/3674657},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
Automated Verification of Higher-Order Probabilistic Programs via a Dependent Refinement Type System
Satoshi Kura and
Hiroshi Unno
(Waseda University, Japan; Tohoku University, Japan)
Verification of higher-order probabilistic programs is a challenging problem. We present a verification method that supports several quantitative properties of higher-order probabilistic programs. Extending verification methods to handle the quantitative aspects of probabilistic programs often entails extensive modifications to existing tools, reducing compatibility with advanced techniques developed for qualitative verification. In contrast, our approach requires only small modifications, facilitating the reuse of existing techniques and implementations. On the theoretical side, we propose a dependent refinement type system for a generalised higher-order fixed point logic (HFL). Combined with continuation-passing-style encodings of properties into HFL, our dependent refinement type system enables reasoning about several quantitative properties, including weakest pre-expectations, expected costs, moments of cost, and conditional weakest pre-expectations, for higher-order probabilistic programs with continuous distributions and conditioning. The soundness of our approach is proved in a general setting using a framework of categorical semantics, so that we need not repeat similar proofs for each individual problem. On the empirical side, we implement a type checker for our dependent refinement type system that reduces type checking to constraint solving. We introduce admissible and integrable predicate variables to constrained Horn clauses (CHCs) so that we can soundly reason about least fixed points and sampling from probability distributions. Our implementation demonstrates that existing CHC solvers developed for non-probabilistic programs can be extended to solvers for the extended CHCs with only modest effort. We also demonstrate the ability of our type checker to verify various concrete examples.
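As a worked example of the first of these properties (standard weakest pre-expectation calculus in the style of McIver and Morgan; our illustration, not taken from the paper): for a fair coin flip guarding an increment, the pre-expectation of the post-expectation $x$ is computed compositionally.

    \[
    \begin{aligned}
    \mathrm{wpe}[\![\mathtt{if\ flip}(p)\ \mathtt{then}\ C_1\ \mathtt{else}\ C_2]\!](f)
      &= p \cdot \mathrm{wpe}[\![C_1]\!](f) + (1-p) \cdot \mathrm{wpe}[\![C_2]\!](f) \\
    \mathrm{wpe}[\![x := x+1]\!](x) = x + 1, \qquad
    \mathrm{wpe}[\![\mathtt{skip}]\!](x) &= x \\
    \text{so, with } p = \tfrac{1}{2}:\quad
    \tfrac{1}{2}(x+1) + \tfrac{1}{2}\,x &= x + \tfrac{1}{2}.
    \end{aligned}
    \]

The paper's type system derives such quantities for higher-order programs with continuous distributions, where this hand calculation no longer suffices.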
@Article{ICFP24p269,
author = {Satoshi Kura and Hiroshi Unno},
title = {Automated Verification of Higher-Order Probabilistic Programs via a Dependent Refinement Type System},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {269},
numpages = {30},
doi = {10.1145/3674662},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable
A Coq Mechanization of JavaScript Regular Expression Semantics
Noé De Santo,
Aurèle Barrière, and
Clément Pit-Claudel
(EPFL, Switzerland)
We present an executable, proven-safe, faithful, and future-proof Coq mechanization of JavaScript regular expression (regex) matching, as specified by the latest published edition of ECMA-262 section 22.2. This is, to our knowledge, the first time that an industrial-strength regex language has been faithfully mechanized in an interactive theorem prover. We highlight interesting challenges that arose in the process (including issues of encoding, corner cases, and executability), and we document the steps that we took to ensure that the result is straightforwardly auditable and that our understanding of the specification aligns with existing implementations.
We demonstrate the usability and versatility of the mechanization through a broad collection of analyses, case studies, and experiments: we prove that JavaScript regex matching always terminates and is safe (no assertion failures); we identify subtle corner cases that led to mistakes in previous publications; we verify an optimization extracted from a state-of-the-art regex engine; we show that some classic properties described in automata textbooks and used in derivatives-based matchers do not hold in JavaScript regexes; and we demonstrate that the cost of updating the mechanization to account for changes in the original specification is reasonably low.
Our mechanization can be extracted to OCaml and JavaScript and linked with Unicode libraries to produce an executable regex engine that passes the relevant parts of the official Test262 conformance test suite.
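One classic automata-textbook property that fails is longest-match alternation: JavaScript alternation is ordered, committing to the first alternative that succeeds. Below is a minimal sketch of such a leftmost-first matcher (ours, not the paper's Coq mechanization; it omits quantifiers and backtracking across sequencing).

    -- A toy regex type and an ordered-choice matcher; a sketch only.
    data Regex = Eps | Chr Char | Alt Regex Regex | Seq Regex Regex

    -- Return the first successful match as (consumed, rest), trying
    -- alternatives strictly in order.
    match :: Regex -> String -> Maybe (String, String)
    match Eps s = Just ("", s)
    match (Chr c) (x:xs) | c == x = Just ([c], xs)
    match (Chr _) _ = Nothing
    match (Alt r1 r2) s = case match r1 s of
      Just res -> Just res          -- ordered choice: r1 wins even if r2 is longer
      Nothing  -> match r2 s
    match (Seq r1 r2) s = do
      (a, rest)  <- match r1 s
      (b, rest') <- match r2 rest
      Just (a ++ b, rest')

    -- /(a|ab)/ on "ab": leftmost-first yields "a", not the longer "ab"
    -- that POSIX longest-match semantics would pick.
    main :: IO ()
    main = print (match (Alt (Chr 'a') (Seq (Chr 'a') (Chr 'b'))) "ab")
    -- Just ("a","b")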
@Article{ICFP24p270,
author = {Noé De Santo and Aurèle Barrière and Clément Pit-Claudel},
title = {A Coq Mechanization of JavaScript Regular Expression Semantics},
journal = {Proc. ACM Program. Lang.},
volume = {8},
number = {ICFP},
articleno = {270},
numpages = {29},
doi = {10.1145/3674666},
year = {2024},
}
Publisher's Version
Published Artifact
Artifacts Available
Artifacts Reusable