ICFP 2023 – Author Index
Abel, Andreas |
ICFP '23: "A Graded Modal Dependent Type ..."
A Graded Modal Dependent Type Theory with a Universe and Erasure, Formalized
Andreas Abel, Nils Anders Danielsson, and Oskar Eriksson (Chalmers University of Technology, Sweden; University of Gothenburg, Sweden) We present a graded modal type theory, a dependent type theory with grades that can be used to enforce various properties of the code. The theory has Π-types, weak and strong Σ-types, natural numbers, an empty type, and a universe, and we also extend the theory with a unit type and graded Σ-types. The theory is parameterized by a modality, a kind of partially ordered semiring, whose elements (grades) are used to track the usage of variables in terms and types. Different modalities are possible. We focus mainly on quantitative properties, in particular erasure: with the erasure modality one can mark function arguments as erasable. The theory is fully formalized in Agda. The formalization, which uses a syntactic Kripke logical relation at its core and is based on earlier work, establishes major meta-theoretic properties such as subject reduction, consistency, normalization, and decidability of definitional equality. We also prove a substitution theorem for grade assignment, and preservation of grades under reduction. Furthermore we study an extraction function that translates terms to an untyped λ-calculus and removes erasable content, in particular function arguments with the “erasable” grade. For a certain class of modalities we prove that extraction is sound, in the sense that programs of natural number type have the same value before and after extraction. Soundness of extraction holds also for open programs, as long as all variables in the context are erasable, the context is consistent, and erased matches are not allowed for weak Σ-types. @Article{ICFP23p220, author = {Andreas Abel and Nils Anders Danielsson and Oskar Eriksson}, title = {A Graded Modal Dependent Type Theory with a Universe and Erasure, Formalized}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {220}, numpages = {35}, doi = {10.1145/3607862}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Amin, Nada |
ICFP '23: "LURK: Lambda, the Ultimate ..."
LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)
Nada Amin, John Burnham, François Garillot, Rosario Gennaro, Chhi’mèd Künzang, Daniel Rogozin, and Cameron Wong (Harvard University, USA; Lurk Lab, USA; Lurk Lab, Canada; City College of New York, USA; University College London, UK) We introduce Lurk, a new LISP-based programming language for zk-SNARKs. Traditional approaches to programming over zero-knowledge proofs require compiling the desired computation into a flat circuit, imposing serious constraints on the size and complexity of computations that can be achieved in practice. Lurk programs are instead provided as data to the universal Lurk interpreter circuit, allowing the resulting language to be Turing-complete without compromising the size of the resulting proof artifacts. Our work describes the design and theory behind Lurk, along with detailing how its implementation of content addressing can be used to sidestep many of the usual concerns of programming zero-knowledge proofs. @Article{ICFP23p197, author = {Nada Amin and John Burnham and François Garillot and Rosario Gennaro and Chhi’mèd Künzang and Daniel Rogozin and Cameron Wong}, title = {LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {197}, numpages = {16}, doi = {10.1145/3607839}, year = {2023}, } Publisher's Version Info |
|
Attard, Duncan Paul |
ICFP '23: "Special Delivery: Programming ..."
Special Delivery: Programming with Mailbox Types
Simon Fowler, Duncan Paul Attard, Franciszek Sowul, Simon J. Gay, and Phil Trinder (University of Glasgow, UK) The asynchronous and unidirectional communication model supported by mailboxes is a key reason for the success of actor languages like Erlang and Elixir for implementing reliable and scalable distributed systems. While many actors may send messages to some actor, only the actor may (selectively) receive from its mailbox. Although actors eliminate many of the issues stemming from shared memory concurrency, they remain vulnerable to communication errors such as protocol violations and deadlocks. Mailbox types are a novel behavioural type system for mailboxes first introduced for a process calculus by de’Liguoro and Padovani in 2018, which capture the contents of a mailbox as a commutative regular expression. Due to aliasing and nested evaluation contexts, moving from a process calculus to a programming language is challenging. This paper presents Pat, the first programming language design incorporating mailbox types, and describes an algorithmic type system. We make essential use of quasi-linear typing to tame some of the complexity introduced by aliasing. Our algorithmic type system is necessarily co-contextual, achieved through a novel use of backwards bidirectional typing, and we prove it sound and complete with respect to our declarative type system. We implement a prototype type checker, and use it to demonstrate the expressiveness of Pat on a factory automation case study and a series of examples from the Savina actor benchmark suite. @Article{ICFP23p191, author = {Simon Fowler and Duncan Paul Attard and Franciszek Sowul and Simon J. Gay and Phil Trinder}, title = {Special Delivery: Programming with Mailbox Types}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {191}, numpages = {30}, doi = {10.1145/3607832}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Augustsson, Lennart |
ICFP '23: "The Verse Calculus: A Core ..."
The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming
Lennart Augustsson, Joachim Breitner, Koen Claessen, Ranjit Jhala, Simon Peyton Jones, Olin Shivers, Guy L. Steele Jr., and Tim Sweeney (Epic Games, Sweden; Unaffiliated, Germany; Epic Games, USA; Epic Games, UK; Oracle Labs, USA) Functional logic languages have a rich literature, but it is tricky to give them a satisfying semantics. In this paper we describe the Verse calculus, VC, a new core calculus for deterministic functional logic programming. Our main contribution is to equip VC with a small-step rewrite semantics, so that we can reason about a VC program in the same way as one does with lambda calculus; that is, by applying successive rewrites to it. We also show that the rewrite system is confluent for well-behaved terms. @Article{ICFP23p203, author = {Lennart Augustsson and Joachim Breitner and Koen Claessen and Ranjit Jhala and Simon Peyton Jones and Olin Shivers and Guy L. Steele Jr. and Tim Sweeney}, title = {The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {203}, numpages = {31}, doi = {10.1145/3607845}, year = {2023}, } Publisher's Version |
|
Bahr, Patrick |
ICFP '23: "Asynchronous Modal FRP ..."
Asynchronous Modal FRP
Patrick Bahr and Rasmus Ejlers Møgelberg (IT University of Copenhagen, Denmark) Over the past decade, a number of languages for functional reactive programming (FRP) have been suggested, which use modal types to ensure properties like causality, productivity and lack of space leaks. So far, almost all of these languages have included a modal operator for delay on a global clock. For some applications, however, a global clock is unnatural and leads to leaky abstractions as well as inefficient implementations. While modal languages without a global clock have been proposed, no operational properties have been proved about them, yet. This paper proposes Async RaTT, a new modal language for asynchronous FRP, equipped with an operational semantics mapping complete programs to machines that take asynchronous input signals and produce output signals. The main novelty of Async RaTT is a new modality for asynchronous delay, allowing each output channel to be associated at runtime with the set of input channels it depends on, thus causing the machine to only compute new output when necessary. We prove a series of operational properties including causality, productivity and lack of space leaks. We also show that, although the set of input channels associated with an output channel can change during execution, upper bounds on these can be determined statically by the type system. @Article{ICFP23p205, author = {Patrick Bahr and Rasmus Ejlers Møgelberg}, title = {Asynchronous Modal FRP}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {205}, numpages = {35}, doi = {10.1145/3607847}, year = {2023}, } Publisher's Version
ICFP '23: "Calculating Compilers for ..."
Calculating Compilers for Concurrency
Patrick Bahr and Graham Hutton (IT University of Copenhagen, Denmark; University of Nottingham, UK) Choice trees have recently been introduced as a general structure for defining the semantics of programming languages with a wide variety of features and effects. In this article we focus on concurrent languages, and show how a codensity version of choice trees allows the semantics for such languages to be systematically transformed into compilers using equational reasoning techniques. The codensity construction is the key ingredient that enables a high-level, algebraic approach. As a case study, we calculate a compiler for a concurrent lambda calculus with channel-based communication. @Article{ICFP23p213, author = {Patrick Bahr and Graham Hutton}, title = {Calculating Compilers for Concurrency}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {213}, numpages = {28}, doi = {10.1145/3607855}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Baudon, Thaïs |
ICFP '23: "Bit-Stealing Made Legal: Compilation ..."
Bit-Stealing Made Legal: Compilation for Custom Memory Representations of Algebraic Data Types
Thaïs Baudon, Gabriel Radanne, and Laure Gonnord (University of Lyon, France; ENS Lyon, France; UCBL, France; CNRS, France; Inria, France; LIP, France; University Grenoble Alpes, France; Grenoble INP, France; LCIS, France) Initially present only in functional languages such as OCaml and Haskell, Algebraic Data Types (ADTs) have now become pervasive in mainstream languages, providing nice data abstractions and an elegant way to express functions through pattern matching. Unfortunately, ADTs remain seldom used in low-level programming. One reason is that their increased convenience comes at the cost of abstracting away the exact memory layout of values. Even Rust, which tries to optimize data layout, severely limits control over memory representation. In this article, we present a new approach to specify the data layout of rich data types based on a dual view: a source type, providing a high-level description available in the rest of the code, along with a memory type, providing full control over the memory layout. This dual view allows for better reasoning about memory layout, both for correctness, with dedicated validity criteria linking the two views, and for optimizations that manipulate the memory view. We then provide algorithms to compile constructors and destructors, including pattern matching, to their low-level memory representation. We prove our compilation algorithms correct, implement them in a tool called ribbit that compiles to LLVM IR, and show some early experimental results. @Article{ICFP23p216, author = {Thaïs Baudon and Gabriel Radanne and Laure Gonnord}, title = {Bit-Stealing Made Legal: Compilation for Custom Memory Representations of Algebraic Data Types}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {216}, numpages = {34}, doi = {10.1145/3607858}, year = {2023}, } Publisher's Version Published Artifact Info Artifacts Available Artifacts Reusable |
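The custom layouts this paper legalizes include classic "bit-stealing": packing an ADT's constructor tag into otherwise-unused low bits of a word. The following Python sketch is purely illustrative (all names invented here; the paper's ribbit tool compiles real memory layouts to LLVM IR, not Python ints) and assumes small non-negative payloads.

```python
# Illustrative bit-stealing: one low bit of a word serves as the
# constructor tag for an option-like type, the rest holds the payload.

TAG_BITS = 1   # number of stolen low bits
TAG_NONE = 1   # odd word  -> None-like constructor, no payload
TAG_SOME = 0   # even word -> Some-like constructor, payload in high bits

def pack_some(n: int) -> int:
    """Represent Some(n) as (n << 1) | 0; the low bit is the stolen tag."""
    return (n << TAG_BITS) | TAG_SOME

def pack_none() -> int:
    """Represent None as the bare tag word."""
    return TAG_NONE

def unpack(word: int):
    """Pattern-match on the packed word by inspecting the stolen bit."""
    if word & 1 == TAG_NONE:
        return ("None",)
    return ("Some", word >> TAG_BITS)
```

For example, `pack_some(21)` yields the word 42, and `unpack` recovers `("Some", 21)` from it.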
|
Biernacki, Dariusz |
ICFP '23: "A General Fine-Grained Reduction ..."
A General Fine-Grained Reduction Theory for Effect Handlers
Filip Sieczkowski, Mateusz Pyzik, and Dariusz Biernacki (Heriot-Watt University, UK; University of Wrocław, Poland) Effect handlers are a modern and increasingly popular approach to structuring computational effects in functional programming languages. However, while their traditional operational semantics is well-suited to implementation tasks, it is less ideal as a reduction theory. We therefore introduce a fine-grained reduction theory for deep effect handlers, inspired by our existing reduction theory for shift0, along with a standard reduction strategy. We relate this strategy to the traditional, non-local operational semantics via a simulation argument, and show that the reduction theory preserves observational equivalence with respect to the classical semantics of handlers, thus allowing its use as a rewriting theory for handler-equipped programming languages -- this rewriting system mostly coincides with previously studied type-based optimisations. In the process, we establish theoretical properties of our reduction theory, including confluence and standardisation theorems, adapting and extending existing techniques. Finally, we demonstrate the utility of our semantics by providing the first normalisation-by-evaluation algorithm for effect handlers, and prove its soundness and completeness. Additionally, we establish non-expressibility of the lift operator, found in some effect-handler calculi, by the other constructs. @Article{ICFP23p206, author = {Filip Sieczkowski and Mateusz Pyzik and Dariusz Biernacki}, title = {A General Fine-Grained Reduction Theory for Effect Handlers}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {206}, numpages = {30}, doi = {10.1145/3607848}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Functional |
|
Birkedal, Lars |
ICFP '23: "Verifying Reliable Network ..."
Verifying Reliable Network Components in a Distributed Separation Logic with Dependent Separation Protocols
Léon Gondelman, Jonas Kastberg Hinrichsen, Mário Pereira, Amin Timany, and Lars Birkedal (Aarhus University, Denmark; NOVA-LINCS, Portugal; NOVA School of Sciences and Technology, Portugal) We present a foundationally verified implementation of a reliable communication library for asynchronous client-server communication, and a stack of formally verified components on top thereof. Our library is implemented in an OCaml-like language on top of UDP and features characteristic traits of existing protocols, such as a simple handshaking protocol, bidirectional channels, and retransmission/acknowledgement mechanisms. We verify the library in the Aneris distributed separation logic using a novel proof pattern---dubbed the session escrow pattern---based on the existing escrow proof pattern and the so-called dependent separation protocols, which hitherto have only been used in a non-distributed concurrent setting. We demonstrate how our specification of the reliable communication library simplifies formal reasoning about applications, such as a remote procedure call library, which we in turn use to verify a lazily replicated key-value store with leader-followers and clients thereof. Our development is highly modular---each component is verified relative to specifications of the components it uses (not the implementation). All our results are formalized in the Coq proof assistant. @Article{ICFP23p217, author = {Léon Gondelman and Jonas Kastberg Hinrichsen and Mário Pereira and Amin Timany and Lars Birkedal}, title = {Verifying Reliable Network Components in a Distributed Separation Logic with Dependent Separation Protocols}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {217}, numpages = {31}, doi = {10.1145/3607859}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Bourgeat, Thomas |
ICFP '23: "Flexible Instruction-Set Semantics ..."
Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)
Thomas Bourgeat, Ian Clester, Andres Erbsen, Samuel Gruetter, Pratap Singh, Andy Wright, and Adam Chlipala (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Carnegie Mellon University, USA) Instruction sets, from families like x86 and ARM, are at the center of many ambitious formal-methods projects. Many verification, synthesis, programming, and debugging tools rely on formal semantics of instruction sets, but different tools can use semantics in rather different ways. The best-known work applying single semantics across diverse tools relies on domain-specific languages like Sail, where the language and its translation tools are specialized to the realm of instruction sets. In the context of the open RISC-V instruction-set family, we decided to explore a different approach, with semantics written in a carefully chosen subset of Haskell. This style does not depend on any new language translators, relying instead on parameterization of semantics over type-class instances. We have used a single core semantics to support testing, interactive proof, and model checking of both software and hardware, demonstrating that monads and the ability to abstract over them using type classes can support pleasant prototyping of ISA semantics. @Article{ICFP23p192, author = {Thomas Bourgeat and Ian Clester and Andres Erbsen and Samuel Gruetter and Pratap Singh and Andy Wright and Adam Chlipala}, title = {Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {192}, numpages = {17}, doi = {10.1145/3607833}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
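The paper's key move is writing one instruction semantics against an abstract interface and reusing it with different instances. A rough Python analogue of that parameterization (invented names; the paper uses Haskell type classes over monads, not Python classes) looks like this:

```python
# One shared semantics for an ADD instruction, written against an abstract
# Machine interface and reused with two "instances": a plain simulator and
# a tracing variant, loosely echoing the paper's type-class parameterization.

from abc import ABC, abstractmethod

class Machine(ABC):
    @abstractmethod
    def get_reg(self, r: int) -> int: ...
    @abstractmethod
    def set_reg(self, r: int, v: int) -> None: ...

def exec_add(m: Machine, rd: int, rs1: int, rs2: int) -> None:
    """The semantics of ADD, defined once for any Machine instance."""
    m.set_reg(rd, m.get_reg(rs1) + m.get_reg(rs2))

class Simulator(Machine):
    """Concrete instance: a 32-entry register file for testing."""
    def __init__(self):
        self.regs = [0] * 32
    def get_reg(self, r):
        return self.regs[r]
    def set_reg(self, r, v):
        self.regs[r] = v

class Tracer(Simulator):
    """Alternative instance: records every register write for debugging."""
    def __init__(self):
        super().__init__()
        self.trace = []
    def set_reg(self, r, v):
        self.trace.append((r, v))
        super().set_reg(r, v)
```

The same `exec_add` then drives testing, tracing, or (in the paper's setting) interactive proof, without any change to the semantics itself.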
|
Brachthäuser, Jonathan Immanuel |
ICFP '23: "With or Without You: Programming ..."
With or Without You: Programming with Effect Exclusion
Matthew Lutze, Magnus Madsen, Philipp Schuster, and Jonathan Immanuel Brachthäuser (Aarhus University, Denmark; University of Tübingen, Germany) Type and effect systems have been successfully used to statically reason about effects in many different domains, including region-based memory management, exceptions, and algebraic effects and handlers. Such systems’ soundness is often stated in terms of the absence of effects. Yet, existing systems only admit indirect reasoning about the absence of effects. This is further complicated by effect polymorphism which allows function signatures to abstract over arbitrary, unknown sets of effects. We present a new type and effect system with effect polymorphism as well as union, intersection, and complement effects. The effect system allows us to express effect exclusion as a new class of effect polymorphic functions: those that permit any effects except those in a specific set. This way, we equip programmers with the means to directly reason about the absence of effects. Our type and effect system builds on the Hindley-Milner type system, supports effect polymorphism, and preserves principal types modulo Boolean equivalence. In addition, a suitable extension of Algorithm W with Boolean unification on the algebra of sets enables complete type and effect inference. We formalize these notions in the λ∁ calculus. We prove the standard progress and preservation theorems as well as a non-standard effect safety theorem: no excluded effect is ever performed. We implement the type and effect system as an extension of the Flix programming language. We conduct a case study of open source projects identifying 59 program fragments that require effect exclusion for correctness. To demonstrate the usefulness of the proposed type and effect system, we recast these program fragments into our extension of Flix. @Article{ICFP23p204, author = {Matthew Lutze and Magnus Madsen and Philipp Schuster and Jonathan Immanuel Brachthäuser}, title = {With or Without You: Programming with Effect Exclusion}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {204}, numpages = {28}, doi = {10.1145/3607846}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
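The complement effects at the heart of this paper can be pictured with a tiny set-based model (an invented toy, not the paper's λ∁ calculus or the Flix implementation): a complemented set stands for "any effect except these", so a signature can directly exclude, say, IO.

```python
# Toy model of effect sets with complements: a Complement value denotes
# the cofinite set "all effects minus `excluded`", letting a signature
# express exclusion such as "any effects except IO".

from dataclasses import dataclass

@dataclass(frozen=True)
class Complement:
    excluded: frozenset  # denotes: every effect NOT in `excluded`

def subset(sub: frozenset, sup) -> bool:
    """Check that the concrete effects `sub` are permitted by `sup`,
    where `sup` is either a finite frozenset or a Complement."""
    if isinstance(sup, Complement):
        # Allowed iff none of the excluded effects actually occur.
        return sub.isdisjoint(sup.excluded)
    return sub <= sup

# "Runs any effects except IO" -- the shape of an excluding signature.
pure_except_io = Complement(frozenset({"IO"}))
```

Here `subset(frozenset({"Throw"}), pure_except_io)` holds, while a computation performing `IO` is rejected, mirroring the effect safety theorem that no excluded effect is ever performed.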
|
Breitner, Joachim |
ICFP '23: "More Fixpoints! (Functional ..."
More Fixpoints! (Functional Pearl)
Joachim Breitner (Unaffiliated, Germany) Haskell’s laziness allows the programmer to solve some problems naturally and declaratively via recursive equations. Unfortunately, if the input is “too recursive”, these very elegant idioms can fall into the dreaded black hole, and the programmer has to resort to more pedestrian approaches. It does not have to be that way: We built variants of common pure data structures (Booleans, sets) where recursive definitions are productive. Internally, the infamous unsafePerformIO is at work, but the user only sees a beautiful and pure API, and their pretty recursive idioms – magically – work again. @Article{ICFP23p211, author = {Joachim Breitner}, title = {More Fixpoints! (Functional Pearl)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {211}, numpages = {25}, doi = {10.1145/3607853}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable
ICFP '23: "The Verse Calculus: A Core ..."
The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming
Lennart Augustsson, Joachim Breitner, Koen Claessen, Ranjit Jhala, Simon Peyton Jones, Olin Shivers, Guy L. Steele Jr., and Tim Sweeney (Epic Games, Sweden; Unaffiliated, Germany; Epic Games, USA; Epic Games, UK; Oracle Labs, USA) Functional logic languages have a rich literature, but it is tricky to give them a satisfying semantics. In this paper we describe the Verse calculus, VC, a new core calculus for deterministic functional logic programming. Our main contribution is to equip VC with a small-step rewrite semantics, so that we can reason about a VC program in the same way as one does with lambda calculus; that is, by applying successive rewrites to it. We also show that the rewrite system is confluent for well-behaved terms. @Article{ICFP23p203, author = {Lennart Augustsson and Joachim Breitner and Koen Claessen and Ranjit Jhala and Simon Peyton Jones and Olin Shivers and Guy L. Steele Jr. and Tim Sweeney}, title = {The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {203}, numpages = {31}, doi = {10.1145/3607845}, year = {2023}, } Publisher's Version |
|
Burnham, John |
ICFP '23: "LURK: Lambda, the Ultimate ..."
LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)
Nada Amin, John Burnham, François Garillot, Rosario Gennaro, Chhi’mèd Künzang, Daniel Rogozin, and Cameron Wong (Harvard University, USA; Lurk Lab, USA; Lurk Lab, Canada; City College of New York, USA; University College London, UK) We introduce Lurk, a new LISP-based programming language for zk-SNARKs. Traditional approaches to programming over zero-knowledge proofs require compiling the desired computation into a flat circuit, imposing serious constraints on the size and complexity of computations that can be achieved in practice. Lurk programs are instead provided as data to the universal Lurk interpreter circuit, allowing the resulting language to be Turing-complete without compromising the size of the resulting proof artifacts. Our work describes the design and theory behind Lurk, along with detailing how its implementation of content addressing can be used to sidestep many of the usual concerns of programming zero-knowledge proofs. @Article{ICFP23p197, author = {Nada Amin and John Burnham and François Garillot and Rosario Gennaro and Chhi’mèd Künzang and Daniel Rogozin and Cameron Wong}, title = {LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {197}, numpages = {16}, doi = {10.1145/3607839}, year = {2023}, } Publisher's Version Info |
|
Castagna, Giuseppe |
ICFP '23: "Typing Records, Maps, and ..."
Typing Records, Maps, and Structs
Giuseppe Castagna (CNRS, France; Université Paris Cité, France) Records are finite functions from keys to values. In this work we focus on two main distinct usages of records: structs and maps. The former associate different keys to values of different types, they are accessed by providing nominal keys, and trying to access a non-existent key yields an error. The latter associate all keys to values of the same type, they are accessed by providing expressions that compute a key, and trying to access a non-existent key usually yields some default value such as Null or nil. Here, we propose a type theory that covers both kinds of usage, where record types may associate to different types either single keys (as for structs) or sets of keys (as for maps) and where the same record expression can be accessed and used both in the struct-like style and in the map-like style we just described. Since we target dynamically-typed languages our type theory includes union and intersection types, characterized by a subtyping relation. We define the subtyping relation for our record types via a semantic interpretation and derive the decomposition rules to decide it, define a backtracking-free subtyping algorithm that we prove to be correct, and provide a canonical representation for record types that is used to define various type operators needed to type record operations such as selection, concatenation, and field deletion. @Article{ICFP23p196, author = {Giuseppe Castagna}, title = {Typing Records, Maps, and Structs}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {196}, numpages = {44}, doi = {10.1145/3607838}, year = {2023}, } Publisher's Version |
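The struct/map distinction the abstract draws can be made concrete with a small Python toy (an invented rendering, not the paper's type theory): a struct-like type assigns each named key its own type and makes absent keys an error, while a map-like type gives one type to all keys and a default on missing ones.

```python
# Toy contrast between the two record usages: struct-style access (nominal
# keys, per-key types, missing key = error) versus map-style access
# (computed keys, one value type, missing key = default).

STRUCT_TYPE = {"name": str, "age": int}   # per-key types, closed set of fields
MAP_TYPE = (str, int, None)               # (key type, value type, default)

def struct_get(record: dict, key: str):
    """Struct-style: the key must be a declared field, else an error."""
    if key not in STRUCT_TYPE:
        raise KeyError(f"no field {key!r} in struct type")
    return record[key]

def map_get(record: dict, key):
    """Map-style: any well-typed key is allowed; absence yields the default."""
    key_ty, _val_ty, default = MAP_TYPE
    assert isinstance(key, key_ty), "map key has the wrong type"
    return record.get(key, default)
```

The paper's contribution is a single type theory in which one record value can be accessed in both styles, with subtyping over union and intersection types deciding which accesses are safe.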
|
Chlipala, Adam |
ICFP '23: "Flexible Instruction-Set Semantics ..."
Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)
Thomas Bourgeat, Ian Clester, Andres Erbsen, Samuel Gruetter, Pratap Singh, Andy Wright, and Adam Chlipala (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Carnegie Mellon University, USA) Instruction sets, from families like x86 and ARM, are at the center of many ambitious formal-methods projects. Many verification, synthesis, programming, and debugging tools rely on formal semantics of instruction sets, but different tools can use semantics in rather different ways. The best-known work applying single semantics across diverse tools relies on domain-specific languages like Sail, where the language and its translation tools are specialized to the realm of instruction sets. In the context of the open RISC-V instruction-set family, we decided to explore a different approach, with semantics written in a carefully chosen subset of Haskell. This style does not depend on any new language translators, relying instead on parameterization of semantics over type-class instances. We have used a single core semantics to support testing, interactive proof, and model checking of both software and hardware, demonstrating that monads and the ability to abstract over them using type classes can support pleasant prototyping of ISA semantics. @Article{ICFP23p192, author = {Thomas Bourgeat and Ian Clester and Andres Erbsen and Samuel Gruetter and Pratap Singh and Andy Wright and Adam Chlipala}, title = {Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {192}, numpages = {17}, doi = {10.1145/3607833}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Claessen, Koen |
ICFP '23: "The Verse Calculus: A Core ..."
The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming
Lennart Augustsson, Joachim Breitner, Koen Claessen, Ranjit Jhala, Simon Peyton Jones, Olin Shivers, Guy L. Steele Jr., and Tim Sweeney (Epic Games, Sweden; Unaffiliated, Germany; Epic Games, USA; Epic Games, UK; Oracle Labs, USA) Functional logic languages have a rich literature, but it is tricky to give them a satisfying semantics. In this paper we describe the Verse calculus, VC, a new core calculus for deterministic functional logic programming. Our main contribution is to equip VC with a small-step rewrite semantics, so that we can reason about a VC program in the same way as one does with lambda calculus; that is, by applying successive rewrites to it. We also show that the rewrite system is confluent for well-behaved terms. @Article{ICFP23p203, author = {Lennart Augustsson and Joachim Breitner and Koen Claessen and Ranjit Jhala and Simon Peyton Jones and Olin Shivers and Guy L. Steele Jr. and Tim Sweeney}, title = {The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {203}, numpages = {31}, doi = {10.1145/3607845}, year = {2023}, } Publisher's Version |
|
Clester, Ian |
ICFP '23: "Flexible Instruction-Set Semantics ..."
Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)
Thomas Bourgeat, Ian Clester, Andres Erbsen, Samuel Gruetter, Pratap Singh, Andy Wright, and Adam Chlipala (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Carnegie Mellon University, USA) Instruction sets, from families like x86 and ARM, are at the center of many ambitious formal-methods projects. Many verification, synthesis, programming, and debugging tools rely on formal semantics of instruction sets, but different tools can use semantics in rather different ways. The best-known work applying single semantics across diverse tools relies on domain-specific languages like Sail, where the language and its translation tools are specialized to the realm of instruction sets. In the context of the open RISC-V instruction-set family, we decided to explore a different approach, with semantics written in a carefully chosen subset of Haskell. This style does not depend on any new language translators, relying instead on parameterization of semantics over type-class instances. We have used a single core semantics to support testing, interactive proof, and model checking of both software and hardware, demonstrating that monads and the ability to abstract over them using type classes can support pleasant prototyping of ISA semantics. @Article{ICFP23p192, author = {Thomas Bourgeat and Ian Clester and Andres Erbsen and Samuel Gruetter and Pratap Singh and Andy Wright and Adam Chlipala}, title = {Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {192}, numpages = {17}, doi = {10.1145/3607833}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Danielsson, Nils Anders |
ICFP '23: "A Graded Modal Dependent Type ..."
A Graded Modal Dependent Type Theory with a Universe and Erasure, Formalized
Andreas Abel, Nils Anders Danielsson, and Oskar Eriksson (Chalmers University of Technology, Sweden; University of Gothenburg, Sweden) We present a graded modal type theory, a dependent type theory with grades that can be used to enforce various properties of the code. The theory has Π-types, weak and strong Σ-types, natural numbers, an empty type, and a universe, and we also extend the theory with a unit type and graded Σ-types. The theory is parameterized by a modality, a kind of partially ordered semiring, whose elements (grades) are used to track the usage of variables in terms and types. Different modalities are possible. We focus mainly on quantitative properties, in particular erasure: with the erasure modality one can mark function arguments as erasable. The theory is fully formalized in Agda. The formalization, which uses a syntactic Kripke logical relation at its core and is based on earlier work, establishes major meta-theoretic properties such as subject reduction, consistency, normalization, and decidability of definitional equality. We also prove a substitution theorem for grade assignment, and preservation of grades under reduction. Furthermore we study an extraction function that translates terms to an untyped λ-calculus and removes erasable content, in particular function arguments with the “erasable” grade. For a certain class of modalities we prove that extraction is sound, in the sense that programs of natural number type have the same value before and after extraction. Soundness of extraction holds also for open programs, as long as all variables in the context are erasable, the context is consistent, and erased matches are not allowed for weak Σ-types. @Article{ICFP23p220, author = {Andreas Abel and Nils Anders Danielsson and Oskar Eriksson}, title = {A Graded Modal Dependent Type Theory with a Universe and Erasure, Formalized}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {220}, numpages = {35}, doi = {10.1145/3607862}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
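The "partially ordered semiring" of grades mentioned in the abstract can be made concrete. Below is a minimal Haskell sketch of the erasure modality with two grades — `Zero` (erasable) and `Omega` (unrestricted); the names and the Haskell rendering are ours, not the paper's Agda formalization, and the partial order is omitted since conventions differ between presentations.

```haskell
-- Illustrative two-element grade semiring for erasure.
-- Zero marks erasable usage; Omega marks unrestricted usage.
data Grade = Zero | Omega
  deriving (Eq, Show)

-- Addition combines the usage of a variable across subterms:
-- a variable is erasable only if it is erasable everywhere.
gadd :: Grade -> Grade -> Grade
gadd Zero Zero = Zero
gadd _    _    = Omega

-- Multiplication scales usage under a graded binder:
-- anything used inside an erasable position stays erasable.
gmul :: Grade -> Grade -> Grade
gmul Zero _    = Zero
gmul _    Zero = Zero
gmul _    _    = Omega
```

For example, a variable occurring in two branches at grades `Zero` and `Omega` gets combined grade `gadd Zero Omega = Omega`, so it cannot be erased.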
|
Dimoulas, Christos |
ICFP '23: "How to Evaluate Blame for ..."
How to Evaluate Blame for Gradual Types, Part 2
Lukas Lazarek, Ben Greenman, Matthias Felleisen, and Christos Dimoulas (Northwestern University, USA; Brown University, USA; Northeastern University, USA) Equipping an existing programming language with a gradual type system requires two major steps. The first and most visible one in academia is to add a notation for types and a type checking apparatus. The second, highly practical one is to provide a type veneer for the large number of existing untyped libraries; doing so enables typed components to import pieces of functionality and get their uses type-checked, without any changes to the libraries. When programmers create such typed veneers for libraries, they make mistakes that persist and cause trouble. The question is whether the academically investigated run-time checks for gradual type systems assist programmers with debugging such mistakes. This paper provides a first, surprising answer to this question via a rational-programmer investigation: run-time checks alone are typically less helpful than the safety checks of the underlying language. Combining Natural run-time checks with blame, however, provides significantly superior debugging hints. @Article{ICFP23p194, author = {Lukas Lazarek and Ben Greenman and Matthias Felleisen and Christos Dimoulas}, title = {How to Evaluate Blame for Gradual Types, Part 2}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {194}, numpages = {28}, doi = {10.1145/3607836}, year = {2023}, } Publisher's Version |
|
Dockins, Robert |
ICFP '23: "Trustworthy Runtime Verification ..."
Trustworthy Runtime Verification via Bisimulation (Experience Report)
Ryan G. Scott, Mike Dodds, Ivan Perez, Alwyn E. Goodloe, and Robert Dockins (Galois, USA; KBR @ NASA Ames Research Center, USA; NASA Ames Research Center, USA; Amazon, USA) When runtime verification is used to monitor safety-critical systems, it is essential that monitoring code behaves correctly. The Copilot runtime verification framework pursues this goal by automatically generating C monitor programs from a high-level DSL embedded in Haskell. In safety-critical domains, every piece of deployed code must be accompanied by an assurance argument that is convincing to human auditors. However, it is difficult for auditors to determine with confidence that a compiled monitor cannot crash and implements the behavior required by the Copilot semantics. In this paper we describe CopilotVerifier, which runs alongside the Copilot compiler, generating a proof of correctness for the compiled output. The proof establishes that a given Copilot monitor and its compiled form produce equivalent outputs on equivalent inputs, and that they either crash in identical circumstances or cannot crash. The proof takes the form of a bisimulation broken down into a set of verification conditions. We leverage two pieces of SMT-backed technology: the Crucible symbolic execution library for LLVM and the What4 solver interface library. Our results demonstrate that dramatically increased compiler assurance can be achieved at moderate cost by building on existing tools. This paves the way to our ultimate goal of generating formal assurance arguments that are convincing to human auditors. @Article{ICFP23p199, author = {Ryan G. Scott and Mike Dodds and Ivan Perez and Alwyn E. Goodloe and Robert Dockins}, title = {Trustworthy Runtime Verification via Bisimulation (Experience Report)}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {199}, numpages = {17}, doi = {10.1145/3607841}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Dodds, Mike |
ICFP '23: "Trustworthy Runtime Verification ..."
Trustworthy Runtime Verification via Bisimulation (Experience Report)
Ryan G. Scott, Mike Dodds, Ivan Perez, Alwyn E. Goodloe, and Robert Dockins (Galois, USA; KBR @ NASA Ames Research Center, USA; NASA Ames Research Center, USA; Amazon, USA) When runtime verification is used to monitor safety-critical systems, it is essential that monitoring code behaves correctly. The Copilot runtime verification framework pursues this goal by automatically generating C monitor programs from a high-level DSL embedded in Haskell. In safety-critical domains, every piece of deployed code must be accompanied by an assurance argument that is convincing to human auditors. However, it is difficult for auditors to determine with confidence that a compiled monitor cannot crash and implements the behavior required by the Copilot semantics. In this paper we describe CopilotVerifier, which runs alongside the Copilot compiler, generating a proof of correctness for the compiled output. The proof establishes that a given Copilot monitor and its compiled form produce equivalent outputs on equivalent inputs, and that they either crash in identical circumstances or cannot crash. The proof takes the form of a bisimulation broken down into a set of verification conditions. We leverage two pieces of SMT-backed technology: the Crucible symbolic execution library for LLVM and the What4 solver interface library. Our results demonstrate that dramatically increased compiler assurance can be achieved at moderate cost by building on existing tools. This paves the way to our ultimate goal of generating formal assurance arguments that are convincing to human auditors. @Article{ICFP23p199, author = {Ryan G. Scott and Mike Dodds and Ivan Perez and Alwyn E. Goodloe and Robert Dockins}, title = {Trustworthy Runtime Verification via Bisimulation (Experience Report)}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {199}, numpages = {17}, doi = {10.1145/3607841}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Elliott, Conal |
ICFP '23: "Timely Computation ..."
Timely Computation
Conal Elliott (Independent, USA) This paper addresses the question “what is a digital circuit?” in relation to the fundamentally analog nature of actual (physical) circuits. A simple informal definition is given and then formalized in the proof assistant Agda. At the heart of this definition is the timely embedding of discrete information in temporally continuous signals. Once this embedding is defined (in constructive logic, i.e., type theory), it is extended in a generic fashion from one signal to many and from simple boolean operations (logic gates) to arbitrarily sophisticated sequential and parallel compositions, i.e., to computational circuits. Rather than constructing circuits and then trying to prove their correctness, a compositionally correct methodology maintains specification, implementation, timing, and correctness proofs at every step. Compositionality of each aspect and of their combination is supported by a single, shared algebraic vocabulary and related by homomorphisms. After formally defining and proving these notions, a few key transformations are applied to reveal the linearity of circuit timing (over a suitable semiring), thus enabling practical, modular, and fully verified timing analysis as linear maps over higher-dimensional time intervals. An emphasis throughout the paper is simplicity and generality of specification, minimizing circuit-specific definitions and proofs while highlighting a broadly applicable methodology of scalable, compositionally correct engineering through simple denotations and homomorphisms. @Article{ICFP23p219, author = {Conal Elliott}, title = {Timely Computation}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {219}, numpages = {25}, doi = {10.1145/3607861}, year = {2023}, } Publisher's Version |
|
Erbsen, Andres |
ICFP '23: "Flexible Instruction-Set Semantics ..."
Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)
Thomas Bourgeat, Ian Clester, Andres Erbsen, Samuel Gruetter, Pratap Singh, Andy Wright, and Adam Chlipala (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Carnegie Mellon University, USA) Instruction sets, from families like x86 and ARM, are at the center of many ambitious formal-methods projects. Many verification, synthesis, programming, and debugging tools rely on formal semantics of instruction sets, but different tools can use semantics in rather different ways. The best-known work applying single semantics across diverse tools relies on domain-specific languages like Sail, where the language and its translation tools are specialized to the realm of instruction sets. In the context of the open RISC-V instruction-set family, we decided to explore a different approach, with semantics written in a carefully chosen subset of Haskell. This style does not depend on any new language translators, relying instead on parameterization of semantics over type-class instances. We have used a single core semantics to support testing, interactive proof, and model checking of both software and hardware, demonstrating that monads and the ability to abstract over them using type classes can support pleasant prototyping of ISA semantics. @Article{ICFP23p192, author = {Thomas Bourgeat and Ian Clester and Andres Erbsen and Samuel Gruetter and Pratap Singh and Andy Wright and Adam Chlipala}, title = {Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {192}, numpages = {17}, doi = {10.1145/3607833}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
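The core idea — one semantics written against an abstract monad, instantiated per tool via type-class instances — can be sketched as follows. This is a hedged illustration, not the paper's RISC-V semantics: `MachineM`, `getReg`, `setReg`, and `execAdd` are made-up names, and the register file is a toy list.

```haskell
-- The semantics is written once, against an abstract machine monad.
class Monad m => MachineM m where
  getReg :: Int -> m Int
  setReg :: Int -> Int -> m ()

-- Shared semantics for a toy "add rd, rs1, rs2" instruction.
execAdd :: MachineM m => Int -> Int -> Int -> m ()
execAdd rd rs1 rs2 = do
  a <- getReg rs1
  b <- getReg rs2
  setReg rd (a + b)

-- One instantiation: a concrete simulator over a list register file.
-- Other instances (symbolic execution, tracing) reuse execAdd unchanged.
newtype Sim a = Sim { runSim :: [Int] -> (a, [Int]) }

instance Functor Sim where
  fmap f (Sim g) = Sim (\s -> let (a, s') = g s in (f a, s'))
instance Applicative Sim where
  pure a = Sim (\s -> (a, s))
  Sim f <*> Sim x = Sim (\s -> let (g, s1) = f s
                                   (a, s2) = x s1
                               in (g a, s2))
instance Monad Sim where
  Sim g >>= k = Sim (\s -> let (a, s') = g s in runSim (k a) s')

instance MachineM Sim where
  getReg i   = Sim (\rf -> (rf !! i, rf))
  setReg i v = Sim (\rf -> ((), take i rf ++ [v] ++ drop (i + 1) rf))
```

Running `execAdd 0 1 2` on register file `[0,2,3]` writes `5` into register 0 without the instruction semantics knowing anything about `Sim`.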
|
Erdweg, Sebastian |
ICFP '23: "Combinator-Based Fixpoint ..."
Combinator-Based Fixpoint Algorithms for Big-Step Abstract Interpreters
Sven Keidel, Sebastian Erdweg, and Tobias Hombücher (TU Darmstadt, Germany; JGU Mainz, Germany) Big-step abstract interpreters are an approach to build static analyzers based on big-step interpretation. While big-step interpretation provides a number of benefits for the definition of an analysis, it also requires particularly complicated fixpoint algorithms because the analysis definition is a recursive function whose termination is uncertain. This is in contrast to other analysis approaches, such as small-step reduction, abstract machines, or graph reachability, where the analysis essentially forms a finite transition system between widened analysis states. We show how to systematically develop sophisticated fixpoint algorithms for big-step abstract interpreters and how to ensure their soundness. Our approach is based on small and reusable fixpoint combinators that can be composed to yield fixpoint algorithms. For example, these combinators describe the order in which the program is analyzed, how deep recursive functions are unfolded and loops unrolled, or they record auxiliary data such as a (context-sensitive) call graph. Importantly, each combinator can be developed separately, reused across analyses, and can be verified sound independently. Consequently, analysis developers can freely compose combinators to obtain sound fixpoint algorithms that work best for their use case. We provide a formal metatheory that guarantees a fixpoint algorithm is sound if it is composed only from sound combinators. We experimentally validate our combinator-based approach by describing sophisticated fixpoint algorithms for analyses of Stratego, Scheme, and WebAssembly. @Article{ICFP23p221, author = {Sven Keidel and Sebastian Erdweg and Tobias Hombücher}, title = {Combinator-Based Fixpoint Algorithms for Big-Step Abstract Interpreters}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {221}, numpages = {27}, doi = {10.1145/3607863}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
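The flavor of composable fixpoint machinery can be illustrated with a toy Haskell sketch — ours, not the paper's implementation: a reusable Kleene-iteration solver plus an unrolling combinator that can be composed around any step function.

```haskell
import qualified Data.Set as Set

-- Reusable solver: iterate a monotone step function to a fixpoint.
solve :: Eq s => (s -> s) -> s -> s
solve f s = let s' = f s in if s' == s then s else solve f s'

-- A composable combinator: unroll the step k times per iteration
-- (a simple analogue of the loop-unrolling combinators described above).
unroll :: Int -> (s -> s) -> (s -> s)
unroll k f = foldr (.) id (replicate k f)

-- Toy analysis: abstract reachability for the loop  x := (x + 2) `mod` 10,
-- starting from x = 0.
step :: Set.Set Int -> Set.Set Int
step s = Set.insert 0 (Set.map (\x -> (x + 2) `mod` 10) s)

reach :: Set.Set Int
reach = solve step (Set.singleton 0)
```

Here `solve step` and `solve (unroll 2 step)` compute the same fixpoint `{0,2,4,6,8}`; swapping or stacking combinators changes how the fixpoint is reached, not (for sound combinators) what it is.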
|
Eriksson, Oskar |
ICFP '23: "A Graded Modal Dependent Type ..."
A Graded Modal Dependent Type Theory with a Universe and Erasure, Formalized
Andreas Abel, Nils Anders Danielsson, and Oskar Eriksson (Chalmers University of Technology, Sweden; University of Gothenburg, Sweden) We present a graded modal type theory, a dependent type theory with grades that can be used to enforce various properties of the code. The theory has Π-types, weak and strong Σ-types, natural numbers, an empty type, and a universe, and we also extend the theory with a unit type and graded Σ-types. The theory is parameterized by a modality, a kind of partially ordered semiring, whose elements (grades) are used to track the usage of variables in terms and types. Different modalities are possible. We focus mainly on quantitative properties, in particular erasure: with the erasure modality one can mark function arguments as erasable. The theory is fully formalized in Agda. The formalization, which uses a syntactic Kripke logical relation at its core and is based on earlier work, establishes major meta-theoretic properties such as subject reduction, consistency, normalization, and decidability of definitional equality. We also prove a substitution theorem for grade assignment, and preservation of grades under reduction. Furthermore we study an extraction function that translates terms to an untyped λ-calculus and removes erasable content, in particular function arguments with the “erasable” grade. For a certain class of modalities we prove that extraction is sound, in the sense that programs of natural number type have the same value before and after extraction. Soundness of extraction holds also for open programs, as long as all variables in the context are erasable, the context is consistent, and erased matches are not allowed for weak Σ-types. @Article{ICFP23p220, author = {Andreas Abel and Nils Anders Danielsson and Oskar Eriksson}, title = {A Graded Modal Dependent Type Theory with a Universe and Erasure, Formalized}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {220}, numpages = {35}, doi = {10.1145/3607862}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Felleisen, Matthias |
ICFP '23: "How to Evaluate Blame for ..."
How to Evaluate Blame for Gradual Types, Part 2
Lukas Lazarek, Ben Greenman, Matthias Felleisen, and Christos Dimoulas (Northwestern University, USA; Brown University, USA; Northeastern University, USA) Equipping an existing programming language with a gradual type system requires two major steps. The first and most visible one in academia is to add a notation for types and a type checking apparatus. The second, highly practical one is to provide a type veneer for the large number of existing untyped libraries; doing so enables typed components to import pieces of functionality and get their uses type-checked, without any changes to the libraries. When programmers create such typed veneers for libraries, they make mistakes that persist and cause trouble. The question is whether the academically investigated run-time checks for gradual type systems assist programmers with debugging such mistakes. This paper provides a first, surprising answer to this question via a rational-programmer investigation: run-time checks alone are typically less helpful than the safety checks of the underlying language. Combining Natural run-time checks with blame, however, provides significantly superior debugging hints. @Article{ICFP23p194, author = {Lukas Lazarek and Ben Greenman and Matthias Felleisen and Christos Dimoulas}, title = {How to Evaluate Blame for Gradual Types, Part 2}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {194}, numpages = {28}, doi = {10.1145/3607836}, year = {2023}, } Publisher's Version |
|
Fisler, Kathi |
ICFP '23: "What Happens When Students ..."
What Happens When Students Switch (Functional) Languages (Experience Report)
Kuang-Chen Lu, Shriram Krishnamurthi, Kathi Fisler, and Ethel Tshukudu (Brown University, USA; University of Botswana, Botswana) When novice programming students already know one programming language and have to learn another, what issues do they run into? We specifically focus on one or both languages being functional, varying along two axes: syntax and semantics. We report on problems, especially persistent ones. This work can be of immediate value to educators and also sets up avenues for future research. @Article{ICFP23p215, author = {Kuang-Chen Lu and Shriram Krishnamurthi and Kathi Fisler and Ethel Tshukudu}, title = {What Happens When Students Switch (Functional) Languages (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {215}, numpages = {17}, doi = {10.1145/3607857}, year = {2023}, } Publisher's Version Archive submitted (2.7 MB) |
|
Fowler, Simon |
ICFP '23: "Special Delivery: Programming ..."
Special Delivery: Programming with Mailbox Types
Simon Fowler, Duncan Paul Attard, Franciszek Sowul, Simon J. Gay, and Phil Trinder (University of Glasgow, UK) The asynchronous and unidirectional communication model supported by mailboxes is a key reason for the success of actor languages like Erlang and Elixir for implementing reliable and scalable distributed systems. While many actors may send messages to some actor, only the actor may (selectively) receive from its mailbox. Although actors eliminate many of the issues stemming from shared memory concurrency, they remain vulnerable to communication errors such as protocol violations and deadlocks. Mailbox types are a novel behavioural type system for mailboxes first introduced for a process calculus by de’Liguoro and Padovani in 2018, which capture the contents of a mailbox as a commutative regular expression. Due to aliasing and nested evaluation contexts, moving from a process calculus to a programming language is challenging. This paper presents Pat, the first programming language design incorporating mailbox types, and describes an algorithmic type system. We make essential use of quasi-linear typing to tame some of the complexity introduced by aliasing. Our algorithmic type system is necessarily co-contextual, achieved through a novel use of backwards bidirectional typing, and we prove it sound and complete with respect to our declarative type system. We implement a prototype type checker, and use it to demonstrate the expressiveness of Pat on a factory automation case study and a series of examples from the Savina actor benchmark suite. @Article{ICFP23p191, author = {Simon Fowler and Duncan Paul Attard and Franciszek Sowul and Simon J. Gay and Phil Trinder}, title = {Special Delivery: Programming with Mailbox Types}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {191}, numpages = {30}, doi = {10.1145/3607832}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
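Because the regular expressions describing mailbox contents are commutative, only message multiplicities matter, so a mailbox type can be checked against a multiset. The following Haskell toy — our own simplification, not Pat's type system, covering only patterns built from "exactly one M" and "any number of M" — illustrates that view.

```haskell
import qualified Data.Map as Map

type Tag = String
type Mailbox = Map.Map Tag Int          -- multiset of pending messages

-- A fragment of commutative regular expressions over message tags.
data Pattern = One Tag                  -- exactly one message with this tag
             | Many Tag                 -- any number of messages with this tag
             | Both Pattern Pattern     -- commutative product

-- Required exact multiplicities demanded by One constructors.
ones :: Pattern -> Map.Map Tag Int
ones (One t)    = Map.singleton t 1
ones (Many _)   = Map.empty
ones (Both p q) = Map.unionWith (+) (ones p) (ones q)

-- Tags allowed at any multiplicity.
manys :: Pattern -> [Tag]
manys (One _)    = []
manys (Many t)   = [t]
manys (Both p q) = manys p ++ manys q

-- A mailbox matches if every tag meets its required multiplicity and
-- no unexpected tags are present.
matches :: Mailbox -> Pattern -> Bool
matches mb p = all ok (Map.keys (Map.unionWith (+) mb req))
  where
    req  = ones p
    star = manys p
    ok t =
      let have = Map.findWithDefault 0 t mb
          need = Map.findWithDefault 0 t req
      in if t `elem` star then have >= need else have == need
```

For instance, a mailbox holding one `Reply` and three `Log` messages matches `Both (One "Reply") (Many "Log")`, while a mailbox holding two `Reply` messages does not.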
|
Frohlich, Samantha |
ICFP '23: "Embedding by Unembedding ..."
Embedding by Unembedding
Kazutaka Matsuda, Samantha Frohlich, Meng Wang, and Nicolas Wu (Tohoku University, Japan; University of Bristol, UK; Imperial College London, UK) Embedding is a language development technique that implements the object language as a library in a host language. There are many advantages of the approach, including being lightweight and the ability to inherit features of the host language. A notable example is the technique of HOAS, which makes crucial use of higher-order functions to represent abstract syntax trees with binders. Despite its popularity, HOAS has its limitations. We observe that HOAS struggles with semantic domains that cannot be naturally expressed as functions, particularly when open expressions are involved. Prominent examples of this include incremental computation and reversible/bidirectional languages. In this paper, we pin-point the challenge faced by HOAS as a mismatch between the semantic domain of host and object language functions, and propose a solution. The solution is based on the technique of unembedding, which converts from the finally-tagless representation to de Bruijn-indexed terms with strong correctness guarantees. We show that this approach is able to extend the applicability of HOAS while preserving its elegance. We provide a generic strategy for Embedding by Unembedding, and then demonstrate its effectiveness with two substantial case studies in the domains of incremental computation and bidirectional transformations. The resulting embedded implementations are comparable in features to the state-of-the-art language implementations in the respective areas. @Article{ICFP23p189, author = {Kazutaka Matsuda and Samantha Frohlich and Meng Wang and Nicolas Wu}, title = {Embedding by Unembedding}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {189}, numpages = {47}, doi = {10.1145/3607830}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable ICFP '23: "Reflecting on Random Generation ..." Reflecting on Random Generation Harrison Goldstein, Samantha Frohlich, Meng Wang, and Benjamin C. Pierce (University of Pennsylvania, USA; University of Bristol, UK) Expert users of property-based testing often labor to craft random generators that encode detailed knowledge about what it means for a test input to be valid and interesting. Fortunately, the fruits of this labor can also be put to other uses. In the bidirectional programming literature, for example, generators have been repurposed as validity checkers, while Python's Hypothesis library uses the same structures for shrinking and mutating test inputs. To unify and generalize these uses and many others, we propose reflective generators, a new foundation for random data generators that can "reflect" on an input value to calculate the random choices that could have been made to produce it. Reflective generators combine ideas from two existing abstractions: free generators and partial monadic profunctors. They can be used to implement and enhance the aforementioned shrinking and mutation algorithms, generalizing them to work for any values that can be produced by the generator, not just ones for which a trace of the generator's execution is available. Beyond shrinking and mutation, reflective generators generalize a published algorithm for example-based generation, and they can also be used as checkers, partial value completers, and other kinds of test data producers. @Article{ICFP23p200, author = {Harrison Goldstein and Samantha Frohlich and Meng Wang and Benjamin C. Pierce}, title = {Reflecting on Random Generation}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {200}, numpages = {34}, doi = {10.1145/3607842}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
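The unembedding step at the heart of "Embedding by Unembedding" — converting a finally-tagless (HOAS) representation into de Bruijn-indexed terms — can be sketched in a few lines of Haskell. This follows the standard unembedding construction for an untyped fragment; it is an illustration, not the paper's generic strategy or its correctness guarantees.

```haskell
{-# LANGUAGE RankNTypes #-}

-- De Bruijn-indexed target terms.
data DB = Var Int | Lam DB | App DB DB
  deriving (Eq, Show)

-- Finally-tagless (HOAS) interface: binders are host-language functions.
class Hoas r where
  lam :: (r -> r) -> r
  app :: r -> r -> r

-- Unembedding: interpret HOAS at "terms indexed by binding depth".
newtype U = U { runU :: Int -> DB }

instance Hoas U where
  -- The bound variable remembers the depth d at which it was bound;
  -- when used at depth d', its de Bruijn index is d' - d - 1.
  lam f   = U $ \d -> Lam (runU (f (U $ \d' -> Var (d' - d - 1))) (d + 1))
  app x y = U $ \d -> App (runU x d) (runU y d)

-- A closed HOAS term converts to a closed de Bruijn term.
toDB :: (forall r. Hoas r => r) -> DB
toDB t = runU t 0
```

For example, `toDB (lam (\x -> lam (\y -> app x y)))` yields `Lam (Lam (App (Var 1) (Var 0)))`.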
|
Fromherz, Aymeric |
ICFP '23: "Modularity, Code Specialization, ..."
Modularity, Code Specialization, and Zero-Cost Abstractions for Program Verification
Son Ho, Aymeric Fromherz, and Jonathan Protzenko (Inria, France; Microsoft Research, USA) For all the successes in verifying low-level, efficient, security-critical code, little has been said or studied about the structure, architecture and engineering of such large-scale proof developments. We present the design, implementation and evaluation of a set of language-based techniques that allow the programmer to modularly write and verify code at a high level of abstraction, while retaining control over the compilation process and producing high-quality, zero-overhead, low-level code suitable for integration into mainstream software. We implement our techniques within the F* proof assistant, and specifically its shallowly-embedded Low* toolchain that compiles to C. Through our evaluation, we establish that our techniques were critical in scaling the popular HACL* library past 100,000 lines of verified source code, and brought about significant gains in proof engineer productivity. The exposition of our methodology converges on one final, novel case study: the streaming API, a finicky API that has historically caused many bugs in high-profile software. Using our approach, we manage to capture the streaming semantics in a generic way, and apply it “for free” to over a dozen use-cases. Six of those have made it into the reference implementation of the Python programming language, replacing the previous CVE-ridden code. @Article{ICFP23p202, author = {Son Ho and Aymeric Fromherz and Jonathan Protzenko}, title = {Modularity, Code Specialization, and Zero-Cost Abstractions for Program Verification}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {202}, numpages = {32}, doi = {10.1145/3607844}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Garillot, François |
ICFP '23: "LURK: Lambda, the Ultimate ..."
LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)
Nada Amin, John Burnham, François Garillot, Rosario Gennaro, Chhi’mèd Künzang, Daniel Rogozin, and Cameron Wong (Harvard University, USA; Lurk Lab, USA; Lurk Lab, Canada; City College of New York, USA; University College London, UK) We introduce Lurk, a new LISP-based programming language for zk-SNARKs. Traditional approaches to programming over zero-knowledge proofs require compiling the desired computation into a flat circuit, imposing serious constraints on the size and complexity of computations that can be achieved in practice. Lurk programs are instead provided as data to the universal Lurk interpreter circuit, allowing the resulting language to be Turing-complete without compromising the size of the resulting proof artifacts. Our work describes the design and theory behind Lurk, along with detailing how its implementation of content addressing can be used to sidestep many of the usual concerns of programming zero-knowledge proofs. @Article{ICFP23p197, author = {Nada Amin and John Burnham and François Garillot and Rosario Gennaro and Chhi’mèd Künzang and Daniel Rogozin and Cameron Wong}, title = {LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {197}, numpages = {16}, doi = {10.1145/3607839}, year = {2023}, } Publisher's Version Info |
|
Gay, Simon J. |
ICFP '23: "Special Delivery: Programming ..."
Special Delivery: Programming with Mailbox Types
Simon Fowler, Duncan Paul Attard, Franciszek Sowul, Simon J. Gay, and Phil Trinder (University of Glasgow, UK) The asynchronous and unidirectional communication model supported by mailboxes is a key reason for the success of actor languages like Erlang and Elixir for implementing reliable and scalable distributed systems. While many actors may send messages to some actor, only the actor may (selectively) receive from its mailbox. Although actors eliminate many of the issues stemming from shared memory concurrency, they remain vulnerable to communication errors such as protocol violations and deadlocks. Mailbox types are a novel behavioural type system for mailboxes first introduced for a process calculus by de’Liguoro and Padovani in 2018, which capture the contents of a mailbox as a commutative regular expression. Due to aliasing and nested evaluation contexts, moving from a process calculus to a programming language is challenging. This paper presents Pat, the first programming language design incorporating mailbox types, and describes an algorithmic type system. We make essential use of quasi-linear typing to tame some of the complexity introduced by aliasing. Our algorithmic type system is necessarily co-contextual, achieved through a novel use of backwards bidirectional typing, and we prove it sound and complete with respect to our declarative type system. We implement a prototype type checker, and use it to demonstrate the expressiveness of Pat on a factory automation case study and a series of examples from the Savina actor benchmark suite. @Article{ICFP23p191, author = {Simon Fowler and Duncan Paul Attard and Franciszek Sowul and Simon J. Gay and Phil Trinder}, title = {Special Delivery: Programming with Mailbox Types}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {191}, numpages = {30}, doi = {10.1145/3607832}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Gennaro, Rosario |
ICFP '23: "LURK: Lambda, the Ultimate ..."
LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)
Nada Amin, John Burnham, François Garillot, Rosario Gennaro, Chhi’mèd Künzang, Daniel Rogozin, and Cameron Wong (Harvard University, USA; Lurk Lab, USA; Lurk Lab, Canada; City College of New York, USA; University College London, UK) We introduce Lurk, a new LISP-based programming language for zk-SNARKs. Traditional approaches to programming over zero-knowledge proofs require compiling the desired computation into a flat circuit, imposing serious constraints on the size and complexity of computations that can be achieved in practice. Lurk programs are instead provided as data to the universal Lurk interpreter circuit, allowing the resulting language to be Turing-complete without compromising the size of the resulting proof artifacts. Our work describes the design and theory behind Lurk, along with detailing how its implementation of content addressing can be used to sidestep many of the usual concerns of programming zero-knowledge proofs. @Article{ICFP23p197, author = {Nada Amin and John Burnham and François Garillot and Rosario Gennaro and Chhi’mèd Künzang and Daniel Rogozin and Cameron Wong}, title = {LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {197}, numpages = {16}, doi = {10.1145/3607839}, year = {2023}, } Publisher's Version Info |
|
Ghaffari, Mohsen |
ICFP '23: "Formal Specification and Testing ..."
Formal Specification and Testing for Reinforcement Learning
Mahsa Varshosaz, Mohsen Ghaffari, Einar Broch Johnsen, and Andrzej Wąsowski (IT University of Copenhagen, Denmark; University of Oslo, Norway) The development process for reinforcement learning applications is still exploratory rather than systematic. This exploratory nature reduces reuse of specifications between applications and increases the chances of introducing programming errors. This paper takes a step towards systematizing the development of reinforcement learning applications. We introduce a formal specification of reinforcement learning problems and algorithms, with a particular focus on temporal difference methods and their definitions in backup diagrams. We further develop a test harness for a large class of reinforcement learning applications based on temporal difference learning, including SARSA and Q-learning. The entire development is rooted in functional programming methods; starting with pure specifications and denotational semantics, ending with property-based testing and using compositional interpreters for a domain-specific term language as a test oracle for concrete implementations. We demonstrate the usefulness of this testing method on a number of examples, and evaluate with mutation testing. We show that our test suite is effective in killing mutants (90% mutants killed for 75% of subject agents). More importantly, almost half of all mutants are killed by generic write-once-use-everywhere tests that apply to any reinforcement learning problem modeled using our library, without any additional effort from the programmer. @Article{ICFP23p193, author = {Mahsa Varshosaz and Mohsen Ghaffari and Einar Broch Johnsen and Andrzej Wąsowski}, title = {Formal Specification and Testing for Reinforcement Learning}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {193}, numpages = {34}, doi = {10.1145/3607835}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
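The "pure specification" style the abstract describes can be illustrated with a single temporal-difference backup written as a pure Haskell function — our sketch, not the paper's library; `qUpdate` and the list-of-actions encoding are assumptions. The Q-learning backup is Q(s,a) ← Q(s,a) + α(r + γ·maxₐ′ Q(s′,a′) − Q(s,a)).

```haskell
import qualified Data.Map as Map

-- A tabular action-value function, defaulting to 0 for unseen pairs.
type Q s a = Map.Map (s, a) Double

-- One Q-learning backup, as a pure function: easy to specify,
-- property-test, and compare against concrete agent implementations.
qUpdate :: (Ord s, Ord a)
        => Double           -- learning rate alpha
        -> Double           -- discount gamma
        -> [a]              -- the (finite) action set
        -> Q s a
        -> (s, a, Double, s) -- transition (s, a, reward, s')
        -> Q s a
qUpdate alpha gamma actions q (s, a, r, s') =
  Map.insert (s, a) (old + alpha * (r + gamma * best - old)) q
  where
    look k = Map.findWithDefault 0 k q
    best   = maximum [look (s', a') | a' <- actions]
    old    = look (s, a)
```

Starting from the empty table with α = 0.5 and γ = 0.9, the transition (0, 0, 1.0, 1) sets Q(0,0) to 0.5, and a subsequent transition (1, 0, 0.0, 0) sets Q(1,0) to 0.5 · (0.9 · 0.5) = 0.225.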
|
Ghalayini, Jad Elkhaleq |
ICFP '23: "Explicit Refinement Types ..."
Explicit Refinement Types
Jad Elkhaleq Ghalayini and Neel Krishnaswami (University of Cambridge, UK) We present λert, a type theory supporting refinement types with explicit proofs. Instead of solving refinement constraints with an SMT solver like DML and Liquid Haskell, our system requires and permits programmers to embed proofs of properties within the program text, letting us support a rich logic of properties including quantifiers and induction. We show that the type system is sound by showing that every refined program erases to a simply-typed program, and by means of a denotational semantics, we show that every erased program has all of the properties demanded by its refined type. All of our proofs are formalised in Lean 4. @Article{ICFP23p195, author = {Jad Elkhaleq Ghalayini and Neel Krishnaswami}, title = {Explicit Refinement Types}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {195}, numpages = {28}, doi = {10.1145/3607837}, year = {2023}, } Publisher's Version |
|
Goldstein, Harrison |
ICFP '23: "Reflecting on Random Generation ..."
Reflecting on Random Generation
Harrison Goldstein, Samantha Frohlich, Meng Wang, and Benjamin C. Pierce (University of Pennsylvania, USA; University of Bristol, UK) Expert users of property-based testing often labor to craft random generators that encode detailed knowledge about what it means for a test input to be valid and interesting. Fortunately, the fruits of this labor can also be put to other uses. In the bidirectional programming literature, for example, generators have been repurposed as validity checkers, while Python's Hypothesis library uses the same structures for shrinking and mutating test inputs. To unify and generalize these uses and many others, we propose reflective generators, a new foundation for random data generators that can "reflect" on an input value to calculate the random choices that could have been made to produce it. Reflective generators combine ideas from two existing abstractions: free generators and partial monadic profunctors. They can be used to implement and enhance the aforementioned shrinking and mutation algorithms, generalizing them to work for any values that can be produced by the generator, not just ones for which a trace of the generator's execution is available. Beyond shrinking and mutation, reflective generators generalize a published algorithm for example-based generation, and they can also be used as checkers, partial value completers, and other kinds of test data producers. @Article{ICFP23p200, author = {Harrison Goldstein and Samantha Frohlich and Meng Wang and Benjamin C. Pierce}, title = {Reflecting on Random Generation}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {200}, numpages = {34}, doi = {10.1145/3607842}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable ICFP '23: "Etna: An Evaluation Platform ..." Etna: An Evaluation Platform for Property-Based Testing (Experience Report) Jessica Shi, Alperen Keles, Harrison Goldstein, Benjamin C. 
Pierce, and Leonidas Lampropoulos (University of Pennsylvania, USA; University of Maryland, College Park, USA) Property-based testing is a mainstay of functional programming, boasting a rich literature, an enthusiastic user community, and an abundance of tools — so many, indeed, that new users may have difficulty choosing. Moreover, any given framework may support a variety of strategies for generating test inputs; even experienced users may wonder which are better in a given situation. Sadly, the PBT literature, though long on creativity, is short on rigorous comparisons to help answer such questions. We present Etna, a platform for empirical evaluation and comparison of PBT techniques. Etna incorporates a number of popular PBT frameworks and testing workloads from the literature, and its extensible architecture makes adding new ones easy, while handling the technical drudgery of performance measurement. To illustrate its benefits, we use Etna to carry out several experiments with popular PBT approaches in both Coq and Haskell, allowing users to more clearly understand best practices and tradeoffs. @Article{ICFP23p218, author = {Jessica Shi and Alperen Keles and Harrison Goldstein and Benjamin C. Pierce and Leonidas Lampropoulos}, title = {Etna: An Evaluation Platform for Property-Based Testing (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {218}, numpages = {17}, doi = {10.1145/3607860}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Gondelman, Léon |
ICFP '23: "Verifying Reliable Network ..."
Verifying Reliable Network Components in a Distributed Separation Logic with Dependent Separation Protocols
Léon Gondelman, Jonas Kastberg Hinrichsen, Mário Pereira, Amin Timany, and Lars Birkedal (Aarhus University, Denmark; NOVA-LINCS, Portugal; NOVA School of Science and Technology, Portugal) We present a foundationally verified implementation of a reliable communication library for asynchronous client-server communication, and a stack of formally verified components on top thereof. Our library is implemented in an OCaml-like language on top of UDP and features characteristic traits of existing protocols, such as a simple handshaking protocol, bidirectional channels, and retransmission/acknowledgement mechanisms. We verify the library in the Aneris distributed separation logic using a novel proof pattern---dubbed the session escrow pattern---based on the existing escrow proof pattern and the so-called dependent separation protocols, which hitherto have only been used in a non-distributed concurrent setting. We demonstrate how our specification of the reliable communication library simplifies formal reasoning about applications, such as a remote procedure call library, which we in turn use to verify a lazily replicated key-value store with leader-followers and clients thereof. Our development is highly modular---each component is verified relative to specifications of the components it uses (not the implementation). All our results are formalized in the Coq proof assistant. @Article{ICFP23p217, author = {Léon Gondelman and Jonas Kastberg Hinrichsen and Mário Pereira and Amin Timany and Lars Birkedal}, title = {Verifying Reliable Network Components in a Distributed Separation Logic with Dependent Separation Protocols}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {217}, numpages = {31}, doi = {10.1145/3607859}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Gonnord, Laure |
ICFP '23: "Bit-Stealing Made Legal: Compilation ..."
Bit-Stealing Made Legal: Compilation for Custom Memory Representations of Algebraic Data Types
Thaïs Baudon, Gabriel Radanne, and Laure Gonnord (University of Lyon, France; ENS Lyon, France; UCBL, France; CNRS, France; Inria, France; LIP, France; University Grenoble Alpes, France; Grenoble INP, France; LCIS, France) Initially present only in functional languages such as OCaml and Haskell, Algebraic Data Types (ADTs) have now become pervasive in mainstream languages, providing nice data abstractions and an elegant way to express functions through pattern matching. Unfortunately, ADTs remain seldom used in low-level programming. One reason is that their increased convenience comes at the cost of abstracting away the exact memory layout of values. Even Rust, which tries to optimize data layout, severely limits control over memory representation. In this article, we present a new approach to specify the data layout of rich data types based on a dual view: a source type, providing a high-level description available in the rest of the code, along with a memory type, providing full control over the memory layout. This dual view allows for better reasoning about memory layout, both for correctness, with dedicated validity criteria linking the two views, and for optimizations that manipulate the memory view. We then provide algorithms to compile constructors and destructors, including pattern matching, to their low-level memory representation. We prove our compilation algorithms correct, implement them in a tool called ribbit that compiles to LLVM IR, and show some early experimental results. @Article{ICFP23p216, author = {Thaïs Baudon and Gabriel Radanne and Laure Gonnord}, title = {Bit-Stealing Made Legal: Compilation for Custom Memory Representations of Algebraic Data Types}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {216}, numpages = {34}, doi = {10.1145/3607858}, year = {2023}, } Publisher's Version Published Artifact Info Artifacts Available Artifacts Reusable |
|
Goodloe, Alwyn E. |
ICFP '23: "Trustworthy Runtime Verification ..."
Trustworthy Runtime Verification via Bisimulation (Experience Report)
Ryan G. Scott, Mike Dodds, Ivan Perez, Alwyn E. Goodloe, and Robert Dockins (Galois, USA; KBR @ NASA Ames Research Center, USA; NASA Ames Research Center, USA; Amazon, USA) When runtime verification is used to monitor safety-critical systems, it is essential that monitoring code behaves correctly. The Copilot runtime verification framework pursues this goal by automatically generating C monitor programs from a high-level DSL embedded in Haskell. In safety-critical domains, every piece of deployed code must be accompanied by an assurance argument that is convincing to human auditors. However, it is difficult for auditors to determine with confidence that a compiled monitor cannot crash and implements the behavior required by the Copilot semantics. In this paper we describe CopilotVerifier, which runs alongside the Copilot compiler, generating a proof of correctness for the compiled output. The proof establishes that a given Copilot monitor and its compiled form produce equivalent outputs on equivalent inputs, and that they either crash in identical circumstances or cannot crash. The proof takes the form of a bisimulation broken down into a set of verification conditions. We leverage two pieces of SMT-backed technology: the Crucible symbolic execution library for LLVM and the What4 solver interface library. Our results demonstrate that dramatically increased compiler assurance can be achieved at moderate cost by building on existing tools. This paves the way to our ultimate goal of generating formal assurance arguments that are convincing to human auditors. @Article{ICFP23p199, author = {Ryan G. Scott and Mike Dodds and Ivan Perez and Alwyn E. Goodloe and Robert Dockins}, title = {Trustworthy Runtime Verification via Bisimulation (Experience Report)}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {199}, numpages = {17}, doi = {10.1145/3607841}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Greenman, Ben |
ICFP '23: "How to Evaluate Blame for ..."
How to Evaluate Blame for Gradual Types, Part 2
Lukas Lazarek, Ben Greenman, Matthias Felleisen, and Christos Dimoulas (Northwestern University, USA; Brown University, USA; Northeastern University, USA) Equipping an existing programming language with a gradual type system requires two major steps. The first and most visible one in academia is to add a notation for types and a type checking apparatus. The second, highly practical one is to provide a type veneer for the large number of existing untyped libraries; doing so enables typed components to import pieces of functionality and get their uses type-checked, without any changes to the libraries. When programmers create such typed veneers for libraries, they make mistakes that persist and cause trouble. The question is whether the academically investigated run-time checks for gradual type systems assist programmers with debugging such mistakes. This paper provides a first, surprising answer to this question via a rational-programmer investigation: run-time checks alone are typically less helpful than the safety checks of the underlying language. Combining Natural run-time checks with blame, however, provides significantly superior debugging hints. @Article{ICFP23p194, author = {Lukas Lazarek and Ben Greenman and Matthias Felleisen and Christos Dimoulas}, title = {How to Evaluate Blame for Gradual Types, Part 2}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {194}, numpages = {28}, doi = {10.1145/3607836}, year = {2023}, } Publisher's Version |
|
Gruetter, Samuel |
ICFP '23: "Flexible Instruction-Set Semantics ..."
Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)
Thomas Bourgeat, Ian Clester, Andres Erbsen, Samuel Gruetter, Pratap Singh, Andy Wright, and Adam Chlipala (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Carnegie Mellon University, USA) Instruction sets, from families like x86 and ARM, are at the center of many ambitious formal-methods projects. Many verification, synthesis, programming, and debugging tools rely on formal semantics of instruction sets, but different tools can use semantics in rather different ways. The best-known work applying single semantics across diverse tools relies on domain-specific languages like Sail, where the language and its translation tools are specialized to the realm of instruction sets. In the context of the open RISC-V instruction-set family, we decided to explore a different approach, with semantics written in a carefully chosen subset of Haskell. This style does not depend on any new language translators, relying instead on parameterization of semantics over type-class instances. We have used a single core semantics to support testing, interactive proof, and model checking of both software and hardware, demonstrating that monads and the ability to abstract over them using type classes can support pleasant prototyping of ISA semantics. @Article{ICFP23p192, author = {Thomas Bourgeat and Ian Clester and Andres Erbsen and Samuel Gruetter and Pratap Singh and Andy Wright and Adam Chlipala}, title = {Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {192}, numpages = {17}, doi = {10.1145/3607833}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
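The core idea of the abstract-monads approach, writing a single semantics against an abstract interface and instantiating it differently per tool, can be caricatured outside Haskell. Below is a minimal Python sketch under invented names (`Machine`, `Simulator`, `Tracer`, and `exec_add` are illustrative only; the paper abstracts over monads with Haskell type classes, not Python classes):

```python
from abc import ABC, abstractmethod

class Machine(ABC):
    """Abstract machine interface: the instruction semantics is written
    once against these operations, then instantiated per tool."""
    @abstractmethod
    def get_reg(self, r): ...
    @abstractmethod
    def set_reg(self, r, v): ...

def exec_add(m, rd, rs1, rs2):
    # Semantics of an ADD-style instruction, written once for any Machine.
    m.set_reg(rd, m.get_reg(rs1) + m.get_reg(rs2))

class Simulator(Machine):
    """Concrete instantiation: a plain register-file simulator."""
    def __init__(self):
        self.regs = {}
    def get_reg(self, r):
        return self.regs.get(r, 0)
    def set_reg(self, r, v):
        self.regs[r] = v

class Tracer(Simulator):
    """Same semantics, different instantiation: records every access,
    as a tool for testing or model checking might."""
    def __init__(self):
        super().__init__()
        self.trace = []
    def get_reg(self, r):
        self.trace.append(('read', r))
        return super().get_reg(r)
    def set_reg(self, r, v):
        self.trace.append(('write', r, v))
        super().set_reg(r, v)

sim = Simulator()
sim.set_reg('x1', 2)
sim.set_reg('x2', 3)
exec_add(sim, 'x3', 'x1', 'x2')
# sim.regs['x3'] == 5
```

The point mirrored from the abstract is that `exec_add` never changes: each tool supplies its own `Machine` instance, just as each of the paper's tools supplies its own type-class instance.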
|
Hinrichsen, Jonas Kastberg |
ICFP '23: "Verifying Reliable Network ..."
Verifying Reliable Network Components in a Distributed Separation Logic with Dependent Separation Protocols
Léon Gondelman, Jonas Kastberg Hinrichsen, Mário Pereira, Amin Timany, and Lars Birkedal (Aarhus University, Denmark; NOVA-LINCS, Portugal; NOVA School of Science and Technology, Portugal) We present a foundationally verified implementation of a reliable communication library for asynchronous client-server communication, and a stack of formally verified components on top thereof. Our library is implemented in an OCaml-like language on top of UDP and features characteristic traits of existing protocols, such as a simple handshaking protocol, bidirectional channels, and retransmission/acknowledgement mechanisms. We verify the library in the Aneris distributed separation logic using a novel proof pattern---dubbed the session escrow pattern---based on the existing escrow proof pattern and the so-called dependent separation protocols, which hitherto have only been used in a non-distributed concurrent setting. We demonstrate how our specification of the reliable communication library simplifies formal reasoning about applications, such as a remote procedure call library, which we in turn use to verify a lazily replicated key-value store with leader-followers and clients thereof. Our development is highly modular---each component is verified relative to specifications of the components it uses (not the implementation). All our results are formalized in the Coq proof assistant. @Article{ICFP23p217, author = {Léon Gondelman and Jonas Kastberg Hinrichsen and Mário Pereira and Amin Timany and Lars Birkedal}, title = {Verifying Reliable Network Components in a Distributed Separation Logic with Dependent Separation Protocols}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {217}, numpages = {31}, doi = {10.1145/3607859}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable ICFP '23: "Dependent Session Protocols ..." 
Dependent Session Protocols in Separation Logic from First Principles (Functional Pearl) Jules Jacobs, Jonas Kastberg Hinrichsen, and Robbert Krebbers (Radboud University Nijmegen, Netherlands; Aarhus University, Denmark) We develop an account of dependent session protocols in concurrent separation logic for a functional language with message-passing. Inspired by minimalistic session calculi, we present a layered design: starting from mutable references, we build one-shot channels, session channels, and imperative channels. Whereas previous work on dependent session protocols in concurrent separation logic required advanced mechanisms such as recursive domain equations and higher-order ghost state, we only require the most basic mechanisms to verify that our one-shot channels satisfy one-shot protocols, and subsequently treat their specification as a black box on top of which we define dependent session protocols. This has a number of advantages in terms of simplicity, elegance, and flexibility: support for subprotocols and guarded recursion automatically transfers from the one-shot protocols to the dependent session protocols, and we easily obtain various forms of channel closing. Because the meta theory of our results is so simple, we are able to give all definitions as part of this paper, and mechanize all our results using the Iris framework in less than 1000 lines of Coq. @Article{ICFP23p214, author = {Jules Jacobs and Jonas Kastberg Hinrichsen and Robbert Krebbers}, title = {Dependent Session Protocols in Separation Logic from First Principles (Functional Pearl)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {214}, numpages = {28}, doi = {10.1145/3607856}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Ho, Son |
ICFP '23: "Modularity, Code Specialization, ..."
Modularity, Code Specialization, and Zero-Cost Abstractions for Program Verification
Son Ho, Aymeric Fromherz, and Jonathan Protzenko (Inria, France; Microsoft Research, USA) For all the successes in verifying low-level, efficient, security-critical code, little has been said or studied about the structure, architecture and engineering of such large-scale proof developments. We present the design, implementation and evaluation of a set of language-based techniques that allow the programmer to modularly write and verify code at a high level of abstraction, while retaining control over the compilation process and producing high-quality, zero-overhead, low-level code suitable for integration into mainstream software. We implement our techniques within the F* proof assistant, and specifically its shallowly-embedded Low* toolchain that compiles to C. Through our evaluation, we establish that our techniques were critical in scaling the popular HACL* library past 100,000 lines of verified source code, and brought about significant gains in proof engineer productivity. The exposition of our methodology converges on one final, novel case study: the streaming API, a finicky API that has historically caused many bugs in high-profile software. Using our approach, we manage to capture the streaming semantics in a generic way, and apply it “for free” to over a dozen use-cases. Six of those have made it into the reference implementation of the Python programming language, replacing the previous CVE-ridden code. @Article{ICFP23p202, author = {Son Ho and Aymeric Fromherz and Jonathan Protzenko}, title = {Modularity, Code Specialization, and Zero-Cost Abstractions for Program Verification}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {202}, numpages = {32}, doi = {10.1145/3607844}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Hombücher, Tobias |
ICFP '23: "Combinator-Based Fixpoint ..."
Combinator-Based Fixpoint Algorithms for Big-Step Abstract Interpreters
Sven Keidel, Sebastian Erdweg, and Tobias Hombücher (TU Darmstadt, Germany; JGU Mainz, Germany) Big-step abstract interpreters are an approach to build static analyzers based on big-step interpretation. While big-step interpretation provides a number of benefits for the definition of an analysis, it also requires particularly complicated fixpoint algorithms because the analysis definition is a recursive function whose termination is uncertain. This is in contrast to other analysis approaches, such as small-step reduction, abstract machines, or graph reachability, where the analysis essentially forms a finite transition system between widened analysis states. We show how to systematically develop sophisticated fixpoint algorithms for big-step abstract interpreters and how to ensure their soundness. Our approach is based on small and reusable fixpoint combinators that can be composed to yield fixpoint algorithms. For example, these combinators describe the order in which the program is analyzed, how deep recursive functions are unfolded and loops unrolled, or they record auxiliary data such as a (context-sensitive) call graph. Importantly, each combinator can be developed separately, reused across analyses, and can be verified sound independently. Consequently, analysis developers can freely compose combinators to obtain sound fixpoint algorithms that work best for their use case. We provide a formal metatheory that guarantees a fixpoint algorithm is sound if it is composed from sound combinators only. We experimentally validate our combinator-based approach by describing sophisticated fixpoint algorithms for analyses of Stratego, Scheme, and WebAssembly. @Article{ICFP23p221, author = {Sven Keidel and Sebastian Erdweg and Tobias Hombücher}, title = {Combinator-Based Fixpoint Algorithms for Big-Step Abstract Interpreters}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {221}, numpages = {27}, doi = {10.1145/3607863}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
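To give a flavor of the fixpoint problem the abstract describes, here is a minimal Python sketch of a single fixpoint combinator that fuses two of the concerns mentioned, caching with cycle-breaking and iteration until stabilization (the paper's combinators are finer-grained and composable, and its analyses are far richer; all names here are invented for illustration):

```python
def fix(step, bottom):
    """Tie the knot of a recursively defined analysis.

    `step(rec, x)` is the analysis body with its recursive calls
    abstracted as `rec`. We break cycles by answering recursive
    re-entries with the current approximation (initially `bottom`)
    and re-run the analysis until the cache stops changing.
    """
    cache = {}
    def solve(root):
        while True:
            changed = [False]
            on_stack = set()
            def rec(x):
                if x in on_stack:                # recursive cycle:
                    return cache.get(x, bottom)  # use current approximation
                on_stack.add(x)
                result = step(rec, x)
                on_stack.discard(x)
                if result != cache.get(x, bottom):
                    cache[x] = result
                    changed[0] = True
                return result
            rec(root)
            if not changed[0]:
                return cache.get(root, bottom)
    return solve

# Toy "analysis": reachability in a cyclic graph, written as a
# recursive definition whose naive recursion would not terminate.
graph = {'a': ['b'], 'b': ['c', 'a'], 'c': []}

def reach_step(rec, node):
    # Nodes reachable from `node`: itself plus whatever its successors reach.
    s = {node}
    for succ in graph[node]:
        s |= rec(succ)
    return frozenset(s)

reachable = fix(reach_step, frozenset())
# reachable('a') == frozenset({'a', 'b', 'c'})
```

In the paper's setting, concerns like unrolling depth or call-graph recording would each be a separate combinator composed around the analysis, rather than fused into one function as here.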
|
Hubers, Alex |
ICFP '23: "Generic Programming with Extensible ..."
Generic Programming with Extensible Data Types: Or, Making Ad Hoc Extensible Data Types Less Ad Hoc
Alex Hubers and J. Garrett Morris (University of Iowa, USA) We present a novel approach to generic programming over extensible data types. Row types capture the structure of records and variants, and can be used to express record and variant subtyping, record extension, and modular composition of case branches. We extend row typing to capture generic programming over rows themselves, capturing patterns including lifting operations to records and variants from their component types, and the duality between cases blocks over variants and records of labeled functions, without placing specific requirements on the fields or constructors present in the records and variants. We formalize our approach in System R𝜔, an extension of F𝜔 with row types, and give a denotational semantics for (stratified) R𝜔 in Agda. @Article{ICFP23p201, author = {Alex Hubers and J. Garrett Morris}, title = {Generic Programming with Extensible Data Types: Or, Making Ad Hoc Extensible Data Types Less Ad Hoc}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {201}, numpages = {29}, doi = {10.1145/3607843}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Hutton, Graham |
ICFP '23: "Calculating Compilers for ..."
Calculating Compilers for Concurrency
Patrick Bahr and Graham Hutton (IT University of Copenhagen, Denmark; University of Nottingham, UK) Choice trees have recently been introduced as a general structure for defining the semantics of programming languages with a wide variety of features and effects. In this article we focus on concurrent languages, and show how a codensity version of choice trees allows the semantics for such languages to be systematically transformed into compilers using equational reasoning techniques. The codensity construction is the key ingredient that enables a high-level, algebraic approach. As a case study, we calculate a compiler for a concurrent lambda calculus with channel-based communication. @Article{ICFP23p213, author = {Patrick Bahr and Graham Hutton}, title = {Calculating Compilers for Concurrency}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {213}, numpages = {28}, doi = {10.1145/3607855}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Jacobs, Jules |
ICFP '23: "Dependent Session Protocols ..."
Dependent Session Protocols in Separation Logic from First Principles (Functional Pearl)
Jules Jacobs, Jonas Kastberg Hinrichsen, and Robbert Krebbers (Radboud University Nijmegen, Netherlands; Aarhus University, Denmark) We develop an account of dependent session protocols in concurrent separation logic for a functional language with message-passing. Inspired by minimalistic session calculi, we present a layered design: starting from mutable references, we build one-shot channels, session channels, and imperative channels. Whereas previous work on dependent session protocols in concurrent separation logic required advanced mechanisms such as recursive domain equations and higher-order ghost state, we only require the most basic mechanisms to verify that our one-shot channels satisfy one-shot protocols, and subsequently treat their specification as a black box on top of which we define dependent session protocols. This has a number of advantages in terms of simplicity, elegance, and flexibility: support for subprotocols and guarded recursion automatically transfers from the one-shot protocols to the dependent session protocols, and we easily obtain various forms of channel closing. Because the meta theory of our results is so simple, we are able to give all definitions as part of this paper, and mechanize all our results using the Iris framework in less than 1000 lines of Coq. @Article{ICFP23p214, author = {Jules Jacobs and Jonas Kastberg Hinrichsen and Robbert Krebbers}, title = {Dependent Session Protocols in Separation Logic from First Principles (Functional Pearl)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {214}, numpages = {28}, doi = {10.1145/3607856}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Jhala, Ranjit |
ICFP '23: "The Verse Calculus: A Core ..."
The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming
Lennart Augustsson, Joachim Breitner, Koen Claessen, Ranjit Jhala, Simon Peyton Jones, Olin Shivers, Guy L. Steele Jr., and Tim Sweeney (Epic Games, Sweden; Unaffiliated, Germany; Epic Games, USA; Epic Games, UK; Oracle Labs, USA) Functional logic languages have a rich literature, but it is tricky to give them a satisfying semantics. In this paper we describe the Verse calculus, VC, a new core calculus for deterministic functional logic programming. Our main contribution is to equip VC with a small-step rewrite semantics, so that we can reason about a VC program in the same way as one does with lambda calculus; that is, by applying successive rewrites to it. We also show that the rewrite system is confluent for well-behaved terms. @Article{ICFP23p203, author = {Lennart Augustsson and Joachim Breitner and Koen Claessen and Ranjit Jhala and Simon Peyton Jones and Olin Shivers and Guy L. Steele Jr. and Tim Sweeney}, title = {The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {203}, numpages = {31}, doi = {10.1145/3607845}, year = {2023}, } Publisher's Version |
|
Johnsen, Einar Broch |
ICFP '23: "Formal Specification and Testing ..."
Formal Specification and Testing for Reinforcement Learning
Mahsa Varshosaz, Mohsen Ghaffari, Einar Broch Johnsen, and Andrzej Wąsowski (IT University of Copenhagen, Denmark; University of Oslo, Norway) The development process for reinforcement learning applications is still exploratory rather than systematic. This exploratory nature reduces reuse of specifications between applications and increases the chances of introducing programming errors. This paper takes a step towards systematizing the development of reinforcement learning applications. We introduce a formal specification of reinforcement learning problems and algorithms, with a particular focus on temporal difference methods and their definitions in backup diagrams. We further develop a test harness for a large class of reinforcement learning applications based on temporal difference learning, including SARSA and Q-learning. The entire development is rooted in functional programming methods; starting with pure specifications and denotational semantics, ending with property-based testing and using compositional interpreters for a domain-specific term language as a test oracle for concrete implementations. We demonstrate the usefulness of this testing method on a number of examples, and evaluate with mutation testing. We show that our test suite is effective in killing mutants (90% mutants killed for 75% of subject agents). More importantly, almost half of all mutants are killed by generic write-once-use-everywhere tests that apply to any reinforcement learning problem modeled using our library, without any additional effort from the programmer. @Article{ICFP23p193, author = {Mahsa Varshosaz and Mohsen Ghaffari and Einar Broch Johnsen and Andrzej Wąsowski}, title = {Formal Specification and Testing for Reinforcement Learning}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {193}, numpages = {34}, doi = {10.1145/3607835}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Kashiwa, Shun |
ICFP '23: "HasChor: Functional Choreographic ..."
HasChor: Functional Choreographic Programming for All (Functional Pearl)
Gan Shen, Shun Kashiwa, and Lindsey Kuper (University of California at Santa Cruz, USA) Choreographic programming is an emerging paradigm for programming distributed systems. In choreographic programming, the programmer describes the behavior of the entire system as a single, unified program -- a choreography -- which is then compiled to individual programs that run on each node, via a compilation step called endpoint projection. We present a new model for functional choreographic programming where choreographies are expressed as computations in a monad. Our model supports cutting-edge choreographic programming features that enable modularity and code reuse: in particular, it supports higher-order choreographies, in which a choreography may be passed as an argument to another choreography, and location-polymorphic choreographies, in which a choreography can abstract over nodes. Our model is implemented in a Haskell library, HasChor, which lets programmers write choreographic programs while using the rich Haskell ecosystem at no cost, bringing choreographic programming within reach of everyday Haskellers. Moreover, thanks to Haskell's abstractions, the implementation of the HasChor library itself is concise and understandable, boiling down endpoint projection to its short and simple essence. @Article{ICFP23p207, author = {Gan Shen and Shun Kashiwa and Lindsey Kuper}, title = {HasChor: Functional Choreographic Programming for All (Functional Pearl)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {207}, numpages = {25}, doi = {10.1145/3607849}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
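Endpoint projection, the compilation step named in the abstract above, can be caricatured in a few lines. This first-order Python sketch is not HasChor's monadic Haskell design; the data representation and all names (`choreography`, `project`) are invented for illustration:

```python
# A choreography as a single global program: a list of communication
# steps (sender, receiver, message). HasChor instead expresses this
# monadically, but the projection idea is the same.
choreography = [
    ('alice', 'bob',   'hello'),
    ('bob',   'alice', 'world'),
]

def project(chor, role):
    """Endpoint projection: derive the local program for `role`."""
    prog = []
    for sender, receiver, msg in chor:
        if role == sender:
            prog.append(('send', receiver, msg))
        elif role == receiver:
            prog.append(('recv', sender))
        # steps not involving `role` are dropped entirely
    return prog

project(choreography, 'alice')
# [('send', 'bob', 'hello'), ('recv', 'bob')]
```

Each node then runs only its projected program, yet all nodes' behaviors were described, and can be type-checked and reasoned about, as one unified choreography.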
|
Katsura, Hiroyuki |
ICFP '23: "Higher-Order Property-Directed ..."
Higher-Order Property-Directed Reachability
Hiroyuki Katsura, Naoki Kobayashi, and Ryosuke Sato (University of Tokyo, Japan) Property-directed reachability (PDR) has been used as a successful method for automated verification of first-order transition systems. We propose a higher-order extension of PDR, called HoPDR, where higher-order recursive functions may be used to describe transition systems. We formalize HoPDR for the validity checking problem for conjunctive nu-HFL(Z), a higher-order fixpoint logic with integers and greatest fixpoint operators. The validity checking problem can also be viewed as a higher-order extension of the satisfiability problem for Constrained Horn Clauses (CHC), and safety property verification of higher-order programs can naturally be reduced to the validity checking problem. We have implemented a prototype verification tool based on HoPDR and confirmed its effectiveness. We also compare our HoPDR procedure with the PDR procedure for first-order systems and previous methods for fully automated higher-order program verification. @Article{ICFP23p190, author = {Hiroyuki Katsura and Naoki Kobayashi and Ryosuke Sato}, title = {Higher-Order Property-Directed Reachability}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {190}, numpages = {30}, doi = {10.1145/3607831}, year = {2023}, } Publisher's Version |
|
Keidel, Sven |
ICFP '23: "Combinator-Based Fixpoint ..."
Combinator-Based Fixpoint Algorithms for Big-Step Abstract Interpreters
Sven Keidel, Sebastian Erdweg, and Tobias Hombücher (TU Darmstadt, Germany; JGU Mainz, Germany) Big-step abstract interpreters are an approach to build static analyzers based on big-step interpretation. While big-step interpretation provides a number of benefits for the definition of an analysis, it also requires particularly complicated fixpoint algorithms because the analysis definition is a recursive function whose termination is uncertain. This is in contrast to other analysis approaches, such as small-step reduction, abstract machines, or graph reachability, where the analysis essentially forms a finite transition system between widened analysis states. We show how to systematically develop sophisticated fixpoint algorithms for big-step abstract interpreters and how to ensure their soundness. Our approach is based on small and reusable fixpoint combinators that can be composed to yield fixpoint algorithms. For example, these combinators describe the order in which the program is analyzed, how deep recursive functions are unfolded and loops unrolled, or they record auxiliary data such as a (context-sensitive) call graph. Importantly, each combinator can be developed separately, reused across analyses, and verified sound independently. Consequently, analysis developers can freely compose combinators to obtain sound fixpoint algorithms that work best for their use case. We provide a formal metatheory that guarantees a fixpoint algorithm is sound if it is composed from sound combinators only. We experimentally validate our combinator-based approach by describing sophisticated fixpoint algorithms for analyses of Stratego, Scheme, and WebAssembly. @Article{ICFP23p221, author = {Sven Keidel and Sebastian Erdweg and Tobias Hombücher}, title = {Combinator-Based Fixpoint Algorithms for Big-Step Abstract Interpreters}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {221}, numpages = {27}, doi = {10.1145/3607863}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Keles, Alperen |
ICFP '23: "Etna: An Evaluation Platform ..."
Etna: An Evaluation Platform for Property-Based Testing (Experience Report)
Jessica Shi, Alperen Keles, Harrison Goldstein, Benjamin C. Pierce, and Leonidas Lampropoulos (University of Pennsylvania, USA; University of Maryland, College Park, USA) Property-based testing is a mainstay of functional programming, boasting a rich literature, an enthusiastic user community, and an abundance of tools — so many, indeed, that new users may have difficulty choosing. Moreover, any given framework may support a variety of strategies for generating test inputs; even experienced users may wonder which are better in a given situation. Sadly, the PBT literature, though long on creativity, is short on rigorous comparisons to help answer such questions. We present Etna, a platform for empirical evaluation and comparison of PBT techniques. Etna incorporates a number of popular PBT frameworks and testing workloads from the literature, and its extensible architecture makes adding new ones easy, while handling the technical drudgery of performance measurement. To illustrate its benefits, we use Etna to carry out several experiments with popular PBT approaches in both Coq and Haskell, allowing users to more clearly understand best practices and tradeoffs. @Article{ICFP23p218, author = {Jessica Shi and Alperen Keles and Harrison Goldstein and Benjamin C. Pierce and Leonidas Lampropoulos}, title = {Etna: An Evaluation Platform for Property-Based Testing (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {218}, numpages = {17}, doi = {10.1145/3607860}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Kobayashi, Naoki |
ICFP '23: "Higher-Order Property-Directed ..."
Higher-Order Property-Directed Reachability
Hiroyuki Katsura, Naoki Kobayashi, and Ryosuke Sato (University of Tokyo, Japan) Property-directed reachability (PDR) has been used as a successful method for automated verification of first-order transition systems. We propose a higher-order extension of PDR, called HoPDR, where higher-order recursive functions may be used to describe transition systems. We formalize HoPDR for the validity checking problem for conjunctive nu-HFL(Z), a higher-order fixpoint logic with integers and greatest fixpoint operators. The validity checking problem can also be viewed as a higher-order extension of the satisfiability problem for Constrained Horn Clauses (CHC), and safety property verification of higher-order programs can naturally be reduced to the validity checking problem. We have implemented a prototype verification tool based on HoPDR and confirmed its effectiveness. We also compare our HoPDR procedure with the PDR procedure for first-order systems and previous methods for fully automated higher-order program verification. @Article{ICFP23p190, author = {Hiroyuki Katsura and Naoki Kobayashi and Ryosuke Sato}, title = {Higher-Order Property-Directed Reachability}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {190}, numpages = {30}, doi = {10.1145/3607831}, year = {2023}, } Publisher's Version |
|
Krebbers, Robbert |
ICFP '23: "Dependent Session Protocols ..."
Dependent Session Protocols in Separation Logic from First Principles (Functional Pearl)
Jules Jacobs, Jonas Kastberg Hinrichsen, and Robbert Krebbers (Radboud University Nijmegen, Netherlands; Aarhus University, Denmark) We develop an account of dependent session protocols in concurrent separation logic for a functional language with message-passing. Inspired by minimalistic session calculi, we present a layered design: starting from mutable references, we build one-shot channels, session channels, and imperative channels. Whereas previous work on dependent session protocols in concurrent separation logic required advanced mechanisms such as recursive domain equations and higher-order ghost state, we only require the most basic mechanisms to verify that our one-shot channels satisfy one-shot protocols, and subsequently treat their specification as a black box on top of which we define dependent session protocols. This has a number of advantages in terms of simplicity, elegance, and flexibility: support for subprotocols and guarded recursion automatically transfers from the one-shot protocols to the dependent session protocols, and we easily obtain various forms of channel closing. Because the meta theory of our results is so simple, we are able to give all definitions as part of this paper, and mechanize all our results using the Iris framework in less than 1000 lines of Coq. @Article{ICFP23p214, author = {Jules Jacobs and Jonas Kastberg Hinrichsen and Robbert Krebbers}, title = {Dependent Session Protocols in Separation Logic from First Principles (Functional Pearl)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {214}, numpages = {28}, doi = {10.1145/3607856}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Krishnamurthi, Shriram |
ICFP '23: "What Happens When Students ..."
What Happens When Students Switch (Functional) Languages (Experience Report)
Kuang-Chen Lu, Shriram Krishnamurthi, Kathi Fisler, and Ethel Tshukudu (Brown University, USA; University of Botswana, Botswana) When novice programming students already know one programming language and have to learn another, what issues do they run into? We specifically focus on one or both languages being functional, varying along two axes: syntax and semantics. We report on problems, especially persistent ones. This work can be of immediate value to educators and also sets up avenues for future research. @Article{ICFP23p215, author = {Kuang-Chen Lu and Shriram Krishnamurthi and Kathi Fisler and Ethel Tshukudu}, title = {What Happens When Students Switch (Functional) Languages (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {215}, numpages = {17}, doi = {10.1145/3607857}, year = {2023}, } Publisher's Version Archive submitted (2.7 MB) |
|
Krishnaswami, Neel |
ICFP '23: "Explicit Refinement Types ..."
Explicit Refinement Types
Jad Elkhaleq Ghalayini and Neel Krishnaswami (University of Cambridge, UK) We present λert, a type theory supporting refinement types with _explicit proofs_. Instead of solving refinement constraints with an SMT solver like DML and Liquid Haskell, our system requires and permits programmers to embed proofs of properties within the program text, letting us support a rich logic of properties including quantifiers and induction. We show that the type system is sound by showing that every refined program erases to a simply-typed program, and by means of a denotational semantics, we show that every erased program has all of the properties demanded by its refined type. All of our proofs are formalised in Lean 4. @Article{ICFP23p195, author = {Jad Elkhaleq Ghalayini and Neel Krishnaswami}, title = {Explicit Refinement Types}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {195}, numpages = {28}, doi = {10.1145/3607837}, year = {2023}, } Publisher's Version
|
Künzang, Chhi’mèd |
ICFP '23: "LURK: Lambda, the Ultimate ..."
LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)
Nada Amin, John Burnham, François Garillot, Rosario Gennaro, Chhi’mèd Künzang, Daniel Rogozin, and Cameron Wong (Harvard University, USA; Lurk Lab, USA; Lurk Lab, Canada; City College of New York, USA; University College London, UK) We introduce Lurk, a new LISP-based programming language for zk-SNARKs. Traditional approaches to programming over zero-knowledge proofs require compiling the desired computation into a flat circuit, imposing serious constraints on the size and complexity of computations that can be achieved in practice. Lurk programs are instead provided as data to the universal Lurk interpreter circuit, allowing the resulting language to be Turing-complete without compromising the size of the resulting proof artifacts. Our work describes the design and theory behind Lurk, along with detailing how its implementation of content addressing can be used to sidestep many of the usual concerns of programming zero-knowledge proofs. @Article{ICFP23p197, author = {Nada Amin and John Burnham and François Garillot and Rosario Gennaro and Chhi’mèd Künzang and Daniel Rogozin and Cameron Wong}, title = {LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {197}, numpages = {16}, doi = {10.1145/3607839}, year = {2023}, } Publisher's Version Info |
|
Kuper, Lindsey |
ICFP '23: "HasChor: Functional Choreographic ..."
HasChor: Functional Choreographic Programming for All (Functional Pearl)
Gan Shen, Shun Kashiwa, and Lindsey Kuper (University of California at Santa Cruz, USA) Choreographic programming is an emerging paradigm for programming distributed systems. In choreographic programming, the programmer describes the behavior of the entire system as a single, unified program -- a choreography -- which is then compiled to individual programs that run on each node, via a compilation step called endpoint projection. We present a new model for functional choreographic programming where choreographies are expressed as computations in a monad. Our model supports cutting-edge choreographic programming features that enable modularity and code reuse: in particular, it supports higher-order choreographies, in which a choreography may be passed as an argument to another choreography, and location-polymorphic choreographies, in which a choreography can abstract over nodes. Our model is implemented in a Haskell library, HasChor, which lets programmers write choreographic programs while using the rich Haskell ecosystem at no cost, bringing choreographic programming within reach of everyday Haskellers. Moreover, thanks to Haskell's abstractions, the implementation of the HasChor library itself is concise and understandable, boiling down endpoint projection to its short and simple essence. @Article{ICFP23p207, author = {Gan Shen and Shun Kashiwa and Lindsey Kuper}, title = {HasChor: Functional Choreographic Programming for All (Functional Pearl)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {207}, numpages = {25}, doi = {10.1145/3607849}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Lampropoulos, Leonidas |
ICFP '23: "Etna: An Evaluation Platform ..."
Etna: An Evaluation Platform for Property-Based Testing (Experience Report)
Jessica Shi, Alperen Keles, Harrison Goldstein, Benjamin C. Pierce, and Leonidas Lampropoulos (University of Pennsylvania, USA; University of Maryland, College Park, USA) Property-based testing is a mainstay of functional programming, boasting a rich literature, an enthusiastic user community, and an abundance of tools — so many, indeed, that new users may have difficulty choosing. Moreover, any given framework may support a variety of strategies for generating test inputs; even experienced users may wonder which are better in a given situation. Sadly, the PBT literature, though long on creativity, is short on rigorous comparisons to help answer such questions. We present Etna, a platform for empirical evaluation and comparison of PBT techniques. Etna incorporates a number of popular PBT frameworks and testing workloads from the literature, and its extensible architecture makes adding new ones easy, while handling the technical drudgery of performance measurement. To illustrate its benefits, we use Etna to carry out several experiments with popular PBT approaches in both Coq and Haskell, allowing users to more clearly understand best practices and tradeoffs. @Article{ICFP23p218, author = {Jessica Shi and Alperen Keles and Harrison Goldstein and Benjamin C. Pierce and Leonidas Lampropoulos}, title = {Etna: An Evaluation Platform for Property-Based Testing (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {218}, numpages = {17}, doi = {10.1145/3607860}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Lazarek, Lukas |
ICFP '23: "How to Evaluate Blame for ..."
How to Evaluate Blame for Gradual Types, Part 2
Lukas Lazarek, Ben Greenman, Matthias Felleisen, and Christos Dimoulas (Northwestern University, USA; Brown University, USA; Northeastern University, USA) Equipping an existing programming language with a gradual type system requires two major steps. The first and most visible one in academia is to add a notation for types and a type checking apparatus. The second, highly practical one is to provide a type veneer for the large number of existing untyped libraries; doing so enables typed components to import pieces of functionality and get their uses type-checked, without any changes to the libraries. When programmers create such typed veneers for libraries, they make mistakes that persist and cause trouble. The question is whether the academically investigated run-time checks for gradual type systems assist programmers with debugging such mistakes. This paper provides a first, surprising answer to this question via a rational-programmer investigation: run-time checks alone are typically less helpful than the safety checks of the underlying language. Combining Natural run-time checks with blame, however, provides significantly superior debugging hints. @Article{ICFP23p194, author = {Lukas Lazarek and Ben Greenman and Matthias Felleisen and Christos Dimoulas}, title = {How to Evaluate Blame for Gradual Types, Part 2}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {194}, numpages = {28}, doi = {10.1145/3607836}, year = {2023}, } Publisher's Version |
|
Leijen, Daan |
ICFP '23: "FP²: Fully in-Place Functional ..."
FP²: Fully in-Place Functional Programming
Anton Lorenzen, Daan Leijen, and Wouter Swierstra (University of Edinburgh, UK; Microsoft Research, USA; Utrecht University, Netherlands) As functional programmers we always face a dilemma: should we write purely functional code, or sacrifice purity for efficiency and resort to in-place updates? This paper identifies precisely when we can have the best of both worlds: a wide class of purely functional programs can be executed safely using in-place updates without requiring allocation, provided their arguments are not shared elsewhere. We describe a linear _fully in-place_ (FIP) calculus where we prove that we can always execute such functions in a way that requires no (de)allocation and uses constant stack space. Of course, such a calculus is only relevant if we can express interesting algorithms; we provide numerous examples of in-place functions on data structures such as splay trees or finger trees, together with in-place versions of merge sort and quick sort. We also show how we can generically derive a map function over _any_ polynomial data type that is fully in-place. Finally, we have implemented the rules of the FIP calculus in the Koka language. Using Perceus reference-counting garbage collection, this implementation dynamically executes FIP functions in-place whenever possible. @Article{ICFP23p198, author = {Anton Lorenzen and Daan Leijen and Wouter Swierstra}, title = {FP²: Fully in-Place Functional Programming}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {198}, numpages = {30}, doi = {10.1145/3607840}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Liu, Yiyun |
ICFP '23: "Dependently-Typed Programming ..."
Dependently-Typed Programming with Logical Equality Reflection
Yiyun Liu and Stephanie Weirich (University of Pennsylvania, USA) In dependently-typed functional programming languages that allow general recursion, programs used as proofs must be evaluated to retain type soundness. As a result, programmers must make a trade-off between performance and safety. To address this problem, we propose System DE, an explicitly-typed, moded core calculus that supports termination tracking and equality reflection. Programmers can write inductive proofs about potentially diverging programs in a logical sublanguage and reflect those proofs to the type checker, while knowing that such proofs will be erased by the compiler before execution. A key feature of System DE is its use of modes for both termination and relevance tracking, which not only simplifies the design but also leaves it open for future extension. System DE is suitable for use in the Glasgow Haskell Compiler, but could serve as the basis for any general purpose dependently-typed language. @Article{ICFP23p210, author = {Yiyun Liu and Stephanie Weirich}, title = {Dependently-Typed Programming with Logical Equality Reflection}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {210}, numpages = {37}, doi = {10.1145/3607852}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Lorenzen, Anton |
ICFP '23: "FP²: Fully in-Place Functional ..."
FP²: Fully in-Place Functional Programming
Anton Lorenzen, Daan Leijen, and Wouter Swierstra (University of Edinburgh, UK; Microsoft Research, USA; Utrecht University, Netherlands) As functional programmers we always face a dilemma: should we write purely functional code, or sacrifice purity for efficiency and resort to in-place updates? This paper identifies precisely when we can have the best of both worlds: a wide class of purely functional programs can be executed safely using in-place updates without requiring allocation, provided their arguments are not shared elsewhere. We describe a linear _fully in-place_ (FIP) calculus where we prove that we can always execute such functions in a way that requires no (de)allocation and uses constant stack space. Of course, such a calculus is only relevant if we can express interesting algorithms; we provide numerous examples of in-place functions on data structures such as splay trees or finger trees, together with in-place versions of merge sort and quick sort. We also show how we can generically derive a map function over _any_ polynomial data type that is fully in-place. Finally, we have implemented the rules of the FIP calculus in the Koka language. Using Perceus reference-counting garbage collection, this implementation dynamically executes FIP functions in-place whenever possible. @Article{ICFP23p198, author = {Anton Lorenzen and Daan Leijen and Wouter Swierstra}, title = {FP²: Fully in-Place Functional Programming}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {198}, numpages = {30}, doi = {10.1145/3607840}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Lu, Kuang-Chen |
ICFP '23: "What Happens When Students ..."
What Happens When Students Switch (Functional) Languages (Experience Report)
Kuang-Chen Lu, Shriram Krishnamurthi, Kathi Fisler, and Ethel Tshukudu (Brown University, USA; University of Botswana, Botswana) When novice programming students already know one programming language and have to learn another, what issues do they run into? We specifically focus on one or both languages being functional, varying along two axes: syntax and semantics. We report on problems, especially persistent ones. This work can be of immediate value to educators and also sets up avenues for future research. @Article{ICFP23p215, author = {Kuang-Chen Lu and Shriram Krishnamurthi and Kathi Fisler and Ethel Tshukudu}, title = {What Happens When Students Switch (Functional) Languages (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {215}, numpages = {17}, doi = {10.1145/3607857}, year = {2023}, } Publisher's Version Archive submitted (2.7 MB) |
|
Lutze, Matthew |
ICFP '23: "With or Without You: Programming ..."
With or Without You: Programming with Effect Exclusion
Matthew Lutze, Magnus Madsen, Philipp Schuster, and Jonathan Immanuel Brachthäuser (Aarhus University, Denmark; University of Tübingen, Germany) Type and effect systems have been successfully used to statically reason about effects in many different domains, including region-based memory management, exceptions, and algebraic effects and handlers. Such systems’ soundness is often stated in terms of the absence of effects. Yet, existing systems only admit indirect reasoning about the absence of effects. This is further complicated by effect polymorphism, which allows function signatures to abstract over arbitrary, unknown sets of effects. We present a new type and effect system with effect polymorphism as well as union, intersection, and complement effects. The effect system allows us to express effect exclusion as a new class of effect polymorphic functions: those that permit any effects except those in a specific set. This way, we equip programmers with the means to directly reason about the absence of effects. Our type and effect system builds on the Hindley-Milner type system, supports effect polymorphism, and preserves principal types modulo Boolean equivalence. In addition, a suitable extension of Algorithm W with Boolean unification on the algebra of sets enables complete type and effect inference. We formalize these notions in the λ∁ calculus. We prove the standard progress and preservation theorems as well as a non-standard effect safety theorem: no excluded effect is ever performed. We implement the type and effect system as an extension of the Flix programming language. We conduct a case study of open source projects identifying 59 program fragments that require effect exclusion for correctness. To demonstrate the usefulness of the proposed type and effect system, we recast these program fragments into our extension of Flix. @Article{ICFP23p204, author = {Matthew Lutze and Magnus Madsen and Philipp Schuster and Jonathan Immanuel Brachthäuser}, title = {With or Without You: Programming with Effect Exclusion}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {204}, numpages = {28}, doi = {10.1145/3607846}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Madsen, Magnus |
ICFP '23: "With or Without You: Programming ..."
With or Without You: Programming with Effect Exclusion
Matthew Lutze, Magnus Madsen, Philipp Schuster, and Jonathan Immanuel Brachthäuser (Aarhus University, Denmark; University of Tübingen, Germany) Type and effect systems have been successfully used to statically reason about effects in many different domains, including region-based memory management, exceptions, and algebraic effects and handlers. Such systems’ soundness is often stated in terms of the absence of effects. Yet, existing systems only admit indirect reasoning about the absence of effects. This is further complicated by effect polymorphism, which allows function signatures to abstract over arbitrary, unknown sets of effects. We present a new type and effect system with effect polymorphism as well as union, intersection, and complement effects. The effect system allows us to express effect exclusion as a new class of effect polymorphic functions: those that permit any effects except those in a specific set. This way, we equip programmers with the means to directly reason about the absence of effects. Our type and effect system builds on the Hindley-Milner type system, supports effect polymorphism, and preserves principal types modulo Boolean equivalence. In addition, a suitable extension of Algorithm W with Boolean unification on the algebra of sets enables complete type and effect inference. We formalize these notions in the λ∁ calculus. We prove the standard progress and preservation theorems as well as a non-standard effect safety theorem: no excluded effect is ever performed. We implement the type and effect system as an extension of the Flix programming language. We conduct a case study of open source projects identifying 59 program fragments that require effect exclusion for correctness. To demonstrate the usefulness of the proposed type and effect system, we recast these program fragments into our extension of Flix. @Article{ICFP23p204, author = {Matthew Lutze and Magnus Madsen and Philipp Schuster and Jonathan Immanuel Brachthäuser}, title = {With or Without You: Programming with Effect Exclusion}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {204}, numpages = {28}, doi = {10.1145/3607846}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Matsuda, Kazutaka |
ICFP '23: "Embedding by Unembedding ..."
Embedding by Unembedding
Kazutaka Matsuda, Samantha Frohlich, Meng Wang, and Nicolas Wu (Tohoku University, Japan; University of Bristol, UK; Imperial College London, UK) Embedding is a language development technique that implements the object language as a library in a host language. There are many advantages of the approach, including being lightweight and the ability to inherit features of the host language. A notable example is the technique of HOAS, which makes crucial use of higher-order functions to represent abstract syntax trees with binders. Despite its popularity, HOAS has its limitations. We observe that HOAS struggles with semantic domains that cannot be naturally expressed as functions, particularly when open expressions are involved. Prominent examples of this include incremental computation and reversible/bidirectional languages. In this paper, we pinpoint the challenge faced by HOAS as a mismatch between the semantic domain of host and object language functions, and propose a solution. The solution is based on the technique of unembedding, which converts from the finally-tagless representation to de Bruijn-indexed terms with strong correctness guarantees. We show that this approach is able to extend the applicability of HOAS while preserving its elegance. We provide a generic strategy for Embedding by Unembedding, and then demonstrate its effectiveness with two substantial case studies in the domains of incremental computation and bidirectional transformations. The resulting embedded implementations are comparable in features to the state-of-the-art language implementations in the respective areas. @Article{ICFP23p189, author = {Kazutaka Matsuda and Samantha Frohlich and Meng Wang and Nicolas Wu}, title = {Embedding by Unembedding}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {189}, numpages = {47}, doi = {10.1145/3607830}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Møgelberg, Rasmus Ejlers |
ICFP '23: "Asynchronous Modal FRP ..."
Asynchronous Modal FRP
Patrick Bahr and Rasmus Ejlers Møgelberg (IT University of Copenhagen, Denmark) Over the past decade, a number of languages for functional reactive programming (FRP) have been suggested, which use modal types to ensure properties like causality, productivity and lack of space leaks. So far, almost all of these languages have included a modal operator for delay on a global clock. For some applications, however, a global clock is unnatural and leads to leaky abstractions as well as inefficient implementations. While modal languages without a global clock have been proposed, no operational properties have yet been proved about them. This paper proposes Async RaTT, a new modal language for asynchronous FRP, equipped with an operational semantics mapping complete programs to machines that take asynchronous input signals and produce output signals. The main novelty of Async RaTT is a new modality for asynchronous delay, allowing each output channel to be associated at runtime with the set of input channels it depends on, thus causing the machine to only compute new output when necessary. We prove a series of operational properties including causality, productivity and lack of space leaks. We also show that, although the set of input channels associated with an output channel can change during execution, upper bounds on these can be determined statically by the type system. @Article{ICFP23p205, author = {Patrick Bahr and Rasmus Ejlers Møgelberg}, title = {Asynchronous Modal FRP}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {205}, numpages = {35}, doi = {10.1145/3607847}, year = {2023}, } Publisher's Version |
|
Morris, J. Garrett |
ICFP '23: "Generic Programming with Extensible ..."
Generic Programming with Extensible Data Types: Or, Making Ad Hoc Extensible Data Types Less Ad Hoc
Alex Hubers and J. Garrett Morris (University of Iowa, USA) We present a novel approach to generic programming over extensible data types. Row types capture the structure of records and variants, and can be used to express record and variant subtyping, record extension, and modular composition of case branches. We extend row typing to support generic programming over rows themselves, capturing patterns including lifting operations to records and variants from their component types, and the duality between case blocks over variants and records of labeled functions, without placing specific requirements on the fields or constructors present in the records and variants. We formalize our approach in System R𝜔, an extension of F𝜔 with row types, and give a denotational semantics for (stratified) R𝜔 in Agda. @Article{ICFP23p201, author = {Alex Hubers and J. Garrett Morris}, title = {Generic Programming with Extensible Data Types: Or, Making Ad Hoc Extensible Data Types Less Ad Hoc}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {201}, numpages = {29}, doi = {10.1145/3607843}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
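Two of the patterns the abstract names — lifting an operation pointwise to a record without fixing its fields, and the duality between a case block over a variant and a record of labeled functions — have a simple untyped rendering. This sketch is only an analogue; System R𝜔's contribution is making such operations typeable and generic over rows with static guarantees:

```python
# Untyped analogue: a record is a dict of labeled fields, a variant is a
# (tag, payload) pair.

def lift_to_record(op, record):
    # Lift an operation pointwise, with no specific fields required.
    return {label: op(value) for label, value in record.items()}

def case(variant, handlers):
    # A case block over a variant is application of a record of labeled
    # functions -- the duality noted in the abstract.
    tag, payload = variant
    return handlers[tag](payload)

point = {"x": 1, "y": 2}
scaled = lift_to_record(lambda v: v * 10, point)   # {"x": 10, "y": 20}
```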
|
Nicole, Olivier |
ICFP '23: "MacoCaml: Staging Composable ..."
MacoCaml: Staging Composable and Compilable Macros
Ningning Xie, Leo White, Olivier Nicole, and Jeremy Yallop (University of Toronto, Canada; Jane Street, UK; Tarides, France; University of Cambridge, UK) We introduce MacoCaml, a new design and implementation of compile-time code generation for the OCaml language. MacoCaml features a novel combination of macros with phase separation and quotation-based staging, where macros are considered as compile-time bindings, expressions cross evaluation phases using staging annotations, and compile-time evaluation happens inside top-level splices. We provide a theoretical foundation for MacoCaml by formalizing a typed source calculus maco that supports interleaving typing and compile-time code generation, references with explicit compile-time heaps, and modules. We study various crucial properties including soundness and phase distinction. We have implemented MacoCaml in the OCaml compiler, and ported two substantial existing libraries to validate our implementation. @Article{ICFP23p209, author = {Ningning Xie and Leo White and Olivier Nicole and Jeremy Yallop}, title = {MacoCaml: Staging Composable and Compilable Macros}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {209}, numpages = {45}, doi = {10.1145/3607851}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Pereira, Mário |
ICFP '23: "Verifying Reliable Network ..."
Verifying Reliable Network Components in a Distributed Separation Logic with Dependent Separation Protocols
Léon Gondelman, Jonas Kastberg Hinrichsen, Mário Pereira, Amin Timany, and Lars Birkedal (Aarhus University, Denmark; NOVA-LINCS, Portugal; NOVA School of Science and Technology, Portugal) We present a foundationally verified implementation of a reliable communication library for asynchronous client-server communication, and a stack of formally verified components on top thereof. Our library is implemented in an OCaml-like language on top of UDP and features characteristic traits of existing protocols, such as a simple handshaking protocol, bidirectional channels, and retransmission/acknowledgement mechanisms. We verify the library in the Aneris distributed separation logic using a novel proof pattern---dubbed the session escrow pattern---based on the existing escrow proof pattern and the so-called dependent separation protocols, which hitherto have only been used in a non-distributed concurrent setting. We demonstrate how our specification of the reliable communication library simplifies formal reasoning about applications, such as a remote procedure call library, which we in turn use to verify a lazily replicated key-value store with leader-followers and clients thereof. Our development is highly modular---each component is verified relative to specifications of the components it uses (not the implementation). All our results are formalized in the Coq proof assistant. @Article{ICFP23p217, author = {Léon Gondelman and Jonas Kastberg Hinrichsen and Mário Pereira and Amin Timany and Lars Birkedal}, title = {Verifying Reliable Network Components in a Distributed Separation Logic with Dependent Separation Protocols}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {217}, numpages = {31}, doi = {10.1145/3607859}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Perez, Ivan |
ICFP '23: "Trustworthy Runtime Verification ..."
Trustworthy Runtime Verification via Bisimulation (Experience Report)
Ryan G. Scott, Mike Dodds, Ivan Perez, Alwyn E. Goodloe, and Robert Dockins (Galois, USA; KBR @ NASA Ames Research Center, USA; NASA Ames Research Center, USA; Amazon, USA) When runtime verification is used to monitor safety-critical systems, it is essential that monitoring code behaves correctly. The Copilot runtime verification framework pursues this goal by automatically generating C monitor programs from a high-level DSL embedded in Haskell. In safety-critical domains, every piece of deployed code must be accompanied by an assurance argument that is convincing to human auditors. However, it is difficult for auditors to determine with confidence that a compiled monitor cannot crash and implements the behavior required by the Copilot semantics. In this paper we describe CopilotVerifier, which runs alongside the Copilot compiler, generating a proof of correctness for the compiled output. The proof establishes that a given Copilot monitor and its compiled form produce equivalent outputs on equivalent inputs, and that they either crash in identical circumstances or cannot crash. The proof takes the form of a bisimulation broken down into a set of verification conditions. We leverage two pieces of SMT-backed technology: the Crucible symbolic execution library for LLVM and the What4 solver interface library. Our results demonstrate that dramatically increased compiler assurance can be achieved at moderate cost by building on existing tools. This paves the way to our ultimate goal of generating formal assurance arguments that are convincing to human auditors. @Article{ICFP23p199, author = {Ryan G. Scott and Mike Dodds and Ivan Perez and Alwyn E. Goodloe and Robert Dockins}, title = {Trustworthy Runtime Verification via Bisimulation (Experience Report)}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {199}, numpages = {17}, doi = {10.1145/3607841}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Peyton Jones, Simon |
ICFP '23: "The Verse Calculus: A Core ..."
The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming
Lennart Augustsson, Joachim Breitner, Koen Claessen, Ranjit Jhala, Simon Peyton Jones, Olin Shivers, Guy L. Steele Jr., and Tim Sweeney (Epic Games, Sweden; Unaffiliated, Germany; Epic Games, USA; Epic Games, UK; Oracle Labs, USA) Functional logic languages have a rich literature, but it is tricky to give them a satisfying semantics. In this paper we describe the Verse calculus, VC, a new core calculus for deterministic functional logic programming. Our main contribution is to equip VC with a small-step rewrite semantics, so that we can reason about a VC program in the same way as one does with lambda calculus; that is, by applying successive rewrites to it. We also show that the rewrite system is confluent for well-behaved terms. @Article{ICFP23p203, author = {Lennart Augustsson and Joachim Breitner and Koen Claessen and Ranjit Jhala and Simon Peyton Jones and Olin Shivers and Guy L. Steele Jr. and Tim Sweeney}, title = {The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {203}, numpages = {31}, doi = {10.1145/3607845}, year = {2023}, } Publisher's Version |
|
Pierce, Benjamin C. |
ICFP '23: "Reflecting on Random Generation ..."
Reflecting on Random Generation
Harrison Goldstein, Samantha Frohlich, Meng Wang, and Benjamin C. Pierce (University of Pennsylvania, USA; University of Bristol, UK) Expert users of property-based testing often labor to craft random generators that encode detailed knowledge about what it means for a test input to be valid and interesting. Fortunately, the fruits of this labor can also be put to other uses. In the bidirectional programming literature, for example, generators have been repurposed as validity checkers, while Python's Hypothesis library uses the same structures for shrinking and mutating test inputs. To unify and generalize these uses and many others, we propose reflective generators, a new foundation for random data generators that can "reflect" on an input value to calculate the random choices that could have been made to produce it. Reflective generators combine ideas from two existing abstractions: free generators and partial monadic profunctors. They can be used to implement and enhance the aforementioned shrinking and mutation algorithms, generalizing them to work for any values that can be produced by the generator, not just ones for which a trace of the generator's execution is available. Beyond shrinking and mutation, reflective generators generalize a published algorithm for example-based generation, and they can also be used as checkers, partial value completers, and other kinds of test data producers. @Article{ICFP23p200, author = {Harrison Goldstein and Samantha Frohlich and Meng Wang and Benjamin C. Pierce}, title = {Reflecting on Random Generation}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {200}, numpages = {34}, doi = {10.1145/3607842}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable ICFP '23: "Etna: An Evaluation Platform ..." Etna: An Evaluation Platform for Property-Based Testing (Experience Report) Jessica Shi, Alperen Keles, Harrison Goldstein, Benjamin C. 
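The central idea — a generator that can also "reflect" on a value to recover the random choices that would produce it — admits a small illustrative mock-up. This is not the paper's construction (which builds on free generators and partial monadic profunctors); it is an invented Python sketch pairing a choice-consuming generator with a hand-written reflection function, and using reflection to shrink:

```python
import random

# Hypothetical sketch: a generator is a function from a coin-flip source to
# a value; its reflection maps a value back to the flips that produce it.

class ReflectiveGen:
    def __init__(self, from_choices, to_choices):
        self.from_choices = from_choices   # flip-source -> value
        self.to_choices = to_choices       # value -> list of flips (reflection)

    def generate(self, rng):
        bits = []
        def flip():
            b = rng.random() < 0.5
            bits.append(b)
            return b
        return self.from_choices(flip), bits

# A generator for unary naturals: each True flip adds one.
def nat_from(flip):
    n = 0
    while flip():
        n += 1
    return n

def nat_to_choices(n):
    # Reflection: n is produced by n Trues followed by a False.
    return [True] * n + [False]

nat = ReflectiveGen(nat_from, nat_to_choices)

def replay(gen, bits):
    it = iter(bits)
    return gen.from_choices(lambda: next(it, False))

def shrink_once(gen, value):
    # Shrink any reachable value -- no execution trace needed, because
    # reflection reconstructs the choices from the value itself.
    bits = gen.to_choices(value)
    return replay(gen, bits[1:] if len(bits) > 1 else bits)
```

The payoff mirrors the abstract's claim: shrinking and mutation work for any value the generator could have produced, not only for values whose generation trace was recorded.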
Pierce, and Leonidas Lampropoulos (University of Pennsylvania, USA; University of Maryland, College Park, USA) Property-based testing is a mainstay of functional programming, boasting a rich literature, an enthusiastic user community, and an abundance of tools — so many, indeed, that new users may have difficulty choosing. Moreover, any given framework may support a variety of strategies for generating test inputs; even experienced users may wonder which are better in a given situation. Sadly, the PBT literature, though long on creativity, is short on rigorous comparisons to help answer such questions. We present Etna, a platform for empirical evaluation and comparison of PBT techniques. Etna incorporates a number of popular PBT frameworks and testing workloads from the literature, and its extensible architecture makes adding new ones easy, while handling the technical drudgery of performance measurement. To illustrate its benefits, we use Etna to carry out several experiments with popular PBT approaches in both Coq and Haskell, allowing users to more clearly understand best practices and tradeoffs. @Article{ICFP23p218, author = {Jessica Shi and Alperen Keles and Harrison Goldstein and Benjamin C. Pierce and Leonidas Lampropoulos}, title = {Etna: An Evaluation Platform for Property-Based Testing (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {218}, numpages = {17}, doi = {10.1145/3607860}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Protzenko, Jonathan |
ICFP '23: "Modularity, Code Specialization, ..."
Modularity, Code Specialization, and Zero-Cost Abstractions for Program Verification
Son Ho, Aymeric Fromherz, and Jonathan Protzenko (Inria, France; Microsoft Research, USA) For all the successes in verifying low-level, efficient, security-critical code, little has been said or studied about the structure, architecture and engineering of such large-scale proof developments. We present the design, implementation and evaluation of a set of language-based techniques that allow the programmer to modularly write and verify code at a high level of abstraction, while retaining control over the compilation process and producing high-quality, zero-overhead, low-level code suitable for integration into mainstream software. We implement our techniques within the F* proof assistant, and specifically its shallowly-embedded Low* toolchain that compiles to C. Through our evaluation, we establish that our techniques were critical in scaling the popular HACL* library past 100,000 lines of verified source code, and brought about significant gains in proof engineer productivity. The exposition of our methodology converges on one final, novel case study: the streaming API, a finicky API that has historically caused many bugs in high-profile software. Using our approach, we manage to capture the streaming semantics in a generic way, and apply it “for free” to over a dozen use-cases. Six of those have made it into the reference implementation of the Python programming language, replacing the previous CVE-ridden code. @Article{ICFP23p202, author = {Son Ho and Aymeric Fromherz and Jonathan Protzenko}, title = {Modularity, Code Specialization, and Zero-Cost Abstractions for Program Verification}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {202}, numpages = {32}, doi = {10.1145/3607844}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Pyzik, Mateusz |
ICFP '23: "A General Fine-Grained Reduction ..."
A General Fine-Grained Reduction Theory for Effect Handlers
Filip Sieczkowski, Mateusz Pyzik, and Dariusz Biernacki (Heriot-Watt University, UK; University of Wrocław, Poland) Effect handlers are a modern and increasingly popular approach to structuring computational effects in functional programming languages. However, while their traditional operational semantics is well-suited to implementation tasks, it is less ideal as a reduction theory. We therefore introduce a fine-grained reduction theory for deep effect handlers, inspired by our existing reduction theory for shift0, along with a standard reduction strategy. We relate this strategy to the traditional, non-local operational semantics via a simulation argument, and show that the reduction theory preserves observational equivalence with respect to the classical semantics of handlers, thus allowing its use as a rewriting theory for handler-equipped programming languages -- this rewriting system mostly coincides with previously studied type-based optimisations. In the process, we establish theoretical properties of our reduction theory, including confluence and standardisation theorems, adapting and extending existing techniques. Finally, we demonstrate the utility of our semantics by providing the first normalisation-by-evaluation algorithm for effect handlers, and prove its soundness and completeness. Additionally, we establish non-expressibility of the lift operator, found in some effect-handler calculi, by the other constructs. @Article{ICFP23p206, author = {Filip Sieczkowski and Mateusz Pyzik and Dariusz Biernacki}, title = {A General Fine-Grained Reduction Theory for Effect Handlers}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {206}, numpages = {30}, doi = {10.1145/3607848}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Functional |
|
Radanne, Gabriel |
ICFP '23: "Bit-Stealing Made Legal: Compilation ..."
Bit-Stealing Made Legal: Compilation for Custom Memory Representations of Algebraic Data Types
Thaïs Baudon, Gabriel Radanne, and Laure Gonnord (University of Lyon, France; ENS Lyon, France; UCBL, France; CNRS, France; Inria, France; LIP, France; University Grenoble Alpes, France; Grenoble INP, France; LCIS, France) Initially present only in functional languages such as OCaml and Haskell, Algebraic Data Types (ADTs) have now become pervasive in mainstream languages, providing nice data abstractions and an elegant way to express functions through pattern matching. Unfortunately, ADTs remain seldom used in low-level programming. One reason is that their increased convenience comes at the cost of abstracting away the exact memory layout of values. Even Rust, which tries to optimize data layout, severely limits control over memory representation. In this article, we present a new approach to specify the data layout of rich data types based on a dual view: a source type, providing a high-level description available in the rest of the code, along with a memory type, providing full control over the memory layout. This dual view allows for better reasoning about memory layout, both for correctness, with dedicated validity criteria linking the two views, and for optimizations that manipulate the memory view. We then provide algorithms to compile constructors and destructors, including pattern matching, to their low-level memory representation. We prove our compilation algorithms correct, implement them in a tool called ribbit that compiles to LLVM IR, and show some early experimental results. @Article{ICFP23p216, author = {Thaïs Baudon and Gabriel Radanne and Laure Gonnord}, title = {Bit-Stealing Made Legal: Compilation for Custom Memory Representations of Algebraic Data Types}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {216}, numpages = {34}, doi = {10.1145/3607858}, year = {2023}, } Publisher's Version Published Artifact Info Artifacts Available Artifacts Reusable |
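The layout control the abstract describes — "bit-stealing" spare bits of a word to encode an ADT's constructor tag — is easy to demonstrate by hand. This sketch is an invented illustration of the general technique, not output of the paper's ribbit compiler, which derives such packing and matching code from a declarative memory type:

```python
# Hand-written bit-stealing for a two-constructor ADT
#   type shape = Circle of int | Square of int
# packed into a single word: payload in the high bits, tag in bit 0.

TAG_CIRCLE, TAG_SQUARE = 0, 1

def pack(ctor, payload):
    return (payload << 1) | ctor

def match_shape(word, on_circle, on_square):
    # Pattern matching compiled against the memory view: test the stolen
    # bit, then recover the payload by shifting it back out.
    payload = word >> 1
    return on_circle(payload) if (word & 1) == TAG_CIRCLE else on_square(payload)

w = pack(TAG_SQUARE, 5)   # 0b1011
area = match_shape(w, lambda r: 3 * r * r, lambda s: s * s)
```

The paper's "dual view" corresponds to the gap visible even here: the source type speaks of `Circle`/`Square`, while the memory type speaks of shifts and masks, and the compiler must prove the two agree.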
|
Rogozin, Daniel |
ICFP '23: "LURK: Lambda, the Ultimate ..."
LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)
Nada Amin, John Burnham, François Garillot, Rosario Gennaro, Chhi’mèd Künzang, Daniel Rogozin, and Cameron Wong (Harvard University, USA; Lurk Lab, USA; Lurk Lab, Canada; City College of New York, USA; University College London, UK) We introduce Lurk, a new LISP-based programming language for zk-SNARKs. Traditional approaches to programming over zero-knowledge proofs require compiling the desired computation into a flat circuit, imposing serious constraints on the size and complexity of computations that can be achieved in practice. Lurk programs are instead provided as data to the universal Lurk interpreter circuit, allowing the resulting language to be Turing-complete without compromising the size of the resulting proof artifacts. Our work describes the design and theory behind Lurk, along with detailing how its implementation of content addressing can be used to sidestep many of the usual concerns of programming zero-knowledge proofs. @Article{ICFP23p197, author = {Nada Amin and John Burnham and François Garillot and Rosario Gennaro and Chhi’mèd Künzang and Daniel Rogozin and Cameron Wong}, title = {LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {197}, numpages = {16}, doi = {10.1145/3607839}, year = {2023}, } Publisher's Version Info |
|
Sato, Ryosuke |
ICFP '23: "Higher-Order Property-Directed ..."
Higher-Order Property-Directed Reachability
Hiroyuki Katsura, Naoki Kobayashi, and Ryosuke Sato (University of Tokyo, Japan) The property-directed reachability (PDR) has been used as a successful method for automated verification of first-order transition systems. We propose a higher-order extension of PDR, called HoPDR, where higher-order recursive functions may be used to describe transition systems. We formalize HoPDR for the validity checking problem for conjunctive nu-HFL(Z), a higher-order fixpoint logic with integers and greatest fixpoint operators. The validity checking problem can also be viewed as a higher-order extension of the satisfiability problem for Constrained Horn Clauses (CHC), and safety property verification of higher-order programs can naturally be reduced to the validity checking problem. We have implemented a prototype verification tool based on HoPDR and confirmed its effectiveness. We also compare our HoPDR procedure with the PDR procedure for first-order systems and previous methods for fully automated higher-order program verification. @Article{ICFP23p190, author = {Hiroyuki Katsura and Naoki Kobayashi and Ryosuke Sato}, title = {Higher-Order Property-Directed Reachability}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {190}, numpages = {30}, doi = {10.1145/3607831}, year = {2023}, } Publisher's Version |
|
Schuster, Philipp |
ICFP '23: "With or Without You: Programming ..."
With or Without You: Programming with Effect Exclusion
Matthew Lutze, Magnus Madsen, Philipp Schuster, and Jonathan Immanuel Brachthäuser (Aarhus University, Denmark; University of Tübingen, Germany) Type and effect systems have been successfully used to statically reason about effects in many different domains, including region-based memory management, exceptions, and algebraic effects and handlers. Such systems’ soundness is often stated in terms of the absence of effects. Yet, existing systems only admit indirect reasoning about the absence of effects. This is further complicated by effect polymorphism which allows function signatures to abstract over arbitrary, unknown sets of effects. We present a new type and effect system with effect polymorphism as well as union, intersection, and complement effects. The effect system allows us to express effect exclusion as a new class of effect polymorphic functions: those that permit any effects except those in a specific set. This way, we equip programmers with the means to directly reason about the absence of effects. Our type and effect system builds on the Hindley-Milner type system, supports effect polymorphism, and preserves principal types modulo Boolean equivalence. In addition, a suitable extension of Algorithm W with Boolean unification on the algebra of sets enables complete type and effect inference. We formalize these notions in the λ∁ calculus. We prove the standard progress and preservation theorems as well as a non-standard effect safety theorem: no excluded effect is ever performed. We implement the type and effect system as an extension of the Flix programming language. We conduct a case study of open source projects identifying 59 program fragments that require effect exclusion for correctness. To demonstrate the usefulness of the proposed type and effect system, we recast these program fragments into our extension of Flix. 
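Effect exclusion — accepting any function *except* one that performs certain effects — can be modeled dynamically with set algebra. This is a deliberately crude sketch with invented names: Flix's actual system works statically, on Boolean effect formulas with polymorphism and complete inference, whereas the check below only inspects concrete effect sets at runtime:

```python
# Minimal dynamic model of effect exclusion: a function value carries its
# effect set, and a combinator rejects arguments whose effects intersect
# a forbidden set.

class Fn:
    def __init__(self, fn, effects):
        self.fn, self.effects = fn, frozenset(effects)

def requires_excluding(excluded, f):
    # Accept any argument whose effects are disjoint from `excluded`
    # (the complement-effect idea, checked here by set intersection).
    clash = f.effects & frozenset(excluded)
    if clash:
        raise TypeError(f"effects not allowed here: {sorted(clash)}")
    return f

pure_inc = Fn(lambda x: x + 1, effects=set())
printer = Fn(print, effects={"IO"})

ok = requires_excluding({"IO"}, pure_inc)   # accepted: performs no IO
```

The paper's effect safety theorem is the static counterpart of this runtime check: a well-typed program never reaches the `TypeError` branch, because exclusion is enforced by the type and effect system before execution.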
@Article{ICFP23p204, author = {Matthew Lutze and Magnus Madsen and Philipp Schuster and Jonathan Immanuel Brachthäuser}, title = {With or Without You: Programming with Effect Exclusion}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {204}, numpages = {28}, doi = {10.1145/3607846}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Scott, Ryan G. |
ICFP '23: "Trustworthy Runtime Verification ..."
Trustworthy Runtime Verification via Bisimulation (Experience Report)
Ryan G. Scott, Mike Dodds, Ivan Perez, Alwyn E. Goodloe, and Robert Dockins (Galois, USA; KBR @ NASA Ames Research Center, USA; NASA Ames Research Center, USA; Amazon, USA) When runtime verification is used to monitor safety-critical systems, it is essential that monitoring code behaves correctly. The Copilot runtime verification framework pursues this goal by automatically generating C monitor programs from a high-level DSL embedded in Haskell. In safety-critical domains, every piece of deployed code must be accompanied by an assurance argument that is convincing to human auditors. However, it is difficult for auditors to determine with confidence that a compiled monitor cannot crash and implements the behavior required by the Copilot semantics. In this paper we describe CopilotVerifier, which runs alongside the Copilot compiler, generating a proof of correctness for the compiled output. The proof establishes that a given Copilot monitor and its compiled form produce equivalent outputs on equivalent inputs, and that they either crash in identical circumstances or cannot crash. The proof takes the form of a bisimulation broken down into a set of verification conditions. We leverage two pieces of SMT-backed technology: the Crucible symbolic execution library for LLVM and the What4 solver interface library. Our results demonstrate that dramatically increased compiler assurance can be achieved at moderate cost by building on existing tools. This paves the way to our ultimate goal of generating formal assurance arguments that are convincing to human auditors. @Article{ICFP23p199, author = {Ryan G. Scott and Mike Dodds and Ivan Perez and Alwyn E. Goodloe and Robert Dockins}, title = {Trustworthy Runtime Verification via Bisimulation (Experience Report)}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {199}, numpages = {17}, doi = {10.1145/3607841}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Shen, Gan |
ICFP '23: "HasChor: Functional Choreographic ..."
HasChor: Functional Choreographic Programming for All (Functional Pearl)
Gan Shen, Shun Kashiwa, and Lindsey Kuper (University of California at Santa Cruz, USA) Choreographic programming is an emerging paradigm for programming distributed systems. In choreographic programming, the programmer describes the behavior of the entire system as a single, unified program -- a choreography -- which is then compiled to individual programs that run on each node, via a compilation step called endpoint projection. We present a new model for functional choreographic programming where choreographies are expressed as computations in a monad. Our model supports cutting-edge choreographic programming features that enable modularity and code reuse: in particular, it supports higher-order choreographies, in which a choreography may be passed as an argument to another choreography, and location-polymorphic choreographies, in which a choreography can abstract over nodes. Our model is implemented in a Haskell library, HasChor, which lets programmers write choreographic programs while using the rich Haskell ecosystem at no cost, bringing choreographic programming within reach of everyday Haskellers. Moreover, thanks to Haskell's abstractions, the implementation of the HasChor library itself is concise and understandable, boiling down endpoint projection to its short and simple essence. @Article{ICFP23p207, author = {Gan Shen and Shun Kashiwa and Lindsey Kuper}, title = {HasChor: Functional Choreographic Programming for All (Functional Pearl)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {207}, numpages = {25}, doi = {10.1145/3607849}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
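Endpoint projection — the compilation step the abstract describes — has a very small essence, which can be shown on a toy first-order choreography. This is an independent illustration, not HasChor's monadic Haskell implementation, and the example names are invented:

```python
# Toy endpoint projection: a choreography is a list of communications
# (sender, receiver, label); projecting to a node keeps only its own
# send/receive actions, in order.

def project(choreography, node):
    program = []
    for sender, receiver, label in choreography:
        if node == sender:
            program.append(("send", receiver, label))
        elif node == receiver:
            program.append(("recv", sender, label))
        # Communications not involving `node` project away entirely.
    return program

bookshop = [
    ("buyer", "seller", "title"),
    ("seller", "buyer", "price"),
]

buyer_prog = project(bookshop, "buyer")
```

Because every node's program is projected from the same global description, mismatched sends and receives cannot arise by construction — the deadlock-freedom-by-design selling point of choreographic programming.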
|
Shi, Jessica |
ICFP '23: "Etna: An Evaluation Platform ..."
Etna: An Evaluation Platform for Property-Based Testing (Experience Report)
Jessica Shi, Alperen Keles, Harrison Goldstein, Benjamin C. Pierce, and Leonidas Lampropoulos (University of Pennsylvania, USA; University of Maryland, College Park, USA) Property-based testing is a mainstay of functional programming, boasting a rich literature, an enthusiastic user community, and an abundance of tools — so many, indeed, that new users may have difficulty choosing. Moreover, any given framework may support a variety of strategies for generating test inputs; even experienced users may wonder which are better in a given situation. Sadly, the PBT literature, though long on creativity, is short on rigorous comparisons to help answer such questions. We present Etna, a platform for empirical evaluation and comparison of PBT techniques. Etna incorporates a number of popular PBT frameworks and testing workloads from the literature, and its extensible architecture makes adding new ones easy, while handling the technical drudgery of performance measurement. To illustrate its benefits, we use Etna to carry out several experiments with popular PBT approaches in both Coq and Haskell, allowing users to more clearly understand best practices and tradeoffs. @Article{ICFP23p218, author = {Jessica Shi and Alperen Keles and Harrison Goldstein and Benjamin C. Pierce and Leonidas Lampropoulos}, title = {Etna: An Evaluation Platform for Property-Based Testing (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {218}, numpages = {17}, doi = {10.1145/3607860}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Shivers, Olin |
ICFP '23: "The Verse Calculus: A Core ..."
The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming
Lennart Augustsson, Joachim Breitner, Koen Claessen, Ranjit Jhala, Simon Peyton Jones, Olin Shivers, Guy L. Steele Jr., and Tim Sweeney (Epic Games, Sweden; Unaffiliated, Germany; Epic Games, USA; Epic Games, UK; Oracle Labs, USA) Functional logic languages have a rich literature, but it is tricky to give them a satisfying semantics. In this paper we describe the Verse calculus, VC, a new core calculus for deterministic functional logic programming. Our main contribution is to equip VC with a small-step rewrite semantics, so that we can reason about a VC program in the same way as one does with lambda calculus; that is, by applying successive rewrites to it. We also show that the rewrite system is confluent for well-behaved terms. @Article{ICFP23p203, author = {Lennart Augustsson and Joachim Breitner and Koen Claessen and Ranjit Jhala and Simon Peyton Jones and Olin Shivers and Guy L. Steele Jr. and Tim Sweeney}, title = {The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {203}, numpages = {31}, doi = {10.1145/3607845}, year = {2023}, } Publisher's Version |
|
Sieczkowski, Filip |
ICFP '23: "A General Fine-Grained Reduction ..."
A General Fine-Grained Reduction Theory for Effect Handlers
Filip Sieczkowski, Mateusz Pyzik, and Dariusz Biernacki (Heriot-Watt University, UK; University of Wrocław, Poland) Effect handlers are a modern and increasingly popular approach to structuring computational effects in functional programming languages. However, while their traditional operational semantics is well-suited to implementation tasks, it is less ideal as a reduction theory. We therefore introduce a fine-grained reduction theory for deep effect handlers, inspired by our existing reduction theory for shift0, along with a standard reduction strategy. We relate this strategy to the traditional, non-local operational semantics via a simulation argument, and show that the reduction theory preserves observational equivalence with respect to the classical semantics of handlers, thus allowing its use as a rewriting theory for handler-equipped programming languages -- this rewriting system mostly coincides with previously studied type-based optimisations. In the process, we establish theoretical properties of our reduction theory, including confluence and standardisation theorems, adapting and extending existing techniques. Finally, we demonstrate the utility of our semantics by providing the first normalisation-by-evaluation algorithm for effect handlers, and prove its soundness and completeness. Additionally, we establish non-expressibility of the lift operator, found in some effect-handler calculi, by the other constructs. @Article{ICFP23p206, author = {Filip Sieczkowski and Mateusz Pyzik and Dariusz Biernacki}, title = {A General Fine-Grained Reduction Theory for Effect Handlers}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {206}, numpages = {30}, doi = {10.1145/3607848}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Functional |
|
Singh, Pratap |
ICFP '23: "Flexible Instruction-Set Semantics ..."
Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)
Thomas Bourgeat, Ian Clester, Andres Erbsen, Samuel Gruetter, Pratap Singh, Andy Wright, and Adam Chlipala (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Carnegie Mellon University, USA) Instruction sets, from families like x86 and ARM, are at the center of many ambitious formal-methods projects. Many verification, synthesis, programming, and debugging tools rely on formal semantics of instruction sets, but different tools can use semantics in rather different ways. The best-known work applying single semantics across diverse tools relies on domain-specific languages like Sail, where the language and its translation tools are specialized to the realm of instruction sets. In the context of the open RISC-V instruction-set family, we decided to explore a different approach, with semantics written in a carefully chosen subset of Haskell. This style does not depend on any new language translators, relying instead on parameterization of semantics over type-class instances. We have used a single core semantics to support testing, interactive proof, and model checking of both software and hardware, demonstrating that monads and the ability to abstract over them using type classes can support pleasant prototyping of ISA semantics. @Article{ICFP23p192, author = {Thomas Bourgeat and Ian Clester and Andres Erbsen and Samuel Gruetter and Pratap Singh and Andy Wright and Adam Chlipala}, title = {Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {192}, numpages = {17}, doi = {10.1145/3607833}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
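The parameterization this abstract describes can be sketched in a few lines of Haskell. The names below (`Machine`, `Sim`, `execAdd`) are illustrative inventions, not the paper's actual RISC-V interface: one semantics is written once against an abstract monad, then instantiated as a concrete simulator.

```haskell
-- Abstract interface: any monad that can read and write registers.
class Monad m => Machine m where
  getReg :: Int -> m Int
  setReg :: Int -> Int -> m ()

-- One shared semantics for an ADD instruction; the same definition
-- can drive simulators, provers, or model checkers via other instances.
execAdd :: Machine m => Int -> Int -> Int -> m ()
execAdd rd rs1 rs2 = do
  a <- getReg rs1
  b <- getReg rs2
  setReg rd (a + b)

-- A concrete interpretation: a pure state-passing simulator whose
-- state is the register file.
newtype Sim a = Sim { runSim :: [Int] -> (a, [Int]) }

instance Functor Sim where
  fmap f (Sim g) = Sim (\rs -> let (a, rs') = g rs in (f a, rs'))

instance Applicative Sim where
  pure a = Sim (\rs -> (a, rs))
  Sim f <*> Sim g =
    Sim (\rs -> let (h, rs') = f rs; (a, rs'') = g rs' in (h a, rs''))

instance Monad Sim where
  Sim g >>= k = Sim (\rs -> let (a, rs') = g rs in runSim (k a) rs')

instance Machine Sim where
  getReg i   = Sim (\rs -> (rs !! i, rs))
  setReg i v = Sim (\rs -> ((), take i rs ++ v : drop (i + 1) rs))
```

For example, `snd (runSim (execAdd 0 1 2) [0, 3, 4])` yields `[7, 3, 4]`: register 0 receives the sum of registers 1 and 2, while the semantics itself never commits to any particular machine representation.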
|
Sowul, Franciszek |
ICFP '23: "Special Delivery: Programming ..."
Special Delivery: Programming with Mailbox Types
Simon Fowler, Duncan Paul Attard, Franciszek Sowul, Simon J. Gay, and Phil Trinder (University of Glasgow, UK) The asynchronous and unidirectional communication model supported by mailboxes is a key reason for the success of actor languages like Erlang and Elixir for implementing reliable and scalable distributed systems. While many actors may send messages to some actor, only the actor may (selectively) receive from its mailbox. Although actors eliminate many of the issues stemming from shared memory concurrency, they remain vulnerable to communication errors such as protocol violations and deadlocks. Mailbox types are a novel behavioural type system for mailboxes first introduced for a process calculus by de’Liguoro and Padovani in 2018, which capture the contents of a mailbox as a commutative regular expression. Due to aliasing and nested evaluation contexts, moving from a process calculus to a programming language is challenging. This paper presents Pat, the first programming language design incorporating mailbox types, and describes an algorithmic type system. We make essential use of quasi-linear typing to tame some of the complexity introduced by aliasing. Our algorithmic type system is necessarily co-contextual, achieved through a novel use of backwards bidirectional typing, and we prove it sound and complete with respect to our declarative type system. We implement a prototype type checker, and use it to demonstrate the expressiveness of Pat on a factory automation case study and a series of examples from the Savina actor benchmark suite. @Article{ICFP23p191, author = {Simon Fowler and Duncan Paul Attard and Franciszek Sowul and Simon J. Gay and Phil Trinder}, title = {Special Delivery: Programming with Mailbox Types}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {191}, numpages = {30}, doi = {10.1145/3607832}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Steele Jr., Guy L. |
ICFP '23: "The Verse Calculus: A Core ..."
The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming
Lennart Augustsson, Joachim Breitner, Koen Claessen, Ranjit Jhala, Simon Peyton Jones, Olin Shivers, Guy L. Steele Jr., and Tim Sweeney (Epic Games, Sweden; Unaffiliated, Germany; Epic Games, USA; Epic Games, UK; Oracle Labs, USA) Functional logic languages have a rich literature, but it is tricky to give them a satisfying semantics. In this paper we describe the Verse calculus, VC, a new core calculus for deterministic functional logic programming. Our main contribution is to equip VC with a small-step rewrite semantics, so that we can reason about a VC program in the same way as one does with lambda calculus; that is, by applying successive rewrites to it. We also show that the rewrite system is confluent for well-behaved terms. @Article{ICFP23p203, author = {Lennart Augustsson and Joachim Breitner and Koen Claessen and Ranjit Jhala and Simon Peyton Jones and Olin Shivers and Guy L. Steele Jr. and Tim Sweeney}, title = {The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {203}, numpages = {31}, doi = {10.1145/3607845}, year = {2023}, } Publisher's Version |
|
Sweeney, Tim |
ICFP '23: "The Verse Calculus: A Core ..."
The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming
Lennart Augustsson, Joachim Breitner, Koen Claessen, Ranjit Jhala, Simon Peyton Jones, Olin Shivers, Guy L. Steele Jr., and Tim Sweeney (Epic Games, Sweden; Unaffiliated, Germany; Epic Games, USA; Epic Games, UK; Oracle Labs, USA) Functional logic languages have a rich literature, but it is tricky to give them a satisfying semantics. In this paper we describe the Verse calculus, VC, a new core calculus for deterministic functional logic programming. Our main contribution is to equip VC with a small-step rewrite semantics, so that we can reason about a VC program in the same way as one does with lambda calculus; that is, by applying successive rewrites to it. We also show that the rewrite system is confluent for well-behaved terms. @Article{ICFP23p203, author = {Lennart Augustsson and Joachim Breitner and Koen Claessen and Ranjit Jhala and Simon Peyton Jones and Olin Shivers and Guy L. Steele Jr. and Tim Sweeney}, title = {The Verse Calculus: A Core Calculus for Deterministic Functional Logic Programming}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {203}, numpages = {31}, doi = {10.1145/3607845}, year = {2023}, } Publisher's Version |
|
Swierstra, Wouter |
ICFP '23: "FP²: Fully in-Place Functional ..."
FP²: Fully in-Place Functional Programming
Anton Lorenzen, Daan Leijen, and Wouter Swierstra (University of Edinburgh, UK; Microsoft Research, USA; Utrecht University, Netherlands) As functional programmers we always face a dilemma: should we write purely functional code, or sacrifice purity for efficiency and resort to in-place updates? This paper identifies precisely when we can have the best of both worlds: a wide class of purely functional programs can be executed safely using in-place updates without requiring allocation, provided their arguments are not shared elsewhere. We describe a linear _fully in-place_ (FIP) calculus where we prove that we can always execute such functions in a way that requires no (de)allocation and uses constant stack space. Of course, such a calculus is only relevant if we can express interesting algorithms; we provide numerous examples of in-place functions on data structures such as splay trees or finger trees, together with in-place versions of merge sort and quick sort. We also show how we can generically derive a map function over _any_ polynomial data type that is fully in-place. Finally, we have implemented the rules of the FIP calculus in the Koka language. Using Perceus reference-counting garbage collection, this implementation dynamically executes FIP functions in-place whenever possible. @Article{ICFP23p198, author = {Anton Lorenzen and Daan Leijen and Wouter Swierstra}, title = {FP²: Fully in-Place Functional Programming}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {198}, numpages = {30}, doi = {10.1145/3607840}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Thiemann, Peter |
ICFP '23: "Intrinsically Typed Sessions ..."
Intrinsically Typed Sessions with Callbacks (Functional Pearl)
Peter Thiemann (University of Freiburg, Germany) All formalizations of session types rely on linear types for soundness as session-typed communication channels must change their type at every operation. Embedded language implementations of session types follow suit. They either rely on clever typing constructions to guarantee linearity statically, or on run-time checks that approximate linearity. We present a new language-embedded implementation of session types, which is inspired by the inversion-of-control design principle. With our approach, all application programs are intrinsically session-typed and unable to break linearity by construction. Our design relies on a tiny encapsulated library, for which linearity remains a proof obligation that can be discharged once and for all when the library is built. We demonstrate that our proposed design extends to a wide range of features of session type systems: branching, recursion, multichannel and higher-order sessions, as well as context-free sessions. The multichannel extension provides an embedded implementation of session types which guarantees deadlock freedom by construction. The development reported in this paper is fully backed by type-checked Agda code. @Article{ICFP23p212, author = {Peter Thiemann}, title = {Intrinsically Typed Sessions with Callbacks (Functional Pearl)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {212}, numpages = {29}, doi = {10.1145/3607854}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Timany, Amin |
ICFP '23: "Verifying Reliable Network ..."
Verifying Reliable Network Components in a Distributed Separation Logic with Dependent Separation Protocols
Léon Gondelman, Jonas Kastberg Hinrichsen, Mário Pereira, Amin Timany, and Lars Birkedal (Aarhus University, Denmark; NOVA-LINCS, Portugal; NOVA School of Sciences and Technology, Portugal) We present a foundationally verified implementation of a reliable communication library for asynchronous client-server communication, and a stack of formally verified components on top thereof. Our library is implemented in an OCaml-like language on top of UDP and features characteristic traits of existing protocols, such as a simple handshaking protocol, bidirectional channels, and retransmission/acknowledgement mechanisms. We verify the library in the Aneris distributed separation logic using a novel proof pattern---dubbed the session escrow pattern---based on the existing escrow proof pattern and the so-called dependent separation protocols, which hitherto have only been used in a non-distributed concurrent setting. We demonstrate how our specification of the reliable communication library simplifies formal reasoning about applications, such as a remote procedure call library, which we in turn use to verify a lazily replicated key-value store with leader-followers and clients thereof. Our development is highly modular---each component is verified relative to specifications of the components it uses (not the implementation). All our results are formalized in the Coq proof assistant. @Article{ICFP23p217, author = {Léon Gondelman and Jonas Kastberg Hinrichsen and Mário Pereira and Amin Timany and Lars Birkedal}, title = {Verifying Reliable Network Components in a Distributed Separation Logic with Dependent Separation Protocols}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {217}, numpages = {31}, doi = {10.1145/3607859}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Trinder, Phil |
ICFP '23: "Special Delivery: Programming ..."
Special Delivery: Programming with Mailbox Types
Simon Fowler, Duncan Paul Attard, Franciszek Sowul, Simon J. Gay, and Phil Trinder (University of Glasgow, UK) The asynchronous and unidirectional communication model supported by mailboxes is a key reason for the success of actor languages like Erlang and Elixir for implementing reliable and scalable distributed systems. While many actors may send messages to some actor, only the actor may (selectively) receive from its mailbox. Although actors eliminate many of the issues stemming from shared memory concurrency, they remain vulnerable to communication errors such as protocol violations and deadlocks. Mailbox types are a novel behavioural type system for mailboxes first introduced for a process calculus by de’Liguoro and Padovani in 2018, which capture the contents of a mailbox as a commutative regular expression. Due to aliasing and nested evaluation contexts, moving from a process calculus to a programming language is challenging. This paper presents Pat, the first programming language design incorporating mailbox types, and describes an algorithmic type system. We make essential use of quasi-linear typing to tame some of the complexity introduced by aliasing. Our algorithmic type system is necessarily co-contextual, achieved through a novel use of backwards bidirectional typing, and we prove it sound and complete with respect to our declarative type system. We implement a prototype type checker, and use it to demonstrate the expressiveness of Pat on a factory automation case study and a series of examples from the Savina actor benchmark suite. @Article{ICFP23p191, author = {Simon Fowler and Duncan Paul Attard and Franciszek Sowul and Simon J. Gay and Phil Trinder}, title = {Special Delivery: Programming with Mailbox Types}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {191}, numpages = {30}, doi = {10.1145/3607832}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Tshukudu, Ethel |
ICFP '23: "What Happens When Students ..."
What Happens When Students Switch (Functional) Languages (Experience Report)
Kuang-Chen Lu, Shriram Krishnamurthi, Kathi Fisler, and Ethel Tshukudu (Brown University, USA; University of Botswana, Botswana) When novice programming students already know one programming language and have to learn another, what issues do they run into? We specifically focus on one or both languages being functional, varying along two axes: syntax and semantics. We report on problems, especially persistent ones. This work can be of immediate value to educators and also sets up avenues for future research. @Article{ICFP23p215, author = {Kuang-Chen Lu and Shriram Krishnamurthi and Kathi Fisler and Ethel Tshukudu}, title = {What Happens When Students Switch (Functional) Languages (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {215}, numpages = {17}, doi = {10.1145/3607857}, year = {2023}, } Publisher's Version Archive submitted (2.7 MB) |
|
Varshosaz, Mahsa |
ICFP '23: "Formal Specification and Testing ..."
Formal Specification and Testing for Reinforcement Learning
Mahsa Varshosaz, Mohsen Ghaffari, Einar Broch Johnsen, and Andrzej Wąsowski (IT University of Copenhagen, Denmark; University of Oslo, Norway) The development process for reinforcement learning applications is still exploratory rather than systematic. This exploratory nature reduces reuse of specifications between applications and increases the chances of introducing programming errors. This paper takes a step towards systematizing the development of reinforcement learning applications. We introduce a formal specification of reinforcement learning problems and algorithms, with a particular focus on temporal difference methods and their definitions in backup diagrams. We further develop a test harness for a large class of reinforcement learning applications based on temporal difference learning, including SARSA and Q-learning. The entire development is rooted in functional programming methods; starting with pure specifications and denotational semantics, ending with property-based testing and using compositional interpreters for a domain-specific term language as a test oracle for concrete implementations. We demonstrate the usefulness of this testing method on a number of examples, and evaluate with mutation testing. We show that our test suite is effective in killing mutants (90% mutants killed for 75% of subject agents). More importantly, almost half of all mutants are killed by generic write-once-use-everywhere tests that apply to any reinforcement learning problem modeled using our library, without any additional effort from the programmer. @Article{ICFP23p193, author = {Mahsa Varshosaz and Mohsen Ghaffari and Einar Broch Johnsen and Andrzej Wąsowski}, title = {Formal Specification and Testing for Reinforcement Learning}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {193}, numpages = {34}, doi = {10.1145/3607835}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
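The oracle pattern the abstract relies on can be illustrated with a deliberately tiny Haskell sketch. Everything here (`specUpdate`, `implUpdate`, the toy state/action types) is an invented stand-in, not the paper's library: a pure specification of one temporal-difference (Q-learning) backup serves as the oracle against which an algebraically rewritten variant is checked over an enumerated family of transitions.

```haskell
import qualified Data.Map as Map

type S = Int
type A = Int
type Q = Map.Map (S, A) Double

-- Specification (oracle):
-- Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
specUpdate :: Double -> Double -> Q -> (S, A, Double, S) -> Q
specUpdate alpha gamma q (s, a, r, s') =
  let old    = Map.findWithDefault 0 (s, a) q
      best   = maximum (0 : [v | ((t, _), v) <- Map.toList q, t == s'])
      target = r + gamma * best
  in Map.insert (s, a) (old + alpha * (target - old)) q

-- "Implementation under test": the algebraically equal mixing form
-- (1 - alpha) * old + alpha * target.
implUpdate :: Double -> Double -> Q -> (S, A, Double, S) -> Q
implUpdate alpha gamma q (s, a, r, s') =
  let old    = Map.findWithDefault 0 (s, a) q
      best   = maximum (0 : [v | ((t, _), v) <- Map.toList q, t == s'])
      target = r + gamma * best
  in Map.insert (s, a) ((1 - alpha) * old + alpha * target) q

-- Property: specification and implementation agree (up to rounding)
-- on every transition in a small enumerated grid.
agrees :: Bool
agrees = and
  [ abs (get (specUpdate 0.5 0.9 q0 t) - get (implUpdate 0.5 0.9 q0 t)) < 1e-9
  | s <- [0, 1], a <- [0, 1], r <- [-1, 0, 1], s' <- [0, 1]
  , let t = (s, a, r, s')
  , let get m = Map.findWithDefault 0 (s, a) m
  ]
  where q0 = Map.fromList [((0, 0), 1.0), ((1, 1), 2.0)]
```

The paper's harness of course checks whole agents with property-based testing rather than a single update rule on a fixed grid, but the shape is the same: a compositional, executable specification doubles as the test oracle for any optimized implementation.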
|
Wang, Meng |
ICFP '23: "Embedding by Unembedding ..."
Embedding by Unembedding
Kazutaka Matsuda, Samantha Frohlich, Meng Wang, and Nicolas Wu (Tohoku University, Japan; University of Bristol, UK; Imperial College London, UK) Embedding is a language development technique that implements the object language as a library in a host language. There are many advantages of the approach, including being lightweight and the ability to inherit features of the host language. A notable example is the technique of HOAS, which makes crucial use of higher-order functions to represent abstract syntax trees with binders. Despite its popularity, HOAS has its limitations. We observe that HOAS struggles with semantic domains that cannot be naturally expressed as functions, particularly when open expressions are involved. Prominent examples of this include incremental computation and reversible/bidirectional languages. In this paper, we pin-point the challenge faced by HOAS as a mismatch between the semantic domain of host and object language functions, and propose a solution. The solution is based on the technique of unembedding, which converts from the finally-tagless representation to de Bruijn-indexed terms with strong correctness guarantees. We show that this approach is able to extend the applicability of HOAS while preserving its elegance. We provide a generic strategy for Embedding by Unembedding, and then demonstrate its effectiveness with two substantial case studies in the domains of incremental computation and bidirectional transformations. The resulting embedded implementations are comparable in features to the state-of-the-art language implementations in the respective areas. @Article{ICFP23p189, author = {Kazutaka Matsuda and Samantha Frohlich and Meng Wang and Nicolas Wu}, title = {Embedding by Unembedding}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {189}, numpages = {47}, doi = {10.1145/3607830}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable ICFP '23: "Reflecting on Random Generation ..." Reflecting on Random Generation Harrison Goldstein, Samantha Frohlich, Meng Wang, and Benjamin C. Pierce (University of Pennsylvania, USA; University of Bristol, UK) Expert users of property-based testing often labor to craft random generators that encode detailed knowledge about what it means for a test input to be valid and interesting. Fortunately, the fruits of this labor can also be put to other uses. In the bidirectional programming literature, for example, generators have been repurposed as validity checkers, while Python's Hypothesis library uses the same structures for shrinking and mutating test inputs. To unify and generalize these uses and many others, we propose reflective generators, a new foundation for random data generators that can "reflect" on an input value to calculate the random choices that could have been made to produce it. Reflective generators combine ideas from two existing abstractions: free generators and partial monadic profunctors. They can be used to implement and enhance the aforementioned shrinking and mutation algorithms, generalizing them to work for any values that can be produced by the generator, not just ones for which a trace of the generator's execution is available. Beyond shrinking and mutation, reflective generators generalize a published algorithm for example-based generation, and they can also be used as checkers, partial value completers, and other kinds of test data producers. @Article{ICFP23p200, author = {Harrison Goldstein and Samantha Frohlich and Meng Wang and Benjamin C. Pierce}, title = {Reflecting on Random Generation}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {200}, numpages = {34}, doi = {10.1145/3607842}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
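The unembedding step that the "Embedding by Unembedding" abstract builds on (the conversion from finally-tagless/HOAS representation to de Bruijn-indexed terms, after Atkey, Lindley, and Yallop) fits in a short Haskell sketch; the class and constructor names below are illustrative, not the paper's.

```haskell
{-# LANGUAGE RankNTypes #-}

-- First-order target: de Bruijn-indexed lambda terms.
data DB = Var Int | Lam DB | App DB DB
  deriving (Eq, Show)

-- HOAS-style (finally-tagless) interface to the object language.
class Hoas repr where
  lam :: (repr -> repr) -> repr
  app :: repr -> repr -> repr

-- Unembedding interpretation: a term denotes a function from the
-- current binding depth to first-order syntax.
newtype U = U { unU :: Int -> DB }

instance Hoas U where
  lam f = U $ \d ->
    -- a variable's index is the distance between the depth at its
    -- use site and the depth at which its binder was introduced
    Lam (unU (f (U (\d' -> Var (d' - d - 1)))) (d + 1))
  app f x = U $ \d -> App (unU f d) (unU x d)

-- Any term polymorphic in the interpretation can be unembedded.
toDB :: (forall repr. Hoas repr => repr) -> DB
toDB t = unU t 0
```

For instance, `toDB (lam (\x -> lam (\y -> app x y)))` evaluates to `Lam (Lam (App (Var 1) (Var 0)))`: the host language's binders have been turned back into first-order, index-addressed syntax, which is exactly the move that lets HOAS interfaces target semantic domains that are not functions.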
|
Wąsowski, Andrzej |
ICFP '23: "Formal Specification and Testing ..."
Formal Specification and Testing for Reinforcement Learning
Mahsa Varshosaz, Mohsen Ghaffari, Einar Broch Johnsen, and Andrzej Wąsowski (IT University of Copenhagen, Denmark; University of Oslo, Norway) The development process for reinforcement learning applications is still exploratory rather than systematic. This exploratory nature reduces reuse of specifications between applications and increases the chances of introducing programming errors. This paper takes a step towards systematizing the development of reinforcement learning applications. We introduce a formal specification of reinforcement learning problems and algorithms, with a particular focus on temporal difference methods and their definitions in backup diagrams. We further develop a test harness for a large class of reinforcement learning applications based on temporal difference learning, including SARSA and Q-learning. The entire development is rooted in functional programming methods; starting with pure specifications and denotational semantics, ending with property-based testing and using compositional interpreters for a domain-specific term language as a test oracle for concrete implementations. We demonstrate the usefulness of this testing method on a number of examples, and evaluate with mutation testing. We show that our test suite is effective in killing mutants (90% mutants killed for 75% of subject agents). More importantly, almost half of all mutants are killed by generic write-once-use-everywhere tests that apply to any reinforcement learning problem modeled using our library, without any additional effort from the programmer. @Article{ICFP23p193, author = {Mahsa Varshosaz and Mohsen Ghaffari and Einar Broch Johnsen and Andrzej Wąsowski}, title = {Formal Specification and Testing for Reinforcement Learning}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {193}, numpages = {34}, doi = {10.1145/3607835}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Weirich, Stephanie |
ICFP '23: "Dependently-Typed Programming ..."
Dependently-Typed Programming with Logical Equality Reflection
Yiyun Liu and Stephanie Weirich (University of Pennsylvania, USA) In dependently-typed functional programming languages that allow general recursion, programs used as proofs must be evaluated to retain type soundness. As a result, programmers must make a trade-off between performance and safety. To address this problem, we propose System DE, an explicitly-typed, moded core calculus that supports termination tracking and equality reflection. Programmers can write inductive proofs about potentially diverging programs in a logical sublanguage and reflect those proofs to the type checker, while knowing that such proofs will be erased by the compiler before execution. A key feature of System DE is its use of modes for both termination and relevance tracking, which not only simplifies the design but also leaves it open for future extension. System DE is suitable for use in the Glasgow Haskell Compiler, but could serve as the basis for any general purpose dependently-typed language. @Article{ICFP23p210, author = {Yiyun Liu and Stephanie Weirich}, title = {Dependently-Typed Programming with Logical Equality Reflection}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {210}, numpages = {37}, doi = {10.1145/3607852}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
White, Leo |
ICFP '23: "MacoCaml: Staging Composable ..."
MacoCaml: Staging Composable and Compilable Macros
Ningning Xie, Leo White, Olivier Nicole, and Jeremy Yallop (University of Toronto, Canada; Jane Street, UK; Tarides, France; University of Cambridge, UK) We introduce MacoCaml, a new design and implementation of compile-time code generation for the OCaml language. MacoCaml features a novel combination of macros with phase separation and quotation-based staging, where macros are considered as compile-time bindings, expressions cross evaluation phases using staging annotations, and compile-time evaluation happens inside top-level splices. We provide a theoretical foundation for MacoCaml by formalizing a typed source calculus maco that supports interleaving typing and compile-time code generation, references with explicit compile-time heaps, and modules. We study various crucial properties including soundness and phase distinction. We have implemented MacoCaml in the OCaml compiler, and ported two substantial existing libraries to validate our implementation. @Article{ICFP23p209, author = {Ningning Xie and Leo White and Olivier Nicole and Jeremy Yallop}, title = {MacoCaml: Staging Composable and Compilable Macros}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {209}, numpages = {45}, doi = {10.1145/3607851}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Wong, Cameron |
ICFP '23: "LURK: Lambda, the Ultimate ..."
LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)
Nada Amin, John Burnham, François Garillot, Rosario Gennaro, Chhi’mèd Künzang, Daniel Rogozin, and Cameron Wong (Harvard University, USA; Lurk Lab, USA; Lurk Lab, Canada; City College of New York, USA; University College London, UK) We introduce Lurk, a new LISP-based programming language for zk-SNARKs. Traditional approaches to programming over zero-knowledge proofs require compiling the desired computation into a flat circuit, imposing serious constraints on the size and complexity of computations that can be achieved in practice. Lurk programs are instead provided as data to the universal Lurk interpreter circuit, allowing the resulting language to be Turing-complete without compromising the size of the resulting proof artifacts. Our work describes the design and theory behind Lurk, along with detailing how its implementation of content addressing can be used to sidestep many of the usual concerns of programming zero-knowledge proofs. @Article{ICFP23p197, author = {Nada Amin and John Burnham and François Garillot and Rosario Gennaro and Chhi’mèd Künzang and Daniel Rogozin and Cameron Wong}, title = {LURK: Lambda, the Ultimate Recursive Knowledge (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {197}, numpages = {16}, doi = {10.1145/3607839}, year = {2023}, } Publisher's Version Info |
|
Wright, Andy |
ICFP '23: "Flexible Instruction-Set Semantics ..."
Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)
Thomas Bourgeat, Ian Clester, Andres Erbsen, Samuel Gruetter, Pratap Singh, Andy Wright, and Adam Chlipala (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Carnegie Mellon University, USA) Instruction sets, from families like x86 and ARM, are at the center of many ambitious formal-methods projects. Many verification, synthesis, programming, and debugging tools rely on formal semantics of instruction sets, but different tools can use semantics in rather different ways. The best-known work applying single semantics across diverse tools relies on domain-specific languages like Sail, where the language and its translation tools are specialized to the realm of instruction sets. In the context of the open RISC-V instruction-set family, we decided to explore a different approach, with semantics written in a carefully chosen subset of Haskell. This style does not depend on any new language translators, relying instead on parameterization of semantics over type-class instances. We have used a single core semantics to support testing, interactive proof, and model checking of both software and hardware, demonstrating that monads and the ability to abstract over them using type classes can support pleasant prototyping of ISA semantics. @Article{ICFP23p192, author = {Thomas Bourgeat and Ian Clester and Andres Erbsen and Samuel Gruetter and Pratap Singh and Andy Wright and Adam Chlipala}, title = {Flexible Instruction-Set Semantics via Abstract Monads (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {192}, numpages = {17}, doi = {10.1145/3607833}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Wu, Nicolas |
ICFP '23: "Embedding by Unembedding ..."
Embedding by Unembedding
Kazutaka Matsuda, Samantha Frohlich, Meng Wang, and Nicolas Wu (Tohoku University, Japan; University of Bristol, UK; Imperial College London, UK) Embedding is a language development technique that implements the object language as a library in a host language. There are many advantages of the approach, including being lightweight and the ability to inherit features of the host language. A notable example is the technique of HOAS, which makes crucial use of higher-order functions to represent abstract syntax trees with binders. Despite its popularity, HOAS has its limitations. We observe that HOAS struggles with semantic domains that cannot be naturally expressed as functions, particularly when open expressions are involved. Prominent examples of this include incremental computation and reversible/bidirectional languages. In this paper, we pin-point the challenge faced by HOAS as a mismatch between the semantic domain of host and object language functions, and propose a solution. The solution is based on the technique of unembedding, which converts from the finally-tagless representation to de Bruijn-indexed terms with strong correctness guarantees. We show that this approach is able to extend the applicability of HOAS while preserving its elegance. We provide a generic strategy for Embedding by Unembedding, and then demonstrate its effectiveness with two substantial case studies in the domains of incremental computation and bidirectional transformations. The resulting embedded implementations are comparable in features to the state-of-the-art language implementations in the respective areas. @Article{ICFP23p189, author = {Kazutaka Matsuda and Samantha Frohlich and Meng Wang and Nicolas Wu}, title = {Embedding by Unembedding}, journal = {Proc. ACM Program. 
Lang.}, volume = {7}, number = {ICFP}, articleno = {189}, numpages = {47}, doi = {10.1145/3607830}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable ICFP '23: "Modular Models of Monoids ..." Modular Models of Monoids with Operations Zhixuan Yang and Nicolas Wu (Imperial College London, UK) Inspired by algebraic effects and the principle of notions of computations as monoids, we study a categorical framework for equational theories and models of monoids equipped with operations. The framework covers not only algebraic operations but also scoped and variable-binding operations. Appealingly, in this framework both theories and models can be modularly composed. Technically, a general monoid-theory correspondence is shown, saying that the category of theories of algebraic operations is equivalent to the category of monoids. Moreover, more complex forms of operations can be coreflected into algebraic operations, in a way that preserves initial algebras. On models, we introduce modular models of a theory, which can interpret abstract syntax in the presence of other operations. We show constructions of modular models (i) from monoid transformers, (ii) from free algebras, (iii) by composition, and (iv) in symmetric monoidal categories. @Article{ICFP23p208, author = {Zhixuan Yang and Nicolas Wu}, title = {Modular Models of Monoids with Operations}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {208}, numpages = {38}, doi = {10.1145/3607850}, year = {2023}, } Publisher's Version |
|
Xie, Ningning |
ICFP '23: "MacoCaml: Staging Composable ..."
MacoCaml: Staging Composable and Compilable Macros
Ningning Xie, Leo White, Olivier Nicole, and Jeremy Yallop (University of Toronto, Canada; Jane Street, UK; Tarides, France; University of Cambridge, UK) We introduce MacoCaml, a new design and implementation of compile-time code generation for the OCaml language. MacoCaml features a novel combination of macros with phase separation and quotation-based staging, where macros are considered as compile-time bindings, expressions cross evaluation phases using staging annotations, and compile-time evaluation happens inside top-level splices. We provide a theoretical foundation for MacoCaml by formalizing a typed source calculus maco that supports interleaving typing and compile-time code generation, references with explicit compile-time heaps, and modules. We study various crucial properties including soundness and phase distinction. We have implemented MacoCaml in the OCaml compiler, and ported two substantial existing libraries to validate our implementation. @Article{ICFP23p209, author = {Ningning Xie and Leo White and Olivier Nicole and Jeremy Yallop}, title = {MacoCaml: Staging Composable and Compilable Macros}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {209}, numpages = {45}, doi = {10.1145/3607851}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
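The staging idea behind MacoCaml, running generator code at compile time so that the residual program is specialized before it ever executes, is traditionally illustrated by the staged `power` function. The sketch below is a loose approximation in plain Python, with closures standing in for code values; it does not use MacoCaml's actual quotation/splice syntax, and the two-phase structure is only simulated.

```python
# Sketch: two-phase evaluation in the style of quotation-based staging.
# "Compile time" is when power(n) runs; the returned closure plays the
# role of the residual, specialized code that runs later.

def power(n):
    # compile-time recursion over the exponent
    if n == 0:
        return lambda x: 1          # residual code: the literal 1
    rest = power(n - 1)             # generate code for the smaller case
    return lambda x: x * rest(x)    # residual code: one unrolled multiply

cube = power(3)   # staging happens here, once
print(cube(2))    # 8  -- the specialized code runs with no exponent test
```

In MacoCaml the analogous separation is enforced by the type system: macro bindings live at compile time, and staging annotations mark where expressions cross between phases.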
|
Yallop, Jeremy |
ICFP '23: "MacoCaml: Staging Composable ..."
MacoCaml: Staging Composable and Compilable Macros
Ningning Xie, Leo White, Olivier Nicole, and Jeremy Yallop (University of Toronto, Canada; Jane Street, UK; Tarides, France; University of Cambridge, UK) We introduce MacoCaml, a new design and implementation of compile-time code generation for the OCaml language. MacoCaml features a novel combination of macros with phase separation and quotation-based staging, where macros are considered as compile-time bindings, expressions cross evaluation phases using staging annotations, and compile-time evaluation happens inside top-level splices. We provide a theoretical foundation for MacoCaml by formalizing a typed source calculus maco that supports interleaving typing and compile-time code generation, references with explicit compile-time heaps, and modules. We study various crucial properties including soundness and phase distinction. We have implemented MacoCaml in the OCaml compiler, and ported two substantial existing libraries to validate our implementation. @Article{ICFP23p209, author = {Ningning Xie and Leo White and Olivier Nicole and Jeremy Yallop}, title = {MacoCaml: Staging Composable and Compilable Macros}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {209}, numpages = {45}, doi = {10.1145/3607851}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available Artifacts Reusable |
|
Yang, Zhixuan |
ICFP '23: "Modular Models of Monoids ..."
Modular Models of Monoids with Operations
Zhixuan Yang and Nicolas Wu (Imperial College London, UK) Inspired by algebraic effects and the principle of notions of computations as monoids, we study a categorical framework for equational theories and models of monoids equipped with operations. The framework covers not only algebraic operations but also scoped and variable-binding operations. Appealingly, in this framework both theories and models can be modularly composed. Technically, a general monoid-theory correspondence is shown, saying that the category of theories of algebraic operations is equivalent to the category of monoids. Moreover, more complex forms of operations can be coreflected into algebraic operations, in a way that preserves initial algebras. On models, we introduce modular models of a theory, which can interpret abstract syntax in the presence of other operations. We show constructions of modular models (i) from monoid transformers, (ii) from free algebras, (iii) by composition, and (iv) in symmetric monoidal categories. @Article{ICFP23p208, author = {Zhixuan Yang and Nicolas Wu}, title = {Modular Models of Monoids with Operations}, journal = {Proc. ACM Program. Lang.}, volume = {7}, number = {ICFP}, articleno = {208}, numpages = {38}, doi = {10.1145/3607850}, year = {2023}, } Publisher's Version |
105 authors