ICFP 2019

Proceedings of the ACM on Programming Languages, Volume 3, Number ICFP, August 18–23, 2019, Berlin, Germany

ICFP – Journal Issue


Frontmatter

Title Page


Sponsors


Compilation and Parallelism

Rebuilding Racket on Chez Scheme (Experience Report)
Matthew Flatt, Caner Derici, R. Kent Dybvig, Andrew W. Keep, Gustavo E. Massaccesi, Sarah Spall, Sam Tobin-Hochstadt, and Jon Zeppieri
(University of Utah, USA; Indiana University, USA; Cisco Systems, USA; University of Buenos Aires, Argentina)
We rebuilt Racket on Chez Scheme, and it works well—as long as we're allowed a few patches to Chez Scheme. DrRacket runs, the Racket distribution can build itself, and nearly all of the core Racket test suite passes. Maintainability and performance of the resulting implementation are good, although some work remains to improve end-to-end performance. The least predictable part of our effort was how big the differences between Racket and Chez Scheme would turn out to be and how we would manage those differences. We expect Racket on Chez Scheme to become the main Racket implementation, and we encourage other language implementers to consider Chez Scheme as a target virtual machine.

Artifacts Available · Artifacts Functional
Compiling with Continuations, or without? Whatever.
Youyou Cong, Leo Osvald, Grégory M. Essertel, and Tiark Rompf
(Tokyo Institute of Technology, Japan; Purdue University, USA)
What makes a good compiler IR? In the context of functional languages, there has been an extensive debate on the advantages and disadvantages of continuation-passing style (CPS). The consensus seems to be that some form of explicit continuations is necessary to model jumps in a functional style, but that they should have second-class status, separate from regular functions, to ensure efficient code generation. Building on this observation, a recent study from PLDI 2017 proposed a direct-style IR with explicit join points, which essentially represent local continuations, i.e., functions that do not return or escape. While this IR can work well in practice, as evidenced by the implementation of join points in the Glasgow Haskell Compiler (GHC), there still seems to be room for improvement, especially with regard to the way continuations are handled in the course of optimization.
In this paper, we contribute to the CPS debate by developing a novel IR with the following features. First, we integrate a control operator that resembles Felleisen’s C, eliminating certain redundant rewrites observed in the previous study. Second, we treat the non-returning and non-escaping aspects of continuations separately, allowing efficient compilation of well-behaved functions defined by the user. Third, we define a selective CPS translation of our IR, which erases control operators while preserving the meaning and typing of programs. These features enable optimizations in both direct style and full CPS, as well as in any intermediate style with selectively exposed continuations. Thus, we change the spectrum of available options from “CPS yes or no” to “as much or as little CPS as you want, when you want it”.

Lambda Calculus with Algebraic Simplification for Reduction Parallelization by Equational Reasoning
Akimasa Morihata
(University of Tokyo, Japan)
Parallel reduction is a major component of parallel programming and widely used for summarization and aggregation. It is not well understood, however, what sorts of nontrivial summarizations can be implemented as parallel reductions. This paper develops a calculus named λas, a simply typed lambda calculus with algebraic simplification. This calculus provides a foundation for studying parallelization of complex reductions by equational reasoning. Its key feature is δ abstraction. A δ abstraction is observationally equivalent to the standard λ abstraction, but its body is simplified before the arrival of its arguments by using algebraic properties such as associativity and commutativity. In addition, the type system of λas guarantees that simplifications due to δ abstractions do not lead to serious overheads. The usefulness of λas is demonstrated on examples of developing complex parallel reductions, including those containing more than one reduction operator, loops with jumps, prefix-sum patterns, and even tree manipulations.
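The basic property that λas generalizes is easy to state concretely: an associative operation lets a reduction be split into independent halves. A minimal Haskell sketch of that starting point (our illustration, not the calculus itself; the name reducePar is ours):

    -- Associativity lets a reduction be split at any point, so the two
    -- halves can be computed independently, and hence in parallel.
    reducePar :: Monoid m => [m] -> m
    reducePar []  = mempty
    reducePar [x] = x
    reducePar xs  = let (l, r) = splitAt (length xs `div` 2) xs
                    in reducePar l <> reducePar r

λas goes further: its δ abstractions let reductions whose bodies are not monoidal on the surface be algebraically simplified into this splittable form.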

Fairness in Responsive Parallelism
Stefan K. Muller, Sam Westrick, and Umut A. Acar
(Carnegie Mellon University, USA; Inria, France)
Research on parallel computing has historically revolved around compute-intensive applications drawn from traditional areas such as high-performance computing. With the growing availability and usage of multicore chips, applications of parallel computing now include interactive parallel applications that mix compute-intensive tasks with interaction, e.g., with the user or more generally with the external world. Recent theoretical work on responsive parallelism presents abstract cost models and type systems for ensuring and reasoning about responsiveness and throughput of such interactive parallel programs.
In this paper, we extend prior work by considering a crucial metric: fairness. To express rich interactive parallel programs, we allow programmers to assign priorities to threads and instruct the scheduler to obey a notion of fairness. We then propose the fairly prompt scheduling principle for executing such programs; the principle specifies the schedule for multithreaded programs on multiple processors. For such schedules, we prove theoretical bounds on the execution and response times of jobs of various priorities. In particular, we bound the amount, i.e., stretch, by which a low-priority job can be delayed by higher-priority work. We also present an algorithm designed to approximate the fairly prompt scheduling principle on multicore computers, implement the algorithm by extending the Standard ML language, and present an empirical evaluation.


Verified Compilation

Narcissus: Correct-by-Construction Derivation of Decoders and Encoders from Binary Formats
Benjamin Delaware, Sorawit Suriyakarn, Clément Pit-Claudel, Qianchuan Ye, and Adam Chlipala
(Purdue University, USA; Band Protocol, Thailand; Massachusetts Institute of Technology, USA)
It is a neat result from functional programming that libraries of parser combinators can support rapid construction of decoders for quite a range of formats. With a little more work, the same combinator program can denote both a decoder and an encoder. Unfortunately, the real world is full of gnarly formats, as with the packet formats that make up the standard Internet protocol stack. Most past parser-combinator approaches cannot handle these formats, and the few exceptions require redundancy – one part of the natural grammar needs to be hand-translated into hints in multiple parts of a parser program. We show how to recover very natural and nonredundant format specifications, covering all popular network packet formats and generating both decoders and encoders automatically. The catch is that we use the Coq proof assistant to derive both kinds of artifacts using tactics, automatically, in a way that guarantees that they form inverses of each other. We used our approach to reimplement packet processing for a full Internet protocol stack, inserting our replacement into the OCaml-based MirageOS unikernel, resulting in minimal performance degradation.
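The artifact pair can be pictured in Haskell (a sketch only; Narcissus itself is a Coq library, and the Format type and word8 example below are our illustrative assumptions, not its actual interface):

    import Data.Word (Word8)

    -- A format couples an encoder with a decoder; correctness means they
    -- are mutual inverses: decode (encode x ++ rest) == Just (x, rest).
    data Format a = Format
      { encode :: a -> [Word8]
      , decode :: [Word8] -> Maybe (a, [Word8])  -- value plus leftover bytes
      }

    -- the simplest format: a single byte
    word8 :: Format Word8
    word8 = Format
      { encode = \b -> [b]
      , decode = \bs -> case bs of { b : rest -> Just (b, rest); [] -> Nothing }
      }

Narcissus derives both halves of such pairs, together with the inverse property, from a single nonredundant specification.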

Artifacts Available · Artifacts Functional
Closure Conversion Is Safe for Space
Zoe Paraskevopoulou and Andrew W. Appel
(Princeton University, USA)
We formally prove that closure conversion with flat environments for CPS lambda calculus is correct (preserves semantics) and safe for time and space, meaning that produced code preserves the time and space required for the execution of the source program.
We give a cost model to pre- and post-closure-conversion code by formalizing profiling semantics that keep track of the time and space resources needed for the execution of a program, taking garbage collection into account. To show preservation of time and space we set up a general, "garbage-collection compatible", binary logical relation that establishes invariants on resource consumption of the related programs, along with functional correctness. Using this framework, we show semantics preservation and space and time safety for terminating source programs, and divergence preservation and space safety for diverging source programs.
This is the first formal proof of space-safety of a closure-conversion transformation. The transformation and the proof are parts of the CertiCoq compiler pipeline from Coq (Gallina) through CompCert Clight to assembly language. Our results are mechanized in the Coq proof assistant.
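The transformation itself is standard; the contribution is the mechanized proof about it. A minimal Haskell sketch of closure conversion with flat environments (our illustration):

    {-# LANGUAGE ExistentialQuantification #-}

    -- A closure is a code pointer paired with a flat environment holding
    -- exactly the free variables of the function body.
    data Clo a b = forall env. Clo (env -> a -> b) env

    apply :: Clo a b -> a -> b
    apply (Clo code env) = code env

    -- source:  let y = 1 in \x -> x + y
    -- target:  code abstracted over its environment, paired with y
    example :: Clo Int Int
    example = let y = 1 in Clo (\env x -> x + env) y

Space safety hinges on the environment being flat: it retains only the free variables of the body, so the closure keeps no extra data live for the garbage collector.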

Artifacts Functional
Linear Capabilities for Fully Abstract Compilation of Separation-Logic-Verified Code
Thomas Van Strydonck, Frank Piessens, and Dominique Devriese
(KU Leuven, Belgium; Vrije Universiteit Brussel, Belgium)
Separation logic is a powerful program logic for the static modular verification of imperative programs. However, dynamic checking of separation logic contracts on the boundaries between verified and untrusted modules is hard, because it requires one to enforce (among other things) that outcalls from a verified to an untrusted module do not access memory resources currently owned by the verified module.
This paper proposes an approach to dynamic contract checking by relying on support for capabilities, a well-studied form of unforgeable memory pointers that enables fine-grained, efficient memory access control. More specifically, we rely on a form of capabilities called linear capabilities for which the hardware enforces that they cannot be copied.
We formalize our approach as a fully abstract compiler from a statically verified source language to an unverified target language with support for linear capabilities. The key insight behind our compiler is that memory resources described by spatial separation logic predicates can be represented at run time by linear capabilities. The compiler is separation-logic-proof-directed: it uses the separation logic proof of the source program to determine how memory accesses in the source program should be compiled to linear capability accesses in the target program.
The full abstraction property of the compiler essentially guarantees that compiled verified modules can interact with untrusted target language modules as if they were compiled from verified code as well.

Artifacts Available
The Next 700 Compiler Correctness Theorems (Functional Pearl)
Daniel Patterson and Amal Ahmed
(Northeastern University, USA)
Compiler correctness is an old problem, with results stretching back beyond the last half-century. Founding the field, John McCarthy and James Painter set out to build a "completely trustworthy compiler". And yet, until quite recently, even despite truly impressive verification efforts, the theorems being proved were only about the compilation of whole programs, a theoretically quite appealing but practically unrealistic simplification. For a compiler correctness theorem to assure complete trust, the theorem must reflect the reality of how the compiler will be used.
There has been much recent work on more realistic "compositional" compiler correctness aimed at proving correct compilation of components while supporting linking with components compiled from different languages using different compilers. However, the variety of theorems, stated in remarkably different ways, raises the question of what researchers even mean when they say a "compiler is correct." In this pearl, we develop a new framework with which to understand compiler correctness theorems in the presence of linking, and apply it to understanding and comparing this diversity of results. In doing so, we are not only better able to assess their relative strengths and weaknesses, but we also gain insight into what we as a community should expect from compiler correctness theorems of the future.


Type Theory

Equations Reloaded: High-Level Dependently-Typed Functional Programming and Proving in Coq
Matthieu Sozeau and Cyprien Mangin
(Inria, France; IRIF, France; University of Paris Diderot, France)
Equations is a plugin for the Coq proof assistant which provides a notation for defining programs by dependent pattern-matching and structural or well-founded recursion. It additionally derives useful high-level proof principles for demonstrating properties about them, abstracting away from the implementation details of the function and its compiled form. We present a general design and implementation that provides a robust and expressive function definition package as a definitional extension to the Coq kernel. At the core of the system is a new simplifier for dependent equalities based on an original handling of the no-confusion property of constructors.

Artifacts Available · Artifacts Functional
Cubical Agda: A Dependently Typed Programming Language with Univalence and Higher Inductive Types
Andrea Vezzosi, Anders Mörtberg, and Andreas Abel
(IT University of Copenhagen, Denmark; Stockholm University, Sweden; Carnegie Mellon University, USA; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden)
Proof assistants based on dependent type theory provide expressive languages for both programming and proving within the same system. However, all of the major implementations lack powerful extensionality principles for reasoning about equality, such as function and propositional extensionality. These principles are typically added axiomatically which disrupts the constructive properties of these systems. Cubical type theory provides a solution by giving computational meaning to Homotopy Type Theory and Univalent Foundations, in particular to the univalence axiom and higher inductive types. This paper describes an extension of the dependently typed functional programming language Agda with cubical primitives, making it into a full-blown proof assistant with native support for univalence and a general schema of higher inductive types. These new primitives make function and propositional extensionality as well as quotient types directly definable with computational content. Additionally, thanks also to copatterns, bisimilarity is equivalent to equality for coinductive types. This extends Agda with support for a wide range of extensionality principles, without sacrificing type checking and constructivity.

Artifacts Available · Artifacts Functional
Approximate Normalization for Gradual Dependent Types
Joseph Eremondi, Éric Tanter, and Ronald Garcia
(University of British Columbia, Canada; University of Chile, Chile; Inria, France)
Dependent types help programmers write highly reliable code. However, this reliability comes at a cost: it can be challenging to write new prototypes in (or migrate old code to) dependently-typed programming languages. Gradual typing makes static type disciplines more flexible, so an appropriate notion of gradual dependent types could fruitfully lower this cost. However, dependent types raise unique challenges for gradual typing. Dependent typechecking involves the execution of program code, but gradually-typed code can signal runtime type errors or diverge. These runtime errors threaten the soundness guarantees that make dependent types so attractive, while divergence spoils the type-driven programming experience.
This paper presents GDTL, a gradual dependently-typed language that emphasizes pragmatic dependently-typed programming. GDTL fully embeds both an untyped and dependently-typed language, and allows for smooth transitions between the two. In addition to gradual types we introduce gradual terms, which allow the user to be imprecise in type indices and to omit proof terms; runtime checks ensure type safety. To account for nontermination and failure, we distinguish between compile-time normalization and run-time execution: compile-time normalization is approximate but total, while runtime execution is exact, but may fail or diverge. We prove that GDTL has decidable typechecking and satisfies all the expected properties of gradual languages. In particular, GDTL satisfies the static and dynamic gradual guarantees: reducing type precision preserves typedness, and altering type precision does not change program behavior outside of dynamic type failures. To prove these properties, we were led to establish a novel normalization gradual guarantee that captures the monotonicity of approximate normalization with respect to imprecision.

Artifacts Available · Artifacts Functional

Types 1

Simple Noninterference from Parametricity
Maximilian Algehed and Jean-Philippe Bernardy
(Chalmers University of Technology, Sweden; University of Gothenburg, Sweden)
In this paper we revisit the connection between parametricity and noninterference. Our primary contribution is a proof of noninterference for a polyvariant variation of the Dependency Core Calculus, carried out in the Calculus of Constructions. The proof is modular: it leverages parametricity for the Calculus of Constructions and the encoding of data abstraction using existential types. This perspective gives rise to simple and understandable proofs of noninterference from parametricity. All our contributions have been mechanised in the Agda proof assistant.

Selective Applicative Functors
Andrey Mokhov, Georgy Lukyanov, Simon Marlow, and Jeremie Dimino
(Newcastle University, UK; Facebook, UK; Jane Street, UK)
Applicative functors and monads have conquered the world of functional programming by providing general and powerful ways of describing effectful computations using pure functions. Applicative functors provide a way to compose independent effects that cannot depend on values produced by earlier computations, and all of which are declared statically. Monads extend the applicative interface by making it possible to compose dependent effects, where the value computed by one effect determines all subsequent effects, dynamically.
This paper introduces an intermediate abstraction called selective applicative functors that requires all effects to be declared statically, but provides a way to select which of the effects to execute dynamically. We demonstrate applications of the new abstraction on several examples, including two industrial case studies.
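The core interface fits in one Haskell type class (as in the paper, up to presentation), and conditional combinators become definable on top of it, which Applicative alone cannot express:

    import Data.Bool (bool)

    -- All effects are declared statically, but the effect in the second
    -- argument need only be executed when the first yields a Left.
    class Applicative f => Selective f where
      select :: f (Either a b) -> f (a -> b) -> f b

    branch :: Selective f => f (Either a b) -> f (a -> c) -> f (b -> c) -> f c
    branch x l r = fmap (fmap Left) x `select` fmap (fmap Right) l `select` r

    ifS :: Selective f => f Bool -> f a -> f a -> f a
    ifS x t e = branch (bool (Right ()) (Left ()) <$> x) (const <$> t) (const <$> e)

A static analysis can thus see the effects of both branches, while execution runs only the branch that is actually selected.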

Artifacts Available · Artifacts Functional
Coherence of Type Class Resolution
Gert-Jan Bottu, Ningning Xie, Koar Marntirosian, and Tom Schrijvers
(KU Leuven, Belgium; University of Hong Kong, China)
Elaboration-based type class resolution, as found in languages like Haskell, Mercury and PureScript, is generally nondeterministic: there can be multiple ways to satisfy a wanted constraint in terms of global instances and locally given constraints. Coherence is the key property that keeps this sane; it guarantees that, despite the nondeterminism, programs still behave predictably. Even though elaboration-based resolution is generally assumed coherent, as far as we know, there is no formal proof of this property in the presence of sources of nondeterminism, like superclasses and flexible contexts.
This paper provides a formal proof to remedy the situation. The proof is non-trivial because the semantics elaborates resolution into a target language where different elaborations can be distinguished by contexts that do not have a source language counterpart. Inspired by the notion of full abstraction, we present a two-step strategy that first elaborates nondeterministically into an intermediate language that preserves contextual equivalence, and then deterministically elaborates from there into the target language. We use an approach based on logical relations to establish contextual equivalence and thus coherence for the first step of elaboration, while the second step’s determinism straightforwardly preserves this coherence property.
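A two-line Haskell example of the nondeterminism in question (ours, for illustration): in the body of f, the Eq a dictionary can be resolved either from the local Eq a constraint or from the superclass of Ord a, and coherence is the guarantee that both elaborations behave the same.

    -- Two valid resolutions of (==): the given Eq a, or Ord a's superclass.
    f :: (Eq a, Ord a) => a -> a -> Bool
    f x y = x == y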


Program Analysis and Synthesis

Relational Cost Analysis for Functional-Imperative Programs
Weihao Qu, Marco Gaboardi, and Deepak Garg
(SUNY Buffalo, USA; MPI-SWS, Germany)
Relational cost analysis aims at formally establishing bounds on the difference in the evaluation costs of two programs. As a particular case, one can also use relational cost analysis to establish bounds on the difference in the evaluation cost of the same program on two different inputs. One way to perform relational cost analysis is to use a relational type-and-effect system that supports reasoning about relations between two executions of two programs.
Building on this basic idea, we present a type-and-effect system, called ARel, for reasoning about the relative cost of array-manipulating, higher-order functional-imperative programs. The key ingredient of our approach is a new lightweight type refinement discipline that we use to track relations (differences) between two mutable arrays. This discipline combined with Hoare-style triples built into the types allows us to express and establish precise relative costs of several interesting programs which imperatively update their data. We have implemented ARel using ideas from bidirectional type checking.

Artifacts Available · Artifacts Functional
Fuzzi: A Three-Level Logic for Differential Privacy
Hengchu Zhang, Edo Roth, Andreas Haeberlen, Benjamin C. Pierce, and Aaron Roth
(University of Pennsylvania, USA)
Curators of sensitive datasets sometimes need to know whether queries against the data are differentially private. Two sorts of logics have been proposed for checking this property: (1) type systems and other static analyses, which fully automate straightforward reasoning with concepts like “program sensitivity” and “privacy loss,” and (2) full-blown program logics such as apRHL (an approximate, probabilistic, relational Hoare logic), which support more flexible reasoning about subtle privacy-preserving algorithmic techniques but offer only minimal automation.
We propose a three-level logic for differential privacy in an imperative setting and present a prototype implementation called Fuzzi. Fuzzi’s lowest level is a general-purpose logic; its middle level is apRHL; and its top level is a novel sensitivity logic adapted from the linear-logic-inspired type system of Fuzz, a differentially private functional language. The key novelty is a high degree of integration between the sensitivity logic and the two lower-level logics: the judgments and proofs of the sensitivity logic can be easily translated into apRHL; conversely, privacy properties of key algorithmic building blocks can be proved manually in apRHL and the base logic, then packaged up as typing rules that can be applied by a checker for the sensitivity logic to automatically construct privacy proofs for composite programs of arbitrary size.
We demonstrate Fuzzi’s utility by implementing four different private machine-learning algorithms and showing that Fuzzi’s checker is able to derive tight sensitivity bounds.

Artifacts Available · Artifacts Functional
Synthesizing Differentially Private Programs
Calvin Smith and Aws Albarghouthi
(University of Wisconsin-Madison, USA)
Inspired by the proliferation of data-analysis tasks, recent research in program synthesis has had a strong focus on enabling users to specify data-analysis programs through intuitive specifications, like examples and natural language. However, with the ever-increasing threat to privacy through data analysis, we believe it is imperative to reimagine program synthesis technology in the presence of formal privacy constraints.
In this paper, we study the problem of automatically synthesizing randomized, differentially private programs, where the user can provide the synthesizer with a constraint on the privacy of the desired algorithm. We base our technique on a linear dependent type system that can track the resources consumed by a program, and hence its privacy cost. We develop a novel type-directed synthesis algorithm that constructs randomized differentially private programs. We apply our technique to the problems of synthesizing database-like queries as well as recursive differential privacy mechanisms from the literature.

Synthesizing Symmetric Lenses
Anders Miltner, Solomon Maina, Kathleen Fisher, Benjamin C. Pierce, David Walker, and Steve Zdancewic
(Princeton University, USA; University of Pennsylvania, USA; Tufts University, USA)
Lenses are programs that can be run both "front to back" and "back to front," allowing updates to either their source or their target data to be transferred in both directions. Since their introduction by Foster et al., lenses have been extensively studied, extended, and applied. Recent work has also demonstrated how techniques from type-directed program synthesis can be used to efficiently synthesize a simple class of lenses---so-called bijective lenses over string data---given a pair of types (regular expressions) and a small number of examples.
We extend this synthesis algorithm to a much broader class of lenses, called simple symmetric lenses, including all bijective lenses, all of the popular category of "asymmetric" lenses, and a rich subset of the more powerful "symmetric lenses" proposed by Hofmann et al. Intuitively, simple symmetric lenses allow some information to be present on one side but not the other and vice versa. They are of independent theoretical interest, being the largest class of symmetric lenses that do not rely on persistent internal state.
Synthesizing simple symmetric lenses is substantially more challenging than synthesizing bijective lenses: Since some of the information on each side can be "disconnected" from the other side, there will, in general, be many lenses that agree with a given example. To guide the search process, we use stochastic regular expressions and ideas from information theory to estimate the amount of information propagated by a candidate lens, generally preferring lenses that propagate more information, as well as user annotations marking parts of the source and target data structures as either irrelevant or essential.
We describe an implementation of simple symmetric lenses and our synthesis procedure as extensions to the Boomerang language. We evaluate its performance on 48 benchmark examples drawn from Flash Fill, Augeas, the bidirectional programming literature, and electronic file format synchronization tasks. Our implementation can synthesize each of these lenses in under 30 seconds.
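A plausible Haskell rendering of the interface (the record below is our approximation, not the paper's exact definitions): because each side may contain information absent from the other, put consults the old value on the target side to restore what the source cannot supply.

    data Lens s t = Lens
      { createR :: s -> t        -- build a target from scratch
      , createL :: t -> s
      , putR    :: s -> t -> t   -- new source + old target -> new target,
      , putL    :: t -> s -> s   -- preserving target-only information
      }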

Artifacts Available · Artifacts Functional

The Real World

Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator
Fei Wang, Daniel Zheng, James Decker, Xilun Wu, Grégory M. Essertel, and Tiark Rompf
(Purdue University, USA)
Deep learning has seen tremendous success over the past decade in computer vision, machine translation, and gameplay. This success rests crucially on gradient-descent optimization and the ability to “learn” parameters of a neural network by backpropagating observed errors. However, neural network architectures are growing increasingly sophisticated and diverse, which motivates an emerging quest for even more general forms of differentiable programming, where arbitrary parameterized computations can be trained by gradient descent. In this paper, we take a fresh look at automatic differentiation (AD) techniques, and especially aim to demystify the reverse-mode form of AD that generalizes backpropagation in neural networks. We uncover a tight connection between reverse-mode AD and delimited continuations, which permits implementing reverse-mode AD purely via operator overloading and without managing any auxiliary data structures. We further show how this formulation of AD can be fruitfully combined with multi-stage programming (staging), leading to an efficient implementation that combines the performance benefits of deep learning frameworks based on explicit reified computation graphs (e.g., TensorFlow) with the expressiveness of pure library approaches (e.g., PyTorch).
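The connection can be seen in miniature. The paper develops it in Scala with shift/reset and staging; below is a Haskell sketch of the same idea, with explicit continuation passing and mutable adjoint cells standing in for delimited continuations (all names are ours):

    import Data.IORef

    -- a differentiable value: a number paired with a mutable adjoint cell
    data D = D { val :: Double, adj :: IORef Double }

    fresh :: Double -> IO D
    fresh x = D x <$> newIORef 0

    -- Multiplication in CPS: run the rest of the computation (k) first,
    -- then propagate the result's adjoint backward to both inputs.
    mul :: D -> D -> (D -> IO ()) -> IO ()
    mul a b k = do
      c <- fresh (val a * val b)
      k c
      dc <- readIORef (adj c)
      modifyIORef' (adj a) (+ dc * val b)
      modifyIORef' (adj b) (+ dc * val a)

    -- gradient of a unary function written in this style
    grad :: (D -> (D -> IO ()) -> IO ()) -> Double -> IO Double
    grad f x = do
      d <- fresh x
      f d (\y -> writeIORef (adj y) 1)   -- seed the output adjoint with 1
      readIORef (adj d)

    -- d/dx (x * x) at 3 prints 6.0
    main :: IO ()
    main = grad (\x k -> mul x x k) 3 >>= print

The backward pass is exactly the code that runs after the continuation k returns, which is why delimited control operators make the usual auxiliary "tape" disappear.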

Efficient Differentiable Programming in a Functional Array-Processing Language
Amir Shaikhha, Andrew Fitzgibbon, Dimitrios Vytiniotis, and Simon Peyton Jones
(University of Oxford, UK; Microsoft Research, UK; DeepMind, UK)
We present a system for the automatic differentiation (AD) of a higher-order functional array-processing language. The core functional language underlying this system simultaneously supports both source-to-source forward-mode AD and global optimisations such as loop transformations. In combination, gradient computation with forward-mode AD can be as efficient as reverse mode, and the Jacobian matrices required for numerical algorithms such as Gauss-Newton and Levenberg-Marquardt can be computed efficiently.

From High-Level Inference Algorithms to Efficient Code
Rajan Walia, Praveen Narayanan, Jacques Carette, Sam Tobin-Hochstadt, and Chung-chieh Shan
(Indiana University, USA; McMaster University, Canada)
Probabilistic programming languages are valuable because they allow domain experts to express probabilistic models and inference algorithms without worrying about irrelevant details. However, for decades there remained an important and popular class of probabilistic inference algorithms whose efficient implementation required manual low-level coding that is tedious and error-prone. They are algorithms whose idiomatic expression requires random array variables that are latent or whose likelihood is conjugate. Although that is how practitioners communicate and compose these algorithms on paper, executing such expressions requires eliminating the latent variables and recognizing the conjugacy by symbolic mathematics. Moreover, matching the performance of handwritten code requires speeding up loops by more than a constant factor.
We show how probabilistic programs that directly and concisely express these desired inference algorithms can be compiled while maintaining efficiency. We introduce new transformations that turn high-level probabilistic programs with arrays into pure loop code. We then make great use of domain-specific invariants and norms to optimize the code, and to specialize and JIT-compile the code per execution. The resulting performance is competitive with manual implementations.

Artifacts Available
Sound and Robust Solid Modeling via Exact Real Arithmetic and Continuity
Benjamin Sherman, Jesse Michel, and Michael Carbin
(Massachusetts Institute of Technology, USA)
Algorithms for solid modeling, i.e., for Computer-Aided Design (CAD) and computer graphics, are often specified over the real numbers and then implemented with finite-precision arithmetic, such as floating point. The result is that these implementations do not soundly compute the results that are expected from their specifications.
We present a new library, StoneWorks, that provides sound and robust solid modeling primitives. We implement StoneWorks in MarshallB, a pure functional programming language for exact real arithmetic in which types denote topological spaces and functions denote continuous maps, ensuring that all programs are sound and robust. We developed MarshallB as an extension of the Marshall language.
We also define a new shape representation, compact representation (K-rep), that enables constructions such as Minkowski sum and analyses such as Hausdorff distance that are not possible with traditional representations. K-rep is a nondeterminism monad for describing all the points in a shape.
With our library, language, and representation together, we show that short StoneWorks programs can specify and execute sound and robust solid modeling algorithms and tasks.

Artifacts Available · Artifacts Functional

Dependent Types in Haskell

Dependently Typed Haskell in Industry (Experience Report)
David Thrane Christiansen, Iavor S. Diatchki, Robert Dockins, Joe Hendrix, and Tristan Ravitch
(Galois, USA)
Recent versions of the Haskell compiler GHC have a number of advanced features that allow many idioms from dependently typed programming to be encoded. We describe our experiences using this "dependently typed Haskell" to construct a performance-critical library that is a key component in a number of verification tools. We have discovered that it can be done, and that it brings significant value, but at a high cost. In this experience report, we describe the ways in which programming at the edge of what is expressible in Haskell's type system has brought us value, the difficulties that it has imposed, and some of the ways we coped with the difficulties.

A Role for Dependent Types in Haskell
Stephanie Weirich, Pritam Choudhury, Antoine Voizard, and Richard A. Eisenberg
(University of Pennsylvania, USA; Bryn Mawr College, USA)
Modern Haskell supports zero-cost coercions, a mechanism whereby types that share the same run-time representation may be freely converted to one another. To make sure such conversions are safe and desirable, this feature relies on a mechanism of roles to prohibit invalid coercions. In this work, we show how to incorporate roles into dependent type systems and prove, using the Coq proof assistant, that the resulting system is sound. We have designed this work as a foundation for the addition of dependent types to the Glasgow Haskell Compiler, but we also expect that it will be of use to designers of other dependently-typed languages who might want to adopt Haskell’s safe coercions feature.
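The feature being generalized is today's zero-cost coercion mechanism; a small standard-GHC example of what roles must police (nothing here is specific to the paper):

    import Data.Coerce (coerce)

    newtype Age = Age Int   -- same run-time representation as Int

    -- Zero-cost: no list traversal happens, because the list type's
    -- parameter has a representational role.
    toAges :: [Int] -> [Age]
    toAges = coerce

Roles exist to reject such coercions where they would be unsound or unwanted, e.g., for a Map keyed by a type whose ordering the structure's invariants depend on; the paper's contribution is keeping that discipline sound once types can depend on terms.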

Artifacts Available · Artifacts Functional
Higher-Order Type-Level Programming in Haskell
Csongor Kiss, Tony Field, Susan Eisenbach, and Simon Peyton Jones
(Imperial College London, UK; Microsoft Research, UK)
Type family applications in Haskell must be fully saturated. This means that all type-level functions have to be first-order, leading to code that is both messy and longwinded. In this paper we detail an extension to GHC that removes this restriction. We augment Haskell’s existing type arrow, →, with an unmatchable arrow, ↠, that supports partial application of type families without compromising soundness. A soundness proof is provided. We show how the techniques described can lead to substantial code-size reduction (circa 80%) in the type-level logic of commonly-used type-level libraries whilst simultaneously improving code quality and readability.
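The saturation restriction is easy to demonstrate in today's GHC (a standard example, not taken from the paper):

    {-# LANGUAGE DataKinds, PolyKinds, TypeFamilies, TypeOperators #-}

    type family Map (f :: a -> b) (xs :: [a]) :: [b] where
      Map f '[]       = '[]
      Map f (x ': xs) = f x ': Map f xs

    -- Accepted: Maybe is a matchable type constructor.
    type Demo = Map Maybe '[Int, Bool]   -- '[Maybe Int, Maybe Bool]

    -- Rejected today: instantiating f with a type family, or with a
    -- partial application of one, because type families must be fully
    -- saturated. The unmatchable arrow ↠ is what makes such
    -- higher-order uses sound.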

Artifacts Available · Artifacts Functional

Program Verification

A Predicate Transformer Semantics for Effects (Functional Pearl)
Wouter Swierstra and Tim Baanen
(Utrecht University, Netherlands)
Reasoning about programs that use effects can be much harder than reasoning about their pure counterparts. This paper presents a predicate transformer semantics for a variety of effects, including exceptions, state, non-determinism, and general recursion. The predicate transformer semantics gives rise to a refinement relation that can be used to relate a program to its specification, or even calculate effectful programs that are correct by construction.
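For exceptions the idea is especially compact; here is a Haskell sketch of the weakest-precondition semantics (the paper works in Agda, and the name wpMaybe is ours):

    type Pred a = a -> Bool

    -- Weakest precondition of a Kleisli arrow in the Maybe (exception)
    -- monad: no exception is raised, and the result satisfies the
    -- postcondition. Mapping Nothing to True instead yields partial
    -- correctness.
    wpMaybe :: (a -> Maybe b) -> a -> Pred b -> Bool
    wpMaybe f x post = maybe False post (f x)

A program then refines a specification when its transformer establishes every postcondition the specification's transformer does.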

Artifacts Available · Artifacts Functional
Dijkstra Monads for All
Kenji Maillard, Danel Ahman, Robert Atkey, Guido Martínez, Cătălin Hriţcu, Exequiel Rivas, and Éric Tanter
(Inria, France; ENS, France; University of Ljubljana, Slovenia; University of Strathclyde, UK; CIFASIS-CONICET, Argentina; University of Chile, Chile)
This paper proposes a general semantic framework for verifying programs with arbitrary monadic side-effects using Dijkstra monads, which we define as monad-like structures indexed by a specification monad. We prove that any monad morphism between a computational monad and a specification monad gives rise to a Dijkstra monad, which provides great flexibility for obtaining Dijkstra monads tailored to the verification task at hand. We moreover show that a large variety of specification monads can be obtained by applying monad transformers to various base specification monads, including predicate transformers and Hoare-style pre- and postconditions. For defining correct monad transformers, we propose a language inspired by Moggi's monadic metalanguage that is parameterized by a dependent type theory. We also develop a notion of algebraic operations for Dijkstra monads, and start to investigate two ways of also accommodating effect handlers. We implement our framework in both Coq and F*, and illustrate that it supports a wide variety of verification styles for effects such as exceptions, nondeterminism, state, input-output, and general recursion.
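One canonical specification monad is the monad of predicate transformers, sketched here in Haskell with Bool standing in for propositions (the name W is ours):

    -- weakest-precondition transformers; this is the continuation monad
    -- Cont instantiated at Bool
    newtype W a = W { runW :: (a -> Bool) -> Bool }

    instance Functor W where
      fmap f (W w) = W (\post -> w (post . f))

    instance Applicative W where
      pure x        = W ($ x)
      W wf <*> W wx = W (\post -> wf (\f -> wx (post . f)))

    instance Monad W where
      W w >>= k = W (\post -> w (\x -> runW (k x) post))

In the paper's terms, a monad morphism from a computational monad (state, exceptions, ...) into a specification monad such as W is what gives rise to a Dijkstra monad.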

Artifacts Available · Artifacts Functional
Mechanized Relational Verification of Concurrent Programs with Continuations
Amin Timany and Lars Birkedal
(KU Leuven, Belgium; Aarhus University, Denmark)
Concurrent higher-order imperative programming languages with continuations are very flexible and allow for the implementation of sophisticated programming patterns. For instance, it is well known that continuations can be used to implement cooperative concurrency. Continuations can also simplify web server implementations; in particular, they help simplify keeping track of the state of the server’s clients. However, such advanced programming languages are very challenging to reason about. One of the main challenges in reasoning about programs in the presence of continuations is that the non-local flow of control breaks the bind rule, one of the important modular reasoning principles of Hoare logic.
In this paper we present the first completely formalized tool for interactive mechanized relational verification of programs written in a concurrent higher-order imperative programming language with continuations (call/cc and throw). We develop novel logical relations which can be used to give mechanized proofs of relational properties. In particular, we prove correctness of an implementation of cooperative concurrency with continuations. In addition, we show that a rudimentary web server implemented using the continuation-based pattern is contextually equivalent to one implemented without it. We introduce context-local reasoning principles for our calculus which allow us to regain modular reasoning for the fragment of the language without non-local control flow. These novel reasoning principles can be used in tandem with our (non-context-local) Hoare logic for reasoning about programs that do feature non-local control flow. Indeed, we use the combination of context-local and non-context-local reasoning to simplify reasoning about the examples.

Artifacts Available · Artifacts Functional
Sequential Programming for Replicated Data Stores
Nicholas V. Lewchenko, Arjun Radhakrishna, Akash Gaonkar, and Pavol Černý
(University of Colorado Boulder, USA; Microsoft, USA)
We introduce Carol, a refinement-typed programming language for replicated data stores. The salient feature of Carol is that it allows programming and verifying replicated store operations modularly, without consideration of other operations that might interleave, and sequentially, without requiring reference to or knowledge of the concurrent execution model. This is in stark contrast with existing systems, which require understanding the concurrent interactions of all pairs of operations when developing or verifying them.
The key enabling idea is the consistency guard, a two-state predicate relating the locally-viewed store and the hypothetical remote store that an operation’s updates may eventually be applied to, which is used by the Carol programmer to declare their precise consistency requirements. Guards appear to the programmer and refinement typechecker as simple data pre-conditions, enabling sequential reasoning, while appearing to the distributed runtime as consistency control instructions.
We implement and evaluate the Carol system in two parts: (1) the algorithm used to statically translate guards into the runtime coordination actions required to enforce them, and (2) the networked-replica runtime which executes arbitrary operations, written in a Haskell DSL, according to the Carol language semantics.


Modal Types

Implementing a Modal Dependent Type Theory
Daniel Gratzer, Jonathan Sterling, and Lars Birkedal
(Aarhus University, Denmark; Carnegie Mellon University, USA)
Modalities are everywhere in programming and mathematics! Despite this, however, there are still significant technical challenges in formulating a core dependent type theory with modalities. We present a dependent type theory MLTT🔒 supporting the connectives of standard Martin-Löf Type Theory as well as an S4-style necessity operator. MLTT🔒 supports a smooth interaction between modal and dependent types and provides a common basis for the use of modalities in programming and in synthetic mathematics. We design and prove the soundness and completeness of a type checking algorithm for MLTT🔒, using a novel extension of normalization by evaluation. We have also implemented our algorithm in a prototype proof assistant for MLTT🔒, demonstrating the ease of applying our techniques.

Artifacts Available · Artifacts Functional
A Reasonably Exceptional Type Theory
Pierre-Marie Pédrot, Nicolas Tabareau, Hans Jacob Fehrmann, and Éric Tanter
(Inria, France; University of Chile, Chile)
Traditional approaches to compensate for the lack of exceptions in type theories for proof assistants have severe drawbacks from both a programming and a reasoning perspective.
Pédrot and Tabareau recently extended the Calculus of Inductive Constructions (CIC) with exceptions. The new exceptional type theory is interpreted by a translation into CIC, covering full dependent elimination, decidable type-checking and canonicity. However, the exceptional theory is inconsistent as a logical system. To recover consistency, Pédrot and Tabareau propose an additional translation that uses parametricity to enforce that all exceptions are caught locally. While this enforcement brings logical expressivity gains over CIC, it completely prevents reasoning about exceptional programs such as partial functions.
This work addresses the dilemma between exceptions and consistency in a more flexible manner, with the Reasonably Exceptional Type Theory (RETT). RETT is structured in three layers: (a) the exceptional layer, in which all terms can raise exceptions; (b) the mediation layer, in which exceptional terms must be provably parametric; (c) the pure layer, in which terms are non-exceptional, but can refer to exceptional terms.
We present the general theory of RETT, where each layer is realized by a predicative hierarchy of universes, and develop an instance of RETT in Coq: the impure layer corresponds to the predicative universe hierarchy, the pure layer is realized by the impredicative universe of propositions, and the mediation layer is reified via a parametricity type class. RETT is the first full dependent type theory to support consistent reasoning about exceptional terms, and the CoqRETT plugin readily brings this ability to Coq programmers.

Artifacts Available · Artifacts Functional
Simply RaTT: A Fitch-Style Modal Calculus for Reactive Programming without Space Leaks
Patrick Bahr, Christian Uldal Graulund, and Rasmus Ejlers Møgelberg
(IT University of Copenhagen, Denmark)
Functional reactive programming (FRP) is a paradigm for programming with signals and events, allowing the user to describe reactive programs on a high level of abstraction. For this to make sense, an FRP language must ensure that all programs are causal, and can be implemented without introducing space leaks and time leaks. To this end, some FRP languages do not give direct access to signals, but just to signal functions.
Recently, modal types have been suggested as an alternative approach to ensuring causality in FRP languages in the synchronous case, giving direct access to the signal and event abstractions. This paper presents Simply RaTT, a new modal calculus for reactive programming. Unlike prior calculi, Simply RaTT uses a Fitch-style approach to modal types, which simplifies the type system and makes programs more concise. Echoing a previous result by Krishnaswami for a different language, we devise an operational semantics that safely executes Simply RaTT programs without space leaks.
We also identify a source of time leaks present in other modal FRP languages: the unfolding of fixed points in delayed computations. The Fitch-style presentation offers an easy way to rule out these leaks, which appears not to be possible in the more traditional dual-context approach.

Artifacts Available · Artifacts Functional
Quantitative Program Reasoning with Graded Modal Types
Dominic Orchard, Vilem-Benjamin Liepelt, and Harley Eades III
(University of Kent, UK; Augusta University, USA)
In programming, some data acts as a resource (e.g., file handles, channels) subject to usage constraints. This poses a challenge to software correctness as most languages are agnostic to constraints on data. The approach of linear types provides a partial remedy, delineating data into resources to be used but never copied or discarded, and unconstrained values. Bounded Linear Logic provides a more fine-grained approach, quantifying non-linear use via an indexed-family of modalities. Recent work on coeffect types generalises this idea to graded comonads, providing type systems which can capture various program properties. Here, we propose the umbrella notion of graded modal types, encompassing coeffect types and dual notions of type-based effect reasoning via graded monads. In combination with linear and indexed types, we show that graded modal types provide an expressive type theory for quantitative program reasoning, advancing the reach of type systems to capture and verify a broader set of program properties. We demonstrate this approach via a type system embodied in a fully-fledged functional language called Granule, exploring various examples.
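The flavor of grading can be approximated in Haskell with type-level naturals (entirely our illustration; Granule enforces this natively in its typechecker, with grades drawn from arbitrary semirings):

    {-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}

    import GHC.TypeLits

    -- a value annotated with how many times it may be used
    newtype Graded (n :: Nat) a = Graded a

    -- splitting distributes the allowed uses between two consumers
    split :: Graded (n + m) a -> (Graded n a, Graded m a)
    split (Graded a) = (Graded a, Graded a)

    -- exactly one use remaining: extract the value
    use :: Graded 1 a -> a
    use (Graded a) = a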

Artifacts Available · Artifacts Functional

Types 2

Mixed Linear and Non-linear Recursive Types
Bert Lindenhovius, Michael Mislove, and Vladimir Zamdzhiev
(Tulane University, USA; University of Lorraine, France; CNRS, France; Inria, France; LORIA, France)
We describe a type system with mixed linear and non-linear recursive types called LNL-FPC (the linear/non-linear fixpoint calculus). The type system supports linear typing, which enhances the safety properties of programs, as well as non-linear typing, which makes the type system more convenient for programming. Just as in FPC, we show that LNL-FPC supports type-level recursion, which in turn induces term-level recursion. We also provide sound and computationally adequate categorical models for LNL-FPC which describe the categorical structure of the substructural operations of Intuitionistic Linear Logic at all non-linear types, including the recursive ones. In order to do so, we describe a new technique for solving recursive domain equations within the category CPO by constructing the solutions over pre-embeddings. The type system also enjoys implicit weakening and contraction rules, which we are able to model by identifying the canonical comonoid structure of all non-linear types. We also show that the requirements of our abstract model are reasonable by constructing a large class of concrete models that have found applications not only in classical functional programming, but also in emerging programming paradigms that incorporate linear types, such as quantum programming and circuit description programming languages.

A Mechanical Formalization of Higher-Ranked Polymorphic Type Inference
Jinxu Zhao, Bruno C. d. S. Oliveira, and Tom Schrijvers
(University of Hong Kong, China; KU Leuven, Belgium)
Modern functional programming languages, such as Haskell or OCaml, use sophisticated forms of type inference. Although type inference is an important topic in programming languages research, there is little work on mechanizing its metatheory in theorem provers. In particular, we are unaware of any complete formalization of the type inference algorithms that are the backbone of modern functional languages.
This paper presents the first full mechanical formalization of the metatheory for higher-ranked polymorphic type inference. The system that we formalize is the bidirectional type system by Dunfield and Krishnaswami (DK). The DK type system has two variants (a declarative and an algorithmic one) that have been manually proven sound, complete and decidable. We present a mechanical formalization in the Abella theorem prover of DK’s declarative type system with a novel algorithmic system. We have a few reasons to use a new algorithm. Firstly, our new algorithm employs worklist judgments, which precisely capture the scope of variables and simplify the formalization of scoping in a theorem prover. Secondly, while DK’s original formalization comes with very well-written manual proofs, there are several details missing and some incorrect proofs, which complicate the task of writing a mechanized proof. Despite the use of a different algorithm we prove the same results as DK, although with significantly different proofs and proof techniques. Since such type inference algorithms are quite subtle and have a complex metatheory, mechanical formalizations are an important advance in type-inference research.

Artifacts Functional
An Efficient Algorithm for Type-Safe Structural Diffing
Victor Cacciari Miraldo and Wouter Swierstra
(Utrecht University, Netherlands)
Effectively computing the difference between two versions of a source file has become an indispensable part of software development. The de facto standard tool used by most version control systems is the UNIX diff utility, which compares two files on a line-by-line basis without any regard for the structure of the data stored in these files.
This paper presents an alternative datatype generic algorithm for computing the difference between two values of any algebraic datatype. This algorithm maximizes sharing between the source and target trees, while still running in linear time.
Finally, this paper demonstrates that by instantiating this algorithm to the Lua abstract syntax tree and mining the commit history of repositories found on GitHub, the resulting patches can often be merged automatically, even when existing technology has failed.
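A toy version fixes intuitions (our simplification over plain binary trees; the paper's construction is datatype-generic and computes patches efficiently):

    data Tree = Leaf Int | Node Tree Tree

    -- A patch either shares a whole subtree, replaces it, or descends.
    data Patch
      = Cpy                -- copy the source subtree unchanged (sharing)
      | Rep Tree           -- replace the subtree wholesale
      | Node2 Patch Patch  -- patch both children of a Node

    apply :: Patch -> Tree -> Maybe Tree
    apply Cpy         t          = Just t
    apply (Rep t')    _          = Just t'
    apply (Node2 p q) (Node l r) = Node <$> apply p l <*> apply q r
    apply (Node2 _ _) (Leaf _)   = Nothing   -- the patch does not fit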


Lambda-Calculus and Teaching

Call-by-Need Is Clairvoyant Call-by-Value
Jennifer Hackett and Graham Hutton
(University of Nottingham, UK)
Call-by-need evaluation, also known as lazy evaluation, provides two key benefits: compositional programming and infinite data. The standard semantics for laziness is Launchbury’s natural semantics, which uses a heap to memoise the results of delayed evaluations. However, the stateful nature of this heap greatly complicates reasoning about the operational behaviour of lazy programs. In this article, we propose an alternative semantics for laziness, clairvoyant evaluation, that replaces the state effect with nondeterminism, and prove this semantics equivalent in a strong sense to the standard semantics. We show how this new semantics greatly simplifies operational reasoning, admitting much simpler proofs of a number of results from the literature, and how it leads to the first denotational cost semantics for lazy evaluation.
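The essence of clairvoyant evaluation fits in a few lines of Haskell over a toy language (our illustration, not the paper's calculus): at each binding the evaluator nondeterministically either evaluates eagerly or drops the binding, getting stuck later if the variable turns out to be needed.

    import Control.Monad (mplus)

    data Exp = Lit Int | Var String | Add Exp Exp | Let String Exp Exp

    type Env = [(String, Int)]

    eval :: Env -> Exp -> [Int]   -- the list monad models the choices
    eval _   (Lit n)   = [n]
    eval env (Var x)   = maybe [] pure (lookup x env)  -- dropped: stuck
    eval env (Add a b) = (+) <$> eval env a <*> eval env b
    eval env (Let x e b) =
          (do v <- eval env e            -- clairvoyantly evaluate now ...
              eval ((x, v) : env) b)
      `mplus` eval env b                 -- ... or discard the binding

A cost semantics then selects the cheapest successful run, which matches what call-by-need's heap would have computed, but without threading any state.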

Teaching the Art of Functional Programming using Automated Grading (Experience Report)
Aliya Hameer and Brigitte Pientka
(McGill University, Canada)
Online programming platforms have immense potential to improve students' educational experience. They make programming more accessible, as no installation is required; and automatic grading facilities provide students with immediate feedback on their code, allowing them to fix bugs and address errors in their understanding right away. However, these graders tend to focus heavily on the functional correctness of a solution, neglecting other aspects of students' code and thereby causing students to miss out on a significant amount of valuable feedback.
In this paper, we recount our experience in using the Learn-OCaml online programming platform to teach functional programming in a second-year university course on programming languages and paradigms. Moreover, we explore how to leverage Learn-OCaml's automated grading infrastructure to make it easy to write more expressive graders that give students feedback on properties of their code beyond simple input/output correctness, in order to effectively teach elements of functional programming style. In particular, we describe our extensions to the Learn-OCaml platform that evaluate students on test quality and code style.
By providing these tools and a suite of our own homework problems and associated graders, we aim to promote functional programming education, enhance students' educational experience, and make teaching and learning typed functional programming more accessible to instructors and students alike, in our community and beyond.

Artifacts Available · Artifacts Functional
Lambda: The Ultimate Sublanguage (Experience Report)
Jeremy Yallop and Leo White
(University of Cambridge, UK; Jane Street, UK)
We describe our experience teaching an advanced typed functional programming course based around the use of System Fω as a programming language.

