ICFP 2019 – Author Index 
Abel, Andreas 
ICFP '19: "Cubical Agda: A Dependently ..."
Cubical Agda: A Dependently Typed Programming Language with Univalence and Higher Inductive Types
Andrea Vezzosi, Anders Mörtberg, and Andreas Abel (IT University of Copenhagen, Denmark; Stockholm University, Sweden; Carnegie Mellon University, USA; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden) Proof assistants based on dependent type theory provide expressive languages for both programming and proving within the same system. However, all of the major implementations lack powerful extensionality principles for reasoning about equality, such as function and propositional extensionality. These principles are typically added axiomatically, which disrupts the constructive properties of these systems. Cubical type theory provides a solution by giving computational meaning to Homotopy Type Theory and Univalent Foundations, in particular to the univalence axiom and higher inductive types. This paper describes an extension of the dependently typed functional programming language Agda with cubical primitives, making it into a full-blown proof assistant with native support for univalence and a general schema of higher inductive types. These new primitives make function and propositional extensionality as well as quotient types directly definable with computational content. Additionally, thanks also to copatterns, bisimilarity is equivalent to equality for coinductive types. This extends Agda with support for a wide range of extensionality principles, without sacrificing type checking and constructivity. @Article{ICFP19p87, author = {Andrea Vezzosi and Anders Mörtberg and Andreas Abel}, title = {Cubical Agda: A Dependently Typed Programming Language with Univalence and Higher Inductive Types}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {87}, numpages = {29}, doi = {10.1145/3341691}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional

Acar, Umut A. 
ICFP '19: "Fairness in Responsive Parallelism ..."
Fairness in Responsive Parallelism
Stefan K. Muller, Sam Westrick, and Umut A. Acar (Carnegie Mellon University, USA; Inria, France) Research on parallel computing has historically revolved around compute-intensive applications drawn from traditional areas such as high-performance computing. With the growing availability and usage of multicore chips, applications of parallel computing now include interactive parallel applications that mix compute-intensive tasks with interaction, e.g., with the user or more generally with the external world. Recent theoretical work on responsive parallelism presents abstract cost models and type systems for ensuring and reasoning about responsiveness and throughput of such interactive parallel programs. In this paper, we extend prior work by considering a crucial metric: fairness. To express rich interactive parallel programs, we allow programmers to assign priorities to threads and instruct the scheduler to obey a notion of fairness. We then propose the fairly prompt scheduling principle for executing such programs; the principle specifies the schedule for multithreaded programs on multiple processors. For such schedules, we prove theoretical bounds on the execution and response times of jobs of various priorities. In particular, we bound the amount, i.e., stretch, by which a low-priority job can be delayed by higher-priority work. We also present an algorithm designed to approximate the fairly prompt scheduling principle on multicore computers, implement the algorithm by extending the Standard ML language, and present an empirical evaluation. @Article{ICFP19p81, author = {Stefan K. Muller and Sam Westrick and Umut A. Acar}, title = {Fairness in Responsive Parallelism}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {81}, numpages = {30}, doi = {10.1145/3341685}, year = {2019}, } Publisher's Version Article Search
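The priority-with-fairness idea above can be sketched in miniature. The toy Python scheduler below is an illustration only (not the paper's fairly prompt scheduling algorithm): it mostly runs the highest-priority work but periodically services the lowest non-empty priority, so a low-priority job's delay (its stretch) stays bounded by the fairness period.

```python
# Toy sketch (not the paper's algorithm): a priority scheduler that
# mostly runs high-priority work but periodically services lower
# priorities so no job is starved.
from collections import deque

def schedule(jobs, fairness_period=3):
    """jobs: list of (name, priority, work); higher priority = more urgent.
    Returns the order in which unit-size work steps were executed."""
    queues = {}
    for name, prio, work in jobs:
        queues.setdefault(prio, deque()).extend([name] * work)
    order, step = [], 0
    while any(queues.values()):
        prios = sorted((p for p, q in queues.items() if q), reverse=True)
        # Every `fairness_period`-th step, run the *lowest* non-empty
        # priority instead of the highest, so low-priority jobs progress.
        p = prios[-1] if step % fairness_period == fairness_period - 1 else prios[0]
        order.append(queues[p].popleft())
        step += 1
    return order
```

With a fairness period of 3, a low-priority job is serviced at latest on the third step, regardless of how much high-priority work is pending.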

Ahman, Danel 
ICFP '19: "Dijkstra Monads for All ..."
Dijkstra Monads for All
Kenji Maillard, Danel Ahman, Robert Atkey, Guido Martínez, Cătălin Hriţcu, Exequiel Rivas, and Éric Tanter (Inria, France; ENS, France; University of Ljubljana, Slovenia; University of Strathclyde, UK; CIFASIS-CONICET, Argentina; University of Chile, Chile) This paper proposes a general semantic framework for verifying programs with arbitrary monadic side effects using Dijkstra monads, which we define as monad-like structures indexed by a specification monad. We prove that any monad morphism between a computational monad and a specification monad gives rise to a Dijkstra monad, which provides great flexibility for obtaining Dijkstra monads tailored to the verification task at hand. We moreover show that a large variety of specification monads can be obtained by applying monad transformers to various base specification monads, including predicate transformers and Hoare-style pre- and postconditions. For defining correct monad transformers, we propose a language inspired by Moggi's monadic metalanguage that is parameterized by a dependent type theory. We also develop a notion of algebraic operations for Dijkstra monads, and start to investigate two ways of also accommodating effect handlers. We implement our framework in both Coq and F*, and illustrate that it supports a wide variety of verification styles for effects such as exceptions, nondeterminism, state, input-output, and general recursion. @Article{ICFP19p104, author = {Kenji Maillard and Danel Ahman and Robert Atkey and Guido Martínez and Cătălin Hriţcu and Exequiel Rivas and Éric Tanter}, title = {Dijkstra Monads for All}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {104}, numpages = {29}, doi = {10.1145/3341708}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional
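The central observation — a monad morphism from a computational monad into a specification monad of predicate transformers yields a weakest-precondition semantics — can be sketched concretely. The Python below is a minimal illustration for the state monad only; the names (`ret`, `bind`, `wp`) are illustrative, not the paper's API.

```python
# State monad: a computation is a function s -> (result, new_state).
def ret(x):
    return lambda s: (x, s)

def bind(m, f):
    def run(s):
        x, s2 = m(s)
        return f(x)(s2)
    return run

def get():    return lambda s: (s, s)
def put(s2):  return lambda s: (None, s2)

def wp(m):
    """Monad morphism into the specification monad: a state computation
    becomes a predicate transformer mapping a postcondition on
    (result, final state) to a precondition on the initial state."""
    return lambda post: (lambda s: post(*m(s)))

# A tiny program: increment the state and return the old value.
prog = bind(get(), lambda x: bind(put(x + 1), lambda _: ret(x)))
```

Here `wp(prog)(lambda r, s: s == r + 1)` is a precondition that holds in every initial state, reflecting that the program always leaves the state one above its result.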

Ahmed, Amal 
ICFP '19: "The Next 700 Compiler Correctness ..."
The Next 700 Compiler Correctness Theorems (Functional Pearl)
Daniel Patterson and Amal Ahmed (Northeastern University, USA) Compiler correctness is an old problem, with results stretching back beyond the last half-century. Founding the field, John McCarthy and James Painter set out to build a "completely trustworthy compiler". And yet, until quite recently, even despite truly impressive verification efforts, the theorems being proved were only about the compilation of whole programs, a theoretically quite appealing but practically unrealistic simplification. For a compiler correctness theorem to assure complete trust, the theorem must reflect the reality of how the compiler will be used. There has been much recent work on more realistic "compositional" compiler correctness aimed at proving correct compilation of components while supporting linking with components compiled from different languages using different compilers. However, the variety of theorems, stated in remarkably different ways, raises questions about what researchers even mean by a "correct compiler." In this pearl, we develop a new framework with which to understand compiler correctness theorems in the presence of linking, and apply it to understanding and comparing this diversity of results. In doing so, not only are we better able to assess their relative strengths and weaknesses, but we also gain insight into what we as a community should expect from compiler correctness theorems of the future. @Article{ICFP19p85, author = {Daniel Patterson and Amal Ahmed}, title = {The Next 700 Compiler Correctness Theorems (Functional Pearl)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {85}, numpages = {29}, doi = {10.1145/3341689}, year = {2019}, } Publisher's Version Article Search Info

Albarghouthi, Aws 
ICFP '19: "Synthesizing Differentially ..."
Synthesizing Differentially Private Programs
Calvin Smith and Aws Albarghouthi (University of Wisconsin-Madison, USA) Inspired by the proliferation of data-analysis tasks, recent research in program synthesis has had a strong focus on enabling users to specify data-analysis programs through intuitive specifications, like examples and natural language. However, with the ever-increasing threat to privacy through data analysis, we believe it is imperative to reimagine program synthesis technology in the presence of formal privacy constraints. In this paper, we study the problem of automatically synthesizing randomized, differentially private programs, where the user can provide the synthesizer with a constraint on the privacy of the desired algorithm. We base our technique on a linear dependent type system that can track the resources consumed by a program, and hence its privacy cost. We develop a novel type-directed synthesis algorithm that constructs randomized differentially private programs. We apply our technique to the problems of synthesizing database-like queries as well as recursive differential privacy mechanisms from the literature. @Article{ICFP19p94, author = {Calvin Smith and Aws Albarghouthi}, title = {Synthesizing Differentially Private Programs}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {94}, numpages = {29}, doi = {10.1145/3341698}, year = {2019}, } Publisher's Version Article Search
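The kind of program being synthesized — a randomized mechanism with an explicit privacy cost — looks roughly like the standard Laplace mechanism. The sketch below tracks the epsilon budget at runtime for illustration; in the paper this accounting is done statically by a linear dependent type system, and the class and function names here are invented for the example.

```python
# Illustrative sketch: the Laplace mechanism with explicit
# epsilon accounting (runtime stand-in for the paper's type-level
# resource tracking).
import random

class PrivacyBudget:
    def __init__(self, epsilon):
        self.remaining = epsilon
    def spend(self, eps):
        if eps > self.remaining:
            raise RuntimeError("privacy budget exceeded")
        self.remaining -= eps

def laplace_mech(true_value, sensitivity, eps, budget, rng=random):
    """Release true_value with Laplace(sensitivity/eps) noise,
    charging eps to the budget."""
    budget.spend(eps)
    b = sensitivity / eps
    # A Laplace(b) sample is the difference of two Exp(mean b) samples.
    return true_value + rng.expovariate(1 / b) - rng.expovariate(1 / b)
```

Each call charges its epsilon up front, so a query sequence that would exceed the stated privacy constraint is rejected.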

Algehed, Maximilian 
ICFP '19: "Simple Noninterference from ..."
Simple Noninterference from Parametricity
Maximilian Algehed and Jean-Philippe Bernardy (Chalmers University of Technology, Sweden; University of Gothenburg, Sweden) In this paper we revisit the connection between parametricity and noninterference. Our primary contribution is a proof of noninterference for a polyvariant variation of the Dependency Core Calculus of Abadi et al. in the Calculus of Constructions. The proof is modular: it leverages parametricity for the Calculus of Constructions and the encoding of data abstraction using existential types. This perspective gives rise to simple and understandable proofs of noninterference from parametricity. All our contributions have been mechanised in the Agda proof assistant. @Article{ICFP19p89, author = {Maximilian Algehed and Jean-Philippe Bernardy}, title = {Simple Noninterference from Parametricity}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {89}, numpages = {22}, doi = {10.1145/3341693}, year = {2019}, } Publisher's Version Article Search

Appel, Andrew W. 
ICFP '19: "Closure Conversion Is Safe ..."
Closure Conversion Is Safe for Space
Zoe Paraskevopoulou and Andrew W. Appel (Princeton University, USA) We formally prove that closure conversion with flat environments for CPS lambda calculus is correct (preserves semantics) and safe for time and space, meaning that produced code preserves the time and space required for the execution of the source program. We give a cost model to pre- and post-closure-conversion code by formalizing profiling semantics that keep track of the time and space resources needed for the execution of a program, taking garbage collection into account. To show preservation of time and space we set up a general, "garbage-collection compatible", binary logical relation that establishes invariants on resource consumption of the related programs, along with functional correctness. Using this framework, we show semantics preservation and space and time safety for terminating source programs, and divergence preservation and space safety for diverging source programs. This is the first formal proof of space-safety of a closure-conversion transformation. 
The transformation and the proof are part of the CertiCoq compiler pipeline from Coq (Gallina) through CompCert Clight to assembly language. Our results are mechanized in the Coq proof assistant. @Article{ICFP19p83, author = {Zoe Paraskevopoulou and Andrew W. Appel}, title = {Closure Conversion Is Safe for Space}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {83}, numpages = {29}, doi = {10.1145/3341687}, year = {2019}, } Publisher's Version Article Search Artifacts Functional
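Closure conversion with flat environments can be sketched in a tiny interpreter: each lambda becomes a pair of a code pointer taking an explicit environment and a tuple holding exactly the values of its free variables. The Python below is an illustration of the general technique, not the CertiCoq transformation or its cost model.

```python
# Toy sketch of closure conversion with flat environments: a lambda
# evaluates to (code, flat_env), where flat_env contains exactly the
# free variables' values (this minimality is what makes it space-safe).

def free_vars(t, bound=frozenset()):
    kind = t[0]
    if kind == "var":
        return set() if t[1] in bound else {t[1]}
    if kind == "lam":
        return free_vars(t[2], bound | {t[1]})
    if kind == "app":
        return free_vars(t[1], bound) | free_vars(t[2], bound)
    return set()

def evaluate(t, env):
    kind = t[0]
    if kind == "var":
        return env[t[1]]
    if kind == "lam":
        fvs = sorted(free_vars(t))
        flat = tuple(env[v] for v in fvs)   # capture free vars only
        def code(flat_env, arg, _x=t[1], _body=t[2], _fvs=fvs):
            inner = dict(zip(_fvs, flat_env))
            inner[_x] = arg
            return evaluate(_body, inner)
        return (code, flat)
    if kind == "app":
        code, flat = evaluate(t[1], env)
        return code(flat, evaluate(t[2], env))
```

Because the flat environment holds only the free variables, a closure cannot retain the rest of the defining environment, which is the intuition behind space safety.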

Atkey, Robert 
ICFP '19: "Dijkstra Monads for All ..."
Dijkstra Monads for All
Kenji Maillard, Danel Ahman, Robert Atkey, Guido Martínez, Cătălin Hriţcu, Exequiel Rivas, and Éric Tanter (Inria, France; ENS, France; University of Ljubljana, Slovenia; University of Strathclyde, UK; CIFASIS-CONICET, Argentina; University of Chile, Chile) This paper proposes a general semantic framework for verifying programs with arbitrary monadic side effects using Dijkstra monads, which we define as monad-like structures indexed by a specification monad. We prove that any monad morphism between a computational monad and a specification monad gives rise to a Dijkstra monad, which provides great flexibility for obtaining Dijkstra monads tailored to the verification task at hand. We moreover show that a large variety of specification monads can be obtained by applying monad transformers to various base specification monads, including predicate transformers and Hoare-style pre- and postconditions. For defining correct monad transformers, we propose a language inspired by Moggi's monadic metalanguage that is parameterized by a dependent type theory. We also develop a notion of algebraic operations for Dijkstra monads, and start to investigate two ways of also accommodating effect handlers. We implement our framework in both Coq and F*, and illustrate that it supports a wide variety of verification styles for effects such as exceptions, nondeterminism, state, input-output, and general recursion. @Article{ICFP19p104, author = {Kenji Maillard and Danel Ahman and Robert Atkey and Guido Martínez and Cătălin Hriţcu and Exequiel Rivas and Éric Tanter}, title = {Dijkstra Monads for All}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {104}, numpages = {29}, doi = {10.1145/3341708}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional

Baanen, Tim 
ICFP '19: "A Predicate Transformer Semantics ..."
A Predicate Transformer Semantics for Effects (Functional Pearl)
Wouter Swierstra and Tim Baanen (Utrecht University, Netherlands) Reasoning about programs that use effects can be much harder than reasoning about their pure counterparts. This paper presents a predicate transformer semantics for a variety of effects, including exceptions, state, nondeterminism, and general recursion. The predicate transformer semantics gives rise to a refinement relation that can be used to relate a program to its specification, or even calculate effectful programs that are correct by construction. @Article{ICFP19p103, author = {Wouter Swierstra and Tim Baanen}, title = {A Predicate Transformer Semantics for Effects (Functional Pearl)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {103}, numpages = {26}, doi = {10.1145/3341707}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
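The predicate-transformer treatment of effects can be shown on a tiny command syntax. The sketch below covers two of the effects named above, exceptions and (demonic) nondeterminism: `wp` maps a program and a postcondition to a verdict on whether every execution satisfies it. This is an illustration of the semantics' shape, not the paper's Agda development.

```python
# Sketch of a predicate-transformer semantics for exceptions and
# nondeterminism over a tiny program syntax:
#   ("ret", v)        -- pure result v
#   ("throw",)        -- raise an exception
#   ("choice", p, q)  -- nondeterministically run p or q

def wp(prog, post):
    kind = prog[0]
    if kind == "ret":            # pure: the postcondition decides
        return post(prog[1])
    if kind == "throw":          # exceptions: no result is produced,
        return False             # so total correctness fails
    if kind == "choice":         # demonic choice: both branches
        return wp(prog[1], post) and wp(prog[2], post)
    raise ValueError(kind)
```

The refinement relation from the abstract corresponds to pointwise implication between such transformers: a program refines a specification when its `wp` is at least as permissive.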

Bahr, Patrick 
ICFP '19: "Simply RaTT: A Fitch-Style ..."
Simply RaTT: A Fitch-Style Modal Calculus for Reactive Programming without Space Leaks
Patrick Bahr, Christian Uldal Graulund, and Rasmus Ejlers Møgelberg (IT University of Copenhagen, Denmark) Functional reactive programming (FRP) is a paradigm for programming with signals and events, allowing the user to describe reactive programs on a high level of abstraction. For this to make sense, an FRP language must ensure that all programs are causal, and can be implemented without introducing space leaks and time leaks. To this end, some FRP languages do not give direct access to signals, but just to signal functions. Recently, modal types have been suggested as an alternative approach to ensuring causality in FRP languages in the synchronous case, giving direct access to the signal and event abstractions. This paper presents Simply RaTT, a new modal calculus for reactive programming. Unlike prior calculi, Simply RaTT uses a Fitch-style approach to modal types, which simplifies the type system and makes programs more concise. Echoing a previous result by Krishnaswami for a different language, we devise an operational semantics that safely executes Simply RaTT programs without space leaks. We also identify a source of time leaks present in other modal FRP languages: the unfolding of fixed points in delayed computations. The Fitch-style presentation allows an easy way to rule out these leaks, which appears not to be possible in the more traditional dual-context approach. @Article{ICFP19p109, author = {Patrick Bahr and Christian Uldal Graulund and Rasmus Ejlers Møgelberg}, title = {Simply RaTT: A Fitch-Style Modal Calculus for Reactive Programming without Space Leaks}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {109}, numpages = {27}, doi = {10.1145/3341713}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional
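The signal abstraction that modal FRP gives direct access to can be mimicked with guarded streams: a head plus a thunk for the tail, where the thunk plays the role of the "later" modality, delaying the rest of the signal by one tick. This Python sketch only illustrates the stream shape; it does not model Simply RaTT's typing discipline or its leak freedom guarantees.

```python
# Guarded streams: (head, thunk-of-tail). Forcing the thunk advances
# the signal by one tick; nothing past the current tick is computed.

def const(x):
    return (x, lambda: const(x))

def nats(n=0):
    return (n, lambda: nats(n + 1))

def smap(f, s):
    head, tail = s
    return (f(head), lambda: smap(f, tail()))

def take(n, s):
    """Observe the first n ticks of a signal."""
    out = []
    for _ in range(n):
        head, tail = s
        out.append(head)
        s = tail()
    return out
```

In the calculus, the type system guarantees that programs written against this interface are causal and leak-free; the untyped sketch has no such guarantee.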

Bernardy, Jean-Philippe 
ICFP '19: "Simple Noninterference from ..."
Simple Noninterference from Parametricity
Maximilian Algehed and Jean-Philippe Bernardy (Chalmers University of Technology, Sweden; University of Gothenburg, Sweden) In this paper we revisit the connection between parametricity and noninterference. Our primary contribution is a proof of noninterference for a polyvariant variation of the Dependency Core Calculus of Abadi et al. in the Calculus of Constructions. The proof is modular: it leverages parametricity for the Calculus of Constructions and the encoding of data abstraction using existential types. This perspective gives rise to simple and understandable proofs of noninterference from parametricity. All our contributions have been mechanised in the Agda proof assistant. @Article{ICFP19p89, author = {Maximilian Algehed and Jean-Philippe Bernardy}, title = {Simple Noninterference from Parametricity}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {89}, numpages = {22}, doi = {10.1145/3341693}, year = {2019}, } Publisher's Version Article Search

Birkedal, Lars 
ICFP '19: "Mechanized Relational Verification ..."
Mechanized Relational Verification of Concurrent Programs with Continuations
Amin Timany and Lars Birkedal (KU Leuven, Belgium; Aarhus University, Denmark) Concurrent higher-order imperative programming languages with continuations are very flexible and allow for the implementation of sophisticated programming patterns. For instance, it is well known that continuations can be used to implement cooperative concurrency. Continuations can also simplify web server implementations; in particular, they simplify keeping track of the state of the server's clients. However, such advanced programming languages are very challenging to reason about. One of the main challenges in reasoning about programs in the presence of continuations is due to the fact that the non-local flow of control breaks the bind rule, one of the important modular reasoning principles of Hoare logic. In this paper we present the first completely formalized tool for interactive mechanized relational verification of programs written in a concurrent higher-order imperative programming language with continuations (call/cc and throw). We develop novel logical relations which can be used to give mechanized proofs of relational properties. In particular, we prove correctness of an implementation of cooperative concurrency with continuations. In addition, we show that a rudimentary web server implemented using the continuation-based pattern is contextually equivalent to one implemented without the continuation-based pattern. We introduce context-local reasoning principles for our calculus which allow us to regain modular reasoning principles for the fragment of the language without non-local control flow. These novel reasoning principles can be used in tandem with our (non-context-local) Hoare logic for reasoning about programs that do feature non-local control flow. Indeed, we use the combination of context-local and non-context-local reasoning to simplify reasoning about the examples. 
@Article{ICFP19p105, author = {Amin Timany and Lars Birkedal}, title = {Mechanized Relational Verification of Concurrent Programs with Continuations}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {105}, numpages = {28}, doi = {10.1145/3341709}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
ICFP '19: "Implementing a Modal Dependent ..."
Implementing a Modal Dependent Type Theory
Daniel Gratzer, Jonathan Sterling, and Lars Birkedal (Aarhus University, Denmark; Carnegie Mellon University, USA) Modalities are everywhere in programming and mathematics! Despite this, however, there are still significant technical challenges in formulating a core dependent type theory with modalities. We present a dependent type theory MLTT_{🔒} supporting the connectives of standard Martin-Löf Type Theory as well as an S4-style necessity operator. MLTT_{🔒} supports a smooth interaction between modal and dependent types and provides a common basis for the use of modalities in programming and in synthetic mathematics. We design and prove the soundness and completeness of a type checking algorithm for MLTT_{🔒}, using a novel extension of normalization by evaluation. We have also implemented our algorithm in a prototype proof assistant for MLTT_{🔒}, demonstrating the ease of applying our techniques. @Article{ICFP19p107, author = {Daniel Gratzer and Jonathan Sterling and Lars Birkedal}, title = {Implementing a Modal Dependent Type Theory}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {107}, numpages = {29}, doi = {10.1145/3341711}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
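The cooperative-concurrency pattern verified by Timany and Birkedal can be sketched with Python generators standing in for call/cc and throw: `yield` captures "the rest of the thread" and hands control back to a scheduler. This is only an analogy for the control-flow pattern, not their Coq development.

```python
# Cooperative concurrency via generators: each `yield` is a
# cooperative yield point; the scheduler resumes threads round-robin.
from collections import deque

def run(threads):
    """Run each thread until it yields, requeueing it until it
    finishes. Returns the trace of yielded values."""
    queue, trace = deque(threads), []
    while queue:
        t = queue.popleft()
        try:
            trace.append(next(t))
            queue.append(t)        # thread yielded: reschedule it
        except StopIteration:
            pass                   # thread finished: drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"        # yield control to the scheduler
```

The interleaved trace makes the cooperative scheduling visible: threads alternate exactly at their yield points.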

Bottu, Gert-Jan 
ICFP '19: "Coherence of Type Class Resolution ..."
Coherence of Type Class Resolution
Gert-Jan Bottu, Ningning Xie, Koar Marntirosian, and Tom Schrijvers (KU Leuven, Belgium; University of Hong Kong, China) Elaboration-based type class resolution, as found in languages like Haskell, Mercury, and PureScript, is generally nondeterministic: there can be multiple ways to satisfy a wanted constraint in terms of global instances and locally given constraints. Coherence is the key property that keeps this sane; it guarantees that, despite the nondeterminism, programs still behave predictably. Even though elaboration-based resolution is generally assumed coherent, as far as we know, there is no formal proof of this property in the presence of sources of nondeterminism, like superclasses and flexible contexts. This paper provides a formal proof to remedy the situation. The proof is nontrivial because the semantics elaborates resolution into a target language where different elaborations can be distinguished by contexts that do not have a source language counterpart. Inspired by the notion of full abstraction, we present a two-step strategy that first elaborates nondeterministically into an intermediate language that preserves contextual equivalence, and then deterministically elaborates from there into the target language. We use an approach based on logical relations to establish contextual equivalence and thus coherence for the first step of elaboration, while the second step’s determinism straightforwardly preserves this coherence property. @Article{ICFP19p91, author = {Gert-Jan Bottu and Ningning Xie and Koar Marntirosian and Tom Schrijvers}, title = {Coherence of Type Class Resolution}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {91}, numpages = {28}, doi = {10.1145/3341695}, year = {2019}, } Publisher's Version Article Search

Carbin, Michael 
ICFP '19: "Sound and Robust Solid Modeling ..."
Sound and Robust Solid Modeling via Exact Real Arithmetic and Continuity
Benjamin Sherman, Jesse Michel, and Michael Carbin (Massachusetts Institute of Technology, USA) Algorithms for solid modeling, i.e., Computer-Aided Design (CAD) and computer graphics, are often specified on real numbers and then implemented with finite-precision arithmetic, such as floating-point. The result is that these implementations do not soundly compute the results that are expected from their specifications. We present a new library, StoneWorks, that provides sound and robust solid modeling primitives. We implement StoneWorks in MarshallB, a pure functional programming language for exact real arithmetic in which types denote topological spaces and functions denote continuous maps, ensuring that all programs are sound and robust. We developed MarshallB as an extension of the Marshall language. We also define a new shape representation, compact representation (Krep), that enables constructions such as Minkowski sum and analyses such as Hausdorff distance that are not possible with traditional representations. Krep is a nondeterminism monad for describing all the points in a shape. With our library, language, and representation together, we show that short StoneWorks programs can specify and execute sound and robust solid modeling algorithms and tasks. @Article{ICFP19p99, author = {Benjamin Sherman and Jesse Michel and Michael Carbin}, title = {Sound and Robust Solid Modeling via Exact Real Arithmetic and Continuity}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {99}, numpages = {29}, doi = {10.1145/3341703}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional
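The core soundness idea of exact real arithmetic — compute with intervals guaranteed to contain the true real, refinable to any precision — can be shown in a few lines. This Python sketch illustrates that idea only; MarshallB's semantics (topological spaces, continuous maps) is far richer.

```python
# Sound interval arithmetic over exact rationals: every operation
# returns an interval guaranteed to contain the true result.
from fractions import Fraction

def interval_add(a, b):
    (alo, ahi), (blo, bhi) = a, b
    return (alo + blo, ahi + bhi)

def interval_mul(a, b):
    (alo, ahi), (blo, bhi) = a, b
    products = [alo * blo, alo * bhi, ahi * blo, ahi * bhi]
    return (min(products), max(products))

def sqrt2(precision):
    """A shrinking rational interval around sqrt(2), by bisection:
    each extra precision step halves the interval's width."""
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(precision):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return (lo, hi)
```

Unlike floating-point, the answer here is never silently wrong: it is an interval that provably brackets the true value, and asking for more precision shrinks it.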

Carette, Jacques 
ICFP '19: "From High-Level Inference ..."
From High-Level Inference Algorithms to Efficient Code
Rajan Walia, Praveen Narayanan, Jacques Carette, Sam Tobin-Hochstadt, and Chung-chieh Shan (Indiana University, USA; McMaster University, Canada) Probabilistic programming languages are valuable because they allow domain experts to express probabilistic models and inference algorithms without worrying about irrelevant details. However, for decades there remained an important and popular class of probabilistic inference algorithms whose efficient implementation required manual low-level coding that is tedious and error-prone. They are algorithms whose idiomatic expression requires random array variables that are latent or whose likelihood is conjugate. Although that is how practitioners communicate and compose these algorithms on paper, executing such expressions requires eliminating the latent variables and recognizing the conjugacy by symbolic mathematics. Moreover, matching the performance of handwritten code requires speeding up loops by more than a constant factor. We show how probabilistic programs that directly and concisely express these desired inference algorithms can be compiled while maintaining efficiency. We introduce new transformations that turn high-level probabilistic programs with arrays into pure loop code. We then make great use of domain-specific invariants and norms to optimize the code, and to specialize and JIT-compile the code per execution. The resulting performance is competitive with manual implementations. @Article{ICFP19p98, author = {Rajan Walia and Praveen Narayanan and Jacques Carette and Sam Tobin-Hochstadt and Chung-chieh Shan}, title = {From High-Level Inference Algorithms to Efficient Code}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {98}, numpages = {30}, doi = {10.1145/3341702}, year = {2019}, } Publisher's Version Article Search Artifacts Available
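The conjugacy that such compilers recognize symbolically has a classic textbook instance: with a Beta(a, b) prior on a coin's bias and observed Bernoulli outcomes, the posterior is Beta(a + heads, b + tails) in closed form, so the latent bias never has to be sampled. The sketch below shows that closed-form update only; the paper's contribution is recognizing and exploiting such structure automatically, at array scale.

```python
# Beta-Bernoulli conjugacy: the posterior over a coin's bias is
# available in closed form, eliminating the latent variable.

def beta_bernoulli_update(a, b, observations):
    """Update a Beta(a, b) prior with a list of 0/1 outcomes."""
    heads = sum(observations)
    tails = len(observations) - heads
    return (a + heads, b + tails)

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)
```

A generic sampler would approximate this posterior with thousands of Monte Carlo draws; recognizing the conjugacy replaces all of that with two additions.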

Černý, Pavol 
ICFP '19: "Sequential Programming for ..."
Sequential Programming for Replicated Data Stores
Nicholas V. Lewchenko, Arjun Radhakrishna, Akash Gaonkar, and Pavol Černý (University of Colorado Boulder, USA; Microsoft, USA) We introduce Carol, a refinement-typed programming language for replicated data stores. The salient feature of Carol is that it allows programming and verifying replicated store operations modularly, without consideration of other operations that might interleave, and sequentially, without requiring reference to or knowledge of the concurrent execution model. This is in stark contrast with existing systems, which require understanding the concurrent interactions of all pairs of operations when developing or verifying them. The key enabling idea is the consistency guard, a two-state predicate relating the locally-viewed store and the hypothetical remote store that an operation’s updates may eventually be applied to, which is used by the Carol programmer to declare their precise consistency requirements. Guards appear to the programmer and refinement typechecker as simple data preconditions, enabling sequential reasoning, while appearing to the distributed runtime as consistency control instructions. We implement and evaluate the Carol system in two parts: (1) the algorithm used to statically translate guards into the runtime coordination actions required to enforce them, and (2) the networked-replica runtime which executes arbitrary operations, written in a Haskell DSL, according to the Carol language semantics. @Article{ICFP19p106, author = {Nicholas V. Lewchenko and Arjun Radhakrishna and Akash Gaonkar and Pavol Černý}, title = {Sequential Programming for Replicated Data Stores}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {106}, numpages = {28}, doi = {10.1145/3341710}, year = {2019}, } Publisher's Version Article Search
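The consistency-guard idea — a two-state predicate relating the locally-viewed store to any hypothetical remote store the update may reach — can be illustrated with a toy runtime check. All names below (`guarded_op`, `safe_to_apply`) are invented for the example; in Carol, guards are discharged statically by the refinement typechecker and translated into coordination actions, not checked at runtime.

```python
# Toy sketch of consistency guards: an operation carries a two-state
# predicate over (local_view, remote_store); the runtime may apply the
# operation's update at a remote store only when the guard holds.

def guarded_op(guard, update):
    """Package an operation with its consistency guard."""
    return {"guard": guard, "update": update}

def safe_to_apply(op, local, remote):
    return op["guard"](local, remote)

# A bank withdrawal: safe as long as the remote balance is no lower
# than the balance we saw locally, so we can never overdraw.
withdraw = guarded_op(
    guard=lambda local, remote: remote >= local,
    update=lambda balance, amount: balance - amount,
)
```

The programmer reasons about `withdraw` sequentially against its guard, as if it were a simple data precondition, while the guard tells the store how much coordination the operation actually needs.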

Chlipala, Adam 
ICFP '19: "Narcissus: Correct-by-Construction ..."
Narcissus: Correct-by-Construction Derivation of Decoders and Encoders from Binary Formats
Benjamin Delaware, Sorawit Suriyakarn, Clément Pit-Claudel, Qianchuan Ye, and Adam Chlipala (Purdue University, USA; Band Protocol, Thailand; Massachusetts Institute of Technology, USA) It is a neat result from functional programming that libraries of parser combinators can support rapid construction of decoders for quite a range of formats. With a little more work, the same combinator program can denote both a decoder and an encoder. Unfortunately, the real world is full of gnarly formats, as with the packet formats that make up the standard Internet protocol stack. Most past parser-combinator approaches cannot handle these formats, and the few exceptions require redundancy – one part of the natural grammar needs to be hand-translated into hints in multiple parts of a parser program. We show how to recover very natural and nonredundant format specifications, covering all popular network packet formats and generating both decoders and encoders automatically. The catch is that we use the Coq proof assistant to derive both kinds of artifacts using tactics, automatically, in a way that guarantees that they form inverses of each other. We used our approach to reimplement packet processing for a full Internet protocol stack, inserting our replacement into the OCaml-based MirageOS unikernel, resulting in minimal performance degradation. @Article{ICFP19p82, author = {Benjamin Delaware and Sorawit Suriyakarn and Clément Pit-Claudel and Qianchuan Ye and Adam Chlipala}, title = {Narcissus: Correct-by-Construction Derivation of Decoders and Encoders from Binary Formats}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {82}, numpages = {29}, doi = {10.1145/3341686}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional
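The "one description, both directions" idea behind format combinators can be sketched in a few lines: a format value carries both an `encode` and a `decode`, and composing formats composes both. This Python sketch only shows the shape; Narcissus derives the pair in Coq together with a proof that they are inverses, which the sketch has no counterpart for.

```python
# Format combinators that denote both an encoder and a decoder.
# Decoders return (value, remaining_bytes) so formats compose.

class U8:
    """A one-byte unsigned integer field."""
    def encode(self, v):
        return bytes([v])
    def decode(self, bs):
        return bs[0], bs[1:]

class Pair:
    """Sequence two formats: encoding concatenates, decoding splits."""
    def __init__(self, f, g):
        self.f, self.g = f, g
    def encode(self, v):
        return self.f.encode(v[0]) + self.g.encode(v[1])
    def decode(self, bs):
        x, rest = self.f.decode(bs)
        y, rest = self.g.decode(rest)
        return (x, y), rest

fmt = Pair(U8(), Pair(U8(), U8()))   # e.g. a tiny 3-byte header
```

Because both directions come from the same description, round-tripping `decode(encode(v))` returns `v` by construction, which is the property Narcissus establishes formally.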

Choudhury, Pritam 
ICFP '19: "A Role for Dependent Types ..."
A Role for Dependent Types in Haskell
Stephanie Weirich, Pritam Choudhury, Antoine Voizard, and Richard A. Eisenberg (University of Pennsylvania, USA; Bryn Mawr College, USA) Modern Haskell supports zero-cost coercions, a mechanism where types that share the same runtime representation may be freely converted between. To make sure such conversions are safe and desirable, this feature relies on a mechanism of roles to prohibit invalid coercions. In this work, we show how to incorporate roles into dependent type systems and prove, using the Coq proof assistant, that the resulting system is sound. We have designed this work as a foundation for the addition of dependent types to the Glasgow Haskell Compiler, but we also expect that it will be of use to designers of other dependently-typed languages who might want to adopt Haskell’s safe coercions feature. @Article{ICFP19p101, author = {Stephanie Weirich and Pritam Choudhury and Antoine Voizard and Richard A. Eisenberg}, title = {A Role for Dependent Types in Haskell}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {101}, numpages = {29}, doi = {10.1145/3341705}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Christiansen, David Thrane 
ICFP '19: "Dependently Typed Haskell ..."
Dependently Typed Haskell in Industry (Experience Report)
David Thrane Christiansen, Iavor S. Diatchki, Robert Dockins, Joe Hendrix, and Tristan Ravitch (Galois, USA) Recent versions of the Haskell compiler GHC have a number of advanced features that allow many idioms from dependently typed programming to be encoded. We describe our experiences using this "dependently typed Haskell" to construct a performance-critical library that is a key component in a number of verification tools. We have discovered that it can be done, and it brings significant value, but also at a high cost. In this experience report, we describe the ways in which programming at the edge of what is expressible in Haskell's type system has brought us value, the difficulties that it has imposed, and some of the ways we coped with the difficulties. @Article{ICFP19p100, author = {David Thrane Christiansen and Iavor S. Diatchki and Robert Dockins and Joe Hendrix and Tristan Ravitch}, title = {Dependently Typed Haskell in Industry (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {100}, numpages = {16}, doi = {10.1145/3341704}, year = {2019}, } Publisher's Version Article Search 

Cong, Youyou 
ICFP '19: "Compiling with Continuations, ..."
Compiling with Continuations, or without? Whatever.
Youyou Cong, Leo Osvald, Grégory M. Essertel, and Tiark Rompf (Tokyo Institute of Technology, Japan; Purdue University, USA) What makes a good compiler IR? In the context of functional languages, there has been an extensive debate on the advantages and disadvantages of continuation-passing style (CPS). The consensus seems to be that some form of explicit continuations is necessary to model jumps in a functional style, but that they should have a 2nd-class status, separate from regular functions, to ensure efficient code generation. Building on this observation, a recent study from PLDI 2017 proposed a direct-style IR with explicit join points, which essentially represent local continuations, i.e., functions that do not return or escape. While this IR can work well in practice, as evidenced by the implementation of join points in the Glasgow Haskell Compiler (GHC), there still seems to be room for improvement, especially with regard to the way continuations are handled in the course of optimization. In this paper, we contribute to the CPS debate by developing a novel IR with the following features. First, we integrate a control operator that resembles Felleisen’s C, eliminating certain redundant rewrites observed in the previous study. Second, we treat the non-returning and non-escaping aspects of continuations separately, allowing efficient compilation of well-behaved functions defined by the user. Third, we define a selective CPS translation of our IR, which erases control operators while preserving the meaning and typing of programs. These features enable optimizations in both direct style and full CPS, as well as in any intermediate style with selectively exposed continuations. Thus, we change the spectrum of available options from “CPS yes or no” to “as much or as little CPS as you want, when you want it”. @Article{ICFP19p79, author = {Youyou Cong and Leo Osvald and Grégory M. Essertel and Tiark Rompf}, title = {Compiling with Continuations, or without? 
Whatever.}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {79}, numpages = {28}, doi = {10.1145/3341643}, year = {2019}, } Publisher's Version Article Search 
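The distinction the abstract draws can be illustrated with a minimal sketch (not the paper's actual IR): the same function in direct style and in CPS, where the continuation `k` stands for "the rest of the program". A 2nd-class, non-escaping continuation like `k` is exactly a join point: it is tail-called on every branch and never returned or stored.

```python
# Direct style vs. CPS, a minimal illustrative sketch.
def clamp_direct(x):
    if x < 0:
        return 0
    return x

def clamp_cps(x, k):
    # k is the continuation ("the rest of the program"); it is only
    # ever called in tail position, never returned or stored.
    if x < 0:
        k(0)
    else:
        k(x)

results = []
clamp_cps(-3, results.append)
clamp_cps(7, results.append)
print(results)  # [0, 7]
```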

Decker, James 
ICFP '19: "Demystifying Differentiable ..."
Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator
Fei Wang, Daniel Zheng, James Decker, Xilun Wu, Grégory M. Essertel, and Tiark Rompf (Purdue University, USA) Deep learning has seen tremendous success over the past decade in computer vision, machine translation, and gameplay. This success rests crucially on gradient-descent optimization and the ability to “learn” parameters of a neural network by backpropagating observed errors. However, neural network architectures are growing increasingly sophisticated and diverse, which motivates an emerging quest for even more general forms of differentiable programming, where arbitrary parameterized computations can be trained by gradient descent. In this paper, we take a fresh look at automatic differentiation (AD) techniques, and especially aim to demystify the reverse-mode form of AD that generalizes backpropagation in neural networks. We uncover a tight connection between reverse-mode AD and delimited continuations, which permits implementing reverse-mode AD purely via operator overloading and without managing any auxiliary data structures. We further show how this formulation of AD can be fruitfully combined with multi-stage programming (staging), leading to an efficient implementation that combines the performance benefits of deep learning frameworks based on explicit reified computation graphs (e.g., TensorFlow) with the expressiveness of pure library approaches (e.g., PyTorch). @Article{ICFP19p96, author = {Fei Wang and Daniel Zheng and James Decker and Xilun Wu and Grégory M. Essertel and Tiark Rompf}, title = {Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {96}, numpages = {31}, doi = {10.1145/3341700}, year = {2019}, } Publisher's Version Article Search 
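The connection between reverse-mode AD and continuations can be sketched in a few lines of Python with explicit continuation passing (illustrative names only; the paper's implementation uses Scala with operator overloading, shift/reset, and staging). The key observation: the code placed *after* the call to the continuation `k` runs in reverse program order, which is exactly where adjoint propagation belongs, so no tape or graph data structure is needed.

```python
# Reverse-mode AD in continuation-passing style: a hypothetical sketch
# of the idea, not the paper's implementation.
class Num:
    def __init__(self, x):
        self.x = x    # primal value
        self.d = 0.0  # adjoint (gradient), accumulated on the way back

def add(a, b, k):
    y = Num(a.x + b.x)
    k(y)              # run the rest of the program first...
    a.d += y.d        # ...then these lines execute in reverse order:
    b.d += y.d        # this *is* backpropagation

def mul(a, b, k):
    y = Num(a.x * b.x)
    k(y)
    a.d += b.x * y.d
    b.d += a.x * y.d

def grad(f, x):
    v = Num(x)
    f(v, lambda y: setattr(y, "d", 1.0))  # seed output adjoint (the "reset")
    return v.d

# f(v) = v*v + v written in CPS; d/dx (x^2 + x) at x = 3.0 is 2*3 + 1 = 7.0
print(grad(lambda v, k: mul(v, v, lambda y: add(y, v, k)), 3.0))  # 7.0
```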

Delaware, Benjamin 
ICFP '19: "Narcissus: Correct-by-Construction ..."
Narcissus: Correct-by-Construction Derivation of Decoders and Encoders from Binary Formats
Benjamin Delaware, Sorawit Suriyakarn, Clément Pit-Claudel, Qianchuan Ye, and Adam Chlipala (Purdue University, USA; Band Protocol, Thailand; Massachusetts Institute of Technology, USA) It is a neat result from functional programming that libraries of parser combinators can support rapid construction of decoders for quite a range of formats. With a little more work, the same combinator program can denote both a decoder and an encoder. Unfortunately, the real world is full of gnarly formats, as with the packet formats that make up the standard Internet protocol stack. Most past parser-combinator approaches cannot handle these formats, and the few exceptions require redundancy – one part of the natural grammar needs to be hand-translated into hints in multiple parts of a parser program. We show how to recover very natural and non-redundant format specifications, covering all popular network packet formats and generating both decoders and encoders automatically. The catch is that we use the Coq proof assistant to derive both kinds of artifacts using tactics, automatically, in a way that guarantees that they form inverses of each other. We used our approach to reimplement packet processing for a full Internet protocol stack, inserting our replacement into the OCaml-based MirageOS unikernel, resulting in minimal performance degradation. @Article{ICFP19p82, author = {Benjamin Delaware and Sorawit Suriyakarn and Clément Pit-Claudel and Qianchuan Ye and Adam Chlipala}, title = {Narcissus: Correct-by-Construction Derivation of Decoders and Encoders from Binary Formats}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {82}, numpages = {29}, doi = {10.1145/3341686}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Derici, Caner 
ICFP '19: "Rebuilding Racket on Chez ..."
Rebuilding Racket on Chez Scheme (Experience Report)
Matthew Flatt, Caner Derici, R. Kent Dybvig, Andrew W. Keep, Gustavo E. Massaccesi, Sarah Spall, Sam Tobin-Hochstadt, and Jon Zeppieri (University of Utah, USA; Indiana University, USA; Cisco Systems, USA; University of Buenos Aires, Argentina) We rebuilt Racket on Chez Scheme, and it works well—as long as we're allowed a few patches to Chez Scheme. DrRacket runs, the Racket distribution can build itself, and nearly all of the core Racket test suite passes. Maintainability and performance of the resulting implementation are good, although some work remains to improve end-to-end performance. The least predictable part of our effort was how big the differences between Racket and Chez Scheme would turn out to be and how we would manage those differences. We expect Racket on Chez Scheme to become the main Racket implementation, and we encourage other language implementers to consider Chez Scheme as a target virtual machine. @Article{ICFP19p78, author = {Matthew Flatt and Caner Derici and R. Kent Dybvig and Andrew W. Keep and Gustavo E. Massaccesi and Sarah Spall and Sam Tobin-Hochstadt and Jon Zeppieri}, title = {Rebuilding Racket on Chez Scheme (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {78}, numpages = {15}, doi = {10.1145/3341642}, year = {2019}, } Publisher's Version Article Search Info Artifacts Available Artifacts Functional 

Devriese, Dominique 
ICFP '19: "Linear Capabilities for Fully ..."
Linear Capabilities for Fully Abstract Compilation of Separation-Logic-Verified Code
Thomas Van Strydonck, Frank Piessens, and Dominique Devriese (KU Leuven, Belgium; Vrije Universiteit Brussel, Belgium) Separation logic is a powerful program logic for the static modular verification of imperative programs. However, dynamic checking of separation logic contracts on the boundaries between verified and untrusted modules is hard, because it requires one to enforce (among other things) that outcalls from a verified to an untrusted module do not access memory resources currently owned by the verified module. This paper proposes an approach to dynamic contract checking by relying on support for capabilities, a well-studied form of unforgeable memory pointers that enables fine-grained, efficient memory access control. More specifically, we rely on a form of capabilities called linear capabilities for which the hardware enforces that they cannot be copied. We formalize our approach as a fully abstract compiler from a statically verified source language to an unverified target language with support for linear capabilities. The key insight behind our compiler is that memory resources described by spatial separation logic predicates can be represented at run time by linear capabilities. The compiler is separation-logic-proof-directed: it uses the separation logic proof of the source program to determine how memory accesses in the source program should be compiled to linear capability accesses in the target program. The full abstraction property of the compiler essentially guarantees that compiled verified modules can interact with untrusted target language modules as if they were compiled from verified code as well. @Article{ICFP19p84, author = {Thomas Van Strydonck and Frank Piessens and Dominique Devriese}, title = {Linear Capabilities for Fully Abstract Compilation of Separation-Logic-Verified Code}, journal = {Proc. ACM Program. 
Lang.}, volume = {3}, number = {ICFP}, articleno = {84}, numpages = {29}, doi = {10.1145/3341688}, year = {2019}, } Publisher's Version Article Search Artifacts Available 

Diatchki, Iavor S. 
ICFP '19: "Dependently Typed Haskell ..."
Dependently Typed Haskell in Industry (Experience Report)
David Thrane Christiansen, Iavor S. Diatchki, Robert Dockins, Joe Hendrix, and Tristan Ravitch (Galois, USA) Recent versions of the Haskell compiler GHC have a number of advanced features that allow many idioms from dependently typed programming to be encoded. We describe our experiences using this "dependently typed Haskell" to construct a performance-critical library that is a key component in a number of verification tools. We have discovered that it can be done, and it brings significant value, but also at a high cost. In this experience report, we describe the ways in which programming at the edge of what is expressible in Haskell's type system has brought us value, the difficulties that it has imposed, and some of the ways we coped with the difficulties. @Article{ICFP19p100, author = {David Thrane Christiansen and Iavor S. Diatchki and Robert Dockins and Joe Hendrix and Tristan Ravitch}, title = {Dependently Typed Haskell in Industry (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {100}, numpages = {16}, doi = {10.1145/3341704}, year = {2019}, } Publisher's Version Article Search 

Dimino, Jeremie 
ICFP '19: "Selective Applicative Functors ..."
Selective Applicative Functors
Andrey Mokhov, Georgy Lukyanov, Simon Marlow, and Jeremie Dimino (Newcastle University, UK; Facebook, UK; Jane Street, UK) Applicative functors and monads have conquered the world of functional programming by providing general and powerful ways of describing effectful computations using pure functions. Applicative functors provide a way to compose independent effects that cannot depend on values produced by earlier computations, and all of which are declared statically. Monads extend the applicative interface by making it possible to compose dependent effects, where the value computed by one effect determines all subsequent effects, dynamically. This paper introduces an intermediate abstraction called selective applicative functors that requires all effects to be declared statically, but provides a way to select which of the effects to execute dynamically. We demonstrate applications of the new abstraction on several examples, including two industrial case studies. @Article{ICFP19p90, author = {Andrey Mokhov and Georgy Lukyanov and Simon Marlow and Jeremie Dimino}, title = {Selective Applicative Functors}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {90}, numpages = {29}, doi = {10.1145/3341694}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
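The interface the abstract describes centers on one new combinator, `select :: f (Either a b) -> f (a -> b) -> f b`. A rough Python analogue (all names illustrative; effects modeled as logging thunks, which is not the paper's Haskell presentation) shows the key property: both effects are declared statically, but the second one executes only when the first yields a Left.

```python
log = []

def effect(name, result):
    # a toy effect: running it records its name and yields a result
    def run():
        log.append(name)
        return result
    return run

def select(fe, ff):
    # analogue of select :: f (Either a b) -> f (a -> b) -> f b:
    # both effects are declared up front, but ff runs only when fe
    # produces a Left, tagged here as ("left", value) / ("right", value).
    def run():
        tag, v = fe()
        return ff()(v) if tag == "left" else v
    return run

failed = effect("probe", ("left", "down"))
recover = effect("fallback", lambda s: s + ", recovering")
print(select(failed, recover)())  # both effects run: "down, recovering"

log.clear()
ok = effect("probe", ("right", "up"))
print(select(ok, recover)())      # fallback effect is skipped: "up"
print(log)                        # ["probe"]
```

Because both arguments are known before anything runs, a static analysis can still enumerate all possible effects, which is what monadic bind gives up.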

Dockins, Robert 
ICFP '19: "Dependently Typed Haskell ..."
Dependently Typed Haskell in Industry (Experience Report)
David Thrane Christiansen, Iavor S. Diatchki, Robert Dockins, Joe Hendrix, and Tristan Ravitch (Galois, USA) Recent versions of the Haskell compiler GHC have a number of advanced features that allow many idioms from dependently typed programming to be encoded. We describe our experiences using this "dependently typed Haskell" to construct a performance-critical library that is a key component in a number of verification tools. We have discovered that it can be done, and it brings significant value, but also at a high cost. In this experience report, we describe the ways in which programming at the edge of what is expressible in Haskell's type system has brought us value, the difficulties that it has imposed, and some of the ways we coped with the difficulties. @Article{ICFP19p100, author = {David Thrane Christiansen and Iavor S. Diatchki and Robert Dockins and Joe Hendrix and Tristan Ravitch}, title = {Dependently Typed Haskell in Industry (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {100}, numpages = {16}, doi = {10.1145/3341704}, year = {2019}, } Publisher's Version Article Search 

Dybvig, R. Kent 
ICFP '19: "Rebuilding Racket on Chez ..."
Rebuilding Racket on Chez Scheme (Experience Report)
Matthew Flatt, Caner Derici, R. Kent Dybvig, Andrew W. Keep, Gustavo E. Massaccesi, Sarah Spall, Sam Tobin-Hochstadt, and Jon Zeppieri (University of Utah, USA; Indiana University, USA; Cisco Systems, USA; University of Buenos Aires, Argentina) We rebuilt Racket on Chez Scheme, and it works well—as long as we're allowed a few patches to Chez Scheme. DrRacket runs, the Racket distribution can build itself, and nearly all of the core Racket test suite passes. Maintainability and performance of the resulting implementation are good, although some work remains to improve end-to-end performance. The least predictable part of our effort was how big the differences between Racket and Chez Scheme would turn out to be and how we would manage those differences. We expect Racket on Chez Scheme to become the main Racket implementation, and we encourage other language implementers to consider Chez Scheme as a target virtual machine. @Article{ICFP19p78, author = {Matthew Flatt and Caner Derici and R. Kent Dybvig and Andrew W. Keep and Gustavo E. Massaccesi and Sarah Spall and Sam Tobin-Hochstadt and Jon Zeppieri}, title = {Rebuilding Racket on Chez Scheme (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {78}, numpages = {15}, doi = {10.1145/3341642}, year = {2019}, } Publisher's Version Article Search Info Artifacts Available Artifacts Functional 

Eades III, Harley 
ICFP '19: "Quantitative Program Reasoning ..."
Quantitative Program Reasoning with Graded Modal Types
Dominic Orchard, Vilem-Benjamin Liepelt, and Harley Eades III (University of Kent, UK; Augusta University, USA) In programming, some data acts as a resource (e.g., file handles, channels) subject to usage constraints. This poses a challenge to software correctness as most languages are agnostic to constraints on data. The approach of linear types provides a partial remedy, delineating data into resources to be used but never copied or discarded, and unconstrained values. Bounded Linear Logic provides a more fine-grained approach, quantifying non-linear use via an indexed family of modalities. Recent work on coeffect types generalises this idea to graded comonads, providing type systems which can capture various program properties. Here, we propose the umbrella notion of graded modal types, encompassing coeffect types and dual notions of type-based effect reasoning via graded monads. In combination with linear and indexed types, we show that graded modal types provide an expressive type theory for quantitative program reasoning, advancing the reach of type systems to capture and verify a broader set of program properties. We demonstrate this approach via a type system embodied in a fully-fledged functional language called Granule, exploring various examples. @Article{ICFP19p110, author = {Dominic Orchard and Vilem-Benjamin Liepelt and Harley Eades III}, title = {Quantitative Program Reasoning with Graded Modal Types}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {110}, numpages = {30}, doi = {10.1145/3341714}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Eisenbach, Susan 
ICFP '19: "Higher-Order Type-Level Programming ..."
Higher-Order Type-Level Programming in Haskell
Csongor Kiss, Tony Field, Susan Eisenbach, and Simon Peyton Jones (Imperial College London, UK; Microsoft Research, UK) Type family applications in Haskell must be fully saturated. This means that all type-level functions have to be first-order, leading to code that is both messy and long-winded. In this paper we detail an extension to GHC that removes this restriction. We augment Haskell’s existing type arrow, →, with an unmatchable arrow, ↠, that supports partial application of type families without compromising soundness. A soundness proof is provided. We show how the techniques described can lead to substantial code-size reduction (circa 80%) in the type-level logic of commonly-used type-level libraries whilst simultaneously improving code quality and readability. @Article{ICFP19p102, author = {Csongor Kiss and Tony Field and Susan Eisenbach and Simon Peyton Jones}, title = {Higher-Order Type-Level Programming in Haskell}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {102}, numpages = {26}, doi = {10.1145/3341706}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Eisenberg, Richard A. 
ICFP '19: "A Role for Dependent Types ..."
A Role for Dependent Types in Haskell
Stephanie Weirich, Pritam Choudhury, Antoine Voizard, and Richard A. Eisenberg (University of Pennsylvania, USA; Bryn Mawr College, USA) Modern Haskell supports zero-cost coercions, a mechanism where types that share the same runtime representation may be freely converted between. To make sure such conversions are safe and desirable, this feature relies on a mechanism of roles to prohibit invalid coercions. In this work, we show how to incorporate roles into dependent type systems and prove, using the Coq proof assistant, that the resulting system is sound. We have designed this work as a foundation for the addition of dependent types to the Glasgow Haskell Compiler, but we also expect that it will be of use to designers of other dependently-typed languages who might want to adopt Haskell’s safe coercions feature. @Article{ICFP19p101, author = {Stephanie Weirich and Pritam Choudhury and Antoine Voizard and Richard A. Eisenberg}, title = {A Role for Dependent Types in Haskell}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {101}, numpages = {29}, doi = {10.1145/3341705}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Eremondi, Joseph 
ICFP '19: "Approximate Normalization ..."
Approximate Normalization for Gradual Dependent Types
Joseph Eremondi, Éric Tanter, and Ronald Garcia (University of British Columbia, Canada; University of Chile, Chile; Inria, France) Dependent types help programmers write highly reliable code. However, this reliability comes at a cost: it can be challenging to write new prototypes in (or migrate old code to) dependently-typed programming languages. Gradual typing makes static type disciplines more flexible, so an appropriate notion of gradual dependent types could fruitfully lower this cost. However, dependent types raise unique challenges for gradual typing. Dependent typechecking involves the execution of program code, but gradually-typed code can signal runtime type errors or diverge. These runtime errors threaten the soundness guarantees that make dependent types so attractive, while divergence spoils the type-driven programming experience. This paper presents GDTL, a gradual dependently-typed language that emphasizes pragmatic dependently-typed programming. GDTL fully embeds both an untyped and dependently-typed language, and allows for smooth transitions between the two. In addition to gradual types we introduce gradual terms, which allow the user to be imprecise in type indices and to omit proof terms; runtime checks ensure type safety. To account for nontermination and failure, we distinguish between compile-time normalization and runtime execution: compile-time normalization is approximate but total, while runtime execution is exact, but may fail or diverge. We prove that GDTL has decidable typechecking and satisfies all the expected properties of gradual languages. In particular, GDTL satisfies the static and dynamic gradual guarantees: reducing type precision preserves typedness, and altering type precision does not change program behavior outside of dynamic type failures. To prove these properties, we were led to establish a novel normalization gradual guarantee that captures the monotonicity of approximate normalization with respect to imprecision. 
@Article{ICFP19p88, author = {Joseph Eremondi and Éric Tanter and Ronald Garcia}, title = {Approximate Normalization for Gradual Dependent Types}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {88}, numpages = {30}, doi = {10.1145/3341692}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
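The gradual-typing ingredient the abstract builds on, an unknown type ? that is consistent with every type, can be sketched independently of GDTL. This is a hypothetical representation of standard type consistency only; GDTL additionally has gradual terms and approximate normalization, which this sketch does not model.

```python
DYN = "?"  # the unknown (imprecise) type

def consistent(t1, t2):
    # type consistency: ? is consistent with everything; otherwise the
    # types must agree, recursing structurally through function types,
    # encoded here as tuples ("->", domain, codomain).
    if t1 == DYN or t2 == DYN:
        return True
    if isinstance(t1, tuple) and isinstance(t2, tuple):
        return len(t1) == len(t2) and all(
            consistent(a, b) for a, b in zip(t1, t2)
        )
    return t1 == t2

print(consistent(("->", "Nat", DYN), ("->", "Nat", "Bool")))  # True
print(consistent("Nat", "Bool"))                              # False
```

Note that consistency, unlike equality, is not transitive: Nat ~ ? and ? ~ Bool, yet Nat is not consistent with Bool, which is why gradual languages need runtime checks.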

Essertel, Grégory M. 
ICFP '19: "Compiling with Continuations, ..."
Compiling with Continuations, or without? Whatever.
Youyou Cong, Leo Osvald, Grégory M. Essertel, and Tiark Rompf (Tokyo Institute of Technology, Japan; Purdue University, USA) What makes a good compiler IR? In the context of functional languages, there has been an extensive debate on the advantages and disadvantages of continuationpassingstyle (CPS). The consensus seems to be that some form of explicit continuations is necessary to model jumps in a functional style, but that they should have a 2ndclass status, separate from regular functions, to ensure efficient code generation. Building on this observation, a recent study from PLDI 2017 proposed a directstyle IR with explicit join points, which essentially represent local continuations, i.e., functions that do not return or escape. While this IR can work well in practice, as evidenced by the implementation of join points in the Glasgow Haskell Compiler (GHC), there still seems to be room for improvement, especially with regard to the way continuations are handled in the course of optimization. In this paper, we contribute to the CPS debate by developing a novel IR with the following features. First, we integrate a control operator that resembles Felleisen’s C, eliminating certain redundant rewrites observed in the previous study. Second, we treat the nonreturning and nonescaping aspects of continuations separately, allowing efficient compilation of wellbehaved functions defined by the user. Third, we define a selective CPS translation of our IR, which erases control operators while preserving the meaning and typing of programs. These features enable optimizations in both direct style and full CPS, as well as in any intermediate style with selectively exposed continuations. Thus, we change the spectrum of available options from “CPS yes or no” to “as much or as little CPS as you want, when you want it”. @Article{ICFP19p79, author = {Youyou Cong and Leo Osvald and Grégory M. Essertel and Tiark Rompf}, title = {Compiling with Continuations, or without? 
Whatever.}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {79}, numpages = {28}, doi = {10.1145/3341643}, year = {2019}, } Publisher's Version Article Search ICFP '19: "Demystifying Differentiable ..." Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator Fei Wang, Daniel Zheng, James Decker, Xilun Wu, Grégory M. Essertel, and Tiark Rompf (Purdue University, USA) Deep learning has seen tremendous success over the past decade in computer vision, machine translation, and gameplay. This success rests crucially on gradient-descent optimization and the ability to “learn” parameters of a neural network by backpropagating observed errors. However, neural network architectures are growing increasingly sophisticated and diverse, which motivates an emerging quest for even more general forms of differentiable programming, where arbitrary parameterized computations can be trained by gradient descent. In this paper, we take a fresh look at automatic differentiation (AD) techniques, and especially aim to demystify the reverse-mode form of AD that generalizes backpropagation in neural networks. We uncover a tight connection between reverse-mode AD and delimited continuations, which permits implementing reverse-mode AD purely via operator overloading and without managing any auxiliary data structures. We further show how this formulation of AD can be fruitfully combined with multi-stage programming (staging), leading to an efficient implementation that combines the performance benefits of deep learning frameworks based on explicit reified computation graphs (e.g., TensorFlow) with the expressiveness of pure library approaches (e.g., PyTorch). @Article{ICFP19p96, author = {Fei Wang and Daniel Zheng and James Decker and Xilun Wu and Grégory M. Essertel and Tiark Rompf}, title = {Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator}, journal = {Proc. ACM Program. 
Lang.}, volume = {3}, number = {ICFP}, articleno = {96}, numpages = {31}, doi = {10.1145/3341700}, year = {2019}, } Publisher's Version Article Search 

Fehrmann, Hans Jacob 
ICFP '19: "A Reasonably Exceptional Type ..."
A Reasonably Exceptional Type Theory
Pierre-Marie Pédrot, Nicolas Tabareau, Hans Jacob Fehrmann, and Éric Tanter (Inria, France; University of Chile, Chile) Traditional approaches to compensate for the lack of exceptions in type theories for proof assistants have severe drawbacks from both a programming and a reasoning perspective. Pédrot and Tabareau recently extended the Calculus of Inductive Constructions (CIC) with exceptions. The new exceptional type theory is interpreted by a translation into CIC, covering full dependent elimination, decidable typechecking and canonicity. However, the exceptional theory is inconsistent as a logical system. To recover consistency, Pédrot and Tabareau propose an additional translation that uses parametricity to enforce that all exceptions are caught locally. While this enforcement brings logical expressivity gains over CIC, it completely prevents reasoning about exceptional programs such as partial functions. This work addresses the dilemma between exceptions and consistency in a more flexible manner, with the Reasonably Exceptional Type Theory (RETT). RETT is structured in three layers: (a) the exceptional layer, in which all terms can raise exceptions; (b) the mediation layer, in which exceptional terms must be provably parametric; (c) the pure layer, in which terms are non-exceptional, but can refer to exceptional terms. We present the general theory of RETT, where each layer is realized by a predicative hierarchy of universes, and develop an instance of RETT in Coq: the impure layer corresponds to the predicative universe hierarchy, the pure layer is realized by the impredicative universe of propositions, and the mediation layer is reified via a parametricity type class. RETT is the first full dependent type theory to support consistent reasoning about exceptional terms, and the Coq-RETT plugin readily brings this ability to Coq programmers. 
@Article{ICFP19p108, author = {Pierre-Marie Pédrot and Nicolas Tabareau and Hans Jacob Fehrmann and Éric Tanter}, title = {A Reasonably Exceptional Type Theory}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {108}, numpages = {29}, doi = {10.1145/3341712}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Field, Tony 
ICFP '19: "Higher-Order Type-Level Programming ..."
Higher-Order Type-Level Programming in Haskell
Csongor Kiss, Tony Field, Susan Eisenbach, and Simon Peyton Jones (Imperial College London, UK; Microsoft Research, UK) Type family applications in Haskell must be fully saturated. This means that all type-level functions have to be first-order, leading to code that is both messy and long-winded. In this paper we detail an extension to GHC that removes this restriction. We augment Haskell’s existing type arrow, →, with an unmatchable arrow, ↠, that supports partial application of type families without compromising soundness. A soundness proof is provided. We show how the techniques described can lead to substantial code-size reduction (circa 80%) in the type-level logic of commonly-used type-level libraries whilst simultaneously improving code quality and readability. @Article{ICFP19p102, author = {Csongor Kiss and Tony Field and Susan Eisenbach and Simon Peyton Jones}, title = {Higher-Order Type-Level Programming in Haskell}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {102}, numpages = {26}, doi = {10.1145/3341706}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Fisher, Kathleen 
ICFP '19: "Synthesizing Symmetric Lenses ..."
Synthesizing Symmetric Lenses
Anders Miltner, Solomon Maina, Kathleen Fisher, Benjamin C. Pierce, David Walker, and Steve Zdancewic (Princeton University, USA; University of Pennsylvania, USA; Tufts University, USA) Lenses are programs that can be run both "front to back" and "back to front," allowing updates to either their source or their target data to be transferred in both directions. Since their introduction by Foster et al., lenses have been extensively studied, extended, and applied. Recent work has also demonstrated how techniques from type-directed program synthesis can be used to efficiently synthesize a simple class of lenses, so-called bijective lenses over string data, given a pair of types (regular expressions) and a small number of examples. We extend this synthesis algorithm to a much broader class of lenses, called simple symmetric lenses, including all bijective lenses, all of the popular category of "asymmetric" lenses, and a rich subset of the more powerful "symmetric lenses" proposed by Hofmann et al. Intuitively, simple symmetric lenses allow some information to be present on one side but not the other and vice versa. They are of independent theoretical interest, being the largest class of symmetric lenses that do not rely on persistent internal state. Synthesizing simple symmetric lenses is substantially more challenging than synthesizing bijective lenses: Since some of the information on each side can be "disconnected" from the other side, there will, in general, be many lenses that agree with a given example. To guide the search process, we use stochastic regular expressions and ideas from information theory to estimate the amount of information propagated by a candidate lens, generally preferring lenses that propagate more information, as well as user annotations marking parts of the source and target data structures as either irrelevant or essential.
We describe an implementation of simple symmetric lenses and our synthesis procedure as extensions to the Boomerang language. We evaluate its performance on 48 benchmark examples drawn from Flash Fill, Augeas, the bidirectional programming literature, and electronic file format synchronization tasks. Our implementation can synthesize each of these lenses in under 30 seconds. @Article{ICFP19p95, author = {Anders Miltner and Solomon Maina and Kathleen Fisher and Benjamin C. Pierce and David Walker and Steve Zdancewic}, title = {Synthesizing Symmetric Lenses}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {95}, numpages = {28}, doi = {10.1145/3341699}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
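The "front to back"/"back to front" behaviour described in this entry can be illustrated with a minimal asymmetric lens in Python. This is only a conceptual sketch under an invented record format; the paper's actual contribution is synthesizing much richer simple symmetric lenses as an extension of Boomerang.

```python
# A minimal asymmetric lens: a pair of functions (get, put) obeying the
# round-trip laws  get(put(s, v)) == v  and  put(s, get(s)) == s.
# Illustrative only: a lens between a "name,age" record (the source)
# and just the name field (the view).

def get(source: str) -> str:
    """Run the lens front to back: extract the view from the source."""
    name, _age = source.split(",")
    return name

def put(source: str, view: str) -> str:
    """Run the lens back to front: push an updated view into the source,
    preserving the part of the source (the age) the view doesn't mention."""
    _name, age = source.split(",")
    return f"{view},{age}"

record = "ada,36"
assert get(record) == "ada"                   # front to back
assert put(record, "grace") == "grace,36"     # back to front
assert get(put(record, "grace")) == "grace"   # PutGet law
assert put(record, get(record)) == record     # GetPut law
```

The age field here plays the role of information "disconnected" from the view, which is exactly what makes synthesis harder than in the bijective case.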

Fitzgibbon, Andrew 
ICFP '19: "Efficient Differentiable Programming ..."
Efficient Differentiable Programming in a Functional Array-Processing Language
Amir Shaikhha, Andrew Fitzgibbon, Dimitrios Vytiniotis, and Simon Peyton Jones (University of Oxford, UK; Microsoft Research, UK; DeepMind, UK) We present a system for the automatic differentiation (AD) of a higher-order functional array-processing language. The core functional language underlying this system simultaneously supports both source-to-source forward-mode AD and global optimisations such as loop transformations. In combination, gradient computation with forward-mode AD can be as efficient as reverse mode, and the Jacobian matrices required for numerical algorithms such as Gauss-Newton and Levenberg-Marquardt can be efficiently computed. @Article{ICFP19p97, author = {Amir Shaikhha and Andrew Fitzgibbon and Dimitrios Vytiniotis and Simon Peyton Jones}, title = {Efficient Differentiable Programming in a Functional Array-Processing Language}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {97}, numpages = {30}, doi = {10.1145/3341701}, year = {2019}, } Publisher's Version Article Search
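Forward-mode AD, the technique the entry above builds on, is commonly explained via dual numbers: each value carries its tangent, so one evaluation pass yields both f(x) and f'(x). The sketch below is the standard textbook construction in Python, not the paper's array language.

```python
# Forward-mode AD with dual numbers: evaluating f on Dual(x, 1.0)
# computes f(x) and f'(x) in a single pass.

class Dual:
    def __init__(self, val, der):
        self.val, self.der = val, der

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other, 0.0)

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._lift(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Seed the tangent with 1.0 and read the derivative back off."""
    return f(Dual(x, 1.0)).der

f = lambda x: x * x + 3 * x        # f(x) = x^2 + 3x, so f'(x) = 2x + 3
assert derivative(f, 2.0) == 7.0
```

Reverse mode is usually preferred for gradients of many-input functions; the paper's point is that with the right global optimisations, forward mode can match it.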

Flatt, Matthew 
ICFP '19: "Rebuilding Racket on Chez ..."
Rebuilding Racket on Chez Scheme (Experience Report)
Matthew Flatt, Caner Derici, R. Kent Dybvig, Andrew W. Keep, Gustavo E. Massaccesi, Sarah Spall, Sam TobinHochstadt, and Jon Zeppieri (University of Utah, USA; Indiana University, USA; Cisco Systems, USA; University of Buenos Aires, Argentina) We rebuilt Racket on Chez Scheme, and it works well—as long as we're allowed a few patches to Chez Scheme. DrRacket runs, the Racket distribution can build itself, and nearly all of the core Racket test suite passes. Maintainability and performance of the resulting implementation are good, although some work remains to improve end-to-end performance. The least predictable part of our effort was how big the differences between Racket and Chez Scheme would turn out to be and how we would manage those differences. We expect Racket on Chez Scheme to become the main Racket implementation, and we encourage other language implementers to consider Chez Scheme as a target virtual machine. @Article{ICFP19p78, author = {Matthew Flatt and Caner Derici and R. Kent Dybvig and Andrew W. Keep and Gustavo E. Massaccesi and Sarah Spall and Sam TobinHochstadt and Jon Zeppieri}, title = {Rebuilding Racket on Chez Scheme (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {78}, numpages = {15}, doi = {10.1145/3341642}, year = {2019}, } Publisher's Version Article Search Info Artifacts Available Artifacts Functional

Gaboardi, Marco 
ICFP '19: "Relational Cost Analysis for ..."
Relational Cost Analysis for Functional-Imperative Programs
Weihao Qu, Marco Gaboardi, and Deepak Garg (SUNY Buffalo, USA; MPI-SWS, Germany) Relational cost analysis aims at formally establishing bounds on the difference in the evaluation costs of two programs. As a particular case, one can also use relational cost analysis to establish bounds on the difference in the evaluation cost of the same program on two different inputs. One way to perform relational cost analysis is to use a relational type-and-effect system that supports reasoning about relations between two executions of two programs. Building on this basic idea, we present a type-and-effect system, called ARel, for reasoning about the relative cost of array-manipulating, higher-order functional-imperative programs. The key ingredient of our approach is a new lightweight type refinement discipline that we use to track relations (differences) between two mutable arrays. This discipline combined with Hoare-style triples built into the types allows us to express and establish precise relative costs of several interesting programs which imperatively update their data. We have implemented ARel using ideas from bidirectional type checking. @Article{ICFP19p92, author = {Weihao Qu and Marco Gaboardi and Deepak Garg}, title = {Relational Cost Analysis for Functional-Imperative Programs}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {92}, numpages = {29}, doi = {10.1145/3341696}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional

Gaonkar, Akash 
ICFP '19: "Sequential Programming for ..."
Sequential Programming for Replicated Data Stores
Nicholas V. Lewchenko, Arjun Radhakrishna, Akash Gaonkar, and Pavol Černý (University of Colorado Boulder, USA; Microsoft, USA) We introduce Carol, a refinementtyped programming language for replicated data stores. The salient feature of Carol is that it allows programming and verifying replicated store operations modularly, without consideration of other operations that might interleave, and sequentially, without requiring reference to or knowledge of the concurrent execution model. This is in stark contrast with existing systems, which require understanding the concurrent interactions of all pairs of operations when developing or verifying them. The key enabling idea is the consistency guard, a twostate predicate relating the locallyviewed store and the hypothetical remote store that an operation’s updates may eventually be applied to, which is used by the Carol programmer to declare their precise consistency requirements. Guards appear to the programmer and refinement typechecker as simple data preconditions, enabling sequential reasoning, while appearing to the distributed runtime as consistency control instructions. We implement and evaluate the Carol system in two parts: (1) the algorithm used to statically translate guards into the runtime coordination actions required to enforce them, and (2) the networkedreplica runtime which executes arbitrary operations, written in a Haskell DSL, according to the Carol language semantics. @Article{ICFP19p106, author = {Nicholas V. Lewchenko and Arjun Radhakrishna and Akash Gaonkar and Pavol Černý}, title = {Sequential Programming for Replicated Data Stores}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {106}, numpages = {28}, doi = {10.1145/3341710}, year = {2019}, } Publisher's Version Article Search 
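The consistency-guard idea can be approximated in plain Python: an operation carries a two-state predicate over the locally viewed store and the hypothetical remote store its update may be applied to. This is only an illustrative sketch with invented names (a bank-balance example); Carol's actual runtime derives coordination from guards statically, whereas here the predicate is just checked dynamically.

```python
# Sketch of a consistency guard: a withdraw operation on a replicated
# bank balance. The guard is a two-state predicate over the local view
# and the hypothetical remote store the update may be applied to.

def withdraw(amount):
    # Guard: any remote store the update reaches must cover the amount,
    # so applying the update can never drive the balance negative.
    def guard(local, remote):
        return remote >= amount
    def update(store):
        return store - amount
    return guard, update

def apply_op(op, local, remote):
    """Apply the op's update to the remote store only if its guard holds."""
    guard, update = op
    if not guard(local, remote):
        raise RuntimeError("guard violated: coordination would be required")
    return update(remote)

assert apply_op(withdraw(30), local=100, remote=100) == 70
try:
    apply_op(withdraw(30), local=100, remote=20)  # stale local view
    assert False, "guard should have rejected this"
except RuntimeError:
    pass
```

The point of the design is that the programmer reasons only about this predicate, sequentially, while the runtime decides how much inter-replica coordination is needed to make it hold.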

Garcia, Ronald 
ICFP '19: "Approximate Normalization ..."
Approximate Normalization for Gradual Dependent Types
Joseph Eremondi, Éric Tanter, and Ronald Garcia (University of British Columbia, Canada; University of Chile, Chile; Inria, France) Dependent types help programmers write highly reliable code. However, this reliability comes at a cost: it can be challenging to write new prototypes in (or migrate old code to) dependently-typed programming languages. Gradual typing makes static type disciplines more flexible, so an appropriate notion of gradual dependent types could fruitfully lower this cost. However, dependent types raise unique challenges for gradual typing. Dependent typechecking involves the execution of program code, but gradually-typed code can signal runtime type errors or diverge. These runtime errors threaten the soundness guarantees that make dependent types so attractive, while divergence spoils the type-driven programming experience. This paper presents GDTL, a gradual dependently-typed language that emphasizes pragmatic dependently-typed programming. GDTL fully embeds both an untyped and dependently-typed language, and allows for smooth transitions between the two. In addition to gradual types we introduce gradual terms, which allow the user to be imprecise in type indices and to omit proof terms; runtime checks ensure type safety. To account for non-termination and failure, we distinguish between compile-time normalization and runtime execution: compile-time normalization is approximate but total, while runtime execution is exact, but may fail or diverge. We prove that GDTL has decidable typechecking and satisfies all the expected properties of gradual languages. In particular, GDTL satisfies the static and dynamic gradual guarantees: reducing type precision preserves typedness, and altering type precision does not change program behavior outside of dynamic type failures. To prove these properties, we were led to establish a novel normalization gradual guarantee that captures the monotonicity of approximate normalization with respect to imprecision.
@Article{ICFP19p88, author = {Joseph Eremondi and Éric Tanter and Ronald Garcia}, title = {Approximate Normalization for Gradual Dependent Types}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {88}, numpages = {30}, doi = {10.1145/3341692}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Garg, Deepak 
ICFP '19: "Relational Cost Analysis for ..."
Relational Cost Analysis for Functional-Imperative Programs
Weihao Qu, Marco Gaboardi, and Deepak Garg (SUNY Buffalo, USA; MPI-SWS, Germany) Relational cost analysis aims at formally establishing bounds on the difference in the evaluation costs of two programs. As a particular case, one can also use relational cost analysis to establish bounds on the difference in the evaluation cost of the same program on two different inputs. One way to perform relational cost analysis is to use a relational type-and-effect system that supports reasoning about relations between two executions of two programs. Building on this basic idea, we present a type-and-effect system, called ARel, for reasoning about the relative cost of array-manipulating, higher-order functional-imperative programs. The key ingredient of our approach is a new lightweight type refinement discipline that we use to track relations (differences) between two mutable arrays. This discipline combined with Hoare-style triples built into the types allows us to express and establish precise relative costs of several interesting programs which imperatively update their data. We have implemented ARel using ideas from bidirectional type checking. @Article{ICFP19p92, author = {Weihao Qu and Marco Gaboardi and Deepak Garg}, title = {Relational Cost Analysis for Functional-Imperative Programs}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {92}, numpages = {29}, doi = {10.1145/3341696}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional

Gratzer, Daniel 
ICFP '19: "Implementing a Modal Dependent ..."
Implementing a Modal Dependent Type Theory
Daniel Gratzer, Jonathan Sterling, and Lars Birkedal (Aarhus University, Denmark; Carnegie Mellon University, USA) Modalities are everywhere in programming and mathematics! Despite this, however, there are still significant technical challenges in formulating a core dependent type theory with modalities. We present a dependent type theory MLTT_{🔒} supporting the connectives of standard Martin-Löf Type Theory as well as an S4-style necessity operator. MLTT_{🔒} supports a smooth interaction between modal and dependent types and provides a common basis for the use of modalities in programming and in synthetic mathematics. We design and prove the soundness and completeness of a type checking algorithm for MLTT_{🔒}, using a novel extension of normalization by evaluation. We have also implemented our algorithm in a prototype proof assistant for MLTT_{🔒}, demonstrating the ease of applying our techniques. @Article{ICFP19p107, author = {Daniel Gratzer and Jonathan Sterling and Lars Birkedal}, title = {Implementing a Modal Dependent Type Theory}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {107}, numpages = {29}, doi = {10.1145/3341711}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional

Graulund, Christian Uldal 
ICFP '19: "Simply RaTT: A Fitch-Style ..."
Simply RaTT: A Fitch-Style Modal Calculus for Reactive Programming without Space Leaks
Patrick Bahr, Christian Uldal Graulund, and Rasmus Ejlers Møgelberg (IT University of Copenhagen, Denmark) Functional reactive programming (FRP) is a paradigm for programming with signals and events, allowing the user to describe reactive programs on a high level of abstraction. For this to make sense, an FRP language must ensure that all programs are causal, and can be implemented without introducing space leaks and time leaks. To this end, some FRP languages do not give direct access to signals, but just to signal functions. Recently, modal types have been suggested as an alternative approach to ensuring causality in FRP languages in the synchronous case, giving direct access to the signal and event abstractions. This paper presents Simply RaTT, a new modal calculus for reactive programming. Unlike prior calculi, Simply RaTT uses a Fitch-style approach to modal types, which simplifies the type system and makes programs more concise. Echoing a previous result by Krishnaswami for a different language, we devise an operational semantics that safely executes Simply RaTT programs without space leaks. We also identify a source of time leaks present in other modal FRP languages: The unfolding of fixed points in delayed computations. The Fitch-style presentation allows an easy way to rule out these leaks, which appears not to be possible in the more traditional dual-context approach. @Article{ICFP19p109, author = {Patrick Bahr and Christian Uldal Graulund and Rasmus Ejlers Møgelberg}, title = {Simply RaTT: A Fitch-Style Modal Calculus for Reactive Programming without Space Leaks}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {109}, numpages = {27}, doi = {10.1145/3341713}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional
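Causality (each output depends only on inputs seen so far) is the property that modal FRP type systems such as Simply RaTT enforce statically. In untyped form, a causal stream function can be modelled as a step function folded over an input stream, as in this Python illustration (unrelated to the paper's calculus; the names are invented):

```python
# A causal stream transformer: the n-th output depends only on
# inputs 0..n. Modal FRP types guarantee this shape (and the absence
# of space leaks) by construction; here we merely model it.

def scan(step, state, stream):
    """Fold a step function over a stream, yielding each intermediate state."""
    for x in stream:
        state = step(state, x)
        yield state

# Running sum of an event stream: a classic causal signal.
assert list(scan(lambda acc, x: acc + x, 0, [1, 2, 3, 4])) == [1, 3, 6, 10]
```

A space leak in this setting would correspond to `state` secretly retaining the whole input history; the type systems discussed above rule that out.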

Hackett, Jennifer 
ICFP '19: "Call-by-Need Is Clairvoyant ..."
Call-by-Need Is Clairvoyant Call-by-Value
Jennifer Hackett and Graham Hutton (University of Nottingham, UK) Call-by-need evaluation, also known as lazy evaluation, provides two key benefits: compositional programming and infinite data. The standard semantics for laziness is Launchbury’s natural semantics, which uses a heap to memoise the results of delayed evaluations. However, the stateful nature of this heap greatly complicates reasoning about the operational behaviour of lazy programs. In this article, we propose an alternative semantics for laziness, clairvoyant evaluation, that replaces the state effect with nondeterminism, and prove this semantics equivalent in a strong sense to the standard semantics. We show how this new semantics greatly simplifies operational reasoning, admitting much simpler proofs of a number of results from the literature, and how it leads to the first denotational cost semantics for lazy evaluation. @Article{ICFP19p114, author = {Jennifer Hackett and Graham Hutton}, title = {Call-by-Need Is Clairvoyant Call-by-Value}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {114}, numpages = {23}, doi = {10.1145/3341718}, year = {2019}, } Publisher's Version Article Search
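The heap-of-thunks semantics that this paper replaces can be seen concretely: under call-by-need, an argument is evaluated at most once, and only when demanded. The memoised-thunk sketch below is a standard illustration in Python, not the paper's calculus.

```python
# Call-by-need as memoised thunks: a delayed computation is evaluated
# on first demand and its result is cached, so it runs at most once.

class Thunk:
    def __init__(self, compute):
        self._compute = compute
        self._done = False
        self._value = None

    def force(self):
        if not self._done:             # evaluate on first demand...
            self._value = self._compute()
            self._done = True          # ...and memoise the result
        return self._value

evaluations = []

def expensive():
    evaluations.append("ran")
    return 42

t = Thunk(expensive)
assert evaluations == []               # nothing demanded yet
assert t.force() + t.force() == 84     # demanded twice...
assert evaluations == ["ran"]          # ...but evaluated only once
```

It is exactly this mutable cache (`_done`/`_value`) that complicates operational reasoning; clairvoyant evaluation trades it for a nondeterministic choice of which bindings to evaluate eagerly.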

Haeberlen, Andreas 
ICFP '19: "Fuzzi: A Three-Level Logic ..."
Fuzzi: A Three-Level Logic for Differential Privacy
Hengchu Zhang, Edo Roth, Andreas Haeberlen, Benjamin C. Pierce, and Aaron Roth (University of Pennsylvania, USA) Curators of sensitive datasets sometimes need to know whether queries against the data are differentially private. Two sorts of logics have been proposed for checking this property: (1) type systems and other static analyses, which fully automate straightforward reasoning with concepts like “program sensitivity” and “privacy loss,” and (2) full-blown program logics such as apRHL (an approximate, probabilistic, relational Hoare logic), which support more flexible reasoning about subtle privacy-preserving algorithmic techniques but offer only minimal automation. We propose a three-level logic for differential privacy in an imperative setting and present a prototype implementation called Fuzzi. Fuzzi’s lowest level is a general-purpose logic; its middle level is apRHL; and its top level is a novel sensitivity logic adapted from the linear-logic-inspired type system of Fuzz, a differentially private functional language. The key novelty is a high degree of integration between the sensitivity logic and the two lower-level logics: the judgments and proofs of the sensitivity logic can be easily translated into apRHL; conversely, privacy properties of key algorithmic building blocks can be proved manually in apRHL and the base logic, then packaged up as typing rules that can be applied by a checker for the sensitivity logic to automatically construct privacy proofs for composite programs of arbitrary size. We demonstrate Fuzzi’s utility by implementing four different private machine-learning algorithms and showing that Fuzzi’s checker is able to derive tight sensitivity bounds. @Article{ICFP19p93, author = {Hengchu Zhang and Edo Roth and Andreas Haeberlen and Benjamin C. Pierce and Aaron Roth}, title = {Fuzzi: A Three-Level Logic for Differential Privacy}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {93}, numpages = {28}, doi = {10.1145/3341697}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional
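The "program sensitivity" notion that Fuzzi's top-level logic tracks is the quantity the standard Laplace mechanism consumes: a query of sensitivity s becomes eps-differentially private when Laplace noise of scale s/eps is added. The following is a textbook sketch of that mechanism in Python (stdlib only), entirely separate from Fuzzi's actual implementation, which proves such bounds statically.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) by inverse-CDF from a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def count_query(data):
    # A counting query has sensitivity 1: adding or removing one
    # record changes the answer by at most 1.
    return len(data)

def private_count(data, eps, rng):
    """Release the count with eps-differential privacy via the
    Laplace mechanism: noise scale = sensitivity / eps."""
    sensitivity = 1.0
    return count_query(data) + laplace_noise(sensitivity / eps, rng)

rng = random.Random(0)
noisy = private_count([0] * 100, eps=1.0, rng=rng)
# The released value is the true count (100) plus modest noise.
assert abs(noisy - 100) < 10
```

A checker in the style of Fuzz/Fuzzi would verify the sensitivity-1 claim about `count_query` rather than trusting the comment.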

Hameer, Aliya 
ICFP '19: "Teaching the Art of Functional ..."
Teaching the Art of Functional Programming using Automated Grading (Experience Report)
Aliya Hameer and Brigitte Pientka (McGill University, Canada) Online programming platforms have immense potential to improve students' educational experience. They make programming more accessible, as no installation is required; and automatic grading facilities provide students with immediate feedback on their code, allowing them to fix bugs and address errors in their understanding right away. However, these graders tend to focus heavily on the functional correctness of a solution, neglecting other aspects of students' code and thereby causing students to miss out on a significant amount of valuable feedback. In this paper, we recount our experience in using the Learn-OCaml online programming platform to teach functional programming in a second-year university course on programming languages and paradigms. Moreover, we explore how to leverage Learn-OCaml's automated grading infrastructure to make it easy to write more expressive graders that give students feedback on properties of their code beyond simple input/output correctness, in order to effectively teach elements of functional programming style. In particular, we describe our extensions to the Learn-OCaml platform that evaluate students on test quality and code style. By providing these tools and a suite of our own homework problems and associated graders, we aim to promote functional programming education, enhance students' educational experience, and make teaching and learning typed functional programming more accessible to instructors and students alike, in our community and beyond. @Article{ICFP19p115, author = {Aliya Hameer and Brigitte Pientka}, title = {Teaching the Art of Functional Programming using Automated Grading (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {115}, numpages = {15}, doi = {10.1145/3341719}, year = {2019}, } Publisher's Version Article Search Info Artifacts Available Artifacts Functional

Hendrix, Joe 
ICFP '19: "Dependently Typed Haskell ..."
Dependently Typed Haskell in Industry (Experience Report)
David Thrane Christiansen, Iavor S. Diatchki, Robert Dockins, Joe Hendrix, and Tristan Ravitch (Galois, USA) Recent versions of the Haskell compiler GHC have a number of advanced features that allow many idioms from dependently typed programming to be encoded. We describe our experiences using this "dependently typed Haskell" to construct a performance-critical library that is a key component in a number of verification tools. We have discovered that it can be done, and it brings significant value, but also at a high cost. In this experience report, we describe the ways in which programming at the edge of what is expressible in Haskell's type system has brought us value, the difficulties that it has imposed, and some of the ways we coped with the difficulties. @Article{ICFP19p100, author = {David Thrane Christiansen and Iavor S. Diatchki and Robert Dockins and Joe Hendrix and Tristan Ravitch}, title = {Dependently Typed Haskell in Industry (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {100}, numpages = {16}, doi = {10.1145/3341704}, year = {2019}, } Publisher's Version Article Search

Hriţcu, Cătălin 
ICFP '19: "Dijkstra Monads for All ..."
Dijkstra Monads for All
Kenji Maillard, Danel Ahman, Robert Atkey, Guido Martínez, Cătălin Hriţcu, Exequiel Rivas, and Éric Tanter (Inria, France; ENS, France; University of Ljubljana, Slovenia; University of Strathclyde, UK; CIFASIS-CONICET, Argentina; University of Chile, Chile) This paper proposes a general semantic framework for verifying programs with arbitrary monadic side-effects using Dijkstra monads, which we define as monad-like structures indexed by a specification monad. We prove that any monad morphism between a computational monad and a specification monad gives rise to a Dijkstra monad, which provides great flexibility for obtaining Dijkstra monads tailored to the verification task at hand. We moreover show that a large variety of specification monads can be obtained by applying monad transformers to various base specification monads, including predicate transformers and Hoare-style pre- and postconditions. For defining correct monad transformers, we propose a language inspired by Moggi's monadic metalanguage that is parameterized by a dependent type theory. We also develop a notion of algebraic operations for Dijkstra monads, and start to investigate two ways of also accommodating effect handlers. We implement our framework in both Coq and F*, and illustrate that it supports a wide variety of verification styles for effects such as exceptions, nondeterminism, state, input-output, and general recursion. @Article{ICFP19p104, author = {Kenji Maillard and Danel Ahman and Robert Atkey and Guido Martínez and Cătălin Hriţcu and Exequiel Rivas and Éric Tanter}, title = {Dijkstra Monads for All}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {104}, numpages = {29}, doi = {10.1145/3341708}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional

Hutton, Graham 
ICFP '19: "Call-by-Need Is Clairvoyant ..."
Call-by-Need Is Clairvoyant Call-by-Value
Jennifer Hackett and Graham Hutton (University of Nottingham, UK) Call-by-need evaluation, also known as lazy evaluation, provides two key benefits: compositional programming and infinite data. The standard semantics for laziness is Launchbury’s natural semantics, which uses a heap to memoise the results of delayed evaluations. However, the stateful nature of this heap greatly complicates reasoning about the operational behaviour of lazy programs. In this article, we propose an alternative semantics for laziness, clairvoyant evaluation, that replaces the state effect with nondeterminism, and prove this semantics equivalent in a strong sense to the standard semantics. We show how this new semantics greatly simplifies operational reasoning, admitting much simpler proofs of a number of results from the literature, and how it leads to the first denotational cost semantics for lazy evaluation. @Article{ICFP19p114, author = {Jennifer Hackett and Graham Hutton}, title = {Call-by-Need Is Clairvoyant Call-by-Value}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {114}, numpages = {23}, doi = {10.1145/3341718}, year = {2019}, } Publisher's Version Article Search

Keep, Andrew W. 
ICFP '19: "Rebuilding Racket on Chez ..."
Rebuilding Racket on Chez Scheme (Experience Report)
Matthew Flatt, Caner Derici, R. Kent Dybvig, Andrew W. Keep, Gustavo E. Massaccesi, Sarah Spall, Sam TobinHochstadt, and Jon Zeppieri (University of Utah, USA; Indiana University, USA; Cisco Systems, USA; University of Buenos Aires, Argentina) We rebuilt Racket on Chez Scheme, and it works well—as long as we're allowed a few patches to Chez Scheme. DrRacket runs, the Racket distribution can build itself, and nearly all of the core Racket test suite passes. Maintainability and performance of the resulting implementation are good, although some work remains to improve end-to-end performance. The least predictable part of our effort was how big the differences between Racket and Chez Scheme would turn out to be and how we would manage those differences. We expect Racket on Chez Scheme to become the main Racket implementation, and we encourage other language implementers to consider Chez Scheme as a target virtual machine. @Article{ICFP19p78, author = {Matthew Flatt and Caner Derici and R. Kent Dybvig and Andrew W. Keep and Gustavo E. Massaccesi and Sarah Spall and Sam TobinHochstadt and Jon Zeppieri}, title = {Rebuilding Racket on Chez Scheme (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {78}, numpages = {15}, doi = {10.1145/3341642}, year = {2019}, } Publisher's Version Article Search Info Artifacts Available Artifacts Functional

Kiss, Csongor 
ICFP '19: "Higher-Order Type-Level Programming ..."
Higher-Order Type-Level Programming in Haskell
Csongor Kiss, Tony Field, Susan Eisenbach, and Simon Peyton Jones (Imperial College London, UK; Microsoft Research, UK) Type family applications in Haskell must be fully saturated. This means that all type-level functions have to be first-order, leading to code that is both messy and long-winded. In this paper we detail an extension to GHC that removes this restriction. We augment Haskell’s existing type arrow, →, with an unmatchable arrow, ↠, that supports partial application of type families without compromising soundness. A soundness proof is provided. We show how the techniques described can lead to substantial code-size reduction (circa 80%) in the type-level logic of commonly-used type-level libraries whilst simultaneously improving code quality and readability. @Article{ICFP19p102, author = {Csongor Kiss and Tony Field and Susan Eisenbach and Simon Peyton Jones}, title = {Higher-Order Type-Level Programming in Haskell}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {102}, numpages = {26}, doi = {10.1145/3341706}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional

Lewchenko, Nicholas V. 
ICFP '19: "Sequential Programming for ..."
Sequential Programming for Replicated Data Stores
Nicholas V. Lewchenko, Arjun Radhakrishna, Akash Gaonkar, and Pavol Černý (University of Colorado Boulder, USA; Microsoft, USA) We introduce Carol, a refinement-typed programming language for replicated data stores. The salient feature of Carol is that it allows programming and verifying replicated store operations modularly, without consideration of other operations that might interleave, and sequentially, without requiring reference to or knowledge of the concurrent execution model. This is in stark contrast with existing systems, which require understanding the concurrent interactions of all pairs of operations when developing or verifying them. The key enabling idea is the consistency guard, a two-state predicate relating the locally-viewed store and the hypothetical remote store that an operation’s updates may eventually be applied to, which is used by the Carol programmer to declare their precise consistency requirements. Guards appear to the programmer and refinement typechecker as simple data preconditions, enabling sequential reasoning, while appearing to the distributed runtime as consistency control instructions. We implement and evaluate the Carol system in two parts: (1) the algorithm used to statically translate guards into the runtime coordination actions required to enforce them, and (2) the networked-replica runtime which executes arbitrary operations, written in a Haskell DSL, according to the Carol language semantics. @Article{ICFP19p106, author = {Nicholas V. Lewchenko and Arjun Radhakrishna and Akash Gaonkar and Pavol Černý}, title = {Sequential Programming for Replicated Data Stores}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {106}, numpages = {28}, doi = {10.1145/3341710}, year = {2019}, } Publisher's Version Article Search

Liepelt, Vilem-Benjamin 
ICFP '19: "Quantitative Program Reasoning ..."
Quantitative Program Reasoning with Graded Modal Types
Dominic Orchard, Vilem-Benjamin Liepelt, and Harley Eades III (University of Kent, UK; Augusta University, USA) In programming, some data acts as a resource (e.g., file handles, channels) subject to usage constraints. This poses a challenge to software correctness as most languages are agnostic to constraints on data. The approach of linear types provides a partial remedy, delineating data into resources to be used but never copied or discarded, and unconstrained values. Bounded Linear Logic provides a more fine-grained approach, quantifying non-linear use via an indexed family of modalities. Recent work on coeffect types generalises this idea to graded comonads, providing type systems which can capture various program properties. Here, we propose the umbrella notion of graded modal types, encompassing coeffect types and dual notions of type-based effect reasoning via graded monads. In combination with linear and indexed types, we show that graded modal types provide an expressive type theory for quantitative program reasoning, advancing the reach of type systems to capture and verify a broader set of program properties. We demonstrate this approach via a type system embodied in a fully-fledged functional language called Granule, exploring various examples. @Article{ICFP19p110, author = {Dominic Orchard and Vilem-Benjamin Liepelt and Harley Eades III}, title = {Quantitative Program Reasoning with Graded Modal Types}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {110}, numpages = {30}, doi = {10.1145/3341714}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional

Lindenhovius, Bert 
ICFP '19: "Mixed Linear and Non-linear ..."
Mixed Linear and Non-linear Recursive Types
Bert Lindenhovius, Michael Mislove, and Vladimir Zamdzhiev (Tulane University, USA; University of Lorraine, France; CNRS, France; Inria, France; LORIA, France) We describe a type system with mixed linear and non-linear recursive types called LNL-FPC (the linear/non-linear fixpoint calculus). The type system supports linear typing, which enhances the safety properties of programs, but also non-linear typing, which makes the type system more convenient for programming. Just like in FPC, we show that LNL-FPC supports type-level recursion, which in turn induces term-level recursion. We also provide sound and computationally adequate categorical models for LNL-FPC which describe the categorical structure of the substructural operations of Intuitionistic Linear Logic at all non-linear types, including the recursive ones. In order to do so, we describe a new technique for solving recursive domain equations within the category CPO by constructing the solutions over pre-embeddings. The type system also enjoys implicit weakening and contraction rules, which we are able to model by identifying the canonical comonoid structure of all non-linear types. We also show that the requirements of our abstract model are reasonable by constructing a large class of concrete models that have found applications not only in classical functional programming, but also in emerging programming paradigms that incorporate linear types, such as quantum programming and circuit description programming languages. @Article{ICFP19p111, author = {Bert Lindenhovius and Michael Mislove and Vladimir Zamdzhiev}, title = {Mixed Linear and Non-linear Recursive Types}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {111}, numpages = {29}, doi = {10.1145/3341715}, year = {2019}, } Publisher's Version Article Search 

Lukyanov, Georgy 
ICFP '19: "Selective Applicative Functors ..."
Selective Applicative Functors
Andrey Mokhov, Georgy Lukyanov, Simon Marlow, and Jeremie Dimino (Newcastle University, UK; Facebook, UK; Jane Street, UK) Applicative functors and monads have conquered the world of functional programming by providing general and powerful ways of describing effectful computations using pure functions. Applicative functors provide a way to compose independent effects that cannot depend on values produced by earlier computations, and all of which are declared statically. Monads extend the applicative interface by making it possible to compose dependent effects, where the value computed by one effect determines all subsequent effects, dynamically. This paper introduces an intermediate abstraction called selective applicative functors that requires all effects to be declared statically, but provides a way to select which of the effects to execute dynamically. We demonstrate applications of the new abstraction on several examples, including two industrial case studies. @Article{ICFP19p90, author = {Andrey Mokhov and Georgy Lukyanov and Simon Marlow and Jeremie Dimino}, title = {Selective Applicative Functors}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {90}, numpages = {29}, doi = {10.1145/3341694}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
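The interface the abstract describes can be sketched in a few lines of Haskell (a minimal, hypothetical sketch following the abstract's description, not any particular library):

```haskell
{-# LANGUAGE LambdaCase #-}

-- The selective interface: as with Applicative, every effect is declared
-- statically, but `select` may skip executing its second argument.
class Applicative f => Selective f where
  select :: f (Either a b) -> f (a -> b) -> f b

-- A monadic instance for Maybe: the handler effect runs only when the
-- first computation produces a Left.
instance Selective Maybe where
  select mx mf = mx >>= \case
    Left a  -> ($ a) <$> mf
    Right b -> pure b

demoSkip, demoRun :: Maybe Int
demoSkip = select (Just (Right 3)) Nothing    -- handler skipped: Just 3
demoRun  = select (Just (Left 2)) (Just (+1)) -- handler applied: Just 3
```

Under this instance the failing effect in `demoSkip` is declared but never executed, which is precisely the static-declaration, dynamic-selection split the abstract describes.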

Maillard, Kenji 
ICFP '19: "Dijkstra Monads for All ..."
Dijkstra Monads for All
Kenji Maillard, Danel Ahman, Robert Atkey, Guido Martínez, Cătălin Hriţcu, Exequiel Rivas, and Éric Tanter (Inria, France; ENS, France; University of Ljubljana, Slovenia; University of Strathclyde, UK; CIFASIS-CONICET, Argentina; University of Chile, Chile) This paper proposes a general semantic framework for verifying programs with arbitrary monadic side-effects using Dijkstra monads, which we define as monad-like structures indexed by a specification monad. We prove that any monad morphism between a computational monad and a specification monad gives rise to a Dijkstra monad, which provides great flexibility for obtaining Dijkstra monads tailored to the verification task at hand. We moreover show that a large variety of specification monads can be obtained by applying monad transformers to various base specification monads, including predicate transformers and Hoare-style pre- and postconditions. For defining correct monad transformers, we propose a language inspired by Moggi's monadic metalanguage that is parameterized by a dependent type theory. We also develop a notion of algebraic operations for Dijkstra monads, and start to investigate two ways of also accommodating effect handlers. We implement our framework in both Coq and F*, and illustrate that it supports a wide variety of verification styles for effects such as exceptions, nondeterminism, state, input-output, and general recursion. @Article{ICFP19p104, author = {Kenji Maillard and Danel Ahman and Robert Atkey and Guido Martínez and Cătălin Hriţcu and Exequiel Rivas and Éric Tanter}, title = {Dijkstra Monads for All}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {104}, numpages = {29}, doi = {10.1145/3341708}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
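A hedged Haskell sketch of the core idea (not the paper's Coq/F* development): a specification monad of weakest-precondition predicate transformers, plus a monad morphism from a computational monad (`Maybe`, for illustration) into it:

```haskell
-- Specification monad: weakest preconditions, i.e. a continuation monad
-- into propositions (modelled as Bool here so the sketch runs).
newtype WP a = WP { runWP :: (a -> Bool) -> Bool }

instance Functor WP where
  fmap f (WP w) = WP (\post -> w (post . f))

instance Applicative WP where
  pure a = WP ($ a)
  WP wf <*> WP wa = WP (\post -> wf (\f -> wa (post . f)))

instance Monad WP where
  WP w >>= f = WP (\post -> w (\a -> runWP (f a) post))

-- A monad morphism Maybe -> WP: failure satisfies no postcondition.
spec :: Maybe a -> WP a
spec Nothing  = WP (const False)
spec (Just a) = pure a

-- Verification condition for a small computation.
vc :: Bool
vc = runWP (spec (Just 3) >>= \x -> pure (x + 1)) (== (4 :: Int))
```

Pushing a computation through `spec` and running the result against a postcondition yields its verification condition, which is the indexing relationship between computational and specification monads that the abstract generalizes.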

Maina, Solomon 
ICFP '19: "Synthesizing Symmetric Lenses ..."
Synthesizing Symmetric Lenses
Anders Miltner, Solomon Maina, Kathleen Fisher, Benjamin C. Pierce, David Walker, and Steve Zdancewic (Princeton University, USA; University of Pennsylvania, USA; Tufts University, USA) Lenses are programs that can be run both "front to back" and "back to front," allowing updates to either their source or their target data to be transferred in both directions. Since their introduction by Foster et al., lenses have been extensively studied, extended, and applied. Recent work has also demonstrated how techniques from type-directed program synthesis can be used to efficiently synthesize a simple class of lenses, so-called bijective lenses over string data, given a pair of types (regular expressions) and a small number of examples. We extend this synthesis algorithm to a much broader class of lenses, called simple symmetric lenses, including all bijective lenses, all of the popular category of "asymmetric" lenses, and a rich subset of the more powerful "symmetric lenses" proposed by Hofmann et al. Intuitively, simple symmetric lenses allow some information to be present on one side but not the other and vice versa. They are of independent theoretical interest, being the largest class of symmetric lenses that do not rely on persistent internal state. Synthesizing simple symmetric lenses is substantially more challenging than synthesizing bijective lenses: Since some of the information on each side can be "disconnected" from the other side, there will, in general, be many lenses that agree with a given example. To guide the search process, we use stochastic regular expressions and ideas from information theory to estimate the amount of information propagated by a candidate lens, generally preferring lenses that propagate more information, as well as user annotations marking parts of the source and target data structures as either irrelevant or essential. 
We describe an implementation of simple symmetric lenses and our synthesis procedure as extensions to the Boomerang language. We evaluate its performance on 48 benchmark examples drawn from Flash Fill, Augeas, the bidirectional programming literature, and electronic file format synchronization tasks. Our implementation can synthesize each of these lenses in under 30 seconds. @Article{ICFP19p95, author = {Anders Miltner and Solomon Maina and Kathleen Fisher and Benjamin C. Pierce and David Walker and Steve Zdancewic}, title = {Synthesizing Symmetric Lenses}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {95}, numpages = {28}, doi = {10.1145/3341699}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
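The lens interface the abstract builds on can be sketched minimally in Haskell (hypothetical names; this is the classic asymmetric get/put presentation, not the paper's symmetric generalization):

```haskell
-- An asymmetric lens: `get` extracts a view from a source, `put` writes
-- an updated view back into the source.
data Lens s v = Lens { get :: s -> v, put :: v -> s -> s }

-- Round-trip laws a well-behaved lens satisfies (checked by example
-- below, not proved):
--   get l (put l v s) == v   (PutGet)
--   put l (get l s) s == s   (GetPut)

-- Example: focus on the first component of a pair.
fstLens :: Lens (a, b) a
fstLens = Lens fst (\v (_, b) -> (v, b))
```

A symmetric lens, by contrast, mediates between two formats each of which may carry information absent from the other; that "disconnected" information is what makes synthesis under-constrained and motivates the paper's information-theoretic search guidance.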

Mangin, Cyprien 
ICFP '19: "Equations Reloaded: High-Level ..."
Equations Reloaded: High-Level Dependently-Typed Functional Programming and Proving in Coq
Matthieu Sozeau and Cyprien Mangin (Inria, France; IRIF, France; University of Paris Diderot, France) Equations is a plugin for the Coq proof assistant which provides a notation for defining programs by dependent pattern-matching and structural or well-founded recursion. It additionally derives useful high-level proof principles for demonstrating properties about them, abstracting away from the implementation details of the function and its compiled form. We present a general design and implementation that provides a robust and expressive function definition package as a definitional extension to the Coq kernel. At the core of the system is a new simplifier for dependent equalities based on an original handling of the no-confusion property of constructors. @Article{ICFP19p86, author = {Matthieu Sozeau and Cyprien Mangin}, title = {Equations Reloaded: High-Level Dependently-Typed Functional Programming and Proving in Coq}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {86}, numpages = {29}, doi = {10.1145/3341690}, year = {2019}, } Publisher's Version Article Search Info Artifacts Available Artifacts Functional 

Marlow, Simon 
ICFP '19: "Selective Applicative Functors ..."
Selective Applicative Functors
Andrey Mokhov, Georgy Lukyanov, Simon Marlow, and Jeremie Dimino (Newcastle University, UK; Facebook, UK; Jane Street, UK) Applicative functors and monads have conquered the world of functional programming by providing general and powerful ways of describing effectful computations using pure functions. Applicative functors provide a way to compose independent effects that cannot depend on values produced by earlier computations, and all of which are declared statically. Monads extend the applicative interface by making it possible to compose dependent effects, where the value computed by one effect determines all subsequent effects, dynamically. This paper introduces an intermediate abstraction called selective applicative functors that requires all effects to be declared statically, but provides a way to select which of the effects to execute dynamically. We demonstrate applications of the new abstraction on several examples, including two industrial case studies. @Article{ICFP19p90, author = {Andrey Mokhov and Georgy Lukyanov and Simon Marlow and Jeremie Dimino}, title = {Selective Applicative Functors}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {90}, numpages = {29}, doi = {10.1145/3341694}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Marntirosian, Koar 
ICFP '19: "Coherence of Type Class Resolution ..."
Coherence of Type Class Resolution
Gert-Jan Bottu, Ningning Xie, Koar Marntirosian, and Tom Schrijvers (KU Leuven, Belgium; University of Hong Kong, China) Elaboration-based type class resolution, as found in languages like Haskell, Mercury and PureScript, is generally nondeterministic: there can be multiple ways to satisfy a wanted constraint in terms of global instances and locally given constraints. Coherence is the key property that keeps this sane; it guarantees that, despite the nondeterminism, programs still behave predictably. Even though elaboration-based resolution is generally assumed coherent, as far as we know, there is no formal proof of this property in the presence of sources of nondeterminism, like superclasses and flexible contexts. This paper provides a formal proof to remedy the situation. The proof is nontrivial because the semantics elaborates resolution into a target language where different elaborations can be distinguished by contexts that do not have a source language counterpart. Inspired by the notion of full abstraction, we present a two-step strategy that first elaborates nondeterministically into an intermediate language that preserves contextual equivalence, and then deterministically elaborates from there into the target language. We use an approach based on logical relations to establish contextual equivalence and thus coherence for the first step of elaboration, while the second step’s determinism straightforwardly preserves this coherence property. @Article{ICFP19p91, author = {Gert-Jan Bottu and Ningning Xie and Koar Marntirosian and Tom Schrijvers}, title = {Coherence of Type Class Resolution}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {91}, numpages = {28}, doi = {10.1145/3341695}, year = {2019}, } Publisher's Version Article Search 

Martínez, Guido 
ICFP '19: "Dijkstra Monads for All ..."
Dijkstra Monads for All
Kenji Maillard, Danel Ahman, Robert Atkey, Guido Martínez, Cătălin Hriţcu, Exequiel Rivas, and Éric Tanter (Inria, France; ENS, France; University of Ljubljana, Slovenia; University of Strathclyde, UK; CIFASIS-CONICET, Argentina; University of Chile, Chile) This paper proposes a general semantic framework for verifying programs with arbitrary monadic side-effects using Dijkstra monads, which we define as monad-like structures indexed by a specification monad. We prove that any monad morphism between a computational monad and a specification monad gives rise to a Dijkstra monad, which provides great flexibility for obtaining Dijkstra monads tailored to the verification task at hand. We moreover show that a large variety of specification monads can be obtained by applying monad transformers to various base specification monads, including predicate transformers and Hoare-style pre- and postconditions. For defining correct monad transformers, we propose a language inspired by Moggi's monadic metalanguage that is parameterized by a dependent type theory. We also develop a notion of algebraic operations for Dijkstra monads, and start to investigate two ways of also accommodating effect handlers. We implement our framework in both Coq and F*, and illustrate that it supports a wide variety of verification styles for effects such as exceptions, nondeterminism, state, input-output, and general recursion. @Article{ICFP19p104, author = {Kenji Maillard and Danel Ahman and Robert Atkey and Guido Martínez and Cătălin Hriţcu and Exequiel Rivas and Éric Tanter}, title = {Dijkstra Monads for All}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {104}, numpages = {29}, doi = {10.1145/3341708}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Massaccesi, Gustavo E. 
ICFP '19: "Rebuilding Racket on Chez ..."
Rebuilding Racket on Chez Scheme (Experience Report)
Matthew Flatt, Caner Derici, R. Kent Dybvig, Andrew W. Keep, Gustavo E. Massaccesi, Sarah Spall, Sam Tobin-Hochstadt, and Jon Zeppieri (University of Utah, USA; Indiana University, USA; Cisco Systems, USA; University of Buenos Aires, Argentina) We rebuilt Racket on Chez Scheme, and it works well—as long as we're allowed a few patches to Chez Scheme. DrRacket runs, the Racket distribution can build itself, and nearly all of the core Racket test suite passes. Maintainability and performance of the resulting implementation are good, although some work remains to improve end-to-end performance. The least predictable part of our effort was how big the differences between Racket and Chez Scheme would turn out to be and how we would manage those differences. We expect Racket on Chez Scheme to become the main Racket implementation, and we encourage other language implementers to consider Chez Scheme as a target virtual machine. @Article{ICFP19p78, author = {Matthew Flatt and Caner Derici and R. Kent Dybvig and Andrew W. Keep and Gustavo E. Massaccesi and Sarah Spall and Sam Tobin-Hochstadt and Jon Zeppieri}, title = {Rebuilding Racket on Chez Scheme (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {78}, numpages = {15}, doi = {10.1145/3341642}, year = {2019}, } Publisher's Version Article Search Info Artifacts Available Artifacts Functional 

Michel, Jesse 
ICFP '19: "Sound and Robust Solid Modeling ..."
Sound and Robust Solid Modeling via Exact Real Arithmetic and Continuity
Benjamin Sherman, Jesse Michel, and Michael Carbin (Massachusetts Institute of Technology, USA) Algorithms for solid modeling, i.e., Computer-Aided Design (CAD) and computer graphics, are often specified on real numbers and then implemented with finite-precision arithmetic, such as floating-point. The result is that these implementations do not soundly compute the results that are expected from their specifications. We present a new library, StoneWorks, that provides sound and robust solid modeling primitives. We implement StoneWorks in MarshallB, a pure functional programming language for exact real arithmetic in which types denote topological spaces and functions denote continuous maps, ensuring that all programs are sound and robust. We developed MarshallB as an extension of the Marshall language. We also define a new shape representation, compact representation (Krep), that enables constructions such as Minkowski sum and analyses such as Hausdorff distance that are not possible with traditional representations. Krep is a nondeterminism monad for describing all the points in a shape. With our library, language, and representation together, we show that short StoneWorks programs can specify and execute sound and robust solid modeling algorithms and tasks. @Article{ICFP19p99, author = {Benjamin Sherman and Jesse Michel and Michael Carbin}, title = {Sound and Robust Solid Modeling via Exact Real Arithmetic and Continuity}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {99}, numpages = {29}, doi = {10.1145/3341703}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
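The soundness gap between real-number specifications and floating-point implementations can be seen in a one-line Haskell example (an illustrative sketch only; the paper's remedy is exact real arithmetic over topological spaces, far beyond `Rational`):

```haskell
-- Over the reals, 0.1 + 0.2 == 0.3; over Double it does not hold,
-- because none of the three constants is exactly representable in
-- binary floating point.
floatBroken :: Bool
floatBroken = (0.1 + 0.2 :: Double) /= 0.3

-- Exact rational arithmetic preserves the algebraic identity.
rationalSound :: Bool
rationalSound = (1/10 + 2/10 :: Rational) == 3/10
```

Solid-modeling predicates built on such unsound primitives can misclassify points near surface boundaries, which is the failure mode the abstract's exact-arithmetic approach rules out.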

Miltner, Anders 
ICFP '19: "Synthesizing Symmetric Lenses ..."
Synthesizing Symmetric Lenses
Anders Miltner, Solomon Maina, Kathleen Fisher, Benjamin C. Pierce, David Walker, and Steve Zdancewic (Princeton University, USA; University of Pennsylvania, USA; Tufts University, USA) Lenses are programs that can be run both "front to back" and "back to front," allowing updates to either their source or their target data to be transferred in both directions. Since their introduction by Foster et al., lenses have been extensively studied, extended, and applied. Recent work has also demonstrated how techniques from type-directed program synthesis can be used to efficiently synthesize a simple class of lenses, so-called bijective lenses over string data, given a pair of types (regular expressions) and a small number of examples. We extend this synthesis algorithm to a much broader class of lenses, called simple symmetric lenses, including all bijective lenses, all of the popular category of "asymmetric" lenses, and a rich subset of the more powerful "symmetric lenses" proposed by Hofmann et al. Intuitively, simple symmetric lenses allow some information to be present on one side but not the other and vice versa. They are of independent theoretical interest, being the largest class of symmetric lenses that do not rely on persistent internal state. Synthesizing simple symmetric lenses is substantially more challenging than synthesizing bijective lenses: Since some of the information on each side can be "disconnected" from the other side, there will, in general, be many lenses that agree with a given example. To guide the search process, we use stochastic regular expressions and ideas from information theory to estimate the amount of information propagated by a candidate lens, generally preferring lenses that propagate more information, as well as user annotations marking parts of the source and target data structures as either irrelevant or essential. 
We describe an implementation of simple symmetric lenses and our synthesis procedure as extensions to the Boomerang language. We evaluate its performance on 48 benchmark examples drawn from Flash Fill, Augeas, the bidirectional programming literature, and electronic file format synchronization tasks. Our implementation can synthesize each of these lenses in under 30 seconds. @Article{ICFP19p95, author = {Anders Miltner and Solomon Maina and Kathleen Fisher and Benjamin C. Pierce and David Walker and Steve Zdancewic}, title = {Synthesizing Symmetric Lenses}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {95}, numpages = {28}, doi = {10.1145/3341699}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Miraldo, Victor Cacciari 
ICFP '19: "An Efficient Algorithm for ..."
An Efficient Algorithm for TypeSafe Structural Diffing
Victor Cacciari Miraldo and Wouter Swierstra (Utrecht University, Netherlands) Effectively computing the difference between two versions of a source file has become an indispensable part of software development. The de facto standard tool used by most version control systems is the UNIX diff utility, which compares two files on a line-by-line basis without any regard for the structure of the data stored in these files. This paper presents an alternative datatype-generic algorithm for computing the difference between two values of any algebraic datatype. This algorithm maximizes sharing between the source and target trees, while still running in linear time. Finally, this paper demonstrates that by instantiating this algorithm to the Lua abstract syntax tree and mining the commit history of repositories found on GitHub, the resulting patches can often be merged automatically, even when existing technology has failed. @Article{ICFP19p113, author = {Victor Cacciari Miraldo and Wouter Swierstra}, title = {An Efficient Algorithm for TypeSafe Structural Diffing}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {113}, numpages = {29}, doi = {10.1145/3341717}, year = {2019}, } Publisher's Version Article Search 
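A toy version of structure-aware diffing can be sketched in Haskell (hypothetical types; the paper's algorithm is datatype-generic and maximizes sharing, which this naive sketch only gestures at):

```haskell
-- A rose tree standing in for an abstract syntax tree.
data Tree = Node String [Tree] deriving (Eq, Show)

-- A patch either shares an unchanged subtree, descends into matching
-- nodes, or replaces a subtree wholesale.
data Patch = Copy | Rec String [Patch] | Replace Tree deriving Show

diff :: Tree -> Tree -> Patch
diff s@(Node a cs) t@(Node b ds)
  | s == t                           = Copy
  | a == b && length cs == length ds = Rec a (zipWith diff cs ds)
  | otherwise                        = Replace t

apply :: Patch -> Tree -> Tree
apply Copy        t           = t
apply (Replace t) _           = t
apply (Rec a ps)  (Node _ cs) = Node a (zipWith apply ps cs)
```

Applying `diff s t` to `s` reproduces `t`; unlike line-based diff, unchanged subtrees are shared via `Copy` regardless of where they occur in the tree.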

Mislove, Michael 
ICFP '19: "Mixed Linear and Non-linear ..."
Mixed Linear and Non-linear Recursive Types
Bert Lindenhovius, Michael Mislove, and Vladimir Zamdzhiev (Tulane University, USA; University of Lorraine, France; CNRS, France; Inria, France; LORIA, France) We describe a type system with mixed linear and non-linear recursive types called LNL-FPC (the linear/non-linear fixpoint calculus). The type system supports linear typing, which enhances the safety properties of programs, but also non-linear typing, which makes the type system more convenient for programming. Just like in FPC, we show that LNL-FPC supports type-level recursion, which in turn induces term-level recursion. We also provide sound and computationally adequate categorical models for LNL-FPC which describe the categorical structure of the substructural operations of Intuitionistic Linear Logic at all non-linear types, including the recursive ones. In order to do so, we describe a new technique for solving recursive domain equations within the category CPO by constructing the solutions over pre-embeddings. The type system also enjoys implicit weakening and contraction rules, which we are able to model by identifying the canonical comonoid structure of all non-linear types. We also show that the requirements of our abstract model are reasonable by constructing a large class of concrete models that have found applications not only in classical functional programming, but also in emerging programming paradigms that incorporate linear types, such as quantum programming and circuit description programming languages. @Article{ICFP19p111, author = {Bert Lindenhovius and Michael Mislove and Vladimir Zamdzhiev}, title = {Mixed Linear and Non-linear Recursive Types}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {111}, numpages = {29}, doi = {10.1145/3341715}, year = {2019}, } Publisher's Version Article Search 

Møgelberg, Rasmus Ejlers 
ICFP '19: "Simply RaTT: A Fitch-Style ..."
Simply RaTT: A Fitch-Style Modal Calculus for Reactive Programming without Space Leaks
Patrick Bahr, Christian Uldal Graulund, and Rasmus Ejlers Møgelberg (IT University of Copenhagen, Denmark) Functional reactive programming (FRP) is a paradigm for programming with signals and events, allowing the user to describe reactive programs on a high level of abstraction. For this to make sense, an FRP language must ensure that all programs are causal, and can be implemented without introducing space leaks and time leaks. To this end, some FRP languages do not give direct access to signals, but just to signal functions. Recently, modal types have been suggested as an alternative approach to ensuring causality in FRP languages in the synchronous case, giving direct access to the signal and event abstractions. This paper presents Simply RaTT, a new modal calculus for reactive programming. Unlike prior calculi, Simply RaTT uses a Fitch-style approach to modal types, which simplifies the type system and makes programs more concise. Echoing a previous result by Krishnaswami for a different language, we devise an operational semantics that safely executes Simply RaTT programs without space leaks. We also identify a source of time leaks present in other modal FRP languages: the unfolding of fixed points in delayed computations. The Fitch-style presentation offers an easy way to rule out these leaks, which appears not to be possible in the more traditional dual-context approach. @Article{ICFP19p109, author = {Patrick Bahr and Christian Uldal Graulund and Rasmus Ejlers Møgelberg}, title = {Simply RaTT: A Fitch-Style Modal Calculus for Reactive Programming without Space Leaks}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {109}, numpages = {27}, doi = {10.1145/3341713}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Mörtberg, Anders 
ICFP '19: "Cubical Agda: A Dependently ..."
Cubical Agda: A Dependently Typed Programming Language with Univalence and Higher Inductive Types
Andrea Vezzosi, Anders Mörtberg, and Andreas Abel (IT University of Copenhagen, Denmark; Stockholm University, Sweden; Carnegie Mellon University, USA; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden) Proof assistants based on dependent type theory provide expressive languages for both programming and proving within the same system. However, all of the major implementations lack powerful extensionality principles for reasoning about equality, such as function and propositional extensionality. These principles are typically added axiomatically, which disrupts the constructive properties of these systems. Cubical type theory provides a solution by giving computational meaning to Homotopy Type Theory and Univalent Foundations, in particular to the univalence axiom and higher inductive types. This paper describes an extension of the dependently typed functional programming language Agda with cubical primitives, making it into a full-blown proof assistant with native support for univalence and a general schema of higher inductive types. These new primitives make function and propositional extensionality as well as quotient types directly definable with computational content. Additionally, thanks also to copatterns, bisimilarity is equivalent to equality for coinductive types. This extends Agda with support for a wide range of extensionality principles, without sacrificing type checking and constructivity. @Article{ICFP19p87, author = {Andrea Vezzosi and Anders Mörtberg and Andreas Abel}, title = {Cubical Agda: A Dependently Typed Programming Language with Univalence and Higher Inductive Types}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {87}, numpages = {29}, doi = {10.1145/3341691}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Mokhov, Andrey 
ICFP '19: "Selective Applicative Functors ..."
Selective Applicative Functors
Andrey Mokhov, Georgy Lukyanov, Simon Marlow, and Jeremie Dimino (Newcastle University, UK; Facebook, UK; Jane Street, UK) Applicative functors and monads have conquered the world of functional programming by providing general and powerful ways of describing effectful computations using pure functions. Applicative functors provide a way to compose independent effects that cannot depend on values produced by earlier computations, and all of which are declared statically. Monads extend the applicative interface by making it possible to compose dependent effects, where the value computed by one effect determines all subsequent effects, dynamically. This paper introduces an intermediate abstraction called selective applicative functors that requires all effects to be declared statically, but provides a way to select which of the effects to execute dynamically. We demonstrate applications of the new abstraction on several examples, including two industrial case studies. @Article{ICFP19p90, author = {Andrey Mokhov and Georgy Lukyanov and Simon Marlow and Jeremie Dimino}, title = {Selective Applicative Functors}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {90}, numpages = {29}, doi = {10.1145/3341694}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Morihata, Akimasa 
ICFP '19: "Lambda Calculus with Algebraic ..."
Lambda Calculus with Algebraic Simplification for Reduction Parallelization by Equational Reasoning
Akimasa Morihata (University of Tokyo, Japan) Parallel reduction is a major component of parallel programming and widely used for summarization and aggregation. It is not well understood, however, what sorts of nontrivial summarizations can be implemented as parallel reductions. This paper develops a calculus named λas, a simply typed lambda calculus with algebraic simplification. This calculus provides a foundation for studying parallelization of complex reductions by equational reasoning. Its key feature is δ-abstraction. A δ-abstraction is observationally equivalent to the standard λ-abstraction, but its body is simplified before the arrival of its arguments by using algebraic properties such as associativity and commutativity. In addition, the type system of λas guarantees that simplifications due to δ-abstractions do not lead to serious overheads. The usefulness of λas is demonstrated on examples of developing complex parallel reductions, including those containing more than one reduction operator, loops with jumps, prefix-sum patterns, and even tree manipulations. @Article{ICFP19p80, author = {Akimasa Morihata}, title = {Lambda Calculus with Algebraic Simplification for Reduction Parallelization by Equational Reasoning}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {80}, numpages = {25}, doi = {10.1145/3341644}, year = {2019}, } Publisher's Version Article Search 
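The algebraic property at the heart of reduction parallelization (associativity) can be illustrated with a generic Haskell sketch (hypothetical names; this is not the paper's calculus, only the re-bracketing idea it builds on):

```haskell
-- Associativity lets a left-to-right reduction be re-bracketed into a
-- balanced tree whose two halves can be evaluated independently, i.e.
-- in parallel, with only logarithmic combining depth.
reduceTree :: (a -> a -> a) -> a -> [a] -> a
reduceTree _  z []  = z
reduceTree _  _ [x] = x
reduceTree op z xs  =
  let (l, r) = splitAt (length xs `div` 2) xs
  in  op (reduceTree op z l) (reduceTree op z r)
```

For an associative `op` with identity `z`, `reduceTree op z` agrees with the sequential fold; the paper's contribution is deriving such parallelizable forms, by equational reasoning, for reductions whose associative structure is far from obvious.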

Muller, Stefan K. 
ICFP '19: "Fairness in Responsive Parallelism ..."
Fairness in Responsive Parallelism
Stefan K. Muller, Sam Westrick, and Umut A. Acar (Carnegie Mellon University, USA; Inria, France) Research on parallel computing has historically revolved around compute-intensive applications drawn from traditional areas such as high-performance computing. With the growing availability and usage of multicore chips, applications of parallel computing now include interactive parallel applications that mix compute-intensive tasks with interaction, e.g., with the user or more generally with the external world. Recent theoretical work on responsive parallelism presents abstract cost models and type systems for ensuring and reasoning about responsiveness and throughput of such interactive parallel programs. In this paper, we extend prior work by considering a crucial metric: fairness. To express rich interactive parallel programs, we allow programmers to assign priorities to threads and instruct the scheduler to obey a notion of fairness. We then propose the fairly prompt scheduling principle for executing such programs; the principle specifies the schedule for multithreaded programs on multiple processors. For such schedules, we prove theoretical bounds on the execution and response times of jobs of various priorities. In particular, we bound the amount, i.e., stretch, by which a low-priority job can be delayed by higher-priority work. We also present an algorithm designed to approximate the fairly prompt scheduling principle on multicore computers, implement the algorithm by extending the Standard ML language, and present an empirical evaluation. @Article{ICFP19p81, author = {Stefan K. Muller and Sam Westrick and Umut A. Acar}, title = {Fairness in Responsive Parallelism}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {81}, numpages = {30}, doi = {10.1145/3341685}, year = {2019}, } Publisher's Version Article Search 

Narayanan, Praveen 
ICFP '19: "From High-Level Inference ..."
From High-Level Inference Algorithms to Efficient Code
Rajan Walia, Praveen Narayanan, Jacques Carette, Sam Tobin-Hochstadt, and Chung-chieh Shan (Indiana University, USA; McMaster University, Canada) Probabilistic programming languages are valuable because they allow domain experts to express probabilistic models and inference algorithms without worrying about irrelevant details. However, for decades there remained an important and popular class of probabilistic inference algorithms whose efficient implementation required manual low-level coding that is tedious and error-prone. They are algorithms whose idiomatic expression requires random array variables that are latent or whose likelihood is conjugate. Although that is how practitioners communicate and compose these algorithms on paper, executing such expressions requires eliminating the latent variables and recognizing the conjugacy by symbolic mathematics. Moreover, matching the performance of handwritten code requires speeding up loops by more than a constant factor. We show how probabilistic programs that directly and concisely express these desired inference algorithms can be compiled while maintaining efficiency. We introduce new transformations that turn high-level probabilistic programs with arrays into pure loop code. We then make great use of domain-specific invariants and norms to optimize the code, and to specialize and JIT-compile the code per execution. The resulting performance is competitive with manual implementations. @Article{ICFP19p98, author = {Rajan Walia and Praveen Narayanan and Jacques Carette and Sam Tobin-Hochstadt and Chung-chieh Shan}, title = {From High-Level Inference Algorithms to Efficient Code}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {98}, numpages = {30}, doi = {10.1145/3341702}, year = {2019}, } Publisher's Version Article Search Artifacts Available 
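As a toy instance of the symbolic step this line of work automates, conjugacy lets a latent parameter be eliminated in closed form: a Beta(α, β) prior on a coin's bias is conjugate to Bernoulli observations, so the posterior after h heads and t tails is simply Beta(α+h, β+t), with no numeric integration. The sketch below is standard textbook math, not the paper's compiler machinery, and its names are invented.

```haskell
-- Beta(alpha, beta) prior over a coin's bias, updated with Bernoulli
-- observations. Conjugacy means the posterior stays in the Beta family.
data Beta = Beta { alphaP :: Double, betaP :: Double } deriving (Eq, Show)

-- Conjugate update: each observed head increments alpha, each tail beta.
update :: Beta -> Bool -> Beta
update (Beta a b) True  = Beta (a + 1) b
update (Beta a b) False = Beta a (b + 1)

-- Fold the closed-form update over a whole dataset.
posterior :: Beta -> [Bool] -> Beta
posterior = foldl update

-- The posterior mean of the bias is also available in closed form.
betaMean :: Beta -> Double
betaMean (Beta a b) = a / (a + b)
```

Recognizing and exploiting this kind of closed form mechanically, for array-valued latent variables rather than a single coin, is the job the paper assigns to its program transformations.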

Oliveira, Bruno C. d. S. 
ICFP '19: "A Mechanical Formalization ..."
A Mechanical Formalization of Higher-Ranked Polymorphic Type Inference
Jinxu Zhao, Bruno C. d. S. Oliveira, and Tom Schrijvers (University of Hong Kong, China; KU Leuven, Belgium) Modern functional programming languages, such as Haskell or OCaml, use sophisticated forms of type inference. While an important topic in programming languages research, there is little work on the mechanization of the metatheory of type inference in theorem provers. In particular, we are unaware of any complete formalization of the type inference algorithms that are the backbone of modern functional languages. This paper presents the first full mechanical formalization of the metatheory for higher-ranked polymorphic type inference. The system that we formalize is the bidirectional type system by Dunfield and Krishnaswami (DK). The DK type system has two variants (a declarative and an algorithmic one) that have been manually proven sound, complete, and decidable. We present a mechanical formalization in the Abella theorem prover of DK’s declarative type system with a novel algorithmic system. We have a few reasons to use a new algorithm. Firstly, our new algorithm employs worklist judgments, which precisely capture the scope of variables and simplify the formalization of scoping in a theorem prover. Secondly, while DK’s original formalization comes with very well-written manual proofs, there are several details missing and some incorrect proofs, which complicate the task of writing a mechanized proof. Despite the use of a different algorithm we prove the same results as DK, although with significantly different proofs and proof techniques. Since such type inference algorithms are quite subtle and have a complex metatheory, mechanical formalizations are an important advance in type-inference research. @Article{ICFP19p112, author = {Jinxu Zhao and Bruno C. d. S. Oliveira and Tom Schrijvers}, title = {A Mechanical Formalization of Higher-Ranked Polymorphic Type Inference}, journal = {Proc. ACM Program. 
Lang.}, volume = {3}, number = {ICFP}, articleno = {112}, numpages = {29}, doi = {10.1145/3341716}, year = {2019}, } Publisher's Version Article Search Info Artifacts Functional 
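The worklist idea, processing a list of outstanding judgments and threading solutions through the remaining work rather than maintaining global state, can be illustrated with a first-order unifier. This is a far simpler setting than DK's higher-ranked system, and all the types and names below are invented for illustration.

```haskell
-- A tiny type language: variables, a base type, and function types.
data Ty = TVar String | TInt | Arr Ty Ty deriving (Eq, Show)

-- Apply a single substitution [x := t] to a type.
subst :: String -> Ty -> Ty -> Ty
subst x t (TVar y) | x == y    = t
                   | otherwise = TVar y
subst _ _ TInt      = TInt
subst x t (Arr a b) = Arr (subst x t a) (subst x t b)

occurs :: String -> Ty -> Bool
occurs x (TVar y)  = x == y
occurs _ TInt      = False
occurs x (Arr a b) = occurs x a || occurs x b

-- Solve a worklist of equations. When a variable is solved, the
-- solution is pushed into the *remaining* worklist, so no separate
-- mutable state is needed; the result is a triangular substitution.
unify :: [(Ty, Ty)] -> Maybe [(String, Ty)]
unify [] = Just []
unify ((TInt, TInt) : w) = unify w
unify ((Arr a b, Arr c d) : w) = unify ((a, c) : (b, d) : w)
unify ((TVar x, t) : w)
  | t == TVar x = unify w
  | occurs x t  = Nothing        -- occurs check: reject infinite types
  | otherwise   = ((x, t) :) <$> unify [ (subst x t l, subst x t r) | (l, r) <- w ]
unify ((t, TVar x) : w) = unify ((TVar x, t) : w)
unify _ = Nothing
```

The payoff mirrors the paper's motivation: because all pending obligations live in one list, the scope of each variable is explicit in the structure of the worklist, which is much easier to reason about in a proof assistant than an algorithm with a global, mutable context.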

Orchard, Dominic 
ICFP '19: "Quantitative Program Reasoning ..."
Quantitative Program Reasoning with Graded Modal Types
Dominic Orchard, Vilem-Benjamin Liepelt, and Harley Eades III (University of Kent, UK; Augusta University, USA) In programming, some data acts as a resource (e.g., file handles, channels) subject to usage constraints. This poses a challenge to software correctness as most languages are agnostic to constraints on data. The approach of linear types provides a partial remedy, delineating data into resources to be used but never copied or discarded, and unconstrained values. Bounded Linear Logic provides a more fine-grained approach, quantifying non-linear use via an indexed family of modalities. Recent work on coeffect types generalises this idea to graded comonads, providing type systems which can capture various program properties. Here, we propose the umbrella notion of graded modal types, encompassing coeffect types and dual notions of type-based effect reasoning via graded monads. In combination with linear and indexed types, we show that graded modal types provide an expressive type theory for quantitative program reasoning, advancing the reach of type systems to capture and verify a broader set of program properties. We demonstrate this approach via a type system embodied in a fully-fledged functional language called Granule, exploring various examples. @Article{ICFP19p110, author = {Dominic Orchard and Vilem-Benjamin Liepelt and Harley Eades III}, title = {Quantitative Program Reasoning with Graded Modal Types}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {110}, numpages = {30}, doi = {10.1145/3341714}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Osvald, Leo 
ICFP '19: "Compiling with Continuations, ..."
Compiling with Continuations, or without? Whatever.
Youyou Cong, Leo Osvald, Grégory M. Essertel, and Tiark Rompf (Tokyo Institute of Technology, Japan; Purdue University, USA) What makes a good compiler IR? In the context of functional languages, there has been an extensive debate on the advantages and disadvantages of continuation-passing style (CPS). The consensus seems to be that some form of explicit continuations is necessary to model jumps in a functional style, but that they should have a 2nd-class status, separate from regular functions, to ensure efficient code generation. Building on this observation, a recent study from PLDI 2017 proposed a direct-style IR with explicit join points, which essentially represent local continuations, i.e., functions that do not return or escape. While this IR can work well in practice, as evidenced by the implementation of join points in the Glasgow Haskell Compiler (GHC), there still seems to be room for improvement, especially with regard to the way continuations are handled in the course of optimization. In this paper, we contribute to the CPS debate by developing a novel IR with the following features. First, we integrate a control operator that resembles Felleisen’s C, eliminating certain redundant rewrites observed in the previous study. Second, we treat the non-returning and non-escaping aspects of continuations separately, allowing efficient compilation of well-behaved functions defined by the user. Third, we define a selective CPS translation of our IR, which erases control operators while preserving the meaning and typing of programs. These features enable optimizations in both direct style and full CPS, as well as in any intermediate style with selectively exposed continuations. Thus, we change the spectrum of available options from “CPS yes or no” to “as much or as little CPS as you want, when you want it”. @Article{ICFP19p79, author = {Youyou Cong and Leo Osvald and Grégory M. Essertel and Tiark Rompf}, title = {Compiling with Continuations, or without? 
Whatever.}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {79}, numpages = {28}, doi = {10.1145/3341643}, year = {2019}, } Publisher's Version Article Search 
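For readers new to the debate: CPS makes control flow explicit by passing every intermediate result to a continuation, so that every call is a tail call. The same function in direct style and in CPS, as a generic illustration rather than anything resembling the paper's IR:

```haskell
-- Direct style: the runtime call stack implicitly remembers the
-- pending multiplications.
factDirect :: Integer -> Integer
factDirect 0 = 1
factDirect n = n * factDirect (n - 1)

-- CPS: the continuation k reifies "what to do next" as a function.
-- Every call is a tail call, and control flow is an ordinary value
-- that a compiler can inspect, split, or optimize.
factCPS :: Integer -> (Integer -> r) -> r
factCPS 0 k = k 1
factCPS n k = factCPS (n - 1) (\r -> k (n * r))
```

Running the CPS version with the identity continuation recovers the direct-style answer: `factCPS 5 id == factDirect 5`. The paper's "selective" translation is about choosing, per program point, whether continuations are left implicit as in `factDirect` or exposed as in `factCPS`.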

Paraskevopoulou, Zoe 
ICFP '19: "Closure Conversion Is Safe ..."
Closure Conversion Is Safe for Space
Zoe Paraskevopoulou and Andrew W. Appel (Princeton University, USA) We formally prove that closure conversion with flat environments for CPS lambda calculus is correct (preserves semantics) and safe for time and space, meaning that produced code preserves the time and space required for the execution of the source program. We give a cost model to pre- and post-closure-conversion code by formalizing profiling semantics that keep track of the time and space resources needed for the execution of a program, taking garbage collection into account. To show preservation of time and space we set up a general, "garbage-collection compatible", binary logical relation that establishes invariants on resource consumption of the related programs, along with functional correctness. Using this framework, we show semantics preservation and space and time safety for terminating source programs, and divergence preservation and space safety for diverging source programs. This is the first formal proof of space safety of a closure-conversion transformation. 
The transformation and the proof are parts of the CertiCoq compiler pipeline from Coq (Gallina) through CompCert Clight to assembly language. Our results are mechanized in the Coq proof assistant. @Article{ICFP19p83, author = {Zoe Paraskevopoulou and Andrew W. Appel}, title = {Closure Conversion Is Safe for Space}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {83}, numpages = {29}, doi = {10.1145/3341687}, year = {2019}, } Publisher's Version Article Search Artifacts Functional 
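The notion of a closure with a flat environment can be made concrete with a tiny de Bruijn-indexed evaluator in which a λ packages its code together with a plain list of captured values. This is a toy model for intuition only, not the CertiCoq transformation or its cost model; all names are invented.

```haskell
-- De Bruijn-indexed lambda terms: Var 0 is the nearest binder.
data Tm = Var Int | Lam Tm | App Tm Tm deriving (Eq, Show)

-- A runtime value is a closure: code paired with a flat environment,
-- i.e. a simple list holding the values of the enclosing variables.
-- Flat environments are what the paper's space-safety argument is about:
-- a closure retains exactly this list, and nothing more, for the GC.
data Val = Clo Tm [Val] deriving (Eq, Show)

eval :: [Val] -> Tm -> Val
eval env (Var i)   = env !! i          -- look up in the flat environment
eval env (Lam b)   = Clo b env         -- capture the current environment
eval env (App f a) =
  let Clo body cenv = eval env f
      arg           = eval env a
  in eval (arg : cenv) body            -- extend the callee's environment
```

For example, evaluating ((λx.λy.x) id) w returns the closure built for id, regardless of w, because the inner λ's flat environment recorded id at index 1.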

Patterson, Daniel 
ICFP '19: "The Next 700 Compiler Correctness ..."
The Next 700 Compiler Correctness Theorems (Functional Pearl)
Daniel Patterson and Amal Ahmed (Northeastern University, USA) Compiler correctness is an old problem, with results stretching back beyond the last half-century. Founding the field, John McCarthy and James Painter set out to build a "completely trustworthy compiler". And yet, until quite recently, even despite truly impressive verification efforts, the theorems being proved were only about the compilation of whole programs, a theoretically quite appealing but practically unrealistic simplification. For a compiler correctness theorem to assure complete trust, the theorem must reflect the reality of how the compiler will be used. There has been much recent work on more realistic "compositional" compiler correctness aimed at proving correct compilation of components while supporting linking with components compiled from different languages using different compilers. However, the variety of theorems, stated in remarkably different ways, raises questions about what researchers even mean when they say a "compiler is correct." In this pearl, we develop a new framework with which to understand compiler correctness theorems in the presence of linking, and apply it to understanding and comparing this diversity of results. In doing so, not only are we better able to assess their relative strengths and weaknesses, but we also gain insight into what we as a community should expect from compiler correctness theorems of the future. @Article{ICFP19p85, author = {Daniel Patterson and Amal Ahmed}, title = {The Next 700 Compiler Correctness Theorems (Functional Pearl)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {85}, numpages = {29}, doi = {10.1145/3341689}, year = {2019}, } Publisher's Version Article Search Info 

Pédrot, Pierre-Marie 
ICFP '19: "A Reasonably Exceptional Type ..."
A Reasonably Exceptional Type Theory
Pierre-Marie Pédrot, Nicolas Tabareau, Hans Jacob Fehrmann, and Éric Tanter (Inria, France; University of Chile, Chile) Traditional approaches to compensate for the lack of exceptions in type theories for proof assistants have severe drawbacks from both a programming and a reasoning perspective. Pédrot and Tabareau recently extended the Calculus of Inductive Constructions (CIC) with exceptions. The new exceptional type theory is interpreted by a translation into CIC, covering full dependent elimination, decidable type-checking and canonicity. However, the exceptional theory is inconsistent as a logical system. To recover consistency, Pédrot and Tabareau propose an additional translation that uses parametricity to enforce that all exceptions are caught locally. While this enforcement brings logical expressivity gains over CIC, it completely prevents reasoning about exceptional programs such as partial functions. This work addresses the dilemma between exceptions and consistency in a more flexible manner, with the Reasonably Exceptional Type Theory (RETT). RETT is structured in three layers: (a) the exceptional layer, in which all terms can raise exceptions; (b) the mediation layer, in which exceptional terms must be provably parametric; (c) the pure layer, in which terms are non-exceptional, but can refer to exceptional terms. We present the general theory of RETT, where each layer is realized by a predicative hierarchy of universes, and develop an instance of RETT in Coq: the impure layer corresponds to the predicative universe hierarchy, the pure layer is realized by the impredicative universe of propositions, and the mediation layer is reified via a parametricity type class. RETT is the first full dependent type theory to support consistent reasoning about exceptional terms, and the Coq-RETT plugin readily brings this ability to Coq programmers. 
@Article{ICFP19p108, author = {Pierre-Marie Pédrot and Nicolas Tabareau and Hans Jacob Fehrmann and Éric Tanter}, title = {A Reasonably Exceptional Type Theory}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {108}, numpages = {29}, doi = {10.1145/3341712}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Peyton Jones, Simon 
ICFP '19: "Efficient Differentiable Programming ..."
Efficient Differentiable Programming in a Functional Array-Processing Language
Amir Shaikhha, Andrew Fitzgibbon, Dimitrios Vytiniotis, and Simon Peyton Jones (University of Oxford, UK; Microsoft Research, UK; DeepMind, UK) We present a system for the automatic differentiation (AD) of a higher-order functional array-processing language. The core functional language underlying this system simultaneously supports both source-to-source forward-mode AD and global optimisations such as loop transformations. We show that, in combination, gradient computation with forward-mode AD can be as efficient as reverse mode, and that the Jacobian matrices required for numerical algorithms such as Gauss-Newton and Levenberg-Marquardt can be efficiently computed. @Article{ICFP19p97, author = {Amir Shaikhha and Andrew Fitzgibbon and Dimitrios Vytiniotis and Simon Peyton Jones}, title = {Efficient Differentiable Programming in a Functional Array-Processing Language}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {97}, numpages = {30}, doi = {10.1145/3341701}, year = {2019}, } Publisher's Version Article Search ICFP '19: "Higher-Order Type-Level Programming ..." Higher-Order Type-Level Programming in Haskell Csongor Kiss, Tony Field, Susan Eisenbach, and Simon Peyton Jones (Imperial College London, UK; Microsoft Research, UK) Type family applications in Haskell must be fully saturated. This means that all type-level functions have to be first-order, leading to code that is both messy and long-winded. In this paper we detail an extension to GHC that removes this restriction. We augment Haskell’s existing type arrow, →, with an unmatchable arrow, ↠, that supports partial application of type families without compromising soundness. A soundness proof is provided. We show how the techniques described can lead to substantial code-size reduction (circa 80%) in the type-level logic of commonly-used type-level libraries whilst simultaneously improving code quality and readability. 
@Article{ICFP19p102, author = {Csongor Kiss and Tony Field and Susan Eisenbach and Simon Peyton Jones}, title = {Higher-Order Type-Level Programming in Haskell}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {102}, numpages = {26}, doi = {10.1145/3341706}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
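Forward-mode AD, as in the first paper above, can be sketched with dual numbers: each value carries its derivative alongside it, and the chain rule falls out of overloaded arithmetic. This is the standard textbook construction, not the paper's array language or its optimiser; the names are invented.

```haskell
-- A dual number: a value and its derivative with respect to the input.
data Dual = Dual { primal :: Double, tangent :: Double } deriving (Eq, Show)

instance Num Dual where
  Dual x dx + Dual y dy = Dual (x + y) (dx + dy)
  Dual x dx - Dual y dy = Dual (x - y) (dx - dy)
  Dual x dx * Dual y dy = Dual (x * y) (dx * y + x * dy)  -- product rule
  fromInteger n = Dual (fromInteger n) 0   -- constants have derivative 0
  abs    (Dual x dx) = Dual (abs x) (dx * signum x)
  signum (Dual x _)  = Dual (signum x) 0

-- Differentiate f at x by seeding the tangent with 1 and reading it
-- back out after running f on the overloaded arithmetic.
diff :: (Dual -> Dual) -> Double -> Double
diff f x = tangent (f (Dual x 1))
```

For instance, `diff (\x -> x*x + 3*x) 2` computes the derivative 2x + 3 at x = 2, giving 7. Forward mode evaluates the derivative in one pass per input; the paper's claim is that, with the right global loop optimisations, this can match reverse mode for gradients.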

Pientka, Brigitte 
ICFP '19: "Teaching the Art of Functional ..."
Teaching the Art of Functional Programming using Automated Grading (Experience Report)
Aliya Hameer and Brigitte Pientka (McGill University, Canada) Online programming platforms have immense potential to improve students' educational experience. They make programming more accessible, as no installation is required; and automatic grading facilities provide students with immediate feedback on their code, allowing them to fix bugs and address errors in their understanding right away. However, these graders tend to focus heavily on the functional correctness of a solution, neglecting other aspects of students' code and thereby causing students to miss out on a significant amount of valuable feedback. In this paper, we recount our experience in using the Learn-OCaml online programming platform to teach functional programming in a second-year university course on programming languages and paradigms. Moreover, we explore how to leverage Learn-OCaml's automated grading infrastructure to make it easy to write more expressive graders that give students feedback on properties of their code beyond simple input/output correctness, in order to effectively teach elements of functional programming style. In particular, we describe our extensions to the Learn-OCaml platform that evaluate students on test quality and code style. By providing these tools and a suite of our own homework problems and associated graders, we aim to promote functional programming education, enhance students' educational experience, and make teaching and learning typed functional programming more accessible to instructors and students alike, in our community and beyond. @Article{ICFP19p115, author = {Aliya Hameer and Brigitte Pientka}, title = {Teaching the Art of Functional Programming using Automated Grading (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {115}, numpages = {15}, doi = {10.1145/3341719}, year = {2019}, } Publisher's Version Article Search Info Artifacts Available Artifacts Functional 
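The input/output-correctness baseline that the report argues graders should go beyond is, in a functional language, just comparing a student's function against a reference on a battery of inputs. A minimal sketch (invented names; the actual Learn-OCaml graders are written in OCaml and are far richer, covering style and test quality):

```haskell
-- Compare a submission against a reference solution on a list of test
-- inputs, returning (passed, total) plus the failing inputs rendered
-- as strings, so they can be shown back to the student as feedback.
grade :: (Eq b, Show a) => (a -> b) -> (a -> b) -> [a] -> ((Int, Int), [String])
grade reference submission inputs =
  let failures = [ show x | x <- inputs, submission x /= reference x ]
  in ((length inputs - length failures, length inputs), failures)
```

A correct submission scores full marks with no failures; a buggy one gets back exactly the inputs on which it disagrees with the reference, which is the kind of immediate, actionable feedback the report builds on.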

Pierce, Benjamin C. 
ICFP '19: "Synthesizing Symmetric Lenses ..."
Synthesizing Symmetric Lenses
Anders Miltner, Solomon Maina, Kathleen Fisher, Benjamin C. Pierce, David Walker, and Steve Zdancewic (Princeton University, USA; University of Pennsylvania, USA; Tufts University, USA) Lenses are programs that can be run both "front to back" and "back to front," allowing updates to either their source or their target data to be transferred in both directions. Since their introduction by Foster et al., lenses have been extensively studied, extended, and applied. Recent work has also demonstrated how techniques from type-directed program synthesis can be used to efficiently synthesize a simple class of lenses, so-called bijective lenses over string data, given a pair of types (regular expressions) and a small number of examples. We extend this synthesis algorithm to a much broader class of lenses, called simple symmetric lenses, including all bijective lenses, all of the popular category of "asymmetric" lenses, and a rich subset of the more powerful "symmetric lenses" proposed by Hofmann et al. Intuitively, simple symmetric lenses allow some information to be present on one side but not the other and vice versa. They are of independent theoretical interest, being the largest class of symmetric lenses that do not rely on persistent internal state. Synthesizing simple symmetric lenses is substantially more challenging than synthesizing bijective lenses: Since some of the information on each side can be "disconnected" from the other side, there will, in general, be many lenses that agree with a given example. To guide the search process, we use stochastic regular expressions and ideas from information theory to estimate the amount of information propagated by a candidate lens, generally preferring lenses that propagate more information, as well as user annotations marking parts of the source and target data structures as either irrelevant or essential. 
We describe an implementation of simple symmetric lenses and our synthesis procedure as extensions to the Boomerang language. We evaluate its performance on 48 benchmark examples drawn from Flash Fill, Augeas, the bidirectional programming literature, and electronic file format synchronization tasks. Our implementation can synthesize each of these lenses in under 30 seconds. @Article{ICFP19p95, author = {Anders Miltner and Solomon Maina and Kathleen Fisher and Benjamin C. Pierce and David Walker and Steve Zdancewic}, title = {Synthesizing Symmetric Lenses}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {95}, numpages = {28}, doi = {10.1145/3341699}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional ICFP '19: "Fuzzi: A Three-Level Logic ..." Fuzzi: A Three-Level Logic for Differential Privacy Hengchu Zhang, Edo Roth, Andreas Haeberlen, Benjamin C. Pierce, and Aaron Roth (University of Pennsylvania, USA) Curators of sensitive datasets sometimes need to know whether queries against the data are differentially private. Two sorts of logics have been proposed for checking this property: (1) type systems and other static analyses, which fully automate straightforward reasoning with concepts like “program sensitivity” and “privacy loss,” and (2) full-blown program logics such as apRHL (an approximate, probabilistic, relational Hoare logic), which support more flexible reasoning about subtle privacy-preserving algorithmic techniques but offer only minimal automation. We propose a three-level logic for differential privacy in an imperative setting and present a prototype implementation called Fuzzi. Fuzzi’s lowest level is a general-purpose logic; its middle level is apRHL; and its top level is a novel sensitivity logic adapted from the linear-logic-inspired type system of Fuzz, a differentially private functional language. 
The key novelty is a high degree of integration between the sensitivity logic and the two lower-level logics: the judgments and proofs of the sensitivity logic can be easily translated into apRHL; conversely, privacy properties of key algorithmic building blocks can be proved manually in apRHL and the base logic, then packaged up as typing rules that can be applied by a checker for the sensitivity logic to automatically construct privacy proofs for composite programs of arbitrary size. We demonstrate Fuzzi’s utility by implementing four different private machine-learning algorithms and showing that Fuzzi’s checker is able to derive tight sensitivity bounds. @Article{ICFP19p93, author = {Hengchu Zhang and Edo Roth and Andreas Haeberlen and Benjamin C. Pierce and Aaron Roth}, title = {Fuzzi: A Three-Level Logic for Differential Privacy}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {93}, numpages = {28}, doi = {10.1145/3341697}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
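The simplest point in the design space the lens-synthesis entry above covers is the asymmetric lens: a get/put pair expected to satisfy two round-trip laws. The following is the textbook definition with invented names, unrelated to the synthesis algorithm itself:

```haskell
-- A lens from a source s to a view a: extract the view, or push an
-- updated view back into the source, preserving the rest of s.
data Lens s a = Lens { getL :: s -> a, putL :: s -> a -> s }

-- Focus on the first component of a pair; the second is "disconnected"
-- data that put must preserve.
fstL :: Lens (a, b) a
fstL = Lens fst (\(_, b) a' -> (a', b))

-- The two classic round-trip laws, checked pointwise.
-- GetPut: putting back what you got changes nothing.
getPut :: Eq s => Lens s a -> s -> Bool
getPut l s = putL l s (getL l s) == s

-- PutGet: after putting a view, getting returns exactly that view.
putGet :: Eq a => Lens s a -> s -> a -> Bool
putGet l s a = getL l (putL l s a) == a
```

Symmetric lenses generalize this so that each side may hold information invisible to the other, which is exactly what makes synthesis harder: many candidate lenses agree on the visible part.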

Piessens, Frank 
ICFP '19: "Linear Capabilities for Fully ..."
Linear Capabilities for Fully Abstract Compilation of Separation-Logic-Verified Code
Thomas Van Strydonck, Frank Piessens, and Dominique Devriese (KU Leuven, Belgium; Vrije Universiteit Brussel, Belgium) Separation logic is a powerful program logic for the static modular verification of imperative programs. However, dynamic checking of separation logic contracts on the boundaries between verified and untrusted modules is hard, because it requires one to enforce (among other things) that outcalls from a verified to an untrusted module do not access memory resources currently owned by the verified module. This paper proposes an approach to dynamic contract checking by relying on support for capabilities, a well-studied form of unforgeable memory pointers that enables fine-grained, efficient memory access control. More specifically, we rely on a form of capabilities called linear capabilities for which the hardware enforces that they cannot be copied. We formalize our approach as a fully abstract compiler from a statically verified source language to an unverified target language with support for linear capabilities. The key insight behind our compiler is that memory resources described by spatial separation logic predicates can be represented at run time by linear capabilities. The compiler is separation-logic-proof-directed: it uses the separation logic proof of the source program to determine how memory accesses in the source program should be compiled to linear capability accesses in the target program. The full abstraction property of the compiler essentially guarantees that compiled verified modules can interact with untrusted target language modules as if they were compiled from verified code as well. @Article{ICFP19p84, author = {Thomas Van Strydonck and Frank Piessens and Dominique Devriese}, title = {Linear Capabilities for Fully Abstract Compilation of Separation-Logic-Verified Code}, journal = {Proc. ACM Program. 
Lang.}, volume = {3}, number = {ICFP}, articleno = {84}, numpages = {29}, doi = {10.1145/3341688}, year = {2019}, } Publisher's Version Article Search Artifacts Available 

Pit-Claudel, Clément 
ICFP '19: "Narcissus: Correct-by-Construction ..."
Narcissus: Correct-by-Construction Derivation of Decoders and Encoders from Binary Formats
Benjamin Delaware, Sorawit Suriyakarn, Clément Pit-Claudel, Qianchuan Ye, and Adam Chlipala (Purdue University, USA; Band Protocol, Thailand; Massachusetts Institute of Technology, USA) It is a neat result from functional programming that libraries of parser combinators can support rapid construction of decoders for quite a range of formats. With a little more work, the same combinator program can denote both a decoder and an encoder. Unfortunately, the real world is full of gnarly formats, as with the packet formats that make up the standard Internet protocol stack. Most past parser-combinator approaches cannot handle these formats, and the few exceptions require redundancy – one part of the natural grammar needs to be hand-translated into hints in multiple parts of a parser program. We show how to recover very natural and non-redundant format specifications, covering all popular network packet formats and generating both decoders and encoders automatically. The catch is that we use the Coq proof assistant to derive both kinds of artifacts using tactics, automatically, in a way that guarantees that they form inverses of each other. We used our approach to reimplement packet processing for a full Internet protocol stack, inserting our replacement into the OCaml-based MirageOS unikernel, resulting in minimal performance degradation. @Article{ICFP19p82, author = {Benjamin Delaware and Sorawit Suriyakarn and Clément Pit-Claudel and Qianchuan Ye and Adam Chlipala}, title = {Narcissus: Correct-by-Construction Derivation of Decoders and Encoders from Binary Formats}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {82}, numpages = {29}, doi = {10.1145/3341686}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
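The inverse property that Narcissus derives mechanically in Coq can be stated for a deliberately trivial length-prefixed format. This is a hand-written toy to make the specification concrete, not the paper's derivation; the names and the format are invented.

```haskell
-- A toy binary format: a list of Ints encoded as its length followed
-- by its elements. Real formats (IP, TCP, ...) compose many such
-- fields, which is where mechanical derivation pays off.
encode :: [Int] -> [Int]
encode xs = length xs : xs

-- Decode one value from the front of a stream, returning the decoded
-- value and the remaining stream, or Nothing on a malformed prefix.
decode :: [Int] -> Maybe ([Int], [Int])
decode (n : rest)
  | n >= 0 && length rest >= n = Just (splitAt n rest)
  | otherwise                  = Nothing
decode [] = Nothing

-- The round-trip property an encoder/decoder pair must satisfy:
-- decoding an encoding recovers the value and leaves trailing data
-- (the rest of the packet) untouched.
roundTrip :: [Int] -> [Int] -> Bool
roundTrip xs trailing = decode (encode xs ++ trailing) == Just (xs, trailing)
```

In Narcissus this round-trip law is not tested but proved, once per format specification, by tactics inside Coq; the sketch above only shows what the law says.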

Qu, Weihao 
ICFP '19: "Relational Cost Analysis for ..."
Relational Cost Analysis for Functional-Imperative Programs
Weihao Qu, Marco Gaboardi, and Deepak Garg (SUNY Buffalo, USA; MPI-SWS, Germany) Relational cost analysis aims at formally establishing bounds on the difference in the evaluation costs of two programs. As a particular case, one can also use relational cost analysis to establish bounds on the difference in the evaluation cost of the same program on two different inputs. One way to perform relational cost analysis is to use a relational type-and-effect system that supports reasoning about relations between two executions of two programs. Building on this basic idea, we present a type-and-effect system, called ARel, for reasoning about the relative cost of array-manipulating, higher-order functional-imperative programs. The key ingredient of our approach is a new lightweight type refinement discipline that we use to track relations (differences) between two mutable arrays. This discipline combined with Hoare-style triples built into the types allows us to express and establish precise relative costs of several interesting programs which imperatively update their data. We have implemented ARel using ideas from bidirectional type checking. @Article{ICFP19p92, author = {Weihao Qu and Marco Gaboardi and Deepak Garg}, title = {Relational Cost Analysis for Functional-Imperative Programs}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {92}, numpages = {29}, doi = {10.1145/3341696}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Radhakrishna, Arjun 
ICFP '19: "Sequential Programming for ..."
Sequential Programming for Replicated Data Stores
Nicholas V. Lewchenko, Arjun Radhakrishna, Akash Gaonkar, and Pavol Černý (University of Colorado Boulder, USA; Microsoft, USA) We introduce Carol, a refinement-typed programming language for replicated data stores. The salient feature of Carol is that it allows programming and verifying replicated store operations modularly, without consideration of other operations that might interleave, and sequentially, without requiring reference to or knowledge of the concurrent execution model. This is in stark contrast with existing systems, which require understanding the concurrent interactions of all pairs of operations when developing or verifying them. The key enabling idea is the consistency guard, a two-state predicate relating the locally-viewed store and the hypothetical remote store that an operation’s updates may eventually be applied to, which is used by the Carol programmer to declare their precise consistency requirements. Guards appear to the programmer and refinement typechecker as simple data preconditions, enabling sequential reasoning, while appearing to the distributed runtime as consistency control instructions. We implement and evaluate the Carol system in two parts: (1) the algorithm used to statically translate guards into the runtime coordination actions required to enforce them, and (2) the networked-replica runtime which executes arbitrary operations, written in a Haskell DSL, according to the Carol language semantics. @Article{ICFP19p106, author = {Nicholas V. Lewchenko and Arjun Radhakrishna and Akash Gaonkar and Pavol Černý}, title = {Sequential Programming for Replicated Data Stores}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {106}, numpages = {28}, doi = {10.1145/3341710}, year = {2019}, } Publisher's Version Article Search 

Ravitch, Tristan 
ICFP '19: "Dependently Typed Haskell ..."
Dependently Typed Haskell in Industry (Experience Report)
David Thrane Christiansen, Iavor S. Diatchki, Robert Dockins, Joe Hendrix, and Tristan Ravitch (Galois, USA) Recent versions of the Haskell compiler GHC have a number of advanced features that allow many idioms from dependently typed programming to be encoded. We describe our experiences using this "dependently typed Haskell" to construct a performance-critical library that is a key component in a number of verification tools. We have discovered that it can be done, and it brings significant value, but also at a high cost. In this experience report, we describe the ways in which programming at the edge of what is expressible in Haskell's type system has brought us value, the difficulties that it has imposed, and some of the ways we coped with the difficulties. @Article{ICFP19p100, author = {David Thrane Christiansen and Iavor S. Diatchki and Robert Dockins and Joe Hendrix and Tristan Ravitch}, title = {Dependently Typed Haskell in Industry (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {100}, numpages = {16}, doi = {10.1145/3341704}, year = {2019}, } Publisher's Version Article Search 

Rivas, Exequiel 
ICFP '19: "Dijkstra Monads for All ..."
Dijkstra Monads for All
Kenji Maillard, Danel Ahman, Robert Atkey, Guido Martínez, Cătălin Hriţcu, Exequiel Rivas, and Éric Tanter (Inria, France; ENS, France; University of Ljubljana, Slovenia; University of Strathclyde, UK; CIFASIS-CONICET, Argentina; University of Chile, Chile) This paper proposes a general semantic framework for verifying programs with arbitrary monadic side-effects using Dijkstra monads, which we define as monad-like structures indexed by a specification monad. We prove that any monad morphism between a computational monad and a specification monad gives rise to a Dijkstra monad, which provides great flexibility for obtaining Dijkstra monads tailored to the verification task at hand. We moreover show that a large variety of specification monads can be obtained by applying monad transformers to various base specification monads, including predicate transformers and Hoare-style pre- and postconditions. For defining correct monad transformers, we propose a language inspired by Moggi's monadic metalanguage that is parameterized by a dependent type theory. We also develop a notion of algebraic operations for Dijkstra monads, and start to investigate two ways of also accommodating effect handlers. We implement our framework in both Coq and F*, and illustrate that it supports a wide variety of verification styles for effects such as exceptions, nondeterminism, state, input-output, and general recursion. @Article{ICFP19p104, author = {Kenji Maillard and Danel Ahman and Robert Atkey and Guido Martínez and Cătălin Hriţcu and Exequiel Rivas and Éric Tanter}, title = {Dijkstra Monads for All}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {104}, numpages = {29}, doi = {10.1145/3341708}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
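The abstract's central claim, that any monad morphism from a computational monad into a specification monad of predicate transformers yields a verification-oriented Dijkstra monad, can be illustrated with a minimal sketch. This is not the paper's Coq/F* development; the exception-monad encoding and every name below (`ret`, `bind`, `throw`, `theta`, `div`) are illustrative assumptions.

```python
# Illustrative sketch only: a monad morphism from the exception monad into a
# specification monad of predicate transformers, giving weakest preconditions.

def ret(v):
    """Exception monad: a successful computation."""
    return ("ok", v)

def bind(m, f):
    """Exception monad: sequencing, short-circuiting on failure."""
    tag, payload = m
    return f(payload) if tag == "ok" else m

def throw(msg):
    """Exception monad: a failing computation."""
    return ("exn", msg)

def theta(m):
    """Monad morphism into the specification monad: a computation becomes a
    predicate transformer mapping a postcondition on outcomes to a
    proposition (here, a bool) -- its weakest precondition."""
    return lambda post: post(m)

# Example client: division that throws on zero.
def div(a, b):
    return throw("div by zero") if b == 0 else ret(a // b)

# Spec: the program must succeed and return an even number.
post = lambda outcome: outcome[0] == "ok" and outcome[1] % 2 == 0

print(theta(bind(ret(8), lambda x: div(x, 2)))(post))  # True: 8 // 2 is even
print(theta(div(1, 0))(post))                          # False: the exception escapes
```

In the paper this construction is carried out once and for all: the same recipe then covers state, nondeterminism, and input-output by choosing other computational and specification monads.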

Rompf, Tiark 
ICFP '19: "Compiling with Continuations, ..."
Compiling with Continuations, or without? Whatever.
Youyou Cong, Leo Osvald, Grégory M. Essertel, and Tiark Rompf (Tokyo Institute of Technology, Japan; Purdue University, USA) What makes a good compiler IR? In the context of functional languages, there has been an extensive debate on the advantages and disadvantages of continuation-passing style (CPS). The consensus seems to be that some form of explicit continuations is necessary to model jumps in a functional style, but that they should have a 2nd-class status, separate from regular functions, to ensure efficient code generation. Building on this observation, a recent study from PLDI 2017 proposed a direct-style IR with explicit join points, which essentially represent local continuations, i.e., functions that do not return or escape. While this IR can work well in practice, as evidenced by the implementation of join points in the Glasgow Haskell Compiler (GHC), there still seems to be room for improvement, especially with regard to the way continuations are handled in the course of optimization. In this paper, we contribute to the CPS debate by developing a novel IR with the following features. First, we integrate a control operator that resembles Felleisen’s C, eliminating certain redundant rewrites observed in the previous study. Second, we treat the non-returning and non-escaping aspects of continuations separately, allowing efficient compilation of well-behaved functions defined by the user. Third, we define a selective CPS translation of our IR, which erases control operators while preserving the meaning and typing of programs. These features enable optimizations in both direct style and full CPS, as well as in any intermediate style with selectively exposed continuations. Thus, we change the spectrum of available options from “CPS yes or no” to “as much or as little CPS as you want, when you want it”. @Article{ICFP19p79, author = {Youyou Cong and Leo Osvald and Grégory M. Essertel and Tiark Rompf}, title = {Compiling with Continuations, or without? Whatever.}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {79}, numpages = {28}, doi = {10.1145/3341643}, year = {2019}, } Publisher's Version Article Search ICFP '19: "Demystifying Differentiable ..." Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator Fei Wang, Daniel Zheng, James Decker, Xilun Wu, Grégory M. Essertel, and Tiark Rompf (Purdue University, USA) Deep learning has seen tremendous success over the past decade in computer vision, machine translation, and gameplay. This success rests crucially on gradient-descent optimization and the ability to “learn” parameters of a neural network by backpropagating observed errors. However, neural network architectures are growing increasingly sophisticated and diverse, which motivates an emerging quest for even more general forms of differentiable programming, where arbitrary parameterized computations can be trained by gradient descent. In this paper, we take a fresh look at automatic differentiation (AD) techniques, and especially aim to demystify the reverse-mode form of AD that generalizes backpropagation in neural networks. We uncover a tight connection between reverse-mode AD and delimited continuations, which permits implementing reverse-mode AD purely via operator overloading and without managing any auxiliary data structures. We further show how this formulation of AD can be fruitfully combined with multi-stage programming (staging), leading to an efficient implementation that combines the performance benefits of deep learning frameworks based on explicit reified computation graphs (e.g., TensorFlow) with the expressiveness of pure library approaches (e.g., PyTorch). @Article{ICFP19p96, author = {Fei Wang and Daniel Zheng and James Decker and Xilun Wu and Grégory M. Essertel and Tiark Rompf}, title = {Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator}, journal = {Proc. ACM Program. 
Lang.}, volume = {3}, number = {ICFP}, articleno = {96}, numpages = {31}, doi = {10.1145/3341700}, year = {2019}, } Publisher's Version Article Search 
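The core technique of the second paper, reverse-mode AD via operator overloading, can be sketched briefly. The paper eliminates auxiliary data structures using delimited continuations (shift/reset, in Scala); this hedged Python stand-in records backward closures on an explicit tape instead, so it approximates rather than reproduces the paper's formulation, and all names are illustrative.

```python
# Hedged sketch of reverse-mode AD via operator overloading. The tape of
# closures below plays the role that delimited continuations play in the paper.

tape = []  # backward passes, recorded in forward execution order

class Num:
    def __init__(self, x):
        self.x = x    # primal value
        self.d = 0.0  # adjoint (gradient), accumulated during the backward pass

    def __add__(self, other):
        out = Num(self.x + other.x)
        def back():  # d(a+b)/da = d(a+b)/db = 1
            self.d += out.d
            other.d += out.d
        tape.append(back)
        return out

    def __mul__(self, other):
        out = Num(self.x * other.x)
        def back():  # d(a*b)/da = b, d(a*b)/db = a
            self.d += other.x * out.d
            other.d += self.x * out.d
        tape.append(back)
        return out

def grad(f, x):
    """Derivative of scalar f at x: run forward, then replay the tape backward."""
    tape.clear()
    v = Num(x)
    out = f(v)
    out.d = 1.0
    for back in reversed(tape):
        back()
    return v.d

print(grad(lambda x: x * x + x, 3.0))  # 7.0, since d/dx(x^2 + x) = 2x + 1
```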

Roth, Aaron 
ICFP '19: "Fuzzi: A Three-Level Logic ..."
Fuzzi: A Three-Level Logic for Differential Privacy
Hengchu Zhang, Edo Roth, Andreas Haeberlen, Benjamin C. Pierce, and Aaron Roth (University of Pennsylvania, USA) Curators of sensitive datasets sometimes need to know whether queries against the data are differentially private. Two sorts of logics have been proposed for checking this property: (1) type systems and other static analyses, which fully automate straightforward reasoning with concepts like “program sensitivity” and “privacy loss,” and (2) full-blown program logics such as apRHL (an approximate, probabilistic, relational Hoare logic), which support more flexible reasoning about subtle privacy-preserving algorithmic techniques but offer only minimal automation. We propose a three-level logic for differential privacy in an imperative setting and present a prototype implementation called Fuzzi. Fuzzi’s lowest level is a general-purpose logic; its middle level is apRHL; and its top level is a novel sensitivity logic adapted from the linear-logic-inspired type system of Fuzz, a differentially private functional language. The key novelty is a high degree of integration between the sensitivity logic and the two lower-level logics: the judgments and proofs of the sensitivity logic can be easily translated into apRHL; conversely, privacy properties of key algorithmic building blocks can be proved manually in apRHL and the base logic, then packaged up as typing rules that can be applied by a checker for the sensitivity logic to automatically construct privacy proofs for composite programs of arbitrary size. We demonstrate Fuzzi’s utility by implementing four different private machine-learning algorithms and showing that Fuzzi’s checker is able to derive tight sensitivity bounds. @Article{ICFP19p93, author = {Hengchu Zhang and Edo Roth and Andreas Haeberlen and Benjamin C. Pierce and Aaron Roth}, title = {Fuzzi: A Three-Level Logic for Differential Privacy}, journal = {Proc. ACM Program. 
Lang.}, volume = {3}, number = {ICFP}, articleno = {93}, numpages = {28}, doi = {10.1145/3341697}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Roth, Edo 
ICFP '19: "Fuzzi: A Three-Level Logic ..."
Fuzzi: A Three-Level Logic for Differential Privacy
Hengchu Zhang, Edo Roth, Andreas Haeberlen, Benjamin C. Pierce, and Aaron Roth (University of Pennsylvania, USA) Curators of sensitive datasets sometimes need to know whether queries against the data are differentially private. Two sorts of logics have been proposed for checking this property: (1) type systems and other static analyses, which fully automate straightforward reasoning with concepts like “program sensitivity” and “privacy loss,” and (2) full-blown program logics such as apRHL (an approximate, probabilistic, relational Hoare logic), which support more flexible reasoning about subtle privacy-preserving algorithmic techniques but offer only minimal automation. We propose a three-level logic for differential privacy in an imperative setting and present a prototype implementation called Fuzzi. Fuzzi’s lowest level is a general-purpose logic; its middle level is apRHL; and its top level is a novel sensitivity logic adapted from the linear-logic-inspired type system of Fuzz, a differentially private functional language. The key novelty is a high degree of integration between the sensitivity logic and the two lower-level logics: the judgments and proofs of the sensitivity logic can be easily translated into apRHL; conversely, privacy properties of key algorithmic building blocks can be proved manually in apRHL and the base logic, then packaged up as typing rules that can be applied by a checker for the sensitivity logic to automatically construct privacy proofs for composite programs of arbitrary size. We demonstrate Fuzzi’s utility by implementing four different private machine-learning algorithms and showing that Fuzzi’s checker is able to derive tight sensitivity bounds. @Article{ICFP19p93, author = {Hengchu Zhang and Edo Roth and Andreas Haeberlen and Benjamin C. Pierce and Aaron Roth}, title = {Fuzzi: A Three-Level Logic for Differential Privacy}, journal = {Proc. ACM Program. 
Lang.}, volume = {3}, number = {ICFP}, articleno = {93}, numpages = {28}, doi = {10.1145/3341697}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Schrijvers, Tom 
ICFP '19: "A Mechanical Formalization ..."
A Mechanical Formalization of Higher-Ranked Polymorphic Type Inference
Jinxu Zhao, Bruno C. d. S. Oliveira, and Tom Schrijvers (University of Hong Kong, China; KU Leuven, Belgium) Modern functional programming languages, such as Haskell or OCaml, use sophisticated forms of type inference. While an important topic in programming languages research, there is little work on the mechanization of the metatheory of type inference in theorem provers. In particular, we are unaware of any complete formalization of the type inference algorithms that are the backbone of modern functional languages. This paper presents the first full mechanical formalization of the metatheory for higher-ranked polymorphic type inference. The system that we formalize is the bidirectional type system by Dunfield and Krishnaswami (DK). The DK type system has two variants (a declarative and an algorithmic one) that have been manually proven sound, complete and decidable. We present a mechanical formalization in the Abella theorem prover of DK’s declarative type system with a novel algorithmic system. We have a few reasons to use a new algorithm. Firstly, our new algorithm employs worklist judgments, which precisely capture the scope of variables and simplify the formalization of scoping in a theorem prover. Secondly, while DK’s original formalization comes with very well-written manual proofs, there are several details missing and some incorrect proofs, which complicate the task of writing a mechanized proof. Despite the use of a different algorithm we prove the same results as DK, although with significantly different proofs and proof techniques. Since such type inference algorithms are quite subtle and have a complex metatheory, mechanical formalizations are an important advance in type-inference research. @Article{ICFP19p112, author = {Jinxu Zhao and Bruno C. d. S. Oliveira and Tom Schrijvers}, title = {A Mechanical Formalization of Higher-Ranked Polymorphic Type Inference}, journal = {Proc. ACM Program. 
Lang.}, volume = {3}, number = {ICFP}, articleno = {112}, numpages = {29}, doi = {10.1145/3341716}, year = {2019}, } Publisher's Version Article Search Info Artifacts Functional ICFP '19: "Coherence of Type Class Resolution ..." Coherence of Type Class Resolution Gert-Jan Bottu, Ningning Xie, Koar Marntirosian, and Tom Schrijvers (KU Leuven, Belgium; University of Hong Kong, China) Elaboration-based type class resolution, as found in languages like Haskell, Mercury and PureScript, is generally nondeterministic: there can be multiple ways to satisfy a wanted constraint in terms of global instances and locally given constraints. Coherence is the key property that keeps this sane; it guarantees that, despite the nondeterminism, programs still behave predictably. Even though elaboration-based resolution is generally assumed coherent, as far as we know, there is no formal proof of this property in the presence of sources of nondeterminism, like superclasses and flexible contexts. This paper provides a formal proof to remedy the situation. The proof is non-trivial because the semantics elaborates resolution into a target language where different elaborations can be distinguished by contexts that do not have a source language counterpart. Inspired by the notion of full abstraction, we present a two-step strategy that first elaborates nondeterministically into an intermediate language that preserves contextual equivalence, and then deterministically elaborates from there into the target language. We use an approach based on logical relations to establish contextual equivalence and thus coherence for the first step of elaboration, while the second step’s determinism straightforwardly preserves this coherence property. @Article{ICFP19p91, author = {Gert-Jan Bottu and Ningning Xie and Koar Marntirosian and Tom Schrijvers}, title = {Coherence of Type Class Resolution}, journal = {Proc. ACM Program. 
Lang.}, volume = {3}, number = {ICFP}, articleno = {91}, numpages = {28}, doi = {10.1145/3341695}, year = {2019}, } Publisher's Version Article Search 

Shaikhha, Amir 
ICFP '19: "Efficient Differentiable Programming ..."
Efficient Differentiable Programming in a Functional Array-Processing Language
Amir Shaikhha, Andrew Fitzgibbon, Dimitrios Vytiniotis, and Simon Peyton Jones (University of Oxford, UK; Microsoft Research, UK; DeepMind, UK) We present a system for the automatic differentiation (AD) of a higher-order functional array-processing language. The core functional language underlying this system simultaneously supports both source-to-source forward-mode AD and global optimisations such as loop transformations. In combination, gradient computation with forward-mode AD can be as efficient as reverse mode, and the Jacobian matrices required for numerical algorithms such as Gauss-Newton and Levenberg-Marquardt can be efficiently computed. @Article{ICFP19p97, author = {Amir Shaikhha and Andrew Fitzgibbon and Dimitrios Vytiniotis and Simon Peyton Jones}, title = {Efficient Differentiable Programming in a Functional Array-Processing Language}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {97}, numpages = {30}, doi = {10.1145/3341701}, year = {2019}, } Publisher's Version Article Search 
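A minimal sketch of the forward-mode AD the abstract builds on: dual numbers carry a value together with its tangent, and overloaded arithmetic applies the chain rule. This is illustrative Python, not the paper's typed array language; the `Dual` class and `deriv` helper are assumptions.

```python
# Forward-mode AD via dual numbers: each value is paired with its tangent.

class Dual:
    def __init__(self, x, dx=0.0):
        self.x = x    # value
        self.dx = dx  # tangent (derivative with respect to the input)

    def __add__(self, other):
        return Dual(self.x + other.x, self.dx + other.dx)

    def __mul__(self, other):
        # product rule: (uv)' = u'v + uv'
        return Dual(self.x * other.x,
                    self.dx * other.x + self.x * other.dx)

def deriv(f, x):
    """Derivative of f at x, obtained by seeding the tangent with 1."""
    return f(Dual(x, 1.0)).dx

print(deriv(lambda x: x * x * x, 2.0))  # 12.0, since d/dx(x^3) = 3x^2
```

Forward mode computes one directional derivative per pass; the abstract's point is that with global loop optimisations, the repeated passes needed for full Jacobians can match reverse mode for algorithms such as Gauss-Newton and Levenberg-Marquardt.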

Shan, Chung-chieh 
ICFP '19: "From High-Level Inference ..."
From High-Level Inference Algorithms to Efficient Code
Rajan Walia, Praveen Narayanan, Jacques Carette, Sam Tobin-Hochstadt, and Chung-chieh Shan (Indiana University, USA; McMaster University, Canada) Probabilistic programming languages are valuable because they allow domain experts to express probabilistic models and inference algorithms without worrying about irrelevant details. However, for decades there remained an important and popular class of probabilistic inference algorithms whose efficient implementation required manual low-level coding that is tedious and error-prone. They are algorithms whose idiomatic expression requires random array variables that are latent or whose likelihood is conjugate. Although that is how practitioners communicate and compose these algorithms on paper, executing such expressions requires eliminating the latent variables and recognizing the conjugacy by symbolic mathematics. Moreover, matching the performance of handwritten code requires speeding up loops by more than a constant factor. We show how probabilistic programs that directly and concisely express these desired inference algorithms can be compiled while maintaining efficiency. We introduce new transformations that turn high-level probabilistic programs with arrays into pure loop code. We then make great use of domain-specific invariants and norms to optimize the code, and to specialize and JIT-compile the code per execution. The resulting performance is competitive with manual implementations. @Article{ICFP19p98, author = {Rajan Walia and Praveen Narayanan and Jacques Carette and Sam Tobin-Hochstadt and Chung-chieh Shan}, title = {From High-Level Inference Algorithms to Efficient Code}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {98}, numpages = {30}, doi = {10.1145/3341702}, year = {2019}, } Publisher's Version Article Search Artifacts Available 

Sherman, Benjamin 
ICFP '19: "Sound and Robust Solid Modeling ..."
Sound and Robust Solid Modeling via Exact Real Arithmetic and Continuity
Benjamin Sherman, Jesse Michel, and Michael Carbin (Massachusetts Institute of Technology, USA) Algorithms for solid modeling, i.e., Computer-Aided Design (CAD) and computer graphics, are often specified on real numbers and then implemented with finite-precision arithmetic, such as floating-point. The result is that these implementations do not soundly compute the results that are expected from their specifications. We present a new library, StoneWorks, that provides sound and robust solid modeling primitives. We implement StoneWorks in MarshallB, a pure functional programming language for exact real arithmetic in which types denote topological spaces and functions denote continuous maps, ensuring that all programs are sound and robust. We developed MarshallB as an extension of the Marshall language. We also define a new shape representation, compact representation (K-rep), that enables constructions such as Minkowski sum and analyses such as Hausdorff distance that are not possible with traditional representations. K-rep is a nondeterminism monad for describing all the points in a shape. With our library, language, and representation together, we show that short StoneWorks programs can specify and execute sound and robust solid modeling algorithms and tasks. @Article{ICFP19p99, author = {Benjamin Sherman and Jesse Michel and Michael Carbin}, title = {Sound and Robust Solid Modeling via Exact Real Arithmetic and Continuity}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {99}, numpages = {29}, doi = {10.1145/3341703}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Smith, Calvin 
ICFP '19: "Synthesizing Differentially ..."
Synthesizing Differentially Private Programs
Calvin Smith and Aws Albarghouthi (University of Wisconsin-Madison, USA) Inspired by the proliferation of data-analysis tasks, recent research in program synthesis has had a strong focus on enabling users to specify data-analysis programs through intuitive specifications, like examples and natural language. However, with the ever-increasing threat to privacy through data analysis, we believe it is imperative to reimagine program synthesis technology in the presence of formal privacy constraints. In this paper, we study the problem of automatically synthesizing randomized, differentially private programs, where the user can provide the synthesizer with a constraint on the privacy of the desired algorithm. We base our technique on a linear dependent type system that can track the resources consumed by a program, and hence its privacy cost. We develop a novel type-directed synthesis algorithm that constructs randomized differentially private programs. We apply our technique to the problems of synthesizing database-like queries as well as recursive differential privacy mechanisms from the literature. @Article{ICFP19p94, author = {Calvin Smith and Aws Albarghouthi}, title = {Synthesizing Differentially Private Programs}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {94}, numpages = {29}, doi = {10.1145/3341698}, year = {2019}, } Publisher's Version Article Search 

Sozeau, Matthieu 
ICFP '19: "Equations Reloaded: High-Level ..."
Equations Reloaded: High-Level Dependently-Typed Functional Programming and Proving in Coq
Matthieu Sozeau and Cyprien Mangin (Inria, France; IRIF, France; University of Paris Diderot, France) Equations is a plugin for the Coq proof assistant which provides a notation for defining programs by dependent pattern-matching and structural or well-founded recursion. It additionally derives useful high-level proof principles for demonstrating properties about them, abstracting away from the implementation details of the function and its compiled form. We present a general design and implementation that provides a robust and expressive function definition package as a definitional extension to the Coq kernel. At the core of the system is a new simplifier for dependent equalities based on an original handling of the no-confusion property of constructors. @Article{ICFP19p86, author = {Matthieu Sozeau and Cyprien Mangin}, title = {Equations Reloaded: High-Level Dependently-Typed Functional Programming and Proving in Coq}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {86}, numpages = {29}, doi = {10.1145/3341690}, year = {2019}, } Publisher's Version Article Search Info Artifacts Available Artifacts Functional 

Spall, Sarah 
ICFP '19: "Rebuilding Racket on Chez ..."
Rebuilding Racket on Chez Scheme (Experience Report)
Matthew Flatt, Caner Derici, R. Kent Dybvig, Andrew W. Keep, Gustavo E. Massaccesi, Sarah Spall, Sam Tobin-Hochstadt, and Jon Zeppieri (University of Utah, USA; Indiana University, USA; Cisco Systems, USA; University of Buenos Aires, Argentina) We rebuilt Racket on Chez Scheme, and it works well—as long as we're allowed a few patches to Chez Scheme. DrRacket runs, the Racket distribution can build itself, and nearly all of the core Racket test suite passes. Maintainability and performance of the resulting implementation are good, although some work remains to improve end-to-end performance. The least predictable part of our effort was how big the differences between Racket and Chez Scheme would turn out to be and how we would manage those differences. We expect Racket on Chez Scheme to become the main Racket implementation, and we encourage other language implementers to consider Chez Scheme as a target virtual machine. @Article{ICFP19p78, author = {Matthew Flatt and Caner Derici and R. Kent Dybvig and Andrew W. Keep and Gustavo E. Massaccesi and Sarah Spall and Sam Tobin-Hochstadt and Jon Zeppieri}, title = {Rebuilding Racket on Chez Scheme (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {78}, numpages = {15}, doi = {10.1145/3341642}, year = {2019}, } Publisher's Version Article Search Info Artifacts Available Artifacts Functional 

Sterling, Jonathan 
ICFP '19: "Implementing a Modal Dependent ..."
Implementing a Modal Dependent Type Theory
Daniel Gratzer, Jonathan Sterling, and Lars Birkedal (Aarhus University, Denmark; Carnegie Mellon University, USA) Modalities are everywhere in programming and mathematics! Despite this, however, there are still significant technical challenges in formulating a core dependent type theory with modalities. We present a dependent type theory MLTT_{🔒} supporting the connectives of standard Martin-Löf Type Theory as well as an S4-style necessity operator. MLTT_{🔒} supports a smooth interaction between modal and dependent types and provides a common basis for the use of modalities in programming and in synthetic mathematics. We design and prove the soundness and completeness of a type checking algorithm for MLTT_{🔒}, using a novel extension of normalization by evaluation. We have also implemented our algorithm in a prototype proof assistant for MLTT_{🔒}, demonstrating the ease of applying our techniques. @Article{ICFP19p107, author = {Daniel Gratzer and Jonathan Sterling and Lars Birkedal}, title = {Implementing a Modal Dependent Type Theory}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {107}, numpages = {29}, doi = {10.1145/3341711}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Suriyakarn, Sorawit 
ICFP '19: "Narcissus: Correct-by-Construction ..."
Narcissus: Correct-by-Construction Derivation of Decoders and Encoders from Binary Formats
Benjamin Delaware, Sorawit Suriyakarn, Clément Pit-Claudel, Qianchuan Ye, and Adam Chlipala (Purdue University, USA; Band Protocol, Thailand; Massachusetts Institute of Technology, USA) It is a neat result from functional programming that libraries of parser combinators can support rapid construction of decoders for quite a range of formats. With a little more work, the same combinator program can denote both a decoder and an encoder. Unfortunately, the real world is full of gnarly formats, as with the packet formats that make up the standard Internet protocol stack. Most past parser-combinator approaches cannot handle these formats, and the few exceptions require redundancy – one part of the natural grammar needs to be hand-translated into hints in multiple parts of a parser program. We show how to recover very natural and non-redundant format specifications, covering all popular network packet formats and generating both decoders and encoders automatically. The catch is that we use the Coq proof assistant to derive both kinds of artifacts using tactics, automatically, in a way that guarantees that they form inverses of each other. We used our approach to reimplement packet processing for a full Internet protocol stack, inserting our replacement into the OCaml-based MirageOS unikernel, resulting in minimal performance degradation. @Article{ICFP19p82, author = {Benjamin Delaware and Sorawit Suriyakarn and Clément Pit-Claudel and Qianchuan Ye and Adam Chlipala}, title = {Narcissus: Correct-by-Construction Derivation of Decoders and Encoders from Binary Formats}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {82}, numpages = {29}, doi = {10.1145/3341686}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
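The "one combinator program, two artifacts" idea can be sketched with a toy format. This is a hedged Python stand-in, not the Coq development: Narcissus derives the artifacts by tactics with machine-checked inverse proofs, whereas nothing here is proved, and the `U8`/`U16`/`Seq` combinators and the header format are illustrative assumptions.

```python
# A single format specification that denotes both an encoder and a decoder.

import struct

class U8:
    def encode(self, v):
        return bytes([v])
    def decode(self, bs):
        return bs[0], bs[1:]  # decoded value, remaining bytes

class U16:
    def encode(self, v):
        return struct.pack(">H", v)  # big-endian 16-bit
    def decode(self, bs):
        return struct.unpack(">H", bs[:2])[0], bs[2:]

class Seq:
    """Sequence field formats; encodes and decodes a tuple of values."""
    def __init__(self, *fields):
        self.fields = fields
    def encode(self, vals):
        return b"".join(f.encode(v) for f, v in zip(self.fields, vals))
    def decode(self, bs):
        vals = []
        for f in self.fields:
            v, bs = f.decode(bs)
            vals.append(v)
        return tuple(vals), bs

# A toy "packet header": a u8 version field followed by a u16 length field.
header = Seq(U8(), U16())
wire = header.encode((4, 1500))
print(wire.hex())              # 0405dc
print(header.decode(wire)[0])  # (4, 1500): decode inverts encode
```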

Swierstra, Wouter 
ICFP '19: "An Efficient Algorithm for ..."
An Efficient Algorithm for Type-Safe Structural Diffing
Victor Cacciari Miraldo and Wouter Swierstra (Utrecht University, Netherlands) Effectively computing the difference between two versions of a source file has become an indispensable part of software development. The de facto standard tool used by most version control systems is the UNIX diff utility, which compares two files on a line-by-line basis without any regard for the structure of the data stored in these files. This paper presents an alternative datatype-generic algorithm for computing the difference between two values of any algebraic datatype. This algorithm maximizes sharing between the source and target trees, while still running in linear time. Finally, this paper demonstrates that by instantiating this algorithm to the Lua abstract syntax tree and mining the commit history of repositories found on GitHub, the resulting patches can often be merged automatically, even when existing technology has failed. @Article{ICFP19p113, author = {Victor Cacciari Miraldo and Wouter Swierstra}, title = {An Efficient Algorithm for Type-Safe Structural Diffing}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {113}, numpages = {29}, doi = {10.1145/3341717}, year = {2019}, } Publisher's Version Article Search ICFP '19: "A Predicate Transformer Semantics ..." A Predicate Transformer Semantics for Effects (Functional Pearl) Wouter Swierstra and Tim Baanen (Utrecht University, Netherlands) Reasoning about programs that use effects can be much harder than reasoning about their pure counterparts. This paper presents a predicate transformer semantics for a variety of effects, including exceptions, state, nondeterminism, and general recursion. The predicate transformer semantics gives rise to a refinement relation that can be used to relate a program to its specification, or even calculate effectful programs that are correct by construction. 
@Article{ICFP19p103, author = {Wouter Swierstra and Tim Baanen}, title = {A Predicate Transformer Semantics for Effects (Functional Pearl)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {103}, numpages = {26}, doi = {10.1145/3341707}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
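The pearl's two ingredients, a predicate transformer semantics for an effect and the refinement relation it induces, can be sketched minimally. The pearl itself works in Agda; this Python stand-in models partiality as None-for-failure and checks refinement only on a finite input range, and the names `wp`, `refines`, and the square-root example are assumptions.

```python
# Predicate transformer semantics for partial computations (None = failure),
# plus the refinement relation it induces, checked on a finite domain.

def wp(f, post):
    """Weakest precondition: the inputs on which f succeeds (non-None)
    with an output satisfying post."""
    def pre(x):
        r = f(x)
        return r is not None and post(x, r)
    return pre

def refines(g, f, inputs, post):
    """g refines f: wherever f meets the spec, g does too."""
    return all(wp(g, post)(x) for x in inputs if wp(f, post)(x))

# Spec for integer square root.
post = lambda x, r: r * r <= x < (r + 1) * (r + 1)

partial_sqrt = lambda x: {0: 0, 1: 1, 4: 2}.get(x)        # defined at 3 points
total_sqrt = lambda x: int(x ** 0.5) if x >= 0 else None  # more defined

print(wp(partial_sqrt, post)(4))                           # True
print(wp(partial_sqrt, post)(3))                           # False: undefined at 3
print(refines(total_sqrt, partial_sqrt, range(10), post))  # True
```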

Tabareau, Nicolas 
ICFP '19: "A Reasonably Exceptional Type ..."
A Reasonably Exceptional Type Theory
PierreMarie Pédrot, Nicolas Tabareau, Hans Jacob Fehrmann, and Éric Tanter (Inria, France; University of Chile, Chile) Traditional approaches to compensate for the lack of exceptions in type theories for proof assistants have severe drawbacks from both a programming and a reasoning perspective. Pédrot and Tabareau recently extended the Calculus of Inductive Constructions (CIC) with exceptions. The new exceptional type theory is interpreted by a translation into CIC, covering full dependent elimination, decidable typechecking and canonicity. However, the exceptional theory is inconsistent as a logical system. To recover consistency, Pédrot and Tabareau propose an additional translation that uses parametricity to enforce that all exceptions are caught locally. While this enforcement brings logical expressivity gains over CIC, it completely prevents reasoning about exceptional programs such as partial functions. This work addresses the dilemma between exceptions and consistency in a more flexible manner, with the Reasonably Exceptional Type Theory (RETT). RETT is structured in three layers: (a) the exceptional layer, in which all terms can raise exceptions; (b) the mediation layer, in which exceptional terms must be provably parametric; (c) the pure layer, in which terms are nonexceptional, but can refer to exceptional terms. We present the general theory of RETT, where each layer is realized by a predicative hierarchy of universes, and develop an instance of RETT in Coq: the impure layer corresponds to the predicative universe hierarchy, the pure layer is realized by the impredicative universe of propositions, and the mediation layer is reified via a parametricity type class. RETT is the first full dependent type theory to support consistent reasoning about exceptional terms, and the CoqRETT plugin readily brings this ability to Coq programmers. 
@Article{ICFP19p108, author = {Pierre-Marie Pédrot and Nicolas Tabareau and Hans Jacob Fehrmann and Éric Tanter}, title = {A Reasonably Exceptional Type Theory}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {108}, numpages = {29}, doi = {10.1145/3341712}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Tanter, Éric 
ICFP '19: "A Reasonably Exceptional Type ..."
A Reasonably Exceptional Type Theory
Pierre-Marie Pédrot, Nicolas Tabareau, Hans Jacob Fehrmann, and Éric Tanter (Inria, France; University of Chile, Chile) Traditional approaches to compensate for the lack of exceptions in type theories for proof assistants have severe drawbacks from both a programming and a reasoning perspective. Pédrot and Tabareau recently extended the Calculus of Inductive Constructions (CIC) with exceptions. The new exceptional type theory is interpreted by a translation into CIC, covering full dependent elimination, decidable typechecking and canonicity. However, the exceptional theory is inconsistent as a logical system. To recover consistency, Pédrot and Tabareau propose an additional translation that uses parametricity to enforce that all exceptions are caught locally. While this enforcement brings logical expressivity gains over CIC, it completely prevents reasoning about exceptional programs such as partial functions. This work addresses the dilemma between exceptions and consistency in a more flexible manner, with the Reasonably Exceptional Type Theory (RETT). RETT is structured in three layers: (a) the exceptional layer, in which all terms can raise exceptions; (b) the mediation layer, in which exceptional terms must be provably parametric; (c) the pure layer, in which terms are non-exceptional, but can refer to exceptional terms. We present the general theory of RETT, where each layer is realized by a predicative hierarchy of universes, and develop an instance of RETT in Coq: the impure layer corresponds to the predicative universe hierarchy, the pure layer is realized by the impredicative universe of propositions, and the mediation layer is reified via a parametricity type class. RETT is the first full dependent type theory to support consistent reasoning about exceptional terms, and the Coq-RETT plugin readily brings this ability to Coq programmers. 
@Article{ICFP19p108, author = {Pierre-Marie Pédrot and Nicolas Tabareau and Hans Jacob Fehrmann and Éric Tanter}, title = {A Reasonably Exceptional Type Theory}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {108}, numpages = {29}, doi = {10.1145/3341712}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional ICFP '19: "Approximate Normalization ..." Approximate Normalization for Gradual Dependent Types Joseph Eremondi, Éric Tanter, and Ronald Garcia (University of British Columbia, Canada; University of Chile, Chile; Inria, France) Dependent types help programmers write highly reliable code. However, this reliability comes at a cost: it can be challenging to write new prototypes in (or migrate old code to) dependently-typed programming languages. Gradual typing makes static type disciplines more flexible, so an appropriate notion of gradual dependent types could fruitfully lower this cost. However, dependent types raise unique challenges for gradual typing. Dependent typechecking involves the execution of program code, but gradually-typed code can signal runtime type errors or diverge. These runtime errors threaten the soundness guarantees that make dependent types so attractive, while divergence spoils the type-driven programming experience. This paper presents GDTL, a gradual dependently-typed language that emphasizes pragmatic dependently-typed programming. GDTL fully embeds both an untyped and dependently-typed language, and allows for smooth transitions between the two. In addition to gradual types we introduce gradual terms, which allow the user to be imprecise in type indices and to omit proof terms; runtime checks ensure type safety. To account for non-termination and failure, we distinguish between compile-time normalization and runtime execution: compile-time normalization is approximate but total, while runtime execution is exact, but may fail or diverge. 
We prove that GDTL has decidable typechecking and satisfies all the expected properties of gradual languages. In particular, GDTL satisfies the static and dynamic gradual guarantees: reducing type precision preserves typedness, and altering type precision does not change program behavior outside of dynamic type failures. To prove these properties, we were led to establish a novel normalization gradual guarantee that captures the monotonicity of approximate normalization with respect to imprecision. @Article{ICFP19p88, author = {Joseph Eremondi and Éric Tanter and Ronald Garcia}, title = {Approximate Normalization for Gradual Dependent Types}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {88}, numpages = {30}, doi = {10.1145/3341692}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional ICFP '19: "Dijkstra Monads for All ..." Dijkstra Monads for All Kenji Maillard, Danel Ahman, Robert Atkey, Guido Martínez, Cătălin Hriţcu, Exequiel Rivas, and Éric Tanter (Inria, France; ENS, France; University of Ljubljana, Slovenia; University of Strathclyde, UK; CIFASIS-CONICET, Argentina; University of Chile, Chile) This paper proposes a general semantic framework for verifying programs with arbitrary monadic side-effects using Dijkstra monads, which we define as monad-like structures indexed by a specification monad. We prove that any monad morphism between a computational monad and a specification monad gives rise to a Dijkstra monad, which provides great flexibility for obtaining Dijkstra monads tailored to the verification task at hand. We moreover show that a large variety of specification monads can be obtained by applying monad transformers to various base specification monads, including predicate transformers and Hoare-style pre- and postconditions. For defining correct monad transformers, we propose a language inspired by Moggi's monadic metalanguage that is parameterized by a dependent type theory. 
We also develop a notion of algebraic operations for Dijkstra monads, and start to investigate two ways of also accommodating effect handlers. We implement our framework in both Coq and F*, and illustrate that it supports a wide variety of verification styles for effects such as exceptions, nondeterminism, state, input-output, and general recursion. @Article{ICFP19p104, author = {Kenji Maillard and Danel Ahman and Robert Atkey and Guido Martínez and Cătălin Hriţcu and Exequiel Rivas and Éric Tanter}, title = {Dijkstra Monads for All}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {104}, numpages = {29}, doi = {10.1145/3341708}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
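The predicate-transformer idea in the abstract above can be sketched concretely: a specification maps a postcondition (a predicate on the result and final state) to a precondition (a predicate on the initial state), and these specifications compose monadically. The following is an illustrative Python toy, not the paper's Coq/F* development; all names are invented for the example.

```python
# A toy weakest-precondition "specification monad" for stateful computations.
# A specification wp takes a postcondition post(result, final_state) and
# returns a precondition pre(initial_state).

def ret(x):
    # wp of a pure computation: the postcondition must already hold for x.
    return lambda post: lambda s: post(x, s)

def bind(wp_m, k):
    # Sequencing: run wp_m, feeding its result and state into k's wp.
    return lambda post: wp_m(lambda x, s: k(x)(post)(s))

def get():
    # Reading the state returns it unchanged.
    return lambda post: lambda s: post(s, s)

def put(s1):
    # Writing replaces the state.
    return lambda post: lambda s: post(None, s1)

# wp of:  x <- get; put (x + 1); return x
prog = bind(get(), lambda x: bind(put(x + 1), lambda _: ret(x)))

# From state 41 the program returns 41 and ends in state 42.
pre = prog(lambda result, final: result == 41 and final == 42)
print(pre(41))  # True
print(pre(0))   # False
```

Here `ret`/`bind` play the role of the specification monad's unit and bind, and `get`/`put` are specifications of the state operations, mirroring how the framework derives Dijkstra monads from monad morphisms into a specification monad.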

Timany, Amin 
ICFP '19: "Mechanized Relational Verification ..."
Mechanized Relational Verification of Concurrent Programs with Continuations
Amin Timany and Lars Birkedal (KU Leuven, Belgium; Aarhus University, Denmark) Concurrent higher-order imperative programming languages with continuations are very flexible and allow for the implementation of sophisticated programming patterns. For instance, it is well known that continuations can be used to implement cooperative concurrency. Continuations can also simplify web server implementations. This, in particular, helps simplify keeping track of the state of the server’s clients. However, such advanced programming languages are very challenging to reason about. One of the main challenges in reasoning about programs in the presence of continuations is due to the fact that the non-local flow of control breaks the bind rule, one of the important modular reasoning principles of Hoare logic. In this paper we present the first completely formalized tool for interactive mechanized relational verification of programs written in a concurrent higher-order imperative programming language with continuations (call/cc and throw). We develop novel logical relations which can be used to give mechanized proofs of relational properties. In particular, we prove correctness of an implementation of cooperative concurrency with continuations. In addition, we show that a rudimentary web server implemented using the continuation-based pattern is contextually equivalent to one implemented without the continuation-based pattern. We introduce context-local reasoning principles for our calculus which allow us to regain modular reasoning principles for the fragment of the language without non-local control flow. These novel reasoning principles can be used in tandem with our (non-context-local) Hoare logic for reasoning about programs that do feature non-local control flow. Indeed, we use the combination of context-local and non-context-local reasoning to simplify reasoning about the examples. 
@Article{ICFP19p105, author = {Amin Timany and Lars Birkedal}, title = {Mechanized Relational Verification of Concurrent Programs with Continuations}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {105}, numpages = {28}, doi = {10.1145/3341709}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
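The abstract above notes that continuations can implement cooperative concurrency. As an illustrative sketch (not the paper's calculus), Python generators can stand in for captured continuations: `yield` suspends a thread and hands its continuation back to a round-robin scheduler. All names here are invented for the example.

```python
from collections import deque

def scheduler(threads):
    # Round-robin cooperative scheduler: resume each suspended thread in
    # turn; a generator object acts as the thread's current continuation.
    log = []
    queue = deque(threads)
    while queue:
        t = queue.popleft()
        try:
            log.append(next(t))  # resume up to the next yield point
            queue.append(t)      # requeue the continuation
        except StopIteration:
            pass                 # thread finished; drop it
    return log

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"      # cooperatively yield control

print(scheduler([worker("a", 2), worker("b", 2)]))
# ['a:0', 'b:0', 'a:1', 'b:1']
```

In the paper's setting the same pattern is expressed with call/cc and throw, and its correctness is one of the verified examples.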

Tobin-Hochstadt, Sam 
ICFP '19: "Rebuilding Racket on Chez ..."
Rebuilding Racket on Chez Scheme (Experience Report)
Matthew Flatt, Caner Derici, R. Kent Dybvig, Andrew W. Keep, Gustavo E. Massaccesi, Sarah Spall, Sam Tobin-Hochstadt, and Jon Zeppieri (University of Utah, USA; Indiana University, USA; Cisco Systems, USA; University of Buenos Aires, Argentina) We rebuilt Racket on Chez Scheme, and it works well—as long as we're allowed a few patches to Chez Scheme. DrRacket runs, the Racket distribution can build itself, and nearly all of the core Racket test suite passes. Maintainability and performance of the resulting implementation are good, although some work remains to improve end-to-end performance. The least predictable part of our effort was how big the differences between Racket and Chez Scheme would turn out to be and how we would manage those differences. We expect Racket on Chez Scheme to become the main Racket implementation, and we encourage other language implementers to consider Chez Scheme as a target virtual machine. @Article{ICFP19p78, author = {Matthew Flatt and Caner Derici and R. Kent Dybvig and Andrew W. Keep and Gustavo E. Massaccesi and Sarah Spall and Sam Tobin-Hochstadt and Jon Zeppieri}, title = {Rebuilding Racket on Chez Scheme (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {78}, numpages = {15}, doi = {10.1145/3341642}, year = {2019}, } Publisher's Version Article Search Info Artifacts Available Artifacts Functional ICFP '19: "From High-Level Inference ..." From High-Level Inference Algorithms to Efficient Code Rajan Walia, Praveen Narayanan, Jacques Carette, Sam Tobin-Hochstadt, and Chung-chieh Shan (Indiana University, USA; McMaster University, Canada) Probabilistic programming languages are valuable because they allow domain experts to express probabilistic models and inference algorithms without worrying about irrelevant details. 
However, for decades there remained an important and popular class of probabilistic inference algorithms whose efficient implementation required manual low-level coding that is tedious and error-prone. They are algorithms whose idiomatic expression requires random array variables that are latent or whose likelihood is conjugate. Although that is how practitioners communicate and compose these algorithms on paper, executing such expressions requires eliminating the latent variables and recognizing the conjugacy by symbolic mathematics. Moreover, matching the performance of handwritten code requires speeding up loops by more than a constant factor. We show how probabilistic programs that directly and concisely express these desired inference algorithms can be compiled while maintaining efficiency. We introduce new transformations that turn high-level probabilistic programs with arrays into pure loop code. We then make great use of domain-specific invariants and norms to optimize the code, and to specialize and JIT-compile the code per execution. The resulting performance is competitive with manual implementations. @Article{ICFP19p98, author = {Rajan Walia and Praveen Narayanan and Jacques Carette and Sam Tobin-Hochstadt and Chung-chieh Shan}, title = {From High-Level Inference Algorithms to Efficient Code}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {98}, numpages = {30}, doi = {10.1145/3341702}, year = {2019}, } Publisher's Version Article Search Artifacts Available 

Van Strydonck, Thomas 
ICFP '19: "Linear Capabilities for Fully ..."
Linear Capabilities for Fully Abstract Compilation of Separation-Logic-Verified Code
Thomas Van Strydonck, Frank Piessens, and Dominique Devriese (KU Leuven, Belgium; Vrije Universiteit Brussel, Belgium) Separation logic is a powerful program logic for the static modular verification of imperative programs. However, dynamic checking of separation logic contracts on the boundaries between verified and untrusted modules is hard, because it requires one to enforce (among other things) that outcalls from a verified to an untrusted module do not access memory resources currently owned by the verified module. This paper proposes an approach to dynamic contract checking by relying on support for capabilities, a well-studied form of unforgeable memory pointers that enables fine-grained, efficient memory access control. More specifically, we rely on a form of capabilities called linear capabilities for which the hardware enforces that they cannot be copied. We formalize our approach as a fully abstract compiler from a statically verified source language to an unverified target language with support for linear capabilities. The key insight behind our compiler is that memory resources described by spatial separation logic predicates can be represented at run time by linear capabilities. The compiler is separation-logic-proof-directed: it uses the separation logic proof of the source program to determine how memory accesses in the source program should be compiled to linear capability accesses in the target program. The full abstraction property of the compiler essentially guarantees that compiled verified modules can interact with untrusted target language modules as if they were compiled from verified code as well. @Article{ICFP19p84, author = {Thomas Van Strydonck and Frank Piessens and Dominique Devriese}, title = {Linear Capabilities for Fully Abstract Compilation of Separation-Logic-Verified Code}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {84}, numpages = {29}, doi = {10.1145/3341688}, year = {2019}, } Publisher's Version Article Search Artifacts Available 

Vezzosi, Andrea 
ICFP '19: "Cubical Agda: A Dependently ..."
Cubical Agda: A Dependently Typed Programming Language with Univalence and Higher Inductive Types
Andrea Vezzosi, Anders Mörtberg, and Andreas Abel (IT University of Copenhagen, Denmark; Stockholm University, Sweden; Carnegie Mellon University, USA; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden) Proof assistants based on dependent type theory provide expressive languages for both programming and proving within the same system. However, all of the major implementations lack powerful extensionality principles for reasoning about equality, such as function and propositional extensionality. These principles are typically added axiomatically which disrupts the constructive properties of these systems. Cubical type theory provides a solution by giving computational meaning to Homotopy Type Theory and Univalent Foundations, in particular to the univalence axiom and higher inductive types. This paper describes an extension of the dependently typed functional programming language Agda with cubical primitives, making it into a full-blown proof assistant with native support for univalence and a general schema of higher inductive types. These new primitives make function and propositional extensionality as well as quotient types directly definable with computational content. Additionally, thanks also to copatterns, bisimilarity is equivalent to equality for coinductive types. This extends Agda with support for a wide range of extensionality principles, without sacrificing type checking and constructivity. @Article{ICFP19p87, author = {Andrea Vezzosi and Anders Mörtberg and Andreas Abel}, title = {Cubical Agda: A Dependently Typed Programming Language with Univalence and Higher Inductive Types}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {87}, numpages = {29}, doi = {10.1145/3341691}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Voizard, Antoine 
ICFP '19: "A Role for Dependent Types ..."
A Role for Dependent Types in Haskell
Stephanie Weirich, Pritam Choudhury, Antoine Voizard, and Richard A. Eisenberg (University of Pennsylvania, USA; Bryn Mawr College, USA) Modern Haskell supports zero-cost coercions, a mechanism where types that share the same runtime representation may be freely converted between. To make sure such conversions are safe and desirable, this feature relies on a mechanism of roles to prohibit invalid coercions. In this work, we show how to incorporate roles into dependent type systems and prove, using the Coq proof assistant, that the resulting system is sound. We have designed this work as a foundation for the addition of dependent types to the Glasgow Haskell Compiler, but we also expect that it will be of use to designers of other dependently-typed languages who might want to adopt Haskell’s safe coercions feature. @Article{ICFP19p101, author = {Stephanie Weirich and Pritam Choudhury and Antoine Voizard and Richard A. Eisenberg}, title = {A Role for Dependent Types in Haskell}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {101}, numpages = {29}, doi = {10.1145/3341705}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Vytiniotis, Dimitrios 
ICFP '19: "Efficient Differentiable Programming ..."
Efficient Differentiable Programming in a Functional Array-Processing Language
Amir Shaikhha, Andrew Fitzgibbon, Dimitrios Vytiniotis, and Simon Peyton Jones (University of Oxford, UK; Microsoft Research, UK; DeepMind, UK) We present a system for the automatic differentiation (AD) of a higher-order functional array-processing language. The core functional language underlying this system simultaneously supports both source-to-source forward-mode AD and global optimisations such as loop transformations. In combination, gradient computation with forward-mode AD can be as efficient as reverse mode, and the Jacobian matrices required for numerical algorithms such as Gauss-Newton and Levenberg-Marquardt can be efficiently computed. @Article{ICFP19p97, author = {Amir Shaikhha and Andrew Fitzgibbon and Dimitrios Vytiniotis and Simon Peyton Jones}, title = {Efficient Differentiable Programming in a Functional Array-Processing Language}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {97}, numpages = {30}, doi = {10.1145/3341701}, year = {2019}, } Publisher's Version Article Search 

Walia, Rajan 
ICFP '19: "From High-Level Inference ..."
From High-Level Inference Algorithms to Efficient Code
Rajan Walia, Praveen Narayanan, Jacques Carette, Sam Tobin-Hochstadt, and Chung-chieh Shan (Indiana University, USA; McMaster University, Canada) Probabilistic programming languages are valuable because they allow domain experts to express probabilistic models and inference algorithms without worrying about irrelevant details. However, for decades there remained an important and popular class of probabilistic inference algorithms whose efficient implementation required manual low-level coding that is tedious and error-prone. They are algorithms whose idiomatic expression requires random array variables that are latent or whose likelihood is conjugate. Although that is how practitioners communicate and compose these algorithms on paper, executing such expressions requires eliminating the latent variables and recognizing the conjugacy by symbolic mathematics. Moreover, matching the performance of handwritten code requires speeding up loops by more than a constant factor. We show how probabilistic programs that directly and concisely express these desired inference algorithms can be compiled while maintaining efficiency. We introduce new transformations that turn high-level probabilistic programs with arrays into pure loop code. We then make great use of domain-specific invariants and norms to optimize the code, and to specialize and JIT-compile the code per execution. The resulting performance is competitive with manual implementations. @Article{ICFP19p98, author = {Rajan Walia and Praveen Narayanan and Jacques Carette and Sam Tobin-Hochstadt and Chung-chieh Shan}, title = {From High-Level Inference Algorithms to Efficient Code}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {98}, numpages = {30}, doi = {10.1145/3341702}, year = {2019}, } Publisher's Version Article Search Artifacts Available 

Walker, David 
ICFP '19: "Synthesizing Symmetric Lenses ..."
Synthesizing Symmetric Lenses
Anders Miltner, Solomon Maina, Kathleen Fisher, Benjamin C. Pierce, David Walker, and Steve Zdancewic (Princeton University, USA; University of Pennsylvania, USA; Tufts University, USA) Lenses are programs that can be run both "front to back" and "back to front," allowing updates to either their source or their target data to be transferred in both directions. Since their introduction by Foster et al., lenses have been extensively studied, extended, and applied. Recent work has also demonstrated how techniques from type-directed program synthesis can be used to efficiently synthesize a simple class of lenses, so-called bijective lenses over string data, given a pair of types (regular expressions) and a small number of examples. We extend this synthesis algorithm to a much broader class of lenses, called simple symmetric lenses, including all bijective lenses, all of the popular category of "asymmetric" lenses, and a rich subset of the more powerful "symmetric lenses" proposed by Hofmann et al. Intuitively, simple symmetric lenses allow some information to be present on one side but not the other and vice versa. They are of independent theoretical interest, being the largest class of symmetric lenses that do not rely on persistent internal state. Synthesizing simple symmetric lenses is substantially more challenging than synthesizing bijective lenses: Since some of the information on each side can be "disconnected" from the other side, there will, in general, be many lenses that agree with a given example. To guide the search process, we use stochastic regular expressions and ideas from information theory to estimate the amount of information propagated by a candidate lens, generally preferring lenses that propagate more information, as well as user annotations marking parts of the source and target data structures as either irrelevant or essential. 
We describe an implementation of simple symmetric lenses and our synthesis procedure as extensions to the Boomerang language. We evaluate its performance on 48 benchmark examples drawn from Flash Fill, Augeas, the bidirectional programming literature, and electronic file format synchronization tasks. Our implementation can synthesize each of these lenses in under 30 seconds. @Article{ICFP19p95, author = {Anders Miltner and Solomon Maina and Kathleen Fisher and Benjamin C. Pierce and David Walker and Steve Zdancewic}, title = {Synthesizing Symmetric Lenses}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {95}, numpages = {28}, doi = {10.1145/3341699}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
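The bidirectional get/put behavior described above can be illustrated with a tiny bijective lens over strings, the simplest class the abstract mentions. This Python sketch is illustrative only (the paper's implementation extends the Boomerang language); the name-format example and all identifiers are invented here.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Lens:
    # get: source -> view ("front to back")
    # put: (new_view, old_source) -> new_source ("back to front")
    get: Callable
    put: Callable

# A bijective lens between "Last, First" and "First Last" name formats.
def get(src):
    last, first = src.split(", ")
    return f"{first} {last}"

def put(view, _src):
    first, last = view.split(" ")
    return f"{last}, {first}"

name = Lens(get, put)
s = "Curry, Haskell"
assert name.get(s) == "Haskell Curry"                     # front to back
assert name.put("Alonzo Church", s) == "Church, Alonzo"   # back to front
# Round-trip lens laws:
assert name.put(name.get(s), s) == s                      # GetPut
assert name.get(name.put("Alonzo Church", s)) == "Alonzo Church"  # PutGet
```

Simple symmetric lenses generalize this shape by letting each side carry information the other lacks, which is what makes their synthesis harder than the bijective case.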

Wang, Fei 
ICFP '19: "Demystifying Differentiable ..."
Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator
Fei Wang, Daniel Zheng, James Decker, Xilun Wu, Grégory M. Essertel, and Tiark Rompf (Purdue University, USA) Deep learning has seen tremendous success over the past decade in computer vision, machine translation, and gameplay. This success rests crucially on gradient-descent optimization and the ability to “learn” parameters of a neural network by backpropagating observed errors. However, neural network architectures are growing increasingly sophisticated and diverse, which motivates an emerging quest for even more general forms of differentiable programming, where arbitrary parameterized computations can be trained by gradient descent. In this paper, we take a fresh look at automatic differentiation (AD) techniques, and especially aim to demystify the reverse-mode form of AD that generalizes backpropagation in neural networks. We uncover a tight connection between reverse-mode AD and delimited continuations, which permits implementing reverse-mode AD purely via operator overloading and without managing any auxiliary data structures. We further show how this formulation of AD can be fruitfully combined with multi-stage programming (staging), leading to an efficient implementation that combines the performance benefits of deep learning frameworks based on explicit reified computation graphs (e.g., TensorFlow) with the expressiveness of pure library approaches (e.g., PyTorch). @Article{ICFP19p96, author = {Fei Wang and Daniel Zheng and James Decker and Xilun Wu and Grégory M. Essertel and Tiark Rompf}, title = {Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {96}, numpages = {31}, doi = {10.1145/3341700}, year = {2019}, } Publisher's Version Article Search 
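The continuation-based formulation of reverse-mode AD described above can be sketched in a few lines: each operation first runs the rest of the computation (its continuation) forward, then propagates the adjoint backward when the continuation returns. This Python toy uses explicit continuation-passing closures rather than the paper's shift/reset operators and Scala operator overloading; all names are invented for the example.

```python
class Num:
    # A dual number carrying a value and an accumulated adjoint (gradient).
    def __init__(self, v):
        self.v = v
        self.d = 0.0

def add(a, b, k):
    c = Num(a.v + b.v)
    k(c)                        # run the rest of the computation first...
    a.d += c.d                  # ...then backpropagate on the way out
    b.d += c.d

def mul(a, b, k):
    c = Num(a.v * b.v)
    k(c)
    a.d += b.v * c.d
    b.d += a.v * c.d

def grad(f, x0):
    # Seed the final result's adjoint with 1.0, then read off dx.
    x = Num(x0)
    f(x, lambda r: setattr(r, "d", 1.0))
    return x.d

# f(x) = x*x + x, so f'(x) = 2x + 1; at x = 3 the gradient is 7.
g = grad(lambda x, k: mul(x, x, lambda x2: add(x2, x, k)), 3.0)
print(g)  # 7.0
```

No tape or graph data structure appears: the call stack of continuations is the reversal mechanism, which is the connection the paper makes precise with delimited continuations.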

Weirich, Stephanie 
ICFP '19: "A Role for Dependent Types ..."
A Role for Dependent Types in Haskell
Stephanie Weirich, Pritam Choudhury, Antoine Voizard, and Richard A. Eisenberg (University of Pennsylvania, USA; Bryn Mawr College, USA) Modern Haskell supports zero-cost coercions, a mechanism where types that share the same runtime representation may be freely converted between. To make sure such conversions are safe and desirable, this feature relies on a mechanism of roles to prohibit invalid coercions. In this work, we show how to incorporate roles into dependent type systems and prove, using the Coq proof assistant, that the resulting system is sound. We have designed this work as a foundation for the addition of dependent types to the Glasgow Haskell Compiler, but we also expect that it will be of use to designers of other dependently-typed languages who might want to adopt Haskell’s safe coercions feature. @Article{ICFP19p101, author = {Stephanie Weirich and Pritam Choudhury and Antoine Voizard and Richard A. Eisenberg}, title = {A Role for Dependent Types in Haskell}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {101}, numpages = {29}, doi = {10.1145/3341705}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 

Westrick, Sam 
ICFP '19: "Fairness in Responsive Parallelism ..."
Fairness in Responsive Parallelism
Stefan K. Muller, Sam Westrick, and Umut A. Acar (Carnegie Mellon University, USA; Inria, France) Research on parallel computing has historically revolved around compute-intensive applications drawn from traditional areas such as high-performance computing. With the growing availability and usage of multicore chips, applications of parallel computing now include interactive parallel applications that mix compute-intensive tasks with interaction, e.g., with the user or more generally with the external world. Recent theoretical work on responsive parallelism presents abstract cost models and type systems for ensuring and reasoning about responsiveness and throughput of such interactive parallel programs. In this paper, we extend prior work by considering a crucial metric: fairness. To express rich interactive parallel programs, we allow programmers to assign priorities to threads and instruct the scheduler to obey a notion of fairness. We then propose the fairly prompt scheduling principle for executing such programs; the principle specifies the schedule for multithreaded programs on multiple processors. For such schedules, we prove theoretical bounds on the execution and response times of jobs of various priorities. In particular, we bound the amount, i.e., stretch, by which a low-priority job can be delayed by higher-priority work. We also present an algorithm designed to approximate the fairly prompt scheduling principle on multicore computers, implement the algorithm by extending the Standard ML language, and present an empirical evaluation. @Article{ICFP19p81, author = {Stefan K. Muller and Sam Westrick and Umut A. Acar}, title = {Fairness in Responsive Parallelism}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {81}, numpages = {30}, doi = {10.1145/3341685}, year = {2019}, } Publisher's Version Article Search 
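The idea of bounding a low-priority job's stretch can be illustrated with a toy single-processor scheduler: it prefers high-priority work but deliberately runs a low-priority thread every few steps, so starvation is bounded. This Python sketch is not the paper's fairly prompt scheduling algorithm; the `fairness` parameter and all names are invented for illustration.

```python
from collections import deque

def run(high, low, fairness=3):
    # Prefer high-priority threads, but every `fairness`-th step give a
    # low-priority thread a turn, bounding how long it can be delayed.
    high, low = deque(high), deque(low)
    log, step = [], 0
    while high or low:
        step += 1
        pick_low = low and (not high or step % fairness == 0)
        queue = low if pick_low else high
        t = queue.popleft()
        try:
            log.append(next(t))   # run the chosen thread for one step
            queue.append(t)
        except StopIteration:
            pass                  # thread finished
    return log

def job(name, n):
    for i in range(n):
        yield f"{name}{i}"

print(run([job("H", 4)], [job("L", 2)]))
# ['H0', 'H1', 'L0', 'H2', 'H3', 'L1']
```

Here the low-priority job is delayed at most `fairness - 1` steps between turns, a crude single-core analogue of the stretch bounds the paper proves for multiprocessor schedules.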

White, Leo 
ICFP '19: "Lambda: The Ultimate Sublanguage ..."
Lambda: The Ultimate Sublanguage (Experience Report)
Jeremy Yallop and Leo White (University of Cambridge, UK; Jane Street, UK) We describe our experience teaching an advanced typed functional programming course based around the use of System Fω as a programming language. @Article{ICFP19p116, author = {Jeremy Yallop and Leo White}, title = {Lambda: The Ultimate Sublanguage (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {116}, numpages = {17}, doi = {10.1145/3342713}, year = {2019}, } Publisher's Version Article Search 

Wu, Xilun 
ICFP '19: "Demystifying Differentiable ..."
Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator
Fei Wang, Daniel Zheng, James Decker, Xilun Wu, Grégory M. Essertel, and Tiark Rompf (Purdue University, USA) Deep learning has seen tremendous success over the past decade in computer vision, machine translation, and gameplay. This success rests crucially on gradient-descent optimization and the ability to “learn” parameters of a neural network by backpropagating observed errors. However, neural network architectures are growing increasingly sophisticated and diverse, which motivates an emerging quest for even more general forms of differentiable programming, where arbitrary parameterized computations can be trained by gradient descent. In this paper, we take a fresh look at automatic differentiation (AD) techniques, and especially aim to demystify the reverse-mode form of AD that generalizes backpropagation in neural networks. We uncover a tight connection between reverse-mode AD and delimited continuations, which permits implementing reverse-mode AD purely via operator overloading and without managing any auxiliary data structures. We further show how this formulation of AD can be fruitfully combined with multi-stage programming (staging), leading to an efficient implementation that combines the performance benefits of deep learning frameworks based on explicit reified computation graphs (e.g., TensorFlow) with the expressiveness of pure library approaches (e.g., PyTorch). @Article{ICFP19p96, author = {Fei Wang and Daniel Zheng and James Decker and Xilun Wu and Grégory M. Essertel and Tiark Rompf}, title = {Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {96}, numpages = {31}, doi = {10.1145/3341700}, year = {2019}, } Publisher's Version Article Search 

Xie, Ningning 
ICFP '19: "Coherence of Type Class Resolution ..."
Coherence of Type Class Resolution
Gert-Jan Bottu, Ningning Xie, Koar Marntirosian, and Tom Schrijvers (KU Leuven, Belgium; University of Hong Kong, China) Elaboration-based type class resolution, as found in languages like Haskell, Mercury and PureScript, is generally nondeterministic: there can be multiple ways to satisfy a wanted constraint in terms of global instances and locally given constraints. Coherence is the key property that keeps this sane; it guarantees that, despite the nondeterminism, programs still behave predictably. Even though elaboration-based resolution is generally assumed coherent, as far as we know, there is no formal proof of this property in the presence of sources of nondeterminism, like superclasses and flexible contexts. This paper provides a formal proof to remedy the situation. The proof is non-trivial because the semantics elaborates resolution into a target language where different elaborations can be distinguished by contexts that do not have a source language counterpart. Inspired by the notion of full abstraction, we present a two-step strategy that first elaborates nondeterministically into an intermediate language that preserves contextual equivalence, and then deterministically elaborates from there into the target language. We use an approach based on logical relations to establish contextual equivalence and thus coherence for the first step of elaboration, while the second step’s determinism straightforwardly preserves this coherence property. @Article{ICFP19p91, author = {Gert-Jan Bottu and Ningning Xie and Koar Marntirosian and Tom Schrijvers}, title = {Coherence of Type Class Resolution}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {91}, numpages = {28}, doi = {10.1145/3341695}, year = {2019}, } Publisher's Version Article Search
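The elaboration the abstract refers to can be sketched concretely: under dictionary passing, a type class becomes a record of methods and an instance becomes a function producing such a record. Coherence then says that however a wanted constraint is resolved, the resulting dictionaries are behaviorally indistinguishable. A minimal Python illustration (all names here are hypothetical, not from the paper's formalization):

```python
# Dictionary-passing elaboration: a type class is a record of methods,
# and an instance declaration is a (possibly dictionary-consuming)
# function producing such a record.
eq_int = {"eq": lambda a, b: a == b}            # instance Eq Int

def eq_pair(da, db):                            # instance (Eq a, Eq b) => Eq (a, b)
    return {"eq": lambda p, q:
            da["eq"](p[0], q[0]) and db["eq"](p[1], q[1])}

# Resolving the wanted constraint Eq (Int, Int) elaborates to a concrete
# dictionary; coherence guarantees every valid resolution path yields a
# behaviorally equivalent one.
d = eq_pair(eq_int, eq_int)
same = d["eq"]((1, 2), (1, 2))
diff = d["eq"]((1, 2), (1, 3))
```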

Yallop, Jeremy 
ICFP '19: "Lambda: The Ultimate Sublanguage ..."
Lambda: The Ultimate Sublanguage (Experience Report)
Jeremy Yallop and Leo White (University of Cambridge, UK; Jane Street, UK) We describe our experience teaching an advanced typed functional programming course based around the use of System Fω as a programming language. @Article{ICFP19p116, author = {Jeremy Yallop and Leo White}, title = {Lambda: The Ultimate Sublanguage (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {116}, numpages = {17}, doi = {10.1145/3342713}, year = {2019}, } Publisher's Version Article Search 

Ye, Qianchuan 
ICFP '19: "Narcissus: Correct-by-Construction ..."
Narcissus: Correct-by-Construction Derivation of Decoders and Encoders from Binary Formats
Benjamin Delaware, Sorawit Suriyakarn, Clément Pit-Claudel, Qianchuan Ye, and Adam Chlipala (Purdue University, USA; Band Protocol, Thailand; Massachusetts Institute of Technology, USA) It is a neat result from functional programming that libraries of parser combinators can support rapid construction of decoders for quite a range of formats. With a little more work, the same combinator program can denote both a decoder and an encoder. Unfortunately, the real world is full of gnarly formats, as with the packet formats that make up the standard Internet protocol stack. Most past parser-combinator approaches cannot handle these formats, and the few exceptions require redundancy – one part of the natural grammar needs to be hand-translated into hints in multiple parts of a parser program. We show how to recover very natural and nonredundant format specifications, covering all popular network packet formats and generating both decoders and encoders automatically. The catch is that we use the Coq proof assistant to derive both kinds of artifacts using tactics, automatically, in a way that guarantees that they form inverses of each other. We used our approach to reimplement packet processing for a full Internet protocol stack, inserting our replacement into the OCaml-based MirageOS unikernel, resulting in minimal performance degradation. @Article{ICFP19p82, author = {Benjamin Delaware and Sorawit Suriyakarn and Clément Pit-Claudel and Qianchuan Ye and Adam Chlipala}, title = {Narcissus: Correct-by-Construction Derivation of Decoders and Encoders from Binary Formats}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {82}, numpages = {29}, doi = {10.1145/3341686}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional
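The "one description, two directions" idea from the abstract above can be illustrated in a few lines of Python: a single format value carries both an `encode` and a `decode` method, and composing formats composes both directions at once. This is only a sketch with hypothetical names (`U8`, `U16BE`, `Seq`); the paper derives such pairs in Coq with machine-checked proofs that they are inverses.

```python
class U8:
    """Format for one unsigned byte."""
    def encode(self, v):
        return bytes([v])
    def decode(self, data, i):
        return data[i], i + 1

class U16BE:
    """Format for a big-endian 16-bit unsigned integer."""
    def encode(self, v):
        return bytes([v >> 8, v & 0xFF])
    def decode(self, data, i):
        return (data[i] << 8) | data[i + 1], i + 2

class Seq:
    """Sequence sub-formats; one description yields both directions."""
    def __init__(self, *fmts):
        self.fmts = fmts
    def encode(self, vals):
        return b"".join(f.encode(v) for f, v in zip(self.fmts, vals))
    def decode(self, data, i=0):
        vals = []
        for f in self.fmts:
            v, i = f.decode(data, i)
            vals.append(v)
        return tuple(vals), i

# A toy "header": one-byte version, one-byte flags, 16-bit length.
header = Seq(U8(), U8(), U16BE())
wire = header.encode((4, 0, 1500))
vals, _ = header.decode(wire)   # round-trips back to (4, 0, 1500)
```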

Zamdzhiev, Vladimir 
ICFP '19: "Mixed Linear and Non-linear ..."
Mixed Linear and Non-linear Recursive Types
Bert Lindenhovius, Michael Mislove, and Vladimir Zamdzhiev (Tulane University, USA; University of Lorraine, France; CNRS, France; Inria, France; LORIA, France) We describe a type system with mixed linear and non-linear recursive types called LNL-FPC (the linear/non-linear fixpoint calculus). The type system supports linear typing, which enhances the safety properties of programs, but also non-linear typing, which makes the type system more convenient for programming. Just like in FPC, we show that LNL-FPC supports type-level recursion, which in turn induces term-level recursion. We also provide sound and computationally adequate categorical models for LNL-FPC which describe the categorical structure of the substructural operations of Intuitionistic Linear Logic at all non-linear types, including the recursive ones. In order to do so, we describe a new technique for solving recursive domain equations within the category CPO by constructing the solutions over pre-embeddings. The type system also enjoys implicit weakening and contraction rules, which we are able to model by identifying the canonical comonoid structure of all non-linear types. We also show that the requirements of our abstract model are reasonable by constructing a large class of concrete models that have found applications not only in classical functional programming, but also in emerging programming paradigms that incorporate linear types, such as quantum programming and circuit description programming languages. @Article{ICFP19p111, author = {Bert Lindenhovius and Michael Mislove and Vladimir Zamdzhiev}, title = {Mixed Linear and Non-linear Recursive Types}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {111}, numpages = {29}, doi = {10.1145/3341715}, year = {2019}, } Publisher's Version Article Search

Zdancewic, Steve 
ICFP '19: "Synthesizing Symmetric Lenses ..."
Synthesizing Symmetric Lenses
Anders Miltner, Solomon Maina, Kathleen Fisher, Benjamin C. Pierce, David Walker, and Steve Zdancewic (Princeton University, USA; University of Pennsylvania, USA; Tufts University, USA) Lenses are programs that can be run both "front to back" and "back to front," allowing updates to either their source or their target data to be transferred in both directions. Since their introduction by Foster et al., lenses have been extensively studied, extended, and applied. Recent work has also demonstrated how techniques from type-directed program synthesis can be used to efficiently synthesize a simple class of lenses (so-called bijective lenses over string data) given a pair of types (regular expressions) and a small number of examples. We extend this synthesis algorithm to a much broader class of lenses, called simple symmetric lenses, including all bijective lenses, all of the popular category of "asymmetric" lenses, and a rich subset of the more powerful "symmetric lenses" proposed by Hofmann et al. Intuitively, simple symmetric lenses allow some information to be present on one side but not the other and vice versa. They are of independent theoretical interest, being the largest class of symmetric lenses that do not rely on persistent internal state. Synthesizing simple symmetric lenses is substantially more challenging than synthesizing bijective lenses: Since some of the information on each side can be "disconnected" from the other side, there will, in general, be many lenses that agree with a given example. To guide the search process, we use stochastic regular expressions and ideas from information theory to estimate the amount of information propagated by a candidate lens, generally preferring lenses that propagate more information, as well as user annotations marking parts of the source and target data structures as either irrelevant or essential. 
We describe an implementation of simple symmetric lenses and our synthesis procedure as extensions to the Boomerang language. We evaluate its performance on 48 benchmark examples drawn from Flash Fill, Augeas, the bidirectional programming literature, and electronic file format synchronization tasks. Our implementation can synthesize each of these lenses in under 30 seconds. @Article{ICFP19p95, author = {Anders Miltner and Solomon Maina and Kathleen Fisher and Benjamin C. Pierce and David Walker and Steve Zdancewic}, title = {Synthesizing Symmetric Lenses}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {95}, numpages = {28}, doi = {10.1145/3341699}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional 
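For readers new to lenses, the "front to back / back to front" interface in the abstract above can be shown with the classical asymmetric lens, which the paper's simple symmetric lenses generalize. A minimal Python sketch with hypothetical names (the actual work is in the Boomerang language over string data):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Lens:
    """Classical asymmetric lens: get extracts a view from a source,
    put writes an updated view back into the source."""
    get: Callable
    put: Callable   # put(source, view) -> updated source

# A lens focusing on the name field of a (name, age) pair.
name_lens = Lens(
    get=lambda s: s[0],
    put=lambda s, v: (v, s[1]),
)

s = ("ada", 36)
# The two round-trip ("well-behavedness") laws:
putget = name_lens.get(name_lens.put(s, "alan"))  # reading back a written view
getput = name_lens.put(s, name_lens.get(s))       # writing back an unchanged view
```

The symmetric setting of the paper drops the asymmetry: information may live on either side only, which is exactly what makes synthesis harder, since many lenses then agree with a given example.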

Zeppieri, Jon 
ICFP '19: "Rebuilding Racket on Chez ..."
Rebuilding Racket on Chez Scheme (Experience Report)
Matthew Flatt, Caner Derici, R. Kent Dybvig, Andrew W. Keep, Gustavo E. Massaccesi, Sarah Spall, Sam Tobin-Hochstadt, and Jon Zeppieri (University of Utah, USA; Indiana University, USA; Cisco Systems, USA; University of Buenos Aires, Argentina) We rebuilt Racket on Chez Scheme, and it works well—as long as we're allowed a few patches to Chez Scheme. DrRacket runs, the Racket distribution can build itself, and nearly all of the core Racket test suite passes. Maintainability and performance of the resulting implementation are good, although some work remains to improve end-to-end performance. The least predictable part of our effort was how big the differences between Racket and Chez Scheme would turn out to be and how we would manage those differences. We expect Racket on Chez Scheme to become the main Racket implementation, and we encourage other language implementers to consider Chez Scheme as a target virtual machine. @Article{ICFP19p78, author = {Matthew Flatt and Caner Derici and R. Kent Dybvig and Andrew W. Keep and Gustavo E. Massaccesi and Sarah Spall and Sam Tobin-Hochstadt and Jon Zeppieri}, title = {Rebuilding Racket on Chez Scheme (Experience Report)}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {78}, numpages = {15}, doi = {10.1145/3341642}, year = {2019}, } Publisher's Version Article Search Info Artifacts Available Artifacts Functional

Zhang, Hengchu 
ICFP '19: "Fuzzi: A Three-Level Logic ..."
Fuzzi: A Three-Level Logic for Differential Privacy
Hengchu Zhang, Edo Roth, Andreas Haeberlen, Benjamin C. Pierce, and Aaron Roth (University of Pennsylvania, USA) Curators of sensitive datasets sometimes need to know whether queries against the data are differentially private. Two sorts of logics have been proposed for checking this property: (1) type systems and other static analyses, which fully automate straightforward reasoning with concepts like “program sensitivity” and “privacy loss,” and (2) full-blown program logics such as apRHL (an approximate, probabilistic, relational Hoare logic), which support more flexible reasoning about subtle privacy-preserving algorithmic techniques but offer only minimal automation. We propose a three-level logic for differential privacy in an imperative setting and present a prototype implementation called Fuzzi. Fuzzi’s lowest level is a general-purpose logic; its middle level is apRHL; and its top level is a novel sensitivity logic adapted from the linear-logic-inspired type system of Fuzz, a differentially private functional language. The key novelty is a high degree of integration between the sensitivity logic and the two lower-level logics: the judgments and proofs of the sensitivity logic can be easily translated into apRHL; conversely, privacy properties of key algorithmic building blocks can be proved manually in apRHL and the base logic, then packaged up as typing rules that can be applied by a checker for the sensitivity logic to automatically construct privacy proofs for composite programs of arbitrary size. We demonstrate Fuzzi’s utility by implementing four different private machine-learning algorithms and showing that Fuzzi’s checker is able to derive tight sensitivity bounds. @Article{ICFP19p93, author = {Hengchu Zhang and Edo Roth and Andreas Haeberlen and Benjamin C. Pierce and Aaron Roth}, title = {Fuzzi: A Three-Level Logic for Differential Privacy}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {93}, numpages = {28}, doi = {10.1145/3341697}, year = {2019}, } Publisher's Version Article Search Artifacts Available Artifacts Functional
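The sensitivity reasoning the abstract describes can be made concrete with the standard Laplace mechanism, the textbook building block that sensitivity bounds justify. The sketch below is plain Python with hypothetical names, not Fuzzi's syntax or its verified building blocks:

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling from Laplace(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(db, pred, epsilon):
    """A counting query has sensitivity 1: adding or removing one
    record changes the true count by at most 1, so Laplace(1/epsilon)
    noise suffices for epsilon-differential privacy."""
    true_count = sum(1 for row in db if pred(row))
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)
db = [17, 23, 41, 8, 99]
answer = private_count(db, lambda age: age >= 18, epsilon=0.5)
```

A checker in the style of Fuzzi's sensitivity logic automates exactly the first step of this argument (bounding how much the query's output can change between neighboring databases), while the privacy of the noise-adding step is the kind of fact proved once in apRHL and packaged as a typing rule.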

Zhao, Jinxu 
ICFP '19: "A Mechanical Formalization ..."
A Mechanical Formalization of Higher-Ranked Polymorphic Type Inference
Jinxu Zhao, Bruno C. d. S. Oliveira, and Tom Schrijvers (University of Hong Kong, China; KU Leuven, Belgium) Modern functional programming languages, such as Haskell or OCaml, use sophisticated forms of type inference. While an important topic in programming languages research, there is little work on the mechanization of the metatheory of type inference in theorem provers. In particular, we are unaware of any complete formalization of the type inference algorithms that are the backbone of modern functional languages. This paper presents the first full mechanical formalization of the metatheory for higher-ranked polymorphic type inference. The system that we formalize is the bidirectional type system by Dunfield and Krishnaswami (DK). The DK type system has two variants (a declarative and an algorithmic one) that have been manually proven sound, complete and decidable. We present a mechanical formalization in the Abella theorem prover of DK’s declarative type system with a novel algorithmic system. We have a few reasons to use a new algorithm. Firstly, our new algorithm employs worklist judgments, which precisely capture the scope of variables and simplify the formalization of scoping in a theorem prover. Secondly, while DK’s original formalization comes with very well-written manual proofs, there are several details missing and some incorrect proofs, which complicate the task of writing a mechanized proof. Despite the use of a different algorithm we prove the same results as DK, although with significantly different proofs and proof techniques. Since such type inference algorithms are quite subtle and have a complex metatheory, mechanical formalizations are an important advance in type-inference research. @Article{ICFP19p112, author = {Jinxu Zhao and Bruno C. d. S. Oliveira and Tom Schrijvers}, title = {A Mechanical Formalization of Higher-Ranked Polymorphic Type Inference}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {112}, numpages = {29}, doi = {10.1145/3341716}, year = {2019}, } Publisher's Version Article Search Info Artifacts Functional
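The flavor of worklist-driven inference mentioned in the abstract can be conveyed with ordinary first-order unification: instead of recursing over the syntax, the algorithm repeatedly pops an equality constraint from an explicit worklist. This is only a loose Python illustration with a hypothetical term encoding (type variables are lowercase strings, constructors are tuples), not DK's bidirectional algorithm; it also omits the occurs check.

```python
def resolve(t, subst):
    # Follow substitution chains for type variables.
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(worklist):
    """Solve equality constraints with an explicit worklist rather
    than recursion. No occurs check: illustration only."""
    subst = {}
    while worklist:
        a, b = worklist.pop()
        a, b = resolve(a, subst), resolve(b, subst)
        if a == b:
            continue
        if isinstance(a, str) and a.islower():      # a is a type variable
            subst[a] = b
        elif isinstance(b, str) and b.islower():    # b is a type variable
            subst[b] = a
        elif isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            worklist.extend(zip(a, b))              # decompose constructors
        else:
            raise TypeError(f"cannot unify {a} and {b}")
    return subst

# Unifying  a -> Int  with  Bool -> b  yields  a := Bool, b := Int.
subst = unify([(("->", "a", "Int"), ("->", "Bool", "b"))])
```

In the paper's setting the worklist carries full judgments, not just equations, which is what lets variable scoping be tracked precisely inside the theorem prover.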

Zheng, Daniel 
ICFP '19: "Demystifying Differentiable ..."
Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator
Fei Wang, Daniel Zheng, James Decker, Xilun Wu, Grégory M. Essertel, and Tiark Rompf (Purdue University, USA) Deep learning has seen tremendous success over the past decade in computer vision, machine translation, and gameplay. This success rests crucially on gradient-descent optimization and the ability to “learn” parameters of a neural network by backpropagating observed errors. However, neural network architectures are growing increasingly sophisticated and diverse, which motivates an emerging quest for even more general forms of differentiable programming, where arbitrary parameterized computations can be trained by gradient descent. In this paper, we take a fresh look at automatic differentiation (AD) techniques, and especially aim to demystify the reverse-mode form of AD that generalizes backpropagation in neural networks. We uncover a tight connection between reverse-mode AD and delimited continuations, which permits implementing reverse-mode AD purely via operator overloading and without managing any auxiliary data structures. We further show how this formulation of AD can be fruitfully combined with multi-stage programming (staging), leading to an efficient implementation that combines the performance benefits of deep learning frameworks based on explicit reified computation graphs (e.g., TensorFlow) with the expressiveness of pure library approaches (e.g., PyTorch). @Article{ICFP19p96, author = {Fei Wang and Daniel Zheng and James Decker and Xilun Wu and Grégory M. Essertel and Tiark Rompf}, title = {Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator}, journal = {Proc. ACM Program. Lang.}, volume = {3}, number = {ICFP}, articleno = {96}, numpages = {31}, doi = {10.1145/3341700}, year = {2019}, } Publisher's Version Article Search
125 authors