ICFP 2017
Proceedings of the ACM on Programming Languages, Volume 1, Number ICFP, September 3–9, 2017, Oxford, UK

ICFP – Journal Issue


Title Page

Editorial Message
Welcome to the Proceedings of the ACM on Programming Languages (PACMPL)!

Art and Education

Herbarium Racketensis: A Stroll through the Woods (Functional Pearl)
Vincent St-Amour, Daniel Feltey, Spencer P. Florence, Shu-Hung You, and Robert Bruce Findler
(Northwestern University, USA)
Domain-specific languages are the ultimate abstraction, dixit Paul Hudak. But what abstraction should we use to build such ultimate abstractions? What is sauce for the goose is sauce for the gander: a language, of course! Racket is the ultimate abstraction-abstraction, a platform for quickly and easily building new ultimate abstractions. This pearl demonstrates Racket's power by taking a leisurely walk through the implementation of a DSL for Lindenmayer systems, the computational model par excellence of theoretical botany.
Artifacts Available, Artifacts Reusable
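A Lindenmayer system is just an alphabet, a set of rewrite rules, and iterated parallel rewriting. The following minimal sketch, in Haskell rather than the paper's Racket, uses Lindenmayer's classic algae system; the names are ours, not the paper's DSL:

    -- Lindenmayer's algae system: A -> AB, B -> A
    data Symbol = A | B deriving (Show, Eq)

    rules :: Symbol -> [Symbol]
    rules A = [A, B]
    rules B = [A]

    -- one derivation step rewrites every symbol simultaneously
    step :: (s -> [s]) -> [s] -> [s]
    step = concatMap

    -- the n-th generation of the system, starting from an axiom
    generation :: (s -> [s]) -> [s] -> Int -> [s]
    generation r axiom n = iterate (step r) axiom !! n
    -- generation rules [A] 3 == [A,B,A,A,B]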
Testing and Debugging Functional Reactive Programming
Ivan Perez and Henrik Nilsson
(University of Nottingham, UK)
Many types of interactive applications, including video games, raise particular challenges when it comes to testing and debugging. Reasons include a de facto lack of reproducibility and the difficulty of automatically generating suitable test data. This paper demonstrates that certain variants of Functional Reactive Programming (FRP) implemented in pure functional languages can mitigate such difficulties by offering referential transparency at the level of whole programs. This opens up a multi-pronged approach to assisting with testing and debugging that works across platforms, including assertions based on temporal logic, recording and replaying of runs (also from deployed code), and automated random testing using QuickCheck. The approach has been validated on real, non-trivial games implemented in the FRP system Yampa through a tool with a convenient graphical user interface that controls the execution of the code under scrutiny, moves along the execution timeline, and pinpoints violations of assertions, on PCs as well as on mobile platforms.
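To see why whole-program referential transparency matters for testing, consider this hedged sketch: a signal function modelled as a pure stream transformer (ignoring Yampa's sampling-time deltas), a temporal-logic-style "always" assertion, and a QuickCheck property over whole runs. The score function is a hypothetical stand-in for game logic:

    import Test.QuickCheck

    -- simplified model of a signal function as a pure stream transformer
    type SF a b = [a] -> [b]

    -- temporal assertion: a predicate holds at every sample of a run
    always :: (b -> Bool) -> [b] -> Bool
    always = all

    -- hypothetical game logic: accumulate non-negative score increments
    score :: SF Int Int
    score = scanl1 (+) . map (max 0)

    -- purity makes whole runs reproducible, so QuickCheck applies directly
    prop_scoreNonNegative :: [Int] -> Bool
    prop_scoreNonNegative inputs = always (>= 0) (score inputs)

    main :: IO ()
    main = quickCheck prop_scoreNonNegative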
Lock-Step Simulation Is Child's Play (Experience Report)
Joachim Breitner and Chris Smith
(University of Pennsylvania, USA; Google, USA)
Implementing multi-player networked games by broadcasting the players' inputs and letting each client calculate the game state, a scheme known as lock-step simulation, is an established technique. However, ensuring that every client in this scheme obtains a consistent state is infamously hard and in general requires great discipline from the game programmer. The thesis of this pearl is that in the realm of functional programming, in particular with Haskell's purity and static pointers, this hard problem becomes almost trivially easy. We support this thesis by implementing lock-step simulation under very adverse conditions. We extended the educational programming environment CodeWorld, which is used to teach math and programming to middle school students, with the ability to create and run interactive, networked multi-user games. Despite providing a very abstract and high-level interface, and without requiring any discipline from the programmer, we can provide consistent lock-step simulation with client prediction.
Artifacts Available, Artifacts Functional
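The thesis fits in a few lines: with a pure, deterministic step function, every client that folds the same broadcast input log computes the same state by construction. A minimal sketch with a hypothetical one-dimensional game (this is not CodeWorld's actual API):

    -- hypothetical events and state
    data Event = MoveLeft | MoveRight deriving (Eq, Show)
    type State = Int  -- the player's position

    -- a pure, deterministic step function
    step :: State -> Event -> State
    step s MoveLeft  = s - 1
    step s MoveRight = s + 1

    -- lock-step simulation in miniature: each client replays the shared
    -- log of inputs; purity guarantees that all clients agree
    clientState :: [Event] -> State
    clientState = foldl step 0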
Scaling up Functional Programming Education: Under the Hood of the OCaml MOOC
Benjamin Canou, Roberto Di Cosmo, and Grégoire Henry
(OCamlPro, France; Inria, France; University of Paris Diderot, France)
This article describes the key innovations used in the massive open online course "Introduction to Functional Programming using OCaml", which has run since the fall semester of 2015. A fully in-browser development environment with an integrated grader provides an exceptional level of feedback to the learners. A functional library of grading combinators greatly simplifies the notoriously complex task of writing test suites for the exercises, and provides static type-safety guarantees on the tested user code. Even the error-prone manual process of importing the course content into the learning platform has been replaced by a functional program that describes the course and statically checks its contents. A detailed statistical analysis of the data collected during and after the course assesses the effectiveness of these innovations.

Functional Programming Techniques

Faster Coroutine Pipelines
Michael Spivey
(University of Oxford, UK)
Coroutine pipelines provide an attractive structuring mechanism for complex programs that process streams of data, with the advantage over lazy streams that both ends of a pipeline may interact with the I/O system, as may processes in the middle. Two popular Haskell libraries, Pipes and Conduit, support such pipelines. In both libraries, pipelines are implemented in a direct style by combining a free monad of communication events with an interpreter for (pseudo-)parallel composition that interleaves the events of its argument processes. These implementations both suffer from a slow-down when processes are deeply nested in sequence or in parallel. We propose an alternative implementation of pipelines based on continuations that does not suffer from this slow-down. What is more, the implementation is significantly faster on small, communication-intensive examples even where they do not suffer from the slow-down, and comparable in speed with similar programs based on lazy streams. We also show that the continuation-based implementation may be derived from the direct-style implementation by algebraic reasoning.
Artifacts Available, Artifacts Reusable
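For context, the direct style that the paper improves on can be sketched in a few lines of Haskell: a process is a tree of communication events, and parallel composition interleaves them by pattern matching. Deep nesting of (>+>) repeatedly re-traverses these constructors, which is the slow-down the continuation-based implementation avoids. Names are illustrative, not the Pipes or Conduit API:

    -- direct style: a process is a free structure of communication events
    data Pipe i o r
      = Input (i -> Pipe i o r)   -- wait for a value from upstream
      | Output o (Pipe i o r)     -- emit a value downstream
      | Done r

    -- (pseudo-)parallel composition, driven by the downstream process
    (>+>) :: Pipe a b r -> Pipe b c r -> Pipe a c r
    _  >+> Done r     = Done r
    up >+> Output c k = Output c (up >+> k)
    up >+> Input f    = case up of
      Done r     -> Done r
      Output b k -> k >+> f b
      Input g    -> Input (\a -> g a >+> Input f)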
A Pretty But Not Greedy Printer (Functional Pearl)
Jean-Philippe Bernardy
(University of Gothenburg, Sweden)
This paper proposes a new specification of pretty printing which is stronger than the state of the art: we require the output to be the shortest possible, and we also offer the ability to align sub-documents at will. We argue that our specification precludes a greedy implementation. Yet, we provide an implementation which behaves linearly in the size of the output. The derivation of the implementation demonstrates functional programming methodology.
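The specification reads almost literally as (deliberately inefficient) code: a document denotes a set of candidate layouts, and rendering picks a shortest layout among those that fit the page width. This is a hedged sketch of the specification only; the paper's contribution is deriving a linear-time implementation from it:

    import Data.List (minimumBy)
    import Data.Ord (comparing)

    type Layout = [String]           -- one string per output line

    fits :: Int -> Layout -> Bool
    fits width = all ((<= width) . length)

    -- the specification: among the layouts that fit, the output must be
    -- one of minimal height (partial if no layout fits the width)
    render :: Int -> [Layout] -> Layout
    render width = minimumBy (comparing length) . filter (fits width)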
Generic Functional Parallel Algorithms: Scan and FFT
Conal Elliott
(Target, USA)

Parallel programming, whether imperative or functional, has long focused on arrays as the central data type. Meanwhile, typed functional programming has explored a variety of data types, including lists and various forms of trees. Generic functional programming decomposes these data types into a small set of fundamental building blocks: sum, product, composition, and their associated identities. Definitions over these few fundamental type constructions then automatically assemble into algorithms for an infinite variety of data types—some familiar and some new. This paper presents generic functional formulations for two important and well-known classes of parallel algorithms: parallel scan (generalized prefix sum) and fast Fourier transform (FFT). Notably, arrays play no role in these formulations. Consequent benefits include a simpler and more compositional style, much use of common algebraic patterns, and freedom from the possibility of run-time indexing errors. The functional generic style also clearly reveals deep commonality among what otherwise appear to be quite different algorithms. Instantiating the generic formulations, two well-known algorithms for each of parallel scan and FFT naturally emerge, as well as two possibly new algorithms.

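The flavour of the generic approach can be seen with left scan on two of the building blocks, identity and product (the composition instance additionally needs a zip-like operation and is omitted here). This is a sketch in the paper's spirit, with our own names:

    {-# LANGUAGE DeriveFunctor, TypeOperators #-}

    newtype Id a        = Id a          deriving Functor
    data    (f :*: g) a = f a :*: g a   deriving Functor

    -- generic exclusive left scan: scanned structure plus the total
    class Functor f => LScan f where
      lscan :: Monoid a => f a -> (f a, a)

    instance LScan Id where
      lscan (Id a) = (Id mempty, a)

    -- scan both halves; prefix the right half with the left half's total
    instance (LScan f, LScan g) => LScan (f :*: g) where
      lscan (fa :*: ga) = (fa' :*: fmap (fx <>) ga', fx <> gx)
        where (fa', fx) = lscan fa
              (ga', gx) = lscan ga

Instantiating such definitions at different tree shapes (e.g. left- versus right-nested compositions) is what yields the known and new algorithms mentioned in the abstract.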
A Unified Approach to Solving Seven Programming Problems (Functional Pearl)
William E. Byrd, Michael Ballantyne, Gregory Rosenblatt, and Matthew Might
(University of Utah, USA)
We present seven programming challenges in Racket, and an elegant, unified approach to solving them using constraint logic programming in miniKanren.
Artifacts Available, Artifacts Reusable


Prototyping a Query Compiler using Coq (Experience Report)
Joshua S. Auerbach, Martin Hirzel, Louis Mandel, Avraham Shinnar, and Jérôme Siméon
(IBM Research, USA)
Designing and prototyping new features is important in many industrial projects. Functional programming and formal verification tools can prove valuable for that purpose, but lead to challenges when integrating with existing product code or when planning technology transfer. This article reports on our experience using the Coq proof assistant as a prototyping environment for building a query compiler intended for use in IBM's ODM Insights product. We discuss the pros and cons of using Coq for this purpose and describe our methodology for porting the compiler to Java, as required for product integration.
Artifacts Available, Artifacts Functional
A Framework for Adaptive Differential Privacy
Daniel Winograd-Cort, Andreas Haeberlen, Aaron Roth, and Benjamin C. Pierce
(University of Pennsylvania, USA)

Differential privacy is a widely studied theory for analyzing sensitive data with a strong privacy guarantee—any change in an individual’s data can have only a small statistical effect on the result—and a growing number of programming languages now support differentially private data analysis. A common shortcoming of these languages is poor support for adaptivity. In practice, a data analyst rarely wants to run just one function over a sensitive database, nor even a predetermined sequence of functions with fixed privacy parameters; rather, she wants to engage in an interaction where, at each step, both the choice of the next function and its privacy parameters are informed by the results of prior functions. Existing languages support this scenario using a simple composition theorem, which often gives rather loose bounds on the actual privacy cost of composite functions, substantially reducing how much computation can be performed within a given privacy budget. The theory of differential privacy includes other theorems with much better bounds, but these have not yet been incorporated into programming languages.

We propose a novel framework for adaptive composition that is elegant, practical, and implementable. It consists of a reformulation, based on typed functional programming, of privacy filters, together with a concrete realization of this framework in the design and implementation of a new language, called Adaptive Fuzz. Adaptive Fuzz transplants the core static type system of Fuzz to the adaptive setting by wrapping the Fuzz typechecker and runtime system in an outer adaptive layer, allowing Fuzz programs to be conveniently constructed and typechecked on the fly. We describe an interpreter for Adaptive Fuzz and report results from two case studies demonstrating its effectiveness for implementing common statistical algorithms over real data sets.

Artifacts Available, Artifacts Functional
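The notion of a privacy filter can be sketched independently of Adaptive Fuzz: a stateful guard that is fed the privacy cost of each adaptively chosen query and refuses to answer once the budget would be exceeded. A minimal sketch for simple composition only; the paper's point is accommodating stronger composition theorems behind the same interface:

    -- feed the filter a query's epsilon; Nothing means budget exhausted
    newtype Filter = Filter { spend :: Double -> Maybe Filter }

    basicFilter :: Double -> Filter
    basicFilter remaining = Filter step
      where
        step eps
          | eps <= remaining = Just (basicFilter (remaining - eps))
          | otherwise        = Nothing

An analyst's interaction threads the filter through each step, so the choice of the next query and its parameters may depend on earlier results without overrunning the global budget.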
Symbolic Conditioning of Arrays in Probabilistic Programs
Praveen Narayanan and Chung-chieh Shan
(Indiana University, USA)

Probabilistic programming systems make machine learning more modular by automating inference. Recent work by Shan and Ramsey makes inference more modular by automating conditioning. Their technique uses a symbolic program transformation that treats conditioning generally via the measure-theoretic notion of disintegration. This technique, however, is limited to conditioning a single scalar variable. As a step towards modular inference for realistic machine learning applications, we have extended the disintegration algorithm to symbolically condition arrays in probabilistic programs. The extended algorithm implements lifted disintegration, where repetition is treated symbolically and without unrolling loops. The technique uses a language of index variables for tracking expressions at various array levels. We find that the method works well for arbitrarily sized arrays of independent random choices, with the conditioning step taking time linear in the number of indices needed to select an element.

Artifacts Available, Artifacts Reusable


Abstracting Definitional Interpreters (Functional Pearl)
David Darais, Nicholas Labich, Phúc C. Nguyen, and David Van Horn
(University of Maryland, USA)

In this functional pearl, we examine the use of definitional interpreters as a basis for abstract interpretation of higher-order programming languages. As it turns out, definitional interpreters, especially those written in monadic style, can provide a nice basis for a wide variety of collecting semantics, abstract interpretations, symbolic executions, and their intermixings.

But the real insight of this story is a replaying of an insight from Reynolds's landmark paper, Definitional Interpreters for Higher-Order Programming Languages, in which he observes that definitional interpreters enable the defined-language to inherit properties of the defining-language. We show the same holds true for definitional abstract interpreters. Remarkably, we observe that abstract definitional interpreters can inherit the so-called “pushdown control flow” property, wherein function calls and returns are precisely matched in the abstract semantics, simply by virtue of the function call mechanism of the defining-language.

The first approaches to achieve this property for higher-order languages appeared within the last ten years, and have since been the subject of many papers. These approaches start from a state-machine semantics and uniformly involve significant technical engineering to recover the precision of pushdown control flow. In contrast, starting from a definitional interpreter, the pushdown control flow property is inherent in the meta-language and requires no further technical mechanism to achieve.

Artifacts Available, Artifacts Reusable
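The starting point is compact enough to show: a definitional interpreter in monadic style, where choosing the monad (and the value and store domains) yields concrete, collecting, or abstract semantics. A hedged sketch, not the paper's exact development:

    import qualified Data.Map as M

    data Expr = Num Int | Var String | Lam String Expr | App Expr Expr

    data Value m = IntV Int | Clo (Value m -> m (Value m))

    type Env m = M.Map String (Value m)

    -- monadic definitional interpreter; note that function application
    -- simply uses the defining language's calls, which is what lets an
    -- abstract variant inherit pushdown control flow
    eval :: MonadFail m => Env m -> Expr -> m (Value m)
    eval _   (Num n)   = pure (IntV n)
    eval env (Var x)   = maybe (fail "unbound variable") pure (M.lookup x env)
    eval env (Lam x b) = pure (Clo (\v -> eval (M.insert x v env) b))
    eval env (App f a) = do
      Clo g <- eval env f   -- match failure is handled by the monad
      v     <- eval env a
      g v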
On the Expressive Power of User-Defined Effects: Effect Handlers, Monadic Reflection, Delimited Control
Yannick Forster, Ohad Kammar, Sam Lindley, and Matija Pretnar
(Saarland University, Germany; University of Cambridge, UK; University of Oxford, UK; University of Edinburgh, UK; University of Ljubljana, Slovenia)
We compare the expressive power of three programming abstractions for user-defined computational effects: Plotkin and Pretnar's effect handlers, Filinski's monadic reflection, and delimited control without answer-type-modification. This comparison allows a precise discussion about the relative expressiveness of each programming abstraction. It also demonstrates the sensitivity of the relative expressiveness of user-defined effects to seemingly orthogonal language features. We present three calculi, one per abstraction, extending Levy's call-by-push-value. For each calculus, we present syntax, operational semantics, a natural type-and-effect system, and, for effect handlers and monadic reflection, a set-theoretic denotational semantics. We establish their basic metatheoretic properties: safety, termination, and, where applicable, soundness and adequacy. Using Felleisen's notion of a macro translation, we show that these abstractions can macro-express each other, and show which translations preserve typeability. We use the adequate finitary set-theoretic denotational semantics for the monadic calculus to show that effect handlers cannot be macro-expressed while preserving typeability either by monadic reflection or by delimited control. Our argument fails with simple changes to the type system such as polymorphism and inductive types. We supplement our development with a mechanised Abella formalisation.
Artifacts Available, Artifacts Functional
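For readers unfamiliar with the first of the three abstractions, the effect-handler style can be sketched with a free monad in Haskell: operations are uninterpreted syntax, and a handler assigns them meaning. This only illustrates the abstraction; the paper's calculi extend call-by-push-value:

    -- computations as trees of uninterpreted operations
    data Free f a = Pure a | Op (f (Free f a))

    -- a state effect signature, with the continuation inside each operation
    data StateOp s k = Get (s -> k) | Put s k

    instance Functor (StateOp s) where
      fmap f (Get k)   = Get (f . k)
      fmap f (Put s k) = Put s (f k)

    -- a handler interprets the operations, here into state-passing functions
    runState :: Free (StateOp s) a -> s -> (a, s)
    runState (Pure a)        s = (a, s)
    runState (Op (Get k))    s = runState (k s) s
    runState (Op (Put s' k)) _ = runState k s'

    -- example: read the state, then overwrite it
    tick :: Free (StateOp Int) Int
    tick = Op (Get (\n -> Op (Put (n + 1) (Pure n))))
    -- runState tick 41 == (41, 42)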
Imperative Functional Programs That Explain Their Work
Wilmer Ricciotti, Jan Stolarek, Roly Perera, and James Cheney
(University of Edinburgh, UK)

Program slicing provides explanations that illustrate how program outputs were produced from inputs. We build on an approach introduced in prior work, where dynamic slicing was defined for pure higher-order functional programs as a Galois connection between lattices of partial inputs and partial outputs. We extend this approach to imperative functional programs that combine higher-order programming with references and exceptions. We present proofs of correctness and optimality of our approach and a proof-of-concept implementation and experimental evaluation.

Artifacts Available, Artifacts Functional
Effect-Driven QuickChecking of Compilers
Jan Midtgaard, Mathias Nygaard Justesen, Patrick Kasting, Flemming Nielson, and Hanne Riis Nielson
(DTU, Denmark)
How does one test a language implementation with QuickCheck (a.k.a. property-based testing)? One approach is to generate programs following the grammar of the language. But in a statically-typed language such as OCaml, too many of these candidate programs will be rejected as ill-typed by the type checker. As a refinement, Pałka et al. propose to generate programs in a goal-directed, bottom-up reading of the typing relation. We have written such a generator. However, many of the generated programs have output that depends on the evaluation order, which is commonly under-specified in languages such as OCaml, Scheme, C, and C++. In this paper we develop a type and effect system for conservatively detecting evaluation-order dependence and propose its goal-directed reading as a generator of programs that are independent of evaluation order. We illustrate the approach by generating programs to test OCaml's two compiler backends against each other and report on a number of bugs we have found doing so.
Artifacts Functional
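Type-directed generation, the starting point that the paper refines with effects, fits in a few lines of QuickCheck: read the typing rules bottom-up and pick a rule whose conclusion matches the goal type. A toy sketch for a hypothetical two-type expression language, not the authors' OCaml generator:

    import Test.QuickCheck

    data Ty   = TInt | TBool deriving (Eq, Show)
    data Expr = IntLit Int | BoolLit Bool
              | Add Expr Expr | If Expr Expr Expr
              deriving Show

    -- goal-directed reading of the typing relation: a term of type t
    genExpr :: Int -> Ty -> Gen Expr
    genExpr 0 TInt  = IntLit  <$> arbitrary
    genExpr 0 TBool = BoolLit <$> arbitrary
    genExpr n t = oneof $
         [ genExpr 0 t ]                                -- leaves
      ++ [ Add <$> sub TInt <*> sub TInt | t == TInt ]  -- rule for Add
      ++ [ If <$> sub TBool <*> sub t <*> sub t ]       -- rule for If
      where sub = genExpr (n `div` 2)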

Low-Level and Systems Programming

Persistence for the Masses: RRB-Vectors in a Systems Language
Juan Pedro Bolívar Puente

Relaxed Radix Balanced Trees (RRB-Trees) are one of the latest members in a family of persistent tree-based data structures that combine wide branching factors with simple and relatively flat structures. Like the battle-tested immutable sequences of Clojure and Scala, they have effectively constant-time lookup and updates and good cache utilization, but also logarithmic concatenation and slicing. Our goal is to bring the benefits of persistent data structures to the discipline of systems programming via generic yet efficient immutable vectors supporting transient batch updates. We describe a C++ implementation that can be integrated in the runtime of higher-level languages with a C core (Lisps like Guile or Racket, but also Python or Ruby), thus widening access to these persistent data structures.

In this work we propose (1) an Embedding RRB-Tree (ERRB-Tree) data structure that efficiently stores arbitrary unboxed types, (2) a technique for implementing tree operations orthogonal to optimizations for a more compact representation of the tree, (3) a policy-based design to support multiple memory management and reclamation mechanisms (including automatic garbage collection and reference counting), (4) a model of transience based on move semantics and reference counting, and (5) a definition of transience for confluent meld operations. Combining these techniques, performance comparable to that of mutable arrays can be achieved in many situations, while using the data structure in a functional way.

Artifacts Available, Artifacts Functional
Verified Low-Level Programming Embedded in F*
Jonathan Protzenko, Jean-Karim Zinzindohoué, Aseem Rastogi, Tahina Ramananandro, Peng Wang, Santiago Zanella-Béguelin, Antoine Delignat-Lavaud, Cătălin Hriţcu, Karthikeyan Bhargavan, Cédric Fournet, and Nikhil Swamy
(Microsoft Research, USA; Inria, France; Massachusetts Institute of Technology, USA)
We present Low*, a language for low-level programming and verification, and its application to high-assurance optimized cryptographic libraries. Low* is a shallow embedding of a small, sequential, well-behaved subset of C in F*, a dependently-typed variant of ML aimed at program verification. Departing from ML, Low* does not involve any garbage collection or implicit heap allocation; instead, it has a structured memory model à la CompCert, and it provides the control required for writing efficient low-level security-critical code. By virtue of typing, any Low* program is memory safe. In addition, the programmer can make full use of the verification power of F* to write high-level specifications and verify the functional correctness of Low* code using a combination of SMT automation and sophisticated manual proofs. At extraction time, specifications and proofs are erased, and the remaining code enjoys a predictable translation to C. We prove that this translation preserves semantics and side-channel resistance. We provide a new compiler back-end from Low* to C and, to evaluate our approach, we implement and verify various cryptographic algorithms, constructions, and tools for a total of about 28,000 lines of code. We show that our Low* code delivers performance competitive with existing (unverified) C cryptographic libraries, suggesting our approach may be applicable to larger-scale low-level software.
Artifacts Available, Artifacts Reusable
Verifying Efficient Function Calls in CakeML
Scott Owens, Michael Norrish, Ramana Kumar, Magnus O. Myreen, and Yong Kiam Tan
(University of Kent, UK; Data61 at CSIRO, Australia; Australian National University, Australia; UNSW, Australia; Chalmers University of Technology, Sweden; Carnegie Mellon University, USA)
We have designed an intermediate language (IL) for the CakeML compiler that supports the verified, efficient compilation of functions and calls. Verified compilation steps include batching of multiple curried arguments, detecting calls to statically known functions, and specialising calls to known functions with no free variables. Finally, we verify the translation to a lower-level IL that only supports closed, first-order functions. These compilation steps resemble those found in other compilers (especially OCaml). Our contribution here is the design of the semantics of the IL, and the demonstration that our verification techniques over this semantics work well in practice at this scale. The entire development was carried out in the HOL4 theorem prover.
Artifacts Reusable
Better Living through Operational Semantics: An Optimizing Compiler for Radio Protocols
Geoffrey Mainland
(Drexel University, USA)
Software-defined radio (SDR) promises to bring the flexibility and rapid iterative workflow of software to radio protocol design. Many factors make achieving this promise challenging, not least of which are the high data rates and timing requirements of real-world radio protocols. The Ziria language and accompanying compiler demonstrated that a high-level language can compete in this demanding space, but extracting reusable lessons from this success is difficult due to Ziria's lack of a formal semantics. Our first contribution is a core language, operational semantics, and type system for Ziria. The insight we gained through developing this operational semantics led to our second contribution, consisting of two program transformations. The first is fusion, which can eliminate intermediate queues in Ziria programs. Fusion subsumes many one-off optimizations performed by the original Ziria compiler. The second transformation is pipeline coalescing, which reduces execution overhead by batching IO. Pipeline coalescing relies critically on fusion and provides a much simpler story for the original Ziria compiler's "vectorization" transformation. These developments serve as the basis of our third contribution, a new compiler for Ziria that produces significantly faster code than the original compiler. The new compiler leverages our intermediate language to help eliminate unnecessary memory traffic. As well as providing a firm foundation for the Ziria language, our work on an operational semantics resulted in very real software engineering benefits. These benefits need not be limited to SDR---the core language and accompanying transformations we present are applicable to other domains that require processing streaming data at high speed.
Artifacts Available, Artifacts Reusable

Foundations of Higher-Order Programming

Foundations of Strong Call by Need
Thibaut Balabonski, Pablo Barenbaum, Eduardo Bonelli, and Delia Kesner
(LRI, France; University of Paris-Sud, France; CNRS, France; University of Paris-Saclay, France; University of Buenos Aires, Argentina; IRIF, France; University of Paris Diderot, France; CONICET, Argentina; Universidad Nacional de Quilmes, Argentina; Stevens Institute of Technology, USA)

We present a call-by-need strategy for computing strong normal forms of open terms (reduction is admitted inside the body of abstractions and substitutions, and the terms may contain free variables), which guarantees that arguments are only evaluated when needed and at most once. The strategy is shown to be complete with respect to β-reduction to strong normal form. The proof of completeness relies on two key tools: (1) the definition of a strong call-by-need calculus where reduction may be performed inside any context, and (2) the use of non-idempotent intersection types. More precisely, terms admitting a β-normal form in pure lambda calculus are typable, typability implies (weak) normalisation in the strong call-by-need calculus, and weak normalisation in the strong call-by-need calculus implies normalisation in the strong call-by-need strategy. Our (strong) call-by-need strategy is also shown to be conservative over the standard (weak) call-by-need.

A Relational Logic for Higher-Order Programs
Alejandro Aguirre, Gilles Barthe, Marco Gaboardi, Deepak Garg, and Pierre-Yves Strub
(IMDEA Software Institute, Spain; University at Buffalo, USA; SUNY, USA; MPI-SWS, Germany; École Polytechnique, France)

Relational program verification is a variant of program verification where one can reason about two programs and as a special case about two executions of a single program on different inputs. Relational program verification can be used for reasoning about a broad range of properties, including equivalence and refinement, and specialized notions such as continuity, information flow security or relative cost. In a higher-order setting, relational program verification can be achieved using relational refinement type systems, a form of refinement types where assertions have a relational interpretation. Relational refinement type systems excel at relating structurally equivalent terms but provide limited support for relating terms with very different structures.

We present a logic, called Relational Higher Order Logic (RHOL), for proving relational properties of a simply typed λ-calculus with inductive types and recursive definitions. RHOL retains the type-directed flavour of relational refinement type systems but achieves greater expressivity through rules which simultaneously reason about the two terms as well as rules which only contemplate one of the two terms. We show that RHOL has strong foundations, by proving an equivalence with higher-order logic (HOL), and leverage this equivalence to derive key meta-theoretical properties: subject reduction, admissibility of a transitivity rule and set-theoretical soundness. Moreover, we define sound embeddings for several existing relational type systems such as relational refinement types and type systems for dependency analysis and relative cost, and we verify examples that were out of reach of prior work.

Artifacts Available, Artifacts Functional
How to Prove Your Calculus Is Decidable: Practical Applications of Second-Order Algebraic Theories and Computation
Makoto Hamana
(Gunma University, Japan)
We present a general methodology for proving the decidability of the equational theory of programming language concepts in the framework of second-order algebraic theories. We propose a Haskell-based analysis tool SOL, Second-Order Laboratory, which assists the proofs of confluence and strong normalisation of computation rules derived from second-order algebraic theories. To cover various examples in programming language theory, we combine and extend both syntactical and semantical results of second-order computation in a non-trivial manner. We demonstrate how to prove the decidability of various algebraic theories in the literature, including the equational theories of monads and lambda-calculi, Plotkin and Power's theory of states, and Stark's theory of the pi-calculus.
No-Brainer CPS Conversion (Functional Pearl)
Milo Davis, William Meehan, and Olin Shivers
(Northeastern University, USA)

Algorithms that convert direct-style λ-calculus terms to their equivalent terms in continuation-passing style (CPS) typically introduce so-called “administrative redexes”: useless artifacts of the conversion that must be cleaned up by a subsequent pass over the result to reduce them away. We present a simple, linear-time algorithm for CPS conversion that introduces no administrative redexes. In fact, the output term is a normal form in a reduction system that generalizes the notion of “administrative redexes” to what we call “no-brainer redexes”, that is, redexes whose reduction shrinks the size of the term. We state the theorems which establish the algorithm’s desirable properties, along with sketches of the full proofs.

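For contrast with the pearl's algorithm, the classic Plotkin-style conversion below does introduce administrative redexes: every case wraps its result in a fresh lambda that a later pass must reduce away. A sketch in Haskell, assuming source variables avoid the reserved names k, m, and n:

    data Term = Var String | Lam String Term | App Term Term
      deriving Show

    -- naive CPS conversion; the extra lambdas built at each step are
    -- exactly the administrative redexes the pearl's algorithm avoids
    cps :: Term -> Term
    cps (Var x)   = Lam "k" (App (Var "k") (Var x))
    cps (Lam x b) = Lam "k" (App (Var "k") (Lam x (cps b)))
    cps (App f a) =
      Lam "k" (App (cps f)
                   (Lam "m" (App (cps a)
                                 (Lam "n" (App (App (Var "m") (Var "n"))
                                               (Var "k"))))))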

Tools for Verification

Kami: A Platform for High-Level Parametric Hardware Specification and Its Modular Verification
Joonwon Choi, Muralidaran Vijayaraghavan, Benjamin Sherman, Adam Chlipala, and Arvind
(Massachusetts Institute of Technology, USA)
It has become fairly standard in the programming-languages research world to verify functional programs in proof assistants using induction, algebraic simplification, and rewriting. In this paper, we introduce Kami, a Coq library that enables similar expressive and modular reasoning for hardware designs expressed in the style of the Bluespec language. We can specify, implement, and verify realistic designs entirely within Coq, ending with automatic extraction into a pipeline that bottoms out in FPGAs. Our methodology, using labeled transition systems, has been evaluated in a case study verifying an infinite family of multicore systems, with cache-coherent shared memory and pipelined cores implementing (the base integer subset of) the RISC-V instruction set.
Artifacts Available, Artifacts Functional
SpaceSearch: A Library for Building and Verifying Solver-Aided Tools
Konstantin Weitz, Steven Lyubomirsky, Stefan Heule, Emina Torlak, Michael D. Ernst, and Zachary Tatlock
(University of Washington, USA; Stanford University, USA)
Many verification tools build on automated solvers. These tools reduce problems in a specific application domain (e.g., compiler optimization validation) to queries that can be discharged with a highly optimized solver. But the correctness of the reductions themselves is rarely verified in practice, limiting the confidence that the solver's output establishes the desired domain-level property. This paper presents SpaceSearch, a new library for developing solver-aided tools within a proof assistant. A user builds their solver-aided tool in Coq against the SpaceSearch interface, and the user then verifies that the results provided by the interface are sufficient to establish the tool's desired high-level properties. Once verified, the tool can be extracted to an implementation in a solver-aided language (e.g., Rosette), where SpaceSearch provides an efficient instantiation of the SpaceSearch interface with calls to an underlying SMT solver. This combines the strong correctness guarantees of developing a tool in a proof assistant with the high performance of modern SMT solvers. This paper also introduces new optimizations for such verified solver-aided tools, including parallelization and incrementalization. We evaluate SpaceSearch by building and verifying two solver-aided tools. The first, SaltShaker, checks that RockSalt's x86 semantics for a given instruction agrees with STOKE's x86 semantics. SaltShaker identified 7 bugs in RockSalt and 1 bug in STOKE. After these systems were patched by their developers, SaltShaker verified the semantics' agreement on 15,255 instruction instantiations in under 2h. The second tool, BGProof, is a verified version of an existing Border Gateway Protocol (BGP) router configuration checker. Like the existing checker, BGProof scales to checking industrial configurations spanning over 240 KLOC, identifying 19 configuration inconsistencies with no false positives. However, the correctness of BGProof has been formally proven, and we found 2 bugs in the unverified implementation. These results demonstrate that SpaceSearch is a practical approach to developing efficient, verified solver-aided tools.
Artifacts Reusable
Local Refinement Typing
Benjamin Cosman and Ranjit Jhala
(University of California at San Diego, USA)

We introduce the FUSION algorithm for local refinement type inference, yielding a new SMT-based method for verifying programs with polymorphic data types and higher-order functions. FUSION is concise as the programmer need only write signatures for (externally exported) top-level functions and places with cyclic (recursive) dependencies, after which FUSION can predictably synthesize the most precise refinement types for all intermediate terms (expressible in the decidable refinement logic), thereby checking the program without false alarms. We have implemented FUSION and evaluated it on the benchmarks from the LiquidHaskell suite, totalling about 12 KLOC. FUSION checks an existing safety benchmark suite using about half as many templates as previously required and nearly 2× faster. In a new set of theorem-proving benchmarks FUSION is both 10-50× faster and, by synthesizing the most precise types, avoids false alarms to make verification possible.

Artifacts Reusable

Program Construction

Compiling to Categories
Conal Elliott
(Target, USA)
It is well-known that the simply typed lambda-calculus is modeled by any cartesian closed category (CCC). This correspondence suggests giving typed functional programs a variety of interpretations, each corresponding to a different category. A convenient way to realize this idea is as a collection of meaning-preserving transformations added to an existing compiler, such as GHC for Haskell. This paper describes such an implementation and demonstrates its use for a variety of interpretations including hardware circuits, automatic differentiation, incremental computation, and interval analysis. Each such interpretation is a category easily defined in Haskell (outside of the compiler). The general technique appears to provide a compelling alternative to deeply embedded domain-specific languages.
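The vocabulary targeted by the translation is easy to sketch: type classes for the categorical operations that lambda terms compile into, with ordinary functions as just one interpretation. A minimal fragment, with names that only approximate the paper's:

    {-# LANGUAGE TypeOperators #-}
    import Prelude hiding (id, (.))

    class Category k where
      id  :: a `k` a
      (.) :: b `k` c -> a `k` b -> a `k` c

    class Category k => Cartesian k where
      exl  :: (a, b) `k` a
      exr  :: (a, b) `k` b
      fork :: a `k` b -> a `k` c -> a `k` (b, c)

    -- ordinary functions recover standard evaluation; circuits,
    -- derivatives, intervals, etc. arise as other instances
    instance Category (->) where
      id    = \a -> a
      g . f = \a -> g (f a)

    instance Cartesian (->) where
      exl        = fst
      exr        = snd
      fork f g a = (f a, g a)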
Visitors Unchained
François Pottier
(Inria, France)
Traversing and transforming abstract syntax trees that involve name binding is notoriously difficult to do in a correct, concise, modular, customizable manner. We address this problem in the setting of OCaml, a functional programming language equipped with powerful object-oriented features. We use visitor classes as partial, composable descriptions of the operations that we wish to perform on abstract syntax trees. We introduce "visitors", a simple type-directed facility for generating visitor classes that have no knowledge of binding. Separately, we present "alphaLib", a library of small hand-written visitor classes, each of which knows about a specific binding construct, a specific representation of names, and/or a specific operation on abstract syntax trees. By combining these components, a wide range of operations can be defined. Multiple representations of names can be supported, as well as conversions between representations. Binding structure can be described either in a programmatic style, by writing visitor methods, or in a declarative style, via preprogrammed binding combinators.
Staged Generic Programming
Jeremy Yallop
(University of Cambridge, UK)
Generic programming libraries such as Scrap Your Boilerplate eliminate the need to write repetitive code, but typically introduce significant performance overheads. This leaves programmers with the regrettable choice between writing succinct but slow programs and writing tedious but efficient programs. Applying structured multi-stage programming techniques transforms Scrap Your Boilerplate from an inefficient library into a typed optimising code generator, bringing its performance in line with hand-written code, and so combining high-level programming with uncompromised performance.
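The library being staged is worth one example: with Scrap Your Boilerplate, a single generic traversal handles every constructor, at the price of run-time type tests at each node, and those tests are the overhead that staging removes. A small sketch using the syb package:

    {-# LANGUAGE DeriveDataTypeable #-}
    import Data.Generics (Data, Typeable, everywhere, mkT)

    data Expr = Lit Int | Add Expr Expr | Neg Expr
      deriving (Show, Data, Typeable)

    -- one generic definition, no per-constructor boilerplate
    incLits :: Expr -> Expr
    incLits = everywhere (mkT bump)
      where
        bump :: Expr -> Expr
        bump (Lit n) = Lit (n + 1)
        bump e       = e
    -- incLits (Add (Lit 1) (Neg (Lit 2))) == Add (Lit 2) (Neg (Lit 3))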

Domain-Specific Languages

Super 8 Languages for Making Movies (Functional Pearl)
Leif Andersen, Stephen Chang, and Matthias Felleisen
(Northeastern University, USA)
The Racket doctrine tells developers to narrow the gap between the terminology of a problem domain and general programming constructs by creating languages instead of just plain programs. This pearl illustrates this point with the creation of a relatively simple domain-specific language for editing videos. To produce the video proceedings of a conference, for example, video professionals traditionally use "non-linear" GUI editors to manually edit each talk, despite the repetitive nature of the process. As it turns out, video editing naturally splits the work into a declarative phase and an imperative rendering phase at the end. Hence it is natural to create a functional-declarative language for the first phase, which reduces a lot of manual labor. This user-facing DSL utilizes a second, internal DSL to implement the second phase, which is an interface to a general, low-level C library. Finally, we inject type checking into our language via another DSL that supports programming in the language of type formalisms. In short, the development of the video editing language cleanly demonstrates how the Racket doctrine naturally leads to the creation of language hierarchies, analogous to the hierarchies of modules found in conventional functional languages.
Artifacts Available, Artifacts Reusable

Dependently Typed Programming

A Specification for Dependent Types in Haskell
Stephanie Weirich, Antoine Voizard, Pedro Henrique Azevedo de Amorim, and Richard A. Eisenberg
(University of Pennsylvania, USA; École Polytechnique, France; University of Campinas, Brazil; Bryn Mawr College, USA)
We propose a core semantics for Dependent Haskell, an extension of Haskell with full-spectrum dependent types. Our semantics consists of two related languages. The first is a Curry-style dependently-typed language with nontermination, irrelevant arguments, and equality abstraction. The second, inspired by the Glasgow Haskell Compiler's core language FC, is its explicitly-typed analogue, suitable for implementation in GHC. All of our results---chiefly, type safety, along with theorems that relate these two languages---have been formalized using the Coq proof assistant. Because our work is backwards compatible with Haskell, our type safety proof holds in the presence of nonterminating computation. However, because of this nontermination, Haskell's term language does not correspond to a consistent logic, unlike the term languages of other full-spectrum dependently-typed languages such as Coq, Agda, or Idris.
Artifacts Available, Artifacts Reusable
Parametric Quantifiers for Dependent Type Theory
Andreas Nuyts, Andrea Vezzosi, and Dominique Devriese
(KU Leuven, Belgium; Chalmers University of Technology, Sweden)

Polymorphic type systems such as System F enjoy the parametricity property: polymorphic functions cannot inspect their type argument and will therefore apply the same algorithm to any type they are instantiated on. This idea is formalized mathematically in Reynolds’s theory of relational parametricity, which allows the metatheoretical derivation of parametricity theorems about all values of a given type. Although predicative System F embeds into dependent type systems such as Martin-Löf Type Theory (MLTT), parametricity does not carry over as easily. The identity extension lemma, which is crucial if we want to prove theorems involving equality, has only been shown to hold for small types, excluding the universe.

We attribute this to the fact that MLTT uses a single type former Π to generalize both the parametric quantifier ∀ and the type former → which is non-parametric in the sense that its elements may use their argument as a value. We equip MLTT with parametric quantifiers ∀ and ∃ alongside the existing Π and Σ, and provide relation type formers for proving parametricity theorems internally. We show internally the existence of initial algebras and final co-algebras of indexed functors both by Church encoding and, for a large class of functors, by using sized types.

We prove soundness of our type system by enhancing existing iterated reflexive graph (cubical set) models of dependently typed parametricity by distinguishing between edges that express relatedness of objects (bridges) and edges that express equality (paths). The parametric functions are those that map bridges to paths.

We implement an extension to the Agda proof assistant that type-checks proofs in our type system.

Artifacts Available, Artifacts Reusable
Normalization by Evaluation for Sized Dependent Types
Andreas Abel, Andrea Vezzosi, and Theo Winterhalter
(University of Gothenburg, Sweden; Chalmers University of Technology, Sweden; ENS Paris-Saclay, France)

Sized types have been developed to make termination checking more perspicuous, more powerful, and more modular by integrating termination into type checking. In dependently-typed proof assistants where proofs by induction are just recursive functional programs, the termination checker is an integral component of the trusted core, as the validity of proofs depends on termination. However, a rigorous integration of full-fledged sized types into dependent type theory is lacking so far. Such an integration is non-trivial, as explicit sizes in proof terms might get in the way of equality checking, making terms appear distinct that should have the same semantics.

In this article, we integrate dependent types and sized types with higher-rank size polymorphism, which is essential for generic programming and abstraction. We introduce a size quantifier ∀ which lets us ignore sizes in terms for equality checking, along with a second quantifier Π for abstracting over sizes that do affect the semantics of types and terms. Judgmental equality is decided by an adaptation of normalization-by-evaluation for our new type theory, which features type shape-directed reflection and reification. It follows that subtyping and type checking of normal forms are decidable as well, the latter by a bidirectional algorithm.

Artifacts Available, Artifacts Reusable
A Metaprogramming Framework for Formal Verification
Gabriel Ebner, Sebastian Ullrich, Jared Roesch, Jeremy Avigad, and Leonardo de Moura
(Vienna University of Technology, Austria; KIT, Germany; University of Washington, USA; Carnegie Mellon University, USA; Microsoft Research, USA)
We describe the metaprogramming framework currently used in Lean, an interactive theorem prover based on dependent type theory. This framework extends Lean's object language with an API to some of Lean's internal structures and procedures, and provides ways of reflecting object-level expressions into the metalanguage. We provide evidence to show that our implementation is performant, and that it provides a convenient and flexible way of writing not only small-scale interactive tactics, but also more substantial kinds of automation.
Artifacts Available, Artifacts Reusable

Contracts and Sessions

Chaperone Contracts for Higher-Order Sessions
Hernán Melgratti and Luca Padovani
(University of Buenos Aires, Argentina; University of Turin, Italy)
Contracts have proved to be an effective mechanism for helping developers identify those modules of a program that violate the contracts of the functions and objects they use. In recent years, sessions have become established as a key mechanism for realizing inter-module communications in concurrent programs. Just like values flow into or out of a function or object, messages are sent on, and received from, a session endpoint. Unlike conventional functions and objects, however, the kind, direction, and properties of messages exchanged in a session may vary over time, as the session progresses. This feature of sessions calls for contracts that evolve along with the session they describe. In this work, we extend to sessions the notion of chaperone contract (roughly, a contract that applies to a mutable object) and investigate the ramifications of contract monitoring in a higher-order language that features sessions. We give a characterization of a correct module, one that honors the contracts of the sessions it uses, and prove a blame theorem. Guided by the calculus, we describe a lightweight implementation of monitored sessions as an OCaml module with which programmers can benefit from static session type checking and dynamic contract monitoring using an off-the-shelf version of OCaml.
Artifacts Available, Artifacts Reusable
Whip: Higher-Order Contracts for Modern Services
Lucas Waye, Stephen Chong, and Christos Dimoulas
(Harvard University, USA)
Modern service-oriented applications forgo semantically rich protocols and middleware when composing services. Instead, they embrace the loosely-coupled development and deployment of services that communicate via simple network protocols. Even though these applications do expose interfaces that are higher-order in spirit, the simplicity of the network protocols forces them to rely on brittle low-level encodings. To bridge the apparent semantic gap, programmers introduce ad-hoc and error-prone defensive code. Inspired by Design by Contract, we choose a different route to bridge this gap. We introduce Whip, a contract system for modern services. Whip (i) provides programmers with a higher-order contract language tailored to the needs of modern services; and (ii) monitors services at run time to detect services that do not live up to their advertised interfaces. Contract monitoring is local to a service. Services are treated as black boxes, allowing heterogeneous implementation languages without modification to services' code. Thus, Whip does not disturb the loosely coupled nature of modern services.
Artifacts Available, Artifacts Reusable
Manifest Sharing with Session Types
Stephanie Balzer and Frank Pfenning
(Carnegie Mellon University, USA)

Session-typed languages building on the Curry-Howard isomorphism between linear logic and session-typed communication guarantee session fidelity and deadlock freedom. Unfortunately, these strong guarantees exclude many naturally occurring programming patterns pertaining to shared resources. In this paper, we introduce sharing into a session-typed language where types are stratified into linear and shared layers with modal operators connecting the layers. The resulting language retains session fidelity but not the absence of deadlocks, which can arise from contention for shared processes. We illustrate our language on various examples, such as the dining philosophers problem, and provide a translation of the untyped asynchronous π-calculus into our language.

Gradual Session Types
Atsushi Igarashi, Peter Thiemann, Vasco T. Vasconcelos, and Philip Wadler
(Kyoto University, Japan; University of Freiburg, Germany; University of Lisbon, Portugal; University of Edinburgh, UK)
Session types are a rich type discipline, based on linear types, that lifts the sort of safety claims that come with type systems to communications. However, web-based applications and microservices are often written in a mix of languages, with type disciplines spanning a spectrum between static and dynamic typing. Gradual session types address this mixed setting by providing a framework which grants seamless transition between statically typed handling of sessions and any required degree of dynamic typing. We propose GradualGV as an extension of the functional session type system GV with dynamic types and casts. We demonstrate type and communication safety as well as blame safety, thus extending previous results to functional languages with session-based communication. The interplay of linearity and dynamic types requires a novel approach to specifying the dynamics of the language.
Artifacts Available, Artifacts Reusable

Integrating Static and Dynamic Typing

Theorems for Free for Free: Parametricity, With and Without Types
Amal Ahmed, Dustin Jamner, Jeremy G. Siek, and Philip Wadler
(Northeastern University, USA; Indiana University, USA; University of Edinburgh, UK)
The polymorphic blame calculus integrates static typing, including universal types, with dynamic typing. The primary challenge with this integration is preserving parametricity: even dynamically-typed code should satisfy it once it has been cast to a universal type. Ahmed et al. (2011) employ runtime type generation in the polymorphic blame calculus to preserve parametricity, but a proof that it does so has been elusive. Matthews and Ahmed (2008) gave a proof of parametricity for a closely related system that combines ML and Scheme, but later found a flaw in their proof. In this paper we present an improved version of the polymorphic blame calculus and we prove that it satisfies relational parametricity. The proof relies on a step-indexed Kripke logical relation. The step-indexing is required to make the logical relation well-defined in the case for the dynamic type. The possible worlds include the mapping of generated type names to their types and the mapping of type names to relations. We prove the Fundamental Property of this logical relation and that it is sound with respect to contextual equivalence. To demonstrate the utility of parametricity in the polymorphic blame calculus, we derive two free theorems.
Artifacts Available, Artifacts Functional
On Polymorphic Gradual Typing
Yuu Igarashi, Taro Sekiyama, and Atsushi Igarashi
(Kyoto University, Japan; IBM Research, Japan)

We study an extension of gradual typing—a method to integrate dynamic typing and static typing smoothly in a single language—to parametric polymorphism and its theoretical properties, including conservativity of typing and semantics over both statically and dynamically typed languages, type safety, blame-subtyping theorem, and the gradual guarantee—the so-called refined criteria, advocated by Siek et al. We develop System FG, which is a gradually typed extension of System F with the dynamic type and a new type consistency relation, and translation to a new polymorphic blame calculus System FC, which is based on previous polymorphic blame calculi by Ahmed et al. The design of System FG and System FC, geared to the criteria, is influenced by the distinction between static and gradual type variables, first observed by Garcia and Cimini. This distinction is also useful to execute statically typed code without incurring additional overhead to manage type names as in the prior calculi. We prove that System FG satisfies most of the criteria: all but the hardest property of the gradual guarantee on semantics. We show that a key conjecture to prove the gradual guarantee leads to the Jack-of-All-Trades property, conjectured as an important property of the polymorphic blame calculus by Ahmed et al.

Artifacts Available, Artifacts Functional
Gradual Typing with Union and Intersection Types
Giuseppe Castagna and Victor Lanvin
(CNRS, France; University of Paris Diderot, France; ENS Cachan, France)
We propose a type system for functional languages with gradual types and set-theoretic type connectives and prove its soundness. In particular, we show how to lift the definition of the domain and result type of an application from non-gradual types to gradual ones, and likewise for the subtyping relation. We also show that deciding subtyping for gradual types can be reduced in linear time to deciding subtyping on non-gradual types, and that the same holds true for all subtyping-related decision problems that must be solved for type inference. More generally, this work not only enriches gradual type systems with unions and intersections and with the type precision that arises from their use, but also proposes and advocates a new style of programming with gradual types where union and intersection types are used by programmers to instruct the system to perform fewer dynamic checks.
Artifacts Available

Inference and Analysis

Constrained Type Families
J. Garrett Morris and Richard A. Eisenberg
(University of Edinburgh, UK; Bryn Mawr College, USA)
We present an approach to support partiality in type-level computation without compromising expressiveness or type safety. Existing frameworks for type-level computation either require totality or implicitly assume it. For example, type families in Haskell provide a powerful, modular means of defining type-level computation. However, their current design implicitly assumes that type families are total, introducing nonsensical types and significantly complicating the metatheory of type families and their extensions. We propose an alternative design, using qualified types to pair type-level computations with predicates that capture their domains. Our approach naturally captures the intuitive partiality of type families, simplifying their metatheory. As evidence, we present the first complete proof of consistency for a language with closed type families.
Artifacts Available, Artifacts Functional
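The partiality problem is easy to demonstrate in today's GHC: the closed type family below has no equation for the empty list, so an application like Head '[] never reduces yet is accepted as a type; under the paper's proposal its domain would instead be captured by a predicate. A minimal sketch:

    {-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}
    import Data.Kind (Type)

    -- intuitively partial: there is no equation for the empty list
    type family Head (xs :: [Type]) :: Type where
      Head (x ': xs) = x

    ok :: Head '[Int, Bool]   -- reduces to Int
    ok = 42

    -- stuck :: Head '[]      -- a "nonsensical" type that never reduces
    -- stuck = undefined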
Automating Sized-Type Inference for Complexity Analysis
Martin Avanzini and Ugo Dal Lago
(University of Innsbruck, Austria; University of Bologna, Italy; Inria, France)
This paper introduces a new methodology for the complexity analysis of higher-order functional programs, based on three ingredients: a powerful type system for size analysis together with a sound type inference procedure for it, a ticking monadic transformation, and constraint solving. Notably, the presented methodology can be fully automated, and it is able to analyse a series of examples which cannot be handled by most competitor methodologies. This is made possible by several key ingredients, in particular an abstract index language and index polymorphism at higher ranks. A prototype implementation is available.
Inferring Scope through Syntactic Sugar
Justin Pombrio, Shriram Krishnamurthi, and Mitchell Wand
(Brown University, USA; Northeastern University, USA)
Many languages use syntactic sugar to define parts of their surface language in terms of a smaller core. Thus some properties of the surface language, like its scoping rules, are not immediately evident. Nevertheless, IDEs, refactorers, and other tools that traffic in source code depend on these rules to present information to users and to soundly perform their operations. In this paper, we show how to lift scoping rules defined on a core language to rules on the surface, a process of scope inference. In the process we introduce a new representation of binding structure---scope as a preorder---and present a theoretical advance: proving that a desugaring system preserves α-equivalence even though scoping rules have been provided only for the core language. We have also implemented the system presented in this paper.
Artifacts Available, Artifacts Reusable
