ICFP 2016 – Author Index 
Abadi, Martín 
ICFP '16: "TensorFlow: Learning Functions ..."
TensorFlow: Learning Functions at Scale
Martín Abadi (Google, USA)

TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. Its computational model is based on dataflow graphs with mutable state. Graph nodes may be mapped to different machines in a cluster, and within each machine to CPUs, GPUs, and other devices. TensorFlow supports a variety of applications, but it particularly targets training and inference with deep neural networks. It serves as a platform for research and for deploying machine learning systems across many areas, such as speech recognition, computer vision, robotics, information retrieval, and natural language processing. In this talk, we describe TensorFlow and outline some of its applications. We also discuss the question of what TensorFlow and deep learning may have to do with functional programming. Although TensorFlow is not purely functional, many of its uses are concerned with optimizing functions (during training), then with applying those functions (during inference). These functions are defined as compositions of simple primitives (as is common in functional programming), with internal data representations that are learned rather than manually designed. TensorFlow is joint work with many other people in the Google Brain team and elsewhere. More information is available at tensorflow.org.

@InProceedings{ICFP16p1,
  author    = {Martín Abadi},
  title     = {TensorFlow: Learning Functions at Scale},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {1--1},
  doi       = {10.1145/2951913.2976746},
  year      = {2016},
}
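The abstract's contrast between optimizing a function (training) and applying it (inference) can be made concrete with a toy sketch. This is not TensorFlow's API — just a one-parameter model optimized by plain numeric gradient descent, with names invented for illustration.

```python
# Toy illustration (not TensorFlow's API): a "model" is an ordinary
# function of a parameter; training optimizes the parameter,
# inference simply applies the resulting function.
def model(w, x):
    return w * x                        # one-parameter linear model

def loss(w):
    return (model(w, 3.0) - 6.0) ** 2   # target: model(w, 3) == 6, so w == 2

def grad(f, w, h=1e-6):                 # numeric derivative
    return (f(w + h) - f(w - h)) / (2 * h)

def train(f, w, steps, lr=0.1):         # plain gradient descent
    for _ in range(steps):
        w -= lr * grad(f, w)
    return w

w_trained = train(loss, 0.0, 1000)      # training: optimize the function
prediction = model(w_trained, 5.0)      # inference: apply it (about 10.0)
```

The composed, differentiable structure of `loss` is what stands in for a dataflow graph here; real systems differentiate the graph symbolically rather than numerically.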

Acar, Umut A. 
ICFP '16: "Hierarchical Memory Management ..."
Hierarchical Memory Management for Parallel Programs
Ram Raghunathan, Stefan K. Muller, Umut A. Acar, and Guy Blelloch (Carnegie Mellon University, USA; Inria, France)

An important feature of functional programs is that they are parallel by default. Implementing an efficient parallel functional language, however, is a major challenge, in part because the high rate of allocation and freeing associated with functional programs requires an efficient and scalable memory manager. In this paper, we present a technique for parallel memory management for strict functional languages with nested parallelism. At the highest level of abstraction, the approach consists of a technique to organize memory as a hierarchy of heaps, and an algorithm for performing automatic memory reclamation by taking advantage of a disentanglement property of parallel functional programs. More specifically, the idea is to assign to each parallel task its own heap in memory and organize the heaps in a hierarchy/tree that mirrors the hierarchy of tasks. We present a nested-parallel calculus that specifies hierarchical heaps and prove in this calculus a disentanglement property, which prohibits a task from accessing objects allocated by another task that might execute in parallel. Leveraging the disentanglement property, we present a garbage collection technique that can operate on any subtree in the memory hierarchy concurrently as other tasks (and/or other collections) proceed in parallel. We prove the safety of this collector by formalizing it in the context of our parallel calculus. In addition, we describe how the proposed techniques can be implemented on modern shared-memory machines and present a prototype implementation as an extension to MLton, a high-performance compiler for the Standard ML language. Finally, we evaluate the performance of this implementation on a number of parallel benchmarks.

@InProceedings{ICFP16p392,
  author    = {Ram Raghunathan and Stefan K. Muller and Umut A. Acar and Guy Blelloch},
  title     = {Hierarchical Memory Management for Parallel Programs},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {392--406},
  doi       = {10.1145/2951913.2951935},
  year      = {2016},
}

ICFP '16: "Dag-Calculus: A Calculus for ..."
Dag-Calculus: A Calculus for Parallel Computation
Umut A. Acar, Arthur Charguéraud, Mike Rainey, and Filip Sieczkowski (Carnegie Mellon University, USA; Inria, France)

Increasing availability of multicore systems has led to greater focus on the design and implementation of languages for writing parallel programs. Such languages support various abstractions for parallelism, such as fork-join, async-finish, and futures. While they may seem similar, these abstractions lead to different semantics, language design and implementation decisions, and can significantly impact the performance of end-user applications. In this paper, we consider the question of whether it would be possible to unify various paradigms of parallel computing. To this end, we propose a calculus, called dag calculus, that can encode fork-join, async-finish, futures, and possibly others. We describe dag calculus and its semantics, and establish translations from the aforementioned paradigms into dag calculus. These translations establish that dag calculus is sufficiently powerful for encoding programs written in prevailing paradigms of parallelism. We present concurrent algorithms and data structures for realizing dag calculus on multicore hardware and prove that the proposed techniques are consistent with the semantics. Finally, we present an implementation of the calculus and evaluate it empirically by comparing its performance to highly optimized code from prior work. The results show that the calculus is expressive and that it competes well with, and sometimes outperforms, the state of the art.

@InProceedings{ICFP16p18,
  author    = {Umut A. Acar and Arthur Charguéraud and Mike Rainey and Filip Sieczkowski},
  title     = {Dag-Calculus: A Calculus for Parallel Computation},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {18--32},
  doi       = {10.1145/2951913.2951946},
  year      = {2016},
}
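The idea of reducing fork-join (and the other paradigms) to a dag of tasks can be sketched in miniature. This is not the paper's calculus — just a hypothetical sequential simulator that runs tasks in dependency order, with fork-join encoded as two parallel nodes feeding a join node.

```python
# Minimal sketch (not the paper's calculus): a computation as a dag of
# tasks with dependency edges, run here sequentially in dependency
# order. Fork-join becomes two independent nodes plus a join node.
def execute(tasks):
    results = {}
    for tid, deps, run in tasks:          # tasks listed in dependency order
        results[tid] = run(*[results[d] for d in deps])
    return results

dag = [
    ("fork_a", [], lambda: 6 * 7),                         # forked branch
    ("fork_b", [], lambda: 10),                            # forked branch
    ("join",   ["fork_a", "fork_b"], lambda a, b: a + b),  # join waits on both
]
answer = execute(dag)["join"]             # 52
```

A real runtime would schedule independent nodes onto cores concurrently; the point of the encoding is that the dag, not the surface abstraction, determines what may run in parallel.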

Adams, Michael D. 
ICFP '16: "Allocation Characterizes Polyvariance: ..."
Allocation Characterizes Polyvariance: A Unified Methodology for Polyvariant Control-Flow Analysis
Thomas Gilray, Michael D. Adams, and Matthew Might (University of Utah, USA)

The polyvariance of a static analysis is the degree to which it structurally differentiates approximations of program values. Polyvariant techniques come in a number of different flavors that represent alternative heuristics for managing the trade-off an analysis strikes between precision and complexity. For example, call sensitivity supposes that values will tend to correlate with recent call sites, object sensitivity supposes that values will correlate with the allocation points of related objects, the Cartesian product algorithm supposes correlations between the values of arguments to the same function, and so forth. In this paper, we describe a unified methodology for implementing and understanding polyvariance in a higher-order setting (i.e., for control-flow analyses). We do this by extending the method of abstracting abstract machines (AAM), a systematic approach to producing an abstract interpretation of abstract-machine semantics. AAM eliminates recursion within a language’s semantics by passing around an explicit store, and thus places importance on the strategy an analysis uses for allocating abstract addresses within the abstract heap or store. We build on AAM by showing that the design space of possible abstract allocators exactly and uniquely corresponds to the design space of polyvariant strategies. This allows us to both unify and generalize polyvariance as tunings of a single function. Changes to the behavior of this function easily recapitulate classic styles of analysis and produce novel variations, combinations of techniques, and fundamentally new techniques.

@InProceedings{ICFP16p407,
  author    = {Thomas Gilray and Michael D. Adams and Matthew Might},
  title     = {Allocation Characterizes Polyvariance: A Unified Methodology for Polyvariant Control-Flow Analysis},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {407--420},
  doi       = {10.1145/2951913.2951936},
  year      = {2016},
}
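The central claim — that the allocator alone fixes an analysis's polyvariance — can be shown with a toy abstract store. The allocator names below (`alloc0`, `alloc1`) are invented for illustration; the store maps abstract addresses to sets of values that may flow there.

```python
# Sketch: polyvariance as a choice of abstract-address allocator.
def bind(alloc, store, var, callsite, value):
    store.setdefault(alloc(var, callsite), set()).add(value)

def alloc0(var, callsite):        # monovariant (0-CFA-like): one address per var
    return var

def alloc1(var, callsite):        # call-sensitive (1-CFA-like): call site in address
    return (var, callsite)

bindings = [("x", "call1", 1), ("x", "call2", 2)]
store0, store1 = {}, {}
for b in bindings:
    bind(alloc0, store0, *b)
    bind(alloc1, store1, *b)
# store0["x"] == {1, 2}: the two bindings merge.
# store1 keeps ("x", "call1") and ("x", "call2") apart: higher precision.
```

The same binding events produce a coarser or finer store purely as a function of `alloc` — the paper's "single tunable function".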

Ahmed, Amal 
ICFP '16: "Fully Abstract Compilation ..."
Fully Abstract Compilation via Universal Embedding
Max S. New, William J. Bowman, and Amal Ahmed (Northeastern University, USA)

A fully abstract compiler guarantees that two source components are observationally equivalent in the source language if and only if their translations are observationally equivalent in the target. Full abstraction implies the translation is secure: target-language attackers can make no more observations of a compiled component than a source-language attacker interacting with the original source component. Proving full abstraction for realistic compilers is challenging because realistic target languages contain features (such as control effects) unavailable in the source, while proofs of full abstraction require showing that every target context to which a compiled component may be linked can be back-translated to a behaviorally equivalent source context. We prove the first full abstraction result for a translation whose target language contains exceptions, but the source does not. Our translation (specifically, closure conversion of simply typed λ-calculus with recursive types) uses types at the target level to ensure that a compiled component is never linked with attackers that have more distinguishing power than source-level attackers. We present a new back-translation technique based on a shallow embedding of the target language into the source language at a dynamic type. Then boundaries are inserted that mediate terms between the untyped embedding and the strongly-typed source. This technique allows back-translating non-terminating programs, target features that are untypeable in the source, and well-bracketed effects.

@InProceedings{ICFP16p103,
  author    = {Max S. New and William J. Bowman and Amal Ahmed},
  title     = {Fully Abstract Compilation via Universal Embedding},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {103--116},
  doi       = {10.1145/2951913.2951941},
  year      = {2016},
}
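The "embedding at a dynamic type with mediating boundaries" idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's construction: typed terms cross into a single dynamic representation, and boundary functions check every value passing back, rejecting ill-typed interaction.

```python
# Sketch of a universal embedding: one dynamic representation, with
# boundaries mediating between the typed world and the untyped one.
def project_int(v):
    if not isinstance(v, int):
        raise TypeError("boundary: expected int")
    return v

def embed_fun(f):                 # a typed int -> int crosses the boundary:
    return lambda v: f(project_int(v))   # its argument is checked on entry

def project_fun(v):
    if not callable(v):
        raise TypeError("boundary: expected function")
    return lambda n: project_int(v(n))   # its result is checked on exit

succ_u = embed_fun(lambda n: n + 1)   # embedded typed term
five = project_fun(succ_u)(4)         # back through the boundary: 5
```

Python is already untyped, so the sketch is degenerate; the point is that the boundaries, not the host language, enforce that an "attacker" in the dynamic world cannot observe more than a typed context could.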

Alpuim, João 
ICFP '16: "Disjoint Intersection Types ..."
Disjoint Intersection Types
Bruno C. d. S. Oliveira, Zhiyuan Shi, and João Alpuim (University of Hong Kong, China)

Dunfield showed that a simply typed core calculus with intersection types and a merge operator is able to capture various programming language features. While his calculus is type-safe, it is not coherent: different derivations for the same expression can elaborate to expressions that evaluate to different values. The lack of coherence is an important disadvantage for adoption of his core calculus in implementations of programming languages, as the semantics of the programming language becomes implementation-dependent. This paper presents λ_i: a coherent and type-safe calculus with a form of intersection types and a merge operator. Coherence is achieved by ensuring that intersection types are disjoint and programs are sufficiently annotated to avoid type ambiguity. We propose a definition of disjointness where two types A and B are disjoint only if a restricted set of types may be common supertypes of A and B. We investigate three different variants of λ_i, with three variants of disjointness. In the simplest variant, which does not allow ⊤ types, two types are disjoint if they do not share any common supertypes at all. The other two variants introduce ⊤ types and refine the notion of disjointness to allow two types to be disjoint when their only common supertypes are top-like. The difference between the two variants with ⊤ types is in the definition of top-like types, which has an impact on which types are allowed in intersections. We present a type system that prevents intersection types that are not disjoint, as well as algorithmic specifications to determine whether two types are disjoint, for all three variants.

@InProceedings{ICFP16p364,
  author    = {Bruno C. d. S. Oliveira and Zhiyuan Shi and João Alpuim},
  title     = {Disjoint Intersection Types},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {364--377},
  doi       = {10.1145/2951913.2951945},
  year      = {2016},
}
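A toy sketch of why disjointness gives coherence: a merged value carries both components, and the expected type selects one. The names below are invented for illustration, with Python's `int` and `str` standing in for two disjoint types.

```python
# Toy merge operator: a value of "Int & String" carries both components;
# elaboration selects the component the expected type demands.
def merge(a, b):
    return (a, b)

def select(value, ty):            # pick the component of type ty
    if isinstance(value, tuple):
        for v in value:
            try:
                return select(v, ty)
            except TypeError:
                pass
        raise TypeError("no component of that type")
    if isinstance(value, ty):
        return value
    raise TypeError("wrong type")

m = merge(1, "one")               # m : Int & String
```

Because `int` and `str` overlap in no value, `select` can never find two candidate components for the same expected type — the analogue of disjoint intersections making elaboration unambiguous. Merging `1` with `2` would break exactly this property.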

Amani, Sidney 
ICFP '16: "Refinement through Restraint: ..."
Refinement through Restraint: Bringing Down the Cost of Verification
Liam O'Connor, Zilin Chen, Christine Rizkallah, Sidney Amani, Japheth Lim, Toby Murray, Yutaka Nagashima, Thomas Sewell, and Gerwin Klein (UNSW, Australia; Data61, Australia; University of Pennsylvania, USA; University of Melbourne, Australia)

We present a framework aimed at significantly reducing the cost of verifying certain classes of systems software, such as file systems. Our framework allows for equational reasoning about systems code written in our new language, Cogent. Cogent is a restricted, polymorphic, higher-order, and purely functional language with linear types and without the need for a trusted runtime or garbage collector. Linear types allow us to assign two semantics to the language: one imperative, suitable for efficient C code generation; and one functional, suitable for equational reasoning and verification. As Cogent is a restricted language, it is designed to easily interoperate with existing C functions and to connect to existing C verification frameworks. Our framework is based on certifying compilation: For a well-typed Cogent program, our compiler produces C code, a high-level shallow embedding of its semantics in Isabelle/HOL, and a proof that the C code correctly refines this embedding. Thus one can reason about the full semantics of real-world systems code productively and equationally, while retaining the interoperability and leanness of C. The compiler certificate is a series of language-level proofs and per-program translation validation phases, combined into one coherent top-level theorem in Isabelle/HOL.

@InProceedings{ICFP16p89,
  author    = {Liam O'Connor and Zilin Chen and Christine Rizkallah and Sidney Amani and Japheth Lim and Toby Murray and Yutaka Nagashima and Thomas Sewell and Gerwin Klein},
  title     = {Refinement through Restraint: Bringing Down the Cost of Verification},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {89--102},
  doi       = {10.1145/2951913.2951940},
  year      = {2016},
}

Ariola, Zena M. 
ICFP '16: "Sequent Calculus as a Compiler ..."
Sequent Calculus as a Compiler Intermediate Language
Paul Downen, Luke Maurer, Zena M. Ariola, and Simon Peyton Jones (University of Oregon, USA; Microsoft Research, UK)

The λ-calculus is popular as an intermediate language for practical compilers. But in the world of logic it has a lesser-known twin, born at the same time, called the sequent calculus. Perhaps that would make for a good intermediate language, too? To explore this question we designed Sequent Core, a practically-oriented core calculus based on the sequent calculus, and used it to re-implement a substantial chunk of the Glasgow Haskell Compiler.

@InProceedings{ICFP16p74,
  author    = {Paul Downen and Luke Maurer and Zena M. Ariola and Simon Peyton Jones},
  title     = {Sequent Calculus as a Compiler Intermediate Language},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {74--88},
  doi       = {10.1145/2951913.2951931},
  year      = {2016},
}

Arntzenius, Michael 
ICFP '16: "Datafun: A Functional Datalog ..."
Datafun: A Functional Datalog
Michael Arntzenius and Neelakantan R. Krishnaswami (University of Birmingham, UK)

Datalog may be considered either an unusually powerful query language or a carefully limited logic programming language. Datalog is declarative, expressive, and optimizable, and has been applied successfully in a wide variety of problem domains. However, most use cases require extending Datalog in an application-specific manner. In this paper we define Datafun, an analogue of Datalog supporting higher-order functional programming. The key idea is to track monotonicity with types.

@InProceedings{ICFP16p214,
  author    = {Michael Arntzenius and Neelakantan R. Krishnaswami},
  title     = {Datafun: A Functional Datalog},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {214--227},
  doi       = {10.1145/2951913.2951948},
  year      = {2016},
}
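The Datalog core that Datafun generalizes is the fixed point of a monotone function on finite relations — monotonicity is what guarantees the iteration converges. A minimal sketch, with transitive closure as the classic example:

```python
# Transitive closure as the least fixed point of a monotone step
# function on finite relations (sets of pairs).
def step(edges, r):               # monotone in r: only ever adds facts
    return r | edges | {(a, d) for (a, b) in r for (c, d) in edges if b == c}

def fix(f, x):                    # iterate until the relation stops growing
    while True:
        x2 = f(x)
        if x2 == x:
            return x
        x = x2

edges = {(1, 2), (2, 3), (3, 4)}
tc = fix(lambda r: step(edges, r), edges)   # includes derived facts like (1, 4)
```

Datafun's contribution is to let the type system certify that functions such as `step` are monotone, so fixed points like this remain well-defined even with higher-order code in the language.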

Birkedal, Lars 
ICFP '16: "Higher-Order Ghost State ..."
Higher-Order Ghost State
Ralf Jung, Robbert Krebbers, Lars Birkedal, and Derek Dreyer (MPI-SWS, Germany; Aarhus University, Denmark)

The development of concurrent separation logic (CSL) has sparked a long line of work on modular verification of sophisticated concurrent programs. Two of the most important features supported by several existing extensions to CSL are higher-order quantification and custom ghost state. However, none of the logics that support both of these features reap the full potential of their combination. In particular, none of them provide general support for a feature we dub "higher-order ghost state": the ability to store arbitrary higher-order separation-logic predicates in ghost variables. In this paper, we propose higher-order ghost state as an interesting and useful extension to CSL, which we formalize in the framework of Jung et al.'s recently developed Iris logic. To justify its soundness, we develop a novel algebraic structure called CMRAs ("cameras"), which can be thought of as "step-indexed partial commutative monoids". Finally, we show that Iris proofs utilizing higher-order ghost state can be effectively formalized in Coq, and discuss the challenges we faced in formalizing them.

@InProceedings{ICFP16p256,
  author    = {Ralf Jung and Robbert Krebbers and Lars Birkedal and Derek Dreyer},
  title     = {Higher-Order Ghost State},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {256--269},
  doi       = {10.1145/2951913.2951943},
  year      = {2016},
}
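A tiny instance of the "partial commutative monoid" half of the CMRA idea (leaving out the step-indexing): exclusive ownership composes only with the unit, and combining two exclusive pieces yields an invalid element. Names here are invented for illustration.

```python
# Sketch of a partial commutative monoid of ghost-state ownership.
UNIT, INVALID = "unit", "invalid"

def compose(a, b):                # commutative; partial via INVALID
    if a == UNIT:
        return b
    if b == UNIT:
        return a
    return INVALID                # two owners: composition undefined

def valid(a):
    return a != INVALID
```

In a separation logic, `valid` failing on `compose(x, y)` is what lets a proof conclude that two threads cannot both hold exclusive ownership of the same ghost resource.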

Blazy, Sandrine 
ICFP '16: "An Abstract Memory Functor ..."
An Abstract Memory Functor for Verified C Static Analyzers
Sandrine Blazy, Vincent Laporte, and David Pichardie (University of Rennes 1, France; IRISA, France; IMDEA Software Institute, Spain; ENS Rennes, France)

Abstract interpretation provides advanced techniques to infer numerical invariants on programs. There is an abundant literature about numerical abstract domains that operate on scalar variables. This work deals with lifting these techniques to a realistic C memory model. We present an abstract memory functor that takes as argument any standard numerical abstract domain, and builds a memory abstract domain that finely tracks properties about memory contents, taking into account union types, pointer arithmetic and type casts. This functor is implemented and verified inside the Coq proof assistant with respect to the CompCert compiler memory model. Using the Coq extraction mechanism, it is fully executable and used by the Verasco C static analyzer.

@InProceedings{ICFP16p325,
  author    = {Sandrine Blazy and Vincent Laporte and David Pichardie},
  title     = {An Abstract Memory Functor for Verified C Static Analyzers},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {325--337},
  doi       = {10.1145/2951913.2951937},
  year      = {2016},
}
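The "functor" in the title is an ML-style module functor: a memory domain parameterized by any numerical abstract domain. A minimal sketch (not Verasco's actual interfaces), with a Python class factory standing in for a module functor and a trivial constant domain as the argument:

```python
# Sketch of an abstract memory functor: given any numeric abstract
# domain, build a memory domain mapping abstract cells to abstract values.
def make_mem(num_dom):
    class Mem:
        def __init__(self):
            self.cells = {}                     # abstract cell -> abstract value
        def store(self, cell, n):
            self.cells[cell] = num_dom.const(n)
        def load(self, cell):
            return self.cells.get(cell, num_dom.TOP)   # unknown cell: no info
    return Mem

class ConstDom:                                 # trivial "constants" domain
    TOP = "top"
    @staticmethod
    def const(n):
        return n
    @staticmethod
    def join(a, b):
        return a if a == b else ConstDom.TOP

Mem = make_mem(ConstDom)
m = Mem()
m.store("x", 42)
```

The real functor additionally handles union types, pointer arithmetic, and casts, and is proved sound in Coq against CompCert's memory model; only the parameterization pattern is illustrated here.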

Blelloch, Guy 
ICFP '16: "Hierarchical Memory Management ..."
Hierarchical Memory Management for Parallel Programs
Ram Raghunathan, Stefan K. Muller, Umut A. Acar, and Guy Blelloch (Carnegie Mellon University, USA; Inria, France)

An important feature of functional programs is that they are parallel by default. Implementing an efficient parallel functional language, however, is a major challenge, in part because the high rate of allocation and freeing associated with functional programs requires an efficient and scalable memory manager. In this paper, we present a technique for parallel memory management for strict functional languages with nested parallelism. At the highest level of abstraction, the approach consists of a technique to organize memory as a hierarchy of heaps, and an algorithm for performing automatic memory reclamation by taking advantage of a disentanglement property of parallel functional programs. More specifically, the idea is to assign to each parallel task its own heap in memory and organize the heaps in a hierarchy/tree that mirrors the hierarchy of tasks. We present a nested-parallel calculus that specifies hierarchical heaps and prove in this calculus a disentanglement property, which prohibits a task from accessing objects allocated by another task that might execute in parallel. Leveraging the disentanglement property, we present a garbage collection technique that can operate on any subtree in the memory hierarchy concurrently as other tasks (and/or other collections) proceed in parallel. We prove the safety of this collector by formalizing it in the context of our parallel calculus. In addition, we describe how the proposed techniques can be implemented on modern shared-memory machines and present a prototype implementation as an extension to MLton, a high-performance compiler for the Standard ML language. Finally, we evaluate the performance of this implementation on a number of parallel benchmarks.

@InProceedings{ICFP16p392,
  author    = {Ram Raghunathan and Stefan K. Muller and Umut A. Acar and Guy Blelloch},
  title     = {Hierarchical Memory Management for Parallel Programs},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {392--406},
  doi       = {10.1145/2951913.2951935},
  year      = {2016},
}

Borgström, Johannes 
ICFP '16: "A Lambda-Calculus Foundation ..."
A Lambda-Calculus Foundation for Universal Probabilistic Programming
Johannes Borgström, Ugo Dal Lago, Andrew D. Gordon, and Marcin Szymczak (Uppsala University, Sweden; University of Bologna, Italy; Inria, France; Microsoft Research, UK; University of Edinburgh, UK)

We develop the operational semantics of an untyped probabilistic λ-calculus with continuous distributions, and both hard and soft constraints, as a foundation for universal probabilistic programming languages such as Church, Anglican, and Venture. Our first contribution is to adapt the classic operational semantics of λ-calculus to a continuous setting via creating a measure space on terms and defining step-indexed approximations. We prove equivalence of big-step and small-step formulations of this distribution-based semantics. To move closer to inference techniques, we also define the sampling-based semantics of a term as a function from a trace of random samples to a value. We show that the distribution induced by integration over the space of traces equals the distribution-based semantics. Our second contribution is to formalize the implementation technique of trace Markov chain Monte Carlo (MCMC) for our calculus and to show its correctness. A key step is defining sufficient conditions for the distribution induced by trace MCMC to converge to the distribution-based semantics. To the best of our knowledge, this is the first rigorous correctness proof for trace MCMC for a higher-order functional language, or for a language with soft constraints.

@InProceedings{ICFP16p33,
  author    = {Johannes Borgström and Ugo Dal Lago and Andrew D. Gordon and Marcin Szymczak},
  title     = {A Lambda-Calculus Foundation for Universal Probabilistic Programming},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {33--46},
  doi       = {10.1145/2951913.2951942},
  year      = {2016},
}
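The sampling-based semantics — a program as a function from a trace of random draws to a value — has a direct operational reading. A hypothetical miniature: the program consumes one draw per `sample` and accumulates a weight for a soft constraint.

```python
# Sketch of sampling-based semantics: run a probabilistic "term"
# against a fixed trace of random draws, returning a value and the
# weight contributed by soft constraints (observations).
import math

def program(trace):
    trace = list(trace)
    u = trace.pop(0)                              # sample: consume one draw
    weight = math.exp(-((u - 0.7) ** 2) / 0.01)   # soft constraint: u near 0.7
    return u, weight

v, w_near = program([0.7])    # draw matches the observation: weight 1.0
_, w_far = program([0.1])     # far from the observation: weight essentially 0
```

Trace MCMC, which the paper proves correct, explores exactly this space of traces, proposing perturbed traces and accepting them with probability guided by the resulting weights.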

Bowman, William J. 
ICFP '16: "Fully Abstract Compilation ..."
Fully Abstract Compilation via Universal Embedding
Max S. New, William J. Bowman, and Amal Ahmed (Northeastern University, USA)

A fully abstract compiler guarantees that two source components are observationally equivalent in the source language if and only if their translations are observationally equivalent in the target. Full abstraction implies the translation is secure: target-language attackers can make no more observations of a compiled component than a source-language attacker interacting with the original source component. Proving full abstraction for realistic compilers is challenging because realistic target languages contain features (such as control effects) unavailable in the source, while proofs of full abstraction require showing that every target context to which a compiled component may be linked can be back-translated to a behaviorally equivalent source context. We prove the first full abstraction result for a translation whose target language contains exceptions, but the source does not. Our translation (specifically, closure conversion of simply typed λ-calculus with recursive types) uses types at the target level to ensure that a compiled component is never linked with attackers that have more distinguishing power than source-level attackers. We present a new back-translation technique based on a shallow embedding of the target language into the source language at a dynamic type. Then boundaries are inserted that mediate terms between the untyped embedding and the strongly-typed source. This technique allows back-translating non-terminating programs, target features that are untypeable in the source, and well-bracketed effects.

@InProceedings{ICFP16p103,
  author    = {Max S. New and William J. Bowman and Amal Ahmed},
  title     = {Fully Abstract Compilation via Universal Embedding},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {103--116},
  doi       = {10.1145/2951913.2951941},
  year      = {2016},
}

Brady, Edwin 
ICFP '16: "Elaborator Reflection: Extending ..."
Elaborator Reflection: Extending Idris in Idris
David Christiansen and Edwin Brady (Indiana University, USA; University of St. Andrews, UK)

Many programming languages and proof assistants are defined by elaboration from a high-level language with a great deal of implicit information to a highly explicit core language. In many advanced languages, these elaboration facilities contain powerful tools for program construction, but these tools are rarely designed to be repurposed by users. We describe elaborator reflection, a paradigm for metaprogramming in which the elaboration machinery is made directly available to metaprograms, as well as a concrete realization of elaborator reflection in Idris, a functional language with full dependent types. We demonstrate the applicability of Idris’s reflected elaboration framework to a number of realistic problems, we discuss the motivation for the specific features of its design, and we explore the broader meaning of elaborator reflection as it can relate to other languages.

@InProceedings{ICFP16p284,
  author    = {David Christiansen and Edwin Brady},
  title     = {Elaborator Reflection: Extending Idris in Idris},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {284--297},
  doi       = {10.1145/2951913.2951932},
  year      = {2016},
}

Breuvart, Flavien 
ICFP '16: "Combining Effects and Coeffects ..."
Combining Effects and Coeffects via Grading
Marco Gaboardi, Shinya Katsumata, Dominic Orchard, Flavien Breuvart, and Tarmo Uustalu (SUNY Buffalo, USA; Kyoto University, Japan; University of Cambridge, UK; University of Kent, UK; Inria, France; Tallinn University of Technology, Estonia)

Effects and coeffects are two general, complementary aspects of program behaviour. They roughly correspond to computations which change the execution context (effects) versus computations which make demands on the context (coeffects). Effectful features include partiality, non-determinism, input-output, state, and exceptions. Coeffectful features include resource demands, variable access, notions of linearity, and data input requirements. The effectful or coeffectful behaviour of a program can be captured and described via type-based analyses, with fine-grained information provided by monoidal effect annotations and semiring coeffects. Various recent work has proposed models for such typed calculi in terms of graded (strong) monads for effects and graded (monoidal) comonads for coeffects. Effects and coeffects have been studied separately so far, but in practice many computations are both effectful and coeffectful, e.g., possibly throwing exceptions but with resource requirements. To remedy this, we introduce a new general calculus with a combined effect-coeffect system. This can describe both the changes and requirements that a program has on its context, as well as interactions between these effectful and coeffectful features of computation. The effect-coeffect system has a denotational model in terms of effect-graded monads and coeffect-graded comonads where interaction is expressed via the novel concept of graded distributive laws. This graded semantics unifies the syntactic type theory with the denotational model. We show that our calculus can be instantiated to describe in a natural way various different kinds of interaction between a program and its evaluation context.

@InProceedings{ICFP16p476,
  author    = {Marco Gaboardi and Shinya Katsumata and Dominic Orchard and Flavien Breuvart and Tarmo Uustalu},
  title     = {Combining Effects and Coeffects via Grading},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {476--489},
  doi       = {10.1145/2951913.2951939},
  year      = {2016},
}
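A value-level sketch of the "graded monad" half of the story: each computation carries a grade drawn from a monoid (here, a step counter with addition), and bind combines grades with the monoid operation. Names are invented for illustration; a real graded monad tracks grades in the type, not the value.

```python
# Effect grading in miniature: computations paired with a grade from
# the monoid (int, +, 0); bind composes grades monoidally.
def unit(x):
    return (x, 0)                 # identity grade

def bind(m, f):
    x, g = m
    y, g2 = f(x)
    return (y, g + g2)            # grades compose in the monoid

def tick(x):
    return (x, 1)                 # an effect graded with cost 1

result = bind(tick(20), lambda a: bind(tick(a + 1), unit))
# result == (21, 2): value 21 carrying total grade 2
```

Coeffect grading is dual (demands a program makes on its context, tracked by a semiring), and the paper's graded distributive laws govern how the two gradings interact when a computation is both effectful and coeffectful.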

Castagna, Giuseppe 
ICFP '16: "Set-Theoretic Types for Polymorphic ..."
Set-Theoretic Types for Polymorphic Variants
Giuseppe Castagna, Tommaso Petrucciani, and Kim Nguyễn (University of Paris Diderot, France; University of Genoa, Italy; University of Paris-Sud, France)

Polymorphic variants are a useful feature of the OCaml language whose current definition and implementation rely on kinding constraints to simulate a subtyping relation via unification. This yields an awkward formalization and results in a type system whose behaviour is in some cases unintuitive and/or unduly restrictive. In this work, we present an alternative formalization of polymorphic variants, based on set-theoretic types and subtyping, that yields a cleaner and more streamlined system. Our formalization is more expressive than the current one (it types more programs while preserving type safety), it can internalize some meta-theoretic properties, and it removes some pathological cases of the current implementation resulting in a more intuitive and, thus, predictable type system. More generally, this work shows how to add full-fledged union types to functional languages of the ML family that usually rely on the Hindley-Milner type system. As an aside, our system also improves the theory of semantic subtyping, notably by proving completeness for the type reconstruction algorithm.

@InProceedings{ICFP16p378,
  author    = {Giuseppe Castagna and Tommaso Petrucciani and Kim Nguyễn},
  title     = {Set-Theoretic Types for Polymorphic Variants},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {378--391},
  doi       = {10.1145/2951913.2951928},
  year      = {2016},
}
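The set-theoretic reading can be shown in miniature: model a variant type as the set of tags it may carry, so subtyping is literally set inclusion. This is an illustration of the idea only, not the paper's type system, and the names are invented.

```python
# Set-theoretic sketch: a variant type as a set of tags; S <: T iff
# S is a subset of T, so values of a smaller variant type flow safely
# wherever a larger one is handled.
def subtype(s, t):
    return s <= t                 # subset inclusion as subtyping

AB = {"A", "B"}                   # type [ `A | `B ]
ABC = {"A", "B", "C"}             # type [ `A | `B | `C ]

def handle(tag):                  # a handler covering the larger type ABC
    return {"A": 0, "B": 1, "C": 2}[tag]

ok = subtype(AB, ABC)             # AB <: ABC, so the flow below is safe
results = [handle(t) for t in sorted(AB)]   # [0, 1]
```

OCaml's current system approximates this inclusion through kinding constraints and unification; the paper replaces that simulation with genuine semantic subtyping over union types.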

Castro, David 
ICFP '16: "Farms, Pipes, Streams and ..."
Farms, Pipes, Streams and Reforestation: Reasoning about Structured Parallel Processes using Types and Hylomorphisms
David Castro, Kevin Hammond, and Susmit Sarkar (University of St. Andrews, UK)

The increasing importance of parallelism has motivated the creation of better abstractions for writing parallel software, including structured parallelism using nested algorithmic skeletons. Such approaches provide high-level abstractions that avoid common problems, such as race conditions, and often allow strong cost models to be defined. However, choosing a combination of algorithmic skeletons that yields good parallel speedups for a program on some specific parallel architecture remains a difficult task. In order to achieve this, it is necessary to simultaneously reason both about the costs of different parallel structures and about the semantic equivalences between them. This paper presents a new type-based mechanism that enables strong static reasoning about these properties. We exploit well-known properties of a very general recursion pattern, hylomorphisms, and give a denotational semantics for structured parallel processes in terms of these hylomorphisms. Using our approach, it is possible to determine formally whether it is possible to introduce a desired parallel structure into a program without altering its functional behaviour, and also to choose a version of that parallel structure that minimises some given cost model.

@InProceedings{ICFP16p4,
  author    = {David Castro and Kevin Hammond and Susmit Sarkar},
  title     = {Farms, Pipes, Streams and Reforestation: Reasoning about Structured Parallel Processes using Types and Hylomorphisms},
  booktitle = {Proc.\ ICFP},
  publisher = {ACM},
  pages     = {4--17},
  doi       = {10.1145/2951913.2951920},
  year      = {2016},
}
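A hylomorphism is an unfold (a coalgebra producing structure) fused with a fold (an algebra consuming it), never materializing the intermediate structure. A minimal sketch over the list functor `1 + A × X`, with `None` standing for the empty case:

```python
# hylo f g = fold f after unfold g, fused into one recursion.
def hylo(f, g, x):
    step = g(x)                   # coalgebra: one layer of structure
    if step is None:
        return f(None)
    a, x2 = step
    return f((a, hylo(f, g, x2))) # algebra consumes the layer immediately

def count_down(n):                # coalgebra: unfold n, n-1, ..., 1
    return None if n == 0 else (n, n - 1)

def product(step):                # algebra: multiply the results
    return 1 if step is None else step[0] * step[1]

def fact(n):                      # factorial as a hylomorphism
    return hylo(product, count_down, n)
```

The paper's insight is that skeletons like farms and pipelines are semantically hylomorphisms too, so equational laws about `hylo` justify swapping one parallel structure for another without changing the program's result.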

Cave, Andrew 
ICFP '16: "Indexed Codata Types ..."
Indexed Codata Types
David Thibodeau, Andrew Cave, and Brigitte Pientka (McGill University, Canada) Indexed data types allow us to specify and verify many interesting invariants about finite data in a general-purpose programming language. In this paper we investigate the dual idea: indexed codata types, which allow us to describe data dependencies about infinite data structures. Unlike finite data, which is defined by constructors, we define infinite data by observations. Dual to pattern matching on indexed data, which may refine the type indices, we define copattern matching on indexed codata, where type indices guard the observations we can make. Our key technical contributions are threefold: first, we extend Levy's call-by-push-value language with support for indexed (co)data and deep (co)pattern matching; second, we provide a clean foundation for dependent (co)pattern matching using equality constraints; third, we describe a small-step semantics using a continuation-based abstract machine, define coverage for indexed (co)patterns, and prove type safety. This is an important step towards building a foundation where (co)data type definitions and dependent types can coexist. @InProceedings{ICFP16p351, author = {David Thibodeau and Andrew Cave and Brigitte Pientka}, title = {Indexed Codata Types}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {351--363}, doi = {10.1145/2951913.2951929}, year = {2016}, } Publisher's Version Article Search 

Charguéraud, Arthur 
ICFP '16: "Dag-Calculus: A Calculus for ..."
Dag-Calculus: A Calculus for Parallel Computation
Umut A. Acar, Arthur Charguéraud, Mike Rainey, and Filip Sieczkowski (Carnegie Mellon University, USA; Inria, France) Increasing availability of multicore systems has led to greater focus on the design and implementation of languages for writing parallel programs. Such languages support various abstractions for parallelism, such as fork-join, async-finish, and futures. While they may seem similar, these abstractions lead to different semantics, language design and implementation decisions, and can significantly impact the performance of end-user applications. In this paper, we consider the question of whether it would be possible to unify various paradigms of parallel computing. To this end, we propose a calculus, called dag calculus, that can encode fork-join, async-finish, and futures, and possibly others. We describe dag calculus and its semantics, and establish translations from the aforementioned paradigms into dag calculus. These translations show that dag calculus is sufficiently powerful for encoding programs written in prevailing paradigms of parallelism. We present concurrent algorithms and data structures for realizing dag calculus on multicore hardware and prove that the proposed techniques are consistent with the semantics. Finally, we present an implementation of the calculus and evaluate it empirically by comparing its performance to highly optimized code from prior work. The results show that the calculus is expressive and that it competes well with, and sometimes outperforms, the state of the art. @InProceedings{ICFP16p18, author = {Umut A. Acar and Arthur Charguéraud and Mike Rainey and Filip Sieczkowski}, title = {Dag-Calculus: A Calculus for Parallel Computation}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {18--32}, doi = {10.1145/2951913.2951946}, year = {2016}, } Publisher's Version Article Search 

Chen, Zilin 
ICFP '16: "Refinement through Restraint: ..."
Refinement through Restraint: Bringing Down the Cost of Verification
Liam O'Connor, Zilin Chen, Christine Rizkallah, Sidney Amani, Japheth Lim, Toby Murray, Yutaka Nagashima, Thomas Sewell, and Gerwin Klein (UNSW, Australia; Data61, Australia; University of Pennsylvania, USA; University of Melbourne, Australia) We present a framework aimed at significantly reducing the cost of verifying certain classes of systems software, such as file systems. Our framework allows for equational reasoning about systems code written in our new language, Cogent. Cogent is a restricted, polymorphic, higher-order, and purely functional language with linear types and without the need for a trusted runtime or garbage collector. Linear types allow us to assign two semantics to the language: one imperative, suitable for efficient C code generation; and one functional, suitable for equational reasoning and verification. As Cogent is a restricted language, it is designed to easily interoperate with existing C functions and to connect to existing C verification frameworks. Our framework is based on certifying compilation: For a well-typed Cogent program, our compiler produces C code, a high-level shallow embedding of its semantics in Isabelle/HOL, and a proof that the C code correctly refines this embedding. Thus one can reason about the full semantics of real-world systems code productively and equationally, while retaining the interoperability and leanness of C. The compiler certificate is a series of language-level proofs and per-program translation validation phases, combined into one coherent top-level theorem in Isabelle/HOL. @InProceedings{ICFP16p89, author = {Liam O'Connor and Zilin Chen and Christine Rizkallah and Sidney Amani and Japheth Lim and Toby Murray and Yutaka Nagashima and Thomas Sewell and Gerwin Klein}, title = {Refinement through Restraint: Bringing Down the Cost of Verification}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {89--102}, doi = {10.1145/2951913.2951940}, year = {2016}, } Publisher's Version Article Search Info 

Chiang, Yu-Hsi 
ICFP '16: "Queueing and Glueing for Optimal ..."
Queueing and Glueing for Optimal Partitioning (Functional Pearl)
Shin-Cheng Mu, Yu-Hsi Chiang, and Yu-Han Lyu (Academia Sinica, Taiwan; National Taiwan University, Taiwan; Dartmouth College, USA) The queueing-glueing algorithm is the nickname we give to an algorithmic pattern that provides amortised linear-time solutions to a number of optimal list partition problems that have a peculiar property: at various moments we know that two of three candidate solutions could be optimal. The algorithm works by keeping a queue of lists, glueing them from one end while chopping from the other end, hence the name. We give a formal derivation of the algorithm, and demonstrate it with several nontrivial examples. @InProceedings{ICFP16p158, author = {Shin-Cheng Mu and Yu-Hsi Chiang and Yu-Han Lyu}, title = {Queueing and Glueing for Optimal Partitioning (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {158--167}, doi = {10.1145/2951913.2951923}, year = {2016}, } Publisher's Version Article Search Info 

Christiansen, David 
ICFP '16: "Elaborator Reflection: Extending ..."
Elaborator Reflection: Extending Idris in Idris
David Christiansen and Edwin Brady (Indiana University, USA; University of St. Andrews, UK) Many programming languages and proof assistants are defined by elaboration from a high-level language with a great deal of implicit information to a highly explicit core language. In many advanced languages, these elaboration facilities contain powerful tools for program construction, but these tools are rarely designed to be repurposed by users. We describe elaborator reflection, a paradigm for metaprogramming in which the elaboration machinery is made directly available to metaprograms, as well as a concrete realization of elaborator reflection in Idris, a functional language with full dependent types. We demonstrate the applicability of Idris’s reflected elaboration framework to a number of realistic problems, we discuss the motivation for the specific features of its design, and we explore the broader meaning of elaborator reflection as it can relate to other languages. @InProceedings{ICFP16p284, author = {David Christiansen and Edwin Brady}, title = {Elaborator Reflection: Extending Idris in Idris}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {284--297}, doi = {10.1145/2951913.2951932}, year = {2016}, } Publisher's Version Article Search 

Christiansen, Jan 
ICFP '16: "All Sorts of Permutations ..."
All Sorts of Permutations (Functional Pearl)
Jan Christiansen, Nikita Danilenko, and Sandra Dylus (Flensburg University of Applied Sciences, Germany; University of Kiel, Germany) The combination of nondeterminism and sorting is mostly associated with permutation sort, a sorting algorithm that is not very useful for sorting and has an awful running time. In this paper we look at the combination of nondeterminism and sorting in a different light: given a sorting function, we apply it to a nondeterministic predicate to gain a function that enumerates permutations of the input list. We get to the bottom of necessary properties of the sorting algorithms and predicates in play as well as discuss variations of the modelled nondeterminism. On top of that, we formulate and prove a theorem stating that no matter which sorting function we use, the corresponding permutation function enumerates all permutations of the input list. We use free theorems, which are derived from the type of a function alone, to prove the statement. @InProceedings{ICFP16p168, author = {Jan Christiansen and Nikita Danilenko and Sandra Dylus}, title = {All Sorts of Permutations (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {168--179}, doi = {10.1145/2951913.2951949}, year = {2016}, } Publisher's Version Article Search 
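The pearl's central trick, sorting with a nondeterministic comparison so that the "sorted" results range over all permutations, can be imitated with lists playing the role of the nondeterminism monad. This is an assumed Python toy, not the paper's development; all names are illustrative.

```python
# Nondeterministic insertion sort: the comparison "predicate" answers
# both "x goes here" and "x goes later", modelled by returning a list
# of all outcomes. Sorting then enumerates permutations.

def insert(x, ys):
    """All ways to insert x into ys under a nondeterministic comparison."""
    if not ys:
        return [[x]]
    results = [[x] + ys]                                  # predicate said True
    results += [[ys[0]] + r for r in insert(x, ys[1:])]   # predicate said False
    return results

def sort(xs):
    """Nondeterministic insertion sort = permutation enumerator."""
    if not xs:
        return [[]]
    return [r for rest in sort(xs[1:]) for r in insert(xs[0], rest)]

perms = sort([1, 2, 3])
assert len(perms) == 6
assert sorted(map(tuple, perms)) == sorted(
    {(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)})
```

The paper's theorem is the striking part: swapping insertion sort for any other (lawful) sorting function still yields a complete permutation enumerator, a fact provable from types alone via free theorems.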

Çiçek, Ezgi 
ICFP '16: "A Type Theory for Incremental ..."
A Type Theory for Incremental Computational Complexity with Control Flow Changes
Ezgi Çiçek, Zoe Paraskevopoulou, and Deepak Garg (MPI-SWS, Germany; Princeton University, USA) Incremental computation aims to speed up reruns of a program after its inputs have been modified slightly. It works by recording a trace of the program's first run and propagating changes through the trace in incremental runs, trying to reuse as much of the original trace as possible. The recent work CostIt is a type and effect system to establish the time complexity of incremental runs of a program, as a function of input changes. However, CostIt is limited in two ways. First, it prohibits input changes that influence control flow. This makes it impossible to type programs that, for instance, branch on inputs that may change. Second, the soundness of CostIt is proved relative to an abstract cost semantics, but it is unclear how the semantics can be realized. In this paper, we address both these limitations. We present DuCostIt, a redesign of CostIt, that combines reasoning about costs of change propagation and costs of from-scratch evaluation. The latter lifts the restriction on control flow changes. To obtain the type system, we refine Flow Caml, a type system for information flow analysis, with cost effects. Additionally, we inherit from CostIt index refinements to track data structure sizes and a comonadic type. Using a combination of binary and unary step-indexed logical relations, we prove DuCostIt's cost analysis sound relative to not only an abstract cost semantics, but also a concrete semantics, which is obtained by translation to an ML-like language. @InProceedings{ICFP16p132, author = {Ezgi Çiçek and Zoe Paraskevopoulou and Deepak Garg}, title = {A Type Theory for Incremental Computational Complexity with Control Flow Changes}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {132--145}, doi = {10.1145/2951913.2951950}, year = {2016}, } Publisher's Version Article Search 

Cimini, Matteo 
ICFP '16: "Ghostbuster: A Tool for Simplifying ..."
Ghostbuster: A Tool for Simplifying and Converting GADTs
Trevor L. McDonell, Timothy A. K. Zakian, Matteo Cimini, and Ryan R. Newton (Indiana University, USA; Oxford University, UK) Generalized Algebraic Datatypes, or simply GADTs, can encode nontrivial properties in the types of the constructors. Once such properties are encoded in a datatype, however, all code manipulating that datatype must provide proof that it maintains these properties in order to typecheck. In this paper, we take a step towards gradualizing these obligations. We introduce a tool, Ghostbuster, that produces simplified versions of GADTs which elide selected type parameters, thereby weakening the guarantees of the simplified datatype in exchange for reducing the obligations necessary to manipulate it. Like ornaments, these simplified datatypes preserve the recursive structure of the original, but unlike ornaments we focus on information-preserving bidirectional transformations. Ghostbuster generates type-safe conversion functions between the original and simplified datatypes, which we prove are the identity function when composed. We evaluate a prototype tool for Haskell against thousands of GADTs found on the Hackage package database, generating simpler Haskell'98 datatypes and round-trip conversion functions between the two. @InProceedings{ICFP16p338, author = {Trevor L. McDonell and Timothy A. K. Zakian and Matteo Cimini and Ryan R. Newton}, title = {Ghostbuster: A Tool for Simplifying and Converting GADTs}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {338--350}, doi = {10.1145/2951913.2951914}, year = {2016}, } Publisher's Version Article Search 

Cockx, Jesper 
ICFP '16: "Unifiers as Equivalences: ..."
Unifiers as Equivalences: Proof-Relevant Unification of Dependently Typed Data
Jesper Cockx, Dominique Devriese, and Frank Piessens (iMinds, Belgium; KU Leuven, Belgium) Dependently typed languages such as Agda, Coq and Idris use a syntactic first-order unification algorithm to check definitions by dependent pattern matching. However, these algorithms don’t adequately consider the types of the terms being unified, leading to various unintended results. As a consequence, they require ad hoc restrictions to preserve soundness, but this makes them very hard to prove correct, modify, or extend. This paper proposes a framework for reasoning formally about unification in a dependently typed setting. In this framework, unification rules compute not just a unifier but also a corresponding correctness proof in the form of an equivalence between two sets of equations. By rephrasing the standard unification rules in a proof-relevant manner, they are guaranteed to preserve soundness of the theory. In addition, it enables us to safely add new rules that can exploit the dependencies between the types of equations. Using our framework, we reimplemented the unification algorithm used by Agda. As a result, we were able to replace previous ad hoc restrictions with formally verified unification rules, fixing a number of bugs in the process. We are convinced this will also enable the addition of new and interesting unification rules in the future, without compromising soundness along the way. @InProceedings{ICFP16p270, author = {Jesper Cockx and Dominique Devriese and Frank Piessens}, title = {Unifiers as Equivalences: Proof-Relevant Unification of Dependently Typed Data}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {270--283}, doi = {10.1145/2951913.2951917}, year = {2016}, } Publisher's Version Article Search 

Dagand, Pierre-Evariste 
ICFP '16: "Partial Type Equivalences ..."
Partial Type Equivalences for Verified Dependent Interoperability
Pierre-Evariste Dagand, Nicolas Tabareau, and Éric Tanter (UPMC, France; LIP6, France; Inria, France; University of Chile, Chile) Full-spectrum dependent types promise to enable the development of correct-by-construction software. However, even certified software needs to interact with simply-typed or untyped programs, be it to perform system calls, or to use legacy libraries. Trading static guarantees for runtime checks, the dependent interoperability framework provides a mechanism by which simply-typed values can safely be coerced to dependent types and, conversely, dependently-typed programs can defensively be exported to a simply-typed application. In this paper, we give a semantic account of dependent interoperability. Our presentation relies on and is guided by a pervading notion of type equivalence, whose importance has been emphasized in recent work on homotopy type theory. Specifically, we develop the notion of partial type equivalences as a key foundation for dependent interoperability. Our framework is developed in Coq; it is thus constructive and verified in the strictest sense of the terms. Using our library, users can specify domain-specific partial equivalences between data structures. Our library then takes care of the (sometimes, heavy) lifting that leads to interoperable programs. It thus becomes possible, as we shall illustrate, to internalize and hand-tune the extraction of dependently-typed programs to interoperable OCaml programs within Coq itself. @InProceedings{ICFP16p298, author = {Pierre-Evariste Dagand and Nicolas Tabareau and Éric Tanter}, title = {Partial Type Equivalences for Verified Dependent Interoperability}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {298--310}, doi = {10.1145/2951913.2951933}, year = {2016}, } Publisher's Version Article Search Info 

Dal Lago, Ugo 
ICFP '16: "A Lambda-Calculus Foundation ..."
A Lambda-Calculus Foundation for Universal Probabilistic Programming
Johannes Borgström, Ugo Dal Lago, Andrew D. Gordon, and Marcin Szymczak (Uppsala University, Sweden; University of Bologna, Italy; Inria, France; Microsoft Research, UK; University of Edinburgh, UK) We develop the operational semantics of an untyped probabilistic λ-calculus with continuous distributions, and both hard and soft constraints, as a foundation for universal probabilistic programming languages such as Church, Anglican, and Venture. Our first contribution is to adapt the classic operational semantics of λ-calculus to a continuous setting by creating a measure space on terms and defining step-indexed approximations. We prove equivalence of big-step and small-step formulations of this distribution-based semantics. To move closer to inference techniques, we also define the sampling-based semantics of a term as a function from a trace of random samples to a value. We show that the distribution induced by integration over the space of traces equals the distribution-based semantics. Our second contribution is to formalize the implementation technique of trace Markov chain Monte Carlo (MCMC) for our calculus and to show its correctness. A key step is defining sufficient conditions for the distribution induced by trace MCMC to converge to the distribution-based semantics. To the best of our knowledge, this is the first rigorous correctness proof for trace MCMC for a higher-order functional language, or for a language with soft constraints. @InProceedings{ICFP16p33, author = {Johannes Borgström and Ugo Dal Lago and Andrew D. Gordon and Marcin Szymczak}, title = {A Lambda-Calculus Foundation for Universal Probabilistic Programming}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {33--46}, doi = {10.1145/2951913.2951942}, year = {2016}, } Publisher's Version Article Search 

Danilenko, Nikita 
ICFP '16: "All Sorts of Permutations ..."
All Sorts of Permutations (Functional Pearl)
Jan Christiansen, Nikita Danilenko, and Sandra Dylus (Flensburg University of Applied Sciences, Germany; University of Kiel, Germany) The combination of nondeterminism and sorting is mostly associated with permutation sort, a sorting algorithm that is not very useful for sorting and has an awful running time. In this paper we look at the combination of nondeterminism and sorting in a different light: given a sorting function, we apply it to a nondeterministic predicate to gain a function that enumerates permutations of the input list. We get to the bottom of necessary properties of the sorting algorithms and predicates in play as well as discuss variations of the modelled nondeterminism. On top of that, we formulate and prove a theorem stating that no matter which sorting function we use, the corresponding permutation function enumerates all permutations of the input list. We use free theorems, which are derived from the type of a function alone, to prove the statement. @InProceedings{ICFP16p168, author = {Jan Christiansen and Nikita Danilenko and Sandra Dylus}, title = {All Sorts of Permutations (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {168--179}, doi = {10.1145/2951913.2951949}, year = {2016}, } Publisher's Version Article Search 

Darais, David 
ICFP '16: "Constructive Galois Connections: ..."
Constructive Galois Connections: Taming the Galois Connection Framework for Mechanized Metatheory
David Darais and David Van Horn (University of Maryland, USA) Galois connections are a foundational tool for structuring abstraction in semantics, and their use lies at the heart of the theory of abstract interpretation. Yet, mechanization of Galois connections remains limited to restricted modes of use, preventing their general application in mechanized metatheory and certified programming. This paper presents constructive Galois connections, a variant of Galois connections that is effective both on paper and in proof assistants; is complete with respect to a large subset of classical Galois connections; and enables more general reasoning principles, including the "calculational" style advocated by Cousot. To design constructive Galois connections we identify a restricted mode of use of classical ones which is both general and amenable to mechanization in dependently-typed functional programming languages. Crucial to our metatheory is the addition of monadic structure to Galois connections to control a "specification effect". Effectful calculations may reason classically, while pure calculations have extractable computational content. Explicitly moving between the worlds of specification and implementation is enabled by our metatheory. To validate our approach, we provide two case studies in mechanizing existing proofs from the literature: one uses calculational abstract interpretation to design a static analyzer, the other forms a semantic basis for gradual typing. Both mechanized proofs closely follow their original paper-and-pencil counterparts, employ reasoning principles not captured by previous mechanization approaches, support the extraction of verified algorithms, and are novel. 
@InProceedings{ICFP16p311, author = {David Darais and David Van Horn}, title = {Constructive Galois Connections: Taming the Galois Connection Framework for Mechanized Metatheory}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {311--324}, doi = {10.1145/2951913.2951934}, year = {2016}, } Publisher's Version Article Search 
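Before the constructive variant, it helps to fix what a classical Galois connection is: an abstraction map α and a concretization map γ between ordered domains such that α(S) ⊑ a iff S ⊑ γ(a). The finite Python sketch below checks that adjunction exhaustively for a toy sign abstraction; it is an illustration under assumed names, and the paper's constructive connections add the monadic "specification effect" structure not modelled here.

```python
import itertools

UNIVERSE = range(-3, 4)                 # finite concrete universe for the demo
sign = lambda n: (n > 0) - (n < 0)      # -1, 0, or 1

def alpha(S):
    """Best abstraction of a set of ints: the set of signs it contains."""
    return frozenset(sign(x) for x in S)

def gamma(a):
    """Concretization: all universe elements whose sign is allowed."""
    return frozenset(x for x in UNIVERSE if sign(x) in a)

# Exhaustively check the Galois condition:
#   alpha(S) <= a  iff  S <= gamma(a)   for all S, a in the demo domains.
sets = [frozenset(c) for c in itertools.combinations(UNIVERSE, 2)]
absts = [frozenset(a) for a in itertools.chain.from_iterable(
    itertools.combinations({-1, 0, 1}, k) for k in range(4))]
assert all((alpha(S) <= a) == (S <= gamma(a)) for S in sets for a in absts)
```

In a proof assistant, γ is typically a specification (a predicate over an infinite domain) rather than a computable function; taming that gap is exactly what the paper's monadic treatment addresses.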

Devriese, Dominique 
ICFP '16: "Unifiers as Equivalences: ..."
Unifiers as Equivalences: Proof-Relevant Unification of Dependently Typed Data
Jesper Cockx, Dominique Devriese, and Frank Piessens (iMinds, Belgium; KU Leuven, Belgium) Dependently typed languages such as Agda, Coq and Idris use a syntactic first-order unification algorithm to check definitions by dependent pattern matching. However, these algorithms don’t adequately consider the types of the terms being unified, leading to various unintended results. As a consequence, they require ad hoc restrictions to preserve soundness, but this makes them very hard to prove correct, modify, or extend. This paper proposes a framework for reasoning formally about unification in a dependently typed setting. In this framework, unification rules compute not just a unifier but also a corresponding correctness proof in the form of an equivalence between two sets of equations. By rephrasing the standard unification rules in a proof-relevant manner, they are guaranteed to preserve soundness of the theory. In addition, it enables us to safely add new rules that can exploit the dependencies between the types of equations. Using our framework, we reimplemented the unification algorithm used by Agda. As a result, we were able to replace previous ad hoc restrictions with formally verified unification rules, fixing a number of bugs in the process. We are convinced this will also enable the addition of new and interesting unification rules in the future, without compromising soundness along the way. @InProceedings{ICFP16p270, author = {Jesper Cockx and Dominique Devriese and Frank Piessens}, title = {Unifiers as Equivalences: Proof-Relevant Unification of Dependently Typed Data}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {270--283}, doi = {10.1145/2951913.2951917}, year = {2016}, } Publisher's Version Article Search 

Dimoulas, Christos 
ICFP '16: "Oh Lord, Please Don't ..."
Oh Lord, Please Don't Let Contracts Be Misunderstood (Functional Pearl)
Christos Dimoulas, Max S. New, Robert Bruce Findler, and Matthias Felleisen (PLT, USA) Contracts feel misunderstood, especially those with a higher-order soul. While software engineers appreciate contracts as tools for articulating the interface between components, functional programmers desperately search for their types and meaning, completely forgetting about their pragmatics. This gem presents a novel analysis of contract systems. Applied to the higher-order kind, this analysis reveals their large and clearly unappreciated software engineering potential. Three sample applications illustrate where this kind of exploration may lead. @InProceedings{ICFP16p117, author = {Christos Dimoulas and Max S. New and Robert Bruce Findler and Matthias Felleisen}, title = {Oh Lord, Please Don't Let Contracts Be Misunderstood (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {117--131}, doi = {10.1145/2951913.2951930}, year = {2016}, } Publisher's Version Article Search 

Downen, Paul 
ICFP '16: "Sequent Calculus as a Compiler ..."
Sequent Calculus as a Compiler Intermediate Language
Paul Downen, Luke Maurer, Zena M. Ariola, and Simon Peyton Jones (University of Oregon, USA; Microsoft Research, UK) The λ-calculus is popular as an intermediate language for practical compilers. But in the world of logic it has a lesser-known twin, born at the same time, called the sequent calculus. Perhaps that would make for a good intermediate language, too? To explore this question we designed Sequent Core, a practically-oriented core calculus based on the sequent calculus, and used it to reimplement a substantial chunk of the Glasgow Haskell Compiler. @InProceedings{ICFP16p74, author = {Paul Downen and Luke Maurer and Zena M. Ariola and Simon Peyton Jones}, title = {Sequent Calculus as a Compiler Intermediate Language}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {74--88}, doi = {10.1145/2951913.2951931}, year = {2016}, } Publisher's Version Article Search Info 

Dreyer, Derek 
ICFP '16: "Higher-Order Ghost State ..."
Higher-Order Ghost State
Ralf Jung, Robbert Krebbers, Lars Birkedal, and Derek Dreyer (MPI-SWS, Germany; Aarhus University, Denmark) The development of concurrent separation logic (CSL) has sparked a long line of work on modular verification of sophisticated concurrent programs. Two of the most important features supported by several existing extensions to CSL are higher-order quantification and custom ghost state. However, none of the logics that support both of these features reap the full potential of their combination. In particular, none of them provide general support for a feature we dub "higher-order ghost state": the ability to store arbitrary higher-order separation-logic predicates in ghost variables. In this paper, we propose higher-order ghost state as an interesting and useful extension to CSL, which we formalize in the framework of Jung et al.'s recently developed Iris logic. To justify its soundness, we develop a novel algebraic structure called CMRAs ("cameras"), which can be thought of as "step-indexed partial commutative monoids". Finally, we show that Iris proofs utilizing higher-order ghost state can be effectively formalized in Coq, and discuss the challenges we faced in formalizing them. @InProceedings{ICFP16p256, author = {Ralf Jung and Robbert Krebbers and Lars Birkedal and Derek Dreyer}, title = {Higher-Order Ghost State}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {256--269}, doi = {10.1145/2951913.2951943}, year = {2016}, } Publisher's Version Article Search Info 

Dylus, Sandra 
ICFP '16: "All Sorts of Permutations ..."
All Sorts of Permutations (Functional Pearl)
Jan Christiansen, Nikita Danilenko, and Sandra Dylus (Flensburg University of Applied Sciences, Germany; University of Kiel, Germany) The combination of nondeterminism and sorting is mostly associated with permutation sort, a sorting algorithm that is not very useful for sorting and has an awful running time. In this paper we look at the combination of nondeterminism and sorting in a different light: given a sorting function, we apply it to a nondeterministic predicate to gain a function that enumerates permutations of the input list. We get to the bottom of necessary properties of the sorting algorithms and predicates in play as well as discuss variations of the modelled nondeterminism. On top of that, we formulate and prove a theorem stating that no matter which sorting function we use, the corresponding permutation function enumerates all permutations of the input list. We use free theorems, which are derived from the type of a function alone, to prove the statement. @InProceedings{ICFP16p168, author = {Jan Christiansen and Nikita Danilenko and Sandra Dylus}, title = {All Sorts of Permutations (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {168--179}, doi = {10.1145/2951913.2951949}, year = {2016}, } Publisher's Version Article Search 

Emoto, Kento 
ICFP '16: "Think Like a Vertex, Behave ..."
Think Like a Vertex, Behave Like a Function! A Functional DSL for Vertex-Centric Big Graph Processing
Kento Emoto, Kiminori Matsuzaki, Zhenjiang Hu, Akimasa Morihata, and Hideya Iwasaki (Kyushu Institute of Technology, Japan; Kochi University of Technology, Japan; National Institute of Informatics, Japan; University of Tokyo, Japan; University of Electro-Communications, Japan) The vertex-centric programming model, known as “think like a vertex”, is being used more and more to support various big graph processing methods through iterative supersteps that execute in parallel a user-defined vertex program over each vertex of a graph. However, the imperative and message-passing style of existing systems makes defining a vertex program unintuitive. In this paper, we show that one can benefit more from “thinking like a vertex” by “behaving like a function” rather than “acting like a procedure” with full use of side effects and explicit control of message passing, state, and termination. We propose a functional approach to vertex-centric graph processing in which the computation at every vertex is abstracted as a higher-order function, and present Fregel, a new domain-specific language. Fregel has clear functional semantics, supports declarative description of vertex computation, and can be automatically translated into Pregel, an emerging imperative-style distributed graph processing framework, thereby achieving promising performance. Experimental results for several typical examples show the promise of this functional approach. @InProceedings{ICFP16p200, author = {Kento Emoto and Kiminori Matsuzaki and Zhenjiang Hu and Akimasa Morihata and Hideya Iwasaki}, title = {Think Like a Vertex, Behave Like a Function! A Functional DSL for Vertex-Centric Big Graph Processing}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {200--213}, doi = {10.1145/2951913.2951938}, year = {2016}, } Publisher's Version Article Search 
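The "behave like a function" style the abstract advocates can be sketched as a pure per-vertex function applied to every vertex at once, iterated to a fixed point. The Python toy below imitates a superstep loop with a minimum-label connected-components example; it is an assumption-laden illustration, not Fregel's surface syntax or its Pregel backend.

```python
# One superstep applies a pure vertex function `step` to each vertex's
# current value and its neighbours' values; no messages, no mutation.

def superstep(graph, values, step):
    """Apply the pure vertex function to every vertex simultaneously."""
    return {v: step(values[v], [values[u] for u in graph[v]])
            for v in graph}

def run(graph, init, step):
    """Iterate supersteps until no vertex's value changes."""
    values = {v: init(v) for v in graph}
    while True:
        new = superstep(graph, values, step)
        if new == values:        # fixed point reached: terminate
            return values
        values = new

# Undirected toy graph: {0,1,2} form one component, {3,4} another.
graph = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
labels = run(graph,
             init=lambda v: v,                          # start with own id
             step=lambda mine, nbrs: min([mine] + nbrs))  # smallest id wins

assert labels == {0: 0, 1: 0, 2: 0, 3: 3, 4: 3}
```

Because the vertex computation is a pure function of local and neighbouring values, termination and message plumbing become the framework's job rather than the programmer's, which is the property Fregel exploits when compiling to Pregel.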

Felleisen, Matthias 
ICFP '16: "Oh Lord, Please Don't ..."
Oh Lord, Please Don't Let Contracts Be Misunderstood (Functional Pearl)
Christos Dimoulas, Max S. New, Robert Bruce Findler, and Matthias Felleisen (PLT, USA) Contracts feel misunderstood, especially those with a higher-order soul. While software engineers appreciate contracts as tools for articulating the interface between components, functional programmers desperately search for their types and meaning, completely forgetting about their pragmatics. This gem presents a novel analysis of contract systems. Applied to the higher-order kind, this analysis reveals their large and clearly unappreciated software engineering potential. Three sample applications illustrate where this kind of exploration may lead. @InProceedings{ICFP16p117, author = {Christos Dimoulas and Max S. New and Robert Bruce Findler and Matthias Felleisen}, title = {Oh Lord, Please Don't Let Contracts Be Misunderstood (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {117--131}, doi = {10.1145/2951913.2951930}, year = {2016}, } Publisher's Version Article Search 

Findler, Robert Bruce 
ICFP '16: "Oh Lord, Please Don't ..."
Oh Lord, Please Don't Let Contracts Be Misunderstood (Functional Pearl)
Christos Dimoulas, Max S. New, Robert Bruce Findler, and Matthias Felleisen (PLT, USA) Contracts feel misunderstood, especially those with a higher-order soul. While software engineers appreciate contracts as tools for articulating the interface between components, functional programmers desperately search for their types and meaning, completely forgetting about their pragmatics. This gem presents a novel analysis of contract systems. Applied to the higher-order kind, this analysis reveals their large and clearly unappreciated software engineering potential. Three sample applications illustrate where this kind of exploration may lead. @InProceedings{ICFP16p117, author = {Christos Dimoulas and Max S. New and Robert Bruce Findler and Matthias Felleisen}, title = {Oh Lord, Please Don't Let Contracts Be Misunderstood (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {117-131}, doi = {10.1145/2951913.2951930}, year = {2016}, } Publisher's Version Article Search 

Fox, Anthony 
ICFP '16: "A New Verified Compiler Backend ..."
A New Verified Compiler Backend for CakeML
Yong Kiam Tan, Magnus O. Myreen, Ramana Kumar, Anthony Fox, Scott Owens, and Michael Norrish (IHPC at A*STAR, Singapore; Chalmers University of Technology, Sweden; Data61 at CSIRO, Australia; UNSW, Australia; University of Cambridge, UK; University of Kent, UK; Australian National University, Australia) We have developed and mechanically verified a new compiler backend for CakeML. Our new compiler features a sequence of intermediate languages that allows it to incrementally compile away high-level features and enables verification at the right levels of semantic detail. In this way, it resembles mainstream (unverified) compilers for strict functional languages. The compiler supports efficient curried multi-argument functions, configurable data representations, exceptions that unwind the call stack, register allocation, and more. The compiler targets several architectures: x86-64, ARMv6, ARMv8, MIPS-64, and RISC-V. In this paper, we present the overall structure of the compiler, including its 12 intermediate languages, and explain how everything fits together. We focus particularly on the interaction between the verification of the register allocator and the garbage collector, and memory representations. The entire development has been carried out within the HOL4 theorem prover. @InProceedings{ICFP16p60, author = {Yong Kiam Tan and Magnus O. Myreen and Ramana Kumar and Anthony Fox and Scott Owens and Michael Norrish}, title = {A New Verified Compiler Backend for CakeML}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {60-73}, doi = {10.1145/2951913.2951924}, year = {2016}, } Publisher's Version Article Search 

Gaboardi, Marco 
ICFP '16: "Combining Effects and Coeffects ..."
Combining Effects and Coeffects via Grading
Marco Gaboardi, Shinya Katsumata, Dominic Orchard, Flavien Breuvart, and Tarmo Uustalu (SUNY Buffalo, USA; Kyoto University, Japan; University of Cambridge, UK; University of Kent, UK; Inria, France; Tallinn University of Technology, Estonia) Effects and coeffects are two general, complementary aspects of program behaviour. They roughly correspond to computations which change the execution context (effects) versus computations which make demands on the context (coeffects). Effectful features include partiality, non-determinism, input-output, state, and exceptions. Coeffectful features include resource demands, variable access, notions of linearity, and data input requirements. The effectful or coeffectful behaviour of a program can be captured and described via type-based analyses, with fine-grained information provided by monoidal effect annotations and semiring coeffects. Various recent work has proposed models for such typed calculi in terms of graded (strong) monads for effects and graded (monoidal) comonads for coeffects. Effects and coeffects have been studied separately so far, but in practice many computations are both effectful and coeffectful, e.g., possibly throwing exceptions but with resource requirements. To remedy this, we introduce a new general calculus with a combined effect-coeffect system. This can describe both the changes and requirements that a program has on its context, as well as interactions between these effectful and coeffectful features of computation. The effect-coeffect system has a denotational model in terms of effect-graded monads and coeffect-graded comonads where interaction is expressed via the novel concept of graded distributive laws. This graded semantics unifies the syntactic type theory with the denotational model. We show that our calculus can be instantiated to describe in a natural way various different kinds of interaction between a program and its evaluation context. 
@InProceedings{ICFP16p476, author = {Marco Gaboardi and Shinya Katsumata and Dominic Orchard and Flavien Breuvart and Tarmo Uustalu}, title = {Combining Effects and Coeffects via Grading}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {476-489}, doi = {10.1145/2951913.2951939}, year = {2016}, } Publisher's Version Article Search 

Garg, Deepak 
ICFP '16: "A Type Theory for Incremental ..."
A Type Theory for Incremental Computational Complexity with Control Flow Changes
Ezgi Çiçek, Zoe Paraskevopoulou, and Deepak Garg (MPI-SWS, Germany; Princeton University, USA) Incremental computation aims to speed up reruns of a program after its inputs have been modified slightly. It works by recording a trace of the program's first run and propagating changes through the trace in incremental runs, trying to reuse as much of the original trace as possible. The recent work CostIt is a type and effect system to establish the time complexity of incremental runs of a program, as a function of input changes. However, CostIt is limited in two ways. First, it prohibits input changes that influence control flow. This makes it impossible to type programs that, for instance, branch on inputs that may change. Second, the soundness of CostIt is proved relative to an abstract cost semantics, but it is unclear how the semantics can be realized. In this paper, we address both these limitations. We present DuCostIt, a redesign of CostIt, that combines reasoning about costs of change propagation and costs of from-scratch evaluation. The latter lifts the restriction on control flow changes. To obtain the type system, we refine Flow Caml, a type system for information flow analysis, with cost effects. Additionally, we inherit from CostIt index refinements to track data structure sizes and a comonadic type. Using a combination of binary and unary step-indexed logical relations, we prove DuCostIt's cost analysis sound relative to not only an abstract cost semantics, but also a concrete semantics, which is obtained by translation to an ML-like language. @InProceedings{ICFP16p132, author = {Ezgi Çiçek and Zoe Paraskevopoulou and Deepak Garg}, title = {A Type Theory for Incremental Computational Complexity with Control Flow Changes}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {132-145}, doi = {10.1145/2951913.2951950}, year = {2016}, } Publisher's Version Article Search 

Gilray, Thomas 
ICFP '16: "Allocation Characterizes Polyvariance: ..."
Allocation Characterizes Polyvariance: A Unified Methodology for Polyvariant Control-Flow Analysis
Thomas Gilray, Michael D. Adams, and Matthew Might (University of Utah, USA) The polyvariance of a static analysis is the degree to which it structurally differentiates approximations of program values. Polyvariant techniques come in a number of different flavors that represent alternative heuristics for managing the tradeoff an analysis strikes between precision and complexity. For example, call sensitivity supposes that values will tend to correlate with recent call sites, object sensitivity supposes that values will correlate with the allocation points of related objects, the Cartesian product algorithm supposes correlations between the values of arguments to the same function, and so forth. In this paper, we describe a unified methodology for implementing and understanding polyvariance in a higher-order setting (i.e., for control-flow analyses). We do this by extending the method of abstracting abstract machines (AAM), a systematic approach to producing an abstract interpretation of abstract-machine semantics. AAM eliminates recursion within a language’s semantics by passing around an explicit store, and thus places importance on the strategy an analysis uses for allocating abstract addresses within the abstract heap or store. We build on AAM by showing that the design space of possible abstract allocators exactly and uniquely corresponds to the design space of polyvariant strategies. This allows us to both unify and generalize polyvariance as tunings of a single function. Changes to the behavior of this function easily recapitulate classic styles of analysis and produce novel variations, combinations of techniques, and fundamentally new techniques. @InProceedings{ICFP16p407, author = {Thomas Gilray and Michael D. Adams and Matthew Might}, title = {Allocation Characterizes Polyvariance: A Unified Methodology for Polyvariant Control-Flow Analysis}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {407-420}, doi = {10.1145/2951913.2951936}, year = {2016}, } Publisher's Version Article Search 
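The "polyvariance as a single tunable allocator" idea can be sketched informally as follows (a simplified illustration under my own naming, not the paper's formal AAM development): the only knob distinguishing a monovariant (0-CFA-like) analysis from a call-sensitive (1-CFA-like) one is the function that names abstract addresses.

```python
# Illustrative sketch: abstract stores map addresses to *sets* of values
# (bindings join rather than overwrite), and the allocator alone decides
# how much context an address remembers.

def alloc_0cfa(var, call_stack):
    # Monovariant: one address per variable; all calling contexts conflated.
    return (var,)

def alloc_1cfa(var, call_stack):
    # Call-sensitive: one address per variable per most-recent call site.
    return (var, call_stack[-1] if call_stack else None)

def bind(store, alloc, var, call_stack, value):
    addr = alloc(var, call_stack)
    store.setdefault(addr, set()).add(value)
    return store

# The same variable x bound to different values from two call sites:
s0, s1 = {}, {}
for site, val in [("call@1", "int"), ("call@2", "str")]:
    bind(s0, alloc_0cfa, "x", [site], val)
    bind(s1, alloc_1cfa, "x", [site], val)
# s0 merges both values at a single address; s1 keeps them apart.
```

Swapping in an allocator keyed on allocation points or argument tuples would, by the same mechanism, recover object sensitivity or the Cartesian product algorithm.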

Gordon, Andrew D. 
ICFP '16: "A Lambda-Calculus Foundation ..."
A Lambda-Calculus Foundation for Universal Probabilistic Programming
Johannes Borgström, Ugo Dal Lago, Andrew D. Gordon, and Marcin Szymczak (Uppsala University, Sweden; University of Bologna, Italy; Inria, France; Microsoft Research, UK; University of Edinburgh, UK) We develop the operational semantics of an untyped probabilistic λ-calculus with continuous distributions, and both hard and soft constraints, as a foundation for universal probabilistic programming languages such as Church, Anglican, and Venture. Our first contribution is to adapt the classic operational semantics of λ-calculus to a continuous setting via creating a measure space on terms and defining step-indexed approximations. We prove equivalence of big-step and small-step formulations of this distribution-based semantics. To move closer to inference techniques, we also define the sampling-based semantics of a term as a function from a trace of random samples to a value. We show that the distribution induced by integration over the space of traces equals the distribution-based semantics. Our second contribution is to formalize the implementation technique of trace Markov chain Monte Carlo (MCMC) for our calculus and to show its correctness. A key step is defining sufficient conditions for the distribution induced by trace MCMC to converge to the distribution-based semantics. To the best of our knowledge, this is the first rigorous correctness proof for trace MCMC for a higher-order functional language, or for a language with soft constraints. @InProceedings{ICFP16p33, author = {Johannes Borgström and Ugo Dal Lago and Andrew D. Gordon and Marcin Szymczak}, title = {A Lambda-Calculus Foundation for Universal Probabilistic Programming}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {33-46}, doi = {10.1145/2951913.2951942}, year = {2016}, } Publisher's Version Article Search 

Hammond, Kevin 
ICFP '16: "Farms, Pipes, Streams and ..."
Farms, Pipes, Streams and Reforestation: Reasoning about Structured Parallel Processes using Types and Hylomorphisms
David Castro, Kevin Hammond, and Susmit Sarkar (University of St. Andrews, UK) The increasing importance of parallelism has motivated the creation of better abstractions for writing parallel software, including structured parallelism using nested algorithmic skeletons. Such approaches provide high-level abstractions that avoid common problems, such as race conditions, and often allow strong cost models to be defined. However, choosing a combination of algorithmic skeletons that yields good parallel speedups for a program on some specific parallel architecture remains a difficult task. In order to achieve this, it is necessary to simultaneously reason both about the costs of different parallel structures and about the semantic equivalences between them. This paper presents a new type-based mechanism that enables strong static reasoning about these properties. We exploit well-known properties of a very general recursion pattern, hylomorphisms, and give a denotational semantics for structured parallel processes in terms of these hylomorphisms. Using our approach, it is possible to determine formally whether it is possible to introduce a desired parallel structure into a program without altering its functional behaviour, and also to choose a version of that parallel structure that minimises some given cost model. @InProceedings{ICFP16p4, author = {David Castro and Kevin Hammond and Susmit Sarkar}, title = {Farms, Pipes, Streams and Reforestation: Reasoning about Structured Parallel Processes using Types and Hylomorphisms}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {4-17}, doi = {10.1145/2951913.2951920}, year = {2016}, } Publisher's Version Article Search 
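For readers unfamiliar with the recursion pattern this paper builds on: a hylomorphism fuses an unfold (a coalgebra producing structure from a seed) with a fold (an algebra consuming it), so the intermediate structure is never materialised. A minimal sketch in Python, specialised to list-shaped structure (my own simplification, not the paper's categorical formulation):

```python
# hylo(alg, coalg, seed): coalg unfolds a seed into either None (nil) or
# (head, next_seed); alg folds the virtual list back down. The recursion
# consumes each layer as soon as it is produced.

def hylo(alg, coalg, seed):
    layer = coalg(seed)
    if layer is None:
        return alg(None)
    head, rest = layer
    return alg((head, hylo(alg, coalg, rest)))

# factorial as a hylomorphism: unfold n into n, n-1, ..., 1, fold with *.
count_down = lambda n: None if n == 0 else (n, n - 1)
product = lambda layer: 1 if layer is None else layer[0] * layer[1]
print(hylo(product, count_down, 5))  # 120
```

The paper's point is that many parallel skeletons (farms, pipelines) denote hylomorphisms, so semantic equivalence between alternative parallel structures reduces to equational reasoning about `alg` and `coalg`.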

Hu, Zhenjiang 
ICFP '16: "Think Like a Vertex, Behave ..."
Think Like a Vertex, Behave Like a Function! A Functional DSL for Vertex-Centric Big Graph Processing
Kento Emoto, Kiminori Matsuzaki, Zhenjiang Hu, Akimasa Morihata, and Hideya Iwasaki (Kyushu Institute of Technology, Japan; Kochi University of Technology, Japan; National Institute of Informatics, Japan; University of Tokyo, Japan; University of Electro-Communications, Japan) The vertex-centric programming model, known as “think like a vertex”, is being used more and more to support various big graph processing methods through iterative supersteps that execute in parallel a user-defined vertex program over each vertex of a graph. However, the imperative and message-passing style of existing systems makes defining a vertex program unintuitive. In this paper, we show that one can benefit more from “Thinking like a vertex” by “Behaving like a function” rather than “Acting like a procedure” with full use of side effects and explicit control of message passing, state, and termination. We propose a functional approach to vertex-centric graph processing in which the computation at every vertex is abstracted as a higher-order function and present Fregel, a new domain-specific language. Fregel has clear functional semantics, supports declarative description of vertex computation, and can be automatically translated into Pregel, an emerging imperative-style distributed graph processing framework, and thereby achieve promising performance. Experimental results for several typical examples show the promise of this functional approach. @InProceedings{ICFP16p200, author = {Kento Emoto and Kiminori Matsuzaki and Zhenjiang Hu and Akimasa Morihata and Hideya Iwasaki}, title = {Think Like a Vertex, Behave Like a Function! A Functional DSL for Vertex-Centric Big Graph Processing}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {200-213}, doi = {10.1145/2951913.2951938}, year = {2016}, } Publisher's Version Article Search 

Iwasaki, Hideya 
ICFP '16: "Think Like a Vertex, Behave ..."
Think Like a Vertex, Behave Like a Function! A Functional DSL for Vertex-Centric Big Graph Processing
Kento Emoto, Kiminori Matsuzaki, Zhenjiang Hu, Akimasa Morihata, and Hideya Iwasaki (Kyushu Institute of Technology, Japan; Kochi University of Technology, Japan; National Institute of Informatics, Japan; University of Tokyo, Japan; University of Electro-Communications, Japan) The vertex-centric programming model, known as “think like a vertex”, is being used more and more to support various big graph processing methods through iterative supersteps that execute in parallel a user-defined vertex program over each vertex of a graph. However, the imperative and message-passing style of existing systems makes defining a vertex program unintuitive. In this paper, we show that one can benefit more from “Thinking like a vertex” by “Behaving like a function” rather than “Acting like a procedure” with full use of side effects and explicit control of message passing, state, and termination. We propose a functional approach to vertex-centric graph processing in which the computation at every vertex is abstracted as a higher-order function and present Fregel, a new domain-specific language. Fregel has clear functional semantics, supports declarative description of vertex computation, and can be automatically translated into Pregel, an emerging imperative-style distributed graph processing framework, and thereby achieve promising performance. Experimental results for several typical examples show the promise of this functional approach. @InProceedings{ICFP16p200, author = {Kento Emoto and Kiminori Matsuzaki and Zhenjiang Hu and Akimasa Morihata and Hideya Iwasaki}, title = {Think Like a Vertex, Behave Like a Function! A Functional DSL for Vertex-Centric Big Graph Processing}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {200-213}, doi = {10.1145/2951913.2951938}, year = {2016}, } Publisher's Version Article Search 

Jhala, Ranjit 
ICFP '16: "Dynamic Witnesses for Static ..."
Dynamic Witnesses for Static Type Errors (or, Ill-Typed Programs Usually Go Wrong)
Eric L. Seidel, Ranjit Jhala, and Westley Weimer (University of California at San Diego, USA; University of Virginia, USA) Static type errors are a common stumbling block for newcomers to typed functional languages. We present a dynamic approach to explaining type errors by generating counterexample witness inputs that illustrate how an ill-typed program goes wrong. First, given an ill-typed function, we symbolically execute the body to synthesize witness values that make the program go wrong. We prove that our procedure synthesizes general witnesses in that if a witness is found, then for all inhabited input types, there exist values that can make the function go wrong. Second, we show how to extend the above procedure to produce a reduction graph that can be used to interactively visualize and debug witness executions. Third, we evaluate the coverage of our approach on two data sets comprising over 4,500 ill-typed student programs. Our technique is able to generate witnesses for 88% of the programs, and our reduction graph yields small counterexamples for 81% of the witnesses. Finally, we evaluate whether our witnesses help students understand and fix type errors, and find that students presented with our witnesses show a greater understanding of type errors than those presented with a standard error message. @InProceedings{ICFP16p228, author = {Eric L. Seidel and Ranjit Jhala and Westley Weimer}, title = {Dynamic Witnesses for Static Type Errors (or, Ill-Typed Programs Usually Go Wrong)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {228-242}, doi = {10.1145/2951913.2951915}, year = {2016}, } Publisher's Version Article Search 

Jung, Ralf 
ICFP '16: "Higher-Order Ghost State ..."
Higher-Order Ghost State
Ralf Jung, Robbert Krebbers, Lars Birkedal, and Derek Dreyer (MPI-SWS, Germany; Aarhus University, Denmark) The development of concurrent separation logic (CSL) has sparked a long line of work on modular verification of sophisticated concurrent programs. Two of the most important features supported by several existing extensions to CSL are higher-order quantification and custom ghost state. However, none of the logics that support both of these features reap the full potential of their combination. In particular, none of them provide general support for a feature we dub "higher-order ghost state": the ability to store arbitrary higher-order separation-logic predicates in ghost variables. In this paper, we propose higher-order ghost state as an interesting and useful extension to CSL, which we formalize in the framework of Jung et al.'s recently developed Iris logic. To justify its soundness, we develop a novel algebraic structure called CMRAs ("cameras"), which can be thought of as "step-indexed partial commutative monoids". Finally, we show that Iris proofs utilizing higher-order ghost state can be effectively formalized in Coq, and discuss the challenges we faced in formalizing them. @InProceedings{ICFP16p256, author = {Ralf Jung and Robbert Krebbers and Lars Birkedal and Derek Dreyer}, title = {Higher-Order Ghost State}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {256-269}, doi = {10.1145/2951913.2951943}, year = {2016}, } Publisher's Version Article Search Info 

Katsumata, Shinya 
ICFP '16: "Combining Effects and Coeffects ..."
Combining Effects and Coeffects via Grading
Marco Gaboardi, Shinya Katsumata, Dominic Orchard, Flavien Breuvart, and Tarmo Uustalu (SUNY Buffalo, USA; Kyoto University, Japan; University of Cambridge, UK; University of Kent, UK; Inria, France; Tallinn University of Technology, Estonia) Effects and coeffects are two general, complementary aspects of program behaviour. They roughly correspond to computations which change the execution context (effects) versus computations which make demands on the context (coeffects). Effectful features include partiality, non-determinism, input-output, state, and exceptions. Coeffectful features include resource demands, variable access, notions of linearity, and data input requirements. The effectful or coeffectful behaviour of a program can be captured and described via type-based analyses, with fine-grained information provided by monoidal effect annotations and semiring coeffects. Various recent work has proposed models for such typed calculi in terms of graded (strong) monads for effects and graded (monoidal) comonads for coeffects. Effects and coeffects have been studied separately so far, but in practice many computations are both effectful and coeffectful, e.g., possibly throwing exceptions but with resource requirements. To remedy this, we introduce a new general calculus with a combined effect-coeffect system. This can describe both the changes and requirements that a program has on its context, as well as interactions between these effectful and coeffectful features of computation. The effect-coeffect system has a denotational model in terms of effect-graded monads and coeffect-graded comonads where interaction is expressed via the novel concept of graded distributive laws. This graded semantics unifies the syntactic type theory with the denotational model. We show that our calculus can be instantiated to describe in a natural way various different kinds of interaction between a program and its evaluation context. 
@InProceedings{ICFP16p476, author = {Marco Gaboardi and Shinya Katsumata and Dominic Orchard and Flavien Breuvart and Tarmo Uustalu}, title = {Combining Effects and Coeffects via Grading}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {476-489}, doi = {10.1145/2951913.2951939}, year = {2016}, } Publisher's Version Article Search 

Klein, Gerwin 
ICFP '16: "Refinement through Restraint: ..."
Refinement through Restraint: Bringing Down the Cost of Verification
Liam O'Connor, Zilin Chen, Christine Rizkallah, Sidney Amani, Japheth Lim, Toby Murray, Yutaka Nagashima, Thomas Sewell, and Gerwin Klein (UNSW, Australia; Data61, Australia; University of Pennsylvania, USA; University of Melbourne, Australia) We present a framework aimed at significantly reducing the cost of verifying certain classes of systems software, such as file systems. Our framework allows for equational reasoning about systems code written in our new language, Cogent. Cogent is a restricted, polymorphic, higher-order, and purely functional language with linear types and without the need for a trusted runtime or garbage collector. Linear types allow us to assign two semantics to the language: one imperative, suitable for efficient C code generation; and one functional, suitable for equational reasoning and verification. As Cogent is a restricted language, it is designed to easily interoperate with existing C functions and to connect to existing C verification frameworks. Our framework is based on certifying compilation: For a well-typed Cogent program, our compiler produces C code, a high-level shallow embedding of its semantics in Isabelle/HOL, and a proof that the C code correctly refines this embedding. Thus one can reason about the full semantics of real-world systems code productively and equationally, while retaining the interoperability and leanness of C. The compiler certificate is a series of language-level proofs and per-program translation validation phases, combined into one coherent top-level theorem in Isabelle/HOL. @InProceedings{ICFP16p89, author = {Liam O'Connor and Zilin Chen and Christine Rizkallah and Sidney Amani and Japheth Lim and Toby Murray and Yutaka Nagashima and Thomas Sewell and Gerwin Klein}, title = {Refinement through Restraint: Bringing Down the Cost of Verification}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {89-102}, doi = {10.1145/2951913.2951940}, year = {2016}, } Publisher's Version Article Search Info 

Kobayashi, Naoki 
ICFP '16: "Automatically Disproving Fair ..."
Automatically Disproving Fair Termination of Higher-Order Functional Programs
Keiichi Watanabe, Ryosuke Sato, Takeshi Tsukada, and Naoki Kobayashi (University of Tokyo, Japan) We propose an automated method for disproving fair termination of higher-order functional programs, which is complementary to Murase et al.’s recent method for proving fair termination. A program is said to be fair terminating if it has no infinite execution trace that satisfies a given fairness constraint. Fair termination is an important property because program verification problems for arbitrary ω-regular temporal properties can be transformed to those of fair termination. Our method reduces the problem of disproving fair termination to higher-order model checking by using predicate abstraction and CEGAR. Given a program, we convert it to an abstract program that generates an approximation of the (possibly infinite) execution traces of the original program, so that the original program has a fair infinite execution trace if the tree generated by the abstract program satisfies a certain property. The method is a nontrivial extension of Kuwahara et al.’s method for disproving plain termination. @InProceedings{ICFP16p243, author = {Keiichi Watanabe and Ryosuke Sato and Takeshi Tsukada and Naoki Kobayashi}, title = {Automatically Disproving Fair Termination of Higher-Order Functional Programs}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {243-255}, doi = {10.1145/2951913.2951919}, year = {2016}, } Publisher's Version Article Search ICFP '16: "Compact Bit Encoding Schemes ..." Compact Bit Encoding Schemes for Simply-Typed Lambda-Terms Kotaro Takeda, Naoki Kobayashi, Kazuya Yaguchi, and Ayumi Shinohara (University of Tokyo, Japan; Tohoku University, Japan) We consider the problem of how to compactly encode simply-typed λ-terms into bit strings. The work has been motivated by Kobayashi et al.’s recent work on higher-order data compression, where data are encoded as functional programs (or, λ-terms) that generate them. 
To exploit its good compression power, the compression scheme has to come with a method for compactly encoding the λ-terms into bit strings. To this end, we propose two type-based bit-encoding schemes; the first one encodes a λ-term into a sequence of symbols by using type information, and then applies arithmetic coding to convert the sequence to a bit string. The second one is more sophisticated; we prepare a context-free grammar (CFG) that describes only well-typed terms, and then use a variation of arithmetic coding specialized for the CFG. We have implemented both schemes and confirmed that they often output more compact codes than previous bit encoding schemes for λ-terms. @InProceedings{ICFP16p146, author = {Kotaro Takeda and Naoki Kobayashi and Kazuya Yaguchi and Ayumi Shinohara}, title = {Compact Bit Encoding Schemes for Simply-Typed Lambda-Terms}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {146-157}, doi = {10.1145/2951913.2951918}, year = {2016}, } Publisher's Version Article Search 

Krebbers, Robbert 
ICFP '16: "Higher-Order Ghost State ..."
Higher-Order Ghost State
Ralf Jung, Robbert Krebbers, Lars Birkedal, and Derek Dreyer (MPI-SWS, Germany; Aarhus University, Denmark) The development of concurrent separation logic (CSL) has sparked a long line of work on modular verification of sophisticated concurrent programs. Two of the most important features supported by several existing extensions to CSL are higher-order quantification and custom ghost state. However, none of the logics that support both of these features reap the full potential of their combination. In particular, none of them provide general support for a feature we dub "higher-order ghost state": the ability to store arbitrary higher-order separation-logic predicates in ghost variables. In this paper, we propose higher-order ghost state as an interesting and useful extension to CSL, which we formalize in the framework of Jung et al.'s recently developed Iris logic. To justify its soundness, we develop a novel algebraic structure called CMRAs ("cameras"), which can be thought of as "step-indexed partial commutative monoids". Finally, we show that Iris proofs utilizing higher-order ghost state can be effectively formalized in Coq, and discuss the challenges we faced in formalizing them. @InProceedings{ICFP16p256, author = {Ralf Jung and Robbert Krebbers and Lars Birkedal and Derek Dreyer}, title = {Higher-Order Ghost State}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {256-269}, doi = {10.1145/2951913.2951943}, year = {2016}, } Publisher's Version Article Search Info 

Krishnaswami, Neelakantan R. 
ICFP '16: "Datafun: A Functional Datalog ..."
Datafun: A Functional Datalog
Michael Arntzenius and Neelakantan R. Krishnaswami (University of Birmingham, UK) Datalog may be considered either an unusually powerful query language or a carefully limited logic programming language. Datalog is declarative, expressive, and optimizable, and has been applied successfully in a wide variety of problem domains. However, most use-cases require extending Datalog in an application-specific manner. In this paper we define Datafun, an analogue of Datalog supporting higher-order functional programming. The key idea is to track monotonicity with types. @InProceedings{ICFP16p214, author = {Michael Arntzenius and Neelakantan R. Krishnaswami}, title = {Datafun: A Functional Datalog}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {214-227}, doi = {10.1145/2951913.2951948}, year = {2016}, } Publisher's Version Article Search 
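To give a concrete feel for the kind of computation Datafun captures, here is Datalog-style transitive closure expressed as the least fixed point of a monotone function on sets of edges (an informal Python sketch under my own naming; Datafun's contribution is that its type system would statically verify that `step` is monotone, whereas here we simply iterate):

```python
# path(x,z) :- edge(x,z).
# path(x,z) :- edge(x,y), path(y,z).
# computed as a least fixed point by Kleene iteration from the empty set.

def fix(f, bottom=frozenset()):
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

def transitive_closure(edges):
    edges = frozenset(edges)
    def step(paths):
        # Monotone in `paths`: adding inputs can only add derived facts.
        derived = {(x, z) for (x, y1) in edges for (y2, z) in paths if y1 == y2}
        return edges | paths | derived
    return fix(step)

tc = transitive_closure({(1, 2), (2, 3)})
# tc contains the derived fact (1, 3) alongside the original edges.
```

Monotonicity is what makes the iteration meaningful: it guarantees the sequence of approximations only grows, so on a finite domain the loop reaches the least fixed point.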

Kumar, Ramana 
ICFP '16: "A New Verified Compiler Backend ..."
A New Verified Compiler Backend for CakeML
Yong Kiam Tan, Magnus O. Myreen, Ramana Kumar, Anthony Fox, Scott Owens, and Michael Norrish (IHPC at A*STAR, Singapore; Chalmers University of Technology, Sweden; Data61 at CSIRO, Australia; UNSW, Australia; University of Cambridge, UK; University of Kent, UK; Australian National University, Australia) We have developed and mechanically verified a new compiler backend for CakeML. Our new compiler features a sequence of intermediate languages that allows it to incrementally compile away high-level features and enables verification at the right levels of semantic detail. In this way, it resembles mainstream (unverified) compilers for strict functional languages. The compiler supports efficient curried multi-argument functions, configurable data representations, exceptions that unwind the call stack, register allocation, and more. The compiler targets several architectures: x86-64, ARMv6, ARMv8, MIPS-64, and RISC-V. In this paper, we present the overall structure of the compiler, including its 12 intermediate languages, and explain how everything fits together. We focus particularly on the interaction between the verification of the register allocator and the garbage collector, and memory representations. The entire development has been carried out within the HOL4 theorem prover. @InProceedings{ICFP16p60, author = {Yong Kiam Tan and Magnus O. Myreen and Ramana Kumar and Anthony Fox and Scott Owens and Michael Norrish}, title = {A New Verified Compiler Backend for CakeML}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {60-73}, doi = {10.1145/2951913.2951924}, year = {2016}, } Publisher's Version Article Search 

Laporte, Vincent 
ICFP '16: "An Abstract Memory Functor ..."
An Abstract Memory Functor for Verified C Static Analyzers
Sandrine Blazy, Vincent Laporte, and David Pichardie (University of Rennes 1, France; IRISA, France; IMDEA Software Institute, Spain; ENS Rennes, France) Abstract interpretation provides advanced techniques to infer numerical invariants on programs. There is an abundant literature about numerical abstract domains that operate on scalar variables. This work deals with lifting these techniques to a realistic C memory model. We present an abstract memory functor that takes as argument any standard numerical abstract domain, and builds a memory abstract domain that finely tracks properties about memory contents, taking into account union types, pointer arithmetic and type casts. This functor is implemented and verified inside the Coq proof assistant with respect to the CompCert compiler memory model. Using the Coq extraction mechanism, it is fully executable and used by the Verasco C static analyzer. @InProceedings{ICFP16p325, author = {Sandrine Blazy and Vincent Laporte and David Pichardie}, title = {An Abstract Memory Functor for Verified C Static Analyzers}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {325--337}, doi = {10.1145/2951913.2951937}, year = {2016}, } Publisher's Version Article Search Info 

Licata, Dan 
ICFP '16KEY: "A Functional Programmer's ..."
A Functional Programmer's Guide to Homotopy Type Theory
Dan Licata (Wesleyan University, USA) Dependent type theories are functional programming languages with types rich enough to do computer-checked mathematics and software verification. Homotopy type theory is a recent area of work that connects dependent type theory to the mathematical disciplines of homotopy theory and higher-dimensional category theory. From a programming point of view, these connections have revealed that all types in dependent type theory support a certain generic program that had not previously been exploited. Specifically, each type can be equipped with computationally relevant witnesses of equality of elements of that type, and all types support a generic program that transports elements along these equalities. One mechanism for equipping types with nontrivial witnesses of equality is Voevodsky’s univalence axiom, which implies that equality of types themselves is witnessed by type isomorphism. Another is higher inductive types, an extended datatype schema that allows identifications between different datatype constructors. While these new mechanisms were originally formulated as axiomatic extensions of type theory, recent work has investigated their computational meaning, leading to the development of new programming languages that better support them. In this talk, I will illustrate what univalence and higher inductive types mean in programming terms. I will also discuss how studying some related semantic settings can reveal additional structure on types; for example, moving from groupoids (categories where all maps are invertible) to general categories yields an account of coercions instead of equalities. Overall, I hope to convey some of the beauty and richness of these connections between disciplines, which we are just beginning to understand. 
@InProceedings{ICFP16p3, author = {Dan Licata}, title = {A Functional Programmer's Guide to Homotopy Type Theory}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {3--3}, doi = {10.1145/2951913.2976748}, year = {2016}, } Publisher's Version Article Search 
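The "generic program that transports elements along equalities" mentioned in the abstract is definable in a few lines of any dependently typed language; a Lean rendering (illustrative, not from the talk):

```lean
-- Transport: given a proof that a = b, move evidence about a to b.
-- Every type family P supports this program uniformly.
def transport {A : Type} (P : A → Prop) {a b : A} (h : a = b) : P a → P b :=
  fun pa => h ▸ pa

-- Example: rewriting a fact about 2 + 2 into a fact about 4.
example : (4 : Nat) + 1 = 5 :=
  transport (fun n => n + 1 = 5) (rfl : (2 + 2 : Nat) = 4) rfl
```

Homotopy type theory's insight is that the equality proofs `h` may themselves carry computational content (e.g., an isomorphism under univalence), so `transport` does real work rather than being a no-op.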

Lim, Japheth 
ICFP '16: "Refinement through Restraint: ..."
Refinement through Restraint: Bringing Down the Cost of Verification
Liam O'Connor, Zilin Chen, Christine Rizkallah, Sidney Amani, Japheth Lim, Toby Murray, Yutaka Nagashima, Thomas Sewell, and Gerwin Klein (UNSW, Australia; Data61, Australia; University of Pennsylvania, USA; University of Melbourne, Australia) We present a framework aimed at significantly reducing the cost of verifying certain classes of systems software, such as file systems. Our framework allows for equational reasoning about systems code written in our new language, Cogent. Cogent is a restricted, polymorphic, higher-order, and purely functional language with linear types and without the need for a trusted runtime or garbage collector. Linear types allow us to assign two semantics to the language: one imperative, suitable for efficient C code generation; and one functional, suitable for equational reasoning and verification. As Cogent is a restricted language, it is designed to easily interoperate with existing C functions and to connect to existing C verification frameworks. Our framework is based on certifying compilation: For a well-typed Cogent program, our compiler produces C code, a high-level shallow embedding of its semantics in Isabelle/HOL, and a proof that the C code correctly refines this embedding. Thus one can reason about the full semantics of real-world systems code productively and equationally, while retaining the interoperability and leanness of C. The compiler certificate is a series of language-level proofs and per-program translation validation phases, combined into one coherent top-level theorem in Isabelle/HOL. @InProceedings{ICFP16p89, author = {Liam O'Connor and Zilin Chen and Christine Rizkallah and Sidney Amani and Japheth Lim and Toby Murray and Yutaka Nagashima and Thomas Sewell and Gerwin Klein}, title = {Refinement through Restraint: Bringing Down the Cost of Verification}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {89--102}, doi = {10.1145/2951913.2951940}, year = {2016}, } Publisher's Version Article Search Info 

Lindley, Sam 
ICFP '16: "Talking Bananas: Structural ..."
Talking Bananas: Structural Recursion for Session Types
Sam Lindley and J. Garrett Morris (University of Edinburgh, UK) Session types provide static guarantees that concurrent programs respect communication protocols. We give a novel account of recursive session types in the context of GV, a small concurrent extension of the linear λ-calculus. We extend GV with recursive types and catamorphisms, following the initial algebra semantics of recursion, and show that doing so naturally gives rise to recursive session types. We show that this principled approach to recursion resolves long-standing problems in the treatment of duality for recursive session types. We characterize the expressiveness of GV concurrency by giving a CPS translation to (non-concurrent) λ-calculus and proving that reduction in GV is simulated by full reduction in λ-calculus. This shows that GV remains terminating in the presence of positive recursive types, and that such arguments extend to other extensions of GV, such as polymorphism or non-linear types, by appeal to normalization results for sequential λ-calculi. We also show that GV remains deadlock free and deterministic in the presence of recursive types. Finally, we extend CP, a session-typed process calculus based on linear logic, with recursive types, and show that doing so preserves the connection between reduction in GV and cut elimination in CP. @InProceedings{ICFP16p434, author = {Sam Lindley and J. Garrett Morris}, title = {Talking Bananas: Structural Recursion for Session Types}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {434--447}, doi = {10.1145/2951913.2951921}, year = {2016}, } Publisher's Version Article Search 
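The "bananas" of the title are the banana brackets of initial algebra semantics: structural recursion packaged as a catamorphism. The folklore construction in Haskell (generic sketch only; the paper's contribution is its session-typed, linear version):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Fixed point of a functor: the carrier of its initial algebra.
newtype Fix f = In (f (Fix f))

-- The catamorphism ("banana brackets"): fold any algebra over Fix f.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (In x) = alg (fmap (cata alg) x)

-- Lists of Ints as the fixed point of a base functor.
data ListF a r = NilF | ConsF a r deriving Functor

fromList :: [a] -> Fix (ListF a)
fromList = foldr (\x r -> In (ConsF x r)) (In NilF)

-- An algebra: summing a list.
sumAlg :: ListF Int Int -> Int
sumAlg NilF        = 0
sumAlg (ConsF x s) = x + s

main :: IO ()
main = print (cata sumAlg (fromList [1,2,3,4]))  -- prints 10
```

The paper extends GV with exactly this recursion scheme, which is what keeps the calculus terminating even with (positive) recursive session types.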

Lyu, YuHan 
ICFP '16: "Queueing and Glueing for Optimal ..."
Queueing and Glueing for Optimal Partitioning (Functional Pearl)
Shin-Cheng Mu, Yu-Hsi Chiang, and Yu-Han Lyu (Academia Sinica, Taiwan; National Taiwan University, Taiwan; Dartmouth College, USA) The queueing-glueing algorithm is the nickname we give to an algorithmic pattern that provides amortised linear time solutions to a number of optimal list partition problems that have a peculiar property: at various moments we know that two of three candidate solutions could be optimal. The algorithm works by keeping a queue of lists, glueing them from one end, while chopping from the other end, hence the name. We give a formal derivation of the algorithm, and demonstrate it with several nontrivial examples. @InProceedings{ICFP16p158, author = {Shin-Cheng Mu and Yu-Hsi Chiang and Yu-Han Lyu}, title = {Queueing and Glueing for Optimal Partitioning (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {158--167}, doi = {10.1145/2951913.2951923}, year = {2016}, } Publisher's Version Article Search Info 

Matsuzaki, Kiminori 
ICFP '16: "Think Like a Vertex, Behave ..."
Think Like a Vertex, Behave Like a Function! A Functional DSL for Vertex-Centric Big Graph Processing
Kento Emoto, Kiminori Matsuzaki, Zhenjiang Hu, Akimasa Morihata, and Hideya Iwasaki (Kyushu Institute of Technology, Japan; Kochi University of Technology, Japan; National Institute of Informatics, Japan; University of Tokyo, Japan; University of Electro-Communications, Japan) The vertex-centric programming model, known as “think like a vertex”, is being used more and more to support various big graph processing methods through iterative supersteps that execute in parallel a user-defined vertex program over each vertex of a graph. However, the imperative and message-passing style of existing systems makes defining a vertex program unintuitive. In this paper, we show that one can benefit more from “Thinking like a vertex” by “Behaving like a function” rather than “Acting like a procedure” with full use of side effects and explicit control of message passing, state, and termination. We propose a functional approach to vertex-centric graph processing in which the computation at every vertex is abstracted as a higher-order function and present Fregel, a new domain-specific language. Fregel has clear functional semantics, supports declarative description of vertex computation, and can be automatically translated into Pregel, an emerging imperative-style distributed graph processing framework, and thereby achieve promising performance. Experimental results for several typical examples show the promise of this functional approach. @InProceedings{ICFP16p200, author = {Kento Emoto and Kiminori Matsuzaki and Zhenjiang Hu and Akimasa Morihata and Hideya Iwasaki}, title = {Think Like a Vertex, Behave Like a Function! A Functional DSL for Vertex-Centric Big Graph Processing}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {200--213}, doi = {10.1145/2951913.2951938}, year = {2016}, } Publisher's Version Article Search 
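The "vertex as a pure function" idea can be modelled in miniature: each superstep maps every vertex's value and incoming messages to a new value, and the runtime iterates to a fixed point. Below, single-source shortest hop counts in that style. (An illustrative mini-model of the BSP/Pregel pattern, not Fregel's actual API.)

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

type Vertex = Int
type Graph  = Map Vertex [Vertex]  -- adjacency lists

-- One superstep: each vertex receives distance messages from its
-- in-neighbours and keeps the minimum, as a pure function.
step :: Graph -> Map Vertex Int -> Map Vertex Int
step g dist = Map.mapWithKey recv dist
  where
    msgs = Map.fromListWith min
             [ (w, d + 1) | (v, ws) <- Map.toList g
                          , let d = dist Map.! v, d < maxBound
                          , w <- ws ]
    recv v d = maybe d (min d) (Map.lookup v msgs)

-- Iterate supersteps until no vertex changes (termination detection).
sssp :: Graph -> Vertex -> Map Vertex Int
sssp g src = go (Map.insert src 0 (Map.map (const maxBound) g))
  where
    go d = let d' = step g d in if d' == d then d else go d'

main :: IO ()
main = print (sssp (Map.fromList [(1,[2,3]),(2,[4]),(3,[4]),(4,[])]) 1)
```

Making `step` a pure function of state and messages is exactly what lets a compiler like Fregel's translate it to the imperative, message-passing Pregel model mechanically.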

Maurer, Luke 
ICFP '16: "Sequent Calculus as a Compiler ..."
Sequent Calculus as a Compiler Intermediate Language
Paul Downen, Luke Maurer, Zena M. Ariola, and Simon Peyton Jones (University of Oregon, USA; Microsoft Research, UK) The λ-calculus is popular as an intermediate language for practical compilers. But in the world of logic it has a lesser-known twin, born at the same time, called the sequent calculus. Perhaps that would make for a good intermediate language, too? To explore this question we designed Sequent Core, a practically-oriented core calculus based on the sequent calculus, and used it to reimplement a substantial chunk of the Glasgow Haskell Compiler. @InProceedings{ICFP16p74, author = {Paul Downen and Luke Maurer and Zena M. Ariola and Simon Peyton Jones}, title = {Sequent Calculus as a Compiler Intermediate Language}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {74--88}, doi = {10.1145/2951913.2951931}, year = {2016}, } Publisher's Version Article Search Info 

McDonell, Trevor L. 
ICFP '16: "Ghostbuster: A Tool for Simplifying ..."
Ghostbuster: A Tool for Simplifying and Converting GADTs
Trevor L. McDonell, Timothy A. K. Zakian, Matteo Cimini, and Ryan R. Newton (Indiana University, USA; Oxford University, UK) Generalized Algebraic Datatypes, or simply GADTs, can encode nontrivial properties in the types of the constructors. Once such properties are encoded in a datatype, however, all code manipulating that datatype must provide proof that it maintains these properties in order to typecheck. In this paper, we take a step towards gradualizing these obligations. We introduce a tool, Ghostbuster, that produces simplified versions of GADTs which elide selected type parameters, thereby weakening the guarantees of the simplified datatype in exchange for reducing the obligations necessary to manipulate it. Like ornaments, these simplified datatypes preserve the recursive structure of the original, but unlike ornaments we focus on information-preserving bidirectional transformations. Ghostbuster generates type-safe conversion functions between the original and simplified datatypes, which we prove are the identity function when composed. We evaluate a prototype tool for Haskell against thousands of GADTs found on the Hackage package database, generating simpler Haskell'98 datatypes and round-trip conversion functions between the two. @InProceedings{ICFP16p338, author = {Trevor L. McDonell and Timothy A. K. Zakian and Matteo Cimini and Ryan R. Newton}, title = {Ghostbuster: A Tool for Simplifying and Converting GADTs}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {338--350}, doi = {10.1145/2951913.2951914}, year = {2016}, } Publisher's Version Article Search 
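The shape of what Ghostbuster generates can be shown by hand on a tiny example: a GADT indexed by its result type, a simplified datatype with the index elided, and round-trip conversions where the "up" direction must re-check the erased information. (A hand-written sketch of the pattern; the tool derives such functions automatically.)

```haskell
{-# LANGUAGE GADTs #-}

-- A GADT whose index tracks the expression's result type.
data Exp t where
  IntE  :: Int  -> Exp Int
  BoolE :: Bool -> Exp Bool
  IfE   :: Exp Bool -> Exp t -> Exp t -> Exp t

-- Simplified datatype: the type parameter 't' is elided.
data Exp' = IntE' Int | BoolE' Bool | IfE' Exp' Exp' Exp'
  deriving (Eq, Show)

-- Downward conversion always succeeds.
down :: Exp t -> Exp'
down (IntE n)    = IntE' n
down (BoolE b)   = BoolE' b
down (IfE c a b) = IfE' (down c) (down a) (down b)

-- Going back up must re-establish the erased index, so it can fail.
upInt :: Exp' -> Maybe (Exp Int)
upInt (IntE' n)    = Just (IntE n)
upInt (IfE' c a b) = IfE <$> upBool c <*> upInt a <*> upInt b
upInt _            = Nothing

upBool :: Exp' -> Maybe (Exp Bool)
upBool (BoolE' b)   = Just (BoolE b)
upBool (IfE' c a b) = IfE <$> upBool c <*> upBool a <*> upBool b
upBool _            = Nothing

main :: IO ()
main = print (fmap down (upInt (IfE' (BoolE' True) (IntE' 1) (IntE' 2))))
```

The round-trip property the paper proves corresponds here to `fmap down (upInt e') == Just e'` whenever `upInt` succeeds.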

Might, Matthew 
ICFP '16: "Allocation Characterizes Polyvariance: ..."
Allocation Characterizes Polyvariance: A Unified Methodology for Polyvariant Control-Flow Analysis
Thomas Gilray, Michael D. Adams, and Matthew Might (University of Utah, USA) The polyvariance of a static analysis is the degree to which it structurally differentiates approximations of program values. Polyvariant techniques come in a number of different flavors that represent alternative heuristics for managing the tradeoff an analysis strikes between precision and complexity. For example, call sensitivity supposes that values will tend to correlate with recent call sites, object sensitivity supposes that values will correlate with the allocation points of related objects, the Cartesian product algorithm supposes correlations between the values of arguments to the same function, and so forth. In this paper, we describe a unified methodology for implementing and understanding polyvariance in a higher-order setting (i.e., for control-flow analyses). We do this by extending the method of abstracting abstract machines (AAM), a systematic approach to producing an abstract interpretation of abstract-machine semantics. AAM eliminates recursion within a language’s semantics by passing around an explicit store, and thus places importance on the strategy an analysis uses for allocating abstract addresses within the abstract heap or store. We build on AAM by showing that the design space of possible abstract allocators exactly and uniquely corresponds to the design space of polyvariant strategies. This allows us to both unify and generalize polyvariance as tunings of a single function. Changes to the behavior of this function easily recapitulate classic styles of analysis and produce novel variations, combinations of techniques, and fundamentally new techniques. @InProceedings{ICFP16p407, author = {Thomas Gilray and Michael D. Adams and Matthew Might}, title = {Allocation Characterizes Polyvariance: A Unified Methodology for Polyvariant Control-Flow Analysis}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {407--420}, doi = {10.1145/2951913.2951936}, year = {2016}, } Publisher's Version Article Search 

Mohammed Ismail, Wazim 
ICFP '16: "Deriving a Probability Density ..."
Deriving a Probability Density Calculator (Functional Pearl)
Wazim Mohammed Ismail and Chung-chieh Shan (Indiana University, USA) Given an expression that denotes a probability distribution, often we want a corresponding density function, to use in probabilistic inference. Fortunately, the task of finding a density has been automated. It turns out that we can derive a compositional procedure for finding a density, by equational reasoning about integrals, starting with the mathematical specification of what a density is. Moreover, the density found can be run as an estimation algorithm, as well as simplified as an exact formula to improve the estimate. @InProceedings{ICFP16p47, author = {Wazim Mohammed Ismail and Chung-chieh Shan}, title = {Deriving a Probability Density Calculator (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {47--59}, doi = {10.1145/2951913.2951922}, year = {2016}, } Publisher's Version Article Search 
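The "run it as an estimator, or simplify it to an exact formula" contrast can be seen on one example: the density of Z = X + Y for X, Y ~ Uniform(0,1) is, by the defining specification, the integral of f_X(x) · f_Y(z - x) over x. Below that integral is both evaluated numerically and simplified on paper to the exact triangular density. (A generic worked example, not the paper's calculator.)

```haskell
-- Density of Uniform(0,1).
uni :: Double -> Double
uni x = if 0 <= x && x <= 1 then 1 else 0

-- "Run" the defining integral f_Z(z) = integral of uni x * uni (z-x) dx
-- numerically, by the midpoint rule over the support of X.
densityEst :: Double -> Double
densityEst z = sum [ uni x * uni (z - x) * h
                   | i <- [0 .. n - 1]
                   , let x = (fromIntegral i + 0.5) * h ]
  where n = 10000 :: Int
        h = 1 / fromIntegral n

-- The same integral simplified by hand: a triangle on [0,2].
densityExact :: Double -> Double
densityExact z
  | z < 0 || z > 2 = 0
  | z <= 1         = z
  | otherwise      = 2 - z

main :: IO ()
main = print [ (z, densityEst z, densityExact z) | z <- [0.5, 1.5] ]
```

The pearl's point is that both forms fall out of the same equational derivation: the unsimplified density is directly executable, and further simplification only sharpens the estimate.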

Morihata, Akimasa 
ICFP '16: "Think Like a Vertex, Behave ..."
Think Like a Vertex, Behave Like a Function! A Functional DSL for Vertex-Centric Big Graph Processing
Kento Emoto, Kiminori Matsuzaki, Zhenjiang Hu, Akimasa Morihata, and Hideya Iwasaki (Kyushu Institute of Technology, Japan; Kochi University of Technology, Japan; National Institute of Informatics, Japan; University of Tokyo, Japan; University of Electro-Communications, Japan) The vertex-centric programming model, known as “think like a vertex”, is being used more and more to support various big graph processing methods through iterative supersteps that execute in parallel a user-defined vertex program over each vertex of a graph. However, the imperative and message-passing style of existing systems makes defining a vertex program unintuitive. In this paper, we show that one can benefit more from “Thinking like a vertex” by “Behaving like a function” rather than “Acting like a procedure” with full use of side effects and explicit control of message passing, state, and termination. We propose a functional approach to vertex-centric graph processing in which the computation at every vertex is abstracted as a higher-order function and present Fregel, a new domain-specific language. Fregel has clear functional semantics, supports declarative description of vertex computation, and can be automatically translated into Pregel, an emerging imperative-style distributed graph processing framework, and thereby achieve promising performance. Experimental results for several typical examples show the promise of this functional approach. @InProceedings{ICFP16p200, author = {Kento Emoto and Kiminori Matsuzaki and Zhenjiang Hu and Akimasa Morihata and Hideya Iwasaki}, title = {Think Like a Vertex, Behave Like a Function! A Functional DSL for Vertex-Centric Big Graph Processing}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {200--213}, doi = {10.1145/2951913.2951938}, year = {2016}, } Publisher's Version Article Search 

Morris, J. Garrett 
ICFP '16: "The Best of Both Worlds: Linear ..."
The Best of Both Worlds: Linear Functional Programming without Compromise
J. Garrett Morris (University of Edinburgh, UK) We present a linear functional calculus with both the safety guarantees expressible with linear types and the rich language of combinators and composition provided by functional programming. Unlike previous combinations of linear typing and functional programming, we compromise neither the linear side (for example, our linear values are first-class citizens of the language) nor the functional side (for example, we do not require duplicate definitions of compositions for linear and unrestricted functions). To do so, we must generalize abstraction and application to encompass both linear and unrestricted functions. We capture the typing of the generalized constructs with a novel use of qualified types. Our system maintains the metatheoretic properties of the theory of qualified types, including principal types and decidable type inference. Finally, we give a formal basis for our claims of expressiveness, by showing that evaluation respects linearity, and that our language is a conservative extension of existing functional calculi. @InProceedings{ICFP16p448, author = {J. Garrett Morris}, title = {The Best of Both Worlds: Linear Functional Programming without Compromise}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {448--461}, doi = {10.1145/2951913.2951925}, year = {2016}, } Publisher's Version Article Search ICFP '16: "Talking Bananas: Structural ..." Talking Bananas: Structural Recursion for Session Types Sam Lindley and J. Garrett Morris (University of Edinburgh, UK) Session types provide static guarantees that concurrent programs respect communication protocols. We give a novel account of recursive session types in the context of GV, a small concurrent extension of the linear λ-calculus. We extend GV with recursive types and catamorphisms, following the initial algebra semantics of recursion, and show that doing so naturally gives rise to recursive session types. 
We show that this principled approach to recursion resolves long-standing problems in the treatment of duality for recursive session types. We characterize the expressiveness of GV concurrency by giving a CPS translation to (non-concurrent) λ-calculus and proving that reduction in GV is simulated by full reduction in λ-calculus. This shows that GV remains terminating in the presence of positive recursive types, and that such arguments extend to other extensions of GV, such as polymorphism or non-linear types, by appeal to normalization results for sequential λ-calculi. We also show that GV remains deadlock free and deterministic in the presence of recursive types. Finally, we extend CP, a session-typed process calculus based on linear logic, with recursive types, and show that doing so preserves the connection between reduction in GV and cut elimination in CP. @InProceedings{ICFP16p434, author = {Sam Lindley and J. Garrett Morris}, title = {Talking Bananas: Structural Recursion for Session Types}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {434--447}, doi = {10.1145/2951913.2951921}, year = {2016}, } Publisher's Version Article Search 

Mu, ShinCheng 
ICFP '16: "Queueing and Glueing for Optimal ..."
Queueing and Glueing for Optimal Partitioning (Functional Pearl)
Shin-Cheng Mu, Yu-Hsi Chiang, and Yu-Han Lyu (Academia Sinica, Taiwan; National Taiwan University, Taiwan; Dartmouth College, USA) The queueing-glueing algorithm is the nickname we give to an algorithmic pattern that provides amortised linear time solutions to a number of optimal list partition problems that have a peculiar property: at various moments we know that two of three candidate solutions could be optimal. The algorithm works by keeping a queue of lists, glueing them from one end, while chopping from the other end, hence the name. We give a formal derivation of the algorithm, and demonstrate it with several nontrivial examples. @InProceedings{ICFP16p158, author = {Shin-Cheng Mu and Yu-Hsi Chiang and Yu-Han Lyu}, title = {Queueing and Glueing for Optimal Partitioning (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {158--167}, doi = {10.1145/2951913.2951923}, year = {2016}, } Publisher's Version Article Search Info 

Muller, Stefan K. 
ICFP '16: "Hierarchical Memory Management ..."
Hierarchical Memory Management for Parallel Programs
Ram Raghunathan, Stefan K. Muller, Umut A. Acar, and Guy Blelloch (Carnegie Mellon University, USA; Inria, France) An important feature of functional programs is that they are parallel by default. Implementing an efficient parallel functional language, however, is a major challenge, in part because the high rate of allocation and freeing associated with functional programs requires an efficient and scalable memory manager. In this paper, we present a technique for parallel memory management for strict functional languages with nested parallelism. At the highest level of abstraction, the approach consists of a technique to organize memory as a hierarchy of heaps, and an algorithm for performing automatic memory reclamation by taking advantage of a disentanglement property of parallel functional programs. More specifically, the idea is to assign to each parallel task its own heap in memory and organize the heaps in a hierarchy/tree that mirrors the hierarchy of tasks. We present a nested-parallel calculus that specifies hierarchical heaps and prove in this calculus a disentanglement property, which prohibits a task from accessing objects allocated by another task that might execute in parallel. Leveraging the disentanglement property, we present a garbage collection technique that can operate on any subtree in the memory hierarchy concurrently as other tasks (and/or other collections) proceed in parallel. We prove the safety of this collector by formalizing it in the context of our parallel calculus. In addition, we describe how the proposed techniques can be implemented on modern shared-memory machines and present a prototype implementation as an extension to MLton, a high-performance compiler for the Standard ML language. Finally, we evaluate the performance of this implementation on a number of parallel benchmarks. @InProceedings{ICFP16p392, author = {Ram Raghunathan and Stefan K. Muller and Umut A. Acar and Guy Blelloch}, title = {Hierarchical Memory Management for Parallel Programs}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {392--406}, doi = {10.1145/2951913.2951935}, year = {2016}, } Publisher's Version Article Search 

Murray, Toby 
ICFP '16: "Refinement through Restraint: ..."
Refinement through Restraint: Bringing Down the Cost of Verification
Liam O'Connor, Zilin Chen, Christine Rizkallah, Sidney Amani, Japheth Lim, Toby Murray, Yutaka Nagashima, Thomas Sewell, and Gerwin Klein (UNSW, Australia; Data61, Australia; University of Pennsylvania, USA; University of Melbourne, Australia) We present a framework aimed at significantly reducing the cost of verifying certain classes of systems software, such as file systems. Our framework allows for equational reasoning about systems code written in our new language, Cogent. Cogent is a restricted, polymorphic, higher-order, and purely functional language with linear types and without the need for a trusted runtime or garbage collector. Linear types allow us to assign two semantics to the language: one imperative, suitable for efficient C code generation; and one functional, suitable for equational reasoning and verification. As Cogent is a restricted language, it is designed to easily interoperate with existing C functions and to connect to existing C verification frameworks. Our framework is based on certifying compilation: For a well-typed Cogent program, our compiler produces C code, a high-level shallow embedding of its semantics in Isabelle/HOL, and a proof that the C code correctly refines this embedding. Thus one can reason about the full semantics of real-world systems code productively and equationally, while retaining the interoperability and leanness of C. The compiler certificate is a series of language-level proofs and per-program translation validation phases, combined into one coherent top-level theorem in Isabelle/HOL. @InProceedings{ICFP16p89, author = {Liam O'Connor and Zilin Chen and Christine Rizkallah and Sidney Amani and Japheth Lim and Toby Murray and Yutaka Nagashima and Thomas Sewell and Gerwin Klein}, title = {Refinement through Restraint: Bringing Down the Cost of Verification}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {89--102}, doi = {10.1145/2951913.2951940}, year = {2016}, } Publisher's Version Article Search Info 

Myreen, Magnus O. 
ICFP '16: "A New Verified Compiler Backend ..."
A New Verified Compiler Backend for CakeML
Yong Kiam Tan, Magnus O. Myreen, Ramana Kumar, Anthony Fox, Scott Owens, and Michael Norrish (IHPC at A*STAR, Singapore; Chalmers University of Technology, Sweden; Data61 at CSIRO, Australia; UNSW, Australia; University of Cambridge, UK; University of Kent, UK; Australian National University, Australia) We have developed and mechanically verified a new compiler backend for CakeML. Our new compiler features a sequence of intermediate languages that allows it to incrementally compile away high-level features and enables verification at the right levels of semantic detail. In this way, it resembles mainstream (unverified) compilers for strict functional languages. The compiler supports efficient curried multi-argument functions, configurable data representations, exceptions that unwind the call stack, register allocation, and more. The compiler targets several architectures: x86-64, ARMv6, ARMv8, MIPS-64, and RISC-V. In this paper, we present the overall structure of the compiler, including its 12 intermediate languages, and explain how everything fits together. We focus particularly on the interaction between the verification of the register allocator and the garbage collector, and memory representations. The entire development has been carried out within the HOL4 theorem prover. @InProceedings{ICFP16p60, author = {Yong Kiam Tan and Magnus O. Myreen and Ramana Kumar and Anthony Fox and Scott Owens and Michael Norrish}, title = {A New Verified Compiler Backend for CakeML}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {60--73}, doi = {10.1145/2951913.2951924}, year = {2016}, } Publisher's Version Article Search 

Nagashima, Yutaka 
ICFP '16: "Refinement through Restraint: ..."
Refinement through Restraint: Bringing Down the Cost of Verification
Liam O'Connor, Zilin Chen, Christine Rizkallah, Sidney Amani, Japheth Lim, Toby Murray, Yutaka Nagashima, Thomas Sewell, and Gerwin Klein (UNSW, Australia; Data61, Australia; University of Pennsylvania, USA; University of Melbourne, Australia) We present a framework aimed at significantly reducing the cost of verifying certain classes of systems software, such as file systems. Our framework allows for equational reasoning about systems code written in our new language, Cogent. Cogent is a restricted, polymorphic, higher-order, and purely functional language with linear types and without the need for a trusted runtime or garbage collector. Linear types allow us to assign two semantics to the language: one imperative, suitable for efficient C code generation; and one functional, suitable for equational reasoning and verification. As Cogent is a restricted language, it is designed to easily interoperate with existing C functions and to connect to existing C verification frameworks. Our framework is based on certifying compilation: For a well-typed Cogent program, our compiler produces C code, a high-level shallow embedding of its semantics in Isabelle/HOL, and a proof that the C code correctly refines this embedding. Thus one can reason about the full semantics of real-world systems code productively and equationally, while retaining the interoperability and leanness of C. The compiler certificate is a series of language-level proofs and per-program translation validation phases, combined into one coherent top-level theorem in Isabelle/HOL. @InProceedings{ICFP16p89, author = {Liam O'Connor and Zilin Chen and Christine Rizkallah and Sidney Amani and Japheth Lim and Toby Murray and Yutaka Nagashima and Thomas Sewell and Gerwin Klein}, title = {Refinement through Restraint: Bringing Down the Cost of Verification}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {89--102}, doi = {10.1145/2951913.2951940}, year = {2016}, } Publisher's Version Article Search Info 

New, Max S. 
ICFP '16: "Fully Abstract Compilation ..."
Fully Abstract Compilation via Universal Embedding
Max S. New, William J. Bowman, and Amal Ahmed (Northeastern University, USA) A fully abstract compiler guarantees that two source components are observationally equivalent in the source language if and only if their translations are observationally equivalent in the target. Full abstraction implies the translation is secure: target-language attackers can make no more observations of a compiled component than a source-language attacker interacting with the original source component. Proving full abstraction for realistic compilers is challenging because realistic target languages contain features (such as control effects) unavailable in the source, while proofs of full abstraction require showing that every target context to which a compiled component may be linked can be back-translated to a behaviorally equivalent source context. We prove the first full abstraction result for a translation whose target language contains exceptions, but the source does not. Our translation, specifically closure conversion of simply typed λ-calculus with recursive types, uses types at the target level to ensure that a compiled component is never linked with attackers that have more distinguishing power than source-level attackers. We present a new back-translation technique based on a shallow embedding of the target language into the source language at a dynamic type. Then boundaries are inserted that mediate terms between the untyped embedding and the strongly-typed source. This technique allows back-translating non-terminating programs, target features that are untypeable in the source, and well-bracketed effects. @InProceedings{ICFP16p103, author = {Max S. New and William J. Bowman and Amal Ahmed}, title = {Fully Abstract Compilation via Universal Embedding}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {103--116}, doi = {10.1145/2951913.2951941}, year = {2016}, } Publisher's Version Article Search Info ICFP '16: "Oh Lord, Please Don't ..." 
Oh Lord, Please Don't Let Contracts Be Misunderstood (Functional Pearl) Christos Dimoulas, Max S. New, Robert Bruce Findler, and Matthias Felleisen (PLT, USA) Contracts feel misunderstood, especially those with a higher-order soul. While software engineers appreciate contracts as tools for articulating the interface between components, functional programmers desperately search for their types and meaning, completely forgetting about their pragmatics. This gem presents a novel analysis of contract systems. Applied to the higher-order kind, this analysis reveals their large and clearly unappreciated software engineering potential. Three sample applications illustrate where this kind of exploration may lead. @InProceedings{ICFP16p117, author = {Christos Dimoulas and Max S. New and Robert Bruce Findler and Matthias Felleisen}, title = {Oh Lord, Please Don't Let Contracts Be Misunderstood (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {117--131}, doi = {10.1145/2951913.2951930}, year = {2016}, } Publisher's Version Article Search 

Newton, Ryan R. 
ICFP '16: "Ghostbuster: A Tool for Simplifying ..."
Ghostbuster: A Tool for Simplifying and Converting GADTs
Trevor L. McDonell, Timothy A. K. Zakian, Matteo Cimini, and Ryan R. Newton (Indiana University, USA; Oxford University, UK) Generalized Algebraic Datatypes, or simply GADTs, can encode non-trivial properties in the types of the constructors. Once such properties are encoded in a datatype, however, all code manipulating that datatype must provide proof that it maintains these properties in order to type-check. In this paper, we take a step towards gradualizing these obligations. We introduce a tool, Ghostbuster, that produces simplified versions of GADTs which elide selected type parameters, thereby weakening the guarantees of the simplified datatype in exchange for reducing the obligations necessary to manipulate it. Like ornaments, these simplified datatypes preserve the recursive structure of the original, but unlike ornaments we focus on information-preserving bidirectional transformations. Ghostbuster generates type-safe conversion functions between the original and simplified datatypes, which we prove to be the identity function when composed. We evaluate a prototype tool for Haskell against thousands of GADTs found on the Hackage package database, generating simpler Haskell '98 datatypes and round-trip conversion functions between the two. @InProceedings{ICFP16p338, author = {Trevor L. McDonell and Timothy A. K. Zakian and Matteo Cimini and Ryan R. Newton}, title = {Ghostbuster: A Tool for Simplifying and Converting GADTs}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {338--350}, doi = {10.1145/2951913.2951914}, year = {2016}, } Publisher's Version Article Search 
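The simplification Ghostbuster performs can be pictured with a small hand-written Haskell sketch (hypothetical names, not the tool's actual output): a GADT indexed by its result type, the simpler datatype obtained by eliding that index, and round-trip conversions in which the "up" direction must re-check the erased invariant:

```haskell
{-# LANGUAGE GADTs #-}

-- Hand-written illustration (hypothetical, not Ghostbuster's output):
-- a GADT whose type parameter tracks the result type of an expression.
data Exp a where
  IntE  :: Int  -> Exp Int
  BoolE :: Bool -> Exp Bool
  Add   :: Exp Int -> Exp Int -> Exp Int
  If    :: Exp Bool -> Exp a -> Exp a -> Exp a

-- The simplified datatype elides the type parameter, weakening the
-- guarantee: ill-typed expressions are now representable.
data Exp' = IntE' Int | BoolE' Bool | Add' Exp' Exp' | If' Exp' Exp' Exp'
  deriving (Eq, Show)

-- The "down" conversion is total: erasing the index always succeeds.
down :: Exp a -> Exp'
down (IntE n)   = IntE' n
down (BoolE b)  = BoolE' b
down (Add l r)  = Add' (down l) (down r)
down (If c t e) = If' (down c) (down t) (down e)

-- The "up" conversion must re-check the erased index, so it is partial.
upInt :: Exp' -> Maybe (Exp Int)
upInt (IntE' n)   = Just (IntE n)
upInt (Add' l r)  = Add <$> upInt l <*> upInt r
upInt (If' c t e) = If <$> upBool c <*> upInt t <*> upInt e
upInt _           = Nothing

upBool :: Exp' -> Maybe (Exp Bool)
upBool (BoolE' b)  = Just (BoolE b)
upBool (If' c t e) = If <$> upBool c <*> upBool t <*> upBool e
upBool _           = Nothing
```

On any value in the image of `down`, the up-conversion succeeds and composing `down` after it gives back the input, which is the round-trip identity property the paper proves for its generated conversions.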

Nguyễn, Kim 
ICFP '16: "Set-Theoretic Types for Polymorphic ..."
Set-Theoretic Types for Polymorphic Variants
Giuseppe Castagna, Tommaso Petrucciani, and Kim Nguyễn (University of Paris Diderot, France; University of Genoa, Italy; University of Paris-Sud, France) Polymorphic variants are a useful feature of the OCaml language whose current definition and implementation rely on kinding constraints to simulate a subtyping relation via unification. This yields an awkward formalization and results in a type system whose behaviour is in some cases unintuitive and/or unduly restrictive. In this work, we present an alternative formalization of polymorphic variants, based on set-theoretic types and subtyping, that yields a cleaner and more streamlined system. Our formalization is more expressive than the current one (it types more programs while preserving type safety), it can internalize some meta-theoretic properties, and it removes some pathological cases of the current implementation, resulting in a more intuitive and, thus, predictable type system. More generally, this work shows how to add full-fledged union types to functional languages of the ML family that usually rely on the Hindley-Milner type system. As an aside, our system also improves the theory of semantic subtyping, notably by proving completeness for the type reconstruction algorithm. @InProceedings{ICFP16p378, author = {Giuseppe Castagna and Tommaso Petrucciani and Kim Nguyễn}, title = {Set-Theoretic Types for Polymorphic Variants}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {378--391}, doi = {10.1145/2951913.2951928}, year = {2016}, } Publisher's Version Article Search Info 

Norrish, Michael 
ICFP '16: "A New Verified Compiler Backend ..."
A New Verified Compiler Backend for CakeML
Yong Kiam Tan, Magnus O. Myreen, Ramana Kumar, Anthony Fox, Scott Owens, and Michael Norrish (IHPC at A*STAR, Singapore; Chalmers University of Technology, Sweden; Data61 at CSIRO, Australia; UNSW, Australia; University of Cambridge, UK; University of Kent, UK; Australian National University, Australia) We have developed and mechanically verified a new compiler backend for CakeML. Our new compiler features a sequence of intermediate languages that allows it to incrementally compile away high-level features and enables verification at the right levels of semantic detail. In this way, it resembles mainstream (unverified) compilers for strict functional languages. The compiler supports efficient curried multi-argument functions, configurable data representations, exceptions that unwind the call stack, register allocation, and more. The compiler targets several architectures: x86-64, ARMv6, ARMv8, MIPS-64, and RISC-V. In this paper, we present the overall structure of the compiler, including its 12 intermediate languages, and explain how everything fits together. We focus particularly on the interaction between the verification of the register allocator and the garbage collector, and memory representations. The entire development has been carried out within the HOL4 theorem prover. @InProceedings{ICFP16p60, author = {Yong Kiam Tan and Magnus O. Myreen and Ramana Kumar and Anthony Fox and Scott Owens and Michael Norrish}, title = {A New Verified Compiler Backend for CakeML}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {60--73}, doi = {10.1145/2951913.2951924}, year = {2016}, } Publisher's Version Article Search 

O'Connor, Liam 
ICFP '16: "Refinement through Restraint: ..."
Refinement through Restraint: Bringing Down the Cost of Verification
Liam O'Connor, Zilin Chen, Christine Rizkallah, Sidney Amani, Japheth Lim, Toby Murray, Yutaka Nagashima, Thomas Sewell, and Gerwin Klein (UNSW, Australia; Data61, Australia; University of Pennsylvania, USA; University of Melbourne, Australia) We present a framework aimed at significantly reducing the cost of verifying certain classes of systems software, such as file systems. Our framework allows for equational reasoning about systems code written in our new language, Cogent. Cogent is a restricted, polymorphic, higher-order, and purely functional language with linear types and without the need for a trusted runtime or garbage collector. Linear types allow us to assign two semantics to the language: one imperative, suitable for efficient C code generation; and one functional, suitable for equational reasoning and verification. As Cogent is a restricted language, it is designed to easily interoperate with existing C functions and to connect to existing C verification frameworks. Our framework is based on certifying compilation: for a well-typed Cogent program, our compiler produces C code, a high-level shallow embedding of its semantics in Isabelle/HOL, and a proof that the C code correctly refines this embedding. Thus one can reason about the full semantics of real-world systems code productively and equationally, while retaining the interoperability and leanness of C. The compiler certificate is a series of language-level proofs and per-program translation validation phases, combined into one coherent top-level theorem in Isabelle/HOL. @InProceedings{ICFP16p89, author = {Liam O'Connor and Zilin Chen and Christine Rizkallah and Sidney Amani and Japheth Lim and Toby Murray and Yutaka Nagashima and Thomas Sewell and Gerwin Klein}, title = {Refinement through Restraint: Bringing Down the Cost of Verification}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {89--102}, doi = {10.1145/2951913.2951940}, year = {2016}, } Publisher's Version Article Search Info 

Ohori, Atsushi 
ICFP '16: "A Fully Concurrent Garbage ..."
A Fully Concurrent Garbage Collector for Functional Programs on Multicore Processors
Katsuhiro Ueno and Atsushi Ohori (Tohoku University, Japan) This paper presents a concurrent garbage collection method for functional programs running on a multicore processor. It is a concurrent extension of our bitmap-marking non-moving collector with Yuasa's snapshot-at-the-beginning strategy. Our collector is unobtrusive in the sense of the Doligez-Leroy-Gonthier collector; the collector does not stop any mutator thread, nor does it force them to synchronize globally. The only critical sections between a mutator and the collector are the code to enqueue/dequeue a 32 kB allocation segment to/from a global segment list and the write barrier code to push an object pointer onto the collector's stack. Most of these data structures can be implemented as standard lock-free data structures. This achieves both efficient allocation and unobtrusive collection in a multicore system. The proposed method has been implemented in SML#, a full-scale Standard ML compiler supporting multiple native threads on multicore CPUs. Our benchmark tests show drastically shorter pause times with reasonably low overhead compared to the sequential bitmap-marking collector. @InProceedings{ICFP16p421, author = {Katsuhiro Ueno and Atsushi Ohori}, title = {A Fully Concurrent Garbage Collector for Functional Programs on Multicore Processors}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {421--433}, doi = {10.1145/2951913.2951944}, year = {2016}, } Publisher's Version Article Search 

Oliveira, Bruno C. d. S. 
ICFP '16: "Disjoint Intersection Types ..."
Disjoint Intersection Types
Bruno C. d. S. Oliveira, Zhiyuan Shi, and João Alpuim (University of Hong Kong, China) Dunfield showed that a simply typed core calculus with intersection types and a merge operator is able to capture various programming language features. While his calculus is type-safe, it is not coherent: different derivations for the same expression can elaborate to expressions that evaluate to different values. The lack of coherence is an important disadvantage for adoption of his core calculus in implementations of programming languages, as the semantics of the programming language becomes implementation-dependent. This paper presents λ_i: a coherent and type-safe calculus with a form of intersection types and a merge operator. Coherence is achieved by ensuring that intersection types are disjoint and programs are sufficiently annotated to avoid type ambiguity. We propose a definition of disjointness where two types A and B are disjoint only if a certain set of types are common supertypes of A and B. We investigate three different variants of λ_i, with three variants of disjointness. In the simplest variant, which does not allow ⊤ types, two types are disjoint if they do not share any common supertypes at all. The other two variants introduce ⊤ types and refine the notion of disjointness to allow two types to be disjoint when their only common supertypes are top-like. The difference between the two variants with ⊤ types lies in the definition of top-like types, which has an impact on which types are allowed in intersections. We present a type system that prevents intersection types that are not disjoint, as well as algorithmic specifications to determine whether two types are disjoint, for all three variants. @InProceedings{ICFP16p364, author = {Bruno C. d. S. Oliveira and Zhiyuan Shi and João Alpuim}, title = {Disjoint Intersection Types}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {364--377}, doi = {10.1145/2951913.2951945}, year = {2016}, } Publisher's Version Article Search 

Orchard, Dominic 
ICFP '16: "Combining Effects and Coeffects ..."
Combining Effects and Coeffects via Grading
Marco Gaboardi, Shinya Katsumata, Dominic Orchard, Flavien Breuvart, and Tarmo Uustalu (SUNY Buffalo, USA; Kyoto University, Japan; University of Cambridge, UK; University of Kent, UK; Inria, France; Tallinn University of Technology, Estonia) Effects and coeffects are two general, complementary aspects of program behaviour. They roughly correspond to computations which change the execution context (effects) versus computations which make demands on the context (coeffects). Effectful features include partiality, nondeterminism, input-output, state, and exceptions. Coeffectful features include resource demands, variable access, notions of linearity, and data input requirements. The effectful or coeffectful behaviour of a program can be captured and described via type-based analyses, with fine-grained information provided by monoidal effect annotations and semiring coeffects. Various recent work has proposed models for such typed calculi in terms of graded (strong) monads for effects and graded (monoidal) comonads for coeffects. Effects and coeffects have been studied separately so far, but in practice many computations are both effectful and coeffectful, e.g., possibly throwing exceptions but with resource requirements. To remedy this, we introduce a new general calculus with a combined effect-coeffect system. This can describe both the changes and requirements that a program has on its context, as well as interactions between these effectful and coeffectful features of computation. The effect-coeffect system has a denotational model in terms of effect-graded monads and coeffect-graded comonads, where interaction is expressed via the novel concept of graded distributive laws. This graded semantics unifies the syntactic type theory with the denotational model. We show that our calculus can be instantiated to describe in a natural way various different kinds of interaction between a program and its evaluation context. 
@InProceedings{ICFP16p476, author = {Marco Gaboardi and Shinya Katsumata and Dominic Orchard and Flavien Breuvart and Tarmo Uustalu}, title = {Combining Effects and Coeffects via Grading}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {476--489}, doi = {10.1145/2951913.2951939}, year = {2016}, } Publisher's Version Article Search 

Owens, Scott 
ICFP '16: "A New Verified Compiler Backend ..."
A New Verified Compiler Backend for CakeML
Yong Kiam Tan, Magnus O. Myreen, Ramana Kumar, Anthony Fox, Scott Owens, and Michael Norrish (IHPC at A*STAR, Singapore; Chalmers University of Technology, Sweden; Data61 at CSIRO, Australia; UNSW, Australia; University of Cambridge, UK; University of Kent, UK; Australian National University, Australia) We have developed and mechanically verified a new compiler backend for CakeML. Our new compiler features a sequence of intermediate languages that allows it to incrementally compile away high-level features and enables verification at the right levels of semantic detail. In this way, it resembles mainstream (unverified) compilers for strict functional languages. The compiler supports efficient curried multi-argument functions, configurable data representations, exceptions that unwind the call stack, register allocation, and more. The compiler targets several architectures: x86-64, ARMv6, ARMv8, MIPS-64, and RISC-V. In this paper, we present the overall structure of the compiler, including its 12 intermediate languages, and explain how everything fits together. We focus particularly on the interaction between the verification of the register allocator and the garbage collector, and memory representations. The entire development has been carried out within the HOL4 theorem prover. @InProceedings{ICFP16p60, author = {Yong Kiam Tan and Magnus O. Myreen and Ramana Kumar and Anthony Fox and Scott Owens and Michael Norrish}, title = {A New Verified Compiler Backend for CakeML}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {60--73}, doi = {10.1145/2951913.2951924}, year = {2016}, } Publisher's Version Article Search 

Paraskevopoulou, Zoe 
ICFP '16: "A Type Theory for Incremental ..."
A Type Theory for Incremental Computational Complexity with Control Flow Changes
Ezgi Çiçek, Zoe Paraskevopoulou, and Deepak Garg (MPI-SWS, Germany; Princeton University, USA) Incremental computation aims to speed up re-runs of a program after its inputs have been modified slightly. It works by recording a trace of the program's first run and propagating changes through the trace in incremental runs, trying to reuse as much of the original trace as possible. The recent work CostIt is a type and effect system to establish the time complexity of incremental runs of a program, as a function of input changes. However, CostIt is limited in two ways. First, it prohibits input changes that influence control flow. This makes it impossible to type programs that, for instance, branch on inputs that may change. Second, the soundness of CostIt is proved relative to an abstract cost semantics, but it is unclear how the semantics can be realized. In this paper, we address both these limitations. We present DuCostIt, a redesign of CostIt that combines reasoning about the costs of change propagation and the costs of from-scratch evaluation. The latter lifts the restriction on control flow changes. To obtain the type system, we refine Flow Caml, a type system for information flow analysis, with cost effects. Additionally, we inherit from CostIt index refinements to track data structure sizes and a comonadic type. Using a combination of binary and unary step-indexed logical relations, we prove DuCostIt's cost analysis sound relative to not only an abstract cost semantics, but also a concrete semantics, which is obtained by translation to an ML-like language. @InProceedings{ICFP16p132, author = {Ezgi Çiçek and Zoe Paraskevopoulou and Deepak Garg}, title = {A Type Theory for Incremental Computational Complexity with Control Flow Changes}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {132--145}, doi = {10.1145/2951913.2951950}, year = {2016}, } Publisher's Version Article Search 

Petrucciani, Tommaso 
ICFP '16: "Set-Theoretic Types for Polymorphic ..."
Set-Theoretic Types for Polymorphic Variants
Giuseppe Castagna, Tommaso Petrucciani, and Kim Nguyễn (University of Paris Diderot, France; University of Genoa, Italy; University of Paris-Sud, France) Polymorphic variants are a useful feature of the OCaml language whose current definition and implementation rely on kinding constraints to simulate a subtyping relation via unification. This yields an awkward formalization and results in a type system whose behaviour is in some cases unintuitive and/or unduly restrictive. In this work, we present an alternative formalization of polymorphic variants, based on set-theoretic types and subtyping, that yields a cleaner and more streamlined system. Our formalization is more expressive than the current one (it types more programs while preserving type safety), it can internalize some meta-theoretic properties, and it removes some pathological cases of the current implementation, resulting in a more intuitive and, thus, predictable type system. More generally, this work shows how to add full-fledged union types to functional languages of the ML family that usually rely on the Hindley-Milner type system. As an aside, our system also improves the theory of semantic subtyping, notably by proving completeness for the type reconstruction algorithm. @InProceedings{ICFP16p378, author = {Giuseppe Castagna and Tommaso Petrucciani and Kim Nguyễn}, title = {Set-Theoretic Types for Polymorphic Variants}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {378--391}, doi = {10.1145/2951913.2951928}, year = {2016}, } Publisher's Version Article Search Info 

Peyton Jones, Simon 
ICFP '16: "Sequent Calculus as a Compiler ..."
Sequent Calculus as a Compiler Intermediate Language
Paul Downen, Luke Maurer, Zena M. Ariola, and Simon Peyton Jones (University of Oregon, USA; Microsoft Research, UK) The λ-calculus is popular as an intermediate language for practical compilers. But in the world of logic it has a lesser-known twin, born at the same time, called the sequent calculus. Perhaps that would make for a good intermediate language, too? To explore this question we designed Sequent Core, a practically oriented core calculus based on the sequent calculus, and used it to reimplement a substantial chunk of the Glasgow Haskell Compiler. @InProceedings{ICFP16p74, author = {Paul Downen and Luke Maurer and Zena M. Ariola and Simon Peyton Jones}, title = {Sequent Calculus as a Compiler Intermediate Language}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {74--88}, doi = {10.1145/2951913.2951931}, year = {2016}, } Publisher's Version Article Search Info 

Pichardie, David 
ICFP '16: "An Abstract Memory Functor ..."
An Abstract Memory Functor for Verified C Static Analyzers
Sandrine Blazy, Vincent Laporte, and David Pichardie (University of Rennes 1, France; IRISA, France; IMDEA Software Institute, Spain; ENS Rennes, France) Abstract interpretation provides advanced techniques to infer numerical invariants on programs. There is an abundant literature about numerical abstract domains that operate on scalar variables. This work deals with lifting these techniques to a realistic C memory model. We present an abstract memory functor that takes as argument any standard numerical abstract domain, and builds a memory abstract domain that finely tracks properties about memory contents, taking into account union types, pointer arithmetic, and type casts. This functor is implemented and verified inside the Coq proof assistant with respect to the CompCert compiler memory model. Using the Coq extraction mechanism, it is fully executable and used by the Verasco C static analyzer. @InProceedings{ICFP16p325, author = {Sandrine Blazy and Vincent Laporte and David Pichardie}, title = {An Abstract Memory Functor for Verified C Static Analyzers}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {325--337}, doi = {10.1145/2951913.2951937}, year = {2016}, } Publisher's Version Article Search Info 

Pientka, Brigitte 
ICFP '16: "Indexed Codata Types ..."
Indexed Codata Types
David Thibodeau, Andrew Cave, and Brigitte Pientka (McGill University, Canada) Indexed data types allow us to specify and verify many interesting invariants about finite data in a general-purpose programming language. In this paper we investigate the dual idea: indexed codata types, which allow us to describe data dependencies about infinite data structures. Unlike finite data, which is defined by constructors, we define infinite data by observations. Dual to pattern matching on indexed data, which may refine the type indices, we define copattern matching on indexed codata, where type indices guard the observations we can make. Our key technical contributions are threefold: first, we extend Levy's call-by-push-value language with support for indexed (co)data and deep (co)pattern matching; second, we provide a clean foundation for dependent (co)pattern matching using equality constraints; third, we describe a small-step semantics using a continuation-based abstract machine, define coverage for indexed (co)patterns, and prove type safety. This is an important step towards building a foundation where (co)data type definitions and dependent types can coexist. @InProceedings{ICFP16p351, author = {David Thibodeau and Andrew Cave and Brigitte Pientka}, title = {Indexed Codata Types}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {351--363}, doi = {10.1145/2951913.2951929}, year = {2016}, } Publisher's Version Article Search 
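As a rough intuition for "data defined by observations", here is a plain, non-indexed Haskell sketch (Haskell has neither copatterns nor the paper's type-index refinements, so a record of observations stands in for copattern clauses):

```haskell
-- A stream is given by what you can observe of it: its head and its tail.
-- In a copattern language one would write one clause per observation;
-- here a Haskell record of observations approximates that style.
data Stream a = Stream { shead :: a, stail :: Stream a }

-- Defined entirely by saying what each observation returns.
nats :: Int -> Stream Int
nats n = Stream { shead = n, stail = nats (n + 1) }

-- Observing finitely many elements of an infinite stream.
takeS :: Int -> Stream a -> [a]
takeS n s
  | n <= 0    = []
  | otherwise = shead s : takeS (n - 1) (stail s)
```

For example, `takeS 3 (nats 0)` observes the first three elements, `[0,1,2]`. The paper's indexed codata types additionally let the type index of the stream constrain which observations are permitted.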

Piessens, Frank 
ICFP '16: "Unifiers as Equivalences: ..."
Unifiers as Equivalences: Proof-Relevant Unification of Dependently Typed Data
Jesper Cockx, Dominique Devriese, and Frank Piessens (iMinds, Belgium; KU Leuven, Belgium) Dependently typed languages such as Agda, Coq and Idris use a syntactic first-order unification algorithm to check definitions by dependent pattern matching. However, these algorithms don't adequately consider the types of the terms being unified, leading to various unintended results. As a consequence, they require ad hoc restrictions to preserve soundness, but this makes them very hard to prove correct, modify, or extend. This paper proposes a framework for reasoning formally about unification in a dependently typed setting. In this framework, unification rules compute not just a unifier but also a corresponding correctness proof in the form of an equivalence between two sets of equations. By rephrasing the standard unification rules in a proof-relevant manner, they are guaranteed to preserve soundness of the theory. In addition, it enables us to safely add new rules that can exploit the dependencies between the types of equations. Using our framework, we reimplemented the unification algorithm used by Agda. As a result, we were able to replace previous ad hoc restrictions with formally verified unification rules, fixing a number of bugs in the process. We are convinced this will also enable the addition of new and interesting unification rules in the future, without compromising soundness along the way. @InProceedings{ICFP16p270, author = {Jesper Cockx and Dominique Devriese and Frank Piessens}, title = {Unifiers as Equivalences: Proof-Relevant Unification of Dependently Typed Data}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {270--283}, doi = {10.1145/2951913.2951917}, year = {2016}, } Publisher's Version Article Search 
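For orientation, the purely syntactic first-order unification that such checkers start from can be sketched in Haskell (illustrative code with hypothetical names, unrelated to Agda's actual implementation; the paper's contribution is the proof-relevant, type-aware refinement of such rules):

```haskell
import Control.Monad (foldM)
import qualified Data.Map as M

-- Plain syntactic first-order unification over a tiny term language.
data Term = Var String | Con String [Term] deriving (Eq, Show)

type Subst = M.Map String Term

-- Apply a substitution, chasing chains of variable bindings.
apply :: Subst -> Term -> Term
apply s (Var v)    = maybe (Var v) (apply s) (M.lookup v s)
apply s (Con c ts) = Con c (map (apply s) ts)

-- Occurs check: binding v to a term containing v would build an
-- infinite term.
occurs :: String -> Term -> Bool
occurs v (Var w)    = v == w
occurs v (Con _ ts) = any (occurs v) ts

-- Extend the substitution s so that t and u become equal, if possible.
unify :: Term -> Term -> Subst -> Maybe Subst
unify t u s = go (apply s t) (apply s u)
  where
    go (Var v) (Var w) | v == w           = Just s
    go (Var v) u'      | not (occurs v u') = Just (M.insert v u' s)
    go t'      (Var w) | not (occurs w t') = Just (M.insert w t' s)
    go (Con c ts) (Con d us)
      | c == d && length ts == length us =
          foldM (\s' (a, b) -> unify a b s') s (zip ts us)
    go _ _ = Nothing
```

Each successful rule application here returns only a unifier; in the paper's framework each rule additionally returns an equivalence witnessing its own correctness.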

Piróg, Maciej 
ICFP '16: "String Diagrams for Free Monads ..."
String Diagrams for Free Monads (Functional Pearl)
Maciej Piróg and Nicolas Wu (KU Leuven, Belgium; University of Bristol, UK) We show how one can reason about free monads using their universal properties rather than any concrete implementation. We introduce a graphical, two-dimensional calculus tailor-made to accommodate these properties. @InProceedings{ICFP16p490, author = {Maciej Piróg and Nicolas Wu}, title = {String Diagrams for Free Monads (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {490--501}, doi = {10.1145/2951913.2951947}, year = {2016}, } Publisher's Version Article Search 
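The universal property in question can be stated concretely in Haskell (a standard construction, not the paper's diagrammatic calculus): any interpretation of the signature functor `f` into a monad `m` extends to a monad morphism out of `Free f`, and `foldFree` is that extension:

```haskell
{-# LANGUAGE RankNTypes #-}
import Control.Monad (ap)

-- The free monad over a functor f: trees of f-shaped operations with
-- values at the leaves.
data Free f a = Pure a | Op (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a) = Pure (g a)
  fmap g (Op fa)  = Op (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure  = Pure
  (<*>) = ap

instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Op fa  >>= k = Op (fmap (>>= k) fa)

-- The universal property: an interpretation of f into any monad m
-- extends (uniquely) to a monad morphism out of Free f.
foldFree :: (Functor f, Monad m) => (forall x. f x -> m x) -> Free f a -> m a
foldFree _ (Pure a) = pure a
foldFree h (Op fa)  = h fa >>= foldFree h

-- A hypothetical one-operation signature: read an Int from the environment.
newtype Get k = Get (Int -> k)
instance Functor Get where fmap g (Get f) = Get (g . f)

get :: Free Get Int
get = Op (Get Pure)

-- Interpreting Get into the reader monad ((->) Int).
run :: Free Get a -> Int -> a
run = foldFree (\(Get f) -> f)
```

For instance, `run (do x <- get; y <- get; pure (x + y)) 5` evaluates to `10`. The pearl reasons about `foldFree` and friends purely from this universal property, drawn as string diagrams, without appealing to the datatype above.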

Prunet, Vincent 
ICFP '16: "A Glimpse of Hopjs ..."
A Glimpse of Hopjs
Manuel Serrano and Vincent Prunet (Inria, France) Hop.js is a multitier programming environment for JavaScript. It allows a single JavaScript program to describe the client-side and the server-side components of a web application. Its runtime environment ensures consistent executions of the application on the server and on the client. This paper overviews the Hop.js design. It shows the JavaScript extensions that make it possible to conceive web applications globally. It presents how Hop.js interacts with the outside world. Finally, it briefly presents the Hop.js implementation: the web server, the handling of server-side parallelism, and the JavaScript and HTML compilers. @InProceedings{ICFP16p180, author = {Manuel Serrano and Vincent Prunet}, title = {A Glimpse of Hopjs}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {180--192}, doi = {10.1145/2951913.2951916}, year = {2016}, } Publisher's Version Article Search Info 

Raghunathan, Ram 
ICFP '16: "Hierarchical Memory Management ..."
Hierarchical Memory Management for Parallel Programs
Ram Raghunathan, Stefan K. Muller, Umut A. Acar, and Guy Blelloch (Carnegie Mellon University, USA; Inria, France) An important feature of functional programs is that they are parallel by default. Implementing an efficient parallel functional language, however, is a major challenge, in part because the high rate of allocation and freeing associated with functional programs requires an efficient and scalable memory manager. In this paper, we present a technique for parallel memory management for strict functional languages with nested parallelism. At the highest level of abstraction, the approach consists of a technique to organize memory as a hierarchy of heaps, and an algorithm for performing automatic memory reclamation by taking advantage of a disentanglement property of parallel functional programs. More specifically, the idea is to assign to each parallel task its own heap in memory and organize the heaps in a hierarchy/tree that mirrors the hierarchy of tasks. We present a nested-parallel calculus that specifies hierarchical heaps and prove in this calculus a disentanglement property, which prohibits a task from accessing objects allocated by another task that might execute in parallel. Leveraging the disentanglement property, we present a garbage collection technique that can operate on any subtree in the memory hierarchy concurrently as other tasks (and/or other collections) proceed in parallel. We prove the safety of this collector by formalizing it in the context of our parallel calculus. In addition, we describe how the proposed techniques can be implemented on modern shared-memory machines and present a prototype implementation as an extension to MLton, a high-performance compiler for the Standard ML language. Finally, we evaluate the performance of this implementation on a number of parallel benchmarks. @InProceedings{ICFP16p392, author = {Ram Raghunathan and Stefan K. Muller and Umut A. Acar and Guy Blelloch}, title = {Hierarchical Memory Management for Parallel Programs}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {392--406}, doi = {10.1145/2951913.2951935}, year = {2016}, } Publisher's Version Article Search 

Rainey, Mike 
ICFP '16: "Dag-Calculus: A Calculus for ..."
Dag-Calculus: A Calculus for Parallel Computation
Umut A. Acar, Arthur Charguéraud, Mike Rainey, and Filip Sieczkowski (Carnegie Mellon University, USA; Inria, France) Increasing availability of multicore systems has led to greater focus on the design and implementation of languages for writing parallel programs. Such languages support various abstractions for parallelism, such as fork-join, async-finish, and futures. While they may seem similar, these abstractions lead to different semantics, language design and implementation decisions, and can significantly impact the performance of end-user applications. In this paper, we consider the question of whether it would be possible to unify various paradigms of parallel computing. To this end, we propose a calculus, called dag calculus, that can encode fork-join, async-finish, futures, and possibly others. We describe dag calculus and its semantics, and establish translations from the aforementioned paradigms into dag calculus. These translations establish that dag calculus is sufficiently powerful for encoding programs written in prevailing paradigms of parallelism. We present concurrent algorithms and data structures for realizing dag calculus on multicore hardware and prove that the proposed techniques are consistent with the semantics. Finally, we present an implementation of the calculus and evaluate it empirically by comparing its performance to highly optimized code from prior work. The results show that the calculus is expressive and that it competes well with, and sometimes outperforms, the state of the art. @InProceedings{ICFP16p18, author = {Umut A. Acar and Arthur Charguéraud and Mike Rainey and Filip Sieczkowski}, title = {Dag-Calculus: A Calculus for Parallel Computation}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {18--32}, doi = {10.1145/2951913.2951946}, year = {2016}, } Publisher's Version Article Search 

Rizkallah, Christine 
ICFP '16: "Refinement through Restraint: ..."
Refinement through Restraint: Bringing Down the Cost of Verification
Liam O'Connor, Zilin Chen, Christine Rizkallah, Sidney Amani, Japheth Lim, Toby Murray, Yutaka Nagashima, Thomas Sewell, and Gerwin Klein (UNSW, Australia; Data61, Australia; University of Pennsylvania, USA; University of Melbourne, Australia) We present a framework aimed at significantly reducing the cost of verifying certain classes of systems software, such as file systems. Our framework allows for equational reasoning about systems code written in our new language, Cogent. Cogent is a restricted, polymorphic, higher-order, and purely functional language with linear types and without the need for a trusted runtime or garbage collector. Linear types allow us to assign two semantics to the language: one imperative, suitable for efficient C code generation; and one functional, suitable for equational reasoning and verification. As Cogent is a restricted language, it is designed to easily interoperate with existing C functions and to connect to existing C verification frameworks. Our framework is based on certifying compilation: for a well-typed Cogent program, our compiler produces C code, a high-level shallow embedding of its semantics in Isabelle/HOL, and a proof that the C code correctly refines this embedding. Thus one can reason about the full semantics of real-world systems code productively and equationally, while retaining the interoperability and leanness of C. The compiler certificate is a series of language-level proofs and per-program translation validation phases, combined into one coherent top-level theorem in Isabelle/HOL. @InProceedings{ICFP16p89, author = {Liam O'Connor and Zilin Chen and Christine Rizkallah and Sidney Amani and Japheth Lim and Toby Murray and Yutaka Nagashima and Thomas Sewell and Gerwin Klein}, title = {Refinement through Restraint: Bringing Down the Cost of Verification}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {89--102}, doi = {10.1145/2951913.2951940}, year = {2016}, } Publisher's Version Article Search Info 

Ryu, Sukyoung 
ICFP '16: "Journey to Find Bugs in JavaScript ..."
Journey to Find Bugs in JavaScript Web Applications in the Wild
Sukyoung Ryu (KAIST, South Korea) Analyzing real-world JavaScript web applications is a challenging task. On top of understanding the semantics of JavaScript, it requires modeling of web documents, platform objects, and interactions between them. Not only the JavaScript language itself but also its usage patterns are extremely dynamic. JavaScript can generate code and run it during evaluation, and most web applications load JavaScript code dynamically. Such dynamic characteristics of JavaScript web applications make pure static analysis approaches inapplicable. In this talk, we present our attempts to analyze JavaScript web applications in the wild mostly statically using various approaches. From pure JavaScript programs to JavaScript web applications using platform-specific libraries and dynamic code loading, we explain technical challenges in analyzing each of them and how we built an open-source analysis framework for JavaScript, SAFE, that addresses the challenges incrementally. In spite of active research accomplishments in analysis of JavaScript web applications, many issues still remain to be resolved, such as events, callback functions, and hybrid web applications. We discuss possible future research directions and open challenges. @InProceedings{ICFP16p2, author = {Sukyoung Ryu}, title = {Journey to Find Bugs in JavaScript Web Applications in the Wild}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {2--2}, doi = {10.1145/2951913.2976747}, year = {2016}, }

Sarkar, Susmit 
ICFP '16: "Farms, Pipes, Streams and ..."
Farms, Pipes, Streams and Reforestation: Reasoning about Structured Parallel Processes using Types and Hylomorphisms
David Castro, Kevin Hammond, and Susmit Sarkar (University of St. Andrews, UK) The increasing importance of parallelism has motivated the creation of better abstractions for writing parallel software, including structured parallelism using nested algorithmic skeletons. Such approaches provide high-level abstractions that avoid common problems, such as race conditions, and often allow strong cost models to be defined. However, choosing a combination of algorithmic skeletons that yields good parallel speedups for a program on some specific parallel architecture remains a difficult task. In order to achieve this, it is necessary to reason simultaneously both about the costs of different parallel structures and about the semantic equivalences between them. This paper presents a new type-based mechanism that enables strong static reasoning about these properties. We exploit well-known properties of a very general recursion pattern, hylomorphisms, and give a denotational semantics for structured parallel processes in terms of these hylomorphisms. Using our approach, it is possible to determine formally whether a desired parallel structure can be introduced into a program without altering its functional behaviour, and also to choose a version of that parallel structure that minimises some given cost model. @InProceedings{ICFP16p4, author = {David Castro and Kevin Hammond and Susmit Sarkar}, title = {Farms, Pipes, Streams and Reforestation: Reasoning about Structured Parallel Processes using Types and Hylomorphisms}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {4--17}, doi = {10.1145/2951913.2951920}, year = {2016}, }
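The general recursion pattern the paper builds on, the hylomorphism, can be sketched in a few lines. A hand-rolled illustration (not the paper's formalism), specialised here to list-shaped recursion:

```python
# A hylomorphism fuses an unfold (coalgebra) with a fold (algebra);
# hylo(alg, coalg) never materialises the intermediate structure.
def hylo(alg, coalg):
    def run(seed):
        node = coalg(seed)                 # unfold one layer
        if node is None:                   # empty layer: base case
            return alg(None)
        head, rest = node
        return alg((head, run(rest)))      # fold the recursively processed layer
    return run

# Example: factorial as a hylomorphism over the list functor.
# Unfold n into layers n, n-1, ..., 1, then fold with multiplication.
def coalg(n):
    return None if n == 0 else (n, n - 1)

def alg(node):
    if node is None:
        return 1
    head, result = node
    return head * result

factorial = hylo(alg, coalg)
print(factorial(5))  # 120
```

Structured parallel skeletons such as farms and pipes can be given semantics as hylomorphisms of this shape, which is what lets the paper reason equationally about introducing them.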

Sato, Ryosuke 
ICFP '16: "Automatically Disproving Fair ..."
Automatically Disproving Fair Termination of Higher-Order Functional Programs
Keiichi Watanabe, Ryosuke Sato, Takeshi Tsukada, and Naoki Kobayashi (University of Tokyo, Japan) We propose an automated method for disproving fair termination of higher-order functional programs, which is complementary to Murase et al.’s recent method for proving fair termination. A program is said to be fair terminating if it has no infinite execution trace that satisfies a given fairness constraint. Fair termination is an important property because program verification problems for arbitrary ω-regular temporal properties can be transformed to those of fair termination. Our method reduces the problem of disproving fair termination to higher-order model checking by using predicate abstraction and CEGAR. Given a program, we convert it to an abstract program that generates an approximation of the (possibly infinite) execution traces of the original program, so that the original program has a fair infinite execution trace if the tree generated by the abstract program satisfies a certain property. The method is a non-trivial extension of Kuwahara et al.’s method for disproving plain termination. @InProceedings{ICFP16p243, author = {Keiichi Watanabe and Ryosuke Sato and Takeshi Tsukada and Naoki Kobayashi}, title = {Automatically Disproving Fair Termination of Higher-Order Functional Programs}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {243--255}, doi = {10.1145/2951913.2951919}, year = {2016}, }

Seidel, Eric L. 
ICFP '16: "Dynamic Witnesses for Static ..."
Dynamic Witnesses for Static Type Errors (or, Ill-Typed Programs Usually Go Wrong)
Eric L. Seidel, Ranjit Jhala, and Westley Weimer (University of California at San Diego, USA; University of Virginia, USA) Static type errors are a common stumbling block for newcomers to typed functional languages. We present a dynamic approach to explaining type errors by generating counterexample witness inputs that illustrate how an ill-typed program goes wrong. First, given an ill-typed function, we symbolically execute the body to synthesize witness values that make the program go wrong. We prove that our procedure synthesizes general witnesses, in the sense that if a witness is found, then for all inhabited input types, there exist values that can make the function go wrong. Second, we show how to extend the above procedure to produce a reduction graph that can be used to interactively visualize and debug witness executions. Third, we evaluate the coverage of our approach on two data sets comprising over 4,500 ill-typed student programs. Our technique is able to generate witnesses for 88% of the programs, and our reduction graph yields small counterexamples for 81% of the witnesses. Finally, we evaluate whether our witnesses help students understand and fix type errors, and find that students presented with our witnesses show a greater understanding of type errors than those presented with a standard error message. @InProceedings{ICFP16p228, author = {Eric L. Seidel and Ranjit Jhala and Westley Weimer}, title = {Dynamic Witnesses for Static Type Errors (or, Ill-Typed Programs Usually Go Wrong)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {228--242}, doi = {10.1145/2951913.2951915}, year = {2016}, }

Sergey, Ilya 
ICFP '16: "Experience Report: Growing ..."
Experience Report: Growing and Shrinking Polygons for Random Testing of Computational Geometry Algorithms
Ilya Sergey (University College London, UK) This paper documents our experience of adapting and using the QuickCheck-style approach for extensive randomised property-based testing of computational geometry algorithms. The need for rigorous evaluation of computational geometry procedures arose naturally in our quest of organising a medium-size programming contest for second-year university students—an experiment we conducted as an attempt to introduce them to computational geometry. The main effort in organising the event was the implementation of a solid infrastructure for testing and ranking solutions. For this, we employed functional programming techniques. The choice of the language and the paradigm made it possible for us to engineer, from scratch and in a very short period of time, a series of robust geometric primitives and algorithms, as well as implement a scalable framework for their randomised testing. We describe the main insights enabling efficient random testing of geometric procedures, and report on our experience of using the testing framework, which helped us to detect and fix a number of issues not just in our programming artefacts, but also in the published algorithms we had implemented. @InProceedings{ICFP16p193, author = {Ilya Sergey}, title = {Experience Report: Growing and Shrinking Polygons for Random Testing of Computational Geometry Algorithms}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {193--199}, doi = {10.1145/2951913.2951927}, year = {2016}, }
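The flavour of property-based random testing for geometric primitives can be sketched without any testing library. A hand-rolled check (not the paper's framework) of one algebraic property of the classic 2-D orientation test, using a fixed seed for reproducibility:

```python
import random

def orient(p, q, r):
    """Sign of the cross product: >0 counter-clockwise, <0 clockwise, 0 collinear."""
    cross = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (cross > 0) - (cross < 0)

def random_point(rng):
    return (rng.uniform(-100, 100), rng.uniform(-100, 100))

def check_swap_flips(trials=1000, seed=2016):
    """Property: swapping two arguments must flip the orientation's sign."""
    rng = random.Random(seed)  # fixed seed makes the random test reproducible
    for _ in range(trials):
        p, q, r = (random_point(rng) for _ in range(3))
        if orient(p, q, r) != -orient(p, r, q):
            return False       # a real harness would shrink this counterexample
    return True

print(check_swap_flips())  # True
```

A QuickCheck-style harness adds what this sketch omits: automatic generation of structured inputs (the paper grows random polygons) and shrinking of failing cases to minimal counterexamples.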

Serrano, Manuel 
ICFP '16: "A Glimpse of Hopjs ..."
A Glimpse of Hopjs
Manuel Serrano and Vincent Prunet (Inria, France) Hop.js is a multi-tier programming environment for JavaScript. It allows a single JavaScript program to describe the client-side and the server-side components of a web application. Its runtime environment ensures consistent executions of the application on the server and on the client. This paper overviews the Hop.js design. It shows the JavaScript extensions that make it possible to conceive web applications globally. It presents how Hop.js interacts with the outside world. It also briefly presents the Hop.js implementation: the web server, the handling of server-side parallelism, and the JavaScript and HTML compilers. @InProceedings{ICFP16p180, author = {Manuel Serrano and Vincent Prunet}, title = {A Glimpse of Hopjs}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {180--192}, doi = {10.1145/2951913.2951916}, year = {2016}, }

Sewell, Thomas 
ICFP '16: "Refinement through Restraint: ..."
Refinement through Restraint: Bringing Down the Cost of Verification
Liam O'Connor, Zilin Chen, Christine Rizkallah, Sidney Amani, Japheth Lim, Toby Murray, Yutaka Nagashima, Thomas Sewell, and Gerwin Klein (UNSW, Australia; Data61, Australia; University of Pennsylvania, USA; University of Melbourne, Australia) We present a framework aimed at significantly reducing the cost of verifying certain classes of systems software, such as file systems. Our framework allows for equational reasoning about systems code written in our new language, Cogent. Cogent is a restricted, polymorphic, higher-order, and purely functional language with linear types and without the need for a trusted runtime or garbage collector. Linear types allow us to assign two semantics to the language: one imperative, suitable for efficient C code generation; and one functional, suitable for equational reasoning and verification. As Cogent is a restricted language, it is designed to easily interoperate with existing C functions and to connect to existing C verification frameworks. Our framework is based on certifying compilation: for a well-typed Cogent program, our compiler produces C code, a high-level shallow embedding of its semantics in Isabelle/HOL, and a proof that the C code correctly refines this embedding. Thus one can reason about the full semantics of real-world systems code productively and equationally, while retaining the interoperability and leanness of C. The compiler certificate is a series of language-level proofs and per-program translation validation phases, combined into one coherent top-level theorem in Isabelle/HOL. @InProceedings{ICFP16p89, author = {Liam O'Connor and Zilin Chen and Christine Rizkallah and Sidney Amani and Japheth Lim and Toby Murray and Yutaka Nagashima and Thomas Sewell and Gerwin Klein}, title = {Refinement through Restraint: Bringing Down the Cost of Verification}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {89--102}, doi = {10.1145/2951913.2951940}, year = {2016}, }

Shan, Chung-chieh 
ICFP '16: "Deriving a Probability Density ..."
Deriving a Probability Density Calculator (Functional Pearl)
Wazim Mohammed Ismail and Chung-chieh Shan (Indiana University, USA) Given an expression that denotes a probability distribution, often we want a corresponding density function, to use in probabilistic inference. Fortunately, the task of finding a density has been automated. It turns out that we can derive a compositional procedure for finding a density, by equational reasoning about integrals, starting with the mathematical specification of what a density is. Moreover, the density found can be run as an estimation algorithm, as well as simplified to an exact formula to improve the estimate. @InProceedings{ICFP16p47, author = {Wazim Mohammed Ismail and Chung-chieh Shan}, title = {Deriving a Probability Density Calculator (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {47--59}, doi = {10.1145/2951913.2951922}, year = {2016}, }
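The core rule such a calculator applies compositionally is change of variables: if X has density dX and g is monotone with inverse gInv, then g(X) has density dX(gInv(y)) · |d/dy gInv(y)|. A hand-rolled numeric sketch of that one rule (names are illustrative, not the paper's combinators):

```python
def uniform01(x):
    """Density of Uniform(0, 1)."""
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def deriv(f, x, h=1e-6):
    """Central-difference approximation of the Jacobian factor |d/dy gInv(y)|."""
    return (f(x + h) - f(x - h)) / (2 * h)

def push_forward(d_x, g_inv):
    """Density of g(X), given the density of X and the inverse of g."""
    return lambda y: d_x(g_inv(y)) * abs(deriv(g_inv, y))

# Example: Y = 2*X + 3 with X ~ Uniform(0, 1), so Y ~ Uniform(3, 5)
# and its density is 0.5 on (3, 5) and 0 elsewhere.
density_y = push_forward(uniform01, lambda y: (y - 3) / 2)
print(density_y(4.0))   # ~0.5
print(density_y(10.0))  # 0.0
```

The pearl derives rules of this shape by equational reasoning about integrals and obtains exact symbolic results rather than this numeric approximation.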

Shi, Zhiyuan 
ICFP '16: "Disjoint Intersection Types ..."
Disjoint Intersection Types
Bruno C. d. S. Oliveira, Zhiyuan Shi, and João Alpuim (University of Hong Kong, China) Dunfield showed that a simply typed core calculus with intersection types and a merge operator is able to capture various programming language features. While his calculus is type-safe, it is not coherent: different derivations for the same expression can elaborate to expressions that evaluate to different values. The lack of coherence is an important disadvantage for adoption of his core calculus in implementations of programming languages, as the semantics of the programming language becomes implementation-dependent. This paper presents λ_i: a coherent and type-safe calculus with a form of intersection types and a merge operator. Coherence is achieved by ensuring that intersection types are disjoint and programs are sufficiently annotated to avoid type ambiguity. We propose a definition of disjointness where two types A and B are disjoint only if a certain set of types are common supertypes of A and B. We investigate three different variants of λ_i, with three variants of disjointness. In the simplest variant, which does not allow ⊤ types, two types are disjoint if they do not share any common supertypes at all. The other two variants introduce ⊤ types and refine the notion of disjointness to allow two types to be disjoint when their only common supertypes are top-like. The difference between the two variants with ⊤ types lies in the definition of top-like types, which has an impact on which types are allowed in intersections. We present a type system that prevents intersection types that are not disjoint, as well as algorithmic specifications to determine whether two types are disjoint, for all three variants. @InProceedings{ICFP16p364, author = {Bruno C. d. S. Oliveira and Zhiyuan Shi and João Alpuim}, title = {Disjoint Intersection Types}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {364--377}, doi = {10.1145/2951913.2951945}, year = {2016}, }
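A shallow dynamic sketch of the merge operator and why disjointness matters: a merge carries both components, and extraction projects by type, which is only unambiguous when the component types do not overlap. Illustrative Python only, not the paper's elaboration semantics:

```python
class Merge:
    """A merge of two values; the disjointness check keeps projection unambiguous."""
    def __init__(self, left, right):
        assert type(left) is not type(right), "components must have disjoint types"
        self.left, self.right = left, right

    def project(self, ty):
        """Extract the component of type `ty` (the role types play in lambda_i)."""
        if isinstance(self.left, ty):
            return self.left
        if isinstance(self.right, ty):
            return self.right
        raise TypeError("no component of that type")

v = Merge(3, "three")   # Int ,, String: disjoint, so projection is coherent
print(v.project(int))   # 3
print(v.project(str))   # three
```

In λ_i this check happens statically in the type system; a merge of two values of overlapping types (where projection could return either) is rejected, which is exactly what restores coherence.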

Shinohara, Ayumi 
ICFP '16: "Compact Bit Encoding Schemes ..."
Compact Bit Encoding Schemes for Simply-Typed Lambda-Terms
Kotaro Takeda, Naoki Kobayashi, Kazuya Yaguchi, and Ayumi Shinohara (University of Tokyo, Japan; Tohoku University, Japan) We consider the problem of how to compactly encode simply-typed λ-terms into bit strings. The work has been motivated by Kobayashi et al.’s recent work on higher-order data compression, where data are encoded as functional programs (or, λ-terms) that generate them. To exploit its good compression power, the compression scheme has to come with a method for compactly encoding the λ-terms into bit strings. To this end, we propose two type-based bit-encoding schemes; the first one encodes a λ-term into a sequence of symbols by using type information, and then applies arithmetic coding to convert the sequence to a bit string. The second one is more sophisticated: we prepare a context-free grammar (CFG) that describes only well-typed terms, and then use a variation of arithmetic coding specialized for the CFG. We have implemented both schemes and confirmed that they often output more compact codes than previous bit encoding schemes for λ-terms. @InProceedings{ICFP16p146, author = {Kotaro Takeda and Naoki Kobayashi and Kazuya Yaguchi and Ayumi Shinohara}, title = {Compact Bit Encoding Schemes for Simply-Typed Lambda-Terms}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {146--157}, doi = {10.1145/2951913.2951918}, year = {2016}, }

Sieczkowski, Filip 
ICFP '16: "Dag-Calculus: A Calculus for ..."
Dag-Calculus: A Calculus for Parallel Computation
Umut A. Acar, Arthur Charguéraud, Mike Rainey, and Filip Sieczkowski (Carnegie Mellon University, USA; Inria, France) Increasing availability of multicore systems has led to greater focus on the design and implementation of languages for writing parallel programs. Such languages support various abstractions for parallelism, such as fork-join, async-finish, and futures. While they may seem similar, these abstractions lead to different semantics, language design and implementation decisions, and can significantly impact the performance of end-user applications. In this paper, we consider the question of whether it would be possible to unify various paradigms of parallel computing. To this end, we propose a calculus, called dag calculus, that can encode fork-join, async-finish, futures, and possibly others. We describe dag calculus and its semantics, and establish translations from the aforementioned paradigms into dag calculus. These translations establish that dag calculus is sufficiently powerful for encoding programs written in prevailing paradigms of parallelism. We present concurrent algorithms and data structures for realizing dag calculus on multicore hardware and prove that the proposed techniques are consistent with the semantics. Finally, we present an implementation of the calculus and evaluate it empirically by comparing its performance to highly optimized code from prior work. The results show that the calculus is expressive and that it competes well with, and sometimes outperforms, the state of the art. @InProceedings{ICFP16p18, author = {Umut A. Acar and Arthur Charguéraud and Mike Rainey and Filip Sieczkowski}, title = {Dag-Calculus: A Calculus for Parallel Computation}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {18--32}, doi = {10.1145/2951913.2951946}, year = {2016}, }
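Fork-join, the simplest of the paradigms the calculus encodes, can be sketched directly over threads. A toy stand-in for the fork and join vertices of the dag (not the paper's scheduler, which avoids one-thread-per-task):

```python
import threading

def fork_join(left, right):
    """Run `left` in a spawned thread and `right` here; join both results."""
    box = {}
    t = threading.Thread(target=lambda: box.setdefault("left", left()))
    t.start()                 # the fork edge: spawn the left branch
    right_result = right()    # run the right branch on this thread
    t.join()                  # the join edge: wait for the spawned branch
    return box["left"], right_result

# Example: a naive parallel Fibonacci over the combinator.
def pfib(n):
    if n < 2:
        return n
    a, b = fork_join(lambda: pfib(n - 1), lambda: pfib(n - 2))
    return a + b

print(pfib(15))  # 610
```

Async-finish and futures differ in when joins may happen relative to spawns; the dag calculus makes all three edge disciplines instances of one graph-building semantics.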

Szymczak, Marcin 
ICFP '16: "A Lambda-Calculus Foundation ..."
A Lambda-Calculus Foundation for Universal Probabilistic Programming
Johannes Borgström, Ugo Dal Lago, Andrew D. Gordon, and Marcin Szymczak (Uppsala University, Sweden; University of Bologna, Italy; Inria, France; Microsoft Research, UK; University of Edinburgh, UK) We develop the operational semantics of an untyped probabilistic λ-calculus with continuous distributions, and both hard and soft constraints, as a foundation for universal probabilistic programming languages such as Church, Anglican, and Venture. Our first contribution is to adapt the classic operational semantics of λ-calculus to a continuous setting by creating a measure space on terms and defining step-indexed approximations. We prove equivalence of big-step and small-step formulations of this distribution-based semantics. To move closer to inference techniques, we also define the sampling-based semantics of a term as a function from a trace of random samples to a value. We show that the distribution induced by integration over the space of traces equals the distribution-based semantics. Our second contribution is to formalize the implementation technique of trace Markov chain Monte Carlo (MCMC) for our calculus and to show its correctness. A key step is defining sufficient conditions for the distribution induced by trace MCMC to converge to the distribution-based semantics. To the best of our knowledge, this is the first rigorous correctness proof for trace MCMC for a higher-order functional language, or for a language with soft constraints. @InProceedings{ICFP16p33, author = {Johannes Borgström and Ugo Dal Lago and Andrew D. Gordon and Marcin Szymczak}, title = {A Lambda-Calculus Foundation for Universal Probabilistic Programming}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {33--46}, doi = {10.1145/2951913.2951942}, year = {2016}, }
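The sampling-based semantics can be made concrete in a few lines: a probabilistic term denotes a function from a trace of random draws to a value, consuming one draw per sample. A hand-rolled sketch (names are illustrative, not the paper's notation):

```python
def sample(trace):
    """Consume the next random draw; return (value, remaining trace)."""
    return trace[0], trace[1:]

def two_draws(trace):
    """The term 'sample + sample' under the trace semantics: it is a plain,
    deterministic function once the trace of draws is fixed."""
    x, trace = sample(trace)
    y, trace = sample(trace)
    return x + y, trace

value, rest = two_draws([0.25, 0.5, 0.9])
print(value, rest)  # 0.75 [0.9]
```

Making terms deterministic in the trace is what enables trace MCMC: the inference algorithm proposes changes to the trace, and the term is simply re-run on the new trace.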

Tabareau, Nicolas 
ICFP '16: "Partial Type Equivalences ..."
Partial Type Equivalences for Verified Dependent Interoperability
Pierre-Evariste Dagand, Nicolas Tabareau, and Éric Tanter (UPMC, France; LIP6, France; Inria, France; University of Chile, Chile) Full-spectrum dependent types promise to enable the development of correct-by-construction software. However, even certified software needs to interact with simply-typed or untyped programs, be it to perform system calls, or to use legacy libraries. Trading static guarantees for runtime checks, the dependent interoperability framework provides a mechanism by which simply-typed values can safely be coerced to dependent types and, conversely, dependently-typed programs can defensively be exported to a simply-typed application. In this paper, we give a semantic account of dependent interoperability. Our presentation relies on and is guided by a pervading notion of type equivalence, whose importance has been emphasized in recent work on homotopy type theory. Specifically, we develop the notion of partial type equivalences as a key foundation for dependent interoperability. Our framework is developed in Coq; it is thus constructive and verified in the strictest sense of the terms. Using our library, users can specify domain-specific partial equivalences between data structures. Our library then takes care of the (sometimes, heavy) lifting that leads to interoperable programs. It thus becomes possible, as we shall illustrate, to internalize and hand-tune the extraction of dependently-typed programs to interoperable OCaml programs within Coq itself. @InProceedings{ICFP16p298, author = {Pierre-Evariste Dagand and Nicolas Tabareau and Éric Tanter}, title = {Partial Type Equivalences for Verified Dependent Interoperability}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {298--310}, doi = {10.1145/2951913.2951933}, year = {2016}, }
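The shape of a partial type equivalence can be sketched at the value level: coercion toward the richer type may fail a run-time check, while the other direction is total. A Python illustration only (the paper works in Coq with full dependent types; the names are ours):

```python
class PartialEquiv:
    """A partial equivalence between a simple type and a richer one."""
    def __init__(self, to_rich, from_rich):
        self.to_rich = to_rich      # may return None: the run-time check failed
        self.from_rich = from_rich  # always succeeds

# Example: plain lists vs. "vectors of length exactly 2" (modelled as pairs).
list_pair = PartialEquiv(
    to_rich=lambda xs: tuple(xs) if len(xs) == 2 else None,
    from_rich=lambda p: list(p),
)

print(list_pair.to_rich([1, 2]))     # (1, 2)
print(list_pair.to_rich([1, 2, 3]))  # None: wrong length, coercion rejected
print(list_pair.from_rich((1, 2)))   # [1, 2]
```

The paper's library composes such equivalences and discharges, in Coq, the laws making the round trips coherent; this sketch shows only the data of one equivalence.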

Takeda, Kotaro 
ICFP '16: "Compact Bit Encoding Schemes ..."
Compact Bit Encoding Schemes for Simply-Typed Lambda-Terms
Kotaro Takeda, Naoki Kobayashi, Kazuya Yaguchi, and Ayumi Shinohara (University of Tokyo, Japan; Tohoku University, Japan) We consider the problem of how to compactly encode simply-typed λ-terms into bit strings. The work has been motivated by Kobayashi et al.’s recent work on higher-order data compression, where data are encoded as functional programs (or, λ-terms) that generate them. To exploit its good compression power, the compression scheme has to come with a method for compactly encoding the λ-terms into bit strings. To this end, we propose two type-based bit-encoding schemes; the first one encodes a λ-term into a sequence of symbols by using type information, and then applies arithmetic coding to convert the sequence to a bit string. The second one is more sophisticated: we prepare a context-free grammar (CFG) that describes only well-typed terms, and then use a variation of arithmetic coding specialized for the CFG. We have implemented both schemes and confirmed that they often output more compact codes than previous bit encoding schemes for λ-terms. @InProceedings{ICFP16p146, author = {Kotaro Takeda and Naoki Kobayashi and Kazuya Yaguchi and Ayumi Shinohara}, title = {Compact Bit Encoding Schemes for Simply-Typed Lambda-Terms}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {146--157}, doi = {10.1145/2951913.2951918}, year = {2016}, }

Tan, Yong Kiam 
ICFP '16: "A New Verified Compiler Backend ..."
A New Verified Compiler Backend for CakeML
Yong Kiam Tan, Magnus O. Myreen, Ramana Kumar, Anthony Fox, Scott Owens, and Michael Norrish (IHPC at A*STAR, Singapore; Chalmers University of Technology, Sweden; Data61 at CSIRO, Australia; UNSW, Australia; University of Cambridge, UK; University of Kent, UK; Australian National University, Australia) We have developed and mechanically verified a new compiler backend for CakeML. Our new compiler features a sequence of intermediate languages that allows it to incrementally compile away high-level features and enables verification at the right levels of semantic detail. In this way, it resembles mainstream (unverified) compilers for strict functional languages. The compiler supports efficient curried multi-argument functions, configurable data representations, exceptions that unwind the call stack, register allocation, and more. The compiler targets several architectures: x86-64, ARMv6, ARMv8, MIPS64, and RISC-V. In this paper, we present the overall structure of the compiler, including its 12 intermediate languages, and explain how everything fits together. We focus particularly on the interaction between the verification of the register allocator and the garbage collector, and on memory representations. The entire development has been carried out within the HOL4 theorem prover. @InProceedings{ICFP16p60, author = {Yong Kiam Tan and Magnus O. Myreen and Ramana Kumar and Anthony Fox and Scott Owens and Michael Norrish}, title = {A New Verified Compiler Backend for CakeML}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {60--73}, doi = {10.1145/2951913.2951924}, year = {2016}, }

Tanter, Éric 
ICFP '16: "Partial Type Equivalences ..."
Partial Type Equivalences for Verified Dependent Interoperability
Pierre-Evariste Dagand, Nicolas Tabareau, and Éric Tanter (UPMC, France; LIP6, France; Inria, France; University of Chile, Chile) Full-spectrum dependent types promise to enable the development of correct-by-construction software. However, even certified software needs to interact with simply-typed or untyped programs, be it to perform system calls, or to use legacy libraries. Trading static guarantees for runtime checks, the dependent interoperability framework provides a mechanism by which simply-typed values can safely be coerced to dependent types and, conversely, dependently-typed programs can defensively be exported to a simply-typed application. In this paper, we give a semantic account of dependent interoperability. Our presentation relies on and is guided by a pervading notion of type equivalence, whose importance has been emphasized in recent work on homotopy type theory. Specifically, we develop the notion of partial type equivalences as a key foundation for dependent interoperability. Our framework is developed in Coq; it is thus constructive and verified in the strictest sense of the terms. Using our library, users can specify domain-specific partial equivalences between data structures. Our library then takes care of the (sometimes, heavy) lifting that leads to interoperable programs. It thus becomes possible, as we shall illustrate, to internalize and hand-tune the extraction of dependently-typed programs to interoperable OCaml programs within Coq itself. @InProceedings{ICFP16p298, author = {Pierre-Evariste Dagand and Nicolas Tabareau and Éric Tanter}, title = {Partial Type Equivalences for Verified Dependent Interoperability}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {298--310}, doi = {10.1145/2951913.2951933}, year = {2016}, }

Thibodeau, David 
ICFP '16: "Indexed Codata Types ..."
Indexed Codata Types
David Thibodeau, Andrew Cave, and Brigitte Pientka (McGill University, Canada) Indexed data types allow us to specify and verify many interesting invariants about finite data in a general purpose programming language. In this paper we investigate the dual idea: indexed codata types, which allow us to describe data dependencies about infinite data structures. Unlike finite data, which is defined by constructors, we define infinite data by observations. Dually to pattern matching on indexed data, which may refine the type indices, we define copattern matching on indexed codata, where type indices guard the observations we can make. Our key technical contributions are threefold: first, we extend Levy's call-by-push-value language with support for indexed (co)data and deep (co)pattern matching; second, we provide a clean foundation for dependent (co)pattern matching using equality constraints; third, we describe a small-step semantics using a continuation-based abstract machine, define coverage for indexed (co)patterns, and prove type safety. This is an important step towards building a foundation where (co)data type definitions and dependent types can coexist. @InProceedings{ICFP16p351, author = {David Thibodeau and Andrew Cave and Brigitte Pientka}, title = {Indexed Codata Types}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {351--363}, doi = {10.1145/2951913.2951929}, year = {2016}, }
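The "definition by observations" idea can be sketched without indices: a stream is anything offering the two observations head and tail, and a producer is defined by saying what each observation returns. A plain, unindexed Python sketch (the paper's indexed version additionally tracks invariants in the types):

```python
class Stream:
    """Infinite data defined by its observations, not by constructors."""
    def __init__(self, head_fn, tail_fn):
        self._head, self._tail = head_fn, tail_fn

    def head(self):          # observation 1
        return self._head()

    def tail(self):          # observation 2 (computed lazily, on demand)
        return self._tail()

def nats_from(n):
    """The stream n, n+1, n+2, ... given by what each observation returns —
    the role copattern clauses play in the paper's language."""
    return Stream(lambda: n, lambda: nats_from(n + 1))

def take(k, s):
    return [] if k == 0 else [s.head()] + take(k - 1, s.tail())

print(take(5, nats_from(3)))  # [3, 4, 5, 6, 7]
```

Indexing the codata type would let the type of each observation depend on, say, the position in the stream, which is what guards which observations are available.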

Thiemann, Peter 
ICFP '16: "Context-Free Session Types ..."
Context-Free Session Types
Peter Thiemann and Vasco T. Vasconcelos (University of Freiburg, Germany; University of Lisbon, Portugal) Session types describe structured communication on heterogeneously typed channels at a high level. Their tail-recursive structure imposes a protocol that can be described by a regular language. The types of transmitted values are drawn from the underlying functional language, abstracting from the details of serializing values of structured data types. Context-free session types extend session types by allowing nested protocols that are not restricted to tail recursion. Nested protocols correspond to deterministic context-free languages. Such protocols are interesting in their own right, but they are particularly suited to describe the low-level serialization of tree-structured data in a type-safe way. We establish the metatheory of context-free session types, prove that they properly generalize standard (two-party) session types, and take first steps towards type checking by showing that type equivalence is decidable. @InProceedings{ICFP16p462, author = {Peter Thiemann and Vasco T. Vasconcelos}, title = {Context-Free Session Types}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {462--475}, doi = {10.1145/2951913.2951926}, year = {2016}, }
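The motivating example is easy to make concrete: the token protocol for streaming a binary tree (T ::= leaf n | node T T) is context-free but not regular, because each node must be followed by exactly two complete subtree transmissions. A sketch of that serialization in plain Python (so nothing is statically enforced, unlike with the paper's types):

```python
def serialize(tree):
    """Flatten a tree ('leaf', n) / ('node', l, r) into a token list."""
    if tree[0] == "leaf":
        return [("leaf", tree[1])]
    return [("node",)] + serialize(tree[1]) + serialize(tree[2])

def deserialize(tokens, i=0):
    """Consume exactly one tree's worth of tokens starting at position i;
    return the tree and the position just past it."""
    if tokens[i][0] == "leaf":
        return ("leaf", tokens[i][1]), i + 1
    left, i = deserialize(tokens, i + 1)    # nested sub-protocol: left subtree
    right, i = deserialize(tokens, i)       # then the right subtree
    return ("node", left, right), i

t = ("node", ("leaf", 1), ("node", ("leaf", 2), ("leaf", 3)))
tree, _ = deserialize(serialize(t))
print(tree == t)  # True
```

A context-free session type can assign the channel a type whose nesting mirrors the recursive calls above, so a well-typed sender and receiver can never get out of sync on the tree structure.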

Tsukada, Takeshi 
ICFP '16: "Automatically Disproving Fair ..."
Automatically Disproving Fair Termination of Higher-Order Functional Programs
Keiichi Watanabe, Ryosuke Sato, Takeshi Tsukada, and Naoki Kobayashi (University of Tokyo, Japan) We propose an automated method for disproving fair termination of higher-order functional programs, which is complementary to Murase et al.’s recent method for proving fair termination. A program is said to be fair terminating if it has no infinite execution trace that satisfies a given fairness constraint. Fair termination is an important property because program verification problems for arbitrary ω-regular temporal properties can be transformed to those of fair termination. Our method reduces the problem of disproving fair termination to higher-order model checking by using predicate abstraction and CEGAR. Given a program, we convert it to an abstract program that generates an approximation of the (possibly infinite) execution traces of the original program, so that the original program has a fair infinite execution trace if the tree generated by the abstract program satisfies a certain property. The method is a non-trivial extension of Kuwahara et al.’s method for disproving plain termination. @InProceedings{ICFP16p243, author = {Keiichi Watanabe and Ryosuke Sato and Takeshi Tsukada and Naoki Kobayashi}, title = {Automatically Disproving Fair Termination of Higher-Order Functional Programs}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {243--255}, doi = {10.1145/2951913.2951919}, year = {2016}, }

Ueno, Katsuhiro 
ICFP '16: "A Fully Concurrent Garbage ..."
A Fully Concurrent Garbage Collector for Functional Programs on Multicore Processors
Katsuhiro Ueno and Atsushi Ohori (Tohoku University, Japan) This paper presents a concurrent garbage collection method for functional programs running on a multicore processor. It is a concurrent extension of our bitmap-marking non-moving collector with Yuasa's snapshot-at-the-beginning strategy. Our collector is unobtrusive in the sense of the Doligez-Leroy-Gonthier collector; the collector does not stop any mutator thread, nor does it force them to synchronize globally. The only critical sections between a mutator and the collector are the code to enqueue/dequeue a 32 kB allocation segment to/from a global segment list and the write barrier code to push an object pointer onto the collector's stack. Most of these data structures can be implemented as standard lock-free data structures. This achieves both efficient allocation and unobtrusive collection in a multicore system. The proposed method has been implemented in SML#, a full-scale Standard ML compiler supporting multiple native threads on multicore CPUs. Our benchmark tests show a drastically short pause time with reasonably low overhead compared to the sequential bitmap-marking collector. @InProceedings{ICFP16p421, author = {Katsuhiro Ueno and Atsushi Ohori}, title = {A Fully Concurrent Garbage Collector for Functional Programs on Multicore Processors}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {421--433}, doi = {10.1145/2951913.2951944}, year = {2016}, } Publisher's Version Article Search 

Uustalu, Tarmo 
ICFP '16: "Combining Effects and Coeffects ..."
Combining Effects and Coeffects via Grading
Marco Gaboardi, Shinya Katsumata, Dominic Orchard, Flavien Breuvart, and Tarmo Uustalu (SUNY Buffalo, USA; Kyoto University, Japan; University of Cambridge, UK; University of Kent, UK; Inria, France; Tallinn University of Technology, Estonia) Effects and coeffects are two general, complementary aspects of program behaviour. They roughly correspond to computations which change the execution context (effects) versus computations which make demands on the context (coeffects). Effectful features include partiality, nondeterminism, input-output, state, and exceptions. Coeffectful features include resource demands, variable access, notions of linearity, and data input requirements. The effectful or coeffectful behaviour of a program can be captured and described via type-based analyses, with fine-grained information provided by monoidal effect annotations and semiring coeffects. Recent work has proposed models for such typed calculi in terms of graded (strong) monads for effects and graded (monoidal) comonads for coeffects. Effects and coeffects have been studied separately so far, but in practice many computations are both effectful and coeffectful, e.g., possibly throwing exceptions but with resource requirements. To remedy this, we introduce a new general calculus with a combined effect-coeffect system. This can describe both the changes and requirements that a program has on its context, as well as interactions between these effectful and coeffectful features of computation. The effect-coeffect system has a denotational model in terms of effect-graded monads and coeffect-graded comonads where interaction is expressed via the novel concept of graded distributive laws. This graded semantics unifies the syntactic type theory with the denotational model. We show that our calculus can be instantiated to describe in a natural way various different kinds of interaction between a program and its evaluation context. 
@InProceedings{ICFP16p476, author = {Marco Gaboardi and Shinya Katsumata and Dominic Orchard and Flavien Breuvart and Tarmo Uustalu}, title = {Combining Effects and Coeffects via Grading}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {476--489}, doi = {10.1145/2951913.2951939}, year = {2016}, } Publisher's Version Article Search 
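A minimal sketch of a graded monad in the sense the abstract mentions, assuming nothing from the paper itself: a toy counter monad whose grade is a type-level natural counting `tick` effects, with grades composing by addition (a monoidal effect annotation). All names here are illustrative.

```haskell
{-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}
-- Sketch only: a graded monad whose grade (a type-level Nat) counts
-- effects; grades compose by addition. Not the paper's calculus.
import GHC.TypeLits

-- A computation performing exactly n "ticks"; the Int mirrors the grade at runtime.
newtype Counter (n :: Nat) a = Counter { runCounter :: (a, Int) }

-- Graded unit: the grade is the monoid identity, 0.
greturn :: a -> Counter 0 a
greturn a = Counter (a, 0)

-- Graded bind: the grades of the two stages are added.
gbind :: Counter m a -> (a -> Counter n b) -> Counter (m + n) b
gbind (Counter (a, i)) k = let Counter (b, j) = k a in Counter (b, i + j)

-- The single primitive effect, of grade 1.
tick :: Counter 1 ()
tick = Counter ((), 1)

-- The type records that this program ticks exactly twice.
prog :: Counter 2 Int
prog = tick `gbind` \_ -> tick `gbind` \_ -> greturn 5

main :: IO ()
main = print (runCounter prog)  -- prints (5,2)
```

A graded comonad for coeffects is dual (grades on demands rather than changes); the paper's contribution is combining the two via graded distributive laws.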

Van Horn, David 
ICFP '16: "Constructive Galois Connections: ..."
Constructive Galois Connections: Taming the Galois Connection Framework for Mechanized Metatheory
David Darais and David Van Horn (University of Maryland, USA) Galois connections are a foundational tool for structuring abstraction in semantics and their use lies at the heart of the theory of abstract interpretation. Yet, mechanization of Galois connections remains limited to restricted modes of use, preventing their general application in mechanized metatheory and certified programming. This paper presents constructive Galois connections, a variant of Galois connections that is effective both on paper and in proof assistants; is complete with respect to a large subset of classical Galois connections; and enables more general reasoning principles, including the "calculational" style advocated by Cousot. To design constructive Galois connections, we identify a restricted mode of use of classical ones which is both general and amenable to mechanization in dependently-typed functional programming languages. Crucial to our metatheory is the addition of monadic structure to Galois connections to control a "specification effect". Effectful calculations may reason classically, while pure calculations have extractable computational content. Explicitly moving between the worlds of specification and implementation is enabled by our metatheory. To validate our approach, we provide two case studies in mechanizing existing proofs from the literature: one uses calculational abstract interpretation to design a static analyzer, the other forms a semantic basis for gradual typing. Both mechanized proofs closely follow their original paper-and-pencil counterparts, employ reasoning principles not captured by previous mechanization approaches, support the extraction of verified algorithms, and are novel. 
@InProceedings{ICFP16p311, author = {David Darais and David Van Horn}, title = {Constructive Galois Connections: Taming the Galois Connection Framework for Mechanized Metatheory}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {311--324}, doi = {10.1145/2951913.2951934}, year = {2016}, } Publisher's Version Article Search 
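The classical notion being tamed can be made concrete with a textbook example: a sign abstraction of integers, an abstract multiplication, and a soundness check relating the two. This is standard abstract interpretation, not the paper's mechanized framework; `alpha`, `smul`, and `Sign` are illustrative names.

```haskell
-- Sketch only: a textbook Galois-connection-style abstraction (signs
-- abstracting integers) with a soundness check for multiplication.
data Sign = Neg | Zero | Pos deriving (Eq, Show)

-- Abstraction function: the sign of a concrete integer.
alpha :: Integer -> Sign
alpha n
  | n < 0     = Neg
  | n == 0    = Zero
  | otherwise = Pos

-- Abstract multiplication: the rule of signs.
smul :: Sign -> Sign -> Sign
smul Zero _ = Zero
smul _ Zero = Zero
smul a b
  | a == b    = Pos
  | otherwise = Neg

-- Soundness of the abstract operator w.r.t. the concrete one:
-- abstracting the result agrees with operating on the abstractions.
sound :: Integer -> Integer -> Bool
sound x y = alpha (x * y) == smul (alpha x) (alpha y)

main :: IO ()
main = print (and [ sound x y | x <- [-5..5], y <- [-5..5] ])  -- prints True
```

In the paper's terms, `alpha` is the extraction side of a constructive Galois connection, and proofs like `sound` are the kind of calculation the mechanization supports with extractable content.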

Vasconcelos, Vasco T. 
ICFP '16: "Context-Free Session Types ..."
Context-Free Session Types
Peter Thiemann and Vasco T. Vasconcelos (University of Freiburg, Germany; University of Lisbon, Portugal) Session types describe structured communication on heterogeneously typed channels at a high level. Their tail-recursive structure imposes a protocol that can be described by a regular language. The types of transmitted values are drawn from the underlying functional language, abstracting from the details of serializing values of structured data types. Context-free session types extend session types by allowing nested protocols that are not restricted to tail recursion. Nested protocols correspond to deterministic context-free languages. Such protocols are interesting in their own right, but they are particularly suited to describing the low-level serialization of tree-structured data in a type-safe way. We establish the metatheory of context-free session types, prove that they properly generalize standard (two-party) session types, and take first steps towards type checking by showing that type equivalence is decidable. @InProceedings{ICFP16p462, author = {Peter Thiemann and Vasco T. Vasconcelos}, title = {Context-Free Session Types}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {462--475}, doi = {10.1145/2951913.2951926}, year = {2016}, } Publisher's Version Article Search 

Watanabe, Keiichi 
ICFP '16: "Automatically Disproving Fair ..."
Automatically Disproving Fair Termination of Higher-Order Functional Programs
Keiichi Watanabe, Ryosuke Sato, Takeshi Tsukada, and Naoki Kobayashi (University of Tokyo, Japan) We propose an automated method for disproving fair termination of higher-order functional programs, which is complementary to Murase et al.’s recent method for proving fair termination. A program is said to be fair-terminating if it has no infinite execution trace that satisfies a given fairness constraint. Fair termination is an important property because program verification problems for arbitrary ω-regular temporal properties can be transformed to those of fair termination. Our method reduces the problem of disproving fair termination to higher-order model checking by using predicate abstraction and CEGAR. Given a program, we convert it to an abstract program that generates an approximation of the (possibly infinite) execution traces of the original program, so that the original program has a fair infinite execution trace if the tree generated by the abstract program satisfies a certain property. The method is a non-trivial extension of Kuwahara et al.’s method for disproving plain termination. @InProceedings{ICFP16p243, author = {Keiichi Watanabe and Ryosuke Sato and Takeshi Tsukada and Naoki Kobayashi}, title = {Automatically Disproving Fair Termination of Higher-Order Functional Programs}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {243--255}, doi = {10.1145/2951913.2951919}, year = {2016}, } Publisher's Version Article Search 

Weimer, Westley 
ICFP '16: "Dynamic Witnesses for Static ..."
Dynamic Witnesses for Static Type Errors (or, Ill-Typed Programs Usually Go Wrong)
Eric L. Seidel, Ranjit Jhala, and Westley Weimer (University of California at San Diego, USA; University of Virginia, USA) Static type errors are a common stumbling block for newcomers to typed functional languages. We present a dynamic approach to explaining type errors by generating counterexample witness inputs that illustrate how an ill-typed program goes wrong. First, given an ill-typed function, we symbolically execute the body to synthesize witness values that make the program go wrong. We prove that our procedure synthesizes general witnesses in that if a witness is found, then for all inhabited input types, there exist values that can make the function go wrong. Second, we show how to extend the above procedure to produce a reduction graph that can be used to interactively visualize and debug witness executions. Third, we evaluate the coverage of our approach on two data sets comprising over 4,500 ill-typed student programs. Our technique is able to generate witnesses for 88% of the programs, and our reduction graph yields small counterexamples for 81% of the witnesses. Finally, we evaluate whether our witnesses help students understand and fix type errors, and find that students presented with our witnesses show a greater understanding of type errors than those presented with a standard error message. @InProceedings{ICFP16p228, author = {Eric L. Seidel and Ranjit Jhala and Westley Weimer}, title = {Dynamic Witnesses for Static Type Errors (or, Ill-Typed Programs Usually Go Wrong)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {228--242}, doi = {10.1145/2951913.2951915}, year = {2016}, } Publisher's Version Article Search 

Wu, Nicolas 
ICFP '16: "String Diagrams for Free Monads ..."
String Diagrams for Free Monads (Functional Pearl)
Maciej Piróg and Nicolas Wu (KU Leuven, Belgium; University of Bristol, UK) We show how one can reason about free monads using their universal properties rather than any concrete implementation. We introduce a graphical, two-dimensional calculus tailor-made to accommodate these properties. @InProceedings{ICFP16p490, author = {Maciej Piróg and Nicolas Wu}, title = {String Diagrams for Free Monads (Functional Pearl)}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {490--501}, doi = {10.1145/2951913.2951947}, year = {2016}, } Publisher's Version Article Search 
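For reference, the standard free-monad datatype and its fold, which is exactly the universal property the pearl reasons about diagrammatically (the pearl deliberately avoids this concrete implementation). The `Teletype` effect and its interpreter below are an illustrative example, not from the paper.

```haskell
{-# LANGUAGE DeriveFunctor #-}
-- The free monad over a functor f, plus its universal fold.
data Free f a = Pure a | Op (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a) = Pure (g a)
  fmap g (Op fa)  = Op (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g <*> x = fmap g x
  Op fg  <*> x = Op (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Op fa  >>= k = Op (fmap (>>= k) fa)

-- Universal property: a generator (a -> b) and an algebra (f b -> b)
-- induce a unique map out of Free f.
foldFree :: Functor f => (a -> b) -> (f b -> b) -> Free f a -> b
foldFree gen _   (Pure a) = gen a
foldFree gen alg (Op fa)  = alg (fmap (foldFree gen alg) fa)

-- A tiny output-only effect, interpreted into the list of emitted lines.
data Teletype k = PutLine String k deriving Functor

putLine :: String -> Free Teletype ()
putLine s = Op (PutLine s (Pure ()))

run :: Free Teletype a -> [String]
run = foldFree (const []) (\(PutLine s rest) -> s : rest)

main :: IO ()
main = print (run (putLine "hello" >> putLine "world"))  -- prints ["hello","world"]
```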

Yaguchi, Kazuya 
ICFP '16: "Compact Bit Encoding Schemes ..."
Compact Bit Encoding Schemes for Simply-Typed Lambda-Terms
Kotaro Takeda, Naoki Kobayashi, Kazuya Yaguchi, and Ayumi Shinohara (University of Tokyo, Japan; Tohoku University, Japan) We consider the problem of how to compactly encode simply-typed λ-terms into bit strings. The work has been motivated by Kobayashi et al.’s recent work on higher-order data compression, where data are encoded as functional programs (or, λ-terms) that generate them. To exploit its good compression power, the compression scheme has to come with a method for compactly encoding the λ-terms into bit strings. To this end, we propose two type-based bit-encoding schemes; the first one encodes a λ-term into a sequence of symbols by using type information, and then applies arithmetic coding to convert the sequence to a bit string. The second one is more sophisticated; we prepare a context-free grammar (CFG) that describes only well-typed terms, and then use a variation of arithmetic coding specialized for the CFG. We have implemented both schemes and confirmed that they often output more compact codes than previous bit encoding schemes for λ-terms. @InProceedings{ICFP16p146, author = {Kotaro Takeda and Naoki Kobayashi and Kazuya Yaguchi and Ayumi Shinohara}, title = {Compact Bit Encoding Schemes for Simply-Typed Lambda-Terms}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {146--157}, doi = {10.1145/2951913.2951918}, year = {2016}, } Publisher's Version Article Search 
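For contrast with the paper's type-based schemes, a naive prefix-free bit encoding of untyped de Bruijn λ-terms might look like the sketch below; the tag assignments and unary variable indices are assumptions for illustration, not the paper's encoding, which exploits type information to rule out ill-typed continuations.

```haskell
-- Sketch only: a naive prefix-free bit encoding of de Bruijn λ-terms,
-- the kind of baseline a type-based scheme improves on.
data Term = Var Int | Lam Term | App Term Term deriving (Eq, Show)

type Bits = [Bool]

-- Tags: 00 = Var (index in unary, terminated by 0), 01 = Lam, 1 = App.
encode :: Term -> Bits
encode (Var n)   = [False, False] ++ replicate n True ++ [False]
encode (Lam t)   = [False, True] ++ encode t
encode (App t u) = True : encode t ++ encode u

-- Decode one term, returning the leftover bits.
decode :: Bits -> Maybe (Term, Bits)
decode (False : False : bs) =
  let (ones, rest) = span id bs
  in case rest of
       (False : bs') -> Just (Var (length ones), bs')
       _             -> Nothing
decode (False : True : bs) = do
  (t, bs') <- decode bs
  Just (Lam t, bs')
decode (True : bs) = do
  (t, bs')  <- decode bs
  (u, bs'') <- decode bs'
  Just (App t u, bs'')
decode _ = Nothing

main :: IO ()
main = do
  let omega = Lam (App (Var 0) (Var 0))
  print (decode (encode omega) == Just (omega, []))  -- prints True
```

The paper's schemes shrink such codes further by using the expected type at each position to narrow the set of admissible next symbols before arithmetic coding.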

Zakian, Timothy A. K. 
ICFP '16: "Ghostbuster: A Tool for Simplifying ..."
Ghostbuster: A Tool for Simplifying and Converting GADTs
Trevor L. McDonell, Timothy A. K. Zakian, Matteo Cimini, and Ryan R. Newton (Indiana University, USA; Oxford University, UK) Generalized Algebraic Datatypes, or simply GADTs, can encode non-trivial properties in the types of their constructors. Once such properties are encoded in a datatype, however, all code manipulating that datatype must provide proof that it maintains these properties in order to typecheck. In this paper, we take a step towards gradualizing these obligations. We introduce a tool, Ghostbuster, that produces simplified versions of GADTs which elide selected type parameters, thereby weakening the guarantees of the simplified datatype in exchange for reducing the obligations necessary to manipulate it. Like ornaments, these simplified datatypes preserve the recursive structure of the original, but unlike ornaments we focus on information-preserving bidirectional transformations. Ghostbuster generates type-safe conversion functions between the original and simplified datatypes, which we prove compose to the identity function. We evaluate a prototype tool for Haskell against thousands of GADTs found on the Hackage package database, generating simpler Haskell'98 datatypes and round-trip conversion functions between the two. @InProceedings{ICFP16p338, author = {Trevor L. McDonell and Timothy A. K. Zakian and Matteo Cimini and Ryan R. Newton}, title = {Ghostbuster: A Tool for Simplifying and Converting GADTs}, booktitle = {Proc.\ ICFP}, publisher = {ACM}, pages = {338--350}, doi = {10.1145/2951913.2951914}, year = {2016}, } Publisher's Version Article Search 
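The kind of simplification Ghostbuster automates can be sketched by hand for a length-indexed vector: the type index is elided into a plain list, with conversion functions whose round-trip is the identity. All names below are illustrative; the tool generates this boilerplate rather than requiring it to be written.

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}
-- Sketch only: hand-written analogue of a Ghostbuster-style
-- simplification, eliding the length index of a vector GADT.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The simplified datatype: the length index is erased.
newtype SomeVec a = SomeVec [a] deriving (Eq, Show)

-- "Ghostbust": drop the index, keeping the recursive structure.
down :: Vec n a -> SomeVec a
down VNil         = SomeVec []
down (VCons x xs) = let SomeVec ys = down xs in SomeVec (x : ys)

-- Recover: repack into a Vec with an existentially hidden index.
data EVec a where
  EVec :: Vec n a -> EVec a

up :: SomeVec a -> EVec a
up (SomeVec [])       = EVec VNil
up (SomeVec (x : xs)) = case up (SomeVec xs) of EVec v -> EVec (VCons x v)

-- The round-trip is the identity on the simplified type.
roundTrip :: SomeVec a -> SomeVec a
roundTrip sv = case up sv of EVec v -> down v

main :: IO ()
main = print (roundTrip (SomeVec [1, 2, 3 :: Int]) == SomeVec [1, 2, 3])  -- prints True
```

The interesting direction is `up`: restoring the elided index requires an existential wrapper, which is exactly the weakening of guarantees the abstract describes.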
119 authors