ICFP 2015 – Author Index
Ağacan, Ömer S.

Edward Z. Yang, Giovanni Campagna, Ömer S. Ağacan, Ahmed El-Hassany, Abhishek Kulkarni, and Ryan R. Newton (Stanford University, USA; Indiana University, USA)

In distributed applications, the transmission of non-contiguous data structures is greatly slowed down by the need to serialize them into a buffer before sending. We describe Compact Normal Forms, an API that allows programmers to explicitly place immutable heap objects into regions, which can both be accessed like ordinary data and be efficiently transmitted over the network. The process of placing objects into compact regions (essentially a copy) is faster than any serializer and can be amortized over a series of functional updates to the data structure in question. We implement this scheme in the Glasgow Haskell Compiler and show that, even with the space expansion attendant to memory-oriented data structure representations, we achieve speedups of 2x to 4x on fast local networks with sufficiently large data structures.

Ahmed, Amal

William J. Bowman and Amal Ahmed (Northeastern University, USA)

The dependency core calculus (DCC) is a framework for studying a variety of dependency analyses (e.g., secure information flow). The key property provided by DCC is noninterference, which guarantees that a low-level observer (attacker) cannot distinguish high-level (protected) computations. The proof of noninterference for DCC suggests a connection to parametricity in System F, which in turn suggests that it should be possible to implement dependency analyses in languages with parametric polymorphism. We present a translation from DCC into Fω and prove that the translation preserves noninterference. To express noninterference in Fω, we define a notion of observer-sensitive equivalence that makes essential use of both first-order and higher-order polymorphism. Our translation provides insights into DCC's type system and shows how DCC can be implemented in a polymorphic language without loss of the noninterference (security) guarantees available in DCC. Our contributions include proof techniques that should be valuable when proving other secure compilation or full abstraction results.

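The connection between parametricity and noninterference that the abstract invokes can be glimpsed with a free-theorem argument even in plain Haskell. The sketch below is only an illustration of that intuition, not the paper's Fω translation; the type `LowObservation` and the function `release` are names invented here.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A computation that is parametric in the type of the secret cannot
-- inspect the secret: by the free theorem for this type, every
-- inhabitant returns the same Int regardless of its input.
type LowObservation = forall secret. secret -> Int

release :: LowObservation
release _ = 42

main :: IO ()
main = print (release "classified", release (99 :: Int))  -- (42,42)
```
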
Amin, Nada

Tiark Rompf and Nada Amin (Purdue University, USA; EPFL, Switzerland)

We present the design and implementation of a SQL query processor that outperforms existing database systems and is written in just about 500 lines of Scala code -- a convincing case study that high-level functional programming can handily beat C for systems-level programming where the last drop of performance matters. The key enabler is a shift in perspective towards generative programming. The core of the query engine is an interpreter for relational algebra operations, written in Scala. Using the open-source LMS Framework (Lightweight Modular Staging), we turn this interpreter into a query compiler with very low effort. To do so, we capitalize on an old and widely known result from partial evaluation known as Futamura projections, which state that a program that can specialize an interpreter to any given input program is equivalent to a compiler. In this pearl, we discuss LMS programming patterns such as mixed-stage data structures (e.g. data records with static schema and dynamic field components) and techniques to generate low-level C code, including specialized data structures and data loading primitives.

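The Futamura-projection idea, specializing an interpreter to a program to obtain a compiled program, can be sketched in miniature in Haskell: rewriting an interpreter so that it traverses the syntax tree once and returns a closure that no longer inspects syntax at run time. This is only a toy analogue under assumed names (`Expr`, `compile`); LMS generates and compiles actual code rather than building closures.

```haskell
import qualified Data.Map as M

data Expr = Lit Int | Var String | Add Expr Expr

type Env = M.Map String Int

-- A plain interpreter walks the syntax tree on every call.
eval :: Expr -> Env -> Int
eval (Lit n)   _   = n
eval (Var x)   env = env M.! x
eval (Add a b) env = eval a env + eval b env

-- "Specialising the interpreter to the program": analyse the tree once,
-- returning a function that pays no interpretive overhead per call.
compile :: Expr -> (Env -> Int)
compile (Lit n)   = \_   -> n
compile (Var x)   = \env -> env M.! x
compile (Add a b) = let fa = compile a
                        fb = compile b
                    in \env -> fa env + fb env

main :: IO ()
main = do
  let prog = Add (Var "x") (Lit 1)
      run  = compile prog   -- the traversal cost is paid once, here
  print (map (\v -> run (M.fromList [("x", v)])) [1, 2, 3])  -- [2,3,4]
```
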
Ariola, Zena M.

Paul Downen, Philip Johnson-Freyd, and Zena M. Ariola (University of Oregon, USA)

Our goal is to develop co-induction from our understanding of induction, putting them on level ground as equal partners for reasoning about programs. We investigate several structures which represent well-founded forms of recursion in programs. These simple structures encapsulate reasoning by primitive and noetherian induction principles, and can be composed together to form complex recursion schemes for programs operating over a wide class of data and co-data types. At its heart, this study is guided by duality: each structure for recursion has a dual form, giving perfectly symmetric pairs of equal and opposite data and co-data types for representing recursion in programs. Duality is brought out through a framework presented in sequent style, which inherently includes control effects that are interpreted logically as classical reasoning principles. To accommodate the presence of effects, we give a calculus parameterized by a notion of strategy, which is strongly normalizing for a wide range of strategies. We also present a more traditional calculus for representing effect-free functional programs, but at the cost of losing some of the founding dualities.

Avanzini, Martin

Martin Avanzini, Ugo Dal Lago, and Georg Moser (University of Bologna, Italy; INRIA, France; University of Innsbruck, Austria)

We show how the complexity of higher-order functional programs can be analysed automatically by applying program transformations to defunctionalised versions of them, and feeding the result to existing tools for the complexity analysis of first-order term rewrite systems. This is done while carefully analysing complexity preservation and reflection of the employed transformations, such that the complexity of the obtained term rewrite system reflects the complexity of the initial program. Further, we describe suitable strategies for the application of the studied transformations and provide ample experimental data for assessing the viability of our method.

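Defunctionalisation, the transformation the abstract builds on, turns a higher-order program into a first-order one by representing each function value as a constructor and interpreting those constructors with a single `apply` function. The Haskell sketch below is an illustrative example of the transformation itself, not the paper's tool chain; all names are invented here.

```haskell
-- Higher-order original: the function argument is a first-class value.
sumWith :: (Int -> Int) -> [Int] -> Int
sumWith f = sum . map f

-- Defunctionalised form: closures become first-order constructors,
-- interpreted by 'apply'. The result is a first-order program of the
-- shape that term-rewriting complexity tools expect.
data Fn = Inc | MulBy Int

apply :: Fn -> Int -> Int
apply Inc       x = x + 1
apply (MulBy k) x = k * x

sumWithD :: Fn -> [Int] -> Int
sumWithD _ []       = 0
sumWithD f (x : xs) = apply f x + sumWithD f xs

main :: IO ()
main = print (sumWith (+ 1) [1, 2, 3], sumWithD Inc [1, 2, 3])  -- (9,9)
```
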
Bagwell, Phil

Nicolas Stucki, Tiark Rompf, Vlad Ureche, and Phil Bagwell (EPFL, Switzerland; Purdue University, USA)

State-of-the-art immutable collections have wildly differing performance characteristics across their operations, often forcing programmers to choose different collection implementations for each task. Thus, changes to the program can invalidate the choice of collections, making code evolution costly. It would be desirable to have a collection that performs well for a broad range of operations. To this end, we present the RRB-Vector, an immutable sequence collection that offers good performance across a large number of sequential and parallel operations. The underlying innovations are: (1) the Relaxed-Radix-Balanced (RRB) tree structure, which allows efficient structural reorganization, and (2) an optimization that exploits spatio-temporal locality on the RRB data structure in order to offset the cost of traversing the tree. In our benchmarks, the RRB-Vector speedup for parallel operations is lower bounded by 7x when executing on 4 CPUs of 8 cores each. The performance for discrete operations, such as appending on either end, or updating and removing elements, is consistently good and compares favorably to the most important immutable sequence collections in the literature and in use today. The memory footprint of RRB-Vector is on par with arrays and an order of magnitude less than competing collections.

Bahr, Patrick

Patrick Bahr, Jost Berthold, and Martin Elsman (University of Copenhagen, Denmark; Commonwealth Bank of Australia, Australia)

Domain-specific languages (DSLs) for complex financial contracts are in practical use in many banks and financial institutions today. Given the level of automation and pervasiveness of software in the sector, the financial domain is immensely sensitive to software bugs. At the same time, there is an increasing need to analyse (and report on) the interaction between multiple parties. In this paper, we present a multi-party contract language that rigorously relegates any artefacts of simulation and computation from its core, which leads to favourable algebraic properties, and therefore allows for formalising domain-specific analyses and transformations using a proof assistant. At the centre of our formalisation is a simple denotational semantics independent of any stochastic aspects. Based on this semantics, we devise certified contract analyses and transformations. In particular, we give a type system, with an accompanying type inference procedure, that statically ensures that contracts follow the principle of causality. Moreover, we devise a reduction semantics that allows us to evolve contracts over time, in accordance with the denotational semantics. From the verified Coq definitions, we automatically extract a Haskell implementation of an embedded contract DSL along with the formally verified contract management functionality. This approach opens a road map towards more reliable contract management software, including the possibility of analysing contracts based on symbolic instead of numeric methods.

Bakst, Alexander

Niki Vazou, Alexander Bakst, and Ranjit Jhala (University of California at San Diego, USA)

We present a notion of bounded quantification for refinement types and show how it expands the expressiveness of refinement typing by using it to develop typed combinators for: (1) relational algebra and safe database access, (2) Floyd-Hoare logic within a state transformer monad equipped with combinators for branching and looping, and (3) using the above to implement a refined IO monad that tracks capabilities and resource usage. This leap in expressiveness comes via a translation to "ghost" functions, which lets us retain the automated and decidable SMT-based checking and inference that makes refinement typing effective in practice.

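For readers unfamiliar with refinement typing, the annotations below show the plain, unbounded flavour in LiquidHaskell style, where refinements live in special `{-@ ... @-}` comments read by the checker. This is a minimal sketch of ordinary refinements only; the bounded quantification that is the paper's contribution layers on top of such types, and the exact annotation syntax here should be treated as an assumption.

```haskell
-- Refinement annotations are ignored by GHC; a refinement checker
-- (e.g. LiquidHaskell) verifies them against the code.

{-@ type Nat = {v:Int | 0 <= v} @-}

{-@ safeDiv :: n:Int -> d:{v:Int | v /= 0} -> Int @-}
safeDiv :: Int -> Int -> Int
safeDiv n d = n `div` d

{-@ size :: [a] -> Nat @-}
size :: [a] -> Int
size []       = 0
size (_ : xs) = 1 + size xs

main :: IO ()
main = print (safeDiv 10 2, size "abc")  -- (5,3)
```
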
Bauman, Spenser

Spenser Bauman, Carl Friedrich Bolz, Robert Hirschfeld, Vasily Kirilichev, Tobias Pape, Jeremy G. Siek, and Sam Tobin-Hochstadt (Indiana University, USA; King's College London, UK; HPI, Germany)

We present Pycket, a high-performance tracing JIT compiler for Racket. Pycket supports a wide variety of the sophisticated features in Racket such as contracts, continuations, classes, structures, dynamic binding, and more. On average, over a standard suite of benchmarks, Pycket outperforms existing compilers, both Racket's JIT and other highly-optimizing Scheme compilers. Further, Pycket provides much better performance for Racket proxies than existing systems, dramatically reducing the overhead of contracts and gradual typing. We validate this claim with performance evaluation on multiple existing benchmark suites. The Pycket implementation is of independent interest as an application of the RPython meta-tracing framework (originally created for PyPy), which automatically generates tracing JIT compilers from interpreters. Prior work on meta-tracing focuses on bytecode interpreters, whereas Pycket is a high-level interpreter based on the CEK abstract machine and operates directly on abstract syntax trees. Pycket supports proper tail calls and first-class continuations. In the setting of a functional language, where recursion and higher-order functions are more prevalent than explicit loops, the most significant performance challenge for a tracing JIT is identifying which control flows constitute a loop---we discuss two strategies for identifying loops and measure their impact.

Berthold, Jost

Patrick Bahr, Jost Berthold, and Martin Elsman (University of Copenhagen, Denmark; Commonwealth Bank of Australia, Australia)

Domain-specific languages (DSLs) for complex financial contracts are in practical use in many banks and financial institutions today. Given the level of automation and pervasiveness of software in the sector, the financial domain is immensely sensitive to software bugs. At the same time, there is an increasing need to analyse (and report on) the interaction between multiple parties. In this paper, we present a multi-party contract language that rigorously relegates any artefacts of simulation and computation from its core, which leads to favourable algebraic properties, and therefore allows for formalising domain-specific analyses and transformations using a proof assistant. At the centre of our formalisation is a simple denotational semantics independent of any stochastic aspects. Based on this semantics, we devise certified contract analyses and transformations. In particular, we give a type system, with an accompanying type inference procedure, that statically ensures that contracts follow the principle of causality. Moreover, we devise a reduction semantics that allows us to evolve contracts over time, in accordance with the denotational semantics. From the verified Coq definitions, we automatically extract a Haskell implementation of an embedded contract DSL along with the formally verified contract management functionality. This approach opens a road map towards more reliable contract management software, including the possibility of analysing contracts based on symbolic instead of numeric methods.

Blanchette, Jasmin Christian

Jasmin Christian Blanchette, Andrei Popescu, and Dmitriy Traytel (INRIA, France; LORIA, France; Middlesex University, UK; TU München, Germany)

This paper presents a formalized framework for defining corecursive functions safely in a total setting, based on corecursion up-to and relational parametricity. The end product is a general corecursor that allows corecursive (and even recursive) calls under "friendly" operations, including constructors. Friendly corecursive functions can be registered as such, thereby increasing the corecursor's expressiveness. The metatheory is formalized in the Isabelle proof assistant and forms the core of a prototype tool. The corecursor is derived from first principles, without requiring new axioms or extensions of the logic.

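The notion of a "friendly" operation can be illustrated with streams in Haskell: a friendly function produces a constructor before consuming more of its corecursive argument, so self-references may appear beneath it and the definition remains productive. This is only an informal analogue; Haskell's laziness does not check productivity, whereas in Isabelle's total setting the friendliness discipline is what justifies accepting such definitions.

```haskell
data Stream a = Cons a (Stream a)

-- Primitive corecursion: the recursive call sits directly under a constructor.
nats :: Stream Integer
nats = go 0 where go n = Cons n (go (n + 1))

-- 'zipS' emits a constructor before demanding more input: a friendly operation.
zipS :: (a -> b -> c) -> Stream a -> Stream b -> Stream c
zipS f (Cons x xs) (Cons y ys) = Cons (f x y) (zipS f xs ys)

tailS :: Stream a -> Stream a
tailS (Cons _ xs) = xs

-- Corecursion up-to: the self-reference appears under the friendly 'zipS'.
fibs :: Stream Integer
fibs = Cons 0 (Cons 1 (zipS (+) fibs (tailS fibs)))

takeS :: Int -> Stream a -> [a]
takeS n (Cons x xs) | n <= 0    = []
                    | otherwise = x : takeS (n - 1) xs

main :: IO ()
main = print (takeS 8 fibs)  -- [0,1,1,2,3,5,8,13]
```
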
Bodik, Rastislav

Rastislav Bodik (University of Washington, USA)

Program synthesis is the contemporary answer to automatic programming. It innovates in two ways: First, it replaces batch automation with interactivity, assisting the programmer in refining the understanding of the programming problem. Second, it produces programs using search in a candidate space rather than by derivation from a specification. Searching for an acceptable program means that we can accommodate incomplete specifications, such as examples. Additionally, search makes synthesis applicable to domains that lack correct-by-construction derivation rules, such as hardware design, education, end-user programming, and systems biology. The future of synthesis rests on four challenges, each presenting an opportunity to develop novel abstractions for "programming with search." Larger scope: today, we synthesize small, flat programs; synthesis of large software will need constructs for modularity and stepwise refinement. New interaction modes: to solicit the specification without simply asking for more examples, we need to impose a structure on the candidate space and explore it in a dialogue. Construction: how to compile a synthesis problem to a search algorithm without building a compiler? Everything is a program: whatever can be phrased as a program can be in principle synthesized. Indeed, we will see synthesis advance from synthesis of plain programs to synthesis of compilers and languages. The latter may include DSLs, type systems, and modeling languages for biology. As such, synthesis could help mechanize the crown jewel of programming languages research --- the design of abstractions --- which has so far been done manually and only by experts.

Bolz, Carl Friedrich

Spenser Bauman, Carl Friedrich Bolz, Robert Hirschfeld, Vasily Kirilichev, Tobias Pape, Jeremy G. Siek, and Sam Tobin-Hochstadt (Indiana University, USA; King's College London, UK; HPI, Germany)

We present Pycket, a high-performance tracing JIT compiler for Racket. Pycket supports a wide variety of the sophisticated features in Racket such as contracts, continuations, classes, structures, dynamic binding, and more. On average, over a standard suite of benchmarks, Pycket outperforms existing compilers, both Racket's JIT and other highly-optimizing Scheme compilers. Further, Pycket provides much better performance for Racket proxies than existing systems, dramatically reducing the overhead of contracts and gradual typing. We validate this claim with performance evaluation on multiple existing benchmark suites. The Pycket implementation is of independent interest as an application of the RPython meta-tracing framework (originally created for PyPy), which automatically generates tracing JIT compilers from interpreters. Prior work on meta-tracing focuses on bytecode interpreters, whereas Pycket is a high-level interpreter based on the CEK abstract machine and operates directly on abstract syntax trees. Pycket supports proper tail calls and first-class continuations. In the setting of a functional language, where recursion and higher-order functions are more prevalent than explicit loops, the most significant performance challenge for a tracing JIT is identifying which control flows constitute a loop---we discuss two strategies for identifying loops and measure their impact.

Bowman, William J.

William J. Bowman and Amal Ahmed (Northeastern University, USA)

The dependency core calculus (DCC) is a framework for studying a variety of dependency analyses (e.g., secure information flow). The key property provided by DCC is noninterference, which guarantees that a low-level observer (attacker) cannot distinguish high-level (protected) computations. The proof of noninterference for DCC suggests a connection to parametricity in System F, which in turn suggests that it should be possible to implement dependency analyses in languages with parametric polymorphism. We present a translation from DCC into Fω and prove that the translation preserves noninterference. To express noninterference in Fω, we define a notion of observer-sensitive equivalence that makes essential use of both first-order and higher-order polymorphism. Our translation provides insights into DCC's type system and shows how DCC can be implemented in a polymorphic language without loss of the noninterference (security) guarantees available in DCC. Our contributions include proof techniques that should be valuable when proving other secure compilation or full abstraction results.

Buiras, Pablo

Pablo Buiras, Dimitrios Vytiniotis, and Alejandro Russo (Chalmers University of Technology, Sweden; Microsoft Research, UK)

Information-Flow Control (IFC) is a well-established approach for allowing untrusted code to manipulate sensitive data without disclosing it. IFC is typically enforced via type systems and static analyses or via dynamic execution monitors. The LIO Haskell library, originating in operating systems research, implements a purely dynamic monitor of the sensitivity level of a computation, particularly suitable when data sensitivity levels are only known at runtime. In this paper, we show how to give programmers the flexibility of deferring IFC checks to runtime (as in LIO), while also providing static guarantees---and the absence of runtime checks---for parts of their programs that can be statically verified (unlike LIO). We present the design and implementation of our approach, HLIO (Hybrid LIO), as an embedding in Haskell that uses a novel technique for deferring IFC checks based on singleton types and constraint polymorphism. We formalize HLIO, prove non-interference, and show how interesting IFC examples can be programmed. Although our motivation is IFC, our technique for deferring constraints goes well beyond and offers a methodology for programmer-controlled hybrid type checking in Haskell.

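The "purely dynamic monitor" that LIO implements can be pictured with a tiny floating-label sketch: the current label of the computation rises whenever labeled data is observed, and writes are only permitted to sinks the current label flows to. The code below is a generic illustration of that idea under assumed names (`Monitor`, `unlabel`, `guardWrite`), not LIO's or HLIO's actual API, and it assumes the mtl package for `Control.Monad.State`.

```haskell
import Control.Monad.State (State, evalState, gets, modify)

-- A two-point security lattice.
data Label = Low | High deriving (Eq, Ord, Show)

data Labeled a = Labeled Label a

-- The monitor tracks the current label of the running computation.
type Monitor = State Label

-- Observing labeled data raises the current label to (at least) its label.
unlabel :: Labeled a -> Monitor a
unlabel (Labeled l x) = modify (max l) >> pure x

-- A write to a sink is allowed only if the current label flows to it.
guardWrite :: Label -> Monitor Bool
guardWrite sink = gets (<= sink)

demo :: Monitor Bool
demo = do
  _secret <- unlabel (Labeled High (42 :: Int))
  guardWrite Low          -- False: the computation has observed High data

main :: IO ()
main = print (evalState demo Low)
```
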
Campagna, Giovanni

Edward Z. Yang, Giovanni Campagna, Ömer S. Ağacan, Ahmed El-Hassany, Abhishek Kulkarni, and Ryan R. Newton (Stanford University, USA; Indiana University, USA)

In distributed applications, the transmission of non-contiguous data structures is greatly slowed down by the need to serialize them into a buffer before sending. We describe Compact Normal Forms, an API that allows programmers to explicitly place immutable heap objects into regions, which can both be accessed like ordinary data and be efficiently transmitted over the network. The process of placing objects into compact regions (essentially a copy) is faster than any serializer and can be amortized over a series of functional updates to the data structure in question. We implement this scheme in the Glasgow Haskell Compiler and show that, even with the space expansion attendant to memory-oriented data structure representations, we achieve speedups of 2x to 4x on fast local networks with sufficiently large data structures.

Chlipala, Adam

Adam Chlipala (Massachusetts Institute of Technology, USA)

High-level scripting languages have become tremendously popular for development of dynamic Web applications. Many programmers appreciate the productivity benefits of automatic storage management, freedom from verbose type annotations, and so on. While it is often possible to improve performance substantially by rewriting an application in C or a similar language, very few programmers bother to do so, because of the consequences for human development effort. This paper describes a compiler that makes it possible to have most of the best of both worlds, coding Web applications in a high-level language but compiling to native code with performance comparable to handwritten C code. The source language is Ur/Web, a domain-specific, purely functional, statically typed language for the Web. Through a coordinated suite of relatively straightforward program analyses and algebraic optimizations, we transform Ur/Web programs into almost-idiomatic C code, with no garbage collection, little unnecessary memory allocation for intermediate values, etc. Our compiler is in production use for commercial Web sites supporting thousands of users, and microbenchmarks demonstrate very competitive performance versus mainstream tools.

Claessen, Koen

Atze van der Ploeg and Koen Claessen (Chalmers University of Technology, Sweden)

We present a new interface for practical Functional Reactive Programming (FRP) that (1) is close in spirit to the original FRP ideas, (2) does not have the original space-leak problems, without using arrows or advanced types, and (3) provides a simple and expressive way for performing IO actions from FRP code. We also provide a denotational semantics for this new interface, and a technique (using Kripke logical relations) for reasoning about which FRP functions may "forget their past", i.e. which functions do not have an inherent space-leak. Finally, we show how we have implemented this interface as a Haskell library called FRPNow.

Dal Lago, Ugo

Martin Avanzini, Ugo Dal Lago, and Georg Moser (University of Bologna, Italy; INRIA, France; University of Innsbruck, Austria)

We show how the complexity of higher-order functional programs can be analysed automatically by applying program transformations to defunctionalised versions of them, and feeding the result to existing tools for the complexity analysis of first-order term rewrite systems. This is done while carefully analysing complexity preservation and reflection of the employed transformations, such that the complexity of the obtained term rewrite system reflects the complexity of the initial program. Further, we describe suitable strategies for the application of the studied transformations and provide ample experimental data for assessing the viability of our method.

Danner, Norman

Norman Danner, Daniel R. Licata, and Ramyaa Ramyaa (Wesleyan University, USA)

A central method for analyzing the asymptotic complexity of a functional program is to extract and then solve a recurrence that expresses evaluation cost in terms of input size. The relevant notion of input size is often specific to a datatype, with measures including the length of a list, the maximum element in a list, and the height of a tree. In this work, we give a formal account of the extraction of cost and size recurrences from higher-order functional programs over inductive datatypes. Our approach allows a wide range of programmer-specified notions of size, and ensures that the extracted recurrences correctly predict evaluation cost. To extract a recurrence from a program, we first make costs explicit by applying a monadic translation from the source language to a complexity language, and then abstract datatype values as sizes. Size abstraction can be done semantically, working in models of the complexity language, or syntactically, by adding rules to a preorder judgement. We give several different models of the complexity language, which support different notions of size. Additionally, we prove by a logical relations argument that recurrences extracted by this process are upper bounds for evaluation cost; the proof is entirely syntactic and therefore applies to all of the models we consider.

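As a concrete instance of the cost-recurrence idea, here is a hand-extracted recurrence for naive list reversal, with size taken to be the length of the input. The extraction below is informal and done by hand; the paper's contribution is making this step formal and automatic for higher-order programs.

```haskell
-- Source program: naive quadratic reverse.
rev :: [a] -> [a]
rev []       = []
rev (x : xs) = rev xs ++ [x]

-- Cost recurrence, with n the length of the input and counting one unit
-- per step of (++):
--
--   T(0) = c0
--   T(n) = T(n - 1) + (n - 1) + c    -- (++) walks the recursive result, length n-1
--
-- which solves to T(n) in O(n^2), the familiar bound for naive reverse.

main :: IO ()
main = print (rev [1 .. 5 :: Int])  -- [5,4,3,2,1]
```
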
Downen, Paul

Paul Downen, Philip Johnson-Freyd, and Zena M. Ariola (University of Oregon, USA)

Our goal is to develop co-induction from our understanding of induction, putting them on level ground as equal partners for reasoning about programs. We investigate several structures which represent well-founded forms of recursion in programs. These simple structures encapsulate reasoning by primitive and noetherian induction principles, and can be composed together to form complex recursion schemes for programs operating over a wide class of data and co-data types. At its heart, this study is guided by duality: each structure for recursion has a dual form, giving perfectly symmetric pairs of equal and opposite data and co-data types for representing recursion in programs. Duality is brought out through a framework presented in sequent style, which inherently includes control effects that are interpreted logically as classical reasoning principles. To accommodate the presence of effects, we give a calculus parameterized by a notion of strategy, which is strongly normalizing for a wide range of strategies. We also present a more traditional calculus for representing effect-free functional programs, but at the cost of losing some of the founding dualities.

Dreyer, Derek

Georg Neis, Chung-Kil Hur, Jan-Oliver Kaiser, Craig McLaughlin, Derek Dreyer, and Viktor Vafeiadis (MPI-SWS, Germany; Seoul National University, South Korea; University of Glasgow, UK)

Compiler verification is essential for the construction of fully verified software, but most prior work (such as CompCert) has focused on verifying whole-program compilers. To support separate compilation and to enable linking of results from different verified compilers, it is important to develop a compositional notion of compiler correctness that is modular (preserved under linking), transitive (supports multi-pass compilation), and flexible (applicable to compilers that use different intermediate languages or employ non-standard program transformations). In this paper, building on prior work of Hur et al., we develop a novel approach to compositional compiler verification based on parametric inter-language simulations (PILS). PILS are modular: they enable compiler verification in a manner that supports separate compilation. PILS are transitive: we use them to verify Pilsner, a simple (but non-trivial) multi-pass optimizing compiler (programmed in Coq) from an ML-like source language S to an assembly-like target language T, going through a CPS-based intermediate language. Pilsner is the first multi-pass compiler for a higher-order imperative language to be compositionally verified. Lastly, PILS are flexible: we use them to additionally verify (1) Zwickel, a direct non-optimizing compiler for S, and (2) a hand-coded self-modifying T module, proven correct w.r.t. an S-level specification. The output of Zwickel and the self-modifying T module can then be safely linked together with the output of Pilsner. Altogether, this has been a significant undertaking, involving several person-years of work and over 55,000 lines of Coq.

Dubach, Christophe

Michel Steuwer, Christian Fensch, Sam Lindley, and Christophe Dubach (University of Edinburgh, UK; University of Münster, Germany; Heriot-Watt University, UK)

Computers have become increasingly complex with the emergence of heterogeneous hardware combining multicore CPUs and GPUs. These parallel systems exhibit tremendous computational power at the cost of increased programming effort, resulting in a tension between performance and code portability. Typically, code is either tuned in a low-level imperative language using hardware-specific optimizations to achieve maximum performance or is written in a high-level, possibly functional, language to achieve portability at the expense of performance. We propose a novel approach aiming to combine high-level programming, code portability, and high performance. Starting from a high-level functional expression, we apply a simple set of rewrite rules to transform it into a low-level functional representation, close to the OpenCL programming model, from which OpenCL code is generated. Our rewrite rules define a space of possible implementations which we automatically explore to generate hardware-specific OpenCL implementations. We formalize our system with a core dependently-typed lambda-calculus along with a denotational semantics which we use to prove the correctness of the rewrite rules. We test our design in practice by implementing a compiler which generates high-performance imperative OpenCL code. Our experiments show that we can automatically derive hardware-specific implementations from simple functional high-level algorithmic expressions offering performance on a par with highly tuned code for multicore CPUs and GPUs written by experts.

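To give a feel for rewrite-rule-driven transformation of functional expressions, the Haskell snippet below uses GHC's RULES pragma to fuse two traversals into one. This is only an analogy: the paper's rules rewrite a functional IR toward the OpenCL programming model rather than GHC Core, and the combinator `mapL` is invented here.

```haskell
-- A combinator defined as an ordinary list function; NOINLINE keeps it
-- around long enough for the rewrite rule to fire.
mapL :: (a -> b) -> [a] -> [b]
mapL = map
{-# NOINLINE mapL #-}

-- A semantics-preserving rewrite: two passes become one.
{-# RULES
"mapL/mapL fusion" forall f g xs. mapL f (mapL g xs) = mapL (f . g) xs
  #-}

main :: IO ()
main = print (mapL (+ 1) (mapL (* 2) [1, 2, 3 :: Int]))  -- [3,5,7]
```
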
Dunfield, Joshua

Joshua Dunfield (University of British Columbia, Canada)

We classify programming languages according to evaluation order: each language fixes one evaluation order as the default, making it transparent to program in that evaluation order, and troublesome to program in the other. This paper develops a type system that is impartial with respect to evaluation order. Evaluation order is implicit in terms, and explicit in types, with by-value and by-name versions of type connectives. A form of intersection type quantifies over evaluation orders, describing code that is agnostic over (that is, polymorphic in) evaluation order. By allowing such generic code, programs can express the by-value and by-name versions of a computation without code duplication. We also formulate a type system that only has by-value connectives, plus a type that generalizes the difference between by-value and by-name connectives: it is either a suspension (by name) or a "no-op" (by value). We show a straightforward encoding of the impartial type system into the more economical one. Then we define an elaboration from the economical language to a call-by-value semantics, and prove that elaborating a well-typed source program, where evaluation order is implicit, produces a well-typed target program where evaluation order is explicit. We also prove a simulation between evaluation of the target program and reductions (either by-value or by-name) in the source program. Finally, we prove that typing, elaboration, and evaluation are faithful to the type annotations given in the source program: if the programmer only writes by-value types, no by-name reductions can occur at run time.

El-Hassany, Ahmed

Edward Z. Yang, Giovanni Campagna, Ömer S. Ağacan, Ahmed El-Hassany, Abhishek Kulkarni, and Ryan R. Newton (Stanford University, USA; Indiana University, USA)

In distributed applications, the transmission of non-contiguous data structures is greatly slowed down by the need to serialize them into a buffer before sending. We describe Compact Normal Forms, an API that allows programmers to explicitly place immutable heap objects into regions, which can both be accessed like ordinary data and be efficiently transmitted over the network. The process of placing objects into compact regions (essentially a copy) is faster than any serializer and can be amortized over a series of functional updates to the data structure in question. We implement this scheme in the Glasgow Haskell Compiler and show that, even with the space expansion attendant to memory-oriented data structure representations, we achieve speedups of 2x to 4x on fast local networks with sufficiently large data structures.

Eliopoulos, Spiridon

Steffen Smolka, Spiridon Eliopoulos, Nate Foster, and Arjun Guha (Cornell University, USA; Inhabited Type, USA; University of Massachusetts at Amherst, USA)

High-level programming languages play a key role in a growing number of networking platforms, streamlining application development and enabling precise formal reasoning about network behavior. Unfortunately, current compilers only handle "local" programs that specify behavior in terms of hop-by-hop forwarding behavior, or modest extensions such as simple paths. To encode richer "global" behaviors, programmers must add extra state -- something that is tricky to get right and makes programs harder to write and maintain. Making matters worse, existing compilers can take tens of minutes to generate the forwarding state for the network, even on relatively small inputs. This forces programmers to waste time working around performance issues or even revert to using hardware-level APIs. This paper presents a new compiler for the NetKAT language that handles rich features including regular paths and virtual networks, and yet is several orders of magnitude faster than previous compilers. The compiler uses symbolic automata to calculate the extra state needed to implement "global" programs, and an intermediate representation based on binary decision diagrams to dramatically improve performance. We describe the design and implementation of three essential compiler stages: from virtual programs (which specify behavior in terms of virtual topologies) to global programs (which specify network-wide behavior in terms of physical topologies), from global programs to local programs (which specify behavior in terms of single-switch behavior), and from local programs to hardware-level forwarding tables. We present results from experiments on real-world benchmarks that quantify performance in terms of compilation time and forwarding table size.

Elsman, Martin

Patrick Bahr, Jost Berthold, and Martin Elsman (University of Copenhagen, Denmark; Commonwealth Bank of Australia, Australia)

Domain-specific languages (DSLs) for complex financial contracts are in practical use in many banks and financial institutions today. Given the level of automation and pervasiveness of software in the sector, the financial domain is immensely sensitive to software bugs. At the same time, there is an increasing need to analyse (and report on) the interaction between multiple parties. In this paper, we present a multi-party contract language that rigorously relegates any artefacts of simulation and computation from its core, which leads to favourable algebraic properties, and therefore allows for formalising domain-specific analyses and transformations using a proof assistant. At the centre of our formalisation is a simple denotational semantics independent of any stochastic aspects. Based on this semantics, we devise certified contract analyses and transformations. In particular, we give a type system, with an accompanying type inference procedure, that statically ensures that contracts follow the principle of causality. Moreover, we devise a reduction semantics that allows us to evolve contracts over time, in accordance with the denotational semantics. From the verified Coq definitions, we automatically extract a Haskell implementation of an embedded contract DSL along with the formally verified contract management functionality. This approach opens a road map towards more reliable contract management software, including the possibility of analysing contracts based on symbolic instead of numeric methods.

Fensch, Christian

Michel Steuwer, Christian Fensch, Sam Lindley, and Christophe Dubach (University of Edinburgh, UK; University of Münster, Germany; Heriot-Watt University, UK)

Computers have become increasingly complex with the emergence of heterogeneous hardware combining multicore CPUs and GPUs. These parallel systems exhibit tremendous computational power at the cost of increased programming effort, resulting in a tension between performance and code portability. Typically, code is either tuned in a low-level imperative language using hardware-specific optimizations to achieve maximum performance or is written in a high-level, possibly functional, language to achieve portability at the expense of performance. We propose a novel approach aiming to combine high-level programming, code portability, and high performance. Starting from a high-level functional expression, we apply a simple set of rewrite rules to transform it into a low-level functional representation, close to the OpenCL programming model, from which OpenCL code is generated. Our rewrite rules define a space of possible implementations which we automatically explore to generate hardware-specific OpenCL implementations. We formalize our system with a core dependently-typed lambda-calculus along with a denotational semantics which we use to prove the correctness of the rewrite rules. We test our design in practice by implementing a compiler which generates high-performance imperative OpenCL code. Our experiments show that we can automatically derive hardware-specific implementations from simple functional high-level algorithmic expressions offering performance on a par with highly tuned code for multicore CPUs and GPUs written by experts.

Fluet, Matthew

Matthew Le and Matthew Fluet (Rochester Institute of Technology, USA)

Software transactional memory (STM) has proven to be a useful abstraction for developing concurrent applications, where programmers denote transactions with an atomic construct that delimits a collection of reads and writes to shared mutable references. The runtime system then guarantees that all transactions are observed to execute atomically with respect to each other. Traditionally, when the runtime system detects that one transaction conflicts with another, it aborts one of the transactions and restarts its execution from the beginning. This can lead to problems with both execution time and throughput. In this paper, we present a novel approach that uses first-class continuations to restart a conflicting transaction at the point of a conflict, avoiding the re-execution of any work from the beginning of the transaction that has not been compromised. In practice, this allows transactions to complete more quickly, decreasing execution time and increasing throughput. We have implemented this idea in the context of the Manticore project, an ML-family language with support for parallelism and concurrency. Crucially, we rely on constant-time continuation capturing via a continuation-passing-style (CPS) transformation and heap-allocated continuations. When comparing our STM that performs partial aborts against one that performs full aborts, we achieve a decrease in execution time of up to 31% and an increase in throughput of up to 351%.

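For readers unfamiliar with the STM programming model the abstract describes, here is what an atomic transaction over shared mutable references looks like in GHC's stm package. This is standard GHC STM used purely as an illustration of the abstraction; the paper's partial-abort mechanism lives in Manticore's runtime and is not shown here.

```haskell
import Control.Concurrent.STM

-- A transfer between two accounts: all reads and writes inside the STM
-- action commit atomically, or the transaction is retried.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 25)
  readTVarIO a >>= print   -- 75
  readTVarIO b >>= print   -- 25
```
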
Fogg, Peter P.

Ryan R. Newton, Peter P. Fogg, and Ali Varamesh (Indiana University, USA)

Purely functional data structures stored inside a mutable variable provide an excellent concurrent data structure: obviously correct, cheap to create, and supporting snapshots. They are not, however, scalable. We provide a way to retain the benefits of these pure-in-a-box data structures while dynamically converting to a more scalable lock-free data structure under contention. Our solution scales to any pair of pure and lock-free container types with key/value set semantics, while retaining lock-freedom. We demonstrate the principle in action on two very different platforms: first in the Glasgow Haskell Compiler and second in Java. To this end we extend GHC to support lock-free data structures and introduce a new approach for safe CAS in a lazy language.

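The "pure-in-a-box" starting point is easy to see in plain Haskell: an immutable map inside an `IORef`, updated with an atomic compare-and-swap-style modify, with O(1) consistent snapshots for free. The sketch below shows only that baseline; the adaptive conversion to a scalable lock-free structure under contention is the paper's contribution and is not reproduced here. The name `PureInABox` is invented for illustration.

```haskell
import Data.IORef
import qualified Data.Map.Strict as M

type PureInABox k v = IORef (M.Map k v)

-- Each update swaps in a new pure map; concurrent updaters serialize on the box.
insert :: Ord k => k -> v -> PureInABox k v -> IO ()
insert k v box = atomicModifyIORef' box (\m -> (M.insert k v m, ()))

-- An O(1) consistent snapshot: just read the current pure map.
snapshot :: PureInABox k v -> IO (M.Map k v)
snapshot = readIORef

main :: IO ()
main = do
  box <- newIORef M.empty
  insert "a" (1 :: Int) box
  insert "b" 2 box
  snapshot box >>= print
```
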
Foster, Nate

Steffen Smolka, Spiridon Eliopoulos, Nate Foster, and Arjun Guha (Cornell University, USA; Inhabited Type, USA; University of Massachusetts at Amherst, USA)

High-level programming languages play a key role in a growing number of networking platforms, streamlining application development and enabling precise formal reasoning about network behavior. Unfortunately, current compilers only handle "local" programs that specify behavior in terms of hop-by-hop forwarding behavior, or modest extensions such as simple paths. To encode richer "global" behaviors, programmers must add extra state -- something that is tricky to get right and makes programs harder to write and maintain. Making matters worse, existing compilers can take tens of minutes to generate the forwarding state for the network, even on relatively small inputs. This forces programmers to waste time working around performance issues or even revert to using hardware-level APIs. This paper presents a new compiler for the NetKAT language that handles rich features including regular paths and virtual networks, and yet is several orders of magnitude faster than previous compilers. The compiler uses symbolic automata to calculate the extra state needed to implement "global" programs, and an intermediate representation based on binary decision diagrams to dramatically improve performance. We describe the design and implementation of three essential compiler stages: from virtual programs (which specify behavior in terms of virtual topologies) to global programs (which specify network-wide behavior in terms of physical topologies), from global programs to local programs (which specify behavior in terms of single-switch behavior), and from local programs to hardware-level forwarding tables. We present results from experiments on real-world benchmarks that quantify performance in terms of compilation time and forwarding table size.

Gaboardi, Marco

Marco Gaboardi and Romain Péchoux (University of Dundee, UK; University of Lorraine, France)

Algebra and coalgebra are widely used to model data types in functional programming languages and proof assistants. Their use makes it possible to better structure computations and to enhance the expressivity of a language or of a proof system. Interestingly, parametric polymorphism à la System F provides a way to encode algebras and coalgebras in strongly normalizing languages without losing the good logical properties of the calculus. Even though these encodings are sometimes unsatisfying because they provide only limited forms of algebras and coalgebras, they give insight into the expressivity of System F in terms of the functions we can program in it. With the goal of contributing to a better understanding of the expressivity of Implicit Computational Complexity systems, we study the problem of defining algebras and coalgebras in the Light Affine Lambda Calculus, a system characterizing the complexity class FPTIME. This system limits the computational complexity of programs but it also limits the ways we can use parametric polymorphism, and in general the way we can write our programs. We show here that while the restrictions imposed by the Light Affine Lambda Calculus pose some issues for the standard System F encodings, they still permit encoding some forms of algebras and coalgebras. Using the algebra encoding, one can define the traditional inductive types in the Light Affine Lambda Calculus. Unfortunately, the corresponding coalgebra encoding permits only a very limited form of coinductive data types. To extend this class we study an extension of the Light Affine Lambda Calculus by distributive laws for the modality §. This extension has been discussed but not studied before.

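The "System F encoding of algebras" the abstract refers to is the familiar Church encoding, which can be written directly in Haskell with rank-2 polymorphism. The sketch below shows it for natural numbers only, as background for the abstract; it says nothing about the Light Affine restrictions that are the paper's actual subject.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The System F encoding of the natural-numbers algebra: a number is
-- whatever can be built from a "zero" and a "successor".
type Nat = forall a. a -> (a -> a) -> a

zero :: Nat
zero = \z _s -> z

suc :: Nat -> Nat
suc n = \z s -> s (n z s)

-- Folding the encoded number back into an ordinary Int.
toInt :: Nat -> Int
toInt n = n 0 (+ 1)

main :: IO ()
main = print (toInt (suc (suc zero)))  -- 2
```
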
Genevès, Pierre

Pierre Genevès and Nils Gesbert (University of Grenoble, France; CNRS, France; INRIA, France)

XQuery is a functional language dedicated to XML data querying and manipulation. As opposed to other W3C-standardized languages for XML (e.g. XSLT), it has been intended to feature strong static typing. Currently, however, some expressions of the language cannot be statically typed with any precision. We argue that this is due to a discrepancy between the semantics of the language and its type algebra: namely, the values of the language are (possibly inner) tree nodes, which may have siblings and ancestors in the data. The types on the other hand are regular tree types, as usual in the XML world: they describe sets of trees. The type associated to a node then corresponds to the subtree whose root is that node and contains no information about the rest of the data. This makes navigation expressions using 'backward axes', which return e.g. the siblings of a node, impossible to type. We discuss how to handle this discrepancy by improving the type system. We describe a logic-based language of extended types able to represent inner tree nodes and show how it can dramatically increase the precision of typing for navigation expressions. We describe how inclusion between these extended types and the classical regular tree types can be decided, allowing a hybrid system combining both type languages. The result is a net increase in precision of typing.

Gesbert, Nils

Pierre Genevès and Nils Gesbert (University of Grenoble, France; CNRS, France; INRIA, France)

XQuery is a functional language dedicated to XML data querying and manipulation. As opposed to other W3C-standardized languages for XML (e.g. XSLT), it has been intended to feature strong static typing. Currently, however, some expressions of the language cannot be statically typed with any precision. We argue that this is due to a discrepancy between the semantics of the language and its type algebra: namely, the values of the language are (possibly inner) tree nodes, which may have siblings and ancestors in the data. The types on the other hand are regular tree types, as usual in the XML world: they describe sets of trees. The type associated to a node then corresponds to the subtree whose root is that node and contains no information about the rest of the data. This makes navigation expressions using 'backward axes', which return e.g. the siblings of a node, impossible to type. We discuss how to handle this discrepancy by improving the type system. We describe a logic-based language of extended types able to represent inner tree nodes and show how it can dramatically increase the precision of typing for navigation expressions. We describe how inclusion between these extended types and the classical regular tree types can be decided, allowing a hybrid system combining both type languages. The result is a net increase in precision of typing.

Guha, Arjun

Steffen Smolka, Spiridon Eliopoulos, Nate Foster, and Arjun Guha (Cornell University, USA; Inhabited Type, USA; University of Massachusetts at Amherst, USA)

High-level programming languages play a key role in a growing number of networking platforms, streamlining application development and enabling precise formal reasoning about network behavior. Unfortunately, current compilers only handle "local" programs that specify behavior in terms of hop-by-hop forwarding behavior, or modest extensions such as simple paths. To encode richer "global" behaviors, programmers must add extra state -- something that is tricky to get right and makes programs harder to write and maintain. Making matters worse, existing compilers can take tens of minutes to generate the forwarding state for the network, even on relatively small inputs. This forces programmers to waste time working around performance issues or even revert to using hardware-level APIs. This paper presents a new compiler for the NetKAT language that handles rich features including regular paths and virtual networks, and yet is several orders of magnitude faster than previous compilers. The compiler uses symbolic automata to calculate the extra state needed to implement "global" programs, and an intermediate representation based on binary decision diagrams to dramatically improve performance. We describe the design and implementation of three essential compiler stages: from virtual programs (which specify behavior in terms of virtual topologies) to global programs (which specify network-wide behavior in terms of physical topologies), from global programs to local programs (which specify behavior in terms of single-switch behavior), and from local programs to hardware-level forwarding tables. We present results from experiments on real-world benchmarks that quantify performance in terms of compilation time and forwarding table size.

Hirschfeld, Robert

Spenser Bauman, Carl Friedrich Bolz, Robert Hirschfeld, Vasily Kirilichev, Tobias Pape, Jeremy G. Siek, and Sam Tobin-Hochstadt (Indiana University, USA; King's College London, UK; HPI, Germany)

We present Pycket, a high-performance tracing JIT compiler for Racket. Pycket supports a wide variety of the sophisticated features in Racket such as contracts, continuations, classes, structures, dynamic binding, and more. On average, over a standard suite of benchmarks, Pycket outperforms existing compilers, both Racket's JIT and other highly-optimizing Scheme compilers. Further, Pycket provides much better performance for Racket proxies than existing systems, dramatically reducing the overhead of contracts and gradual typing. We validate this claim with performance evaluation on multiple existing benchmark suites. The Pycket implementation is of independent interest as an application of the RPython meta-tracing framework (originally created for PyPy), which automatically generates tracing JIT compilers from interpreters. Prior work on meta-tracing focuses on bytecode interpreters, whereas Pycket is a high-level interpreter based on the CEK abstract machine and operates directly on abstract syntax trees. Pycket supports proper tail calls and first-class continuations. In the setting of a functional language, where recursion and higher-order functions are more prevalent than explicit loops, the most significant performance challenge for a tracing JIT is identifying which control flows constitute a loop---we discuss two strategies for identifying loops and measure their impact.

Hur, Chung-Kil

Georg Neis, Chung-Kil Hur, Jan-Oliver Kaiser, Craig McLaughlin, Derek Dreyer, and Viktor Vafeiadis (MPI-SWS, Germany; Seoul National University, South Korea; University of Glasgow, UK)

Compiler verification is essential for the construction of fully verified software, but most prior work (such as CompCert) has focused on verifying whole-program compilers. To support separate compilation and to enable linking of results from different verified compilers, it is important to develop a compositional notion of compiler correctness that is modular (preserved under linking), transitive (supports multi-pass compilation), and flexible (applicable to compilers that use different intermediate languages or employ non-standard program transformations). In this paper, building on prior work of Hur et al., we develop a novel approach to compositional compiler verification based on parametric inter-language simulations (PILS). PILS are modular: they enable compiler verification in a manner that supports separate compilation. PILS are transitive: we use them to verify Pilsner, a simple (but non-trivial) multi-pass optimizing compiler (programmed in Coq) from an ML-like source language S to an assembly-like target language T, going through a CPS-based intermediate language. Pilsner is the first multi-pass compiler for a higher-order imperative language to be compositionally verified. Lastly, PILS are flexible: we use them to additionally verify (1) Zwickel, a direct non-optimizing compiler for S, and (2) a hand-coded self-modifying T module, proven correct w.r.t. an S-level specification. The output of Zwickel and the self-modifying T module can then be safely linked together with the output of Pilsner. Altogether, this has been a significant undertaking, involving several person-years of work and over 55,000 lines of Coq.

Jagannathan, Suresh

He Zhu, Aditya V. Nori, and Suresh Jagannathan (Purdue University, USA; Microsoft Research, USA)

We propose the integration of a random test generation system (capable of discovering program bugs) and a refinement type system (capable of expressing and verifying program invariants), for higher-order functional programs, using a novel lightweight learning algorithm as an effective intermediary between the two. Our approach is based on the well-understood intuition that useful, but difficult to infer, program properties can often be observed from concrete program states generated by tests; these properties act as likely invariants, which if used to refine simple types, can have their validity checked by a refinement type checker. We describe an implementation of our technique for a variety of benchmarks written in ML, and demonstrate its effectiveness in inferring and proving useful invariants for programs that express complex higher-order control and dataflow.

Jaskelioff, Mauro

Mauro Jaskelioff and Exequiel Rivas (CIFASIS-CONICET, Argentina; Universidad Nacional de Rosario, Argentina)

Left-nested list concatenations, left-nested binds on the free monad, and left-nested choices in many non-determinism monads have an algorithmically bad performance. Can we solve this problem without losing the ability to pattern-match on the computation? Surprisingly, there is a deceptively simple solution: use a smart view to pattern-match on the datatype. We introduce the notion of smart view and show how it solves the problem of slow left-nested operations. In particular, we use the technique to obtain fast and simple implementations of lists, of free monads, and of two non-determinism monads.

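The flavour of a smart view can be shown for the list case: keep appends as cheap constructors, and route all pattern matching through a view function that reassociates left-nested appends to the right as it exposes the head. This mirrors the list instance only and uses names invented here; the paper develops the idea further for free monads and non-determinism monads.

```haskell
-- A sequence with O(1) concatenation: building left-nested appends is cheap.
data Seq a = Nil | Unit a | Seq a :++: Seq a

infixr 5 :++:

data View a = Empty | a :< Seq a

-- The smart view: matching goes through 'view', which rotates left-nested
-- appends rightward, keeping a full traversal linear overall.
view :: Seq a -> View a
view Nil                  = Empty
view (Unit x)             = x :< Nil
view (Nil :++: r)         = view r
view (Unit x :++: r)      = x :< r
view ((l :++: l') :++: r) = view (l :++: (l' :++: r))

toList :: Seq a -> [a]
toList s = case view s of
  Empty   -> []
  x :< s' -> x : toList s'

main :: IO ()
main = print (toList (foldl (:++:) Nil (map Unit [1 .. 5 :: Int])))  -- [1,2,3,4,5]
```
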
Jhala, Ranjit

Niki Vazou, Alexander Bakst, and Ranjit Jhala (University of California at San Diego, USA)

We present a notion of bounded quantification for refinement types and show how it expands the expressiveness of refinement typing by using it to develop typed combinators for: (1) relational algebra and safe database access, (2) Floyd-Hoare logic within a state transformer monad equipped with combinators for branching and looping, and (3) using the above to implement a refined IO monad that tracks capabilities and resource usage. This leap in expressiveness comes via a translation to "ghost" functions, which lets us retain the automated and decidable SMT-based checking and inference that makes refinement typing effective in practice.

Johnson-Freyd, Philip

Paul Downen, Philip Johnson-Freyd, and Zena M. Ariola (University of Oregon, USA)

Our goal is to develop co-induction from our understanding of induction, putting them on level ground as equal partners for reasoning about programs. We investigate several structures which represent well-founded forms of recursion in programs. These simple structures encapsulate reasoning by primitive and noetherian induction principles, and can be composed together to form complex recursion schemes for programs operating over a wide class of data and co-data types. At its heart, this study is guided by duality: each structure for recursion has a dual form, giving perfectly symmetric pairs of equal and opposite data and co-data types for representing recursion in programs. Duality is brought out through a framework presented in sequent style, which inherently includes control effects that are interpreted logically as classical reasoning principles. To accommodate the presence of effects, we give a calculus parameterized by a notion of strategy, which is strongly normalizing for a wide range of strategies. We also present a more traditional calculus for representing effect-free functional programs, but at the cost of losing some of the founding dualities.

Jones, Simon Peyton

Georgios Karachalias, Tom Schrijvers, Dimitrios Vytiniotis, and Simon Peyton Jones (Ghent University, Belgium; KU Leuven, Belgium; Microsoft Research, UK)

For ML and Haskell, accurate warnings when a function definition has redundant or missing patterns are mission critical. But today's compilers generate bogus warnings when the programmer uses guards (even simple ones), GADTs, pattern guards, or view patterns. We give the first algorithm that handles all these cases in a single, uniform framework, together with an implementation in GHC, and evidence of its utility in practice.

Kaiser, Jan-Oliver |
![]() Georg Neis, Chung-Kil Hur, Jan-Oliver Kaiser, Craig McLaughlin, Derek Dreyer, and Viktor Vafeiadis (MPI-SWS, Germany; Seoul National University, South Korea; University of Glasgow, UK) Compiler verification is essential for the construction of fully verified software, but most prior work (such as CompCert) has focused on verifying whole-program compilers. To support separate compilation and to enable linking of results from different verified compilers, it is important to develop a compositional notion of compiler correctness that is modular (preserved under linking), transitive (supports multi-pass compilation), and flexible (applicable to compilers that use different intermediate languages or employ non-standard program transformations). In this paper, building on prior work of Hur et al., we develop a novel approach to compositional compiler verification based on parametric inter-language simulations (PILS). PILS are modular: they enable compiler verification in a manner that supports separate compilation. PILS are transitive: we use them to verify Pilsner, a simple (but non-trivial) multi-pass optimizing compiler (programmed in Coq) from an ML-like source language S to an assembly-like target language T, going through a CPS-based intermediate language. Pilsner is the first multi-pass compiler for a higher-order imperative language to be compositionally verified. Lastly, PILS are flexible: we use them to additionally verify (1) Zwickel, a direct non-optimizing compiler for S, and (2) a hand-coded self-modifying T module, proven correct w.r.t. an S-level specification. The output of Zwickel and the self-modifying T module can then be safely linked together with the output of Pilsner. Altogether, this has been a significant undertaking, involving several person-years of work and over 55,000 lines of Coq. ![]()
|
Karachalias, Georgios |
![]() Georgios Karachalias, Tom Schrijvers, Dimitrios Vytiniotis, and Simon Peyton Jones (Ghent University, Belgium; KU Leuven, Belgium; Microsoft Research, UK) For ML and Haskell, accurate warnings when a function definition has redundant or missing patterns are mission critical. But today's compilers generate bogus warnings when the programmer uses guards (even simple ones), GADTs, pattern guards, or view patterns. We give the first algorithm that handles all these cases in a single, uniform framework, together with an implementation in GHC, and evidence of its utility in practice. ![]() |
|
Keil, Matthias |
![]() Matthias Keil and Peter Thiemann (University of Freiburg, Germany) We present an untyped calculus of blame assignment for a higher-order contract system with two new operators: intersection and union. The specification of these operators is based on the corresponding type theoretic constructions. This connection makes intersection and union contracts their inevitable dynamic counterparts with a range of desirable properties and makes them suitable for subsequent integration in a gradual type system. A denotational specification provides the semantics of a contract in terms of two sets: a set of terms satisfying the contract and a set of contexts respecting the contract. This kind of specification for contracts is novel and interesting in its own right. A nondeterministic operational semantics serves as the specification for contract monitoring and for proving its correctness. It is complemented by a deterministic semantics that is closer to an implementation and that is connected to the nondeterministic semantics by simulation. The calculus is the formal basis of TJS, a language embedded, higher-order contract system implemented for JavaScript. ![]() |
|
King, Tim |
![]() Zvonimir Pavlinovic, Tim King, and Thomas Wies (New York University, USA; VERIMAG, France) Compilers for statically typed functional programming languages are notorious for generating confusing type error messages. When the compiler detects a type error, it typically reports the program location where the type checking failed as the source of the error. Since other error sources are not even considered, the actual root cause is often missed. A more adequate approach is to consider all possible error sources and report the most useful one subject to some usefulness criterion. In our previous work, we showed that this approach can be formulated as an optimization problem related to satisfiability modulo theories (SMT). This formulation cleanly separates the heuristic nature of usefulness criteria from the underlying search problem. Unfortunately, algorithms that search for an optimal error source cannot directly use principal types which are crucial for dealing with the exponential-time complexity of the decision problem of polymorphic type checking. In this paper, we present a new algorithm that efficiently finds an optimal error source in a given ill-typed program. Our algorithm uses an improved SMT encoding to cope with the high complexity of polymorphic typing by iteratively expanding the typing constraints from which principal types are derived. The algorithm preserves the clean separation between the heuristics and the actual search. We have implemented our algorithm for OCaml. In our experimental evaluation, we found that the algorithm reduces the running times for optimal type error localization from minutes to seconds and scales better than previous localization algorithms. ![]() |
|
Kirilichev, Vasily |
![]() Spenser Bauman, Carl Friedrich Bolz, Robert Hirschfeld, Vasily Kirilichev, Tobias Pape, Jeremy G. Siek, and Sam Tobin-Hochstadt (Indiana University, USA; Kings College London, UK; HPI, Germany) We present Pycket, a high-performance tracing JIT compiler for Racket. Pycket supports a wide variety of the sophisticated features in Racket such as contracts, continuations, classes, structures, dynamic binding, and more. On average, over a standard suite of benchmarks, Pycket outperforms existing compilers, both Racket's JIT and other highly-optimizing Scheme compilers. Further, Pycket provides much better performance for Racket proxies than existing systems, dramatically reducing the overhead of contracts and gradual typing. We validate this claim with performance evaluation on multiple existing benchmark suites. The Pycket implementation is of independent interest as an application of the RPython meta-tracing framework (originally created for PyPy), which automatically generates tracing JIT compilers from interpreters. Prior work on meta-tracing focuses on bytecode interpreters, whereas Pycket is a high-level interpreter based on the CEK abstract machine and operates directly on abstract syntax trees. Pycket supports proper tail calls and first-class continuations. In the setting of a functional language, where recursion and higher-order functions are more prevalent than explicit loops, the most significant performance challenge for a tracing JIT is identifying which control flows constitute a loop---we discuss two strategies for identifying loops and measure their impact. ![]() |
|
Krishnamurthi, Shriram |
![]() Justin Pombrio and Shriram Krishnamurthi (Brown University, USA) Syntactic sugar is widely used in language implementation. Its benefits are, however, offset by the comprehension problems it presents to programmers once their program has been transformed. In particular, after a transformed program has begun to evaluate (or otherwise be altered by a black-box process), it can become unrecognizable. We present a new approach to _resugaring_ programs, which is the act of reflecting evaluation steps in the core language in terms of the syntactic sugar that the programmer used. Relative to prior work, our approach has two important advances: it handles hygiene, and it allows almost arbitrary rewriting rules (as opposed to restricted patterns). We do this in the context of a DAG representation of programs, rather than more traditional trees. ![]() |
|
Kulkarni, Abhishek |
![]() Edward Z. Yang, Giovanni Campagna, Ömer S. Ağacan, Ahmed El-Hassany, Abhishek Kulkarni, and Ryan R. Newton (Stanford University, USA; Indiana University, USA) In distributed applications, the transmission of non-contiguous data structures is greatly slowed down by the need to serialize them into a buffer before sending. We describe Compact Normal Forms, an API that allows programmers to explicitly place immutable heap objects into regions, which can both be accessed like ordinary data as well as efficiently transmitted over the network. The process of placing objects into compact regions (essentially a copy) is faster than any serializer and can be amortized over a series of functional updates to the data structure in question. We implement this scheme in the Glasgow Haskell Compiler and show that even with the space expansion attendant with memory-oriented data structure representations, we achieve between x2 and x4 speedups on fast local networks with sufficiently large data structures. ![]() |
|
Le, Matthew |
![]() Matthew Le and Matthew Fluet (Rochester Institute of Technology, USA) Software transactional memory (STM) has proven to be a useful abstraction for developing concurrent applications, where programmers denote transactions with an atomic construct that delimits a collection of reads and writes to shared mutable references. The runtime system then guarantees that all transactions are observed to execute atomically with respect to each other. Traditionally, when the runtime system detects that one transaction conflicts with another, it aborts one of the transactions and restarts its execution from the beginning. This can lead to problems with both execution time and throughput. In this paper, we present a novel approach that uses first-class continuations to restart a conflicting transaction at the point of a conflict, avoiding the re-execution of any work from the beginning of the transaction that has not been compromised. In practice, this allows transactions to complete more quickly, decreasing execution time and increasing throughput. We have implemented this idea in the context of the Manticore project, an ML-family language with support for parallelism and concurrency. Crucially, we rely on constant-time continuation capturing via a continuation-passing-style (CPS) transformation and heap-allocated continuations. When comparing our STM that performs partial aborts against one that performs full aborts, we achieve a decrease in execution time of up to 31% and an increase in throughput of up to 351%. ![]() |
|
Licata, Daniel R. |
![]() Norman Danner, Daniel R. Licata, and Ramyaa Ramyaa (Wesleyan University, USA) A central method for analyzing the asymptotic complexity of a functional program is to extract and then solve a recurrence that expresses evaluation cost in terms of input size. The relevant notion of input size is often specific to a datatype, with measures including the length of a list, the maximum element in a list, and the height of a tree. In this work, we give a formal account of the extraction of cost and size recurrences from higher-order functional programs over inductive datatypes. Our approach allows a wide range of programmer-specified notions of size, and ensures that the extracted recurrences correctly predict evaluation cost. To extract a recurrence from a program, we first make costs explicit by applying a monadic translation from the source language to a complexity language, and then abstract datatype values as sizes. Size abstraction can be done semantically, working in models of the complexity language, or syntactically, by adding rules to a preorder judgement. We give several different models of the complexity language, which support different notions of size. Additionally, we prove by a logical relations argument that recurrences extracted by this process are upper bounds for evaluation cost; the proof is entirely syntactic and therefore applies to all of the models we consider. ![]() |
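A small Haskell sketch of the underlying idea of making cost explicit via a monadic translation (an illustration in the spirit of the paper, not its formal complexity language): each recursive call ticks a counter, and the recurrence T(0) = 0, T(n+1) = 1 + T(n) can be read off the instrumented definition.

```haskell
import Control.Monad.Writer (Writer, tell)
import Data.Monoid (Sum (..))

-- A writer-style cost monad: computations also accumulate a step count.
type Cost = Writer (Sum Int)

tick :: Cost ()
tick = tell (Sum 1)

-- length with explicit cost; its cost recurrence is T(n) = n
-- when size is taken to be list length.
lengthC :: [a] -> Cost Int
lengthC []       = pure 0
lengthC (_ : xs) = do
  tick
  n <- lengthC xs
  pure (n + 1)
```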
|
Lindley, Sam |
![]() Michel Steuwer, Christian Fensch, Sam Lindley, and Christophe Dubach (University of Edinburgh, UK; University of Münster, Germany; Heriot-Watt University, UK) Computers have become increasingly complex with the emergence of heterogeneous hardware combining multicore CPUs and GPUs. These parallel systems exhibit tremendous computational power at the cost of increased programming effort resulting in a tension between performance and code portability. Typically, code is either tuned in a low-level imperative language using hardware-specific optimizations to achieve maximum performance or is written in a high-level, possibly functional, language to achieve portability at the expense of performance. We propose a novel approach aiming to combine high-level programming, code portability, and high-performance. Starting from a high-level functional expression we apply a simple set of rewrite rules to transform it into a low-level functional representation, close to the OpenCL programming model, from which OpenCL code is generated. Our rewrite rules define a space of possible implementations which we automatically explore to generate hardware-specific OpenCL implementations. We formalize our system with a core dependently-typed lambda-calculus along with a denotational semantics which we use to prove the correctness of the rewrite rules. We test our design in practice by implementing a compiler which generates high performance imperative OpenCL code. Our experiments show that we can automatically derive hardware-specific implementations from simple functional high-level algorithmic expressions offering performance on a par with highly tuned code for multicore CPUs and GPUs written by experts. ![]() |
|
Matsuda, Kazutaka |
![]() Kazutaka Matsuda and Meng Wang (Tohoku University, Japan; University of Kent, UK) A bidirectional transformation is a pair of mappings between source and view data objects, one in each direction. When the view is modified, the source is updated accordingly with respect to some laws. One way to reduce the development and maintenance effort of bidirectional transformations is to have specialized languages in which the resulting programs are bidirectional by construction---giving rise to the paradigm of bidirectional programming. In this paper, we develop a framework for applicative-style and higher-order bidirectional programming, in which we can write bidirectional transformations as unidirectional programs in standard functional languages, opening up access to the bundle of language features previously only available to conventional unidirectional languages. Our framework essentially bridges two very different approaches of bidirectional programming, namely the lens framework and Voigtländer's semantic bidirectionalization, creating a new programming style that is able to bag benefits from both. ![]() |
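For background, a minimal get/put lens in Haskell of the kind the lens side of this bridge builds on (the names here are illustrative, not the framework's API):

```haskell
-- A lens is a pair of mappings: extract a view from a source, and
-- push an updated view back into the source.
data Lens s v = Lens { get :: s -> v, put :: s -> v -> s }

-- Focus on the first component of a pair.
fstL :: Lens (a, b) a
fstL = Lens fst (\(_, b) a -> (a, b))

-- Well-behavedness laws, informally:
--   put l s (get l s) == s   and   get l (put l s v) == v
```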
|
McLaughlin, Craig |
![]() Georg Neis, Chung-Kil Hur, Jan-Oliver Kaiser, Craig McLaughlin, Derek Dreyer, and Viktor Vafeiadis (MPI-SWS, Germany; Seoul National University, South Korea; University of Glasgow, UK) Compiler verification is essential for the construction of fully verified software, but most prior work (such as CompCert) has focused on verifying whole-program compilers. To support separate compilation and to enable linking of results from different verified compilers, it is important to develop a compositional notion of compiler correctness that is modular (preserved under linking), transitive (supports multi-pass compilation), and flexible (applicable to compilers that use different intermediate languages or employ non-standard program transformations). In this paper, building on prior work of Hur et al., we develop a novel approach to compositional compiler verification based on parametric inter-language simulations (PILS). PILS are modular: they enable compiler verification in a manner that supports separate compilation. PILS are transitive: we use them to verify Pilsner, a simple (but non-trivial) multi-pass optimizing compiler (programmed in Coq) from an ML-like source language S to an assembly-like target language T, going through a CPS-based intermediate language. Pilsner is the first multi-pass compiler for a higher-order imperative language to be compositionally verified. Lastly, PILS are flexible: we use them to additionally verify (1) Zwickel, a direct non-optimizing compiler for S, and (2) a hand-coded self-modifying T module, proven correct w.r.t. an S-level specification. The output of Zwickel and the self-modifying T module can then be safely linked together with the output of Pilsner. Altogether, this has been a significant undertaking, involving several person-years of work and over 55,000 lines of Coq. ![]()
|
Moser, Georg |
![]() Martin Avanzini, Ugo Dal Lago, and Georg Moser (University of Bologna, Italy; INRIA, France; University of Innsbruck, Austria) We show how the complexity of higher-order functional programs can be analysed automatically by applying program transformations to defunctionalised versions of them, and feeding the result to existing tools for the complexity analysis of first-order term rewrite systems. This is done while carefully analysing complexity preservation and reflection of the employed transformations, such that the complexity of the obtained term rewrite system reflects the complexity of the initial program. Further, we describe suitable strategies for the application of the studied transformations and provide ample experimental data for assessing the viability of our method. ![]()
|
Neis, Georg |
![]() Georg Neis, Chung-Kil Hur, Jan-Oliver Kaiser, Craig McLaughlin, Derek Dreyer, and Viktor Vafeiadis (MPI-SWS, Germany; Seoul National University, South Korea; University of Glasgow, UK) Compiler verification is essential for the construction of fully verified software, but most prior work (such as CompCert) has focused on verifying whole-program compilers. To support separate compilation and to enable linking of results from different verified compilers, it is important to develop a compositional notion of compiler correctness that is modular (preserved under linking), transitive (supports multi-pass compilation), and flexible (applicable to compilers that use different intermediate languages or employ non-standard program transformations). In this paper, building on prior work of Hur et al., we develop a novel approach to compositional compiler verification based on parametric inter-language simulations (PILS). PILS are modular: they enable compiler verification in a manner that supports separate compilation. PILS are transitive: we use them to verify Pilsner, a simple (but non-trivial) multi-pass optimizing compiler (programmed in Coq) from an ML-like source language S to an assembly-like target language T, going through a CPS-based intermediate language. Pilsner is the first multi-pass compiler for a higher-order imperative language to be compositionally verified. Lastly, PILS are flexible: we use them to additionally verify (1) Zwickel, a direct non-optimizing compiler for S, and (2) a hand-coded self-modifying T module, proven correct w.r.t. an S-level specification. The output of Zwickel and the self-modifying T module can then be safely linked together with the output of Pilsner. Altogether, this has been a significant undertaking, involving several person-years of work and over 55,000 lines of Coq. ![]()
|
Newton, Ryan R. |
![]() Ryan R. Newton, Peter P. Fogg, and Ali Varamesh (Indiana University, USA) Purely functional data structures stored inside a mutable variable provide an excellent concurrent data structure—obviously correct, cheap to create, and supporting snapshots. They are not, however, scalable. We provide a way to retain the benefits of these pure-in-a-box data structures while dynamically converting to a more scalable lock-free data structure under contention. Our solution scales to any pair of pure and lock-free container types with key/value set semantics, while retaining lock-freedom. We demonstrate the principle in action on two very different platforms: first in the Glasgow Haskell Compiler and second in Java. To this end we extend GHC to support lock-free data structures and introduce a new approach for safe CAS in a lazy language. ![]() ![]() Edward Z. Yang, Giovanni Campagna, Ömer S. Ağacan, Ahmed El-Hassany, Abhishek Kulkarni, and Ryan R. Newton (Stanford University, USA; Indiana University, USA) In distributed applications, the transmission of non-contiguous data structures is greatly slowed down by the need to serialize them into a buffer before sending. We describe Compact Normal Forms, an API that allows programmers to explicitly place immutable heap objects into regions, which can both be accessed like ordinary data as well as efficiently transmitted over the network. The process of placing objects into compact regions (essentially a copy) is faster than any serializer and can be amortized over a series of functional updates to the data structure in question. We implement this scheme in the Glasgow Haskell Compiler and show that even with the space expansion attendant with memory-oriented data structure representations, we achieve between x2 and x4 speedups on fast local networks with sufficiently large data structures. ![]() |
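A minimal Haskell sketch of the "pure-in-a-box" starting point described in the adaptive lock-free data structures paper above (the adaptive conversion itself is not shown): an immutable Map in a mutable cell, updated atomically, which is simple and snapshot-friendly but funnels all writers through one reference.

```haskell
import Data.IORef (IORef, newIORef, readIORef, atomicModifyIORef')
import qualified Data.Map.Strict as M

-- An immutable map stored in a mutable box.
type Box k v = IORef (M.Map k v)

newBox :: IO (Box k v)
newBox = newIORef M.empty

-- Atomic insert: every thread contends on the single reference,
-- which is the scalability bottleneck the paper addresses.
insertBox :: Ord k => Box k v -> k -> v -> IO ()
insertBox ref k v = atomicModifyIORef' ref (\m -> (M.insert k v m, ()))

-- Cheap snapshot: just read the current immutable map.
snapshotBox :: Box k v -> IO (M.Map k v)
snapshotBox = readIORef
```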
|
Nori, Aditya V. |
![]() He Zhu, Aditya V. Nori, and Suresh Jagannathan (Purdue University, USA; Microsoft Research, USA) We propose the integration of a random test generation system (capable of discovering program bugs) and a refinement type system (capable of expressing and verifying program invariants), for higher-order functional programs, using a novel lightweight learning algorithm as an effective intermediary between the two. Our approach is based on the well-understood intuition that useful, but difficult to infer, program properties can often be observed from concrete program states generated by tests; these properties act as likely invariants, which if used to refine simple types, can have their validity checked by a refinement type checker. We describe an implementation of our technique for a variety of benchmarks written in ML, and demonstrate its effectiveness in inferring and proving useful invariants for programs that express complex higher-order control and dataflow. ![]() |
|
Ostermann, Klaus |
![]() Tillmann Rendel, Julia Trieflinger, and Klaus Ostermann (University of Tübingen, Germany) Defunctionalization and refunctionalization establish a correspondence between first-class functions and pattern matching, but the correspondence is not symmetric: Not all uses of pattern matching can be automatically refunctionalized to uses of higher-order functions. To remedy this asymmetry, we generalize from first-class functions to arbitrary codata. This leads us to full defunctionalization and refunctionalization between a codata language based on copattern matching and a data language based on pattern matching. We observe how programs can be written as matrices so that they are modularly extensible in one dimension but not the other. In this representation, defunctionalization and refunctionalization correspond to matrix transposition which effectively changes the dimension of extensibility a program supports. This suggests applications to the expression problem. ![]() |
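For background on the correspondence being generalized here, a textbook defunctionalization example in Haskell (ours, not from the paper): the closures passed to a higher-order function become a first-order data type plus an apply function that dispatches by pattern matching.

```haskell
-- Higher-order: callers pass functions directly.
mapHO :: (a -> b) -> [a] -> [b]
mapHO f = map f

-- Defunctionalized: each closure shape becomes a constructor, and
-- apply interprets it by pattern matching.
data Fun = Inc | AddN Int

apply :: Fun -> Int -> Int
apply Inc      n = n + 1
apply (AddN k) n = n + k

mapDefun :: Fun -> [Int] -> [Int]
mapDefun f = map (apply f)
```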
|
Pape, Tobias |
![]() Spenser Bauman, Carl Friedrich Bolz, Robert Hirschfeld, Vasily Kirilichev, Tobias Pape, Jeremy G. Siek, and Sam Tobin-Hochstadt (Indiana University, USA; Kings College London, UK; HPI, Germany) We present Pycket, a high-performance tracing JIT compiler for Racket. Pycket supports a wide variety of the sophisticated features in Racket such as contracts, continuations, classes, structures, dynamic binding, and more. On average, over a standard suite of benchmarks, Pycket outperforms existing compilers, both Racket's JIT and other highly-optimizing Scheme compilers. Further, Pycket provides much better performance for Racket proxies than existing systems, dramatically reducing the overhead of contracts and gradual typing. We validate this claim with performance evaluation on multiple existing benchmark suites. The Pycket implementation is of independent interest as an application of the RPython meta-tracing framework (originally created for PyPy), which automatically generates tracing JIT compilers from interpreters. Prior work on meta-tracing focuses on bytecode interpreters, whereas Pycket is a high-level interpreter based on the CEK abstract machine and operates directly on abstract syntax trees. Pycket supports proper tail calls and first-class continuations. In the setting of a functional language, where recursion and higher-order functions are more prevalent than explicit loops, the most significant performance challenge for a tracing JIT is identifying which control flows constitute a loop---we discuss two strategies for identifying loops and measure their impact. ![]() |
|
Pavlinovic, Zvonimir |
![]() Zvonimir Pavlinovic, Tim King, and Thomas Wies (New York University, USA; VERIMAG, France) Compilers for statically typed functional programming languages are notorious for generating confusing type error messages. When the compiler detects a type error, it typically reports the program location where the type checking failed as the source of the error. Since other error sources are not even considered, the actual root cause is often missed. A more adequate approach is to consider all possible error sources and report the most useful one subject to some usefulness criterion. In our previous work, we showed that this approach can be formulated as an optimization problem related to satisfiability modulo theories (SMT). This formulation cleanly separates the heuristic nature of usefulness criteria from the underlying search problem. Unfortunately, algorithms that search for an optimal error source cannot directly use principal types which are crucial for dealing with the exponential-time complexity of the decision problem of polymorphic type checking. In this paper, we present a new algorithm that efficiently finds an optimal error source in a given ill-typed program. Our algorithm uses an improved SMT encoding to cope with the high complexity of polymorphic typing by iteratively expanding the typing constraints from which principal types are derived. The algorithm preserves the clean separation between the heuristics and the actual search. We have implemented our algorithm for OCaml. In our experimental evaluation, we found that the algorithm reduces the running times for optimal type error localization from minutes to seconds and scales better than previous localization algorithms. ![]() |
|
Péchoux, Romain |
![]() Marco Gaboardi and Romain Péchoux (University of Dundee, UK; University of Lorraine, France) Algebra and coalgebra are widely used to model data types in functional programming languages and proof assistants. Their use permits better structuring of computations and also enhances the expressivity of a language or of a proof system. Interestingly, parametric polymorphism à la System F provides a way to encode algebras and coalgebras in strongly normalizing languages without losing the good logical properties of the calculus. Even if these encodings are sometimes unsatisfying because they provide only limited forms of algebras and coalgebras, they give insights into the expressivity of System F in terms of functions that we can program in it. With the goal of contributing to a better understanding of the expressivity of Implicit Computational Complexity systems, we study the problem of defining algebras and coalgebras in the Light Affine Lambda Calculus, a system characterizing the complexity class FPTIME. This system limits the computational complexity of programs, but it also limits the ways we can use parametric polymorphism, and in general the way we can write our programs. We show here that while the restrictions imposed by the Light Affine Lambda Calculus pose some issues for the standard System F encodings, they still permit encoding some form of algebra and coalgebra. Using the algebra encoding, one can define the traditional inductive types in the Light Affine Lambda Calculus. Unfortunately, the corresponding coalgebra encoding permits only a very limited form of coinductive data types. To extend this class, we study an extension of the Light Affine Lambda Calculus by distributive laws for the modality §. This extension has been discussed but not studied before. ![]()
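A small Haskell rendering of the kind of System F-style algebra encoding the paper revisits under light affine constraints (this is the standard polymorphic encoding of naturals, shown only for orientation):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church-style encoding of an initial algebra (naturals) using
-- parametric polymorphism à la System F.
newtype Nat = Nat { foldNat :: forall r. r -> (r -> r) -> r }

zero :: Nat
zero = Nat (\z _ -> z)

suc :: Nat -> Nat
suc n = Nat (\z s -> s (foldNat n z s))

-- Interpret the encoded natural as an Int.
toInt :: Nat -> Int
toInt n = foldNat n 0 (+ 1)
```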
|
Ploeg, Atze van der |
![]() Atze van der Ploeg and Koen Claessen (Chalmers University of Technology, Sweden) We present a new interface for practical Functional Reactive Programming (FRP) that (1) is close in spirit to the original FRP ideas, (2) does not have the original space-leak problems, without using arrows or advanced types, and (3) provides a simple and expressive way for performing IO actions from FRP code. We also provide a denotational semantics for this new interface, and a technique (using Kripke logical relations) for reasoning about which FRP functions may "forget their past", i.e. which functions do not have an inherent space-leak. Finally, we show how we have implemented this interface as a Haskell library called FRPNow. ![]() |
|
Pombrio, Justin |
![]() Justin Pombrio and Shriram Krishnamurthi (Brown University, USA) Syntactic sugar is widely used in language implementation. Its benefits are, however, offset by the comprehension problems it presents to programmers once their program has been transformed. In particular, after a transformed program has begun to evaluate (or otherwise be altered by a black-box process), it can become unrecognizable. We present a new approach to _resugaring_ programs, which is the act of reflecting evaluation steps in the core language in terms of the syntactic sugar that the programmer used. Relative to prior work, our approach has two important advances: it handles hygiene, and it allows almost arbitrary rewriting rules (as opposed to restricted patterns). We do this in the context of a DAG representation of programs, rather than more traditional trees. ![]() |
|
Popescu, Andrei |
![]() Jasmin Christian Blanchette, Andrei Popescu, and Dmitriy Traytel (INRIA, France; LORIA, France; Middlesex University, UK; TU München, Germany) This paper presents a formalized framework for defining corecursive functions safely in a total setting, based on corecursion up-to and relational parametricity. The end product is a general corecursor that allows corecursive (and even recursive) calls under "friendly" operations, including constructors. Friendly corecursive functions can be registered as such, thereby increasing the corecursor's expressiveness. The metatheory is formalized in the Isabelle proof assistant and forms the core of a prototype tool. The corecursor is derived from first principles, without requiring new axioms or extensions of the logic. ![]() |
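A familiar Haskell example of the phenomenon this corecursor accounts for (illustrative only; the paper works in Isabelle's total logic): laziness lets us place corecursive calls under operations such as `zipWith`, and justifying such definitions in a total setting is exactly what corecursion up-to "friendly" operations provides.

```haskell
-- Productive only because zipWith behaves as a "friendly" operation:
-- each cons is produced before the corecursive occurrences are demanded.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```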
|
Ramyaa, Ramyaa |
![]() Norman Danner, Daniel R. Licata, and Ramyaa Ramyaa (Wesleyan University, USA) A central method for analyzing the asymptotic complexity of a functional program is to extract and then solve a recurrence that expresses evaluation cost in terms of input size. The relevant notion of input size is often specific to a datatype, with measures including the length of a list, the maximum element in a list, and the height of a tree. In this work, we give a formal account of the extraction of cost and size recurrences from higher-order functional programs over inductive datatypes. Our approach allows a wide range of programmer-specified notions of size, and ensures that the extracted recurrences correctly predict evaluation cost. To extract a recurrence from a program, we first make costs explicit by applying a monadic translation from the source language to a complexity language, and then abstract datatype values as sizes. Size abstraction can be done semantically, working in models of the complexity language, or syntactically, by adding rules to a preorder judgement. We give several different models of the complexity language, which support different notions of size. Additionally, we prove by a logical relations argument that recurrences extracted by this process are upper bounds for evaluation cost; the proof is entirely syntactic and therefore applies to all of the models we consider. ![]() |
|
Rémy, Didier |
![]() Gabriel Scherer and Didier Rémy (INRIA, France) We study the question of whether a given type has a unique inhabitant modulo program equivalence. In the setting of simply-typed lambda-calculus with sums, equipped with the strong βη-equivalence, we show that uniqueness is decidable. We present a saturating focused logic that introduces irreducible cuts on positive types ``as soon as possible''. Backward search in this logic gives an effective algorithm that returns either zero, one or two distinct inhabitants for any given type. Preliminary application studies show that such a feature can be useful in strongly-typed programs, inferring the code of highly-polymorphic library functions, or ``glue code'' inside more complex terms. ![]()
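Two small Haskell types illustrating the question (our examples, not the paper's): the first has a unique inhabitant up to βη-equivalence, so its code could be inferred from the type alone; the second has two distinct inhabitants, so a search must report the ambiguity.

```haskell
-- Unique inhabitant up to βη: the type determines the code.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

-- Not unique: \x _ -> x and \_ y -> y are distinct inhabitants.
pick :: a -> a -> a
pick x _ = x
```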
|
Rendel, Tillmann |
![]() Tillmann Rendel, Julia Trieflinger, and Klaus Ostermann (University of Tübingen, Germany) Defunctionalization and refunctionalization establish a correspondence between first-class functions and pattern matching, but the correspondence is not symmetric: Not all uses of pattern matching can be automatically refunctionalized to uses of higher-order functions. To remedy this asymmetry, we generalize from first-class functions to arbitrary codata. This leads us to full defunctionalization and refunctionalization between a codata language based on copattern matching and a data language based on pattern matching. We observe how programs can be written as matrices so that they are modularly extensible in one dimension but not the other. In this representation, defunctionalization and refunctionalization correspond to matrix transposition which effectively changes the dimension of extensibility a program supports. This suggests applications to the expression problem. ![]() |
|
Rivas, Exequiel |
![]() Mauro Jaskelioff and Exequiel Rivas (CIFASIS-CONICET, Argentina; Universidad Nacional de Rosario, Argentina) Left-nested list concatenations, left-nested binds on the free monad, and left-nested choices in many non-determinism monads have an algorithmically bad performance. Can we solve this problem without losing the ability to pattern-match on the computation? Surprisingly, there is a deceptively simple solution: use a smart view to pattern-match on the datatype. We introduce the notion of smart view and show how it solves the problem of slow left-nested operations. In particular, we use the technique to obtain fast and simple implementations of lists, of free monads, and of two non-determinism monads. ![]() |
|
Rompf, Tiark |
![]() Tiark Rompf and Nada Amin (Purdue University, USA; EPFL, Switzerland) We present the design and implementation of a SQL query processor that outperforms existing database systems and is written in just about 500 lines of Scala code -- a convincing case study that high-level functional programming can handily beat C for systems-level programming where the last drop of performance matters. The key enabler is a shift in perspective towards generative programming. The core of the query engine is an interpreter for relational algebra operations, written in Scala. Using the open-source LMS Framework (Lightweight Modular Staging), we turn this interpreter into a query compiler with very low effort. To do so, we capitalize on an old and widely known result from partial evaluation known as Futamura projections, which state that a program that can specialize an interpreter to any given input program is equivalent to a compiler. In this pearl, we discuss LMS programming patterns such as mixed-stage data structures (e.g. data records with static schema and dynamic field components) and techniques to generate low-level C code, including specialized data structures and data loading primitives. ![]() ![]() ![]() Nicolas Stucki, Tiark Rompf, Vlad Ureche, and Phil Bagwell (EPFL, Switzerland; Purdue University, USA) State-of-the-art immutable collections have wildly differing performance characteristics across their operations, often forcing programmers to choose different collection implementations for each task. Thus, changes to the program can invalidate the choice of collections, making code evolution costly. It would be desirable to have a collection that performs well for a broad range of operations. To this end, we present the RRB-Vector, an immutable sequence collection that offers good performance across a large number of sequential and parallel operations. The underlying innovations are: (1) the Relaxed-Radix-Balanced (RRB) tree structure, which allows efficient structural reorganization, and (2) an optimization that exploits spatio-temporal locality on the RRB data structure in order to offset the cost of traversing the tree. In our benchmarks, the RRB-Vector speedup for parallel operations is lower bounded by 7x when executing on 4 CPUs of 8 cores each. The performance for discrete operations, such as appending on either end, or updating and removing elements, is consistently good and compares favorably to the most important immutable sequence collections in the literature and in use today. The memory footprint of RRB-Vector is on par with arrays and an order of magnitude less than competing collections. ![]() |
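A toy Haskell flavour of the generative-programming shift behind the SQL query compiler pearl above (a sketch only; the paper works in Scala with LMS): the same interpreter structure, written to emit C-like code instead of computing values, acts as a compiler, which is the intuition behind the first Futamura projection.

```haskell
data Expr = Lit Int | Var String | Add Expr Expr

-- Direct interpreter: computes a value in an environment.
eval :: [(String, Int)] -> Expr -> Int
eval _   (Lit n)   = n
eval env (Var x)   = maybe (error ("unbound " ++ x)) id (lookup x env)
eval env (Add a b) = eval env a + eval env b

-- "Staged" variant: the same recursion, but producing target code
-- (here, a C-like expression string) instead of a value.
gen :: Expr -> String
gen (Lit n)   = show n
gen (Var x)   = x
gen (Add a b) = "(" ++ gen a ++ " + " ++ gen b ++ ")"
```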
|
Rossberg, Andreas |
![]() Andreas Rossberg (Google, Germany) ML is two languages in one: there is the core, with types and expressions, and there are modules, with signatures, structures and functors. Modules form a separate, higher-order functional language on top of the core. There are both practical and technical reasons for this stratification; yet, it creates substantial duplication in syntax and semantics, and it reduces expressiveness. For example, selecting a module cannot be made a dynamic decision. Language extensions allowing modules to be packaged up as first-class values have been proposed and implemented in different variations. However, they remedy expressiveness only to some extent, are syntactically cumbersome, and do not alleviate redundancy. We propose a redesign of ML in which modules are truly first-class values, and core and module layer are unified into one language. In this "1ML", functions, functors, and even type constructors are one and the same construct; likewise, no distinction is made between structures, records, or tuples. Or viewed the other way round, everything is just ("a mode of use of") modules. Yet, 1ML does not require dependent types, and its type structure is expressible in terms of plain System Fω, in a minor variation of our F-ing modules approach. We introduce both an explicitly typed version of 1ML, and an extension with Damas/Milner-style implicit quantification. Type inference for this language is not complete, but, we argue, not substantially worse than for Standard ML. An alternative view is that 1ML is a user-friendly surface syntax for System Fω that allows combining term and type abstraction in a more compositional manner than the bare calculus. ![]() |
|
Russo, Alejandro |
![]() Alejandro Russo (Chalmers University of Technology, Sweden) For several decades, researchers from different communities have independently focused on protecting confidentiality of data. Two distinct technologies have emerged for such purposes: Mandatory Access Control (MAC) and Information-Flow Control (IFC)—the former belonging to operating systems (OS) research, while the latter to the programming languages community. These approaches restrict how data gets propagated within a system in order to avoid information leaks. In this scenario, Haskell plays a unique privileged role: it is able to protect confidentiality via libraries. This pearl presents a monadic API which statically protects confidentiality even in the presence of advanced features like exceptions, concurrency, and mutable data structures. Additionally, we present a mechanism to safely extend the library with new primitives, where library designers only need to indicate the read and write effects of new operations. ![]() ![]() Pablo Buiras, Dimitrios Vytiniotis, and Alejandro Russo (Chalmers University of Technology, Sweden; Microsoft Research, UK) Information-Flow Control (IFC) is a well-established approach for allowing untrusted code to manipulate sensitive data without disclosing it. IFC is typically enforced via type systems and static analyses or via dynamic execution monitors. The LIO Haskell library, originating in operating systems research, implements a purely dynamic monitor of the sensitivity level of a computation, particularly suitable when data sensitivity levels are only known at runtime. In this paper, we show how to give programmers the flexibility of deferring IFC checks to runtime (as in LIO), while also providing static guarantees---and the absence of runtime checks---for parts of their programs that can be statically verified (unlike LIO). We present the design and implementation of our approach, HLIO (Hybrid LIO), as an embedding in Haskell that uses a novel technique for deferring IFC checks based on singleton types and constraint polymorphism. We formalize HLIO, prove non-interference, and show how interesting IFC examples can be programmed. Although our motivation is IFC, our technique for deferring constraints goes well beyond and offers a methodology for programmer-controlled hybrid type checking in Haskell. ![]() |
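A minimal Haskell sketch of confidentiality-as-a-library via a phantom security label, in the spirit of the two IFC papers above (names and types are illustrative, not the MAC or HLIO APIs): data carries a static label, and only computations at a compatible level may unlabel it.

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- Security levels as a promoted datatype.
data Level = Low | High

-- A value tagged with a static label; a real library would hide the
-- constructor behind its interface.
newtype Labeled (l :: Level) a = Labeled a

-- Computations running at a given security level.
newtype Sec (l :: Level) a = Sec { runSec :: IO a }

-- A computation at level l may observe data labeled l; a real library
-- would also allow reading below l and control how Sec values are run.
unlabel :: Labeled l a -> Sec l a
unlabel (Labeled a) = Sec (pure a)
```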
|
Sabry, Amr |
![]() Cameron Swords, Amr Sabry, and Sam Tobin-Hochstadt (Indiana University, USA) We present a new approach to contract semantics which expresses myriad monitoring strategies using a small core of foundational communication primitives. This approach allows multiple existing contract monitoring approaches, ranging from Findler and Felleisen’s original model of higher-order contracts to semi-eager, parallel, or asynchronous monitors, to be expressed in a single language built on well-understood constructs. We prove that this approach accurately simulates the original semantics of higher-order contracts. A straightforward implementation in Racket demonstrates the practicality of our approach, which not only enriches existing Racket monitoring strategies, but also supports a new style of monitoring in which collections of contracts collaborate to establish a global invariant. ![]()
|
Scherer, Gabriel |
![]() Gabriel Scherer and Didier Rémy (INRIA, France) We study the question of whether a given type has a unique inhabitant modulo program equivalence. In the setting of simply-typed lambda-calculus with sums, equipped with the strong βη-equivalence, we show that uniqueness is decidable. We present a saturating focused logic that introduces irreducible cuts on positive types ``as soon as possible''. Backward search in this logic gives an effective algorithm that returns either zero, one or two distinct inhabitants for any given type. Preliminary application studies show that such a feature can be useful in strongly-typed programs, inferring the code of highly-polymorphic library functions, or ``glue code'' inside more complex terms. ![]()
|
Schrijvers, Tom |
![]() Georgios Karachalias, Tom Schrijvers, Dimitrios Vytiniotis, and Simon Peyton Jones (Ghent University, Belgium; KU Leuven, Belgium; Microsoft Research, UK) For ML and Haskell, accurate warnings when a function definition has redundant or missing patterns are mission critical. But today's compilers generate bogus warnings when the programmer uses guards (even simple ones), GADTs, pattern guards, or view patterns. We give the first algorithm that handles all these cases in a single, uniform framework, together with an implementation in GHC, and evidence of its utility in practice. ![]() |
|
Sheeran, Mary |
![]() Mary Sheeran (Chalmers University of Technology, Sweden) Higher order functions provide an elegant way to express algorithms designed for implementation in hardware. By showing examples of both classic and new algorithms, I will explain why higher order functions deserve to be studied. Next, I will consider the extent to which ideas from functional programming, and associated formal verification methods, have influenced hardware design in practice. What can we learn from looking back? You might ask "Why are methods of hardware design still important to our community?". Maybe we should just give up? One reason for not giving up is that hardware design is really a form of parallel programming. And here there is still a lot to do! Inspired by Blelloch's wonderful invited talk at ICFP 2010, I still believe that functional programming has much to offer in the central question of how to program the parallel machines of today, and, more particularly, of the future. I will briefly present some of the areas where I think that we are poised to make great contributions. But maybe we need to work harder on getting our act together? ![]() |
|
Siek, Jeremy G. |
![]() Spenser Bauman, Carl Friedrich Bolz, Robert Hirschfeld, Vasily Kirilichev, Tobias Pape, Jeremy G. Siek, and Sam Tobin-Hochstadt (Indiana University, USA; Kings College London, UK; HPI, Germany) We present Pycket, a high-performance tracing JIT compiler for Racket. Pycket supports a wide variety of the sophisticated features in Racket such as contracts, continuations, classes, structures, dynamic binding, and more. On average, over a standard suite of benchmarks, Pycket outperforms existing compilers, both Racket's JIT and other highly-optimizing Scheme compilers. Further, Pycket provides much better performance for Racket proxies than existing systems, dramatically reducing the overhead of contracts and gradual typing. We validate this claim with performance evaluation on multiple existing benchmark suites. The Pycket implementation is of independent interest as an application of the RPython meta-tracing framework (originally created for PyPy), which automatically generates tracing JIT compilers from interpreters. Prior work on meta-tracing focuses on bytecode interpreters, whereas Pycket is a high-level interpreter based on the CEK abstract machine and operates directly on abstract syntax trees. Pycket supports proper tail calls and first-class continuations. In the setting of a functional language, where recursion and higher-order functions are more prevalent than explicit loops, the most significant performance challenge for a tracing JIT is identifying which control flows constitute a loop---we discuss two strategies for identifying loops and measure their impact. ![]() |
|
Smolka, Steffen |
![]() Steffen Smolka, Spiridon Eliopoulos, Nate Foster, and Arjun Guha (Cornell University, USA; Inhabited Type, USA; University of Massachusetts at Amherst, USA) High-level programming languages play a key role in a growing number of networking platforms, streamlining application development and enabling precise formal reasoning about network behavior. Unfortunately, current compilers only handle "local" programs that specify behavior in terms of hop-by-hop forwarding behavior, or modest extensions such as simple paths. To encode richer "global" behaviors, programmers must add extra state -- something that is tricky to get right and makes programs harder to write and maintain. Making matters worse, existing compilers can take tens of minutes to generate the forwarding state for the network, even on relatively small inputs. This forces programmers to waste time working around performance issues or even revert to using hardware-level APIs. This paper presents a new compiler for the NetKAT language that handles rich features including regular paths and virtual networks, and yet is several orders of magnitude faster than previous compilers. The compiler uses symbolic automata to calculate the extra state needed to implement "global" programs, and an intermediate representation based on binary decision diagrams to dramatically improve performance. We describe the design and implementation of three essential compiler stages: from virtual programs (which specify behavior in terms of virtual topologies) to global programs (which specify network-wide behavior in terms of physical topologies), from global programs to local programs (which specify behavior in terms of single-switch behavior), and from local programs to hardware-level forwarding tables. We present results from experiments on real-world benchmarks that quantify performance in terms of compilation time and forwarding table size. ![]() |
|
Sozeau, Matthieu |
![]() Beta Ziliani and Matthieu Sozeau (MPI-SWS, Germany; INRIA, France; University of Paris Diderot, France) Unification is a core component of every proof assistant or programming language featuring dependent types. In many cases, it must deal with higher-order problems up to conversion. Since unification in such conditions is undecidable, unification algorithms may include several heuristics to solve common problems. However, when the stack of heuristics grows large, the result and complexity of the algorithm can become unpredictable. Our contributions are twofold: (1) We present a full description of a new unification algorithm for the Calculus of Inductive Constructions (the base logic of Coq), including universe polymorphism, canonical structures (the overloading mechanism baked into Coq's unification), and a small set of useful heuristics. (2) We implemented our algorithm, and tested it on several libraries, providing evidence that the selected set of heuristics suffices for large developments. ![]() |
|
Steuwer, Michel |
![]() Michel Steuwer, Christian Fensch, Sam Lindley, and Christophe Dubach (University of Edinburgh, UK; University of Münster, Germany; Heriot-Watt University, UK) Computers have become increasingly complex with the emergence of heterogeneous hardware combining multicore CPUs and GPUs. These parallel systems exhibit tremendous computational power at the cost of increased programming effort resulting in a tension between performance and code portability. Typically, code is either tuned in a low-level imperative language using hardware-specific optimizations to achieve maximum performance or is written in a high-level, possibly functional, language to achieve portability at the expense of performance. We propose a novel approach aiming to combine high-level programming, code portability, and high-performance. Starting from a high-level functional expression we apply a simple set of rewrite rules to transform it into a low-level functional representation, close to the OpenCL programming model, from which OpenCL code is generated. Our rewrite rules define a space of possible implementations which we automatically explore to generate hardware-specific OpenCL implementations. We formalize our system with a core dependently-typed lambda-calculus along with a denotational semantics which we use to prove the correctness of the rewrite rules. We test our design in practice by implementing a compiler which generates high performance imperative OpenCL code. Our experiments show that we can automatically derive hardware-specific implementations from simple functional high-level algorithmic expressions offering performance on a par with highly tuned code for multicore CPUs and GPUs written by experts. ![]() |
|
Stucki, Nicolas |
![]() Nicolas Stucki, Tiark Rompf, Vlad Ureche, and Phil Bagwell (EPFL, Switzerland; Purdue University, USA) State-of-the-art immutable collections have wildly differing performance characteristics across their operations, often forcing programmers to choose different collection implementations for each task. Thus, changes to the program can invalidate the choice of collections, making code evolution costly. It would be desirable to have a collection that performs well for a broad range of operations. To this end, we present the RRB-Vector, an immutable sequence collection that offers good performance across a large number of sequential and parallel operations. The underlying innovations are: (1) the Relaxed-Radix-Balanced (RRB) tree structure, which allows efficient structural reorganization, and (2) an optimization that exploits spatio-temporal locality on the RRB data structure in order to offset the cost of traversing the tree. In our benchmarks, the RRB-Vector speedup for parallel operations is lower bounded by 7x when executing on 4 CPUs of 8 cores each. The performance for discrete operations, such as appending on either end, or updating and removing elements, is consistently good and compares favorably to the most important immutable sequence collections in the literature and in use today. The memory footprint of RRB-Vector is on par with arrays and an order of magnitude less than competing collections. ![]() |
|
Swords, Cameron |
![]() Cameron Swords, Amr Sabry, and Sam Tobin-Hochstadt (Indiana University, USA) We present a new approach to contract semantics which expresses myriad monitoring strategies using a small core of foundational communication primitives. This approach allows multiple existing contract monitoring approaches, ranging from Findler and Felleisen’s original model of higher-order contracts to semi-eager, parallel, or asynchronous monitors, to be expressed in a single language built on well-understood constructs. We prove that this approach accurately simulates the original semantics of higher-order contracts. A straightforward implementation in Racket demonstrates the practicality of our approach, which not only enriches existing Racket monitoring strategies, but also supports a new style of monitoring in which collections of contracts collaborate to establish a global invariant. ![]()
|
Thiemann, Peter |
![]() Matthias Keil and Peter Thiemann (University of Freiburg, Germany) We present an untyped calculus of blame assignment for a higher-order contract system with two new operators: intersection and union. The specification of these operators is based on the corresponding type theoretic constructions. This connection makes intersection and union contracts their inevitable dynamic counterparts with a range of desirable properties and makes them suitable for subsequent integration in a gradual type system. A denotational specification provides the semantics of a contract in terms of two sets: a set of terms satisfying the contract and a set of contexts respecting the contract. This kind of specification for contracts is novel and interesting in its own right. A nondeterministic operational semantics serves as the specification for contract monitoring and for proving its correctness. It is complemented by a deterministic semantics that is closer to an implementation and that is connected to the nondeterministic semantics by simulation. The calculus is the formal basis of TJS, a language embedded, higher-order contract system implemented for JavaScript. ![]() |
|
Tobin-Hochstadt, Sam |
![]() Spenser Bauman, Carl Friedrich Bolz, Robert Hirschfeld, Vasily Kirilichev, Tobias Pape, Jeremy G. Siek, and Sam Tobin-Hochstadt (Indiana University, USA; Kings College London, UK; HPI, Germany) We present Pycket, a high-performance tracing JIT compiler for Racket. Pycket supports a wide variety of the sophisticated features in Racket such as contracts, continuations, classes, structures, dynamic binding, and more. On average, over a standard suite of benchmarks, Pycket outperforms existing compilers, both Racket's JIT and other highly-optimizing Scheme compilers. Further, Pycket provides much better performance for Racket proxies than existing systems, dramatically reducing the overhead of contracts and gradual typing. We validate this claim with performance evaluation on multiple existing benchmark suites. The Pycket implementation is of independent interest as an application of the RPython meta-tracing framework (originally created for PyPy), which automatically generates tracing JIT compilers from interpreters. Prior work on meta-tracing focuses on bytecode interpreters, whereas Pycket is a high-level interpreter based on the CEK abstract machine and operates directly on abstract syntax trees. Pycket supports proper tail calls and first-class continuations. In the setting of a functional language, where recursion and higher-order functions are more prevalent than explicit loops, the most significant performance challenge for a tracing JIT is identifying which control flows constitute a loop---we discuss two strategies for identifying loops and measure their impact. ![]() ![]() Cameron Swords, Amr Sabry, and Sam Tobin-Hochstadt (Indiana University, USA) We present a new approach to contract semantics which expresses myriad monitoring strategies using a small core of foundational communication primitives. This approach allows multiple existing contract monitoring approaches, ranging from Findler and Felleisen’s original model of higher-order contracts to semi-eager, parallel, or asynchronous monitors, to be expressed in a single language built on well-understood constructs. We prove that this approach accurately simulates the original semantics of higher-order contracts. A straightforward implementation in Racket demonstrates the practicality of our approach, which not only enriches existing Racket monitoring strategies, but also supports a new style of monitoring in which collections of contracts collaborate to establish a global invariant. ![]()
|
Traytel, Dmitriy |
Jasmin Christian Blanchette, Andrei Popescu, and Dmitriy Traytel (INRIA, France; LORIA, France; Middlesex University, UK; TU München, Germany) This paper presents a formalized framework for defining corecursive functions safely in a total setting, based on corecursion up-to and relational parametricity. The end product is a general corecursor that allows corecursive (and even recursive) calls under "friendly" operations, including constructors. Friendly corecursive functions can be registered as such, thereby increasing the corecursor's expressiveness. The metatheory is formalized in the Isabelle proof assistant and forms the core of a prototype tool. The corecursor is derived from first principles, without requiring new axioms or extensions of the logic. |
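A minimal Haskell sketch of a "friendly" operation (the paper works in Isabelle/HOL; the names below are illustrative): `splus` consumes at most one element of each argument to produce one element of output, so a corecursive call may appear under it and under the stream constructor without compromising productivity.

```haskell
-- Streams and a "friendly" operation: pointwise addition.
data Stream a = Cons a (Stream a)

stail :: Stream a -> Stream a
stail (Cons _ xs) = xs

splus :: Num a => Stream a -> Stream a -> Stream a
splus (Cons x xs) (Cons y ys) = Cons (x + y) (splus xs ys)

-- The corecursive occurrence of fibs sits under Cons and under splus;
-- a corecursor up-to friendly operations accepts such definitions.
fibs :: Stream Integer
fibs = Cons 0 (Cons 1 (splus fibs (stail fibs)))
```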
|
Trieflinger, Julia |
Tillmann Rendel, Julia Trieflinger, and Klaus Ostermann (University of Tübingen, Germany) Defunctionalization and refunctionalization establish a correspondence between first-class functions and pattern matching, but the correspondence is not symmetric: not all uses of pattern matching can be automatically refunctionalized to uses of higher-order functions. To remedy this asymmetry, we generalize from first-class functions to arbitrary codata. This leads us to full defunctionalization and refunctionalization between a codata language based on copattern matching and a data language based on pattern matching. We observe how programs can be written as matrices so that they are modularly extensible in one dimension but not the other. In this representation, defunctionalization and refunctionalization correspond to matrix transposition, which effectively changes the dimension of extensibility a program supports. This suggests applications to the expression problem. |
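The classic first-class-functions end of the correspondence can be sketched in a few lines of Haskell (hypothetical names, not the paper's codata language): defunctionalization turns each function value into a data constructor and application into a pattern match, and refunctionalization is the inverse. In the paper's matrix view, the data version is easy to extend with new operations, the codata version with new cases, and transposition switches between the two.

```haskell
-- Refunctionalized (higher-order) version: predicates are first-class functions.
type Pred = Int -> Bool

greaterThan :: Int -> Pred
greaterThan n = \x -> x > n

isEven :: Pred
isEven = even

-- Defunctionalized version: one constructor per function, one match per application.
data PredSym = GreaterThan Int | IsEven

applyPred :: PredSym -> Int -> Bool
applyPred (GreaterThan n) x = x > n
applyPred IsEven          x = even x
```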
|
Ureche, Vlad |
Nicolas Stucki, Tiark Rompf, Vlad Ureche, and Phil Bagwell (EPFL, Switzerland; Purdue University, USA) State-of-the-art immutable collections have wildly differing performance characteristics across their operations, often forcing programmers to choose different collection implementations for each task. Thus, changes to the program can invalidate the choice of collections, making code evolution costly. It would be desirable to have a collection that performs well for a broad range of operations. To this end, we present the RRB-Vector, an immutable sequence collection that offers good performance across a large number of sequential and parallel operations. The underlying innovations are: (1) the Relaxed-Radix-Balanced (RRB) tree structure, which allows efficient structural reorganization, and (2) an optimization that exploits spatio-temporal locality on the RRB data structure in order to offset the cost of traversing the tree. In our benchmarks, the RRB-Vector speedup for parallel operations is lower bounded by 7x when executing on 4 CPUs of 8 cores each. The performance for discrete operations, such as appending on either end or updating and removing elements, is consistently good and compares favorably to the most important immutable sequence collections in the literature and in use today. The memory footprint of RRB-Vector is on par with arrays and an order of magnitude less than competing collections. |
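A toy skeleton of the tree shape, sketched in Haskell rather than Scala and using lists where the real implementation uses fixed-size arrays (names are illustrative): strict radix nodes allow indexing by pure arithmetic on a uniform child size, while relaxed nodes carry a cumulative-size table so that concatenation may leave slightly uneven children without giving up logarithmic indexing.

```haskell
-- Hypothetical RRB-tree skeleton; real nodes hold up to 32 children in arrays.
data RRB a
  = Leaf [a]              -- a chunk of elements
  | Balanced [RRB a]      -- strict radix node: all children but the last are full
  | Relaxed [Int] [RRB a] -- relaxed node: cumulative child sizes allow uneven children

size :: RRB a -> Int
size (Leaf xs)       = length xs
size (Balanced ts)   = sum (map size ts)
size (Relaxed szs _) = last szs

-- Indexing: balanced nodes divide by the (uniform) child size; relaxed nodes
-- scan the size table to find the child and the offset within it.
index :: RRB a -> Int -> a
index (Leaf xs)        i = xs !! i
index (Balanced ts)    i = let n = size (head ts)
                           in index (ts !! (i `div` n)) (i `mod` n)
index (Relaxed szs ts) i = let j   = length (takeWhile (<= i) szs)
                               off = if j == 0 then 0 else szs !! (j - 1)
                           in index (ts !! j) (i - off)
```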
|
Vafeiadis, Viktor |
Georg Neis, Chung-Kil Hur, Jan-Oliver Kaiser, Craig McLaughlin, Derek Dreyer, and Viktor Vafeiadis (MPI-SWS, Germany; Seoul National University, South Korea; University of Glasgow, UK) Compiler verification is essential for the construction of fully verified software, but most prior work (such as CompCert) has focused on verifying whole-program compilers. To support separate compilation and to enable linking of results from different verified compilers, it is important to develop a compositional notion of compiler correctness that is modular (preserved under linking), transitive (supports multi-pass compilation), and flexible (applicable to compilers that use different intermediate languages or employ non-standard program transformations). In this paper, building on prior work of Hur et al., we develop a novel approach to compositional compiler verification based on parametric inter-language simulations (PILS). PILS are modular: they enable compiler verification in a manner that supports separate compilation. PILS are transitive: we use them to verify Pilsner, a simple (but non-trivial) multi-pass optimizing compiler (programmed in Coq) from an ML-like source language S to an assembly-like target language T, going through a CPS-based intermediate language. Pilsner is the first multi-pass compiler for a higher-order imperative language to be compositionally verified. Lastly, PILS are flexible: we use them to additionally verify (1) Zwickel, a direct non-optimizing compiler for S, and (2) a hand-coded self-modifying T module, proven correct w.r.t. an S-level specification. The output of Zwickel and the self-modifying T module can then be safely linked together with the output of Pilsner. Altogether, this has been a significant undertaking, involving several person-years of work and over 55,000 lines of Coq. |
|
Varamesh, Ali |
Ryan R. Newton, Peter P. Fogg, and Ali Varamesh (Indiana University, USA) Purely functional data structures stored inside a mutable variable provide an excellent concurrent data structure—obviously correct, cheap to create, and supporting snapshots. They are not, however, scalable. We provide a way to retain the benefits of these pure-in-a-box data structures while dynamically converting to a more scalable lock-free data structure under contention. Our solution scales to any pair of pure and lock-free container types with key/value set semantics, while retaining lock-freedom. We demonstrate the principle in action on two very different platforms: first in the Glasgow Haskell Compiler and second in Java. To this end we extend GHC to support lock-free data structures and introduce a new approach for safe CAS in a lazy language. |
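A minimal sketch of the "pure-in-a-box" starting point in Haskell (hypothetical names; this is not the paper's adaptive structure): a persistent `Data.Map` kept in an `IORef` and updated with an atomic modify. Correctness and O(1) snapshots come for free; the single mutable cell is also the contention bottleneck that the paper's dynamic conversion to a lock-free structure is designed to relieve.

```haskell
import Data.IORef
import qualified Data.Map.Strict as Map

-- A persistent map in a mutable box.
type PureMap k v = IORef (Map.Map k v)

newPureMap :: IO (PureMap k v)
newPureMap = newIORef Map.empty

-- All updates funnel through the single IORef, which atomicModifyIORef'
-- updates atomically; under contention this cell becomes the bottleneck.
insert :: Ord k => k -> v -> PureMap k v -> IO ()
insert k v ref = atomicModifyIORef' ref (\m -> (Map.insert k v m, ()))

-- A snapshot is just a read: the returned map is immutable and never changes.
snapshot :: PureMap k v -> IO (Map.Map k v)
snapshot = readIORef
```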
|
Vazou, Niki |
Niki Vazou, Alexander Bakst, and Ranjit Jhala (University of California at San Diego, USA) We present a notion of bounded quantification for refinement types and show how it expands the expressiveness of refinement typing by using it to develop typed combinators for: (1) relational algebra and safe database access, (2) Floyd-Hoare logic within a state transformer monad equipped with combinators for branching and looping, and (3) using the above to implement a refined IO monad that tracks capabilities and resource usage. This leap in expressiveness comes via a translation to "ghost" functions, which lets us retain the automated and decidable SMT-based checking and inference that makes refinement typing effective in practice. |
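For readers unfamiliar with refinement typing, here is a minimal Liquid Haskell-style annotation (the function name is hypothetical and this does not use the paper's bounds): a refinement on the argument rules out division by zero at compile time. The paper's bounded quantification layers constraints between abstract refinements on top of such signatures, which is what makes combinators such as a Floyd-Hoare-style `bind` typable.

```haskell
-- The {-@ ... @-} block is a Liquid Haskell refinement annotation; to GHC it
-- is an ordinary comment, so the module still compiles as plain Haskell.
{-@ safeDiv :: Int -> {d:Int | d /= 0} -> Int @-}
safeDiv :: Int -> Int -> Int
safeDiv x d = x `div` d
```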
|
Vytiniotis, Dimitrios |
Pablo Buiras, Dimitrios Vytiniotis, and Alejandro Russo (Chalmers University of Technology, Sweden; Microsoft Research, UK) Information-Flow Control (IFC) is a well-established approach for allowing untrusted code to manipulate sensitive data without disclosing it. IFC is typically enforced via type systems and static analyses or via dynamic execution monitors. The LIO Haskell library, originating in operating systems research, implements a purely dynamic monitor of the sensitivity level of a computation, particularly suitable when data sensitivity levels are only known at runtime. In this paper, we show how to give programmers the flexibility of deferring IFC checks to runtime (as in LIO), while also providing static guarantees---and the absence of runtime checks---for parts of their programs that can be statically verified (unlike LIO). We present the design and implementation of our approach, HLIO (Hybrid LIO), as an embedding in Haskell that uses a novel technique for deferring IFC checks based on singleton types and constraint polymorphism. We formalize HLIO, prove non-interference, and show how interesting IFC examples can be programmed. Although our motivation is IFC, our technique for deferring constraints goes well beyond and offers a methodology for programmer-controlled hybrid type checking in Haskell.
Georgios Karachalias, Tom Schrijvers, Dimitrios Vytiniotis, and Simon Peyton Jones (Ghent University, Belgium; KU Leuven, Belgium; Microsoft Research, UK) For ML and Haskell, accurate warnings when a function definition has redundant or missing patterns are mission critical. But today's compilers generate bogus warnings when the programmer uses guards (even simple ones), GADTs, pattern guards, or view patterns. We give the first algorithm that handles all these cases in a single, uniform framework, together with an implementation in GHC, and evidence of its utility in practice. |
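The hybrid static/dynamic idea behind HLIO (first abstract above) can be gestured at with a tiny Haskell sketch; the names and the two-point lattice are illustrative, not the HLIO API. The same can-flow-to relation exists once as a type class, so flows between statically known labels incur no runtime check, and once as an ordinary function for labels only known at run time.

```haskell
{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses, FlexibleInstances #-}

-- A two-point security lattice, usable at both the type and the value level.
data Label = Low | High

-- Static side: a solvable constraint means the flow is checked at compile time.
class CanFlowTo (from :: Label) (to :: Label)
instance CanFlowTo 'Low  l
instance CanFlowTo 'High 'High

-- Dynamic side (LIO-style): the same relation checked at run time.
canFlowTo :: Label -> Label -> Bool
canFlowTo Low  _    = True
canFlowTo High High = True
canFlowTo High Low  = False
```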
|
Wang, Meng |
Kazutaka Matsuda and Meng Wang (Tohoku University, Japan; University of Kent, UK) A bidirectional transformation is a pair of mappings between source and view data objects, one in each direction. When the view is modified, the source is updated accordingly with respect to some laws. One way to reduce the development and maintenance effort of bidirectional transformations is to have specialized languages in which the resulting programs are bidirectional by construction---giving rise to the paradigm of bidirectional programming. In this paper, we develop a framework for applicative-style and higher-order bidirectional programming, in which we can write bidirectional transformations as unidirectional programs in standard functional languages, opening up access to the bundle of language features previously only available to conventional unidirectional languages. Our framework essentially bridges two very different approaches to bidirectional programming, namely the lens framework and Voigtländer's semantic bidirectionalization, creating a new programming style that is able to reap the benefits of both. |
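As background for the two approaches being bridged, here is the minimal asymmetric lens that both generalize, in Haskell (illustrative names, not the paper's combinators):

```haskell
-- A get/put lens between a source s and a view v.
data Lens s v = Lens { get :: s -> v, put :: s -> v -> s }

-- Well-behavedness laws (stated, not enforced by the type):
--   get (put s v) == v    (PutGet)
--   put s (get s) == s    (GetPut)

-- A lens focusing on the first component of a pair.
fstLens :: Lens (a, b) a
fstLens = Lens { get = fst, put = \(_, b) a' -> (a', b) }
```

The paper's framework aims to let such transformations be written as ordinary unidirectional programs while still deriving a well-behaved backwards direction.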
|
Wies, Thomas |
Zvonimir Pavlinovic, Tim King, and Thomas Wies (New York University, USA; VERIMAG, France) Compilers for statically typed functional programming languages are notorious for generating confusing type error messages. When the compiler detects a type error, it typically reports the program location where the type checking failed as the source of the error. Since other error sources are not even considered, the actual root cause is often missed. A more adequate approach is to consider all possible error sources and report the most useful one subject to some usefulness criterion. In our previous work, we showed that this approach can be formulated as an optimization problem related to satisfiability modulo theories (SMT). This formulation cleanly separates the heuristic nature of usefulness criteria from the underlying search problem. Unfortunately, algorithms that search for an optimal error source cannot directly use principal types, which are crucial for dealing with the exponential-time complexity of the decision problem of polymorphic type checking. In this paper, we present a new algorithm that efficiently finds an optimal error source in a given ill-typed program. Our algorithm uses an improved SMT encoding to cope with the high complexity of polymorphic typing by iteratively expanding the typing constraints from which principal types are derived. The algorithm preserves the clean separation between the heuristics and the actual search. We have implemented our algorithm for OCaml. In our experimental evaluation, we found that the algorithm reduces the running times for optimal type error localization from minutes to seconds and scales better than previous localization algorithms. |
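A deliberately ill-typed toy example (written in Haskell here, though the paper targets OCaml) of why the reported location and the most useful error source can differ; the definitions are hypothetical:

```haskell
-- This snippet does NOT compile; it exists only to illustrate the problem.
example :: [Int] -> String
example xs =
  let n = length xs + 1      -- n is forced to Int here
  in  "count: " ++ n         -- the compiler reports the mismatch here,
                             -- but the most useful fix may be `show n`
                             -- (or a different type for n) further up
```

An optimal-error-source search, as in the paper, ranks such alternative blame locations under an explicit usefulness criterion instead of always blaming the last constraint that failed.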
|
Yang, Edward Z. |
Edward Z. Yang, Giovanni Campagna, Ömer S. Ağacan, Ahmed El-Hassany, Abhishek Kulkarni, and Ryan R. Newton (Stanford University, USA; Indiana University, USA) In distributed applications, the transmission of non-contiguous data structures is greatly slowed down by the need to serialize them into a buffer before sending. We describe Compact Normal Forms, an API that allows programmers to explicitly place immutable heap objects into regions, which can both be accessed like ordinary data as well as efficiently transmitted over the network. The process of placing objects into compact regions (essentially a copy) is faster than any serializer and can be amortized over a series of functional updates to the data structure in question. We implement this scheme in the Glasgow Haskell Compiler and show that even with the space expansion attendant with memory-oriented data structure representations, we achieve between x2 and x4 speedups on fast local networks with sufficiently large data structures. |
|
Zhu, He |
He Zhu, Aditya V. Nori, and Suresh Jagannathan (Purdue University, USA; Microsoft Research, USA) We propose the integration of a random test generation system (capable of discovering program bugs) and a refinement type system (capable of expressing and verifying program invariants), for higher-order functional programs, using a novel lightweight learning algorithm as an effective intermediary between the two. Our approach is based on the well-understood intuition that useful, but difficult to infer, program properties can often be observed from concrete program states generated by tests; these properties act as likely invariants, which if used to refine simple types, can have their validity checked by a refinement type checker. We describe an implementation of our technique for a variety of benchmarks written in ML, and demonstrate its effectiveness in inferring and proving useful invariants for programs that express complex higher-order control and dataflow. |
|
Ziliani, Beta |
Beta Ziliani and Matthieu Sozeau (MPI-SWS, Germany; INRIA, France; University of Paris Diderot, France) Unification is a core component of every proof assistant or programming language featuring dependent types. In many cases, it must deal with higher-order problems up to conversion. Since unification in such conditions is undecidable, unification algorithms may include several heuristics to solve common problems. However, when the stack of heuristics grows large, the result and complexity of the algorithm can become unpredictable. Our contributions are twofold: (1) We present a full description of a new unification algorithm for the Calculus of Inductive Constructions (the base logic of Coq), including universe polymorphism, canonical structures (the overloading mechanism baked into Coq's unification), and a small set of useful heuristics. (2) We implemented our algorithm, and tested it on several libraries, providing evidence that the selected set of heuristics suffices for large developments. |
101 authors