POPL 2017 – Author Index 

Ahman, Danel 
POPL '17: "Dijkstra Monads for Free ..."
Dijkstra Monads for Free
Danel Ahman, Cătălin Hriţcu, Kenji Maillard, Guido Martínez, Gordon Plotkin, Jonathan Protzenko, Aseem Rastogi, and Nikhil Swamy (University of Edinburgh, UK; Microsoft Research, USA; Inria, France; ENS, France; Rosario National University, Argentina; Microsoft Research, India)
Dijkstra monads enable a dependent type theory to be enhanced with support for specifying and verifying effectful code via weakest preconditions. Together with their closely related counterparts, Hoare monads, they provide the basis on which verification tools like F*, Hoare Type Theory (HTT), and Ynot are built. We show that Dijkstra monads can be derived “for free” by applying a continuation-passing style (CPS) translation to the standard monadic definitions of the underlying computational effects. Automatically deriving Dijkstra monads in this way provides a correct-by-construction and efficient way of reasoning about user-defined effects in dependent type theories. We demonstrate these ideas in EMF*, a new dependently typed calculus, validating it via both formal proof and a prototype implementation within F*. Besides equipping F* with a more uniform and extensible effect system, EMF* enables a novel mixture of intrinsic and extrinsic proofs within F*.
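As a rough illustration of the CPS idea in the abstract above (a toy sketch, not the paper's F* or EMF* development; all names here are invented), CPS-translating the return and bind of a state monad yields a weakest-precondition transformer: a predicate transformer that maps a postcondition over (result, state) to a precondition over states.

```python
# Toy sketch: the state monad in CPS form is a weakest-precondition
# transformer. A postcondition maps (result, state) -> bool; wp(c)(post)(s)
# holds iff running c from state s satisfies post.

def ret(a):            # return: wp is just the postcondition applied to (a, s)
    return lambda post: lambda s: post(a, s)

def bind(wp_m, f):     # sequencing composes predicate transformers
    return lambda post: wp_m(lambda a, s: f(a)(post)(s))

get = lambda post: lambda s: post(s, s)                  # read the state
put = lambda v: (lambda post: lambda s: post(None, v))   # write the state

# increment = get >>= \x -> put (x + 1)
incr = bind(get, lambda x: put(x + 1))

# "the final state is 6" is a valid postcondition from initial state 5
assert incr(lambda _r, s: s == 6)(5) is True
assert incr(lambda _r, s: s == 5)(5) is False
```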

Akella, Aditya 
POPL '17: "Genesis: Synthesizing Forwarding ..."
Genesis: Synthesizing Forwarding Tables in Multi-tenant Networks
Kausik Subramanian, Loris D'Antoni, and Aditya Akella (University of Wisconsin-Madison, USA)
Operators in multi-tenant cloud datacenters require support for diverse and complex end-to-end policies, such as reachability, middlebox traversals, isolation, traffic engineering, and network resource management. We present Genesis, a datacenter network management system which allows policies to be specified in a declarative manner without explicitly programming the network data plane. Genesis tackles the problem of enforcing policies by synthesizing switch forwarding tables. It uses the formal foundations of constraint solving in combination with fast off-the-shelf SMT solvers. To improve synthesis performance, Genesis incorporates a novel search strategy that uses regular expressions to specify properties that leverage the structure of datacenter networks, and a divide-and-conquer synthesis procedure which exploits the structure of policy relationships. We have prototyped Genesis, and conducted experiments with a variety of workloads on real-world topologies to demonstrate its performance.
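To make the flavor of the synthesis problem concrete, here is a deliberately tiny stand-in (brute force over a hypothetical four-switch topology, where Genesis would use an SMT solver): find per-tenant paths that satisfy a reachability policy and a link-level isolation policy.

```python
# Toy stand-in for the synthesis problem (brute force instead of SMT; the
# topology and policies are invented for illustration): pick one A->D path
# per tenant such that the two tenants share no link.
from itertools import product

edges = {("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")}   # 4-switch diamond
paths = [["A", "B", "D"], ["A", "C", "D"]]                 # candidate A->D paths

def links(p):
    return {(p[i], p[i + 1]) for i in range(len(p) - 1)}

def synthesize():
    # reachability: both tenants get an A->D path; isolation: disjoint links
    for p1, p2 in product(paths, paths):
        if links(p1) <= edges and links(p2) <= edges and not (links(p1) & links(p2)):
            return p1, p2
    return None

p1, p2 = synthesize()
assert not (links(p1) & links(p2))   # isolated: no shared links
```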
@InProceedings{POPL17p572,
author = {Kausik Subramanian and Loris D'Antoni and Aditya Akella},
title = {Genesis: Synthesizing Forwarding Tables in Multi-tenant Networks},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {572--585},
doi = {10.1145/3009837.3009845},
year = {2017},
}


Aldrich, Jonathan 
POPL '17: "Hazelnut: A Bidirectionally ..."
Hazelnut: A Bidirectionally Typed Structure Editor Calculus
Cyrus Omar, Ian Voysey, Michael Hilton, Jonathan Aldrich, and Matthew A. Hammer (Carnegie Mellon University, USA; Oregon State University, USA; University of Colorado at Boulder, USA)
Structure editors allow programmers to edit the tree structure of a program directly. This can have cognitive benefits, particularly for novice and end-user programmers. It also simplifies matters for tool designers, because they do not need to contend with malformed program text. This paper introduces Hazelnut, a structure editor based on a small bidirectionally typed lambda calculus extended with holes and a cursor. Hazelnut goes one step beyond syntactic well-formedness: its edit actions operate over statically meaningful incomplete terms. Naïvely, this would force the programmer to construct terms in a rigid “outside-in” manner. To avoid this problem, the action semantics automatically places terms assigned a type that is inconsistent with the expected type inside a hole. This meaningfully defers the type consistency check until the term inside the hole is finished. Hazelnut is not intended as an end-user tool itself. Instead, it serves as a foundational account of typed structure editing. To that end, we describe how Hazelnut’s rich metatheory, which we have mechanized using the Agda proof assistant, serves as a guide when we extend the calculus to include binary sum types. We also discuss various interpretations of holes, and in so doing reveal connections with gradual typing and contextual modal type theory, the Curry-Howard interpretation of contextual modal logic. Finally, we discuss how Hazelnut’s semantics lends itself to implementation as an event-based functional reactive program. Our simple reference implementation is written using js_of_ocaml.
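The key move described above can be sketched in a few lines (an invented toy encoding, not Hazelnut's calculus): when analysis finds a term whose synthesized type is inconsistent with the expected type, the term is wrapped in a hole rather than rejected, and holes synthesize an unknown type consistent with everything.

```python
# Toy sketch of hole-directed bidirectional checking (names are my own).
# Terms: ints, variable names, or ("hole", subterm).

def synth(ctx, e):
    """Synthesize a type for e."""
    if isinstance(e, int): return "num"
    if isinstance(e, str): return ctx[e]
    if e[0] == "hole":     return "?"        # holes synthesize the unknown type
    raise TypeError(e)

def consistent(t1, t2):
    return t1 == t2 or "?" in (t1, t2)       # '?' is consistent with anything

def ana(ctx, e, expected):
    """Analyze e against expected; wrap in a hole instead of failing."""
    return e if consistent(synth(ctx, e), expected) else ("hole", e)

ctx = {"b": "bool"}
assert ana(ctx, 3, "num") == 3                  # fits: left alone
assert ana(ctx, "b", "num") == ("hole", "b")    # inconsistent: placed in a hole
```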

Alglave, Jade 
POPL '17: "Ogre and Pythia: An Invariance ..."
Ogre and Pythia: An Invariance Proof Method for Weak Consistency Models
Jade Alglave and Patrick Cousot (Microsoft Research, UK; University College London, UK; New York University, USA; ENS, France)
We design an invariance proof method for concurrent programs parameterised by a weak consistency model. The calculational design of the invariance proof method is by abstract interpretation of a truly parallel analytic semantics. This generalises the methods by Lamport and Owicki-Gries for sequential consistency. We use cat as an example of a language for writing consistency specifications of both concurrent programs and machine architectures.
@InProceedings{POPL17p3,
author = {Jade Alglave and Patrick Cousot},
title = {Ogre and Pythia: An Invariance Proof Method for Weak Consistency Models},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {3--18},
doi = {10.1145/3009837.3009883},
year = {2017},
}


Amin, Nada 
POPL '17: "LMS-Verify: Abstraction without ..."
LMS-Verify: Abstraction without Regret for Verified Systems Programming
Nada Amin and Tiark Rompf (EPFL, Switzerland; Purdue University, USA)
Performance-critical software is almost always developed in C, as programmers do not trust high-level languages to deliver the same reliable performance. This is bad because low-level code in unsafe languages attracts security vulnerabilities and because development is far less productive, with PL advances mostly lost on programmers operating under tight performance constraints. High-level languages provide memory safety out of the box, but they are deemed too slow and unpredictable for serious system software.
Recent years have seen a surge in staging and generative programming: the key idea is to use high-level languages and their abstraction power as glorified macro systems to compose code fragments in first-order, potentially domain-specific, intermediate languages, from which fast C can be emitted. But what about security? Since the end result is still C code, the safety guarantees of the high-level host language are lost.
In this paper, we extend this generative approach to emit ACSL specifications along with C code. We demonstrate that staging achieves “abstraction without regret” for verification: we show how high-level programming models, in particular higher-order composable contracts from dynamic languages, can be used at generation time to compose and generate first-order specifications that can be statically checked by existing tools. We also show how type classes can automatically attach invariants to data types, reducing the need for repetitive manual annotations.
We evaluate our system on several case studies that varyingly exercise verification of memory safety, overflow safety, and functional correctness. We feature an HTTP parser that is (1) fast, (2) high-level, implemented using staged parser combinators, and (3) secure, with verified memory safety. This result is significant, as input parsing is a key attack vector, and vulnerabilities related to HTTP parsing have been documented in all widely used web servers.
@InProceedings{POPL17p859,
author = {Nada Amin and Tiark Rompf},
title = {LMS-Verify: Abstraction without Regret for Verified Systems Programming},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {859--873},
doi = {10.1145/3009837.3009867},
year = {2017},
}
POPL '17: "Type Soundness Proofs with ..."
Type Soundness Proofs with Definitional Interpreters
Nada Amin and Tiark Rompf (EPFL, Switzerland; Purdue University, USA)
While type soundness proofs are taught in every graduate PL class, the gap between realistic languages and what is accessible to formal proofs is large. In the case of Scala, it has been shown that its formal model, the Dependent Object Types (DOT) calculus, cannot simultaneously support key metatheoretic properties such as environment narrowing and subtyping transitivity, which are usually required for a type soundness proof. Moreover, Scala and many other realistic languages lack a general substitution property. The first contribution of this paper is to demonstrate how type soundness proofs for advanced, polymorphic, type systems can be carried out with an operational semantics based on high-level, definitional interpreters, implemented in Coq. We present the first mechanized soundness proofs in this style for System F and several extensions, including mutable references. Our proofs use only straightforward induction, which is significant, as the combination of big-step semantics, mutable references, and polymorphism is commonly believed to require coinductive proof techniques. The second main contribution of this paper is to show how DOT-like calculi emerge from straightforward generalizations of the operational aspects of F, exposing a rich design space of calculi with path-dependent types in between System F and DOT, which we dub the System D Square. By working directly on the target language, definitional interpreters can focus the design space and expose the invariants that actually matter at runtime. Looking at such runtime invariants is an exciting new avenue for type system design.
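For readers unfamiliar with the term, a definitional interpreter of the kind the abstract advocates is just a big-step evaluator with environments and closures, no substitution. A minimal sketch (untyped lambda calculus with de Bruijn indices, not the paper's Coq development):

```python
# Minimal definitional interpreter: big-step evaluation with environments
# and closures. Terms: ints, ("var", i) de Bruijn index, ("lam", body),
# ("app", f, a).

def eval_(env, t):
    if isinstance(t, int):   return t
    if t[0] == "var":        return env[t[1]]
    if t[0] == "lam":        return ("clo", env, t[1])   # closure captures env
    if t[0] == "app":
        _, cenv, body = eval_(env, t[1])
        arg = eval_(env, t[2])
        return eval_([arg] + cenv, body)                 # extend, don't substitute
    raise ValueError(t)

# (\x. x) 42
assert eval_([], ("app", ("lam", ("var", 0)), 42)) == 42
# (\x. \y. x) 1 2  --  the K combinator
k12 = ("app", ("app", ("lam", ("lam", ("var", 1))), 1), 2)
assert eval_([], k12) == 1
```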

Angiuli, Carlo 
POPL '17: "Computational Higher-Dimensional ..."
Computational Higher-Dimensional Type Theory
Carlo Angiuli, Robert Harper, and Todd Wilson (Carnegie Mellon University, USA; California State University at Fresno, USA)
Formal constructive type theory has proved to be an effective language for mechanized proof. By avoiding non-constructive principles, such as the law of the excluded middle, type theory admits sharper proofs and broader interpretations of results. From a computer science perspective, interest in type theory arises from its applications to programming languages. Standard constructive type theories used in mechanization admit computational interpretations based on metamathematical normalization theorems. These proofs are notoriously brittle; any change to the theory potentially invalidates its computational meaning. As a case in point, Voevodsky's univalence axiom raises questions about the computational meaning of proofs.
We consider the question: Can higher-dimensional type theory be construed as a programming language? We answer this question affirmatively by providing a direct, deterministic operational interpretation for a representative higher-dimensional dependent type theory with higher inductive types and an instance of univalence. Rather than being a formal type theory defined by rules, it is instead a computational type theory in the sense of Martin-Löf's meaning explanations and of the NuPRL semantics. The definition of the type theory starts with programs; types are specifications of program behavior. The main result is a canonicity theorem stating that closed programs of boolean type evaluate to true or false.
@InProceedings{POPL17p680,
author = {Carlo Angiuli and Robert Harper and Todd Wilson},
title = {Computational Higher-Dimensional Type Theory},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {680--693},
doi = {10.1145/3009837.3009861},
year = {2017},
}


Assaf, Mounir 
POPL '17: "Hypercollecting Semantics ..."
Hypercollecting Semantics and Its Application to Static Analysis of Information Flow
Mounir Assaf, David A. Naumann, Julien Signoles, Éric Totel, and Frédéric Tronel (Stevens Institute of Technology, USA; CEA LIST, France; CentraleSupélec, France)
We show how static analysis for secure information flow can be expressed and proved correct entirely within the framework of abstract interpretation. The key idea is to define a Galois connection that directly approximates the hyperproperty of interest. To enable use of such Galois connections, we introduce a fixpoint characterisation of hypercollecting semantics, i.e. a “set of sets” transformer. This makes it possible to systematically derive static analyses for hyperproperties entirely within the calculational framework of abstract interpretation. We evaluate this technique by deriving example static analyses. For qualitative information flow, we derive a dependence analysis similar to the logic of Amtoft and Banerjee (SAS’04) and the type system of Hunt and Sands (POPL’06). For quantitative information flow, we derive a novel cardinality analysis that bounds the leakage conveyed by a program instead of simply deciding whether it exists. This encompasses problems that are hypersafety but not k-safety. We put the framework to use and introduce variations that achieve precision rivalling the most recent and precise static analyses for information flow.

Azevedo de Amorim, Arthur 
POPL '17: "A Semantic Account of Metric ..."
A Semantic Account of Metric Preservation
Arthur Azevedo de Amorim, Marco Gaboardi, Justin Hsu, Shinya Katsumata, and Ikram Cherigui (University of Pennsylvania, USA; SUNY Buffalo, USA; Kyoto University, Japan; ENS, France)
Program sensitivity measures how robust a program is to small changes in its input, and is a fundamental notion in domains ranging from differential privacy to cyber-physical systems. A natural way to formalize program sensitivity is in terms of metrics on the input and output spaces, requiring that an r-sensitive function map inputs that are at distance d to outputs that are at distance at most r · d. Program sensitivity is thus an analogue of Lipschitz continuity for programs. Reed and Pierce introduced Fuzz, a functional language with a linear type system that can express program sensitivity. They show soundness operationally, in the form of a metric preservation property. Inspired by their work, we study program sensitivity and metric preservation from a denotational point of view. In particular, we introduce metric CPOs, a novel semantic structure for reasoning about computation on metric spaces, by endowing CPOs with a compatible notion of distance. This structure is useful for reasoning about metric properties of programs, and specifically about program sensitivity. We demonstrate metric CPOs by giving a model for the deterministic fragment of Fuzz.
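The sensitivity property defined above is easy to state executably. A small sketch (my own illustration over the reals with the usual metric, not the paper's semantics): an r-sensitive function maps inputs at distance d to outputs at distance at most r·d.

```python
# Empirical check of r-sensitivity on sample input pairs: for every pair
# (x, y), the output distance must be bounded by r times the input distance.

def is_r_sensitive(f, r, samples):
    return all(abs(f(x) - f(y)) <= r * abs(x - y) for x, y in samples)

double = lambda x: 2 * x      # 2-sensitive (Lipschitz constant 2)
square = lambda x: x * x      # not r-sensitive for any fixed r on the reals

pairs = [(0.0, 1.0), (3.0, 5.0), (-2.0, 2.0)]
assert is_r_sensitive(double, 2, pairs)
assert not is_r_sensitive(square, 2, pairs)   # |9 - 25| = 16 > 2 * |3 - 5|
```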

Baranowski, Mark 
POPL '17: "Rigorous Floating-Point Mixed-Precision ..."
Rigorous Floating-Point Mixed-Precision Tuning
Wei-Fan Chiang, Mark Baranowski, Ian Briggs, Alexey Solovyev, Ganesh Gopalakrishnan, and Zvonimir Rakamarić (University of Utah, USA)
Virtually all real-valued computations are carried out using floating-point data types and operations. The precision of these data types must be set with the goals of reducing the overall roundoff error, but also emphasizing performance improvements. Often, a mixed-precision allocation achieves this optimum; unfortunately, there are no techniques available to compute such allocations and conservatively meet a given error target across all program inputs. In this work, we present a rigorous approach to precision allocation based on formal analysis via Symbolic Taylor Expansions, and error analysis based on interval functions. This approach is implemented in an automated tool called FPTuner that generates and solves a quadratically constrained quadratic program to obtain a precision-annotated version of the given expression. FPTuner automatically introduces all the requisite precision up and down casting operations. It also allows users to flexibly control precision allocation using constraints to cap the number of high precision operators as well as group operators to allocate the same precision to facilitate vectorization. We evaluate FPTuner by tuning several benchmarks and measuring the proportion of lower precision operators allocated as we increase the error threshold. We also measure the reduction in energy consumption resulting from executing mixed-precision tuned code on a real hardware platform. We observe significant energy savings in response to mixed-precision tuning, but also observe situations where unexpected compiler behaviors thwart intended optimizations.
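The precision/error trade-off that FPTuner navigates can be demonstrated with the standard library alone (a quick illustration of the phenomenon, not of FPTuner's analysis): rounding every intermediate of a summation to binary32 visibly increases the accumulated roundoff error relative to an all-binary64 evaluation.

```python
# Demonstrate roundoff growth at lower precision: Python floats are binary64;
# struct's "f" format rounds a value to the nearest binary32.
import struct

def to_f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

xs = [0.1] * 1000
double_sum = sum(xs)                 # every operation in binary64
single_sum = 0.0
for x in xs:
    single_sum = to_f32(single_sum + to_f32(x))   # binary32 at every step

err = abs(single_sum - double_sum)
assert err > 0                # lower precision visibly increases roundoff
assert err < 1e-2             # ...but stays within a coarse bound here
```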
@InProceedings{POPL17p300,
author = {Wei-Fan Chiang and Mark Baranowski and Ian Briggs and Alexey Solovyev and Ganesh Gopalakrishnan and Zvonimir Rakamarić},
title = {Rigorous Floating-Point Mixed-Precision Tuning},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {300--315},
doi = {10.1145/3009837.3009846},
year = {2017},
}


Barthe, Gilles 
POPL '17: "Coupling Proofs Are Probabilistic ..."
Coupling Proofs Are Probabilistic Product Programs
Gilles Barthe, Benjamin Grégoire, Justin Hsu, and Pierre-Yves Strub (IMDEA Software Institute, Spain; Inria, France; University of Pennsylvania, USA; École Polytechnique, France)
Couplings are a powerful mathematical tool for reasoning about pairs of probabilistic processes. Recent developments in formal verification identify a close connection between couplings and pRHL, a relational program logic motivated by applications to provable security, enabling formal construction of couplings from the probability theory literature. However, existing work using pRHL merely shows existence of a coupling and does not give a way to prove quantitative properties about the coupling, needed to reason about mixing and convergence of probabilistic processes. Furthermore, pRHL is inherently incomplete, and is not able to capture some advanced forms of couplings such as shift couplings. We address both problems as follows.
First, we define an extension of pRHL, called xpRHL, which explicitly constructs the coupling in a pRHL derivation in the form of a probabilistic product program that simulates two correlated runs of the original program. Existing verification tools for probabilistic programs can then be directly applied to the probabilistic product to prove quantitative properties of the coupling. Second, we equip xpRHL with a new rule for while loops, where reasoning can freely mix synchronized and unsynchronized loop iterations. Our proof rule can capture examples of shift couplings, and the logic is relatively complete for deterministic programs.
We show soundness of xpRHL and use it to analyze two classes of examples. First, we verify rapid mixing using different tools from coupling: standard coupling, shift coupling, and path coupling, a compositional principle for combining local couplings into a global coupling. Second, we verify (approximate) equivalence between a source and an optimized program for several instances of loop optimizations from the literature.
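The central construction above, a product program that runs two probabilistic programs on shared randomness and thereby makes a coupling explicit, can be illustrated with a classic toy example (my own, not from the paper): a monotone coupling of two biased coins.

```python
# Toy "product program": one correlated step of two programs driven by the
# same uniform sample u. Under this coupling, a Bernoulli(0.3) coin can
# never come up heads when the Bernoulli(0.7) coin comes up tails, which
# witnesses stochastic domination.
import random

def product_program(u):
    flip_a = u < 0.3          # program A: Bernoulli(0.3)
    flip_b = u < 0.7          # program B: Bernoulli(0.7)
    return flip_a, flip_b

random.seed(0)
for _ in range(10_000):
    a, b = product_program(random.random())
    assert a <= b             # coupling invariant: A heads implies B heads
```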
@InProceedings{POPL17p161,
author = {Gilles Barthe and Benjamin Grégoire and Justin Hsu and Pierre-Yves Strub},
title = {Coupling Proofs Are Probabilistic Product Programs},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {161--174},
doi = {10.1145/3009837.3009896},
year = {2017},
}
POPL '17: "Relational Cost Analysis ..."
Relational Cost Analysis
Ezgi Çiçek, Gilles Barthe, Marco Gaboardi, Deepak Garg, and Jan Hoffmann (MPI-SWS, Germany; IMDEA Software Institute, Spain; SUNY Buffalo, USA; Carnegie Mellon University, USA)
Establishing quantitative bounds on the execution cost of programs is essential in many areas of computer science such as complexity analysis, compiler optimizations, security and privacy. Techniques based on program analysis, type systems and abstract interpretation are well-studied, but methods for analyzing how the execution costs of two programs compare to each other have not received attention. Naively combining the worst and best case execution costs of the two programs does not work well in many cases because such analysis forgets the similarities between the programs or the inputs. In this work, we propose a relational cost analysis technique that is capable of establishing precise bounds on the difference in the execution cost of two programs by making use of relational properties of programs and inputs. We develop RelCost, a refinement type and effect system for a higher-order functional language with recursion and subtyping. The key novelty of our technique is the combination of relational refinements with two modes of typing: relational typing for reasoning about similar computations/inputs and unary typing for reasoning about unrelated computations/inputs. This combination allows us to analyze the execution cost difference of two programs more precisely than a naive non-relational approach. We prove our type system sound using a semantic model based on step-indexed unary and binary logical relations accounting for non-relational and relational reasoning principles with their respective costs. We demonstrate the precision and generality of our technique through examples.

Batty, Mark 
POPL '17: "Automatically Comparing Memory ..."
Automatically Comparing Memory Consistency Models
John Wickerson, Mark Batty, Tyler Sorensen, and George A. Constantinides (Imperial College London, UK; University of Kent, UK)
A memory consistency model (MCM) is the part of a programming language or computer architecture specification that defines which values can legally be read from shared memory locations. Because MCMs take into account various optimisations employed by architectures and compilers, they are often complex and counterintuitive, which makes them challenging to design and to understand. We identify four tasks involved in designing and understanding MCMs: generating conformance tests, distinguishing two MCMs, checking compiler optimisations, and checking compiler mappings. We show that all four tasks are instances of a general constraint-satisfaction problem to which the solution is either a program or a pair of programs. Although this problem is intractable for automatic solvers when phrased over programs directly, we show how to solve analogous constraints over program executions, and then construct programs that satisfy the original constraints. Our technique, which is implemented in the Alloy modelling framework, is illustrated on several software and architecture-level MCMs, both axiomatically and operationally defined. We automatically recreate several known results, often in a simpler form, including: distinctions between variants of the C11 MCM; a failure of the ‘SC-DRF guarantee’ in an early C11 draft; that x86 is ‘multi-copy atomic’ and Power is not; bugs in common C11 compiler optimisations; and bugs in a compiler mapping from OpenCL to AMD-style GPUs. We also use our technique to develop and validate a new MCM for NVIDIA GPUs that supports a natural mapping from OpenCL.
POPL '17: "Mixed-Size Concurrency: ARM, POWER, ..."
Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC
Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis, Luc Maranget, Kathryn E. Gray, Ali Sezgin, Mark Batty, and Peter Sewell (University of Cambridge, UK; University of St. Andrews, UK; Inria, France; University of Kent, UK)
Previous work on the semantics of relaxed shared-memory concurrency has only considered the case in which each load reads the data of exactly one store. In practice, however, multiprocessors support mixed-size accesses, and these are used by systems software and (to some degree) exposed at the C/C++ language level. A semantic foundation for software, therefore, has to address them.
We investigate the mixed-size behaviour of ARMv8 and IBM POWER architectures and implementations: by experiment, by developing semantic models, by testing the correspondence between these, and by discussion with ARM and IBM staff. This turns out to be surprisingly subtle, and on the way we have to revisit the fundamental concepts of coherence and sequential consistency, which change in this setting. In particular, we show that adding a memory barrier between each instruction does not restore sequential consistency. We go on to extend the C/C++11 model to support non-atomic mixed-size memory accesses.
This is a necessary step towards semantics for real-world shared-memory concurrent code, beyond litmus tests.
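For readers unfamiliar with litmus tests, here is the sequentially consistent baseline that the models above refine (my own small enumerator, unrelated to the paper's tooling): under SC, the classic store-buffering test can never yield r1 = r2 = 0, whereas weaker models permit it.

```python
# SC reference check for the store-buffering (SB) litmus test:
# thread 1: x = 1; r1 = y      thread 2: y = 1; r2 = x
# Enumerating every SC interleaving never produces r1 == r2 == 0.
from itertools import permutations

T1 = [("store", "x"), ("load", "y", "r1")]
T2 = [("store", "y"), ("load", "x", "r2")]

def run(schedule):
    mem, regs, pc = {"x": 0, "y": 0}, {}, [0, 0]
    threads = [T1, T2]
    for tid in schedule:
        op = threads[tid][pc[tid]]; pc[tid] += 1
        if op[0] == "store": mem[op[1]] = 1
        else:                regs[op[2]] = mem[op[1]]
    return regs

outcomes = {tuple(sorted(run(s).items())) for s in set(permutations([0, 0, 1, 1]))}
assert (("r1", 0), ("r2", 0)) not in outcomes    # forbidden under SC
```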
@InProceedings{POPL17p429,
author = {Shaked Flur and Susmit Sarkar and Christopher Pulte and Kyndylan Nienhuis and Luc Maranget and Kathryn E. Gray and Ali Sezgin and Mark Batty and Peter Sewell},
title = {Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {429--442},
doi = {10.1145/3009837.3009839},
year = {2017},
}


Berenger, Francois 
POPL '17: "Semantic-Directed Clumping ..."
Semantic-Directed Clumping of Disjunctive Abstract States
Huisong Li, Francois Berenger, Bor-Yuh Evan Chang, and Xavier Rival (Inria, France; CNRS, France; ENS, France; University of Colorado at Boulder, USA)
To infer complex structural invariants, shape analyses rely on expressive families of logical properties. Many such analyses manipulate abstract memory states that consist of separating conjunctions of basic predicates describing atomic blocks or summaries. Moreover, they use finite disjunctions of abstract memory states in order to account for dissimilar shapes. Disjunctions should be kept small for the sake of scalability, though precision often requires keeping additional case splits. In this context, deciding when and how to merge case splits and to replace them with summaries is critical both for precision and for efficiency. Existing techniques use sets of syntactic rules, which are tedious to design and prone to failure. In this paper, we design a semantic criterion to clump abstract states based on their silhouette, which applies not only to the conservative union of disjuncts, but also to the weakening of separating conjunctions of memory predicates into inductive summaries. Our approach allows us to define union and widening operators that aim at preserving the case splits that are required for the analysis to succeed. We implement this approach in the MemCAD analyzer, and evaluate it on real-world C codes from existing libraries, including programs dealing with doubly linked lists, red-black trees, and AVL trees.
@InProceedings{POPL17p32,
author = {Huisong Li and Francois Berenger and Bor-Yuh Evan Chang and Xavier Rival},
title = {Semantic-Directed Clumping of Disjunctive Abstract States},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {32--45},
doi = {10.1145/3009837.3009881},
year = {2017},
}


Biboudis, Aggelos 
POPL '17: "Stream Fusion, to Completeness ..."
Stream Fusion, to Completeness
Oleg Kiselyov, Aggelos Biboudis, Nick Palladinos, and Yannis Smaragdakis (Tohoku University, Japan; University of Athens, Greece; Nessos IT, Greece)
Stream processing is mainstream (again): widely used stream libraries are now available for virtually all modern OO and functional languages, from Java to C# to Scala to OCaml to Haskell. Yet expressivity and performance are still lacking. For instance, the popular, well-optimized Java 8 streams do not support the zip operator and are still an order of magnitude slower than hand-written loops.
We present the first approach that represents the full generality of stream processing and eliminates overheads, via the use of staging. It is based on an unusually rich semantic model of stream interaction. We support any combination of zipping, nesting (or flat-mapping), sub-ranging, filtering, and mapping, of finite or infinite streams. Our model captures idiosyncrasies that a programmer uses in optimizing stream pipelines, such as rate differences and the choice of a “for” vs. a “while” loop. Our approach delivers hand-written-like code, but automatically. It explicitly avoids the reliance on black-box optimizers and sufficiently-smart compilers, offering the highest, guaranteed and portable performance.
Our approach relies on high-level concepts that are then readily mapped into an implementation. Accordingly, we have two distinct implementations: an OCaml stream library, staged via MetaOCaml, and a Scala library for the JVM, staged via LMS. In both cases, we derive libraries richer and simultaneously many tens of times faster than past work. We greatly exceed in performance the standard stream libraries available in Java, Scala and OCaml, including the well-optimized Java 8 streams.
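The staging idea above can be miniaturized: instead of composing closures at runtime, combinators assemble source code for a single fused loop, which is then compiled and run. This toy encoding is my own invention and bears no relation to the MetaOCaml/LMS libraries; it only conveys why fusion removes per-element abstraction overhead.

```python
# Tiny staged-streams sketch: each combinator extends a list of code lines,
# and "staging" the pipeline emits one fused loop with no intermediate
# stream objects, closures, or thunks.

def source(n):            # loop header over 0..n-1
    return [f"for x in range({n}):"]

def map_(code, f_expr):   # inline the mapping expression into the loop body
    return code + [f"    x = {f_expr}"]

def filter_(code, p_expr):
    return code + [f"    if not ({p_expr}): continue"]

def fold_sum(code):
    return ["acc = 0"] + code + ["    acc += x"]

pipeline = fold_sum(filter_(map_(source(10), "x * x"), "x % 2 == 0"))
fused = "\n".join(pipeline)        # a single loop, ready to compile
env = {}
exec(fused, env)                   # "stage" the pipeline and run it
assert env["acc"] == sum(x * x for x in range(10) if (x * x) % 2 == 0)
```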
@InProceedings{POPL17p285,
author = {Oleg Kiselyov and Aggelos Biboudis and Nick Palladinos and Yannis Smaragdakis},
title = {Stream Fusion, to Completeness},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {285--299},
doi = {10.1145/3009837.3009880},
year = {2017},
}


Birkedal, Lars 
POPL '17: "A Relational Model of Types-and-Effects ..."
A Relational Model of Types-and-Effects in Higher-Order Concurrent Separation Logic
Morten KroghJespersen, Kasper Svendsen, and Lars Birkedal (Aarhus University, Denmark; University of Cambridge, UK)
Recently we have seen a renewed interest in programming languages that tame the complexity of state and concurrency through refined type systems with more fine-grained control over effects. In addition to simplifying reasoning and eliminating whole classes of bugs, statically tracking effects opens the door to advanced compiler optimizations.
In this paper we present a relational model of a type-and-effect system for a higher-order, concurrent programming language. The model precisely captures the semantic invariants expressed by the effect annotations. We demonstrate that these invariants are strong enough to prove advanced program transformations, including automatic parallelization of expressions with suitably disjoint effects. The model also supports refinement proofs between abstract data type implementations with different internal data representations, including proofs that fine-grained concurrent algorithms refine their coarse-grained counterparts. This is the first model for such an expressive language that supports both effect-based optimizations and data abstraction.
The logical relation is defined in Iris, a state-of-the-art higher-order concurrent separation logic. This greatly simplifies proving well-definedness of the logical relation and also provides us with a powerful logic for reasoning in the model.
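The effect-based parallelization mentioned above rests on a simple disjointness check that is easy to sketch (invented names, not the paper's system): expressions carry read/write effect annotations over abstract regions, and two expressions may run in parallel when neither writes a region the other touches.

```python
# Back-of-the-envelope effect-disjointness check: an effect is a pair
# (reads, writes) of region sets. Parallelization is safe when neither
# side's writes overlap the other side's reads or writes.

def may_parallelize(eff1, eff2):
    r1, w1 = eff1
    r2, w2 = eff2
    return not (w1 & (r2 | w2)) and not (w2 & (r1 | w1))

e1 = ({"a"}, {"b"})        # reads region a, writes region b
e2 = ({"c"}, {"d"})        # disjoint regions: safe to parallelize
e3 = ({"b"}, set())        # reads b, which e1 writes: must sequence

assert may_parallelize(e1, e2)
assert not may_parallelize(e1, e3)
```

Two read-only expressions over the same region remain parallelizable, since neither writes anything.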
@InProceedings{POPL17p218,
author = {Morten Krogh-Jespersen and Kasper Svendsen and Lars Birkedal},
title = {A Relational Model of Types-and-Effects in Higher-Order Concurrent Separation Logic},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {218--231},
doi = {10.1145/3009837.3009877},
year = {2017},
}
POPL '17: "Interactive Proofs in Higher-Order ..."
Interactive Proofs in Higher-Order Concurrent Separation Logic
Robbert Krebbers, Amin Timany, and Lars Birkedal (Delft University of Technology, Netherlands; KU Leuven, Belgium; Aarhus University, Denmark)
When using a proof assistant to reason in an embedded logic, such as separation logic, one cannot benefit from the proof contexts and basic tactics of the proof assistant. This results in proofs that are at too low a level of abstraction because they are cluttered with bookkeeping code related to manipulating the object logic.
In this paper, we introduce a so-called proof mode that extends the Coq proof assistant with (spatial and non-spatial) named proof contexts for the object logic. We show that thanks to these contexts we can implement high-level tactics for introduction and elimination of the connectives of the object logic, and thereby make reasoning in the embedded logic as seamless as reasoning in the meta logic of the proof assistant. We apply our method to Iris: a state-of-the-art higher-order impredicative concurrent separation logic.
We show that our method is very general, and is not just limited to program verification. We demonstrate its generality by formalizing correctness proofs of fine-grained concurrent algorithms, derived constructs of the Iris logic, and a unary and binary logical relation for a language with concurrency, higher-order store, polymorphism, and recursive types. This is the first formalization of a binary logical relation for such an expressive language. We also show how to use the logical relation to prove contextual refinement of fine-grained concurrent algorithms.
@InProceedings{POPL17p205,
author = {Robbert Krebbers and Amin Timany and Lars Birkedal},
title = {Interactive Proofs in Higher-Order Concurrent Separation Logic},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {205--217},
doi = {10.1145/3009837.3009855},
year = {2017},
}


Blelloch, Guy E. 
POPL '17: "Parallel Functional Arrays ..."
Parallel Functional Arrays
Ananya Kumar, Guy E. Blelloch, and Robert Harper (Carnegie Mellon University, USA)
The goal of this paper is to develop a form of functional arrays (sequences) that are as efficient as imperative arrays, can be used in parallel, and have well-defined cost semantics. The key idea is to consider sequences with functional value semantics but non-functional cost semantics. Because the value semantics is functional, "updating" a sequence returns a new sequence. We allow operations on "older" sequences (called interior sequences) to be more expensive than operations on the "most recent" sequences (called leaf sequences).
We embed sequences in a language supporting fork-join parallelism. Due to the parallelism, operations can be interleaved nondeterministically, and, in conjunction with the different costs for interior and leaf sequences, this can lead to nondeterministic costs for a program. Consequently, the costs of programs can be difficult to analyze. The main result is the derivation of a deterministic cost dynamics which makes analyzing the costs easier. The theorems are not specific to sequences and can be applied to other data types with different costs for operating on interior and leaf versions.
We present a wait-free concurrent implementation of sequences that requires constant work for accessing and updating leaf sequences, and logarithmic work for accessing and linear work for updating interior sequences. We sketch a proof of correctness for the sequence implementation. The key advantages of the present approach compared to current approaches are that our implementation requires no changes to existing programming languages, supports nested parallelism, and has well-defined cost semantics. At the same time, it allows for functional implementations of algorithms such as depth-first search with the same asymptotic complexity as imperative implementations.
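The interior/leaf cost split described above can be illustrated with a toy persistent-array sketch in Python. This is our own construction in the spirit of classic version-tree ("diff array") techniques, not the paper's wait-free concurrent implementation, and all names (`FunArray`, `_leaf_over`) are ours: the newest version reads and writes in O(1), while older versions read by walking a diff chain and update by copying.

```python
class FunArray:
    """Functional-array interface over one shared mutable store.

    The most recent version ("leaf") reads and updates in O(1); older
    versions ("interior") read via a diff chain and update by copying
    (linear work), mirroring the leaf/interior cost split.
    """

    def __init__(self, xs):
        self._store = list(xs)   # copied in, then shared across versions
        self._diff = None        # None = leaf; else (index, old_value, next_version)

    @classmethod
    def _leaf_over(cls, store):
        a = cls.__new__(cls)
        a._store, a._diff = store, None
        return a

    def get(self, i):
        node = self
        while node._diff is not None:      # interior: walk toward the leaf
            j, old, nxt = node._diff
            if j == i:
                return old                 # the value this version holds at i
            node = nxt
        return node._store[i]              # leaf: direct O(1) read

    def set(self, i, v):
        if self._diff is None:             # leaf update: O(1)
            new = FunArray._leaf_over(self._store)
            self._diff = (i, self._store[i], new)   # remember displaced value
            self._store[i] = v
            return new
        # interior update: materialize a fresh copy (linear work)
        fresh = [self.get(j) for j in range(len(self._store))]
        fresh[i] = v
        return FunArray._leaf_over(fresh)
```

For example, after `b = a.set(0, 1)`, version `b` is the cheap leaf while `a` still reads its original contents through the recorded diff.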
@InProceedings{POPL17p706,
author = {Ananya Kumar and Guy E. Blelloch and Robert Harper},
title = {Parallel Functional Arrays},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {706--718},
doi = {10.1145/3009837.3009869},
year = {2017},
}


Bouajjani, Ahmed 
POPL '17: "On Verifying Causal Consistency ..."
On Verifying Causal Consistency
Ahmed Bouajjani, Constantin Enea, Rachid Guerraoui, and Jad Hamza (University of Paris Diderot, France; EPFL, Switzerland; Inria, France) Causal consistency is one of the most widely adopted consistency criteria for distributed implementations of data structures. It ensures that operations are executed at all sites according to their causal precedence. We address the issue of automatically verifying whether the executions of an implementation of a data structure are causally consistent. We consider two problems: (1) checking whether one single execution is causally consistent, which is relevant for developing testing and bug-finding algorithms, and (2) verifying whether all the executions of an implementation are causally consistent. We show that the first problem is NP-complete. This holds even for the read-write memory abstraction, which is a building block of many modern distributed systems. Indeed, such systems often store data in key-value stores, which are instances of the read-write memory abstraction. Moreover, we prove that, surprisingly, the second problem is undecidable, and again this holds even for the read-write memory abstraction. However, we show that for the read-write memory abstraction these negative results can be circumvented if the implementations are data independent, i.e., their behaviors do not depend on the data values that are written or read at each moment, which is a realistic assumption. We prove that for data-independent implementations, the problem of checking the correctness of a single execution w.r.t. the read-write memory abstraction is solvable in polynomial time. Furthermore, we show that for such implementations the set of non-causally-consistent executions can be represented by means of a finite number of register automata. Using these machines as observers (in parallel with the implementation) allows a polynomial reduction of the problem of checking causal consistency to a state reachability problem.
This reduction holds regardless of the class of programs used for the implementation, of the number of read-write variables, and of the data domain used. It allows leveraging existing techniques for assertion/reachability checking for causal consistency verification. Moreover, for a significant class of implementations, we derive from this reduction the decidability of verifying causal consistency w.r.t. the read-write memory abstraction.
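For intuition about the single-execution problem, one can compute the causal (happens-before) order of an execution and search it for violation patterns. The sketch below is our own illustration, not the paper's algorithm, and it checks only one well-known necessary condition for causal consistency of read-write histories: no read may return a write that is causally overwritten before the read.

```python
from itertools import product

def causality_violation(n_ops, session_edges, reads_from, var_of, kind_of):
    """Flag the pattern w1 -> w2 -> r (in causal order) where read r returns
    write w1 yet w2 is a causally later write to the same variable. The causal
    order is the transitive closure of session order and reads-from edges."""
    reach = [[False] * n_ops for _ in range(n_ops)]
    for a, b in session_edges + [(w, r) for r, w in reads_from.items()]:
        reach[a][b] = True
    for k, i, j in product(range(n_ops), repeat=3):   # Floyd-Warshall closure
        if reach[i][k] and reach[k][j]:
            reach[i][j] = True
    for r, w1 in reads_from.items():
        for w2 in range(n_ops):
            if (kind_of[w2] == "w" and w2 not in (w1, r)
                    and var_of[w2] == var_of[w1]
                    and reach[w1][w2] and reach[w2][r]):
                return (w1, w2, r)                    # witness of the violation
    return None
```

On the classic counterexample (p1 writes x=1 as op 0; p2 reads that 1 as op 1, then writes x=2 as op 2; p3 reads the 2 as op 3, then reads the stale 1 as op 4), the final read is flagged because op 0 is causally overwritten by op 2 before op 4.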

Briggs, Ian 
POPL '17: "Rigorous Floating-Point Mixed-Precision ..."
Rigorous Floating-Point Mixed-Precision Tuning
Wei-Fan Chiang, Mark Baranowski, Ian Briggs, Alexey Solovyev, Ganesh Gopalakrishnan, and Zvonimir Rakamarić (University of Utah, USA)
Virtually all real-valued computations are carried out using floating-point data types and operations. The precision of these data types must be set with the goal of reducing the overall round-off error while also enabling performance improvements. Often, a mixed-precision allocation achieves this optimum; unfortunately, no techniques have been available to compute such allocations while conservatively meeting a given error target across all program inputs. In this work, we present a rigorous approach to precision allocation based on formal analysis via Symbolic Taylor Expansions, and error analysis based on interval functions. This approach is implemented in an automated tool called FPTuner that generates and solves a quadratically constrained quadratic program to obtain a precision-annotated version of the given expression. FPTuner automatically introduces all the requisite up- and down-casting operations. It also allows users to flexibly control precision allocation using constraints that cap the number of high-precision operators, as well as to group operators so that they are allocated the same precision, facilitating vectorization. We evaluate FPTuner by tuning several benchmarks and measuring the proportion of lower-precision operators allocated as we increase the error threshold. We also measure the reduction in energy consumption resulting from executing mixed-precision tuned code on a real hardware platform. We observe significant energy savings in response to mixed-precision tuning, but also observe situations where unexpected compiler behaviors thwart intended optimizations.
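As a toy illustration of the trade-off FPTuner navigates (this is not FPTuner's analysis; the expression and the precision allocation below are hypothetical), one can emulate demoting individual operations to single precision and measure the extra round-off against an all-double evaluation:

```python
import struct

def as_f32(x):
    """Round a Python double to the nearest IEEE-754 binary32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

def poly_f64(x):
    # reference: every operation carried out in double precision
    return x * x * x - 3.0 * x + 1.0

def poly_mixed(x):
    # hypothetical mixed allocation: demote the subtraction's operands and
    # the final result to single precision, keep the cube in double
    return as_f32(as_f32(x * x * x) - as_f32(3.0 * x) + 1.0)

x = 1.0 / 3.0
err = abs(poly_f64(x) - poly_mixed(x))   # extra round-off from the demotions
```

A tool like FPTuner bounds such errors symbolically over all inputs rather than at sample points, and picks the cheapest allocation that stays under a target.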
@InProceedings{POPL17p300,
author = {Wei-Fan Chiang and Mark Baranowski and Ian Briggs and Alexey Solovyev and Ganesh Gopalakrishnan and Zvonimir Rakamarić},
title = {Rigorous Floating-Point Mixed-Precision Tuning},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {300--315},
doi = {10.1145/3009837.3009846},
year = {2017},
}


Brown, Matt 
POPL '17: "Typed Self-Evaluation via ..."
Typed Self-Evaluation via Intensional Type Functions
Matt Brown and Jens Palsberg (University of California at Los Angeles, USA) Many popular languages have a self-interpreter, that is, an interpreter for the language written in itself. So far, work on polymorphically typed self-interpreters has concentrated on self-recognizers that merely recover a program from its representation. A larger and until now unsolved challenge is to implement a polymorphically typed self-evaluator that evaluates the represented program and produces a representation of the result. We present F_{ω}^{µi}, the first λ-calculus that supports a polymorphically typed self-evaluator. Our calculus extends F_{ω} with recursive types and intensional type functions and has decidable type checking. Our key innovation is a novel implementation of type equality proofs that enables us to define a versatile representation of programs. Our results establish a new category of languages that can support polymorphically typed self-evaluators.

Bruse, Florian 
POPL '17: "On the Relationship between ..."
On the Relationship between Higher-Order Recursion Schemes and Higher-Order Fixpoint Logic
Naoki Kobayashi, Étienne Lozes, and Florian Bruse (University of Tokyo, Japan; ENS, France; CNRS, France; University of Kassel, Germany)
We study the relationship between two kinds of higher-order extensions of model checking: HORS model checking, where models are extended to higher-order recursion schemes, and HFL model checking, where the logic is extended to higher-order modal fixpoint logic. These extensions have been studied independently until recently, and the former has been applied to higher-order program verification. We show that there exist (arguably) natural reductions between the two problems. To prove the correctness of the translation from HORS to HFL model checking, we establish a type-based characterization of HFL model checking, which should be of independent interest. The results reveal a close relationship between the two problems, enabling cross-fertilization of the two research threads.
@InProceedings{POPL17p246,
author = {Naoki Kobayashi and Étienne Lozes and Florian Bruse},
title = {On the Relationship between Higher-Order Recursion Schemes and Higher-Order Fixpoint Logic},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {246--259},
doi = {10.1145/3009837.3009854},
year = {2017},
}


Brutschy, Lucas 
POPL '17: "Serializability for Eventual ..."
Serializability for Eventual Consistency: Criterion, Analysis, and Applications
Lucas Brutschy, Dimitar Dimitrov, Peter Müller, and Martin Vechev (ETH Zurich, Switzerland)
Developing and reasoning about systems using eventually consistent data stores is a difficult challenge due to the presence of unexpected behaviors that do not occur under sequential consistency. A fundamental problem in this setting is to identify a correctness criterion that precisely captures intended application behaviors yet is generic enough to be applicable to a wide range of applications.
In this paper, we present such a criterion. More precisely, we generalize conflict serializability to the setting of eventual consistency. Our generalization is based on a novel dependency model that incorporates two powerful algebraic properties: commutativity and absorption. These properties enable precise reasoning about programs that employ high-level replicated data types, common in modern systems. To apply our criterion in practice, we also developed a dynamic analysis algorithm and a tool that checks whether a given program execution is serializable.
We performed a thorough experimental evaluation on two real-world use cases: debugging cloud-backed mobile applications and implementing clients of a popular eventually consistent key-value store. Our experimental results indicate that our criterion reveals harmful synchronization problems in applications, is more effective at finding them than prior approaches, and can be used for the development of practical, eventually consistent applications.
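The two algebraic properties at the heart of the dependency model can be stated concretely. The sketch below is only an illustration on plain integer operations rather than replicated data types, and all names are ours:

```python
def commute(op1, op2, states):
    """op1 and op2 commute if applying them in either order agrees."""
    return all(op1(op2(s)) == op2(op1(s)) for s in states)

def absorbs(op1, op2, states):
    """op1 absorbs op2 if op2's effect is erased when op1 follows it."""
    return all(op1(op2(s)) == op1(s) for s in states)

inc = lambda s: s + 1        # counter increment
dbl = lambda s: s * 2        # doubling update
set5 = lambda s: 5           # blind write

states = [0, 1, 7]
assert commute(inc, inc, states)         # increments commute
assert not commute(inc, dbl, states)     # increment vs. double: order matters
assert absorbs(set5, inc, states)        # a blind write absorbs prior updates
```

Operations that commute (or are absorbed) need no dependency edge between them, which is what lets the generalized criterion admit more reorderings than classical conflict serializability.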
@InProceedings{POPL17p458,
author = {Lucas Brutschy and Dimitar Dimitrov and Peter Müller and Martin Vechev},
title = {Serializability for Eventual Consistency: Criterion, Analysis, and Applications},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {458--472},
doi = {10.1145/3009837.3009895},
year = {2017},
}


Chang, Bor-Yuh Evan 
POPL '17: "Semantic-Directed Clumping ..."
Semantic-Directed Clumping of Disjunctive Abstract States
Huisong Li, Francois Berenger, Bor-Yuh Evan Chang, and Xavier Rival (Inria, France; CNRS, France; ENS, France; University of Colorado at Boulder, USA)
To infer complex structural invariants, shape analyses rely on expressive families of logical properties. Many such analyses manipulate abstract memory states that consist of separating conjunctions of basic predicates describing atomic blocks or summaries. Moreover, they use finite disjunctions of abstract memory states in order to account for dissimilar shapes. Disjunctions should be kept small for the sake of scalability, though precision often requires keeping additional case splits. In this context, deciding when and how to merge case splits and to replace them with summaries is critical both for precision and for efficiency. Existing techniques use sets of syntactic rules, which are tedious to design and prone to failure. In this paper, we design a semantic criterion to clump abstract states based on their silhouette, which applies not only to the conservative union of disjuncts, but also to the weakening of a separating conjunction of memory predicates into an inductive summary. Our approach allows us to define union and widening operators that aim at preserving the case splits that are required for the analysis to succeed. We implement this approach in the MemCAD analyzer and evaluate it on real-world C code from existing libraries, including programs dealing with doubly linked lists, red-black trees, and AVL trees.
@InProceedings{POPL17p32,
author = {Huisong Li and Francois Berenger and Bor-Yuh Evan Chang and Xavier Rival},
title = {Semantic-Directed Clumping of Disjunctive Abstract States},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {32--45},
doi = {10.1145/3009837.3009881},
year = {2017},
}


Chang, Stephen 
POPL '17: "Type Systems as Macros ..."
Type Systems as Macros
Stephen Chang, Alex Knauth, and Ben Greenman (Northeastern University, USA) We present Turnstile, a metalanguage for creating typed embedded languages. To implement the type system, programmers write type checking rules resembling traditional judgment syntax. To implement the semantics, they incorporate elaborations into these rules. Turnstile critically depends on the idea of linguistic reuse. It exploits a macro system in a novel way to simultaneously type check and rewrite a surface program into a target language. Reusing a macro system also yields modular implementations whose rules may be mixed and matched to create other languages. Combined with typical compiler and runtime reuse, Turnstile produces performant typed embedded languages with little effort. 

Chatterjee, Krishnendu 
POPL '17: "Stochastic Invariants for ..."
Stochastic Invariants for Probabilistic Termination
Krishnendu Chatterjee, Petr Novotný, and Ðorđe Žikelić (IST Austria, Austria; University of Cambridge, UK) Termination is one of the basic liveness properties, and we study the termination problem for probabilistic programs with real-valued variables. Previous works focused on the qualitative problem that asks whether an input program terminates with probability 1 (almost-sure termination). A powerful approach to this qualitative problem is the notion of ranking supermartingales with respect to a given set of invariants. The quantitative problem (probabilistic termination) asks for bounds on the termination probability, and this problem has not been addressed yet. A fundamental and conceptual drawback of the existing approaches to probabilistic termination is that even though the supermartingales consider the probabilistic behaviour of the programs, the invariants are obtained while completely ignoring the probabilistic aspect (i.e., the invariants are obtained considering all behaviours, with no information about the probability). In this work we address the probabilistic termination problem for linear-arithmetic probabilistic programs with nondeterminism. We formally define the notion of stochastic invariants, which are constraints along with a probability bound that the constraints hold. We introduce a concept of repulsing supermartingales. First, we show that repulsing supermartingales can be used to obtain bounds on the probability of the stochastic invariants. Second, we show the effectiveness of repulsing supermartingales in the following three ways: (1) with a combination of ranking and repulsing supermartingales we can compute lower bounds on the probability of termination; (2) repulsing supermartingales provide witnesses for refutation of almost-sure termination; and (3) with a combination of ranking and repulsing supermartingales we can establish persistence properties of probabilistic programs.
Along with our conceptual contributions, we establish the following computational results: First, the synthesis of a stochastic invariant which supports some ranking supermartingale and at the same time admits a repulsing supermartingale can be achieved via reduction to the existential first-order theory of reals, which generalizes existing results from the non-probabilistic setting. Second, given a program with "strict invariants" (e.g., obtained via abstract interpretation) and a stochastic invariant, we can check in polynomial time whether there exists a linear repulsing supermartingale w.r.t. the stochastic invariant (via reduction to LP). We also present an experimental evaluation of our approach on academic examples.

Cherigui, Ikram 
POPL '17: "A Semantic Account of Metric ..."
A Semantic Account of Metric Preservation
Arthur Azevedo de Amorim, Marco Gaboardi, Justin Hsu, Shinya Katsumata, and Ikram Cherigui (University of Pennsylvania, USA; SUNY Buffalo, USA; Kyoto University, Japan; ENS, France) Program sensitivity measures how robust a program is to small changes in its input, and is a fundamental notion in domains ranging from differential privacy to cyber-physical systems. A natural way to formalize program sensitivity is in terms of metrics on the input and output spaces, requiring that an r-sensitive function map inputs that are at distance d to outputs that are at distance at most r · d. Program sensitivity is thus an analogue of Lipschitz continuity for programs. Reed and Pierce introduced Fuzz, a functional language with a linear type system that can express program sensitivity. They show soundness operationally, in the form of a metric preservation property. Inspired by their work, we study program sensitivity and metric preservation from a denotational point of view. In particular, we introduce metric CPOs, a novel semantic structure for reasoning about computation on metric spaces, obtained by endowing CPOs with a compatible notion of distance. This structure is useful for reasoning about metric properties of programs, and specifically about program sensitivity. We demonstrate metric CPOs by giving a model for the deterministic fragment of Fuzz.
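The r-sensitivity condition above can be tested empirically for first-order numeric functions. The helper below is a hypothetical illustration of the definition (a necessary, not sufficient, check over sampled pairs); the type systems and semantics in the paper establish it for all inputs:

```python
def respects_sensitivity(f, r, pairs, tol=1e-12):
    """Check |f(x) - f(y)| <= r * |x - y| on the sampled input pairs,
    i.e., an empirical (necessary-only) test of r-sensitivity."""
    return all(abs(f(x) - f(y)) <= r * abs(x - y) + tol for x, y in pairs)

pairs = [(0.0, 1.0), (2.0, 2.5), (-3.0, 4.0)]
double = lambda x: 2.0 * x       # 2-sensitive: doubles every distance
assert respects_sensitivity(double, 2.0, pairs)
assert not respects_sensitivity(double, 1.0, pairs)
square = lambda x: x * x         # not r-sensitive for any fixed r on all reals
assert not respects_sensitivity(square, 2.0, pairs)
```

A linear type system like Fuzz's tracks such sensitivity factors statically, so no sampling is needed.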

Chiang, Wei-Fan 
POPL '17: "Rigorous Floating-Point Mixed-Precision ..."
Rigorous Floating-Point Mixed-Precision Tuning
Wei-Fan Chiang, Mark Baranowski, Ian Briggs, Alexey Solovyev, Ganesh Gopalakrishnan, and Zvonimir Rakamarić (University of Utah, USA)
Virtually all real-valued computations are carried out using floating-point data types and operations. The precision of these data types must be set with the goal of reducing the overall round-off error while also enabling performance improvements. Often, a mixed-precision allocation achieves this optimum; unfortunately, no techniques have been available to compute such allocations while conservatively meeting a given error target across all program inputs. In this work, we present a rigorous approach to precision allocation based on formal analysis via Symbolic Taylor Expansions, and error analysis based on interval functions. This approach is implemented in an automated tool called FPTuner that generates and solves a quadratically constrained quadratic program to obtain a precision-annotated version of the given expression. FPTuner automatically introduces all the requisite up- and down-casting operations. It also allows users to flexibly control precision allocation using constraints that cap the number of high-precision operators, as well as to group operators so that they are allocated the same precision, facilitating vectorization. We evaluate FPTuner by tuning several benchmarks and measuring the proportion of lower-precision operators allocated as we increase the error threshold. We also measure the reduction in energy consumption resulting from executing mixed-precision tuned code on a real hardware platform. We observe significant energy savings in response to mixed-precision tuning, but also observe situations where unexpected compiler behaviors thwart intended optimizations.
@InProceedings{POPL17p300,
author = {Wei-Fan Chiang and Mark Baranowski and Ian Briggs and Alexey Solovyev and Ganesh Gopalakrishnan and Zvonimir Rakamarić},
title = {Rigorous Floating-Point Mixed-Precision Tuning},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {300--315},
doi = {10.1145/3009837.3009846},
year = {2017},
}


Chlipala, Adam 
POPL '17: "A Program Optimization for ..."
A Program Optimization for Automatic Database Result Caching
Ziv Scully and Adam Chlipala (Carnegie Mellon University, USA; Massachusetts Institute of Technology, USA)
Most popular Web applications rely on persistent databases based on languages like SQL for declarative specification of data models and of the operations that read and modify them. As applications scale up in user base, they often face challenges in responding quickly enough to the high volume of requests. A common aid is caching of database results in the application's memory space, taking advantage of program-specific knowledge of which caching schemes are sound and useful, embodied in handwritten modifications that make the program less maintainable. These modifications also require nontrivial reasoning about the read-write dependencies across operations. In this paper, we present a compiler optimization that automatically adds sound SQL caching to Web applications coded in the Ur/Web domain-specific functional language, with no modifications required to source code. We use a custom cache implementation that supports concurrent operations without compromising the transactional semantics of the database abstraction. Through experiments with microbenchmarks and production Ur/Web applications, we show that our optimization in many cases enables an easy doubling or more of an application's throughput, requiring nothing more than passing an extra command-line flag to the compiler.
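The core idea of sound result caching with write invalidation can be sketched in a few lines. This is a drastically simplified illustration with names of our own invention, not the Ur/Web compiler's analysis (which infers the read/write dependencies automatically and preserves transactional semantics):

```python
class QueryCache:
    """Memoize read-only query results; a write to table T invalidates
    every cached result whose query read T."""

    def __init__(self, run_query):
        self._run = run_query          # function (sql) -> rows
        self._cache = {}               # sql -> cached rows
        self._readers = {}             # table -> set of cached sql strings

    def read(self, sql, tables_read):
        if sql not in self._cache:
            self._cache[sql] = self._run(sql)
            for t in tables_read:      # record the read dependency
                self._readers.setdefault(t, set()).add(sql)
        return self._cache[sql]

    def write(self, table):
        # called after any INSERT/UPDATE/DELETE touching `table`
        for sql in self._readers.pop(table, ()):
            self._cache.pop(sql, None)
```

In the paper's setting the `tables_read` information comes from compiler analysis of the SQL, and the cache is concurrent, so the programmer never writes this bookkeeping by hand.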
@InProceedings{POPL17p271,
author = {Ziv Scully and Adam Chlipala},
title = {A Program Optimization for Automatic Database Result Caching},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {271--284},
doi = {10.1145/3009837.3009891},
year = {2017},
}


Çiçek, Ezgi 
POPL '17: "Relational Cost Analysis ..."
Relational Cost Analysis
Ezgi Çiçek, Gilles Barthe, Marco Gaboardi, Deepak Garg, and Jan Hoffmann (MPI-SWS, Germany; IMDEA Software Institute, Spain; SUNY Buffalo, USA; Carnegie Mellon University, USA) Establishing quantitative bounds on the execution cost of programs is essential in many areas of computer science, such as complexity analysis, compiler optimizations, security, and privacy. Techniques based on program analysis, type systems, and abstract interpretation are well-studied, but methods for analyzing how the execution costs of two programs compare to each other have not received attention. Naively combining the worst- and best-case execution costs of the two programs does not work well in many cases, because such analysis forgets the similarities between the programs or the inputs. In this work, we propose a relational cost analysis technique that is capable of establishing precise bounds on the difference in the execution cost of two programs by making use of relational properties of programs and inputs. We develop , a refinement type and effect system for a higher-order functional language with recursion and subtyping. The key novelty of our technique is the combination of relational refinements with two modes of typing: relational typing for reasoning about similar computations/inputs, and unary typing for reasoning about unrelated computations/inputs. This combination allows us to analyze the execution cost difference of two programs more precisely than a naive non-relational approach. We prove our type system sound using a semantic model based on step-indexed unary and binary logical relations, accounting for non-relational and relational reasoning principles with their respective costs. We demonstrate the precision and generality of our technique through examples.

Cimini, Matteo 
POPL '17: "Automatically Generating the ..."
Automatically Generating the Dynamic Semantics of Gradually Typed Languages
Matteo Cimini and Jeremy G. Siek (Indiana University, USA)
Many language designers have adopted gradual typing. However, open questions remain regarding how to gradualize languages. Cimini and Siek (2016) created a methodology and algorithm to automatically generate the type system of a gradually typed language from a fully static version of the language. In this paper, we address the next challenge of how to automatically generate the dynamic semantics of gradually typed languages. Such languages typically use an intermediate language with explicit casts. Our first result is a methodology for generating the syntax, type system, and dynamic semantics of the intermediate language with casts. Next, we present an algorithm that formalizes and automates the methodology, given a language definition as input. We show that our approach is general enough to automatically gradualize several languages, including features such as polymorphism, recursive types, and exceptions. We prove that our algorithm produces languages that satisfy the key correctness criteria of gradual typing. Finally, we implement the algorithm, generating complete specifications of gradually typed languages in lambdaProlog, including executable interpreters.
@InProceedings{POPL17p789,
author = {Matteo Cimini and Jeremy G. Siek},
title = {Automatically Generating the Dynamic Semantics of Gradually Typed Languages},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {789--803},
doi = {10.1145/3009837.3009863},
year = {2017},
}


Constantinides, George A. 
POPL '17: "Automatically Comparing Memory ..."
Automatically Comparing Memory Consistency Models
John Wickerson, Mark Batty, Tyler Sorensen, and George A. Constantinides (Imperial College London, UK; University of Kent, UK) A memory consistency model (MCM) is the part of a programming language or computer architecture specification that defines which values can legally be read from shared memory locations. Because MCMs take into account various optimisations employed by architectures and compilers, they are often complex and counterintuitive, which makes them challenging to design and to understand. We identify four tasks involved in designing and understanding MCMs: generating conformance tests, distinguishing two MCMs, checking compiler optimisations, and checking compiler mappings. We show that all four tasks are instances of a general constraint-satisfaction problem to which the solution is either a program or a pair of programs. Although this problem is intractable for automatic solvers when phrased over programs directly, we show how to solve analogous constraints over program executions, and then construct programs that satisfy the original constraints. Our technique, which is implemented in the Alloy modelling framework, is illustrated on several software- and architecture-level MCMs, both axiomatically and operationally defined. We automatically recreate several known results, often in a simpler form, including: distinctions between variants of the C11 MCM; a failure of the 'SC-DRF guarantee' in an early C11 draft; that x86 is 'multi-copy atomic' and Power is not; bugs in common C11 compiler optimisations; and bugs in a compiler mapping from OpenCL to AMD-style GPUs. We also use our technique to develop and validate a new MCM for NVIDIA GPUs that supports a natural mapping from OpenCL.

Cousot, Patrick 
POPL '17: "Ogre and Pythia: An Invariance ..."
Ogre and Pythia: An Invariance Proof Method for Weak Consistency Models
Jade Alglave and Patrick Cousot (Microsoft Research, UK; University College London, UK; New York University, USA; ENS, France)
We design an invariance proof method for concurrent programs parameterised by a weak consistency model. The calculational design of the invariance proof method is by abstract interpretation of a truly parallel analytic semantics. This generalises the methods of Lamport and Owicki-Gries for sequential consistency. We use cat as an example of a language for writing consistency specifications of both concurrent programs and machine architectures.
@InProceedings{POPL17p3,
author = {Jade Alglave and Patrick Cousot},
title = {Ogre and Pythia: An Invariance Proof Method for Weak Consistency Models},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {3--18},
doi = {10.1145/3009837.3009883},
year = {2017},
}


Crary, Karl 
POPL '17: "Modules, Abstraction, and ..."
Modules, Abstraction, and Parametric Polymorphism
Karl Crary (Carnegie Mellon University, USA)
Reynolds's Abstraction theorem forms the mathematical foundation for data abstraction. His setting was the polymorphic lambda calculus. Today, many modern languages, such as the ML family, employ rich module systems designed to give more expressive support for data abstraction than the polymorphic lambda calculus, but analogues of the Abstraction theorem for such module systems have lagged far behind.
We give an account of the Abstraction theorem for a modern module calculus supporting generative and applicative functors, higher-order functors, sealing, and translucent signatures. The main issues to be overcome are: (1) the fact that modules combine both types and terms, so they must be treated as both simultaneously, (2) the effect discipline that models the distinction between transparent and opaque modules, and (3) a very rich language of type constructors supporting singleton kinds. We define logical equivalence for modules and show that it coincides with contextual equivalence. This substantiates the folk theorem that modules are good for data abstraction. All our proofs are formalized in Coq.
@InProceedings{POPL17p100,
author = {Karl Crary},
title = {Modules, Abstraction, and Parametric Polymorphism},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {100--113},
doi = {10.1145/3009837.3009892},
year = {2017},
}


Dal Lago, Ugo 
POPL '17: "The Geometry of Parallelism: ..."
The Geometry of Parallelism: Classical, Probabilistic, and Quantum Effects
Ugo Dal Lago, Claudia Faggian, Benoît Valiron, and Akira Yoshimizu (University of Bologna, Italy; Inria, France; CNRS, France; University of Paris Diderot, France; University of Paris-Saclay, France; University of Tokyo, Japan) We introduce a Geometry of Interaction model for higher-order quantum computation, and prove its adequacy for a fully fledged quantum programming language in which entanglement, duplication, and recursion are all available. This model is an instance of a new framework which captures not only quantum but also classical and probabilistic computation. Its main feature is the ability to model commutative effects in a parallel setting. Our model comes with a multi-token machine, a proof net system, and a style language. Being based on a multi-token machine equipped with a memory, it has a concrete nature which makes it well suited for building low-level operational descriptions of higher-order languages.

D'Antoni, Loris 
POPL '17: "Genesis: Synthesizing Forwarding ..."
Genesis: Synthesizing Forwarding Tables in Multi-tenant Networks
Kausik Subramanian, Loris D'Antoni, and Aditya Akella (University of Wisconsin-Madison, USA)
Operators in multi-tenant cloud datacenters require support for diverse and complex end-to-end policies, such as reachability, middlebox traversals, isolation, traffic engineering, and network resource management. We present Genesis, a datacenter network management system which allows policies to be specified in a declarative manner without explicitly programming the network data plane. Genesis tackles the problem of enforcing policies by synthesizing switch forwarding tables. It uses the formal foundations of constraint solving in combination with fast off-the-shelf SMT solvers. To improve synthesis performance, Genesis incorporates a novel search strategy that uses regular expressions to specify properties that leverage the structure of datacenter networks, and a divide-and-conquer synthesis procedure which exploits the structure of policy relationships. We have prototyped Genesis, and conducted experiments with a variety of workloads on real-world topologies to demonstrate its performance.
@InProceedings{POPL17p572,
author = {Kausik Subramanian and Loris D'Antoni and Aditya Akella},
title = {Genesis: Synthesizing Forwarding Tables in Multi-tenant Networks},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {572--585},
doi = {10.1145/3009837.3009845},
year = {2017},
}
POPL '17: "Monadic SecondOrder Logic ..."
Monadic SecondOrder Logic on Finite Sequences
Loris D'Antoni and Margus Veanes (University of WisconsinMadison, USA; Microsoft Research, USA)
We extend the weak monadic secondorder logic of one successor on finite strings (M2LSTR) to symbolic alphabets by allowing character predicates to range over decidable quantifier free theories instead of finite alphabets. We call this logic, which is able to describe sequences over complex and potentially infinite domains, symbolic M2LSTR (SM2LSTR). We then present a decision procedure for SM2LSTR based on a reduction to symbolic finite automata, a decidable extension of finite automata that allows transitions to carry predicates and can therefore model symbolic alphabets. The reduction constructs a symbolic automaton over an alphabet consisting of pairs of symbols where the first element of the pair is a symbol in the original formula’s alphabet, while the second element is a bitvector. To handle this modified alphabet we show that the Cartesian product of two decidable Boolean algebras (e.g., the formula’s one and the bitvector’s one) also forms a decidable Boolean algebras. To make the decision procedure practical, we propose two efficient representations of the Cartesian product of two Boolean algebras, one based on algebraic decision diagrams and one on a variant of Shannon expansions. Finally, we implement our decision procedure and evaluate it on more than 10,000 formulas. Despite the generality, our implementation has comparable performance with the stateoftheart M2LSTR solvers.
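The product-of-algebras construction used in the reduction can be sketched concretely. The toy model below is my own illustration, not the paper's implementation: each component algebra is a powerset algebra over a small finite universe (standing in for a decidable theory), and a product predicate is a finite union of "rectangles" phi x psi, a representation closed under the Boolean operations.

```python
# Toy model of a product of two decidable Boolean algebras (illustrative
# only). A product predicate is kept as a finite union of rectangles.
UA = frozenset(range(4))    # universe of component algebra A
UB = frozenset("xyz")       # universe of component algebra B

def rect(phi, psi):
    # a single rectangle phi x psi
    return [(frozenset(phi), frozenset(psi))]

def conj(p, q):
    # (union of rectangles) AND (union of rectangles), componentwise
    return [(a1 & a2, b1 & b2) for a1, b1 in p for a2, b2 in q
            if a1 & a2 and b1 & b2]

def neg(p):
    # not(union of phi_i x psi_i) = intersection over i of
    #   (complement(phi_i) x UB) union (UA x complement(psi_i))
    acc = rect(UA, UB)
    for a, b in p:
        acc = conj(acc, [(UA - a, UB), (UA, UB - b)])
    return acc

def satisfiable(p):
    # decidability of the product reduces to the two components
    return any(a and b for a, b in p)

p = rect({0, 1}, {"x"})
assert satisfiable(conj(p, rect({1, 2}, {"x", "y"})))
assert not satisfiable(conj(p, neg(p)))
```

The key point mirrored here is that every Boolean operation on the product can be computed from satisfiability and Boolean operations of the two components alone.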
@InProceedings{POPL17p232,
author = {Loris D'Antoni and Margus Veanes},
title = {Monadic Second-Order Logic on Finite Sequences},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {232--245},
doi = {10.1145/3009837.3009844},
year = {2017},
}


Das, Ankush 
POPL '17: "Towards Automatic Resource ..."
Towards Automatic Resource Bound Analysis for OCaml
Jan Hoffmann, Ankush Das, and Shu-Chun Weng (Carnegie Mellon University, USA; Yale University, USA)
This article presents a resource analysis system for OCaml programs. The system automatically derives worst-case resource bounds for higher-order polymorphic programs with user-defined inductive types. The technique is parametric in the resource and can derive bounds for time, memory allocations, and energy usage. The derived bounds are multivariate resource polynomials which are functions of different size parameters that depend on the standard OCaml types. Bound inference is fully automatic and reduced to a linear optimization problem that is passed to an off-the-shelf LP solver. Technically, the analysis system is based on a novel multivariate automatic amortized resource analysis (AARA). It builds on existing work on linear AARA for higher-order programs with user-defined inductive types and on multivariate AARA for first-order programs with built-in lists and binary trees. This is the first amortized analysis that automatically derives polynomial bounds for higher-order functions and polynomial bounds that depend on user-defined inductive types. Moreover, the analysis handles a limited form of side effects and even outperforms the linear bound inference of previous systems. At the same time, it preserves the expressivity and efficiency of existing AARA techniques. The practicality of the analysis system is demonstrated with an implementation and integration with Inria's OCaml compiler. The implementation is used to automatically derive resource bounds for 411 functions and 6018 lines of code derived from OCaml libraries, the CompCert compiler, and implementations of textbook algorithms. In a case study, the system infers bounds on the number of queries that are sent by OCaml programs to DynamoDB, a commercial NoSQL cloud database service.
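To give a flavor of the potential method behind AARA, the toy sketch below (my own example, not the paper's system) counts evaluation steps of insertion sort and checks them against a bound expressed over the binomial basis C(n,1), C(n,2) that multivariate AARA uses; the coefficients are fixed by hand here, whereas the paper's system finds them with an LP solver.

```python
import itertools

# Tick-counted insertion sort, checked against a resource polynomial
# over the binomial basis (illustrative of AARA-style bounds only).
ticks = 0

def tick():
    global ticks
    ticks += 1

def insert(x, xs):
    tick()                       # one tick per comparison step
    if not xs or x <= xs[0]:
        return [x] + xs
    return [xs[0]] + insert(x, xs[1:])

def isort(xs):
    if not xs:
        return []
    return insert(xs[0], isort(xs[1:]))

def bound(n, q1, q2):
    # q1*C(n,1) + q2*C(n,2), a resource polynomial in the input size
    return q1 * n + q2 * n * (n - 1) // 2

for perm in itertools.permutations(range(5)):
    ticks = 0
    isort(list(perm))
    assert ticks <= bound(len(perm), 1, 1)   # cost <= C(n,1) + C(n,2)
```

The bound is tight on reverse-sorted inputs, which is why an LP over such coefficient constraints can recover it automatically.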
@InProceedings{POPL17p359,
author = {Jan Hoffmann and Ankush Das and Shu-Chun Weng},
title = {Towards Automatic Resource Bound Analysis for OCaml},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {359--373},
doi = {10.1145/3009837.3009842},
year = {2017},
}


Dillig, Isil 
POPL '17: "ComponentBased Synthesis ..."
ComponentBased Synthesis for Complex APIs
Yu Feng, Ruben Martins, Yuepeng Wang, Isil Dillig, and Thomas W. Reps (University of Texas at Austin, USA; University of WisconsinMadison, USA)
Componentbased approaches to program synthesis assemble programs from a database of existing components, such as methods provided by an API. In this paper, we present a novel typedirected algorithm for componentbased synthesis. The key novelty of our approach is the use of a compact Petrinet representation to model relationships between methods in an API. Given a target method signature S, our approach performs reachability analysis on the underlying Petrinet model to identify sequences of method calls that could be used to synthesize an implementation of S. The programs synthesized by our algorithm are guaranteed to type check and pass all test cases provided by the user.
We have implemented this approach in a tool called SyPet, and used it to successfully synthesize realworld programming tasks extracted from online forums and existing code repositories. We also compare SyPet with two stateoftheart synthesis tools, namely InSynth and CodeHint, and demonstrate that SyPet can synthesize more programs in less time. Finally, we compare our approach with an alternative solution based on hypergraphs and demonstrate its advantages.
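The Petri-net reachability idea can be sketched as a breadth-first search over markings. The tiny net and method names below are hypothetical, not SyPet's database: places are types, and a transition (method) consumes its argument types and produces its return type.

```python
from collections import deque

# Hypothetical API as a Petri net: pre/post multisets over type places.
methods = {
    "new_list": ({}, {"List": 1}),                    # () -> List
    "add":      ({"List": 1, "Int": 1}, {"List": 1}), # List, Int -> List
    "sum":      ({"List": 1}, {"Int": 1}),            # List -> Int
}

def synthesize(init, goal, max_len=4):
    # BFS over markings (multisets of available types) toward the goal
    start = tuple(sorted(init.items()))
    queue, seen = deque([(start, [])]), {start}
    while queue:
        marking, calls = queue.popleft()
        m = dict(marking)
        if all(m.get(t, 0) >= n for t, n in goal.items()):
            return calls                  # a shortest method-call sequence
        if len(calls) >= max_len:
            continue
        for name, (pre, post) in methods.items():
            if all(m.get(t, 0) >= n for t, n in pre.items()):
                m2 = dict(m)
                for t, n in pre.items():
                    m2[t] -= n
                for t, n in post.items():
                    m2[t] = m2.get(t, 0) + n
                key = tuple(sorted((t, n) for t, n in m2.items() if n))
                if key not in seen:
                    seen.add(key)
                    queue.append((key, calls + [name]))
    return None

assert synthesize({"Int": 1}, {"List": 1}) == ["new_list"]
assert synthesize({}, {"Int": 1}) == ["new_list", "sum"]
```

A found firing sequence corresponds to a candidate call sequence, which a tool like SyPet would then instantiate and test.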
@InProceedings{POPL17p599,
author = {Yu Feng and Ruben Martins and Yuepeng Wang and Isil Dillig and Thomas W. Reps},
title = {Component-Based Synthesis for Complex APIs},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {599--612},
doi = {10.1145/3009837.3009851},
year = {2017},
}


Dimitrov, Dimitar 
POPL '17: "Serializability for Eventual ..."
Serializability for Eventual Consistency: Criterion, Analysis, and Applications
Lucas Brutschy, Dimitar Dimitrov, Peter Müller, and Martin Vechev (ETH Zurich, Switzerland)
Developing and reasoning about systems using eventually consistent data stores
is a difficult challenge due to the presence of unexpected behaviors that do not
occur under sequential consistency. A fundamental problem in this setting is to
identify a correctness criterion that precisely captures intended application
behaviors yet is generic enough to be applicable to a wide range of
applications.
In this paper, we present such a criterion. More precisely, we generalize
conflict serializability to the setting of eventual consistency. Our
generalization is based on a novel dependency model that incorporates two
powerful algebraic properties: commutativity and absorption. These properties enable
precise reasoning about programs that employ high-level replicated data types,
common in modern systems. To apply our criterion in practice, we also developed
a dynamic analysis algorithm and a tool that checks whether a given program
execution is serializable.
We performed a thorough experimental evaluation on two real-world use cases:
debugging cloud-backed mobile applications and implementing clients of a popular
eventually consistent key-value store. Our experimental results indicate that
our criterion reveals harmful synchronization problems in applications, is more
effective at finding them than prior approaches, and can be used for the
development of practical, eventually consistent applications.
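The two algebraic properties at the core of the dependency model, commutativity and absorption, can be made concrete with a toy replicated set (my own sketch, not the paper's formalism):

```python
# Commutativity and absorption checked on a toy replicated set.
def commutes(op1, op2, states):
    # order-independent effects => no ordering dependency needed
    return all(op1(op2(s)) == op2(op1(s)) for s in states)

def absorbs(op1, op2, states):
    # op1 absorbs op2 if op2's effect is overwritten: op1 . op2 = op1
    return all(op1(op2(s)) == op1(s) for s in states)

add_a = lambda s: s | {"a"}
rem_a = lambda s: s - {"a"}
add_b = lambda s: s | {"b"}

states = [frozenset(), frozenset("a"), frozenset("ab")]
assert commutes(add_a, add_b, states)       # independent adds commute
assert not commutes(add_a, rem_a, states)   # add/remove on "a" conflict
assert absorbs(rem_a, add_a, states)        # a later remove masks the add
```

Pairs of operations that commute or are absorbed need no dependency edge, which is what lets the criterion accept more executions than classical conflict serializability.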
@InProceedings{POPL17p458,
author = {Lucas Brutschy and Dimitar Dimitrov and Peter Müller and Martin Vechev},
title = {Serializability for Eventual Consistency: Criterion, Analysis, and Applications},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {458--472},
doi = {10.1145/3009837.3009895},
year = {2017},
}


Dolan, Stephen 
POPL '17: "Polymorphism, Subtyping, and ..."
Polymorphism, Subtyping, and Type Inference in MLsub
Stephen Dolan and Alan Mycroft (University of Cambridge, UK)
We present a type system combining subtyping and ML-style parametric polymorphism. Unlike previous work, our system supports type inference and has compact principal types. We demonstrate this system in the minimal language MLsub, which types a strict superset of core ML programs.
This is made possible by keeping a strict separation between the types used to describe inputs and those used to describe outputs, and extending the classical unification algorithm to handle subtyping constraints between these input and output types. Principal types are kept compact by type simplification, which exploits deep connections between subtyping and the algebra of regular languages. An implementation is available online.
@InProceedings{POPL17p60,
author = {Stephen Dolan and Alan Mycroft},
title = {Polymorphism, Subtyping, and Type Inference in MLsub},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {60--72},
doi = {10.1145/3009837.3009882},
year = {2017},
}


Donaldson, Alastair F. 
POPL '17: "Dynamic Race Detection for ..."
Dynamic Race Detection for C++11
Christopher Lidbury and Alastair F. Donaldson (Imperial College London, UK)
The intricate rules for memory ordering and synchronisation associated with the C/C++11 memory model mean that data races can be difficult to eliminate from concurrent programs. Dynamic data race analysis can pinpoint races in large and complex applications, but the state-of-the-art ThreadSanitizer (tsan) tool for C/C++ considers only sequentially consistent program executions, and does not correctly model synchronisation between C/C++11 atomic operations. We present a scalable dynamic data race analysis for C/C++11 that correctly captures C/C++11 synchronisation, and uses instrumentation to support exploration of a class of non-sequentially-consistent executions. We concisely define the memory model fragment captured by our instrumentation via a restricted axiomatic semantics, and show that the axiomatic semantics permits exactly those executions explored by our instrumentation. We have implemented our analysis in tsan, and evaluate its effectiveness on benchmark programs, enabling a comparison with the CDSChecker tool, and on two large and highly concurrent applications: the Firefox and Chromium web browsers. Our results show that our method can detect races that are beyond the scope of the original tsan tool, and that the overhead associated with applying our enhanced instrumentation to large applications is tolerable.
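The release/acquire synchronisation a race detector must track can be sketched with vector clocks (my own minimal illustration, not tsan's shadow-state encoding): a store-release publishes the writer's clock, a load-acquire joins it, and two accesses race when neither happens before the other.

```python
# Vector-clock sketch of release/acquire happens-before tracking.
def join(c1, c2):
    return tuple(max(a, b) for a, b in zip(c1, c2))

def happens_before(c_event, c_later):
    return all(a <= b for a, b in zip(c_event, c_later))

clock = [[1, 0], [0, 1]]     # one vector clock per thread
flag_clock = (0, 0)          # clock published by the last store-release

# Thread 0: plain write to data, then store-release on a flag
write0 = tuple(clock[0])
clock[0][0] += 1
flag_clock = join(flag_clock, write0)

# Thread 1: load-acquire on the flag, then plain read of the data
clock[1] = list(join(tuple(clock[1]), flag_clock))
read1 = tuple(clock[1])

assert happens_before(write0, read1)   # synchronised: no race reported
unsync = (0, 1)                        # a thread-1 access before the acquire
assert not happens_before(write0, unsync) and not happens_before(unsync, write0)
```

The last assertion shows the racy case: two accesses unordered by happens-before, which is exactly what the analysis reports.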
@InProceedings{POPL17p443,
author = {Christopher Lidbury and Alastair F. Donaldson},
title = {Dynamic Race Detection for C++11},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {443--457},
doi = {10.1145/3009837.3009857},
year = {2017},
}


Dreyer, Derek 
POPL '17: "A Promising Semantics for ..."
A Promising Semantics for RelaxedMemory Concurrency
Jeehoon Kang, Chung-Kil Hur, Ori Lahav, Viktor Vafeiadis, and Derek Dreyer (Seoul National University, South Korea; MPI-SWS, Germany)
Despite many years of research, it has proven very difficult to
develop a memory model for concurrent programming languages that
adequately balances the conflicting desiderata of programmers,
compilers, and hardware. In this paper, we propose the first relaxed
memory model that (1) accounts for a broad spectrum of features from
the C++11 concurrency model, (2) is implementable, in the sense that
it provably validates many standard compiler optimizations and
reorderings, as well as standard compilation schemes to x86-TSO and
Power, (3) justifies simple invariantbased reasoning, thus
demonstrating the absence of bad "out-of-thin-air" behaviors, (4)
supports "DRF" guarantees, ensuring that programmers who use
sufficient synchronization need not understand the full complexities
of relaxed-memory semantics, and (5) defines the semantics of racy
programs without relying on undefined behaviors, which is a
prerequisite for applicability to typesafe languages like Java.
The key novel idea behind our model is the notion of *promises*: a
thread may promise to execute a write in the future, thus enabling
other threads to read from that write out of order. Crucially, to
prevent out-of-thin-air behaviors, a promise step requires a
thread-local certification that it will be possible to execute the
promised write even in the absence of the promise. To establish
confidence in our model, we have formalized most of our key results in
Coq.
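The certification requirement can be illustrated with a deliberately simplified sketch (my own toy, far simpler than the paper's operational model): a thread may promise a write only if, running alone from the current state, it can perform that write without relying on the promise itself.

```python
# Toy promise certification: simulate the thread in isolation and check
# whether it reaches the promised write on its own.
def certify(thread_steps, state, promised_write):
    s = dict(state)
    for step in thread_steps:
        w = step(s)                  # a step may return a write (loc, val)
        if w == promised_write:
            return True
        if w is not None:
            s[w[0]] = w[1]
    return False

state = {"x": 0, "y": 0}
t_const = [lambda s: None, lambda s: ("y", 1)]   # r := x; y := 1
t_copy  = [lambda s: ("y", s["x"])]              # y := x

assert certify(t_const, state, ("y", 1))         # promise is certifiable
assert not certify(t_copy, state, ("y", 1))      # would be out of thin air
```

The second thread can only write y := 1 if it first reads x = 1 from elsewhere, so promising that write in isolation fails, mirroring how certification blocks out-of-thin-air values.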
@InProceedings{POPL17p175,
author = {Jeehoon Kang and ChungKil Hur and Ori Lahav and Viktor Vafeiadis and Derek Dreyer},
title = {A Promising Semantics for RelaxedMemory Concurrency},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {175--189},
doi = {10.1145/3009837.3009850},
year = {2017},
}


Dudenhefner, Andrej 
POPL '17: "Intersection Type Calculi ..."
Intersection Type Calculi of Bounded Dimension
Andrej Dudenhefner and Jakob Rehof (TU Dortmund, Germany) A notion of dimension in intersection-typed λ-calculi is presented. The dimension of a typed λ-term is given by the minimal norm of an elaboration (a proof-theoretic decoration) necessary for typing the term at its type, and, intuitively, measures intersection introduction as a resource. Bounded-dimensional intersection type calculi are shown to enjoy subject reduction, since terms can be elaborated in non-increasing norm under β-reduction. We prove that a multiset interpretation (corresponding to a non-idempotent and non-linear interpretation of intersection) of dimensionality corresponds to the number of simultaneous constraints required during search for inhabitants. As a consequence, the inhabitation problem is decidable in bounded multiset dimension, and it is proven to be EXPSPACE-complete. This result is a substantial generalization of inhabitation for the rank-2 fragment, yielding a calculus with decidable inhabitation which is independent of rank. Our results give rise to a new criterion (dimensional bound) for subclasses of intersection type calculi with a decidable inhabitation problem, which is orthogonal to previously known criteria, and which should have immediate applications in synthesis. Additionally, we give examples of dimensional analysis of fragments of the intersection type system, including conservativity over simple types, rank-2 types, and normal form typings, and we provide some observations towards dimensional analysis of other systems. It is suggested (for future work) that our notion of dimension may have semantic interpretations in terms of reduction complexity.

Dunfield, Joshua 
POPL '17: "Sums of Uncertainty: Refinements ..."
Sums of Uncertainty: Refinements Go Gradual
Khurram A. Jafery and Joshua Dunfield (University of British Columbia, Canada)
A long-standing shortcoming of statically typed functional languages is that type checking does not rule out pattern-matching failures (run-time match exceptions). Refinement types distinguish different values of datatypes; if a program annotated with refinements passes type checking, pattern-matching failures become impossible. Unfortunately, refinement is a monolithic property of a type, exacerbating the difficulty of adding refinement types to non-trivial programs.
Gradual typing has explored how to incrementally move between static typing and dynamic typing. We develop a type system of gradual sums that combines refinement with imprecision. Then, we develop a bidirectional version of the type system, which rules out excessive imprecision, and give a type-directed translation to a target language with explicit casts. We prove that the static sublanguage cannot have match failures, that a well-typed program remains well-typed if its type annotations are made less precise, and that making annotations less precise causes target programs to fail later. Several of these results correspond to criteria for gradual typing given by Siek et al. (2015).
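The cast behaviour can be sketched with a toy dynamic encoding (my own illustration, not the paper's calculus): matching a precisely typed sum is total, while matching an imprecisely typed value goes through a check that can fail at run time.

```python
# Toy sums with a dynamic cast at the match site.
class Inl:
    def __init__(self, v): self.v = v

class Inr:
    def __init__(self, v): self.v = v

def match_sum(s, on_l, on_r):
    if isinstance(s, Inl):
        return on_l(s.v)
    if isinstance(s, Inr):
        return on_r(s.v)
    raise TypeError("cast failure: value is not a sum")   # imprecise case

assert match_sum(Inl(3), lambda x: x + 1, lambda x: x - 1) == 4
try:
    match_sum(42, lambda x: x, lambda x: x)   # a dynamic, non-sum value
    assert False
except TypeError:
    pass
```

In the static sublanguage every scrutinee is provably an Inl or Inr, so the failure branch is unreachable; only imprecisely annotated code can reach it.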
@InProceedings{POPL17p804,
author = {Khurram A. Jafery and Joshua Dunfield},
title = {Sums of Uncertainty: Refinements Go Gradual},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {804--817},
doi = {10.1145/3009837.3009865},
year = {2017},
}


Enea, Constantin 
POPL '17: "On Verifying Causal Consistency ..."
On Verifying Causal Consistency
Ahmed Bouajjani, Constantin Enea, Rachid Guerraoui, and Jad Hamza (University of Paris Diderot, France; EPFL, Switzerland; Inria, France) Causal consistency is one of the most widely adopted consistency criteria for distributed implementations of data structures. It ensures that operations are executed at all sites according to their causal precedence. We address the issue of automatically verifying whether the executions of an implementation of a data structure are causally consistent. We consider two problems: (1) checking whether one single execution is causally consistent, which is relevant for developing testing and bug-finding algorithms, and (2) verifying whether all the executions of an implementation are causally consistent. We show that the first problem is NP-complete. This holds even for the read-write memory abstraction, which is a building block of many modern distributed systems. Indeed, such systems often store data in key-value stores, which are instances of the read-write memory abstraction. Moreover, we prove that, surprisingly, the second problem is undecidable, and again this holds even for the read-write memory abstraction. However, we show that for the read-write memory abstraction these negative results can be circumvented if the implementations are data independent, i.e., their behaviors do not depend on the data values that are written or read at each moment, which is a realistic assumption. We prove that for data-independent implementations, the problem of checking the correctness of a single execution w.r.t. the read-write memory abstraction is solvable in polynomial time. Furthermore, we show that for such implementations the set of non-causally-consistent executions can be represented by means of a finite number of register automata. Using these machines as observers (in parallel with the implementation) allows the problem of checking causal consistency to be reduced, in polynomial time, to a state reachability problem.
This reduction holds regardless of the class of programs used for the implementation, of the number of read-write variables, and of the used data domain. It allows leveraging existing techniques for assertion/reachability checking for causal consistency verification. Moreover, for a significant class of implementations, we derive from this reduction the decidability of verifying causal consistency w.r.t. the read-write memory abstraction.
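To convey the combinatorial search behind single-execution checking, here is a naive sketch of my own; note that it checks for a *sequential* explanation of the reads, a stronger criterion than causal consistency, and its exhaustive enumeration echoes why the general problem is NP-complete.

```python
from itertools import permutations

# Sessions are lists of ("w"/"r", location, value); locations start at 0.
def explainable(sessions):
    ops = [(i, j) for i, s in enumerate(sessions) for j in range(len(s))]
    for order in permutations(ops):
        if any(a[0] == b[0] and a[1] > b[1]
               for k, a in enumerate(order) for b in order[k + 1:]):
            continue                     # violates some session's order
        mem, ok = {}, True
        for i, j in order:
            kind, loc, val = sessions[i][j]
            if kind == "w":
                mem[loc] = val
            elif mem.get(loc, 0) != val:
                ok = False               # read does not see latest write
                break
        if ok:
            return True
    return False

writer = [("w", "x", 1)]
assert explainable([writer, [("r", "x", 1)]])
assert not explainable([writer, [("r", "x", 1), ("r", "x", 0)]])  # stale read
```

A real causal-consistency checker must instead search for a causal order per site, which is what the paper's polynomial-time result handles for data-independent implementations.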

Faggian, Claudia 
POPL '17: "The Geometry of Parallelism: ..."
The Geometry of Parallelism: Classical, Probabilistic, and Quantum Effects
Ugo Dal Lago, Claudia Faggian, Benoît Valiron, and Akira Yoshimizu (University of Bologna, Italy; Inria, France; CNRS, France; University of Paris Diderot, France; University of Paris-Saclay, France; University of Tokyo, Japan) We introduce a Geometry of Interaction model for higher-order quantum computation, and prove its adequacy for a fully fledged quantum programming language in which entanglement, duplication, and recursion are all available. This model is an instance of a new framework which captures not only quantum but also classical and probabilistic computation. Its main feature is the ability to model commutative effects in a parallel setting. Our model comes with a multi-token machine, a proof net system, and a style language. Being based on a multi-token machine equipped with a memory, it has a concrete nature which makes it well suited for building low-level operational descriptions of higher-order languages.

Feng, Yu 
POPL '17: "ComponentBased Synthesis ..."
ComponentBased Synthesis for Complex APIs
Yu Feng, Ruben Martins, Yuepeng Wang, Isil Dillig, and Thomas W. Reps (University of Texas at Austin, USA; University of WisconsinMadison, USA)
Componentbased approaches to program synthesis assemble programs from a database of existing components, such as methods provided by an API. In this paper, we present a novel typedirected algorithm for componentbased synthesis. The key novelty of our approach is the use of a compact Petrinet representation to model relationships between methods in an API. Given a target method signature S, our approach performs reachability analysis on the underlying Petrinet model to identify sequences of method calls that could be used to synthesize an implementation of S. The programs synthesized by our algorithm are guaranteed to type check and pass all test cases provided by the user.
We have implemented this approach in a tool called SyPet, and used it to successfully synthesize realworld programming tasks extracted from online forums and existing code repositories. We also compare SyPet with two stateoftheart synthesis tools, namely InSynth and CodeHint, and demonstrate that SyPet can synthesize more programs in less time. Finally, we compare our approach with an alternative solution based on hypergraphs and demonstrate its advantages.
@InProceedings{POPL17p599,
author = {Yu Feng and Ruben Martins and Yuepeng Wang and Isil Dillig and Thomas W. Reps},
title = {Component-Based Synthesis for Complex APIs},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {599--612},
doi = {10.1145/3009837.3009851},
year = {2017},
}


Flur, Shaked 
POPL '17: "MixedSize Concurrency: ARM, ..."
Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC
Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis, Luc Maranget, Kathryn E. Gray, Ali Sezgin, Mark Batty, and Peter Sewell (University of Cambridge, UK; University of St. Andrews, UK; Inria, France; University of Kent, UK)
Previous work on the semantics of relaxed shared-memory concurrency has only considered the case in which each load reads the data of exactly one store. In practice, however, multiprocessors support mixed-size accesses, and these are used by systems software and (to some degree) exposed at the C/C++ language level. A semantic foundation for software, therefore, has to address them.
We investigate the mixed-size behaviour of ARMv8 and IBM POWER architectures and implementations: by experiment, by developing semantic models, by testing the correspondence between these, and by discussion with ARM and IBM staff. This turns out to be surprisingly subtle, and on the way we have to revisit the fundamental concepts of coherence and sequential consistency, which change in this setting. In particular, we show that adding a memory barrier between each instruction does not restore sequential consistency. We go on to extend the C/C++11 model to support non-atomic mixed-size memory accesses.
This is a necessary step towards semantics for real-world shared-memory concurrent code, beyond litmus tests.
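Why mixed-size accesses break the one-store-per-load assumption can be seen at the byte level (my own sketch, not the ARM/POWER models): a 32-bit load can observe bytes written by two different 16-bit stores.

```python
# Byte-addressed memory where a wide load mixes two narrow stores.
mem = bytearray(4)

def store16(addr, val):
    mem[addr:addr + 2] = val.to_bytes(2, "little")

def load32(addr):
    return int.from_bytes(mem[addr:addr + 4], "little")

store16(0, 0x1111)
store16(2, 0x2222)
assert load32(0) == 0x22221111   # the load reads data from both stores
```

A reads-from relation that maps each load to a single store cannot describe this outcome, which is the starting point of the paper's model.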
@InProceedings{POPL17p429,
author = {Shaked Flur and Susmit Sarkar and Christopher Pulte and Kyndylan Nienhuis and Luc Maranget and Kathryn E. Gray and Ali Sezgin and Mark Batty and Peter Sewell},
title = {Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {429--442},
doi = {10.1145/3009837.3009839},
year = {2017},
}


Foster, Nate 
POPL '17: "Cantor Meets Scott: Semantic ..."
Cantor Meets Scott: Semantic Foundations for Probabilistic Networks
Steffen Smolka, Praveen Kumar, Nate Foster, Dexter Kozen, and Alexandra Silva (Cornell University, USA; University College London, UK)
ProbNetKAT is a probabilistic extension of NetKAT with a denotational semantics based on Markov kernels. The language is expressive enough to generate continuous distributions, which raises the question of how to compute effectively in the language. This paper gives a new characterization of ProbNetKAT’s semantics using domain theory, which provides the foundation needed to build a practical implementation. We show how to use the semantics to approximate the behavior of arbitrary ProbNetKAT programs using distributions with finite support. We develop a prototype implementation and show how to use it to solve a variety of problems including characterizing the expected congestion induced by different routing schemes and reasoning probabilistically about reachability in a network.
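A finite-support flavor of the semantics can be sketched as follows (my own illustration, not ProbNetKAT's syntax or its Markov-kernel construction): a probabilistic forwarding policy maps a packet's current switch to a distribution over next switches, and policy sequencing is kernel composition.

```python
# Markov kernels as nested dicts: state -> {state: probability}.
def compose(k1, k2):
    out = {}
    for s, dist in k1.items():
        acc = {}
        for mid, p in dist.items():
            for t, q in k2[mid].items():
                acc[t] = acc.get(t, 0.0) + p * q
        out[s] = acc
    return out

# ECMP-style 50/50 split across two core switches, then forwarding to "dst"
hop = {"src": {"s1": 0.5, "s2": 0.5}}
fwd = {"s1": {"dst": 1.0}, "s2": {"dst": 1.0}}

end_to_end = compose(hop, fwd)
assert end_to_end["src"]["dst"] == 1.0   # delivery is certain
assert hop["src"]["s1"] == 0.5           # expected load on switch s1
```

Queries like expected congestion reduce to sums over such finite distributions, which is what the finite-support approximation makes computable.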
@InProceedings{POPL17p557,
author = {Steffen Smolka and Praveen Kumar and Nate Foster and Dexter Kozen and Alexandra Silva},
title = {Cantor Meets Scott: Semantic Foundations for Probabilistic Networks},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {557--571},
doi = {10.1145/3009837.3009843},
year = {2017},
}


Gaboardi, Marco 
POPL '17: "A Semantic Account of Metric ..."
A Semantic Account of Metric Preservation
Arthur Azevedo de Amorim, Marco Gaboardi, Justin Hsu, Shinya Katsumata, and Ikram Cherigui (University of Pennsylvania, USA; SUNY Buffalo, USA; Kyoto University, Japan; ENS, France) Program sensitivity measures how robust a program is to small changes in its input, and is a fundamental notion in domains ranging from differential privacy to cyber-physical systems. A natural way to formalize program sensitivity is in terms of metrics on the input and output spaces, requiring that an r-sensitive function map inputs that are at distance d to outputs that are at distance at most r · d. Program sensitivity is thus an analogue of Lipschitz continuity for programs. Reed and Pierce introduced Fuzz, a functional language with a linear type system that can express program sensitivity. They show soundness operationally, in the form of a metric preservation property. Inspired by their work, we study program sensitivity and metric preservation from a denotational point of view. In particular, we introduce metric CPOs, a novel semantic structure for reasoning about computation on metric spaces, by endowing CPOs with a compatible notion of distance. This structure is useful for reasoning about metric properties of programs, and specifically about program sensitivity. We demonstrate metric CPOs by giving a model for the deterministic fragment of Fuzz.
POPL '17: "Relational Cost Analysis ..."
Relational Cost Analysis
Ezgi Çiçek, Gilles Barthe, Marco Gaboardi, Deepak Garg, and Jan Hoffmann (MPI-SWS, Germany; IMDEA Software Institute, Spain; SUNY Buffalo, USA; Carnegie Mellon University, USA) Establishing quantitative bounds on the execution cost of programs is essential in many areas of computer science such as complexity analysis, compiler optimizations, security, and privacy. Techniques based on program analysis, type systems, and abstract interpretation are well-studied, but methods for analyzing how the execution costs of two programs compare to each other have not received attention.
Naively combining the worst- and best-case execution costs of the two programs does not work well in many cases because such analysis forgets the similarities between the programs or the inputs. In this work, we propose a relational cost analysis technique that is capable of establishing precise bounds on the difference in the execution cost of two programs by making use of relational properties of programs and inputs. We develop a refinement type and effect system for a higher-order functional language with recursion and subtyping. The key novelty of our technique is the combination of relational refinements with two modes of typing: relational typing for reasoning about similar computations/inputs and unary typing for reasoning about unrelated computations/inputs. This combination allows us to analyze the execution cost difference of two programs more precisely than a naive non-relational approach. We prove our type system sound using a semantic model based on step-indexed unary and binary logical relations accounting for non-relational and relational reasoning principles with their respective costs. We demonstrate the precision and generality of our technique through examples.
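The r-sensitivity definition from the metric-preservation abstract above can be checked empirically on sample pairs (a toy sketch of the definition itself, not of metric CPOs):

```python
# Check dist(f(x), f(y)) <= r * dist(x, y) on sample input pairs.
def sensitive(f, r, pairs, dist=lambda a, b: abs(a - b)):
    return all(dist(f(x), f(y)) <= r * dist(x, y) + 1e-9 for x, y in pairs)

double = lambda x: 2 * x
halve = lambda x: x / 2
pairs = [(0, 1), (2, 5), (-3, 4)]

assert sensitive(double, 2, pairs)                       # 2-sensitive
assert not sensitive(double, 1, pairs)                   # but not 1-sensitive
assert sensitive(lambda x: halve(double(x)), 1, pairs)   # 2 * 1/2 = 1
```

The last check illustrates the compositional reading that a linear type system like Fuzz captures statically: sensitivities multiply under composition.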

Gallois-Wong, Diane 
POPL '17: "Beginner's Luck: A Language ..."
Beginner's Luck: A Language for Property-Based Generators
Leonidas Lampropoulos, Diane Gallois-Wong, Cătălin Hriţcu, John Hughes, Benjamin C. Pierce, and Liyao Xia (University of Pennsylvania, USA; Inria, France; ENS, France; Chalmers University of Technology, Sweden)
Property-based random testing à la QuickCheck requires building efficient generators for well-distributed random data satisfying complex logical predicates, but writing these generators can be difficult and error-prone. We propose a domain-specific language in which generators are conveniently expressed by decorating predicates with lightweight annotations to control both the distribution of generated values and the amount of constraint solving that happens before each variable is instantiated. This language, called Luck, makes generators easier to write, read, and maintain.
We give Luck a formal semantics and prove several fundamental properties, including the soundness and completeness of random generation with respect to a standard predicate semantics. We evaluate Luck on common examples from the property-based testing literature and on two significant case studies, showing that it can be used in complex domains with comparable bug-finding effectiveness and a significant reduction in testing code size compared to hand-written generators.
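The gain over generate-and-filter can be sketched in miniature (my own toy, not Luck's annotation language or semantics): instantiating each variable under the predicate's constraint makes every generated value satisfy "x <= y" by construction, so nothing is discarded.

```python
import random

# Predicate-directed generation for the constraint x <= y.
def gen_sorted_pair(lo=0, hi=100):
    x = random.randint(lo, hi)
    y = random.randint(x, hi)    # the constraint guides instantiation
    return x, y

random.seed(0)
samples = [gen_sorted_pair() for _ in range(100)]
assert all(x <= y for x, y in samples)   # no generated value is discarded
```

A generate-and-filter loop over independent x and y would reject about half its candidates here; Luck's annotations control this trade-off for far richer predicates.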
@InProceedings{POPL17p114,
author = {Leonidas Lampropoulos and Diane Gallois-Wong and Cătălin Hriţcu and John Hughes and Benjamin C. Pierce and Liyao Xia},
title = {Beginner's Luck: A Language for Property-Based Generators},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {114--129},
doi = {10.1145/3009837.3009868},
year = {2017},
}


Garg, Deepak 
POPL '17: "Relational Cost Analysis ..."
Relational Cost Analysis
Ezgi Çiçek, Gilles Barthe, Marco Gaboardi, Deepak Garg, and Jan Hoffmann (MPI-SWS, Germany; IMDEA Software Institute, Spain; SUNY Buffalo, USA; Carnegie Mellon University, USA) Establishing quantitative bounds on the execution cost of programs is essential in many areas of computer science such as complexity analysis, compiler optimizations, security, and privacy. Techniques based on program analysis, type systems, and abstract interpretation are well-studied, but methods for analyzing how the execution costs of two programs compare to each other have not received attention. Naively combining the worst- and best-case execution costs of the two programs does not work well in many cases because such analysis forgets the similarities between the programs or the inputs. In this work, we propose a relational cost analysis technique that is capable of establishing precise bounds on the difference in the execution cost of two programs by making use of relational properties of programs and inputs. We develop a refinement type and effect system for a higher-order functional language with recursion and subtyping. The key novelty of our technique is the combination of relational refinements with two modes of typing: relational typing for reasoning about similar computations/inputs and unary typing for reasoning about unrelated computations/inputs. This combination allows us to analyze the execution cost difference of two programs more precisely than a naive non-relational approach. We prove our type system sound using a semantic model based on step-indexed unary and binary logical relations accounting for non-relational and relational reasoning principles with their respective costs. We demonstrate the precision and generality of our technique through examples.

Germane, Kimball 
POPL '17: "A Posteriori Environment Analysis ..."
A Posteriori Environment Analysis with Pushdown Delta CFA
Kimball Germane and Matthew Might (University of Utah, USA) Flow-driven higher-order inlining is blocked by free variables, yet current theories of environment analysis cannot reliably cope with multiply-bound variables. One of these, ΔCFA, is a promising theory based on stack change but is undermined by its finite-state model of the stack. We present Pushdown ΔCFA, which takes a ΔCFA approach to pushdown models of control flow and can cope with multiply-bound variables, even in the face of recursion. 

Gopalakrishnan, Ganesh 
POPL '17: "Rigorous Floating-Point Mixed-Precision ..."
Rigorous Floating-Point Mixed-Precision Tuning
Wei-Fan Chiang, Mark Baranowski, Ian Briggs, Alexey Solovyev, Ganesh Gopalakrishnan, and Zvonimir Rakamarić (University of Utah, USA)
Virtually all real-valued computations are carried out using floating-point data types and operations. The precision of these data types must be set with the twin goals of reducing the overall round-off error and improving performance. Often, a mixed-precision allocation achieves this optimum; unfortunately, there are no techniques available to compute such allocations and conservatively meet a given error target across all program inputs. In this work, we present a rigorous approach to precision allocation based on formal analysis via Symbolic Taylor Expansions, and error analysis based on interval functions. This approach is implemented in an automated tool called FPTuner that generates and solves a quadratically constrained quadratic program to obtain a precision-annotated version of the given expression. FPTuner automatically introduces all the requisite precision up- and down-casting operations. It also allows users to flexibly control precision allocation using constraints to cap the number of high-precision operators, as well as to group operators that must be allocated the same precision, to facilitate vectorization. We evaluate FPTuner by tuning several benchmarks and measuring the proportion of lower-precision operators allocated as we increase the error threshold. We also measure the reduction in energy consumption resulting from executing mixed-precision tuned code on a real hardware platform. We observe significant energy savings in response to mixed-precision tuning, but also observe situations where unexpected compiler behaviors thwart intended optimizations.
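The effect of a mixed-precision allocation can be illustrated with a small sketch (the expression and the choice of which operation to demote are made up for illustration; binary32 rounding is simulated through the standard `struct` module):

```python
import struct

def to_single(x):
    """Round a Python float (binary64) through binary32, simulating a demotion."""
    return struct.unpack('f', struct.pack('f', x))[0]

def dot2_double(a, b, c, d):
    return a * b + c * d                    # every operation in double precision

def dot2_mixed(a, b, c, d):
    # hypothetical allocation: demote only the second product to single precision
    return a * b + to_single(c * d)
```

A tuner like FPTuner searches over such allocations so that the induced error stays below a user-given target while cheaper operations are used wherever possible.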
@InProceedings{POPL17p300,
author = {Wei-Fan Chiang and Mark Baranowski and Ian Briggs and Alexey Solovyev and Ganesh Gopalakrishnan and Zvonimir Rakamarić},
title = {Rigorous Floating-Point Mixed-Precision Tuning},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {300--315},
doi = {10.1145/3009837.3009846},
year = {2017},
}
Publisher's Version
Article Search


Gray, Kathryn E. 
POPL '17: "Mixed-Size Concurrency: ARM, ..."
Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC
Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis, Luc Maranget, Kathryn E. Gray, Ali Sezgin, Mark Batty, and Peter Sewell (University of Cambridge, UK; University of St. Andrews, UK; Inria, France; University of Kent, UK)
Previous work on the semantics of relaxed shared-memory concurrency has only considered the case in which each load reads the data of exactly one store. In practice, however, multiprocessors support mixed-size accesses, and these are used by systems software and (to some degree) exposed at the C/C++ language level. A semantic foundation for software, therefore, has to address them.
We investigate the mixed-size behaviour of ARMv8 and IBM POWER architectures and implementations: by experiment, by developing semantic models, by testing the correspondence between these, and by discussion with ARM and IBM staff. This turns out to be surprisingly subtle, and on the way we have to revisit the fundamental concepts of coherence and sequential consistency, which change in this setting. In particular, we show that adding a memory barrier between each instruction does not restore sequential consistency. We go on to extend the C/C++11 model to support non-atomic mixed-size memory accesses.
This is a necessary step towards semantics for real-world shared-memory concurrent code, beyond litmus tests.
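Why one-store-per-load semantics breaks down can be seen with a byte-level sketch of memory (a simplified illustration; real architectures add alignment, atomicity, and ordering concerns the paper addresses):

```python
def store(mem, addr, value, nbytes):
    """Write an nbytes little-endian value into a bytearray 'memory'."""
    mem[addr:addr + nbytes] = value.to_bytes(nbytes, 'little')

def load(mem, addr, nbytes):
    """Read an nbytes little-endian value."""
    return int.from_bytes(mem[addr:addr + nbytes], 'little')

mem = bytearray(4)
store(mem, 0, 0x1111, 2)        # one 16-bit store
store(mem, 2, 0x2222, 2)        # an adjacent 16-bit store
wide = load(mem, 0, 4)          # a single 32-bit load reads from BOTH stores,
                                # the situation per-store read semantics cannot describe
```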
@InProceedings{POPL17p429,
author = {Shaked Flur and Susmit Sarkar and Christopher Pulte and Kyndylan Nienhuis and Luc Maranget and Kathryn E. Gray and Ali Sezgin and Mark Batty and Peter Sewell},
title = {Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {429--442},
doi = {10.1145/3009837.3009839},
year = {2017},
}
Publisher's Version
Article Search
Info


Greenman, Ben 
POPL '17: "Type Systems as Macros ..."
Type Systems as Macros
Stephen Chang, Alex Knauth, and Ben Greenman (Northeastern University, USA) We present Turnstile, a metalanguage for creating typed embedded languages. To implement the type system, programmers write type checking rules resembling traditional judgment syntax. To implement the semantics, they incorporate elaborations into these rules. Turnstile critically depends on the idea of linguistic reuse. It exploits a macro system in a novel way to simultaneously type check and rewrite a surface program into a target language. Reusing a macro system also yields modular implementations whose rules may be mixed and matched to create other languages. Combined with typical compiler and runtime reuse, Turnstile produces performant typed embedded languages with little effort. 

Grégoire, Benjamin 
POPL '17: "Coupling Proofs Are Probabilistic ..."
Coupling Proofs Are Probabilistic Product Programs
Gilles Barthe, Benjamin Grégoire, Justin Hsu, and Pierre-Yves Strub (IMDEA Software Institute, Spain; Inria, France; University of Pennsylvania, USA; École Polytechnique, France)
Couplings are a powerful mathematical tool for reasoning about pairs of probabilistic processes. Recent developments in formal verification identify a close connection between couplings and pRHL, a relational program logic motivated by applications to provable security, enabling formal construction of couplings from the probability theory literature. However, existing work using pRHL merely shows existence of a coupling and does not give a way to prove quantitative properties about the coupling, needed to reason about mixing and convergence of probabilistic processes. Furthermore, pRHL is inherently incomplete, and is not able to capture some advanced forms of couplings such as shift couplings. We address both problems as follows.
First, we define an extension of pRHL, called xpRHL, which explicitly constructs the coupling in a pRHL derivation in the form of a probabilistic product program that simulates two correlated runs of the original program. Existing verification tools for probabilistic programs can then be directly applied to the probabilistic product to prove quantitative properties of the coupling. Second, we equip xpRHL with a new rule for while loops, where reasoning can freely mix synchronized and unsynchronized loop iterations. Our proof rule can capture examples of shift couplings, and the logic is relatively complete for deterministic programs.
We show soundness of xpRHL and use it to analyze two classes of examples. First, we verify rapid mixing using different tools from coupling: standard coupling, shift coupling, and path coupling, a compositional principle for combining local couplings into a global coupling. Second, we verify (approximate) equivalence between a source and an optimized program for several instances of loop optimizations from the literature.
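The product-program idea, one program whose single random stream drives two correlated runs, can be sketched for the textbook monotone coupling of two Bernoulli processes (an illustration of the concept, not the paper's logic):

```python
import random

def coupled_step(u, p, q):
    """One step of a 'probabilistic product program': a single shared sample u
    drives both Bernoulli processes, yielding the classic monotone coupling."""
    return (u < p, u < q)

rng = random.Random(42)
samples = [coupled_step(rng.random(), 0.3, 0.7) for _ in range(1000)]
```

Each marginal is distributed exactly as before, but under the coupling the p-process never fires without the q-process (for p ≤ q), a quantitative property one can now check on the single product program.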
@InProceedings{POPL17p161,
author = {Gilles Barthe and Benjamin Grégoire and Justin Hsu and Pierre-Yves Strub},
title = {Coupling Proofs Are Probabilistic Product Programs},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {161--174},
doi = {10.1145/3009837.3009896},
year = {2017},
}
Publisher's Version
Article Search


Grigore, Radu 
POPL '17: "Java Generics Are Turing Complete ..."
Java Generics Are Turing Complete
Radu Grigore (University of Kent, UK)
This paper describes a reduction from the halting problem of Turing machines to subtype checking in Java. It follows that subtype checking in Java is undecidable, which answers a question posed by Kennedy and Pierce in 2007. It also follows that Java's type checker can recognize any recursive language, which improves a result of Gil and Levy from 2016. The latter point is illustrated by a parser generator for fluent interfaces.
@InProceedings{POPL17p73,
author = {Radu Grigore},
title = {Java Generics Are Turing Complete},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {73--85},
doi = {10.1145/3009837.3009871},
year = {2017},
}
Publisher's Version
Article Search
Info


Guerraoui, Rachid 
POPL '17: "On Verifying Causal Consistency ..."
On Verifying Causal Consistency
Ahmed Bouajjani, Constantin Enea, Rachid Guerraoui, and Jad Hamza (University of Paris Diderot, France; EPFL, Switzerland; Inria, France) Causal consistency is one of the most widely adopted consistency criteria for distributed implementations of data structures. It ensures that operations are executed at all sites according to their causal precedence. We address the issue of automatically verifying whether the executions of an implementation of a data structure are causally consistent. We consider two problems: (1) checking whether one single execution is causally consistent, which is relevant for developing testing and bug-finding algorithms, and (2) verifying whether all the executions of an implementation are causally consistent. We show that the first problem is NP-complete. This holds even for the read-write memory abstraction, which is a building block of many modern distributed systems. Indeed, such systems often store data in key-value stores, which are instances of the read-write memory abstraction. Moreover, we prove that, surprisingly, the second problem is undecidable, and again this holds even for the read-write memory abstraction. However, we show that for the read-write memory abstraction these negative results can be circumvented if the implementations are data independent, i.e., their behaviors do not depend on the data values that are written or read at each moment, which is a realistic assumption. We prove that for data-independent implementations, the problem of checking the correctness of a single execution w.r.t. the read-write memory abstraction is solvable in polynomial time. Furthermore, we show that for such implementations the set of non-causally-consistent executions can be represented by means of a finite number of register automata. Using these machines as observers (in parallel with the implementation) allows one to polynomially reduce the problem of checking causal consistency to a state reachability problem. 
This reduction holds regardless of the class of programs used for the implementation, of the number of read-write variables, and of the data domain used. It allows leveraging existing techniques for assertion/reachability checking for causal consistency verification. Moreover, for a significant class of implementations, we derive from this reduction the decidability of verifying causal consistency w.r.t. the read-write memory abstraction. 
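A flavor of single-execution checking: acyclicity of program order together with reads-from is a necessary condition for causal consistency (this is a deliberately simplified check, not the paper's full characterization; operations are numbered 0..n-1 and edges are assumed given):

```python
def has_cycle(n, edges):
    """Detect a cycle in a directed graph over nodes 0..n-1 (iterative DFS)."""
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
    state = [0] * n                       # 0 unvisited, 1 on stack, 2 done
    for start in range(n):
        if state[start]:
            continue
        state[start] = 1
        stack = [(start, iter(adj[start]))]
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                state[node] = 2
                stack.pop()
            elif state[nxt] == 1:         # back edge: cycle found
                return True
            elif state[nxt] == 0:
                state[nxt] = 1
                stack.append((nxt, iter(adj[nxt])))
    return False

# ops 0..3; edges = program order + reads-from for a tiny read-write history
ok_history  = [(0, 3), (0, 1), (1, 2), (2, 3)]   # acyclic: may be consistent
bad_history = [(0, 1), (2, 3), (3, 0), (1, 2)]   # each site reads a value written
                                                 # in its causal future: cyclic
```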

Hammer, Matthew A. 
POPL '17: "Hazelnut: A Bidirectionally ..."
Hazelnut: A Bidirectionally Typed Structure Editor Calculus
Cyrus Omar, Ian Voysey, Michael Hilton, Jonathan Aldrich, and Matthew A. Hammer (Carnegie Mellon University, USA; Oregon State University, USA; University of Colorado at Boulder, USA) Structure editors allow programmers to edit the tree structure of a program directly. This can have cognitive benefits, particularly for novice and end-user programmers. It also simplifies matters for tool designers, because they do not need to contend with malformed program text. This paper introduces Hazelnut, a structure editor based on a small bidirectionally typed lambda calculus extended with holes and a cursor. Hazelnut goes one step beyond syntactic well-formedness: its edit actions operate over statically meaningful incomplete terms. Naïvely, this would force the programmer to construct terms in a rigid “outside-in” manner. To avoid this problem, the action semantics automatically places terms assigned a type that is inconsistent with the expected type inside a hole. This meaningfully defers the type consistency check until the term inside the hole is finished. Hazelnut is not intended as an end-user tool itself. Instead, it serves as a foundational account of typed structure editing. To that end, we describe how Hazelnut’s rich metatheory, which we have mechanized using the Agda proof assistant, serves as a guide when we extend the calculus to include binary sum types. We also discuss various interpretations of holes, and in so doing reveal connections with gradual typing and contextual modal type theory, the Curry-Howard interpretation of contextual modal logic. Finally, we discuss how Hazelnut’s semantics lends itself to implementation as an event-based functional reactive program. Our simple reference implementation is written using js_of_ocaml. 
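The hole-placement idea can be sketched in a few lines (a simplified illustration of the action semantics, with '?' standing for the unknown type; the representation is invented here):

```python
HOLE = 'hole'

def consistent(t1, t2):
    """Type consistency: the unknown type '?' is consistent with every type."""
    return t1 == '?' or t2 == '?' or t1 == t2

def place(term, term_type, expected):
    """Instead of rejecting a type-inconsistent term, wrap it in a
    (non-empty) hole of unknown type so editing can continue."""
    if consistent(term_type, expected):
        return term, term_type
    return (HOLE, term), '?'
```

A well-typed term passes through unchanged; a mismatched one is preserved inside a hole, deferring the consistency check until the hole's contents are finished.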

Hamza, Jad 
POPL '17: "On Verifying Causal Consistency ..."
On Verifying Causal Consistency
Ahmed Bouajjani, Constantin Enea, Rachid Guerraoui, and Jad Hamza (University of Paris Diderot, France; EPFL, Switzerland; Inria, France) Causal consistency is one of the most widely adopted consistency criteria for distributed implementations of data structures. It ensures that operations are executed at all sites according to their causal precedence. We address the issue of automatically verifying whether the executions of an implementation of a data structure are causally consistent. We consider two problems: (1) checking whether one single execution is causally consistent, which is relevant for developing testing and bug-finding algorithms, and (2) verifying whether all the executions of an implementation are causally consistent. We show that the first problem is NP-complete. This holds even for the read-write memory abstraction, which is a building block of many modern distributed systems. Indeed, such systems often store data in key-value stores, which are instances of the read-write memory abstraction. Moreover, we prove that, surprisingly, the second problem is undecidable, and again this holds even for the read-write memory abstraction. However, we show that for the read-write memory abstraction these negative results can be circumvented if the implementations are data independent, i.e., their behaviors do not depend on the data values that are written or read at each moment, which is a realistic assumption. We prove that for data-independent implementations, the problem of checking the correctness of a single execution w.r.t. the read-write memory abstraction is solvable in polynomial time. Furthermore, we show that for such implementations the set of non-causally-consistent executions can be represented by means of a finite number of register automata. Using these machines as observers (in parallel with the implementation) allows one to polynomially reduce the problem of checking causal consistency to a state reachability problem. 
This reduction holds regardless of the class of programs used for the implementation, of the number of read-write variables, and of the data domain used. It allows leveraging existing techniques for assertion/reachability checking for causal consistency verification. Moreover, for a significant class of implementations, we derive from this reduction the decidability of verifying causal consistency w.r.t. the read-write memory abstraction. 

Harper, Robert 
POPL '17: "Parallel Functional Arrays ..."
Parallel Functional Arrays
Ananya Kumar, Guy E. Blelloch, and Robert Harper (Carnegie Mellon University, USA)
The goal of this paper is to develop a form of functional arrays (sequences) that are as efficient as imperative arrays, can be used in parallel, and have well-defined cost semantics. The key idea is to consider sequences with functional value semantics but non-functional cost semantics. Because the value semantics is functional, "updating" a sequence returns a new sequence. We allow operations on "older" sequences (called interior sequences) to be more expensive than operations on the "most recent" sequences (called leaf sequences).
We embed sequences in a language supporting fork-join parallelism. Due to the parallelism, operations can be interleaved nondeterministically, and, in conjunction with the different cost for interior and leaf sequences, this can lead to nondeterministic costs for a program. Consequently, the costs of programs can be difficult to analyze.
The main result is the derivation of a deterministic cost dynamics which makes analyzing the costs easier. The theorems are not specific to sequences and can be applied to other data types with different costs for operating on interior and leaf versions.
We present a wait-free concurrent implementation of sequences that requires constant work for accessing and updating leaf sequences, and logarithmic work for accessing and linear work for updating interior sequences. We sketch a proof of correctness for the sequence implementation. The key advantages of the present approach compared to current approaches are that our implementation requires no changes to existing programming languages, supports nested parallelism, and has well-defined cost semantics. At the same time, it allows for functional implementations of algorithms such as depth-first search with the same asymptotic complexity as imperative implementations.
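The leaf/interior cost asymmetry can be sketched with a sequential version-chain structure (a simplified illustration of the value semantics and cost model only; the paper's implementation is concurrent and wait-free, which this sketch does not attempt):

```python
class Seq:
    """Functional array sketch: the newest (leaf) version supports O(1)
    get/set; superseded (interior) versions read through a chain of undo
    records and pay linear work to update."""

    def __init__(self, data):
        self.store = list(data)
        self.diff = None          # (index, old_value, newer_version) once superseded

    def get(self, i):
        node = self
        while node.diff is not None:
            idx, old, node = node.diff    # walk toward the leaf; the first
            if idx == i:                  # overwrite of i records the value
                return old                # this version saw
        return node.store[i]

    def set(self, i, v):
        if self.diff is None:             # leaf: destructive O(1) update,
            old = self.store[i]           # logged so this version still
            self.store[i] = v             # reads its former contents
            new = Seq([])
            new.store = self.store
            self.diff = (i, old, new)
            return new
        data = [self.get(k) for k in range(len(self.store))]
        data[i] = v                       # interior: linear-work copy
        return Seq(data)
```

Every version behaves like an immutable array, but updating the leaf is constant work while updating an older version pays for a full copy.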
@InProceedings{POPL17p706,
author = {Ananya Kumar and Guy E. Blelloch and Robert Harper},
title = {Parallel Functional Arrays},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {706--718},
doi = {10.1145/3009837.3009869},
year = {2017},
}
Publisher's Version
Article Search
POPL '17: "Computational Higher-Dimensional ..."
Computational Higher-Dimensional Type Theory
Carlo Angiuli, Robert Harper, and Todd Wilson (Carnegie Mellon University, USA; California State University at Fresno, USA)
Formal constructive type theory has proved to be an effective language for mechanized proof. By avoiding non-constructive principles, such as the law of the excluded middle, type theory admits sharper proofs and broader interpretations of results. From a computer science perspective, interest in type theory arises from its applications to programming languages. Standard constructive type theories used in mechanization admit computational interpretations based on metamathematical normalization theorems. These proofs are notoriously brittle; any change to the theory potentially invalidates its computational meaning. As a case in point, Voevodsky's univalence axiom raises questions about the computational meaning of proofs.
We consider the question: Can higher-dimensional type theory be construed as a programming language? We answer this question affirmatively by providing a direct, deterministic operational interpretation for a representative higher-dimensional dependent type theory with higher inductive types and an instance of univalence. Rather than being a formal type theory defined by rules, it is instead a computational type theory in the sense of Martin-Löf's meaning explanations and of the NuPRL semantics. The definition of the type theory starts with programs; types are specifications of program behavior. The main result is a canonicity theorem stating that closed programs of boolean type evaluate to true or false.
@InProceedings{POPL17p680,
author = {Carlo Angiuli and Robert Harper and Todd Wilson},
title = {Computational Higher-Dimensional Type Theory},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {680--693},
doi = {10.1145/3009837.3009861},
year = {2017},
}
Publisher's Version
Article Search


Harris, William R. 
POPL '17: "Complexity Verification using ..."
Complexity Verification using Guided Theorem Enumeration
Akhilesh Srikanth, Burak Sahin, and William R. Harris (Georgia Institute of Technology, USA)
Determining if a given program satisfies a given bound on the amount of resources that it may use is a fundamental problem with critical practical applications. Conventional automatic verifiers for safety properties cannot be applied to address this problem directly because such verifiers target properties expressed in decidable theories; however, many practical bounds are expressed in nonlinear theories, which are undecidable.
In this work, we introduce an automatic verification algorithm, CAMPY, that determines if a given program P satisfies a given resource bound B, which may be expressed using polynomial, exponential, and logarithmic terms. The key technical contribution behind our verifier is an interpolating theorem prover for nonlinear theories that lazily learns a sufficiently accurate approximation of nonlinear theories by selectively grounding theorems of the nonlinear theory that are relevant to proving that P satisfies B. To evaluate CAMPY, we implemented it to target Java Virtual Machine bytecode. We applied CAMPY to verify that over 20 solutions submitted for programming problems hosted on popular online coding platforms satisfy or do not satisfy expected complexity bounds.
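The shape of the problem, "does program P stay within resource bound B(n)?", can be made concrete with an instrumented program and a candidate bound (testing one input as below is of course weaker than CAMPY's verification over all inputs; the counter and bound are illustrative):

```python
import math

def merge_sort(xs, meter):
    """Merge sort with a comparison counter standing in for a resource meter."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid], meter), merge_sort(xs[mid:], meter)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        meter[0] += 1                     # one resource unit per comparison
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

meter = [0]
n = 64
result = merge_sort(list(range(n, 0, -1)), meter)
within_bound = meter[0] <= n * math.ceil(math.log2(n))   # candidate B(n) = n·⌈log2 n⌉
```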
@InProceedings{POPL17p639,
author = {Akhilesh Srikanth and Burak Sahin and William R. Harris},
title = {Complexity Verification using Guided Theorem Enumeration},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {639--652},
doi = {10.1145/3009837.3009864},
year = {2017},
}
Publisher's Version
Article Search


Hilton, Michael 
POPL '17: "Hazelnut: A Bidirectionally ..."
Hazelnut: A Bidirectionally Typed Structure Editor Calculus
Cyrus Omar, Ian Voysey, Michael Hilton, Jonathan Aldrich, and Matthew A. Hammer (Carnegie Mellon University, USA; Oregon State University, USA; University of Colorado at Boulder, USA) Structure editors allow programmers to edit the tree structure of a program directly. This can have cognitive benefits, particularly for novice and end-user programmers. It also simplifies matters for tool designers, because they do not need to contend with malformed program text. This paper introduces Hazelnut, a structure editor based on a small bidirectionally typed lambda calculus extended with holes and a cursor. Hazelnut goes one step beyond syntactic well-formedness: its edit actions operate over statically meaningful incomplete terms. Naïvely, this would force the programmer to construct terms in a rigid “outside-in” manner. To avoid this problem, the action semantics automatically places terms assigned a type that is inconsistent with the expected type inside a hole. This meaningfully defers the type consistency check until the term inside the hole is finished. Hazelnut is not intended as an end-user tool itself. Instead, it serves as a foundational account of typed structure editing. To that end, we describe how Hazelnut’s rich metatheory, which we have mechanized using the Agda proof assistant, serves as a guide when we extend the calculus to include binary sum types. We also discuss various interpretations of holes, and in so doing reveal connections with gradual typing and contextual modal type theory, the Curry-Howard interpretation of contextual modal logic. Finally, we discuss how Hazelnut’s semantics lends itself to implementation as an event-based functional reactive program. Our simple reference implementation is written using js_of_ocaml. 

Hoenicke, Jochen 
POPL '17: "Thread Modularity at Many ..."
Thread Modularity at Many Levels: A Pearl in Compositional Verification
Jochen Hoenicke, Rupak Majumdar, and Andreas Podelski (University of Freiburg, Germany; MPI-SWS, Germany) A thread-modular proof for the correctness of a concurrent program is based on an inductive and interference-free annotation of each thread. It is well-known that the corresponding proof system is not complete (unless one adds auxiliary variables). We describe a hierarchy of proof systems where each level k corresponds to a generalized notion of thread modularity (level 1 corresponds to the original notion). Each level is strictly more expressive than the previous. Further, each level precisely captures programs that can be proved using uniform Ashcroft invariants with k universal quantifiers. We demonstrate the usefulness of the hierarchy by giving a compositional proof of the Mach shootdown algorithm for TLB consistency. We give a proof at level 2 showing that the algorithm is correct for an arbitrary number of CPUs. However, there is no proof for the algorithm at level 1 that does not involve auxiliary state. 

Hoffmann, Jan 
POPL '17: "Towards Automatic Resource ..."
Towards Automatic Resource Bound Analysis for OCaml
Jan Hoffmann, Ankush Das, and Shu-Chun Weng (Carnegie Mellon University, USA; Yale University, USA)
This article presents a resource analysis system for OCaml programs. The system automatically derives worst-case resource bounds for higher-order polymorphic programs with user-defined inductive types. The technique is parametric in the resource and can derive bounds for time, memory allocations and energy usage. The derived bounds are multivariate resource polynomials which are functions of different size parameters that depend on the standard OCaml types. Bound inference is fully automatic and reduced to a linear optimization problem that is passed to an off-the-shelf LP solver. Technically, the analysis system is based on a novel multivariate automatic amortized resource analysis (AARA). It builds on existing work on linear AARA for higher-order programs with user-defined inductive types and on multivariate AARA for first-order programs with built-in lists and binary trees. This is the first amortized analysis that automatically derives polynomial bounds for higher-order functions and polynomial bounds that depend on user-defined inductive types. Moreover, the analysis handles a limited form of side effects and even outperforms the linear bound inference of previous systems. At the same time, it preserves the expressivity and efficiency of existing AARA techniques. The practicality of the analysis system is demonstrated with an implementation and integration with Inria's OCaml compiler. The implementation is used to automatically derive resource bounds for 411 functions and 6018 lines of code derived from OCaml libraries, the CompCert compiler, and implementations of textbook algorithms. In a case study, the system infers bounds on the number of queries that are sent by OCaml programs to DynamoDB, a commercial NoSQL cloud database service.
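What a "multivariate resource polynomial" bound looks like can be illustrated with a tiny instrumented program (the counter and the program are invented for illustration; the analysis itself infers such bounds statically):

```python
def pairs(xs):
    """All position-ordered pairs, with a step counter standing in for
    an instrumented cost semantics."""
    steps, out = 0, []
    for i, x in enumerate(xs):
        for y in xs[i + 1:]:
            out.append((x, y))
            steps += 1
    return out, steps
```

For an input of length n the cost is exactly the resource polynomial binom(n, 2) = n·(n−1)/2, the kind of size-dependent bound an AARA-style analysis derives without running the program.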
@InProceedings{POPL17p359,
author = {Jan Hoffmann and Ankush Das and Shu-Chun Weng},
title = {Towards Automatic Resource Bound Analysis for OCaml},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {359--373},
doi = {10.1145/3009837.3009842},
year = {2017},
}
Publisher's Version
Article Search
Info
POPL '17: "Relational Cost Analysis ..."
Relational Cost Analysis
Ezgi Çiçek, Gilles Barthe, Marco Gaboardi, Deepak Garg, and Jan Hoffmann (MPI-SWS, Germany; IMDEA Software Institute, Spain; SUNY Buffalo, USA; Carnegie Mellon University, USA) Establishing quantitative bounds on the execution cost of programs is essential in many areas of computer science such as complexity analysis, compiler optimizations, security and privacy. Techniques based on program analysis, type systems and abstract interpretation are well-studied, but methods for analyzing how the execution costs of two programs compare to each other have not received attention. Naively combining the worst- and best-case execution costs of the two programs does not work well in many cases because such an analysis forgets the similarities between the programs or the inputs. In this work, we propose a relational cost analysis technique that is capable of establishing precise bounds on the difference in the execution cost of two programs by making use of relational properties of programs and inputs. We develop RelCost, a refinement type and effect system for a higher-order functional language with recursion and subtyping. The key novelty of our technique is the combination of relational refinements with two modes of typing—relational typing for reasoning about similar computations/inputs and unary typing for reasoning about unrelated computations/inputs. This combination allows us to analyze the execution cost difference of two programs more precisely than a naive non-relational approach. We prove our type system sound using a semantic model based on step-indexed unary and binary logical relations accounting for non-relational and relational reasoning principles with their respective costs. We demonstrate the precision and generality of our technique through examples. 

Hriţcu, Cătălin 
POPL '17: "Beginner's Luck: A Language ..."
Beginner's Luck: A Language for Property-Based Generators
Leonidas Lampropoulos, Diane Gallois-Wong, Cătălin Hriţcu, John Hughes, Benjamin C. Pierce, and Liyao Xia (University of Pennsylvania, USA; Inria, France; ENS, France; Chalmers University of Technology, Sweden)
Property-based random testing à la QuickCheck requires building efficient generators for well-distributed random data satisfying complex logical predicates, but writing these generators can be difficult and error-prone. We propose a domain-specific language in which generators are conveniently expressed by decorating predicates with lightweight annotations to control both the distribution of generated values and the amount of constraint solving that happens before each variable is instantiated. This language, called Luck, makes generators easier to write, read, and maintain.
We give Luck a formal semantics and prove several fundamental properties, including the soundness and completeness of random generation with respect to a standard predicate semantics. We evaluate Luck on common examples from the property-based testing literature and on two significant case studies, showing that it can be used in complex domains with comparable bug-finding effectiveness and a significant reduction in testing code size compared to handwritten generators.
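The idea of deriving a generator from a predicate, instantiating each variable under the constraints accumulated so far, can be sketched for the sortedness predicate (an illustration of the concept, not Luck's syntax or semantics):

```python
import random

def gen_sorted(n, lo, hi, rng):
    """Hypothetical generator-from-predicate: instantiate each element under
    the running constraint lo <= x <= hi, so sortedness holds by construction."""
    out = []
    for _ in range(n):
        lo = rng.randint(lo, hi)    # narrow the lower bound as we instantiate
        out.append(lo)
    return out
```

Instead of the generate-and-filter approach (draw arbitrary lists, discard unsorted ones), constraint-aware instantiation makes every generated value satisfy the predicate.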
@InProceedings{POPL17p114,
author = {Leonidas Lampropoulos and Diane Gallois-Wong and Cătălin Hriţcu and John Hughes and Benjamin C. Pierce and Liyao Xia},
title = {Beginner's Luck: A Language for Property-Based Generators},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {114--129},
doi = {10.1145/3009837.3009868},
year = {2017},
}
Publisher's Version
Article Search
POPL '17: "Dijkstra Monads for Free ..."
Dijkstra Monads for Free
Danel Ahman, Cătălin Hriţcu, Kenji Maillard, Guido Martínez, Gordon Plotkin, Jonathan Protzenko, Aseem Rastogi, and Nikhil Swamy (University of Edinburgh, UK; Microsoft Research, USA; Inria, France; ENS, France; Rosario National University, Argentina; Microsoft Research, India) Dijkstra monads enable a dependent type theory to be enhanced with support for specifying and verifying effectful code via weakest preconditions. Together with their closely related counterparts, Hoare monads, they provide the basis on which verification tools like F*, Hoare Type Theory (HTT), and Ynot are built. We show that Dijkstra monads can be derived “for free” by applying a continuation-passing style (CPS) translation to the standard monadic definitions of the underlying computational effects. Automatically deriving Dijkstra monads in this way provides a correct-by-construction and efficient way of reasoning about user-defined effects in dependent type theories. We demonstrate these ideas in EMF*, a new dependently typed calculus, validating it via both formal proof and a prototype implementation within F*. Besides equipping F* with a more uniform and extensible effect system, EMF* enables a novel mixture of intrinsic and extrinsic proofs within F*. 
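The CPS-derivation idea can be sketched for the state monad: representing a computation directly by its predicate transformer (postcondition to precondition). This untyped Python rendering is an illustration of the construction, not the paper's typed calculus:

```python
# A computation is its weakest-precondition transformer, obtained by
# CPS-translating the usual state monad: post -> (initial state -> proposition).
def ret(x):     return lambda post: lambda s: post(x, s)
def bind(m, f): return lambda post: lambda s: m(lambda x, s1: f(x)(post)(s1))(s)
def get():      return lambda post: lambda s: post(s, s)
def put(v):     return lambda post: lambda s: post(None, v)

prog = bind(put(5), lambda _: get())        # the program: put 5; get
wp = prog(lambda result, s: result == 5)    # WP of the postcondition "result = 5"
```

Applying `wp` to any initial state yields True: the monad laws and the weakest-precondition rules for `get`/`put` fall out of the CPS translation for free.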

Hsu, Justin 
POPL '17: "A Semantic Account of Metric ..."
A Semantic Account of Metric Preservation
Arthur Azevedo de Amorim, Marco Gaboardi, Justin Hsu, Shinya Katsumata, and Ikram Cherigui (University of Pennsylvania, USA; SUNY Buffalo, USA; Kyoto University, Japan; ENS, France) Program sensitivity measures how robust a program is to small changes in its input, and is a fundamental notion in domains ranging from differential privacy to cyber-physical systems. A natural way to formalize program sensitivity is in terms of metrics on the input and output spaces, requiring that an r-sensitive function map inputs that are at distance d to outputs that are at distance at most r · d. Program sensitivity is thus an analogue of Lipschitz continuity for programs. Reed and Pierce introduced Fuzz, a functional language with a linear type system that can express program sensitivity. They show soundness operationally, in the form of a metric preservation property. Inspired by their work, we study program sensitivity and metric preservation from a denotational point of view. In particular, we introduce metric CPOs, a novel semantic structure for reasoning about computation on metric spaces, by endowing CPOs with a compatible notion of distance. This structure is useful for reasoning about metric properties of programs, and specifically about program sensitivity. We demonstrate metric CPOs by giving a model for the deterministic fragment of Fuzz. 
POPL '17: "Coupling Proofs Are Probabilistic ..."
Coupling Proofs Are Probabilistic Product Programs
Gilles Barthe, Benjamin Grégoire, Justin Hsu, and Pierre-Yves Strub (IMDEA Software Institute, Spain; Inria, France; University of Pennsylvania, USA; École Polytechnique, France)
Couplings are a powerful mathematical tool for reasoning about
pairs of probabilistic processes. Recent developments in formal
verification identify a close connection between couplings and
pRHL, a relational program logic motivated by applications to
provable security, enabling formal construction of couplings from
the probability theory literature. However, existing work using
pRHL merely shows existence of a coupling and does not give a way
to prove quantitative properties about the coupling, needed to
reason about mixing and convergence of probabilistic
processes. Furthermore, pRHL is inherently incomplete, and is not
able to capture some advanced forms of couplings such as shift
couplings. We address both problems as follows.
First, we define an extension of pRHL, called xpRHL, which
explicitly constructs the coupling in a pRHL derivation in the
form of a probabilistic product program that simulates two
correlated runs of the original program. Existing verification
tools for probabilistic programs can then be directly applied to
the probabilistic product to prove quantitative properties of the
coupling. Second, we equip xpRHL with a new rule for while
loops, where reasoning can freely mix synchronized and
unsynchronized loop iterations. Our proof rule can capture
examples of shift couplings, and the logic is relatively complete
for deterministic programs.
We show soundness of xpRHL and use it to analyze two classes of
examples. First, we verify rapid mixing using different tools
from coupling: standard coupling, shift coupling, and path
coupling, a compositional principle for combining local couplings
into a global coupling. Second, we verify
(approximate) equivalence between a source and an optimized program
for several instances of loop optimizations from the literature.
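A coupling in this style can be sketched concretely: the product program runs two copies of a probabilistic process on one shared random sample, so quantitative properties of the pair become assertions about a single program. A hypothetical Python sketch (not the xpRHL formalism itself):

```python
import random

def walk_step(pos):
    """One step of a lazy random walk on the integers."""
    u = random.random()
    if u < 0.5:
        return pos          # stay put
    return pos + 1 if u < 0.75 else pos - 1

def coupled_step(pos1, pos2):
    """Product program: one shared sample drives both copies of the
    walk, so the two runs are correlated (an identity coupling)."""
    u = random.random()
    def step(pos):
        if u < 0.5:
            return pos
        return pos + 1 if u < 0.75 else pos - 1
    return step(pos1), step(pos2)
```

Under this coupling both walks always make the same move, so the distance `pos2 - pos1` is invariant; that invariant is a quantitative property checkable on the product program alone. (Proving mixing would instead use a coupling designed to make the walks meet.)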
@InProceedings{POPL17p161,
author = {Gilles Barthe and Benjamin Grégoire and Justin Hsu and Pierre-Yves Strub},
title = {Coupling Proofs Are Probabilistic Product Programs},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {161--174},
doi = {10.1145/3009837.3009896},
year = {2017},
}


Hughes, John 
POPL '17: "Beginner's Luck: A Language ..."
Beginner's Luck: A Language for Property-Based Generators
Leonidas Lampropoulos, Diane Gallois-Wong, Cătălin Hriţcu, John Hughes, Benjamin C. Pierce, and Liyao Xia (University of Pennsylvania, USA; Inria, France; ENS, France; Chalmers University of Technology, Sweden)
Property-based random testing à la QuickCheck requires building efficient generators for well-distributed random data satisfying complex logical predicates, but writing these generators can be difficult and error prone. We propose a domain-specific language in which generators are conveniently expressed by decorating predicates with lightweight annotations to control both the distribution of generated values and the amount of constraint solving that happens before each variable is instantiated. This language, called Luck, makes generators easier to write, read, and maintain.
We give Luck a formal semantics and prove several fundamental properties, including the soundness and completeness of random generation with respect to a standard predicate semantics. We evaluate Luck on common examples from the property-based testing literature and on two significant case studies, showing that it can be used in complex domains with comparable bug-finding effectiveness and a significant reduction in testing code size compared to hand-written generators.
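The gap Luck addresses can be seen in a plain Python sketch: a generate-and-filter generator for sorted lists rejects almost all samples, while a generator that instantiates each variable only within the still-satisfiable part of its domain (the kind of per-variable control Luck's annotations provide) succeeds on every try. Function names are illustrative:

```python
import random

def naive_sorted_list(n, lo=0, hi=100):
    """Generate-and-filter (rejection sampling): impractical for large n,
    since a random list of length n is sorted with probability ~1/n!."""
    while True:
        xs = [random.randint(lo, hi) for _ in range(n)]
        if xs == sorted(xs):
            return xs

def smart_sorted_list(n, lo=0, hi=100):
    """Constraint-aware generator: each element is drawn only from the
    part of its domain that keeps the sortedness predicate satisfiable."""
    xs = []
    cur = lo
    for _ in range(n):
        cur = random.randint(cur, hi)
        xs.append(cur)
    return xs
```

Note the trade-off Luck makes explicit: the smart generator changes the distribution of outputs, which is why Luck also lets annotations control distributions, not just solving.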
@InProceedings{POPL17p114,
author = {Leonidas Lampropoulos and Diane Gallois-Wong and Cătălin Hriţcu and John Hughes and Benjamin C. Pierce and Liyao Xia},
title = {Beginner's Luck: A Language for Property-Based Generators},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {114--129},
doi = {10.1145/3009837.3009868},
year = {2017},
}


Hur, Chung-Kil 
POPL '17: "A Promising Semantics for ..."
A Promising Semantics for Relaxed-Memory Concurrency
Jeehoon Kang, Chung-Kil Hur, Ori Lahav, Viktor Vafeiadis, and Derek Dreyer (Seoul National University, South Korea; MPI-SWS, Germany)
Despite many years of research, it has proven very difficult to
develop a memory model for concurrent programming languages that
adequately balances the conflicting desiderata of programmers,
compilers, and hardware. In this paper, we propose the first relaxed
memory model that (1) accounts for a broad spectrum of features from
the C++11 concurrency model, (2) is implementable, in the sense that
it provably validates many standard compiler optimizations and
reorderings, as well as standard compilation schemes to x86-TSO and
Power, (3) justifies simple invariant-based reasoning, thus
demonstrating the absence of bad "out-of-thin-air" behaviors, (4)
supports "DRF" guarantees, ensuring that programmers who use
sufficient synchronization need not understand the full complexities
of relaxed-memory semantics, and (5) defines the semantics of racy
programs without relying on undefined behaviors, which is a
prerequisite for applicability to type-safe languages like Java.
The key novel idea behind our model is the notion of *promises*: a
thread may promise to execute a write in the future, thus enabling
other threads to read from that write out of order. Crucially, to
prevent out-of-thin-air behaviors, a promise step requires a
thread-local certification that it will be possible to execute the
promised write even in the absence of the promise. To establish
confidence in our model, we have formalized most of our key results in
Coq.
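The "out-of-thin-air" problem above is usually illustrated with the load-buffering litmus test. The sketch below enumerates all sequentially consistent interleavings and confirms that the outcome r1 = r2 = 42 never arises there; relaxed models that naively allow load-store reordering can produce it, which is what the promise/certification discipline rules out. A toy simulator, not the paper's formal model:

```python
from itertools import permutations

# Load-buffering litmus test:
#   Thread 1: r1 := x; y := r1      Thread 2: r2 := y; x := r2
# with x = y = 0 initially.

def run(schedule):
    """Execute the four statements in the given total order."""
    mem = {"x": 0, "y": 0}
    regs = {}
    prog = {
        ("t1", 0): lambda: regs.__setitem__("r1", mem["x"]),
        ("t1", 1): lambda: mem.__setitem__("y", regs["r1"]),
        ("t2", 0): lambda: regs.__setitem__("r2", mem["y"]),
        ("t2", 1): lambda: mem.__setitem__("x", regs["r2"]),
    }
    for op in schedule:
        prog[op]()
    return regs["r1"], regs["r2"]

def all_interleavings():
    """All interleavings that respect each thread's program order."""
    ops = [("t1", 0), ("t1", 1), ("t2", 0), ("t2", 1)]
    for p in permutations(ops):
        if p.index(("t1", 0)) < p.index(("t1", 1)) and \
           p.index(("t2", 0)) < p.index(("t2", 1)):
            yield run(p)
```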
@InProceedings{POPL17p175,
author = {Jeehoon Kang and Chung-Kil Hur and Ori Lahav and Viktor Vafeiadis and Derek Dreyer},
title = {A Promising Semantics for Relaxed-Memory Concurrency},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {175--189},
doi = {10.1145/3009837.3009850},
year = {2017},
}


Igarashi, Atsushi 
POPL '17: "Stateful Manifest Contracts ..."
Stateful Manifest Contracts
Taro Sekiyama and Atsushi Igarashi (IBM Research, Japan; Kyoto University, Japan) This paper studies hybrid contract verification for an imperative higher-order language based on a so-called manifest contract system. In manifest contract systems, contracts are part of static types, and contract verification is hybrid in the sense that some contracts are verified statically, typically by subtyping, while others are verified dynamically by casts. It is, however, not trivial to extend existing manifest contract systems, which have been designed mostly for pure functional languages, to imperative features, mainly because of the lack of flow-sensitivity, which must be taken into account when verifying imperative programs statically. We develop an imperative higher-order manifest contract system λ_{ref}^{H} for flow-sensitive hybrid contract verification. We introduce a computational variant of Nanevski et al.'s Hoare types, which are flow-sensitive types that represent pre- and postconditions of impure computation. Our Hoare types are computational in the sense that pre- and postconditions are given by Booleans in the same language as programs, so that they are dynamically verifiable. λ_{ref}^{H} also supports refinement types, as in existing manifest contract systems, to describe flow-insensitive, state-independent contracts of pure computation. While it is desirable that any predicate, possibly a state-manipulating one, can be used in contracts, abuse of stateful operations would break the system. To control stateful operations in contracts, we introduce a region-based effect system, which allows contracts in refinement types and computational Hoare types to manipulate states, as long as they are observationally pure and read-only, respectively. We show that dynamic contract checking in our calculus is consistent with static typing in the sense that a final result obtained without dynamic contract violations satisfies the contracts in its static type. 
In particular, this means that the state after stateful computations satisfies their postconditions. As in some prior manifest contract systems, static contract verification in this work is "post facto": we first define our manifest contract system so that all contracts are checked at run time, formalize conditions under which dynamic checks can be removed safely, and show that programs with and without such removable checks are contextually equivalent. We also apply the idea of post facto verification to region-based local reasoning, inspired by the frame rule of Separation Logic. 

Ilik, Danko 
POPL '17: "The exp-log Normal Form of ..."
The exp-log Normal Form of Types: Decomposing Extensional Equality and Representing Terms Compactly
Danko Ilik (Trusted Labs, France)
Lambda calculi with algebraic data types lie at the core of functional programming languages and proof assistants, but conceal at least two fundamental theoretical problems already in the presence of the simplest non-trivial data type, the sum type. First, we do not know of an explicit and implemented algorithm for deciding the beta-eta-equality of terms, and this in spite of the first decidability results having been proven two decades ago. Second, it is not clear how to decide when two types are essentially the same, i.e. isomorphic, in spite of the meta-theoretic results on decidability of the isomorphism.
In this paper, we present the exp-log normal form of types, derived from the representation of exponential polynomials via the unary exponential and logarithmic functions, to which any type built from arrows, products, and sums can be isomorphically mapped. The type normal form can be used as a simple heuristic for deciding type isomorphism, thanks to the fact that it is a systematic application of the high-school identities.
We then show that the type normal form allows us to reduce the standard beta-eta equational theory of the lambda calculus to a specialized version of itself, while preserving completeness of the equality on terms.
We end by describing an alternative representation of normal terms of the lambda calculus with sums, together with a Coq-implemented converter into and from our new term calculus. The difference from the only other previously implemented heuristic for deciding interesting instances of eta-equality, by Balat, Di Cosmo, and Fiore, is that ours exploits the type information of terms substantially, and this often allows us to obtain a canonical representation of terms without performing sophisticated term analyses.
@InProceedings{POPL17p387,
author = {Danko Ilik},
title = {The exp-log Normal Form of Types: Decomposing Extensional Equality and Representing Terms Compactly},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {387--399},
doi = {10.1145/3009837.3009841},
year = {2017},
}


Jafery, Khurram A. 
POPL '17: "Sums of Uncertainty: Refinements ..."
Sums of Uncertainty: Refinements Go Gradual
Khurram A. Jafery and Joshua Dunfield (University of British Columbia, Canada)
A long-standing shortcoming of statically typed functional languages is that type checking does not rule out pattern-matching failures (runtime match exceptions). Refinement types distinguish different values of datatypes; if a program annotated with refinements passes type checking, pattern-matching failures become impossible. Unfortunately, refinement is a monolithic property of a type, exacerbating the difficulty of adding refinement types to non-trivial programs.
Gradual typing has explored how to incrementally move between static typing and dynamic typing. We develop a type system of gradual sums that combines refinement with imprecision. Then, we develop a bidirectional version of the type system, which rules out excessive imprecision, and give a type-directed translation to a target language with explicit casts. We prove that the static sublanguage cannot have match failures, that a well-typed program remains well-typed if its type annotations are made less precise, and that making annotations less precise causes target programs to fail later. Several of these results correspond to criteria for gradual typing given by Siek et al. (2015).
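The imprecision described above shows up, after translation, as casts inserted at the boundary: a value of imprecise sum type flowing into a context refined to one injection must be checked at run time. A rough Python analogue (names are illustrative, not the paper's calculus):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Inl:
    val: Any

@dataclass
class Inr:
    val: Any

def cast_inl(v):
    """The cast a type-directed translation would insert where an
    imprecisely typed sum meets a context refined to the left injection."""
    if isinstance(v, Inl):
        return v.val
    raise TypeError("cast failed: value is Inr, context expects Inl")
```

In the static sublanguage the refinement guarantees the `Inl` case, so no cast (and hence no failure) is possible; loosening annotations only moves failures later, matching the paper's gradual-typing criteria.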
@InProceedings{POPL17p804,
author = {Khurram A. Jafery and Joshua Dunfield},
title = {Sums of Uncertainty: Refinements Go Gradual},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {804--817},
doi = {10.1145/3009837.3009865},
year = {2017},
}


Kang, Jeehoon 
POPL '17: "A Promising Semantics for ..."
A Promising Semantics for Relaxed-Memory Concurrency
Jeehoon Kang, Chung-Kil Hur, Ori Lahav, Viktor Vafeiadis, and Derek Dreyer (Seoul National University, South Korea; MPI-SWS, Germany)
Despite many years of research, it has proven very difficult to
develop a memory model for concurrent programming languages that
adequately balances the conflicting desiderata of programmers,
compilers, and hardware. In this paper, we propose the first relaxed
memory model that (1) accounts for a broad spectrum of features from
the C++11 concurrency model, (2) is implementable, in the sense that
it provably validates many standard compiler optimizations and
reorderings, as well as standard compilation schemes to x86-TSO and
Power, (3) justifies simple invariant-based reasoning, thus
demonstrating the absence of bad "out-of-thin-air" behaviors, (4)
supports "DRF" guarantees, ensuring that programmers who use
sufficient synchronization need not understand the full complexities
of relaxed-memory semantics, and (5) defines the semantics of racy
programs without relying on undefined behaviors, which is a
prerequisite for applicability to type-safe languages like Java.
The key novel idea behind our model is the notion of *promises*: a
thread may promise to execute a write in the future, thus enabling
other threads to read from that write out of order. Crucially, to
prevent out-of-thin-air behaviors, a promise step requires a
thread-local certification that it will be possible to execute the
promised write even in the absence of the promise. To establish
confidence in our model, we have formalized most of our key results in
Coq.
@InProceedings{POPL17p175,
author = {Jeehoon Kang and Chung-Kil Hur and Ori Lahav and Viktor Vafeiadis and Derek Dreyer},
title = {A Promising Semantics for Relaxed-Memory Concurrency},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {175--189},
doi = {10.1145/3009837.3009850},
year = {2017},
}


Katsumata, Shinya 
POPL '17: "A Semantic Account of Metric ..."
A Semantic Account of Metric Preservation
Arthur Azevedo de Amorim, Marco Gaboardi, Justin Hsu, Shinya Katsumata, and Ikram Cherigui (University of Pennsylvania, USA; SUNY Buffalo, USA; Kyoto University, Japan; ENS, France) Program sensitivity measures how robust a program is to small changes in its input, and is a fundamental notion in domains ranging from differential privacy to cyber-physical systems. A natural way to formalize program sensitivity is in terms of metrics on the input and output spaces, requiring that an r-sensitive function map inputs that are at distance d to outputs that are at distance at most r · d. Program sensitivity is thus an analogue of Lipschitz continuity for programs. Reed and Pierce introduced Fuzz, a functional language with a linear type system that can express program sensitivity. They show soundness operationally, in the form of a metric preservation property. Inspired by their work, we study program sensitivity and metric preservation from a denotational point of view. In particular, we introduce metric CPOs, a novel semantic structure for reasoning about computation on metric spaces, by endowing CPOs with a compatible notion of distance. This structure is useful for reasoning about metric properties of programs, and specifically about program sensitivity. We demonstrate metric CPOs by giving a model for the deterministic fragment of Fuzz. 
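Concretely, the r-sensitivity bound d(f(x), f(y)) ≤ r · d(x, y) can be spot-checked for a given function and metric. A small Python sketch using the L1 metric (illustrative only, unrelated to the Fuzz implementation):

```python
def scale(xs, r):
    """An r-sensitive function: scaling a vector multiplies L1 distances by r."""
    return [r * x for x in xs]

def dist(xs, ys):
    """L1 (Manhattan) metric on equal-length vectors."""
    return sum(abs(x - y) for x, y in zip(xs, ys))

def check_sensitivity(f, r, xs, ys, tol=1e-9):
    """Check the metric-preservation bound
       d(f(xs), f(ys)) <= r * d(xs, ys)
    on one pair of inputs (an empirical spot-check, not a proof)."""
    return dist(f(xs), f(ys)) <= r * dist(xs, ys) + tol
```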

Kifer, Daniel 
POPL '17: "LightDP: Towards Automating ..."
LightDP: Towards Automating Differential Privacy Proofs
Danfeng Zhang and Daniel Kifer (Pennsylvania State University, USA)
The growing popularity and adoption of differential privacy in academic and industrial settings has resulted in the development of increasingly sophisticated algorithms for releasing information while preserving privacy. Accompanying this phenomenon is the natural rise in the development and publication of incorrect algorithms, thus demonstrating the necessity of formal verification tools. However, existing formal methods for differential privacy face a dilemma: methods based on customized logics can verify sophisticated algorithms but come with a steep learning curve and significant annotation burden on the programmers, while existing programming platforms lack expressive power for some sophisticated algorithms.
In this paper, we present LightDP, a simple imperative language that strikes a better balance between expressive power and usability. The core of LightDP is a novel relational type system that separates relational reasoning from privacy budget calculations. With dependent types, the type system is powerful enough to verify sophisticated algorithms where the composition theorem falls short. In addition, the inference engine of LightDP infers most of the proof details, and even searches for the proof with minimal privacy cost when multiple proofs exist. We show that LightDP verifies sophisticated algorithms with little manual effort.
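The privacy-budget arithmetic such a type system tracks is visible in the simplest mechanism: a counting query has sensitivity 1, so Laplace noise with scale sensitivity/ε yields ε-differential privacy. A stdlib-only Python sketch (function names are illustrative, not part of LightDP):

```python
import math
import random

def laplace(scale, rng=random):
    """Sample Laplace(0, scale) by inverse-CDF, using only the stdlib."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(data, pred, epsilon):
    """epsilon-differentially-private counting query. A count has
    sensitivity 1 (changing one record moves it by at most 1), so
    Laplace noise with scale sensitivity/epsilon suffices; this is
    the budget calculation a LightDP-style type system would verify."""
    sensitivity = 1.0
    return sum(1 for d in data if pred(d)) + laplace(sensitivity / epsilon)
```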
@InProceedings{POPL17p888,
author = {Danfeng Zhang and Daniel Kifer},
title = {LightDP: Towards Automating Differential Privacy Proofs},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {888--901},
doi = {10.1145/3009837.3009884},
year = {2017},
}


Kiselyov, Oleg 
POPL '17: "Stream Fusion, to Completeness ..."
Stream Fusion, to Completeness
Oleg Kiselyov, Aggelos Biboudis, Nick Palladinos, and Yannis Smaragdakis (Tohoku University, Japan; University of Athens, Greece; Nessos IT, Greece)
Stream processing is mainstream (again): widely used stream libraries are now available for virtually all modern OO and functional languages, from Java to C# to Scala to OCaml to Haskell. Yet expressivity and performance are still lacking. For instance, the popular, well-optimized Java 8 streams do not support the zip operator and are still an order of magnitude slower than hand-written loops.
We present the first approach that represents the full generality of stream processing and eliminates overheads, via the use of staging. It is based on an unusually rich semantic model of stream interaction. We support any combination of zipping, nesting (or flat-mapping), sub-ranging, filtering, and mapping of finite or infinite streams. Our model captures idiosyncrasies that a programmer uses in optimizing stream pipelines, such as rate differences and the choice of a "for" vs. a "while" loop. Our approach delivers hand-written-like code, but automatically. It explicitly avoids reliance on black-box optimizers and sufficiently smart compilers, offering the highest guaranteed and portable performance.
Our approach relies on high-level concepts that are then readily mapped into an implementation. Accordingly, we have two distinct implementations: an OCaml stream library, staged via MetaOCaml, and a Scala library for the JVM, staged via LMS. In both cases, we derive libraries richer and simultaneously many tens of times faster than past work. We greatly exceed in performance the standard stream libraries available in Java, Scala and OCaml, including the well-optimized Java 8 streams.
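The overhead being eliminated is the chain of intermediate iterators: staging compiles a pipeline of combinators down to the single loop a programmer would write by hand. The contrast, sketched in Python (the paper's implementations are MetaOCaml and Scala/LMS; this only illustrates the before/after shapes):

```python
def pipeline(n):
    """Combinator version: intermediate iterators are built and chained."""
    nats = iter(range(n))
    squares = (i * i for i in range(n))
    return [a + b for a, b in zip(nats, squares) if (a + b) % 3 == 0]

def fused(n):
    """What staging should produce: one flat loop, no intermediate
    streams, computing the same result as the pipeline above."""
    out = []
    for i in range(n):
        v = i + i * i
        if v % 3 == 0:
            out.append(v)
    return out
```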
@InProceedings{POPL17p285,
author = {Oleg Kiselyov and Aggelos Biboudis and Nick Palladinos and Yannis Smaragdakis},
title = {Stream Fusion, to Completeness},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {285--299},
doi = {10.1145/3009837.3009880},
year = {2017},
}


Klin, Bartek 
POPL '17: "Learning Nominal Automata ..."
Learning Nominal Automata
Joshua Moerman, Matteo Sammartino, Alexandra Silva, Bartek Klin, and Michał Szynwelski (Radboud University Nijmegen, Netherlands; University College London, UK; University of Warsaw, Poland)
We present an Angluin-style algorithm to learn nominal automata, which are acceptors of languages over infinite (structured) alphabets. The abstract approach we take allows us to seamlessly extend known variations of the algorithm to this new setting. In particular we can learn a subclass of nominal nondeterministic automata. An implementation using a recently developed Haskell library for nominal computation is provided for preliminary experiments.
@InProceedings{POPL17p613,
author = {Joshua Moerman and Matteo Sammartino and Alexandra Silva and Bartek Klin and Michał Szynwelski},
title = {Learning Nominal Automata},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {613--625},
doi = {10.1145/3009837.3009879},
year = {2017},
}


Knauth, Alex 
POPL '17: "Type Systems as Macros ..."
Type Systems as Macros
Stephen Chang, Alex Knauth, and Ben Greenman (Northeastern University, USA) We present Turnstile, a metalanguage for creating typed embedded languages. To implement the type system, programmers write type checking rules resembling traditional judgment syntax. To implement the semantics, they incorporate elaborations into these rules. Turnstile critically depends on the idea of linguistic reuse. It exploits a macro system in a novel way to simultaneously type check and rewrite a surface program into a target language. Reusing a macro system also yields modular implementations whose rules may be mixed and matched to create other languages. Combined with typical compiler and runtime reuse, Turnstile produces performant typed embedded languages with little effort. 
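The core mechanism, checking types while expanding the surface program into the target language, can be caricatured in a few lines of Python: one recursive pass both computes a type and emits target code, failing on ill-typed terms. Purely illustrative; Turnstile itself is built on Racket's macro system:

```python
# Surface language: int literals | ("str", s) | ("add", e1, e2).
def expand(expr):
    """Simultaneously type check and rewrite: return (type, target_code),
    or raise TypeError, in a single expansion pass."""
    if isinstance(expr, int):
        return "Int", str(expr)
    if expr[0] == "str":
        return "Str", repr(expr[1])
    if expr[0] == "add":
        t1, c1 = expand(expr[1])
        t2, c2 = expand(expr[2])
        if t1 != "Int" or t2 != "Int":
            raise TypeError("add expects Int operands")
        return "Int", f"({c1} + {c2})"
    raise TypeError(f"unknown form: {expr!r}")
```

Because checking and elaboration are the same traversal, rules for new forms compose modularly, which is the "mixed and matched" reuse the abstract describes.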

Kobayashi, Naoki 
POPL '17: "On the Relationship between ..."
On the Relationship between Higher-Order Recursion Schemes and Higher-Order Fixpoint Logic
Naoki Kobayashi, Étienne Lozes, and Florian Bruse (University of Tokyo, Japan; ENS, France; CNRS, France; University of Kassel, Germany)
We study the relationship between two kinds of higher-order extensions
of model checking: HORS model checking, where models are extended to
higher-order recursion schemes, and HFL model checking, where the
logic is extended to higher-order modal fixpoint logic. These extensions
had been studied independently until recently, and the former has
been applied to higher-order program verification. We show that there
exist (arguably) natural reductions between the two problems. To prove
the correctness of the translation from HORS to HFL model checking, we
establish a typebased characterization of HFL model checking, which
should be of independent interest. The results reveal a close
relationship between the two problems, enabling cross-fertilization of
the two research threads.
@InProceedings{POPL17p246,
author = {Naoki Kobayashi and Étienne Lozes and Florian Bruse},
title = {On the Relationship between Higher-Order Recursion Schemes and Higher-Order Fixpoint Logic},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {246--259},
doi = {10.1145/3009837.3009854},
year = {2017},
}


Konnov, Igor 
POPL '17: "A Short Counterexample Property ..."
A Short Counterexample Property for Safety and Liveness Verification of Fault-Tolerant Distributed Algorithms
Igor Konnov, Marijana Lazić, Helmut Veith, and Josef Widder (Vienna University of Technology, Austria)
Distributed algorithms have many mission-critical applications
ranging from embedded systems and replicated databases to cloud
computing. Due to asynchronous communication, process faults, or
network failures, these algorithms are difficult to design and
verify. Many algorithms achieve fault tolerance by using
threshold guards that, for instance, ensure that a process waits
until it has received an acknowledgment from a majority of its
peers. Consequently, domain-specific languages for
fault-tolerant distributed systems offer language support for
threshold guards.
We introduce an automated method for model checking of safety and
liveness of threshold-guarded distributed algorithms in systems
where the number of processes and the fraction of faulty
processes are parameters. Our method is based on a short
counterexample property: if a distributed algorithm violates a
temporal specification (in a fragment of LTL), then there is a
counterexample whose length is bounded and independent of the
parameters. We prove this property by (i) characterizing
executions depending on the structure of the temporal formula,
and (ii) using commutativity of transitions to accelerate and
shorten executions. We extended the ByMC toolset (Byzantine
Model Checker) with our technique, and verified liveness and
safety of 10 prominent fault-tolerant distributed algorithms,
most of which were out of reach for existing techniques.
@InProceedings{POPL17p719,
author = {Igor Konnov and Marijana Lazić and Helmut Veith and Josef Widder},
title = {A Short Counterexample Property for Safety and Liveness Verification of Fault-Tolerant Distributed Algorithms},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {719--734},
doi = {10.1145/3009837.3009860},
year = {2017},
}


Kopczyński, Eryk 
POPL '17: "LOIS: Syntax and Semantics ..."
LOIS: Syntax and Semantics
Eryk Kopczyński and Szymon Toruńczyk (University of Warsaw, Poland)
We present the semantics of an imperative programming language called LOIS (Looping Over Infinite Sets), which allows iterating through certain infinite sets in finite time. Our semantics intuitively corresponds to the execution of infinitely many threads in parallel. This allows us to merge the power of abstract mathematical constructions into imperative programming. Infinite sets are internally represented using first-order formulas over some underlying logical structure, and SMT solvers are employed to evaluate programs.
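The intensional representation of infinite sets can be sketched with plain predicates standing in for the first-order formulas. (LOIS additionally decides queries such as emptiness over these formulas with an SMT solver, which a pointwise Python predicate cannot do; this sketch only shows the representation and closure under set operations.)

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SymSet:
    """An 'infinite set' represented intensionally by a predicate,
    a stand-in for LOIS's internal first-order formulas."""
    pred: Callable[[int], bool]

    def __contains__(self, x):
        return self.pred(x)

    def union(self, other):
        return SymSet(lambda x: self.pred(x) or other.pred(x))

    def intersect(self, other):
        return SymSet(lambda x: self.pred(x) and other.pred(x))

evens = SymSet(lambda x: x % 2 == 0)       # the infinite set of even integers
positives = SymSet(lambda x: x > 0)        # the infinite set of positive integers
```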
@InProceedings{POPL17p586,
author = {Eryk Kopczyński and Szymon Toruńczyk},
title = {LOIS: Syntax and Semantics},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {586--598},
doi = {10.1145/3009837.3009876},
year = {2017},
}


Kovács, Laura 
POPL '17: "Coming to Terms with Quantified ..."
Coming to Terms with Quantified Reasoning
Laura Kovács, Simon Robillard, and Andrei Voronkov (Vienna University of Technology, Austria; Chalmers University of Technology, Sweden; University of Manchester, UK)
The theory of finite term algebras provides a natural framework to describe the semantics of functional languages. The ability to efficiently reason about term algebras is essential to automate program analysis and verification for functional or imperative programs over inductively defined data types such as lists and trees. However, as the theory of finite term algebras is not finitely axiomatizable, reasoning about quantified properties over term algebras is challenging.
In this paper we address full first-order reasoning about properties of programs manipulating term algebras, and describe two approaches for doing so using first-order theorem proving. Our first method is a conservative extension of the theory of term algebras using a finite number of statements, while our second method relies on extending the superposition calculus of first-order theorem provers with additional inference rules.
We implemented our work in the first-order theorem prover Vampire and evaluated it on a large number of inductive datatype benchmarks, as well as game-theory constraints. Our experimental results show that our methods are able to find proofs for many hard problems previously unsolved by state-of-the-art methods. We also show that Vampire implementing our methods outperforms existing SMT solvers able to deal with inductive data types.
@InProceedings{POPL17p260,
author = {Laura Kovács and Simon Robillard and Andrei Voronkov},
title = {Coming to Terms with Quantified Reasoning},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {260--270},
doi = {10.1145/3009837.3009887},
year = {2017},
}


Kozen, Dexter 
POPL '17: "Cantor Meets Scott: Semantic ..."
Cantor Meets Scott: Semantic Foundations for Probabilistic Networks
Steffen Smolka, Praveen Kumar, Nate Foster, Dexter Kozen, and Alexandra Silva (Cornell University, USA; University College London, UK)
ProbNetKAT is a probabilistic extension of NetKAT with a denotational semantics based on Markov kernels. The language is expressive enough to generate continuous distributions, which raises the question of how to compute effectively in the language. This paper gives a new characterization of ProbNetKAT’s semantics using domain theory, which provides the foundation needed to build a practical implementation. We show how to use the semantics to approximate the behavior of arbitrary ProbNetKAT programs using distributions with finite support. We develop a prototype implementation and show how to use it to solve a variety of problems, including characterizing the expected congestion induced by different routing schemes and reasoning probabilistically about reachability in a network.
@InProceedings{POPL17p557,
author = {Steffen Smolka and Praveen Kumar and Nate Foster and Dexter Kozen and Alexandra Silva},
title = {Cantor Meets Scott: Semantic Foundations for Probabilistic Networks},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {557--571},
doi = {10.1145/3009837.3009843},
year = {2017},
}


Krebbers, Robbert 
POPL '17: "Interactive Proofs in Higher-Order ..."
Interactive Proofs in Higher-Order Concurrent Separation Logic
Robbert Krebbers, Amin Timany, and Lars Birkedal (Delft University of Technology, Netherlands; KU Leuven, Belgium; Aarhus University, Denmark)
When using a proof assistant to reason in an embedded logic, such as separation logic, one cannot benefit from the proof contexts and basic tactics of the proof assistant. This results in proofs that are at too low a level of abstraction, because they are cluttered with bookkeeping code related to manipulating the object logic.
In this paper, we introduce a so-called proof mode that extends the Coq proof assistant with (spatial and non-spatial) named proof contexts for the object logic. We show that, thanks to these contexts, we can implement high-level tactics for introduction and elimination of the connectives of the object logic, and thereby make reasoning in the embedded logic as seamless as reasoning in the meta logic of the proof assistant. We apply our method to Iris: a state-of-the-art higher-order impredicative concurrent separation logic.
We show that our method is very general and not just limited to program verification. We demonstrate its generality by formalizing correctness proofs of fine-grained concurrent algorithms, derived constructs of the Iris logic, and a unary and binary logical relation for a language with concurrency, higher-order store, polymorphism, and recursive types. This is the first formalization of a binary logical relation for such an expressive language. We also show how to use the logical relation to prove contextual refinement of fine-grained concurrent algorithms.
@InProceedings{POPL17p205,
author = {Robbert Krebbers and Amin Timany and Lars Birkedal},
title = {Interactive Proofs in Higher-Order Concurrent Separation Logic},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {205--217},
doi = {10.1145/3009837.3009855},
year = {2017},
}


Krogh-Jespersen, Morten 
POPL '17: "A Relational Model of Types-and-Effects ..."
A Relational Model of Types-and-Effects in Higher-Order Concurrent Separation Logic
Morten Krogh-Jespersen, Kasper Svendsen, and Lars Birkedal (Aarhus University, Denmark; University of Cambridge, UK)
Recently we have seen a renewed interest in programming languages that tame the complexity of state and concurrency through refined type systems with more fine-grained control over effects. In addition to simplifying reasoning and eliminating whole classes of bugs, statically tracking effects opens the door to advanced compiler optimizations.
In this paper we present a relational model of a type-and-effect system for a higher-order, concurrent programming language. The model precisely captures the semantic invariants expressed by the effect annotations. We demonstrate that these invariants are strong enough to prove advanced program transformations, including automatic parallelization of expressions with suitably disjoint effects. The model also supports refinement proofs between abstract data type implementations with different internal data representations, including proofs that fine-grained concurrent algorithms refine their coarse-grained counterparts. This is the first model for such an expressive language that supports both effect-based optimizations and data abstraction.
The logical relation is defined in Iris, a state-of-the-art higher-order concurrent separation logic. This greatly simplifies proving well-definedness of the logical relation, and also provides us with a powerful logic for reasoning in the model.
@InProceedings{POPL17p218,
author = {Morten Krogh-Jespersen and Kasper Svendsen and Lars Birkedal},
title = {A Relational Model of Types-and-Effects in Higher-Order Concurrent Separation Logic},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {218231},
doi = {10.1145/3009837.3009877},
year = {2017},
}


Kulal, Sumith 
POPL '17: "Contract-Based Resource Verification ..."
Contract-Based Resource Verification for Higher-Order Functions with Memoization
Ravichandhran Madhavan, Sumith Kulal, and Viktor Kuncak (EPFL, Switzerland; IIT Bombay, India) We present a new approach for specifying and verifying resource utilization of higher-order functional programs that use lazy evaluation and memoization. In our approach, users can specify the desired resource bound as templates with numerical holes, e.g., as steps ≤ ? * size(l) + ? in the contracts of functions. They can also express invariants necessary for establishing the bounds that may depend on the state of memoization. Our approach operates in two phases: first generating an instrumented first-order program that accurately models the higher-order control flow and the effects of memoization on resources using sets, algebraic datatypes and mutual recursion, and then verifying the contracts of the first-order program by producing verification conditions of the form ∃ ∀ using an extended assume/guarantee reasoning. We use our approach to verify precise bounds on resources such as evaluation steps and number of heap-allocated objects on 17 challenging data structures and algorithms. Our benchmarks, comprising 5K lines of functional Scala code, include lazy mergesort, Okasaki’s real-time queue and deque data structures that rely on aliasing of references to first-class functions; lazy data structures based on numerical representations such as the conqueue data structure of Scala’s data-parallel library, cyclic streams, as well as dynamic programming algorithms such as knapsack and Viterbi. Our evaluations show that, when averaged over all benchmarks, the actual runtime resource consumption is 80% of the value inferred by our tool when estimating the number of evaluation steps, and is 88% for the number of heap-allocated objects. 

Kumar, Ananya 
POPL '17: "Parallel Functional Arrays ..."
Parallel Functional Arrays
Ananya Kumar, Guy E. Blelloch, and Robert Harper (Carnegie Mellon University, USA)
The goal of this paper is to develop a form of functional arrays (sequences) that are as efficient as imperative arrays, can be used in parallel, and have well-defined cost semantics. The key idea is to consider sequences with functional value semantics but non-functional cost semantics. Because the value semantics is functional, "updating" a sequence returns a new sequence. We allow operations on "older" sequences (called interior sequences) to be more expensive than operations on the "most recent" sequences (called leaf sequences).
We embed sequences in a language supporting fork-join parallelism. Due to the parallelism, operations can be interleaved nondeterministically, and, in conjunction with the different cost for interior and leaf sequences, this can lead to nondeterministic costs for a program. Consequently, the costs of programs can be difficult to analyze.
The main result is the derivation of a deterministic cost dynamics which makes analyzing the costs easier. The theorems are not specific to sequences and can be applied to other data types with different costs for operating on interior and leaf versions.
We present a wait-free concurrent implementation of sequences that requires constant work for accessing and updating leaf sequences, and logarithmic work for accessing and linear work for updating interior sequences. We sketch a proof of correctness for the sequence implementation. The key advantages of the present approach compared to current approaches are that our implementation requires no changes to existing programming languages, supports nested parallelism, and has well-defined cost semantics. At the same time, it allows for functional implementations of algorithms such as depth-first search with the same asymptotic complexity as imperative implementations.
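The leaf/interior cost split can be mimicked with a sequential difference-array sketch in Python (a classic persistent-array technique, not the paper's wait-free parallel implementation): updating the newest version reuses the underlying buffer in constant time, while operating on an older version pays for replaying or copying differences.

```python
class FArray:
    """Persistent array with functional value semantics but
    version-dependent cost (a sequential difference-array sketch,
    not the paper's wait-free parallel implementation): the newest
    ("leaf") version owns the buffer and is updated in O(1);
    superseded ("interior") versions are reached by replaying
    recorded differences, or by copying on update."""

    def __init__(self, data):
        self._data = list(data)
        self._n = len(self._data)
        self._diff = None  # None => leaf; else (index, old_value, successor)

    def __len__(self):
        return self._n

    def get(self, i):
        if self._diff is None:        # leaf: O(1) direct access
            return self._data[i]
        j, old, succ = self._diff     # interior: replay the difference
        return old if i == j else succ.get(i)

    def set(self, i, v):
        if self._diff is None:        # leaf update: O(1), reuse the buffer
            succ = FArray.__new__(FArray)
            succ._n, succ._diff = self._n, None
            succ._data, self._data = self._data, None
            self._diff = (i, succ._data[i], succ)
            succ._data[i] = v
            return succ
        # interior update: linear-work copy, as in the paper's cost model
        copy = FArray([self.get(k) for k in range(self._n)])
        return copy.set(i, v)
```

Here `a.set(0, 9)` returns a new leaf version in constant time and turns `a` into an interior version; reading `a` still yields the old contents, but further updates through `a` pay linear work.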
@InProceedings{POPL17p706,
author = {Ananya Kumar and Guy E. Blelloch and Robert Harper},
title = {Parallel Functional Arrays},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {706718},
doi = {10.1145/3009837.3009869},
year = {2017},
}


Kumar, Praveen 
POPL '17: "Cantor Meets Scott: Semantic ..."
Cantor Meets Scott: Semantic Foundations for Probabilistic Networks
Steffen Smolka, Praveen Kumar, Nate Foster, Dexter Kozen, and Alexandra Silva (Cornell University, USA; University College London, UK)
ProbNetKAT is a probabilistic extension of NetKAT with a denotational semantics based on Markov kernels. The language is expressive enough to generate continuous distributions, which raises the question of how to compute effectively in the language. This paper gives a new characterization of ProbNetKAT’s semantics using domain theory, which provides the foundation needed to build a practical implementation. We show how to use the semantics to approximate the behavior of arbitrary ProbNetKAT programs using distributions with finite support. We develop a prototype implementation and show how to use it to solve a variety of problems including characterizing the expected congestion induced by different routing schemes and reasoning probabilistically about reachability in a network.
@InProceedings{POPL17p557,
author = {Steffen Smolka and Praveen Kumar and Nate Foster and Dexter Kozen and Alexandra Silva},
title = {Cantor Meets Scott: Semantic Foundations for Probabilistic Networks},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {557571},
doi = {10.1145/3009837.3009843},
year = {2017},
}


Kuncak, Viktor 
POPL '17: "Contract-Based Resource Verification ..."
Contract-Based Resource Verification for Higher-Order Functions with Memoization
Ravichandhran Madhavan, Sumith Kulal, and Viktor Kuncak (EPFL, Switzerland; IIT Bombay, India) We present a new approach for specifying and verifying resource utilization of higher-order functional programs that use lazy evaluation and memoization. In our approach, users can specify the desired resource bound as templates with numerical holes, e.g., as steps ≤ ? * size(l) + ? in the contracts of functions. They can also express invariants necessary for establishing the bounds that may depend on the state of memoization. Our approach operates in two phases: first generating an instrumented first-order program that accurately models the higher-order control flow and the effects of memoization on resources using sets, algebraic datatypes and mutual recursion, and then verifying the contracts of the first-order program by producing verification conditions of the form ∃ ∀ using an extended assume/guarantee reasoning. We use our approach to verify precise bounds on resources such as evaluation steps and number of heap-allocated objects on 17 challenging data structures and algorithms. Our benchmarks, comprising 5K lines of functional Scala code, include lazy mergesort, Okasaki’s real-time queue and deque data structures that rely on aliasing of references to first-class functions; lazy data structures based on numerical representations such as the conqueue data structure of Scala’s data-parallel library, cyclic streams, as well as dynamic programming algorithms such as knapsack and Viterbi. Our evaluations show that, when averaged over all benchmarks, the actual runtime resource consumption is 80% of the value inferred by our tool when estimating the number of evaluation steps, and is 88% for the number of heap-allocated objects. 

Lahav, Ori 
POPL '17: "A Promising Semantics for ..."
A Promising Semantics for Relaxed-Memory Concurrency
Jeehoon Kang, Chung-Kil Hur, Ori Lahav, Viktor Vafeiadis, and Derek Dreyer (Seoul National University, South Korea; MPI-SWS, Germany)
Despite many years of research, it has proven very difficult to develop a memory model for concurrent programming languages that adequately balances the conflicting desiderata of programmers, compilers, and hardware. In this paper, we propose the first relaxed memory model that (1) accounts for a broad spectrum of features from the C++11 concurrency model, (2) is implementable, in the sense that it provably validates many standard compiler optimizations and reorderings, as well as standard compilation schemes to x86-TSO and Power, (3) justifies simple invariant-based reasoning, thus demonstrating the absence of bad "out-of-thin-air" behaviors, (4) supports "DRF" guarantees, ensuring that programmers who use sufficient synchronization need not understand the full complexities of relaxed-memory semantics, and (5) defines the semantics of racy programs without relying on undefined behaviors, which is a prerequisite for applicability to type-safe languages like Java.
The key novel idea behind our model is the notion of *promises*: a thread may promise to execute a write in the future, thus enabling other threads to read from that write out of order. Crucially, to prevent out-of-thin-air behaviors, a promise step requires a thread-local certification that it will be possible to execute the promised write even in the absence of the promise. To establish confidence in our model, we have formalized most of our key results in Coq.
@InProceedings{POPL17p175,
author = {Jeehoon Kang and Chung-Kil Hur and Ori Lahav and Viktor Vafeiadis and Derek Dreyer},
title = {A Promising Semantics for Relaxed-Memory Concurrency},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {175189},
doi = {10.1145/3009837.3009850},
year = {2017},
}


Lampropoulos, Leonidas 
POPL '17: "Beginner's Luck: A Language ..."
Beginner's Luck: A Language for Property-Based Generators
Leonidas Lampropoulos, Diane Gallois-Wong, Cătălin Hriţcu, John Hughes, Benjamin C. Pierce, and Liyao Xia (University of Pennsylvania, USA; Inria, France; ENS, France; Chalmers University of Technology, Sweden)
Property-based random testing à la QuickCheck requires building efficient generators for well-distributed random data satisfying complex logical predicates, but writing these generators can be difficult and error-prone. We propose a domain-specific language in which generators are conveniently expressed by decorating predicates with lightweight annotations to control both the distribution of generated values and the amount of constraint solving that happens before each variable is instantiated. This language, called Luck, makes generators easier to write, read, and maintain.
We give Luck a formal semantics and prove several fundamental properties, including the soundness and completeness of random generation with respect to a standard predicate semantics. We evaluate Luck on common examples from the property-based testing literature and on two significant case studies, showing that it can be used in complex domains with comparable bug-finding effectiveness and a significant reduction in testing code size compared to handwritten generators.
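Luck itself is not reproduced here, but the gap it closes can be seen in plain Python: a rejection-sampling generator derived naively from a predicate, versus a hand-written generator that builds the invariant in directly (sorted lists, in this hypothetical example).

```python
import random

def is_sorted(xs):
    """The logical predicate the generated data must satisfy."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def gen_sorted_naive(n, hi, rng):
    """Generate-and-filter: rejection sampling against the predicate.
    Correct but wasteful -- almost every candidate is rejected as n grows."""
    while True:
        xs = [rng.randrange(hi) for _ in range(n)]
        if is_sorted(xs):
            return xs

def gen_sorted_direct(n, hi, rng):
    """Hand-written generator that builds the invariant in directly:
    each element is drawn at or above its predecessor."""
    xs, lo = [], 0
    for _ in range(n):
        lo = rng.randrange(lo, hi)
        xs.append(lo)
    return xs
```

Luck's contribution is to derive generators of the second kind from the predicate itself, with annotations controlling the distribution and the amount of constraint solving, instead of forcing them to be written and maintained by hand.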
@InProceedings{POPL17p114,
author = {Leonidas Lampropoulos and Diane Gallois-Wong and Cătălin Hriţcu and John Hughes and Benjamin C. Pierce and Liyao Xia},
title = {Beginner's Luck: A Language for Property-Based Generators},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {114129},
doi = {10.1145/3009837.3009868},
year = {2017},
}


Lange, Julien 
POPL '17: "Fencing off Go: Liveness and ..."
Fencing off Go: Liveness and Safety for Channel-Based Programming
Julien Lange, Nicholas Ng, Bernardo Toninho, and Nobuko Yoshida (Imperial College London, UK)
Go is a production-level statically typed programming language whose design features explicit message-passing primitives and lightweight threads, enabling (and encouraging) programmers to develop concurrent systems where components interact through communication more so than by lock-based shared memory concurrency. Go can only detect global deadlocks at runtime, but provides no compile-time protection against all-too-common communication mismatches or partial deadlocks. This work develops a static verification framework for liveness and safety in Go programs, able to detect communication errors and partial deadlocks in a general class of realistic concurrent programs, including those with dynamic channel creation, unbounded thread creation and recursion. Our approach infers from a Go program a faithful representation of its communication patterns as a behavioural type. By checking a syntactic restriction on channel usage, dubbed fencing, we ensure that programs are made up of finitely many different communication patterns that may be repeated infinitely many times. This restriction allows us to implement a decision procedure for liveness and safety in types, which in turn statically ensures liveness and safety in Go programs. We have implemented the type inference and decision procedures in a toolchain and tested it against publicly available Go programs.
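A partial deadlock of the kind the framework targets is easy to write down. The sketch below renders it with Python threads and queues rather than Go channels (illustrative only): one worker blocks forever on a channel with no matching send while the rest of the program runs to completion, so a global-deadlock detector reports nothing.

```python
import queue
import threading

def partial_deadlock_demo():
    """One thread blocks forever on a channel nobody sends on while
    the rest of the program completes: a partial deadlock that a
    global-deadlock detector (like Go's runtime check) never reports."""
    never_sent = queue.Queue()   # no thread ever puts into this "channel"
    done = queue.Queue()

    def stuck_worker():
        never_sent.get()         # blocks forever: no matching send
        done.put("unreachable")

    def healthy_worker():
        done.put("ok")

    t1 = threading.Thread(target=stuck_worker, daemon=True)
    t2 = threading.Thread(target=healthy_worker)
    t1.start()
    t2.start()
    t2.join()
    result = done.get(timeout=1)  # only the healthy worker ever reports
    t1.join(timeout=0.2)          # give the stuck worker a chance: none
    return result, t1.is_alive()  # t1 still blocked => partial deadlock
```

Running `partial_deadlock_demo()` returns `("ok", True)`: the program as a whole makes progress, yet one component is permanently stuck, which is precisely the situation a liveness analysis must rule out statically.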
@InProceedings{POPL17p748,
author = {Julien Lange and Nicholas Ng and Bernardo Toninho and Nobuko Yoshida},
title = {Fencing off Go: Liveness and Safety for Channel-Based Programming},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {748761},
doi = {10.1145/3009837.3009847},
year = {2017},
}


Lazić, Marijana 
POPL '17: "A Short Counterexample Property ..."
A Short Counterexample Property for Safety and Liveness Verification of Fault-Tolerant Distributed Algorithms
Igor Konnov, Marijana Lazić, Helmut Veith, and Josef Widder (Vienna University of Technology, Austria)
Distributed algorithms have many mission-critical applications ranging from embedded systems and replicated databases to cloud computing. Due to asynchronous communication, process faults, or network failures, these algorithms are difficult to design and verify. Many algorithms achieve fault tolerance by using threshold guards that, for instance, ensure that a process waits until it has received an acknowledgment from a majority of its peers. Consequently, domain-specific languages for fault-tolerant distributed systems offer language support for threshold guards.
We introduce an automated method for model checking of safety and liveness of threshold-guarded distributed algorithms in systems where the number of processes and the fraction of faulty processes are parameters. Our method is based on a short counterexample property: if a distributed algorithm violates a temporal specification (in a fragment of LTL), then there is a counterexample whose length is bounded and independent of the parameters. We prove this property by (i) characterizing executions depending on the structure of the temporal formula, and (ii) using commutativity of transitions to accelerate and shorten executions. We extended the ByMC toolset (Byzantine Model Checker) with our technique, and verified liveness and safety of 10 prominent fault-tolerant distributed algorithms, most of which were out of reach for existing techniques.
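A threshold guard of the kind described above can be sketched in a few lines of Python (a simplified majority guard; the constants and the simulation are illustrative, not taken from the paper or from ByMC):

```python
def majority_guard(acks, n):
    """Threshold guard: enabled once acknowledgments from a majority of
    the n processes have arrived (simplified; real fault-tolerant
    algorithms also account for the number of faulty processes)."""
    return len(acks) > n // 2

def run_round(n, senders):
    """Simulate one round: accumulate acknowledgments from `senders`
    (duplicates ignored) and take a step as soon as the guard fires."""
    acks = set()
    for pid in senders:
        acks.add(pid)
        if majority_guard(acks, n):
            return len(acks)   # step taken with this many distinct acks
    return None                # guard never enabled: the process waits
```

The verification challenge the paper addresses is that both n and the number of faulty processes are parameters, so such guards must be reasoned about for all instantiations at once.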
@InProceedings{POPL17p719,
author = {Igor Konnov and Marijana Lazić and Helmut Veith and Josef Widder},
title = {A Short Counterexample Property for Safety and Liveness Verification of Fault-Tolerant Distributed Algorithms},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {719734},
doi = {10.1145/3009837.3009860},
year = {2017},
}


Lehmann, Nico 
POPL '17: "Gradual Refinement Types ..."
Gradual Refinement Types
Nico Lehmann and Éric Tanter (University of Chile, Chile)
Refinement types are an effective language-based verification technique. However, as with any expressive typing discipline, its strength is also its weakness, sometimes imposing undesired rigidity. Guided by abstract interpretation, we extend the gradual typing agenda and develop the notion of gradual refinement types, allowing smooth evolution and interoperability between simple types and logically refined types. In doing so, we address two challenges unexplored in the gradual typing literature: dealing with imprecise logical information, and with dependent function types. The first challenge leads to a crucial notion of locality for refinement formulas, and the second yields novel operators related to type- and term-level substitution, identifying new opportunities for runtime errors in gradual dependently typed languages. The gradual language we present is type safe, type sound, and satisfies the refined criteria for gradually typed languages of Siek et al. We also explain how to extend our approach to richer refinement logics, anticipating key challenges to consider.
@InProceedings{POPL17p775,
author = {Nico Lehmann and Éric Tanter},
title = {Gradual Refinement Types},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {775788},
doi = {10.1145/3009837.3009856},
year = {2017},
}


Leijen, Daan 
POPL '17: "Type Directed Compilation ..."
Type Directed Compilation of Row-Typed Algebraic Effects
Daan Leijen (Microsoft Research, USA)
Algebraic effect handlers, introduced by Plotkin and Power in 2002, are recently gaining in popularity as a purely functional approach to modeling effects. In this article, we give a full overview of practical algebraic effects in the context of a compiled implementation in the Koka language. In particular, we show how algebraic effects generalize over common constructs like exception handling, state, iterators and async-await. We give an effective type inference algorithm based on extensible effect rows using scoped labels, and a direct operational semantics. Finally, we show an efficient compilation scheme to common runtime platforms (like JavaScript) using a type-directed selective CPS translation.
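As a loose analogy for how handlers interpret effect operations (this is not Koka's type-directed CPS compilation scheme), effectful computations can be modeled in Python with generators: the computation yields commands, and a handler loop interprets them, here for the state effect.

```python
def handle_state(comp, init):
    """Handler analogy: interpret 'get'/'put' commands yielded by the
    generator `comp`, threading a state value through the computation;
    returns (result, final_state)."""
    state = init
    try:
        req = comp.send(None)            # start the computation
        while True:
            if req[0] == "get":
                req = comp.send(state)   # answer the get with the state
            else:                        # ("put", new_state)
                state = req[1]
                req = comp.send(None)
    except StopIteration as stop:
        return stop.value, state

def counter():
    """An effectful computation: read the state, bump it, read again."""
    x = yield ("get",)
    yield ("put", x + 1)
    y = yield ("get",)
    return y

def doubler():
    """Double the state and return the new value."""
    x = yield ("get",)
    yield ("put", x * 2)
    return (yield ("get",))
```

For example, `handle_state(counter(), 0)` evaluates to `(1, 1)`: the same computation could be run under a different handler (logging, replay) without changing its code, which is the modularity the abstract describes.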
@InProceedings{POPL17p486,
author = {Daan Leijen},
title = {Type Directed Compilation of Row-Typed Algebraic Effects},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {486499},
doi = {10.1145/3009837.3009872},
year = {2017},
}


Levy, Paul Blain 
POPL '17: "Contextual Isomorphisms ..."
Contextual Isomorphisms
Paul Blain Levy (University of Birmingham, UK) What is the right notion of “isomorphism” between types, in a simple type theory? The traditional answer is: a pair of terms that are inverse up to a specified congruence. We firstly argue that, in the presence of effects, this answer is too liberal and needs to be restricted, using Führmann’s notion of thunkability in the case of value types (as in call-by-value), or using Munch-Maccagnoni’s notion of linearity in the case of computation types (as in call-by-name). Yet that leaves us with different notions of isomorphism for different kinds of type. This situation is resolved by means of a new notion of “contextual” isomorphism (or morphism), analogous at the level of types to contextual equivalence of terms. A contextual morphism is a way of replacing one type with the other wherever it may occur in a judgement, in a way that is preserved by the action of any term with holes. For types of pure λ-calculus, we show that a contextual morphism corresponds to a traditional isomorphism. For value types, a contextual morphism corresponds to a thunkable isomorphism, and for computation types, to a linear isomorphism. 

Li, Huisong 
POPL '17: "Semantic-Directed Clumping ..."
Semantic-Directed Clumping of Disjunctive Abstract States
Huisong Li, Francois Berenger, Bor-Yuh Evan Chang, and Xavier Rival (Inria, France; CNRS, France; ENS, France; University of Colorado at Boulder, USA)
To infer complex structural invariants, shape analyses rely on expressive families of logical properties. Many such analyses manipulate abstract memory states that consist of separating conjunctions of basic predicates describing atomic blocks or summaries. Moreover, they use finite disjunctions of abstract memory states in order to account for dissimilar shapes. Disjunctions should be kept small for the sake of scalability, though precision often requires keeping additional case splits. In this context, deciding when and how to merge case splits and to replace them with summaries is critical both for precision and for efficiency. Existing techniques use sets of syntactic rules, which are tedious to design and prone to failure. In this paper, we design a semantic criterion to clump abstract states based on their silhouette, which applies not only to the conservative union of disjuncts, but also to the weakening of separating conjunctions of memory predicates into inductive summaries. Our approach allows us to define union and widening operators that aim at preserving the case splits that are required for the analysis to succeed. We implement this approach in the MemCAD analyzer, and evaluate it on real-world C codes from existing libraries, including programs dealing with doubly linked lists, red-black trees and AVL trees.
@InProceedings{POPL17p32,
author = {Huisong Li and Francois Berenger and Bor-Yuh Evan Chang and Xavier Rival},
title = {Semantic-Directed Clumping of Disjunctive Abstract States},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {3245},
doi = {10.1145/3009837.3009881},
year = {2017},
}


Lidbury, Christopher 
POPL '17: "Dynamic Race Detection for ..."
Dynamic Race Detection for C++11
Christopher Lidbury and Alastair F. Donaldson (Imperial College London, UK)
The intricate rules for memory ordering and synchronisation associated with the C/C++11 memory model mean that data races can be difficult to eliminate from concurrent programs. Dynamic data race analysis can pinpoint races in large and complex applications, but the state-of-the-art ThreadSanitizer (tsan) tool for C/C++ considers only sequentially consistent program executions, and does not correctly model synchronisation between C/C++11 atomic operations. We present a scalable dynamic data race analysis for C/C++11 that correctly captures C/C++11 synchronisation, and uses instrumentation to support exploration of a class of non-sequentially-consistent executions. We concisely define the memory model fragment captured by our instrumentation via a restricted axiomatic semantics, and show that the axiomatic semantics permits exactly those executions explored by our instrumentation. We have implemented our analysis in tsan, and evaluate its effectiveness on benchmark programs, enabling a comparison with the CDSChecker tool, and on two large and highly concurrent applications: the Firefox and Chromium web browsers. Our results show that our method can detect races that are beyond the scope of the original tsan tool, and that the overhead associated with applying our enhanced instrumentation to large applications is tolerable.
@InProceedings{POPL17p443,
author = {Christopher Lidbury and Alastair F. Donaldson},
title = {Dynamic Race Detection for C++11},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {443457},
doi = {10.1145/3009837.3009857},
year = {2017},
}


Lindley, Sam 
POPL '17: "Do Be Do Be Do ..."
Do Be Do Be Do
Sam Lindley, Conor McBride, and Craig McLaughlin (University of Edinburgh, UK; University of Strathclyde, UK)
We explore the design and implementation of Frank, a strict functional programming language with a bidirectional effect type system designed from the ground up around a novel variant of Plotkin and Pretnar's effect handler abstraction.
Effect handlers provide an abstraction for modular effectful programming: a handler acts as an interpreter for a collection of commands whose interfaces are statically tracked by the type system. However, Frank eliminates the need for an additional effect handling construct by generalising the basic mechanism of functional abstraction itself. A function is simply the special case of a Frank operator that interprets no commands. Moreover, Frank's operators can be multi-handlers which simultaneously interpret commands from several sources at once, without disturbing the direct style of functional programming with values.
Effect typing in Frank employs a novel form of effect polymorphism which avoids mentioning effect variables in source code. This is achieved by propagating an ambient ability inwards, rather than accumulating unions of potential effects outwards.
We introduce Frank by example, and then give a formal account of the Frank type system and its semantics. We introduce Core Frank by elaborating Frank operators into functions, case expressions, and unary handlers, and then give a sound small-step operational semantics for Core Frank.
Programming with effects and handlers is in its infancy. We contribute an exploration of future possibilities, particularly in combination with other forms of rich type system.
@InProceedings{POPL17p500,
author = {Sam Lindley and Conor McBride and Craig McLaughlin},
title = {Do Be Do Be Do},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {500514},
doi = {10.1145/3009837.3009897},
year = {2017},
}


Liu, Xinxin 
POPL '17: "Analyzing Divergence in Bisimulation ..."
Analyzing Divergence in Bisimulation Semantics
Xinxin Liu, Tingting Yu, and Wenhui Zhang (Institute of Software at Chinese Academy of Sciences, China)
Some bisimulation-based abstract equivalence relations may equate divergent systems with non-divergent ones; examples include weak bisimulation equivalence and branching bisimulation equivalence. Thus extra effort is needed to analyze divergence for the compared systems. In this paper we propose a new method for analyzing divergence in bisimulation semantics, which relies only on simple observations of individual transitions. We show that this method can verify several typical divergence-preserving bisimulation equivalences, including two well-known ones. As an application case study, we use the proposed method to verify the HSY collision stack, drawing the conclusion that the stack implementation is correct in terms of linearizability with the lock-free progress condition.
@InProceedings{POPL17p735,
author = {Xinxin Liu and Tingting Yu and Wenhui Zhang},
title = {Analyzing Divergence in Bisimulation Semantics},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {735747},
doi = {10.1145/3009837.3009870},
year = {2017},
}


Lozes, Étienne 
POPL '17: "On the Relationship between ..."
On the Relationship between Higher-Order Recursion Schemes and Higher-Order Fixpoint Logic
Naoki Kobayashi, Étienne Lozes, and Florian Bruse (University of Tokyo, Japan; ENS, France; CNRS, France; University of Kassel, Germany)
We study the relationship between two kinds of higher-order extensions of model checking: HORS model checking, where models are extended to higher-order recursion schemes, and HFL model checking, where the logic is extended to higher-order modal fixpoint logic. These extensions have been independently studied until recently, and the former has been applied to higher-order program verification. We show that there exist (arguably) natural reductions between the two problems. To prove the correctness of the translation from HORS to HFL model checking, we establish a type-based characterization of HFL model checking, which should be of independent interest. The results reveal a close relationship between the two problems, enabling cross-fertilization of the two research threads.
@InProceedings{POPL17p246,
author = {Naoki Kobayashi and Étienne Lozes and Florian Bruse},
title = {On the Relationship between Higher-Order Recursion Schemes and Higher-Order Fixpoint Logic},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {246259},
doi = {10.1145/3009837.3009854},
year = {2017},
}


Madhavan, Ravichandhran 
POPL '17: "Contract-Based Resource Verification ..."
Contract-Based Resource Verification for Higher-Order Functions with Memoization
Ravichandhran Madhavan, Sumith Kulal, and Viktor Kuncak (EPFL, Switzerland; IIT Bombay, India) We present a new approach for specifying and verifying resource utilization of higher-order functional programs that use lazy evaluation and memoization. In our approach, users can specify the desired resource bound as templates with numerical holes, e.g., as steps ≤ ? * size(l) + ? in the contracts of functions. They can also express invariants necessary for establishing the bounds that may depend on the state of memoization. Our approach operates in two phases: first generating an instrumented first-order program that accurately models the higher-order control flow and the effects of memoization on resources using sets, algebraic datatypes and mutual recursion, and then verifying the contracts of the first-order program by producing verification conditions of the form ∃ ∀ using an extended assume/guarantee reasoning. We use our approach to verify precise bounds on resources such as evaluation steps and number of heap-allocated objects on 17 challenging data structures and algorithms. Our benchmarks, comprising 5K lines of functional Scala code, include lazy mergesort, Okasaki’s real-time queue and deque data structures that rely on aliasing of references to first-class functions; lazy data structures based on numerical representations such as the conqueue data structure of Scala’s data-parallel library, cyclic streams, as well as dynamic programming algorithms such as knapsack and Viterbi. Our evaluations show that, when averaged over all benchmarks, the actual runtime resource consumption is 80% of the value inferred by our tool when estimating the number of evaluation steps, and is 88% for the number of heap-allocated objects. 

Maillard, Kenji 
POPL '17: "Dijkstra Monads for Free ..."
Dijkstra Monads for Free
Danel Ahman, Cătălin Hriţcu, Kenji Maillard, Guido Martínez, Gordon Plotkin, Jonathan Protzenko, Aseem Rastogi, and Nikhil Swamy (University of Edinburgh, UK; Microsoft Research, USA; Inria, France; ENS, France; Rosario National University, Argentina; Microsoft Research, India) Dijkstra monads enable a dependent type theory to be enhanced with support for specifying and verifying effectful code via weakest preconditions. Together with their closely related counterparts, Hoare monads, they provide the basis on which verification tools like F*, Hoare Type Theory (HTT), and Ynot are built. We show that Dijkstra monads can be derived “for free” by applying a continuation-passing style (CPS) translation to the standard monadic definitions of the underlying computational effects. Automatically deriving Dijkstra monads in this way provides a correct-by-construction and efficient way of reasoning about user-defined effects in dependent type theories. We demonstrate these ideas in EMF*, a new dependently typed calculus, validating it via both formal proof and a prototype implementation within F*. Besides equipping F* with a more uniform and extensible effect system, EMF* enables a novel mixture of intrinsic and extrinsic proofs within F*. 

Majumdar, Rupak 
POPL '17: "Thread Modularity at Many ..."
Thread Modularity at Many Levels: A Pearl in Compositional Verification
Jochen Hoenicke, Rupak Majumdar, and Andreas Podelski (University of Freiburg, Germany; MPI-SWS, Germany) A thread-modular proof for the correctness of a concurrent program is based on an inductive and interference-free annotation of each thread. It is well-known that the corresponding proof system is not complete (unless one adds auxiliary variables). We describe a hierarchy of proof systems where each level k corresponds to a generalized notion of thread modularity (level 1 corresponds to the original notion). Each level is strictly more expressive than the previous. Further, each level precisely captures programs that can be proved using uniform Ashcroft invariants with k universal quantifiers. We demonstrate the usefulness of the hierarchy by giving a compositional proof of the Mach shootdown algorithm for TLB consistency. We give a proof at level 2 showing that the algorithm is correct for an arbitrary number of CPUs. However, there is no proof for the algorithm at level 1 that does not involve auxiliary state. 

Maranget, Luc 
POPL '17: "Mixed-Size Concurrency: ARM, ..."
Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC
Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis, Luc Maranget, Kathryn E. Gray, Ali Sezgin, Mark Batty, and Peter Sewell (University of Cambridge, UK; University of St. Andrews, UK; Inria, France; University of Kent, UK)
Previous work on the semantics of relaxed shared-memory concurrency has only considered the case in which each load reads the data of exactly one store. In practice, however, multiprocessors support mixed-size accesses, and these are used by systems software and (to some degree) exposed at the C/C++ language level. A semantic foundation for software, therefore, has to address them.
We investigate the mixed-size behaviour of ARMv8 and IBM POWER architectures and implementations: by experiment, by developing semantic models, by testing the correspondence between these, and by discussion with ARM and IBM staff. This turns out to be surprisingly subtle, and on the way we have to revisit the fundamental concepts of coherence and sequential consistency, which change in this setting. In particular, we show that adding a memory barrier between each instruction does not restore sequential consistency. We go on to extend the C/C++11 model to support non-atomic mixed-size memory accesses.
This is a necessary step towards semantics for real-world shared-memory concurrent code, beyond litmus tests.
@InProceedings{POPL17p429,
author = {Shaked Flur and Susmit Sarkar and Christopher Pulte and Kyndylan Nienhuis and Luc Maranget and Kathryn E. Gray and Ali Sezgin and Mark Batty and Peter Sewell},
title = {Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {429--442},
doi = {10.1145/3009837.3009839},
year = {2017},
}


Martínez, Guido 
POPL '17: "Dijkstra Monads for Free ..."
Dijkstra Monads for Free
Danel Ahman, Cătălin Hriţcu, Kenji Maillard, Guido Martínez, Gordon Plotkin, Jonathan Protzenko, Aseem Rastogi, and Nikhil Swamy (University of Edinburgh, UK; Microsoft Research, USA; Inria, France; ENS, France; Rosario National University, Argentina; Microsoft Research, India) Dijkstra monads enable a dependent type theory to be enhanced with support for specifying and verifying effectful code via weakest preconditions. Together with their closely related counterparts, Hoare monads, they provide the basis on which verification tools like F*, Hoare Type Theory (HTT), and Ynot are built. We show that Dijkstra monads can be derived “for free” by applying a continuation-passing style (CPS) translation to the standard monadic definitions of the underlying computational effects. Automatically deriving Dijkstra monads in this way provides a correct-by-construction and efficient way of reasoning about user-defined effects in dependent type theories. We demonstrate these ideas in EMF*, a new dependently typed calculus, validating it via both formal proof and a prototype implementation within F*. Besides equipping F* with a more uniform and extensible effect system, EMF* enables a novel mixture of intrinsic and extrinsic proofs within F*. 

Martins, Ruben 
POPL '17: "Component-Based Synthesis ..."
Component-Based Synthesis for Complex APIs
Yu Feng, Ruben Martins, Yuepeng Wang, Isil Dillig, and Thomas W. Reps (University of Texas at Austin, USA; University of Wisconsin-Madison, USA)
Component-based approaches to program synthesis assemble programs from a database of existing components, such as methods provided by an API. In this paper, we present a novel type-directed algorithm for component-based synthesis. The key novelty of our approach is the use of a compact Petri-net representation to model relationships between methods in an API. Given a target method signature S, our approach performs reachability analysis on the underlying Petri-net model to identify sequences of method calls that could be used to synthesize an implementation of S. The programs synthesized by our algorithm are guaranteed to type check and pass all test cases provided by the user.
We have implemented this approach in a tool called SyPet, and used it to successfully synthesize real-world programming tasks extracted from online forums and existing code repositories. We also compare SyPet with two state-of-the-art synthesis tools, namely InSynth and CodeHint, and demonstrate that SyPet can synthesize more programs in less time. Finally, we compare our approach with an alternative solution based on hypergraphs and demonstrate its advantages.
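The reachability idea can be sketched in miniature: treat each API method as a Petri-net-style transition that consumes argument-type tokens and produces a return-type token, then search the space of markings for one containing the target type. The component table and `find_sequence` below are hypothetical illustrations, not SyPet's actual encoding:

```python
# Hypothetical API components: each consumes a multiset of argument types
# and produces one return type -- a transition over type tokens.
from collections import deque

COMPONENTS = {
    "parse":  ({"str": 1}, "Date"),     # str -> Date
    "getDay": ({"Date": 1}, "int"),     # Date -> int
    "add":    ({"int": 2}, "int"),      # int x int -> int
}

def find_sequence(inputs, target, max_depth=6):
    """BFS over markings (multisets of available types); a path of fired
    transitions is a candidate sequence of method calls."""
    start = tuple(sorted(inputs.items()))
    queue, seen = deque([(start, [])]), {start}
    while queue:
        marking, path = queue.popleft()
        m = dict(marking)
        if m.get(target, 0):
            return path                        # target type is producible
        if len(path) >= max_depth:
            continue
        for name, (needs, produced) in COMPONENTS.items():
            if all(m.get(t, 0) >= k for t, k in needs.items()):
                m2 = dict(m)                   # fire the transition
                for t, k in needs.items():
                    m2[t] -= k
                m2[produced] = m2.get(produced, 0) + 1
                key = tuple(sorted((t, k) for t, k in m2.items() if k))
                if key not in seen:
                    seen.add(key)
                    queue.append((key, path + [name]))
    return None
```

For example, `find_sequence({"str": 1}, "int")` returns `["parse", "getDay"]`; each candidate sequence would then be turned into a program and screened against the user's tests.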
@InProceedings{POPL17p599,
author = {Yu Feng and Ruben Martins and Yuepeng Wang and Isil Dillig and Thomas W. Reps},
title = {Component-Based Synthesis for Complex APIs},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {599--612},
doi = {10.1145/3009837.3009851},
year = {2017},
}


McBride, Conor 
POPL '17: "Do Be Do Be Do ..."
Do Be Do Be Do
Sam Lindley, Conor McBride, and Craig McLaughlin (University of Edinburgh, UK; University of Strathclyde, UK)
We explore the design and implementation of Frank, a strict functional
programming language with a bidirectional effect type system designed
from the ground up around a novel variant of Plotkin and Pretnar's
effect handler abstraction.
Effect handlers provide an abstraction for modular effectful
programming: a handler acts as an interpreter for a collection of
commands whose interfaces are statically tracked by the type
system. However, Frank eliminates the need for an additional effect
handling construct by generalising the basic mechanism of functional
abstraction itself. A function is simply the special case of a Frank
operator that interprets no commands. Moreover, Frank's operators
can be multi-handlers which simultaneously interpret commands from
several sources at once, without disturbing the direct style of
functional programming with values.
Effect typing in Frank employs a novel form of effect polymorphism
which avoids mentioning effect variables in source code. This is
achieved by propagating an ambient ability inwards, rather than
accumulating unions of potential effects outwards.
We introduce Frank by example, and then give a formal account of the
Frank type system and its semantics. We introduce Core Frank by
elaborating Frank operators into functions, case expressions, and
unary handlers, and then give a sound small-step operational semantics
for Core Frank.
Programming with effects and handlers is in its infancy. We contribute
an exploration of future possibilities, particularly in
combination with other forms of rich type system.
@InProceedings{POPL17p500,
author = {Sam Lindley and Conor McBride and Craig McLaughlin},
title = {Do Be Do Be Do},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {500--514},
doi = {10.1145/3009837.3009897},
year = {2017},
}


McLaughlin, Craig 
POPL '17: "Do Be Do Be Do ..."
Do Be Do Be Do
Sam Lindley, Conor McBride, and Craig McLaughlin (University of Edinburgh, UK; University of Strathclyde, UK)
We explore the design and implementation of Frank, a strict functional
programming language with a bidirectional effect type system designed
from the ground up around a novel variant of Plotkin and Pretnar's
effect handler abstraction.
Effect handlers provide an abstraction for modular effectful
programming: a handler acts as an interpreter for a collection of
commands whose interfaces are statically tracked by the type
system. However, Frank eliminates the need for an additional effect
handling construct by generalising the basic mechanism of functional
abstraction itself. A function is simply the special case of a Frank
operator that interprets no commands. Moreover, Frank's operators
can be multi-handlers which simultaneously interpret commands from
several sources at once, without disturbing the direct style of
functional programming with values.
Effect typing in Frank employs a novel form of effect polymorphism
which avoids mentioning effect variables in source code. This is
achieved by propagating an ambient ability inwards, rather than
accumulating unions of potential effects outwards.
We introduce Frank by example, and then give a formal account of the
Frank type system and its semantics. We introduce Core Frank by
elaborating Frank operators into functions, case expressions, and
unary handlers, and then give a sound small-step operational semantics
for Core Frank.
Programming with effects and handlers is in its infancy. We contribute
an exploration of future possibilities, particularly in
combination with other forms of rich type system.
@InProceedings{POPL17p500,
author = {Sam Lindley and Conor McBride and Craig McLaughlin},
title = {Do Be Do Be Do},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {500--514},
doi = {10.1145/3009837.3009897},
year = {2017},
}


Might, Matthew 
POPL '17: "A Posteriori Environment Analysis ..."
A Posteriori Environment Analysis with Pushdown Delta CFA
Kimball Germane and Matthew Might (University of Utah, USA) Flow-driven higher-order inlining is blocked by free variables, yet current theories of environment analysis cannot reliably cope with multiply-bound variables. One of these, ΔCFA, is a promising theory based on stack change but is undermined by its finite-state model of the stack. We present Pushdown ΔCFA, which takes a ΔCFA approach to pushdown models of control flow and can cope with multiply-bound variables, even in the face of recursion. 

Moerman, Joshua 
POPL '17: "Learning Nominal Automata ..."
Learning Nominal Automata
Joshua Moerman, Matteo Sammartino, Alexandra Silva, Bartek Klin, and Michał Szynwelski (Radboud University Nijmegen, Netherlands; University College London, UK; University of Warsaw, Poland)
We present an Angluin-style algorithm to learn nominal automata, which are acceptors of languages over infinite (structured) alphabets. The abstract approach we take allows us to seamlessly extend known variations of the algorithm to this new setting. In particular we can learn a subclass of nominal nondeterministic automata. An implementation using a recently developed Haskell library for nominal computation is provided for preliminary experiments.
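For readers unfamiliar with the underlying algorithm, here is a compact sketch of classical L* over a finite alphabet (the variant in which all suffixes of a counterexample join the distinguishing set, so no separate consistency phase is needed); the paper's contribution is lifting this scheme to nominal automata over infinite alphabets. All names are illustrative:

```python
# Classical L*: learn a DFA from a membership oracle, with equivalence
# approximated by brute-force comparison on short words.
from itertools import product

def lstar(alphabet, member, max_len=6):
    S, E = [""], [""]                       # access prefixes, distinguishing suffixes
    row = lambda s: tuple(member(s + e) for e in E)

    def hypothesis():
        while True:                         # close the observation table
            rows = {row(s) for s in S}
            ext = next((s + a for s in S for a in alphabet
                        if row(s + a) not in rows), None)
            if ext is None:
                break
            S.append(ext)
        reps = {}
        for s in S:
            reps.setdefault(row(s), s)      # one representative prefix per state
        delta = {(q, a): row(reps[q] + a) for q in reps for a in alphabet}

        def run(w):
            q = row("")
            for a in w:
                q = delta[(q, a)]
            return q[0]                     # E[0] == "", so q[0] is acceptance
        return run

    while True:
        run = hypothesis()
        # brute-force equivalence oracle: compare on all words up to max_len
        cex = next((w for n in range(max_len + 1)
                    for w in ("".join(p) for p in product(alphabet, repeat=n))
                    if run(w) != member(w)), None)
        if cex is None:
            return run
        for i in range(len(cex)):           # learn from the counterexample
            if cex[i:] not in E:
                E.append(cex[i:])
```

For instance, `lstar("ab", lambda w: w.count("a") % 2 == 0)` recovers the two-state DFA for "even number of a's". The nominal generalisation replaces finite sets of rows and letters with orbit-finite nominal sets.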
@InProceedings{POPL17p613,
author = {Joshua Moerman and Matteo Sammartino and Alexandra Silva and Bartek Klin and Michał Szynwelski},
title = {Learning Nominal Automata},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {613--625},
doi = {10.1145/3009837.3009879},
year = {2017},
}


Müller, Peter 
POPL '17: "Serializability for Eventual ..."
Serializability for Eventual Consistency: Criterion, Analysis, and Applications
Lucas Brutschy, Dimitar Dimitrov, Peter Müller, and Martin Vechev (ETH Zurich, Switzerland)
Developing and reasoning about systems using eventually consistent data stores
is a difficult challenge due to the presence of unexpected behaviors that do not
occur under sequential consistency. A fundamental problem in this setting is to
identify a correctness criterion that precisely captures intended application
behaviors yet is generic enough to be applicable to a wide range of
applications.
In this paper, we present such a criterion. More precisely, we generalize
conflict serializability to the setting of eventual consistency. Our
generalization is based on a novel dependency model that incorporates two
powerful algebraic properties: commutativity and absorption. These properties enable
precise reasoning about programs that employ high-level replicated data types,
common in modern systems. To apply our criterion in practice, we also developed
a dynamic analysis algorithm and a tool that checks whether a given program
execution is serializable.
We performed a thorough experimental evaluation on two real-world use cases:
debugging cloud-backed mobile applications and implementing clients of a popular
eventually consistent key-value store. Our experimental results indicate that
our criterion reveals harmful synchronization problems in applications, is more
effective at finding them than prior approaches, and can be used for the
development of practical, eventually consistent applications.
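The role of commutativity in the criterion can be sketched with a toy dynamic analysis: build a dependency graph over transactions from an interleaved log, omit edges between commuting operations (here, counter increments), and report serializability as acyclicity. This is a hypothetical miniature of the paper's analysis, with operations restricted to `read` and `inc`:

```python
def commute(op1, op2):
    """Operations on different keys always commute; on the same key,
    read/read and inc/inc commute, but read/inc conflict."""
    (a, k1), (b, k2) = op1, op2
    return k1 != k2 or a == b

def serializable(log):
    """log: list of (txn_id, op, key) in execution order.  Build the
    dependency graph over transactions, skipping commuting pairs, and
    report conflict-serializability as acyclicity."""
    edges = set()
    for i, (t1, a, k1) in enumerate(log):
        for (t2, b, k2) in log[i + 1:]:
            if t1 != t2 and not commute((a, k1), (b, k2)):
                edges.add((t1, t2))
    # cycle detection by DFS with white/gray/black colouring
    color = {t: 0 for t, _, _ in log}
    def dfs(u):
        color[u] = 1
        for (x, y) in edges:
            if x == u and (color[y] == 1 or (color[y] == 0 and dfs(y))):
                return True
        color[u] = 2
        return False
    return not any(color[t] == 0 and dfs(t) for t in list(color))
```

An interleaving of bare increments such as `[(1, "inc", "x"), (2, "inc", "x"), (1, "inc", "x")]` is serializable here even though classical conflict-serializability would reject it, which is the point of folding commutativity into the dependency model.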
@InProceedings{POPL17p458,
author = {Lucas Brutschy and Dimitar Dimitrov and Peter Müller and Martin Vechev},
title = {Serializability for Eventual Consistency: Criterion, Analysis, and Applications},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {458--472},
doi = {10.1145/3009837.3009895},
year = {2017},
}


Mycroft, Alan 
POPL '17: "Polymorphism, Subtyping, and ..."
Polymorphism, Subtyping, and Type Inference in MLsub
Stephen Dolan and Alan Mycroft (University of Cambridge, UK)
We present a type system combining subtyping and ML-style parametric polymorphism. Unlike previous work, our system supports type inference and has compact principal types. We demonstrate this system in the minimal language MLsub, which types a strict superset of core ML programs.
This is made possible by keeping a strict separation between the types used to describe inputs and those used to describe outputs, and extending the classical unification algorithm to handle subtyping constraints between these input and output types. Principal types are kept compact by type simplification, which exploits deep connections between subtyping and the algebra of regular languages. An implementation is available online.
@InProceedings{POPL17p60,
author = {Stephen Dolan and Alan Mycroft},
title = {Polymorphism, Subtyping, and Type Inference in MLsub},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {60--72},
doi = {10.1145/3009837.3009882},
year = {2017},
}


Naumann, David A. 
POPL '17: "Hypercollecting Semantics ..."
Hypercollecting Semantics and Its Application to Static Analysis of Information Flow
Mounir Assaf, David A. Naumann, Julien Signoles, Éric Totel, and Frédéric Tronel (Stevens Institute of Technology, USA; CEA LIST, France; CentraleSupélec, France) We show how static analysis for secure information flow can be expressed and proved correct entirely within the framework of abstract interpretation. The key idea is to define a Galois connection that directly approximates the hyperproperty of interest. To enable use of such Galois connections, we introduce a fixpoint characterisation of hypercollecting semantics, i.e. a “set of sets” transformer. This makes it possible to systematically derive static analyses for hyperproperties entirely within the calculational framework of abstract interpretation. We evaluate this technique by deriving example static analyses. For qualitative information flow, we derive a dependence analysis similar to the logic of Amtoft and Banerjee (SAS’04) and the type system of Hunt and Sands (POPL’06). For quantitative information flow, we derive a novel cardinality analysis that bounds the leakage conveyed by a program instead of simply deciding whether it exists. This encompasses problems that are hypersafety but not k-safety. We put the framework to use and introduce variations that achieve precision rivalling the most recent and precise static analyses for information flow. 

Ng, Nicholas 
POPL '17: "Fencing off Go: Liveness and ..."
Fencing off Go: Liveness and Safety for ChannelBased Programming
Julien Lange, Nicholas Ng, Bernardo Toninho, and Nobuko Yoshida (Imperial College London, UK)
Go is a production-level statically typed programming language whose design
features explicit message-passing primitives and lightweight threads, enabling
(and encouraging) programmers to develop concurrent systems where components
interact through communication more so than by lock-based shared-memory
concurrency. Go can only detect global deadlocks at runtime, but provides no
compile-time protection against all-too-common communication mismatches or
partial deadlocks. This work develops a static verification framework for
liveness and safety in Go programs, able to detect communication errors and
partial deadlocks in a general class of realistic concurrent programs,
including those with dynamic channel creation, unbounded thread creation and
recursion. Our approach infers from a Go program a faithful representation of
its communication patterns as a behavioural type. By checking a syntactic
restriction on channel usage, dubbed fencing, we ensure that programs are made
up of finitely many different communication patterns that may be repeated
infinitely many times. This restriction allows us to implement a decision
procedure for liveness and safety in types which in turn statically ensures
liveness and safety in Go programs. We have implemented the type inference and
decision procedures in a toolchain and tested it against publicly available Go
programs.
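The flavour of the verification problem can be illustrated with a toy exhaustive check: model each goroutine as a straight-line sequence of synchronous sends and receives, explore the product state space, and flag states where some goroutine is blocked forever. This is a hypothetical sketch only; the paper instead infers behavioural types and decides liveness and safety on them, covering recursion and dynamic channel creation that this brute-force model cannot:

```python
# Toy model: each process is a list of ("send", chan) / ("recv", chan)
# actions; a global state is a tuple of program counters; a step is a
# matching send/recv rendezvous between two processes.
from collections import deque

def deadlocks(procs):
    """Return the set of reachable stuck states in which some process is
    blocked forever (a partial or global deadlock)."""
    start = tuple(0 for _ in procs)
    seen, stuck, queue = set(), set(), deque([start])
    while queue:
        st = queue.popleft()
        if st in seen:
            continue
        seen.add(st)
        succs = []
        for i, pi in enumerate(st):
            for j, pj in enumerate(st):
                if i == j or pi >= len(procs[i]) or pj >= len(procs[j]):
                    continue
                ai, aj = procs[i][pi], procs[j][pj]
                if ai[0] == "send" and aj[0] == "recv" and ai[1] == aj[1]:
                    nxt = list(st)
                    nxt[i] += 1
                    nxt[j] += 1
                    succs.append(tuple(nxt))
        if not succs and any(pc < len(p) for pc, p in zip(st, procs)):
            stuck.add(st)          # no step possible, someone unfinished
        queue.extend(succs)
    return stuck
```

A matched pair `[[("send", "c")], [("recv", "c")]]` has no stuck states, while mismatched channels, or two sends against one receive, are reported as (partial) deadlocks.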
@InProceedings{POPL17p748,
author = {Julien Lange and Nicholas Ng and Bernardo Toninho and Nobuko Yoshida},
title = {Fencing off Go: Liveness and Safety for ChannelBased Programming},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {748--761},
doi = {10.1145/3009837.3009847},
year = {2017},
}


Nienhuis, Kyndylan 
POPL '17: "Mixed-Size Concurrency: ARM, ..."
Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC
Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis, Luc Maranget, Kathryn E. Gray, Ali Sezgin, Mark Batty, and Peter Sewell (University of Cambridge, UK; University of St. Andrews, UK; Inria, France; University of Kent, UK)
Previous work on the semantics of relaxed shared-memory concurrency has only considered the case in which each load reads the data of exactly one store. In practice, however, multiprocessors support mixed-size accesses, and these are used by systems software and (to some degree) exposed at the C/C++ language level. A semantic foundation for software, therefore, has to address them.
We investigate the mixed-size behaviour of ARMv8 and IBM POWER architectures and implementations: by experiment, by developing semantic models, by testing the correspondence between these, and by discussion with ARM and IBM staff. This turns out to be surprisingly subtle, and on the way we have to revisit the fundamental concepts of coherence and sequential consistency, which change in this setting. In particular, we show that adding a memory barrier between each instruction does not restore sequential consistency. We go on to extend the C/C++11 model to support non-atomic mixed-size memory accesses.
This is a necessary step towards semantics for real-world shared-memory concurrent code, beyond litmus tests.
@InProceedings{POPL17p429,
author = {Shaked Flur and Susmit Sarkar and Christopher Pulte and Kyndylan Nienhuis and Luc Maranget and Kathryn E. Gray and Ali Sezgin and Mark Batty and Peter Sewell},
title = {Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {429--442},
doi = {10.1145/3009837.3009839},
year = {2017},
}


Novotný, Petr 
POPL '17: "Stochastic Invariants for ..."
Stochastic Invariants for Probabilistic Termination
Krishnendu Chatterjee, Petr Novotný, and Ðorđe Žikelić (IST Austria, Austria; University of Cambridge, UK) Termination is one of the basic liveness properties, and we study the termination problem for probabilistic programs with real-valued variables. Previous works focused on the qualitative problem that asks whether an input program terminates with probability 1 (almost-sure termination). A powerful approach for this qualitative problem is the notion of ranking supermartingales with respect to a given set of invariants. The quantitative problem (probabilistic termination) asks for bounds on the termination probability, and this problem has not been addressed yet. A fundamental and conceptual drawback of the existing approaches to address probabilistic termination is that even though the supermartingales consider the probabilistic behaviour of the programs, the invariants are obtained completely ignoring the probabilistic aspect (i.e., the invariants are obtained considering all behaviours with no information about the probability). In this work we address the probabilistic termination problem for linear-arithmetic probabilistic programs with nondeterminism. We formally define the notion of stochastic invariants, which are constraints along with a probability bound that the constraints hold. We introduce a concept of repulsing supermartingales. First, we show that repulsing supermartingales can be used to obtain bounds on the probability of the stochastic invariants. Second, we show the effectiveness of repulsing supermartingales in the following three ways: (1) with a combination of ranking and repulsing supermartingales we can compute lower bounds on the probability of termination; (2) repulsing supermartingales provide witnesses for refutation of almost-sure termination; and (3) with a combination of ranking and repulsing supermartingales we can establish persistence properties of probabilistic programs. 
Along with our conceptual contributions, we establish the following computational results: First, the synthesis of a stochastic invariant which supports some ranking supermartingale and at the same time admits a repulsing supermartingale can be achieved via reduction to the existential first-order theory of reals, which generalizes existing results from the non-probabilistic setting. Second, given a program with “strict invariants” (e.g., obtained via abstract interpretation) and a stochastic invariant, we can check in polynomial time whether there exists a linear repulsing supermartingale w.r.t. the stochastic invariant (via reduction to LP). We also present an experimental evaluation of our approach on academic examples. 
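The supermartingale-style bound can be illustrated on a textbook example: a random walk x := x ± 1 (probability 1/2 each) that terminates on reaching 0 and escapes on reaching N. For a nonnegative supermartingale f, optional stopping gives Pr[escape] ≤ f(x0)/f(N), hence a lower bound on the termination probability. The Python check below illustrates this bound only; it is not the paper's synthesis algorithm:

```python
import random

def check_supermartingale(f, N):
    """f is a supermartingale for the walk if E[f(x')] <= f(x) on every
    non-terminal state 0 < x < N."""
    return all(0.5 * f(x - 1) + 0.5 * f(x + 1) <= f(x) for x in range(1, N))

def termination_lower_bound(f, x0, N):
    """Optional-stopping bound for nonnegative increasing f:
    Pr[hit N before 0] <= f(x0)/f(N), so termination (hitting 0) has
    probability >= 1 - f(x0)/f(N)."""
    assert check_supermartingale(f, N)
    return 1 - f(x0) / f(N)

def simulate(x0, N, trials=10000, seed=1):
    """Monte-Carlo estimate of the termination probability, for comparison."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = x0
        while 0 < x < N:
            x += rng.choice((-1, 1))
        hits += (x <= 0)
    return hits / trials
```

With f(x) = x, x0 = 1, and N = 10, the certified lower bound is 0.9, which matches the exact gambler's-ruin probability; the simulation agrees empirically.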

Omar, Cyrus 
POPL '17: "Hazelnut: A Bidirectionally ..."
Hazelnut: A Bidirectionally Typed Structure Editor Calculus
Cyrus Omar, Ian Voysey, Michael Hilton, Jonathan Aldrich, and Matthew A. Hammer (Carnegie Mellon University, USA; Oregon State University, USA; University of Colorado at Boulder, USA) Structure editors allow programmers to edit the tree structure of a program directly. This can have cognitive benefits, particularly for novice and end-user programmers. It also simplifies matters for tool designers, because they do not need to contend with malformed program text. This paper introduces Hazelnut, a structure editor based on a small bidirectionally typed lambda calculus extended with holes and a cursor. Hazelnut goes one step beyond syntactic well-formedness: its edit actions operate over statically meaningful incomplete terms. Naïvely, this would force the programmer to construct terms in a rigid “outside-in” manner. To avoid this problem, the action semantics automatically places terms assigned a type that is inconsistent with the expected type inside a hole. This meaningfully defers the type consistency check until the term inside the hole is finished. Hazelnut is not intended as an end-user tool itself. Instead, it serves as a foundational account of typed structure editing. To that end, we describe how Hazelnut’s rich metatheory, which we have mechanized using the Agda proof assistant, serves as a guide when we extend the calculus to include binary sum types. We also discuss various interpretations of holes, and in so doing reveal connections with gradual typing and contextual modal type theory, the Curry-Howard interpretation of contextual modal logic. Finally, we discuss how Hazelnut’s semantics lends itself to implementation as an event-based functional reactive program. Our simple reference implementation is written using js_of_ocaml. 
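The key action-semantics idea (wrap a type-inconsistent term in a non-empty hole rather than reject the edit) can be sketched in a few lines. The term and type representations below are hypothetical simplifications of Hazelnut's calculus:

```python
# Terms are tagged tuples; "hole" is an empty hole and "ne_hole" a
# non-empty hole wrapping a possibly ill-typed term.  Types are "num",
# "bool", and the unknown type "?".

def consistent(t1, t2):
    """Type consistency: the unknown type "?" is consistent with anything."""
    return "?" in (t1, t2) or t1 == t2

def synth(e, ctx={}):
    """Synthesize a type; every hole synthesizes "?", so incomplete
    terms remain statically meaningful."""
    tag = e[0]
    if tag == "num":
        return "num"
    if tag == "bool":
        return "bool"
    if tag == "var":
        return ctx[e[1]]
    if tag == "plus":
        assert all(consistent(synth(a, ctx), "num") for a in e[1:])
        return "num"
    if tag in ("hole", "ne_hole"):
        return "?"

def fill(expected, e, ctx={}):
    """Edit action: place e where type `expected` is required; if its
    synthesized type is inconsistent, wrap it in a non-empty hole
    instead of rejecting the edit, deferring the type error."""
    return e if consistent(synth(e, ctx), expected) else ("ne_hole", e)
```

So `fill("bool", ("num", 3))` yields `("ne_hole", ("num", 3))`: the edit succeeds, the inconsistency is recorded in the term, and partially built expressions like `("plus", ("num", 1), ("hole",))` still type-check.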

Palladinos, Nick 
POPL '17: "Stream Fusion, to Completeness ..."
Stream Fusion, to Completeness
Oleg Kiselyov, Aggelos Biboudis, Nick Palladinos, and Yannis Smaragdakis (Tohoku University, Japan; University of Athens, Greece; Nessos IT, Greece)
Stream processing is mainstream (again): Widely-used stream libraries are now available for virtually all modern OO and functional languages, from Java to C# to Scala to OCaml to Haskell. Yet expressivity and performance are still lacking. For instance, the popular, well-optimized Java 8 streams do not support the zip operator and are still an order of magnitude slower than handwritten loops.
We present the first approach that represents the full generality of stream processing and eliminates overheads, via the use of staging. It is based on an unusually rich semantic model of stream interaction. We support any combination of zipping, nesting (or flat-mapping), sub-ranging, filtering, and mapping of finite or infinite streams. Our model captures idiosyncrasies that a programmer uses in optimizing stream pipelines, such as rate differences and the choice of “for” vs. “while” loops. Our approach delivers handwritten-like code, but automatically. It explicitly avoids relying on black-box optimizers and sufficiently-smart compilers, offering the highest guaranteed and portable performance.
Our approach relies on high-level concepts that are then readily mapped into an implementation. Accordingly, we have two distinct implementations: an OCaml stream library, staged via MetaOCaml, and a Scala library for the JVM, staged via LMS. In both cases, we derive libraries richer and simultaneously many tens of times faster than past work. We greatly exceed in performance the standard stream libraries available in Java, Scala and OCaml, including the well-optimized Java 8 streams.
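The gap the paper targets can be seen in miniature by comparing a declarative pipeline with the single fused loop a staged library would generate. The Python sketch below uses generators merely to state the specification; the paper's libraries produce the fused form automatically via MetaOCaml/LMS staging:

```python
from itertools import count, islice

def pipeline(n):
    """Declarative pipeline: zip an infinite mapped stream with an
    infinite filtered one, then take n elements.  Each stage allocates
    its own iterator object."""
    squares = (x * x for x in count())
    evens = (x for x in count() if x % 2 == 0)
    return list(islice(zip(squares, evens), n))

def fused(n):
    """What the fused code looks like after staging: one loop, no
    intermediate stream objects; the filtered side keeps its own rate."""
    out, i, j = [], 0, 0
    while len(out) < n:
        while j % 2 != 0:      # advance the filtered side to its next value
            j += 1
        out.append((i * i, j))
        i += 1
        j += 1
    return out
```

Both produce `[(0, 0), (1, 2), (4, 4), (9, 6), ...]`; the point of the paper is to obtain the second form from the first, for arbitrary pipelines, with guaranteed elimination of the intermediate structure.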
@InProceedings{POPL17p285,
author = {Oleg Kiselyov and Aggelos Biboudis and Nick Palladinos and Yannis Smaragdakis},
title = {Stream Fusion, to Completeness},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {285--299},
doi = {10.1145/3009837.3009880},
year = {2017},
}


Palsberg, Jens 
POPL '17: "Typed Self-Evaluation via ..."
Typed Self-Evaluation via Intensional Type Functions
Matt Brown and Jens Palsberg (University of California at Los Angeles, USA) Many popular languages have a self-interpreter, that is, an interpreter for the language written in itself. So far, work on polymorphically-typed self-interpreters has concentrated on self-recognizers that merely recover a program from its representation. A larger and until now unsolved challenge is to implement a polymorphically-typed self-evaluator that evaluates the represented program and produces a representation of the result. We present F_{ω}^{µi}, the first λ-calculus that supports a polymorphically-typed self-evaluator. Our calculus extends F_{ω} with recursive types and intensional type functions and has decidable type checking. Our key innovation is a novel implementation of type equality proofs that enables us to define a versatile representation of programs. Our results establish a new category of languages that can support polymorphically-typed self-evaluators. 

Paykin, Jennifer 
POPL '17: "QWIRE: A Core Language for ..."
QWIRE: A Core Language for Quantum Circuits
Jennifer Paykin, Robert Rand, and Steve Zdancewic (University of Pennsylvania, USA)
This paper introduces QWIRE (``choir''), a language for defining quantum
circuits and an interface for manipulating them inside of an arbitrary
classical host language. QWIRE is minimal (it contains only a
few primitives) and sound with respect to the physical properties entailed by
quantum mechanics. At the same time, QWIRE is expressive and highly modular
due to its relationship with the host language, mirroring the QRAM model
of computation that places a quantum computer (controlled by circuits)
alongside a classical computer (controlled by the host language).
We present QWIRE along with its type system and operational semantics, which
we prove is safe and strongly normalizing whenever the host language is. We
give circuits a denotational semantics in terms of density matrices. Throughout, we
investigate examples that demonstrate the expressive power of QWIRE, including
extensions to the host language that (1) expose a general analysis framework
for circuits, and (2) provide dependent types.
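The density-matrix semantics can be sketched for a one-qubit circuit: a unitary gate U denotes the map ρ ↦ UρU†, and measurement outcome probabilities are read off the diagonal. The plain-Python illustration below is not QWIRE's implementation, just the underlying denotation:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dagger(A):
    """Conjugate transpose (entries here are real, so just transpose)."""
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

# The Hadamard gate as a 2x2 matrix.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def denote(gates, rho):
    """Denotation of a unitary circuit on a density matrix: rho -> U rho U†,
    applied gate by gate."""
    for U in gates:
        rho = matmul(matmul(U, rho), dagger(U))
    return rho

rho0 = [[1.0, 0.0], [0.0, 0.0]]     # the pure state |0><0|
rho1 = denote([H], rho0)            # after a Hadamard gate
probs = [rho1[0][0], rho1[1][1]]    # Born rule: diagonal entries are
                                    # the measurement outcome probabilities
```

After one Hadamard the diagonal is (0.5, 0.5), i.e. a fair coin on measurement; after two Hadamards the state returns to |0><0|, since H·H is the identity.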
@InProceedings{POPL17p846,
author = {Jennifer Paykin and Robert Rand and Steve Zdancewic},
title = {QWIRE: A Core Language for Quantum Circuits},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {846--858},
doi = {10.1145/3009837.3009894},
year = {2017},
}


Pierce, Benjamin C. 
POPL '17: "Beginner's Luck: A Language ..."
Beginner's Luck: A Language for Property-Based Generators
Leonidas Lampropoulos, Diane Gallois-Wong, Cătălin Hriţcu, John Hughes, Benjamin C. Pierce, and Liyao Xia (University of Pennsylvania, USA; Inria, France; ENS, France; Chalmers University of Technology, Sweden)
Property-based random testing à la QuickCheck requires building efficient generators for well-distributed random data satisfying complex logical predicates, but writing these generators can be difficult and error-prone. We propose a domain-specific language in which generators are conveniently expressed by decorating predicates with lightweight annotations to control both the distribution of generated values and the amount of constraint solving that happens before each variable is instantiated. This language, called Luck, makes generators easier to write, read, and maintain.
We give Luck a formal semantics and prove several fundamental properties, including the soundness and completeness of random generation with respect to a standard predicate semantics. We evaluate Luck on common examples from the property-based testing literature and on two significant case studies, showing that it can be used in complex domains with comparable bug-finding effectiveness and a significant reduction in testing code size compared to handwritten generators.
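The difference between generate-and-test and predicate-directed generation can be sketched for a toy predicate (sorted lists of small naturals). The `constrained` generator below is a hypothetical, hand-written analogue of what Luck derives from an annotated predicate; Luck's actual mechanism interleaves sampling with constraint solving under the programmer's annotations:

```python
import random

def is_sorted_small(xs):
    """The predicate under test: elements in [0, 9], nondecreasing."""
    return (all(0 <= x <= 9 for x in xs)
            and all(a <= b for a, b in zip(xs, xs[1:])))

def rejection(rng, n):
    """Naive generate-and-test: draw uniform lists, keep only the ones
    satisfying the predicate (yield collapses as n grows)."""
    while True:
        xs = [rng.randrange(10) for _ in range(n)]
        if is_sorted_small(xs):
            return xs

def constrained(rng, n):
    """Predicate-directed generation: each element is drawn only from
    values that keep the remaining constraint satisfiable, so every
    sample is a valid test case."""
    xs, lo = [], 0
    for _ in range(n):
        lo = rng.randrange(lo, 10)
        xs.append(lo)
    return xs
```

Every call to `constrained` succeeds in one attempt, whereas `rejection` needs exponentially many retries as the list length grows, which is why deriving the constrained form from the predicate pays off.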
@InProceedings{POPL17p114,
author = {Leonidas Lampropoulos and Diane GalloisWong and Cătălin Hriţcu and John Hughes and Benjamin C. Pierce and Liyao Xia},
title = {Beginner's Luck: A Language for PropertyBased Generators},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {114--129},
doi = {10.1145/3009837.3009868},
year = {2017},
}


Plotkin, Gordon 
POPL '17: "Dijkstra Monads for Free ..."
Dijkstra Monads for Free
Danel Ahman, Cătălin Hriţcu, Kenji Maillard, Guido Martínez, Gordon Plotkin, Jonathan Protzenko, Aseem Rastogi, and Nikhil Swamy (University of Edinburgh, UK; Microsoft Research, USA; Inria, France; ENS, France; Rosario National University, Argentina; Microsoft Research, India) Dijkstra monads enable a dependent type theory to be enhanced with support for specifying and verifying effectful code via weakest preconditions. Together with their closely related counterparts, Hoare monads, they provide the basis on which verification tools like F*, Hoare Type Theory (HTT), and Ynot are built. We show that Dijkstra monads can be derived “for free” by applying a continuation-passing style (CPS) translation to the standard monadic definitions of the underlying computational effects. Automatically deriving Dijkstra monads in this way provides a correct-by-construction and efficient way of reasoning about user-defined effects in dependent type theories. We demonstrate these ideas in EMF*, a new dependently typed calculus, validating it via both formal proof and a prototype implementation within F*. Besides equipping F* with a more uniform and extensible effect system, EMF* enables a novel mixture of intrinsic and extrinsic proofs within F*. 

Podelski, Andreas 
POPL '17: "Thread Modularity at Many ..."
Thread Modularity at Many Levels: A Pearl in Compositional Verification
Jochen Hoenicke, Rupak Majumdar, and Andreas Podelski (University of Freiburg, Germany; MPI-SWS, Germany) A thread-modular proof for the correctness of a concurrent program is based on an inductive and interference-free annotation of each thread. It is well-known that the corresponding proof system is not complete (unless one adds auxiliary variables). We describe a hierarchy of proof systems where each level k corresponds to a generalized notion of thread modularity (level 1 corresponds to the original notion). Each level is strictly more expressive than the previous. Further, each level precisely captures programs that can be proved using uniform Ashcroft invariants with k universal quantifiers. We demonstrate the usefulness of the hierarchy by giving a compositional proof of the Mach shootdown algorithm for TLB consistency. We give a proof at level 2 showing that the algorithm is correct for an arbitrary number of CPUs. However, there is no proof for the algorithm at level 1 which does not involve auxiliary state.

Protzenko, Jonathan 
POPL '17: "Dijkstra Monads for Free ..."
Dijkstra Monads for Free
Danel Ahman, Cătălin Hriţcu, Kenji Maillard, Guido Martínez, Gordon Plotkin, Jonathan Protzenko, Aseem Rastogi, and Nikhil Swamy (University of Edinburgh, UK; Microsoft Research, USA; Inria, France; ENS, France; Rosario National University, Argentina; Microsoft Research, India) Dijkstra monads enable a dependent type theory to be enhanced with support for specifying and verifying effectful code via weakest preconditions. Together with their closely related counterparts, Hoare monads, they provide the basis on which verification tools like F*, Hoare Type Theory (HTT), and Ynot are built. We show that Dijkstra monads can be derived “for free” by applying a continuationpassing style (CPS) translation to the standard monadic definitions of the underlying computational effects. Automatically deriving Dijkstra monads in this way provides a correctbyconstruction and efficient way of reasoning about userdefined effects in dependent type theories. We demonstrate these ideas in EMF*, a new dependently typed calculus, validating it via both formal proof and a prototype implementation within F*. Besides equipping F* with a more uniform and extensible effect system, EMF* enables a novel mixture of intrinsic and extrinsic proofs within F*. 

Püschel, Markus 
POPL '17: "Fast Polyhedra Abstract Domain ..."
Fast Polyhedra Abstract Domain
Gagandeep Singh, Markus Püschel, and Martin Vechev (ETH Zurich, Switzerland) Numerical abstract domains are an important ingredient of modern static analyzers used for verifying critical program properties (e.g., absence of buffer overflows or memory safety). Among the many numerical domains introduced over the years, Polyhedra is the most expressive one, but also the most expensive: it has worst-case exponential space and time complexity. As a consequence, static analysis with the Polyhedra domain is thought to be impractical when applied to large-scale, real-world programs. In this paper, we present a new approach and a complete implementation for speeding up Polyhedra domain analysis. Our approach does not lose precision, and for many practical cases, is orders of magnitude faster than state-of-the-art solutions. The key insight underlying our work is that polyhedra arising during analysis can usually be kept decomposed, thus considerably reducing the overall complexity. We first present the theory underlying our approach, which identifies the interaction between partitions of variables and domain operators. Based on the theory, we develop new algorithms for these operators that work with decomposed polyhedra. We implemented these algorithms using the same interface as existing libraries, thus enabling static analyzers to use our implementation with little effort. In our evaluation, we analyze large benchmarks from the popular software verification competition, including Linux device drivers with over 50K lines of code. Our experimental results demonstrate massive gains in both space and time: we show end-to-end speedups of two to five orders of magnitude compared to state-of-the-art Polyhedra implementations, as well as significant memory gains, on all larger benchmarks. In fact, in many cases our analysis terminates in seconds where prior code runs out of memory or times out after 4 hours. 
We believe this work is an important step in making the Polyhedra abstract domain both feasible and practically usable for handling large, real-world programs.

Pulte, Christopher 
POPL '17: "Mixed-Size Concurrency: ARM, ..."
Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC
Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis, Luc Maranget, Kathryn E. Gray, Ali Sezgin, Mark Batty, and Peter Sewell (University of Cambridge, UK; University of St. Andrews, UK; Inria, France; University of Kent, UK)
Previous work on the semantics of relaxed shared-memory concurrency has only considered the case in which each load reads the data of exactly one store. In practice, however, multiprocessors support mixed-size accesses, and these are used by systems software and (to some degree) exposed at the C/C++ language level. A semantic foundation for software, therefore, has to address them.
We investigate the mixed-size behaviour of ARMv8 and IBM POWER architectures and implementations: by experiment, by developing semantic models, by testing the correspondence between these, and by discussion with ARM and IBM staff. This turns out to be surprisingly subtle, and on the way we have to revisit the fundamental concepts of coherence and sequential consistency, which change in this setting. In particular, we show that adding a memory barrier between each instruction does not restore sequential consistency. We go on to extend the C/C++11 model to support non-atomic mixed-size memory accesses.
This is a necessary step towards semantics for real-world shared-memory concurrent code, beyond litmus tests.
@InProceedings{POPL17p429,
author = {Shaked Flur and Susmit Sarkar and Christopher Pulte and Kyndylan Nienhuis and Luc Maranget and Kathryn E. Gray and Ali Sezgin and Mark Batty and Peter Sewell},
title = {Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {429--442},
doi = {10.1145/3009837.3009839},
year = {2017},
}


Rakamarić, Zvonimir 
POPL '17: "Rigorous Floating-Point Mixed-Precision ..."
Rigorous Floating-Point Mixed-Precision Tuning
Wei-Fan Chiang, Mark Baranowski, Ian Briggs, Alexey Solovyev, Ganesh Gopalakrishnan, and Zvonimir Rakamarić (University of Utah, USA)
Virtually all real-valued computations are carried out using floating-point data types and operations. The precision of these data types must be set with the goals of reducing the overall roundoff error while also emphasizing performance improvements. Often, a mixed-precision allocation achieves this optimum; unfortunately, there are no techniques available to compute such allocations and conservatively meet a given error target across all program inputs. In this work, we present a rigorous approach to precision allocation based on formal analysis via Symbolic Taylor Expansions, and error analysis based on interval functions. This approach is implemented in an automated tool called FPTuner that generates and solves a quadratically constrained quadratic program to obtain a precision-annotated version of the given expression. FPTuner automatically introduces all the requisite precision up- and down-casting operations. It also allows users to flexibly control precision allocation using constraints to cap the number of high-precision operators, as well as to group operators that should be allocated the same precision to facilitate vectorization. We evaluate FPTuner by tuning several benchmarks and measuring the proportion of lower-precision operators allocated as we increase the error threshold. We also measure the reduction in energy consumption resulting from executing mixed-precision tuned code on a real hardware platform. We observe significant energy savings in response to mixed-precision tuning, but also observe situations where unexpected compiler behaviors thwart intended optimizations.
@InProceedings{POPL17p300,
author = {Wei-Fan Chiang and Mark Baranowski and Ian Briggs and Alexey Solovyev and Ganesh Gopalakrishnan and Zvonimir Rakamarić},
title = {Rigorous Floating-Point Mixed-Precision Tuning},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {300--315},
doi = {10.1145/3009837.3009846},
year = {2017},
}
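The roundoff error that mixed-precision tuners must budget can be observed directly. The sketch below is a hedged illustration, not FPTuner itself: it uses the stdlib `struct` round-trip to round doubles to binary32, then evaluates the same cubic polynomial once in double precision and once with every intermediate rounded to single precision.

```python
import struct

def to_f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def poly_f64(x):
    # Horner evaluation of 1 + x + x^2/2 + x^3/6 in double precision.
    return 1.0 + x * (1.0 + x * (0.5 + x * (1.0 / 6.0)))

def poly_f32(x):
    # Same Horner scheme, with every intermediate rounded to single precision.
    x = to_f32(x)
    acc = to_f32(1.0 / 6.0)
    for c in (0.5, 1.0, 1.0):
        acc = to_f32(to_f32(x * acc) + c)
    return acc

err = abs(poly_f64(0.1) - poly_f32(0.1))
assert err < 1e-6   # single precision is close ...
assert err > 0.0    # ... but not exact: the roundoff a tuner must account for
```

A tuner's job is to decide, per operator, which of these two precisions suffices for a given end-to-end error target.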


Ramsey, Norman 
POPL '17: "Exact Bayesian Inference by ..."
Exact Bayesian Inference by Symbolic Disintegration
Chung-chieh Shan and Norman Ramsey (Indiana University, USA; Tufts University, USA) Bayesian inference, of posterior knowledge from prior knowledge and observed evidence, is typically defined by Bayes's rule, which says the posterior multiplied by the probability of an observation equals a joint probability. But the observation of a continuous quantity usually has probability zero, in which case Bayes's rule says only that the unknown times zero is zero. To infer a posterior distribution from a zero-probability observation, the statistical notion of disintegration tells us to specify the observation as an expression rather than a predicate, but does not tell us how to compute the posterior. We present the first method of computing a disintegration from a probabilistic program and an expression of a quantity to be observed, even when the observation has probability zero. Because the method produces an exact posterior term and preserves a semantics in which monadic terms denote measures, it composes with other inference methods in a modular way, without sacrificing accuracy or performance.
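A concrete instance of conditioning on a zero-probability observation: for independent X, Y ~ Uniform(0,1), observing the expression X + Y = z leaves X uniform on the slice [max(0, z-1), min(1, z)]. The sketch below hand-derives this one disintegration; the paper computes such posteriors symbolically and automatically, so this is only a worked special case.

```python
def posterior_density(x, z):
    """Density of X given the zero-probability observation X + Y = z,
    with X, Y independent Uniform(0,1); obtained by disintegrating the
    joint measure along the map (x, y) -> x + y."""
    lo, hi = max(0.0, z - 1.0), min(1.0, z)
    if not (lo <= x <= hi):
        return 0.0
    return 1.0 / (hi - lo)   # uniform on the slice {x : (x, z - x) in [0,1]^2}

# Observing X + Y = 1.2 leaves X uniform on [0.2, 1.0], with density 1/0.8:
assert abs(posterior_density(0.5, 1.2) - 1.25) < 1e-9
assert posterior_density(0.1, 1.2) == 0.0
```

Note that a predicate-style observation (`X + Y == 1.2`) would hold with probability zero; only the expression view makes the posterior well-defined.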

Rand, Robert 
POPL '17: "QWIRE: A Core Language for ..."
QWIRE: A Core Language for Quantum Circuits
Jennifer Paykin, Robert Rand, and Steve Zdancewic (University of Pennsylvania, USA)
This paper introduces QWIRE (``choir''), a language for defining quantum circuits and an interface for manipulating them inside of an arbitrary classical host language. QWIRE is minimal (it contains only a few primitives) and sound with respect to the physical properties entailed by quantum mechanics. At the same time, QWIRE is expressive and highly modular due to its relationship with the host language, mirroring the QRAM model of computation that places a quantum computer (controlled by circuits) alongside a classical computer (controlled by the host language).
We present QWIRE along with its type system and operational semantics, which we prove is safe and strongly normalizing whenever the host language is. We give circuits a denotational semantics in terms of density matrices. Throughout, we investigate examples that demonstrate the expressive power of QWIRE, including extensions to the host language that (1) expose a general analysis framework for circuits, and (2) provide dependent types.
@InProceedings{POPL17p846,
author = {Jennifer Paykin and Robert Rand and Steve Zdancewic},
title = {QWIRE: A Core Language for Quantum Circuits},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {846--858},
doi = {10.1145/3009837.3009894},
year = {2017},
}
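The circuit semantics mentioned above, interpreting a circuit as a map on density matrices ρ ↦ U ρ U†, can be illustrated on a single qubit. This is a toy sketch in plain Python, unrelated to QWIRE's actual host-language embedding: it applies a Hadamard gate to the pure state |0⟩⟨0|.

```python
import math

def matmul(a, b):
    # Plain-list matrix product, sufficient for 2x2 examples.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def dagger(u):
    # Conjugate transpose; entries here are real, so just transpose.
    return [list(row) for row in zip(*u)]

# Denote a one-qubit circuit as a map on density matrices: rho -> U rho U†.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
rho0 = [[1.0, 0.0], [0.0, 0.0]]            # the pure state |0><0|

rho1 = matmul(matmul(H, rho0), dagger(H))  # state after the Hadamard gate
# Measuring in the computational basis now gives 0 or 1 with probability 1/2:
assert abs(rho1[0][0] - 0.5) < 1e-12 and abs(rho1[1][1] - 0.5) < 1e-12
```

Density matrices are what make the semantics compositional for mixed states and measurement, which pure state vectors cannot express.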


Rastogi, Aseem 
POPL '17: "Dijkstra Monads for Free ..."
Dijkstra Monads for Free
Danel Ahman, Cătălin Hriţcu, Kenji Maillard, Guido Martínez, Gordon Plotkin, Jonathan Protzenko, Aseem Rastogi, and Nikhil Swamy (University of Edinburgh, UK; Microsoft Research, USA; Inria, France; ENS, France; Rosario National University, Argentina; Microsoft Research, India) Dijkstra monads enable a dependent type theory to be enhanced with support for specifying and verifying effectful code via weakest preconditions. Together with their closely related counterparts, Hoare monads, they provide the basis on which verification tools like F*, Hoare Type Theory (HTT), and Ynot are built. We show that Dijkstra monads can be derived “for free” by applying a continuation-passing style (CPS) translation to the standard monadic definitions of the underlying computational effects. Automatically deriving Dijkstra monads in this way provides a correct-by-construction and efficient way of reasoning about user-defined effects in dependent type theories. We demonstrate these ideas in EMF*, a new dependently typed calculus, validating it via both formal proof and a prototype implementation within F*. Besides equipping F* with a more uniform and extensible effect system, EMF* enables a novel mixture of intrinsic and extrinsic proofs within F*.

Rehof, Jakob 
POPL '17: "Intersection Type Calculi ..."
Intersection Type Calculi of Bounded Dimension
Andrej Dudenhefner and Jakob Rehof (TU Dortmund, Germany) A notion of dimension in intersection typed λ-calculi is presented. The dimension of a typed λ-term is given by the minimal norm of an elaboration (a proof-theoretic decoration) necessary for typing the term at its type, and, intuitively, measures intersection introduction as a resource. Bounded-dimensional intersection type calculi are shown to enjoy subject reduction, since terms can be elaborated in non-increasing norm under β-reduction. We prove that a multiset interpretation (corresponding to a non-idempotent and non-linear interpretation of intersection) of dimensionality corresponds to the number of simultaneous constraints required during search for inhabitants. As a consequence, the inhabitation problem is decidable in bounded multiset dimension, and it is proven to be EXPSPACE-complete. This result is a substantial generalization of inhabitation for the rank-2 fragment, yielding a calculus with decidable inhabitation which is independent of rank. Our results give rise to a new criterion (dimensional bound) for subclasses of intersection type calculi with a decidable inhabitation problem, which is orthogonal to previously known criteria, and which should have immediate applications in synthesis. Additionally, we give examples of dimensional analysis of fragments of the intersection type system, including conservativity over simple types, rank-2 types, and normal-form typings, and we provide some observations towards dimensional analysis of other systems. It is suggested (for future work) that our notion of dimension may have semantic interpretations in terms of reduction complexity.

Reps, Thomas W. 
POPL '17: "Component-Based Synthesis ..."
Component-Based Synthesis for Complex APIs
Yu Feng, Ruben Martins, Yuepeng Wang, Isil Dillig, and Thomas W. Reps (University of Texas at Austin, USA; University of Wisconsin-Madison, USA)
Component-based approaches to program synthesis assemble programs from a database of existing components, such as methods provided by an API. In this paper, we present a novel type-directed algorithm for component-based synthesis. The key novelty of our approach is the use of a compact Petri-net representation to model relationships between methods in an API. Given a target method signature S, our approach performs reachability analysis on the underlying Petri-net model to identify sequences of method calls that could be used to synthesize an implementation of S. The programs synthesized by our algorithm are guaranteed to type-check and pass all test cases provided by the user.
We have implemented this approach in a tool called SyPet, and used it to successfully synthesize real-world programming tasks extracted from online forums and existing code repositories. We also compare SyPet with two state-of-the-art synthesis tools, namely InSynth and CodeHint, and demonstrate that SyPet can synthesize more programs in less time. Finally, we compare our approach with an alternative solution based on hypergraphs and demonstrate its advantages.
@InProceedings{POPL17p599,
author = {Yu Feng and Ruben Martins and Yuepeng Wang and Isil Dillig and Thomas W. Reps},
title = {Component-Based Synthesis for Complex APIs},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {599--612},
doi = {10.1145/3009837.3009851},
year = {2017},
}
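The Petri-net view of an API can be sketched concretely: places are types, a marking counts the values available, and each method is a transition consuming its argument types and producing its return type, so synthesis reduces to finding a firing sequence that reaches the goal type. The Python sketch below uses a hypothetical three-method API (the method names are made up for illustration; SyPet works on real Java signatures with a far more sophisticated reachability analysis).

```python
from collections import deque

# Hypothetical mini-API as a Petri net: each method consumes and produces types.
TRANSITIONS = {
    "Path.of":            ({"String": 1}, {"Path": 1}),
    "Files.readAllBytes": ({"Path": 1},   {"bytes": 1}),
    "new String":         ({"bytes": 1},  {"String": 1}),
}

def fire(marking, consume, produce):
    """Fire a transition if enabled, returning the new marking (or None)."""
    if any(marking.get(p, 0) < n for p, n in consume.items()):
        return None
    out = dict(marking)
    for p, n in consume.items():
        out[p] -= n
    for p, n in produce.items():
        out[p] = out.get(p, 0) + n
    return out

def reach(start, goal_type, limit=6):
    """BFS over markings for a shortest method sequence producing goal_type."""
    queue = deque([(start, [])])
    seen = {tuple(sorted(start.items()))}
    while queue:
        marking, path = queue.popleft()
        if marking.get(goal_type, 0) > 0:
            return path
        if len(path) == limit:
            continue
        for name, (consume, produce) in TRANSITIONS.items():
            nxt = fire(marking, consume, produce)
            key = nxt and tuple(sorted(nxt.items()))
            if nxt and key not in seen:
                seen.add(key)
                queue.append((nxt, path + [name]))
    return None

# From a String argument, reach a bytes value via Path:
assert reach({"String": 1}, "bytes") == ["Path.of", "Files.readAllBytes"]
```

Each firing sequence found this way is a candidate call sequence, which would then be checked against types and user tests.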


Rival, Xavier 
POPL '17: "Semantic-Directed Clumping ..."
Semantic-Directed Clumping of Disjunctive Abstract States
Huisong Li, Francois Berenger, Bor-Yuh Evan Chang, and Xavier Rival (Inria, France; CNRS, France; ENS, France; University of Colorado at Boulder, USA)
To infer complex structural invariants, shape analyses rely on expressive families of logical properties. Many such analyses manipulate abstract memory states that consist of separating conjunctions of basic predicates describing atomic blocks or summaries. Moreover, they use finite disjunctions of abstract memory states in order to account for dissimilar shapes. Disjunctions should be kept small for the sake of scalability, though precision often requires keeping additional case splits. In this context, deciding when and how to merge case splits and to replace them with summaries is critical for both precision and efficiency. Existing techniques use sets of syntactic rules, which are tedious to design and prone to failure. In this paper, we design a semantic criterion to clump abstract states based on their silhouette, which applies not only to the conservative union of disjuncts, but also to the weakening of a separating conjunction of memory predicates into an inductive summary. Our approach allows us to define union and widening operators that aim at preserving the case splits that are required for the analysis to succeed. We implement this approach in the MemCAD analyzer, and evaluate it on real-world C codes from existing libraries, including programs dealing with doubly linked lists, red-black trees, and AVL trees.
@InProceedings{POPL17p32,
author = {Huisong Li and Francois Berenger and Bor-Yuh Evan Chang and Xavier Rival},
title = {Semantic-Directed Clumping of Disjunctive Abstract States},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {32--45},
doi = {10.1145/3009837.3009881},
year = {2017},
}


Robillard, Simon 
POPL '17: "Coming to Terms with Quantified ..."
Coming to Terms with Quantified Reasoning
Laura Kovács, Simon Robillard, and Andrei Voronkov (Vienna University of Technology, Austria; Chalmers University of Technology, Sweden; University of Manchester, UK)
The theory of finite term algebras provides a natural framework to describe the semantics of functional languages. The ability to efficiently reason about term algebras is essential to automate program analysis and verification for functional or imperative programs over inductively defined data types such as lists and trees. However, as the theory of finite term algebras is not finitely axiomatizable, reasoning about quantified properties over term algebras is challenging.
In this paper we address full first-order reasoning about properties of programs manipulating term algebras, and describe two approaches for doing so by using first-order theorem proving. Our first method is a conservative extension of the theory of term algebras using a finite number of statements, while our second method relies on extending the superposition calculus of first-order theorem provers with additional inference rules.
We implemented our work in the first-order theorem prover Vampire and evaluated it on a large number of inductive datatype benchmarks, as well as game-theory constraints. Our experimental results show that our methods are able to find proofs for many hard problems previously unsolved by state-of-the-art methods. We also show that Vampire implementing our methods outperforms existing SMT solvers able to deal with inductive data types.
@InProceedings{POPL17p260,
author = {Laura Kovács and Simon Robillard and Andrei Voronkov},
title = {Coming to Terms with Quantified Reasoning},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {260--270},
doi = {10.1145/3009837.3009887},
year = {2017},
}


Rompf, Tiark 
POPL '17: "LMS-Verify: Abstraction without ..."
LMS-Verify: Abstraction without Regret for Verified Systems Programming
Nada Amin and Tiark Rompf (EPFL, Switzerland; Purdue University, USA)
Performance-critical software is almost always developed in C, as programmers do not trust high-level languages to deliver the same reliable performance. This is bad because low-level code in unsafe languages attracts security vulnerabilities and because development is far less productive, with PL advances mostly lost on programmers operating under tight performance constraints. High-level languages provide memory safety out of the box, but they are deemed too slow and unpredictable for serious system software.
Recent years have seen a surge in staging and generative programming: the key idea is to use high-level languages and their abstraction power as glorified macro systems to compose code fragments in first-order, potentially domain-specific, intermediate languages, from which fast C can be emitted. But what about security? Since the end result is still C code, the safety guarantees of the high-level host language are lost.
In this paper, we extend this generative approach to emit ACSL specifications along with C code. We demonstrate that staging achieves ``abstraction without regret'' for verification: we show how high-level programming models, in particular higher-order composable contracts from dynamic languages, can be used at generation time to compose and generate first-order specifications that can be statically checked by existing tools. We also show how type classes can automatically attach invariants to data types, reducing the need for repetitive manual annotations.
We evaluate our system on several case studies that variously exercise verification of memory safety, overflow safety, and functional correctness. We feature an HTTP parser that is (1) fast, (2) high-level: implemented using staged parser combinators, and (3) secure: with verified memory safety. This result is significant, as input parsing is a key attack vector, and vulnerabilities related to HTTP parsing have been documented in all widely used web servers.
@InProceedings{POPL17p859,
author = {Nada Amin and Tiark Rompf},
title = {LMS-Verify: Abstraction without Regret for Verified Systems Programming},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {859--873},
doi = {10.1145/3009837.3009867},
year = {2017},
}
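The generative recipe described above, composing high-level fragments that emit both C code and its first-order specification, can be caricatured in a few lines. The sketch below is illustrative Python, not LMS or LMS-Verify, and the ACSL contract shown is a simplified hand-written template: a single high-level description emits a C function together with an ACSL-style precondition.

```python
def staged_array_sum(n_var, arr_var):
    """Emit C for summing an array, with an ACSL-style contract generated
    from the same high-level description (a simplified, illustrative sketch)."""
    contract = (f"/*@ requires {n_var} >= 0;\n"
                f"  @ requires \\valid({arr_var} + (0 .. {n_var} - 1));\n"
                f"  @*/")
    body = (f"long sum(int {n_var}, const int *{arr_var}) {{\n"
            f"  long s = 0;\n"
            f"  for (int i = 0; i < {n_var}; i++) s += {arr_var}[i];\n"
            f"  return s;\n"
            f"}}")
    return contract + "\n" + body

code = staged_array_sum("n", "a")
assert "requires \\valid(a + (0 .. n - 1));" in code
assert "for (int i = 0; i < n; i++)" in code
```

The point of the approach is that contract and code come from one description, so the emitted C plus its specification can be handed to a static checker without the two drifting apart.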
POPL '17: "Type Soundness Proofs with ..."
Type Soundness Proofs with Definitional Interpreters
Nada Amin and Tiark Rompf (EPFL, Switzerland; Purdue University, USA) While type soundness proofs are taught in every graduate PL class, the gap between realistic languages and what is accessible to formal proofs is large. In the case of Scala, it has been shown that its formal model, the Dependent Object Types (DOT) calculus, cannot simultaneously support key metatheoretic properties such as environment narrowing and subtyping transitivity, which are usually required for a type soundness proof. Moreover, Scala and many other realistic languages lack a general substitution property. The first contribution of this paper is to demonstrate how type soundness proofs for advanced, polymorphic type systems can be carried out with an operational semantics based on high-level, definitional interpreters, implemented in Coq. We present the first mechanized soundness proofs in this style for System F and several extensions, including mutable references. Our proofs use only straightforward induction, which is significant, as the combination of big-step semantics, mutable references, and polymorphism is commonly believed to require coinductive proof techniques. The second main contribution of this paper is to show how DOT-like calculi emerge from straightforward generalizations of the operational aspects of F, exposing a rich design space of calculi with path-dependent types in between System F and DOT, which we dub the System D Square. By working directly on the target language, definitional interpreters can focus the design space and expose the invariants that actually matter at runtime. Looking at such runtime invariants is an exciting new avenue for type system design.

Sahin, Burak 
POPL '17: "Complexity Verification using ..."
Complexity Verification using Guided Theorem Enumeration
Akhilesh Srikanth, Burak Sahin, and William R. Harris (Georgia Institute of Technology, USA)
Determining if a given program satisfies a given bound on the amount of resources that it may use is a fundamental problem with critical practical applications. Conventional automatic verifiers for safety properties cannot be applied to address this problem directly because such verifiers target properties expressed in decidable theories; however, many practical bounds are expressed in non-linear theories, which are undecidable.
In this work, we introduce an automatic verification algorithm, CAMPY, that determines if a given program P satisfies a given resource bound B, which may be expressed using polynomial, exponential, and logarithmic terms. The key technical contribution behind our verifier is an interpolating theorem prover for non-linear theories that lazily learns a sufficiently accurate approximation of non-linear theories by selectively grounding theorems of the non-linear theory that are relevant to proving that P satisfies B. To evaluate CAMPY, we implemented it to target Java Virtual Machine bytecode. We applied CAMPY to verify that over 20 solutions submitted for programming problems hosted on popular online coding platforms satisfy or do not satisfy expected complexity bounds.
@InProceedings{POPL17p639,
author = {Akhilesh Srikanth and Burak Sahin and William R. Harris},
title = {Complexity Verification using Guided Theorem Enumeration},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {639--652},
doi = {10.1145/3009837.3009864},
year = {2017},
}


Sammartino, Matteo 
POPL '17: "Learning Nominal Automata ..."
Learning Nominal Automata
Joshua Moerman, Matteo Sammartino, Alexandra Silva, Bartek Klin, and Michał Szynwelski (Radboud University Nijmegen, Netherlands; University College London, UK; University of Warsaw, Poland)
We present an Angluin-style algorithm to learn nominal automata, which are acceptors of languages over infinite (structured) alphabets. The abstract approach we take allows us to seamlessly extend known variations of the algorithm to this new setting. In particular, we can learn a subclass of nominal nondeterministic automata. An implementation using a recently developed Haskell library for nominal computation is provided for preliminary experiments.
@InProceedings{POPL17p613,
author = {Joshua Moerman and Matteo Sammartino and Alexandra Silva and Bartek Klin and Michał Szynwelski},
title = {Learning Nominal Automata},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {613--625},
doi = {10.1145/3009837.3009879},
year = {2017},
}


Sarkar, Susmit 
POPL '17: "Mixed-Size Concurrency: ARM, ..."
Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC
Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis, Luc Maranget, Kathryn E. Gray, Ali Sezgin, Mark Batty, and Peter Sewell (University of Cambridge, UK; University of St. Andrews, UK; Inria, France; University of Kent, UK)
Previous work on the semantics of relaxed shared-memory concurrency has only considered the case in which each load reads the data of exactly one store. In practice, however, multiprocessors support mixed-size accesses, and these are used by systems software and (to some degree) exposed at the C/C++ language level. A semantic foundation for software, therefore, has to address them.
We investigate the mixed-size behaviour of ARMv8 and IBM POWER architectures and implementations: by experiment, by developing semantic models, by testing the correspondence between these, and by discussion with ARM and IBM staff. This turns out to be surprisingly subtle, and on the way we have to revisit the fundamental concepts of coherence and sequential consistency, which change in this setting. In particular, we show that adding a memory barrier between each instruction does not restore sequential consistency. We go on to extend the C/C++11 model to support non-atomic mixed-size memory accesses.
This is a necessary step towards semantics for real-world shared-memory concurrent code, beyond litmus tests.
@InProceedings{POPL17p429,
author = {Shaked Flur and Susmit Sarkar and Christopher Pulte and Kyndylan Nienhuis and Luc Maranget and Kathryn E. Gray and Ali Sezgin and Mark Batty and Peter Sewell},
title = {Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {429--442},
doi = {10.1145/3009837.3009839},
year = {2017},
}


Scherer, Gabriel 
POPL '17: "Deciding Equivalence with ..."
Deciding Equivalence with Sums and the Empty Type
Gabriel Scherer (Northeastern University, USA) The logical technique of focusing can be applied to the λ-calculus; in a simple type system with atomic types and negative type formers (functions, products, the unit type), its normal forms coincide with βη-normal forms. Introducing a saturation phase gives a notion of quasi-normal forms in the presence of positive types (sum types and the empty type). This rich structure lets us prove the decidability of βη-equivalence in the presence of the empty type, the fact that it coincides with contextual equivalence, and with set-theoretic equality in all finite models.

Scully, Ziv 
POPL '17: "A Program Optimization for ..."
A Program Optimization for Automatic Database Result Caching
Ziv Scully and Adam Chlipala (Carnegie Mellon University, USA; Massachusetts Institute of Technology, USA)
Most popular Web applications rely on persistent databases based on languages like SQL for declarative specification of data models and the operations that read and modify them. As applications scale up in user base, they often face challenges responding quickly enough to the high volume of requests. A common aid is caching of database results in the application's memory space, taking advantage of program-specific knowledge of which caching schemes are sound and useful, embodied in handwritten modifications that make the program less maintainable. These modifications also require nontrivial reasoning about the read-write dependencies across operations. In this paper, we present a compiler optimization that automatically adds sound SQL caching to Web applications coded in the Ur/Web domain-specific functional language, with no modifications required to source code. We use a custom cache implementation that supports concurrent operations without compromising the transactional semantics of the database abstraction. Through experiments with microbenchmarks and production Ur/Web applications, we show that our optimization in many cases enables an easy doubling or more of an application's throughput, requiring nothing more than passing an extra command-line flag to the compiler.
@InProceedings{POPL17p271,
author = {Ziv Scully and Adam Chlipala},
title = {A Program Optimization for Automatic Database Result Caching},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {271--284},
doi = {10.1145/3009837.3009891},
year = {2017},
}
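The caching scheme's essential invariant, that a cached query result must be dropped whenever a table it read is written, can be sketched with a toy cache. This is illustrative Python only; Ur/Web's compiler infers the read-write dependencies automatically and handles concurrency and transactional semantics, which this sketch ignores.

```python
class QueryCache:
    """Toy sketch: cache SELECT results keyed by query text, invalidating
    on writes to any table the query read."""
    def __init__(self, run_query):
        self.run_query = run_query  # fallback that actually hits the database
        self.cache = {}             # query text -> cached result
        self.readers = {}           # table name -> set of cached query texts

    def select(self, sql, tables):
        if sql not in self.cache:
            self.cache[sql] = self.run_query(sql)
            for t in tables:
                self.readers.setdefault(t, set()).add(sql)
        return self.cache[sql]

    def write(self, table):
        # A write may change any cached result that read this table.
        for sql in self.readers.pop(table, set()):
            self.cache.pop(sql, None)

calls = []
def db(sql):
    calls.append(sql)
    return f"rows for {sql}"

c = QueryCache(db)
c.select("SELECT * FROM users", {"users"})
c.select("SELECT * FROM users", {"users"})   # served from cache: no new db call
assert len(calls) == 1
c.write("users")                             # invalidates the cached result
c.select("SELECT * FROM users", {"users"})   # re-runs the query
assert len(calls) == 2
```

Doing this by hand per query is exactly the unmaintainable reasoning the paper's compiler optimization automates.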


Sekiyama, Taro 
POPL '17: "Stateful Manifest Contracts ..."
Stateful Manifest Contracts
Taro Sekiyama and Atsushi Igarashi (IBM Research, Japan; Kyoto University, Japan) This paper studies hybrid contract verification for an imperative higher-order language based on a so-called manifest contract system. In manifest contract systems, contracts are part of static types, and contract verification is hybrid in the sense that some contracts are verified statically, typically by subtyping, while others are verified dynamically by casts. It is, however, not trivial to extend existing manifest contract systems, which have been designed mostly for pure functional languages, to imperative features, mainly because they lack the flow-sensitivity needed to verify imperative programs statically. We develop an imperative higher-order manifest contract system λ_{ref}^{H} for flow-sensitive hybrid contract verification. We introduce a computational variant of Nanevski et al.'s Hoare types, which are flow-sensitive types that represent pre- and postconditions of impure computation. Our Hoare types are computational in the sense that pre- and postconditions are given as Booleans in the same language as programs, so they are dynamically verifiable. λ_{ref}^{H} also supports refinement types, as in existing manifest contract systems, to describe flow-insensitive, state-independent contracts of pure computation. While it is desirable that any—possibly state-manipulating—predicate can be used in contracts, abuse of stateful operations would break the system. To control stateful operations in contracts, we introduce a region-based effect system, which allows contracts in refinement types and computational Hoare types to manipulate state, as long as they are observationally pure and read-only, respectively. We show that dynamic contract checking in our calculus is consistent with static typing, in the sense that a final result obtained without dynamic contract violations satisfies the contracts in its static type. 
In particular, this means that the state after a stateful computation satisfies its postcondition. As in some prior manifest contract systems, static contract verification in this work is “post facto”: we first define our manifest contract system so that all contracts are checked at run time, formalize conditions under which dynamic checks can be removed safely, and show that programs with and without such removable checks are contextually equivalent. We also apply the idea of post facto verification to region-based local reasoning, inspired by the frame rule of Separation Logic. 
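The run-time half of such a system can be pictured with a small, hypothetical sketch (names like `cast` and `ContractViolation` are illustrative, not taken from the paper): a cast into a refinement type {x : T | pred(x)} checks the predicate, itself ordinary program code, and blames a label on failure; static verification would then justify removing such checks.

```python
# Hypothetical sketch of a dynamic refinement-type cast with blame labels.
class ContractViolation(Exception):
    def __init__(self, label):
        super().__init__(f"contract violated at {label}")
        self.label = label

def cast(value, predicate, label):
    """Run-time check of a refinement {x : T | predicate(x)}.

    In a manifest contract system this check is part of the semantics;
    "post facto" static verification proves when it can be safely removed.
    """
    if not predicate(value):
        raise ContractViolation(label)
    return value

positive = lambda n: n > 0
assert cast(3, positive, "l1") == 3   # 3 inhabits {x : int | x > 0}
```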

Sewell, Peter 
POPL '17: "Mixed-Size Concurrency: ARM, ..."
Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC
Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis, Luc Maranget, Kathryn E. Gray, Ali Sezgin, Mark Batty, and Peter Sewell (University of Cambridge, UK; University of St. Andrews, UK; Inria, France; University of Kent, UK)
Previous work on the semantics of relaxed shared-memory concurrency has only considered the case in which each load reads the data of exactly one store. In practice, however, multiprocessors support mixed-size accesses, and these are used by systems software and (to some degree) exposed at the C/C++ language level. A semantic foundation for software, therefore, has to address them.
We investigate the mixed-size behaviour of ARMv8 and IBM POWER architectures and implementations: by experiment, by developing semantic models, by testing the correspondence between these, and by discussion with ARM and IBM staff. This turns out to be surprisingly subtle, and on the way we have to revisit the fundamental concepts of coherence and sequential consistency, which change in this setting. In particular, we show that adding a memory barrier between each instruction does not restore sequential consistency. We go on to extend the C/C++11 model to support non-atomic mixed-size memory accesses.
This is a necessary step towards semantics for real-world shared-memory concurrent code, beyond litmus tests.
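The litmus-test style of reasoning behind such work can be illustrated with a small, hypothetical interleaving enumerator for the classic store-buffering test (single-size accesses only; the encoding is illustrative, not the paper's model):

```python
from itertools import combinations

# Store-buffering (SB) litmus test: under sequential consistency (SC) the
# outcome r1 = r2 = 0 is forbidden, while relaxed architectures such as
# ARM and POWER allow it.
#   Thread 0: x := 1; r1 := y        Thread 1: y := 1; r2 := x
T0 = [("store", "x", 1), ("load", "y", "r1")]
T1 = [("store", "y", 1), ("load", "x", "r2")]

def sc_outcomes(t0, t1):
    """Enumerate every SC interleaving (program order kept per thread)."""
    outcomes = set()
    n = len(t0) + len(t1)
    for picks in combinations(range(n), len(t0)):  # positions for thread 0
        mem = {"x": 0, "y": 0}
        regs = {}
        it0, it1 = iter(t0), iter(t1)
        for i in range(n):
            op, loc, arg = next(it0) if i in picks else next(it1)
            if op == "store":
                mem[loc] = arg
            else:
                regs[arg] = mem[loc]
        outcomes.add((regs["r1"], regs["r2"]))
    return outcomes

# SC admits three outcomes; the relaxed outcome (0, 0) is not one of them.
assert (0, 0) not in sc_outcomes(T0, T1)
```

Mixed-size accesses complicate exactly this picture: once a load can read from parts of several stores, even the notions of coherence and SC used above need revisiting.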
@InProceedings{POPL17p429,
author = {Shaked Flur and Susmit Sarkar and Christopher Pulte and Kyndylan Nienhuis and Luc Maranget and Kathryn E. Gray and Ali Sezgin and Mark Batty and Peter Sewell},
title = {Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {429--442},
doi = {10.1145/3009837.3009839},
year = {2017},
}


Sezgin, Ali 
POPL '17: "Mixed-Size Concurrency: ARM, ..."
Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC
Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis, Luc Maranget, Kathryn E. Gray, Ali Sezgin, Mark Batty, and Peter Sewell (University of Cambridge, UK; University of St. Andrews, UK; Inria, France; University of Kent, UK)
Previous work on the semantics of relaxed shared-memory concurrency has only considered the case in which each load reads the data of exactly one store. In practice, however, multiprocessors support mixed-size accesses, and these are used by systems software and (to some degree) exposed at the C/C++ language level. A semantic foundation for software, therefore, has to address them.
We investigate the mixed-size behaviour of ARMv8 and IBM POWER architectures and implementations: by experiment, by developing semantic models, by testing the correspondence between these, and by discussion with ARM and IBM staff. This turns out to be surprisingly subtle, and on the way we have to revisit the fundamental concepts of coherence and sequential consistency, which change in this setting. In particular, we show that adding a memory barrier between each instruction does not restore sequential consistency. We go on to extend the C/C++11 model to support non-atomic mixed-size memory accesses.
This is a necessary step towards semantics for real-world shared-memory concurrent code, beyond litmus tests.
@InProceedings{POPL17p429,
author = {Shaked Flur and Susmit Sarkar and Christopher Pulte and Kyndylan Nienhuis and Luc Maranget and Kathryn E. Gray and Ali Sezgin and Mark Batty and Peter Sewell},
title = {Mixed-Size Concurrency: ARM, POWER, C/C++11, and SC},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {429--442},
doi = {10.1145/3009837.3009839},
year = {2017},
}


Shan, Chung-chieh 
POPL '17: "Exact Bayesian Inference by ..."
Exact Bayesian Inference by Symbolic Disintegration
Chung-chieh Shan and Norman Ramsey (Indiana University, USA; Tufts University, USA) Bayesian inference, of posterior knowledge from prior knowledge and observed evidence, is typically defined by Bayes’s rule, which says the posterior multiplied by the probability of an observation equals a joint probability. But the observation of a continuous quantity usually has probability zero, in which case Bayes’s rule says only that the unknown times zero is zero. To infer a posterior distribution from a zero-probability observation, the statistical notion of disintegration tells us to specify the observation as an expression rather than a predicate, but does not tell us how to compute the posterior. We present the first method of computing a disintegration from a probabilistic program and an expression of a quantity to be observed, even when the observation has probability zero. Because the method produces an exact posterior term and preserves a semantics in which monadic terms denote measures, it composes with other inference methods in a modular way—without sacrificing accuracy or performance. 
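The disintegration relationship the abstract appeals to can be stated, in generic measure-theoretic notation (not taken from the paper), as the identity

```latex
\xi(dt\,dx) \;=\; \mu(dt)\,\kappa(t,dx),
```

where $\xi$ is the joint measure over the observed quantity $t$ and the latent $x$, $\mu$ is the marginal distribution of the observation, and $\kappa(t,\cdot)$ is the posterior kernel. The identity remains meaningful even when an individual observation $\{t = t_0\}$ has probability zero, which is exactly the regime where a naive reading of Bayes’s rule degenerates.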

Siek, Jeremy G. 
POPL '17: "Big Types in Little Runtime: ..."
Big Types in Little Runtime: Open-World Soundness and Collaborative Blame for Gradual Type Systems
Michael M. Vitousek, Cameron Swords, and Jeremy G. Siek (Indiana University, USA)
Gradual typing combines static and dynamic typing in the same language, offering programmers the error detection and strong guarantees of static types and the rapid prototyping and flexible programming idioms of dynamic types. Many gradually typed languages are implemented by translation into an untyped target language (e.g., Typed Clojure, TypeScript, Gradualtalk, and Reticulated Python). For such languages, it is desirable to support arbitrary interaction between translated code and legacy code in the untyped language while maintaining the type soundness of the translated code. In this paper we formalize this goal in the form of the open-world soundness criterion. We discuss why it is challenging to achieve open-world soundness using the traditional proxy-based approach for higher-order casts; the transient design, however, satisfies open-world soundness. Indeed, we present a formal semantics for the transient design and prove that our semantics satisfies open-world soundness. In this paper we also solve a challenging problem for the transient design: how to provide blame tracking without proxies. We define a semantics for blame and prove the Blame Theorem. We also prove that the Gradual Guarantee holds for this system, ensuring that programs can be evolved freely between static and dynamic typing. Finally, we demonstrate that the runtime overhead of the transient approach is low in the context of Reticulated Python, an implementation of gradual typing for Python.
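The transient strategy can be pictured with a minimal, hypothetical Python sketch (the names `check` and `BlameError` are illustrative, not Reticulated Python's API): instead of wrapping values in proxies, lightweight tag checks carrying blame labels are inserted at use sites.

```python
# Hypothetical sketch of transient-style checks in a gradually typed setting.
class BlameError(Exception):
    def __init__(self, label):
        super().__init__(f"blame: {label}")
        self.label = label

def check(value, expected_type, label):
    """Transient-style check: verify only the top-level type tag, no proxy."""
    if not isinstance(value, expected_type):
        raise BlameError(label)
    return value

# Typed code declaring f : int -> int, translated with transient checks.
def apply_typed(f, x):
    check(x, int, "argument of f")            # check at the call boundary
    return check(f(x), int, "result of f")    # check the returned value

# Untyped legacy code may pass in any function; the checks catch violations,
# and the label identifies which party to blame.
assert apply_typed(lambda n: n + 1, 41) == 42
```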
@InProceedings{POPL17p762,
author = {Michael M. Vitousek and Cameron Swords and Jeremy G. Siek},
title = {Big Types in Little Runtime: Open-World Soundness and Collaborative Blame for Gradual Type Systems},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {762--774},
doi = {10.1145/3009837.3009849},
year = {2017},
}
POPL '17: "Automatically Generating the ..."
Automatically Generating the Dynamic Semantics of Gradually Typed Languages
Matteo Cimini and Jeremy G. Siek (Indiana University, USA)
Many language designers have adopted gradual typing. However, open questions remain regarding how to gradualize languages. Cimini and Siek (2016) created a methodology and algorithm to automatically generate the type system of a gradually typed language from a fully static version of the language. In this paper, we address the next challenge: how to automatically generate the dynamic semantics of gradually typed languages. Such languages typically use an intermediate language with explicit casts. Our first result is a methodology for generating the syntax, type system, and dynamic semantics of the intermediate language with casts. Next, we present an algorithm that formalizes and automates the methodology, given a language definition as input. We show that our approach is general enough to automatically gradualize several languages, including features such as polymorphism, recursive types, and exceptions. We prove that our algorithm produces languages that satisfy the key correctness criteria of gradual typing. Finally, we implement the algorithm, generating complete specifications of gradually typed languages in lambdaProlog, including executable interpreters.
@InProceedings{POPL17p789,
author = {Matteo Cimini and Jeremy G. Siek},
title = {Automatically Generating the Dynamic Semantics of Gradually Typed Languages},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {789--803},
doi = {10.1145/3009837.3009863},
year = {2017},
}


Signoles, Julien 
POPL '17: "Hypercollecting Semantics ..."
Hypercollecting Semantics and Its Application to Static Analysis of Information Flow
Mounir Assaf, David A. Naumann, Julien Signoles, Éric Totel, and Frédéric Tronel (Stevens Institute of Technology, USA; CEA LIST, France; CentraleSupélec, France) We show how static analysis for secure information flow can be expressed and proved correct entirely within the framework of abstract interpretation. The key idea is to define a Galois connection that directly approximates the hyperproperty of interest. To enable use of such Galois connections, we introduce a fixpoint characterisation of hypercollecting semantics, i.e., a “set of sets” transformer. This makes it possible to systematically derive static analyses for hyperproperties entirely within the calculational framework of abstract interpretation. We evaluate this technique by deriving example static analyses. For qualitative information flow, we derive a dependence analysis similar to the logic of Amtoft and Banerjee (SAS’04) and the type system of Hunt and Sands (POPL’06). For quantitative information flow, we derive a novel cardinality analysis that bounds the leakage conveyed by a program instead of simply deciding whether it exists. This encompasses problems that are hypersafety but not k-safety. We put the framework to use and introduce variations that achieve precision rivalling the most recent and precise static analyses for information flow. 
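The flavor of dependence analysis the abstract compares against can be sketched hypothetically (this is an illustration of the general idea, not the paper's derived analysis): for straight-line assignments, over-approximate which inputs each variable may depend on.

```python
# Hypothetical minimal dependence analysis in the spirit of
# Amtoft-Banerjee / Hunt-Sands: track, per variable, the set of inputs it
# may depend on (an over-approximation, as a static analysis would compute).
def dependences(program, inputs):
    deps = {v: {v} for v in inputs}
    for target, used in program:          # ("z", ["x", "y"]) means z := x+y
        deps[target] = set().union(*[deps.get(u, {u}) for u in used]) if used else set()
    return deps

# t := secret; out := t + public  --  "out" may reveal "secret".
prog = [("t", ["secret"]), ("out", ["t", "public"])]
d = dependences(prog, ["secret", "public"])
assert d["out"] == {"secret", "public"}
assert d["t"] == {"secret"}
```

A cardinality analysis, as in the paper, would go further and bound *how much* of the secret can leak, not just whether a dependence exists.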

Silva, Alexandra 
POPL '17: "Cantor Meets Scott: Semantic ..."
Cantor Meets Scott: Semantic Foundations for Probabilistic Networks
Steffen Smolka, Praveen Kumar, Nate Foster, Dexter Kozen, and Alexandra Silva (Cornell University, USA; University College London, UK)
ProbNetKAT is a probabilistic extension of NetKAT with a denotational semantics based on Markov kernels. The language is expressive enough to generate continuous distributions, which raises the question of how to compute effectively in the language. This paper gives a new characterization of ProbNetKAT’s semantics using domain theory, which provides the foundation needed to build a practical implementation. We show how to use the semantics to approximate the behavior of arbitrary ProbNetKAT programs using distributions with finite support. We develop a prototype implementation and show how to use it to solve a variety of problems including characterizing the expected congestion induced by different routing schemes and reasoning probabilistically about reachability in a network.
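The approximation claim has a familiar shape that can be illustrated with a hypothetical toy (geometric distributions, not ProbNetKAT programs): a distribution with infinite support is approximated by an increasing chain of finite-support truncations whose mass converges from below, which is the kind of monotone approximation a domain-theoretic semantics justifies.

```python
# Hypothetical illustration: finite-support truncations of a geometric(p)
# distribution form an increasing chain approximating the full distribution.
def geometric_truncation(p, n):
    """First n outcomes of a geometric(p) distribution and their total mass."""
    dist = {k: (1 - p) ** (k - 1) * p for k in range(1, n + 1)}
    return dist, sum(dist.values())

_, mass4 = geometric_truncation(0.5, 4)
_, mass8 = geometric_truncation(0.5, 8)
assert mass4 < mass8 < 1                    # increasing chain, bounded by 1
assert abs(mass8 - (1 - 0.5 ** 8)) < 1e-12  # mass of the truncation
```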
@InProceedings{POPL17p557,
author = {Steffen Smolka and Praveen Kumar and Nate Foster and Dexter Kozen and Alexandra Silva},
title = {Cantor Meets Scott: Semantic Foundations for Probabilistic Networks},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {557--571},
doi = {10.1145/3009837.3009843},
year = {2017},
}
POPL '17: "Learning Nominal Automata ..."
Learning Nominal Automata
Joshua Moerman, Matteo Sammartino, Alexandra Silva, Bartek Klin, and Michał Szynwelski (Radboud University Nijmegen, Netherlands; University College London, UK; University of Warsaw, Poland)
We present an Angluin-style algorithm to learn nominal automata, which are acceptors of languages over infinite (structured) alphabets. The abstract approach we take allows us to seamlessly extend known variations of the algorithm to this new setting. In particular, we can learn a subclass of nominal nondeterministic automata. An implementation using a recently developed Haskell library for nominal computation is provided for preliminary experiments.
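The kind of acceptor involved can be pictured with a hypothetical one-register machine over an infinite alphabet; this is only an illustration of why finite DFAs do not suffice, not the paper's nominal-automaton model.

```python
# Hypothetical sketch: an acceptor for "the last letter equals the first"
# over an infinite alphabet. A finite DFA cannot recognize this language,
# because it would need a state per possible first letter; a register (or,
# more abstractly, a nominal automaton) handles it with one stored name.
def accepts(word):
    if not word:
        return False
    register = word[0]           # store the first symbol in a register
    return word[-1] == register  # compare the last symbol against it

assert accepts([3, 1, 4, 3])
assert not accepts(["a", "b"])
assert accepts([42])
```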
@InProceedings{POPL17p613,
author = {Joshua Moerman and Matteo Sammartino and Alexandra Silva and Bartek Klin and Michał Szynwelski},
title = {Learning Nominal Automata},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {613--625},
doi = {10.1145/3009837.3009879},
year = {2017},
}


Singh, Gagandeep 
POPL '17: "Fast Polyhedra Abstract Domain ..."
Fast Polyhedra Abstract Domain
Gagandeep Singh, Markus Püschel, and Martin Vechev (ETH Zurich, Switzerland) Numerical abstract domains are an important ingredient of modern static analyzers used for verifying critical program properties (e.g., absence of buffer overflow or memory safety). Among the many numerical domains introduced over the years, Polyhedra is the most expressive one, but also the most expensive: it has worst-case exponential space and time complexity. As a consequence, static analysis with the Polyhedra domain is thought to be impractical when applied to large-scale, real-world programs. In this paper, we present a new approach and a complete implementation for speeding up Polyhedra domain analysis. Our approach does not lose precision, and for many practical cases, is orders of magnitude faster than state-of-the-art solutions. The key insight underlying our work is that polyhedra arising during analysis can usually be kept decomposed, thus considerably reducing the overall complexity. We first present the theory underlying our approach, which identifies the interaction between partitions of variables and domain operators. Based on the theory, we develop new algorithms for these operators that work with decomposed polyhedra. We implemented these algorithms using the same interface as existing libraries, thus enabling static analyzers to use our implementation with little effort. In our evaluation, we analyze large benchmarks from the popular software verification competition, including Linux device drivers with over 50K lines of code. Our experimental results demonstrate massive gains in both space and time: we show end-to-end speedups of two to five orders of magnitude compared to state-of-the-art Polyhedra implementations, as well as significant memory gains, on all larger benchmarks. In fact, in many cases our analysis terminates in seconds where prior code runs out of memory or times out after 4 hours. 
We believe this work is an important step in making the Polyhedra abstract domain both feasible and practically usable for handling large, real-world programs. 
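The decomposition idea can be sketched hypothetically (this union-find grouping is an illustration of the partitioning step, not the paper's algorithms): constraints that share no variables describe independent blocks, so each block's polyhedron can be manipulated separately.

```python
# Hypothetical sketch: partition constraints into independent blocks by
# the variables they mention, so each block can be handled as a separate,
# smaller polyhedron.
def partition_constraints(constraints):
    # constraints: list of frozensets of the variable names each mentions
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for vars_ in constraints:
        vs = list(vars_)
        for v in vs[1:]:
            union(vs[0], v)                 # variables in one constraint interact

    blocks = {}
    for c in constraints:
        blocks.setdefault(find(next(iter(c))), []).append(c)
    return list(blocks.values())

# x,y interact; z,w interact; the two blocks stay decomposed.
cs = [frozenset({"x", "y"}), frozenset({"y"}), frozenset({"z", "w"})]
assert len(partition_constraints(cs)) == 2
```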

Smaragdakis, Yannis 
POPL '17: "Stream Fusion, to Completeness ..."
Stream Fusion, to Completeness
Oleg Kiselyov, Aggelos Biboudis, Nick Palladinos, and Yannis Smaragdakis (Tohoku University, Japan; University of Athens, Greece; Nessos IT, Greece)
Stream processing is mainstream (again): Widely-used stream libraries are now available for virtually all modern OO and functional languages, from Java to C# to Scala to OCaml to Haskell. Yet expressivity and performance are still lacking. For instance, the popular, well-optimized Java 8 streams do not support the zip operator and are still an order of magnitude slower than hand-written loops.
We present the first approach that represents the full generality of stream processing and eliminates overheads, via the use of staging. It is based on an unusually rich semantic model of stream interaction. We support any combination of zipping, nesting (or flat-mapping), subranging, filtering, mapping—of finite or infinite streams. Our model captures idiosyncrasies that a programmer uses in optimizing stream pipelines, such as rate differences and the choice of “for” vs. “while” loops. Our approach delivers hand-written-like code, but automatically. It explicitly avoids the reliance on black-box optimizers and sufficiently-smart compilers, offering the highest, guaranteed and portable performance.
Our approach relies on high-level concepts that are then readily mapped into an implementation. Accordingly, we have two distinct implementations: an OCaml stream library, staged via MetaOCaml, and a Scala library for the JVM, staged via LMS. In both cases, we derive libraries richer and simultaneously many tens of times faster than past work. We greatly exceed in performance the standard stream libraries available in Java, Scala, and OCaml, including the well-optimized Java 8 streams.
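The before/after that staging achieves can be pictured with a hypothetical Python sketch (the paper's implementations use MetaOCaml and LMS, not Python): a composable pipeline of stream combinators versus the single fused loop that staging would generate.

```python
# A generator-based pipeline: composable, but with per-element overhead
# from each combinator's suspension and resumption.
def pipeline(xs, ys):
    zipped = zip(xs, ys)                          # zip two streams
    mapped = (a * b for a, b in zipped)           # map
    filtered = (v for v in mapped if v % 2 == 0)  # filter
    return sum(filtered)                          # fold

# The single fused loop a staged library would emit: same result, no
# intermediate stream objects.
def fused(xs, ys):
    total = 0
    for a, b in zip(xs, ys):
        v = a * b
        if v % 2 == 0:
            total += v
    return total

xs, ys = range(1, 6), range(6, 11)
assert pipeline(xs, ys) == fused(xs, ys)
```

The point of the paper is that the fused form is produced automatically and guaranteed, for the full combinator vocabulary including zip, rather than relying on an optimizer to discover it.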
@InProceedings{POPL17p285,
author = {Oleg Kiselyov and Aggelos Biboudis and Nick Palladinos and Yannis Smaragdakis},
title = {Stream Fusion, to Completeness},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {285--299},
doi = {10.1145/3009837.3009880},
year = {2017},
}


Smolka, Steffen 
POPL '17: "Cantor Meets Scott: Semantic ..."
Cantor Meets Scott: Semantic Foundations for Probabilistic Networks
Steffen Smolka, Praveen Kumar, Nate Foster, Dexter Kozen, and Alexandra Silva (Cornell University, USA; University College London, UK)
ProbNetKAT is a probabilistic extension of NetKAT with a denotational semantics based on Markov kernels. The language is expressive enough to generate continuous distributions, which raises the question of how to compute effectively in the language. This paper gives a new characterization of ProbNetKAT’s semantics using domain theory, which provides the foundation needed to build a practical implementation. We show how to use the semantics to approximate the behavior of arbitrary ProbNetKAT programs using distributions with finite support. We develop a prototype implementation and show how to use it to solve a variety of problems including characterizing the expected congestion induced by different routing schemes and reasoning probabilistically about reachability in a network.
@InProceedings{POPL17p557,
author = {Steffen Smolka and Praveen Kumar and Nate Foster and Dexter Kozen and Alexandra Silva},
title = {Cantor Meets Scott: Semantic Foundations for Probabilistic Networks},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {557--571},
doi = {10.1145/3009837.3009843},
year = {2017},
}


Solovyev, Alexey 
POPL '17: "Rigorous Floating-Point Mixed-Precision ..."
Rigorous Floating-Point Mixed-Precision Tuning
Wei-Fan Chiang, Mark Baranowski, Ian Briggs, Alexey Solovyev, Ganesh Gopalakrishnan, and Zvonimir Rakamarić (University of Utah, USA)
Virtually all real-valued computations are carried out using floating-point data types and operations. The precision of these data types must be set with the goal of reducing the overall roundoff error while also improving performance. Often, a mixed-precision allocation achieves this optimum; unfortunately, there are no techniques available to compute such allocations and conservatively meet a given error target across all program inputs. In this work, we present a rigorous approach to precision allocation based on formal analysis via Symbolic Taylor Expansions, and error analysis based on interval functions. This approach is implemented in an automated tool called FPTuner that generates and solves a quadratically constrained quadratic program to obtain a precision-annotated version of the given expression. FPTuner automatically introduces all the requisite precision up- and down-casting operations. It also allows users to flexibly control precision allocation, using constraints to cap the number of high-precision operators as well as to group operators that must receive the same precision, to facilitate vectorization. We evaluate FPTuner by tuning several benchmarks and measuring the proportion of lower-precision operators allocated as we increase the error threshold. We also measure the reduction in energy consumption resulting from executing mixed-precision tuned code on a real hardware platform. We observe significant energy savings in response to mixed-precision tuning, but also observe situations where unexpected compiler behaviors thwart intended optimizations.
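The precision/accuracy trade-off that tuning navigates is easy to demonstrate with a hypothetical sketch (using `struct` to emulate binary32 rounding in Python; FPTuner itself works on expressions, not loops like this):

```python
import struct

def to_f32(x):
    """Round a Python float (binary64) to binary32, illustrating the
    precision lost when an operator is allocated a lower-precision type."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Summing many small terms: single precision accumulates visible error.
n = 10 ** 5
exact = n * 0.1                          # reference value (binary64)

acc32 = 0.0
for _ in range(n):
    acc32 = to_f32(acc32 + to_f32(0.1))  # every operation rounded to f32
err32 = abs(acc32 - exact)

acc64 = sum(0.1 for _ in range(n))       # same sum in binary64
err64 = abs(acc64 - exact)

assert err32 > err64                     # lower precision costs accuracy
```

A mixed-precision tuner's job is to find where such low-precision operations are safe relative to a stated error target, rather than forbidding them everywhere.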
@InProceedings{POPL17p300,
author = {Wei-Fan Chiang and Mark Baranowski and Ian Briggs and Alexey Solovyev and Ganesh Gopalakrishnan and Zvonimir Rakamarić},
title = {Rigorous Floating-Point Mixed-Precision Tuning},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {300--315},
doi = {10.1145/3009837.3009846},
year = {2017},
}


Sorensen, Tyler 
POPL '17: "Automatically Comparing Memory ..."
Automatically Comparing Memory Consistency Models
John Wickerson, Mark Batty, Tyler Sorensen, and George A. Constantinides (Imperial College London, UK; University of Kent, UK) A memory consistency model (MCM) is the part of a programming language or computer architecture specification that defines which values can legally be read from shared memory locations. Because MCMs take into account various optimisations employed by architectures and compilers, they are often complex and counterintuitive, which makes them challenging to design and to understand. We identify four tasks involved in designing and understanding MCMs: generating conformance tests, distinguishing two MCMs, checking compiler optimisations, and checking compiler mappings. We show that all four tasks are instances of a general constraint-satisfaction problem to which the solution is either a program or a pair of programs. Although this problem is intractable for automatic solvers when phrased over programs directly, we show how to solve analogous constraints over program executions, and then construct programs that satisfy the original constraints. Our technique, which is implemented in the Alloy modelling framework, is illustrated on several software- and architecture-level MCMs, both axiomatically and operationally defined. We automatically recreate several known results, often in a simpler form, including: distinctions between variants of the C11 MCM; a failure of the ‘SC-DRF guarantee’ in an early C11 draft; that x86 is ‘multi-copy atomic’ and Power is not; bugs in common C11 compiler optimisations; and bugs in a compiler mapping from OpenCL to AMD-style GPUs. We also use our technique to develop and validate a new MCM for NVIDIA GPUs that supports a natural mapping from OpenCL. 

Srikanth, Akhilesh 
POPL '17: "Complexity Verification using ..."
Complexity Verification using Guided Theorem Enumeration
Akhilesh Srikanth, Burak Sahin, and William R. Harris (Georgia Institute of Technology, USA)
Determining if a given program satisfies a given bound on the amount of resources that it may use is a fundamental problem with critical practical applications. Conventional automatic verifiers for safety properties cannot be applied to address this problem directly because such verifiers target properties expressed in decidable theories; however, many practical bounds are expressed in nonlinear theories, which are undecidable.
In this work, we introduce an automatic verification algorithm, CAMPY, that determines if a given program P satisfies a given resource bound B, which may be expressed using polynomial, exponential, and logarithmic terms. The key technical contribution behind our verifier is an interpolating theorem prover for nonlinear theories that lazily learns a sufficiently accurate approximation of those theories by selectively grounding theorems of the nonlinear theory that are relevant to proving that P satisfies B. To evaluate CAMPY, we implemented it to target Java Virtual Machine bytecode. We applied CAMPY to verify whether over 20 solutions submitted for programming problems hosted on popular online coding platforms satisfy their expected complexity bounds.
@InProceedings{POPL17p639,
author = {Akhilesh Srikanth and Burak Sahin and William R. Harris},
title = {Complexity Verification using Guided Theorem Enumeration},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {639--652},
doi = {10.1145/3009837.3009864},
year = {2017},
}


Strub, Pierre-Yves 
POPL '17: "Coupling Proofs Are Probabilistic ..."
Coupling Proofs Are Probabilistic Product Programs
Gilles Barthe, Benjamin Grégoire, Justin Hsu, and Pierre-Yves Strub (IMDEA Software Institute, Spain; Inria, France; University of Pennsylvania, USA; École Polytechnique, France)
Couplings are a powerful mathematical tool for reasoning about pairs of probabilistic processes. Recent developments in formal verification identify a close connection between couplings and pRHL, a relational program logic motivated by applications to provable security, enabling formal construction of couplings from the probability theory literature. However, existing work using pRHL merely shows existence of a coupling and does not give a way to prove quantitative properties about the coupling, needed to reason about mixing and convergence of probabilistic processes. Furthermore, pRHL is inherently incomplete, and is not able to capture some advanced forms of couplings such as shift couplings. We address both problems as follows.
First, we define an extension of pRHL, called xpRHL, which explicitly constructs the coupling in a pRHL derivation in the form of a probabilistic product program that simulates two correlated runs of the original program. Existing verification tools for probabilistic programs can then be directly applied to the probabilistic product to prove quantitative properties of the coupling. Second, we equip xpRHL with a new rule for while loops, where reasoning can freely mix synchronized and unsynchronized loop iterations. Our proof rule can capture examples of shift couplings, and the logic is relatively complete for deterministic programs.
We show soundness of xpRHL and use it to analyze two classes of examples. First, we verify rapid mixing using different tools from coupling: standard coupling, shift coupling, and path coupling, a compositional principle for combining local couplings into a global coupling. Second, we verify (approximate) equivalence between a source and an optimized program for several instances of loop optimizations from the literature.
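The coupling-as-product-program idea can be illustrated with a hypothetical toy (two random walks; the code is an illustration of sharing randomness, not the paper's logic): one source of randomness drives both processes, so a relational property of the pair holds deterministically in the combined program.

```python
import random

# Hypothetical illustration of a coupling as a single "product program":
# one sample per step drives BOTH walks, correlating their runs.
def coupled_walks(start_a, start_b, steps, seed=0):
    rng = random.Random(seed)
    a, b = start_a, start_b
    for _ in range(steps):
        step = rng.choice([-1, 1])   # shared randomness: the coupling
        a += step
        b += step
    return a, b

# Under this coupling the difference a - b is invariant, a fact one can
# check on the product program but not from the two marginal
# distributions alone.
a, b = coupled_walks(4, 0, steps=100)
assert a - b == 4
```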
@InProceedings{POPL17p161,
author = {Gilles Barthe and Benjamin Grégoire and Justin Hsu and Pierre-Yves Strub},
title = {Coupling Proofs Are Probabilistic Product Programs},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {161--174},
doi = {10.1145/3009837.3009896},
year = {2017},
}


Su, Zhendong 
POPL '17: "Context-Sensitive Data-Dependence ..."
Context-Sensitive Data-Dependence Analysis via Linear Conjunctive Language Reachability
Qirun Zhang and Zhendong Su (University of California at Davis, USA) Many program analysis problems can be formulated as graph reachability problems. In the literature, context-free language (CFL) reachability has been the most popular formulation and can be computed in subcubic time. Context-sensitive data-dependence analysis is a fundamental abstraction that can express a broad range of program analysis problems. It essentially describes an interleaved matched-parenthesis language reachability problem. The language is not context-free, and the problem is well known to be undecidable. In practice, many program analyses adopt CFL-reachability to exactly model the matched parentheses for either context-sensitivity or structure-transmitted data-dependence, but not both. Thus, the CFL-reachability formulation for context-sensitive data-dependence analysis is inherently an approximation. To support more precise and scalable analyses, this paper introduces linear conjunctive language (LCL) reachability, a new, expressive class of graph reachability. LCL not only contains the interleaved matched-parenthesis language, but is also closed under all set-theoretic operations. Given a graph with n nodes and m edges, we propose an O(mn) time approximation algorithm for solving all-pairs LCL-reachability, which is asymptotically better than known CFL-reachability algorithms. Our formulation and algorithm offer a new perspective on attacking the aforementioned undecidable problem—the LCL-reachability formulation is exact, while the LCL-reachability algorithm yields a sound approximation. We have applied the LCL-reachability framework to two existing client analyses. The experimental results show that the LCL-reachability framework is both more precise and more scalable than the traditional CFL-reachability framework. This paper opens up the opportunity to exploit LCL-reachability in program analysis. 
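The baseline CFL-reachability formulation the paper improves on can be sketched hypothetically for a single kind of matched parenthesis (grammar M → ε | "(" M ")" | M M); the fixpoint loop below is an illustration, not the paper's O(mn) LCL algorithm.

```python
# Hypothetical CFL-reachability solver for the Dyck grammar over one
# parenthesis kind. An M-edge (u, v) means v is reachable from u along a
# path whose edge labels form a balanced string.
def cfl_reachability(nodes, opens, closes):
    M = {(u, u) for u in nodes}                   # M -> eps
    changed = True
    while changed:
        changed = False
        # M -> "(" M ")"
        for a, b in opens:
            for c, d in closes:
                if (b, c) in M and (a, d) not in M:
                    M.add((a, d)); changed = True
        # M -> M M  (transitivity)
        for a, b in list(M):
            for b2, c in list(M):
                if b2 == b and (a, c) not in M:
                    M.add((a, c)); changed = True
    return M

# Tiny graph: 0 -(-> 1 -(-> 2 -)-> 3 -)-> 4
nodes = range(5)
opens = {(0, 1), (1, 2)}
closes = {(2, 3), (3, 4)}
M = cfl_reachability(nodes, opens, closes)
assert (1, 3) in M       # "()" is balanced
assert (0, 4) in M       # "(())" is balanced
assert (0, 3) not in M   # "((" followed by ")" is not
```

The undecidable problem the paper targets interleaves *two* such parenthesis disciplines (calls/returns and field reads/writes), which no single CFL can express.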

Subramanian, Kausik 
POPL '17: "Genesis: Synthesizing Forwarding ..."
Genesis: Synthesizing Forwarding Tables in Multi-tenant Networks
Kausik Subramanian, Loris D'Antoni, and Aditya Akella (University of Wisconsin-Madison, USA)
Operators in multi-tenant cloud datacenters require support for diverse and complex end-to-end policies, such as reachability, middlebox traversals, isolation, traffic engineering, and network resource management. We present Genesis, a datacenter network management system which allows policies to be specified in a declarative manner without explicitly programming the network data plane. Genesis tackles the problem of enforcing policies by synthesizing switch forwarding tables. It uses the formal foundations of constraint solving in combination with fast off-the-shelf SMT solvers. To improve synthesis performance, Genesis incorporates a novel search strategy that uses regular expressions to specify properties that leverage the structure of datacenter networks, and a divide-and-conquer synthesis procedure which exploits the structure of policy relationships. We have prototyped Genesis, and conducted experiments with a variety of workloads on real-world topologies to demonstrate its performance.
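The policy-enforcement problem can be pictured with a hypothetical brute-force stand-in for the SMT-based search (topology and policy below are invented for illustration): enumerate candidate paths and keep those satisfying a reachability-with-waypoint policy.

```python
# Hypothetical sketch: select forwarding paths satisfying a middlebox
# (waypoint) policy in a tiny topology, standing in for constraint-based
# synthesis of forwarding tables.
def paths(graph, src, dst, path=None):
    """Yield all simple paths from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:
            yield from paths(graph, nxt, dst, path)

topo = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

# Policy: traffic from A to D must traverse the middlebox at C.
valid = [p for p in paths(topo, "A", "D") if "C" in p]
assert valid == [["A", "C", "D"]]
```

Genesis's contribution is making this search scale: regular-expression-guided search and divide-and-conquer over policy structure replace naive enumeration.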
@InProceedings{POPL17p572,
author = {Kausik Subramanian and Loris D'Antoni and Aditya Akella},
title = {Genesis: Synthesizing Forwarding Tables in Multi-tenant Networks},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {572--585},
doi = {10.1145/3009837.3009845},
year = {2017},
}


Svendsen, Kasper 
POPL '17: "A Relational Model of Types-and-Effects ..."
A Relational Model of Types-and-Effects in Higher-Order Concurrent Separation Logic
Morten Krogh-Jespersen, Kasper Svendsen, and Lars Birkedal (Aarhus University, Denmark; University of Cambridge, UK)
Recently we have seen a renewed interest in programming languages that tame the complexity of state and concurrency through refined type systems with more finegrained control over effects. In addition to simplifying reasoning and eliminating whole classes of bugs, statically tracking effects opens the door to advanced compiler optimizations.
In this paper we present a relational model of a typeandeffect system for a higherorder, concurrent program ming language. The model precisely captures the semantic invariants expressed by the effect annotations. We demonstrate that these invariants are strong enough to prove advanced program transformations, including automatic parallelization of expressions with suitably disjoint effects. The model also supports refinement proofs between abstract data types implementations with different internal data representations, including proofs that finegrained concurrent algorithms refine their coarsegrained counterparts. This is the first model for such an expressive language that supports both effectbased optimizations and data abstraction.
The logical relation is defined in Iris, a state-of-the-art higher-order concurrent separation logic. This greatly simplifies proving well-definedness of the logical relation and also provides us with a powerful logic for reasoning in the model.
@InProceedings{POPL17p218,
author = {Morten Krogh-Jespersen and Kasper Svendsen and Lars Birkedal},
title = {A Relational Model of Types-and-Effects in Higher-Order Concurrent Separation Logic},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {218--231},
doi = {10.1145/3009837.3009877},
year = {2017},
}


Swamy, Nikhil 
POPL '17: "Dijkstra Monads for Free ..."
Dijkstra Monads for Free
Danel Ahman, Cătălin Hriţcu, Kenji Maillard, Guido Martínez, Gordon Plotkin, Jonathan Protzenko, Aseem Rastogi, and Nikhil Swamy (University of Edinburgh, UK; Microsoft Research, USA; Inria, France; ENS, France; Rosario National University, Argentina; Microsoft Research, India)
Dijkstra monads enable a dependent type theory to be enhanced with support for specifying and verifying effectful code via weakest preconditions. Together with their closely related counterparts, Hoare monads, they provide the basis on which verification tools like F*, Hoare Type Theory (HTT), and Ynot are built. We show that Dijkstra monads can be derived “for free” by applying a continuation-passing style (CPS) translation to the standard monadic definitions of the underlying computational effects. Automatically deriving Dijkstra monads in this way provides a correct-by-construction and efficient way of reasoning about user-defined effects in dependent type theories. We demonstrate these ideas in EMF*, a new dependently typed calculus, validating it via both formal proof and a prototype implementation within F*. Besides equipping F* with a more uniform and extensible effect system, EMF* enables a novel mixture of intrinsic and extrinsic proofs within F*.

Swords, Cameron 
POPL '17: "Big Types in Little Runtime: ..."
Big Types in Little Runtime: Open-World Soundness and Collaborative Blame for Gradual Type Systems
Michael M. Vitousek, Cameron Swords, and Jeremy G. Siek (Indiana University, USA)
Gradual typing combines static and dynamic typing in the same language, offering programmers the error detection and strong guarantees of static types and the rapid prototyping and flexible programming idioms of dynamic types. Many gradually typed languages are implemented by translation into an untyped target language (e.g., Typed Clojure, TypeScript, Gradualtalk, and Reticulated Python). For such languages, it is desirable to support arbitrary interaction between translated code and legacy code in the untyped language while maintaining the type soundness of the translated code. In this paper we formalize this goal in the form of the open-world soundness criterion. We discuss why it is challenging to achieve open-world soundness using the traditional proxy-based approach for higher-order casts. However, the transient design satisfies open-world soundness. Indeed, we present a formal semantics for the transient design and prove that our semantics satisfies open-world soundness. In this paper we also solve a challenging problem for the transient design: how to provide blame tracking without proxies. We define a semantics for blame and prove the Blame Theorem. We also prove that the Gradual Guarantee holds for this system, ensuring that programs can be evolved freely between static and dynamic typing. Finally, we demonstrate that the runtime overhead of the transient approach is low in the context of Reticulated Python, an implementation of gradual typing for Python.
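The transient design mentioned above can be caricatured in a few lines: instead of wrapping values in proxies, lightweight tag checks are inserted at use sites, each carrying a blame label. The `transient_check` helper and `typed_len` example below are hypothetical illustrations of the idea, not Reticulated Python's actual API.

```python
def transient_check(value, tag, blame):
    """Transient-style use-site check: verify only the top-level type
    tag (no proxy wrapping), raising with a blame label on failure."""
    if not isinstance(value, tag):
        raise TypeError(f"blame {blame}: expected {tag.__name__}, "
                        f"got {type(value).__name__}")
    return value

def typed_len(xs):
    # A check a (hypothetical) translator would insert at function entry
    # for the annotated parameter type `list`:
    xs = transient_check(xs, list, blame="typed_len:arg0")
    return len(xs)
```

Because the check inspects only the top-level tag, untyped code can pass any value across the boundary without being wrapped, which is what makes arbitrary open-world interaction possible.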
@InProceedings{POPL17p762,
author = {Michael M. Vitousek and Cameron Swords and Jeremy G. Siek},
title = {Big Types in Little Runtime: Open-World Soundness and Collaborative Blame for Gradual Type Systems},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {762--774},
doi = {10.1145/3009837.3009849},
year = {2017},
}


Szynwelski, Michał 
POPL '17: "Learning Nominal Automata ..."
Learning Nominal Automata
Joshua Moerman, Matteo Sammartino, Alexandra Silva, Bartek Klin, and Michał Szynwelski (Radboud University Nijmegen, Netherlands; University College London, UK; University of Warsaw, Poland)
We present an Angluin-style algorithm to learn nominal automata, which are acceptors of languages over infinite (structured) alphabets. The abstract approach we take allows us to seamlessly extend known variations of the algorithm to this new setting. In particular we can learn a subclass of nominal nondeterministic automata. An implementation using a recently developed Haskell library for nominal computation is provided for preliminary experiments.
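For readers unfamiliar with the classical starting point, here is a minimal Angluin-style learning loop over a *finite* alphabet, with a bounded word scan standing in for a real equivalence oracle; the paper's contribution is, roughly, to replace the finite prefix/suffix sets below with orbit-finite nominal sets. The target language (even number of a's) and all names are illustrative.

```python
from itertools import product

def lstar(alphabet, member, max_len=6):
    """Minimal Angluin-style learner using membership queries.
    Counterexamples are handled by adding their suffixes to E
    (the Maler-Pnueli variant of L*)."""
    S, E = {""}, {""}          # prefixes (states) and suffixes (tests)

    def row(s):
        return tuple(member(s + e) for e in sorted(E))

    while True:
        # Close the table: every one-letter extension of a prefix in S
        # must share a row with some prefix already in S.
        changed = True
        while changed:
            changed = False
            rows = {row(s) for s in S}
            for s in sorted(S):
                for a in alphabet:
                    if row(s + a) not in rows:
                        S.add(s + a)
                        changed = True
                        break
                if changed:
                    break

        reps = {}
        for s in sorted(S):
            reps.setdefault(row(s), s)   # one representative per state

        def accepts(w):
            cur = ""
            for a in w:
                cur = reps[row(cur + a)]
            return member(cur)

        # Approximate equivalence query: scan all words up to max_len.
        cex = next(("".join(t) for n in range(max_len + 1)
                    for t in product(alphabet, repeat=n)
                    if accepts("".join(t)) != member("".join(t))), None)
        if cex is None:
            return accepts
        for i in range(len(cex) + 1):    # refine with every suffix of cex
            E.add(cex[i:])

even_as = lstar("ab", lambda w: w.count("a") % 2 == 0)
```

The returned hypothesis is the two-state automaton that toggles on `a` and ignores `b`.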
@InProceedings{POPL17p613,
author = {Joshua Moerman and Matteo Sammartino and Alexandra Silva and Bartek Klin and Michał Szynwelski},
title = {Learning Nominal Automata},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {613--625},
doi = {10.1145/3009837.3009879},
year = {2017},
}


Tanter, Éric 
POPL '17: "Gradual Refinement Types ..."
Gradual Refinement Types
Nico Lehmann and Éric Tanter (University of Chile, Chile)
Refinement types are an effective language-based verification technique. However, as with any expressive typing discipline, its strength is also its weakness, sometimes imposing undesired rigidity. Guided by abstract interpretation, we extend the gradual typing agenda and develop the notion of gradual refinement types, allowing smooth evolution and interoperability between simple types and logically-refined types. In doing so, we address two challenges unexplored in the gradual typing literature: dealing with imprecise logical information, and with dependent function types. The first challenge leads to a crucial notion of locality for refinement formulas, and the second yields novel operators related to type- and term-level substitution, identifying new opportunities for runtime errors in gradual dependently-typed languages. The gradual language we present is type safe, type sound, and satisfies the refined criteria for gradually-typed languages of Siek et al. We also explain how to extend our approach to richer refinement logics, anticipating key challenges to consider.
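One consequence described above, new runtime failure points when logical information is imprecise, can be sketched as a dynamic check at the boundary between a simple type and a refined one. The `cast_refined` helper and blame strings are hypothetical illustrations; the paper develops this within a full gradual dependently-typed calculus.

```python
def cast_refined(value, predicate, blame):
    """Cast a simply typed value into a (hypothetical) refinement type
    {v | predicate(v)}: the statically unknown part of the refinement
    is checked at run time, failing with a blame label."""
    if not predicate(value):
        raise ValueError(f"refinement cast failed at {blame}")
    return value

def safe_div(x, y):
    # y flows into {v : int | v != 0}; because the source type carries
    # imprecise logical information, a runtime check is inserted.
    y = cast_refined(y, lambda v: v != 0, blame="safe_div:y")
    return x // y
```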
@InProceedings{POPL17p775,
author = {Nico Lehmann and Éric Tanter},
title = {Gradual Refinement Types},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {775--788},
doi = {10.1145/3009837.3009856},
year = {2017},
}


Timany, Amin 
POPL '17: "Interactive Proofs in Higher-Order ..."
Interactive Proofs in Higher-Order Concurrent Separation Logic
Robbert Krebbers, Amin Timany, and Lars Birkedal (Delft University of Technology, Netherlands; KU Leuven, Belgium; Aarhus University, Denmark)
When using a proof assistant to reason in an embedded logic, such as separation logic, one cannot benefit from the proof contexts and basic tactics of the proof assistant. This results in proofs that are at too low a level of abstraction, because they are cluttered with bookkeeping code related to manipulating the object logic.
In this paper, we introduce a so-called proof mode that extends the Coq proof assistant with (spatial and non-spatial) named proof contexts for the object logic. We show that thanks to these contexts we can implement high-level tactics for introduction and elimination of the connectives of the object logic, and thereby make reasoning in the embedded logic as seamless as reasoning in the meta logic of the proof assistant. We apply our method to Iris: a state-of-the-art higher-order impredicative concurrent separation logic.
We show that our method is very general, and is not just limited to program verification. We demonstrate its generality by formalizing correctness proofs of fine-grained concurrent algorithms, derived constructs of the Iris logic, and a unary and binary logical relation for a language with concurrency, higher-order store, polymorphism, and recursive types. This is the first formalization of a binary logical relation for such an expressive language. We also show how to use the logical relation to prove contextual refinement of fine-grained concurrent algorithms.
@InProceedings{POPL17p205,
author = {Robbert Krebbers and Amin Timany and Lars Birkedal},
title = {Interactive Proofs in Higher-Order Concurrent Separation Logic},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {205--217},
doi = {10.1145/3009837.3009855},
year = {2017},
}


Toninho, Bernardo 
POPL '17: "Fencing off Go: Liveness and ..."
Fencing off Go: Liveness and Safety for Channel-Based Programming
Julien Lange, Nicholas Ng, Bernardo Toninho, and Nobuko Yoshida (Imperial College London, UK)
Go is a production-level statically typed programming language whose design
features explicit message-passing primitives and lightweight threads, enabling
(and encouraging) programmers to develop concurrent systems where components
interact through communication more so than by lock-based shared-memory
concurrency. Go can only detect global deadlocks at runtime, but provides no
compile-time protection against all-too-common communication mismatches or
partial deadlocks. This work develops a static verification framework for
liveness and safety in Go programs, able to detect communication errors and
partial deadlocks in a general class of realistic concurrent programs,
including those with dynamic channel creation, unbounded thread creation and
recursion. Our approach infers from a Go program a faithful representation of
its communication patterns as a behavioural type. By checking a syntactic
restriction on channel usage, dubbed fencing, we ensure that programs are made
up of finitely many different communication patterns that may be repeated
infinitely many times. This restriction allows us to implement a decision
procedure for liveness and safety in types which in turn statically ensures
liveness and safety in Go programs. We have implemented the type inference and
decision procedures in a toolchain and tested them against publicly available Go
programs.
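The flavour of the verification problem can be conveyed with a much-simplified model: two communicating finite-state machines synchronize on matching send/receive actions, and a reachable joint state in which a machine still wants to communicate but no move exists is a (partial) deadlock. This toy product construction omits everything that makes the paper's setting hard (dynamic channel and thread creation, recursion, buffering).

```python
def explore(m1, m2, s1=0, s2=0):
    """Reachability over the synchronous product of two communicating
    state machines: a send ('!') on a channel fires together with a
    matching receive ('?').  Returns the set of stuck joint states."""
    seen, stuck, frontier = set(), set(), [(s1, s2)]
    while frontier:
        a, b = frontier.pop()
        if (a, b) in seen:
            continue
        seen.add((a, b))
        moves = [(n1, n2)
                 for act1, ch1, n1 in m1.get(a, [])
                 for act2, ch2, n2 in m2.get(b, [])
                 if ch1 == ch2 and {act1, act2} == {"!", "?"}]
        if not moves and (m1.get(a) or m2.get(b)):
            stuck.add((a, b))    # someone wants to talk but cannot
        frontier.extend(moves)
    return stuck

# Ping-pong: send on c, reply on d -- deadlock-free.
ping = {0: [("!", "c", 1)], 1: [("?", "d", 2)], 2: []}
pong = {0: [("?", "c", 1)], 1: [("!", "d", 2)], 2: []}
# A peer that receives on the wrong channel first: stuck immediately.
bad = {0: [("?", "d", 1)]}
```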
@InProceedings{POPL17p748,
author = {Julien Lange and Nicholas Ng and Bernardo Toninho and Nobuko Yoshida},
title = {Fencing off Go: Liveness and Safety for ChannelBased Programming},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {748--761},
doi = {10.1145/3009837.3009847},
year = {2017},
}


Toruńczyk, Szymon 
POPL '17: "LOIS: Syntax and Semantics ..."
LOIS: Syntax and Semantics
Eryk Kopczyński and Szymon Toruńczyk (University of Warsaw, Poland)
We present the semantics of an imperative programming language called LOIS (Looping Over Infinite Sets), which allows iterating through certain infinite sets in finite time. Our semantics intuitively corresponds to the execution of infinitely many threads in parallel. This makes it possible to merge the power of abstract mathematical constructions into imperative programming. Infinite sets are internally represented using first-order formulas over some underlying logical structure, and SMT solvers are employed to evaluate programs.
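A rough sketch of the programming model, under the assumption that a Python predicate can stand in for a first-order formula and a bounded witness scan for an SMT query (both gross simplifications of what LOIS actually does):

```python
class SymSet:
    """An 'infinite set' represented intensionally by a predicate.
    Set operations compose predicates; emptiness is delegated to a
    naive bounded check standing in for a real SMT solver."""
    def __init__(self, pred):
        self.pred = pred
    def __contains__(self, x):
        return self.pred(x)
    def __and__(self, other):
        return SymSet(lambda x: self.pred(x) and other.pred(x))
    def __or__(self, other):
        return SymSet(lambda x: self.pred(x) or other.pred(x))
    def is_empty_over(self, witnesses):
        # stand-in for a satisfiability query to an SMT solver
        return not any(self.pred(x) for x in witnesses)

evens = SymSet(lambda n: n % 2 == 0)       # the infinite set of even ints
positives = SymSet(lambda n: n > 0)        # the infinite set of positives
both = evens & positives                   # intersection, never enumerated
```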
@InProceedings{POPL17p586,
author = {Eryk Kopczyński and Szymon Toruńczyk},
title = {LOIS: Syntax and Semantics},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {586--598},
doi = {10.1145/3009837.3009876},
year = {2017},
}


Totel, Éric 
POPL '17: "Hypercollecting Semantics ..."
Hypercollecting Semantics and Its Application to Static Analysis of Information Flow
Mounir Assaf, David A. Naumann, Julien Signoles, Éric Totel, and Frédéric Tronel (Stevens Institute of Technology, USA; CEA LIST, France; CentraleSupélec, France)
We show how static analysis for secure information flow can be expressed and proved correct entirely within the framework of abstract interpretation. The key idea is to define a Galois connection that directly approximates the hyperproperty of interest. To enable use of such Galois connections, we introduce a fixpoint characterisation of hypercollecting semantics, i.e., a “set of sets” transformer. This makes it possible to systematically derive static analyses for hyperproperties entirely within the calculational framework of abstract interpretation. We evaluate this technique by deriving example static analyses. For qualitative information flow, we derive a dependence analysis similar to the logic of Amtoft and Banerjee (SAS’04) and the type system of Hunt and Sands (POPL’06). For quantitative information flow, we derive a novel cardinality analysis that bounds the leakage conveyed by a program instead of simply deciding whether it exists. This encompasses problems that are hypersafety but not k-safety. We put the framework to use and introduce variations that achieve precision rivalling the most recent and precise static analyses for information flow.

Tronel, Frédéric 
POPL '17: "Hypercollecting Semantics ..."
Hypercollecting Semantics and Its Application to Static Analysis of Information Flow
Mounir Assaf, David A. Naumann, Julien Signoles, Éric Totel, and Frédéric Tronel (Stevens Institute of Technology, USA; CEA LIST, France; CentraleSupélec, France)
We show how static analysis for secure information flow can be expressed and proved correct entirely within the framework of abstract interpretation. The key idea is to define a Galois connection that directly approximates the hyperproperty of interest. To enable use of such Galois connections, we introduce a fixpoint characterisation of hypercollecting semantics, i.e., a “set of sets” transformer. This makes it possible to systematically derive static analyses for hyperproperties entirely within the calculational framework of abstract interpretation. We evaluate this technique by deriving example static analyses. For qualitative information flow, we derive a dependence analysis similar to the logic of Amtoft and Banerjee (SAS’04) and the type system of Hunt and Sands (POPL’06). For quantitative information flow, we derive a novel cardinality analysis that bounds the leakage conveyed by a program instead of simply deciding whether it exists. This encompasses problems that are hypersafety but not k-safety. We put the framework to use and introduce variations that achieve precision rivalling the most recent and precise static analyses for information flow.

Turon, Aaron 
POPL '17 Keynote: "Rust: From POPL to Practice ..."
Rust: From POPL to Practice (Keynote)
Aaron Turon (Mozilla, USA)
In 2015, Rust, a language based fundamentally on substructural typing, hit its 1.0 release, and less than a year later it was put into production use in a number of tech companies, including some household names. The language has started a trend, with several other mainstream languages, including C++ and Swift, in the early stages of incorporating ideas about ownership. How did this come about?
Rust’s core focus is safe systems programming. It does not require a runtime system or garbage collector, but guarantees memory safety. It does not stipulate any particular style of concurrent programming, but instead provides the tools needed to guarantee data race freedom even when doing low-level shared-state concurrency. It allows you to build up high-level abstractions without paying a tax; its compilation model ensures that the abstractions boil away.
These benefits derive from two core aspects of Rust: its ownership system (based on substructural typing) and its trait system (a descendant of Haskell’s type classes). The talk will cover these two pillars of Rust design, with particular attention to the key innovations that make the language usable at scale. It will highlight the implications for concurrency, where Rust provides a unique perspective. It will also touch on aspects of Rust’s development that tend to get less attention within the POPL community: Rust’s governance and open development process, and design considerations around language and library evolution. Finally, it will mention a few of the myriad open research questions around Rust.
@InProceedings{POPL17p2,
author = {Aaron Turon},
title = {Rust: From POPL to Practice (Keynote)},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {2--2},
doi = {10.1145/3009837.3011999},
year = {2017},
}


Vafeiadis, Viktor 
POPL '17: "A Promising Semantics for ..."
A Promising Semantics for RelaxedMemory Concurrency
Jeehoon Kang, ChungKil Hur, Ori Lahav, Viktor Vafeiadis, and Derek Dreyer (Seoul National University, South Korea; MPISWS, Germany)
Despite many years of research, it has proven very difficult to
develop a memory model for concurrent programming languages that
adequately balances the conflicting desiderata of programmers,
compilers, and hardware. In this paper, we propose the first relaxed
memory model that (1) accounts for a broad spectrum of features from
the C++11 concurrency model, (2) is implementable, in the sense that
it provably validates many standard compiler optimizations and
reorderings, as well as standard compilation schemes to x86-TSO and
Power, (3) justifies simple invariant-based reasoning, thus
demonstrating the absence of bad "out-of-thin-air" behaviors, (4)
supports "DRF" guarantees, ensuring that programmers who use
sufficient synchronization need not understand the full complexities
of relaxed-memory semantics, and (5) defines the semantics of racy
programs without relying on undefined behaviors, which is a
prerequisite for applicability to type-safe languages like Java.
The key novel idea behind our model is the notion of *promises*: a
thread may promise to execute a write in the future, thus enabling
other threads to read from that write out of order. Crucially, to
prevent out-of-thin-air behaviors, a promise step requires a
thread-local certification that it will be possible to execute the
promised write even in the absence of the promise. To establish
confidence in our model, we have formalized most of our key results in
Coq.
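The signature example here is the load-buffering litmus test: the sketch below enumerates every sequentially consistent interleaving and confirms that the outcome r1 = r2 = 1 never arises under SC, whereas the promising model permits it for relaxed accesses, precisely because a thread may promise its future write. The encoding of threads as operation lists is an illustrative simplification.

```python
# Load-buffering litmus test:
#   T1: r1 := x; y := 1        T2: r2 := y; x := 1
T1 = [("load", "x", "r1"), ("store", "y", 1)]
T2 = [("load", "y", "r2"), ("store", "x", 1)]

def sc_outcomes(t1, t2):
    """Run every interleaving that respects each thread's program
    order, starting from x = y = 0, and collect (r1, r2) outcomes."""
    def interleavings(a, b):
        if not a:
            yield list(b); return
        if not b:
            yield list(a); return
        for rest in interleavings(a[1:], b):
            yield [a[0]] + rest
        for rest in interleavings(a, b[1:]):
            yield [b[0]] + rest

    outcomes = set()
    for trace in interleavings(t1, t2):
        mem, regs = {"x": 0, "y": 0}, {}
        for op, loc, arg in trace:
            if op == "load":
                regs[arg] = mem[loc]   # arg is the register name
            else:
                mem[loc] = arg         # arg is the stored value
        outcomes.add((regs["r1"], regs["r2"]))
    return outcomes
```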
@InProceedings{POPL17p175,
author = {Jeehoon Kang and ChungKil Hur and Ori Lahav and Viktor Vafeiadis and Derek Dreyer},
title = {A Promising Semantics for RelaxedMemory Concurrency},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {175--189},
doi = {10.1145/3009837.3009850},
year = {2017},
}


Valiron, Benoît 
POPL '17: "The Geometry of Parallelism: ..."
The Geometry of Parallelism: Classical, Probabilistic, and Quantum Effects
Ugo Dal Lago, Claudia Faggian, Benoît Valiron, and Akira Yoshimizu (University of Bologna, Italy; Inria, France; CNRS, France; University of Paris Diderot, France; University of Paris-Saclay, France; University of Tokyo, Japan)
We introduce a Geometry of Interaction model for higher-order quantum computation, and prove its adequacy for a fully fledged quantum programming language in which entanglement, duplication, and recursion are all available. This model is an instance of a new framework which captures not only quantum but also classical and probabilistic computation. Its main feature is the ability to model commutative effects in a parallel setting. Our model comes with a multi-token machine, a proof net system, and a PCF-style language. Being based on a multi-token machine equipped with a memory, it has a concrete nature which makes it well suited for building low-level operational descriptions of higher-order languages.

Veanes, Margus 
POPL '17: "Monadic Second-Order Logic ..."
Monadic Second-Order Logic on Finite Sequences
Loris D'Antoni and Margus Veanes (University of WisconsinMadison, USA; Microsoft Research, USA)
We extend the weak monadic second-order logic of one successor on finite strings (M2L-STR) to symbolic alphabets by allowing character predicates to range over decidable quantifier-free theories instead of finite alphabets. We call this logic, which is able to describe sequences over complex and potentially infinite domains, symbolic M2L-STR (S-M2L-STR). We then present a decision procedure for S-M2L-STR based on a reduction to symbolic finite automata, a decidable extension of finite automata that allows transitions to carry predicates and can therefore model symbolic alphabets. The reduction constructs a symbolic automaton over an alphabet consisting of pairs of symbols where the first element of the pair is a symbol in the original formula’s alphabet, while the second element is a bit-vector. To handle this modified alphabet we show that the Cartesian product of two decidable Boolean algebras (e.g., the formula’s and the bit-vector’s) also forms a decidable Boolean algebra. To make the decision procedure practical, we propose two efficient representations of the Cartesian product of two Boolean algebras, one based on algebraic decision diagrams and one on a variant of Shannon expansions. Finally, we implement our decision procedure and evaluate it on more than 10,000 formulas. Despite its generality, our implementation has performance comparable with the state-of-the-art M2L-STR solvers.
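The closure property the reduction relies on, that the Cartesian product of two decidable Boolean algebras is again a decidable Boolean algebra, can be sketched concretely. Here predicates are Python functions and "decidability" is approximated by scanning a finite witness domain; both are stand-ins for real decision procedures.

```python
class PredAlgebra:
    """A Boolean algebra of predicates over a witness domain;
    `sat` is a naive stand-in for a real satisfiability check."""
    def __init__(self, domain):
        self.domain = list(domain)
    def conj(self, p, q):
        return lambda x: p(x) and q(x)
    def neg(self, p):
        return lambda x: not p(x)
    def sat(self, p):
        return any(p(x) for x in self.domain)

def product(alg1, alg2):
    """Cartesian product algebra: predicates over pairs, with
    component-wise operations; decidability is preserved."""
    return PredAlgebra((a, b) for a in alg1.domain for b in alg2.domain)

chars = PredAlgebra("abc")          # the formula's alphabet side
bits = PredAlgebra([0, 1])          # the bit-vector side
prod = product(chars, bits)
is_a = lambda pair: pair[0] == "a"
bit1 = lambda pair: pair[1] == 1
phi = prod.conj(is_a, bit1)         # "character is 'a' and bit is 1"
```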
@InProceedings{POPL17p232,
author = {Loris D'Antoni and Margus Veanes},
title = {Monadic Second-Order Logic on Finite Sequences},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {232--245},
doi = {10.1145/3009837.3009844},
year = {2017},
}


Vechev, Martin 
POPL '17: "Serializability for Eventual ..."
Serializability for Eventual Consistency: Criterion, Analysis, and Applications
Lucas Brutschy, Dimitar Dimitrov, Peter Müller, and Martin Vechev (ETH Zurich, Switzerland)
Developing and reasoning about systems using eventually consistent data stores
is a difficult challenge due to the presence of unexpected behaviors that do not
occur under sequential consistency. A fundamental problem in this setting is to
identify a correctness criterion that precisely captures intended application
behaviors yet is generic enough to be applicable to a wide range of
applications.
In this paper, we present such a criterion. More precisely, we generalize
conflict serializability to the setting of eventual consistency. Our
generalization is based on a novel dependency model that incorporates two
powerful algebraic properties: commutativity and absorption. These properties enable
precise reasoning about programs that employ high-level replicated data types,
common in modern systems. To apply our criterion in practice, we also developed
a dynamic analysis algorithm and a tool that checks whether a given program
execution is serializable.
We performed a thorough experimental evaluation on two real-world use cases:
debugging cloud-backed mobile applications and implementing clients of a popular
eventually consistent key-value store. Our experimental results indicate that
our criterion reveals harmful synchronization problems in applications, is more
effective at finding them than prior approaches, and can be used for the
development of practical, eventually consistent applications.
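The role of commutativity in the dependency model can be illustrated with a toy conflict check over a replicated counter: operations on different keys, or commuting operations on the same key, impose no ordering constraint. The operation encoding is hypothetical, and the absorption property is omitted for brevity.

```python
def commute(op1, op2):
    """Decide whether two operations commute: reordering them leaves
    the final state and all return values unchanged."""
    kind1, key1 = op1
    kind2, key2 = op2
    if key1 != key2:
        return True                   # independent keys always commute
    if kind1 == kind2 == "incr":
        return True                   # counter increments commute
    if kind1 == kind2 == "read":
        return True                   # reads commute
    return False                      # e.g. incr vs. read conflict

def conflicts(history):
    """Pairs (i, j), i before j, that a serializability checker must
    keep ordered; commuting pairs drop out of the dependency graph."""
    return [(i, j) for i in range(len(history))
                   for j in range(i + 1, len(history))
                   if not commute(history[i], history[j])]
```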
@InProceedings{POPL17p458,
author = {Lucas Brutschy and Dimitar Dimitrov and Peter Müller and Martin Vechev},
title = {Serializability for Eventual Consistency: Criterion, Analysis, and Applications},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {458--472},
doi = {10.1145/3009837.3009895},
year = {2017},
}
POPL '17: "Fast Polyhedra Abstract Domain ..."
Fast Polyhedra Abstract Domain
Gagandeep Singh, Markus Püschel, and Martin Vechev (ETH Zurich, Switzerland)
Numerical abstract domains are an important ingredient of modern static analyzers used for verifying critical program properties (e.g., absence of buffer overflow or memory safety). Among the many numerical domains introduced over the years, Polyhedra is the most expressive one, but also the most expensive: it has worst-case exponential space and time complexity. As a consequence, static analysis with the Polyhedra domain is thought to be impractical when applied to large-scale, real-world programs. In this paper, we present a new approach and a complete implementation for speeding up Polyhedra domain analysis. Our approach does not lose precision, and for many practical cases, is orders of magnitude faster than state-of-the-art solutions. The key insight underlying our work is that polyhedra arising during analysis can usually be kept decomposed, thus considerably reducing the overall complexity. We first present the theory underlying our approach, which identifies the interaction between partitions of variables and domain operators. Based on the theory we develop new algorithms for these operators that work with decomposed polyhedra. We implemented these algorithms using the same interface as existing libraries, thus enabling static analyzers to use our implementation with little effort. In our evaluation, we analyze large benchmarks from the popular software verification competition, including Linux device drivers with over 50K lines of code. Our experimental results demonstrate massive gains in both space and time: we show end-to-end speedups of two to five orders of magnitude compared to state-of-the-art Polyhedra implementations as well as significant memory gains, on all larger benchmarks. In fact, in many cases our analysis terminates in seconds where prior code runs out of memory or times out after 4 hours.
We believe this work is an important step in making the Polyhedra abstract domain both feasible and practically usable for handling large, real-world programs. 
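The decomposition insight can be sketched with a union-find pass that groups variables appearing together in some constraint; each resulting block corresponds to a small polyhedron that can be analyzed independently. Constraints are abstracted here to the sets of variables they mention, a deliberate simplification of the paper's approach, which also maintains partitions dynamically under each domain operator.

```python
class UnionFind:
    """Disjoint sets with path halving."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def partition(constraints):
    """Group variables that interact through some constraint; each
    block can then be kept as its own small polyhedron."""
    uf = UnionFind()
    for vs in constraints:
        vs = list(vs)
        for v in vs[1:]:
            uf.union(vs[0], v)
    blocks = {}
    for vs in constraints:
        for v in vs:
            blocks.setdefault(uf.find(v), set()).add(v)
    return sorted(sorted(b) for b in blocks.values())
```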

Veith, Helmut 
POPL '17: "A Short Counterexample Property ..."
A Short Counterexample Property for Safety and Liveness Verification of Fault-Tolerant Distributed Algorithms
Igor Konnov, Marijana Lazić, Helmut Veith, and Josef Widder (Vienna University of Technology, Austria)
Distributed algorithms have many missioncritical applications
ranging from embedded systems and replicated databases to cloud
computing. Due to asynchronous communication, process faults, or
network failures, these algorithms are difficult to design and
verify. Many algorithms achieve fault tolerance by using
threshold guards that, for instance, ensure that a process waits
until it has received an acknowledgment from a majority of its
peers. Consequently, domain-specific languages for
fault-tolerant distributed systems offer language support for
threshold guards.
We introduce an automated method for model checking of safety and
liveness of threshold-guarded distributed algorithms in systems
where the number of processes and the fraction of faulty
processes are parameters. Our method is based on a short
counterexample property: if a distributed algorithm violates a
temporal specification (in a fragment of LTL), then there is a
counterexample whose length is bounded and independent of the
parameters. We prove this property by (i) characterizing
executions depending on the structure of the temporal formula,
and (ii) using commutativity of transitions to accelerate and
shorten executions. We extended the ByMC toolset (Byzantine
Model Checker) with our technique, and verified liveness and
safety of 10 prominent fault-tolerant distributed algorithms,
most of which were out of reach for existing techniques.
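A threshold guard can be illustrated with a toy synchronous simulation of an echo-broadcast round structure. The specific thresholds (t + 1 echoes to join, n - t to accept) and the fault-free, deterministic setting are illustrative only; they do not reproduce any particular verified algorithm.

```python
def simulate(n, t, initiators, rounds=10):
    """Synchronous sketch of a threshold-guarded broadcast among n
    processes tolerating up to t faults: a process ECHOes once it has
    initiated or seen at least t + 1 ECHOs, and accepts once it has
    seen at least n - t.  Guards of this quorum shape are exactly
    what the verified algorithms use."""
    echoed = set(initiators)
    accepted = set()
    for _ in range(rounds):
        count = len(echoed)            # echoes visible at round start
        for p in range(n):
            if p not in echoed and count >= t + 1:
                echoed.add(p)          # threshold guard: join the echo
            if count >= n - t:
                accepted.add(p)        # threshold guard: accept
    return accepted
```

With n = 4, t = 1, two initiators suffice to make every process accept, while a single initiator cannot clear the t + 1 guard, so nothing propagates.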
@InProceedings{POPL17p719,
author = {Igor Konnov and Marijana Lazić and Helmut Veith and Josef Widder},
title = {A Short Counterexample Property for Safety and Liveness Verification of Fault-Tolerant Distributed Algorithms},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {719--734},
doi = {10.1145/3009837.3009860},
year = {2017},
}


Vitousek, Michael M. 
POPL '17: "Big Types in Little Runtime: ..."
Big Types in Little Runtime: Open-World Soundness and Collaborative Blame for Gradual Type Systems
Michael M. Vitousek, Cameron Swords, and Jeremy G. Siek (Indiana University, USA)
Gradual typing combines static and dynamic typing in the same language, offering programmers the error detection and strong guarantees of static types and the rapid prototyping and flexible programming idioms of dynamic types. Many gradually typed languages are implemented by translation into an untyped target language (e.g., Typed Clojure, TypeScript, Gradualtalk, and Reticulated Python). For such languages, it is desirable to support arbitrary interaction between translated code and legacy code in the untyped language while maintaining the type soundness of the translated code. In this paper we formalize this goal in the form of the open-world soundness criterion. We discuss why it is challenging to achieve open-world soundness using the traditional proxy-based approach for higher-order casts. However, the transient design satisfies open-world soundness. Indeed, we present a formal semantics for the transient design and prove that our semantics satisfies open-world soundness. In this paper we also solve a challenging problem for the transient design: how to provide blame tracking without proxies. We define a semantics for blame and prove the Blame Theorem. We also prove that the Gradual Guarantee holds for this system, ensuring that programs can be evolved freely between static and dynamic typing. Finally, we demonstrate that the runtime overhead of the transient approach is low in the context of Reticulated Python, an implementation of gradual typing for Python.
@InProceedings{POPL17p762,
author = {Michael M. Vitousek and Cameron Swords and Jeremy G. Siek},
title = {Big Types in Little Runtime: Open-World Soundness and Collaborative Blame for Gradual Type Systems},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {762--774},
doi = {10.1145/3009837.3009849},
year = {2017},
}


Voronkov, Andrei 
POPL '17: "Coming to Terms with Quantified ..."
Coming to Terms with Quantified Reasoning
Laura Kovács, Simon Robillard, and Andrei Voronkov (Vienna University of Technology, Austria; Chalmers University of Technology, Sweden; University of Manchester, UK)
The theory of finite term algebras provides a natural framework to describe the semantics of functional languages. The ability to efficiently reason about term algebras is essential to automate program analysis and verification for functional or imperative programs over inductively defined data types such as lists and trees. However, as the theory of finite term algebras is not finitely axiomatizable, reasoning about quantified properties over term algebras is challenging.
In this paper we address full first-order reasoning about properties of programs manipulating term algebras, and describe two approaches for doing so by using first-order theorem proving. Our first method is a conservative extension of the theory of term algebras using a finite number of statements, while our second method relies on extending the superposition calculus of first-order theorem provers with additional inference rules.
We implemented our work in the first-order theorem prover Vampire and evaluated it on a large number of inductive datatype benchmarks, as well as game theory constraints. Our experimental results show that our methods are able to find proofs for many hard problems previously unsolved by state-of-the-art methods. We also show that Vampire implementing our methods outperforms existing SMT solvers able to deal with inductive data types.
@InProceedings{POPL17p260,
author = {Laura Kovács and Simon Robillard and Andrei Voronkov},
title = {Coming to Terms with Quantified Reasoning},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {260--270},
doi = {10.1145/3009837.3009887},
year = {2017},
}


Voysey, Ian 
POPL '17: "Hazelnut: A Bidirectionally ..."
Hazelnut: A Bidirectionally Typed Structure Editor Calculus
Cyrus Omar, Ian Voysey, Michael Hilton, Jonathan Aldrich, and Matthew A. Hammer (Carnegie Mellon University, USA; Oregon State University, USA; University of Colorado at Boulder, USA)
Structure editors allow programmers to edit the tree structure of a program directly. This can have cognitive benefits, particularly for novice and end-user programmers. It also simplifies matters for tool designers, because they do not need to contend with malformed program text. This paper introduces Hazelnut, a structure editor based on a small bidirectionally typed lambda calculus extended with holes and a cursor. Hazelnut goes one step beyond syntactic well-formedness: its edit actions operate over statically meaningful incomplete terms. Naïvely, this would force the programmer to construct terms in a rigid “outside-in” manner. To avoid this problem, the action semantics automatically places terms assigned a type that is inconsistent with the expected type inside a hole. This meaningfully defers the type consistency check until the term inside the hole is finished. Hazelnut is not intended as an end-user tool itself. Instead, it serves as a foundational account of typed structure editing. To that end, we describe how Hazelnut’s rich metatheory, which we have mechanized using the Agda proof assistant, serves as a guide when we extend the calculus to include binary sum types. We also discuss various interpretations of holes, and in so doing reveal connections with gradual typing and contextual modal type theory, the Curry-Howard interpretation of contextual modal logic. Finally, we discuss how Hazelnut’s semantics lends itself to implementation as an event-based functional reactive program. Our simple reference implementation is written using js_of_ocaml.

Wang, Yuepeng 
POPL '17: "Component-Based Synthesis ..."
Component-Based Synthesis for Complex APIs
Yu Feng, Ruben Martins, Yuepeng Wang, Isil Dillig, and Thomas W. Reps (University of Texas at Austin, USA; University of WisconsinMadison, USA)
Component-based approaches to program synthesis assemble programs from a database of existing components, such as methods provided by an API. In this paper, we present a novel type-directed algorithm for component-based synthesis. The key novelty of our approach is the use of a compact Petri-net representation to model relationships between methods in an API. Given a target method signature S, our approach performs reachability analysis on the underlying Petri-net model to identify sequences of method calls that could be used to synthesize an implementation of S. The programs synthesized by our algorithm are guaranteed to type check and pass all test cases provided by the user.
We have implemented this approach in a tool called SyPet, and used it to successfully synthesize real-world programming tasks extracted from online forums and existing code repositories. We also compare SyPet with two state-of-the-art synthesis tools, namely InSynth and CodeHint, and demonstrate that SyPet can synthesize more programs in less time. Finally, we compare our approach with an alternative solution based on hypergraphs and demonstrate its advantages.
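The Petri-net idea behind this style of synthesis can be illustrated with a small sketch. Note the API, type names, and encoding below are purely hypothetical, not SyPet's actual representation: places are types, transitions are methods consuming argument tokens and producing a result token, and a breadth-first search over markings yields a candidate call sequence.

```python
from collections import deque

# Hypothetical toy API modeled as a Petri net: places are types,
# transitions are methods. This is an illustrative sketch only.
TRANSITIONS = {
    # method: (consumed type tokens, produced type tokens)
    "new_point": ({}, {"Point": 1}),
    "get_x":     ({"Point": 1}, {"int": 1}),
    "add":       ({"int": 2}, {"int": 1}),
}

def reachable_sequence(initial, goal, max_depth=6):
    """BFS over markings to find a call sequence producing `goal`."""
    start = tuple(sorted(initial.items()))
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        marking, path = queue.popleft()
        m = dict(marking)
        if m.get(goal, 0) >= 1:
            return path
        if len(path) >= max_depth:
            continue
        for name, (consumed, produced) in TRANSITIONS.items():
            if all(m.get(t, 0) >= k for t, k in consumed.items()):
                m2 = dict(m)
                for t, k in consumed.items():
                    m2[t] -= k
                for t, k in produced.items():
                    m2[t] = m2.get(t, 0) + k
                key = tuple(sorted(m2.items()))
                if key not in seen:
                    seen.add(key)
                    queue.append((key, path + [name]))
    return None
```

For instance, `reachable_sequence({}, "int")` finds the two-call sequence constructing a `Point` and projecting out an `int`; an unreachable goal type yields `None`.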
@InProceedings{POPL17p599,
author = {Yu Feng and Ruben Martins and Yuepeng Wang and Isil Dillig and Thomas W. Reps},
title = {Component-Based Synthesis for Complex APIs},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {599--612},
doi = {10.1145/3009837.3009851},
year = {2017},
}


Weirich, Stephanie 
POPL '17: "The Influence of Dependent ..."
The Influence of Dependent Types (Keynote)
Stephanie Weirich (University of Pennsylvania, USA)
What has dependent type theory done for Haskell? In this talk, I will discuss
the influence of dependent types on the design of programming languages and on
the practice of functional programmers. Over the past ten years, the Glasgow
Haskell compiler has adopted several type system features inspired by
dependent type theory. However, this process has not been a direct
translation; working in the context of an existing language has led us to new
designs in the semantics of dependent types. I will take a close look at what
we have achieved in GHC and discuss what we have learned from this experiment:
what works now, what doesn't work yet, and what has surprised us along the
way.
@InProceedings{POPL17p1,
author = {Stephanie Weirich},
title = {The Influence of Dependent Types (Keynote)},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {1--1},
doi = {10.1145/3009837.3009923},
year = {2017},
}


Weng, Shu-Chun 
POPL '17: "Towards Automatic Resource ..."
Towards Automatic Resource Bound Analysis for OCaml
Jan Hoffmann, Ankush Das, and Shu-Chun Weng (Carnegie Mellon University, USA; Yale University, USA)
This article presents a resource analysis system for OCaml programs. The system automatically derives worst-case resource bounds for higher-order polymorphic programs with user-defined inductive types. The technique is parametric in the resource and can derive bounds for time, memory allocations, and energy usage. The derived bounds are multivariate resource polynomials which are functions of different size parameters that depend on the standard OCaml types. Bound inference is fully automatic and reduced to a linear optimization problem that is passed to an off-the-shelf LP solver. Technically, the analysis system is based on a novel multivariate automatic amortized resource analysis (AARA). It builds on existing work on linear AARA for higher-order programs with user-defined inductive types and on multivariate AARA for first-order programs with built-in lists and binary trees. This is the first amortized analysis that automatically derives polynomial bounds for higher-order functions and polynomial bounds that depend on user-defined inductive types. Moreover, the analysis handles a limited form of side effects and even outperforms the linear bound inference of previous systems. At the same time, it preserves the expressivity and efficiency of existing AARA techniques. The practicality of the analysis system is demonstrated with an implementation and integration with Inria's OCaml compiler. The implementation is used to automatically derive resource bounds for 411 functions and 6018 lines of code derived from OCaml libraries, the CompCert compiler, and implementations of textbook algorithms. In a case study, the system infers bounds on the number of queries that are sent by OCaml programs to DynamoDB, a commercial NoSQL cloud database service.
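What a derived resource bound asserts can be checked dynamically for a toy function. This is only an illustration of the claim a bound makes, not the paper's static, type-based inference: the cost of enumerating ordered pairs of a list is bounded by the resource polynomial n(n-1)/2 in the input size n.

```python
import itertools
import random

def pairs(xs):
    """Return ordered pairs (earlier, later); cost = loop iterations."""
    cost = 0
    out = []
    for i, j in itertools.combinations(range(len(xs)), 2):
        cost += 1
        out.append((xs[i], xs[j]))
    return out, cost

def bound(n):
    # Candidate resource polynomial: n * (n - 1) / 2
    return n * (n - 1) // 2

# Dynamic sanity check of the bound on random inputs.
rng = random.Random(0)
for _ in range(100):
    xs = [rng.randint(0, 9) for _ in range(rng.randint(0, 20))]
    _, cost = pairs(xs)
    assert cost <= bound(len(xs))
```

An AARA-style analysis establishes such a bound symbolically for all inputs, rather than testing it on samples.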
@InProceedings{POPL17p359,
author = {Jan Hoffmann and Ankush Das and Shu-Chun Weng},
title = {Towards Automatic Resource Bound Analysis for OCaml},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {359--373},
doi = {10.1145/3009837.3009842},
year = {2017},
}


Wickerson, John 
POPL '17: "Automatically Comparing Memory ..."
Automatically Comparing Memory Consistency Models
John Wickerson, Mark Batty, Tyler Sorensen, and George A. Constantinides (Imperial College London, UK; University of Kent, UK) A memory consistency model (MCM) is the part of a programming language or computer architecture specification that defines which values can legally be read from shared memory locations. Because MCMs take into account various optimisations employed by architectures and compilers, they are often complex and counterintuitive, which makes them challenging to design and to understand. We identify four tasks involved in designing and understanding MCMs: generating conformance tests, distinguishing two MCMs, checking compiler optimisations, and checking compiler mappings. We show that all four tasks are instances of a general constraint-satisfaction problem to which the solution is either a program or a pair of programs. Although this problem is intractable for automatic solvers when phrased over programs directly, we show how to solve analogous constraints over program executions, and then construct programs that satisfy the original constraints. Our technique, which is implemented in the Alloy modelling framework, is illustrated on several software and architecture-level MCMs, both axiomatically and operationally defined. We automatically recreate several known results, often in a simpler form, including: distinctions between variants of the C11 MCM; a failure of the ‘SC-DRF guarantee’ in an early C11 draft; that x86 is ‘multi-copy atomic’ and Power is not; bugs in common C11 compiler optimisations; and bugs in a compiler mapping from OpenCL to AMD-style GPUs. We also use our technique to develop and validate a new MCM for NVIDIA GPUs that supports a natural mapping from OpenCL.
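One of the tasks above, distinguishing two MCMs, can be illustrated by brute force on the classic store-buffering litmus test. This sketch is far more naive than the paper's constraint-based technique: it simply enumerates all sequentially consistent (SC) interleavings and shows that the outcome r1 = r2 = 0, which weaker models such as x86-TSO permit, never arises under SC.

```python
from itertools import permutations

# Store-buffering litmus test. Each thread is a list of events.
T1 = [("store", "x", 1), ("load", "y", "r1")]
T2 = [("store", "y", 1), ("load", "x", "r2")]

def sc_outcomes():
    """All (r1, r2) outcomes under sequential consistency."""
    events = [(0, i) for i in range(len(T1))] + \
             [(1, i) for i in range(len(T2))]
    outcomes = set()
    for order in permutations(events):
        pos = {e: k for k, e in enumerate(order)}
        # Keep only interleavings that preserve per-thread program order.
        if pos[(0, 0)] > pos[(0, 1)] or pos[(1, 0)] > pos[(1, 1)]:
            continue
        mem = {"x": 0, "y": 0}
        regs = {}
        for tid, i in order:
            op, loc, val = (T1 if tid == 0 else T2)[i]
            if op == "store":
                mem[loc] = val
            else:
                regs[val] = mem[loc]
        outcomes.add((regs["r1"], regs["r2"]))
    return outcomes
```

Enumerating executions like this blows up quickly; the paper's contribution is precisely to replace such enumeration with constraints over executions solved in Alloy.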

Widder, Josef 
POPL '17: "A Short Counterexample Property ..."
A Short Counterexample Property for Safety and Liveness Verification of Fault-Tolerant Distributed Algorithms
Igor Konnov, Marijana Lazić, Helmut Veith, and Josef Widder (Vienna University of Technology, Austria)
Distributed algorithms have many mission-critical applications
ranging from embedded systems and replicated databases to cloud
computing. Due to asynchronous communication, process faults, or
network failures, these algorithms are difficult to design and
verify. Many algorithms achieve fault tolerance by using
threshold guards that, for instance, ensure that a process waits
until it has received an acknowledgment from a majority of its
peers. Consequently, domain-specific languages for
fault-tolerant distributed systems offer language support for
threshold guards.
We introduce an automated method for model checking of safety and
liveness of threshold-guarded distributed algorithms in systems
where the number of processes and the fraction of faulty
processes are parameters. Our method is based on a short
counterexample property: if a distributed algorithm violates a
temporal specification (in a fragment of LTL), then there is a
counterexample whose length is bounded and independent of the
parameters. We prove this property by (i) characterizing
executions depending on the structure of the temporal formula,
and (ii) using commutativity of transitions to accelerate and
shorten executions. We extended the ByMC toolset (Byzantine
Model Checker) with our technique, and verified liveness and
safety of 10 prominent fault-tolerant distributed algorithms,
most of which were out of reach for existing techniques.
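A threshold guard of the kind mentioned above, and the quorum-intersection argument it encodes, can be sketched concretely. This is only an illustration of the guard itself; the paper's contribution is parameterized model checking of whole algorithms built from such guards. The common resilience condition n > 3t (n processes, at most t Byzantine) is assumed.

```python
from itertools import combinations

def can_accept(received, n, t):
    """Threshold guard: act once more than (n + t)/2 messages arrived."""
    return received > (n + t) / 2

def quorums_intersect_in_correct(n, t):
    """Brute-force check that any two guard-satisfying message sets
    share more than t senders, hence at least one *correct* process --
    the invariant the guard is designed to enforce."""
    size = min(k for k in range(n + 1) if can_accept(k, n, t))
    for q1 in combinations(range(n), size):
        for q2 in combinations(range(n), size):
            if len(set(q1) & set(q2)) <= t:
                return False
    return True
```

For example, with n = 4 and t = 1 the guard requires 3 messages, and any two sets of 3 out of 4 processes overlap in at least 2 processes, so at least one is correct.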
@InProceedings{POPL17p719,
author = {Igor Konnov and Marijana Lazić and Helmut Veith and Josef Widder},
title = {A Short Counterexample Property for Safety and Liveness Verification of Fault-Tolerant Distributed Algorithms},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {719--734},
doi = {10.1145/3009837.3009860},
year = {2017},
}


Wilson, Todd 
POPL '17: "Computational Higher-Dimensional ..."
Computational Higher-Dimensional Type Theory
Carlo Angiuli, Robert Harper, and Todd Wilson (Carnegie Mellon University, USA; California State University at Fresno, USA)
Formal constructive type theory has proved to be an effective language for mechanized proof. By avoiding non-constructive principles, such as the law of the excluded middle, type theory admits sharper proofs and broader interpretations of results. From a computer science perspective, interest in type theory arises from its applications to programming languages. Standard constructive type theories used in mechanization admit computational interpretations based on metamathematical normalization theorems. These proofs are notoriously brittle; any change to the theory potentially invalidates its computational meaning. As a case in point, Voevodsky's univalence axiom raises questions about the computational meaning of proofs.
We consider the question: Can higher-dimensional type theory be construed as a programming language? We answer this question affirmatively by providing a direct, deterministic operational interpretation for a representative higher-dimensional dependent type theory with higher inductive types and an instance of univalence. Rather than being a formal type theory defined by rules, it is instead a computational type theory in the sense of Martin-Löf's meaning explanations and of the NuPRL semantics. The definition of the type theory starts with programs; types are specifications of program behavior. The main result is a canonicity theorem stating that closed programs of boolean type evaluate to true or false.
@InProceedings{POPL17p680,
author = {Carlo Angiuli and Robert Harper and Todd Wilson},
title = {Computational Higher-Dimensional Type Theory},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {680--693},
doi = {10.1145/3009837.3009861},
year = {2017},
}


Wu, Xiaodi 
POPL '17: "Invariants of Quantum Programs: ..."
Invariants of Quantum Programs: Characterisations and Generation
Mingsheng Ying, Shenggang Ying, and Xiaodi Wu (University of Technology Sydney, Australia; Tsinghua University, China; Institute of Software at Chinese Academy of Sciences, China; University of Oregon, USA)
Program invariant is a fundamental notion widely used in program verification and analysis. The aim of this paper is twofold: (i) find an appropriate definition of invariants for quantum programs; and (ii) develop an effective technique of invariant generation for verification and analysis of quantum programs.
Interestingly, the notion of invariant can be defined for quantum programs in two different ways, as additive invariants and multiplicative invariants, corresponding to two interpretations of implication in a continuous-valued logic: the Łukasiewicz implication and the Gödel implication. It is shown that both of them can be used to establish partial correctness of quantum programs.
The problem of generating additive invariants of quantum programs is addressed by reducing it to an SDP (Semidefinite Programming) problem. This approach is applied with an SDP solver to generate invariants of two important quantum algorithms: quantum walk and quantum Metropolis sampling. Our examples show that the generated invariants can be used to verify correctness of these algorithms and are helpful in optimising quantum Metropolis sampling.
To our knowledge, this paper is the first attempt to define the notion of invariant and to develop a method of invariant generation for quantum programs.
@InProceedings{POPL17p818,
author = {Mingsheng Ying and Shenggang Ying and Xiaodi Wu},
title = {Invariants of Quantum Programs: Characterisations and Generation},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {818--832},
doi = {10.1145/3009837.3009840},
year = {2017},
}


Xia, Liyao 
POPL '17: "Beginner's Luck: A Language ..."
Beginner's Luck: A Language for Property-Based Generators
Leonidas Lampropoulos, Diane Gallois-Wong, Cătălin Hriţcu, John Hughes, Benjamin C. Pierce, and Liyao Xia (University of Pennsylvania, USA; Inria, France; ENS, France; Chalmers University of Technology, Sweden)
Property-based random testing à la QuickCheck requires building efficient generators for well-distributed random data satisfying complex logical predicates, but writing these generators can be difficult and error prone. We propose a domain-specific language in which generators are conveniently expressed by decorating predicates with lightweight annotations to control both the distribution of generated values and the amount of constraint solving that happens before each variable is instantiated. This language, called Luck, makes generators easier to write, read, and maintain.
We give Luck a formal semantics and prove several fundamental properties, including the soundness and completeness of random generation with respect to a standard predicate semantics. We evaluate Luck on common examples from the property-based testing literature and on two significant case studies, showing that it can be used in complex domains with comparable bug-finding effectiveness and a significant reduction in testing code size compared to handwritten generators.
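The problem Luck addresses can be seen in miniature by contrasting naive generate-and-filter with a generator derived from the predicate itself. The sketch below is Python, not Luck syntax, and the predicate (sorted lists) is a deliberately simple stand-in for the complex invariants the paper targets.

```python
import random

def is_sorted(xs):
    """The predicate the generated data must satisfy."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def gen_rejection(size, rng):
    """Generate-and-filter: simple, but wasteful for sparse predicates."""
    while True:
        xs = [rng.randint(0, 100) for _ in range(size)]
        if is_sorted(xs):
            return xs

def gen_constrained(size, rng):
    """Instantiate each variable under the predicate's constraint,
    the way a generator compiled from the predicate would: every
    sample is valid by construction."""
    xs, lo = [], 0
    for _ in range(size):
        lo = rng.randint(lo, 100)
        xs.append(lo)
    return xs
```

Rejection sampling becomes hopeless as `size` grows (a random length-10 list is sorted with probability 1/10!), while the constrained generator always succeeds; controlling *where* constraint solving happens is exactly what Luck's annotations expose.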
@InProceedings{POPL17p114,
author = {Leonidas Lampropoulos and Diane Gallois-Wong and Cătălin Hriţcu and John Hughes and Benjamin C. Pierce and Liyao Xia},
title = {Beginner's Luck: A Language for Property-Based Generators},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {114--129},
doi = {10.1145/3009837.3009868},
year = {2017},
}


Ying, Mingsheng 
POPL '17: "Invariants of Quantum Programs: ..."
Invariants of Quantum Programs: Characterisations and Generation
Mingsheng Ying, Shenggang Ying, and Xiaodi Wu (University of Technology Sydney, Australia; Tsinghua University, China; Institute of Software at Chinese Academy of Sciences, China; University of Oregon, USA)
Program invariant is a fundamental notion widely used in program verification and analysis. The aim of this paper is twofold: (i) find an appropriate definition of invariants for quantum programs; and (ii) develop an effective technique of invariant generation for verification and analysis of quantum programs.
Interestingly, the notion of invariant can be defined for quantum programs in two different ways, as additive invariants and multiplicative invariants, corresponding to two interpretations of implication in a continuous-valued logic: the Łukasiewicz implication and the Gödel implication. It is shown that both of them can be used to establish partial correctness of quantum programs.
The problem of generating additive invariants of quantum programs is addressed by reducing it to an SDP (Semidefinite Programming) problem. This approach is applied with an SDP solver to generate invariants of two important quantum algorithms: quantum walk and quantum Metropolis sampling. Our examples show that the generated invariants can be used to verify correctness of these algorithms and are helpful in optimising quantum Metropolis sampling.
To our knowledge, this paper is the first attempt to define the notion of invariant and to develop a method of invariant generation for quantum programs.
@InProceedings{POPL17p818,
author = {Mingsheng Ying and Shenggang Ying and Xiaodi Wu},
title = {Invariants of Quantum Programs: Characterisations and Generation},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {818--832},
doi = {10.1145/3009837.3009840},
year = {2017},
}


Ying, Shenggang 
POPL '17: "Invariants of Quantum Programs: ..."
Invariants of Quantum Programs: Characterisations and Generation
Mingsheng Ying, Shenggang Ying, and Xiaodi Wu (University of Technology Sydney, Australia; Tsinghua University, China; Institute of Software at Chinese Academy of Sciences, China; University of Oregon, USA)
Program invariant is a fundamental notion widely used in program verification and analysis. The aim of this paper is twofold: (i) find an appropriate definition of invariants for quantum programs; and (ii) develop an effective technique of invariant generation for verification and analysis of quantum programs.
Interestingly, the notion of invariant can be defined for quantum programs in two different ways, as additive invariants and multiplicative invariants, corresponding to two interpretations of implication in a continuous-valued logic: the Łukasiewicz implication and the Gödel implication. It is shown that both of them can be used to establish partial correctness of quantum programs.
The problem of generating additive invariants of quantum programs is addressed by reducing it to an SDP (Semidefinite Programming) problem. This approach is applied with an SDP solver to generate invariants of two important quantum algorithms: quantum walk and quantum Metropolis sampling. Our examples show that the generated invariants can be used to verify correctness of these algorithms and are helpful in optimising quantum Metropolis sampling.
To our knowledge, this paper is the first attempt to define the notion of invariant and to develop a method of invariant generation for quantum programs.
@InProceedings{POPL17p818,
author = {Mingsheng Ying and Shenggang Ying and Xiaodi Wu},
title = {Invariants of Quantum Programs: Characterisations and Generation},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {818--832},
doi = {10.1145/3009837.3009840},
year = {2017},
}


Yoshida, Nobuko 
POPL '17: "Fencing off Go: Liveness and ..."
Fencing off Go: Liveness and Safety for Channel-Based Programming
Julien Lange, Nicholas Ng, Bernardo Toninho, and Nobuko Yoshida (Imperial College London, UK)
Go is a production-level statically typed programming language whose design
features explicit message-passing primitives and lightweight threads, enabling
(and encouraging) programmers to develop concurrent systems where components
interact through communication more so than by lock-based shared memory
concurrency. Go can only detect global deadlocks at runtime, but provides no
compile-time protection against all-too-common communication mismatches or
partial deadlocks. This work develops a static verification framework for
liveness and safety in Go programs, able to detect communication errors and
partial deadlocks in a general class of realistic concurrent programs,
including those with dynamic channel creation, unbounded thread creation and
recursion. Our approach infers from a Go program a faithful representation of
its communication patterns as a behavioural type. By checking a syntactic
restriction on channel usage, dubbed fencing, we ensure that programs are made
up of finitely many different communication patterns that may be repeated
infinitely many times. This restriction allows us to implement a decision
procedure for liveness and safety in types which in turn statically ensures
liveness and safety in Go programs. We have implemented the type inference and
decision procedures in a tool-chain and tested it against publicly available Go
programs.
@InProceedings{POPL17p748,
author = {Julien Lange and Nicholas Ng and Bernardo Toninho and Nobuko Yoshida},
title = {Fencing off Go: Liveness and Safety for Channel-Based Programming},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {748--761},
doi = {10.1145/3009837.3009847},
year = {2017},
}


Yoshimizu, Akira 
POPL '17: "The Geometry of Parallelism: ..."
The Geometry of Parallelism: Classical, Probabilistic, and Quantum Effects
Ugo Dal Lago, Claudia Faggian, Benoît Valiron, and Akira Yoshimizu (University of Bologna, Italy; Inria, France; CNRS, France; University of Paris Diderot, France; University of Paris-Saclay, France; University of Tokyo, Japan) We introduce a Geometry of Interaction model for higher-order quantum computation, and prove its adequacy for a fully fledged quantum programming language in which entanglement, duplication, and recursion are all available. This model is an instance of a new framework which captures not only quantum but also classical and probabilistic computation. Its main feature is the ability to model commutative effects in a parallel setting. Our model comes with a multi-token machine, a proof net system, and a PCF-style language. Being based on a multi-token machine equipped with a memory, it has a concrete nature which makes it well suited for building low-level operational descriptions of higher-order languages.

Yu, Tingting 
POPL '17: "Analyzing Divergence in Bisimulation ..."
Analyzing Divergence in Bisimulation Semantics
Xinxin Liu, Tingting Yu, and Wenhui Zhang (Institute of Software at Chinese Academy of Sciences, China)
Some bisimulation-based abstract equivalence relations may equate divergent systems with non-divergent ones, examples including weak bisimulation equivalence and branching bisimulation equivalence. Thus extra effort is needed to analyze divergence for the compared systems. In this paper we propose a new method for analyzing divergence in bisimulation semantics, which relies only on simple observations of individual transitions. We show that this method can verify several typical divergence-preserving bisimulation equivalences, including two well-known ones. As an application case study, we use the proposed method to verify the HSY collision stack and draw the conclusion that the stack implementation is correct in terms of linearizability with the lock-free progress condition.
@InProceedings{POPL17p735,
author = {Xinxin Liu and Tingting Yu and Wenhui Zhang},
title = {Analyzing Divergence in Bisimulation Semantics},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {735--747},
doi = {10.1145/3009837.3009870},
year = {2017},
}


Zdancewic, Steve 
POPL '17: "QWIRE: A Core Language for ..."
QWIRE: A Core Language for Quantum Circuits
Jennifer Paykin, Robert Rand, and Steve Zdancewic (University of Pennsylvania, USA)
This paper introduces QWIRE (``choir''), a language for defining quantum
circuits and an interface for manipulating them inside of an arbitrary
classical host language. QWIRE is minimal (it contains only a
few primitives) and sound with respect to the physical properties entailed by
quantum mechanics. At the same time, QWIRE is expressive and highly modular
due to its relationship with the host language, mirroring the QRAM model
of computation that places a quantum computer (controlled by circuits)
alongside a classical computer (controlled by the host language).
We present QWIRE along with its type system and operational semantics, which
we prove is safe and strongly normalizing whenever the host language is. We
give circuits a denotational semantics in terms of density matrices. Throughout, we
investigate examples that demonstrate the expressive power of QWIRE, including
extensions to the host language that (1) expose a general analysis framework
for circuits, and (2) provide dependent types.
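The density-matrix semantics mentioned above can be illustrated on the smallest possible example. The sketch below is plain Python (not QWIRE or its formalization), applying a Hadamard gate to the density matrix of |0⟩, whose diagonal then gives the 50/50 measurement probabilities.

```python
import math

# Minimal 2x2 complex matrix helpers (illustrative; a real tool would
# use a linear-algebra library).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dagger(A):
    """Conjugate transpose."""
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
rho0 = [[1, 0], [0, 0]]            # density matrix of |0>

# A unitary gate U acts on a density matrix as rho -> U rho U^dagger.
rho1 = matmul(matmul(H, rho0), dagger(H))
# The diagonal of rho1 gives measurement probabilities: 1/2 and 1/2.
```

Denoting circuits by such completely positive maps is what lets QWIRE's metatheory speak about measurement and mixed states uniformly.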
@InProceedings{POPL17p846,
author = {Jennifer Paykin and Robert Rand and Steve Zdancewic},
title = {QWIRE: A Core Language for Quantum Circuits},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {846--858},
doi = {10.1145/3009837.3009894},
year = {2017},
}


Zhang, Danfeng 
POPL '17: "LightDP: Towards Automating ..."
LightDP: Towards Automating Differential Privacy Proofs
Danfeng Zhang and Daniel Kifer (Pennsylvania State University, USA)
The growing popularity and adoption of differential privacy in academic and industrial settings has resulted in the development of increasingly sophisticated algorithms for releasing information while preserving privacy. Accompanying this phenomenon is the natural rise in the development and publication of incorrect algorithms, thus demonstrating the necessity of formal verification tools. However, existing formal methods for differential privacy face a dilemma: methods based on customized logics can verify sophisticated algorithms but come with a steep learning curve and significant annotation burden on the programmers, while existing programming platforms lack expressive power for some sophisticated algorithms.
In this paper, we present LightDP, a simple imperative language that strikes a better balance between expressive power and usability. The core of LightDP is a novel relational type system that separates relational reasoning from privacy budget calculations. With dependent types, the type system is powerful enough to verify sophisticated algorithms where the composition theorem falls short. In addition, the inference engine of LightDP infers most of the proof details, and even searches for the proof with minimal privacy cost when multiple proofs exist. We show that LightDP verifies sophisticated algorithms with little manual effort.
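The mechanics that such proofs reason about can be sketched concretely: the Laplace mechanism with explicit privacy-budget accounting, the composition-theorem view that LightDP refines. The class and method names below are illustrative, not LightDP syntax or its verified library.

```python
import math
import random

class PrivateCounter:
    """Illustrative epsilon-budget accounting around the Laplace mechanism."""

    def __init__(self, epsilon_total, rng):
        self.budget = epsilon_total
        self.rng = rng

    def laplace(self, scale):
        # Inverse-CDF sampling of Laplace(0, scale).
        u = self.rng.random() - 0.5
        sign = 1 if u >= 0 else -1
        return -scale * sign * math.log(1 - 2 * abs(u))

    def noisy_count(self, true_count, epsilon):
        """Release a count with sensitivity 1, spending `epsilon`."""
        if epsilon > self.budget:
            raise RuntimeError("privacy budget exhausted")
        self.budget -= epsilon
        return true_count + self.laplace(1.0 / epsilon)
```

Simple composition charges every query against the budget like this; the point of LightDP's relational type system is to verify algorithms (e.g., sparse vector) whose true cost is far below what this accounting suggests.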
@InProceedings{POPL17p888,
author = {Danfeng Zhang and Daniel Kifer},
title = {LightDP: Towards Automating Differential Privacy Proofs},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {888--901},
doi = {10.1145/3009837.3009884},
year = {2017},
}


Zhang, Qirun 
POPL '17: "Context-Sensitive Data-Dependence ..."
Context-Sensitive Data-Dependence Analysis via Linear Conjunctive Language Reachability
Qirun Zhang and Zhendong Su (University of California at Davis, USA) Many program analysis problems can be formulated as graph reachability problems. In the literature, context-free language (CFL) reachability has been the most popular formulation and can be computed in subcubic time. The context-sensitive data-dependence analysis is a fundamental abstraction that can express a broad range of program analysis problems. It essentially describes an interleaved matched-parenthesis language reachability problem. The language is not context-free, and the problem is well-known to be undecidable. In practice, many program analyses adopt CFL-reachability to exactly model the matched parentheses for either context-sensitivity or structure-transmitted data-dependence, but not both. Thus, the CFL-reachability formulation for context-sensitive data-dependence analysis is inherently an approximation. To support more precise and scalable analyses, this paper introduces linear conjunctive language (LCL) reachability, a new, expressive class of graph reachability. LCL not only contains the interleaved matched-parenthesis language, but is also closed under all set-theoretic operations. Given a graph with n nodes and m edges, we propose an O(mn) time approximation algorithm for solving all-pairs LCL-reachability, which is asymptotically better than known CFL-reachability algorithms. Our formulation and algorithm offer a new perspective on attacking the aforementioned undecidable problem: the LCL-reachability formulation is exact, while the LCL-reachability algorithm yields a sound approximation. We have applied the LCL-reachability framework to two existing client analyses. The experimental results show that the LCL-reachability framework is both more precise and scalable than the traditional CFL-reachability framework. This paper opens up the opportunity to exploit LCL-reachability in program analysis.
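The baseline the paper improves on, CFL-reachability, can be sketched for a one-parenthesis Dyck language with the grammar D → ε | D D | "(" D ")". This toy worklist algorithm handles a single matched-parenthesis kind; the paper's point is precisely that the *interleaved* matched-parenthesis language needed for simultaneous context- and structure-sensitivity is beyond CFL.

```python
from collections import deque

def dyck_reachability(nodes, edges):
    """edges: set of (u, label, v) with label in {"(", ")"}.
    Returns the set of (u, v) connected by a D-path."""
    D = {(u, u) for u in nodes}                  # D -> eps
    work = deque(D)

    def add(pair):
        if pair not in D:
            D.add(pair)
            work.append(pair)

    while work:
        u, v = work.popleft()
        # D -> "(" D ")": an open edge into u, a close edge out of v.
        for (a, l1, b) in edges:
            if l1 == "(" and b == u:
                for (c, l2, d) in edges:
                    if l2 == ")" and c == v:
                        add((a, d))
        # D -> D D: compose with existing D-pairs on either side.
        for (x, y) in list(D):
            if y == u:
                add((x, v))
            if v == x:
                add((u, y))
    return D
```

On a graph with an open edge 0 → 1 and a close edge 1 → 2, the algorithm derives the matched pair (0, 2) but not (0, 1); production CFL-reachability solvers are much more engineered but follow the same saturation pattern.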

Zhang, Wenhui 
POPL '17: "Analyzing Divergence in Bisimulation ..."
Analyzing Divergence in Bisimulation Semantics
Xinxin Liu, Tingting Yu, and Wenhui Zhang (Institute of Software at Chinese Academy of Sciences, China)
Some bisimulation-based abstract equivalence relations may equate divergent systems with non-divergent ones, examples including weak bisimulation equivalence and branching bisimulation equivalence. Thus extra effort is needed to analyze divergence for the compared systems. In this paper we propose a new method for analyzing divergence in bisimulation semantics, which relies only on simple observations of individual transitions. We show that this method can verify several typical divergence-preserving bisimulation equivalences, including two well-known ones. As an application case study, we use the proposed method to verify the HSY collision stack and draw the conclusion that the stack implementation is correct in terms of linearizability with the lock-free progress condition.
@InProceedings{POPL17p735,
author = {Xinxin Liu and Tingting Yu and Wenhui Zhang},
title = {Analyzing Divergence in Bisimulation Semantics},
booktitle = {Proc.\ POPL},
publisher = {ACM},
pages = {735--747},
doi = {10.1145/3009837.3009870},
year = {2017},
}


Žikelić, Ðorđe 
POPL '17: "Stochastic Invariants for ..."
Stochastic Invariants for Probabilistic Termination
Krishnendu Chatterjee, Petr Novotný, and Ðorđe Žikelić (IST Austria, Austria; University of Cambridge, UK) Termination is one of the basic liveness properties, and we study the termination problem for probabilistic programs with real-valued variables. Previous works focused on the qualitative problem that asks whether an input program terminates with probability 1 (almost-sure termination). A powerful approach for this qualitative problem is the notion of ranking supermartingales with respect to a given set of invariants. The quantitative problem (probabilistic termination) asks for bounds on the termination probability, and this problem has not been addressed yet. A fundamental and conceptual drawback of the existing approaches to address probabilistic termination is that even though the supermartingales consider the probabilistic behaviour of the programs, the invariants are obtained completely ignoring the probabilistic aspect (i.e., the invariants are obtained considering all behaviours with no information about the probability). In this work we address the probabilistic termination problem for linear-arithmetic probabilistic programs with nondeterminism. We formally define the notion of stochastic invariants, which are constraints along with a probability bound that the constraints hold. We introduce a concept of repulsing supermartingales. First, we show that repulsing supermartingales can be used to obtain bounds on the probability of the stochastic invariants. Second, we show the effectiveness of repulsing supermartingales in the following three ways: (1) With a combination of ranking and repulsing supermartingales we can compute lower bounds on the probability of termination; (2) repulsing supermartingales provide witnesses for refutation of almost-sure termination; and (3) with a combination of ranking and repulsing supermartingales we can establish persistence properties of probabilistic programs. 
Along with our conceptual contributions, we establish the following computational results: First, the synthesis of a stochastic invariant which supports some ranking supermartingale and at the same time admits a repulsing supermartingale can be achieved via reduction to the existential first-order theory of reals, which generalizes existing results from the non-probabilistic setting. Second, given a program with “strict invariants” (e.g., obtained via abstract interpretation) and a stochastic invariant, we can check in polynomial time whether there exists a linear repulsing supermartingale w.r.t. the stochastic invariant (via reduction to LP). We also present an experimental evaluation of our approach on academic examples. 
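The quantity being bounded, a termination probability strictly between 0 and 1, can be seen on a classic example. The Monte-Carlo simulation below only illustrates the quantity; the paper computes such bounds statically via supermartingales. A walk starting at x = 1 that steps up with probability 2/3 and terminates on reaching 0 has termination probability (1/3)/(2/3) = 1/2 by the gambler's-ruin formula.

```python
import random

def terminates(rng, p_up=2/3, x0=1, max_steps=1000):
    """Simulate one run of the biased random walk; runs that survive
    max_steps are counted as non-terminating (a tiny truncation bias)."""
    x = x0
    for _ in range(max_steps):
        if x <= 0:
            return True
        x += 1 if rng.random() < p_up else -1
    return x <= 0

rng = random.Random(42)
trials = 2000
est = sum(terminates(rng) for _ in range(trials)) / trials
# est should be close to the true termination probability 1/2.
```

A ranking supermartingale certifies progress on the terminating runs, while a repulsing supermartingale bounds the probability of the runs that drift away; combined, they yield the kind of lower bound this estimate only samples.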
192 authors