7th ACM SIGPLAN International Conference on Certified Programs and Proofs (CPP 2018),
January 8–9, 2018,
Los Angeles, CA, USA
Frontmatter
Welcome from the Chairs
It is our great pleasure to welcome you to CPP 2018, the 7th ACM
SIGPLAN Conference on Certified Programs and Proofs, held on January
8-9, 2018, in Los Angeles, California, USA, in cooperation with ACM
SIGLOG.
Invited Talks
POPLMark Reloaded: Mechanizing Logical Relations Proofs (Invited Talk)
Brigitte Pientka
(McGill University, Canada)
Mechanizing formal systems, given via axioms and inference rules,
together with proofs about them plays an important role in
establishing trust in formal developments. Over the past decade, the
POPLMark challenge popularized the use of proof assistants in
mechanizing the metatheory of programming languages. Focusing on the
meta-theory of System F with subtyping, it allowed the programming
languages community to survey existing techniques to represent and
reason about syntactic structures with binders and promote the use of
proof assistants. Today, mechanizing proofs is a stable fixture in the
daily life of programming languages researchers.
As a follow-up to the POPLMark Challenge, we propose a new collection of
benchmarks that use proofs by logical relations. Such proofs are now used to attack
problems in the theory of complex language models, with applications
to issues in equivalence of programs, compiler correctness,
representation independence and even more intensional properties such
as non-interference, differential privacy and secure multi-language
inter-operability. Yet, they remain challenging to mechanize.
In this talk, we focus on one particular challenge problem, namely
strong normalization of a simply-typed lambda-calculus with a proof by
Kripke-style logical relations. We will advocate a modern view of this
well-understood problem by formulating our logical relation on
well-typed terms. Using this case study, we share some of the lessons
learned tackling this challenge problem in Beluga, a proof environment
that supports higher-order abstract syntax encodings, first-class
context and first-class substitutions. We also discuss and highlight
similarities, strategies, and limitations in other proof assistants
when tackling this challenge problem.
We hope others will be motivated to submit solutions! The goal of this
talk is to engage the community in discussions on what support in
proof environments is needed to truly bring mechanized metatheory to
the masses.
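As a rough illustration of the challenge problem, the logical relation for strong normalization can be defined by recursion on types. The following is a minimal Coq sketch with the term language left abstract; the benchmark itself targets proof environments such as Beluga and formulates the relation on well-typed terms.

```coq
(* Hedged sketch: strong normalization via a logical relation, with the
   term language left abstract. This untyped skeleton only shows the shape;
   the benchmark formulates the relation on well-typed terms. *)
Parameter tm : Type.                 (* terms, left abstract *)
Parameter app : tm -> tm -> tm.      (* application *)
Parameter step : tm -> tm -> Prop.   (* one-step reduction *)

(* A term is strongly normalizing if all of its reducts are. *)
Inductive SN : tm -> Prop :=
| SNI : forall t, (forall t', step t t' -> SN t') -> SN t.

Inductive ty : Type :=
| Base : ty
| Arr : ty -> ty -> ty.

(* The logical relation: reducibility is defined by recursion on types. *)
Fixpoint Red (T : ty) (t : tm) : Prop :=
  match T with
  | Base => SN t
  | Arr T1 T2 => SN t /\ (forall u, Red T1 u -> Red T2 (app t u))
  end.
```

The main lemma then shows that every well-typed term is in the relation at its type, from which strong normalization follows.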
Article Search
Efficient Certification of Complexity Proofs: Formalizing the Perron–Frobenius Theorem (Invited Talk Paper)
Jose Divasón, Sebastiaan Joosten, Ondřej Kunčar, René Thiemann, and
Akihisa Yamada
(University of La Rioja, Spain; University of Twente, Netherlands; TU Munich, Germany; University of Innsbruck, Austria; National Institute of Informatics, Japan)
Matrix interpretations are widely used in automated complexity analysis. Certifying such analyses boils down to determining the growth rate of A^{n} for a fixed non-negative rational matrix A. A direct solution for this task involves the computation of all eigenvalues of A, which often leads to expensive algebraic number computations.
In this work we formalize the Perron–Frobenius theorem. We utilize the theorem to avoid most of the algebraic number computations needed for certifying complexity analyses, so that our new algorithm needs only rational arithmetic when certifying complexity proofs that existing tools can find. To cover the theorem in full generality, we establish a connection between two different Isabelle/HOL libraries on matrices, enabling an easy exchange of theorems between both libraries. This connection crucially relies on the transfer mechanism in combination with local type definitions, and is a non-trivial case study for these Isabelle tools.
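The connection between eigenvalues and growth rates alluded to above can be summarized as follows (a standard textbook paraphrase, not the paper's exact statements):

```latex
% For a non-negative real square matrix $A$, Perron--Frobenius yields that
% the spectral radius is itself an eigenvalue:
\rho(A) = \max\{\, |\lambda| : \lambda \text{ an eigenvalue of } A \,\}
          \in \operatorname{Spec}(A).
% Via the Jordan normal form, the entries of $A^n$ satisfy
(A^n)_{ij} \in \mathcal{O}\!\left(\rho(A)^n \, n^{d-1}\right),
% where $d$ bounds the size of the Jordan blocks for eigenvalues of
% modulus $\rho(A)$. In particular, the entries of $A^n$ are polynomially
% bounded iff $\rho(A) \le 1$.
```

This is why certification can focus on the dominant (Perron) eigenvalue rather than computing all eigenvalues as algebraic numbers.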
Article Search
Verifying Programs and Systems
Total Haskell is Reasonable Coq
Antal Spector-Zabusky, Joachim Breitner, Christine Rizkallah, and Stephanie Weirich
(University of Pennsylvania, USA)
We would like to use the Coq proof assistant to mechanically verify properties of Haskell programs. To that end, we present a tool, named hs-to-coq, that translates total Haskell programs into Coq programs via a shallow embedding. We apply our tool in three case studies – a lawful Monad instance, “Hutton’s razor”, and an existing data structure library – and prove their correctness. These examples show that this approach is viable: both that hs-to-coq applies to existing Haskell code, and that the output it produces is amenable to verification.
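To give a flavour of a shallow embedding, here is a hand-written illustration (not hs-to-coq's actual output, which differs in naming and typeclass handling): the total Haskell function `map` corresponds directly to a structurally recursive Gallina definition, about which properties can then be proved.

```coq
(* Hand-written illustration of a shallow embedding; hs-to-coq's real
   output differs in naming and typeclass handling.
   Haskell source:
     map f []     = []
     map f (x:xs) = f x : map f xs                                   *)
Fixpoint map' {A B : Type} (f : A -> B) (xs : list A) : list B :=
  match xs with
  | nil => nil
  | cons x xs' => cons (f x) (map' f xs')
  end.

(* A typical correctness lemma one can now prove about the embedding: *)
Lemma map'_length : forall (A B : Type) (f : A -> B) (xs : list A),
    length (map' f xs) = length xs.
Proof. induction xs; simpl; congruence. Qed.
```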
Article Search
Info
A Formal Proof in Coq of a Control Function for the Inverted Pendulum
Damien Rouhling
(University of Côte d'Azur, France; Inria, France)
Control theory provides techniques to design controllers, or control functions, for dynamical systems with inputs, so as to grant a particular behaviour of such a system. The inverted pendulum is a classic system in control theory: it is used as a benchmark for nonlinear control techniques and is a model for several other systems with various applications. We formalized in the Coq proof assistant the proof of soundness of a control function for the inverted pendulum. This is a first step towards the formal verification of more complex systems for which safety may be critical.
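For reference, a textbook model of the inverted pendulum on a cart reads as follows. This is an illustrative assumption: the exact system and sign conventions formalised in the paper may differ.

```latex
% Textbook cart--pole dynamics (illustrative; conventions may differ from
% the formalised development). Cart position $x$, pole angle $\theta$
% measured from the upright vertical, cart mass $M$, pole mass $m$,
% pole length $l$, input force $F$:
(M + m)\ddot{x} + m l \ddot{\theta}\cos\theta
                - m l \dot{\theta}^{2}\sin\theta = F, \qquad
l\ddot{\theta} + \ddot{x}\cos\theta - g\sin\theta = 0.
% Soundness of a control function $F = u(x,\dot{x},\theta,\dot{\theta})$
% means that closed-loop trajectories from a given region of initial
% states converge to the upright equilibrium
% $(x,\dot{x},\theta,\dot{\theta}) = (0,0,0,0)$.
```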
Article Search
Completeness and Decidability of Converse PDL in the Constructive Type Theory of Coq
Christian Doczkal and Joachim Bard
(University of Lyon, France; CNRS, France; ENS Lyon, France; Claude Bernard University Lyon 1, France; LIP, France; Saarland University, Germany)
The completeness proofs for Propositional Dynamic Logic (PDL) in the literature are non-constructive and usually presented in an informal manner. We obtain a formal and constructive completeness proof for Converse PDL by recasting a completeness proof by Kozen and Parikh into our constructive setting. We base our proof on a Pratt-style decision method for satisfiability constructing finite models for satisfiable formulas and pruning refutations for unsatisfiable formulas. Completeness of Segerberg's axiomatization of PDL is then obtained by translating pruning refutations to derivations in the Hilbert system. We first treat PDL without converse and then extend the proofs to Converse PDL. All results are formalized in Coq/Ssreflect.
Article Search
Artifacts Available
Verified Applications
Mechanising and Verifying the WebAssembly Specification
Conrad Watt
(University of Cambridge, UK)
WebAssembly is a new low-level language currently being implemented in all major web browsers. It is designed to become the universal compilation target for the web, obsoleting existing solutions in this area, such as asm.js and Native Client. The WebAssembly working group has incorporated formal techniques into the development of the language, but their efforts so far have focussed on pen and paper formal specification.
We present a mechanised Isabelle specification for the WebAssembly language, together with a verified executable interpreter and type checker. Moreover, we present a fully mechanised proof of the soundness of the WebAssembly type system, and detail how our work on this proof has exposed several issues with the official WebAssembly specification, influencing its development. Finally, we give a brief account of our efforts in performing differential fuzzing of our interpreter against industry implementations.
Article Search
Towards Verifying Ethereum Smart Contract Bytecode in Isabelle/HOL
Sidney Amani, Myriam Bégel, Maksym Bortin, and Mark Staples
(Data61 at CSIRO, Australia; ENS Paris-Saclay, France; University of Paris-Saclay, France; UNSW, Australia)
Blockchain technology has attracted increasing attention in research and across many industries.
The Ethereum blockchain offers smart contracts, which are small programs defined,
executed, and recorded as transactions in the blockchain transaction history.
These smart contracts run on the Ethereum Virtual Machine (EVM) and can be used to encode
agreements, transfer assets, and enforce integrity conditions in relationships between parties.
Smart contracts can carry financial value, and are increasingly
used for safety-, security-, or mission-critical purposes.
Errors in smart contracts have led and will lead to loss or harm.
Formal verification can provide the highest level of confidence about the
correct behaviour of smart contracts.
In this paper we extend an existing EVM formalisation in Isabelle/HOL by a sound
program logic at the level of bytecode.
We structure bytecode sequences into blocks of straight-line
code and create a program logic to reason about these.
This abstraction is a step towards control of the cost and complexity of formal verification
of EVM smart contracts.
Article Search
Mechanising Blockchain Consensus
George Pîrlea and Ilya Sergey
(University College London, UK)
We present the first formalisation of a blockchain-based distributed consensus protocol with a proof of its consistency mechanised in an interactive proof assistant.
Our development includes a reference mechanisation of the block forest data structure, necessary for implementing provably correct per-node protocol logic. We also define a model of a network, implementing the protocol in the form of a replicated state-transition system. The protocol's executions are modeled via a small-step operational semantics for asynchronous message passing, in which packages can be rearranged or duplicated.
In this work, we focus on the notion of global system safety, proving a form of eventual consistency. To do so, we provide a library of theorems about a pure functional implementation of block forests, define an inductive system invariant, and show that, in a quiescent system state, it implies a global agreement on the state of per-node transaction ledgers. Our development is parametric with respect to implementations of several security primitives, such as hash-functions, a notion of a proof object, a Validator Acceptance Function, and a Fork Choice Rule. We precisely characterise the assumptions, made about these components for proving the global system consensus, and discuss their adequacy.
All results described in this paper are formalised in Coq.
Preprint
Info
Artifacts Available
Formal Microeconomic Foundations and the First Welfare Theorem
Cezary Kaliszyk and Julian Parsert
(University of Innsbruck, Austria)
Economic activity has always been a fundamental part of society. With recent social and political changes economics has gained even more influence on our lives. In this paper we formalize two economic models in Isabelle/HOL: the pure exchange economy, where the only economic actors are consumers, as well as a version of the Arrow-Debreu Model, a private ownership economy, which includes production facilities. Interestingly, the definitions of various components of the economic models differ in the economic literature. We therefore show the equivalences and implications between various presentations, which allows us to create an extensible foundation for formalizing microeconomics and game theory compatible with multiple economic theories. We prove the First Theorem of Welfare Economics in both economic models. The theorem is the mathematical formulation of Adam Smith’s famous invisible hand and states that a group of self-interested and rational actors will eventually achieve an efficient allocation of goods. The formal proofs allow us to find more precise assumptions than those found in the economic literature.
Article Search
Proof Methods and Libraries
Triangulating Context Lemmas
Craig McLaughlin, James McKinna, and Ian Stark
(University of Edinburgh, UK)
The idea of a context lemma spans a range of programming-language models: from Milner’s original through the CIU theorem to ‘CIU-like’ results for multiple language features. Each shows that to prove observational equivalence between program terms it is enough to test only some restricted class of contexts: applicative, evaluation, reduction, etc.
We formally reconstruct a distinctive proof method for context lemmas based on cyclic inclusion of three program approximations: by triangulating between ‘applicative’ and ‘logical’ relations we prove that both match the observational notion, while being simpler to compute. Moreover, the observational component of the triangle condenses a series of approximations covering variation in the literature around what variable-capturing structure qualifies as a ‘context’.
Although entirely concrete, our approach involves no term dissection or inspection of reduction sequences; instead we draw on previous context lemmas using operational logical relations and biorthogonality. We demonstrate the method for a fine-grained call-by-value presentation of the simply-typed lambda-calculus, and extend to a CIU result formulated with frame stacks.
All this is formalised and proved in Agda: building on work of Allais et al., we exploit dependent types to specify lambda-calculus terms as well-typed and well-scoped by construction. By doing so, we seek to dispel any lingering anxieties about the manipulation of concrete contexts when reasoning about bound variables, capturing substitution, and observational equivalences.
Article Search
Adapting Proof Automation to Adapt Proofs
Talia Ringer, Nathaniel Yazdani, John Leo, and Dan Grossman
(University of Washington, USA; Halfaya Research, USA)
We extend proof automation in an interactive theorem prover to analyze changes in specifications and proofs. Our approach leverages the history of changes to specifications and proofs to search for a patch that can be applied to other specifications and proofs that need to change in analogous ways.
We identify and implement five core components that are key to searching for a patch. We build a patch finding procedure from these components, which we configure for various classes of changes. We implement this procedure in a Coq plugin as a proof-of-concept and use it on real Coq code to change specifications, port definitions of a type, and update the Coq standard library. We show how our findings help drive a future that moves the burden of dealing with the brittleness of small changes in an interactive theorem prover away from the programmer and into automated tooling.
Preprint
A Monadic Framework for Relational Verification: Applied to Information Security, Program Equivalence, and Optimizations
Niklas Grimm, Kenji Maillard, Cédric Fournet, Cătălin Hriţcu, Matteo Maffei, Jonathan Protzenko, Tahina Ramananandro, Aseem Rastogi, Nikhil Swamy, and Santiago Zanella-Béguelin
(Vienna University of Technology, Austria; Inria, France; ENS Paris, France; Microsoft Research, UK; Microsoft Research, USA; Microsoft Research, India)
Relational properties describe multiple runs of one or more programs. They characterize many useful notions of security, program refinement, and equivalence for programs with diverse computational effects, and they have received much attention in the recent literature. Rather than developing separate tools for special classes of effects and relational properties, we advocate using a general purpose proof assistant as a unifying framework for the relational verification of effectful programs. The essence of our approach is to model effectful computations using monads and to prove relational properties on their monadic representations, making the most of existing support for reasoning about pure programs.
We apply this method in F* and evaluate it by encoding a variety of relational program analyses, including information flow control, program equivalence and refinement at higher order, correctness of program optimizations and game-based cryptographic security. By relying on SMT-based automation, unary weakest preconditions, user-defined effects, and monadic reification, we show that, compared to unary properties, verifying relational properties requires little additional effort from the F* programmer.
Preprint
Formal Proof of Polynomial-Time Complexity with Quasi-Interpretations
Hugo Férée, Samuel Hym, Micaela Mayero, Jean-Yves Moyen, and David Nowak
(University of Kent, UK; University of Lille, France; University of Paris 13, France; University of Copenhagen, Denmark; CNRS, France)
We present a Coq library that allows for readily proving that a function is computable in polynomial time. It is based on quasi-interpretations that, in combination with termination orderings, provide a characterisation of the class FP of functions computable in polynomial time. At the heart of this formalisation is a proof of soundness and extensional completeness. Compared to the original paper proof, we had to fill in many non-trivial details that were left to the reader and fix a few glitches. To demonstrate the usability of our library, we apply it to modular exponentiation.
Article Search
Info
Trusted Verification Frameworks and Systems
A Verified SAT Solver with Watched Literals Using Imperative HOL
Mathias Fleury, Jasmin Christian Blanchette, and Peter Lammich
(Max Planck Institute for Informatics, Germany; Vrije Universiteit Amsterdam, Netherlands; TU Munich, Germany; Virginia Tech, USA)
Based on our earlier formalization of conflict-driven clause learning (CDCL) in Isabelle/HOL, we refine the CDCL calculus to add a crucial optimization: two watched literals. We formalize the data structure and the invariants. Then we refine the calculus to obtain an executable SAT solver. Through a chain of refinements carried out using the Isabelle Refinement Framework, we target Imperative HOL and extract imperative Standard ML code. Although our solver is not competitive with the state of the art, it offers acceptable performance for some applications, and heuristics can be added to improve it further.
Preprint
Œuf: Minimizing the Coq Extraction TCB
Eric Mullen, Stuart Pernsteiner, James R. Wilcox, Zachary Tatlock, and Dan Grossman
(University of Washington, USA)
Verifying systems by implementing them in the programming language of a proof assistant (e.g., Gallina for Coq) lets us directly leverage the full power of the proof assistant for verifying the system. But, to execute such an implementation requires extraction, a large complicated process that is in the trusted computing base (TCB).
This paper presents Œuf, a verified compiler from a subset of Gallina to assembly. Œuf’s correctness theorem ensures that compilation preserves the semantics of the source Gallina program. We describe how Œuf’s specification can be used as a foreign function interface to reason about the interaction between compiled Gallina programs and surrounding shim code. Additionally, Œuf maintains a small TCB for its front-end by reflecting Gallina programs to Œuf source and automatically ensuring equivalence using computational denotation. This design enabled us to implement some early compiler passes (e.g., lambda lifting) in the untrusted reflection and ensure their correctness via translation validation. To evaluate Œuf, we compile Appel’s SHA256 specification from Gallina to x86 and write a shim for the generated code, yielding a verified sha256sum implementation with a small TCB.
Article Search
Proofs in Conflict-Driven Theory Combination
Maria Paola Bonacina, Stéphane Graham-Lengrand, and Natarajan Shankar
(University of Verona, Italy; CNRS, France; Inria, France; École Polytechnique, France; SRI International, USA)
Search-based satisfiability procedures try to construct a model of the input formula by simultaneously proposing candidate models and deriving new formulae implied by the input. When the formulae are satisfiable, these procedures generate a model as a witness. Dually, it is desirable to have a proof when the formulae are unsatisfiable. Conflict-driven procedures perform nontrivial inferences only when resolving conflicts between the formulae and assignments representing the candidate model. CDSAT (Conflict-Driven SATisfiability) is a method for conflict-driven reasoning in combinations of theories. It combines solvers for individual theories as theory modules within a solver for the union of the theories. In this paper we endow CDSAT with lemma learning and proof generation. For the latter, we present two techniques. The first one produces proof objects in memory: it assumes that all theory modules produce proof objects and it accommodates multiple proof formats. The second technique adapts the LCF approach to proofs from interactive theorem proving to conflict-driven SMT-solving and theory combination, by defining a small kernel of reasoning primitives that guarantees that CDSAT proofs are correct by construction.
Article Search
Type Theory, Set Theory, and Formalized Mathematics
Finite Sets in Homotopy Type Theory
Dan Frumin, Herman Geuvers, Léon Gondelman, and Niels van der Weide
(Radboud University Nijmegen, Netherlands)
We study different formalizations of finite sets in homotopy type theory to obtain a general definition that exhibits both the computational facilities and the proof principles expected from finite sets. We use higher inductive types to define the type K(A) of "finite sets over type A" à la Kuratowski without assuming that K(A) has decidable equality. We show how to define basic functions and prove basic properties after which we give two applications of our definition.
On the foundational side, we use K to define the notions of "Kuratowski-finite type" and "Kuratowski-finite subobject", which we contrast with established notions, e.g. Bishop-finite types and enumerated types. We argue that Kuratowski-finiteness is the most general and flexible one of those and we define the usual operations on finite types and subobjects.
From the computational perspective, we show how to use K(A) as an abstract interface for well-known finite set implementations such as tree- and list-like data structures. This implies that a function defined on a concrete finite set implementation can be obtained from a function defined on the abstract finite sets K(A), and that correctness properties are inherited. Hence, HoTT is the ideal setting for data refinement. Besides this, we define bounded quantification, which lifts a decidable property on A to one on K(A).
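The Kuratowski-style construction can be sketched axiomatically in plain Coq. This is a hedged approximation: the paper works in HoTT, where K(A) is a higher inductive type and the equations below are path constructors rather than axioms.

```coq
(* Axiomatic sketch of the Kuratowski finite-set interface. In HoTT,
   K A is a higher inductive type and these equations are paths,
   not axioms over Leibniz equality. *)
Axiom K : Type -> Type.
Axiom empty : forall {A : Type}, K A.
Axiom sing  : forall {A : Type}, A -> K A.
Axiom union : forall {A : Type}, K A -> K A -> K A.

(* Lattice-style laws: (K A, union, empty) is a join-semilattice in which
   every element is a finite union of singletons. *)
Axiom union_assoc : forall {A} (x y z : K A),
    union x (union y z) = union (union x y) z.
Axiom union_comm  : forall {A} (x y : K A), union x y = union y x.
Axiom union_nl    : forall {A} (x : K A), union empty x = x.
Axiom union_idem  : forall {A} (a : A), union (sing a) (sing a) = sing a.
```

Any concrete implementation (e.g. duplicate-free lists or search trees) that validates these laws can then stand in for K(A).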
Article Search
Info
Artifacts Available
Generic Derivation of Induction for Impredicative Encodings in Cedille
Denis Firsov and Aaron Stump
(University of Iowa, USA)
This paper presents generic derivations of induction for
impredicatively typed lambda-encoded datatypes, in the Cedille
type theory. Cedille is a pure type theory extending the
Curry-style Calculus of Constructions with implicit products,
primitive heterogeneous equality, and dependent intersections.
All data erase to pure lambda terms, and there is no built-in
notion of datatype. The derivations are generic in the sense
that we derive induction for any datatype which arises as the
least fixed point of a signature functor. We consider
Church-style and Mendler-style lambda-encodings. Moreover, the
isomorphism of these encodings is proved. Also, we formalize
Lambek's lemma as a consequence of expected laws of cancellation,
reflection, and fusion.
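For comparison, the Church encoding itself is easy to write down in Coq. This is an illustration only: it typechecks in Coq's predicative Type, but the derivation of induction described above relies on Cedille's extensions and is not available here.

```coq
(* Church-encoded naturals. This typechecks in Coq, but, unlike in
   Cedille, the induction principle for cnat is not derivable here. *)
Definition cnat : Type := forall X : Type, (X -> X) -> X -> X.

Definition czero : cnat := fun X f x => x.
Definition csucc (n : cnat) : cnat := fun X f x => f (n X f x).
Definition cadd (m n : cnat) : cnat := fun X f x => m X f (n X f x).

(* Iteration is definable for free: a Church numeral is its own fold. *)
Definition citer {X : Type} (f : X -> X) (x : X) (n : cnat) : X := n X f x.
```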
Article Search
Large Model Constructions for Second-Order ZF in Dependent Type Theory
Dominik Kirst and Gert Smolka
(Saarland University, Germany)
We study various models of classical second-order set theories in the dependent type theory of Coq. Without logical assumptions, Aczel’s sets-as-trees interpretation yields an intensional model of second-order ZF with functional replacement. Building on work of Werner and Barras, we discuss the need for quotient axioms in order to obtain extensional models with relational replacement and to construct large sets. Specifically, we show that the consistency strength of Coq extended by excluded middle and a description operator on well-founded trees allows for constructing models with exactly n Grothendieck universes for every natural number n. By a previous categoricity result based on Zermelo’s embedding theorem, it follows that those models are unique up to isomorphism. Moreover, we show that the smallest universe contains exactly the hereditarily finite sets and give a concise independence proof of the foundation axiom based on permutation models.
Article Search
Info
A Constructive Formalisation of Semi-algebraic Sets and Functions
Boris Djalal
(Inria, France)
Semi-algebraic sets and semi-algebraic functions are essential
to specify and certify cylindrical algebraic decomposition algorithms.
We formally define in Coq the base operations on semi-algebraic sets
and functions using embedded first-order formulae
over the language of real closed fields,
and we prove the correctness of their geometrical interpretation.
In doing so,
we exploit a previous formalisation of quantifier elimination on such embedded formulae
to guarantee the decidability of several first-order properties
and keep our development constructive.
We also exploit it to formalise formulae substitution
without having to handle bound variables.
Article Search
Formalizing Meta-Theory
HOπ in Coq
Sergueï Lenglet and Alan Schmitt
(University of Lorraine, France; Inria, France)
We propose a formalization of HOπ in Coq, a process calculus where messages carry processes. Such a higher-order calculus features two very different kinds of binder: process input, similar to λ-abstraction, and name restriction, whose scope can be expanded by communication. We formalize strong context bisimilarity and prove it is compatible, i.e., closed under every context, using Howe’s method, based on several proof schemes we developed in a previous paper.
Article Search
A Two-Level Logic Perspective on (Simultaneous) Substitutions
Kaustuv Chaudhuri
(Inria, France; École Polytechnique, France)
Lambda-tree syntax (λTS), also known as higher-order abstract syntax (HOAS), is a representational technique where the pure λ-calculus in a meta-language is used to represent binding constructs in an object language. A key feature of λTS is that capture-avoiding substitution in the object language is represented by β-reduction in the meta language. However, to reason about the meta-theory of (simultaneous) substitutions, it may seem that λTS gets in the way: not only does iterated β-reduction not capture simultaneity, but also β-redexes are not first-class constructs.
This paper proposes a representation of (simultaneous) substitutions in the two-level logic approach (2LLA), where properties of a specification language are established in a strong reasoning meta-logic that supports inductive reasoning. A substitution, which is a partial map from variables to terms, is represented in a form similar to typing contexts, which are partial maps from variables to types; both are first-class in 2LLA. The standard typing rules for substitutions are then just a kind of context relation that are already well-known in 2LLA. This representation neither changes the reasoning kernel, nor requires any modification of existing type systems, and does not sacrifice any expressivity.
Preprint
Info
Artifacts Available
Binder Aware Recursion over Well-Scoped de Bruijn Syntax
Jonas Kaiser, Steven Schäfer, and Kathrin Stark
(Saarland University, Germany)
The de Bruijn representation of syntax with binding is commonly used, but flawed when it comes to recursion.
As the structural recursion principle associated to an inductive type of expressions is unaware of the binding discipline, each recursive definition requires a separate proof of compatibility with variable instantiation.
We solve this problem by extending Allais' notion of syntax traversals to obtain a framework for instantiation-compatible recursion.
The framework is general enough to handle multivariate, potentially mutually recursive syntactic systems.
With our framework we define variable renaming and instantiation, syntax directed typing and certain unary logical relations for System F.
These definitions lead to concise proofs of type preservation, as well as weak and strong normalisation.
Our framework is designed to serve as the theoretical foundation of future versions of the Autosubst Coq library.
All developments and case studies are formalised in the Coq proof assistant.
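The well-scoped representation the framework builds on can be sketched in Coq as follows. This is a minimal sketch; the actual Autosubst definitions are more general and handle multivariate, mutually recursive syntax.

```coq
(* Minimal sketch of well-scoped de Bruijn syntax: tm n is the type of
   lambda terms with at most n free variables, so scoping errors are
   ruled out by typing. Autosubst's real setup is more general. *)
Inductive fin : nat -> Type :=
| FZ : forall n, fin (S n)            (* the most recently bound variable *)
| FS : forall n, fin n -> fin (S n).  (* skip one binder *)

Inductive tm : nat -> Type :=
| Var : forall n, fin n -> tm n
| App : forall n, tm n -> tm n -> tm n
| Lam : forall n, tm (S n) -> tm n.   (* the body has one extra variable *)
```

Renaming then has type `(fin m -> fin n) -> tm m -> tm n` and instantiation `(fin m -> tm n) -> tm m -> tm n`; the framework's recursion principle guarantees compatibility with both.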
Article Search
Info