CPP 2017
6th ACM SIGPLAN Conference on Certified Programs and Proofs (CPP 2017)

6th ACM SIGPLAN Conference on Certified Programs and Proofs (CPP 2017), January 16–17, 2017, Paris, France

CPP 2017 – Proceedings


Frontmatter

Title Page


Welcome from the Chairs
The CPP'17 conference (Certified Programs and Proofs) is the sixth in a series that started in 2011. The previous editions were held in December 2011 in Taipei (Taiwan), in December 2012 in Kyoto (Japan), in December 2013 in Melbourne (Australia), in January 2015 in Mumbai (India), and in January 2016 in St. Petersburg (Florida, USA). The first three editions were co-located with the Asian Symposium on Programming Languages and Systems. The last two editions were co-located with the ACM Symposium on Principles of Programming Languages (POPL). This edition is again co-located with POPL'17, in Paris (France). We are thankful to ACM SIGPLAN for sponsoring CPP'17 and to the POPL'17 general chair and local organizers for hosting CPP'17. The CPP conference also received financial support from Inria, France.

CPP 2017 Organization
Committee Listings

CPP 2017 Sponsors
Sponsors and Supporters

Keynotes

Porting the HOL Light Analysis Library: Some Lessons (Invited Talk)
Lawrence C. Paulson
(University of Cambridge, UK)
The HOL Light proof assistant is famous for its huge multivariate analysis library: nearly 300,000 lines of code and 13,000 theorems. A substantial fraction of this library has been manually ported to Isabelle/HOL. The Isabelle analysis library contains approximately 7400 named theorems, including Cauchy’s integral and residue theorems, the Liouville theorem, the open mapping and domain invariance theorems, the maximum modulus principle and the Krein–Milman Minkowski theorem. Why port proofs manually given so much work on porting proofs automatically? Typical approaches rely on low level encodings that seldom yield natural-looking results. Manual porting has allowed us to generalise many results from n-dimensional vector spaces to metric or topological spaces. The transition from the traditional LCF/HOL proof style (which has hardly changed since 1985) to structured proofs has produced a dramatic improvement in the legibility of the material. Automatic porting generally yields a list of theorem statements but no intelligible proofs. This project has highlighted three features of Isabelle working well together: heuristic automation, structured proofs and sledgehammer. Heuristic automation builds in a lot of implicit knowledge, which is potentially unpredictable, but in combination with structured proofs any breakages (caused by updates to the system) are localised and easily fixed. Sledgehammer (which uses powerful external automation to solve subgoals) can frequently complete an argument without requiring a precise reproduction of the original HOL Light proof. Sledgehammer also encourages a style in which the user reaches the desired result by suggesting a series of intermediate claims. Such proofs are genuinely human-oriented. And only such proofs will attract mathematicians; even a guarantee of correctness will not impress them unless the system lets them understand and tinker with their formal proofs.

Mechanized Verification of Preemptive OS Kernels (Invited Talk)
Xinyu Feng
(University of Science and Technology of China, China)
We propose a practical verification framework for preemptive OS kernels. The framework models the correctness of API implementations in OS kernels as contextual refinement of their abstract specifications. It provides a specification language for defining the high-level abstract model of OS kernels, a program logic for refinement verification of concurrent kernel code with multi-level hardware interrupts, and automated tactics for developing mechanized proofs. The whole framework is developed for a practical subset of the C language. We have successfully applied it to verify key modules of a commercial preemptive OS, μC/OS-II, including the scheduler, interrupt handlers, message queues, and mutexes. We also verify priority-inversion-freedom (PIF) in μC/OS-II. All the proofs are mechanized in Coq. To our knowledge, our work is the first to verify the functional correctness of a practical preemptive OS kernel with machine-checkable proofs. More details about the project are available at http://staff.ustc.edu.cn/~fuming/research/certiucos/.
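The correctness notion at the heart of the framework, contextual refinement, can be paraphrased in its standard form (not quoting the paper's exact definition) as

    \mathtt{Impl} \sqsubseteq_{\mathrm{ctx}} \mathtt{Spec} \;\triangleq\; \forall C.\ \mathrm{ObsBeh}(C[\mathtt{Impl}]) \subseteq \mathrm{ObsBeh}(C[\mathtt{Spec}])

that is, under every client context C, each observable behaviour of the concrete kernel code is also an observable behaviour of its abstract specification.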

Algorithm and Library Verification

Verifying a Hash Table and Its Iterators in Higher-Order Separation Logic
François Pottier
(Inria, France)
We describe the specification and proof of an (imperative, sequential) hash table implementation. The usual dictionary operations (insertion, lookup, and so on) are supported, as well as iteration via folds and iterators. The code is written in OCaml and verified using higher-order separation logic, embedded in Coq, via the CFML tool and library. This case study is part of a larger project that aims to build a verified OCaml library of basic data structures.
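To give a flavour of the data structure being specified, the following is a small, unverified OCaml sketch of an imperative hash table with insertion, lookup, and iteration via a fold. It is not the code verified in the paper, and all names and representation choices are illustrative only.

    (* Unverified sketch of an imperative hash table; names and representation
       are illustrative, not the CFML-verified code. *)
    type ('k, 'v) t = {
      mutable buckets : ('k * 'v) list array;
      mutable size    : int;
      hash            : 'k -> int;
    }

    let create ?(capacity = 16) hash =
      { buckets = Array.make capacity []; size = 0; hash }

    let index h k = (h.hash k land max_int) mod Array.length h.buckets

    (* Insert a binding, overwriting any previous binding for the key. *)
    let add h k v =
      let i = index h k in
      let old = h.buckets.(i) in
      let rest = List.filter (fun (k', _) -> k' <> k) old in
      if List.length rest = List.length old then h.size <- h.size + 1;
      h.buckets.(i) <- (k, v) :: rest

    let find_opt h k = List.assoc_opt k h.buckets.(index h k)

    (* Iteration via a fold over all bindings, in unspecified order. *)
    let fold f h acc =
      Array.fold_left
        (fun acc bucket -> List.fold_left (fun acc (k, v) -> f k v acc) acc bucket)
        acc h.buckets

    (* Example: let h = create Hashtbl.hash in add h "x" 1;
       then find_opt h "x" = Some 1 and fold (fun _ v n -> v + n) h 0 = 1. *)

The verification challenge lies less in code like this than in its higher-order separation-logic specification, especially for the fold and for iterators, which must describe how a client's invariant interacts with ownership of the table.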

A Formalization of the Berlekamp-Zassenhaus Factorization Algorithm
Jose Divasón, Sebastiaan Joosten, René Thiemann, and Akihisa Yamada
(Universidad de La Rioja, Spain; University of Innsbruck, Austria)
We formalize the Berlekamp–Zassenhaus algorithm for factoring square-free integer polynomials in Isabelle/HOL. We further adapt an existing formalization of Yun’s square-free factorization algorithm to integer polynomials, and thus provide an efficient and certified factorization algorithm for arbitrary univariate polynomials. The algorithm first performs a factorization in the prime field GF(p) and then performs computations in the ring of integers modulo p^k, where both p and k are determined at runtime. Since a natural modeling of these structures via dependent types is not possible in Isabelle/HOL, we formalize the whole algorithm using Isabelle’s recent addition of local type definitions. Through experiments we verify that our algorithm factors polynomials of degree 100 within seconds.
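One concrete ingredient of the Berlekamp–Zassenhaus pipeline is the choice of a prime p for which the square-free input polynomial remains square-free modulo p; this can be checked by testing that gcd(f, f') is constant over GF(p). Below is a small, unverified OCaml sketch of just this test, not of the formalized algorithm; it assumes p is prime and does not divide the leading coefficient, and polynomials are coefficient lists with the constant term first.

    (* Unverified sketch: is the integer polynomial f square-free modulo the
       prime p, i.e. is gcd(f, f') of degree 0 over GF(p)? *)
    let ( %! ) a p = ((a mod p) + p) mod p

    let normalize p f =
      let f = List.map (fun c -> c %! p) f in
      let rec drop = function 0 :: tl -> drop tl | l -> l in
      List.rev (drop (List.rev f))

    let degree f = List.length f - 1        (* the zero polynomial has degree -1 *)

    let derivative = function
      | [] | [ _ ] -> []
      | _ :: tl -> List.mapi (fun i c -> (i + 1) * c) tl

    (* Inverse modulo a prime via Fermat's little theorem. *)
    let rec power a n p =
      if n = 0 then 1
      else
        let h = power a (n / 2) p in
        let h2 = h * h mod p in
        if n mod 2 = 0 then h2 else h2 * a mod p

    let inv a p = power (a %! p) (p - 2) p

    (* Remainder of f by g over GF(p); g must be non-zero. *)
    let rec pmod p f g =
      let f = normalize p f and g = normalize p g in
      if degree f < degree g then f
      else
        let lf = List.nth f (degree f) and lg = List.nth g (degree g) in
        let c = lf * inv lg p %! p in
        let shift = degree f - degree g in
        let f' =
          List.mapi
            (fun i a ->
              if i >= shift && i - shift <= degree g
              then (a - c * List.nth g (i - shift)) %! p
              else a)
            f
        in
        pmod p f' g

    let rec pgcd p f g =
      match normalize p g with
      | [] -> normalize p f
      | g' -> pgcd p g' (pmod p f g')

    (* e.g. square_free_mod [2; 0; 1] 5 = true, since x^2 + 2 is irreducible mod 5 *)
    let square_free_mod f p = degree (pgcd p f (derivative f)) = 0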

Formal Foundations of 3D Geometry to Model Robot Manipulators
Reynald Affeldt and Cyril Cohen
(AIST, Japan; Inria, France)
We are interested in the formal specification of safety properties of robot manipulators down to the mathematical physics. To this end, we have been developing a formalization of the mathematics of rigid body transformations in the Coq proof-assistant. It can be used to address the forward kinematics problem, i.e., the computation of the position and orientation of the end-effector of a robot manipulator in terms of the link and joint parameters. Our formalization starts by extending the Mathematical Components library with a new theory for angles and by developing three-dimensional geometry. We use these theories to formalize the foundations of robotics. First, we formalize a comprehensive theory of three-dimensional rotations, including exponentials of skew-symmetric matrices and quaternions. Then, we provide a formalization of the various representations of rigid body transformations: isometries, homogeneous representation, the Denavit-Hartenberg convention, and screw motions. These ingredients make it possible to formalize robot manipulators: we illustrate this aspect by an application to the SCARA robot manipulator.
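As a toy illustration of the forward kinematics problem (a planar arm with two revolute joints, a drastic simplification of the 3D setting formalized in the paper), each link can be modelled as a rotation by the joint angle followed by a translation along the link, composed as homogeneous matrices. The following unverified OCaml sketch uses 3x3 homogeneous matrices; all names are illustrative.

    (* Unverified planar sketch: forward kinematics by composing 3x3
       homogeneous transforms, one per (joint angle, link length) pair. *)
    let mat_mul a b =
      Array.init 3 (fun i ->
        Array.init 3 (fun j ->
          let s = ref 0. in
          for k = 0 to 2 do s := !s +. a.(i).(k) *. b.(k).(j) done;
          !s))

    (* Rotate by [theta], then translate by [len] along the rotated x-axis. *)
    let link theta len =
      [| [| cos theta; -. sin theta; len *. cos theta |];
         [| sin theta;    cos theta; len *. sin theta |];
         [| 0.;           0.;        1.               |] |]

    let identity = [| [| 1.; 0.; 0. |]; [| 0.; 1.; 0. |]; [| 0.; 0.; 1. |] |]

    (* Position of the end-effector in the base frame. *)
    let end_effector links =
      let t =
        List.fold_left (fun acc (theta, len) -> mat_mul acc (link theta len))
          identity links
      in
      (t.(0).(2), t.(1).(2))

    (* e.g. end_effector [ (0., 1.); (Float.pi /. 2., 1.) ] is about (1., 1.) *)

The paper's formalization instead works with exact real matrices from the Mathematical Components library and with full 3D rotations, screw motions, and the Denavit-Hartenberg convention, so this numeric sketch only conveys the shape of the computation.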

Automated Proof and Its Formal Verification

BliStrTune: Hierarchical Invention of Theorem Proving Strategies
Jan Jakubův and Josef Urban
(Czech Technical University, Czech Republic)
Inventing targeted proof search strategies for specific problem sets is a difficult task. State-of-the-art automated theorem provers (ATPs) such as E allow a large number of user-specified proof search strategies described in a rich domain specific language. Several machine learning methods that invent strategies automatically for ATPs were proposed previously. One of them is the Blind Strategymaker (BliStr), a system for automated invention of ATP strategies.
In this paper we introduce BliStrTune, a hierarchical extension of BliStr. BliStrTune allows exploring a much larger space of E strategies by interleaving search for high-level parameters with their fine-tuning. We use BliStrTune to invent new strategies, based also on new clause weight functions, targeted at problems from large ITP libraries. We show that the new strategies significantly improve E's performance in solving problems from the Mizar Mathematical Library.

Automatic Cyclic Termination Proofs for Recursive Procedures in Separation Logic
Reuben N. S. Rowe and James Brotherston
(University College London, UK)
We describe a formal verification framework and tool implementation, based upon cyclic proofs, for certifying the safe termination of imperative pointer programs with recursive procedures. Our assertions are symbolic heaps in separation logic with user-defined inductive predicates; we employ explicit approximations of these predicates as our termination measures. This enables us to extend cyclic proof to programs with procedures by relating these measures across the pre- and postconditions of procedure calls.
We provide an implementation of our formal proof system in the Cyclist theorem proving framework, and evaluate its performance on a range of examples drawn from the literature on program termination. Our implementation extends the current state of the art in cyclic-proof-based program verification, enabling automatic termination proofs of a larger set of programs than previously possible.

Formalization of Karp-Miller Tree Construction on Petri Nets
Mitsuharu Yamamoto, Shogo Sekine, and Saki Matsumoto
(Chiba University, Japan; Acrovision, Japan)
Karp-Miller tree construction on Petri nets is a classical, well-known algorithm to construct the minimal coverability set via forward analysis. It enables us to construct decision procedures for several problems such as coverability and (place) boundedness. This paper formalizes Karp-Miller tree construction on Petri nets, and its correctness with respect to coverability, using Coq and Ssreflect. Instead of showing soundness and completeness for trees directly, we prove these properties for transition systems derived from Petri nets, and then relate them to trees, so that we can make use of well-established libraries and avoid complicating the proofs. Termination of the tree construction is guaranteed by defining it as a Coq function with a well-foundedness proof. Although the termination proof for the Karp-Miller tree is usually carried out with classical reasoning, we avoid it so that the whole formalization can be done in an axiom-free way with respect to the default Coq theory. This work provides a basis for further formalization of several optimized algorithms, some of which are known to be error-prone, and also a template for Karp-Miller-style algorithms on extended variants of Petri nets.
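The distinctive step of the construction is acceleration: when a newly reached marking covers (is componentwise greater than or equal to) an ancestor marking on the same branch, every strictly larger place is set to ω. A minimal, unverified OCaml sketch of just that step, with Omega standing for ω, might read as follows.

    (* Sketch of the acceleration step only; markings are lists of token
       counts over a fixed list of places, all of the same length. *)
    type tokens = Finite of int | Omega

    (* m covers m' when m >= m' componentwise. *)
    let covers m m' =
      List.for_all2
        (fun a b ->
          match a, b with
          | Omega, _ -> true
          | Finite _, Omega -> false
          | Finite x, Finite y -> x >= y)
        m m'

    (* Accelerate the new marking m against every ancestor it covers:
       places where m is strictly larger become Omega. *)
    let accelerate ancestors m =
      List.fold_left
        (fun m anc ->
          if covers m anc then
            List.map2
              (fun a b ->
                match a, b with
                | Finite x, Finite y when x > y -> Omega
                | _ -> a)
              m anc
          else m)
        m ancestors

    (* e.g. accelerate [ [Finite 1; Finite 0] ] [Finite 1; Finite 2]
       = [Finite 1; Omega] *)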

Formalized Mathematics with Numerical Computations

A Coq Formal Proof of the Lax–Milgram Theorem
Sylvie Boldo, François Clément, Florian Faissole, Vincent Martin, and Micaela Mayero
(Inria, France; University of Paris-Sud, France; CNRS, France; University of Paris-Saclay, France; University of Technology of Compiègne, France; University of Paris North, France)
The Finite Element Method is a widely used method to solve numerical problems coming, for instance, from physics or biology. To obtain the highest confidence in the correctness of numerical simulation programs implementing the Finite Element Method, one has to formalize the mathematical notions and results that allow one to establish the soundness of the method. The Lax–Milgram theorem may be seen as one of those theoretical cornerstones: under some completeness and coercivity assumptions, it states the existence and uniqueness of the solution to the weak formulation of some boundary value problems. This article presents the full formal proof of the Lax–Milgram theorem in Coq. It requires many results from linear algebra, geometry, functional analysis, and Hilbert spaces.
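For reference, the classical statement being formalized (paraphrased here, not the Coq statement itself) is: let H be a real Hilbert space, let a : H x H -> R be a continuous bilinear form that is coercive, i.e. a(v, v) >= \alpha \|v\|^2 for some \alpha > 0, and let f be a continuous linear form on H; then

    \exists!\, u \in H, \ \forall v \in H, \quad a(u, v) = f(v),

and moreover \|u\| \le \|f\| / \alpha, so the solution depends continuously on f.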

A Reflexive Tactic for Polynomial Positivity using Numerical Solvers and Floating-Point Computations
Érik Martin-Dorel and Pierre Roux
(IRIT, France; Paul Sabatier University, France; ONERA, France)
Polynomial positivity over the real field is known to be decidable, but even the best algorithms remain costly. An incomplete but often efficient alternative consists in looking for positivity witnesses as sum-of-squares decompositions. Such decompositions can in practice be obtained through convex optimization. Unfortunately, these methods only yield approximate solutions, hence the need for formal verification of such witnesses. State-of-the-art methods rely on heuristic roundings to exact solutions in the rational field. These solutions are then easy to verify in a proof assistant. However, this verification often turns out to be very costly, as rational coefficients may blow up during computations.
Nevertheless, overapproximations with floating-point arithmetic can be enough to obtain proofs at a much lower cost. Since such overapproximations are non-trivial, it is mandatory to formally prove that rounding errors are correctly taken into account. We develop a reflexive tactic for the Coq proof assistant allowing one to automatically discharge polynomial positivity proofs. The tactic relies on heavy computation involving multivariate polynomials, matrices, and floating-point arithmetic. Benchmarks indicate that we are able to formally address positivity problems that would otherwise be intractable with other state-of-the-art methods.
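To illustrate what a positivity witness looks like, a sum-of-squares decomposition such as

    2x^2 + 2xy + y^2 + 1 \;=\; x^2 + (x + y)^2 + 1 \;\ge\; 1 \;>\; 0

proves positivity of the left-hand side by inspection. Numerical solvers return such decompositions only approximately (as a nearly positive-semidefinite Gram matrix), which is why the tactic must account for floating-point rounding errors when checking the certificate.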

Markov Processes in Isabelle/HOL
Johannes Hölzl
(TU München, Germany)
Markov processes with discrete time and arbitrary state spaces are important models in probability theory. They model the infinite steps of non-terminating programs with (not just discrete) probabilistic choice and form the basis for further probabilistic models. Their transition behavior is described by Markov kernels, i.e. measurable functions from a state to a distribution of states. Markov kernels can be composed in a monadic way from distributions (normal, exponential, Bernoulli, etc.), other Markov kernels, and even other Markov processes.
In this paper we construct discrete-time Markov processes with arbitrary state spaces, given the transition probabilities as a Markov kernel. We show that the resulting Markov processes again form Markov kernels. This allows us to set up a bisimulation argument between two Markov processes and to derive the strong Markov property. We use the existing probability theory in Isabelle/HOL and extend its capability to work with Markov kernels.
As an application, we construct continuous-time Markov chains (CTMCs). These are constructed as jump & hold processes, which are discrete-time Markov processes where the state space is a product of continuous holding times and discrete states. We prove the Markov property of CTMCs using the bisimulation argument for discrete-time Markov processes, and we prove that the transition probability is the solution of a differential equation.
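The monadic composition of kernels mentioned above can be illustrated in the special case of finitely supported discrete distributions; the following unverified OCaml sketch does not capture the general measurable state spaces treated in the paper, and all names are illustrative.

    (* Toy discrete probability monad: a distribution is a finite list of
       outcomes with probabilities summing to one. *)
    type 'a dist = ('a * float) list

    let return x : 'a dist = [ (x, 1.0) ]

    let bind (d : 'a dist) (k : 'a -> 'b dist) : 'b dist =
      List.concat_map (fun (x, p) -> List.map (fun (y, q) -> (y, p *. q)) (k x)) d

    (* A Markov kernel maps a state to a distribution over next states;
       kernels compose via bind. *)
    type 'a kernel = 'a -> 'a dist

    let compose (k1 : 'a kernel) (k2 : 'a kernel) : 'a kernel =
      fun s -> bind (k1 s) k2

    (* Example: a memoryless biased coin, iterated for two steps. *)
    let flip p : bool kernel = fun _ -> [ (true, p); (false, 1. -. p) ]
    let two_steps : bool kernel = compose (flip 0.3) (flip 0.3)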

Formalising Real Numbers in Homotopy Type Theory
Gaëtan Gilbert
(ENS Lyon, France)
Cauchy reals can be defined as a quotient of Cauchy sequences of rationals. In this case, the limit of a Cauchy sequence of Cauchy reals is defined by lifting it to a sequence of Cauchy sequences of rationals.
This lifting requires the axiom of countable choice or excluded middle, neither of which is available in homotopy type theory. To address this, the Univalent Foundations Program uses a higher inductive-inductive type to define the Cauchy reals as the free Cauchy complete metric space generated by the rationals.
We generalize this construction to define the free Cauchy complete metric space generated by an arbitrary metric space. This forms a monad in the category of metric spaces with Lipschitz functions. When applied to the rationals, it defines the Cauchy reals. Finally, we can use the partiality monad of Altenkirch and Danielson (2016) to define a semi-decision procedure comparing a real number and a rational number.
The entire construction has been formalized in the Coq proof assistant. It is available at https://github.com/SkySkimmer/HoTTClasses/tree/CPP2017 .
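For contrast with the higher inductive-inductive construction, the classical quotient-based starting point can be sketched outside type theory: a real is represented by a function that, for each positive n, returns a rational within 1/n of it. The unverified OCaml sketch below uses unreduced integer fractions; all names are illustrative.

    (* A "regular" Cauchy representation: the approximation at n is within
       1/n of the real, for every n >= 1.  Rationals are unreduced fractions
       with positive denominators. *)
    type q = { num : int; den : int }
    type real = int -> q

    let q_add a b = { num = (a.num * b.den) + (b.num * a.den); den = a.den * b.den }

    (* To be within 1/n of x + y it suffices to query x and y at precision 1/(2n). *)
    let add (x : real) (y : real) : real = fun n -> q_add (x (2 * n)) (y (2 * n))

    (* A rational embeds as a constant sequence. *)
    let of_q (a : q) : real = fun _ -> a

Identifying two such representations when they denote the same real is exactly the quotienting step that, in homotopy type theory, motivates the higher inductive-inductive definition used in the paper.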

Verified Programming Tools

Verified Compilation of CakeML to Multiple Machine-Code Targets
Anthony Fox, Magnus O. Myreen, Yong Kiam Tan, and Ramana Kumar
(University of Cambridge, UK; Chalmers University of Technology, Sweden; Carnegie Mellon University, USA; Data61 at CSIRO, Australia; UNSW, Australia)
This paper describes how the latest CakeML compiler supports verified compilation down to multiple realistically modelled target architectures. In particular, we describe how the compiler definition, the various language semantics, and the correctness proofs were organised to minimize target-specific overhead. With our setup we have incorporated compilation to four 64-bit architectures (ARMv8, x86-64, MIPS-64, and RISC-V) and one 32-bit architecture (ARMv6). Our correctness theorem allows interference from the environment: the top-level correctness statement takes into account execution of foreign code and per-instruction interference from external processes, such as interrupt handlers in operating systems. The entire CakeML development is formalised in the HOL4 theorem prover.

Complx: A Verification Framework for Concurrent Imperative Programs
Sidney Amani, June Andronick, Maksym Bortin, Corey Lewis, Christine Rizkallah, and Joseph Tuong
(Data61 at CSIRO, Australia; UNSW, Australia; University of Pennsylvania, USA)
We propose a concurrency reasoning framework for imperative programs, based on the Owicki-Gries (OG) foundational shared-variable concurrency method. Our framework combines the approaches of Hoare-Parallel, a formalisation of OG in Isabelle/HOL for a simple while-language, and Simpl, a generic imperative language embedded in Isabelle/HOL that allows formal reasoning about C programs. We define the Complx language, extending the syntax and semantics of Simpl with support for parallel composition and synchronisation. We additionally define an OG logic, which we prove sound w.r.t. the semantics, and a verification condition generator, both supporting involved low-level imperative constructs such as function calls and abrupt termination. We illustrate our framework on an example that features exceptions, guards, and function calls. We then aim to target concurrent operating systems, such as the interruptible eChronos embedded operating system, for which we already have a model-level OG proof using Hoare-Parallel.

Verifying Dynamic Race Detection
William Mansky, Yuanfeng Peng, Steve Zdancewic, and Joseph Devietti
(Princeton University, USA; University of Pennsylvania, USA)
Writing race-free concurrent code is notoriously difficult, and data races can result in bugs that are difficult to isolate and reproduce. Dynamic race detection can catch data races that cannot (easily) be detected statically. One approach to dynamic race detection is to instrument the potentially racy code with operations that store and compare metadata, where the metadata implements some known race detection algorithm (e.g. vector-clock race detection). In this paper, we describe the process of formally verifying several algorithms for dynamic race detection. We then turn to implementations, laying out an instrumentation pass for race detection in a simple language and presenting a mechanized formal proof of its correctness: all races in a program will be caught by the instrumentation, and all races detected by the instrumentation are possible in the original program.
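For background, the textbook vector-clock algorithm alluded to above (not the paper's verified instrumentation) can be sketched in OCaml as follows: each thread and each lock carries a vector clock, each shared variable records the clocks of its last reads and writes, and an access is reported as racy if it is not ordered after the conflicting accesses recorded so far. All names are illustrative and synchronisation is reduced to plain locks.

    (* Unverified sketch of vector-clock race detection; nthreads is fixed. *)
    let nthreads = 2

    type vc = int array                    (* one logical clock per thread *)

    let vc_leq a b =
      let ok = ref true in
      Array.iteri (fun i x -> if x > b.(i) then ok := false) a;
      !ok

    let vc_join a b = Array.mapi (fun i x -> max x b.(i)) a

    (* Per-thread clocks; thread t starts with its own component at 1. *)
    let thread_clock =
      Array.init nthreads (fun t ->
        Array.init nthreads (fun u -> if t = u then 1 else 0))

    (* Per-variable metadata: clocks of the last writes and reads. *)
    type var_meta = { mutable w : vc; mutable r : vc }
    let new_var () = { w = Array.make nthreads 0; r = Array.make nthreads 0 }

    let on_read x t =
      let c = thread_clock.(t) in
      if not (vc_leq x.w c) then print_endline "race: read concurrent with a write";
      x.r <- vc_join x.r c

    let on_write x t =
      let c = thread_clock.(t) in
      if not (vc_leq x.w c && vc_leq x.r c) then
        print_endline "race: write concurrent with another access";
      x.w <- vc_join x.w c

    (* Locks transfer clocks: release publishes, acquire absorbs. *)
    let on_release lock t =
      lock := vc_join !lock thread_clock.(t);
      thread_clock.(t).(t) <- thread_clock.(t).(t) + 1

    let on_acquire lock t = thread_clock.(t) <- vc_join thread_clock.(t) !lock

The paper's contribution is the mechanized proof that an instrumentation of this general kind is both complete and sound in the sense stated above: every race in the program is caught, and every reported race is possible in the original program.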


Homotopy Type Theory

The HoTT Library: A Formalization of Homotopy Type Theory in Coq
Andrej Bauer, Jason Gross, Peter LeFanu Lumsdaine, Michael Shulman, Matthieu Sozeau, and Bas Spitters
(University of Ljubljana, Slovenia; Massachusetts Institute of Technology, USA; Stockholm University, Sweden; University of San Diego, USA; Inria, France; Aarhus University, Denmark)
We report on the development of the HoTT library, a formalization of homotopy type theory in the Coq proof assistant. It formalizes most of basic homotopy type theory, including univalence, higher inductive types, and significant amounts of synthetic homotopy theory, as well as category theory and modalities. The library has been used as a basis for several independent developments. We discuss the decisions that led to the design of the library, and we comment on the interaction of homotopy type theory with recently introduced features of Coq, such as universe polymorphism and private inductive types.

Lifting Proof-Relevant Unification to Higher Dimensions
Jesper Cockx and Dominique Devriese
(KU Leuven, Belgium)
In a dependently typed language such as Coq or Agda, unification can be used to discharge equality constraints and detect impossible cases automatically. By nature of dependent types, it is necessary to use a proof-relevant unification algorithm where unification rules are functions manipulating equality proofs. This ensures their correctness but simultaneously sets a high bar for new unification rules. In particular, so far no-one has given a satisfactory proof-relevant version of the injectivity rule for indexed datatypes.
In this paper, we describe a general technique for solving equations between constructors of indexed datatypes. We handle the main technical problem—generalization over equality proofs in the indices—by introducing new equations between equality proofs. Borrowing terminology from homotopy type theory, we call them higher-dimensional equations. To apply existing one-dimensional unifiers to these higher-dimensional equations, we show how unifiers can be lifted to a higher dimension. We show the usefulness of this idea by applying it to the unification algorithm used by Agda, though it can also be applied in languages that support identity types but not general indexed datatypes.

The Next 700 Syntactical Models of Type Theory
Simon Boulier, Pierre-Marie Pédrot, and Nicolas Tabareau
(Inria, France)
A family of syntactic models for the calculus of constructions with universes (CCω) is described, all of them preserving conversion of the calculus definitionally, and thus giving rise directly to a program transformation of CCω into itself.
Those models are based on the remark that negative type constructors (e.g. dependent product, coinductive types or universes) are underspecified in type theory—which leaves some freedom on extra intensional specifications.
The model construction can be seen as a compilation phase from a complex type theory into a simpler type theory.
Such models can be used to derive (the negative part of) independence results with respect to CCω, such as functional extensionality, propositional extensionality, univalence or the fact that bisimulation on a coinductive type may not coincide with equality.
They can also be used to add new principles to the theory, which we illustrate by defining a version of CCω with ad-hoc polymorphism that shows in particular that parametricity is not an implicit requirement of type theory.
The correctness of some of the models/program transformations has been checked in the Coq proof assistant, and the transformations have been implemented as a Coq plugin.

Formal Verification of Programming Language Foundations

Type-and-Scope Safe Programs and Their Proofs
Guillaume Allais, James Chapman, Conor McBride, and James McKinna
(Radboud University Nijmegen, Netherlands; University of Strathclyde, UK; University of Edinburgh, UK)
We abstract the common type-and-scope safe structure from computations on λ-terms that deliver, e.g., renaming, substitution, evaluation, CPS-transformation, and printing with a name supply. By exposing this structure, we can prove generic simulation and fusion lemmas relating operations built this way. This work has been fully formalised in Agda.
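The paper's development in Agda tracks both the types and the scopes of λ-terms; a rough OCaml analogue of the scope-tracking part alone uses a nested datatype in which the type parameter is the set of free variables, with 'a option adding one variable under a binder (Bird and Paterson's de Bruijn style). The sketch below is unverified and only scope-safe, not type-safe.

    (* Well-scoped de Bruijn terms: 'a is the type of free variables. *)
    type 'a term =
      | Var of 'a
      | App of 'a term * 'a term
      | Lam of 'a option term              (* one extra bound variable *)

    (* Renaming: apply a variable-to-variable map throughout the term. *)
    let rec rename : 'a 'b. ('a -> 'b) -> 'a term -> 'b term =
      fun f -> function
      | Var x -> Var (f x)
      | App (t, u) -> App (rename f t, rename f u)
      | Lam t -> Lam (rename (Option.map f) t)

    (* Substitution: apply a variable-to-term map throughout the term. *)
    let rec subst : 'a 'b. ('a -> 'b term) -> 'a term -> 'b term =
      fun f -> function
      | Var x -> f x
      | App (t, u) -> App (subst f t, subst f u)
      | Lam t ->
          Lam
            (subst
               (function
                 | None -> Var None        (* the freshly bound variable *)
                 | Some x -> rename (fun y -> Some y) (f x))
               t)

The generic structure abstracted in the paper is precisely what renaming, substitution, evaluation, CPS-transformation, and printing share, together with the simulation and fusion lemmas relating them.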

Formally Verified Differential Dynamic Logic
Brandon Bohrer, Vincent Rahli, Ivana Vukotic, Marcus Völp, and André Platzer
(Carnegie Mellon University, USA; University of Luxembourg, Luxembourg)
We formalize the soundness theorem for differential dynamic logic, a logic for verifying hybrid systems. To increase confidence in the formalization, we present two versions: one in Isabelle/HOL and one in Coq. We extend the metatheory to include features used in practice, such as systems of differential equations and functions of multiple arguments. We demonstrate the viability of constructing a verified kernel for the hybrid systems theorem prover KeYmaera X by embedding proof checkers for differential dynamic logic in Coq and Isabelle. We discuss how different provers and libraries influence the design of the formalization.
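For orientation, a few of the standard axioms of differential dynamic logic whose soundness such a formalization must establish (standard axioms of the logic, not specific to either proof assistant) are

    [\alpha \cup \beta]P \leftrightarrow [\alpha]P \land [\beta]P \qquad
    [\alpha ; \beta]P \leftrightarrow [\alpha][\beta]P \qquad
    [?Q]P \leftrightarrow (Q \rightarrow P)

together with the axioms for differential equations, which is where the extensions mentioned above, such as systems of ODEs and functions of multiple arguments, come into play.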

Equivalence of System F and λ2 in Coq Based on Context Morphism Lemmas
Jonas Kaiser, Tobias Tebbi, and Gert Smolka
(Saarland University, Germany)
We give a machine-checked proof of the equivalence of the usual, two-sorted presentation of System F and its single-sorted pure type system variant λ2. This is established by reducing the typability problem of F to λ2 and vice versa. The difficulty lies in aligning different binding-structures and different contexts (dependent vs. non-dependent). The use of de Bruijn syntax, parallel substitutions, and context morphism lemmas leads to an elegant proof. We use the Coq proof assistant and the substitution library Autosubst.
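For readers unfamiliar with the pure-type-system presentation, λ2 is the PTS with

    \mathcal{S} = \{*, \Box\} \qquad \mathcal{A} = \{(* : \Box)\} \qquad
    \mathcal{R} = \{(*, *, *),\ (\Box, *, *)\}

where the rule (*, *, *) forms ordinary function types and (\Box, *, *) forms the polymorphic types \forall \alpha{:}*.\, A of System F; the equivalence proved in the paper aligns this single-sorted presentation with the usual two-sorted one.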

