The CPP'17 conference (Certified Programs and Proofs) is the sixth in a series started in 2011. Previous editions were held in December 2011 in Taipei (Taiwan), December 2012 in Kyoto (Japan), December 2013 in Melbourne (Australia), January 2015 in Mumbai (India), and January 2016 in St. Petersburg (Florida, USA). The first three editions were co-located with the Asian Symposium on Programming Languages and Systems; the last two were co-located with the ACM Symposium on Principles of Programming Languages (POPL). This edition is likewise co-located with POPL'17, in Paris (France). We are thankful to ACM SIGPLAN for sponsoring CPP'17 and to the POPL'17 general chair and local organizers for hosting CPP'17. The CPP conference also received financial support from Inria, France.
The HOL Light proof assistant is famous for its huge multivariate analysis library: nearly 300,000 lines of code and 13,000 theorems. A substantial fraction of this library has been manually ported to Isabelle/HOL. The Isabelle analysis library contains approximately 7400 named theorems, including Cauchy's integral and residue theorems, Liouville's theorem, the open mapping and domain invariance theorems, the maximum modulus principle, and the Krein–Milman Minkowski theorem.
Why port proofs manually, given so much prior work on porting proofs automatically? Typical automatic approaches rely on low-level encodings that seldom yield natural-looking results. Manual porting has allowed us to generalise many results from n-dimensional vector spaces to metric or topological spaces. The transition from the traditional LCF/HOL proof style (which has hardly changed since 1985) to structured proofs has produced a dramatic improvement in the legibility of the material. Automatic porting generally yields a list of theorem statements but no intelligible proofs.
This project has highlighted three features of Isabelle that work well together: heuristic automation, structured proofs and Sledgehammer. Heuristic automation builds in a lot of implicit knowledge, which is potentially unpredictable, but in combination with structured proofs any breakages (caused by updates to the system) are localised and easily fixed. Sledgehammer (which uses powerful external automation to solve subgoals) can frequently complete an argument without requiring a precise reproduction of the original HOL Light proof. Sledgehammer also encourages a style in which the user reaches the desired result by stating a series of intermediate claims.
Such proofs are genuinely human-oriented, and only such proofs will attract mathematicians: even a guarantee of correctness will not impress them unless the system lets them understand and tinker with their formal proofs.
@InProceedings{CPP17p1,
author = {Lawrence C. Paulson},
title = {Porting the HOL Light Analysis Library: Some Lessons (Invited Talk)},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {1--1},
doi = {10.1145/3018610.3023366},
year = {2017},
}
We propose a practical verification framework for preemptive OS kernels. The framework models the correctness of API implementations in OS kernels as contextual refinement of their abstract specifications. It provides a specification language for defining the high-level abstract model of OS kernels, a program logic for refinement verification of concurrent kernel code with multi-level hardware interrupts, and automated tactics for developing mechanized proofs. The whole framework is developed for a practical subset of the C language. We have successfully applied it to verify key modules of μC/OS-II, a commercial preemptive OS, including the scheduler, interrupt handlers, message queues, and mutexes. We also verify priority-inversion-freedom (PIF) in μC/OS-II. All the proofs are mechanized in Coq. To our knowledge, our work is the first to verify the functional correctness of a practical preemptive OS kernel with machine-checkable proofs. More details about the project are available at <http://staff.ustc.edu.cn/~fuming/research/certiucos/>
@InProceedings{CPP17p2,
author = {Xinyu Feng},
title = {Mechanized Verification of Preemptive OS Kernels (Invited Talk)},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {2--2},
doi = {10.1145/3018610.3023367},
year = {2017},
}
We describe the specification and proof of an (imperative, sequential) hash
table implementation. The usual dictionary operations (insertion, lookup, and
so on) are supported, as well as iteration via folds and iterators. The code
is written in OCaml and verified using higher-order separation logic, embedded
in Coq, via the CFML tool and library. This case study is part of a larger
project that aims to build a verified OCaml library of basic data structures.
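For readers unfamiliar with the interface being verified, a toy Python analogue of a separate-chaining hash table with insertion, lookup and fold might look as follows (this only mirrors the shape of the operations, not the verified OCaml code):

```python
# Toy separate-chaining hash table; mirrors the interface shape of a
# verified dictionary (insert, lookup, fold), not the verified OCaml code.

class HashTable:
    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def insert(self, key, value):
        b = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(b):
            if k == key:
                b[i] = (key, value)   # overwrite an existing binding
                return
        b.append((key, value))

    def lookup(self, key):
        b = self.buckets[hash(key) % len(self.buckets)]
        for k, v in b:
            if k == key:
                return v
        return None

    def fold(self, f, acc):
        """Fold f over all bindings, in unspecified order."""
        for b in self.buckets:
            for k, v in b:
                acc = f(k, v, acc)
        return acc

t = HashTable()
t.insert("a", 1); t.insert("b", 2); t.insert("a", 3)
print(t.lookup("a"))                         # -> 3
print(t.fold(lambda k, v, acc: acc + v, 0))  # -> 5
```

A verified version must in addition specify, e.g., that `fold` visits each binding exactly once even though the traversal order is unspecified.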
@InProceedings{CPP17p3,
author = {François Pottier},
title = {Verifying a Hash Table and Its Iterators in Higher-Order Separation Logic},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {3--16},
doi = {10.1145/3018610.3018624},
year = {2017},
}
We formalize the Berlekamp–Zassenhaus algorithm for factoring square-free integer polynomials in Isabelle/HOL. We further adapt an existing formalization of Yun’s square-free factorization algorithm to integer polynomials, and thus provide an efficient and certified factorization algorithm for arbitrary univariate polynomials.
The algorithm first performs a factorization in the prime field GF(p) and then performs computations in the ring of integers modulo p^k, where both p and k are determined at runtime. Since a natural modeling of these structures via dependent types is not possible in Isabelle/HOL, we formalize the whole algorithm using Isabelle's recent addition of local type definitions.
Through experiments we verify that our algorithm factors polynomials of degree 100 within seconds.
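As a rough illustration of the algorithm's first stage, the sketch below (ours, not the certified Isabelle/HOL development) factors a toy square-free polynomial over GF(p) by brute-force root finding; the real algorithm uses Berlekamp's method and Hensel lifting to p^k before recombining factors over the integers.

```python
# Illustrative sketch only: find the linear factors of a polynomial mod p
# by brute force.  Berlekamp-Zassenhaus replaces this with Berlekamp's
# algorithm and then lifts the factorization to Z/p^k.

def poly_eval_mod(coeffs, x, p):
    """Evaluate a polynomial (coefficient list, lowest degree first) at x mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def linear_factors_mod(coeffs, p):
    """Roots of the polynomial in GF(p); each root r yields a factor (x - r)."""
    return {r for r in range(p) if poly_eval_mod(coeffs, r, p) == 0}

# f(x) = x^2 - 1 and p = 7: f = (x - 1)(x + 1), so the roots are 1 and 6 (= -1 mod 7).
print(linear_factors_mod([-1, 0, 1], 7))  # -> {1, 6}
```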
@InProceedings{CPP17p17,
author = {Jose Divasón and Sebastiaan Joosten and René Thiemann and Akihisa Yamada},
title = {A Formalization of the Berlekamp-Zassenhaus Factorization Algorithm},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {17--29},
doi = {10.1145/3018610.3018617},
year = {2017},
}
We are interested in the formal specification of safety properties of
robot manipulators down to the mathematical physics. To this end, we
have been developing a formalization of the mathematics of rigid body
transformations in the Coq proof-assistant. It can be used to address
the forward kinematics problem, i.e., the computation of the position
and orientation of the end-effector of a robot manipulator in terms of
the link and joint parameters. Our formalization starts
by extending the Mathematical Components library with a new theory for angles and
by developing three-dimensional geometry. We use these theories to
formalize the foundations of robotics. First, we formalize a
comprehensive theory of three-dimensional rotations, including
exponentials of skew-symmetric matrices and quaternions. Then, we
provide a formalization of the various representations of rigid body
transformations: isometries, homogeneous representation, the
Denavit-Hartenberg convention, and screw motions. These ingredients
make it possible to formalize robot manipulators: we illustrate this
aspect by an application to the SCARA robot manipulator.
@InProceedings{CPP17p30,
author = {Reynald Affeldt and Cyril Cohen},
title = {Formal Foundations of 3D Geometry to Model Robot Manipulators},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {30--42},
doi = {10.1145/3018610.3018629},
year = {2017},
}
Inventing targeted proof search strategies for specific problem sets is a
difficult task. State-of-the-art automated theorem provers (ATPs) such as E
allow a large number of user-specified proof search strategies described in a
rich domain specific language. Several machine learning methods that invent
strategies automatically for ATPs were proposed previously. One of them is the
Blind Strategymaker (BliStr), a system for automated invention of ATP
strategies.
In this paper we introduce BliStrTune -- a hierarchical extension of BliStr.
BliStrTune allows exploring a much larger space of E strategies by interleaving
search for high-level parameters with their fine-tuning. We use BliStrTune to
invent new strategies based also on new clause weight functions targeted at
problems from large ITP libraries. We show that the new strategies
significantly improve E's performance in solving problems from the Mizar
Mathematical Library.
@InProceedings{CPP17p43,
author = {Jan Jakubův and Josef Urban},
title = {BliStrTune: Hierarchical Invention of Theorem Proving Strategies},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {43--52},
doi = {10.1145/3018610.3018619},
year = {2017},
}
We describe a formal verification framework and tool implementation, based upon cyclic proofs, for certifying the safe termination of imperative pointer programs with recursive procedures. Our assertions are symbolic heaps in separation logic with user defined inductive predicates; we employ explicit approximations of these predicates as our termination measures. This enables us to extend cyclic proof to programs with procedures by relating these measures across the pre- and postconditions of procedure calls.
We provide an implementation of our formal proof system in the Cyclist theorem proving framework, and evaluate its performance on a range of examples drawn from the literature on program termination. Our implementation extends the current state-of-the-art in cyclic proof-based program verification, enabling automatic termination proofs of a larger set of programs than previously possible.
@InProceedings{CPP17p53,
author = {Reuben N. S. Rowe and James Brotherston},
title = {Automatic Cyclic Termination Proofs for Recursive Procedures in Separation Logic},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {53--65},
doi = {10.1145/3018610.3018623},
year = {2017},
}
Karp-Miller tree construction on Petri nets is a classical, well-known algorithm for constructing the minimal coverability set via forward analysis. It enables us to construct decision procedures for several problems such as coverability and (place) boundedness.
This paper formalizes Karp-Miller tree construction on Petri nets, and its correctness with respect to coverability, using Coq and Ssreflect. Instead of showing soundness and completeness for trees directly, we prove these properties for transition systems derived from Petri nets and then relate them to trees; this lets us make use of well-established libraries and avoid complicating the proofs.
Termination of the tree construction is guaranteed by defining it as a Coq function with a well-foundedness proof. Although the termination proof for the Karp-Miller tree is usually carried out with classical reasoning, we avoid it, so the whole formalization is axiom-free with respect to the default Coq theory.
This work will provide us with a basis for further formalization of several optimized algorithms, some of which are known to be error prone, and also a template for Karp-Miller style algorithms on extended variants of Petri nets.
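To convey the flavour of the construction, here is a minimal Python sketch of Karp-Miller tree construction with ω-acceleration (our own simplification, unrelated to the Coq development and its termination argument):

```python
import math

OMEGA = math.inf  # the omega symbol: an unbounded place

def karp_miller(initial, transitions):
    """Coverability set of a Petri net via Karp-Miller construction.
    A marking is a tuple of token counts (possibly OMEGA); a transition
    is a (pre, post) pair of tuples over the places."""
    work = [(tuple(initial), ())]          # (marking, ancestor path)
    seen = {tuple(initial)}
    while work:
        m, path = work.pop()
        for pre, post in transitions:
            if all(m[i] >= pre[i] for i in range(len(m))):   # enabled?
                succ = tuple(m[i] - pre[i] + post[i] for i in range(len(m)))
                # acceleration: a strictly covered ancestor means the
                # growing places are unbounded, so set them to omega
                for anc in path + (m,):
                    if anc != succ and all(anc[i] <= succ[i] for i in range(len(m))):
                        succ = tuple(OMEGA if succ[i] > anc[i] else succ[i]
                                     for i in range(len(m)))
                if succ not in seen:
                    seen.add(succ)
                    work.append((succ, path + (m,)))
    return seen

# Net with places (p0, p1) and one transition t: p0 -> p0 + p1.
# p1 is unbounded, which the construction records as omega.
print(karp_miller((1, 0), [((1, 0), (1, 1))]))  # -> {(1, 0), (1, inf)}
```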
@InProceedings{CPP17p66,
author = {Mitsuharu Yamamoto and Shogo Sekine and Saki Matsumoto},
title = {Formalization of Karp-Miller Tree Construction on Petri Nets},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {66--78},
doi = {10.1145/3018610.3018626},
year = {2017},
}
Formalized Mathematics with Numerical Computations
A Coq Formal Proof of the Lax–Milgram Theorem
Sylvie Boldo, François Clément, Florian Faissole, Vincent Martin, and Micaela Mayero (Inria, France; University of Paris-Sud, France; CNRS, France; University of Paris-Saclay, France; University of Technology of Compiègne, France; University of Paris North, France)
The Finite Element Method is a widely used method to solve numerical problems arising, for instance, from physics or biology. To obtain the highest confidence in the correctness of numerical simulation programs implementing the Finite Element Method, one has to formalize the mathematical notions and results needed to establish the soundness of the method. The Lax–Milgram theorem may be seen as one of those theoretical cornerstones: under some completeness and coercivity assumptions, it states the existence and uniqueness of the solution to the weak formulation of certain boundary value problems. This article presents the full formal proof of the Lax–Milgram theorem in Coq. It requires many results from linear algebra, geometry, functional analysis, and Hilbert spaces.
@InProceedings{CPP17p79,
author = {Sylvie Boldo and François Clément and Florian Faissole and Vincent Martin and Micaela Mayero},
title = {A Coq Formal Proof of the Lax–Milgram Theorem},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {79--89},
doi = {10.1145/3018610.3018625},
year = {2017},
}
Polynomial positivity over the real field is known to be decidable, but even the best algorithms remain costly. An incomplete but often efficient alternative consists in looking for positivity witnesses as sum-of-squares decompositions. Such decompositions can in practice be obtained through convex optimization. Unfortunately, these methods only yield approximate solutions; hence the need for formal verification of such witnesses. State-of-the-art methods rely on heuristic roundings to exact solutions over the rationals. These solutions are then easy to verify in a proof assistant. However, this verification often turns out to be very costly, as rational coefficients may blow up during computations.
Nevertheless, overapproximations with floating-point arithmetic can be enough to obtain proofs at a much lower cost. Since such overapproximations are nontrivial, it is mandatory to formally prove that rounding errors are correctly taken into account. We develop a reflexive tactic for the Coq proof assistant that automatically discharges polynomial positivity proofs. The tactic relies on heavy computation involving multivariate polynomials, matrices and floating-point arithmetic. Benchmarks indicate that we are able to formally address positivity problems that would otherwise be intractable with other state-of-the-art methods.
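The underlying idea can be illustrated with exact rational arithmetic (the tactic itself works with floating-point overapproximations, and the helper names below are ours): once a sum-of-squares witness is known, checking it reduces to a polynomial identity, which immediately certifies nonnegativity.

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply univariate polynomials given as coefficient lists (lowest degree first)."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += Fraction(a) * Fraction(b)
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    return [Fraction(p[i] if i < len(p) else 0) +
            Fraction(q[i] if i < len(q) else 0) for i in range(n)]

def check_sos_witness(target, squares):
    """Check target == sum of squares of the given polynomials, exactly.
    Success proves that target is nonnegative on all of R."""
    acc = [Fraction(0)]
    for s in squares:
        acc = poly_add(acc, poly_mul(s, s))
    n = max(len(acc), len(target))
    acc = acc + [Fraction(0)] * (n - len(acc))
    tgt = [Fraction(c) for c in target] + [Fraction(0)] * (n - len(target))
    return acc == tgt

# x^2 - 2x + 2 = (x - 1)^2 + 1^2, hence nonnegative for every real x.
print(check_sos_witness([2, -2, 1], [[-1, 1], [1]]))  # -> True
```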
@InProceedings{CPP17p90,
author = {Érik Martin-Dorel and Pierre Roux},
title = {A Reflexive Tactic for Polynomial Positivity using Numerical Solvers and Floating-Point Computations},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {90--99},
doi = {10.1145/3018610.3018622},
year = {2017},
}
Markov processes with discrete time and arbitrary state spaces are important models in probability theory. They model the infinite steps of non-terminating programs with (not just discrete) probabilistic choice and form the basis for further probabilistic models. Their transition behavior is described by Markov kernels, i.e. measurable functions from a state to a distribution of states. Markov kernels can be composed in a monadic way from distributions (normal, exponential, Bernoulli, etc.), other Markov kernels, and even other Markov processes.
In this paper we construct discrete-time Markov processes with arbitrary state spaces, given the transition probabilities as a Markov kernel. We show that the resulting Markov processes again form Markov kernels. This allows us to prove a bisimulation argument between two Markov processes and to derive the strong Markov property. We use the existing probability theory in Isabelle/HOL and extend its capability to work with Markov kernels.
As application we construct continuous-time Markov chains (CTMCs). These are constructed as jump & hold processes, which are discrete-time Markov processes where the state space is a product of continuous holding times and discrete states. We prove the Markov property of CTMCs using the bisimulation argument for discrete-time Markov processes, and that the transition probability is the solution of a differential equation.
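The monadic composition of kernels mentioned above can be sketched for finite distributions in a few lines of Python (an illustration of the general idea, not of the measure-theoretic Isabelle/HOL construction):

```python
def compose(k1, k2):
    """Monadic composition of two discrete Markov kernels, where a kernel
    maps a state to a {state: probability} dictionary (a finite distribution)."""
    def k(s):
        out = {}
        for mid, p in k1(s).items():
            for end, q in k2(mid).items():
                out[end] = out.get(end, 0.0) + p * q
        return out
    return k

# A toy kernel: from state n, step to n+1 or n-1 with probability 1/2 each.
def walk(n):
    return {n + 1: 0.5, n - 1: 0.5}

two_steps = compose(walk, walk)
print(two_steps(0))  # -> {2: 0.25, 0: 0.5, -2: 0.25}
```

In the continuous setting the dictionaries become measures and the sums become integrals, which is exactly where the measurability side conditions formalized in the paper come in.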
@InProceedings{CPP17p100,
author = {Johannes Hölzl},
title = {Markov Processes in Isabelle/HOL},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {100--111},
doi = {10.1145/3018610.3018628},
year = {2017},
}
Cauchy reals can be defined as a quotient of Cauchy sequences of rationals. In this case, the limit of a Cauchy sequence of Cauchy reals is defined through lifting it to a sequence of Cauchy sequences of rationals.
This lifting requires the axiom of countable choice or excluded middle, neither of which is available in homotopy type theory. To address this, the Univalent Foundations Program uses a higher inductive-inductive type to define the Cauchy reals as the free Cauchy complete metric space generated by the rationals.
We generalize this construction to define the free Cauchy complete metric space generated by an arbitrary metric space. This forms a monad in the category of metric spaces with Lipschitz functions. When applied to the rationals it defines the Cauchy reals. Finally, we use the partiality monad of Altenkirch and Danielsson (2016) to define a semi-decision procedure comparing a real number and a rational number.
The entire construction has been formalized in the Coq proof assistant. It is available at https://github.com/SkySkimmer/HoTTClasses/tree/CPP2017 .
@InProceedings{CPP17p112,
author = {Gaëtan Gilbert},
title = {Formalising Real Numbers in Homotopy Type Theory},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {112--124},
doi = {10.1145/3018610.3018614},
year = {2017},
}
This paper describes how the latest CakeML compiler supports verified compilation down to multiple realistically modelled target architectures. In particular, we describe how the compiler definition, the various language semantics, and the correctness proofs were organised to minimize target-specific overhead. With our setup we have incorporated compilation to four 64-bit architectures, ARMv8, x86-64, MIPS-64, RISC-V, and one 32-bit architecture, ARMv6. Our correctness theorem allows interference from the environment: the top-level correctness statement takes into account execution of foreign code and per-instruction interference from external processes, such as interrupt handlers in operating systems. The entire CakeML development is formalised in the HOL4 theorem prover.
@InProceedings{CPP17p125,
author = {Anthony Fox and Magnus O. Myreen and Yong Kiam Tan and Ramana Kumar},
title = {Verified Compilation of CakeML to Multiple Machine-Code Targets},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {125--137},
doi = {10.1145/3018610.3018621},
year = {2017},
}
We propose a concurrency reasoning framework for imperative programs, based on the Owicki-Gries (OG) foundational shared-variable concurrency method. Our framework combines the approaches of Hoare-Parallel, a formalisation of OG in Isabelle/HOL for a simple while-language, and Simpl, a generic imperative language embedded in Isabelle/HOL, allowing formal reasoning on C programs.
We define the Complx language, extending the syntax and semantics of Simpl with support for parallel composition and synchronisation. We additionally define an OG logic, which we prove sound w.r.t. the semantics, and a verification condition generator, both supporting involved low-level imperative constructs such as function calls and abrupt termination. We illustrate our framework on an example that features exceptions, guards and function calls. We then aim to target concurrent operating systems, such as the interruptible eChronos embedded operating system, for which we already have a model-level OG proof using Hoare-Parallel.
@InProceedings{CPP17p138,
author = {Sidney Amani and June Andronick and Maksym Bortin and Corey Lewis and Christine Rizkallah and Joseph Tuong},
title = {Complx: A Verification Framework for Concurrent Imperative Programs},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {138--150},
doi = {10.1145/3018610.3018627},
year = {2017},
}
Verifying Dynamic Race Detection
William Mansky, Yuanfeng Peng, Steve Zdancewic, and Joseph Devietti (Princeton University, USA; University of Pennsylvania, USA)
Writing race-free concurrent code is notoriously difficult, and data races can result in bugs that are difficult to isolate and reproduce. Dynamic race detection can catch data races that cannot (easily) be detected statically. One approach to dynamic race detection is to instrument the potentially racy code with operations that store and compare metadata, where the metadata implements some known race detection algorithm (e.g. vector-clock race detection). In this paper, we describe the process of formally verifying several algorithms for dynamic race detection. We then turn to implementations, laying out an instrumentation pass for race detection in a simple language and presenting a mechanized formal proof of its correctness: all races in a program will be caught by the instrumentation, and all races detected by the instrumentation are possible in the original program.
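As a concrete flavour of the metadata such instrumentation maintains, here is a heavily simplified vector-clock detector in Python (names and simplifications are ours; the paper's verified algorithms are more refined): a write races with a previous write unless that write happens-before it, i.e. unless its recorded clock is component-wise below the writer's current clock.

```python
# Minimal vector-clock race-detection sketch.  Each thread carries a
# vector clock; each variable remembers the clock of its last write;
# locks transfer clocks from releaser to acquirer (happens-before).

class Detector:
    def __init__(self, nthreads):
        self.clock = [[0] * nthreads for _ in range(nthreads)]
        for t in range(nthreads):
            self.clock[t][t] = 1
        self.last_write = {}   # variable -> vector clock of its last write
        self.lock_clock = {}   # lock -> clock transferred at release

    def write(self, t, var):
        prev = self.last_write.get(var)
        race = prev is not None and any(prev[i] > self.clock[t][i]
                                        for i in range(len(prev)))
        self.last_write[var] = list(self.clock[t])
        self.clock[t][t] += 1
        return race  # True iff this write races with the previous one

    def release(self, t, lock):
        self.lock_clock[lock] = list(self.clock[t])
        self.clock[t][t] += 1

    def acquire(self, t, lock):
        for i, c in enumerate(self.lock_clock.get(lock, [])):
            self.clock[t][i] = max(self.clock[t][i], c)

# Unsynchronised writes by two threads race; lock-ordered writes do not.
d = Detector(2)
print(d.write(0, "x"))   # -> False (first write to x)
print(d.write(1, "x"))   # -> True  (concurrent with thread 0's write)

d2 = Detector(2)
d2.write(0, "x"); d2.release(0, "l"); d2.acquire(1, "l")
print(d2.write(1, "x"))  # -> False (ordered by the lock)
```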
@InProceedings{CPP17p151,
author = {William Mansky and Yuanfeng Peng and Steve Zdancewic and Joseph Devietti},
title = {Verifying Dynamic Race Detection},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {151--163},
doi = {10.1145/3018610.3018611},
year = {2017},
}
Homotopy Type Theory
The HoTT Library: A Formalization of Homotopy Type Theory in Coq
Andrej Bauer, Jason Gross, Peter LeFanu Lumsdaine, Michael Shulman, Matthieu Sozeau, and Bas Spitters (University of Ljubljana, Slovenia; Massachusetts Institute of Technology, USA; Stockholm University, Sweden; University of San Diego, USA; Inria, France; Aarhus University, Denmark)
We report on the development of the HoTT library, a formalization of homotopy type theory in the Coq proof assistant. It formalizes most of basic homotopy type theory, including univalence, higher inductive types, and significant amounts of synthetic homotopy theory, as well as category theory and modalities. The library has been used as a basis for several independent developments. We discuss the decisions that led to the design of the library, and we comment on the interaction of homotopy type theory with recently introduced features of Coq, such as universe polymorphism and private inductive types.
@InProceedings{CPP17p164,
author = {Andrej Bauer and Jason Gross and Peter LeFanu Lumsdaine and Michael Shulman and Matthieu Sozeau and Bas Spitters},
title = {The HoTT Library: A Formalization of Homotopy Type Theory in Coq},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {164--172},
doi = {10.1145/3018610.3018615},
year = {2017},
}
In a dependently typed language such as Coq or Agda, unification can be used to discharge equality constraints and detect impossible cases automatically. By nature of dependent types, it is necessary to use a proof-relevant unification algorithm where unification rules are functions manipulating equality proofs. This ensures their correctness but simultaneously sets a high bar for new unification rules. In particular, so far no-one has given a satisfactory proof-relevant version of the injectivity rule for indexed datatypes.
In this paper, we describe a general technique for solving equations between constructors of indexed datatypes. We handle the main technical problem—generalization over equality proofs in the indices—by introducing new equations between equality proofs. Borrowing terminology from homotopy type theory, we call them higher-dimensional equations. To apply existing one-dimensional unifiers to these higher-dimensional equations, we show how unifiers can be lifted to a higher dimension. We show the usefulness of this idea by applying it to the unification algorithm used by Agda, though it can also be applied in languages that support identity types but not general indexed datatypes.
@InProceedings{CPP17p173,
author = {Jesper Cockx and Dominique Devriese},
title = {Lifting Proof-Relevant Unification to Higher Dimensions},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {173--181},
doi = {10.1145/3018610.3018612},
year = {2017},
}
A family of syntactic models for the calculus of constructions with universes (CC_{ω}) is described, all of them preserving conversion of the calculus definitionally, and thus giving rise directly to a program transformation of CC_{ω} into itself.
Those models are based on the observation that negative type constructors (e.g. dependent product, coinductive types or universes) are underspecified in type theory, which leaves some freedom for extra intensional specifications.
The model construction can be seen as a compilation phase from a complex type theory into a simpler type theory.
Such models can be used to derive (the negative part of) independence results with respect to CC_{ω}, such as functional extensionality, propositional extensionality, univalence or the fact that bisimulation on a coinductive type may not coincide with equality.
They can also be used to add new principles to the theory, which we illustrate by defining a version of CC_{ω} with ad-hoc polymorphism that shows in particular that parametricity is not an implicit requirement of type theory.
The correctness of some of the models/program transformations has been checked in the Coq proof assistant, and they have been implemented as a Coq plugin.
@InProceedings{CPP17p182,
author = {Simon Boulier and Pierre-Marie Pédrot and Nicolas Tabareau},
title = {The Next 700 Syntactical Models of Type Theory},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {182--194},
doi = {10.1145/3018610.3018620},
year = {2017},
}
Formal Verification of Programming Language Foundations
Type-and-Scope Safe Programs and Their Proofs
Guillaume Allais, James Chapman, Conor McBride, and James McKinna (Radboud University Nijmegen, Netherlands; University of Strathclyde, UK; University of Edinburgh, UK)
We abstract the common type-and-scope safe structure from computations on λ-terms that deliver, e.g., renaming, substitution, evaluation, CPS-transformation, and printing with a name supply. By exposing this structure, we can prove generic simulation and fusion lemmas relating operations built this way. This work has been fully formalised in Agda.
@InProceedings{CPP17p195,
author = {Guillaume Allais and James Chapman and Conor McBride and James McKinna},
title = {Type-and-Scope Safe Programs and Their Proofs},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {195--207},
doi = {10.1145/3018610.3018613},
year = {2017},
}
We formalize the soundness theorem for differential dynamic logic, a logic for verifying hybrid systems.
To increase confidence in the formalization, we present two versions: one in Isabelle/HOL and one in Coq.
We extend the metatheory to include features used in practice, such as systems of differential equations and functions of multiple arguments.
We demonstrate the viability of constructing a verified kernel for the hybrid systems theorem prover
KeYmaera X by embedding proof checkers for differential dynamic logic in Coq and Isabelle.
We discuss how different provers and libraries influence the design of the formalization.
@InProceedings{CPP17p208,
author = {Brandon Bohrer and Vincent Rahli and Ivana Vukotic and Marcus Völp and André Platzer},
title = {Formally Verified Differential Dynamic Logic},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {208--221},
doi = {10.1145/3018610.3018616},
year = {2017},
}
We give a machine-checked proof of the equivalence of the usual, two-sorted presentation of System F and its single-sorted pure type system variant λ2. This is established by reducing the typability problem of F to λ2 and vice versa. The difficulty lies in aligning different binding-structures and different contexts (dependent vs. non-dependent). The use of de Bruijn syntax, parallel substitutions, and context morphism lemmas leads to an elegant proof. We use the Coq proof assistant and the substitution library Autosubst.
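The de Bruijn machinery the proof rests on can be sketched in Python (our illustration of parallel substitution in the Autosubst style, not the Coq development itself): a substitution maps indices to terms, and crossing a binder lifts it, which makes β-reduction a one-liner.

```python
# Untyped lambda-terms in de Bruijn representation with parallel
# substitution: sigma maps each variable index to a term, and is
# lifted ("up") when traversing a binder.

def Var(n):    return ("var", n)
def App(s, t): return ("app", s, t)
def Lam(s):    return ("lam", s)

def rename(t, f):
    """Apply the index renaming f to every variable in t."""
    tag = t[0]
    if tag == "var":
        return Var(f(t[1]))
    if tag == "app":
        return App(rename(t[1], f), rename(t[2], f))
    return Lam(rename(t[1], lambda n: 0 if n == 0 else f(n - 1) + 1))

def subst(t, sigma):
    """Apply the parallel substitution sigma (index -> term) to t."""
    tag = t[0]
    if tag == "var":
        return sigma(t[1])
    if tag == "app":
        return App(subst(t[1], sigma), subst(t[2], sigma))
    up = lambda n: Var(0) if n == 0 else rename(sigma(n - 1), lambda k: k + 1)
    return Lam(subst(t[1], up))

def beta(fun_body, arg):
    """One beta-step: (lam. body) arg  ~>  body[arg/0], free indices shifted down."""
    return subst(fun_body, lambda n: arg if n == 0 else Var(n - 1))

# (lam. 0 1) applied to the free variable 5: index 0 becomes 5,
# index 1 (free under the binder) drops to 0.
print(beta(App(Var(0), Var(1)), Var(5)))  # -> ('app', ('var', 5), ('var', 0))
```

Context morphism lemmas then relate typing judgments across such substitutions, which is what aligns the two-sorted F contexts with the dependent λ2 contexts.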
@InProceedings{CPP17p222,
author = {Jonas Kaiser and Tobias Tebbi and Gert Smolka},
title = {Equivalence of System F and λ2 in Coq Based on Context Morphism Lemmas},
booktitle = {Proc.\ CPP},
publisher = {ACM},
pages = {222--234},
doi = {10.1145/3018610.3018618},
year = {2017},
}