Proceedings of the ACM on Programming Languages, Volume 6, Number OOPSLA2,
December 8–10, 2022,
Auckland, New Zealand
Frontmatter
Papers
AnICA: Analyzing Inconsistencies in Microarchitectural Code Analyzers
Fabian Ritter and
Sebastian Hack
(Saarland University, Germany)
Microarchitectural code analyzers, i.e., tools that estimate the throughput of
machine code basic blocks, are important utensils in the tool belt of
performance engineers. Recent tools like llvm-mca, uiCA, and Ithemal use a
variety of techniques and different models for their throughput predictions.
When put to the test, it is common to see these state-of-the-art tools give
very different results. These inconsistencies are either errors, or they point
to different and rarely documented assumptions made by the tool designers.
In this paper, we present AnICA, a tool taking inspiration from differential
testing and abstract interpretation to systematically analyze inconsistencies
among these code analyzers. Our evaluation shows that AnICA can summarize
thousands of inconsistencies in a few dozen descriptions that directly lead to
high-level insights into the different behavior of the tools. In several case
studies, we further demonstrate how AnICA automatically finds and characterizes
known and unknown bugs in llvm-mca, as well as a quirk in AMD's Zen
microarchitectures.
@Article{OOPSLA222p125,
author = {Fabian Ritter and Sebastian Hack},
title = {AnICA: Analyzing Inconsistencies in Microarchitectural Code Analyzers},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {125},
numpages = {29},
doi = {10.1145/3563288},
year = {2022},
}
Publisher's Version
Info
Artifacts Reusable
First-Class Names for Effect Handlers
Ningning Xie,
Youyou Cong,
Kazuki Ikemori, and
Daan Leijen
(University of Cambridge, UK; Tokyo Institute of Technology, Japan; Microsoft Research, USA)
Algebraic effects and handlers are a promising technique for incorporating composable computational effects into functional programming languages. Effect handlers enable concisely programming with different effects, but they do not offer a convenient way to program with different instances of the same effect. As a solution to this inconvenience, previous studies have introduced _named effect handlers_, which allow the programmer to distinguish among different effect instances. However, existing formalizations of named handlers are both involved and restrictive, as they employ non-standard mechanisms to prevent the escaping of handler names.
In this paper, we propose a simple and flexible design of named handlers. Specifically, we treat handler names as first-class values, and prevent their escaping while staying within the ordinary λ-calculus. Such a design is enabled by combining named handlers with _scoped effects_, a novel variation of effects that maintain a scope via rank-2 polymorphism. We formalize two combinations of named handlers and scoped effects, and implement them in the Koka programming language. We also present practical applications of named handlers, including a neural network and a unification algorithm.
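A minimal TypeScript analogy of the first-class-names idea (this sketch is not the paper's Koka formulation and reproduces none of its effect-handler machinery; the Counter interface and withCounter function are invented purely for illustration):

interface Counter { tick(): number }               // one "effect" interface

// A handler introduces a fresh, first-class name h for its effect instance.
function withCounter<A>(body: (h: Counter) => A): A {
  let n = 0;
  return body({ tick: () => ++n });
}

// Two instances of the same effect coexist; the first-class handles h1 and h2
// keep their operations apart.
const result = withCounter(h1 =>
  withCounter(h2 => {
    h1.tick(); h1.tick(); h2.tick();
    return [h1.tick(), h2.tick()];                 // [3, 2]
  })
);
console.log(result);

// What the paper's type system additionally guarantees (and this sketch does not)
// is that a handle cannot escape the dynamic extent of its handler, which is
// where the rank-2 polymorphism of scoped effects comes in.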
@Article{OOPSLA222p126,
author = {Ningning Xie and Youyou Cong and Kazuki Ikemori and Daan Leijen},
title = {First-Class Names for Effect Handlers},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {126},
numpages = {30},
doi = {10.1145/3563289},
year = {2022},
}
Publisher's Version
Artifacts Reusable
Compositional Virtual Timelines: Verifying Dynamic-Priority Partitions with Algorithmic Temporal Isolation
Mengqi Liu,
Zhong Shao,
Hao Chen,
Man-Ki Yoon, and
Jung-Eun Kim
(Yale University, USA)
Real-time systems power safety-critical applications that require strong isolation from one another. Such isolation needs to be enforced at two orthogonal levels. On the micro-architectural level, this mainly involves avoiding interference through micro-architectural states, such as cache lines. On the algorithmic level, this is usually achieved by adopting real-time partitions to reserve resources for each application. Implementations of such systems are often complex and require formal verification to guarantee proper isolation.
In this paper, we focus on algorithmic isolation, which is mainly related to scheduling-induced interference. We address earliest-deadline-first (EDF) partitions to achieve compositionality and utilization, while imposing constraints on tasks’ periods and enforcing budgets on these periodic partitions to ensure isolation between them. The formal verification of such a real-time OS kernel is challenging due to the inherent complexity of the dynamic priority assignment on the partition level. We tackle this problem by adopting a dynamically constructed abstraction to lift the reasoning of a concrete scheduler into an abstract domain. Using this framework, we verify a real-time operating system kernel with budget-enforcing EDF partitions and prove that it indeed ensures isolation between partitions. All the proofs are mechanized in Coq.
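For readers unfamiliar with EDF scheduling, the classical utilization-based schedulability condition for implicit-deadline periodic tasks may help fix intuition; this is textbook background rather than the paper's own formulation, with C_i the worst-case execution time and T_i the period of task i:

\sum_{i=1}^{n} \frac{C_i}{T_i} \;\le\; 1

Informally, a budget-enforcing partition with budget B and period P contributes roughly B/P to this bound, which is the usual intuition behind composing partitions; the paper's virtual-timeline abstraction and its Coq proofs are considerably more involved.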
@Article{OOPSLA222p127,
author = {Mengqi Liu and Zhong Shao and Hao Chen and Man-Ki Yoon and Jung-Eun Kim},
title = {Compositional Virtual Timelines: Verifying Dynamic-Priority Partitions with Algorithmic Temporal Isolation},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {127},
numpages = {29},
doi = {10.1145/3563290},
year = {2022},
}
Publisher's Version
Artifacts Functional
Parsing Randomness
Harrison Goldstein and
Benjamin C. Pierce
(University of Pennsylvania, USA)
Random data generators can be thought of as parsers of streams of randomness. This perspective on generators for random data structures is established folklore in the programming languages community, but it has never been formalized, nor have its consequences been deeply explored.
We build on the idea of freer monads to develop free generators, which unify parsing and generation using a common structure that makes the relationship between the two concepts precise. Free generators lead naturally to a proof that a monadic generator can be factored into a parser plus a distribution over choice sequences. Free generators also support a notion of derivative, analogous to the familiar Brzozowski derivatives of formal languages, allowing analysis tools to "preview" the effect of a particular generator choice. This gives rise to a novel algorithm for generating data structures satisfying user-specified preconditions.
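The following TypeScript sketch illustrates the folklore reading described above: one generator, run once against fresh randomness and once against a recorded choice sequence. It is only an illustration of the idea, not the paper's Haskell free-generator construction; all names below are invented.

type Choice = number;                      // an index into the options at a choice point

interface ChoiceSource {
  next(bound: number): Choice;             // returns a choice in [0, bound)
}

// Fresh randomness: ordinary generation.
class RandomSource implements ChoiceSource {
  next(bound: number): Choice { return Math.floor(Math.random() * bound); }
}

// A recorded choice sequence: replaying it "parses" the sequence into the value it determines.
class ReplaySource implements ChoiceSource {
  private pos = 0;
  constructor(private readonly trace: Choice[]) {}
  next(bound: number): Choice {
    const c = this.trace[this.pos++];
    if (c === undefined || c < 0 || c >= bound) throw new Error("trace does not parse");
    return c;
  }
}

// One generator, two readings: a small binary-tree generator.
type Tree = { tag: "leaf" } | { tag: "node"; left: Tree; value: number; right: Tree };

function genTree(src: ChoiceSource, depth: number): Tree {
  if (depth === 0 || src.next(2) === 0) return { tag: "leaf" };
  return {
    tag: "node",
    left: genTree(src, depth - 1),
    value: src.next(100),
    right: genTree(src, depth - 1),
  };
}

console.log(genTree(new RandomSource(), 3));              // generation
console.log(genTree(new ReplaySource([1, 0, 42, 0]), 3)); // parsing a recorded run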
@Article{OOPSLA222p128,
author = {Harrison Goldstein and Benjamin C. Pierce},
title = {Parsing Randomness},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {128},
numpages = {25},
doi = {10.1145/3563291},
year = {2022},
}
Publisher's Version
Archive submitted (1.5 MB)
CAAT: Consistency as a Theory
Thomas Haas,
Roland Meyer, and
Hernán Ponce de León
(TU Braunschweig, Germany; Huawei, Germany)
We propose a family of logical theories for capturing an abstract notion of consistency and show how to build a generic and efficient theory solver that works for all members in the family.
The theories can be used to model the influence of memory consistency models on the semantics of concurrent programs.
They are general enough to precisely capture important examples like TSO, POWER, ARMv8, RISC-V, RC11, IMM, and the Linux kernel memory model.
To evaluate the expressiveness of our theories and the performance of our solver, we integrate them into a lazy SMT scheme that we use as a backend for a bounded model checking tool.
An evaluation against related verification tools shows, besides flexibility, promising performance on challenging programs under complex memory models.
@Article{OOPSLA222p129,
author = {Thomas Haas and Roland Meyer and Hernán Ponce de León},
title = {CAAT: Consistency as a Theory},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {129},
numpages = {31},
doi = {10.1145/3563292},
year = {2022},
}
Publisher's Version
Artifacts Functional
Reasoning about Distributed Reconfigurable Systems
Emma Ahrens,
Marius Bozga,
Radu Iosif, and
Joost-Pieter Katoen
(RWTH Aachen University, Germany; Université Grenoble Alpes, France; CNRS, France; Grenoble INP, France; VERIMAG, France)
This paper presents a Hoare-style calculus for formal reasoning about
reconfiguration programs of distributed systems. Such programs create
and delete components and/or interactions (connectors) while the
system components change state according to their internal behaviour.
Our proof calculus uses a resource logic, in the spirit of Separation
Logic, to give local specifications of reconfiguration
actions. Moreover, distributed systems with an unbounded number of
components are described using inductively defined predicates. The
correctness of reconfiguration programs relies on havoc invariants,
that are assertions about the ongoing interactions in a part of the
system that is not affected by the structural change caused by the
reconfiguration. We present a proof system for such invariants in an
assume/rely-guarantee style. We illustrate the feasibility of our
approach by proving the correctness of real-life distributed systems
with reconfigurable (self-adjustable) tree architectures.
@Article{OOPSLA222p130,
author = {Emma Ahrens and Marius Bozga and Radu Iosif and Joost-Pieter Katoen},
title = {Reasoning about Distributed Reconfigurable Systems},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {130},
numpages = {30},
doi = {10.1145/3563293},
year = {2022},
}
Publisher's Version
Compositional Embeddings of Domain-Specific Languages
Yaozhu Sun,
Utkarsh Dhandhania, and
Bruno C. d. S. Oliveira
(University of Hong Kong, China)
A common approach to defining domain-specific languages (DSLs) is via a direct embedding into a host language. There are several well-known techniques to do such embeddings, including shallow and deep embeddings. However, such embeddings come with various trade-offs in existing programming languages. Owing to such trade-offs, many embedded DSLs end up using a mix of approaches in practice, requiring a substantial amount of code, as well as some advanced coding techniques.
In this paper, we show that the recently proposed Compositional Programming paradigm and the CP language provide improved support for embedded DSLs. In CP we obtain a new form of embedding, which we call a compositional embedding, that has most of the advantages of both shallow and deep embeddings. On the one hand, compositional embeddings enable various forms of linguistic reuse that are characteristic of shallow embeddings, including the ability to reuse host-language optimizations in the DSL and add new DSL constructs easily. On the other hand, similarly to deep embeddings, compositional embeddings support definitions by pattern matching or dynamic dispatching (including dependent interpretations, transformations, and optimizations) over the abstract syntax of the DSL and have the ability to add new interpretations. We illustrate an instance of compositional embeddings with a DSL for document authoring called ExT. The DSL is highly flexible and extensible, allowing users to create various non-trivial extensions easily. For instance, ExT supports various extensions that enable the production of wiki-like documents, LaTeX documents, vector graphics or charts. The viability of compositional embeddings for ExT is evaluated with three applications.
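As background for the trade-off described above, the two classic embedding styles can be sketched in a few lines of TypeScript (the paper itself works in the CP language, whose compositional embeddings are not reproducible here; the document DSL below is invented for illustration):

// Shallow embedding: a DSL term *is* its interpretation, here a renderer.
// New constructs are easy to add; a second interpretation is not.
type DocShallow = () => string;
const text = (s: string): DocShallow => () => s;
const bold = (d: DocShallow): DocShallow => () => `**${d()}**`;

// Deep embedding: a DSL term is an AST; interpretations pattern-match over it.
// New interpretations are easy to add; new constructs require touching them all.
type DocDeep =
  | { tag: "text"; s: string }
  | { tag: "bold"; d: DocDeep };

function render(d: DocDeep): string {
  switch (d.tag) {
    case "text": return d.s;
    case "bold": return `**${render(d.d)}**`;
  }
}

function docLength(d: DocDeep): number {
  switch (d.tag) {
    case "text": return d.s.length;
    case "bold": return docLength(d.d);
  }
}

console.log(bold(text("hi"))(), render({ tag: "bold", d: { tag: "text", s: "hi" } }));

Compositional embeddings in CP aim to keep both extension directions open at once, which is exactly what neither style above achieves on its own.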
@Article{OOPSLA222p131,
author = {Yaozhu Sun and Utkarsh Dhandhania and Bruno C. d. S. Oliveira},
title = {Compositional Embeddings of Domain-Specific Languages},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {131},
numpages = {29},
doi = {10.1145/3563294},
year = {2022},
}
Publisher's Version
Archive submitted (930 kB)
Info
Artifacts Reusable
Scalable Linear Invariant Generation with Farkas’ Lemma
Hongming Liu,
Hongfei Fu,
Zhiyong Yu,
Jiaxin Song, and
Guoqiang Li
(Shanghai Jiao Tong University, China)
Invariant generation is a classical problem to automatically generate invariants to aid the formal analysis of programs. In this work, we consider the problem of generating tight linear invariants over affine programs (i.e., programs with affine guards and updates) without a prescribed goal property. In the literature, the only known sound and complete characterization to solve this problem is via Farkas’ Lemma (FL); it has been implemented through either quantifier elimination or reasonable heuristics. Although FL-based approaches can generate highly accurate linear invariants from the completeness of FL, the main bottleneck to applying these approaches is the scalability issue caused by either non-linear constraints or combinatorial explosion. We base our approach on the only practical FL-based approach [Sankaranarayanan et al., SAS 2004] that applies FL with reasonable heuristics, and develop two novel and independent improvements that address its scalability. The first improvement is the novel idea to generate invariants at one program location in a single invariant-generation process, so that the invariants for each location are generated separately rather than together in a single computation. This idea naturally leads to parallel processing that divides the invariant-generation task for all program locations by assigning the locations separately to multiple processors. Moreover, the idea enables us to develop detailed technical improvements to further reduce the combinatorial explosion in the original work [Sankaranarayanan et al., SAS 2004]. The second improvement is a segmented subsumption testing in the CNF-to-DNF expansion that allows discovering more local subsumptions in advance. We formally prove that our approach has the same accuracy as the original work and thus does not incur accuracy loss on the generated invariants. Moreover, experimental results on representative benchmarks involving non-trivial linear invariants demonstrate that our approach improves the runtime of the original work by several orders of magnitude, even in the non-parallel scenario that sums up the execution time for all program locations. Hence, our approach constitutes the first significant improvement in FL-based approaches for linear invariant generation after almost two decades.
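For reference, the affine form of Farkas' Lemma that underlies such encodings can be stated as follows (textbook statement; the paper's actual constraint systems and heuristics are not reproduced here):

% If the polyhedron {x | Ax <= b} is non-empty, then the linear consequence
% c^T x <= d holds on it exactly when it is a non-negative combination of the rows:
\{x \mid Ax \le b\} \neq \emptyset \;\Longrightarrow\;
\Bigl[\, \bigl(\forall x.\; Ax \le b \;\Rightarrow\; c^{\mathsf{T}} x \le d\bigr)
\;\Longleftrightarrow\;
\exists \lambda \ge 0.\;\; \lambda^{\mathsf{T}} A = c^{\mathsf{T}} \;\wedge\; \lambda^{\mathsf{T}} b \le d \,\Bigr]

Treating the coefficients c and d as unknowns in this equivalence is what yields the (generally non-linear) constraint systems whose solutions are inductive linear invariants.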
@Article{OOPSLA222p132,
author = {Hongming Liu and Hongfei Fu and Zhiyong Yu and Jiaxin Song and Guoqiang Li},
title = {Scalable Linear Invariant Generation with Farkas’ Lemma},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {132},
numpages = {29},
doi = {10.1145/3563295},
year = {2022},
}
Publisher's Version
Can Guided Decomposition Help End-Users Write Larger Block-Based Programs? A Mobile Robot Experiment
Nico Ritschel,
Felipe Fronchetti,
Reid Holmes,
Ronald Garcia, and
David C. Shepherd
(University of British Columbia, Canada; Virginia Commonwealth University, USA)
Block-based programming environments, already popular in computer science education, have been successfully used to make programming accessible to end-users in domains like robotics, mobile apps, and even DevOps. Most studies of these applications have examined small programs that fit within a single screen, yet real-world programs often grow large, and editing these large block-based programs quickly becomes unwieldy. Traditional programming language features, like functions, allow programmers to decompose their programs. Unfortunately, both previous work, and our own findings, suggest that end-users rarely use these features, resulting in large monolithic code blocks that are hard to understand. In this work, we introduce a block-based system that provides users with a hierarchical, domain-specific program structure and requires them to decompose their programs accordingly. Through a user study with 92 users, we compared this approach, which we call guided program decomposition, to a traditional system that supports functions, but does not require decomposition. We found that while almost all users could successfully complete smaller tasks, those who decomposed their programs were significantly more successful as the tasks grew larger. As expected, most users without guided decomposition did not decompose their programs, resulting in poor performance on larger problems. In comparison, users of guided decomposition performed significantly better on the same tasks. Though this study investigated only a limited selection of tasks in one specific domain, it suggests that guided decomposition can benefit end-user programmers. While no single decomposition strategy fits all domains, we believe that similar domain-specific sub-hierarchies could be found for other application areas, increasing the scale of code end-users can create and understand.
@Article{OOPSLA222p133,
author = {Nico Ritschel and Felipe Fronchetti and Reid Holmes and Ronald Garcia and David C. Shepherd},
title = {Can Guided Decomposition Help End-Users Write Larger Block-Based Programs? A Mobile Robot Experiment},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {133},
numpages = {26},
doi = {10.1145/3563296},
year = {2022},
}
Publisher's Version
Info
Tower: Data Structures in Quantum Superposition
Charles Yuan and
Michael Carbin
(Massachusetts Institute of Technology, USA)
Emerging quantum algorithms for problems such as element distinctness, subset sum, and closest pair demonstrate computational advantages by relying on abstract data structures. Practically realizing such an algorithm as a program for a quantum computer requires an efficient implementation of the data structure whose operations correspond to unitary operators that manipulate quantum superpositions of data.
To correctly operate in superposition, an implementation must satisfy three properties: reversibility, history independence, and bounded-time execution. Standard implementations, such as the representation of an abstract set as a hash table, fail these properties, calling for tools to develop specialized implementations.
In this work, we present Core Tower, the first language for quantum programming with random-access memory. Core Tower enables the developer to implement data structures as pointer-based, linked data. It features a reversible semantics enabling every valid program to be translated to a unitary quantum circuit.
We present Boson, the first memory allocator that supports reversible, history-independent, and constant-time dynamic memory allocation in quantum superposition.
We also present Tower, a language for quantum programming with recursively defined data structures. Tower features a type system that bounds all recursion using classical parameters as is necessary for a program to execute on a quantum computer.
Using Tower, we implement Ground, the first quantum library of data structures, including lists, stacks, queues, strings, and sets. We provide the first executable implementation of sets that satisfies all three mandated properties of reversibility, history independence, and bounded-time execution.
@Article{OOPSLA222p134,
author = {Charles Yuan and Michael Carbin},
title = {Tower: Data Structures in Quantum Superposition},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {134},
numpages = {30},
doi = {10.1145/3563297},
year = {2022},
}
Publisher's Version
Archive submitted (410 kB)
Artifacts Reusable
Proving Hypersafety Compositionally
Emanuele D’Osualdo,
Azadeh Farzan, and
Derek Dreyer
(MPI-SWS, Germany; University of Toronto, Canada)
Hypersafety properties of arity n are program properties that relate n traces of a program (or, more generally, traces of n programs).
Classic examples include determinism, idempotence, and associativity.
A number of relational program logics have been introduced to target this class of properties. Their aim is to construct simpler proofs by capitalizing on structural similarities between the n related programs.
We propose an unexplored, complementary proof principle that establishes hyper-triples (i.e. hypersafety judgments) as a unifying compositional building block for proofs, and we use it to develop a Logic for Hyper-triple Composition (LHC), which supports forms of proof compositionality that were not achievable in previous logics.
We prove LHC sound and apply it to a number of challenging examples.
@Article{OOPSLA222p135,
author = {Emanuele D’Osualdo and Azadeh Farzan and Derek Dreyer},
title = {Proving Hypersafety Compositionally},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {135},
numpages = {26},
doi = {10.1145/3563298},
year = {2022},
}
Publisher's Version
Bridging the Semantic Gap between Qualitative and Quantitative Models of Distributed Systems
Si Liu,
Jose Meseguer,
Peter Csaba Ölveczky,
Min Zhang, and
David Basin
(ETH Zurich, Switzerland; University of Illinois at Urbana-Champaign, USA; University of Oslo, Norway; East China Normal University, China)
Today’s distributed systems must satisfy both qualitative and quantitative properties. These properties are analyzed using very different formal frameworks: expressive untimed and non-probabilistic frameworks, such as TLA+ and Hoare/separation logics, for qualitative properties; and timed/probabilistic-automaton-based ones, such as Uppaal and Prism, for quantitative ones. This requires developing two quite different models of the same system, without guarantees of semantic consistency between them. Furthermore, it is very hard or impossible to represent intrinsic features of distributed object systems—such as unbounded data structures, dynamic object creation, and an unbounded number of messages—using finite automata.
In this paper we bridge this semantic gap, overcome the problem of manually having to develop two different models of a system, and solve the representation problem by: (i) defining a transformation from a very general class of distributed systems (a generalization of Agha’s actor model) that maps an untimed non-probabilistic distributed system model suitable for qualitative analysis to a probabilistic timed model suitable for quantitative analysis; and (ii) proving the two models semantically consistent. We formalize our models in rewriting logic, and can therefore use the Maude tool to analyze qualitative properties, and statistical model checking with PVeStA to analyze quantitative properties. We have automated this transformation and integrated it, together with the PVeStA statistical model checker, into the Actors2PMaude tool. We illustrate the expressiveness of our framework and our tool’s ease of use by automatically transforming untimed, qualitative models of numerous distributed system designs—including an industrial data store and a state-of-the-art transaction system—into quantitative models to analyze and compare the performance of different designs.
@Article{OOPSLA222p136,
author = {Si Liu and Jose Meseguer and Peter Csaba Ölveczky and Min Zhang and David Basin},
title = {Bridging the Semantic Gap between Qualitative and Quantitative Models of Distributed Systems},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {136},
numpages = {30},
doi = {10.1145/3563299},
year = {2022},
}
Publisher's Version
Info
Artifacts Functional
Concurrent Size
Gal Sela and
Erez Petrank
(Technion, Israel)
The size of a data structure (i.e., the number of elements in it) is a widely used property of a data set. However, for concurrent programs, obtaining a correct size efficiently is non-trivial. In fact, the literature does not offer a mechanism to obtain a correct (linearizable) size of a concurrent data set without resorting to inefficient solutions, such as taking a full snapshot of the data structure to count the elements, or acquiring one global lock in all update and size operations. This paper presents a methodology for adding a concurrent linearizable size operation to sets and dictionaries with a relatively low performance overhead. Theoretically, the proposed size operation is wait-free with asymptotic complexity linear in the number of threads (independently of data-structure size). Practically, we evaluated the performance overhead by adding size to various concurrent data structures in Java: a skip list, a hash table, and a tree. The proposed linearizable size operation executes faster by orders of magnitude compared to the existing option of taking a snapshot, while incurring a throughput loss of 1%–20% on the original data structure’s operations.
@Article{OOPSLA222p137,
author = {Gal Sela and Erez Petrank},
title = {Concurrent Size},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {137},
numpages = {28},
doi = {10.1145/3563300},
year = {2022},
}
Publisher's Version
Artifacts Reusable
Wildcards Need Witness Protection
Kevin Bierhoff
(Google, USA)
In this paper, we show that the unsoundness discovered by Amin and Tate (2016) in Java’s wildcards is avoidable, even in the absence of a nullness-aware type system.
The key insight of this paper is that soundness in type systems that implicitly introduce existential types through subtyping hinges on still making sure there are suitable witness types when introducing existentially quantified type variables.
To show that this approach is viable, this paper formalizes a core calculus and proves it sound.
We used a static analysis based on our approach to look for potential issues in a vast corpus of Java code and found no real issues (the analysis reported a single false positive).
This confirms both that Java's unsoundness has minimal practical consequence,
and that our approach can avoid it entirely with minimal false positives.
@Article{OOPSLA222p138,
author = {Kevin Bierhoff},
title = {Wildcards Need Witness Protection},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {138},
numpages = {22},
doi = {10.1145/3563301},
year = {2022},
}
Publisher's Version
Info
Overwatch: Learning Patterns in Code Edit Sequences
Yuhao Zhang,
Yasharth Bajpai,
Priyanshu Gupta,
Ameya Ketkar,
Miltiadis Allamanis,
Titus Barik,
Sumit Gulwani,
Arjun Radhakrishna,
Mohammad Raza,
Gustavo Soares, and
Ashish Tiwari
(University of Wisconsin-Madison, USA; Microsoft, India; Uber, USA; Microsoft Research, UK; Microsoft, USA)
Integrated Development Environments (IDEs) provide tool support to automate many source code editing tasks. Traditionally, IDEs use only the spatial context, i.e., the location where the developer is editing, to generate candidate edit recommendations. However, spatial context alone is often not sufficient to confidently predict the developer’s next edit, and thus IDEs generate many suggestions at a location. Therefore, IDEs generally do not actively offer suggestions and instead, the developer is usually required to click on a specific icon or menu and then select from a large list of potential suggestions. As a consequence, developers often miss the opportunity to use the tool support because they are not aware it exists or forget to use it.
To better understand common patterns in developer behavior and produce better edit recommendations, we can additionally use the temporal context, i.e., the edits that a developer was recently performing. To enable edit recommendations based on temporal context, we present Overwatch, a novel technique for learning edit sequence patterns from traces of developers’ edits performed in an IDE. Our experiments show that Overwatch has 78% precision and that Overwatch not only completed edits when developers missed the opportunity to use the IDE tool support but also predicted new edits that have no tool support in the IDE.
@Article{OOPSLA222p139,
author = {Yuhao Zhang and Yasharth Bajpai and Priyanshu Gupta and Ameya Ketkar and Miltiadis Allamanis and Titus Barik and Sumit Gulwani and Arjun Radhakrishna and Mohammad Raza and Gustavo Soares and Ashish Tiwari},
title = {Overwatch: Learning Patterns in Code Edit Sequences},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {139},
numpages = {29},
doi = {10.1145/3563302},
year = {2022},
}
Publisher's Version
Incremental Type-Checking for Free: Using Scope Graphs to Derive Incremental Type-Checkers
Aron Zwaan,
Hendrik van Antwerpen, and
Eelco Visser
(Delft University of Technology, Netherlands)
Fast analysis response times in IDEs are essential for a good editor experience.
Incremental type-checking can provide that in a scalable fashion.
However, existing techniques are not reusable between languages.
Moreover, mutual and dynamic dependencies preclude traditional approaches to incrementality.
This makes finding automatic approaches to incremental type-checking a challenging but important open question.
In this paper, we present a technique that automatically derives incremental type-checkers from type system specifications written in the Statix meta-DSL.
We use name resolution queries in scope graphs (a generic model of name binding embedded in Statix) to derive dependencies between compilation units.
A novel query confirmation algorithm finds queries for which the answer changed due to an edit in the program.
Only units with such queries require reanalysis.
The effectiveness of this algorithm is improved by
(1) splitting the type-checking task into a context-free and a context-sensitive part, and
(2) reusing a generic mechanism to resolve mutual dependencies.
This automatically yields incremental type-checkers for any Statix specification.
Compared to non-incremental parallel execution, we achieve speedups up to 147x on synthetic benchmarks, and up to 21x on real-world projects, with initial overheads below 10%.
This suggests that our framework can provide efficient incremental type-checking to the wide range of languages supported by Statix.
@Article{OOPSLA222p140,
author = {Aron Zwaan and Hendrik van Antwerpen and Eelco Visser},
title = {Incremental Type-Checking for Free: Using Scope Graphs to Derive Incremental Type-Checkers},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {140},
numpages = {25},
doi = {10.1145/3563303},
year = {2022},
}
Publisher's Version
Archive submitted (1.2 MB)
Artifacts Functional
MLstruct: Principal Type Inference in a Boolean Algebra of Structural Types
Lionel Parreaux and
Chun Yin Chau
(Hong Kong University of Science and Technology, China)
Intersection and union types are becoming more popular by the day, entering the mainstream in programming languages like TypeScript and Scala 3. Yet, no language so far has managed to combine these powerful types with principal polymorphic type inference. We present a solution to this problem in MLstruct, a language with subtyped records, equirecursive types, first-class unions and intersections, class-based instance matching, and ML-style principal type inference. While MLstruct is mostly structurally typed, it contains a healthy sprinkle of nominality for classes, which gives it desirable semantics, enabling the expression of a powerful form of extensible variants that does not need row variables. Technically, we define the constructs of our language using conjunction, disjunction, and negation connectives, making sure they form a Boolean algebra, and we show that the addition of a few nonstandard but sound subtyping rules gives us enough structure to derive a sound and complete type inference algorithm. With this work, we hope to foster the development of better type inference for present and future programming languages with expressive subtyping systems.
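TypeScript itself ships the connectives mentioned above, which makes it a convenient point of comparison; the snippet below is ordinary TypeScript, not MLstruct syntax, and unlike TypeScript, MLstruct infers principal types for such code without the annotations:

type Circle = { kind: "circle"; radius: number };
type Square = { kind: "square"; side: number };

// A union type: a value is one of the alternatives.
type Shape = Circle | Square;

// An intersection type: a value satisfies both record types at once.
type Named = { name: string };
type NamedShape = Shape & Named;

function area(s: Shape): number {
  // Tag-based instance matching, in the spirit of MLstruct's class-based matching.
  return s.kind === "circle" ? Math.PI * s.radius ** 2 : s.side ** 2;
}

const c: NamedShape = { kind: "circle", radius: 1, name: "unit circle" };
console.log(area(c));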
@Article{OOPSLA222p141,
author = {Lionel Parreaux and Chun Yin Chau},
title = {MLstruct: Principal Type Inference in a Boolean Algebra of Structural Types},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {141},
numpages = {30},
doi = {10.1145/3563304},
year = {2022},
}
Publisher's Version
Info
Artifacts Functional
Highly Illogical, Kirk: Spotting Type Mismatches in the Large Despite Broken Contracts, Unsound Types, and Too Many Linters
Joshua Hoeflich,
Robert Bruce Findler, and
Manuel Serrano
(Northwestern University, USA; Inria, France; University of Côte d'Azur, France)
The DefinitelyTyped repository hosts type declarations for thousands of JavaScript libraries. Given the lack of formal connection between the types and the corresponding code, a natural question is: are the types right? An equally important question, as DefinitelyTyped and the libraries it supports change over time, is: how can we keep the types from becoming wrong?
In this paper we offer Scotty, a tool that detects mismatches between the types and code in the DefinitelyTyped repository. More specifically, Scotty checks each package by converting its types into contracts and installing the contracts on the boundary between the library and its test suite. Running the test suite in this environment can reveal mismatches between the types and the JavaScript code. As automation and generality are both essential if such a tool is going to remain useful in the long term, we focus on techniques that sacrifice completeness, instead preferring to avoid false positives. Scotty currently handles about 26% of the 8006 packages on DefinitelyTyped (61% of the packages whose code is available and whose test suite passes).
Perhaps unsurprisingly, running the tests with these contracts in place revealed many errors in DefinitelyTyped. More surprisingly, despite the inherent limitations of the techniques we use, this exercise led to one hundred accepted pull requests that fix errors in DefinitelyTyped, demonstrating the value of this approach for the long-term maintenance of DefinitelyTyped. It also revealed a number of lessons about working in the JavaScript ecosystem and how details beyond the semantics of the language can be surprisingly important. Best of all, it also revealed a few places where programmers preferred incorrect types, suggesting some avenues of research to improve TypeScript.
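A rough sketch of the "types as contracts" idea described above, in TypeScript (this is not Scotty's implementation; the declaration and wrapper below are invented for illustration):

// A DefinitelyTyped-style declaration might promise:
//   declare function parseAge(s: string): number;
// A contract derived from that type checks the library boundary at runtime.
function contractParseAge(impl: (s: unknown) => unknown): (s: string) => number {
  return (s: string): number => {
    // Guards untyped JavaScript callers that may ignore the declared argument type.
    if (typeof s !== "string") throw new TypeError("contract violation: argument is not a string");
    const result = impl(s);
    if (typeof result !== "number") {
      // A failure here means the .d.ts type and the JavaScript code disagree.
      throw new TypeError("contract violation: result is not a number");
    }
    return result;
  };
}

// Wrapping the untyped library export and then running the package's own test suite
// against the wrapped version is what surfaces type/code mismatches.
const parseAge = contractParseAge((s) => Number.parseInt(s as string, 10));
console.log(parseAge("42"));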
@Article{OOPSLA222p142,
author = {Joshua Hoeflich and Robert Bruce Findler and Manuel Serrano},
title = {Highly Illogical, Kirk: Spotting Type Mismatches in the Large Despite Broken Contracts, Unsound Types, and Too Many Linters},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {142},
numpages = {26},
doi = {10.1145/3563305},
year = {2022},
}
Publisher's Version
Data-Driven Lemma Synthesis for Interactive Proofs
Aishwarya Sivaraman,
Alex Sanchez-Stern,
Bretton Chen,
Sorin Lerner, and
Todd Millstein
(University of California at Los Angeles, USA; University of Massachusetts at Amherst, USA; University of California at San Diego, USA)
Interactive proofs of theorems often require auxiliary helper lemmas to prove the desired theorem. Existing approaches for automatically synthesizing helper lemmas fall into two broad categories. Some approaches are goal-directed, producing lemmas specifically to help a user make progress from a given proof state, but they have limited expressiveness in terms of the lemmas that can be produced. Other approaches are highly expressive, able to generate arbitrary lemmas from a given grammar, but they are completely undirected and hence not amenable to interactive usage.
In this paper, we develop an approach to lemma synthesis that is both goal-directed and expressive.
The key novelty is a technique for reducing lemma synthesis to a data-driven program synthesis problem, whereby examples for synthesis are generated from the current proof state. We also describe a technique to systematically introduce new variables for lemma synthesis, as well as techniques for filtering and ranking candidate lemmas for presentation to the user. We implement these ideas in a tool called lfind, which can be run as a Coq tactic. In an evaluation on four benchmark suites, lfind produces useful lemmas in 68% of the cases where a human prover used a lemma to make progress. In these cases lfind synthesizes a lemma that either enables a fully automated proof of the original goal or that matches the human-provided lemma.
@Article{OOPSLA222p143,
author = {Aishwarya Sivaraman and Alex Sanchez-Stern and Bretton Chen and Sorin Lerner and Todd Millstein},
title = {Data-Driven Lemma Synthesis for Interactive Proofs},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {143},
numpages = {27},
doi = {10.1145/3563306},
year = {2022},
}
Publisher's Version
Artifacts Functional
Type-Directed Synthesis of Visualizations from Natural Language Queries
Qiaochu Chen,
Shankara Pailoor,
Celeste Barnaby,
Abby Criswell,
Chenglong Wang,
Greg Durrett, and
Işil Dillig
(University of Texas at Austin, USA; Microsoft Research, USA)
We propose a new technique based on program synthesis for automatically generating visualizations from natural language queries. Our method parses the natural language query into a refinement type specification using the intents-and-slots paradigm and leverages type-directed synthesis to generate a set of visualization programs that are most likely to meet the user's intent. Our refinement type system captures useful hints present in the natural language query and allows the synthesis algorithm to reject visualizations that violate well-established design guidelines for the input data set. We have implemented our ideas in a tool called Graphy and evaluated it on NLVCorpus, which consists of 3 popular datasets and over 700 real-world natural language queries. Our experiments show that Graphy significantly outperforms state-of-the-art natural language based visualization tools, including transformer and rule-based ones.
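A schematic TypeScript sketch of the pipeline described above (entirely illustrative: Graphy's refinement-type specifications and type-directed synthesis are not shown, and the slot names and output shape below are assumptions):

interface ParsedQuery {                  // intents-and-slots view of the NL query
  intent: "visualize";
  mark?: "bar" | "line" | "point";       // hint extracted from the query, if any
  x?: string;                            // column hinted for the x-axis
  y?: string;                            // column hinted for the y-axis
  aggregate?: "mean" | "sum" | "count";
}

interface VisProgram {                   // one candidate visualization program
  mark: "bar" | "line" | "point";
  encoding: { x: string; y: string; aggregate?: string };
}

// Enumerate candidates consistent with the hints, rejecting those that violate a
// (here, deliberately trivialized) design guideline.
function synthesize(q: ParsedQuery, columns: string[]): VisProgram[] {
  const marks: VisProgram["mark"][] = q.mark ? [q.mark] : ["bar", "line", "point"];
  const out: VisProgram[] = [];
  for (const mark of marks)
    for (const x of q.x ? [q.x] : columns)
      for (const y of q.y ? [q.y] : columns)
        if (x !== y)                     // toy guideline: never plot a column against itself
          out.push({ mark, encoding: { x, y, aggregate: q.aggregate } });
  return out;
}

console.log(synthesize(
  { intent: "visualize", mark: "bar", y: "price", aggregate: "mean" },
  ["year", "price", "region"]
));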
@Article{OOPSLA222p144,
author = {Qiaochu Chen and Shankara Pailoor and Celeste Barnaby and Abby Criswell and Chenglong Wang and Greg Durrett and Işil Dillig},
title = {Type-Directed Synthesis of Visualizations from Natural Language Queries},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {144},
numpages = {28},
doi = {10.1145/3563307},
year = {2022},
}
Publisher's Version
Synthesis-Powered Optimization of Smart Contracts via Data Type Refactoring
Yanju Chen,
Yuepeng Wang,
Maruth Goyal,
James Dong,
Yu Feng, and
Işil Dillig
(University of California at Santa Barbara, USA; Simon Fraser University, Canada; University of Texas at Austin, USA; Stanford University, USA)
Since executing a smart contract on the Ethereum blockchain costs money (measured in gas), smart contract developers spend significant effort in reducing gas usage. In this paper, we propose a new technique for reducing the gas usage of smart contracts by changing the underlying data layout. Given a smart contract P and a type-level transformation, our method automatically synthesizes a new contract P′ that is functionally equivalent to P. Our approach provides a convenient DSL for expressing data type refactorings and employs program synthesis to generate the new version of the contract. We have implemented our approach in a tool called Solidare and demonstrate its capabilities on real-world smart contracts from Etherscan and GasStation. In particular, we show that our approach is effective at automating the desired data layout transformation and that it is useful for reducing gas usage of smart contracts that use rich data structures.
@Article{OOPSLA222p145,
author = {Yanju Chen and Yuepeng Wang and Maruth Goyal and James Dong and Yu Feng and Işil Dillig},
title = {Synthesis-Powered Optimization of Smart Contracts via Data Type Refactoring},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {145},
numpages = {29},
doi = {10.1145/3563308},
year = {2022},
}
Publisher's Version
Verified Compilation of Quantum Oracles
Liyi Li,
Finn Voichick,
Kesha Hietala,
Yuxiang Peng,
Xiaodi Wu, and
Michael Hicks
(University of Maryland, USA; Amazon, USA)
Quantum algorithms often apply classical operations, such as arithmetic or predicate checks, over a quantum superposition of classical data; these so-called oracles are often the largest components of a quantum program. To ease the construction of efficient, correct oracle functions, this paper presents VQO, a high-assurance framework implemented with the Coq proof assistant. The core of VQO is OQASM, the oracle quantum assembly language. OQASM operations move qubits between two different bases via the quantum Fourier transform, thus admitting important optimizations, but without inducing entanglement and the exponential blowup that comes with it. OQASM’s design enabled us to prove correct VQO’s compilers—from a simple imperative language called OQIMP to OQASM, and from OQASM to SQIR, a general-purpose quantum assembly language—and allowed us to efficiently test properties of OQASM programs using the QuickChick property-based testing framework. We have used VQO to implement a variety of arithmetic and geometric operators that are building blocks for important oracles, including those used in Shor’s and Grover’s algorithms. We found that VQO’s QFT-based arithmetic oracles require fewer qubits, sometimes substantially fewer, than those constructed using “classical” gates; VQO’s versions of the latter were nevertheless on par with or better than (in terms of both qubit and gate counts) oracles produced by Quipper, a state-of-the-art but unverified quantum programming platform.
@Article{OOPSLA222p146,
author = {Liyi Li and Finn Voichick and Kesha Hietala and Yuxiang Peng and Xiaodi Wu and Michael Hicks},
title = {Verified Compilation of Quantum Oracles},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {146},
numpages = {27},
doi = {10.1145/3563309},
year = {2022},
}
Publisher's Version
Info
Artifacts Reusable
Specification-Guided Component-Based Synthesis from Effectful Libraries
Ashish Mishra and
Suresh Jagannathan
(Purdue University, USA)
Component-based synthesis seeks to build programs using the APIs provided by a set of libraries. Oftentimes, these APIs have effects, which make it challenging to reason about the correctness of potential synthesis candidates. This is because changes to global state made by effectful library procedures affect how they may be composed together, yielding an intractably large search space that can confound typical enumerative synthesis techniques. If the nature of these effects is exposed as part of their specification, however, deductive synthesis approaches can be used to help guide the search for components. In this paper, we present a new specification-guided synthesis procedure that uses Hoare-style pre- and post-conditions to express fine-grained effects of potential library component candidates to drive a bi-directional synthesis search strategy. The procedure alternates between a forward search process that seeks to build larger terms given an existing context but which is otherwise unaware of the actual goal, and a backward search mechanism that seeks terms consistent with the desired goal but which is otherwise unaware of the context from which these terms must be synthesized. To further improve efficiency and scalability, we integrate a conflict-driven learning procedure into the synthesis algorithm that provides a semantic characterization of previously encountered unsuccessful search paths that is used to prune the space of possible candidates as synthesis proceeds. We have implemented our ideas in a tool and demonstrate its effectiveness on a number of challenging synthesis problems defined over OCaml libraries equipped with effectful specifications.
@Article{OOPSLA222p147,
author = {Ashish Mishra and Suresh Jagannathan},
title = {Specification-Guided Component-Based Synthesis from Effectful Libraries},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {147},
numpages = {30},
doi = {10.1145/3563310},
year = {2022},
}
Publisher's Version
A Fast In-Place Interpreter for WebAssembly
Ben L. Titzer
(Carnegie Mellon University, USA)
WebAssembly (Wasm) is a compact, well-specified bytecode format that offers a portable compilation target with near-native execution speed. The bytecode format was specifically designed to be fast to parse, validate, and compile, positioning itself as a portable alternative to native code. It was pointedly not designed to be interpreted directly. Instead, design considerations at the time focused on competing with native code, utilizing optimizing compilers as the primary execution tier. Yet, in JIT scenarios, compilation time and memory consumption critically impact application startup, leading many Wasm engines to later deploy faster single-pass (baseline) compilers. Though faster, baseline compilers still take time and waste code space for infrequently executed code. A typical interpreter being infeasible, some engines resort to compiling Wasm not to machine code, but to a more compact, but easy to interpret format. This still takes time and wastes memory. Instead, we introduce in this article a fast in-place interpreter for WebAssembly, where no rewrite and no separate format is necessary. Our evaluation shows that in-place interpretation of Wasm code is space-efficient and fast, achieving performance on-par with interpreting a custom-designed internal format. This fills a hole in the execution tier space for Wasm, allowing for even faster startup and lower memory footprint than previous engine configurations.
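The phrase "in-place" can be pictured with a toy dispatch loop that walks the raw bytes directly, with no rewriting into a separate internal format. The TypeScript below uses a made-up three-opcode bytecode, not WebAssembly, and ignores everything that makes the real problem hard (validation, structured control flow, locals, and the side-table machinery an efficient in-place interpreter needs for branch targets):

const OP_PUSH = 0x01;  // push the next byte as an i32
const OP_ADD  = 0x02;  // pop two values, push their sum
const OP_END  = 0x0f;  // stop; the top of the stack is the result

function interpret(code: Uint8Array): number {
  const stack: number[] = [];
  let pc = 0;                                // the program counter walks the bytes in place
  for (;;) {
    const op = code[pc++];
    switch (op) {
      case OP_PUSH: stack.push(code[pc++]); break;
      case OP_ADD:  stack.push(stack.pop()! + stack.pop()!); break;
      case OP_END:  return stack.pop()!;
      default: throw new Error(`unknown opcode 0x${op.toString(16)} at offset ${pc - 1}`);
    }
  }
}

console.log(interpret(new Uint8Array([OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_END]))); // 5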
@Article{OOPSLA222p148,
author = {Ben L. Titzer},
title = {A Fast In-Place Interpreter for WebAssembly},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {148},
numpages = {27},
doi = {10.1145/3563311},
year = {2022},
}
Publisher's Version
Artifacts Reusable
SigVM: Enabling Event-Driven Execution for Truly Decentralized Smart Contracts
Zihan Zhao,
Sidi Mohamed Beillahi,
Ryan Song,
Yuxi Cai,
Andreas Veneris, and
Fan Long
(University of Toronto, Canada)
This paper presents SigVM, the first blockchain virtual machine that extends EVM to support an event-driven execution model, enabling developers to build truly decentralized smart contracts. Contracts in SigVM can emit signal events, on which other contracts can listen. Once an event is triggered, corresponding handler functions are automatically executed as signal transactions. We build an end-to-end blockchain platform SigChain and a contract language compiler SigSolid to realize the potential of SigVM. Experimental results show that our benchmark applications can be reimplemented with SigVM in a truly decentralized way, eliminating the dependency on centralized and unreliable mechanisms like off-chain relay servers. The development effort of reimplementing these contracts with SigVM is small, i.e., we modified on average 3.17% of the contract code. The runtime and the gas overhead of SigVM on these contracts is negligible.
@Article{OOPSLA222p149,
author = {Zihan Zhao and Sidi Mohamed Beillahi and Ryan Song and Yuxi Cai and Andreas Veneris and Fan Long},
title = {SigVM: Enabling Event-Driven Execution for Truly Decentralized Smart Contracts},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {149},
numpages = {26},
doi = {10.1145/3563312},
year = {2022},
}
Publisher's Version
Artifacts Functional
Solo: A Lightweight Static Analysis for Differential Privacy
Chiké Abuah,
David Darais, and
Joseph P. Near
(University of Vermont, USA; Galois, USA)
Existing approaches for statically enforcing differential privacy in higher-order languages use either linear or relational refinement types. A barrier to adoption for these approaches is the lack of support for expressing these “fancy types” in mainstream programming languages. For example, no mainstream language supports relational refinement types, and although Rust and modern versions of Haskell both employ some linear typing techniques, they are inadequate for embedding enforcement of differential privacy, which requires “full” linear types. We propose a new type system that enforces differential privacy, avoids the use of linear and relational refinement types, and can be easily embedded in richly typed programming languages like Haskell. We demonstrate such an embedding in Haskell, demonstrate its expressiveness on case studies, and prove soundness of our type-based enforcement of differential privacy.
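A loose TypeScript analogy of carrying sensitivity in types (the paper's embedding is in Haskell and tracks sensitivity per data source; the phantom-type encoding and the names below are this sketch's own simplification):

type Sens<T, S extends number> = { readonly value: T; readonly __sensitivity?: S };

const fromSource = (x: number): Sens<number, 1> => ({ value: x });

// Adding two 1-sensitive values yields a 2-sensitive value (hard-coded here;
// a real system computes sensitivities rather than spelling them out).
function add(a: Sens<number, 1>, b: Sens<number, 1>): Sens<number, 2> {
  return { value: a.value + b.value };
}

// Only a noised value may leave as a plain number; the noise is a standard
// inverse-CDF Laplace sample, but the typing discipline is the point here.
function release<S extends number>(x: Sens<number, S>, sensitivity: S, epsilon: number): number {
  const scale = sensitivity / epsilon;
  const u = Math.random() - 0.5;
  return x.value - scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

console.log(release(add(fromSource(3), fromSource(4)), 2, 0.1));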
@Article{OOPSLA222p150,
author = {Chiké Abuah and David Darais and Joseph P. Near},
title = {Solo: A Lightweight Static Analysis for Differential Privacy},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {150},
numpages = {30},
doi = {10.1145/3563313},
year = {2022},
}
Publisher's Version
Info
Artifacts Reusable
A Conceptual Framework for Safe Object Initialization: A Principled and Mechanized Soundness Proof of the Celsius Model
Clément Blaudeau and
Fengyun Liu
(Inria, France; Université de Paris-Cité, France; Oracle Labs, Switzerland)
An object under initialization does not fulfill its class specification yet and can be unsafe to
use as it may have uninitialized fields. It can sometimes be useful to call methods on such a
partially initialized object in order to compute a complex initial value, or to let the object
escape its constructor in order to create mutually recursive objects. However, inadvertent usage
of uninitialized fields can lead to run-time crashes. Those subtle programming errors are not
statically detected by most modern compilers.
While many other features of object-oriented programming languages have been thoroughly studied
over the years, object initialization lacks a simple, systematic, and principled
treatment. Building on the insights of previous work, we identify a set of four core principles
for safe initialization: monotonicity, authority, stackability, and scopability. We capture the
essence of the principles with a minimal calculus, Celsius, and show that the principles give rise
to a practical initialization system that strikes a balance between expressiveness and
simplicity. The meta-theory of the system is entirely mechanized using the Coq proof assistant. We
believe that our approach based on well-identified core principles sheds new light on the
underlying mechanisms ensuring safety and could serve as a basis for language design when faced
with similar challenges.
@Article{OOPSLA222p151,
author = {Clément Blaudeau and Fengyun Liu},
title = {A Conceptual Framework for Safe Object Initialization: A Principled and Mechanized Soundness Proof of the Celsius Model},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {151},
numpages = {29},
doi = {10.1145/3563314},
year = {2022},
}
Publisher's Version
Artifacts Reusable
Model Checking for a Multi-Execution Memory Model
Evgenii Moiseenko,
Michalis Kokologiannakis, and
Viktor Vafeiadis
(JetBrains Research, Serbia; MPI-SWS, Germany)
Multi-execution memory models, such as Promising and Weakestmo, are an advanced class of weak memory
consistency models that justify certain outcomes of a concurrent program by considering multiple candidate executions collectively. While this key characteristic allows them to support effective compilation to hardware models and a wide range of compiler optimizations, it makes reasoning about them substantially more difficult. In particular, we observe that Promising and Weakestmo inhibit effective model checking because they allow some surprisingly weak behaviors that cannot be generated by examining one execution at a time.
We therefore introduce Weakestmo2, a strengthening of Weakestmo by constraining its multi-execution
nature, while preserving the important properties of Weakestmo: DRF theorems, compilation to hardware models, and correctness of local program transformations. Our strengthening rules out a class of surprisingly weak program behaviors, which we attempt to characterize with the help of two novel properties: load buffering race freedom and certification locality. In addition, we develop WMC, a model checker for Weakestmo2 with performance close to that of the best tools for per-execution models.
@Article{OOPSLA222p152,
author = {Evgenii Moiseenko and Michalis Kokologiannakis and Viktor Vafeiadis},
title = {Model Checking for a Multi-Execution Memory Model},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {152},
numpages = {28},
doi = {10.1145/3563315},
year = {2022},
}
Publisher's Version
Artifacts Reusable
The Road Not Taken: Exploring Alias Analysis Based Optimizations Missed by the Compiler
Khushboo Chitre,
Piyus Kedia, and
Rahul Purandare
(IIIT Delhi, India; University of Nebraska-Lincoln, USA)
Context-sensitive inter-procedural alias analyses are more precise than intra-procedural alias analyses. However, context-sensitive inter-procedural alias analyses are not scalable. As a consequence, most of the production compilers sacrifice precision for scalability and implement intra-procedural alias analysis. The alias analysis is used by many compiler optimizations, including loop transformations. Due to the imprecision of alias analysis, the program’s performance may suffer, especially in the presence of loops.
Previous work proposed a general approach based on code-versioning with dynamic checks to disambiguate pointers at runtime. However, the overhead of dynamic checks in this approach is O(log n), which is too high to enable interesting optimizations. Other suggested approaches, e.g., polyhedral and symbolic range analysis, have O(1) overheads, but they only work for loops with certain constraints. The production compilers, such as LLVM and GCC, use scalar evolution analysis to compute an O(1) range check for loops to resolve memory dependencies at runtime. However, this approach also can only be applied to loops with certain constraints.
In this work, we present our tool, Scout, that can disambiguate two pointers at runtime using single memory access. Scout is based on the key idea to constrain the allocation size and alignment during memory allocations. Scout can also disambiguate array accesses within a loop for which the existing O(1) range checks technique cannot be applied. In addition, Scout uses feedback from static optimizations to reduce the number of dynamic checks needed for optimizations.
Our technique enabled new opportunities for loop-invariant code motion, dead store elimination, loop vectorization, and load elimination in an already optimized code. Our performance improvements are up to 51.11% for Polybench and up to 0.89% for CPU SPEC 2017 suites. The geometric means for our allocator’s CPU and memory overheads for CPU SPEC 2017 benchmarks are 1.05% and 7.47%, respectively. For Polybench benchmarks, the geometric means of CPU and memory overheads are 0.21% and 0.13%, respectively.
@Article{OOPSLA222p153,
author = {Khushboo Chitre and Piyus Kedia and Rahul Purandare},
title = {The Road Not Taken: Exploring Alias Analysis Based Optimizations Missed by the Compiler},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {153},
numpages = {25},
doi = {10.1145/3563316},
year = {2022},
}
Publisher's Version
Necessity Specifications for Robustness
Julian Mackay,
Susan Eisenbach,
James Noble, and
Sophia Drossopoulou
(Victoria University of Wellington, New Zealand; Imperial College London, UK; Creative Research & Programming, New Zealand; Meta, UK)
Robust modules guarantee to do only what they are supposed to do – even in the presence of untrusted malicious clients, and considering not just the direct behaviour of individual methods, but also the emergent behaviour from calls to more than one method. Necessity is a language for specifying robustness, based on novel necessity operators capturing temporal implication, and a proof logic that derives explicit robustness specifications from functional specifications. Soundness and an exemplar proof are mechanised in Coq.
@Article{OOPSLA222p154,
author = {Julian Mackay and Susan Eisenbach and James Noble and Sophia Drossopoulou},
title = {\emph{Necessity} Specifications for Robustness},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {154},
numpages = {30},
doi = {10.1145/3563317},
year = {2022},
}
Publisher's Version
Artifacts Functional
A Bunch of Sessions: A Propositions-as-Sessions Interpretation of Bunched Implications in Channel-Based Concurrency
Dan Frumin,
Emanuele D’Osualdo,
Bas van den Heuvel, and
Jorge A. Pérez
(University of Groningen, Netherlands; MPI-SWS, Germany)
The emergence of propositions-as-sessions, a Curry-Howard correspondence between propositions of Linear Logic and session types for concurrent processes, has settled the logical foundations of message-passing concurrency. Central to this approach is the resource consumption paradigm heralded by Linear Logic. In this paper, we investigate a new point in the design space of session type systems for message-passing concurrent programs. We identify O’Hearn and Pym’s Logic of Bunched Implications (BI) as a fruitful basis for an interpretation of the logic as a concurrent programming language. This leads to a treatment of non-linear resources that is radically different from existing approaches based on Linear Logic. We introduce a new π-calculus with sessions, called πBI; its most salient feature is a construct called spawn, which expresses new forms of sharing that are induced by structural principles in BI. We illustrate the expressiveness of πBI and lay out its fundamental theory: type preservation, deadlock-freedom, and weak normalization results for well-typed processes; an operationally sound and complete typed encoding of an affine λ-calculus; and a non-interference result for access of resources.
@Article{OOPSLA222p155,
author = {Dan Frumin and Emanuele D’Osualdo and Bas van den Heuvel and Jorge A. Pérez},
title = {A Bunch of Sessions: A Propositions-as-Sessions Interpretation of Bunched Implications in Channel-Based Concurrency},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {155},
numpages = {29},
doi = {10.1145/3563318},
year = {2022},
}
Publisher's Version
Coeffects for Sharing and Mutation
Riccardo Bianchini,
Francesco Dagnino,
Paola Giannini,
Elena Zucca, and
Marco Servetto
(University of Genoa, Italy; University of Eastern Piedmont, Italy; Victoria University of Wellington, New Zealand)
In type-and-coeffect systems, contexts are enriched by coeffects modeling how they are actually used, typically through annotations on single variables. Coeffects are computed bottom-up, combining, for each term, the coeffects of its subterms, through a fixed set of algebraic operators. We show that this principled approach can be adopted to track sharing in the imperative paradigm, that is, links among variables possibly introduced by the execution. This provides a significant example of non-structural coeffects, which cannot be computed by-variable, since the way a given variable is used can affect the coeffects of other variables. To illustrate the effectiveness of the approach, we enhance the type system tracking sharing to model a sophisticated set of features related to uniqueness and immutability. Thanks to the coeffect-based approach, we can express such features in a simple way and prove related properties with standard techniques.
@Article{OOPSLA222p156,
author = {Riccardo Bianchini and Francesco Dagnino and Paola Giannini and Elena Zucca and Marco Servetto},
title = {Coeffects for Sharing and Mutation},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {156},
numpages = {29},
doi = {10.1145/3563319},
year = {2022},
}
Publisher's Version
The Essence of Online Data Processing
Philip Dexter,
Yu David Liu, and
Kenneth Chiu
(SUNY Binghamton, USA)
Data processing systems are a fundamental component of the modern computing stack. These systems are routinely deployed online: they continuously receive requests for data processing operations, and continuously return the results to end users or client applications. Online data processing systems have unique features beyond conventional data processing, and the optimizations designed for them are complex, especially when data themselves are structured and dynamic. This paper describes DON Calculus, the first rigorous foundation for online data processing. It captures the essential behavior of both the backend data processing engine and the frontend application, with a focus on two design dimensions essential to, yet unique to, online data processing systems: incremental operation processing (IOP) and temporal locality optimization (TLO). A novel design insight is that the operations continuously applied to the data can be defined as an operation stream flowing through the data structure, and this abstraction unifies diverse designs of IOP and TLO in one calculus. DON Calculus is endowed with a mechanized metatheory centering around a key observable equivalence property: despite the significant non-determinism introduced by IOP and TLO, the observable result of DON Calculus data processing is identical to that of conventional data processing without IOP and TLO. Broadly, DON Calculus is a novel instance in the active pursuit of providing rigorous guarantees to the software system stack. The specification and mechanization of DON Calculus provide a sound base for the designers of future data processing systems to build upon, helping them embrace rigorous semantic engineering without the need to develop it from scratch.
@Article{OOPSLA222p157,
author = {Philip Dexter and Yu David Liu and Kenneth Chiu},
title = {The Essence of Online Data Processing},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {157},
numpages = {30},
doi = {10.1145/3563320},
year = {2022},
}
Publisher's Version
Consistency-Preserving Propagation for SMT Solving of Concurrent Program Verification
Zhihang Sun,
Hongyu Fan, and
Fei He
(Tsinghua University, China)
Happens-before orders have been widely adopted to model the thread-interleaving behaviors of concurrent programs. A dedicated ordering theory solver, usually composed of theory propagation, consistency checking, and conflict clause generation, plays a central role in concurrent program verification. We propose a novel preventive reasoning approach that automatically preserves the ordering consistency and makes consistency checking and conflict clause generation omissible. We implement our approach in a prototype tool and conduct experiments on credible benchmarks; the results reveal a significant improvement over existing state-of-the-art concurrent program verifiers.
@Article{OOPSLA222p158,
author = {Zhihang Sun and Hongyu Fan and Fei He},
title = {Consistency-Preserving Propagation for SMT Solving of Concurrent Program Verification},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {158},
numpages = {28},
doi = {10.1145/3563321},
year = {2022},
}
Publisher's Version
Oracle-Free Repair Synthesis for Floating-Point Programs
Daming Zou,
Yuchen Gu,
Yuanfeng Shi,
MingZhe Wang,
Yingfei Xiong, and
Zhendong Su
(ETH Zurich, Switzerland; Peking University, China; Princeton University, USA)
The floating-point representation provides widely-used data types (such as “float” and “double”) for modern numerical software. Numerical errors are inherent due to floating-point’s approximate nature, and pose an important, well-known challenge. It is nontrivial to fix/repair numerical code to reduce numerical errors — it requires either numerical expertise (for manual fixing) or high-precision oracles (for automatic repair); both are difficult requirements. To tackle this challenge, this paper introduces a principled dynamic approach that is fully automated and oracle-free for effectively repairing floating-point errors. The key to our approach is the novel notion of micro-structures, which characterize structural patterns of floating-point errors. We leverage micro-structures’ statistical information on floating-point errors to effectively guide repair synthesis and validation. Compared with existing state-of-the-art repair approaches, our work is fully automatic and has the distinctive benefit of not relying on difficult-to-obtain high-precision oracles. Evaluation results on 36 commonly-used numerical programs show that our approach is highly efficient and effective: (1) it is able to synthesize repairs instantaneously, and (2) compared with the original programs, the repaired programs have orders-of-magnitude smaller floating-point errors while also running faster.
@Article{OOPSLA222p159,
author = {Daming Zou and Yuchen Gu and Yuanfeng Shi and MingZhe Wang and Yingfei Xiong and Zhendong Su},
title = {Oracle-Free Repair Synthesis for Floating-Point Programs},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {159},
numpages = {29},
doi = {10.1145/3563322},
year = {2022},
}
Publisher's Version
Optimal Heap Limits for Reducing Browser Memory Use
Marisa Kirisame,
Pranav Shenoy, and
Pavel Panchekha
(University of Utah, USA)
Garbage-collected language runtimes carefully tune heap limits to reduce garbage collection time and memory usage. However, there's a trade-off: a lower heap limit reduces memory use but increases garbage collection time. Classic methods for setting heap limits include manually tuned heap limits and multiple-of-live-size rules of thumb, but it is not clear when one rule is better than another or how to compare them.
We address this problem with a new framework where heap limits are set for multiple heaps at once. Our key insight is that every heap limit rule induces a particular allocation of memory across multiple processes, and this allocation can be sub-optimal. We use our framework to derive an optimal "square-root" heap limit rule, which minimizes total memory usage for any amount of total garbage collection time. Paradoxically, the square-root heap limit rule achieves this coordination without communication: it allocates memory optimally across multiple heaps without requiring any communication between those heaps.
To demonstrate that this heap limit rule is effective, we prototype it for V8, the JavaScript runtime used in Google Chrome, Microsoft Edge, and other browsers, as well as in server-side frameworks like node.js and Deno. On real-world web pages, our prototype achieves reductions of approximately 16.0% of memory usage while keeping garbage collection time constant. On memory-intensive benchmarks, reductions of up to 30.0% of garbage collection time are possible with no change in total memory usage.
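For a rough sense of the rule shapes being compared (a simplified sketch with assumed constants; the paper derives the precise square-root rule from a model of allocation rate and collection speed), the extra memory granted by a square-root rule grows sublinearly in the live size:

    import math

    def limit_multiple_of_live(live_bytes, k=2.0):
        # classic rule of thumb: the heap limit is a fixed multiple of the live size
        return k * live_bytes

    def limit_square_root(live_bytes, c=2000.0):
        # simplified square-root rule (assumed form): extra memory ~ c * sqrt(live);
        # the constant c trades total memory use against total GC time
        return live_bytes + c * math.sqrt(live_bytes)

    for live in (1_000_000, 16_000_000, 64_000_000):
        print(live, int(limit_multiple_of_live(live)), int(limit_square_root(live)))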
@Article{OOPSLA222p160,
author = {Marisa Kirisame and Pranav Shenoy and Pavel Panchekha},
title = {Optimal Heap Limits for Reducing Browser Memory Use},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {160},
numpages = {21},
doi = {10.1145/3563323},
year = {2022},
}
Publisher's Version
A General Construction for Abstract Interpretation of Higher-Order Automatic Differentiation
Jacob Laurel,
Rem Yang,
Shubham Ugare,
Robert Nagel,
Gagandeep Singh, and
Sasa Misailovic
(University of Illinois at Urbana-Champaign, USA; VMware Research, USA)
We present a novel, general construction to abstractly interpret higher-order automatic differentiation (AD). Our construction allows one to instantiate an abstract interpreter for computing derivatives up to a chosen order. Furthermore, since our construction reduces the problem of abstractly reasoning about derivatives to abstractly reasoning about real-valued straight-line programs, it can be instantiated with almost any numerical abstract domain, both relational and non-relational. We formally establish the soundness of this construction.
We implement our technique by instantiating our construction with both the non-relational interval domain and the relational zonotope domain to compute both first and higher-order derivatives. In the latter case, we are the first to apply a relational domain to automatic differentiation for abstracting higher-order derivatives, and hence we are also the first abstract interpretation work to track correlations across not only different variables, but different orders of derivatives.
We evaluate these instantiations on multiple case studies, namely robustly explaining a neural network and more precisely computing a neural network’s Lipschitz constant. For robust interpretation, first and second derivatives computed via zonotope AD are up to 4.76× and 6.98× more precise, respectively, compared to interval AD. For Lipschitz certification, we obtain bounds that are up to 11,850× more precise with zonotopes, compared to the state-of-the-art interval-based tool.
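As a small, self-contained illustration of the interval instantiation (a minimal Python sketch, not the paper's implementation), forward-mode AD can be lifted so that both a value and its first derivative are tracked as intervals:

    # Minimal sketch (not the paper's construction): forward-mode AD over the
    # interval abstract domain. Each abstract value carries an interval for the
    # value and an interval for its first derivative.
    def iadd(a, b):
        return (a[0] + b[0], a[1] + b[1])

    def imul(a, b):
        ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
        return (min(ps), max(ps))

    class IDual:
        def __init__(self, val, dot):
            self.val, self.dot = val, dot          # each an (lo, hi) interval

        def __add__(self, other):
            return IDual(iadd(self.val, other.val), iadd(self.dot, other.dot))

        def __mul__(self, other):
            # product rule, evaluated with interval arithmetic
            return IDual(imul(self.val, other.val),
                         iadd(imul(self.val, other.dot), imul(self.dot, other.val)))

    # f(x) = x*x + x  with x in [1, 2] and dx/dx = 1
    x = IDual((1.0, 2.0), (1.0, 1.0))
    y = x * x + x
    print(y.val, y.dot)    # value bounded by [2, 6]; derivative 2x+1 bounded by [3, 5]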
@Article{OOPSLA222p161,
author = {Jacob Laurel and Rem Yang and Shubham Ugare and Robert Nagel and Gagandeep Singh and Sasa Misailovic},
title = {A General Construction for Abstract Interpretation of Higher-Order Automatic Differentiation},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {161},
numpages = {29},
doi = {10.1145/3563324},
year = {2022},
}
Publisher's Version
Artifacts Functional
Scalable Verification of GNN-Based Job Schedulers
Haoze Wu,
Clark Barrett,
Mahmood Sharif,
Nina Narodytska, and
Gagandeep Singh
(Stanford University, USA; Tel Aviv University, Israel; VMware Research, USA; University of Illinois at Urbana-Champaign, USA)
Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics. Despite their impressive performance, concerns remain over whether these GNN-based job schedulers meet users’ expectations about other important properties, such as strategy-proofness, sharing incentive, and stability. In this work, we consider formal verification of GNN-based job schedulers. We address several domain-specific challenges such as networks that are deeper and specifications that are richer than those encountered when verifying image and NLP classifiers. We develop vegas, the first general framework for verifying both single-step and multi-step properties of these schedulers based on carefully designed algorithms that combine abstractions, refinements, solvers, and proof transfer. Our experimental results show that vegas achieves significant speed-up when verifying important properties of a state-of-the-art GNN-based scheduler compared to previous methods.
@Article{OOPSLA222p162,
author = {Haoze Wu and Clark Barrett and Mahmood Sharif and Nina Narodytska and Gagandeep Singh},
title = {Scalable Verification of GNN-Based Job Schedulers},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {162},
numpages = {30},
doi = {10.1145/3563325},
year = {2022},
}
Publisher's Version
Info
Fractional Resources in Unbounded Separation Logic
Thibault Dardinier,
Peter Müller, and
Alexander J. Summers
(ETH Zurich, Switzerland; University of British Columbia, Canada)
Many separation logics support fractional permissions to distinguish between read and write access to a heap location, for instance, to allow concurrent reads while enforcing exclusive writes. Fractional permissions extend to composite assertions such as (co)inductive predicates and magic wands by allowing those to be multiplied by a fraction. Typical separation logic proofs require that this multiplication has three key properties: it needs to distribute over assertions, it should permit fractions to be factored out from assertions, and two fractions of the same assertion should be combinable into one larger fraction.
Existing formal semantics incorporating fractional assertions into a separation logic define multiplication semantically (via models), resulting in a semantics in which distributivity and combinability do not hold for key resource assertions such as magic wands, and fractions cannot be factored out from a separating conjunction. By contrast, existing automatic separation logic verifiers define multiplication syntactically, resulting in a different semantics for which it is unknown whether distributivity and combinability hold for all assertions.
In this paper, we present a novel semantics for separation logic assertions that allows states to hold more than a full permission to a heap location during the evaluation of an assertion. By reimposing upper bounds on the permissions held per location at statement boundaries, we retain key properties of separation logic, in particular, the frame rule. Our assertion semantics unifies semantic and syntactic multiplication and thereby reconciles the discrepancy between separation logic theory and tools and enjoys distributivity, factorisability, and combinability. We have formalised our semantics and proved its properties in Isabelle/HOL.
@Article{OOPSLA222p163,
author = {Thibault Dardinier and Peter Müller and Alexander J. Summers},
title = {Fractional Resources in Unbounded Separation Logic},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {163},
numpages = {27},
doi = {10.1145/3563326},
year = {2022},
}
Publisher's Version
Artifacts Reusable
Neurosymbolic Repair for Low-Code Formula Languages
Rohan Bavishi,
Harshit Joshi,
José Cambronero,
Anna Fariha,
Sumit Gulwani,
Vu Le,
Ivan Radiček, and
Ashish Tiwari
(University of California at Berkeley, USA; Microsoft, India; Microsoft, USA; Microsoft, Croatia)
Most users of low-code platforms, such as Excel and PowerApps, write programs in domain-specific formula languages to carry out nontrivial tasks. Often users can write most of the program they want, but introduce small mistakes that yield broken formulas. These mistakes, which can be both syntactic and semantic, are hard for low-code users to identify and fix, even though they can be resolved with just a few edits. We formalize the problem of producing such edits as the last-mile repair problem. To address this problem, we developed LaMirage, a LAst-MIle RepAir-engine GEnerator that combines symbolic and neural techniques to perform last-mile repair in low-code formula languages. LaMirage takes a grammar and a set of domain-specific constraints/rules, which jointly approximate the target language, and uses these to generate a repair engine that can fix formulas in that language. To tackle the challenges of localizing errors and ranking candidate repairs, LaMirage leverages neural techniques, whereas it relies on symbolic methods to generate candidate edits. This combination allows LaMirage to find repairs that satisfy the provided grammar and constraints, and then pick the most natural repair. We compare LaMirage to state-of-the-art neural and symbolic approaches on 400 real Excel and Power Fx formulas, where LaMirage outperforms all baselines. We release these benchmarks to encourage subsequent work in low-code domains.
@Article{OOPSLA222p164,
author = {Rohan Bavishi and Harshit Joshi and José Cambronero and Anna Fariha and Sumit Gulwani and Vu Le and Ivan Radiček and Ashish Tiwari},
title = {Neurosymbolic Repair for Low-Code Formula Languages},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {164},
numpages = {30},
doi = {10.1145/3563327},
year = {2022},
}
Publisher's Version
Archive submitted (350 kB)
A Study of Inline Assembly in Solidity Smart Contracts
Stefanos Chaliasos,
Arthur Gervais, and
Benjamin Livshits
(Imperial College London, UK)
The Solidity programming language is the most widely used language for smart contract development. Improving smart contracts’ correctness, security, and performance has been the driving force for research in vulnerability detection, program analysis, and compiler techniques for Solidity. Similar to system-level languages such as C, Solidity enables the embedding of low-level code in programs, in the form of inline assembly code. Developers use inline assembly for low-level optimizations, for extending the Solidity language through libraries, and for accessing blockchain-specific opcodes that are only available through inline assembly. Nevertheless, inline assembly fragments are not well understood by the average developer and can introduce security threats as well as affect the optimizations that the compiler can apply to programs; inline assembly also significantly limits the effectiveness of source-code static analyzers that operate at the Solidity level. A better understanding of how inline assembly is used in practice could in turn increase the performance, security, and support for inline assembly in Solidity.
This paper presents a large-scale quantitative study of the use of inline assembly in 6.8M smart contracts deployed on the Ethereum blockchain. We find that 23% of the analyzed smart contracts contain inline assembly code, and that the use of inline assembly has become more widespread over time. We further performed a manual qualitative analysis for identifying usage patterns of inline assembly in Solidity smart contracts. Our findings are intended to help practitioners understand when they should use inline assembly and guide developers of Solidity tools in prioritizing which parts of inline assembly to implement first. Finally, the insights of this study could be used to enhance the Solidity language, improve the Solidity compiler, and to open up new research directions by driving future researchers to build appropriate methods and techniques for replacing inline assembly in Solidity programs when there is no real necessity to use it.
@Article{OOPSLA222p165,
author = {Stefanos Chaliasos and Arthur Gervais and Benjamin Livshits},
title = {A Study of Inline Assembly in Solidity Smart Contracts},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {165},
numpages = {27},
doi = {10.1145/3563328},
year = {2022},
}
Publisher's Version
Artifacts Reusable
Neural Architecture Search using Property Guided Synthesis
Charles Jin,
Phitchaya Mangpo Phothilimthana, and
Sudip Roy
(Massachusetts Institute of Technology, USA; Google Research, USA; Cohere, USA)
Neural architecture search (NAS) has become an increasingly important tool within the deep learning community in recent years, yielding many practical advancements in the design of deep neural network architectures. However, most existing approaches operate within highly structured design spaces, and hence (1) explore only a small fraction of the full search space of neural architectures while also (2) requiring significant manual effort from domain experts. In this work, we develop techniques that enable efficient NAS in a significantly larger design space. In particular, we propose to perform NAS in an abstract search space of program properties. Our key insights are as follows: (1) an abstract search space can be significantly smaller than the original search space, and (2) architectures with similar program properties should also have similar performance; thus, we can search more efficiently in the abstract search space. To enable this approach, we also introduce a novel efficient synthesis procedure, which performs the role of concretizing a set of promising program properties into a satisfying neural architecture. We implement our approach, αNAS, within an evolutionary framework, where the mutations are guided by the program properties. Starting with a ResNet-34 model, αNAS produces a model with slightly improved accuracy on CIFAR-10 but 96% fewer parameters. On ImageNet, αNAS is able to improve over Vision Transformer (30% fewer FLOPS and parameters), ResNet-50 (23% fewer FLOPS, 14% fewer parameters), and EfficientNet (7% fewer FLOPS and parameters) without any degradation in accuracy.
@Article{OOPSLA222p166,
author = {Charles Jin and Phitchaya Mangpo Phothilimthana and Sudip Roy},
title = {Neural Architecture Search using Property Guided Synthesis},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {166},
numpages = {30},
doi = {10.1145/3563329},
year = {2022},
}
Publisher's Version
Archive submitted (1.3 MB)
Seq2Parse: Neurosymbolic Parse Error Repair
Georgios Sakkas,
Madeline Endres,
Philip J. Guo,
Westley Weimer, and
Ranjit Jhala
(University of California at San Diego, USA; University of Michigan, USA)
We present Seq2Parse, a language-agnostic neurosymbolic approach to automatically repairing parse errors. Seq2Parse is based on the insight that Symbolic Error Correcting (EC) Parsers can, in principle, synthesize repairs, but, in practice, are overwhelmed by the many error-correction rules that are not relevant to the particular program that requires repair. In contrast, Neural approaches are fooled by the large space of possible sequence-level edits, but can precisely pinpoint the set of EC-rules that are relevant to a particular program. We show how to combine their complementary strengths by using neural methods to train a sequence classifier that predicts the small set of relevant EC-rules for an ill-parsed program, after which the symbolic EC-parsing algorithm can make short work of generating useful repairs. We train and evaluate our method on a dataset of 1,100,000 Python programs, and show that Seq2Parse is accurate and efficient: it can parse 94% of our tests within 2.1 seconds, while generating the exact user fix in 1 out of 3 cases; and useful: humans perceive both Seq2Parse-generated error locations and repairs to be almost as good as human-generated ones in a statistically-significant manner.
@Article{OOPSLA222p167,
author = {Georgios Sakkas and Madeline Endres and Philip J. Guo and Westley Weimer and Ranjit Jhala},
title = {Seq2Parse: Neurosymbolic Parse Error Repair},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {167},
numpages = {27},
doi = {10.1145/3563330},
year = {2022},
}
Publisher's Version
Artifacts Functional
Generic Go to Go: Dictionary-Passing, Monomorphisation, and Hybrid
Stephen Ellis,
Shuofei Zhu,
Nobuko Yoshida, and
Linhai Song
(University of Oxford, UK; Pennsylvania State University, USA)
Go is a popular statically-typed industrial programming language. To aid the type-safe reuse of code, the recent Go release (Go 1.18), published in early 2022, includes bounded parametric polymorphism via generic types. Go 1.18 implements generic types using a combination of monomorphisation and call-graph-based dictionary-passing called hybrid. This hybrid approach can be viewed as an optimised form of monomorphisation that statically generates specialised methods and types based on possible instantiations. A monolithic dictionary supplements information lost during monomorphisation, and is structured according to the program’s call graph. Unfortunately, the hybrid approach still suffers from code bloat, poor compilation speed, and limited code coverage.
In this paper we propose and formalise a new non-specialising call-site-based dictionary-passing translation. Our call-site-based translation creates individual dictionaries for each type parameter, with dictionary construction occurring in place of instantiation, overcoming the limitations of hybrid. We prove it correct using a novel and general bisimulation up-to technique. To better understand how different generics translation approaches work in practice, we benchmark five translators: Go 1.18, two existing monomorphisation translators, our dictionary-passing translator, and an erasure translator. Our findings suggest several improvements for Go 1.18, specifically how to overcome the expressiveness limitations of generic Go and how to improve the compile-time and compiled code size performance of Go 1.18.
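For readers unfamiliar with the two translation styles, the following language-neutral sketch (written in Python rather than Go, with invented names) contrasts monomorphisation, which emits one specialised copy per instantiation, with dictionary-passing, which keeps one generic body and passes the type-specific operations explicitly:

    # Dictionary-passing: a single generic body; behaviour is supplied through an
    # explicit dictionary built where an instantiation would otherwise occur.
    def sum_generic(d, xs):
        acc = d["zero"]
        for x in xs:
            acc = d["add"](acc, x)
        return acc

    int_dict = {"zero": 0, "add": lambda a, b: a + b}
    str_dict = {"zero": "", "add": lambda a, b: a + b}

    # Monomorphisation: one specialised copy of the body per instantiation.
    def sum_int(xs):
        acc = 0
        for x in xs:
            acc = acc + x
        return acc

    print(sum_generic(int_dict, [1, 2, 3]))   # 6
    print(sum_generic(str_dict, ["a", "b"]))  # "ab"
    print(sum_int([1, 2, 3]))                 # 6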
@Article{OOPSLA222p168,
author = {Stephen Ellis and Shuofei Zhu and Nobuko Yoshida and Linhai Song},
title = {Generic Go to Go: Dictionary-Passing, Monomorphisation, and Hybrid},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {168},
numpages = {29},
doi = {10.1145/3563331},
year = {2022},
}
Publisher's Version
Artifacts Functional
Satisfiability Modulo Fuzzing: A Synergistic Combination of SMT Solving and Fuzzing
Sujit Kumar Muduli and
Subhajit Roy
(IIT Kanpur, India)
Programming languages and software engineering tools routinely encounter components that are difficult to reason about via formal techniques or whose formal semantics are not even available—third-party libraries, inline assembly code, SIMD instructions, system calls, calls to machine learning models, etc. However, access to these components is often available in the form of input-output oracles—interfaces are available to query these components on certain inputs to receive the respective outputs. We refer to such functions as closed-box functions. Regular SMT solvers are unable to handle such closed-box functions.
We propose Sādhak, a solver for SMT theories modulo closed-box functions. Our core idea is to use a synergistic combination of a fuzzer to reason about closed-box functions and an SMT engine to solve the constraints pertaining to the SMT theories. The fuzzing and SMT engines attempt to converge to a model by exchanging a rich set of interface constraints that are relevant to and interpretable by both. Our implementation, Sādhak, demonstrates a significant advantage over the only other solver that is capable of handling such closed-box constraints: Sādhak solves 36.45% more benchmarks than the best-performing mode of this state-of-the-art solver and has a 5.72× better PAR-2 score; on the benchmarks that are solved by both tools, Sādhak is (on average) 14.62× faster.
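The flavour of the fuzz/solve synergy can be seen in the following toy refinement loop (a Python sketch using the Z3 API; the closed_box function and the loop itself are illustrative assumptions, not Sādhak's interface-constraint exchange): the SMT engine proposes inputs satisfying the theory part, and concrete execution of the oracle accepts or rejects them.

    from z3 import Int, Solver, sat

    def closed_box(v):
        # Stands in for an input-output oracle (library call, syscall, ML model, ...)
        # whose semantics the solver cannot see.
        return (v * v) % 1000

    x = Int('x')
    s = Solver()
    s.add(x > 0, x < 100)              # the part ordinary SMT theories handle

    solution = None
    while s.check() == sat:
        cx = s.model()[x].as_long()
        if closed_box(cx) > 200:       # the closed-box part, checked by execution
            solution = cx
            break
        s.add(x != cx)                 # learned constraint: this candidate fails

    print("solution:", solution)

A real solver exchanges far richer interface constraints than the single blocking literal used above.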
@Article{OOPSLA222p169,
author = {Sujit Kumar Muduli and Subhajit Roy},
title = {Satisfiability Modulo Fuzzing: A Synergistic Combination of SMT Solving and Fuzzing},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {169},
numpages = {28},
doi = {10.1145/3563332},
year = {2022},
}
Publisher's Version
Artifacts Reusable
UniRec: A Unimodular-Like Framework for Nested Recursions and Loops
Kirshanthan Sundararajah,
Charitha Saumya, and
Milind Kulkarni
(Purdue University, USA)
Scheduling transformations reorder operations in a program to improve locality and/or parallelism. There are mature loop transformation frameworks such as the polyhedral model for composing and applying instance-wise scheduling transformations for loop nests. In recent years, there have been efforts to build frameworks for composing and applying scheduling transformations for nested recursion and loops, but these frameworks cannot employ the full power of transformations for loop nests since they have overly restrictive representations. This paper describes a new framework, UniRec, that not only generalizes prior frameworks for reasoning about transformations on recursion, but also generalizes the unimodular framework, and hence unifies reasoning about perfectly-nested loops and recursion.
@Article{OOPSLA222p170,
author = {Kirshanthan Sundararajah and Charitha Saumya and Milind Kulkarni},
title = {UniRec: A Unimodular-Like Framework for Nested Recursions and Loops},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {170},
numpages = {27},
doi = {10.1145/3563333},
year = {2022},
}
Publisher's Version
Synthesizing Abstract Transformers
Pankaj Kumar Kalita,
Sujit Kumar Muduli,
Loris D’Antoni,
Thomas Reps, and
Subhajit Roy
(IIT Kanpur, India; University of Wisconsin-Madison, USA)
This paper addresses the problem of creating abstract transformers automatically. The method we present automates the construction of static analyzers in a fashion similar to the way yacc automates the construction of parsers. Our method treats the problem as a program-synthesis problem. The user provides specifications of (i) the concrete semantics of a given operation op, (ii) the abstract domain A to be used by the analyzer, and (iii) the semantics of a domain-specific language L in which the abstract transformer is to be expressed. As output, our method creates an abstract transformer for op in abstract domain A, expressed in L (an “L-transformer for op over A”). Moreover, the abstract transformer obtained is a most-precise L-transformer for op over A; that is, there is no other L-transformer for op over A that is strictly more precise.
We implemented our method in a tool called AMURTH. We used AMURTH to create sets of replacement abstract transformers for those used in two existing analyzers, and obtained essentially identical performance. However, when we compared the existing transformers with the transformers obtained using AMURTH, we discovered that four of the existing transformers were unsound, which demonstrates the risk of using manually created transformers.
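For intuition about what is being synthesized, here is a hand-written example of an abstract transformer (a Python sketch; Amurth constructs such transformers from the specifications rather than requiring this code): the most-precise interval transformer for absolute value.

    def abs_transformer(lo, hi):
        """Interval abstract transformer for y = abs(x), given x in [lo, hi].

        Every concrete x in [lo, hi] has abs(x) inside the returned interval,
        and both bounds are attained, so the result is most precise.
        """
        if lo >= 0:
            return (lo, hi)
        if hi <= 0:
            return (-hi, -lo)
        return (0, max(-lo, hi))

    print(abs_transformer(-3, 5))   # (0, 5)
    print(abs_transformer(2, 7))    # (2, 7)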
@Article{OOPSLA222p171,
author = {Pankaj Kumar Kalita and Sujit Kumar Muduli and Loris D’Antoni and Thomas Reps and Subhajit Roy},
title = {Synthesizing Abstract Transformers},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {171},
numpages = {29},
doi = {10.1145/3563334},
year = {2022},
}
Publisher's Version
Artifacts Functional
Monadic and Comonadic Aspects of Dependency Analysis
Pritam Choudhury
(University of Pennsylvania, USA)
Dependency analysis is vital to several applications in computer science. It lies at the heart of secure information flow analysis, binding-time analysis, etc. Various calculi have been proposed in the literature for analysing individual dependencies. Abadi et al., by extending Moggi’s monadic metalanguage, unified several of these calculi into the Dependency Core Calculus (DCC). DCC has served as a foundational framework for dependency analysis for the last two decades. However, in spite of its success, DCC has its limitations. First, the monadic bind rule of the calculus is nonstandard and relies upon an auxiliary protection judgement. Second, being of a monadic nature, the calculus cannot capture dependency analyses that possess a comonadic nature, for example, the binding-time calculus, λ∘, of Davies. In this paper, we address these limitations by designing an alternative dependency calculus that is inspired by standard ideas from category theory. Our calculus is both monadic and comonadic in nature and subsumes both DCC and λ∘. Our construction explains the nonstandard bind rule and the protection judgement of DCC in terms of standard categorical concepts. It also leads to a novel technique for proving correctness of dependency analysis. We use this technique to present alternative proofs of correctness for DCC and λ∘.
@Article{OOPSLA222p172,
author = {Pritam Choudhury},
title = {Monadic and Comonadic Aspects of Dependency Analysis},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {172},
numpages = {29},
doi = {10.1145/3563335},
year = {2022},
}
Publisher's Version
Katara: Synthesizing CRDTs with Verified Lifting
Shadaj Laddad,
Conor Power,
Mae Milano,
Alvin Cheung, and
Joseph M. Hellerstein
(University of California at Berkeley, USA)
Conflict-free replicated data types (CRDTs) are a promising tool for designing scalable, coordination-free distributed systems. However, constructing correct CRDTs is difficult, posing a challenge for even seasoned developers. As a result, CRDT development is still largely the domain of academics, with new designs often awaiting peer review and a manual proof of correctness. In this paper, we present Katara, a program synthesis-based system that takes sequential data type implementations and automatically synthesizes verified CRDT designs from them. Key to this process is a new formal definition of CRDT correctness that combines a reference sequential type with a lightweight ordering constraint that resolves conflicts between non-commutative operations. Our process follows the tradition of work in verified lifting, including an encoding of correctness into SMT logic using synthesized inductive invariants and hand-crafted grammars for the CRDT state and runtime. Katara is able to automatically synthesize CRDTs for a wide variety of scenarios, from reproducing classic CRDTs to synthesizing novel designs based on specifications in existing literature. Crucially, our synthesized CRDTs are fully, automatically verified, eliminating entire classes of common errors and reducing the process of producing a new CRDT from a painstaking paper proof of correctness to a lightweight specification.
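As a point of reference for the kind of artifact being synthesized (a standard textbook example sketched in Python, not Katara's output), a grow-only counter is a state-based CRDT whose merge is a join, i.e., commutative, associative, and idempotent, so replicas converge regardless of the order in which states are exchanged:

    class GCounter:
        """State-based grow-only counter: one non-negative entry per replica."""
        def __init__(self, counts=None):
            self.counts = dict(counts or {})

        def increment(self, replica_id, amount=1):
            self.counts[replica_id] = self.counts.get(replica_id, 0) + amount

        def merge(self, other):
            # element-wise maximum is a join: commutative, associative, idempotent
            keys = set(self.counts) | set(other.counts)
            return GCounter({k: max(self.counts.get(k, 0), other.counts.get(k, 0))
                             for k in keys})

        def value(self):
            return sum(self.counts.values())

    a, b = GCounter(), GCounter()
    a.increment("r1"); a.increment("r1"); b.increment("r2")
    print(a.merge(b).value(), b.merge(a).value())   # 3 3: merge order is irrelevant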
@Article{OOPSLA222p173,
author = {Shadaj Laddad and Conor Power and Mae Milano and Alvin Cheung and Joseph M. Hellerstein},
title = {Katara: Synthesizing CRDTs with Verified Lifting},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {173},
numpages = {29},
doi = {10.1145/3563336},
year = {2022},
}
Publisher's Version
Artifacts Functional
A Concurrent Program Logic with a Future and History
Roland Meyer,
Thomas Wies, and
Sebastian Wolff
(TU Braunschweig, Germany; New York University, USA)
Verifying fine-grained optimistic concurrent programs remains an open problem. Modern program logics provide abstraction mechanisms and compositional reasoning principles to deal with the inherent complexity. However, their use is mostly confined to pencil-and-paper or mechanized proofs. We devise a new separation logic geared towards closing this automation gap. While local reasoning is known to be crucial for automation, we are the first to show how to retain this locality for (i) reasoning about inductive properties without the need for ghost code, and (ii) reasoning about computation histories in hindsight. We implemented our new logic in a tool and used it to automatically verify challenging concurrent search structures that require inductive properties and hindsight reasoning, such as the Harris set.
@Article{OOPSLA222p174,
author = {Roland Meyer and Thomas Wies and Sebastian Wolff},
title = {A Concurrent Program Logic with a Future and History},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {174},
numpages = {30},
doi = {10.1145/3563337},
year = {2022},
}
Publisher's Version
Info
Artifacts Reusable
Compilation of Dynamic Sparse Tensor Algebra
Stephen Chou and
Saman Amarasinghe
(Massachusetts Institute of Technology, USA)
Many applications, from social network graph analytics to control flow analysis, compute on sparse data that evolves over the course of program execution.
Such data can be represented as dynamic sparse tensors and efficiently stored in formats (data layouts) that utilize pointer-based data structures like block linked lists, binary search trees, B-trees, and C-trees among others.
These specialized formats support fast in-place modification and are thus better suited than traditional, array-based data structures like CSR for storing dynamic sparse tensors.
However, different dynamic sparse tensor formats have distinct benefits and drawbacks, and performing different computations on tensors that are stored in different formats can require vastly dissimilar code that is not straightforward to implement and optimize correctly.
This paper shows how a compiler can generate efficient code to compute tensor algebra operations on dynamic sparse tensors that may be stored in a wide range of disparate formats.
We propose a language for precisely specifying recursive, pointer-based data structures, and we show how this language can express many different dynamic data structures, including all the ones named above as well as many more.
We then describe how, given high-level specifications of such dynamic data structures, a compiler can emit code to efficiently access and compute on dynamic sparse tensors that are stored in the aforementioned data structures.
We evaluate our technique and find it generates efficient dynamic sparse tensor algebra kernels that have performance comparable to, if not better than, state-of-the-art libraries and frameworks such as PAM, Aspen, STINGER, and Terrace.
At the same time, our technique supports a wider range of tensor algebra operations than Aspen, STINGER, and Terrace (such as those that simultaneously compute with static and dynamic sparse tensors), while also achieving significantly better performance than PAM for those same operations.
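To see the contrast the paper builds on (a simplified Python sketch, not the proposed specification language), inserting a nonzero into a CSR-style row means shifting every later entry, whereas a pointer-based row structure is updated in place:

    import bisect

    # CSR-style row: parallel sorted arrays of column indices and values.
    # Inserting a new nonzero shifts all later entries (linear work per update).
    def csr_insert(cols, vals, j, v):
        k = bisect.bisect_left(cols, j)
        cols.insert(k, j)
        vals.insert(k, v)

    # Pointer-based row: a sorted singly linked list inserts in place without
    # moving existing entries (the paper's formats use richer structures such
    # as B-trees and C-trees, but the contrast is the same).
    class Node:
        def __init__(self, col, val, nxt=None):
            self.col, self.val, self.nxt = col, val, nxt

    def list_insert(head, j, v):
        if head is None or j < head.col:
            return Node(j, v, head)
        cur = head
        while cur.nxt is not None and cur.nxt.col < j:
            cur = cur.nxt
        cur.nxt = Node(j, v, cur.nxt)
        return head

    cols, vals = [0, 4, 9], [1.0, 2.0, 3.0]
    csr_insert(cols, vals, 5, 7.0)
    head = None
    for c, v in zip(cols, vals):
        head = list_insert(head, c, v)
    entries = []
    while head is not None:
        entries.append((head.col, head.val))
        head = head.nxt
    print(cols, vals)   # [0, 4, 5, 9] [1.0, 2.0, 7.0, 3.0]
    print(entries)      # same nonzeros, stored behind pointers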
@Article{OOPSLA222p175,
author = {Stephen Chou and Saman Amarasinghe},
title = {Compilation of Dynamic Sparse Tensor Algebra},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {175},
numpages = {30},
doi = {10.1145/3563338},
year = {2022},
}
Publisher's Version
Artifacts Functional
Indexing the Extended Dyck-CFL Reachability for Context-Sensitive Program Analysis
Qingkai Shi,
Yongchao Wang,
Peisen Yao, and
Charles Zhang
(Ant Group, China; Hong Kong University of Science and Technology, China)
Many context-sensitive dataflow analyses can be formulated as an extended Dyck-CFL reachability problem, where function calls and returns are modeled as partially matched parentheses. Unfortunately, despite many works on the standard Dyck-CFL reachability problem, solving the extended version is still of quadratic space complexity and nearly cubic time complexity, significantly limiting the scalability of program analyses. This paper, for the first time to the best of our knowledge, presents a cheap approach to transforming the extended Dyck-CFL reachability problem to conventional graph reachability, a much easier and well-studied problem. This transformation allows us to benefit from recent advances in reachability indexing schemes, making it possible to answer any reachability query in a context-sensitive dataflow analysis in almost constant time with only a small amount of extra space. We have implemented our approach in two common context-sensitive dataflow analyses: one determines pointer alias relations and the other tracks information flows. Experimental results demonstrate that, compared to their original analyses, we can achieve orders-of-magnitude (10^2× to 10^5×) speedups at the cost of only a moderate space overhead. Our implementation is publicly available.
@Article{OOPSLA222p176,
author = {Qingkai Shi and Yongchao Wang and Peisen Yao and Charles Zhang},
title = {Indexing the Extended Dyck-CFL Reachability for Context-Sensitive Program Analysis},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {176},
numpages = {31},
doi = {10.1145/3563339},
year = {2022},
}
Publisher's Version
Checking Equivalence in a Non-strict Language
John C. Kolesar,
Ruzica Piskac, and
William T. Hallahan
(Yale University, USA; Binghamton University, USA)
Program equivalence checking is the task of confirming that two programs have the same behavior on corresponding inputs. We develop a calculus based on symbolic execution and coinduction to check the equivalence of programs in a non-strict functional language. Additionally, we show that our calculus can be used to derive counterexamples for pairs of inequivalent programs, including counterexamples that arise from non-termination. We describe a fully automated approach for finding both equivalence proofs and counterexamples. Our implementation, Nebula, proves equivalences of programs written in Haskell. We demonstrate Nebula's practical effectiveness at both proving equivalence and producing counterexamples automatically by applying Nebula to existing benchmark properties.
@Article{OOPSLA222p177,
author = {John C. Kolesar and Ruzica Piskac and William T. Hallahan},
title = {Checking Equivalence in a Non-strict Language},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {177},
numpages = {28},
doi = {10.1145/3563340},
year = {2022},
}
Publisher's Version
Info
Artifacts Functional
This Is the Moment for Probabilistic Loops
Marcel Moosbrugger,
Miroslav Stankovič,
Ezio Bartocci, and
Laura Kovács
(TU Wien, Austria)
We present a novel static analysis technique to derive higher moments for program variables for a large class of probabilistic loops with potentially uncountable state spaces. Our approach is fully automatic, meaning it does not rely on externally provided invariants or templates. We employ algebraic techniques based on linear recurrences and introduce program transformations to simplify probabilistic programs while preserving their statistical properties. We develop power reduction techniques to further simplify the polynomial arithmetic of probabilistic programs and define the theory of moment-computable probabilistic loops for which higher moments can precisely be computed. Our work has applications towards recovering probability distributions of random variables and computing tail probabilities. The empirical evaluation of our results demonstrates the applicability of our work on many challenging examples.
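As a tiny example of moment recurrences (a hand-worked sketch in Python; the paper computes such closed forms automatically for a much larger class of loops), consider x := x + Bernoulli(p) repeated n times: the first and second moments obey linear recurrences that can be unrolled without sampling.

    import random

    # Loop: x = 0; repeat n times: x = x + b, where b ~ Bernoulli(p) is independent.
    # Moment recurrences (using b*b = b and independence of b from x):
    #   E[x_{k+1}]   = E[x_k] + p
    #   E[x_{k+1}^2] = E[x_k^2] + 2*p*E[x_k] + p
    def moments(n, p):
        m1, m2 = 0.0, 0.0
        for _ in range(n):
            m1, m2 = m1 + p, m2 + 2 * p * m1 + p   # right-hand sides use the old m1
        return m1, m2

    def simulate(n, p, trials=200_000):
        tot1 = tot2 = 0.0
        for _ in range(trials):
            x = sum(1 for _ in range(n) if random.random() < p)
            tot1 += x
            tot2 += x * x
        return tot1 / trials, tot2 / trials

    print(moments(10, 0.3))    # exact: (3.0, 11.1)
    print(simulate(10, 0.3))   # sampled estimate, close to the exact moments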
@Article{OOPSLA222p178,
author = {Marcel Moosbrugger and Miroslav Stankovič and Ezio Bartocci and Laura Kovács},
title = {This Is the Moment for Probabilistic Loops},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {178},
numpages = {29},
doi = {10.1145/3563341},
year = {2022},
}
Publisher's Version
Artifacts Reusable
A case for DOT: Theoretical Foundations for Objects with Pattern Matching and GADT-Style Reasoning
Aleksander Boruch-Gruszecki,
Radosław Waśko,
Yichen Xu, and
Lionel Parreaux
(EPFL, Switzerland; University of Warsaw, Poland; Beijing University of Posts and Telecommunications, China; Hong Kong University of Science and Technology, China)
Many programming languages in the OO tradition now support pattern matching in some form. Historical examples include Scala and Ceylon, with the more recent additions of Java, Kotlin, TypeScript, and Flow. But pattern matching on generic class hierarchies currently results in puzzling type errors in most of these languages. Yet this combination of features occurs naturally in many scenarios, such as when manipulating typed ASTs. To support it properly, compilers need to implement a form of subtyping reconstruction: the ability to reconstruct subtyping information uncovered at runtime during pattern matching. We introduce cDOT, a new calculus in the family of Dependent Object Types (DOT) intended to serve as a formal foundation for subtyping reconstruction. Being descended from pDOT, itself a formal foundation for Scala, cDOT can be used to encode advanced object-oriented features such as generic inheritance, type constructor variance, F-bounded polymorphism, and first-class recursive modules. We demonstrate that subtyping reconstruction subsumes GADTs by encoding λ2,Gµ, a classical constraint-based GADT calculus, into cDOT.
@Article{OOPSLA222p179,
author = {Aleksander Boruch-Gruszecki and Radosław Waśko and Yichen Xu and Lionel Parreaux},
title = {A case for DOT: Theoretical Foundations for Objects with Pattern Matching and GADT-Style Reasoning},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {179},
numpages = {30},
doi = {10.1145/3563342},
year = {2022},
}
Publisher's Version
Artifacts Reusable
Taming Transitive Redundancy for Context-Free Language Reachability
Yuxiang Lei,
Yulei Sui,
Shuo Ding, and
Qirun Zhang
(University of Technology Sydney, Australia; Georgia Institute of Technology, USA)
Given an edge-labeled graph, context-free language reachability (CFL-reachability) computes reachable node pairs by deriving new edges and adding them to the graph. The redundancy that limits the scalability of CFL-reachability manifests as redundant derivations, i.e., identical edges can be derived multiple times due to the many paths between two reachable nodes. We observe that most redundancy arises from the derivations involving transitive relations of reachable node pairs. Unfortunately, existing techniques for reducing redundancy in transitive-closure-based problems are either ineffective or inapplicable to identifying and eliminating redundant derivations during on-the-fly CFL-reachability solving.
This paper proposes a scalable yet precision-preserving approach to all-pairs CFL-reachability analysis by taming its transitive redundancy. Our key insight is that transitive relations are intrinsically ordered, and utilizing the order for edge derivation can avoid most redundancy. To address the challenges in determining the derivation order from the dynamically changed graph during CFL-reachability solving, we introduce a hybrid graph representation by combining spanning trees and adjacency lists, together with a dynamic construction algorithm. Based on this representation, we propose a fast and effective partially ordered algorithm POCR to boost the performance of CFL-reachability analysis by reducing its transitive redundancy during on-the-fly solving. Our experiments on context-sensitive value-flow analysis and field-sensitive alias analysis for C/C++ demonstrate the promising performance of POCR. On average, POCR eliminates 98.50% and 97.26% redundant derivations respectively for the value-flow and alias analysis, achieving speedups of 21.48× and 19.57× over the standard CFL-reachability algorithm. We also compare POCR with two recent open-source tools, Graspan (a CFL-reachability solver) and Soufflé (a Datalog engine). The results demonstrate that POCR is over 3.67× faster than Graspan and Soufflé on average for both value-flow analysis and alias analysis.
@Article{OOPSLA222p180,
author = {Yuxiang Lei and Yulei Sui and Shuo Ding and Qirun Zhang},
title = {Taming Transitive Redundancy for Context-Free Language Reachability},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {180},
numpages = {27},
doi = {10.1145/3563343},
year = {2022},
}
Publisher's Version
Artifacts Reusable
Symbolic Execution for Randomized Programs
Zachary Susag,
Sumit Lahiri,
Justin Hsu, and
Subhajit Roy
(Cornell University, USA; IIT Kanpur, India)
We propose a symbolic execution method for programs that can draw random samples. In contrast to existing work, our method can verify randomized programs with unknown inputs and can prove probabilistic properties that universally quantify over all possible inputs. Our technique augments standard symbolic execution with a new class of probabilistic symbolic variables, which represent the results of random draws, and computes symbolic expressions representing the probability of taking individual paths. We implement our method on top of the KLEE symbolic execution engine alongside multiple optimizations and use it to prove properties about probabilities and expected values for a range of challenging case studies written in C++, including Freivalds’ algorithm, randomized quicksort, and a randomized property-testing algorithm for monotonicity. We evaluate our method against Psi, an exact probabilistic symbolic inference engine, and Storm, a probabilistic model checker, and show that our method significantly outperforms both tools.
@Article{OOPSLA222p181,
author = {Zachary Susag and Sumit Lahiri and Justin Hsu and Subhajit Roy},
title = {Symbolic Execution for Randomized Programs},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {181},
numpages = {30},
doi = {10.1145/3563344},
year = {2022},
}
Publisher's Version
Artifacts Functional
BFF: Foundational and Automated Verification of Bitfield-Manipulating Programs
Fengmin Zhu,
Michael Sammler,
Rodolphe Lepigre,
Derek Dreyer, and
Deepak Garg
(MPI-SWS, Germany)
Low-level systems code often needs to interact with data, such as page table entries or network packet headers, in which multiple pieces of information are packaged together as bitfield components of a single machine integer and accessed via bitfield manipulations (e.g., shifts and masking). Most existing approaches to verifying such code employ SMT solvers, instantiated with theories for bit vector reasoning: these provide a powerful hammer, but also significantly increase the trusted computing base of the verification toolchain.
In this work, we propose an alternative approach to the verification of bitfield-manipulating systems code, which we call BFF. Building on the RefinedC framework, BFF is not only highly automated (as SMT-based approaches are) but also foundational, i.e., it produces a machine-checked proof of program correctness against a formal semantics for C programs, fully mechanized in Coq. Unlike SMT-based approaches, we do not try to solve the general problem of arbitrary bit vector reasoning, but rather observe that real systems code typically accesses bitfields using simple, well-understood programming patterns: the layout of a bit vector is known up front, and its bitfields are accessed in predictable ways through a handful of bitwise operations involving bit masks. Correspondingly, we center our approach around the concept of a structured bit vector, i.e., a bit vector with a known bitfield layout, which we use to drive simple and predictable automation. We validate the BFF approach by verifying a range of bitfield-manipulating C functions drawn from real systems code, including page table manipulation code from the Linux kernel and the pKVM hypervisor.
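The programming pattern being targeted looks roughly like the following (a Python model of a structured bit vector with an assumed layout, not RefinedC/BFF code): the field layout is fixed up front and every access is a shift-and-mask against that layout.

    # Assumed toy layout of a 64-bit word: name -> (bit offset, bit width).
    LAYOUT = {"present": (0, 1), "writable": (1, 1), "frame": (12, 40)}

    def get_field(word, name):
        off, width = LAYOUT[name]
        return (word >> off) & ((1 << width) - 1)

    def set_field(word, name, value):
        off, width = LAYOUT[name]
        mask = ((1 << width) - 1) << off
        assert value < (1 << width)                  # the value must fit in the field
        return (word & ~mask) | (value << off)

    pte = 0
    pte = set_field(pte, "present", 1)
    pte = set_field(pte, "frame", 0x1234)
    print(hex(pte), get_field(pte, "frame"), get_field(pte, "writable"))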
@Article{OOPSLA222p182,
author = {Fengmin Zhu and Michael Sammler and Rodolphe Lepigre and Derek Dreyer and Deepak Garg},
title = {BFF: Foundational and Automated Verification of Bitfield-Manipulating Programs},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {182},
numpages = {26},
doi = {10.1145/3563345},
year = {2022},
}
Publisher's Version
Info
Artifacts Reusable
High-Level Effect Handlers in C++
Dan Ghica,
Sam Lindley,
Marcos Maroñas Bravo, and
Maciej Piróg
(Huawei, UK; University of Edinburgh, UK)
Effect handlers allow the programmer to implement computational effects, such as custom error handling, various forms of lightweight concurrency, and dynamic binding, inside the programming language. We introduce cpp-effects, a C++ library for effect handlers with a typed, high-level, object-oriented interface. We demonstrate that effect handlers can be successfully applied in imperative systems programming languages with manual memory management. Through a collection of examples, we explore how to program effectively with effect handlers in C++, discuss the intricacies and challenges of the implementation, and show that despite its limitations, the performance of cpp-effects is competitive and in some cases even outperforms state-of-the-art approaches such as C++20 coroutines and the libmprompt library for multiprompt delimited control.
@Article{OOPSLA222p183,
author = {Dan Ghica and Sam Lindley and Marcos Maroñas Bravo and Maciej Piróg},
title = {High-Level Effect Handlers in C++},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {183},
numpages = {29},
doi = {10.1145/3563445},
year = {2022},
}
Publisher's Version
Artifacts Reusable
Semi-symbolic Inference for Efficient Streaming Probabilistic Programming
Eric Atkinson,
Charles Yuan,
Guillaume Baudart,
Louis Mandel, and
Michael Carbin
(Massachusetts Institute of Technology, USA; ENS — PSL University — CNRS — Inria, France; IBM Research, USA)
A streaming probabilistic program receives a stream of observations and produces a stream of distributions that are conditioned on these observations. Efficient inference is often possible in a streaming context using Rao-Blackwellized particle filters (RBPFs), which exactly solve inference problems when possible and fall back on sampling approximations when necessary. While RBPFs can be implemented by hand to provide efficient inference, the goal of streaming probabilistic programming is to automatically generate such efficient inference implementations given input probabilistic programs.
In this work, we propose semi-symbolic inference, a technique for executing probabilistic programs using a runtime inference system that automatically implements Rao-Blackwellized particle filtering. To perform exact and approximate inference together, the semi-symbolic inference system manipulates symbolic distributions to perform exact inference when possible and falls back on approximate sampling when necessary. This approach enables the system to implement the same RBPF a developer would write by hand. To ensure this, we identify closed families of distributions – such as linear-Gaussian and finite discrete models – on which the inference system guarantees exact inference. We have implemented the runtime inference system in the ProbZelus streaming probabilistic programming language. Despite an average 1.6× slowdown compared to the state of the art on existing benchmarks, our evaluation shows that speedups of 3×-87× are obtainable on a new set of challenging benchmarks we have designed to exploit closed families.
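The exact-when-possible, sample-when-necessary idea can be illustrated by a toy scalar filter (a simplified Python sketch with assumed names; ProbZelus programs look nothing like this): a Gaussian latent observed through a Gaussian likelihood is updated in closed form, while a non-conjugate observation would fall back to importance-weighted particles.

    import math, random

    def exact_gaussian_update(mu, var, y, obs_var):
        # conjugate (linear-Gaussian) case: keep the distribution symbolic
        post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
        post_mu = post_var * (mu / var + y / obs_var)
        return post_mu, post_var

    def particle_update(mu, var, y, loglik, n=10_000):
        # fallback case: approximate the posterior mean by sampling and weighting
        xs = [random.gauss(mu, math.sqrt(var)) for _ in range(n)]
        ws = [math.exp(loglik(x, y)) for x in xs]
        z = sum(ws)
        return sum(w * x for w, x in zip(ws, xs)) / z

    # Linear-Gaussian observation: the exact update is available.
    print(exact_gaussian_update(0.0, 1.0, y=0.8, obs_var=0.5))

    # The same observation handled by sampling: close to the exact answer, but noisy.
    gauss_loglik = lambda x, y: -((y - x) ** 2) / (2 * 0.5)
    print(particle_update(0.0, 1.0, y=0.8, loglik=gauss_loglik))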
@Article{OOPSLA222p184,
author = {Eric Atkinson and Charles Yuan and Guillaume Baudart and Louis Mandel and Michael Carbin},
title = {Semi-symbolic Inference for Efficient Streaming Probabilistic Programming},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {184},
numpages = {29},
doi = {10.1145/3563347},
year = {2022},
}
Publisher's Version
Artifacts Reusable
Synthesizing Axiomatizations using Logic Learning
Paul Krogmeier,
Zhengyao Lin,
Adithya Murali, and
P. Madhusudan
(University of Illinois at Urbana-Champaign, USA)
Axioms and inference rules form the foundation of deductive systems
and are crucial in the study of reasoning with logics over structures.
Historically, axiomatizations have been discovered manually with much
expertise and effort. In this paper we show the feasibility of using synthesis
techniques to discover axiomatizations for different classes of
structures, and in some contexts, automatically prove their
completeness. For evaluation, we apply our technique to find axioms
for (1) classes of frames in modal logic characterized in first-order
logic and (2) the class of language models with regular operations.
@Article{OOPSLA222p185,
author = {Paul Krogmeier and Zhengyao Lin and Adithya Murali and P. Madhusudan},
title = {Synthesizing Axiomatizations using Logic Learning},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {185},
numpages = {29},
doi = {10.1145/3563348},
year = {2022},
}
Publisher's Version
Veracity: Declarative Multicore Programming with Commutativity
Adam Chen,
Parisa Fathololumi,
Eric Koskinen, and
Jared Pincus
(Stevens Institute of Technology, USA)
There is an ongoing effort to provide programming abstractions that ease the burden of exploiting multicore hardware. Many programming abstractions (e.g., concurrent objects, transactional memory, etc.) simplify matters, but still involve intricate engineering. We argue that some of the difficulty of multicore programming can be ameliorated through a declarative programming style in which programmers directly express the independence of fragments of sequential programs.
In our proposed paradigm, programmers write programs in a familiar, sequential manner, with the added ability to explicitly express the conditions under which code fragments sequentially commute. Putting such commutativity conditions into source code offers a new entry point for a compiler to exploit the known connection between commutativity and parallelism. We give a semantics for the programmer’s sequential perspective and, under a correctness condition, find that a compiler-transformed parallel execution is equivalent to the sequential semantics. Serializability/linearizability are not the right fit for this condition, so we introduce scoped serializability and show how it can be enforced with lock synthesis techniques.
We next describe a technique for automatically verifying and synthesizing commute conditions via a new reduction from our commute blocks to logical specifications, upon which symbolic commutativity reasoning can be performed. We implemented our work in a new language called Veracity, implemented in Multicore OCaml. We show that commutativity conditions can be automatically generated across a variety of new benchmark programs, confirm the expectation that concurrency speedups can be seen as the computation increases, and apply our work to a small in-memory filesystem and an adaptation of a crowdfund blockchain smart contract.
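A minimal OCaml sketch of the underlying idea (the Veracity commute-block syntax and its compiler are not shown; the condition and the dynamic check below are illustrative only): two key-value store updates commute exactly when their keys differ, and under that condition either execution order produces the same final state.

(* Illustrative only: a commutativity condition written as an ordinary predicate
   and checked dynamically. Veracity instead attaches such conditions to
   source-level commute blocks and reasons about them symbolically. *)

module M = Map.Make (String)

type state = int M.t

let put k v (s : state) : state = M.add k v s

(* The condition under which [put k1 v1] and [put k2 v2] commute. *)
let commute_condition k1 k2 = not (String.equal k1 k2)

(* Dynamic check: both orders yield the same state whenever the condition holds. *)
let orders_agree k1 v1 k2 v2 s =
  M.equal ( = ) (put k2 v2 (put k1 v1 s)) (put k1 v1 (put k2 v2 s))

let () =
  let s = M.empty in
  assert (commute_condition "a" "b" && orders_agree "a" 1 "b" 2 s);
  (* Same key: the condition fails, and indeed the two orders disagree. *)
  assert (not (commute_condition "a" "a"));
  assert (not (orders_agree "a" 1 "a" 2 s));
  print_endline "commutativity condition holds where expected"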
@Article{OOPSLA222p186,
author = {Adam Chen and Parisa Fathololumi and Eric Koskinen and Jared Pincus},
title = {Veracity: Declarative Multicore Programming with Commutativity},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {186},
numpages = {31},
doi = {10.1145/3563349},
year = {2022},
}
Publisher's Version
Info
Artifacts Reusable
Synthesizing Code Quality Rules from Examples
Pranav Garg and
Srinivasan H. Sengamedu
(AWS, USA; Amazon, USA)
Static analysis tools have rules for many code quality issues, and these rules are created manually by experts. In this paper, we address the problem of automatically synthesizing code quality rules from examples. We formulate the rule synthesis problem as synthesizing first-order logic formulas over graph representations of code. We present a new synthesis algorithm, RhoSynth, that is based on Integer Linear Programming-based graph alignment for identifying code elements of interest to the rule. We bootstrap RhoSynth by leveraging code changes made by developers as the source of positive and negative examples. We also address rule refinement, in which the rules are incrementally improved with additional user-provided examples. We validate RhoSynth by synthesizing more than 30 Java code quality rules. These rules have been deployed as part of Amazon CodeGuru Reviewer, and their precision exceeds 75% based on developer feedback collected during live code reviews within Amazon. Through comparisons with recent baselines, we show that current state-of-the-art program synthesis approaches are unable to synthesize most of these rules.
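A hypothetical example of the kind of property such a formula can express (the predicates below are illustrative and not drawn from the CodeGuru rule set): over a code graph whose nodes are program elements and whose edges capture relations such as dominance, a rule requiring every call to next() on an iterator to be guarded by a hasNext() check could be written as
\[
\forall n.\ \mathit{call}(n,\texttt{next}) \rightarrow \exists m.\ \mathit{call}(m,\texttt{hasNext}) \wedge \mathit{sameReceiver}(m,n) \wedge \mathit{dominates}(m,n).
\]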
@Article{OOPSLA222p187,
author = {Pranav Garg and Srinivasan H. Sengamedu},
title = {Synthesizing Code Quality Rules from Examples},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {187},
numpages = {31},
doi = {10.1145/3563350},
year = {2022},
}
Publisher's Version
Archive submitted (1.2 MB)
Modular Verification of Op-Based CRDTs in Separation Logic
Abel Nieto,
Léon Gondelman,
Alban Reynaud,
Amin Timany, and
Lars Birkedal
(Aarhus University, Denmark; ENS Lyon, France)
Operation-based Conflict-free Replicated Data Types (op-based CRDTs) are a family of distributed data structures where all operations are designed to commute, so that replica states eventually converge. Additionally, op-based CRDTs require that operations be propagated between replicas in causal order. This paper presents a framework for verifying safety properties of CRDT implementations using separation logic. The framework consists of two libraries. One implements a Reliable Causal Broadcast (RCB) protocol so that replicas can exchange messages in causal order. A second library, “OpLib”, then uses RCB to simplify the creation and correctness proofs of op-based CRDTs. OpLib allows clients to implement new CRDTs as purely functional data structures, without having to reason about network operations, concurrency control, and mutable state, and without having to re-implement causal broadcast each time. Using OpLib, we have implemented 12 example CRDTs from the literature, including multiple versions of replicated registers and sets, two CRDT combinators for products and maps, and two example use cases of the map combinator. Our proofs are conducted in the Aneris distributed separation logic and are formalized in Coq. Ours is the first verification technique for op-based CRDTs that is both modular and applicable to executable implementations, as opposed to high-level protocols.
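A minimal purely functional op-based CRDT in OCaml, to make the commutativity requirement concrete (the names are illustrative and do not reflect OpLib's interface): a counter whose operations are increments converges because applying the same operations in any order yields the same state.

(* Illustrative op-based counter CRDT as a purely functional data structure.
   Network delivery, causal broadcast, and mutable state are deliberately absent:
   in the paper's framework those concerns live in the RCB and OpLib libraries. *)

type state = int
type op = Incr of int

let init : state = 0

(* Effect of one operation on a replica's state. *)
let apply (Incr n) (s : state) : state = s + n

(* Applying any permutation of the same operations yields the same state,
   which is the commutativity property op-based CRDTs rely on. *)
let apply_all ops s = List.fold_left (fun s o -> apply o s) s ops

let () =
  let ops = [ Incr 1; Incr 5; Incr 2 ] in
  let a = apply_all ops init in
  let b = apply_all (List.rev ops) init in
  assert (a = b);
  Printf.printf "both replicas converge to %d\n" a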
@Article{OOPSLA222p188,
author = {Abel Nieto and Léon Gondelman and Alban Reynaud and Amin Timany and Lars Birkedal},
title = {Modular Verification of Op-Based CRDTs in Separation Logic},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {188},
numpages = {29},
doi = {10.1145/3563351},
year = {2022},
}
Publisher's Version
Artifacts Reusable
Implementing and Verifying Release-Acquire Transactional Memory in C11
Sadegh Dalvandi and
Brijesh Dongol
(University of Surrey, UK)
Transactional memory (TM) is an intensively studied synchronisation paradigm with many proposed implementations in software and hardware, and combinations thereof. However, TM under relaxed memory, e.g., C11 (the 2011 C/C++ standard), is still poorly understood, lacking rigorous foundations that support verifiable implementations. This paper addresses this gap by developing TMS2-ra, a relaxed operational TM specification. We integrate TMS2-ra with RC11 (the repaired C11 memory model that disallows load-buffering) to provide a formal semantics for TM libraries and their clients. We develop a logic, TARO, for verifying client programs that use TMS2-ra for synchronisation. We also show how TMS2-ra can be implemented by a C11 library, TML-ra, that uses relaxed and release-acquire atomics, yet guarantees the synchronisation properties required by TMS2-ra. We benchmark TML-ra and show that it outperforms its sequentially consistent counterpart on the STAMP benchmarks. Finally, we use a simulation-based verification technique to prove correctness of TML-ra. Our entire development is supported by the Isabelle/HOL proof assistant.
@Article{OOPSLA222p189,
author = {Sadegh Dalvandi and Brijesh Dongol},
title = {Implementing and Verifying Release-Acquire Transactional Memory in C11},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {189},
numpages = {28},
doi = {10.1145/3563352},
year = {2022},
}
Publisher's Version
Fast Shadow Execution for Debugging Numerical Errors using Error Free Transformations
Sangeeta Chowdhary and
Santosh Nagarakatte
(Rutgers University, USA)
This paper proposes EFTSanitizer, a fast shadow execution framework for detecting and debugging numerical errors during late stages of testing, especially for long-running applications. Any shadow execution framework needs an oracle to compare against the floating-point (FP) execution. This paper makes a case for using error-free transformations, sequences of operations that compute the error of a primitive operation using existing hardware-supported FP operations, as an oracle for shadow execution. Although the error of a single correctly rounded FP operation is bounded, the accumulation of errors across operations can result in exceptions, slow convergence, and even crashes. To ease the job of debugging such errors, EFTSanitizer provides a directed acyclic graph (DAG) that highlights the propagation of errors that results in exceptions or crashes. Unlike prior work, DAGs produced by EFTSanitizer include operations that span multiple function calls while keeping memory usage bounded. To enable the use of such shadow execution tools with long-running applications, EFTSanitizer also supports starting the shadow execution at an arbitrary point in the dynamic execution, which we call selective shadow execution. EFTSanitizer is an order of magnitude faster than prior state-of-the-art shadow execution tools such as FPSanitizer and Herbgrind. We have discovered new numerical errors and debugged them using EFTSanitizer.
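The oracle idea rests on classical error-free transformations such as Knuth's TwoSum, which computes, with ordinary double-precision operations, the exact rounding error of a single addition. A small OCaml sketch of TwoSum follows (EFTSanitizer's own instrumentation is not shown):

(* Knuth's TwoSum error-free transformation: for floats a and b it returns
   (s, t) with s = fl(a + b) and a + b = s + t exactly, using only
   hardware-supported floating-point additions and subtractions. *)
let two_sum a b =
  let s = a +. b in
  let b' = s -. a in
  let a' = s -. b' in
  let db = b -. b' in
  let da = a -. a' in
  (s, da +. db)

let () =
  (* 1e-16 is below half an ulp of 1.0, so the rounded sum drops it entirely;
     TwoSum recovers the lost part exactly. *)
  let s, t = two_sum 1.0 1e-16 in
  Printf.printf "rounded sum = %.17g, recovered error = %.17g\n" s t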
@Article{OOPSLA222p190,
author = {Sangeeta Chowdhary and Santosh Nagarakatte},
title = {Fast Shadow Execution for Debugging Numerical Errors using Error Free Transformations},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {190},
numpages = {28},
doi = {10.1145/3563353},
year = {2022},
}
Publisher's Version
Artifacts Functional
Model-Guided Synthesis of Inductive Lemmas for FOL with Least Fixpoints
Adithya Murali,
Lucas Peña,
Eion Blanchard,
Christof Löding, and
P. Madhusudan
(University of Illinois at Urbana-Champaign, USA; RWTH Aachen University, Germany)
Recursively defined linked data structures embedded in a pointer-based heap and their properties are naturally expressed in pure first-order logic with least fixpoint definitions (FO+lfp) with background theories. Such logics, unlike pure first-order logic, do not even admit complete procedures. In this paper, we pursue a novel approach for synthesizing inductive hypotheses to prove validity in this logic. The idea is to utilize several kinds of finite first-order models as counterexamples that capture the non-provability and invalidity of formulas to guide the search for inductive hypotheses. We implement our procedures and evaluate them extensively over theorems involving heap data structures that require inductive proofs, and we demonstrate the effectiveness of our methodology.
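For a flavor of the setting (an illustrative textbook example, not one of the paper's benchmarks): singly linked lists are defined by a least fixpoint, and typical validity queries need auxiliary lemmas that hold only by induction over such definitions, e.g., the list-segment composition lemma:
\[
\mathit{list}(x) \;:=_{\mathrm{lfp}}\; x = \mathit{nil} \,\vee\, \exists y.\ \mathit{next}(x,y) \wedge \mathit{list}(y)
\qquad\qquad
\forall x\,y.\ \mathit{lseg}(x,y) \wedge \mathit{list}(y) \rightarrow \mathit{list}(x)
\]
where \(\mathit{lseg}\) is the analogously defined list-segment predicate.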
@Article{OOPSLA222p191,
author = {Adithya Murali and Lucas Peña and Eion Blanchard and Christof Löding and P. Madhusudan},
title = {Model-Guided Synthesis of Inductive Lemmas for FOL with Least Fixpoints},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {191},
numpages = {30},
doi = {10.1145/3563354},
year = {2022},
}
Publisher's Version
Intrinsically-Typed Definitional Interpreters à la Carte
Cas van der Rest,
Casper Bach Poulsen,
Arjen Rouvoet,
Eelco Visser, and
Peter Mosses
(Delft University of Technology, Netherlands)
Specifying and mechanically verifying type safe programming languages requires significant effort. This effort can in theory be reduced by defining and reusing pre-verified, modular components. In practice, however, existing approaches to modular mechanical verification require many times as much specification code as plain, monolithic definitions. This makes it hard to develop new reusable components, and makes existing component specifications hard to grasp. We present an alternative approach based on intrinsically-typed interpreters, which reduces the size and complexity of modular specifications as compared to existing approaches. Furthermore, we introduce a new abstraction for safe-by-construction specification and composition of pre-verified type safe language components: language fragments. Language fragments are about as concise and easy to develop as plain, monolithic intrinsically-typed interpreters, but require about 10 times less code than previous approaches to modular mechanical verification of type safety.
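The non-modular starting point can be made concrete with a small sketch, here in OCaml with GADTs purely for illustration (the paper's own development and its language-fragment abstraction are not reproduced): the syntax is indexed by its type, so the interpreter is type safe by construction and needs no runtime tag checks.

(* A monolithic intrinsically-typed definitional interpreter, for illustration.
   The expression type is indexed by the type of the value it evaluates to,
   so ill-typed object programs cannot even be constructed. *)
type _ expr =
  | Int  : int -> int expr
  | Bool : bool -> bool expr
  | Add  : int expr * int expr -> int expr
  | If   : bool expr * 'a expr * 'a expr -> 'a expr

(* Total evaluator: no error cases; type safety holds by construction. *)
let rec eval : type a. a expr -> a = function
  | Int n -> n
  | Bool b -> b
  | Add (x, y) -> eval x + eval y
  | If (c, t, e) -> if eval c then eval t else eval e

let () =
  Printf.printf "%d\n" (eval (If (Bool true, Add (Int 1, Int 2), Int 0)))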
@Article{OOPSLA222p192,
author = {Cas van der Rest and Casper Bach Poulsen and Arjen Rouvoet and Eelco Visser and Peter Mosses},
title = {Intrinsically-Typed Definitional Interpreters à la Carte},
journal = {Proc. ACM Program. Lang.},
volume = {6},
number = {OOPSLA2},
articleno = {192},
numpages = {30},
doi = {10.1145/3563355},
year = {2022},
}
Publisher's Version
Artifacts Reusable