2016 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation (PEPM 2016)

2016 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation (PEPM 2016), January 18–19, 2016, St. Petersburg, FL, USA

PEPM 2016 – Proceedings



Title Page

Message from the Chairs
We are pleased to present the proceedings of the 2016 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation (PEPM’16). The workshop is held in St. Petersburg, FL, USA, on January 18–19, 2016. It is part of the annual Symposium on Principles of Programming Languages (POPL’16). The PEPM series aims to bring together researchers and practitioners working in the broad area of program transformation, an area which spans from refactoring, partial evaluation, super-compilation, staged programming, fusion and other meta-programming to model-driven development, program analysis, inductive programming, program generation and applications of machine learning and probabilistic search.
Committee listings


Practical, General Parser Combinators
Anastasia Izmaylova, Ali Afroozeh, and Tijs van der Storm
(CWI, Netherlands)
Parser combinators are a popular approach to parsing where context-free grammars are represented as executable code. However, conventional parser combinators do not support left recursion, and can have worst-case exponential runtime. These limitations hinder the expressivity and performance predictability of parser combinators when constructing parsers for programming languages. In this paper we present general parser combinators that support all context-free grammars and construct a parse forest in cubic time and space in the worst case, while behaving nearly linearly on grammars of real programming languages. Our general parser combinators are based on earlier work on memoized Continuation-Passing Style (CPS) recognizers. First, we extend this work to achieve recognition in cubic time. Second, we extend the resulting cubic CPS recognizers to parsers that construct a binarized Shared Packed Parse Forest (SPPF). Our general parser combinators bring the best of both worlds: the flexibility and extensibility of conventional parser combinators and the expressivity and performance guarantees of general parsing algorithms. We used the approach presented in this paper as the basis for Meerkat, a general parser combinator library for Scala.
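The memoized Continuation-Passing Style (CPS) recognizers that this paper builds on can be illustrated in miniature. The sketch below is a hypothetical Python rendering of the idea, not the Meerkat library's actual API: a parser takes an input position and a continuation, and memoizing continuations and results per position is what makes left-recursive grammars terminate.

```python
# Minimal memoized CPS recognizer (illustrative sketch only).
# A parser is a function parse(inp, i, k): try to recognize from
# position i and call continuation k with each end position found.

def terminal(t):
    def parse(inp, i, k):
        if inp[i:i + len(t)] == t:
            k(i + len(t))
    return parse

def seq(p, q):
    def parse(inp, i, k):
        p(inp, i, lambda j: q(inp, j, k))
    return parse

def alt(p, q):
    def parse(inp, i, k):
        p(inp, i, k)
        q(inp, i, k)
    return parse

def memo(parser_thunk):
    table = {}  # (input, position) -> (continuations, results found)
    def parse(inp, i, k):
        key = (inp, i)
        if key not in table:
            table[key] = ([k], set())
            def record(j):
                ks, results = table[key]
                if j not in results:
                    results.add(j)          # new result: notify every
                    for k2 in list(ks):     # continuation seen so far
                        k2(j)
            parser_thunk()(inp, i, record)
        else:
            ks, results = table[key]
            ks.append(k)                    # replay known results for
            for j in list(results):         # the newly arrived caller
                k(j)
    return parse

# Left-recursive grammar: E ::= E '+' 'a' | 'a'
E = memo(lambda: alt(seq(E, seq(terminal('+'), terminal('a'))),
                     terminal('a')))

def recognize(inp):
    ok = []
    E(inp, 0, lambda j: ok.append(j == len(inp)))
    return any(ok)
```

Without the `memo` wrapper, the left-recursive call to `E` would loop forever; with it, the recursive call at the same position merely registers a continuation that is fed results as they appear.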
Operator Precedence for Data-Dependent Grammars
Ali Afroozeh and Anastasia Izmaylova
(CWI, Netherlands)
Constructing parsers based on declarative specification of operator precedence is a very old research topic, and there are various existing approaches. However, these approaches are either tied to a particular parsing technique, or cannot deal with all corner cases found in programming languages. In this paper we present an implementation of declarative specification of operator precedence for general parsing that (1) is independent of the underlying parsing algorithm, (2) does not require any grammar transformation that increases the size of the grammar, (3) preserves the shape of parse trees of the original, natural grammar, and (4) can deal with intricate cases of operator precedence found in functional programming languages such as OCaml. Our new approach to operator precedence is formulated using data-dependent grammars, which extend context-free grammars with arbitrary computation, variable binding and constraints. We implemented our approach using Iguana, a data-dependent parsing framework, and evaluated it by parsing Java and OCaml source files. The results show that our approach is practical for parsing programming languages with complicated operator precedence rules.

Domain-Specific Languages 1

Everything Old Is New Again: Quoted Domain-Specific Languages
Shayan Najd, Sam Lindley, Josef Svenningsson, and Philip Wadler
(University of Edinburgh, UK; Chalmers University of Technology, Sweden)
We describe a new approach to implementing Domain-Specific Languages (DSLs), called Quoted DSLs (QDSLs), that is inspired by two old ideas: quasi-quotation, from McCarthy's Lisp of 1960, and the subformula principle of normal proofs, from Gentzen's natural deduction of 1935. QDSLs reuse facilities provided for the host language, since host and quoted terms share the same syntax, type system, and normalisation rules. QDSL terms are normalised to a canonical form, inspired by the subformula principle, which ensures that one can use higher-order types in the source while guaranteeing first-order types in the target, and enables using types to guide fusion. We test our ideas by re-implementing Feldspar, which was originally implemented as an Embedded DSL (EDSL), as a QDSL; and we compare the QDSL and EDSL variants. The two variants produce identical code.
Finally, Safely-Extensible and Efficient Language-Integrated Query
Kenichi Suzuki, Oleg Kiselyov, and Yukiyoshi Kameyama
(University of Tsukuba, Japan; Tohoku University, Japan)
Language-integrated query is an embedding of database queries into a host language to code queries at a higher level than the all-too-common concatenation of strings of SQL fragments. The eventually produced SQL is ensured to be well-formed and well-typed, and hence free from embarrassing (security) problems. Language-integrated query takes advantage of the host language's functional and modular abstractions to compose and reuse queries and build query libraries. Furthermore, language-integrated query systems like T-LINQ generate efficient SQL by applying a number of program transformations to the embedded query. Alas, the set of transformation rules is not designed to be extensible. We demonstrate a new technique of integrating database queries into a typed functional programming language, so as to write well-typed, composable queries and execute them efficiently on any SQL back-end as well as on an in-memory noSQL store. A distinct feature of our framework is that both the query language and the transformation rules needed to generate efficient SQL are safely user-extensible, to account for many variations in the SQL back-ends, as well as for domain-specific knowledge. The transformation rules are guaranteed to be type-preserving and hygienic by their very construction. They can be built from separately developed and reusable parts and arbitrarily composed into optimization pipelines. With this technique we have embedded into OCaml a relational query language that supports a very large subset of SQL, including grouping and aggregation. Its types cover the complete set of intricate SQL behaviors.
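The contrast with string concatenation can be seen in a toy sketch. This is illustrative Python with hypothetical names, not the paper's OCaml framework: queries are composable values, and the SQL is produced once, by "compiling" the composed query, rather than by pasting fragments together.

```python
# Illustrative sketch: queries as composable, reusable values that
# normalize to a single flat SELECT (hypothetical API, not the
# paper's OCaml embedding).
from dataclasses import dataclass, field

@dataclass
class Query:
    table: str
    conditions: list = field(default_factory=list)
    columns: list = field(default_factory=lambda: ["*"])

    def where(self, cond):
        # composition: add a predicate, leaving the original reusable
        return Query(self.table, self.conditions + [cond], self.columns)

    def select(self, *cols):
        # composition: project columns
        return Query(self.table, self.conditions, list(cols))

    def to_sql(self):
        # "compile" the whole composed query in one step
        sql = f"SELECT {', '.join(self.columns)} FROM {self.table}"
        if self.conditions:
            sql += " WHERE " + " AND ".join(self.conditions)
        return sql

# Reusable fragments compose without nesting subqueries:
adults = Query("users").where("age >= 18")
q = adults.where("country = 'NL'").select("name")
```

The point mirrored from the paper is that composition happens on a typed intermediate representation, and generating efficient SQL is a transformation on that representation; in the real system those transformations are themselves user-extensible and type-preserving.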

Domain-Specific Languages 2

A Constraint Language for Static Semantic Analysis Based on Scope Graphs
Hendrik van Antwerpen, Pierre Néron, Andrew Tolmach, Eelco Visser, and Guido Wachsmuth
(Delft University of Technology, Netherlands; Portland State University, USA)
In previous work, we introduced scope graphs as a formalism for describing program binding structure and performing name resolution in an AST-independent way. In this paper, we show how to use scope graphs to build static semantic analyzers. We use constraints extracted from the AST to specify facts about binding, typing, and initialization. We treat name and type resolution as separate building blocks, but our approach can handle language constructs, such as record field access, for which binding and typing are mutually dependent. We also refine and extend our previous scope graph theory to address practical concerns including ambiguity checking and support for a wider range of scope relationships. We describe the details of constraint generation for a model language that illustrates many of the interesting static analysis issues associated with modules and records.
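A heavily simplified picture of the scope-graph idea can be sketched as follows. This is a hypothetical Python toy, not the authors' formalism: real scope graphs have labeled edges, imports, and a resolution calculus, whereas this sketch shows only declarations plus parent edges, with resolution preferring the nearest declaration.

```python
# Toy scope "graph" restricted to lexical parent edges
# (illustrative only; the paper's formalism is far more general).

class Scope:
    def __init__(self, parent=None):
        self.parent = parent
        self.decls = {}          # name -> declaration site

    def declare(self, name, site):
        self.decls[name] = site

    def resolve(self, name):
        # walk reachable scopes, nearest declaration wins
        scope = self
        while scope is not None:
            if name in scope.decls:
                return scope.decls[name]
            scope = scope.parent
        raise NameError(name)

glob = Scope()
glob.declare("x", "x@global")
inner = Scope(parent=glob)
inner.declare("x", "x@inner")   # shadows the global x
```

In the paper's setting, resolution queries like this one are generated as constraints from the AST and solved together with typing and initialization constraints, which is what lets mutually dependent cases such as record field access work out.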
BiGUL: A Formally Verified Core Language for Putback-Based Bidirectional Programming
Hsiang-Shang Ko, Tao Zan, and Zhenjiang Hu
(National Institute of Informatics, Japan; Sokendai, Japan)
Putback-based bidirectional programming allows the programmer to write only one putback transformation, from which the unique corresponding forward transformation is derived for free. The logic of a putback transformation is more sophisticated than that of a forward transformation and does not always give rise to well-behaved bidirectional programs; this calls for more robust language design to support development of well-behaved putback transformations. In this paper, we design and implement a concise core language BiGUL for putback-based bidirectional programming to serve as a foundation for higher-level putback-based languages. BiGUL is completely formally verified in the dependently typed programming language Agda to guarantee that any putback transformation written in BiGUL is well-behaved.
Artifacts Available
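The putback-based idea can be shown in miniature. The sketch below is plain Python over a finite search space, not BiGUL: the programmer writes only `put`, and `get` is derived as the unique view that `put` leaves the source unchanged by; the two round-trip laws checked at the end are what "well-behaved" means.

```python
# Putback-based bidirectionality in miniature (illustrative sketch,
# not BiGUL; `derive_get` and its brute-force search are hypothetical).

def put(source, view):
    # update the first component of a pair, keep the rest
    return (view, source[1])

def derive_get(put, views):
    # get s = the unique v with put(s, v) == s, searched over a
    # finite candidate set purely for illustration
    def get(source):
        matches = [v for v in views if put(source, v) == source]
        assert len(matches) == 1, "put does not determine a unique get"
        return matches[0]
    return get

get = derive_get(put, views=[0, 1, 2])

s = (1, "rest")
# well-behavedness: GetPut  put(s, get(s)) == s
#                   PutGet  get(put(s, v)) == v
```

In BiGUL the uniqueness of `get` is a theorem about the language, verified in Agda, rather than a runtime search as in this toy.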


Removing Runtime Overhead for Optimized Object Queries
Jon Brandvein and Yanhong A. Liu
(Stony Brook University, USA)
Powerful optimizations of object queries can lead to reduced asymptotic running times. However, such queries are often used in dynamic languages, and the required generality of the optimizations in handling a dynamic language can lead to significant runtime overhead as well as significantly increased code size. This paper studies combinations of optimizations for reducing this runtime overhead and code size. We describe two new optimizations, counting elimination and result set elimination; their effectiveness when combined with inlining and when using specialized data structures; and additional optimizations enabled by type analysis and alias analysis. The two new optimizations are enabled by the high-level nature of queries, even though they are difficult and are not supported by general compiler optimizations. We have run a variety of benchmarks, including distributed algorithms and benchmarks from prior best systems, obtaining a speedup of up to 56% and a code size reduction of up to 37%.
Staging Generic Programming
Jeremy Yallop
(University of Cambridge, UK)
Generic programming libraries such as Scrap Your Boilerplate eliminate the need to write repetitive code, but typically introduce significant performance overheads. This leaves programmers with the unfortunate choice of writing succinct but slow programs or writing tedious but efficient programs. We show how to systematically transform an implementation of the Scrap Your Boilerplate library in the multi-stage programming language MetaOCaml to eliminate the overhead, making it possible to combine the benefits of high-level abstract programming with the efficiency of low-level code.
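The overhead being eliminated, and the staging move that eliminates it, can be caricatured in Python (an analogy only; the paper works in MetaOCaml on Scrap Your Boilerplate): a generic traversal re-inspects the shape of the data on every call, while a "staged" version inspects a shape description once, at generation time, and returns a specialized closure with no dispatch left inside.

```python
# Interpretive vs. staged generic traversal (illustrative analogy,
# not MetaOCaml; the shape descriptors here are hypothetical).

def generic_incr(value):
    # interpretive: re-examines the runtime shape on every invocation
    if isinstance(value, int):
        return value + 1
    if isinstance(value, tuple):
        return tuple(generic_incr(v) for v in value)
    if isinstance(value, list):
        return [generic_incr(v) for v in value]
    return value

def stage_incr(shape):
    # generation time: traverse the *shape* once, emit a closure
    if shape is int:
        return lambda v: v + 1
    if isinstance(shape, tuple):
        fs = [stage_incr(s) for s in shape]
        return lambda v: tuple(f(x) for f, x in zip(fs, v))
    return lambda v: v

incr_pair = stage_incr((int, int))   # specialized for pairs of ints
```

In MetaOCaml the specialized residue is genuinely generated, typed code rather than a tower of closures, which is where the paper's combination of safety and efficiency comes from.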
Toward Introducing Binding-Time Analysis to MetaOCaml
Kenichi Asai
(Ochanomizu University, Japan)
This paper relates 2-level lambda-calculus and staged lambda-calculus (restricted to 2 stages) to obtain monovariant binding-time analysis for lambda-calculus that produces the output in the form of staging annotations. The relationship between the two lambda-calculi provides us with a precise and easy instruction on how to implement binding-time analysis to be incorporated in the staged lambda-calculus. It forms a basis for introducing binding-time analysis to full-fledged staged languages such as MetaOCaml.
Artifacts Available
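A binding-time analysis of the monovariant kind discussed here can be sketched for a toy expression language. This is illustrative Python with a made-up AST, not the paper's calculus: each subexpression is classified static or dynamic, and the annotated output can be read as staging annotations (dynamic roughly corresponding to quoted, residual code).

```python
# Toy monovariant binding-time analysis (illustrative sketch; the
# AST encoding and `lub` helper are hypothetical, not the paper's).

S, D = "static", "dynamic"

def lub(a, b):
    # binding times form a two-point lattice: static below dynamic
    return D if D in (a, b) else S

def bta(expr, env):
    """Return (annotated expression, binding time)."""
    tag = expr[0]
    if tag == "lit":
        return (expr, S)
    if tag == "var":
        return (expr, env[expr[1]])
    if tag == "add":
        e1, t1 = bta(expr[1], env)
        e2, t2 = bta(expr[2], env)
        t = lub(t1, t2)
        # annotate the node with its binding time; a static operand
        # under a dynamic addition would be lifted at specialization
        return (("add", t, e1, e2), t)
    raise ValueError(tag)

# x known at specialization time, y only at runtime:
annotated, t = bta(("add", ("var", "x"), ("var", "y")),
                   {"x": S, "y": D})
```

The paper's contribution is to obtain such an analysis systematically, by relating the 2-level lambda-calculus to a 2-stage staged calculus, so that the annotations line up with MetaOCaml-style staging constructs.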
Staging beyond Terms: Prospects and Challenges
Jun Inoue, Oleg Kiselyov, and Yukiyoshi Kameyama
(National Institute of Advanced Industrial Science and Technology, Japan; Tohoku University, Japan; University of Tsukuba, Japan)
Staging is a program generation paradigm with a clean, well-investigated semantics which statically ensures that the generated code is always well-typed and well-scoped. Staging is often used for specializing programs to the known properties or parts of data to improve efficiency, but so far it has been limited to generating terms. This short paper describes our ongoing work on extending staging, with its strong safety guarantees, to the generation of non-terms, focusing on ML-style modules. The purpose is to map out the promises and challenges, and then to pose a question to solicit the community's expertise in evaluating how essential our extensions are for the purpose of applying staging beyond the realm of terms. We demonstrate our extensions' use in specializing functor applications to eliminate the (currently large) overhead of functor application in OCaml. We explain the challenges those extensions bring and identify a promising line of attack. Unexpectedly, however, it turns out that we can avoid module generation altogether by representing modules, possibly containing abstract types, as polymorphic records. With the help of first-class modules, module specialization reduces to ordinary term specialization, which can be done with conventional staging. The extent to which this hack generalizes is unclear. Thus we have a question to the community: is there a compelling use case for module generation? With these insights and questions, we offer a starting point for a long-term program in the next stage of staging research.
