17th ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences (GPCE 2018),
November 5–6, 2018,
Boston, MA, USA
Frontmatter
Message from the Chairs
Welcome to the 17th ACM SIGPLAN International Conference on Generative
Programming: Concepts & Experiences (GPCE '18), again part of the
SPLASH collection of conferences and workshops and held in Boston,
Massachusetts, USA on November 5 and 6, 2018. GPCE is a programming
languages conference focusing on techniques and tools for code
generation, language implementation, and product-line development.
Many topics fall within the scope of GPCE; these include code
generation topics such as program transformation, staging, and macro
systems; topics from language design including domain-specific
languages, language embedding, and language workbenches; and
product-line concerns such as feature-oriented programming, domain
engineering, and feature interactions. GPCE also solicits tool
demonstrations and work that applies results from these areas in novel
ways.
Papers
A Domain-Specific Language for Exploratory Data Visualization
Karl Smeltzer and Martin Erwig
(Oregon State University, USA)
With an ever-growing amount of collected data, the importance of visualization as an analysis component is growing in concert. The creation of good visualizations often doesn't happen in one step but is rather an iterative and exploratory process. However, this process is currently not well supported in most of the available visualization tools and systems. Visualization authors are forced to commit prematurely to particular design aspects of their creations, and when exploring potential variant visualizations, they are forced to adopt ad hoc techniques such as copying code snippets or keeping a collection of separate files.
We propose variational visualizations as a model supporting open-ended exploration of the design space of information visualization. Together with that model, we present a prototype implementation in the form of a domain-specific language embedded in PureScript.
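The core idea of keeping design variants side by side can be pictured with a small choice-calculus-style sketch. This is a hypothetical illustration in Python, not the paper's PureScript DSL; the names `Chc` and `select` are made up for this example.

```python
# A "choice" holds one alternative per named design dimension, so a
# visualization author can keep variants together instead of in
# separate copied files. Hypothetical sketch of a variation model.

class Chc:
    def __init__(self, dim, alts):
        self.dim, self.alts = dim, alts  # e.g. "Chart": {"bar": ..., "pie": ...}

def select(spec, decisions):
    """Resolve every choice using a dict of dimension -> chosen alternative."""
    if isinstance(spec, Chc):
        return select(spec.alts[decisions[spec.dim]], decisions)
    if isinstance(spec, dict):
        return {k: select(v, decisions) for k, v in spec.items()}
    return spec

vis = {"mark": Chc("Chart", {"bar": "rect", "pie": "arc"}),
       "color": Chc("Theme", {"light": "#eee", "dark": "#222"})}

# One variational spec yields any concrete variant on demand:
assert select(vis, {"Chart": "bar", "Theme": "dark"}) == \
       {"mark": "rect", "color": "#222"}
```

Exploring a variant then means changing a decision, not duplicating the spec.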
@InProceedings{GPCE18p1,
author = {Karl Smeltzer and Martin Erwig},
title = {A Domain-Specific Language for Exploratory Data Visualization},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {1--13},
doi = {10.1145/3278122.3278138},
year = {2018},
}
A Practical Unification of Multi-stage Programming and Macros
Nicolas Stucki, Aggelos Biboudis, and
Martin Odersky
(EPFL, Switzerland)
Program generation is indispensable. We propose a novel unification of two existing metaprogramming techniques: multi-stage programming and hygienic generative macros. The former supports runtime code generation and execution in a type-safe manner while the latter offers compile-time code generation.
In this work we draw upon a long line of research on metaprogramming, starting with Lisp, MetaML and MetaOCaml.
We provide direct support for quotes, splices and top-level splices, all regulated uniformly by a level-counting Phase Consistency Principle. Our design enables the construction and combination of code values for both expressions and types. Moreover, code generation can happen either at runtime à la MetaML or at compile time, in a macro fashion, à la MacroML.
We provide an implementation of our design in Scala and we present two case studies. The first implements the Hidden Markov Model benchmark from the Shonan Challenge for HPC. The second implements the staged streaming library Strymonas.
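The runtime-generation half of this design can be approximated in any language with `eval`. Below is a minimal, untyped sketch in Python, where strings stand in for the typed code values that quotes and splices build in Scala; it shows the classic staged power function, unrolled at generation time.

```python
def power_code(n, x="x"):
    # Stage 1: build the code of x**n by unrolling the recursion,
    # so the generated function contains no loop at run time.
    expr = "1.0"
    for _ in range(n):
        expr = f"({expr} * {x})"
    return expr

# Stage 2: run the generated code (the analogue of MetaML-style `run`;
# here simply eval on a string, with none of the type safety of quotes).
power5 = eval(f"lambda x: {power_code(5)}")
assert power5(2.0) == 32.0
```

A typed multi-stage language performs the same construction on well-typed code values rather than raw strings, which is precisely what rules out malformed generated programs.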
@InProceedings{GPCE18p14,
author = {Nicolas Stucki and Aggelos Biboudis and Martin Odersky},
title = {A Practical Unification of Multi-stage Programming and Macros},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {14--27},
doi = {10.1145/3278122.3278139},
year = {2018},
}
Rash: From Reckless Interactions to Reliable Programs
William Gallard Hatch and
Matthew Flatt
(University of Utah, USA)
Command languages like the Bourne Shell provide a terse syntax for exploratory programming and system interaction.
Shell users can begin to write programs that automate their tasks by simply copying their interactions verbatim into a script file.
However, command languages usually scale poorly beyond small scripts, and they can be difficult to integrate into larger programs.
General-purpose languages scale well, but are verbose and unwieldy for common interactive actions such as process composition.
We present Rash, a domain-specific command language embedded in Racket.
Rash provides a terse and extensible syntax for interactive system administration and scripting, as well as easy composition of both Racket functions and operating system processes.
Rash and normal Racket code can be nested together at the expression level, providing the benefits of a shell language and a general-purpose language together.
Thus, Rash provides a gradual scale between shell-style interactions and general-purpose programming.
@InProceedings{GPCE18p28,
author = {William Gallard Hatch and Matthew Flatt},
title = {Rash: From Reckless Interactions to Reliable Programs},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {28--39},
doi = {10.1145/3278122.3278129},
year = {2018},
}
Exploring Feature Interactions without Specifications: A Controlled Experiment
Larissa Rocha Soares, Jens Meinicke,
Sarah Nadi,
Christian Kästner, and Eduardo Santana de Almeida
(Federal University of Bahia, Brazil; University of Magdeburg, Germany; University of Alberta, Canada; Carnegie Mellon University, USA)
In highly configurable systems, features may interact unexpectedly and produce faulty behavior. Those faults are not easily identified from the analysis of each feature separately, especially when feature specifications are missing. We propose VarXplorer, a dynamic and iterative approach to detect suspicious interactions. It provides information on how features impact the control and data flow of the program. VarXplorer supports developers with a graph that visualizes this information, mainly showing suppress and require relations between features. To evaluate whether VarXplorer helps improve the performance of identifying suspicious interactions, we perform a controlled study with 24 subjects. We find that with our proposed feature-interaction graphs, participants are able to identify suspicious interactions more than 3 times faster compared to the state-of-the-art tool.
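The intuition behind "features impacting each other's behavior" can be sketched by brute force: run every configuration and check whether one feature's effect depends on another. This is a hypothetical Python illustration (the `system` program and its features are invented), not VarXplorer's dynamic analysis.

```python
from itertools import product

def interacting(program, features, a, b):
    """True if toggling feature a changes the output for some values of b
    but not others, i.e. a's effect depends on b (a suspicious interaction)."""
    ia, ib = features.index(a), features.index(b)
    effect_given_b = {False: set(), True: set()}
    for cfg in product([False, True], repeat=len(features)):
        if cfg[ia]:
            continue                         # start from each config with a off
        on = cfg[:ia] + (True,) + cfg[ia + 1:]
        changed = (program(dict(zip(features, cfg)))
                   != program(dict(zip(features, on))))
        effect_given_b[cfg[ib]].add(changed)
    return effect_given_b[False] != effect_given_b[True]

def system(f):
    # Hypothetical program under analysis: LOG writes plaintext
    # unless ENCRYPT is enabled, which suppresses the log output.
    return "plaintext" if f["LOG"] and not f["ENCRYPT"] else "silent"

assert interacting(system, ["LOG", "ENCRYPT"], "LOG", "ENCRYPT")
```

Here ENCRYPT suppresses LOG, the kind of suppress relation the feature-interaction graph visualizes; without specifications, such pairs are exactly the ones a developer must inspect.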
@InProceedings{GPCE18p40,
author = {Larissa Rocha Soares and Jens Meinicke and Sarah Nadi and Christian Kästner and Eduardo Santana de Almeida},
title = {Exploring Feature Interactions without Specifications: A Controlled Experiment},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {40--52},
doi = {10.1145/3278122.3278127},
year = {2018},
}
Inferring Ownership Domains from Refinements
Ebrahim Khalaj and Marwan Abi-Antoun
(Wayne State University, USA)
Ownership type qualifiers clarify aliasing invariants that cannot be directly expressed in mainstream programming languages. Adding qualifiers to code, however, often involves significant overhead and difficult interaction.
We propose an analysis to infer qualifiers in the code based on developer refinements that express strict encapsulation, logical containment and architectural tiers. Refinements include: makeOwnedBy, to make an object strictly encapsulated by another; makePartOf, to make an object logically contained in another; makePeer, to make two objects peers; makeParam, to make an object more accessible than the above choices; or makeShared, to allow an object to be globally aliased. If the code as-written matches the requested refinements, the analysis generates qualifiers that type-check; otherwise, it reports that the refinements do not match the code, so developers must investigate unexpected aliasing, change their understanding of the code and pick different refinements, or change the code and re-run the analysis.
We implement the analysis and confirm that refinements generate precise qualifiers that express strict encapsulation, logical containment and architectural tiers.
@InProceedings{GPCE18p53,
author = {Ebrahim Khalaj and Marwan Abi-Antoun},
title = {Inferring Ownership Domains from Refinements},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {53--65},
doi = {10.1145/3278122.3278128},
year = {2018},
}
Implementing a Semi-causal Domain-Specific Language for Context Detection over Binary Sensors
Nic Volanschi,
Bernard Serpette, and Charles Consel
(Inria, France; Bordeaux INP, France)
In spite of the fact that many sensors in use today are binary (i.e. produce only values of 0 and 1), and that useful context-aware applications are built exclusively on top of them, there is currently no development approach specifically targeted to binary sensors. Dealing with notions of state and state combinators, central to binary sensors, is tedious and error-prone in current approaches. For instance, developing such applications in a general programming language requires writing code to process events, maintain state and perform state transitions on events, manage timers and/or event histories.
In another paper, we introduced a domain-specific language (DSL) called Allen, specifically targeted to binary sensors. Allen natively expresses states and state combinations, and detects contexts online, on incoming streams of binary events. Expressing state combinations in Allen is natural and intuitive due to a key ingredient: semi-causal operators. That paper focused on the concept of the language and its main operators, but did not address its implementation challenges. Indeed, online evaluation of expressions containing semi-causal operators is difficult, because semi-causal sub-expressions may block waiting for future events, thus generating unknown values, besides 0 and 1. These unknown values may or may not propagate to the containing expressions, depending on the current value of the other arguments.
This paper presents a compiler and runtime for the Allen language, and shows how they implement its state combining operators, based on reducing complex expressions to a core subset of operators, which are implemented natively. We define several assisted living applications both in Allen and in a general scripting language. We show that the former are much more concise in Allen, achieve more effective code reuse, and ease the checking of some domain properties.
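The propagation behavior of unknown values follows Kleene-style three-valued logic: an unknown argument is absorbed whenever the other argument already decides the result. A minimal Python sketch of these evaluation rules (not Allen's actual implementation, where `None` here stands for the unknown value):

```python
# Three-valued connectives: None is "unknown", the value a semi-causal
# sub-expression yields while it blocks waiting for future events.

def and3(p, q):
    if p is False or q is False:   # False decides, regardless of the unknown
        return False
    if p is True and q is True:
        return True
    return None                    # otherwise the unknown propagates

def or3(p, q):
    if p is True or q is True:     # True decides, regardless of the unknown
        return True
    if p is False and q is False:
        return False
    return None

# An unknown argument is absorbed when the other one decides:
assert and3(None, False) is False
assert or3(None, True) is True
# ...and propagates when it does not:
assert and3(None, True) is None
```

This is exactly why a compiler for such a language must track, per operator, whether the known arguments already determine the result or whether evaluation has to block.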
@InProceedings{GPCE18p66,
author = {Nic Volanschi and Bernard Serpette and Charles Consel},
title = {Implementing a Semi-causal Domain-Specific Language for Context Detection over Binary Sensors},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {66--78},
doi = {10.1145/3278122.3278134},
year = {2018},
}
Meta-programming for Cross-Domain Tensor Optimizations
Adilla Susungi, Norman A. Rink, Albert Cohen,
Jeronimo Castrillon, and Claude Tadonki
(MINES ParisTech, France; PSL Research University, France; TU Dresden, Germany; Inria, France; ENS, France)
Many modern application domains crucially rely on tensor operations.
The optimization of programs that operate on tensors poses difficulties that are not adequately addressed by existing languages and tools.
Frameworks such as TensorFlow offer good abstractions for tensor operations, but target a specific domain, i.e. machine learning, and their optimization strategies cannot easily be adjusted to other domains.
General-purpose optimization tools such as Pluto and existing meta-languages offer more flexibility in applying optimizations but lack abstractions for tensors.
This work closes the gap between domain-specific tensor languages and general-purpose optimization tools by proposing the Tensor optimizations Meta-Language (TeML).
TeML offers high-level abstractions for both tensor operations and loop transformations, and enables flexible composition of transformations into effective optimization paths.
This compositionality is built into TeML's design, as our formal language specification will reveal.
We also show that TeML can express tensor computations as comfortably as TensorFlow and that it can reproduce Pluto's optimization paths.
Thus, optimized programs generated by TeML execute at least as fast as the corresponding Pluto programs.
In addition, TeML enables optimization paths that often allow outperforming Pluto.
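The flavor of composing loop transformations in a meta-language can be sketched over a toy loop-nest representation. The mini-IR and the functions `interchange` and `tile` below are hypothetical Python stand-ins for illustration; TeML's actual abstractions and semantics are given in the paper's formal specification.

```python
# Hypothetical mini-IR: a loop nest is an ordered list of index names
# plus a body expression; an optimization path is a composition of
# transformations over this representation.

def interchange(nest, i, j):
    """Swap two loops in the nest (legality checks omitted)."""
    order = list(nest["loops"])
    a, b = order.index(i), order.index(j)
    order[a], order[b] = order[b], order[a]
    return {**nest, "loops": order}

def tile(nest, i, size):
    """Split loop i into an outer tile loop and an inner intra-tile loop."""
    order = list(nest["loops"])
    k = order.index(i)
    order[k:k + 1] = [f"{i}_outer", f"{i}_inner"]
    return {**nest, "loops": order,
            "tiles": {**nest.get("tiles", {}), i: size}}

matmul = {"loops": ["i", "j", "k"], "body": "C[i][j] += A[i][k] * B[k][j]"}
path = tile(interchange(matmul, "j", "k"), "i", 32)   # a composed path
assert path["loops"] == ["i_outer", "i_inner", "k", "j"]
```

Because each transformation returns a new nest, paths compose freely, which is the compositionality the abstract emphasizes.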
@InProceedings{GPCE18p79,
author = {Adilla Susungi and Norman A. Rink and Albert Cohen and Jeronimo Castrillon and Claude Tadonki},
title = {Meta-programming for Cross-Domain Tensor Optimizations},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {79--92},
doi = {10.1145/3278122.3278131},
year = {2018},
}
Model-Based Security Analysis of Feature-Oriented Software Product Lines
Sven Peldszus, Daniel Strüber, and Jan Jürjens
(University of Koblenz-Landau, Germany)
Today's software systems are too complex to ensure security after the fact – security has to be built into systems by design. To this end, model-based techniques such as UMLsec support the design-time specification and analysis of security requirements by providing custom model annotations and checks. Yet, a particularly challenging type of complexity arises from the variability of software product lines. Analyzing the security of all products separately is generally infeasible. In this work, we propose SecPL, a methodology for ensuring security in a software product line. SecPL allows developers to annotate the system design model with product-line variability and security requirements. To keep the exponentially large configuration space tractable during security checks, SecPL provides a family-based security analysis. In our experiments, this analysis outperforms the naive strategy of checking all products individually. Finally, we present the results of a user study that indicates the usability of our overall methodology.
@InProceedings{GPCE18p93,
author = {Sven Peldszus and Daniel Strüber and Jan Jürjens},
title = {Model-Based Security Analysis of Feature-Oriented Software Product Lines},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {93--106},
doi = {10.1145/3278122.3278126},
year = {2018},
}
Orchestrating Dynamic Analyses of Distributed Processes for Full-Stack JavaScript Programs
Laurent Christophe, Coen De Roover, Elisa Gonzalez Boix, and
Wolfgang De Meuter
(Vrije Universiteit Brussel, Belgium)
Dynamic analyses are commonly implemented by instrumenting the program under analysis. Examples of such analyses for JavaScript range from checkers of user-defined invariants to concolic testers. For a full-stack JavaScript program, these analyses would benefit from reasoning about the state of the client-side and server-side processes it is comprised of. Lifting a dynamic analysis so that it supports full-stack programs can be challenging. It involves distributed communication to maintain the analysis state across all processes, which has to be deadlock-free. In this paper, we advocate maintaining distributed analysis state in a centralized analysis process instead — which is communicated with from the processes under analysis. The approach is supported by a dynamic analysis platform that provides abstractions for this communication. We evaluate the approach through a case study. We use the platform to build a distributed origin analysis, capable of tracking the expressions from which values originate across process boundaries, and deploy it on a collaborative drawing application. The results show that our approach greatly simplifies the lifting process at the cost of a computational overhead. We deem this overhead acceptable for analyses intended for use at development time.
@InProceedings{GPCE18p107,
author = {Laurent Christophe and Coen De Roover and Elisa Gonzalez Boix and Wolfgang De Meuter},
title = {Orchestrating Dynamic Analyses of Distributed Processes for Full-Stack JavaScript Programs},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {107--118},
doi = {10.1145/3278122.3278135},
year = {2018},
}
Measuring Effectiveness of Sample-Based Product-Line Testing
Sebastian Ruland, Lars Luthmann, Johannes Bürdek, Sascha Lity, Thomas Thüm, Malte Lochau, and Márcio Ribeiro
(TU Darmstadt, Germany; TU Braunschweig, Germany; Federal University of Alagoas, Brazil)
Recent research on quality assurance (QA) of configurable software systems (e.g., software product lines) proposes different analysis strategies to cope with the inherent complexity caused by the well-known combinatorial-explosion problem. Those strategies aim at improving the efficiency of QA techniques like software testing as compared to brute-force configuration-by-configuration analysis. Sampling constitutes one of the most established strategies, defining criteria for selecting a drastically reduced, yet sufficiently diverse subset of software configurations considered during QA. However, finding generally accepted measures for assessing the impact of sample-based analysis on the effectiveness of QA techniques is still an open issue. We address this problem by lifting concepts from single-software mutation testing to configurable software. Our framework incorporates a rich collection of mutation operators for product lines implemented in C to measure mutation scores of samples, including a novel family-based technique for product-line mutation detection. Our experimental results, gained from applying our tool implementation to a collection of subject systems, confirm the widely accepted assumption that pairwise sampling constitutes the most reasonable efficiency/effectiveness trade-off for sample-based product-line testing.
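Pairwise sampling itself is easy to sketch: pick configurations until every pair of features has been observed in all four value combinations. The greedy Python sketch below is a hypothetical illustration over unconstrained boolean features; real product-line samplers must additionally honor feature-model constraints.

```python
from itertools import combinations, product

def pairwise_sample(n):
    """Greedy sketch: choose boolean configurations of n features until
    every feature pair has been covered in all four value combinations.
    No feature-model constraints; exponential enumeration, so small n only."""
    targets = {(i, j, vi, vj)
               for i, j in combinations(range(n), 2)
               for vi, vj in product([False, True], repeat=2)}
    sample = []
    while targets:
        # pick the configuration covering the most still-uncovered pairs
        cfg = max(product([False, True], repeat=n),
                  key=lambda c: len({t for t in targets
                                     if (c[t[0]], c[t[1]]) == (t[2], t[3])}))
        sample.append(cfg)
        targets -= {t for t in targets
                    if (cfg[t[0]], cfg[t[1]]) == (t[2], t[3])}
    return sample

sample = pairwise_sample(4)
assert len(sample) < 2 ** 4   # far fewer configurations than all 16
```

The mutation score of such a sample is then the fraction of injected mutants that at least one sampled configuration detects, which is the effectiveness measure the framework computes.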
@InProceedings{GPCE18p119,
author = {Sebastian Ruland and Lars Luthmann and Johannes Bürdek and Sascha Lity and Thomas Thüm and Malte Lochau and Márcio Ribeiro},
title = {Measuring Effectiveness of Sample-Based Product-Line Testing},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {119--133},
doi = {10.1145/3278122.3278130},
year = {2018},
}
Pattern Matching in an Open World
Weixin Zhang and
Bruno C. d. S. Oliveira
(University of Hong Kong, China)
Pattern matching is a pervasive and useful feature in functional programming. There have been many attempts to bring similar notions to Object-Oriented Programming (OOP) in the past. However, a key challenge in OOP is how pattern matching can coexist with the open nature of OOP data structures, while at the same time guaranteeing other desirable properties for pattern matching.
This paper discusses several desirable properties for pattern matching in an OOP context and shows how existing approaches are lacking some of these properties. We argue that the traditional semantics of pattern matching, which is based on the order of patterns and adopted by many approaches, is in conflict with the openness of data structures. Therefore we suggest that a more restricted, top-level pattern matching model, where the order of patterns is irrelevant, is worth considering in an OOP context. To compensate for the absence of ordered patterns we propose a complementary mechanism for case analysis with defaults, which can be used when nested and/or multiple case analysis is needed. To illustrate our points we develop Castor: a meta-programming library in Scala that adopts both ideas. Castor generates code that uses type-safe extensible visitors, and largely removes boilerplate code typically associated with visitors. We illustrate the applicability of our approach with a case study modularizing the interpreters in the famous book "Types and Programming Languages".
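The order-irrelevant model can be sketched in a few lines: cases are keyed by constructor, so they must be disjoint and their order cannot matter, and an explicit default catches variants added later. This Python sketch (with invented names `match`, `Lit`, `Add`) only illustrates the idea; Castor realizes it through generated type-safe Scala visitors.

```python
# Order-irrelevant, top-level case analysis: disjoint cases keyed by
# constructor, plus an explicit default for the open set of variants.

class Lit:
    def __init__(self, n):
        self.n = n

class Add:
    def __init__(self, l, r):
        self.l, self.r = l, r

def match(value, cases, default):
    """Dispatch on the constructor; dict keys are disjoint by construction,
    so reordering the cases can never change the result."""
    handler = cases.get(type(value))
    return handler(value) if handler else default(value)

def eval_exp(e):
    return match(e, {Lit: lambda x: x.n,
                     Add: lambda x: eval_exp(x.l) + eval_exp(x.r)},
                 default=lambda x: 0)   # variants unknown to this analysis

assert eval_exp(Add(Lit(1), Add(Lit(2), Lit(3)))) == 6
```

A new variant class added elsewhere simply falls into the default until a case for it is registered, which is how openness and case analysis coexist.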
@InProceedings{GPCE18p134,
author = {Weixin Zhang and Bruno C. d. S. Oliveira},
title = {Pattern Matching in an Open World},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {134--146},
doi = {10.1145/3278122.3278124},
year = {2018},
}
Verification of High-Level Transformations with Inductive Refinement Types
Ahmad Salim Al-Sibahi, Thomas P. Jensen, Aleksandar S. Dimovski, and
Andrzej Wąsowski
(University of Copenhagen, Denmark; Skanned, Denmark; IT University of Copenhagen, Denmark; Inria, France; Mother Teresa University, Macedonia)
High-level transformation languages like Rascal include expressive features for manipulating large abstract syntax trees: first-class traversals, expressive pattern matching, backtracking and generalized iterators. We present the design and implementation of an abstract interpretation tool, Rabit, for verifying inductive type and shape properties for transformations written in such languages. We describe how to perform abstract interpretation based on operational semantics, specifically focusing on the challenges arising when analyzing the expressive traversals and pattern matching. Finally, we evaluate Rabit on a series of transformations (normalization, desugaring, refactoring, code generators, type inference, etc.) showing that we can effectively verify stated properties.
@InProceedings{GPCE18p147,
author = {Ahmad Salim Al-Sibahi and Thomas P. Jensen and Aleksandar S. Dimovski and Andrzej Wąsowski},
title = {Verification of High-Level Transformations with Inductive Refinement Types},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {147--160},
doi = {10.1145/3278122.3278125},
year = {2018},
}
Explaining Spreadsheets with Spreadsheets (Short Paper)
Jácome Cunha, Mihai Dan, Martin Erwig, Danila Fedorin, and Alex Grejuc
(University of Minho, Portugal; NOVA-LINCS, Portugal; Oregon State University, USA)
Based on the concept of explanation sheets, we present an approach to make spreadsheets easier to understand and thus easier to use and maintain. We identify the notion of explanation soundness and show that explanation sheets which conform to simple rules of formula coverage provide sound explanations. We also present a practical evaluation of explanation sheets based on samples drawn from widely used spreadsheet corpora and based on a small user study.
In addition to supporting spreadsheet understanding and maintenance, our work on explanation sheets has also uncovered several general principles of explanation languages that can help guide the design of explanations for other programming and domain-specific languages.
@InProceedings{GPCE18p161,
author = {Jácome Cunha and Mihai Dan and Martin Erwig and Danila Fedorin and Alex Grejuc},
title = {Explaining Spreadsheets with Spreadsheets (Short Paper)},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {161--167},
doi = {10.1145/3278122.3278136},
year = {2018},
}
Funcons for HGMP: The Fundamental Constructs of Homogeneous Generative Meta-programming (Short Paper)
L. Thomas van Binsbergen
(Royal Holloway University of London, UK)
The PLanCompS project proposes a component-based approach to programming-language development in which fundamental constructs (funcons) are reused across language definitions.
Homogeneous Generative Meta-Programming (HGMP) enables writing programs that generate code as data, at run-time or compile-time, for manipulation and staged evaluation.
Building on existing formalisations of HGMP, this paper introduces funcons for HGMP and demonstrates their usage in component-based semantics.
@InProceedings{GPCE18p168,
author = {L. Thomas van Binsbergen},
title = {Funcons for HGMP: The Fundamental Constructs of Homogeneous Generative Meta-programming (Short Paper)},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {168--174},
doi = {10.1145/3278122.3278132},
year = {2018},
}
RT-Trust: Automated Refactoring for Trusted Execution under Real-Time Constraints
Yin Liu, Kijin An, and
Eli Tilevich
(Virginia Tech, USA)
Real-time systems must meet strict timeliness requirements. These systems also often need to protect their critical program information (CPI) from adversarial interference and intellectual property theft. Trusted execution environments (TEE) execute CPI tasks on a special-purpose processor, thus providing hardware protection. However, adapting a system written to execute in environments without TEE requires partitioning the code into the regular and trusted parts. This process involves complex manual program transformations that are not only laborious and intellectually tiresome, but also hard to validate and verify for the adherence to real-time constraints. To address these problems, this paper presents novel program analyses and transformation techniques, accessible to the developer via a declarative meta-programming model. The developer declaratively specifies the CPI portion of the system. A custom static analysis checks CPI specifications for validity, while probe-based profiling helps identify whether the transformed system would continue to meet the original real-time constraints, with a feedback loop suggesting how to modify the code, so its CPI can be isolated. Finally, an automated refactoring isolates the CPI portion for TEE-based execution, communicated with through generated calls to the TEE API. We have evaluated our approach by successfully enabling the trusted execution of the CPI portions of several microbenchmarks and a drone autopilot. Our approach shows the promise of declarative meta-programming in reducing the programmer effort required to adapt systems for trusted execution under real-time constraints.
@InProceedings{GPCE18p175,
author = {Yin Liu and Kijin An and Eli Tilevich},
title = {RT-Trust: Automated Refactoring for Trusted Execution under Real-Time Constraints},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {175--187},
doi = {10.1145/3278122.3278137},
year = {2018},
}
Anomaly Analyses for Feature-Model Evolution
Michael Nieke, Jacopo Mauro, Christoph Seidl, Thomas Thüm, Ingrid Chieh Yu, and Felix Franzke
(TU Braunschweig, Germany; University of Southern Denmark, Denmark; University of Oslo, Norway)
Software Product Lines (SPLs) are a common technique to capture families of software products in terms of commonalities and variabilities.
On a conceptual level, functionality of an SPL is modeled in terms of features in Feature Models (FMs).
As other software systems, SPLs and their FMs are subject to evolution that may lead to the introduction of anomalies (e.g., non-selectable features).
To fix such anomalies, developers need to understand the cause for them.
However, for large evolution histories and large SPLs, explanations may become very long and, as a consequence, hard to understand.
In this paper, we present a method for anomaly detection and explanation that, by encoding the entire evolution history, identifies the evolution step of anomaly introduction and explains which of the performed evolution operations lead to it.
In our evaluation, we show that our method significantly reduces the complexity of generated explanations.
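A dead feature, the anomaly the abstract mentions, can be checked by brute force on small models: a feature is dead if no valid configuration selects it. The Python sketch below is a hypothetical illustration with an invented example model; real analyses (including this paper's) use SAT-based reasoning rather than enumeration, and additionally pinpoint the evolution step that introduced the contradiction.

```python
from itertools import product

def anomalies(features, constraints):
    """Brute-force sketch: 'dead' features occur in no valid configuration,
    'core' features occur in every valid one."""
    valid = [dict(zip(features, bits))
             for bits in product([False, True], repeat=len(features))
             if all(c(dict(zip(features, bits))) for c in constraints)]
    dead = [f for f in features if not any(cfg[f] for cfg in valid)]
    core = [f for f in features if all(cfg[f] for cfg in valid)]
    return dead, core

# Hypothetical evolution result: one step added "GPS requires Basic",
# a later step added "GPS excludes Basic" -- together they kill GPS.
constraints = [lambda c: not c["GPS"] or c["Basic"],
               lambda c: not (c["GPS"] and c["Basic"])]
dead, _ = anomalies(["Basic", "GPS"], constraints)
assert dead == ["GPS"]   # contradictory constraints make GPS non-selectable
```

Explaining the anomaly then amounts to reporting which of the two constraints, i.e. which evolution operations, jointly cause the contradiction.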
@InProceedings{GPCE18p188,
author = {Michael Nieke and Jacopo Mauro and Christoph Seidl and Thomas Thüm and Ingrid Chieh Yu and Felix Franzke},
title = {Anomaly Analyses for Feature-Model Evolution},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {188--201},
doi = {10.1145/3278122.3278123},
year = {2018},
}
Regenerate: A Language Generator for Extended Regular Expressions
Gabriel Radanne and
Peter Thiemann
(University of Freiburg, Germany)
Regular expressions are part of every programmer’s toolbox. They are used for a wide variety of language-related tasks and there are many algorithms for manipulating them. In particular, matching algorithms that detect whether a word belongs to the language described by a regular expression are well explored, yet new algorithms appear frequently. However, there is no satisfactory methodology for testing such matchers.
We propose a testing methodology which is based on generating positive as well as negative examples of words in the language. To this end, we present a new algorithm to generate the language described by a generalized regular expression with intersection and complement operators. The complement operator allows us to generate both positive and negative example words from a given regular expression. We implement our generator in Haskell and OCaml and show that its performance is more than adequate for testing.
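The role of the complement operator is easy to demonstrate with a naive generator: enumerate all words up to some length and split them by membership, so the negative examples are exactly the language of the complemented expression. This brute-force Python sketch only conveys the idea; Regenerate's actual algorithm works algebraically on the expression rather than by filtering an enumeration.

```python
from itertools import product
import re

def words(sigma, max_len):
    """All words over the alphabet sigma up to max_len, shortest first."""
    for n in range(max_len + 1):
        for letters in product(sigma, repeat=n):
            yield "".join(letters)

def examples(member, sigma, max_len):
    """Split the finite word universe into positive and negative examples;
    complement is simply negated membership."""
    pos, neg = [], []
    for w in words(sigma, max_len):
        (pos if member(w) else neg).append(w)
    return pos, neg

# Membership test for the language (ab)* over {a, b}:
member = lambda w: re.fullmatch("(ab)*", w) is not None
pos, neg = examples(member, "ab", 4)
assert "abab" in pos and "" in pos and "ba" in neg
```

Feeding both lists to a matcher under test checks acceptance and rejection at once, which is the testing methodology the abstract proposes.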
@InProceedings{GPCE18p202,
author = {Gabriel Radanne and Peter Thiemann},
title = {Regenerate: A Language Generator for Extended Regular Expressions},
booktitle = {Proc.\ GPCE},
publisher = {ACM},
pages = {202--214},
doi = {10.1145/3278122.3278133},
year = {2018},
}