SLE 2017
10th ACM SIGPLAN International Conference on Software Language Engineering (SLE 2017)

10th ACM SIGPLAN International Conference on Software Language Engineering (SLE 2017), October 23–24, 2017, Vancouver, BC, Canada

SLE 2017 – Proceedings


Frontmatter

Title Page

Message from the Chairs
This volume contains the papers presented at SLE 2017, the 10th ACM SIGPLAN International Conference on Software Language Engineering, held on October 23–24, 2017, in Vancouver, Canada, as part of SPLASH 2017 (the ACM SIGPLAN conference on Systems, Programming, Languages, and Applications: Software for Humanity).

Keynotes

Engineering Meta-languages for Specifying Software Languages (Keynote)
Peter D. Mosses
(Swansea University, UK)

The programming and modelling languages currently used in software engineering generally have plenty of tool support. But although their syntax is specified using formal grammars or meta-models, complete formal semantic specifications are seldom provided.

The difficulty of reusing parts of semantic specifications, and of co-evolving such specifications with their languages, is a significant drawback for the practical use of formal semantics. I have collaborated in the development of several meta-languages for semantic specification that aim to eliminate such drawbacks: action semantics, and modular variants of structural operational semantics (MSOS, I-MSOS); this work led to the PLanCompS project and to CBS, a meta-language for component-based semantics.

The components of language specifications in CBS correspond to so-called fundamental programming constructs (funcons). The main feature of CBS is that each funcon is defined once and for all: the addition of new funcons does not require any changes to previous definitions, and behavioural laws are preserved. In contrast to software packages, the definition of each funcon has to remain fixed after its publication.

As well as explaining how component-based semantics achieves these desirable pragmatic properties, and comparing its features with those of some other meta-languages, I will demonstrate the current tool support for CBS, which is implemented in Spoofax.



Parsing

Type-Safe Modular Parsing
Haoyuan Zhang, Huang Li, and Bruno C. d. S. Oliveira
(University of Hong Kong, China)

Over the years, much effort has been put into solving extensibility problems while retaining important software engineering properties such as modular type-safety and separate compilation. Most previous work has focused on operations that traverse and process extensible Abstract Syntax Tree (AST) structures. However, there is almost no work on operations that build such extensible ASTs, including parsing.

This paper investigates solutions to the problem of modular parsing. We focus on semantic modularity, not just syntactic modularity: the solutions should not only allow complete parsers to be built out of modular parsing components, but also enable those components to be modularly type-checked and separately compiled. We present a technique based on parser combinators that enables modular parsing. Interestingly, the modularity requirements rule out several existing parser combinator approaches, which rely on non-modular techniques. We show that packrat parsing techniques provide solutions to these modularity problems and enable reasonable performance in a modular setting. Extensibility is achieved using multiple inheritance and Object Algebras. To evaluate the approach we conduct a case study based on the “Types and Programming Languages” interpreters. The case study shows that the approach is effective at reusing parsing code from existing interpreters, and the total parsing code is 69% shorter than an existing code base using a non-modular parsing approach.
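The extensibility idea behind combinator-based modular parsing can be conveyed with a small sketch. The Python below is ours, not the paper's type-safe encoding: subclassing stands in for multiple inheritance and Object Algebras, and all names are illustrative.

```python
# Minimal parser-combinator sketch: a base grammar is extended with a
# new alternative without editing existing code. Parsers return
# (value, next_position) on success, or None on failure.

class Base:
    """Grammar module for integer literals."""
    def pExpr(self, s, i):
        return self.pNum(s, i)

    def pNum(self, s, i):
        j = i
        while j < len(s) and s[j].isdigit():
            j += 1
        return (int(s[i:j]), j) if j > i else None

class WithAdd(Base):
    """Extension module: adds 'e + e' on top of Base, reusing its code."""
    def pExpr(self, s, i):
        r = super().pExpr(s, i)
        if r is None:
            return None
        left, j = r
        if j < len(s) and s[j] == '+':
            rest = self.pExpr(s, j + 1)      # right-recursive continuation
            if rest is not None:
                right, k = rest
                return ('+', left, right), k
        return left, j

print(WithAdd().pExpr("1+2+3", 0)[0])        # ('+', 1, ('+', 2, 3))
```

In the paper's setting the same reuse pattern is realised with statically checked multiple inheritance, so extensions compose with modular type-checking; the sketch only conveys the shape of the idea.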


Incremental Packrat Parsing
Patrick Dubroy and Alessandro Warth
(Y Combinator Research, USA)

Packrat parsing is a popular technique for implementing top-down, unlimited-lookahead parsers that operate in guaranteed linear time. In this paper, we describe a method for turning a standard packrat parser into an incremental parser through a simple modification to its memoization strategy. By “incremental”, we mean that the parser can perform syntax analysis without completely reparsing the input after each edit operation. This makes packrat parsing suitable for interactive use in code editors and IDEs — even with large inputs. Our experiments show that with our technique, an incremental packrat parser for JavaScript can outperform even a hand-optimized, non-incremental parser.
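The underlying idea can be sketched, much simplified, as a memo table keyed by (rule, position) in which each entry also records how far the match examined the input, so an edit invalidates only the entries it overlaps. This Python is our illustration, not the authors' implementation, which tracks examined extents more precisely:

```python
# Hypothetical incremental memo table for a packrat parser. Each entry
# maps (rule, start_pos) to (result, examined_up_to). On an edit, keep
# entries wholly to the left, shift entries wholly to the right, and
# drop everything that overlaps the edited region.

class IncrementalMemo:
    def __init__(self):
        self.table = {}   # (rule, pos) -> (result, examined_up_to)

    def apply_edit(self, start, end, replacement_len):
        shift = replacement_len - (end - start)
        surviving = {}
        for (rule, pos), (result, stop) in self.table.items():
            if stop <= start:                       # before the edit: keep
                surviving[(rule, pos)] = (result, stop)
            elif pos >= end:                        # after the edit: shift
                surviving[(rule, pos + shift)] = (result, stop + shift)
            # overlapping entries are discarded and will be reparsed
        self.table = surviving
```

A real parser would also have to relocate any absolute positions stored inside the memoized results themselves; the sketch treats results as opaque values.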


A Symbol-Based Extension of Parsing Expression Grammars and Context-Sensitive Packrat Parsing
Kimio Kuramitsu
(Yokohama National University, Japan)
Parsing expression grammars (PEGs) are a powerful and popular foundation for describing syntax. Despite PEGs' expressiveness, they cannot recognize many syntax patterns of popular programming languages. Typical examples include typedef-defined names in C/C++ and here documents appearing in many scripting languages. We use a single unified state representation, called a symbol table, to capture various context-sensitive patterns. Over the symbol table, we design a small set of restricted semantic predicates and actions. The extended PEGs, called SPEGs, are designed to be safe in the presence of backtracking and to preserve the linear-time guarantee of packrat parsing. This paper shows that SPEGs improve expressive power in ways that allow them to recognize practical context-sensitive grammars, including back referencing, indentation-based code layout, and contextual keywords.
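The symbol-table state that such a parser threads through parsing can be sketched roughly as follows. The function names are ours, not SPEG syntax; the key point is that definitions made inside a failed alternative are rolled back, keeping backtracking safe:

```python
# Illustrative mutable parser state: a stack-like symbol table whose
# growth can be undone when a PEG alternative fails and backtracks.

class ParserState:
    def __init__(self):
        self.symbols = []

def define(state, name):          # semantic action: record a symbol
    state.symbols.append(name)

def is_defined(state, name):      # semantic predicate: consult the table
    return name in state.symbols

def mark(state):                  # snapshot before trying an alternative
    return len(state.symbols)

def rollback(state, save):        # undo definitions on backtracking
    del state.symbols[save:]

st = ParserState()
save = mark(st)
define(st, "size_t")              # e.g. after parsing 'typedef ... size_t;'
assert is_defined(st, "size_t")   # 'size_t' now parses as a type name
rollback(st, save)                # the enclosing alternative failed
assert not is_defined(st, "size_t")
```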
Red Shift: Procedural Shift-Reduce Parsing (Vision Paper)
Nicolas Laurent
(Université Catholique de Louvain, Belgium)
Red Shift is a new design pattern for implementing parsers. The pattern draws ideas from traditional shift-reduce parsing as well as procedural PEG parsers. Red Shift parsers behave like shift-reduce parsers, but eliminate ambiguity by always prioritizing reductions over shifts. To compensate for the resulting lack of expressivity, reducers are not simple reduction rules but full-blown procedures written in a general-purpose host language. I found many advantages to this style of parser: in particular, it becomes easier to generate high-quality error messages and to compose different styles of parsers. I also speculate about how Red Shift parsers may improve partial compilation in the context of an IDE.

Textual Models

Towards a Taxonomy of Grammar Smells
Mats Stijlaart and Vadim Zaytsev
(University of Amsterdam, Netherlands; Raincode Labs, Belgium)
Any grammar engineer can tell a good grammar from a bad one, but there is no commonly accepted taxonomy of indicators of required grammar refactorings. One consequence of this lack of a general smell taxonomy is the scarcity of tools to assess and improve the quality of grammars. By combining two lines of research — on smell detection and on grammar transformation — we have assembled a taxonomy of smells in grammars. As a pilot case, detectors for the identified smells were implemented for grammars in a broad sense and applied to the 641 grammars of the Grammar Zoo.
Deep Priority Conflicts in the Wild: A Pilot Study
Luís Eduardo de Souza Amorim, Michael J. Steindorfer, and Eelco Visser
(Delft University of Technology, Netherlands)
Context-free grammars are suitable for formalizing the syntax of programming languages concisely and declaratively. Thus, such grammars are often found in reference manuals of programming languages, and used in language workbenches for language prototyping. However, the natural and concise way of writing a context-free grammar is often ambiguous. Safe and complete declarative disambiguation of operator precedence and associativity conflicts guarantees that all ambiguities arising from combining the operators of the language are resolved. Ambiguities can occur due to shallow conflicts, which can be captured by one-level tree patterns, and deep conflicts, which require more elaborate techniques. Approaches to solve deep priority conflicts include grammar transformations, which may result in large unambiguous grammars, or may require adapted parser technologies to include data-dependency tracking at parse time. In this paper we study deep priority conflicts "in the wild". We investigate the efficiency of grammar transformations to solve deep priority conflicts by using a lazy parse table generation technique. On top of lazily-generated parse tables, we define metrics, aiming to answer how often deep priority conflicts occur in real-world programs and to what extent programmers explicitly disambiguate programs themselves. By applying our metrics to a small corpus of popular open-source repositories we found that in OCaml, up to 17% of the source files contain deep priority conflicts.
Virtual Textual Model Composition for Supporting Versioning and Aspect-Orientation
Robert Bill, Patrick Neubauer, and Manuel Wimmer
(Vienna University of Technology, Austria; University of York, UK)
The maintenance of modern systems often requires developers to perform complex and error-prone cognitive tasks, which are caused by the obscurity, redundancy, and irrelevancy of code, distracting from essential maintenance tasks. Typical maintenance scenarios include multiple branches of code in repositories, which involves dealing with branch-interdependent changes, and aspects in aspect-oriented development, which requires in-depth knowledge of behavior-interdependent changes. Thus, merging branched files as well as validating the behavior of statically composed code requires developers to conduct exhaustive individual introspection. In this work we present VirtualEdit for associative, commutative, and invertible model composition. It allows simultaneous editing of multiple model versions or variants through dynamically derived virtual models. We implemented the approach in terms of an open-source framework that enables multi-version editing and aspect-orientation by selectively focusing on specific parts of code, which are significant for a particular engineering task. The VirtualEdit framework is evaluated based on its application to the most popular publicly available Xtext-based languages. Our results indicate that VirtualEdit can be applied to existing languages with reasonably low effort.
Robust Projectional Editing
Friedrich Steimann, Marcus Frenkel, and Markus Voelter
(Fernuniversität in Hagen, Germany; itemis, Germany)

While contemporary projectional editors make sure that the edited programs conform to the programming language’s metamodel, they do not enforce that they are also well-formed, that is, that they obey the well-formedness rules defined for the language. We show how, based on a constraint-based capture of well-formedness, projectional editors can be empowered to enforce well-formedness in much the same way they enforce conformance with the metamodel. The resulting robust edits may be more complex than ordinary, well-formedness breaking edits, and hence may require more user involvement; yet, maintaining well-formedness at all times ensures that necessary corrections of a program are linked to the edit that necessitated them, and that the projectional editor’s services are never compromised by inconsistent programs. Robust projectional editing is not a straitjacket, however: If a programmer prefers to work without it, its constraint-based capture of well-formedness will still catch all introduced errors — unlike many other editor services, well-formedness checking and robust editing are based on the same implementation, and are hence guaranteed to behave consistently.



DSLs

Debugging with Domain-Specific Events via Macros
Xiangqi Li and Matthew Flatt
(University of Utah, USA)
Extensible languages enable the convenient construction of many kinds of domain-specific languages (DSLs) by mapping domain-specific surface syntax into the host language's core forms in a layered and composable way. The host language's debugger, however, reports evaluation and data details in ways that reflect the host language, instead of the DSL in its own terms, and closing the gap may require more than correlating host evaluation steps to the original DSL source. In this paper, we describe an approach to DSL construction with macros that pairs the mapping of DSL terms to host terms with a mapping to convert primitive events back to domain-specific concepts. Domain-specific events are then suitable for presenting to a user or wiring into a domain-specific visualization. We present a core model of evaluation and events, and we present a language design (analogous to pattern-based notations for macros, but in the other direction) for describing how events in a DSL's expansion are mapped to events at the DSL's level.
A Chrestomathy of DSL Implementations
Simon Schauss, Ralf Lämmel, Johannes Härtel, Marcel Heinz, Kevin Klein, Lukas Härtel, and Thorsten Berger
(University of Koblenz-Landau, Germany; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden)

Selecting and properly using approaches for DSL implementation can be challenging, given their variety and complexity. To support developers, we present the software chrestomathy MetaLib, a well-organized and well-documented collection of DSL implementations useful for learning. We focus on basic metaprogramming techniques for implementing DSL syntax and semantics. The DSL implementations are organized and enhanced by feature modeling, semantic annotation, and model-based documentation. The chrestomathy enables side-by-side exploration of different implementation approaches for DSLs. Source code, feature model, feature configurations, semantic annotations, and documentation are publicly available online, explorable through a web application, and maintained by a collaborative process.


A Requirements Engineering Approach for Usability-Driven DSL Development
Ankica Barišić, Dominique Blouin, Vasco Amaral, and Miguel Goulão
(NOVA-LINCS, Portugal; Nova University of Lisbon, Portugal; Telecom ParisTech, France)
There is currently a lack of Requirements Engineering (RE) approaches applied to, or supporting, the development of a Domain-Specific Language (DSL) that take into account the environment in which the language is to be used. We present a model-based RE approach to support DSL development with a focus on usability concerns. RDAL is an RE fragment language that can be complemented with other languages to support RE and design. USE-ME is a model-driven approach for DSL usability evaluation that can be integrated with a DSL development approach. We combine RDAL with a new DSL, named DSSL, that we created for the specification of DSL-based systems, and we integrate USE-ME into this combination to support usability evaluation. This combination of existing languages and tools provides a comprehensive RE approach for DSL development. We illustrate the approach with the development of the Gyro DSL for programming robots.
Better Call the Crowd: Using Crowdsourcing to Shape the Notation of Domain-Specific Languages
Marco Brambilla, Jordi Cabot, Javier Luis Cánovas Izquierdo, and Andrea Mauri
(Politecnico di Milano, Italy; ICREA, Spain; Open University of Catalonia, Spain)
Crowdsourcing has emerged as a novel paradigm where humans are employed to perform computational tasks. In the context of Domain-Specific Modeling Language (DSML) development, where the involvement of end-users is crucial to assure that the resulting language satisfies their needs, crowdsourcing tasks could be defined to assist in the language definition process. By relying on the crowd, it is possible to show an early version of the language to a wider spectrum of users, thus increasing the validation scope and eventually promoting its acceptance and adoption. We propose a systematic method for creating crowdsourcing campaigns aimed at refining the graphical notation of DSMLs. The method defines a set of steps to identify, create and order the questions for the crowd. As a result, developers are provided with a set of notation choices that best fit end-users' needs. We also report on an experiment validating the approach.

Grammars

A Formalisation of Parameterised Reference Attribute Grammars
Scott J. H. Buckley and Anthony M. Sloane
(Macquarie University, Australia)

The similarities and differences between attribute grammar systems are obscured by their implementations. A formalism that captures the essence of such systems would allow equivalence, correctness, and other analyses to be formally framed and proven. We present Saiga, a core language and small-step operational semantics that precisely captures the fundamental concepts of the specification and execution of parameterised reference attribute grammars. We demonstrate the utility of Saiga by a) proving a meta-theoretic property about attribute caching, and b) specifying two attribute grammars for a realistic name analysis problem and proving that they are equivalent. The language, semantics, and associated tests have been mechanised in Coq; we are currently mechanising the proofs.


Concurrent Circular Reference Attribute Grammars
Jesper Öqvist and Görel Hedin
(Lund University, Sweden)
Reference Attribute Grammars (RAGs) are a declarative, executable formalism used for constructing compilers and related tools. Existing implementations support concurrent evaluation only with global evaluation locks. This may lead to long latencies in interactive tools, where interactive and background threads query attributes concurrently. We present lock-free algorithms for concurrent attribute evaluation, enabling low latency in interactive tools. Our algorithms support important extensions to RAGs such as circular (fixed-point) attributes and higher-order attributes. We have implemented our algorithms in Java, for the JastAdd metacompiler. We evaluate the implementation on a JastAdd-specified compiler for the Java language, demonstrating very low latencies for interactive attribute queries, on the order of milliseconds. Furthermore, initial experiments show a speedup of about a factor of 2 when using four parallel compilation threads.
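Circular (fixed-point) attributes, one of the RAG extensions mentioned above, can be sketched sequentially as iteration from bottom values until the attribute values stabilise. This Python sketch is ours and deliberately ignores the paper's actual contribution, lock-free concurrent evaluation:

```python
# Naive sequential fixed-point evaluation of circular attributes.
# eqs maps each attribute name to a function computing its value from
# the others; evaluation starts at the given bottom values and iterates
# until no attribute changes.

def eval_circular(eqs, bottom, wanted):
    values = dict(bottom)
    changed = True
    while changed:
        changed = False
        for attr in values:
            v = eqs[attr](values.get)
            if v != values[attr]:
                values[attr] = v
                changed = True
    return values[wanted]

# Two mutually dependent boolean attributes (e.g. grammar nullability):
eqs = {"a": lambda get: get("b"),
       "b": lambda get: get("a") or True}
print(eval_circular(eqs, {"a": False, "b": False}, "a"))   # True
```

For monotone equations over a finite lattice this iteration terminates; JastAdd's circular attributes rely on the same condition.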
Ensuring Non-interference of Composable Language Extensions
Ted Kaminski and Eric Van Wyk
(University of Minnesota, USA)
Extensible language frameworks aim to allow independently-developed language extensions to be easily added to a host programming language. Doing so should not require being a compiler expert, and the resulting compiler should "just work" as expected. Previous work has shown how specifications for parsing (based on context-free grammars) and for semantic analysis (based on attribute grammars) can be automatically and reliably composed, ensuring that the resulting compiler does not terminate abnormally. However, that work does not ensure that a property proven to hold for a language (or extended language) still holds when another extension is added, a problem we call interference. We present a solution to this problem using a logical notion of coherence. We show that a useful class of language extensions, implemented as attribute grammars, preserve all coherent properties. If we also restrict extensions to only making use of coherent properties in establishing their correctness, then the correctness properties of each extension will hold when composed with other extensions. As a result, there can be no interference: each extension behaves as specified.
A Domain-Specific Controlled English Language for Automated Regulatory Compliance (Industrial Paper)
Suman Roychoudhury, Sagar Sunkle, Deepali Kholkar, and Vinay Kulkarni
(Tata Consultancy Services, India)
Modern enterprises operate in an unprecedented regulatory environment, where increasing regulation and heavy penalties for non-compliance have placed regulatory compliance among the topmost concerns of enterprises worldwide. Previous research in the field of compliance has established that the manual specification of the regulations used by GRC frameworks not only fails to ensure their proper coverage but also negatively affects the turnaround time both in proving and in maintaining compliance. Our key contribution in this paper is an implementation of a controlled, natural-English-like domain-specific language that can be used by domain experts to specify regulations for automated compliance checking. We demonstrate this language using examples from industry regulations in the banking and financial services domain.

Meta-modelling

Concrete Syntax: A Multi-paradigm Modelling Approach
Yentl Van Tendeloo, Simon Van Mierlo, Bart Meyers, and Hans Vangheluwe
(University of Antwerp, Belgium; Flanders Make, Belgium; McGill University, Canada)
Domain-Specific Modelling Languages (DSLs) allow domain experts to create models using abstractions they are most familiar with. A DSL's syntax is specified in two parts: the abstract syntax defines the language's concepts and their allowed combinations, and the concrete syntax defines how those concepts are presented to the user (typically using a graphical or textual notation). However important concrete syntax is for the usability of the language, current modelling tools offer limited possibilities for defining the mapping between abstract and concrete syntax. Often, the language designer is restricted to defining a single icon representation for each concept, which is then rendered to the user in a (fixed) graphical interface. This paper presents a framework that explicitly models the bi-directional mapping between abstract and concrete syntax, thereby making these restrictions easy to overcome. It is more flexible and allows, among other things, a model to be represented in multiple front-ends, using multiple representation formats and multiple mappings. Our approach is evaluated with an implementation in our prototype tool, the Modelverse, and by applying it to an example language.
Structural Model Subtyping with OCL Constraints
Artur Boronat
(University of Leicester, UK)

In model-driven engineering (MDE), models abstract the relevant features of software artefacts, and model management operations, including model transformations, act on them, automating large tasks of the development process. Flexible reuse of such operations is an important factor in improving productivity when developing and maintaining MDE solutions. In this work, we revisit the traditional notion of object subtyping based on subsumption, discarded by other approaches to model subtyping. We refine a type system for object-oriented programming, with multiple inheritance, to support model types in order to analyse its advantages and limitations with respect to reuse in MDE. Specifically, we extend type expressions with referential constraints and with OCL constraints. Our approach has been validated with a tool that automatically extracts model types from (EMF) metamodels, paired with their OCL constraints, and that exploits the extended subtyping relation to reuse model management operations. We show that structural model subtyping is expressive enough to support variants of model subtyping, including multiple, partial, and dynamic model subtyping. The tool has received the ACM badge “Artifacts Evaluated – Functional”.


Comparison of the Expressiveness and Performance of Template-Based Code Generation Tools
Lechanceux Luhunu and Eugene Syriani
(Université de Montréal, Canada)
A critical step in model-driven engineering (MDE) is the automatic synthesis of a textual artifact from models. This is a very useful model transformation, used to generate application code, serialize models to persistent storage, and generate documentation or reports. Among the various model-to-text (M2T) paradigms, template-based code generation is the most popular in MDE. It is supported by over 70 different tools, whether model-based (e.g., Acceleo, EGL) or code-based (e.g., JET, Velocity). To help developers with the difficult choice of selecting an M2T tool, we compare the expressive power and performance of the nine most popular tools spanning the different technological approaches. We evaluate expressiveness based on common metamodel patterns and evaluate performance on a range of models that conform to a metamodel composed of combinations of these patterns. The results show that MDE-based tools are more expressive, but code-based tools perform better. Xtend2 offers the best compromise between expressiveness and performance.
A Development Environment for the Alf Language within the MagicDraw UML Tool (Tool Demo)
Ed Seidewitz
(nMeta, USA)

Alf is an action language designed as a textual notation for specifying detailed behaviors within an executable UML model. The Alf implementation in MagicDraw, a leading commercial tool for modeling using the Unified Modeling Language (UML) from No Magic, Inc., aims to support the practical application of Alf in real-world uses of executable UML modeling. It includes syntax-aware editing and checking of Alf code, with valid code automatically and transparently compiled into UML activity models. The resulting models are fully integrated within the wider UML modeling context, and they can then be executed as part of full system simulation scenarios. The Alf compiler also tracks the dependencies of all Alf text on other UML model elements, allowing automatic re-checking and re-building of the Alf code as necessitated by changes in referenced elements. The goal is to provide an IDE-level experience for the easy entry and maintenance of Alf code within an overall executable UML model.



GPL/DSL Implementation

FlowSpec: Declarative Dataflow Analysis Specification
Jeff Smits and Eelco Visser
(Delft University of Technology, Netherlands)
We present FlowSpec, a declarative specification language for the domain of dataflow analysis. FlowSpec has declarative support for the specification of control flow graphs of programming languages, and dataflow analyses on these control flow graphs. We define the formal semantics of FlowSpec, which is rooted in Monotone Frameworks. We also discuss a prototype implementation of the language, built in the Spoofax Language Workbench. Finally, we evaluate the expressiveness and conciseness of the language with two case studies. These case studies are analyses for Green-Marl, an industrial, domain-specific language for graph processing. The first case study is a classical dataflow analysis, scaled to this full language. The second case study is a domain-specific analysis of Green-Marl.
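The Monotone Frameworks foundation can be sketched with the classic worklist algorithm; the liveness example below is plain Python of ours, not FlowSpec syntax:

```python
# Backward liveness analysis as a worklist fixpoint over a tiny CFG.
# cfg maps a node to its successors; use/defs give per-node variable sets.

def liveness(cfg, use, defs):
    live_in = {n: set() for n in cfg}
    work = list(cfg)
    while work:
        n = work.pop()
        succs = cfg[n]
        out = set().union(*(live_in[s] for s in succs)) if succs else set()
        new_in = use[n] | (out - defs[n])
        if new_in != live_in[n]:
            live_in[n] = new_in
            # a change at n can affect every predecessor of n
            work.extend(p for p in cfg if n in cfg[p])
    return live_in

# x = 1; y = x; return y
cfg  = {1: [2], 2: [3], 3: []}
use  = {1: set(), 2: {"x"}, 3: {"y"}}
defs = {1: {"x"}, 2: {"y"}, 3: set()}
print(liveness(cfg, use, defs))   # x live into node 2, y live into node 3
```

FlowSpec lets such analyses be written declaratively against a specified control-flow graph rather than hand-coded like this.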
Metacasanova: An Optimized Meta-compiler for Domain-Specific Languages
Francesco Di Giacomo, Mohamed Abbadi, Agostino Cortesi, Pieter Spronck, and Giuseppe Maggiore
(Università Ca' Foscari, Italy; Hogeschool Rotterdam, Netherlands; Tilburg University, Netherlands)
Domain-Specific Languages (DSLs) offer language-level abstractions that general-purpose languages do not, thus speeding up the implementation of solutions to problems within a specific domain. Developers can either build an interpreter/compiler for a DSL, which is a hard and time-consuming task, or embed it in a host language, which speeds up the development process but loses several advantages that a dedicated compiler might bring. In this work we present a meta-compiler called Metacasanova, whose meta-language is based on operational semantics. We then propose a language extension with functors and modules that allows the type system of a language definition to be embedded in the meta-type system of Metacasanova, improving the performance of manipulating data structures at run time. Our results show that Metacasanova dramatically reduces the lines of code required to develop a compiler, and that the running time of the meta-program is improved by embedding the host language's type system in the meta-type system through functors in the meta-language.
Robust Programs with Filtered Iterators
Jiasi Shen and Martin Rinard
(Massachusetts Institute of Technology, USA)
We present a new language construct, filtered iterators, for robust input processing. Filtered iterators are designed to eliminate many common input processing errors while enabling robust continued execution. The design is inspired by (1) observed common input processing errors and (2) successful strategies implemented by human developers fixing input processing errors. Filtered iterators decompose inputs into input units and atomically and automatically discard units that trigger errors. Statistically significant results from a developer study demonstrate the effectiveness of filtered iterators in enabling developers to produce robust input processing code without common input processing defects.
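The construct can be approximated in an ordinary language. The helper below is our hypothetical rendering, not the paper's actual syntax: the input is decomposed into units, each unit is processed atomically, and units that trigger errors are discarded while execution continues:

```python
# A filtered-iterator-style helper: process each input unit, atomically
# discarding any unit whose processing raises an error.

def filtered(units, process):
    results = []
    for unit in units:
        try:
            results.append(process(unit))
        except Exception:
            pass          # the faulty unit is dropped; execution continues
    return results

records = ["1,2", "3,4", "oops", "5,6"]
parse = lambda s: tuple(int(field) for field in s.split(","))
print(filtered(records, parse))   # [(1, 2), (3, 4), (5, 6)]
```

Because the append happens only after processing succeeds, a malformed unit cannot leave a partial result behind, which is the atomic-discard behaviour the construct is built around.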
Energy Efficiency across Programming Languages: How Do Energy, Time, and Memory Relate?
Rui Pereira, Marco Couto, Francisco Ribeiro, Rui Rua, Jácome Cunha, João Paulo Fernandes, and João Saraiva
(INESC TEC, Portugal; University of Minho, Portugal; NOVA-LINCS, Portugal; Nova University of Lisbon, Portugal; LISP, Portugal; CISUC, Portugal; University of Coimbra, Portugal)
This paper presents a study of the runtime, memory usage, and energy consumption of twenty-seven well-known software languages. We monitor the performance of these languages using ten different programming problems, expressed in each of the languages. Our results show interesting findings, such as slower/faster languages consuming less/more energy, and how memory usage influences energy consumption. Finally, we show how to use our results to support software engineers in deciding which language to use when energy efficiency is a concern.
