SLE 2018 – Author Index |
Amorim, Luís Eduardo de Souza |
SLE '18: "Declarative Specification ..."
Declarative Specification of Indentation Rules: A Tooling Perspective on Parsing and Pretty-Printing Layout-Sensitive Languages
Luís Eduardo de Souza Amorim, Michael J. Steindorfer, Sebastian Erdweg, and Eelco Visser (Delft University of Technology, Netherlands) In layout-sensitive languages, the indentation of an expression or statement can influence how a program is parsed. While some of these languages (e.g., Haskell and Python) have been widely adopted, there is little support for software language engineers in building tools for layout-sensitive languages. As a result, parsers, pretty-printers, program analyses, and refactoring tools often need to be handwritten, which decreases the maintainability and extensibility of these tools. Even state-of-the-art language workbenches have little support for layout-sensitive languages, restricting the development and prototyping of such languages. In this paper, we introduce a novel approach to declarative specification of layout-sensitive languages using layout declarations. Layout declarations are high-level specifications of indentation rules that abstract from low-level technicalities. We show how to derive an efficient layout-sensitive generalized parser and a corresponding pretty-printer automatically from a language specification with layout declarations. We validate our approach in a case-study using a syntax definition for the Haskell programming language, investigating the performance of the generated parser and the correctness of the generated pretty-printer against 22191 Haskell files. @InProceedings{SLE18p3, author = {Luís Eduardo de Souza Amorim and Michael J. Steindorfer and Sebastian Erdweg and Eelco Visser}, title = {Declarative Specification of Indentation Rules: A Tooling Perspective on Parsing and Pretty-Printing Layout-Sensitive Languages}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {3--15}, doi = {10.1145/3276604.3276607}, year = {2018}, } Publisher's Version Info Artifacts Functional |
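As a rough illustration of the kind of rule the layout declarations above abstract over (a hand-written Python sketch, not the authors' notation or generated parser), an offside-style indentation rule can be phrased as: statements start to the right of the block's anchor column and are mutually aligned.

```python
# Hypothetical check for one indentation rule of the kind a layout declaration
# would state declaratively: in an offside-style block, every statement must
# start to the right of the anchor column, and sibling statements must align.

def check_offside_block(anchor_col, stmt_positions):
    """stmt_positions: (line, column) start positions of the block's statements."""
    if not stmt_positions:
        return True
    first_col = stmt_positions[0][1]
    aligned = all(col == first_col for _, col in stmt_positions)
    offside = all(col > anchor_col for _, col in stmt_positions)
    return aligned and offside

# A Haskell-like 'do' block anchored at column 4, statements at column 6:
print(check_offside_block(4, [(2, 6), (3, 6), (4, 6)]))  # True
print(check_offside_block(4, [(2, 6), (3, 8)]))          # False: not aligned
```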
|
Aßmann, Uwe |
SLE '18: "Continuous Model Validation ..."
Continuous Model Validation using Reference Attribute Grammars
Johannes Mey, René Schöne, Görel Hedin, Emma Söderberg, Thomas Kühn, Niklas Fors, Jesper Öqvist, and Uwe Aßmann (TU Dresden, Germany; Lund University, Sweden) Just like current software systems, models are characterised by increasing complexity and rate of change. Yet, these models only become useful if they can be continuously evaluated and validated. To achieve sufficiently low response times for large models, incremental analysis is required. Reference Attribute Grammars (RAGs) offer mechanisms to perform an incremental analysis efficiently using dynamic dependency tracking. However, not all features used in conceptual modelling are directly available in RAGs. In particular, support for non-containment model relations is only available through manual implementation. We present an approach to directly model uni- and bidirectional non-containment relations in RAGs and provide efficient means for navigating and editing them. This approach is evaluated using a scalable benchmark for incremental model editing and the JastAdd RAG system. Our work demonstrates the suitability of RAGs for validating complex and continuously changing models of current software systems. @InProceedings{SLE18p70, author = {Johannes Mey and René Schöne and Görel Hedin and Emma Söderberg and Thomas Kühn and Niklas Fors and Jesper Öqvist and Uwe Aßmann}, title = {Continuous Model Validation using Reference Attribute Grammars}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {70--82}, doi = {10.1145/3276604.3276616}, year = {2018}, } Publisher's Version Artifacts Functional |
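The incremental evaluation this entry relies on can be pictured with demand-driven, cached attributes. The following Python toy (not JastAdd, and with coarse whole-cache invalidation instead of dynamic dependency tracking) only conveys the basic idea:

```python
# Toy analogue of demand-driven attributes: values are computed on demand,
# memoised, and flushed when the model is edited. Real RAG systems track
# dependencies dynamically instead of clearing everything.

class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)
        self._cache = {}

    def attr(self, key, compute):
        if key not in self._cache:
            self._cache[key] = compute(self)   # demand-driven evaluation
        return self._cache[key]

    def edit(self):
        self._cache.clear()                    # coarse invalidation on change

def size(node):
    return 1 + sum(size(c) for c in node.children)

root = Node("root", [Node("a"), Node("b")])
print(root.attr("size", size))   # computed once: 3
print(root.attr("size", size))   # served from the cache
root.children.append(Node("c"))
root.edit()
print(root.attr("size", size))   # recomputed after the edit: 4
```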
|
Bennich-Björkman, Oscar |
SLE '18: "The Next 700 Unit of Measurement ..."
The Next 700 Unit of Measurement Checkers
Oscar Bennich-Björkman and Steve McKeever (Uppsala University, Sweden) In scientific applications, physical quantities and units of measurement are used regularly. If the inherent incompatibility between these units is not handled properly it can lead to major, sometimes catastrophic, problems. Although the risk of a miscalculation is high and the cost equally so, almost none of the major programming languages has support for physical quantities. Instead, scientific code developers often make their own tools or rely on external libraries to help them spot or prevent these mistakes. We employed a systematic approach to examine and analyse all available physical quantity open-source libraries. Approximately 3700 search results across seven repository hosting sites were condensed into a list of 82 of the most comprehensive and well-developed libraries currently available. In this group, 30 different programming languages are represented. Out of these 82 libraries, 38 have been updated within the last two years. These 38 are summarised in this paper as they are deemed the most relevant. The conclusion we draw from these results is that there is clearly too much diversity, duplicated efforts, and a lack of code sharing and harmonisation which discourages use and adoption. @InProceedings{SLE18p121, author = {Oscar Bennich-Björkman and Steve McKeever}, title = {The Next 700 Unit of Measurement Checkers}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {121--132}, doi = {10.1145/3276604.3276613}, year = {2018}, } Publisher's Version |
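For readers unfamiliar with unit-of-measurement checking, this is a minimal, hypothetical Quantity class in Python, not one of the 82 surveyed libraries: addition demands matching units, multiplication merges unit exponents, and incompatible operations fail loudly.

```python
# Minimal sketch of what a unit-of-measurement checker enforces.

class Quantity:
    def __init__(self, value, units):
        self.value = value
        self.units = dict(units)              # e.g. {'m': 1, 's': -1} for m/s

    def __add__(self, other):
        if self.units != other.units:
            raise TypeError(f"unit mismatch: {self.units} vs {other.units}")
        return Quantity(self.value + other.value, self.units)

    def __mul__(self, other):
        merged = dict(self.units)
        for unit, exp in other.units.items():
            merged[unit] = merged.get(unit, 0) + exp
            if merged[unit] == 0:
                del merged[unit]              # cancel dimensions, e.g. m/s * s -> m
        return Quantity(self.value * other.value, merged)

    def __repr__(self):
        return f"{self.value} {self.units}"

distance = Quantity(3.0, {'m': 1})
speed = Quantity(1.5, {'m': 1, 's': -1})
duration = Quantity(2.0, {'s': 1})
print(speed * duration)              # 3.0 {'m': 1} -- seconds cancel out
print(distance + speed * duration)   # fine: both are metres
# distance + duration                # would raise TypeError: unit mismatch
```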
|
Buchs, Didier |
SLE '18: "A Practical Type System for ..."
A Practical Type System for Safe Aliasing
Dimitri Racordon and Didier Buchs (University of Geneva, Switzerland) Aliasing is a vital concept of programming, but it comes with a plethora of challenging issues, such as the problems related to race safety. This has motivated years of research, and promising solutions such as ownership or linear types have found their way into modern programming languages. Unfortunately, most current approaches are restrictive. In particular, they often enforce a single-writer constraint, which prohibits the creation of mutable self-referential structures. While this constraint is often indispensable in the context of preemptive multithreading, it can be worked around in the case of single-threaded programs. With the recent resurgence of cooperative multitasking, where processes voluntarily share control over a single execution thread, this appears to be an interesting trade-off. In this paper, we propose a type system that relaxes the usual single-writer constraint for single-threaded programs, without sacrificing race safety properties. We present it in the form of a simple reference-based language, for which we provide a formal semantics, as well as an interpreter. @InProceedings{SLE18p133, author = {Dimitri Racordon and Didier Buchs}, title = {A Practical Type System for Safe Aliasing}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {133--146}, doi = {10.1145/3276604.3276612}, year = {2018}, } Publisher's Version Info Artifacts Functional |
|
Butting, Arvid |
SLE '18: "Deriving Fluent Internal Domain-Specific ..."
Deriving Fluent Internal Domain-Specific Languages from Grammars
Arvid Butting, Manuela Dalibor, Gerrit Leonhardt, Bernhard Rumpe, and Andreas Wortmann (RWTH Aachen University, Germany) A prime decision of engineering domain-specific languages (DSLs) is implementing these as external DSLs or internal DSLs. Agile language engineering benefits from easily switching between both shapes to provide rapidly developed prototypes before settling on a specific syntax. This switching, however, is rarely feasible due to the effort of re-implementing language tooling for both shapes. Current research in software language engineering focuses either on internal DSLs or external DSLs. We conceived a concept to automatically derive customizable internal DSLs from grammars that operate on the same abstract syntax as the external DSL. This supports reusing tooling (such as model checkers or code generators) between both shapes. We realized our concept with the MontiCore language workbench and Groovy as host language for internal DSLs. This concept is applicable to many grammar-based language definitions. @InProceedings{SLE18p187, author = {Arvid Butting and Manuela Dalibor and Gerrit Leonhardt and Bernhard Rumpe and Andreas Wortmann}, title = {Deriving Fluent Internal Domain-Specific Languages from Grammars}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {187--199}, doi = {10.1145/3276604.3276621}, year = {2018}, } Publisher's Version SLE '18: "Translating Grammars to Accurate ..." Translating Grammars to Accurate Metamodels Arvid Butting, Nico Jansen, Bernhard Rumpe, and Andreas Wortmann (RWTH Aachen University, Germany) There is a software language engineering gap between metamodel-based languages and grammar-based languages. Grammars can support integrated definition of concrete syntax and abstract syntax, which facilitates processing models, but usually prevents reusing the variety of language tools operating on Ecore metamodels (such as editors, interpreters, debuggers, etc.). Existing work on translating grammars to Ecore metamodels features very cursory translations only, which requires re-engineering intricacies natural to grammars for the metamodels again. We conceived a translation from an EBNF-like syntax to Ecore metamodels that considers the grammars’ intricacies. This translation is realized as a fully automated toolchain from grammars into Ecore & OCL using the language workbench MontiCore. Using this translation enables grammar-based languages to leverage the benefits of Ecore-compatible language tools while supporting natural definition of concrete and abstract syntax. @InProceedings{SLE18p174, author = {Arvid Butting and Nico Jansen and Bernhard Rumpe and Andreas Wortmann}, title = {Translating Grammars to Accurate Metamodels}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {174--186}, doi = {10.1145/3276604.3276605}, year = {2018}, } Publisher's Version |
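The first entry above derives fluent Groovy builders from MontiCore grammars automatically; the hand-written Python builder below is only a sketch of the underlying principle, namely that an internal (fluent) shape and an external (parsed) shape can construct the same abstract syntax and therefore share downstream tooling.

```python
# Hand-written fluent builder for a toy state-machine language. The paper
# generates such builders from a grammar; here the class is written manually
# and only illustrates that both shapes target one abstract syntax.

class StateMachine:
    def __init__(self, name):
        self.name, self.states, self.transitions = name, [], []

    # fluent API: each call returns the builder itself
    def state(self, name):
        self.states.append(name)
        return self

    def transition(self, src, dst):
        self.transitions.append((src, dst))
        return self

automaton = (StateMachine("Door")
             .state("Open").state("Closed")
             .transition("Open", "Closed")
             .transition("Closed", "Open"))

print(automaton.states)        # ['Open', 'Closed'] -- the same abstract syntax
print(automaton.transitions)   # an external parser for the textual DSL could build
```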
|
Capozucca, Alfredo |
SLE '18: "Messir: A Text-First DSL-Based ..."
Messir: A Text-First DSL-Based Approach for UML Requirements Engineering (Tool Demo)
Benoît Ries, Alfredo Capozucca, and Nicolas Guelfi (University of Luxembourg, Luxembourg) This tool paper presents the design and tool-support of Messir, an approach centered on textual domain-specific languages supported by our open-source UML requirements engineering tool, named Excalibur. The novelty of our approach is the actual integration in a single workbench (Excalibur) of textual DSLs richly covering the requirements and analysis phases, i.e., improved use-cases, environment, conceptual and operations models; and the read-only visualisation of the requirements with UML-compliant views; and the generation of scientific requirements analysis documents in LaTeX; and the formal simulation of test cases requirements. We designed our Messir language, with a grammar-based approach generating a textual editor, using the Xtext framework as an Eclipse plugin. Messir DSL’s static semantics is defined as a set of validation rules guiding end-users through the requirements analysis phase. Messir DSL’s semantics is given as a semi-automatic translation to Prolog code. We also generate, from the requirements model elements, read-only graphical views (using the Sirius Eclipse plugin) as well as a complete requirements analysis document in LaTeX. This approach and tool have been used as a requirements engineering educational tool in several bachelor and master semesters. @InProceedings{SLE18p103, author = {Benoît Ries and Alfredo Capozucca and Nicolas Guelfi}, title = {Messir: A Text-First DSL-Based Approach for UML Requirements Engineering (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {103--107}, doi = {10.1145/3276604.3276614}, year = {2018}, } Publisher's Version Info Artifacts Functional |
|
Chechik, Marsha |
SLE '18: "Analysing Meta-Model Product ..."
Analysing Meta-Model Product Lines
Esther Guerra, Juan de Lara, Marsha Chechik, and Rick Salay (Autonomous University of Madrid, Spain; University of Toronto, Canada) Model-driven engineering advocates the use of models to describe and automate many software development tasks. The syntax of modelling languages is defined by meta-models, making them essential artefacts. A combination of product line engineering methods and meta-models has been proposed to enable specification of modelling language variants, e.g., to describe a range of systems. However, there is a lack of techniques for ensuring syntactic correctness of all meta-models within a family (including their OCL constraints), and semantic correctness related to properties of individual instances of the different variants. The absence of verification methods at the product-line level can cause synthesis of ill-formed meta-models and problematic feature combinations whose effect at the instance level may go unnoticed. To attack this problem, we propose an approach to lifting both the meta-model syntax checking and the satisfiability checking of properties of individual meta-model instances, to the product-line level. We validate the approach via a prototype tool called Merlin, and report on several experiments that show the advantages of our method w.r.t. an enumerative analysis approach. @InProceedings{SLE18p160, author = {Esther Guerra and Juan de Lara and Marsha Chechik and Rick Salay}, title = {Analysing Meta-Model Product Lines}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {160--173}, doi = {10.1145/3276604.3276609}, year = {2018}, } Publisher's Version Info |
|
Cimini, Matteo |
SLE '18: "Languages as First-Class Citizens ..."
Languages as First-Class Citizens (Vision Paper)
Matteo Cimini (University of Massachusetts at Lowell, USA) In this paper, we introduce languages as first-class citizens as a sub-paradigm of language-oriented programming. In this approach, language definitions are in the context of a general purpose programming language with the same status as any other expression. In particular, language definitions are elevated to be run-time values, that can be assigned to variables, passed to functions, returned by functions, and inserted into lists, to name a few possibilities. This approach offers flexible features in the run-time creation and modification of languages, and may promote new idioms in language-oriented programming. As a proof of concept, we have designed and implemented lang-n-play, a functional language with languages as first-class citizens. We present the features of lang-n-play with an example, and show that they naturally enable dynamic programming scenarios. @InProceedings{SLE18p65, author = {Matteo Cimini}, title = {Languages as First-Class Citizens (Vision Paper)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {65--69}, doi = {10.1145/3276604.3276983}, year = {2018}, } Publisher's Version |
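A rough Python analogue of languages as run-time values (lang-n-play itself is a functional language; the representation below is invented for illustration): a language definition is an ordinary value that can be bound to a variable, passed to a function, and extended while the program runs.

```python
# A 'language' here is just a value holding evaluation rules, so it can be
# stored, passed around, and rebuilt at run time. This is not lang-n-play,
# only a sketch of the first-class-language idea.

def make_language(rules):
    def evaluate(expr):                      # expr: a number or ('op', arg1, arg2)
        if isinstance(expr, (int, float)):
            return expr
        op, *args = expr
        return rules[op](*[evaluate(a) for a in args])
    return {"rules": rules, "eval": evaluate}

arith = make_language({"add": lambda a, b: a + b})
print(arith["eval"](("add", 1, 2)))                      # 3

def extend(lang, name, fn):                              # languages are values:
    return make_language({**lang["rules"], name: fn})    # derive a new one from an old one

arith_mul = extend(arith, "mul", lambda a, b: a * b)
print(arith_mul["eval"](("mul", ("add", 1, 2), 4)))      # 12
```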
|
Combemale, Benoit |
SLE '18: "Fostering Metamodels and Grammars ..."
Fostering Metamodels and Grammars within a Dedicated Environment for HPC: The NabLab Environment (Tool Demo)
Benoît Lelandais, Marie-Pierre Oudot, and Benoit Combemale (CEA, France; DAM, France; DIF, France; University of Toulouse, France; Inria, France) Advanced and mature language workbenches have been proposed in the past decades to develop Domain-Specific Languages (DSL) and rich associated environments. They all come in various flavors, mostly depending on the underlying technological space (e.g., grammarware or modelware). However, when the time comes to start a new DSL project, it often comes with the choice of a unique technological space which later bounds the possible expected features. In this tool paper, we introduce NabLab, a full-fledged industrial environment for scientific computing and High Performance Computing (HPC), involving several metamodels and grammars. Beyond the description of an industrial experience of the development and use of tool-supported DSLs, we report in this paper our lessons learned, and demonstrate the benefits from usefully combining metamodels and grammars in an integrated environment. @InProceedings{SLE18p200, author = {Benoît Lelandais and Marie-Pierre Oudot and Benoit Combemale}, title = {Fostering Metamodels and Grammars within a Dedicated Environment for HPC: The NabLab Environment (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {200--204}, doi = {10.1145/3276604.3276620}, year = {2018}, } Publisher's Version SLE '18: "Modular Language Composition ..." Modular Language Composition for the Masses Manuel Leduc, Thomas Degueule, and Benoit Combemale (University of Rennes, France; Inria, France; CNRS, France; IRISA, France; CWI, Netherlands; University of Toulouse, France; IRIT, France) The goal of modular language development is to enable the definition of new languages as assemblies of pre-existing ones. Recent approaches in this area are plentiful but usually suffer from two main problems: either they do not support modular language composition both at the specification and implementation levels, or they require advanced knowledge of specific paradigms which hampers wide adoption in the industry. In this paper, we introduce a non-intrusive approach to modular development of language concerns with well-defined interfaces that can be composed modularly at the specification and implementation levels. We present an implementation of our approach atop the Eclipse Modeling Framework, namely Alex, an object-oriented meta-language for semantics definition and language composition. We evaluate Alex in the development of a new DSL for IoT systems modeling resulting from the composition of three independently defined languages (UML activity diagrams, Lua, and the OMG Interface Description Language). We evaluate the effort required to implement and compose these languages using Alex with regards to similar approaches of the literature. @InProceedings{SLE18p47, author = {Manuel Leduc and Thomas Degueule and Benoit Combemale}, title = {Modular Language Composition for the Masses}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {47--59}, doi = {10.1145/3276604.3276622}, year = {2018}, } Publisher's Version Artifacts Functional SLE '18: "Shape-Diverse DSLs: Languages ..." 
Shape-Diverse DSLs: Languages without Borders (Vision Paper) Fabien Coulon, Thomas Degueule, Tijs van der Storm, and Benoit Combemale (University of Toulouse, France; IRIT, France; Obeo, France; CWI, Netherlands; University of Groningen, Netherlands; Inria, France) Domain-Specific Languages (DSLs) manifest themselves in remarkably diverse shapes, ranging from internal DSLs embedded as a mere fluent API within a programming language, to external DSLs with dedicated syntax and tool support. Although different shapes have different pros and cons, combining them for a single language is problematic: language designers usually commit to a particular shape early in the design process, and it is hard to reconsider this choice later. In this new ideas paper, we envision a language engineering approach enabling (i) language users to manipulate language constructs in the most appropriate shape according to the task at hand, and (ii) language designers to combine the strengths of different technologies for a single DSL. We report on early experiments and lessons learned building our prototype approach to this problem. We illustrate its applicability in the engineering of a simple shape-diverse DSL implemented conjointly in Rascal, EMF, and Java. We hope that our initial contribution will raise the awareness of the community and encourage future research. @InProceedings{SLE18p215, author = {Fabien Coulon and Thomas Degueule and Tijs van der Storm and Benoit Combemale}, title = {Shape-Diverse DSLs: Languages without Borders (Vision Paper)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {215--219}, doi = {10.1145/3276604.3276623}, year = {2018}, } Publisher's Version Info |
|
Coulon, Fabien |
SLE '18: "Shape-Diverse DSLs: Languages ..."
Shape-Diverse DSLs: Languages without Borders (Vision Paper)
Fabien Coulon, Thomas Degueule, Tijs van der Storm, and Benoit Combemale (University of Toulouse, France; IRIT, France; Obeo, France; CWI, Netherlands; University of Groningen, Netherlands; Inria, France) Domain-Specific Languages (DSLs) manifest themselves in remarkably diverse shapes, ranging from internal DSLs embedded as a mere fluent API within a programming language, to external DSLs with dedicated syntax and tool support. Although different shapes have different pros and cons, combining them for a single language is problematic: language designers usually commit to a particular shape early in the design process, and it is hard to reconsider this choice later. In this new ideas paper, we envision a language engineering approach enabling (i) language users to manipulate language constructs in the most appropriate shape according to the task at hand, and (ii) language designers to combine the strengths of different technologies for a single DSL. We report on early experiments and lessons learned building our prototype approach to this problem. We illustrate its applicability in the engineering of a simple shape-diverse DSL implemented conjointly in Rascal, EMF, and Java. We hope that our initial contribution will raise the awareness of the community and encourage future research. @InProceedings{SLE18p215, author = {Fabien Coulon and Thomas Degueule and Tijs van der Storm and Benoit Combemale}, title = {Shape-Diverse DSLs: Languages without Borders (Vision Paper)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {215--219}, doi = {10.1145/3276604.3276623}, year = {2018}, } Publisher's Version Info |
|
Dalibor, Manuela |
SLE '18: "Deriving Fluent Internal Domain-Specific ..."
Deriving Fluent Internal Domain-Specific Languages from Grammars
Arvid Butting, Manuela Dalibor, Gerrit Leonhardt, Bernhard Rumpe, and Andreas Wortmann (RWTH Aachen University, Germany) A prime decision of engineering domain-specific languages (DSLs) is implementing these as external DSLs or internal DSLs. Agile language engineering benefits from easily switching between both shapes to provide rapidly developed prototypes before settling on a specific syntax. This switching, however, is rarely feasible due to the effort of re-implementing language tooling for both shapes. Current research in software language engineering focuses either on internal DSLs or external DSLs. We conceived a concept to automatically derive customizable internal DSLs from grammars that operate on the same abstract syntax as the external DSL. This supports reusing tooling (such as model checkers or code generators) between both shapes. We realized our concept with the MontiCore language workbench and Groovy as host language for internal DSLs. This concept is applicable to many grammar-based language definitions. @InProceedings{SLE18p187, author = {Arvid Butting and Manuela Dalibor and Gerrit Leonhardt and Bernhard Rumpe and Andreas Wortmann}, title = {Deriving Fluent Internal Domain-Specific Languages from Grammars}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {187--199}, doi = {10.1145/3276604.3276621}, year = {2018}, } Publisher's Version |
|
Degueule, Thomas |
SLE '18: "Constraint-based Run-time ..."
Constraint-based Run-time State Migration for Live Modeling
Ulyana Tikhonova, Jouke Stoel, Tijs van der Storm, and Thomas Degueule (CWI, Netherlands; Eindhoven University of Technology, Netherlands; University of Groningen, Netherlands) Live modeling enables modelers to incrementally update models as they are running and get immediate feedback about the impact of their changes. Changes introduced in a model may trigger inconsistencies between the model and its run-time state (e.g., deleting the current state in a statemachine); effectively requiring to migrate the run-time state to comply with the updated model. In this paper, we introduce an approach that enables to automatically migrate such run-time state based on declarative constraints defined by the language designer. We illustrate the approach using Nextep, a meta-modeling language for defining invariants and migration constraints on run-time state models. When a model changes, Nextep employs model finding techniques, backed by a solver, to automatically infer a new run-time model that satisfies the declared constraints. We apply Nextep to define migration strategies for two DSLs, and report on its expressiveness and performance. @InProceedings{SLE18p108, author = {Ulyana Tikhonova and Jouke Stoel and Tijs van der Storm and Thomas Degueule}, title = {Constraint-based Run-time State Migration for Live Modeling}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {108--120}, doi = {10.1145/3276604.3276611}, year = {2018}, } Publisher's Version Artifacts Functional SLE '18: "Modular Language Composition ..." Modular Language Composition for the Masses Manuel Leduc, Thomas Degueule, and Benoit Combemale (University of Rennes, France; Inria, France; CNRS, France; IRISA, France; CWI, Netherlands; University of Toulouse, France; IRIT, France) The goal of modular language development is to enable the definition of new languages as assemblies of pre-existing ones. Recent approaches in this area are plentiful but usually suffer from two main problems: either they do not support modular language composition both at the specification and implementation levels, or they require advanced knowledge of specific paradigms which hampers wide adoption in the industry. In this paper, we introduce a non-intrusive approach to modular development of language concerns with well-defined interfaces that can be composed modularly at the specification and implementation levels. We present an implementation of our approach atop the Eclipse Modeling Framework, namely Alex, an object-oriented meta-language for semantics definition and language composition. We evaluate Alex in the development of a new DSL for IoT systems modeling resulting from the composition of three independently defined languages (UML activity diagrams, Lua, and the OMG Interface Description Language). We evaluate the effort required to implement and compose these languages using Alex with regards to similar approaches of the literature. @InProceedings{SLE18p47, author = {Manuel Leduc and Thomas Degueule and Benoit Combemale}, title = {Modular Language Composition for the Masses}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {47--59}, doi = {10.1145/3276604.3276622}, year = {2018}, } Publisher's Version Artifacts Functional SLE '18: "Shape-Diverse DSLs: Languages ..." 
Shape-Diverse DSLs: Languages without Borders (Vision Paper) Fabien Coulon, Thomas Degueule, Tijs van der Storm, and Benoit Combemale (University of Toulouse, France; IRIT, France; Obeo, France; CWI, Netherlands; University of Groningen, Netherlands; Inria, France) Domain-Specific Languages (DSLs) manifest themselves in remarkably diverse shapes, ranging from internal DSLs embedded as a mere fluent API within a programming language, to external DSLs with dedicated syntax and tool support. Although different shapes have different pros and cons, combining them for a single language is problematic: language designers usually commit to a particular shape early in the design process, and it is hard to reconsider this choice later. In this new ideas paper, we envision a language engineering approach enabling (i) language users to manipulate language constructs in the most appropriate shape according to the task at hand, and (ii) language designers to combine the strengths of different technologies for a single DSL. We report on early experiments and lessons learned building our prototype approach to this problem. We illustrate its applicability in the engineering of a simple shape-diverse DSL implemented conjointly in Rascal, EMF, and Java. We hope that our initial contribution will raise the awareness of the community and encourage future research. @InProceedings{SLE18p215, author = {Fabien Coulon and Thomas Degueule and Tijs van der Storm and Benoit Combemale}, title = {Shape-Diverse DSLs: Languages without Borders (Vision Paper)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {215--219}, doi = {10.1145/3276604.3276623}, year = {2018}, } Publisher's Version Info |
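The constraint-based state-migration entry above can be pictured with a small, solver-free Python sketch. Nextep itself uses model finding backed by a solver; the state names and the single constraint here are made up. When a model edit invalidates the current run-time state, a replacement state satisfying the declared migration constraints is chosen.

```python
# Toy migration of a state machine's run-time state after a model edit.
# Real migration delegates the search to a solver; this linear scan only
# illustrates the shape of the problem.

def migrate(runtime_state, model_states, constraints):
    if runtime_state in model_states:
        return runtime_state                        # still consistent, keep it
    for candidate in model_states:                  # otherwise find any state that
        if all(c(candidate) for c in constraints):  # satisfies all constraints
            return candidate
    raise ValueError("no run-time state satisfies the migration constraints")

model_states = ["Idle", "Running"]                   # "Paused" was just deleted
constraints = [lambda s: s != "Running"]             # e.g. 'do not jump into Running'
print(migrate("Paused", model_states, constraints))  # -> 'Idle'
```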
|
De Lara, Juan |
SLE '18: "Analysing Meta-Model Product ..."
Analysing Meta-Model Product Lines
Esther Guerra, Juan de Lara, Marsha Chechik, and Rick Salay (Autonomous University of Madrid, Spain; University of Toronto, Canada) Model-driven engineering advocates the use of models to describe and automate many software development tasks. The syntax of modelling languages is defined by meta-models, making them essential artefacts. A combination of product line engineering methods and meta-models has been proposed to enable specification of modelling language variants, e.g., to describe a range of systems. However, there is a lack of techniques for ensuring syntactic correctness of all meta-models within a family (including their OCL constraints), and semantic correctness related to properties of individual instances of the different variants. The absence of verification methods at the product-line level can cause synthesis of ill-formed meta-models and problematic feature combinations whose effect at the instance level may go unnoticed. To attack this problem, we propose an approach to lifting both the meta-model syntax checking and the satisfiability checking of properties of individual meta-model instances, to the product-line level. We validate the approach via a prototype tool called Merlin, and report on several experiments that show the advantages of our method w.r.t. an enumerative analysis approach. @InProceedings{SLE18p160, author = {Esther Guerra and Juan de Lara and Marsha Chechik and Rick Salay}, title = {Analysing Meta-Model Product Lines}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {160--173}, doi = {10.1145/3276604.3276609}, year = {2018}, } Publisher's Version Info SLE '18: "Facet-Oriented Modelling: ..." Facet-Oriented Modelling: Open Objects for Model-Driven Engineering Juan de Lara, Esther Guerra, Jörg Kienzle, and Yanis Hattab (Autonomous University of Madrid, Spain; McGill University, Canada) Model-driven engineering (MDE) promotes models as the principal assets in software projects. Models are built using a modelling language whose syntax is defined by a metamodel. Hence, objects in models are typed by a metamodel class, and this typing relation is static as it is established at creation time and cannot be changed later. This way, objects in MDE are closed and fixed with respect to the type they conform to, the slots/properties they have, and the constraints they should obey. This hampers the reuse of model-related artefacts like model transformations, as well as the opportunistic or dynamic combination of metamodels. To alleviate this rigidity, we propose making model objects open so that they can acquire or drop so-called facets, each one contributing a type, slots and constraints to the object. Facets are defined by regular metamodels, hence being a lightweight extension of standard metamodelling. Facet metamodels may declare usage interfaces, and it is possible to specify laws that govern how facets are to be assigned to the instances of a metamodel. In this paper, we describe our proposal, report on an implementation, and illustrate scenarios where facets have advantages over other techniques. @InProceedings{SLE18p147, author = {Juan de Lara and Esther Guerra and Jörg Kienzle and Yanis Hattab}, title = {Facet-Oriented Modelling: Open Objects for Model-Driven Engineering}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {147--159}, doi = {10.1145/3276604.3276610}, year = {2018}, } Publisher's Version Info |
|
Denkers, Jasper |
SLE '18: "Migrating Custom DSL Implementations ..."
Migrating Custom DSL Implementations to a Language Workbench (Tool Demo)
Jasper Denkers, Louis van Gool, and Eelco Visser (Delft University of Technology, Netherlands; Océ, Netherlands) We present a tool architecture that supports migrating custom domain-specific language (DSL) implementations to a language workbench. We demonstrate an implementation of this architecture for models in the domains of defining component interfaces (IDL) and modeling system behavior (OIL) which are developed and used at a digital printer manufacturing company. Increasing complexity and the lack of DSL syntax and IDE support for existing implementations in Python based on XML syntax hindered their evolution and adoption. A reimplementation in Spoofax using modular language definition enables composition between IDL and OIL and introduces more concise DSL syntax and IDE support. The presented tool supports migrating to new implementations while being backward compatible with existing syntax and related tooling. @InProceedings{SLE18p205, author = {Jasper Denkers and Louis van Gool and Eelco Visser}, title = {Migrating Custom DSL Implementations to a Language Workbench (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {205--209}, doi = {10.1145/3276604.3276608}, year = {2018}, } Publisher's Version |
|
Erdweg, Sebastian |
SLE '18: "Declarative Specification ..."
Declarative Specification of Indentation Rules: A Tooling Perspective on Parsing and Pretty-Printing Layout-Sensitive Languages
Luís Eduardo de Souza Amorim, Michael J. Steindorfer, Sebastian Erdweg, and Eelco Visser (Delft University of Technology, Netherlands) In layout-sensitive languages, the indentation of an expression or statement can influence how a program is parsed. While some of these languages (e.g., Haskell and Python) have been widely adopted, there is little support for software language engineers in building tools for layout-sensitive languages. As a result, parsers, pretty-printers, program analyses, and refactoring tools often need to be handwritten, which decreases the maintainability and extensibility of these tools. Even state-of-the-art language workbenches have little support for layout-sensitive languages, restricting the development and prototyping of such languages. In this paper, we introduce a novel approach to declarative specification of layout-sensitive languages using layout declarations. Layout declarations are high-level specifications of indentation rules that abstract from low-level technicalities. We show how to derive an efficient layout-sensitive generalized parser and a corresponding pretty-printer automatically from a language specification with layout declarations. We validate our approach in a case-study using a syntax definition for the Haskell programming language, investigating the performance of the generated parser and the correctness of the generated pretty-printer against 22191 Haskell files. @InProceedings{SLE18p3, author = {Luís Eduardo de Souza Amorim and Michael J. Steindorfer and Sebastian Erdweg and Eelco Visser}, title = {Declarative Specification of Indentation Rules: A Tooling Perspective on Parsing and Pretty-Printing Layout-Sensitive Languages}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {3--15}, doi = {10.1145/3276604.3276607}, year = {2018}, } Publisher's Version Info Artifacts Functional |
|
Fors, Niklas |
SLE '18: "Continuous Model Validation ..."
Continuous Model Validation using Reference Attribute Grammars
Johannes Mey, René Schöne, Görel Hedin, Emma Söderberg, Thomas Kühn, Niklas Fors, Jesper Öqvist, and Uwe Aßmann (TU Dresden, Germany; Lund University, Sweden) Just like current software systems, models are characterised by increasing complexity and rate of change. Yet, these models only become useful if they can be continuously evaluated and validated. To achieve sufficiently low response times for large models, incremental analysis is required. Reference Attribute Grammars (RAGs) offer mechanisms to perform an incremental analysis efficiently using dynamic dependency tracking. However, not all features used in conceptual modelling are directly available in RAGs. In particular, support for non-containment model relations is only available through manual implementation. We present an approach to directly model uni- and bidirectional non-containment relations in RAGs and provide efficient means for navigating and editing them. This approach is evaluated using a scalable benchmark for incremental model editing and the JastAdd RAG system. Our work demonstrates the suitability of RAGs for validating complex and continuously changing models of current software systems. @InProceedings{SLE18p70, author = {Johannes Mey and René Schöne and Görel Hedin and Emma Söderberg and Thomas Kühn and Niklas Fors and Jesper Öqvist and Uwe Aßmann}, title = {Continuous Model Validation using Reference Attribute Grammars}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {70--82}, doi = {10.1145/3276604.3276616}, year = {2018}, } Publisher's Version Artifacts Functional |
|
Guelfi, Nicolas |
SLE '18: "Messir: A Text-First DSL-Based ..."
Messir: A Text-First DSL-Based Approach for UML Requirements Engineering (Tool Demo)
Benoît Ries, Alfredo Capozucca, and Nicolas Guelfi (University of Luxembourg, Luxembourg) This tool paper presents the design and tool-support of Messir, an approach centered on textual domain-specific languages supported by our open-source UML requirements engineering tool, named Excalibur. The novelty of our approach is the actual integration in a single workbench (Excalibur) of textual DSLs richly covering the requirements and analysis phases, i.e., improved use-cases, environment, conceptual and operations models; and the read-only visualisation of the requirements with UML-compliant views; and the generation of scientific requirements analysis documents in LaTeX; and the formal simulation of test cases requirements. We designed our Messir language, with a grammar-based approach generating a textual editor, using the Xtext framework as an Eclipse plugin. Messir DSL’s static semantics is defined as a set of validation rules guiding end-users through the requirements analysis phase. Messir DSL’s semantics is given as a semi-automatic translation to Prolog code. We also generate, from the requirements model elements, read-only graphical views (using the Sirius Eclipse plugin) as well as a complete requirements analysis document in LaTeX. This approach and tool have been used as a requirements engineering educational tool in several bachelor and master semesters. @InProceedings{SLE18p103, author = {Benoît Ries and Alfredo Capozucca and Nicolas Guelfi}, title = {Messir: A Text-First DSL-Based Approach for UML Requirements Engineering (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {103--107}, doi = {10.1145/3276604.3276614}, year = {2018}, } Publisher's Version Info Artifacts Functional |
|
Guerra, Esther |
SLE '18: "Analysing Meta-Model Product ..."
Analysing Meta-Model Product Lines
Esther Guerra, Juan de Lara, Marsha Chechik, and Rick Salay (Autonomous University of Madrid, Spain; University of Toronto, Canada) Model-driven engineering advocates the use of models to describe and automate many software development tasks. The syntax of modelling languages is defined by meta-models, making them essential artefacts. A combination of product line engineering methods and meta-models has been proposed to enable specification of modelling language variants, e.g., to describe a range of systems. However, there is a lack of techniques for ensuring syntactic correctness of all meta-models within a family (including their OCL constraints), and semantic correctness related to properties of individual instances of the different variants. The absence of verification methods at the product-line level can cause synthesis of ill-formed meta-models and problematic feature combinations whose effect at the instance level may go unnoticed. To attack this problem, we propose an approach to lifting both the meta-model syntax checking and the satisfiability checking of properties of individual meta-model instances, to the product-line level. We validate the approach via a prototype tool called Merlin, and report on several experiments that show the advantages of our method w.r.t. an enumerative analysis approach. @InProceedings{SLE18p160, author = {Esther Guerra and Juan de Lara and Marsha Chechik and Rick Salay}, title = {Analysing Meta-Model Product Lines}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {160--173}, doi = {10.1145/3276604.3276609}, year = {2018}, } Publisher's Version Info SLE '18: "Facet-Oriented Modelling: ..." Facet-Oriented Modelling: Open Objects for Model-Driven Engineering Juan de Lara, Esther Guerra, Jörg Kienzle, and Yanis Hattab (Autonomous University of Madrid, Spain; McGill University, Canada) Model-driven engineering (MDE) promotes models as the principal assets in software projects. Models are built using a modelling language whose syntax is defined by a metamodel. Hence, objects in models are typed by a metamodel class, and this typing relation is static as it is established at creation time and cannot be changed later. This way, objects in MDE are closed and fixed with respect to the type they conform to, the slots/properties they have, and the constraints they should obey. This hampers the reuse of model-related artefacts like model transformations, as well as the opportunistic or dynamic combination of metamodels. To alleviate this rigidity, we propose making model objects open so that they can acquire or drop so-called facets, each one contributing a type, slots and constraints to the object. Facets are defined by regular metamodels, hence being a lightweight extension of standard metamodelling. Facet metamodels may declare usage interfaces, and it is possible to specify laws that govern how facets are to be assigned to the instances of a metamodel. In this paper, we describe our proposal, report on an implementation, and illustrate scenarios where facets have advantages over other techniques. @InProceedings{SLE18p147, author = {Juan de Lara and Esther Guerra and Jörg Kienzle and Yanis Hattab}, title = {Facet-Oriented Modelling: Open Objects for Model-Driven Engineering}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {147--159}, doi = {10.1145/3276604.3276610}, year = {2018}, } Publisher's Version Info |
|
Harkes, Daco C. |
SLE '18: "Migrating Business Logic to ..."
Migrating Business Logic to an Incremental Computing DSL: A Case Study
Daco C. Harkes, Elmer van Chastelet, and Eelco Visser (Delft University of Technology, Netherlands) To provide empirical evidence to what extent migration of business logic to an incremental computing language (ICL) is useful, we report on a case study on a learning management system. Our contribution is to analyze a real-life project, how migrating business logic to an ICL affects information system validatability, performance, and development effort. We find that the migrated code has better validatability; it is straightforward to establish that a program ‘does the right thing’. Moreover, the performance is better than the previous hand-written incremental computing solution. The effort spent on modeling business logic is reduced, but integrating that logic in the application and tuning performance takes considerable effort. Thus, the ICL separates the concerns of business logic and performance, but does not reduce effort. @InProceedings{SLE18p83, author = {Daco C. Harkes and Elmer van Chastelet and Eelco Visser}, title = {Migrating Business Logic to an Incremental Computing DSL: A Case Study}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {83--96}, doi = {10.1145/3276604.3276617}, year = {2018}, } Publisher's Version |
|
Hattab, Yanis |
SLE '18: "Facet-Oriented Modelling: ..."
Facet-Oriented Modelling: Open Objects for Model-Driven Engineering
Juan de Lara, Esther Guerra, Jörg Kienzle, and Yanis Hattab (Autonomous University of Madrid, Spain; McGill University, Canada) Model-driven engineering (MDE) promotes models as the principal assets in software projects. Models are built using a modelling language whose syntax is defined by a metamodel. Hence, objects in models are typed by a metamodel class, and this typing relation is static as it is established at creation time and cannot be changed later. This way, objects in MDE are closed and fixed with respect to the type they conform to, the slots/properties they have, and the constraints they should obey. This hampers the reuse of model-related artefacts like model transformations, as well as the opportunistic or dynamic combination of metamodels. To alleviate this rigidity, we propose making model objects open so that they can acquire or drop so-called facets, each one contributing a type, slots and constraints to the object. Facets are defined by regular metamodels, hence being a lightweight extension of standard metamodelling. Facet metamodels may declare usage interfaces, and it is possible to specify laws that govern how facets are to be assigned to the instances of a metamodel. In this paper, we describe our proposal, report on an implementation, and illustrate scenarios where facets have advantages over other techniques. @InProceedings{SLE18p147, author = {Juan de Lara and Esther Guerra and Jörg Kienzle and Yanis Hattab}, title = {Facet-Oriented Modelling: Open Objects for Model-Driven Engineering}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {147--159}, doi = {10.1145/3276604.3276610}, year = {2018}, } Publisher's Version Info |
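A toy Python rendering of the facet idea (not the authors' implementation): an open object acquires and drops facets at run time, and each facet contributes slots and a constraint the object must satisfy.

```python
# 'Open objects' sketch: facets are attached and detached dynamically,
# each bringing its own slots and a validity constraint.

class OpenObject:
    def __init__(self, name):
        self.name = name
        self.facets = {}          # facet name -> (slots dict, constraint)

    def acquire(self, facet, slots, constraint=lambda slots: True):
        self.facets[facet] = (slots, constraint)

    def drop(self, facet):
        self.facets.pop(facet, None)

    def check(self):
        return all(constraint(slots) for slots, constraint in self.facets.values())

person = OpenObject("ada")
person.acquire("Employee", {"salary": 4200}, lambda s: s["salary"] > 0)
person.acquire("Student", {"credits": 30})
print(person.check())        # True: all facet constraints hold
person.drop("Student")       # the object sheds a facet, keeping the others
```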
|
Hedin, Görel |
SLE '18: "Continuous Model Validation ..."
Continuous Model Validation using Reference Attribute Grammars
Johannes Mey, René Schöne, Görel Hedin, Emma Söderberg, Thomas Kühn, Niklas Fors, Jesper Öqvist, and Uwe Aßmann (TU Dresden, Germany; Lund University, Sweden) Just like current software systems, models are characterised by increasing complexity and rate of change. Yet, these models only become useful if they can be continuously evaluated and validated. To achieve sufficiently low response times for large models, incremental analysis is required. Reference Attribute Grammars (RAGs) offer mechanisms to perform an incremental analysis efficiently using dynamic dependency tracking. However, not all features used in conceptual modelling are directly available in RAGs. In particular, support for non-containment model relations is only available through manual implementation. We present an approach to directly model uni- and bidirectional non-containment relations in RAGs and provide efficient means for navigating and editing them. This approach is evaluated using a scalable benchmark for incremental model editing and the JastAdd RAG system. Our work demonstrates the suitability of RAGs for validating complex and continuously changing models of current software systems. @InProceedings{SLE18p70, author = {Johannes Mey and René Schöne and Görel Hedin and Emma Söderberg and Thomas Kühn and Niklas Fors and Jesper Öqvist and Uwe Aßmann}, title = {Continuous Model Validation using Reference Attribute Grammars}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {70--82}, doi = {10.1145/3276604.3276616}, year = {2018}, } Publisher's Version Artifacts Functional |
|
Jansen, Nico |
SLE '18: "Translating Grammars to Accurate ..."
Translating Grammars to Accurate Metamodels
Arvid Butting, Nico Jansen, Bernhard Rumpe, and Andreas Wortmann (RWTH Aachen University, Germany) There is a software language engineering gap between metamodel-based languages and grammar-based languages. Grammars can support integrated definition of concrete syntax and abstract syntax, which facilitates processing models, but usually prevents reusing the variety of language tools operating on Ecore metamodels (such as editors, interpreters, debuggers, etc.). Existing work on translating grammars to Ecore metamodels features very cursory translations only, which requires re-engineering intricacies natural to grammars for the metamodels again. We conceived a translation from an EBNF-like syntax to Ecore metamodels that considers the grammars’ intricacies. This translation is realized as a fully automated toolchain from grammars into Ecore & OCL using the language workbench MontiCore. Using this translation enables grammar-based languages to leverage the benefits of Ecore-compatible language tools while supporting natural definition of concrete and abstract syntax. @InProceedings{SLE18p174, author = {Arvid Butting and Nico Jansen and Bernhard Rumpe and Andreas Wortmann}, title = {Translating Grammars to Accurate Metamodels}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {174--186}, doi = {10.1145/3276604.3276605}, year = {2018}, } Publisher's Version |
|
Jeannerod, Nicolas |
SLE '18: "Morbig: A Static Parser for ..."
Morbig: A Static Parser for POSIX Shell
Yann Régis-Gianas, Nicolas Jeannerod, and Ralf Treinen (IRIF, France; University of Paris Diderot, France; CNRS, France; Inria, France; ENS, France) The POSIX shell language defies conventional wisdom of compiler construction on several levels: The shell language was not designed for static parsing, but with an intertwining of syntactic analysis and execution by expansion in mind. Token recognition cannot be specified by regular expressions, lexical analysis depends on the parsing context and the evaluation context, and the shell grammar given in the specification is ambiguous. Besides, the unorthodox design choices of the shell language fit badly in the usual specification languages used to describe other programming languages. This makes the standard usage of LEX and YACC as a pipeline inadequate for the implementation of a parser for POSIX shell. The existing implementations of shell parsers are complex and use low-level character-level parsing code which is difficult to relate to the POSIX specification. We find it hard to trust such parsers, especially when using them for writing automatic verification tools for shell scripts. This paper offers an overview of the technical difficulties related to the syntactic analysis of the POSIX shell language. It also describes how we have resolved these difficulties using advanced parsing techniques (namely speculative parsing, parser state introspection, context-dependent lexical analysis and longest-prefix parsing) while keeping the implementation at a sufficiently high level of abstraction so that experts can check that the POSIX standard is respected. The resulting tool, called MORBIG, is an open-source static parser for a well-defined and realistic subset of the POSIX shell language. Its implementation crucially relies on the purity and incrementality of LR(1) parsers generated by MENHIR, a parser generator for OCaml. @InProceedings{SLE18p29, author = {Yann Régis-Gianas and Nicolas Jeannerod and Ralf Treinen}, title = {Morbig: A Static Parser for POSIX Shell}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {29--41}, doi = {10.1145/3276604.3276615}, year = {2018}, } Publisher's Version Artifacts Functional |
|
Johnstone, Adrian |
SLE '18: "GLL Parsing with Flexible ..."
GLL Parsing with Flexible Combinators
L. Thomas van Binsbergen, Elizabeth Scott, and Adrian Johnstone (Royal Holloway University of London, UK) At SLE in 2014, Ridge presented the P3 combinator library with which parsers can be developed for left-recursive, non-deterministic and ambiguous grammars. A combinator expression in P3 yields a binarised grammar reflecting the expression's structure. The grammar is given to an underlying, generalised parsing procedure computing all derivations. In this paper we present a combinator library with a similar architecture to P3, adjusting it to avoid grammar binarisation. Avoiding binarisation has a significant positive effect on the running times of the underlying parsing procedure, which we demonstrate using real-world grammars. Binarisation is avoided by restricting the applicability of combinators, resulting in combinator expressions closely resembling BNF fragments. Usability is recovered by defining coercions that automatically convert expressions where necessary. As the underlying parsing procedure, we use a purely functional variant of generalised top-down (GLL) parsing. @InProceedings{SLE18p16, author = {L. Thomas van Binsbergen and Elizabeth Scott and Adrian Johnstone}, title = {GLL Parsing with Flexible Combinators}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {16--28}, doi = {10.1145/3276604.3276618}, year = {2018}, } Publisher's Version Artifacts Functional |
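To make "combinator expressions closely resembling BNF fragments" concrete, here is a toy Python recogniser with an n-ary sequence combinator. It is not GLL (no left recursion or ambiguity handling) and only illustrates how keeping sequences n-ary avoids binarising the grammar.

```python
# Toy recogniser: each combinator maps (input, position) to the list of
# positions reachable after a match. The n-ary seq keeps one node per BNF
# body instead of nesting binary pairs, which is the paper's key point.

def term(t):
    return lambda s, i: [i + 1] if s[i:i + 1] == t else []

def seq(*parsers):                    # n-ary sequence: no binarisation
    def parse(s, i):
        positions = [i]
        for p in parsers:
            positions = [j for pos in positions for j in p(s, pos)]
        return positions
    return parse

def alt(*parsers):
    return lambda s, i: [j for p in parsers for j in p(s, i)]

# S ::= "a" S "b" | "a" "b"
def S(s, i):
    return alt(seq(term("a"), S, term("b")), seq(term("a"), term("b")))(s, i)

print(len("aabb") in S("aabb", 0))    # True: "aabb" is in the language
print(len("abb") in S("abb", 0))      # False
```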
|
Kienzle, Jörg |
SLE '18: "Facet-Oriented Modelling: ..."
Facet-Oriented Modelling: Open Objects for Model-Driven Engineering
Juan de Lara, Esther Guerra, Jörg Kienzle, and Yanis Hattab (Autonomous University of Madrid, Spain; McGill University, Canada) Model-driven engineering (MDE) promotes models as the principal assets in software projects. Models are built using a modelling language whose syntax is defined by a metamodel. Hence, objects in models are typed by a metamodel class, and this typing relation is static as it is established at creation time and cannot be changed later. This way, objects in MDE are closed and fixed with respect to the type they conform to, the slots/properties they have, and the constraints they should obey. This hampers the reuse of model-related artefacts like model transformations, as well as the opportunistic or dynamic combination of metamodels. To alleviate this rigidity, we propose making model objects open so that they can acquire or drop so-called facets, each one contributing a type, slots and constraints to the object. Facets are defined by regular metamodels, hence being a lightweight extension of standard metamodelling. Facet metamodels may declare usage interfaces, and it is possible to specify laws that govern how facets are to be assigned to the instances of a metamodel. In this paper, we describe our proposal, report on an implementation, and illustrate scenarios where facets have advantages over other techniques. @InProceedings{SLE18p147, author = {Juan de Lara and Esther Guerra and Jörg Kienzle and Yanis Hattab}, title = {Facet-Oriented Modelling: Open Objects for Model-Driven Engineering}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {147--159}, doi = {10.1145/3276604.3276610}, year = {2018}, } Publisher's Version Info |
|
Kühn, Thomas |
SLE '18: "Continuous Model Validation ..."
Continuous Model Validation using Reference Attribute Grammars
Johannes Mey, René Schöne, Görel Hedin, Emma Söderberg, Thomas Kühn, Niklas Fors, Jesper Öqvist, and Uwe Aßmann (TU Dresden, Germany; Lund University, Sweden) Just like current software systems, models are characterised by increasing complexity and rate of change. Yet, these models only become useful if they can be continuously evaluated and validated. To achieve sufficiently low response times for large models, incremental analysis is required. Reference Attribute Grammars (RAGs) offer mechanisms to perform an incremental analysis efficiently using dynamic dependency tracking. However, not all features used in conceptual modelling are directly available in RAGs. In particular, support for non-containment model relations is only available through manual implementation. We present an approach to directly model uni- and bidirectional non-containment relations in RAGs and provide efficient means for navigating and editing them. This approach is evaluated using a scalable benchmark for incremental model editing and the JastAdd RAG system. Our work demonstrates the suitability of RAGs for validating complex and continuously changing models of current software systems. @InProceedings{SLE18p70, author = {Johannes Mey and René Schöne and Görel Hedin and Emma Söderberg and Thomas Kühn and Niklas Fors and Jesper Öqvist and Uwe Aßmann}, title = {Continuous Model Validation using Reference Attribute Grammars}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {70--82}, doi = {10.1145/3276604.3276616}, year = {2018}, } Publisher's Version Artifacts Functional |
|
Leduc, Manuel |
SLE '18: "Modular Language Composition ..."
Modular Language Composition for the Masses
Manuel Leduc, Thomas Degueule, and Benoit Combemale (University of Rennes, France; Inria, France; CNRS, France; IRISA, France; CWI, Netherlands; University of Toulouse, France; IRIT, France) The goal of modular language development is to enable the definition of new languages as assemblies of pre-existing ones. Recent approaches in this area are plentiful but usually suffer from two main problems: either they do not support modular language composition both at the specification and implementation levels, or they require advanced knowledge of specific paradigms which hampers wide adoption in the industry. In this paper, we introduce a non-intrusive approach to modular development of language concerns with well-defined interfaces that can be composed modularly at the specification and implementation levels. We present an implementation of our approach atop the Eclipse Modeling Framework, namely Alex, an object-oriented meta-language for semantics definition and language composition. We evaluate Alex in the development of a new DSL for IoT systems modeling resulting from the composition of three independently defined languages (UML activity diagrams, Lua, and the OMG Interface Description Language). We evaluate the effort required to implement and compose these languages using Alex with regards to similar approaches of the literature. @InProceedings{SLE18p47, author = {Manuel Leduc and Thomas Degueule and Benoit Combemale}, title = {Modular Language Composition for the Masses}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {47--59}, doi = {10.1145/3276604.3276622}, year = {2018}, } Publisher's Version Artifacts Functional |
|
Lelandais, Benoît |
SLE '18: "Fostering Metamodels and Grammars ..."
Fostering Metamodels and Grammars within a Dedicated Environment for HPC: The NabLab Environment (Tool Demo)
Benoît Lelandais, Marie-Pierre Oudot, and Benoit Combemale (CEA, France; DAM, France; DIF, France; University of Toulouse, France; Inria, France) Advanced and mature language workbenches have been proposed in the past decades to develop Domain-Specific Languages (DSL) and rich associated environments. They all come in various flavors, mostly depending on the underlying technological space (e.g., grammarware or modelware). However, when the time comes to start a new DSL project, it often comes with the choice of a unique technological space which later bounds the possible expected features. In this tool paper, we introduce NabLab, a full-fledged industrial environment for scientific computing and High Performance Computing (HPC), involving several metamodels and grammars. Beyond the description of an industrial experience of the development and use of tool-supported DSLs, we report in this paper our lessons learned, and demonstrate the benefits from usefully combining metamodels and grammars in an integrated environment. @InProceedings{SLE18p200, author = {Benoît Lelandais and Marie-Pierre Oudot and Benoit Combemale}, title = {Fostering Metamodels and Grammars within a Dedicated Environment for HPC: The NabLab Environment (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {200--204}, doi = {10.1145/3276604.3276620}, year = {2018}, } Publisher's Version |
|
Leonhardt, Gerrit |
SLE '18: "Deriving Fluent Internal Domain-Specific ..."
Deriving Fluent Internal Domain-Specific Languages from Grammars
Arvid Butting, Manuela Dalibor, Gerrit Leonhardt, Bernhard Rumpe, and Andreas Wortmann (RWTH Aachen University, Germany) A prime decision of engineering domain-specific languages (DSLs) is implementing these as external DSLs or internal DSLs. Agile language engineering benefits from easily switching between both shapes to provide rapidly developed prototypes before settling on a specific syntax. This switching, however, is rarely feasible due to the effort of re-implementing language tooling for both shapes. Current research in software language engineering focuses either on internal DSLs or external DSLs. We conceived a concept to automatically derive customizable internal DSLs from grammars that operate on the same abstract syntax as the external DSL. This supports reusing tooling (such as model checkers or code generators) between both shapes. We realized our concept with the MontiCore language workbench and Groovy as host language for internal DSLs. This concept is applicable to many grammar-based language definitions. @InProceedings{SLE18p187, author = {Arvid Butting and Manuela Dalibor and Gerrit Leonhardt and Bernhard Rumpe and Andreas Wortmann}, title = {Deriving Fluent Internal Domain-Specific Languages from Grammars}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {187--199}, doi = {10.1145/3276604.3276621}, year = {2018}, } Publisher's Version |
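The core idea of the paper above is that an internal, fluent shape can build the same abstract syntax as the external grammar, so downstream tooling is shared. The following hand-written Python sketch illustrates that idea only; it is not MontiCore/Groovy output, and the Automaton/State grammar is invented.

```python
# Hedged sketch: a fluent internal DSL whose chained calls mirror a grammar production
# and build the same abstract syntax tree that an external parser would.
#
# Grammar (hypothetical):  Automaton = "automaton" Name "{" State* "}" ;
#                          State     = "state" Name ["initial"] ;

class State:
    def __init__(self, name, initial=False):
        self.name, self.initial = name, initial

class Automaton:
    def __init__(self, name):
        self.name, self.states = name, []

class AutomatonBuilder:
    """Fluent shape of the same abstract syntax."""
    def __init__(self, name):
        self._ast = Automaton(name)

    def state(self, name, initial=False):
        self._ast.states.append(State(name, initial))
        return self            # returning self is what makes the API fluent

    def build(self):
        return self._ast

# External-DSL text and internal-DSL code denote the same AST:
#   automaton Door { state Closed initial  state Open }
door = (AutomatonBuilder("Door")
        .state("Closed", initial=True)
        .state("Open")
        .build())
print(door.name, [s.name for s in door.states])   # Door ['Closed', 'Open']
```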
|
McKeever, Steve |
SLE '18: "The Next 700 Unit of Measurement ..."
The Next 700 Unit of Measurement Checkers
Oscar Bennich-Björkman and Steve McKeever (Uppsala University, Sweden) In scientific applications, physical quantities and units of measurement are used regularly. If the inherent incompatibility between these units is not handled properly it can lead to major, sometimes catastrophic, problems. Although the risk of a miscalculation is high and the cost equally so, almost none of the major programming languages has support for physical quantities. Instead, scientific code developers often make their own tools or rely on external libraries to help them spot or prevent these mistakes. We employed a systematic approach to examine and analyse all available physical quantity open-source libraries. Approximately 3700 search results across seven repository hosting sites were condensed into a list of 82 of the most comprehensive and well-developed libraries currently available. In this group, 30 different programming languages are represented. Out of these 82 libraries, 38 have been updated within the last two years. These 38 are summarised in this paper as they are deemed the most relevant. The conclusion we draw from these results is that there is clearly too much diversity, duplicated efforts, and a lack of code sharing and harmonisation which discourages use and adoption. @InProceedings{SLE18p121, author = {Oscar Bennich-Björkman and Steve McKeever}, title = {The Next 700 Unit of Measurement Checkers}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {121--132}, doi = {10.1145/3276604.3276613}, year = {2018}, } Publisher's Version |
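The kind of check the surveyed libraries provide can be sketched in a few lines of Python (illustrative only; real libraries additionally handle unit scaling, prefixes, conversions, and full unit systems): quantities carry a dimension vector, addition requires identical dimensions, and multiplication and division combine them.

```python
# Minimal dimensional-analysis sketch: dim = (metre, kilogram, second) exponents.

class Quantity:
    def __init__(self, value, dim):
        self.value, self.dim = value, dim

    def __add__(self, other):
        if self.dim != other.dim:
            raise TypeError(f"incompatible dimensions: {self.dim} vs {other.dim}")
        return Quantity(self.value + other.value, self.dim)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dim, other.dim)))

    def __truediv__(self, other):
        return Quantity(self.value / other.value,
                        tuple(a - b for a, b in zip(self.dim, other.dim)))

speed = Quantity(100.0, (1, 0, 0)) / Quantity(9.58, (0, 0, 1))   # metres / seconds
print(round(speed.value, 2), speed.dim)    # 10.44 (1, 0, -1), i.e. m/s

try:
    Quantity(3.0, (1, 0, 0)) + Quantity(2.0, (0, 0, 1))          # length + time
except TypeError as e:
    print("rejected:", e)                  # the miscalculation is caught, not silently computed
```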
|
Merino, Mauricio Verano |
SLE '18: "Bacatá: A Language Parametric ..."
Bacatá: A Language Parametric Notebook Generator (Tool Demo)
Mauricio Verano Merino, Jurgen Vinju, and Tijs van der Storm (Eindhoven University of Technology, Netherlands; CWI, Netherlands; University of Groningen, Netherlands) Interactive notebooks allow people to communicate and collaborate through a single rich document that might include live code, multimedia, computed results, and documentation, which is persisted as a whole for reproducibility. Notebooks are currently being used extensively in domains such as data science, data journalism, and machine learning. However, constructing a notebook interface for a new language requires a lot of effort. In this tool paper, we present Bacatá, a language parametric notebook generator for domain-specific languages (DSL) based on the Jupyter framework. Bacatá is designed so that language engineers may reuse existing language components (such as parsers, code generators, interpreters, etc.) as much as possible. Moreover, we explain the design of Bacatá and how DSL notebooks can be generated with minimum effort in the context of the Rascal meta programming system and language workbench. @InProceedings{SLE18p210, author = {Mauricio Verano Merino and Jurgen Vinju and Tijs van der Storm}, title = {Bacatá: A Language Parametric Notebook Generator (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {210--214}, doi = {10.1145/3276604.3276981}, year = {2018}, } Publisher's Version |
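For readers unfamiliar with the Jupyter side of this, the following hedged Python sketch shows the kernel hook that a notebook generator ultimately targets. Bacatá itself generates kernels from Rascal language definitions; here, `my_dsl.eval_program` is a hypothetical stand-in for a reused interpreter component.

```python
# Hedged sketch of a minimal Jupyter kernel that forwards each notebook cell to an
# existing DSL interpreter. `my_dsl` is hypothetical; the ipykernel API calls are real.

from ipykernel.kernelbase import Kernel

class DSLKernel(Kernel):
    implementation = 'my-dsl'
    implementation_version = '0.1'
    language = 'my-dsl'
    language_version = '0.1'
    language_info = {'name': 'my-dsl', 'mimetype': 'text/plain', 'file_extension': '.dsl'}
    banner = 'My DSL notebook interface'

    def do_execute(self, code, silent, store_history=True,
                   user_expressions=None, allow_stdin=False):
        import my_dsl                       # hypothetical: the existing interpreter
        output = my_dsl.eval_program(code)  # reuse it unchanged for each cell
        if not silent:
            self.send_response(self.iopub_socket, 'stream',
                               {'name': 'stdout', 'text': str(output)})
        return {'status': 'ok', 'execution_count': self.execution_count,
                'payload': [], 'user_expressions': {}}

if __name__ == '__main__':
    from ipykernel.kernelapp import IPKernelApp
    IPKernelApp.launch_instance(kernel_class=DSLKernel)
```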
|
Mey, Johannes |
SLE '18: "Continuous Model Validation ..."
Continuous Model Validation using Reference Attribute Grammars
Johannes Mey, René Schöne, Görel Hedin, Emma Söderberg, Thomas Kühn, Niklas Fors, Jesper Öqvist, and Uwe Aßmann (TU Dresden, Germany; Lund University, Sweden) Just like current software systems, models are characterised by increasing complexity and rate of change. Yet, these models only become useful if they can be continuously evaluated and validated. To achieve sufficiently low response times for large models, incremental analysis is required. Reference Attribute Grammars (RAGs) offer mechanisms to perform an incremental analysis efficiently using dynamic dependency tracking. However, not all features used in conceptual modelling are directly available in RAGs. In particular, support for non-containment model relations is only available through manual implementation. We present an approach to directly model uni- and bidirectional non-containment relations in RAGs and provide efficient means for navigating and editing them. This approach is evaluated using a scalable benchmark for incremental model editing and the JastAdd RAG system. Our work demonstrates the suitability of RAGs for validating complex and continuously changing models of current software systems. @InProceedings{SLE18p70, author = {Johannes Mey and René Schöne and Görel Hedin and Emma Söderberg and Thomas Kühn and Niklas Fors and Jesper Öqvist and Uwe Aßmann}, title = {Continuous Model Validation using Reference Attribute Grammars}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {70--82}, doi = {10.1145/3276604.3276616}, year = {2018}, } Publisher's Version Artifacts Functional |
|
Öqvist, Jesper |
SLE '18: "Continuous Model Validation ..."
Continuous Model Validation using Reference Attribute Grammars
Johannes Mey, René Schöne, Görel Hedin, Emma Söderberg, Thomas Kühn, Niklas Fors, Jesper Öqvist, and Uwe Aßmann (TU Dresden, Germany; Lund University, Sweden) Just like current software systems, models are characterised by increasing complexity and rate of change. Yet, these models only become useful if they can be continuously evaluated and validated. To achieve sufficiently low response times for large models, incremental analysis is required. Reference Attribute Grammars (RAGs) offer mechanisms to perform an incremental analysis efficiently using dynamic dependency tracking. However, not all features used in conceptual modelling are directly available in RAGs. In particular, support for non-containment model relations is only available through manual implementation. We present an approach to directly model uni- and bidirectional non-containment relations in RAGs and provide efficient means for navigating and editing them. This approach is evaluated using a scalable benchmark for incremental model editing and the JastAdd RAG system. Our work demonstrates the suitability of RAGs for validating complex and continuously changing models of current software systems. @InProceedings{SLE18p70, author = {Johannes Mey and René Schöne and Görel Hedin and Emma Söderberg and Thomas Kühn and Niklas Fors and Jesper Öqvist and Uwe Aßmann}, title = {Continuous Model Validation using Reference Attribute Grammars}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {70--82}, doi = {10.1145/3276604.3276616}, year = {2018}, } Publisher's Version Artifacts Functional |
|
Oudot, Marie-Pierre |
SLE '18: "Fostering Metamodels and Grammars ..."
Fostering Metamodels and Grammars within a Dedicated Environment for HPC: The NabLab Environment (Tool Demo)
Benoît Lelandais, Marie-Pierre Oudot, and Benoit Combemale (CEA, France; DAM, France; DIF, France; University of Toulouse, France; Inria, France) Advanced and mature language workbenches have been proposed in the past decades to develop Domain-Specific Languages (DSL) and rich associated environments. They all come in various flavors, mostly depending on the underlying technological space (e.g., grammarware or modelware). However, when the time comes to start a new DSL project, it often comes with the choice of a unique technological space which later bounds the possible expected features. In this tool paper, we introduce NabLab, a full-fledged industrial environment for scientific computing and High Performance Computing (HPC), involving several metamodels and grammars. Beyond the description of an industrial experience of the development and use of tool-supported DSLs, we report in this paper our lessons learned, and demonstrate the benefits from usefully combining metamodels and grammars in an integrated environment. @InProceedings{SLE18p200, author = {Benoît Lelandais and Marie-Pierre Oudot and Benoit Combemale}, title = {Fostering Metamodels and Grammars within a Dedicated Environment for HPC: The NabLab Environment (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {200--204}, doi = {10.1145/3276604.3276620}, year = {2018}, } Publisher's Version |
|
Racordon, Dimitri |
SLE '18: "A Practical Type System for ..."
A Practical Type System for Safe Aliasing
Dimitri Racordon and Didier Buchs (University of Geneva, Switzerland) Aliasing is a vital concept of programming, but it comes with a plethora of challenging issues, such as the problems related to race safety. This has motivated years of research, and promising solutions such as ownership or linear types have found their way into modern programming languages. Unfortunately, most current approaches are restrictive. In particular, they often enforce a single-writer constraint, which prohibits the creation of mutable self-referential structures. While this constraint is often indispensable in the context of preemptive multithreading, it can be worked around in the case of single-threaded programs. With the recent resurgence of cooperative multitasking, where processes voluntarily share control over a single execution thread, this appears to be an interesting trade-off. In this paper, we propose a type system that relaxes the usual single-writer constraint for single-threaded programs, without sacrificing race safety properties. We present it in the form of a simple reference-based language, for which we provide a formal semantics, as well as an interpreter. @InProceedings{SLE18p133, author = {Dimitri Racordon and Didier Buchs}, title = {A Practical Type System for Safe Aliasing}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {133--146}, doi = {10.1145/3276604.3276612}, year = {2018}, } Publisher's Version Info Artifacts Functional |
|
Régis-Gianas, Yann |
SLE '18: "Morbig: A Static Parser for ..."
Morbig: A Static Parser for POSIX Shell
Yann Régis-Gianas, Nicolas Jeannerod, and Ralf Treinen (IRIF, France; University of Paris Diderot, France; CNRS, France; Inria, France; ENS, France) The POSIX shell language defies conventional wisdom of compiler construction on several levels: The shell language was not designed for static parsing, but with an intertwining of syntactic analysis and execution by expansion in mind. Token recognition cannot be specified by regular expressions, lexical analysis depends on the parsing context and the evaluation context, and the shell grammar given in the specification is ambiguous. Besides, the unorthodox design choices of the shell language fit badly in the usual specification languages used to describe other programming languages. This makes the standard usage of LEX and YACC as a pipeline inadequate for the implementation of a parser for POSIX shell. The existing implementations of shell parsers are complex and use low-level character-level parsing code which is difficult to relate to the POSIX specification. We find it hard to trust such parsers, especially when using them for writing automatic verification tools for shell scripts. This paper offers an overview of the technical difficulties related to the syntactic analysis of the POSIX shell language. It also describes how we have resolved these difficulties using advanced parsing techniques (namely speculative parsing, parser state introspection, context-dependent lexical analysis and longest-prefix parsing) while keeping the implementation at a sufficiently high level of abstraction so that experts can check that the POSIX standard is respected. The resulting tool, called MORBIG, is an open-source static parser for a well-defined and realistic subset of the POSIX shell language. Its implementation crucially relies on the purity and incrementality of LR(1) parsers generated by MENHIR, a parser generator for OCaml. @InProceedings{SLE18p29, author = {Yann Régis-Gianas and Nicolas Jeannerod and Ralf Treinen}, title = {Morbig: A Static Parser for POSIX Shell}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {29--41}, doi = {10.1145/3276604.3276615}, year = {2018}, } Publisher's Version Artifacts Functional |
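One of the difficulties listed above, token recognition that depends on the parsing context, can be illustrated with a toy Python classifier (not Morbig's implementation, which is an OCaml LR(1) parser generated with Menhir): the same word is a reserved word in command position but an ordinary word elsewhere, so no context-free lexer placed in front of the parser can classify it.

```python
# Toy illustration: in shell, whether a word is a reserved word depends on where the
# parser currently is, so lexing cannot be specified by regular expressions alone.

RESERVED = {"for", "do", "done", "if", "then", "fi", "while"}

def tokenize(words):
    tokens, command_position = [], True
    for w in words:
        if w == ";":
            tokens.append(("OP", w))
            command_position = True          # after ';' we expect a command again
        elif command_position and w in RESERVED:
            tokens.append(("RESERVED", w))   # keyword only because of the context
        else:
            tokens.append(("WORD", w))
            command_position = False         # subsequent words are arguments
    return tokens

# The word "for" is classified once as a keyword and once as a plain argument:
print(tokenize("for x in 1 2 ; do echo for ; done".split()))
```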
|
Ries, Benoît |
SLE '18: "Messir: A Text-First DSL-Based ..."
Messir: A Text-First DSL-Based Approach for UML Requirements Engineering (Tool Demo)
Benoît Ries, Alfredo Capozucca, and Nicolas Guelfi (University of Luxembourg, Luxembourg) This tool paper presents the design and tool-support of Messir, an approach centered on textual domain-specific languages supported by our open-source UML requirements engineering tool, named Excalibur. The novelty of our approach is the integration, in a single workbench (Excalibur), of: textual DSLs richly covering the requirements and analysis phases (i.e., improved use-cases, environment, conceptual, and operations models); read-only visualisation of the requirements with UML-compliant views; generation of scientific requirements analysis documents in LaTeX; and formal simulation of test case requirements. We designed our Messir language, with a grammar-based approach generating a textual editor, using the Xtext framework as an Eclipse plugin. Messir DSL's static semantics is defined as a set of validation rules guiding end-users through the requirements analysis phase. Messir DSL's semantics is given as a semi-automatic translation to Prolog code. We also generate, from the requirements model elements, read-only graphical views (using the Sirius Eclipse plugin) as well as a complete requirements analysis document in LaTeX. This approach and tool have been used as a requirements engineering educational tool in several bachelor and master semesters. @InProceedings{SLE18p103, author = {Benoît Ries and Alfredo Capozucca and Nicolas Guelfi}, title = {Messir: A Text-First DSL-Based Approach for UML Requirements Engineering (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {103--107}, doi = {10.1145/3276604.3276614}, year = {2018}, } Publisher's Version Info Artifacts Functional |
|
Rinard, Martin C. |
SLE '18: "A New Approach for Software ..."
A New Approach for Software Correctness and Reliability (Keynote)
Martin C. Rinard (Massachusetts Institute of Technology, USA) Software correctness and security have been central issues in the field for decades. Researchers have developed a wide range of approaches to these problems, none of which has solved these problems to date. In this talk I consider two very different approaches to solving correctness and security problems, failure-oblivious computing and domain-specific languages. I will discuss how these approaches (as well as others) interact with the cognitive limitations and available technical skills of the human population of software developers that currently must be part of any solution for it to be successful. I’ll conclude by outlining a new approach that, by deploying automated programming language technology in an appropriately targeted way, may interact more productively with the characteristics of the developer population as a whole. @InProceedings{SLE18p1, author = {Martin C. Rinard}, title = {A New Approach for Software Correctness and Reliability (Keynote)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {1--2}, doi = {10.1145/3276604.3284957}, year = {2018}, } Publisher's Version |
|
Rumpe, Bernhard |
SLE '18: "Deriving Fluent Internal Domain-Specific ..."
Deriving Fluent Internal Domain-Specific Languages from Grammars
Arvid Butting, Manuela Dalibor, Gerrit Leonhardt, Bernhard Rumpe, and Andreas Wortmann (RWTH Aachen University, Germany) A prime decision of engineering domain-specific languages (DSLs) is implementing these as external DSLs or internal DSLs. Agile language engineering benefits from easily switching between both shapes to provide rapidly developed prototypes before settling on a specific syntax. This switching, however, is rarely feasible due to the effort of re-implementing language tooling for both shapes. Current research in software language engineering focuses either on internal DSLs or external DSLs. We conceived a concept to automatically derive customizable internal DSLs from grammars that operate on the same abstract syntax as the external DSL. This supports reusing tooling (such as model checkers or code generators) between both shapes. We realized our concept with the MontiCore language workbench and Groovy as host language for internal DSLs. This concept is applicable to many grammar-based language definitions. @InProceedings{SLE18p187, author = {Arvid Butting and Manuela Dalibor and Gerrit Leonhardt and Bernhard Rumpe and Andreas Wortmann}, title = {Deriving Fluent Internal Domain-Specific Languages from Grammars}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {187--199}, doi = {10.1145/3276604.3276621}, year = {2018}, } Publisher's Version SLE '18: "Translating Grammars to Accurate ..." Translating Grammars to Accurate Metamodels Arvid Butting, Nico Jansen, Bernhard Rumpe, and Andreas Wortmann (RWTH Aachen University, Germany) There is a software language engineering gap between metamodel-based languages and grammar-based languages. Grammars can support integrated definition of concrete syntax and abstract syntax, which facilitates processing models, but usually prevents reusing the variety of language tools operating on Ecore metamodels (such as editors, interpreters, debuggers, etc.). Existing work on translating grammars to Ecore metamodels features very cursory translations only, which requires re-engineering intricacies natural to grammars for the metamodels again. We conceived a translation from an EBNF-like syntax to Ecore metamodels that considers the grammars’ intricacies. This translation is realized as a fully automated toolchain from grammars into Ecore & OCL using the language workbench MontiCore. Using this translation enables grammar-based languages to leverage the benefits of Ecore-compatible language tools while supporting natural definition of concrete and abstract syntax. @InProceedings{SLE18p174, author = {Arvid Butting and Nico Jansen and Bernhard Rumpe and Andreas Wortmann}, title = {Translating Grammars to Accurate Metamodels}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {174--186}, doi = {10.1145/3276604.3276605}, year = {2018}, } Publisher's Version |
|
Sakharov, Alexander |
SLE '18: "Input-Driven Regular Expressions ..."
Input-Driven Regular Expressions (Vision Paper)
Alexander Sakharov (Synstretch, USA) Regular expressions are extended by splitting the terminals into left brackets, right brackets, and neutral terminals. These extended regular expressions define a superset of regular languages. Their languages are parsed in linear time in the size of the input. The addition of annotations to these regular expressions results in more detailed parse trees. @InProceedings{SLE18p42, author = {Alexander Sakharov}, title = {Input-Driven Regular Expressions (Vision Paper)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {42--46}, doi = {10.1145/3276604.3276606}, year = {2018}, } Publisher's Version |
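The input-driven idea can be sketched as follows (illustrative only; this is not the paper's construction): because each terminal's class, left bracket, right bracket, or neutral, alone determines the stack action, a single left-to-right pass over the input suffices, which is what makes linear-time recognition possible.

```python
# Toy recogniser for an input-driven language: the symbol class dictates push/pop/skip.

LEFT, RIGHT, NEUTRAL = {"(", "["}, {")", "]"}, {"a", "b", ","}
MATCH = {")": "(", "]": "["}

def recognise(s):
    stack = []
    for c in s:                      # single pass: time is linear in len(s)
        if c in LEFT:
            stack.append(c)          # the symbol itself decides: push ...
        elif c in RIGHT:
            if not stack or stack.pop() != MATCH[c]:
                return False         # ... or pop and check ...
        elif c not in NEUTRAL:
            return False             # unknown terminal
        # ... or do nothing for neutral symbols
    return not stack

print(recognise("(a,[b,a])"))   # True: properly nested
print(recognise("(a,[b,a)]"))   # False: brackets cross
```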
|
Salay, Rick |
SLE '18: "Analysing Meta-Model Product ..."
Analysing Meta-Model Product Lines
Esther Guerra, Juan de Lara, Marsha Chechik, and Rick Salay (Autonomous University of Madrid, Spain; University of Toronto, Canada) Model-driven engineering advocates the use of models to describe and automate many software development tasks. The syntax of modelling languages is defined by meta-models, making them essential artefacts. A combination of product line engineering methods and meta-models has been proposed to enable specification of modelling language variants, e.g., to describe a range of systems. However, there is a lack of techniques for ensuring syntactic correctness of all meta-models within a family (including their OCL constraints), and semantic correctness related to properties of individual instances of the different variants. The absence of verification methods at the product-line level can cause synthesis of ill-formed meta-models and problematic feature combinations whose effect at the instance level may go unnoticed. To attack this problem, we propose an approach to lifting both the meta-model syntax checking and the satisfiability checking of properties of individual meta-model instances, to the product-line level. We validate the approach via a prototype tool called Merlin, and report on several experiments that show the advantages of our method w.r.t. an enumerative analysis approach. @InProceedings{SLE18p160, author = {Esther Guerra and Juan de Lara and Marsha Chechik and Rick Salay}, title = {Analysing Meta-Model Product Lines}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {160--173}, doi = {10.1145/3276604.3276609}, year = {2018}, } Publisher's Version Info |
|
Schöne, René |
SLE '18: "Continuous Model Validation ..."
Continuous Model Validation using Reference Attribute Grammars
Johannes Mey, René Schöne, Görel Hedin, Emma Söderberg, Thomas Kühn, Niklas Fors, Jesper Öqvist, and Uwe Aßmann (TU Dresden, Germany; Lund University, Sweden) Just like current software systems, models are characterised by increasing complexity and rate of change. Yet, these models only become useful if they can be continuously evaluated and validated. To achieve sufficiently low response times for large models, incremental analysis is required. Reference Attribute Grammars (RAGs) offer mechanisms to perform an incremental analysis efficiently using dynamic dependency tracking. However, not all features used in conceptual modelling are directly available in RAGs. In particular, support for non-containment model relations is only available through manual implementation. We present an approach to directly model uni- and bidirectional non-containment relations in RAGs and provide efficient means for navigating and editing them. This approach is evaluated using a scalable benchmark for incremental model editing and the JastAdd RAG system. Our work demonstrates the suitability of RAGs for validating complex and continuously changing models of current software systems. @InProceedings{SLE18p70, author = {Johannes Mey and René Schöne and Görel Hedin and Emma Söderberg and Thomas Kühn and Niklas Fors and Jesper Öqvist and Uwe Aßmann}, title = {Continuous Model Validation using Reference Attribute Grammars}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {70--82}, doi = {10.1145/3276604.3276616}, year = {2018}, } Publisher's Version Artifacts Functional |
|
Scott, Elizabeth |
SLE '18: "GLL Parsing with Flexible ..."
GLL Parsing with Flexible Combinators
L. Thomas van Binsbergen, Elizabeth Scott, and Adrian Johnstone (Royal Holloway University of London, UK) At SLE in 2014, Ridge presented the P3 combinator library with which parsers can be developed for left-recursive, non-deterministic and ambiguous grammars. A combinator expression in P3 yields a binarised grammar reflecting the expression's structure. The grammar is given to an underlying, generalised parsing procedure computing all derivations. In this paper we present a combinator library with a similar architecture to P3, adjusting it to avoid grammar binarisation. Avoiding binarisation has a significant positive effect on the running times of the underlying parsing procedure, which we demonstrate using real-world grammars. Binarisation is avoided by restricting the applicability of combinators, resulting in combinator expressions closely resembling BNF fragments. Usability is recovered by defining coercions that automatically convert expressions where necessary. As the underlying parsing procedure, we use a purely functional variant of generalised top-down (GLL) parsing. @InProceedings{SLE18p16, author = {L. Thomas van Binsbergen and Elizabeth Scott and Adrian Johnstone}, title = {GLL Parsing with Flexible Combinators}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {16--28}, doi = {10.1145/3276604.3276618}, year = {2018}, } Publisher's Version Artifacts Functional |
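The interface idea above, flat BNF-like combinators plus coercions from bare strings to terminal parsers, can be sketched in Python. Note that this toy uses plain recursive descent rather than the paper's generalised GLL machinery, so it handles neither left recursion nor ambiguity; it only shows how an n-ary sequence combinator avoids the nesting that causes binarisation.

```python
# Hedged sketch: flat, BNF-like sequence and alternative combinators with string coercion.

def term(t):
    def parse(inp, pos):
        return [pos + 1] if inp[pos:pos + 1] == [t] else []
    return parse

def coerce(p):
    return term(p) if isinstance(p, str) else p   # bare strings become terminal parsers

def seq(*parts):                                  # flat n-ary sequence, like a BNF rule body
    parts = [coerce(p) for p in parts]
    def parse(inp, pos):
        positions = [pos]
        for p in parts:
            positions = [q for r in positions for q in p(inp, r)]
        return positions
    return parse

def alt(*alternatives):                           # flat n-ary choice
    alternatives = [coerce(p) for p in alternatives]
    def parse(inp, pos):
        return [q for p in alternatives for q in p(inp, pos)]
    return parse

# Stmt ::= "if" Expr "then" Stmt | "print" Expr     (Expr is just the token "e" here)
stmt = lambda inp, pos: alt(seq("if", "e", "then", stmt), seq("print", "e"))(inp, pos)

tokens = "if e then print e".split()
print(len(tokens) in stmt(tokens, 0))   # True: the whole input is a Stmt
```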
|
Söderberg, Emma |
SLE '18: "Continuous Model Validation ..."
Continuous Model Validation using Reference Attribute Grammars
Johannes Mey, René Schöne, Görel Hedin, Emma Söderberg, Thomas Kühn, Niklas Fors, Jesper Öqvist, and Uwe Aßmann (TU Dresden, Germany; Lund University, Sweden) Just like current software systems, models are characterised by increasing complexity and rate of change. Yet, these models only become useful if they can be continuously evaluated and validated. To achieve sufficiently low response times for large models, incremental analysis is required. Reference Attribute Grammars (RAGs) offer mechanisms to perform an incremental analysis efficiently using dynamic dependency tracking. However, not all features used in conceptual modelling are directly available in RAGs. In particular, support for non-containment model relations is only available through manual implementation. We present an approach to directly model uni- and bidirectional non-containment relations in RAGs and provide efficient means for navigating and editing them. This approach is evaluated using a scalable benchmark for incremental model editing and the JastAdd RAG system. Our work demonstrates the suitability of RAGs for validating complex and continuously changing models of current software systems. @InProceedings{SLE18p70, author = {Johannes Mey and René Schöne and Görel Hedin and Emma Söderberg and Thomas Kühn and Niklas Fors and Jesper Öqvist and Uwe Aßmann}, title = {Continuous Model Validation using Reference Attribute Grammars}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {70--82}, doi = {10.1145/3276604.3276616}, year = {2018}, } Publisher's Version Artifacts Functional |
|
Steindorfer, Michael J. |
SLE '18: "Declarative Specification ..."
Declarative Specification of Indentation Rules: A Tooling Perspective on Parsing and Pretty-Printing Layout-Sensitive Languages
Luís Eduardo de Souza Amorim, Michael J. Steindorfer, Sebastian Erdweg, and Eelco Visser (Delft University of Technology, Netherlands) In layout-sensitive languages, the indentation of an expression or statement can influence how a program is parsed. While some of these languages (e.g., Haskell and Python) have been widely adopted, there is little support for software language engineers in building tools for layout-sensitive languages. As a result, parsers, pretty-printers, program analyses, and refactoring tools often need to be handwritten, which decreases the maintainability and extensibility of these tools. Even state-of-the-art language workbenches have little support for layout-sensitive languages, restricting the development and prototyping of such languages. In this paper, we introduce a novel approach to declarative specification of layout-sensitive languages using layout declarations. Layout declarations are high-level specifications of indentation rules that abstract from low-level technicalities. We show how to derive an efficient layout-sensitive generalized parser and a corresponding pretty-printer automatically from a language specification with layout declarations. We validate our approach in a case-study using a syntax definition for the Haskell programming language, investigating the performance of the generated parser and the correctness of the generated pretty-printer against 22191 Haskell files. @InProceedings{SLE18p3, author = {Luís Eduardo de Souza Amorim and Michael J. Steindorfer and Sebastian Erdweg and Eelco Visser}, title = {Declarative Specification of Indentation Rules: A Tooling Perspective on Parsing and Pretty-Printing Layout-Sensitive Languages}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {3--15}, doi = {10.1145/3276604.3276607}, year = {2018}, } Publisher's Version Info Artifacts Functional |
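As a rough illustration of what a high-level indentation rule expresses (the notation below is invented and is not the paper's layout-declaration syntax, which is part of a syntax definition and drives a generalised parser), consider checking that the statements of a block are aligned with each other and indented past the keyword introducing the block.

```python
# Toy checker for two common layout constraints on positioned tokens:
#   align(statements)           -> all statements start in the same column
#   indent(keyword, statements) -> every statement starts right of the keyword

def check_align_and_indent(keyword_col, statement_cols):
    aligned  = len(set(statement_cols)) <= 1
    indented = all(c > keyword_col for c in statement_cols)
    return aligned and indented

# do              <- keyword at column 0
#   putStrLn "a"  <- statements at column 2: aligned and indented => accepted
#   putStrLn "b"
print(check_align_and_indent(0, [2, 2]))    # True

# do
#   putStrLn "a"
#  putStrLn "b"   <- misaligned => rejected (indentation would change how this parses)
print(check_align_and_indent(0, [2, 1]))    # False
```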
|
Stoel, Jouke |
SLE '18: "Constraint-based Run-time ..."
Constraint-based Run-time State Migration for Live Modeling
Ulyana Tikhonova, Jouke Stoel, Tijs van der Storm, and Thomas Degueule (CWI, Netherlands; Eindhoven University of Technology, Netherlands; University of Groningen, Netherlands) Live modeling enables modelers to incrementally update models as they are running and get immediate feedback about the impact of their changes. Changes introduced in a model may trigger inconsistencies between the model and its run-time state (e.g., deleting the current state in a statemachine); effectively requiring to migrate the run-time state to comply with the updated model. In this paper, we introduce an approach that enables to automatically migrate such run-time state based on declarative constraints defined by the language designer. We illustrate the approach using Nextep, a meta-modeling language for defining invariants and migration constraints on run-time state models. When a model changes, Nextep employs model finding techniques, backed by a solver, to automatically infer a new run-time model that satisfies the declared constraints. We apply Nextep to define migration strategies for two DSLs, and report on its expressiveness and performance. @InProceedings{SLE18p108, author = {Ulyana Tikhonova and Jouke Stoel and Tijs van der Storm and Thomas Degueule}, title = {Constraint-based Run-time State Migration for Live Modeling}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {108--120}, doi = {10.1145/3276604.3276611}, year = {2018}, } Publisher's Version Artifacts Functional |
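The overall shape of constraint-based run-time state migration can be sketched as follows. Nextep delegates the search to a model finder backed by a solver; the toy below simply enumerates candidates, and the model, invariant, and preference are invented for illustration.

```python
# Hedged sketch: after a live model edit, infer a new run-time state that satisfies the
# declared invariants, preferring to keep the old run-time state when it is still legal.

def migrate(model, runtime, invariants):
    candidates = sorted(model["states"],
                        key=lambda s: 0 if s == runtime["current"] else 1)
    for current in candidates:
        candidate = {"current": current}
        if all(inv(model, candidate) for inv in invariants):
            return candidate
    raise ValueError("no consistent run-time state exists")

# Invariant: the current run-time state must reference a state that exists in the model.
invariants = [lambda m, r: r["current"] in m["states"]]

model   = {"states": ["Idle", "Running", "Done"]}
runtime = {"current": "Running"}

model["states"].remove("Running")                 # live edit: delete the current state
print(migrate(model, runtime, invariants))        # {'current': 'Idle'}: a legal replacement
```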
|
Strömbäck, Filip |
SLE '18: "Storm: A Language Platform ..."
Storm: A Language Platform for Interacting and Extensible Languages (Tool Demo)
Filip Strömbäck (Linköping University, Sweden) The ability to extend programming languages with domain-specific concepts is becoming an essential technology for developing complex software. However, many domain-specific languages are implemented in a way that interacts poorly with the host language. There are a number of tools that aim to improve the situation by simplifying the creation of domain-specific languages, and allow easier interactions between the host language and the domain-specific language. However, many of these tools are limited to a single host language, and rarely allow extending the language used for language creation. To improve the situation, we created the language platform Storm, which aims to make the creation and usage of multiple extensible languages easy and seamless. This is accomplished by means of a shared, standardized namespace and in-process code generation, which gives Storm a high degree of extensibility, making it possible to extend or replace the built-in languages at will. @InProceedings{SLE18p60, author = {Filip Strömbäck}, title = {Storm: A Language Platform for Interacting and Extensible Languages (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {60--64}, doi = {10.1145/3276604.3276982}, year = {2018}, } Publisher's Version Info Artifacts Functional |
|
Tikhonova, Ulyana |
SLE '18: "Constraint-based Run-time ..."
Constraint-based Run-time State Migration for Live Modeling
Ulyana Tikhonova, Jouke Stoel, Tijs van der Storm, and Thomas Degueule (CWI, Netherlands; Eindhoven University of Technology, Netherlands; University of Groningen, Netherlands) Live modeling enables modelers to incrementally update models as they are running and get immediate feedback about the impact of their changes. Changes introduced in a model may trigger inconsistencies between the model and its run-time state (e.g., deleting the current state in a statemachine); effectively requiring to migrate the run-time state to comply with the updated model. In this paper, we introduce an approach that enables to automatically migrate such run-time state based on declarative constraints defined by the language designer. We illustrate the approach using Nextep, a meta-modeling language for defining invariants and migration constraints on run-time state models. When a model changes, Nextep employs model finding techniques, backed by a solver, to automatically infer a new run-time model that satisfies the declared constraints. We apply Nextep to define migration strategies for two DSLs, and report on its expressiveness and performance. @InProceedings{SLE18p108, author = {Ulyana Tikhonova and Jouke Stoel and Tijs van der Storm and Thomas Degueule}, title = {Constraint-based Run-time State Migration for Live Modeling}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {108--120}, doi = {10.1145/3276604.3276611}, year = {2018}, } Publisher's Version Artifacts Functional |
|
Treinen, Ralf |
SLE '18: "Morbig: A Static Parser for ..."
Morbig: A Static Parser for POSIX Shell
Yann Régis-Gianas, Nicolas Jeannerod, and Ralf Treinen (IRIF, France; University of Paris Diderot, France; CNRS, France; Inria, France; ENS, France) The POSIX shell language defies conventional wisdom of compiler construction on several levels: The shell language was not designed for static parsing, but with an intertwining of syntactic analysis and execution by expansion in mind. Token recognition cannot be specified by regular expressions, lexical analysis depends on the parsing context and the evaluation context, and the shell grammar given in the specification is ambiguous. Besides, the unorthodox design choices of the shell language fit badly in the usual specification languages used to describe other programming languages. This makes the standard usage of LEX and YACC as a pipeline inadequate for the implementation of a parser for POSIX shell. The existing implementations of shell parsers are complex and use low-level character-level parsing code which is difficult to relate to the POSIX specification. We find it hard to trust such parsers, especially when using them for writing automatic verification tools for shell scripts. This paper offers an overview of the technical difficulties related to the syntactic analysis of the POSIX shell language. It also describes how we have resolved these difficulties using advanced parsing techniques (namely speculative parsing, parser state introspection, context-dependent lexical analysis and longest-prefix parsing) while keeping the implementation at a sufficiently high level of abstraction so that experts can check that the POSIX standard is respected. The resulting tool, called MORBIG, is an open-source static parser for a well-defined and realistic subset of the POSIX shell language. Its implementation crucially relies on the purity and incrementality of LR(1) parsers generated by MENHIR, a parser generator for OCaml. @InProceedings{SLE18p29, author = {Yann Régis-Gianas and Nicolas Jeannerod and Ralf Treinen}, title = {Morbig: A Static Parser for POSIX Shell}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {29--41}, doi = {10.1145/3276604.3276615}, year = {2018}, } Publisher's Version Artifacts Functional |
|
Van Binsbergen, L. Thomas |
SLE '18: "GLL Parsing with Flexible ..."
GLL Parsing with Flexible Combinators
L. Thomas van Binsbergen, Elizabeth Scott, and Adrian Johnstone (Royal Holloway University of London, UK) At SLE in 2014, Ridge presented the P3 combinator library with which parsers can be developed for left-recursive, non-deterministic and ambiguous grammars. A combinator expression in P3 yields a binarised grammar reflecting the expression's structure. The grammar is given to an underlying, generalised parsing procedure computing all derivations. In this paper we present a combinator library with a similar architecture to P3, adjusting it to avoid grammar binarisation. Avoiding binarisation has a significant positive effect on the running times of the underlying parsing procedure, which we demonstrate using real-world grammars. Binarisation is avoided by restricting the applicability of combinators, resulting in combinator expressions closely resembling BNF fragments. Usability is recovered by defining coercions that automatically convert expressions where necessary. As the underlying parsing procedure, we use a purely functional variant of generalised top-down (GLL) parsing. @InProceedings{SLE18p16, author = {L. Thomas van Binsbergen and Elizabeth Scott and Adrian Johnstone}, title = {GLL Parsing with Flexible Combinators}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {16--28}, doi = {10.1145/3276604.3276618}, year = {2018}, } Publisher's Version Artifacts Functional |
|
Van Chastelet, Elmer |
SLE '18: "Migrating Business Logic to ..."
Migrating Business Logic to an Incremental Computing DSL: A Case Study
Daco C. Harkes, Elmer van Chastelet, and Eelco Visser (Delft University of Technology, Netherlands) To provide empirical evidence to what extent migration of business logic to an incremental computing language (ICL) is useful, we report on a case study on a learning management system. Our contribution is to analyze a real-life project, how migrating business logic to an ICL affects information system validatability, performance, and development effort. We find that the migrated code has better validatability; it is straightforward to establish that a program ‘does the right thing’. Moreover, the performance is better than the previous hand-written incremental computing solution. The effort spent on modeling business logic is reduced, but integrating that logic in the application and tuning performance takes considerable effort. Thus, the ICL separates the concerns of business logic and performance, but does not reduce effort. @InProceedings{SLE18p83, author = {Daco C. Harkes and Elmer van Chastelet and Eelco Visser}, title = {Migrating Business Logic to an Incremental Computing DSL: A Case Study}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {83--96}, doi = {10.1145/3276604.3276617}, year = {2018}, } Publisher's Version |
|
Van der Storm, Tijs |
SLE '18: "Constraint-based Run-time ..."
Constraint-based Run-time State Migration for Live Modeling
Ulyana Tikhonova, Jouke Stoel, Tijs van der Storm, and Thomas Degueule (CWI, Netherlands; Eindhoven University of Technology, Netherlands; University of Groningen, Netherlands) Live modeling enables modelers to incrementally update models as they are running and get immediate feedback about the impact of their changes. Changes introduced in a model may trigger inconsistencies between the model and its run-time state (e.g., deleting the current state in a statemachine); effectively requiring to migrate the run-time state to comply with the updated model. In this paper, we introduce an approach that enables to automatically migrate such run-time state based on declarative constraints defined by the language designer. We illustrate the approach using Nextep, a meta-modeling language for defining invariants and migration constraints on run-time state models. When a model changes, Nextep employs model finding techniques, backed by a solver, to automatically infer a new run-time model that satisfies the declared constraints. We apply Nextep to define migration strategies for two DSLs, and report on its expressiveness and performance. @InProceedings{SLE18p108, author = {Ulyana Tikhonova and Jouke Stoel and Tijs van der Storm and Thomas Degueule}, title = {Constraint-based Run-time State Migration for Live Modeling}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {108--120}, doi = {10.1145/3276604.3276611}, year = {2018}, } Publisher's Version Artifacts Functional SLE '18: "Bacatá: A Language Parametric ..." Bacatá: A Language Parametric Notebook Generator (Tool Demo) Mauricio Verano Merino, Jurgen Vinju, and Tijs van der Storm (Eindhoven University of Technology, Netherlands; CWI, Netherlands; University of Groningen, Netherlands) Interactive notebooks allow people to communicate and collaborate through a single rich document that might include live code, multimedia, computed results, and documentation, which is persisted as a whole for reproducibility. Notebooks are currently being used extensively in domains such as data science, data journalism, and machine learning. However, constructing a notebook interface for a new language requires a lot of effort. In this tool paper, we present Bacatá, a language parametric notebook generator for domain-specific languages (DSL) based on the Jupyter framework. Bacatá is designed so that language engineers may reuse existing language components (such as parsers, code generators, interpreters, etc.) as much as possible. Moreover, we explain the design of Bacatá and how DSL notebooks can be generated with minimum effort in the context of the Rascal meta programming system and language workbench. @InProceedings{SLE18p210, author = {Mauricio Verano Merino and Jurgen Vinju and Tijs van der Storm}, title = {Bacatá: A Language Parametric Notebook Generator (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {210--214}, doi = {10.1145/3276604.3276981}, year = {2018}, } Publisher's Version SLE '18: "Shape-Diverse DSLs: Languages ..." Shape-Diverse DSLs: Languages without Borders (Vision Paper) Fabien Coulon, Thomas Degueule, Tijs van der Storm, and Benoit Combemale (University of Toulouse, France; IRIT, France; Obeo, France; CWI, Netherlands; University of Groningen, Netherlands; Inria, France) Domain-Specific Languages (DSLs) manifest themselves in remarkably diverse shapes, ranging from internal DSLs embedded as a mere fluent API within a programming language, to external DSLs with dedicated syntax and tool support. 
Although different shapes have different pros and cons, combining them for a single language is problematic: language designers usually commit to a particular shape early in the design process, and it is hard to reconsider this choice later. In this new ideas paper, we envision a language engineering approach enabling (i) language users to manipulate language constructs in the most appropriate shape according to the task at hand, and (ii) language designers to combine the strengths of different technologies for a single DSL. We report on early experiments and lessons learned building our prototype approach to this problem. We illustrate its applicability in the engineering of a simple shape-diverse DSL implemented conjointly in Rascal, EMF, and Java. We hope that our initial contribution will raise the awareness of the community and encourage future research. @InProceedings{SLE18p215, author = {Fabien Coulon and Thomas Degueule and Tijs van der Storm and Benoit Combemale}, title = {Shape-Diverse DSLs: Languages without Borders (Vision Paper)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {215--219}, doi = {10.1145/3276604.3276623}, year = {2018}, } Publisher's Version Info |
|
Van Gool, Louis |
SLE '18: "Migrating Custom DSL Implementations ..."
Migrating Custom DSL Implementations to a Language Workbench (Tool Demo)
Jasper Denkers, Louis van Gool, and Eelco Visser (Delft University of Technology, Netherlands; Océ, Netherlands) We present a tool architecture that supports migrating custom domain-specific language (DSL) implementations to a language workbench. We demonstrate an implementation of this architecture for models in the domains of defining component interfaces (IDL) and modeling system behavior (OIL) which are developed and used at a digital printer manufacturing company. Increasing complexity and the lack of DSL syntax and IDE support for existing implementations in Python based on XML syntax hindered their evolution and adoption. A reimplementation in Spoofax using modular language definition enables composition between IDL and OIL and introduces more concise DSL syntax and IDE support. The presented tool supports migrating to new implementations while being backward compatible with existing syntax and related tooling. @InProceedings{SLE18p205, author = {Jasper Denkers and Louis van Gool and Eelco Visser}, title = {Migrating Custom DSL Implementations to a Language Workbench (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {205--209}, doi = {10.1145/3276604.3276608}, year = {2018}, } Publisher's Version |
|
Vinju, Jurgen |
SLE '18: "Bacatá: A Language Parametric ..."
Bacatá: A Language Parametric Notebook Generator (Tool Demo)
Mauricio Verano Merino, Jurgen Vinju, and Tijs van der Storm (Eindhoven University of Technology, Netherlands; CWI, Netherlands; University of Groningen, Netherlands) Interactive notebooks allow people to communicate and collaborate through a single rich document that might include live code, multimedia, computed results, and documentation, which is persisted as a whole for reproducibility. Notebooks are currently being used extensively in domains such as data science, data journalism, and machine learning. However, constructing a notebook interface for a new language requires a lot of effort. In this tool paper, we present Bacatá, a language parametric notebook generator for domain-specific languages (DSL) based on the Jupyter framework. Bacatá is designed so that language engineers may reuse existing language components (such as parsers, code generators, interpreters, etc.) as much as possible. Moreover, we explain the design of Bacatá and how DSL notebooks can be generated with minimum effort in the context of the Rascal meta programming system and language workbench. @InProceedings{SLE18p210, author = {Mauricio Verano Merino and Jurgen Vinju and Tijs van der Storm}, title = {Bacatá: A Language Parametric Notebook Generator (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {210--214}, doi = {10.1145/3276604.3276981}, year = {2018}, } Publisher's Version |
|
Visser, Eelco |
SLE '18: "Declarative Specification ..."
Declarative Specification of Indentation Rules: A Tooling Perspective on Parsing and Pretty-Printing Layout-Sensitive Languages
Luís Eduardo de Souza Amorim, Michael J. Steindorfer, Sebastian Erdweg, and Eelco Visser (Delft University of Technology, Netherlands) In layout-sensitive languages, the indentation of an expression or statement can influence how a program is parsed. While some of these languages (e.g., Haskell and Python) have been widely adopted, there is little support for software language engineers in building tools for layout-sensitive languages. As a result, parsers, pretty-printers, program analyses, and refactoring tools often need to be handwritten, which decreases the maintainability and extensibility of these tools. Even state-of-the-art language workbenches have little support for layout-sensitive languages, restricting the development and prototyping of such languages. In this paper, we introduce a novel approach to declarative specification of layout-sensitive languages using layout declarations. Layout declarations are high-level specifications of indentation rules that abstract from low-level technicalities. We show how to derive an efficient layout-sensitive generalized parser and a corresponding pretty-printer automatically from a language specification with layout declarations. We validate our approach in a case-study using a syntax definition for the Haskell programming language, investigating the performance of the generated parser and the correctness of the generated pretty-printer against 22191 Haskell files. @InProceedings{SLE18p3, author = {Luís Eduardo de Souza Amorim and Michael J. Steindorfer and Sebastian Erdweg and Eelco Visser}, title = {Declarative Specification of Indentation Rules: A Tooling Perspective on Parsing and Pretty-Printing Layout-Sensitive Languages}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {3--15}, doi = {10.1145/3276604.3276607}, year = {2018}, } Publisher's Version Info Artifacts Functional SLE '18: "Migrating Custom DSL Implementations ..." Migrating Custom DSL Implementations to a Language Workbench (Tool Demo) Jasper Denkers, Louis van Gool, and Eelco Visser (Delft University of Technology, Netherlands; Océ, Netherlands) We present a tool architecture that supports migrating custom domain-specific language (DSL) implementations to a language workbench. We demonstrate an implementation of this architecture for models in the domains of defining component interfaces (IDL) and modeling system behavior (OIL) which are developed and used at a digital printer manufacturing company. Increasing complexity and the lack of DSL syntax and IDE support for existing implementations in Python based on XML syntax hindered their evolution and adoption. A reimplementation in Spoofax using modular language definition enables composition between IDL and OIL and introduces more concise DSL syntax and IDE support. The presented tool supports migrating to new implementations while being backward compatible with existing syntax and related tooling. @InProceedings{SLE18p205, author = {Jasper Denkers and Louis van Gool and Eelco Visser}, title = {Migrating Custom DSL Implementations to a Language Workbench (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {205--209}, doi = {10.1145/3276604.3276608}, year = {2018}, } Publisher's Version SLE '18: "Migrating Business Logic to ..." Migrating Business Logic to an Incremental Computing DSL: A Case Study Daco C. 
Harkes, Elmer van Chastelet, and Eelco Visser (Delft University of Technology, Netherlands) To provide empirical evidence to what extent migration of business logic to an incremental computing language (ICL) is useful, we report on a case study on a learning management system. Our contribution is to analyze a real-life project, how migrating business logic to an ICL affects information system validatability, performance, and development effort. We find that the migrated code has better validatability; it is straightforward to establish that a program ‘does the right thing’. Moreover, the performance is better than the previous hand-written incremental computing solution. The effort spent on modeling business logic is reduced, but integrating that logic in the application and tuning performance takes considerable effort. Thus, the ICL separates the concerns of business logic and performance, but does not reduce effort. @InProceedings{SLE18p83, author = {Daco C. Harkes and Elmer van Chastelet and Eelco Visser}, title = {Migrating Business Logic to an Incremental Computing DSL: A Case Study}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {83--96}, doi = {10.1145/3276604.3276617}, year = {2018}, } Publisher's Version |
|
Wortmann, Andreas |
SLE '18: "Deriving Fluent Internal Domain-Specific ..."
Deriving Fluent Internal Domain-Specific Languages from Grammars
Arvid Butting, Manuela Dalibor, Gerrit Leonhardt, Bernhard Rumpe, and Andreas Wortmann (RWTH Aachen University, Germany) A prime decision of engineering domain-specific languages (DSLs) is implementing these as external DSLs or internal DSLs. Agile language engineering benefits from easily switching between both shapes to provide rapidly developed prototypes before settling on a specific syntax. This switching, however, is rarely feasible due to the effort of re-implementing language tooling for both shapes. Current research in software language engineering focuses either on internal DSLs or external DSLs. We conceived a concept to automatically derive customizable internal DSLs from grammars that operate on the same abstract syntax as the external DSL. This supports reusing tooling (such as model checkers or code generators) between both shapes. We realized our concept with the MontiCore language workbench and Groovy as host language for internal DSLs. This concept is applicable to many grammar-based language definitions. @InProceedings{SLE18p187, author = {Arvid Butting and Manuela Dalibor and Gerrit Leonhardt and Bernhard Rumpe and Andreas Wortmann}, title = {Deriving Fluent Internal Domain-Specific Languages from Grammars}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {187--199}, doi = {10.1145/3276604.3276621}, year = {2018}, } Publisher's Version SLE '18: "Translating Grammars to Accurate ..." Translating Grammars to Accurate Metamodels Arvid Butting, Nico Jansen, Bernhard Rumpe, and Andreas Wortmann (RWTH Aachen University, Germany) There is a software language engineering gap between metamodel-based languages and grammar-based languages. Grammars can support integrated definition of concrete syntax and abstract syntax, which facilitates processing models, but usually prevents reusing the variety of language tools operating on Ecore metamodels (such as editors, interpreters, debuggers, etc.). Existing work on translating grammars to Ecore metamodels features very cursory translations only, which requires re-engineering intricacies natural to grammars for the metamodels again. We conceived a translation from an EBNF-like syntax to Ecore metamodels that considers the grammars’ intricacies. This translation is realized as a fully automated toolchain from grammars into Ecore & OCL using the language workbench MontiCore. Using this translation enables grammar-based languages to leverage the benefits of Ecore-compatible language tools while supporting natural definition of concrete and abstract syntax. @InProceedings{SLE18p174, author = {Arvid Butting and Nico Jansen and Bernhard Rumpe and Andreas Wortmann}, title = {Translating Grammars to Accurate Metamodels}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {174--186}, doi = {10.1145/3276604.3276605}, year = {2018}, } Publisher's Version |
|
Zaytsev, Vadim |
SLE '18: "An Industrial Case Study in ..."
An Industrial Case Study in Compiler Testing (Tool Demo)
Vadim Zaytsev (Raincode Labs, Belgium) Compiler construction is one of the oldest areas of software engineering, yet despite its maturity it has underdeveloped sides such as compiler testing. There exist many disparate methods for testing parsers, optimisers and other components, but no unified methodology that is consumable by practitioners from a book to be directly applied to fulfil their needs. Instead of striving to cover all theoretical aspects of compiler testing in one paper, we present a case study for an ongoing project of a relatively large size for our company (2 years, 3--6 devs, 500kLOC), a clean room compiler development effort in replicating a 4GL. We built a testing framework and a model-based test data generator, which consumes manually written specifications and generates all the necessary test code in the 4GL, in the host language, and in auxiliary DSLs (batch files, XML project descriptions), to both the developers' and the customer's satisfaction. The number of specifications is 927 at the time of publication, while the number of test cases generated from them is 6268. All these tests have been run prior to shipping for the last 49 releases of the compiler, both to ensure the lack of regression and to report on the project overall progress. The generated tests are separated into 11 categories which the paper details in the hope that the classification will aid in seeking related work and in pushing this line of research forward. @InProceedings{SLE18p97, author = {Vadim Zaytsev}, title = {An Industrial Case Study in Compiler Testing (Tool Demo)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {97--102}, doi = {10.1145/3276604.3276619}, year = {2018}, } Publisher's Version |
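The model-based generation step described above can be illustrated with a toy Python sketch. The real framework targets a proprietary 4GL and emits test code in several languages; the specification format and the DISPLAY syntax below are invented, and serve only to show how one hand-written specification fans out into many concrete tests together with their oracles.

```python
# Toy model-based test data generator: a declarative spec of a construct, its operators,
# and its operand values is expanded into (generated program, expected output) pairs.

spec = {
    "construct": "binary arithmetic",
    "operators": {"+": lambda a, b: a + b, "*": lambda a, b: a * b},
    "operands":  [0, 1, 7, -3],
}

def generate_tests(spec):
    for op_name, semantics in spec["operators"].items():
        for a in spec["operands"]:
            for b in spec["operands"]:
                program  = f"DISPLAY {a} {op_name} {b}"      # generated 4GL-ish source
                expected = str(semantics(a, b))              # oracle derived from the spec
                yield program, expected

tests = list(generate_tests(spec))
print(len(tests))        # 32 test cases from a single specification
print(tests[0])          # ('DISPLAY 0 + 0', '0')
```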
57 authors