Workshop SLE 2022 – Author Index |
Ali, Qurat ul ain |
SLE '22: "Selective Traceability for ..."
Selective Traceability for Rule-Based Model-to-Model Transformations
Qurat ul ain Ali, Dimitris Kolovos, and Konstantinos Barmpis (University of York, UK) Model-to-model (M2M) transformation is a key ingredient in a typical Model-Driven Engineering workflow and there are several tailored high-level interpreted languages for capturing and executing such transformations. While these languages enable the specification of concise transformations through task-specific constructs (rules/mappings, bindings), their use can pose scalability challenges when it comes to very large models. In this paper, we present an architecture for optimising the execution of model-to-model transformations written in such a language, by leveraging static analysis and automated program rewriting techniques. We demonstrate how static analysis and dependency information between rules can be used to reduce the size of the transformation trace and to optimise certain classes of transformations. Finally, we detail the performance benefits that can be delivered by this form of optimisation, through a series of benchmarks performed with an existing transformation language (Epsilon Transformation Language - ETL) and EMF-based models. Our experiments have shown considerable performance improvements compared to the existing ETL execution engine, without sacrificing any features of the language. @InProceedings{SLE22p98, author = {Qurat ul ain Ali and Dimitris Kolovos and Konstantinos Barmpis}, title = {Selective Traceability for Rule-Based Model-to-Model Transformations}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {98--109}, doi = {10.1145/3567512.3567521}, year = {2022}, } Publisher's Version |
|
Aotani, Tomoyuki |
SLE '22: "BatakJava: An Object-Oriented ..."
BatakJava: An Object-Oriented Programming Language with Versions
Luthfan Anshar Lubis, Yudai Tanabe, Tomoyuki Aotani, and Hidehiko Masuhara (Tokyo Institute of Technology, Japan; Mamezou, Japan) Programming with versions is a recent proposal that supports multiple versions of software components in a program. Though it would provide greater freedom for the programmer, the concept is only realized as a simple core calculus, called λVL, where a value consists of λ-terms with multiple versions. We explore a design space of programming with versions in the presence of data structures and module systems, and propose BatakJava, an object-oriented programming language in which multiple versions of a class can be used in a program. This paper presents BatakJava’s language design, its core semantics with subject reduction, an implementation as a source-to-Java translator, and a case study to understand how we can exploit multiple versions in BatakJava for developing an application program with an evolving library. @InProceedings{SLE22p222, author = {Luthfan Anshar Lubis and Yudai Tanabe and Tomoyuki Aotani and Hidehiko Masuhara}, title = {BatakJava: An Object-Oriented Programming Language with Versions}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {222--234}, doi = {10.1145/3567512.3567531}, year = {2022}, } Publisher's Version Artifacts Reusable |
|
Aranega, Vincent |
SLE '22: "Reflection as a Tool to Debug ..."
Reflection as a Tool to Debug Objects
Steven Costiou, Vincent Aranega, and Marcus Denker (University of Lille, France; Inria, France; CNRS, France; Centrale Lille, France; UMR 9189 CRIStAL, France) In this paper, we share our experience with using reflection as a systematic tool to build advanced debuggers. We illustrate the usage and combination of reflection techniques for the implementation of object-centric debugging. Object-centric debugging is a technique for object-oriented systems that scopes debugging operations to specific objects. The implementation of this technique is not straightforward, as there is, to the best of our knowledge, no description in the literature of how to build such a debugger. We describe an implementation of object-centric breakpoints. We built these breakpoints with Pharo, a highly reflective system, based on the combination of different classical reflection techniques: proxy, anonymous subclasses, and sub-method partial behavioral reflection. Because this implementation is based on common reflective techniques, it is applicable to other reflective languages and systems for which a set of identified primitives is available. @InProceedings{SLE22p55, author = {Steven Costiou and Vincent Aranega and Marcus Denker}, title = {Reflection as a Tool to Debug Objects}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {55--60}, doi = {10.1145/3567512.3567517}, year = {2022}, } Publisher's Version |
|
Barais, Olivier |
SLE '22: "A Language-Parametric Approach ..."
A Language-Parametric Approach to Exploratory Programming Environments
L. Thomas van Binsbergen, Damian Frölich, Mauricio Verano Merino, Joey Lai, Pierre Jeanjean, Tijs van der Storm, Benoit Combemale, and Olivier Barais (University of Amsterdam, Netherlands; Vrije Universiteit Amsterdam, Netherlands; Inria, France; University of Rennes, France; CNRS, France; IRISA, France; CWI, Netherlands; University of Groningen, Netherlands) Exploratory programming is a software development style in which code is a medium for prototyping ideas and solutions, and in which even the end-goal can evolve over time. Exploratory programming is valuable in various contexts such as programming education, data science, and end-user programming. However, there is a lack of appropriate tooling and language design principles to support exploratory programming. This paper presents a host language- and object language-independent protocol for exploratory programming akin to the Language Server Protocol. The protocol serves as a basis to develop novel (or extend existing) programming environments for exploratory programming such as computational notebooks and command-line REPLs. An architecture is presented on top of which prototype environments can be developed with relative ease, because existing (language) components can be reused. Our prototypes demonstrate that the proposed protocol is sufficiently expressive to support exploratory programming scenarios as encountered in the literature within the software engineering, human-computer interaction, and data science domains. @InProceedings{SLE22p175, author = {L. Thomas van Binsbergen and Damian Frölich and Mauricio Verano Merino and Joey Lai and Pierre Jeanjean and Tijs van der Storm and Benoit Combemale and Olivier Barais}, title = {A Language-Parametric Approach to Exploratory Programming Environments}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {175--188}, doi = {10.1145/3567512.3567527}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Barmpis, Konstantinos |
SLE '22: "Selective Traceability for ..."
Selective Traceability for Rule-Based Model-to-Model Transformations
Qurat ul ain Ali, Dimitris Kolovos, and Konstantinos Barmpis (University of York, UK) Model-to-model (M2M) transformation is a key ingredient in a typical Model-Driven Engineering workflow and there are several tailored high-level interpreted languages for capturing and executing such transformations. While these languages enable the specification of concise transformations through task-specific constructs (rules/mappings, bindings), their use can pose scalability challenges when it comes to very large models. In this paper, we present an architecture for optimising the execution of model-to-model transformations written in such a language, by leveraging static analysis and automated program rewriting techniques. We demonstrate how static analysis and dependency information between rules can be used to reduce the size of the transformation trace and to optimise certain classes of transformations. Finally, we detail the performance benefits that can be delivered by this form of optimisation, through a series of benchmarks performed with an existing transformation language (Epsilon Transformation Language - ETL) and EMF-based models. Our experiments have shown considerable performance improvements compared to the existing ETL execution engine, without sacrificing any features of the language. @InProceedings{SLE22p98, author = {Qurat ul ain Ali and Dimitris Kolovos and Konstantinos Barmpis}, title = {Selective Traceability for Rule-Based Model-to-Model Transformations}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {98--109}, doi = {10.1145/3567512.3567521}, year = {2022}, } Publisher's Version |
|
Beckmann, Tom |
SLE '22: "Partial Parsing for Structured ..."
Partial Parsing for Structured Editors
Tom Beckmann, Patrick Rein, Toni Mattis, and Robert Hirschfeld (University of Potsdam, Germany; Hasso Plattner Institute, Germany) Creating structured editors, which maintain a valid syntax tree at all times rather than permitting free-form edits to program text, is typically a time-consuming task. Recent work has investigated the use of existing general-purpose language grammars as a basis for automatically generating structured editors, thus considerably reducing the effort required. However, in these generated editors, input occurs through menu- and mouse-based interaction, rather than via the keyboard entry that is familiar to most users. In this paper we introduce modifications to a parser of general-purpose programming language grammars to support keyboard-centric interactions with generated structured editors. Specifically, we describe a system we call partial parsing to autocomplete language structures, removing the need for a menu of language constructs in favor of keyboard-based disambiguation. We demonstrate our system's applicability and performance for use in interactive, generated structured editors. Our system thus constitutes a step towards making structured editors generated from language grammars usable with more efficient and familiar keyboard-centric interactions. @InProceedings{SLE22p110, author = {Tom Beckmann and Patrick Rein and Toni Mattis and Robert Hirschfeld}, title = {Partial Parsing for Structured Editors}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {110--120}, doi = {10.1145/3567512.3567522}, year = {2022}, } Publisher's Version |
|
Bertram, Vincent |
SLE '22: "Neural Language Models and ..."
Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE
Vincent Bertram, Miriam Boß, Evgeny Kusmenko, Imke Helene Nachmann, Bernhard Rumpe, Danilo Trotta, and Louis Wachtmeister (RWTH Aachen University, Germany) Systems engineering, in particular in the automotive domain, needs to cope with the massively increasing numbers of requirements that arise during the development process. The language in which requirements are written is mostly informal and highly individual. This hinders automated processing of requirements as well as the linking of requirements to models. Introducing formal requirement notations in existing projects leads to the challenge of translating masses of requirements and the necessity of training for requirements engineers. In this paper, we derive domain-specific language constructs helping us to avoid ambiguities in requirements and increase the level of formality. The main contribution is the adoption and evaluation of few-shot learning with large pretrained language models for the automated translation of informal requirements to structured languages such as a requirement DSL. @InProceedings{SLE22p260, author = {Vincent Bertram and Miriam Boß and Evgeny Kusmenko and Imke Helene Nachmann and Bernhard Rumpe and Danilo Trotta and Louis Wachtmeister}, title = {Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {260--265}, doi = {10.1145/3567512.3567534}, year = {2022}, } Publisher's Version |
|
Binder, Simon |
SLE '22: "jGuard: Programming Misuse-Resilient ..."
jGuard: Programming Misuse-Resilient APIs
Simon Binder, Krishna Narasimhan, Svenja Kernig, and Mira Mezini (TU Darmstadt, Germany) APIs provide access to valuable features, but studies have shown that they are hard to use correctly. Misuses of these APIs can be quite costly. Even though documentation and usage manuals exist, developers find it hard to integrate these in practice. Several static and dynamic analysis tools exist to detect and mitigate API misuses. But it is natural to wonder if APIs can be made more difficult to misuse by capturing the knowledge of domain experts (e.g., API designers). Approaches like CogniCrypt have made inroads in this direction by offering API specification languages like CrySL which are then consumed by static analysis tools. But studies have shown that developers do not enjoy installing new tools into their pipeline. In this paper, we present jGuard, an extension to Java that allows API designers to directly encode their specifications while implementing their APIs. Code written in jGuard is then compiled to regular Java with the checks encoded as exceptions, thereby making sure the API user does not need to install any new tooling. Our evaluation shows that jGuard can be used to express the most commonly occurring misuses in practice, matches the accuracy of state-of-the-art API misuse detection tools, and introduces negligible performance overhead. @InProceedings{SLE22p161, author = {Simon Binder and Krishna Narasimhan and Svenja Kernig and Mira Mezini}, title = {jGuard: Programming Misuse-Resilient APIs}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {161--174}, doi = {10.1145/3567512.3567526}, year = {2022}, } Publisher's Version Artifacts Functional |
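The core idea above, compiling designer-written usage specifications into runtime checks that raise exceptions on misuse, can be sketched in ordinary code. The following Python sketch is purely illustrative (jGuard itself extends Java, and the `Cipher` protocol and all names here are invented):

```python
# Hypothetical sketch: an API usage protocol enforced by runtime checks
# that raise exceptions on misuse, the effect jGuard compiles down to.

class ProtocolError(Exception):
    """Raised when the API is used out of order or with bad arguments."""

class Cipher:
    """Toy crypto-like API: init() must precede update(), update() must
    precede finalize(), and finalize() may be called only once."""

    def __init__(self):
        self._state = "created"

    def init(self, key: bytes) -> None:
        if self._state != "created":
            raise ProtocolError("init() must be the first call, made once")
        if len(key) < 16:
            raise ProtocolError("key must be at least 16 bytes")
        self._state = "ready"

    def update(self, data: bytes) -> None:
        if self._state not in ("ready", "updating"):
            raise ProtocolError("call init() before update()")
        self._state = "updating"

    def finalize(self) -> bytes:
        if self._state != "updating":
            raise ProtocolError("call update() before finalize()")
        self._state = "done"
        return b"digest"
```

With checks like these, a misuse such as calling `finalize()` before `init()` fails immediately with a `ProtocolError` instead of silently producing a weak or wrong result.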
|
Boß, Miriam |
SLE '22: "Neural Language Models and ..."
Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE
Vincent Bertram, Miriam Boß, Evgeny Kusmenko, Imke Helene Nachmann, Bernhard Rumpe, Danilo Trotta, and Louis Wachtmeister (RWTH Aachen University, Germany) Systems engineering, in particular in the automotive domain, needs to cope with the massively increasing numbers of requirements that arise during the development process. The language in which requirements are written is mostly informal and highly individual. This hinders automated processing of requirements as well as the linking of requirements to models. Introducing formal requirement notations in existing projects leads to the challenge of translating masses of requirements and the necessity of training for requirements engineers. In this paper, we derive domain-specific language constructs helping us to avoid ambiguities in requirements and increase the level of formality. The main contribution is the adoption and evaluation of few-shot learning with large pretrained language models for the automated translation of informal requirements to structured languages such as a requirement DSL. @InProceedings{SLE22p260, author = {Vincent Bertram and Miriam Boß and Evgeny Kusmenko and Imke Helene Nachmann and Bernhard Rumpe and Danilo Trotta and Louis Wachtmeister}, title = {Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {260--265}, doi = {10.1145/3567512.3567534}, year = {2022}, } Publisher's Version |
|
Boukham, Houda |
SLE '22: "A Multi-target, Multi-paradigm ..."
A Multi-target, Multi-paradigm DSL Compiler for Algorithmic Graph Processing
Houda Boukham, Guido Wachsmuth, Martijn Dwars, and Dalila Chiadmi (Ecole Mohammadia d'Ingénieurs, Morocco; Oracle Labs, Morocco; Oracle Labs, Switzerland) Domain-specific language compilers need to close the gap between the domain abstractions of the language and the low-level concepts of the target platform. This can be challenging to achieve for compilers targeting multiple platforms with potentially very different computing paradigms. In this paper, we present a multi-target, multi-paradigm DSL compiler for algorithmic graph processing. Our approach centers around an intermediate representation and reusable, composable transformations to be shared between the different compiler targets. These transformations embrace abstractions that align closely with the concepts of a particular target platform, and disallow abstractions that are semantically more distant. We report on our experience implementing the compiler and highlight some of the challenges and requirements for applying language workbenches in industrial use cases. @InProceedings{SLE22p2, author = {Houda Boukham and Guido Wachsmuth and Martijn Dwars and Dalila Chiadmi}, title = {A Multi-target, Multi-paradigm DSL Compiler for Algorithmic Graph Processing}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {2--15}, doi = {10.1145/3567512.3567513}, year = {2022}, } Publisher's Version |
|
Bousse, Erwan |
SLE '22: "From Coverage Computation ..."
From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages
Faezeh Khorram, Erwan Bousse, Antonio Garmendia, Jean-Marie Mottu, Gerson Sunyé, and Manuel Wimmer (IMT Atlantique, France; Nantes Université, France; École Centrale Nantes, France; JKU Linz, Austria) To test a system efficiently, we need to know how good the defined test cases are and how to localize detected faults in the system. Measuring test coverage can address both concerns, as it is a popular metric for test quality evaluation and, at the same time, the foundation of advanced fault localization techniques. However, for Domain-Specific Languages (DSLs), coverage metrics and associated tools are usually manually defined for each DSL, which represents costly, error-prone, and non-reusable work. To address this problem, we propose a generic coverage computation and fault localization framework for DSLs. Considering a test suite executed on a model conforming to a DSL, we compute a coverage matrix based on three ingredients: the DSL specification, the coverage rules, and the model's execution trace. Using the test execution result and the computed coverage matrix, the framework calculates a suspiciousness-based ranking of the model's elements, based on existing spectrum-based techniques, to help the user localize the model's faults. We provide a tool atop the Eclipse GEMOC Studio and evaluate our approach using four different DSLs, with 297 test cases for 21 models in total. Results show that we can successfully create meaningful coverage matrices for all investigated DSLs and models. The applied fault localization techniques are capable of identifying the defects injected in the models based on the provided coverage measurements, thus demonstrating the usefulness of the automatically computed measurements.
@InProceedings{SLE22p235, author = {Faezeh Khorram and Erwan Bousse and Antonio Garmendia and Jean-Marie Mottu and Gerson Sunyé and Manuel Wimmer}, title = {From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {235--248}, doi = {10.1145/3567512.3567532}, year = {2022}, } Publisher's Version Info Artifacts Functional |
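The suspiciousness ranking described above builds on standard spectrum-based formulas. As a rough illustration only (not the paper's tool, which operates on DSL execution traces inside the GEMOC Studio), the widely used Ochiai formula can be computed from a coverage matrix and test verdicts like this:

```python
# Illustrative sketch: Ochiai suspiciousness scores computed from a
# coverage matrix (tests x elements) and per-test pass/fail verdicts.
from math import sqrt

def ochiai(coverage, passed):
    """coverage[t][e] is truthy if test t covers element e;
    passed[t] is True if test t passed. Returns {element: score}."""
    total_failed = sum(1 for p in passed if not p)
    scores = {}
    for e in range(len(coverage[0])):
        # failing tests that cover element e
        failed_cov = sum(1 for t, row in enumerate(coverage)
                         if row[e] and not passed[t])
        # all tests that cover element e
        covered = sum(1 for row in coverage if row[e])
        denom = sqrt(total_failed * covered)
        scores[e] = failed_cov / denom if denom else 0.0
    return scores
```

Elements covered mostly by failing tests score highest and are inspected first; this is the kind of ranking a generic framework can derive once a coverage matrix exists for a DSL.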
|
Chen, Zilin |
SLE '22: "Property-Based Testing: Climbing ..."
Property-Based Testing: Climbing the Stairway to Verification
Zilin Chen, Christine Rizkallah, Liam O'Connor, Partha Susarla, Gerwin Klein, Gernot Heiser, and Gabriele Keller (UNSW, Australia; University of Melbourne, Australia; University of Edinburgh, UK; Independent, Australia; Proofcraft, Australia; Utrecht University, Netherlands) Property-based testing (PBT) is a powerful tool that is widely available in modern programming languages. It has been used to reduce formal software verification effort. We demonstrate how PBT can be used in conjunction with formal verification to incrementally gain greater assurance in code correctness by integrating PBT into the verification framework of Cogent---a programming language equipped with a certifying compiler for developing high-assurance systems components. Specifically, for PBT and formal verification to work in tandem, we structure the tests to mirror the refinement proof that we used in Cogent's verification framework: The expected behaviour of the system under test is captured by a functional correctness specification, which mimics the formal specification of the system, and we test the refinement relation between the implementation and the specification. We exhibit the additional benefits that this mutualism brings to developers and demonstrate the techniques we used in this style of PBT, by studying two concrete examples. @InProceedings{SLE22p84, author = {Zilin Chen and Christine Rizkallah and Liam O'Connor and Partha Susarla and Gerwin Klein and Gernot Heiser and Gabriele Keller}, title = {Property-Based Testing: Climbing the Stairway to Verification}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {84--97}, doi = {10.1145/3567512.3567520}, year = {2022}, } Publisher's Version Artifacts Reusable |
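The test structure the abstract describes, random inputs checked against a functional correctness specification that mirrors the refinement proof, can be illustrated with a minimal hand-rolled property-based test. This Python sketch is only an analogy (the paper works in Cogent's verification framework); `spec_sort` and `impl_sort` are invented stand-ins for a specification and the implementation under test:

```python
# Minimal property-based-testing sketch of refinement testing: generate
# random inputs and check the implementation against a known-good
# functional specification.
import random

def spec_sort(xs):
    """Abstract specification: the behaviour we consider correct."""
    return sorted(xs)

def impl_sort(xs):
    """'Implementation' under test: insertion sort."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def check_refinement(trials=200, seed=42):
    """Test the refinement relation: impl agrees with spec on random inputs."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        assert impl_sort(xs) == spec_sort(xs), f"counterexample: {xs}"
    return True
```

A failing run yields a concrete counterexample, which is the cheap, early feedback that PBT contributes before (or alongside) a full refinement proof.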
|
Chiadmi, Dalila |
SLE '22: "A Multi-target, Multi-paradigm ..."
A Multi-target, Multi-paradigm DSL Compiler for Algorithmic Graph Processing
Houda Boukham, Guido Wachsmuth, Martijn Dwars, and Dalila Chiadmi (Ecole Mohammadia d'Ingénieurs, Morocco; Oracle Labs, Morocco; Oracle Labs, Switzerland) Domain-specific language compilers need to close the gap between the domain abstractions of the language and the low-level concepts of the target platform. This can be challenging to achieve for compilers targeting multiple platforms with potentially very different computing paradigms. In this paper, we present a multi-target, multi-paradigm DSL compiler for algorithmic graph processing. Our approach centers around an intermediate representation and reusable, composable transformations to be shared between the different compiler targets. These transformations embrace abstractions that align closely with the concepts of a particular target platform, and disallow abstractions that are semantically more distant. We report on our experience implementing the compiler and highlight some of the challenges and requirements for applying language workbenches in industrial use cases. @InProceedings{SLE22p2, author = {Houda Boukham and Guido Wachsmuth and Martijn Dwars and Dalila Chiadmi}, title = {A Multi-target, Multi-paradigm DSL Compiler for Algorithmic Graph Processing}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {2--15}, doi = {10.1145/3567512.3567513}, year = {2022}, } Publisher's Version |
|
Chiba, Shigeru |
SLE '22: "Yet Another Generating Method ..."
Yet Another Generating Method of Fluent Interfaces Supporting Flat- and Sub-chaining Styles
Tetsuro Yamazaki, Tomoki Nakamaru, and Shigeru Chiba (University of Tokyo, Japan) Researchers have discovered methods to generate fluent interfaces equipped with static checking to verify their calling conventions. This static checking is done by carefully designing classes and method signatures so that type checking performs a calculation equivalent to syntax checking. In this paper, we propose a method to generate a fluent interface with syntax checking which accepts both styles of method chaining: the flat-chaining style and the sub-chaining style. Supporting both styles is worthwhile because it allows programmers to factor parts of their method chains out into sub-chains for readability. Our method is based on grammar rewriting, so that the acceptable grammar can be inspected. In conclusion, our method succeeds in generating an interface when the input grammar is LL(1) and contains no non-terminal symbol that generates only the empty string or nothing at all. @InProceedings{SLE22p249, author = {Tetsuro Yamazaki and Tomoki Nakamaru and Shigeru Chiba}, title = {Yet Another Generating Method of Fluent Interfaces Supporting Flat- and Sub-chaining Styles}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {249--259}, doi = {10.1145/3567512.3567533}, year = {2022}, } Publisher's Version SLE '22: "People Do Not Want to Learn ..." People Do Not Want to Learn a New Language But a New Library (Keynote) Shigeru Chiba (University of Tokyo, Japan) One day, a student raised a question: I spent many years to learn a programming language. Why do you try to develop yet another language? I don’t wanna learn no more language. One is enough! My answer was, well, don’t you hate to learn a new library, either? People seem to accept learning a new library as necessary work, although they might not be happy to learn a new language (they might not be very happy to learn a new library, either, but they seem much happier). However, a modern library is something we should consider as a programming language.
During this talk, I will survey technology around language-like libraries, which are often called embedded domain specific languages. Then I will present my vision of where we, programming-language researchers, should go for further study. @InProceedings{SLE22p1, author = {Shigeru Chiba}, title = {People Do Not Want to Learn a New Language But a New Library (Keynote)}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {1--1}, doi = {10.1145/3567512.3571831}, year = {2022}, } Publisher's Version |
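The flat- versus sub-chaining distinction from the fluent-interface paper above can be illustrated with a toy query builder. This Python sketch is illustrative only: the paper generates such interfaces from LL(1) grammars and enforces the calling convention through static types, whereas this sketch (all names invented) performs no static checking.

```python
# Toy fluent interface accepting both chaining styles.

class Condition:
    """A sub-chain: a condition built with its own fluent chain."""
    def __init__(self):
        self._text = []
    def col(self, name):
        self._text.append(name)
        return self
    def eq(self, value):
        self._text.append(f"= {value!r}")
        return self
    def render(self):
        return " ".join(self._text)

class _WhereStep:
    """Intermediate step that keeps a flat chain going after where_col()."""
    def __init__(self, query):
        self._query = query
    def eq(self, value):
        self._query._where.eq(value)
        return self._query

class Query:
    def __init__(self):
        self._select = None
        self._where = None
    def select(self, *cols):
        self._select = ", ".join(cols)
        return self
    def where_col(self, name):
        # Flat-chaining style: the condition is written inline in one chain.
        self._where = Condition().col(name)
        return _WhereStep(self)
    def where(self, cond):
        # Sub-chaining style: a separately built Condition chain is passed in.
        self._where = cond
        return self
    def build(self):
        return f"SELECT {self._select} WHERE {self._where.render()}"
```

Both `Query().select("name").where_col("age").eq(30).build()` (flat) and `Query().select("name").where(Condition().col("age").eq(30)).build()` (sub) produce the same query; the generated interfaces in the paper make the compiler reject any chain that does not follow the grammar.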
|
Cimini, Matteo |
SLE '22: "Lang-n-Prove: A DSL for Language ..."
Lang-n-Prove: A DSL for Language Proofs
Matteo Cimini (University of Massachusetts Lowell, USA) Proofs of language properties often follow a schema that does not apply just to one language but, rather, applies to many languages of a certain class. In this paper, we present Lang-n-Prove, a domain-specific language for expressing theorems and proofs in such a way that they apply to many languages. The main characteristic of Lang-n-Prove is that it contains linguistic features that are specific to the domain of language design. We have used Lang-n-Prove to express the theorems and proofs of canonical forms lemmas, the progress theorem, and the type preservation theorem for a restricted class of functional languages. We have applied our Lang-n-Prove proofs to several functional languages, including languages with polymorphism, exceptions, recursive types, list operations, and other common types and operators. Our tool has generated the proof code in Abella that machine-checks the type safety of all these languages, when the correct code for substitution lemmas is provided. @InProceedings{SLE22p16, author = {Matteo Cimini}, title = {Lang-n-Prove: A DSL for Language Proofs}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {16--29}, doi = {10.1145/3567512.3567514}, year = {2022}, } Publisher's Version |
|
Cockx, Jesper |
SLE '22: "Optimising First-Class Pattern ..."
Optimising First-Class Pattern Matching
Jeff Smits, Toine Hartman, and Jesper Cockx (Delft University of Technology, Netherlands; Independent, Netherlands) Pattern matching is a high-level notation for programs to analyse the shape of data, and can be optimised to efficient low-level instructions. The Stratego language uses first-class pattern matching, a powerful form of pattern matching that traditional optimisation techniques do not apply to directly. In this paper, we investigate how to optimise programs that use first-class pattern matching. Concretely, we show how to map first-class pattern matching to a form close to traditional pattern matching, on which standard optimisations can be applied. Through benchmarks, we demonstrate the positive effect of these optimisations on the run-time performance of Stratego programs. We conclude that the expressive power of first-class pattern matching does not hamper the optimisation potential of a language that features it. @InProceedings{SLE22p74, author = {Jeff Smits and Toine Hartman and Jesper Cockx}, title = {Optimising First-Class Pattern Matching}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {74--83}, doi = {10.1145/3567512.3567519}, year = {2022}, } Publisher's Version Artifacts Functional |
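The notion of first-class patterns can be sketched, in a far simpler setting than Stratego, as patterns represented by ordinary data containing variable placeholders and interpreted by a generic matcher; the optimisation the paper studies then corresponds to compiling such patterns into the nested checks a traditional match would emit. A minimal illustrative matcher (all names invented, no connection to Stratego's implementation):

```python
# Patterns as first-class values: a pattern is a nested tuple that may
# contain Var placeholders, and match() interprets it against a term.

class Var:
    def __init__(self, name):
        self.name = name

def match(pattern, term, bindings=None):
    """Return a dict of variable bindings if term matches pattern, else None."""
    if bindings is None:
        bindings = {}
    if isinstance(pattern, Var):
        if pattern.name in bindings:
            # Non-linear pattern: the variable must match its earlier binding.
            return bindings if bindings[pattern.name] == term else None
        return {**bindings, pattern.name: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        if len(pattern) != len(term):
            return None
        for p, t in zip(pattern, term):
            bindings = match(p, t, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == term else None
```

Because patterns here are plain values, they can be stored, passed around, and composed before being matched, which is the expressive power that makes direct application of traditional pattern-match compilation non-trivial.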
|
Combemale, Benoit |
SLE '22: "A Language-Parametric Approach ..."
A Language-Parametric Approach to Exploratory Programming Environments
L. Thomas van Binsbergen, Damian Frölich, Mauricio Verano Merino, Joey Lai, Pierre Jeanjean, Tijs van der Storm, Benoit Combemale, and Olivier Barais (University of Amsterdam, Netherlands; Vrije Universiteit Amsterdam, Netherlands; Inria, France; University of Rennes, France; CNRS, France; IRISA, France; CWI, Netherlands; University of Groningen, Netherlands) Exploratory programming is a software development style in which code is a medium for prototyping ideas and solutions, and in which even the end-goal can evolve over time. Exploratory programming is valuable in various contexts such as programming education, data science, and end-user programming. However, there is a lack of appropriate tooling and language design principles to support exploratory programming. This paper presents a host language- and object language-independent protocol for exploratory programming akin to the Language Server Protocol. The protocol serves as a basis to develop novel (or extend existing) programming environments for exploratory programming such as computational notebooks and command-line REPLs. An architecture is presented on top of which prototype environments can be developed with relative ease, because existing (language) components can be reused. Our prototypes demonstrate that the proposed protocol is sufficiently expressive to support exploratory programming scenarios as encountered in the literature within the software engineering, human-computer interaction, and data science domains. @InProceedings{SLE22p175, author = {L. Thomas van Binsbergen and Damian Frölich and Mauricio Verano Merino and Joey Lai and Pierre Jeanjean and Tijs van der Storm and Benoit Combemale and Olivier Barais}, title = {A Language-Parametric Approach to Exploratory Programming Environments}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {175--188}, doi = {10.1145/3567512.3567527}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Costiou, Steven |
SLE '22: "Reflection as a Tool to Debug ..."
Reflection as a Tool to Debug Objects
Steven Costiou, Vincent Aranega, and Marcus Denker (University of Lille, France; Inria, France; CNRS, France; Centrale Lille, France; UMR 9189 CRIStAL, France) In this paper, we share our experience with using reflection as a systematic tool to build advanced debuggers. We illustrate the usage and combination of reflection techniques for the implementation of object-centric debugging. Object-centric debugging is a technique for object-oriented systems that scopes debugging operations to specific objects. The implementation of this technique is not straightforward, as there is, to the best of our knowledge, no description in the literature of how to build such a debugger. We describe an implementation of object-centric breakpoints. We built these breakpoints with Pharo, a highly reflective system, based on the combination of different classical reflection techniques: proxy, anonymous subclasses, and sub-method partial behavioral reflection. Because this implementation is based on common reflective techniques, it is applicable to other reflective languages and systems for which a set of identified primitives is available. @InProceedings{SLE22p55, author = {Steven Costiou and Vincent Aranega and Marcus Denker}, title = {Reflection as a Tool to Debug Objects}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {55--60}, doi = {10.1145/3567512.3567517}, year = {2022}, } Publisher's Version |
|
Denker, Marcus |
SLE '22: "Reflection as a Tool to Debug ..."
Reflection as a Tool to Debug Objects
Steven Costiou, Vincent Aranega, and Marcus Denker (University of Lille, France; Inria, France; CNRS, France; Centrale Lille, France; UMR 9189 CRIStAL, France) In this paper, we share our experience with using reflection as a systematic tool to build advanced debuggers. We illustrate the usage and combination of reflection techniques for the implementation of object-centric debugging. Object-centric debugging is a technique for object-oriented systems that scopes debugging operations to specific objects. The implementation of this technique is not straightforward, as there is, to the best of our knowledge, no description in the literature of how to build such a debugger. We describe an implementation of object-centric breakpoints. We built these breakpoints with Pharo, a highly reflective system, based on the combination of different classical reflection techniques: proxy, anonymous subclasses, and sub-method partial behavioral reflection. Because this implementation is based on common reflective techniques, it is applicable to other reflective languages and systems for which a set of identified primitives is available. @InProceedings{SLE22p55, author = {Steven Costiou and Vincent Aranega and Marcus Denker}, title = {Reflection as a Tool to Debug Objects}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {55--60}, doi = {10.1145/3567512.3567517}, year = {2022}, } Publisher's Version |
|
Donat-Bouillud, Pierre |
SLE '22: "signatr: A Data-Driven Fuzzing ..."
signatr: A Data-Driven Fuzzing Tool for R
Alexi Turcotte, Pierre Donat-Bouillud, Filip Křikava, and Jan Vitek (Northeastern University, USA; Czech Technical University in Prague, Czechia) The fast-and-loose, permissive semantics of dynamic programming languages limit the power of static analyses. For that reason, soundness is often traded for precision through dynamic program analysis. Dynamic analysis is only as good as the available runnable code, and relying solely on test suites is fraught as they do not cover the full gamut of possible behaviors. Fuzzing is an approach for automatically exercising code, and could be used to obtain more runnable code. However, the shape of user-defined data in dynamic languages is difficult to intuit, limiting a fuzzer's reach. We propose a feedback-driven blackbox fuzzing approach which draws inputs from a database of values recorded from existing code. We implement this approach in a tool called signatr for the R language. We present the insights of its design and implementation, and assess signatr's ability to uncover new behaviors by fuzzing 4,829 R functions from 100 R packages, revealing 1,195,184 new signatures. @InProceedings{SLE22p216, author = {Alexi Turcotte and Pierre Donat-Bouillud and Filip Křikava and Jan Vitek}, title = {signatr: A Data-Driven Fuzzing Tool for R}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {216--221}, doi = {10.1145/3567512.3567530}, year = {2022}, } Publisher's Version Artifacts Reusable |
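The core loop of such data-driven fuzzing can be sketched briefly. signatr itself targets R; the Python sketch below only illustrates the idea of drawing arguments from a database of recorded values and collecting the distinct call signatures (argument types plus outcome) that get exercised. All names and the toy target are invented.

```python
import random

def fuzz(fn, value_db, budget=200, seed=0):
    """Blackbox-fuzz `fn` with argument pairs sampled from a database of
    recorded values, returning the set of exercised call signatures."""
    rng = random.Random(seed)
    signatures = set()
    for _ in range(budget):
        args = (rng.choice(value_db), rng.choice(value_db))
        try:
            fn(*args)
            outcome = "ok"
        except Exception as e:
            outcome = type(e).__name__
        signatures.add((type(args[0]).__name__, type(args[1]).__name__, outcome))
    return signatures

# Hypothetical value database and target: division behaves very
# differently depending on the shapes of its inputs.
recorded_values = [0, 3, 2.5, "x", [1, 2]]
sigs = fuzz(lambda a, b: a / b, recorded_values)
```

Each new signature is feedback: it documents a behaviour (including error behaviours) that no test suite had to anticipate.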
|
Dwars, Martijn |
SLE '22: "A Multi-target, Multi-paradigm ..."
A Multi-target, Multi-paradigm DSL Compiler for Algorithmic Graph Processing
Houda Boukham, Guido Wachsmuth, Martijn Dwars, and Dalila Chiadmi (Ecole Mohammadia d'Ingénieurs, Morocco; Oracle Labs, Morocco; Oracle Labs, Switzerland) Domain-specific language compilers need to close the gap between the domain abstractions of the language and the low-level concepts of the target platform. This can be challenging to achieve for compilers targeting multiple platforms with potentially very different computing paradigms. In this paper, we present a multi-target, multi-paradigm DSL compiler for algorithmic graph processing. Our approach centers around an intermediate representation and reusable, composable transformations to be shared between the different compiler targets. These transformations embrace abstractions that align closely with the concepts of a particular target platform, and disallow abstractions that are semantically more distant. We report on our experience implementing the compiler and highlight some of the challenges and requirements for applying language workbenches in industrial use cases. @InProceedings{SLE22p2, author = {Houda Boukham and Guido Wachsmuth and Martijn Dwars and Dalila Chiadmi}, title = {A Multi-target, Multi-paradigm DSL Compiler for Algorithmic Graph Processing}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {2--15}, doi = {10.1145/3567512.3567513}, year = {2022}, } Publisher's Version |
|
Fors, Niklas |
SLE '22: "Property Probes: Source Code ..."
Property Probes: Source Code Based Exploration of Program Analysis Results
Anton Risberg Alaküla, Görel Hedin, Niklas Fors, and Adrian Pop (Lund University, Sweden; Linköping University, Sweden) We present property probes, a mechanism for helping a developer interactively explore partial program analysis results in terms of the source program, and as the program is edited. A node locator data structure is introduced that maps between source code spans and program representation nodes, and that helps identify probed nodes in a robust way, after modifications to the source code. We have developed a client-server based tool supporting property probes, and argue that it is very helpful in debugging and understanding program analyses. We have evaluated our tool on several languages and analyses, including a full Java compiler and a tool for intraprocedural dataflow analysis. Our performance results show that the probe overhead is negligible even when analyzing large projects. @InProceedings{SLE22p148, author = {Anton Risberg Alaküla and Görel Hedin and Niklas Fors and Adrian Pop}, title = {Property Probes: Source Code Based Exploration of Program Analysis Results}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {148--160}, doi = {10.1145/3567512.3567525}, year = {2022}, } Publisher's Version Artifacts Reusable |
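The node-locator idea can be illustrated with a small sketch. This is a simplified take on the mechanism the abstract describes, not the paper's actual data structure: a locator records the path of child indices from the root to the node enclosing a source span, so the "same" node can be re-found after an edit shifts all spans. Field names are invented.

```python
class NodeLocator:
    """Map a source span to a path of child indices in a program tree,
    so the node can be re-resolved after the source is edited."""
    def __init__(self, tree, span):
        self.path = self._locate(tree, span, [])
    def _locate(self, node, span, path):
        for i, child in enumerate(node.get("children", [])):
            lo, hi = child["span"]
            if lo <= span[0] and span[1] <= hi:  # child encloses the span
                return self._locate(child, span, path + [i])
        return path
    def resolve(self, tree):
        node = tree
        for i in self.path:
            node = node["children"][i]
        return node

tree = {"span": (0, 20), "children": [
    {"span": (0, 8), "children": [], "label": "decl"},
    {"span": (9, 20), "children": [
        {"span": (9, 12), "children": [], "label": "call"}], "label": "stmt"}]}
loc = NodeLocator(tree, (9, 12))

# After an edit earlier in the file, spans shift but the structure survives:
edited = {"span": (0, 24), "children": [
    {"span": (0, 12), "children": [], "label": "decl"},
    {"span": (13, 24), "children": [
        {"span": (13, 16), "children": [], "label": "call"}], "label": "stmt"}]}
found = loc.resolve(edited)
```

The robustness in the paper is more refined than a bare index path, but the sketch shows why a probe can survive edits that a raw source offset could not.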
|
Franke, Björn |
SLE '22: "Collection Skeletons: Declarative ..."
Collection Skeletons: Declarative Abstractions for Data Collections
Björn Franke, Zhibo Li, Magnus Morton, and Michel Steuwer (University of Edinburgh, UK; Huawei, UK) Modern programming languages provide programmers with rich abstractions for data collections as part of their standard libraries, e.g. Containers in the C++ STL, the Java Collections Framework, or the Scala Collections API. Typically, these collections frameworks are organised as hierarchies that provide programmers with common abstract data types (ADTs) like lists, queues, and stacks. While convenient, this approach introduces problems which ultimately affect application performance, as users over-specify collection data types, limiting implementation flexibility. In this paper, we develop Collection Skeletons, which provide a novel, declarative approach to data collections. Using our framework, programmers explicitly select properties for their collections, thereby truly decoupling specification from implementation. By making collection properties explicit, immediate benefits materialise in the form of a reduced risk of over-specification and increased implementation flexibility. We have prototyped our declarative abstractions for collections as a C++ library, and demonstrate that benchmark applications rewritten to use Collection Skeletons incur little or no overhead. In fact, for several benchmarks, we observe performance speedups (on average between 2.57 and 2.93, and up to 16.37) and also enhanced performance portability across three different hardware platforms. @InProceedings{SLE22p189, author = {Björn Franke and Zhibo Li and Magnus Morton and Michel Steuwer}, title = {Collection Skeletons: Declarative Abstractions for Data Collections}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {189--201}, doi = {10.1145/3567512.3567528}, year = {2022}, } Publisher's Version |
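The decoupling of requested properties from the backing implementation can be sketched in a few lines. The paper's framework is a C++ library with its own property vocabulary; the Python sketch below only illustrates the selection principle, and the property names ("ordered", "unique", "fifo") are invented for illustration.

```python
from collections import deque

def make_collection(*properties):
    """Pick a backing implementation from the properties the programmer
    actually needs, instead of letting them commit to a concrete ADT."""
    props = set(properties)
    if "unique" in props:
        return set()    # uniqueness requested: a hash set suffices
    if "fifo" in props:
        return deque()  # queue discipline: double-ended queue
    return []           # default: an ordered dynamic array

bag = make_collection("ordered")
queue = make_collection("fifo")
ids = make_collection("unique")
```

Because the caller states only properties, the library remains free to swap implementations, which is exactly the flexibility over-specified ADT choices take away.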
|
Freitag, Marius |
SLE '22: "The Semantics of Plurals ..."
The Semantics of Plurals
Friedrich Steimann and Marius Freitag (Fernuniversität in Hagen, Germany) Inside many software languages lives an expression language that caters for the computation of single values from single values. These languages' fixation on single-valuedness is often at odds with their application domains, in which many values, or plurals, regularly occur in place of single ones. While the classical mathematical means of dealing with plurals is the set, in computing, other representations have evolved, notably strings and the much lesser known bunches. We review bunch theory in the context of expression languages including non-recursive functions, and show how giving bunches set semantics suggests that evaluating bunch functions amounts to computing with relations. We maintain that the ensuing seamless integration of relations into expression languages that otherwise know only functions makes a worthwhile contribution in a field in which the difference between modeling, with its preference for relations, and programming, with its preference for functions, is increasingly considered accidental. @InProceedings{SLE22p36, author = {Friedrich Steimann and Marius Freitag}, title = {The Semantics of Plurals}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {36--54}, doi = {10.1145/3567512.3567516}, year = {2022}, } Publisher's Version |
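The claim that evaluating bunch functions amounts to computing with relations can be illustrated with a minimal sketch, assuming a bunch is modelled as a set of values (the paper's bunch theory is richer than this): applying a function pointwise and unioning the results is exactly relational composition.

```python
def lift(f, bunch):
    """Apply f pointwise to a bunch (modelled as a frozenset) and union
    the resulting bunches, i.e. compose the relations they denote."""
    out = set()
    for x in bunch:
        r = f(x)
        out |= r if isinstance(r, (set, frozenset)) else {r}
    return frozenset(out)

# Square roots as a bunch-valued (i.e. relational) "function":
def roots(x):
    r = x ** 0.5
    return {r, -r}

b = frozenset({1, 4})      # the plural "1, 4"
res = lift(roots, b)       # the plural "1, -1, 2, -2"
```

A single-valued expression language has no place for `roots`; with bunch semantics it is just another expression, which is the integration of relations the abstract argues for.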
|
Frölich, Damian |
SLE '22: "iCoLa: A Compositional Meta-language ..."
iCoLa: A Compositional Meta-language with Support for Incremental Language Development
Damian Frölich and L. Thomas van Binsbergen (University of Amsterdam, Netherlands) Programming languages providing high-level abstractions can increase programmers’ productivity and program safety. Language-oriented programming is a paradigm in which domain-specific languages are developed to solve problems within specific domains with (high-level) abstractions relevant to those domains. However, language development involves complex design and engineering processes. These processes can be simplified by reusing (parts of) existing languages and by offering language-parametric tooling. In this paper we present iCoLa, a meta-language supporting incremental (meta-)programming based on reusable components. In our implementation of iCoLa, languages are first-class citizens, providing the full power of the host language (Haskell) to compose and manipulate languages. We demonstrate iCoLa through the construction of the Imp, SIMPLE, and MiniJava languages via the composition and restriction of language fragments and demonstrate the variability of our approach through the construction of several languages using a fixed set of operators. @InProceedings{SLE22p202, author = {Damian Frölich and L. Thomas van Binsbergen}, title = {iCoLa: A Compositional Meta-language with Support for Incremental Language Development}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {202--215}, doi = {10.1145/3567512.3567529}, year = {2022}, } Publisher's Version SLE '22: "A Language-Parametric Approach ..." A Language-Parametric Approach to Exploratory Programming Environments
L. Thomas van Binsbergen, Damian Frölich, Mauricio Verano Merino, Joey Lai, Pierre Jeanjean, Tijs van der Storm, Benoit Combemale, and Olivier Barais (University of Amsterdam, Netherlands; Vrije Universiteit Amsterdam, Netherlands; Inria, France; University of Rennes, France; CNRS, France; IRISA, France; CWI, Netherlands; University of Groningen, Netherlands) Exploratory programming is a software development style in which code is a medium for prototyping ideas and solutions, and in which even the end-goal can evolve over time. Exploratory programming is valuable in various contexts such as programming education, data science, and end-user programming. However, there is a lack of appropriate tooling and language design principles to support exploratory programming. This paper presents a host language- and object language-independent protocol for exploratory programming akin to the Language Server Protocol. The protocol serves as a basis to develop novel (or extend existing) programming environments for exploratory programming such as computational notebooks and command-line REPLs. An architecture is presented on top of which prototype environments can be developed with relative ease, because existing (language) components can be reused. Our prototypes demonstrate that the proposed protocol is sufficiently expressive to support exploratory programming scenarios as encountered in literature within the software engineering, human-computer interaction and data science domains. @InProceedings{SLE22p175, author = {L. Thomas van Binsbergen and Damian Frölich and Mauricio Verano Merino and Joey Lai and Pierre Jeanjean and Tijs van der Storm and Benoit Combemale and Olivier Barais}, title = {A Language-Parametric Approach to Exploratory Programming Environments}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {175--188}, doi = {10.1145/3567512.3567527}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Garmendia, Antonio |
SLE '22: "From Coverage Computation ..."
From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages
Faezeh Khorram, Erwan Bousse, Antonio Garmendia, Jean-Marie Mottu, Gerson Sunyé, and Manuel Wimmer (IMT Atlantique, France; Nantes Université, France; École Centrale Nantes, France; JKU Linz, Austria) To test a system efficiently, we need to know how good the defined test cases are and to localize detected faults in the system. Measuring test coverage can address both concerns as it is a popular metric for test quality evaluation and, at the same time, is the foundation of advanced fault localization techniques. However, for Domain-Specific Languages (DSLs), coverage metrics and associated tools are usually manually defined for each DSL, representing costly, error-prone, and non-reusable work. To address this problem, we propose a generic coverage computation and fault localization framework for DSLs. Considering a test suite executed on a model conforming to a DSL, we compute a coverage matrix based on three ingredients: the DSL specification, the coverage rules, and the model's execution trace. Using the test execution result and the computed coverage matrix, the framework calculates the suspiciousness-based ranking of the model's elements based on existing spectrum-based techniques to help the user in localizing the model's faults. We provide a tool atop the Eclipse GEMOC Studio and evaluate our approach using four different DSLs, with 297 test cases for 21 models in total. Results show that we can successfully create meaningful coverage matrices for all investigated DSLs and models. The applied fault localization techniques are capable of identifying the defects injected in the models based on the provided coverage measurements, thus demonstrating the usefulness of the automatically computed measurements.
@InProceedings{SLE22p235, author = {Faezeh Khorram and Erwan Bousse and Antonio Garmendia and Jean-Marie Mottu and Gerson Sunyé and Manuel Wimmer}, title = {From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {235--248}, doi = {10.1145/3567512.3567532}, year = {2022}, } Publisher's Version Info Artifacts Functional |
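The spectrum-based ranking step the abstract mentions can be made concrete with a standard suspiciousness metric. The sketch below uses the well-known Ochiai formula on a toy coverage matrix; the paper supports existing spectrum-based techniques generically, so this is one representative instance, with an invented example matrix.

```python
def ochiai(cov, results):
    """Ochiai suspiciousness from a coverage matrix:
    cov[t][e] = 1 iff test t covered element e; results[t] = test passed?"""
    failed = [t for t, ok in enumerate(results) if not ok]
    scores = []
    for e in range(len(cov[0])):
        ef = sum(cov[t][e] for t in failed)                          # failing tests covering e
        ep = sum(cov[t][e] for t, ok in enumerate(results) if ok)    # passing tests covering e
        denom = (len(failed) * (ef + ep)) ** 0.5
        scores.append(ef / denom if denom else 0.0)
    return scores

# 3 tests over 3 model elements; test 2 fails and is the only one
# covering element 2, which is therefore the prime suspect.
cov = [[1, 1, 0],
       [1, 0, 0],
       [1, 1, 1]]
results = [True, True, False]
scores = ochiai(cov, results)
```

Element 2 receives the maximal score of 1.0, i.e. it is covered by every failing test and no passing one; the framework presents this ranking to the user on the model's elements.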
|
Gerasimou, Simos |
SLE '22: "Partial Loading of Repository-Based ..."
Partial Loading of Repository-Based Models through Static Analysis
Sorour Jahanbin, Dimitris Kolovos, Simos Gerasimou, and Gerson Sunyé (University of York, UK; University of Nantes, France) As the size of software and system models grows, scalability issues in the current generation of model management languages (e.g. transformation, validation) and their supporting tooling become more prominent. To address this challenge, execution engines of model management programs need to become more efficient in their use of system resources. This paper presents an approach for partial loading of large models that reside in graph-database-backed model repositories. This approach leverages sophisticated static analysis of model management programs and auto-generation of graph (Cypher) queries to load only relevant model elements instead of naively loading the entire models into memory. Our experimental evaluation shows that our approach enables model management programs to process larger models, faster, and with a reduced memory footprint compared to the state of the art. @InProceedings{SLE22p266, author = {Sorour Jahanbin and Dimitris Kolovos and Simos Gerasimou and Gerson Sunyé}, title = {Partial Loading of Repository-Based Models through Static Analysis}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {266--278}, doi = {10.1145/3567512.3567535}, year = {2022}, } Publisher's Version |
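The query auto-generation step can be sketched as follows. Assuming static analysis has already produced the set of element types and properties a program accesses (a simplification of the paper's analysis; the element and property names are invented), emitting a narrow Cypher query is mostly string assembly:

```python
def cypher_for(accessed):
    """Given the element types and properties a model management program
    was statically found to access, emit Cypher that loads only those
    parts of the model instead of the whole repository."""
    clauses = []
    for etype, props in accessed.items():
        ret = ", ".join(f"n.{p}" for p in props)
        clauses.append(f"MATCH (n:{etype}) RETURN {ret}")
    return ";\n".join(clauses)

# The (hypothetical) analysis found the program reads only Task names
# and durations, so nothing else needs to be loaded into memory.
query = cypher_for({"Task": ["name", "duration"]})
```

The real engine has to handle references between elements and partially loaded objects faulting in their neighbours, but the sketch shows where the memory savings come from: the query, not the loader, decides what enters memory.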
|
Hartman, Toine |
SLE '22: "Optimising First-Class Pattern ..."
Optimising First-Class Pattern Matching
Jeff Smits, Toine Hartman, and Jesper Cockx (Delft University of Technology, Netherlands; Independent, Netherlands) Pattern matching is a high-level notation for programs to analyse the shape of data, and can be optimised to efficient low-level instructions. The Stratego language uses first-class pattern matching, a powerful form of pattern matching that traditional optimisation techniques do not apply to directly. In this paper, we investigate how to optimise programs that use first-class pattern matching. Concretely, we show how to map first-class pattern matching to a form close to traditional pattern matching, on which standard optimisations can be applied. Through benchmarks, we demonstrate the positive effect of these optimisations on the run-time performance of Stratego programs. We conclude that the expressive power of first-class pattern matching does not hamper the optimisation potential of a language that features it. @InProceedings{SLE22p74, author = {Jeff Smits and Toine Hartman and Jesper Cockx}, title = {Optimising First-Class Pattern Matching}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {74--83}, doi = {10.1145/3567512.3567519}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Hedin, Görel |
SLE '22: "Property Probes: Source Code ..."
Property Probes: Source Code Based Exploration of Program Analysis Results
Anton Risberg Alaküla, Görel Hedin, Niklas Fors, and Adrian Pop (Lund University, Sweden; Linköping University, Sweden) We present property probes, a mechanism for helping a developer interactively explore partial program analysis results in terms of the source program, and as the program is edited. A node locator data structure is introduced that maps between source code spans and program representation nodes, and that helps identify probed nodes in a robust way, after modifications to the source code. We have developed a client-server based tool supporting property probes, and argue that it is very helpful in debugging and understanding program analyses. We have evaluated our tool on several languages and analyses, including a full Java compiler and a tool for intraprocedural dataflow analysis. Our performance results show that the probe overhead is negligible even when analyzing large projects. @InProceedings{SLE22p148, author = {Anton Risberg Alaküla and Görel Hedin and Niklas Fors and Adrian Pop}, title = {Property Probes: Source Code Based Exploration of Program Analysis Results}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {148--160}, doi = {10.1145/3567512.3567525}, year = {2022}, } Publisher's Version Artifacts Reusable |
|
Heiser, Gernot |
SLE '22: "Property-Based Testing: Climbing ..."
Property-Based Testing: Climbing the Stairway to Verification
Zilin Chen, Christine Rizkallah, Liam O'Connor, Partha Susarla, Gerwin Klein, Gernot Heiser, and Gabriele Keller (UNSW, Australia; University of Melbourne, Australia; University of Edinburgh, UK; Independent, Australia; Proofcraft, Australia; Utrecht University, Netherlands) Property-based testing (PBT) is a powerful tool that is widely available in modern programming languages. It has been used to reduce formal software verification effort. We demonstrate how PBT can be used in conjunction with formal verification to incrementally gain greater assurance in code correctness by integrating PBT into the verification framework of Cogent---a programming language equipped with a certifying compiler for developing high-assurance systems components. Specifically, for PBT and formal verification to work in tandem, we structure the tests to mirror the refinement proof that we used in Cogent's verification framework: The expected behaviour of the system under test is captured by a functional correctness specification, which mimics the formal specification of the system, and we test the refinement relation between the implementation and the specification. We exhibit the additional benefits that this mutualism brings to developers and demonstrate the techniques we used in this style of PBT, by studying two concrete examples. @InProceedings{SLE22p84, author = {Zilin Chen and Christine Rizkallah and Liam O'Connor and Partha Susarla and Gerwin Klein and Gernot Heiser and Gabriele Keller}, title = {Property-Based Testing: Climbing the Stairway to Verification}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {84--97}, doi = {10.1145/3567512.3567520}, year = {2022}, } Publisher's Version Artifacts Reusable |
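The refinement-shaped property test described above can be sketched without Cogent's machinery. Below, a mutable "implementation" is tested against a functional correctness specification by replaying random operation sequences on both and comparing states; the bounded-stack example and all names are invented, and a real setup would use a PBT library such as QuickCheck or Hypothesis rather than a hand-rolled loop.

```python
import random

# Functional correctness specification: a pure model of a bounded stack.
def spec_push(stack, x, cap):
    return stack if len(stack) >= cap else stack + [x]

# "Implementation": mutable state, standing in for the verified systems code.
class BoundedStack:
    def __init__(self, cap):
        self.cap, self.items = cap, []
    def push(self, x):
        if len(self.items) < self.cap:
            self.items.append(x)

def refines(trials=500, seed=1):
    """Property: after any random sequence of pushes, the implementation's
    state is exactly the state the specification prescribes."""
    rng = random.Random(seed)
    for _ in range(trials):
        cap = rng.randint(0, 4)
        impl, model = BoundedStack(cap), []
        for _ in range(rng.randint(0, 8)):
            x = rng.randint(0, 9)
            impl.push(x)
            model = spec_push(model, x, cap)
        if impl.items != model:
            return False
    return True
```

Structuring the test as "implementation state equals specification state" mirrors a refinement proof obligation, which is what lets such tests be upgraded to proofs later.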
|
Hermans, Felienne |
SLE '22: "Gradual Grammars: Syntax in ..."
Gradual Grammars: Syntax in Levels and Locales
Tijs van der Storm and Felienne Hermans (CWI, Netherlands; University of Groningen, Netherlands; Vrije Universiteit Amsterdam, Netherlands) Programming language implementations are often one-size-fits-all. Irrespective of the ethnographic background or proficiency of their users, they offer a single, canonical syntax for all language users. Whereas professional software developers might be willing to learn a programming language all in one go, this might be a significant barrier for non-technical users, such as children who learn to program, or domain experts using domain-specific languages (DSLs). Parser tools, however, do not offer sufficient support for graduality or internationalization, leading (worst case) to maintaining multiple parsers, for each target class of users. In this paper we present Fabric, a grammar formalism that supports: 1) the gradual extension with (and deprecation of) syntactic constructs in consecutive levels ("vertical"), and, orthogonally, 2) the internationalization of syntax by translating keywords and shuffling sentence order ("horizontal"). This is done in such a way that downstream language processors (compilers, interpreters, type checkers etc.) are affected as little as possible. We discuss the design of Fabric and its implementation on top of the LARK parser generator, and how Fabric can be embedded in the Rascal language workbench. A case study on the gradual programming language Hedy shows that language levels can be represented and internationalized concisely, with hardly any duplication. We evaluate the Fabric embedding using the Rebel2 DSL, by translating it to Dutch, and "untranslating" its concrete syntax trees, to reuse its existing compiler. Fabric thus provides a principled approach to gradual syntax definition in levels and locales. 
@InProceedings{SLE22p134, author = {Tijs van der Storm and Felienne Hermans}, title = {Gradual Grammars: Syntax in Levels and Locales}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {134--147}, doi = {10.1145/3567512.3567524}, year = {2022}, } Publisher's Version Artifacts Reusable |
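The "horizontal" internationalization axis can be sketched with a keyword-locale table. This is only an illustration of the translate/untranslate round trip the evaluation relies on, not Fabric's actual mechanism (which also shuffles sentence order); the toy keywords and locales are invented.

```python
# Locale table for a toy language level (keywords invented for illustration).
LOCALES = {"en": {"print": "print", "repeat": "repeat"},
           "nl": {"print": "druk af", "repeat": "herhaal"}}

def translate(program, src, dst):
    """Swap keywords between locales. "Untranslating" a localized program
    back to the canonical locale lets the existing compiler be reused."""
    for key, word in LOCALES[src].items():
        program = program.replace(word, LOCALES[dst][key])
    return program

nl = translate("print 1", "en", "nl")     # localized surface syntax
back = translate(nl, "nl", "en")          # untranslated for the compiler
```

A grammar-level implementation performs this substitution on tokens or concrete syntax trees rather than raw strings, which avoids the obvious pitfalls of textual replacement.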
|
Hirschfeld, Robert |
SLE '22: "Partial Parsing for Structured ..."
Partial Parsing for Structured Editors
Tom Beckmann, Patrick Rein, Toni Mattis, and Robert Hirschfeld (University of Potsdam, Germany; Hasso Plattner Institute, Germany) Creating structured editors, which maintain a valid syntax tree at all times rather than allowing to edit program text, is typically a time consuming task. Recent work has investigated the use of existing general-purpose language grammars as a basis for automatically generating structured editors, thus considerably reducing the effort required. However, in these generated editors, input occurs through menu and mouse-based interaction, rather than via keyboard entry that is familiar to most users. In this paper we introduce modifications to a parser of general-purpose programming language grammars to support keyboard-centric interactions with generated structured editors. Specifically, we describe a system we call partial parsing to autocomplete language structures, removing the need for a menu of language constructs in favor of keyboard-based disambiguation. We demonstrate our system's applicability and performance for use in interactive, generated structured editors. Our system thus constitutes a step towards making structured editors generated from language grammars usable with more efficient and familiar keyboard-centric interactions. @InProceedings{SLE22p110, author = {Tom Beckmann and Patrick Rein and Toni Mattis and Robert Hirschfeld}, title = {Partial Parsing for Structured Editors}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {110--120}, doi = {10.1145/3567512.3567522}, year = {2022}, } Publisher's Version |
|
Jahanbin, Sorour |
SLE '22: "Partial Loading of Repository-Based ..."
Partial Loading of Repository-Based Models through Static Analysis
Sorour Jahanbin, Dimitris Kolovos, Simos Gerasimou, and Gerson Sunyé (University of York, UK; University of Nantes, France) As the size of software and system models grows, scalability issues in the current generation of model management languages (e.g. transformation, validation) and their supporting tooling become more prominent. To address this challenge, execution engines of model management programs need to become more efficient in their use of system resources. This paper presents an approach for partial loading of large models that reside in graph-database-backed model repositories. This approach leverages sophisticated static analysis of model management programs and auto-generation of graph (Cypher) queries to load only relevant model elements instead of naively loading the entire models into memory. Our experimental evaluation shows that our approach enables model management programs to process larger models, faster, and with a reduced memory footprint compared to the state of the art. @InProceedings{SLE22p266, author = {Sorour Jahanbin and Dimitris Kolovos and Simos Gerasimou and Gerson Sunyé}, title = {Partial Loading of Repository-Based Models through Static Analysis}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {266--278}, doi = {10.1145/3567512.3567535}, year = {2022}, } Publisher's Version |
|
Jeanjean, Pierre |
SLE '22: "A Language-Parametric Approach ..."
A Language-Parametric Approach to Exploratory Programming Environments
L. Thomas van Binsbergen, Damian Frölich, Mauricio Verano Merino, Joey Lai, Pierre Jeanjean, Tijs van der Storm, Benoit Combemale, and Olivier Barais (University of Amsterdam, Netherlands; Vrije Universiteit Amsterdam, Netherlands; Inria, France; University of Rennes, France; CNRS, France; IRISA, France; CWI, Netherlands; University of Groningen, Netherlands) Exploratory programming is a software development style in which code is a medium for prototyping ideas and solutions, and in which even the end-goal can evolve over time. Exploratory programming is valuable in various contexts such as programming education, data science, and end-user programming. However, there is a lack of appropriate tooling and language design principles to support exploratory programming. This paper presents a host language- and object language-independent protocol for exploratory programming akin to the Language Server Protocol. The protocol serves as a basis to develop novel (or extend existing) programming environments for exploratory programming such as computational notebooks and command-line REPLs. An architecture is presented on top of which prototype environments can be developed with relative ease, because existing (language) components can be reused. Our prototypes demonstrate that the proposed protocol is sufficiently expressive to support exploratory programming scenarios as encountered in literature within the software engineering, human-computer interaction and data science domains. @InProceedings{SLE22p175, author = {L. Thomas van Binsbergen and Damian Frölich and Mauricio Verano Merino and Joey Lai and Pierre Jeanjean and Tijs van der Storm and Benoit Combemale and Olivier Barais}, title = {A Language-Parametric Approach to Exploratory Programming Environments}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {175--188}, doi = {10.1145/3567512.3567527}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Keller, Gabriele |
SLE '22: "Property-Based Testing: Climbing ..."
Property-Based Testing: Climbing the Stairway to Verification
Zilin Chen, Christine Rizkallah, Liam O'Connor, Partha Susarla, Gerwin Klein, Gernot Heiser, and Gabriele Keller (UNSW, Australia; University of Melbourne, Australia; University of Edinburgh, UK; Independent, Australia; Proofcraft, Australia; Utrecht University, Netherlands) Property-based testing (PBT) is a powerful tool that is widely available in modern programming languages. It has been used to reduce formal software verification effort. We demonstrate how PBT can be used in conjunction with formal verification to incrementally gain greater assurance in code correctness by integrating PBT into the verification framework of Cogent---a programming language equipped with a certifying compiler for developing high-assurance systems components. Specifically, for PBT and formal verification to work in tandem, we structure the tests to mirror the refinement proof that we used in Cogent's verification framework: The expected behaviour of the system under test is captured by a functional correctness specification, which mimics the formal specification of the system, and we test the refinement relation between the implementation and the specification. We exhibit the additional benefits that this mutualism brings to developers and demonstrate the techniques we used in this style of PBT, by studying two concrete examples. @InProceedings{SLE22p84, author = {Zilin Chen and Christine Rizkallah and Liam O'Connor and Partha Susarla and Gerwin Klein and Gernot Heiser and Gabriele Keller}, title = {Property-Based Testing: Climbing the Stairway to Verification}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {84--97}, doi = {10.1145/3567512.3567520}, year = {2022}, } Publisher's Version Artifacts Reusable |
|
Kernig, Svenja |
SLE '22: "jGuard: Programming Misuse-Resilient ..."
jGuard: Programming Misuse-Resilient APIs
Simon Binder, Krishna Narasimhan, Svenja Kernig, and Mira Mezini (TU Darmstadt, Germany) APIs provide access to valuable features, but studies have shown that they are hard to use correctly. Misuses of these APIs can be quite costly. Even though documentation and usage manuals exist, developers find it hard to integrate these in practice. Several static and dynamic analysis tools exist to detect and mitigate API misuses. But it is natural to wonder if APIs can be made more difficult to misuse by capturing the knowledge of domain experts (e.g., API designers). Approaches like CogniCrypt have made inroads into this direction by offering API specification languages like CrySL which are then consumed by static analysis tools. But studies have shown that developers do not enjoy installing new tools into their pipeline. In this paper, we present jGuard, an extension to Java that allows API designers to directly encode their specifications while implementing their APIs. Code written in jGuard is then compiled to regular Java with the checks encoded as exceptions, thereby making sure the API user does not need to install any new tooling. Our evaluation shows that jGuard can be used to express the most commonly occurring misuses in practice, matches the accuracy of the state of the art in API misuse detection tools, and introduces negligible performance overhead. @InProceedings{SLE22p161, author = {Simon Binder and Krishna Narasimhan and Svenja Kernig and Mira Mezini}, title = {jGuard: Programming Misuse-Resilient APIs}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {161--174}, doi = {10.1145/3567512.3567526}, year = {2022}, } Publisher's Version Artifacts Functional |
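What "checks encoded as exceptions" means in practice can be sketched with a toy protocol-carrying API. jGuard emits such checks into regular Java; the Python sketch below hand-writes the equivalent state checks for an invented file-like API ("open before read, no read after close"), so all names are illustrative.

```python
class FileApi:
    """Sketch of an API whose designer-encoded usage rule is enforced by
    plain runtime checks raising exceptions, so that misuse fails fast
    without the client installing any analysis tooling."""
    def __init__(self):
        self._state = "new"
    def open(self):
        if self._state != "new":
            raise RuntimeError("open() called twice")
        self._state = "open"
    def read(self):
        if self._state != "open":
            raise RuntimeError("read() before open() or after close()")
        return "data"
    def close(self):
        self._state = "closed"

# Correct use succeeds; misuse is rejected at the call site.
f = FileApi()
f.open()
payload = f.read()

g = FileApi()
try:
    g.read()  # misuse: read before open
    misuse_msg = None
except RuntimeError as err:
    misuse_msg = str(err)
```

The point of generating these checks from a specification, rather than writing them by hand as here, is that the designer states the protocol once and every method gets consistent enforcement.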
|
Khorram, Faezeh |
SLE '22: "From Coverage Computation ..."
From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages
Faezeh Khorram, Erwan Bousse, Antonio Garmendia, Jean-Marie Mottu, Gerson Sunyé, and Manuel Wimmer (IMT Atlantique, France; Nantes Université, France; École Centrale Nantes, France; JKU Linz, Austria) To test a system efficiently, we need to know how good the defined test cases are and to localize detected faults in the system. Measuring test coverage can address both concerns as it is a popular metric for test quality evaluation and, at the same time, is the foundation of advanced fault localization techniques. However, for Domain-Specific Languages (DSLs), coverage metrics and associated tools are usually manually defined for each DSL, representing costly, error-prone, and non-reusable work. To address this problem, we propose a generic coverage computation and fault localization framework for DSLs. Considering a test suite executed on a model conforming to a DSL, we compute a coverage matrix based on three ingredients: the DSL specification, the coverage rules, and the model's execution trace. Using the test execution result and the computed coverage matrix, the framework calculates the suspiciousness-based ranking of the model's elements based on existing spectrum-based techniques to help the user in localizing the model's faults. We provide a tool atop the Eclipse GEMOC Studio and evaluate our approach using four different DSLs, with 297 test cases for 21 models in total. Results show that we can successfully create meaningful coverage matrices for all investigated DSLs and models. The applied fault localization techniques are capable of identifying the defects injected in the models based on the provided coverage measurements, thus demonstrating the usefulness of the automatically computed measurements.
@InProceedings{SLE22p235, author = {Faezeh Khorram and Erwan Bousse and Antonio Garmendia and Jean-Marie Mottu and Gerson Sunyé and Manuel Wimmer}, title = {From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {235--248}, doi = {10.1145/3567512.3567532}, year = {2022}, } Publisher's Version Info Artifacts Functional |
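The suspiciousness-based ranking that such a framework computes builds on standard spectrum-based formulas; a minimal sketch of one such formula (Ochiai) applied to a coverage matrix, with invented toy data, could look like:

```python
import math

def ochiai_ranking(coverage, passed):
    """Rank elements by Ochiai suspiciousness.
    coverage: test name -> set of covered model elements
    passed:   test name -> True if the test passed
    """
    total_failed = sum(1 for t in passed if not passed[t])
    elements = set().union(*coverage.values())
    scores = {}
    for e in elements:
        # ef: failing tests covering e; ep: passing tests covering e.
        ef = sum(1 for t in coverage if e in coverage[t] and not passed[t])
        ep = sum(1 for t in coverage if e in coverage[t] and passed[t])
        denom = math.sqrt(total_failed * (ef + ep))  # ef + nf == total_failed
        scores[e] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

Elements covered mostly by failing tests float to the top of the ranking, which is what helps the user localize faults in the model.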
|
Klein, Gerwin |
SLE '22: "Property-Based Testing: Climbing ..."
Property-Based Testing: Climbing the Stairway to Verification
Zilin Chen, Christine Rizkallah, Liam O'Connor, Partha Susarla, Gerwin Klein, Gernot Heiser, and Gabriele Keller (UNSW, Australia; University of Melbourne, Australia; University of Edinburgh, UK; Independent, Australia; Proofcraft, Australia; Utrecht University, Netherlands) Property-based testing (PBT) is a powerful tool that is widely available in modern programming languages. It has been used to reduce formal software verification effort. We demonstrate how PBT can be used in conjunction with formal verification to incrementally gain greater assurance in code correctness by integrating PBT into the verification framework of Cogent---a programming language equipped with a certifying compiler for developing high-assurance systems components. Specifically, for PBT and formal verification to work in tandem, we structure the tests to mirror the refinement proof that we used in Cogent's verification framework: The expected behaviour of the system under test is captured by a functional correctness specification, which mimics the formal specification of the system, and we test the refinement relation between the implementation and the specification. We exhibit the additional benefits that this mutualism brings to developers and demonstrate the techniques we used in this style of PBT, by studying two concrete examples. @InProceedings{SLE22p84, author = {Zilin Chen and Christine Rizkallah and Liam O'Connor and Partha Susarla and Gerwin Klein and Gernot Heiser and Gabriele Keller}, title = {Property-Based Testing: Climbing the Stairway to Verification}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {84--97}, doi = {10.1145/3567512.3567520}, year = {2022}, } Publisher's Version Artifacts Reusable |
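The paper's structure of testing a refinement relation between an implementation and a functional correctness specification can be mimicked outside Cogent with ordinary random testing. A toy sketch, where sorting stands in for the system under test and all names are invented:

```python
import random

def spec_sort(xs):
    """Functional correctness specification: the expected behaviour."""
    return sorted(xs)

def impl_sort(xs):
    """'Implementation' under test: a hand-written insertion sort."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def refines(impl, spec, trials=200, seed=0):
    """Property-based check of the refinement relation: on random inputs,
    the implementation's result must match the specification's."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        if impl(list(xs)) != spec(list(xs)):
            return False
    return True
```

A failing refinement check gives early, cheap feedback before attempting the full refinement proof, which is the incremental-assurance workflow the paper advocates.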
|
Kleppe, Anneke |
SLE '22: "Freon: An Open Web Native ..."
Freon: An Open Web Native Language Workbench
Jos Warmer and Anneke Kleppe (Independent, Netherlands) Freon (formerly called ProjectIt) is a language workbench that generates a set of tools to support a given domain-specific modeling language (DSL). The most prominent tool is a web-based projectional editor, but also included are a scoper, typer, validator, parser, unparser, and a JSON exporter/importer. Because DSLs have (sometimes very) different requirements, we do not assume Freon to be the one tool that can meet all these requirements. Instead, the architecture of the generated tool-set supports language designers in extending and adapting it in several different ways. In this paper we do not focus on the functionality of Freon itself, or on any of the generated tools, but on the flexibility that the chosen architecture delivers. @InProceedings{SLE22p30, author = {Jos Warmer and Anneke Kleppe}, title = {Freon: An Open Web Native Language Workbench}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {30--35}, doi = {10.1145/3567512.3567515}, year = {2022}, } Publisher's Version Info |
|
Kolovos, Dimitris |
SLE '22: "Selective Traceability for ..."
Selective Traceability for Rule-Based Model-to-Model Transformations
Qurat ul ain Ali, Dimitris Kolovos, and Konstantinos Barmpis (University of York, UK) Model-to-model (M2M) transformation is a key ingredient in a typical Model-Driven Engineering workflow and there are several tailored high-level interpreted languages for capturing and executing such transformations. While these languages enable the specification of concise transformations through task-specific constructs (rules/mappings, bindings), their use can pose scalability challenges when it comes to very large models. In this paper, we present an architecture for optimising the execution of model-to-model transformations written in such a language, by leveraging static analysis and automated program rewriting techniques. We demonstrate how static analysis and dependency information between rules can be used to reduce the size of the transformation trace and to optimise certain classes of transformations. Finally, we detail the performance benefits that can be delivered by this form of optimisation, through a series of benchmarks performed with an existing transformation language (Epsilon Transformation Language - ETL) and EMF-based models. Our experiments have shown considerable performance improvements compared to the existing ETL execution engine, without sacrificing any features of the language. @InProceedings{SLE22p98, author = {Qurat ul ain Ali and Dimitris Kolovos and Konstantinos Barmpis}, title = {Selective Traceability for Rule-Based Model-to-Model Transformations}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {98--109}, doi = {10.1145/3567512.3567521}, year = {2022}, } Publisher's Version SLE '22: "Partial Loading of Repository-Based ..." 
Partial Loading of Repository-Based Models through Static Analysis Sorour Jahanbin, Dimitris Kolovos, Simos Gerasimou, and Gerson Sunyé (University of York, UK; University of Nantes, France) As the size of software and system models grows, scalability issues in the current generation of model management languages (e.g. transformation, validation) and their supporting tooling become more prominent. To address this challenge, execution engines of model management programs need to become more efficient in their use of system resources. This paper presents an approach for partial loading of large models that reside in graph-database-backed model repositories. This approach leverages sophisticated static analysis of model management programs and auto-generation of graph (Cypher) queries to load only relevant model elements instead of naively loading the entire models into memory. Our experimental evaluation shows that our approach enables model management programs to process larger models faster and with a reduced memory footprint compared to the state of the art. @InProceedings{SLE22p266, author = {Sorour Jahanbin and Dimitris Kolovos and Simos Gerasimou and Gerson Sunyé}, title = {Partial Loading of Repository-Based Models through Static Analysis}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {266--278}, doi = {10.1145/3567512.3567535}, year = {2022}, } Publisher's Version |
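The core idea is that static analysis determines which element types a program actually touches, so only those need to be fetched via a generated Cypher query. A sketch of such a query generator (illustrative only; the paper's query generation is far more sophisticated, and the label names below are invented):

```python
def cypher_for(accessed_types):
    """Generate a Cypher query that loads only nodes whose labels a static
    analysis found to be accessed, instead of loading the whole model."""
    if not accessed_types:
        return "MATCH (n) RETURN n"  # fall back to full loading
    labels = " OR ".join(f"n:{t}" for t in sorted(accessed_types))
    return f"MATCH (n) WHERE {labels} RETURN n"
```

Restricting the MATCH to the analysed labels is what keeps irrelevant model elements out of memory.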
|
Kusmenko, Evgeny |
SLE '22: "Neural Language Models and ..."
Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE
Vincent Bertram, Miriam Boß, Evgeny Kusmenko, Imke Helene Nachmann, Bernhard Rumpe, Danilo Trotta, and Louis Wachtmeister (RWTH Aachen University, Germany) Systems engineering, in particular in the automotive domain, needs to cope with the massively increasing numbers of requirements that arise during the development process. The language in which requirements are written is mostly informal and highly individual. This hinders automated processing of requirements as well as the linking of requirements to models. Introducing formal requirement notations in existing projects leads to the challenge of translating masses of requirements and the necessity of training for requirements engineers. In this paper, we derive domain-specific language constructs helping us to avoid ambiguities in requirements and increase the level of formality. The main contribution is the adoption and evaluation of few-shot learning with large pretrained language models for the automated translation of informal requirements to structured languages such as a requirement DSL. @InProceedings{SLE22p260, author = {Vincent Bertram and Miriam Boß and Evgeny Kusmenko and Imke Helene Nachmann and Bernhard Rumpe and Danilo Trotta and Louis Wachtmeister}, title = {Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {260--265}, doi = {10.1145/3567512.3567534}, year = {2022}, } Publisher's Version |
|
Křikava, Filip |
SLE '22: "signatr: A Data-Driven Fuzzing ..."
signatr: A Data-Driven Fuzzing Tool for R
Alexi Turcotte, Pierre Donat-Bouillud, Filip Křikava, and Jan Vitek (Northeastern University, USA; Czech Technical University in Prague, Czechia) The fast-and-loose, permissive semantics of dynamic programming languages limit the power of static analyses. For that reason, soundness is often traded for precision through dynamic program analysis. Dynamic analysis is only as good as the available runnable code, and relying solely on test suites is fraught, as they do not cover the full gamut of possible behaviors. Fuzzing is an approach for automatically exercising code, and could be used to obtain more runnable code. However, the shape of user-defined data in dynamic languages is difficult to intuit, limiting a fuzzer's reach. We propose a feedback-driven blackbox fuzzing approach which draws inputs from a database of values recorded from existing code. We implement this approach in a tool called signatr for the R language. We present the insights of its design and implementation, and assess signatr's ability to uncover new behaviors by fuzzing 4,829 R functions from 100 R packages, revealing 1,195,184 new signatures. @InProceedings{SLE22p216, author = {Alexi Turcotte and Pierre Donat-Bouillud and Filip Křikava and Jan Vitek}, title = {signatr: A Data-Driven Fuzzing Tool for R}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {216--221}, doi = {10.1145/3567512.3567530}, year = {2022}, } Publisher's Version Artifacts Reusable |
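The core of a value-database-driven fuzzer, calling functions with recorded values and harvesting the observed type signatures, fits in a few lines. This is a deliberately simplified, non-feedback Python sketch of the idea; signatr itself targets R and is considerably richer:

```python
import random

def fuzz(fn, value_db, arity, budget=100, seed=1):
    """Call `fn` repeatedly with arguments drawn from a database of recorded
    values, collecting the distinct (argument types -> result type)
    signatures, including signatures that end in an error."""
    rng = random.Random(seed)
    signatures = set()
    for _ in range(budget):
        args = [rng.choice(value_db) for _ in range(arity)]
        try:
            result = fn(*args)
            outcome = type(result).__name__
        except Exception as exc:
            outcome = f"error:{type(exc).__name__}"
        signatures.add((tuple(type(a).__name__ for a in args), outcome))
    return signatures
```

Drawing arguments from values recorded in real code, rather than generating them blindly, is what lets the fuzzer reach functions whose input shapes are hard to intuit.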
|
Lai, Joey |
SLE '22: "A Language-Parametric Approach ..."
A Language-Parametric Approach to Exploratory Programming Environments
L. Thomas van Binsbergen, Damian Frölich, Mauricio Verano Merino, Joey Lai, Pierre Jeanjean, Tijs van der Storm, Benoit Combemale, and Olivier Barais (University of Amsterdam, Netherlands; Vrije Universiteit Amsterdam, Netherlands; Inria, France; University of Rennes, France; CNRS, France; IRISA, France; CWI, Netherlands; University of Groningen, Netherlands) Exploratory programming is a software development style in which code is a medium for prototyping ideas and solutions, and in which even the end-goal can evolve over time. Exploratory programming is valuable in various contexts such as programming education, data science, and end-user programming. However, there is a lack of appropriate tooling and language design principles to support exploratory programming. This paper presents a host language- and object language-independent protocol for exploratory programming akin to the Language Server Protocol. The protocol serves as a basis to develop novel (or extend existing) programming environments for exploratory programming such as computational notebooks and command-line REPLs. An architecture is presented on top of which prototype environments can be developed with relative ease, because existing (language) components can be reused. Our prototypes demonstrate that the proposed protocol is sufficiently expressive to support exploratory programming scenarios as encountered in literature within the software engineering, human-computer interaction and data science domains. @InProceedings{SLE22p175, author = {L. Thomas van Binsbergen and Damian Frölich and Mauricio Verano Merino and Joey Lai and Pierre Jeanjean and Tijs van der Storm and Benoit Combemale and Olivier Barais}, title = {A Language-Parametric Approach to Exploratory Programming Environments}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {175--188}, doi = {10.1145/3567512.3567527}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Li, Zhibo |
SLE '22: "Collection Skeletons: Declarative ..."
Collection Skeletons: Declarative Abstractions for Data Collections
Björn Franke, Zhibo Li, Magnus Morton, and Michel Steuwer (University of Edinburgh, UK; Huawei, UK) Modern programming languages provide programmers with rich abstractions for data collections as part of their standard libraries, e.g. Containers in the C++ STL, the Java Collections Framework, or the Scala Collections API. Typically, these collections frameworks are organised as hierarchies that provide programmers with common abstract data types (ADTs) like lists, queues, and stacks. While convenient, this approach introduces problems which ultimately affect application performance, as users over-specify collection data types, limiting implementation flexibility. In this paper, we develop Collection Skeletons, which provide a novel, declarative approach to data collections. Using our framework, programmers explicitly select properties for their collections, thereby truly decoupling specification from implementation. By making collection properties explicit, immediate benefits materialise in the form of a reduced risk of over-specification and increased implementation flexibility. We have prototyped our declarative abstractions for collections as a C++ library, and demonstrate that benchmark applications rewritten to use Collection Skeletons incur little or no overhead. In fact, for several benchmarks, we observe performance speedups (on average between 2.57 and 2.93, and up to 16.37) and also enhanced performance portability across three different hardware platforms. @InProceedings{SLE22p189, author = {Björn Franke and Zhibo Li and Magnus Morton and Michel Steuwer}, title = {Collection Skeletons: Declarative Abstractions for Data Collections}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {189--201}, doi = {10.1145/3567512.3567528}, year = {2022}, } Publisher's Version |
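The decoupling the paper describes, declaring properties rather than picking a concrete ADT, can be imitated in miniature. The property names below are invented for this sketch; the actual C++ library defines its own property vocabulary:

```python
from collections import deque

def make_collection(*properties):
    """Choose a backing data structure from declared properties rather than
    having the programmer commit to a concrete ADT up front."""
    props = set(properties)
    if "unique" in props:
        return set()        # uniqueness dominates: a hash set
    if "fifo" in props:
        return deque()      # cheap appends/pops at both ends
    return []               # no constraints: plain dynamic array
```

Because the caller states intent ("unique", "fifo") instead of naming a type, the library remains free to swap implementations, which is where the reported speedups and performance portability come from.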
|
Lubis, Luthfan Anshar |
SLE '22: "BatakJava: An Object-Oriented ..."
BatakJava: An Object-Oriented Programming Language with Versions
Luthfan Anshar Lubis, Yudai Tanabe, Tomoyuki Aotani, and Hidehiko Masuhara (Tokyo Institute of Technology, Japan; Mamezou, Japan) Programming with versions is a recent proposal that supports multiple versions of software components in a program. Though it would provide greater freedom for the programmer, the concept is only realized as a simple core calculus, called λVL, where a value consists of λ-terms with multiple versions. We explore a design space of programming with versions in the presence of data structures and module systems, and propose BatakJava, an object-oriented programming language in which multiple versions of a class can be used in a program. This paper presents BatakJava’s language design, its core semantics with subject reduction, an implementation as a source-to-Java translator, and a case study to understand how we can exploit multiple versions in BatakJava for developing an application program with an evolving library. @InProceedings{SLE22p222, author = {Luthfan Anshar Lubis and Yudai Tanabe and Tomoyuki Aotani and Hidehiko Masuhara}, title = {BatakJava: An Object-Oriented Programming Language with Versions}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {222--234}, doi = {10.1145/3567512.3567531}, year = {2022}, } Publisher's Version Artifacts Reusable |
|
Masuhara, Hidehiko |
SLE '22: "BatakJava: An Object-Oriented ..."
BatakJava: An Object-Oriented Programming Language with Versions
Luthfan Anshar Lubis, Yudai Tanabe, Tomoyuki Aotani, and Hidehiko Masuhara (Tokyo Institute of Technology, Japan; Mamezou, Japan) Programming with versions is a recent proposal that supports multiple versions of software components in a program. Though it would provide greater freedom for the programmer, the concept is only realized as a simple core calculus, called λVL, where a value consists of λ-terms with multiple versions. We explore a design space of programming with versions in the presence of data structures and module systems, and propose BatakJava, an object-oriented programming language in which multiple versions of a class can be used in a program. This paper presents BatakJava’s language design, its core semantics with subject reduction, an implementation as a source-to-Java translator, and a case study to understand how we can exploit multiple versions in BatakJava for developing an application program with an evolving library. @InProceedings{SLE22p222, author = {Luthfan Anshar Lubis and Yudai Tanabe and Tomoyuki Aotani and Hidehiko Masuhara}, title = {BatakJava: An Object-Oriented Programming Language with Versions}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {222--234}, doi = {10.1145/3567512.3567531}, year = {2022}, } Publisher's Version Artifacts Reusable |
|
Mattis, Toni |
SLE '22: "Partial Parsing for Structured ..."
Partial Parsing for Structured Editors
Tom Beckmann, Patrick Rein, Toni Mattis, and Robert Hirschfeld (University of Potsdam, Germany; Hasso Plattner Institute, Germany) Creating structured editors, which maintain a valid syntax tree at all times rather than allowing the program text to be edited freely, is typically a time-consuming task. Recent work has investigated the use of existing general-purpose language grammars as a basis for automatically generating structured editors, thus considerably reducing the effort required. However, in these generated editors, input occurs through menu- and mouse-based interaction, rather than via the keyboard entry that is familiar to most users. In this paper we introduce modifications to a parser of general-purpose programming language grammars to support keyboard-centric interactions with generated structured editors. Specifically, we describe a system we call partial parsing to autocomplete language structures, removing the need for a menu of language constructs in favor of keyboard-based disambiguation. We demonstrate our system's applicability and performance for use in interactive, generated structured editors. Our system thus constitutes a step towards making structured editors generated from language grammars usable with more efficient and familiar keyboard-centric interactions. @InProceedings{SLE22p110, author = {Tom Beckmann and Patrick Rein and Toni Mattis and Robert Hirschfeld}, title = {Partial Parsing for Structured Editors}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {110--120}, doi = {10.1145/3567512.3567522}, year = {2022}, } Publisher's Version |
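The keyboard-based disambiguation the paper argues for replaces a menu of constructs with prefix filtering against what the parser would accept next. A heavily simplified sketch (the construct list is invented, and real partial parsing works on grammar structure rather than a flat keyword list):

```python
def completions(prefix, acceptable):
    """Return the constructs the parser would accept next whose leading
    keyword matches what the user has typed so far."""
    return [c for c in acceptable if c.startswith(prefix)]

# Constructs some parser state might accept next (illustrative only).
STATEMENTS = ["if", "import", "for", "while", "return"]
```

As soon as the typed prefix narrows the candidates to one, the editor can insert the corresponding structure directly, with no menu interaction.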
|
Mezini, Mira |
SLE '22: "jGuard: Programming Misuse-Resilient ..."
jGuard: Programming Misuse-Resilient APIs
Simon Binder, Krishna Narasimhan, Svenja Kernig, and Mira Mezini (TU Darmstadt, Germany) APIs provide access to valuable features, but studies have shown that they are hard to use correctly. Misuses of these APIs can be quite costly. Even though documentation and usage manuals exist, developers find it hard to integrate these in practice. Several static and dynamic analysis tools exist to detect and mitigate API misuses. But it is natural to wonder if APIs can be made more difficult to misuse by capturing the knowledge of domain experts (e.g., API designers). Approaches like CogniCrypt have made inroads in this direction by offering API specification languages like CrySL, which are then consumed by static analysis tools. But studies have shown that developers do not enjoy installing new tools into their pipeline. In this paper, we present jGuard, an extension to Java that allows API designers to directly encode their specifications while implementing their APIs. Code written in jGuard is then compiled to regular Java with the checks encoded as exceptions, thereby making sure the API user does not need to install any new tooling. Our evaluation shows that jGuard can be used to express the most commonly occurring misuses in practice, matches the accuracy of state-of-the-art API misuse detection tools, and introduces negligible performance overhead. @InProceedings{SLE22p161, author = {Simon Binder and Krishna Narasimhan and Svenja Kernig and Mira Mezini}, title = {jGuard: Programming Misuse-Resilient APIs}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {161--174}, doi = {10.1145/3567512.3567526}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Morton, Magnus |
SLE '22: "Collection Skeletons: Declarative ..."
Collection Skeletons: Declarative Abstractions for Data Collections
Björn Franke, Zhibo Li, Magnus Morton, and Michel Steuwer (University of Edinburgh, UK; Huawei, UK) Modern programming languages provide programmers with rich abstractions for data collections as part of their standard libraries, e.g. Containers in the C++ STL, the Java Collections Framework, or the Scala Collections API. Typically, these collections frameworks are organised as hierarchies that provide programmers with common abstract data types (ADTs) like lists, queues, and stacks. While convenient, this approach introduces problems which ultimately affect application performance, as users over-specify collection data types, limiting implementation flexibility. In this paper, we develop Collection Skeletons, which provide a novel, declarative approach to data collections. Using our framework, programmers explicitly select properties for their collections, thereby truly decoupling specification from implementation. By making collection properties explicit, immediate benefits materialise in the form of a reduced risk of over-specification and increased implementation flexibility. We have prototyped our declarative abstractions for collections as a C++ library, and demonstrate that benchmark applications rewritten to use Collection Skeletons incur little or no overhead. In fact, for several benchmarks, we observe performance speedups (on average between 2.57 and 2.93, and up to 16.37) and also enhanced performance portability across three different hardware platforms. @InProceedings{SLE22p189, author = {Björn Franke and Zhibo Li and Magnus Morton and Michel Steuwer}, title = {Collection Skeletons: Declarative Abstractions for Data Collections}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {189--201}, doi = {10.1145/3567512.3567528}, year = {2022}, } Publisher's Version |
|
Mottu, Jean-Marie |
SLE '22: "From Coverage Computation ..."
From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages
Faezeh Khorram, Erwan Bousse, Antonio Garmendia, Jean-Marie Mottu, Gerson Sunyé, and Manuel Wimmer (IMT Atlantique, France; Nantes Université, France; École Centrale Nantes, France; JKU Linz, Austria) To test a system efficiently, we need to know how good the defined test cases are and to localize detected faults in the system. Measuring test coverage can address both concerns, as it is a popular metric for test quality evaluation and, at the same time, the foundation of advanced fault localization techniques. However, for Domain-Specific Languages (DSLs), coverage metrics and associated tools are usually defined manually for each DSL, which represents costly, error-prone, and non-reusable work. To address this problem, we propose a generic coverage computation and fault localization framework for DSLs. Considering a test suite executed on a model conforming to a DSL, we compute a coverage matrix based on three ingredients: the DSL specification, the coverage rules, and the model's execution trace. Using the test execution result and the computed coverage matrix, the framework calculates a suspiciousness-based ranking of the model's elements based on existing spectrum-based techniques to help the user localize the model's faults. We provide a tool atop the Eclipse GEMOC Studio and evaluate our approach using four different DSLs, with 297 test cases for 21 models in total. Results show that we can successfully create meaningful coverage matrices for all investigated DSLs and models. The applied fault localization techniques are capable of identifying the defects injected in the models based on the provided coverage measurements, thus demonstrating the usefulness of the automatically computed measurements.
@InProceedings{SLE22p235, author = {Faezeh Khorram and Erwan Bousse and Antonio Garmendia and Jean-Marie Mottu and Gerson Sunyé and Manuel Wimmer}, title = {From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {235--248}, doi = {10.1145/3567512.3567532}, year = {2022}, } Publisher's Version Info Artifacts Functional |
|
Nachmann, Imke Helene |
SLE '22: "Neural Language Models and ..."
Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE
Vincent Bertram, Miriam Boß, Evgeny Kusmenko, Imke Helene Nachmann, Bernhard Rumpe, Danilo Trotta, and Louis Wachtmeister (RWTH Aachen University, Germany) Systems engineering, in particular in the automotive domain, needs to cope with the massively increasing numbers of requirements that arise during the development process. The language in which requirements are written is mostly informal and highly individual. This hinders automated processing of requirements as well as the linking of requirements to models. Introducing formal requirement notations in existing projects leads to the challenge of translating masses of requirements and the necessity of training for requirements engineers. In this paper, we derive domain-specific language constructs helping us to avoid ambiguities in requirements and increase the level of formality. The main contribution is the adoption and evaluation of few-shot learning with large pretrained language models for the automated translation of informal requirements to structured languages such as a requirement DSL. @InProceedings{SLE22p260, author = {Vincent Bertram and Miriam Boß and Evgeny Kusmenko and Imke Helene Nachmann and Bernhard Rumpe and Danilo Trotta and Louis Wachtmeister}, title = {Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {260--265}, doi = {10.1145/3567512.3567534}, year = {2022}, } Publisher's Version |
|
Nakamaru, Tomoki |
SLE '22: "Yet Another Generating Method ..."
Yet Another Generating Method of Fluent Interfaces Supporting Flat- and Sub-chaining Styles
Tetsuro Yamazaki, Tomoki Nakamaru, and Shigeru Chiba (University of Tokyo, Japan) Researchers have discovered methods to generate fluent interfaces equipped with static checking to verify their calling conventions. This static checking is done by carefully designing classes and method signatures so that type checking performs a computation equivalent to syntax checking. In this paper, we propose a method to generate a fluent interface with syntax checking that accepts both styles of method chaining: flat-chaining style and sub-chaining style. Supporting both styles is worthwhile because it allows programmers to factor out parts of their method chaining for readability. Our method is based on grammar rewriting so that we can inspect the acceptable grammar. In conclusion, our method succeeds in generating such an interface when the input grammar is LL(1) and there is no non-terminal symbol that generates either only an empty string or nothing. @InProceedings{SLE22p249, author = {Tetsuro Yamazaki and Tomoki Nakamaru and Shigeru Chiba}, title = {Yet Another Generating Method of Fluent Interfaces Supporting Flat- and Sub-chaining Styles}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {249--259}, doi = {10.1145/3567512.3567533}, year = {2022}, } Publisher's Version |
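The encoding trick, arranging method signatures so that only grammatically valid chains type-check, can be demonstrated with a dynamic analogue in Python: each method returns the object for the next grammar state, so an out-of-order call simply has no such method. In the statically typed targets the papers address, the invalid chain is rejected at compile time instead. The toy flat-chaining grammar query := select() from_() [where()] end() is invented for this sketch:

```python
# Each class represents one state of the toy grammar; the methods available
# on it are exactly the tokens acceptable next.

class End:
    def end(self):
        return "ok"

class MaybeWhere(End):
    """After from_(), where() is optional before end()."""
    def where(self):
        return End()

class AfterSelect:
    def from_(self):
        return MaybeWhere()

def select():
    return AfterSelect()
```

Chains such as select().from_().end() and select().from_().where().end() go through, while select().where(...) fails because AfterSelect has no where method. (This sketch covers only flat chaining; the paper's contribution is generating interfaces that also accept sub-chaining, i.e. nested sub-chains passed as arguments.)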
|
Narasimhan, Krishna |
SLE '22: "jGuard: Programming Misuse-Resilient ..."
jGuard: Programming Misuse-Resilient APIs
Simon Binder, Krishna Narasimhan, Svenja Kernig, and Mira Mezini (TU Darmstadt, Germany) APIs provide access to valuable features, but studies have shown that they are hard to use correctly. Misuses of these APIs can be quite costly. Even though documentation and usage manuals exist, developers find it hard to integrate these in practice. Several static and dynamic analysis tools exist to detect and mitigate API misuses. But it is natural to wonder if APIs can be made more difficult to misuse by capturing the knowledge of domain experts (e.g., API designers). Approaches like CogniCrypt have made inroads in this direction by offering API specification languages like CrySL, which are then consumed by static analysis tools. But studies have shown that developers do not enjoy installing new tools into their pipeline. In this paper, we present jGuard, an extension to Java that allows API designers to directly encode their specifications while implementing their APIs. Code written in jGuard is then compiled to regular Java with the checks encoded as exceptions, thereby making sure the API user does not need to install any new tooling. Our evaluation shows that jGuard can be used to express the most commonly occurring misuses in practice, matches the accuracy of state-of-the-art API misuse detection tools, and introduces negligible performance overhead. @InProceedings{SLE22p161, author = {Simon Binder and Krishna Narasimhan and Svenja Kernig and Mira Mezini}, title = {jGuard: Programming Misuse-Resilient APIs}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {161--174}, doi = {10.1145/3567512.3567526}, year = {2022}, } Publisher's Version Artifacts Functional |
|
O'Connor, Liam |
SLE '22: "Property-Based Testing: Climbing ..."
Property-Based Testing: Climbing the Stairway to Verification
Zilin Chen, Christine Rizkallah, Liam O'Connor, Partha Susarla, Gerwin Klein, Gernot Heiser, and Gabriele Keller (UNSW, Australia; University of Melbourne, Australia; University of Edinburgh, UK; Independent, Australia; Proofcraft, Australia; Utrecht University, Netherlands) Property-based testing (PBT) is a powerful tool that is widely available in modern programming languages. It has been used to reduce formal software verification effort. We demonstrate how PBT can be used in conjunction with formal verification to incrementally gain greater assurance in code correctness by integrating PBT into the verification framework of Cogent---a programming language equipped with a certifying compiler for developing high-assurance systems components. Specifically, for PBT and formal verification to work in tandem, we structure the tests to mirror the refinement proof that we used in Cogent's verification framework: The expected behaviour of the system under test is captured by a functional correctness specification, which mimics the formal specification of the system, and we test the refinement relation between the implementation and the specification. We exhibit the additional benefits that this mutualism brings to developers and demonstrate the techniques we used in this style of PBT, by studying two concrete examples. @InProceedings{SLE22p84, author = {Zilin Chen and Christine Rizkallah and Liam O'Connor and Partha Susarla and Gerwin Klein and Gernot Heiser and Gabriele Keller}, title = {Property-Based Testing: Climbing the Stairway to Verification}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {84--97}, doi = {10.1145/3567512.3567520}, year = {2022}, } Publisher's Version Artifacts Reusable |
|
Pop, Adrian |
SLE '22: "Property Probes: Source Code ..."
Property Probes: Source Code Based Exploration of Program Analysis Results
Anton Risberg Alaküla, Görel Hedin, Niklas Fors, and Adrian Pop (Lund University, Sweden; Linköping University, Sweden) We present property probes, a mechanism for helping a developer interactively explore partial program analysis results in terms of the source program, and as the program is edited. A node locator data structure is introduced that maps between source code spans and program representation nodes, and that helps identify probed nodes in a robust way, after modifications to the source code. We have developed a client-server based tool supporting property probes, and argue that it is very helpful in debugging and understanding program analyses. We have evaluated our tool on several languages and analyses, including a full Java compiler and a tool for intraprocedural dataflow analysis. Our performance results show that the probe overhead is negligible even when analyzing large projects. @InProceedings{SLE22p148, author = {Anton Risberg Alaküla and Görel Hedin and Niklas Fors and Adrian Pop}, title = {Property Probes: Source Code Based Exploration of Program Analysis Results}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {148--160}, doi = {10.1145/3567512.3567525}, year = {2022}, } Publisher's Version Artifacts Reusable |
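A node locator's basic job, mapping a source position to the program-representation node whose span encloses it, can be sketched as follows. The paper's data structure additionally keeps this mapping robust across edits; the AST node names and spans below are invented for illustration:

```python
def locate(pos, nodes):
    """Return the innermost node whose (start, end) span contains `pos`;
    a minimal stand-in for the paper's node locator structure."""
    best = None
    for name, (start, end) in nodes.items():
        if start <= pos < end:
            # Prefer the narrowest enclosing span (the innermost node).
            if best is None or (end - start) < (nodes[best][1] - nodes[best][0]):
                best = name
    return best

# Toy program representation: node name -> character span (invented).
AST = {"CompilationUnit": (0, 100), "MethodDecl": (10, 60), "IfStmt": (20, 40)}
```

Given such a mapping, a probe attached to a source span can re-identify "its" node and re-evaluate the attached analysis property each time the program is re-analyzed.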
|
Rein, Patrick |
SLE '22: "Partial Parsing for Structured ..."
Partial Parsing for Structured Editors
Tom Beckmann, Patrick Rein, Toni Mattis, and Robert Hirschfeld (University of Potsdam, Germany; Hasso Plattner Institute, Germany) Creating structured editors, which maintain a valid syntax tree at all times rather than allowing to edit program text, is typically a time consuming task. Recent work has investigated the use of existing general-purpose language grammars as a basis for automatically generating structured editors, thus considerably reducing the effort required. However, in these generated editors, input occurs through menu and mouse-based interaction, rather than via keyboard entry that is familiar to most users. In this paper we introduce modifications to a parser of general-purpose programming language grammars to support keyboard-centric interactions with generated structured editors. Specifically, we describe a system we call partial parsing to autocomplete language structures, removing the need for a menu of language constructs in favor of keyboard-based disambiguation. We demonstrate our system's applicability and performance for use in interactive, generated structured editors. Our system thus constitutes a step towards making structured editors generated from language grammars usable with more efficient and familiar keyboard-centric interactions. @InProceedings{SLE22p110, author = {Tom Beckmann and Patrick Rein and Toni Mattis and Robert Hirschfeld}, title = {Partial Parsing for Structured Editors}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {110--120}, doi = {10.1145/3567512.3567522}, year = {2022}, } Publisher's Version |
|
Risberg Alaküla, Anton |
SLE '22: "Property Probes: Source Code ..."
Property Probes: Source Code Based Exploration of Program Analysis Results
Anton Risberg Alaküla, Görel Hedin, Niklas Fors, and Adrian Pop (Lund University, Sweden; Linköping University, Sweden) We present property probes, a mechanism for helping a developer interactively explore partial program analysis results in terms of the source program, and as the program is edited. A node locator data structure is introduced that maps between source code spans and program representation nodes, and that helps identify probed nodes in a robust way, after modifications to the source code. We have developed a client-server based tool supporting property probes, and argue that it is very helpful in debugging and understanding program analyses. We have evaluated our tool on several languages and analyses, including a full Java compiler and a tool for intraprocedural dataflow analysis. Our performance results show that the probe overhead is negligible even when analyzing large projects. @InProceedings{SLE22p148, author = {Anton Risberg Alaküla and Görel Hedin and Niklas Fors and Adrian Pop}, title = {Property Probes: Source Code Based Exploration of Program Analysis Results}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {148--160}, doi = {10.1145/3567512.3567525}, year = {2022}, } Publisher's Version Artifacts Reusable |
|
Rizkallah, Christine |
SLE '22: "Property-Based Testing: Climbing ..."
Property-Based Testing: Climbing the Stairway to Verification
Zilin Chen, Christine Rizkallah, Liam O'Connor, Partha Susarla, Gerwin Klein, Gernot Heiser, and Gabriele Keller (UNSW, Australia; University of Melbourne, Australia; University of Edinburgh, UK; Independent, Australia; Proofcraft, Australia; Utrecht University, Netherlands) Property-based testing (PBT) is a powerful tool that is widely available in modern programming languages. It has been used to reduce formal software verification effort. We demonstrate how PBT can be used in conjunction with formal verification to incrementally gain greater assurance in code correctness by integrating PBT into the verification framework of Cogent---a programming language equipped with a certifying compiler for developing high-assurance systems components. Specifically, for PBT and formal verification to work in tandem, we structure the tests to mirror the refinement proof that we used in Cogent's verification framework: The expected behaviour of the system under test is captured by a functional correctness specification, which mimics the formal specification of the system, and we test the refinement relation between the implementation and the specification. We exhibit the additional benefits that this mutualism brings to developers and demonstrate the techniques we used in this style of PBT, by studying two concrete examples. @InProceedings{SLE22p84, author = {Zilin Chen and Christine Rizkallah and Liam O'Connor and Partha Susarla and Gerwin Klein and Gernot Heiser and Gabriele Keller}, title = {Property-Based Testing: Climbing the Stairway to Verification}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {84--97}, doi = {10.1145/3567512.3567520}, year = {2022}, } Publisher's Version Artifacts Reusable |
|
Rumpe, Bernhard |
SLE '22: "Neural Language Models and ..."
Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE
Vincent Bertram, Miriam Boß, Evgeny Kusmenko, Imke Helene Nachmann, Bernhard Rumpe, Danilo Trotta, and Louis Wachtmeister (RWTH Aachen University, Germany) Systems engineering, in particular in the automotive domain, needs to cope with the massively increasing numbers of requirements that arise during the development process. The language in which requirements are written is mostly informal and highly individual. This hinders automated processing of requirements as well as the linking of requirements to models. Introducing formal requirement notations in existing projects leads to the challenge of translating masses of requirements and the necessity of training for requirements engineers. In this paper, we derive domain-specific language constructs helping us to avoid ambiguities in requirements and increase the level of formality. The main contribution is the adoption and evaluation of few-shot learning with large pretrained language models for the automated translation of informal requirements to structured languages such as a requirement DSL. @InProceedings{SLE22p260, author = {Vincent Bertram and Miriam Boß and Evgeny Kusmenko and Imke Helene Nachmann and Bernhard Rumpe and Danilo Trotta and Louis Wachtmeister}, title = {Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {260--265}, doi = {10.1145/3567512.3567534}, year = {2022}, } Publisher's Version |
|
Smits, Jeff |
SLE '22: "Optimising First-Class Pattern ..."
Optimising First-Class Pattern Matching
Jeff Smits, Toine Hartman, and Jesper Cockx (Delft University of Technology, Netherlands; Independent, Netherlands) Pattern matching is a high-level notation for programs to analyse the shape of data, and can be optimised to efficient low-level instructions. The Stratego language uses first-class pattern matching, a powerful form of pattern matching that traditional optimisation techniques do not apply to directly. In this paper, we investigate how to optimise programs that use first-class pattern matching. Concretely, we show how to map first-class pattern matching to a form close to traditional pattern matching, on which standard optimisations can be applied. Through benchmarks, we demonstrate the positive effect of these optimisations on the run-time performance of Stratego programs. We conclude that the expressive power of first-class pattern matching does not hamper the optimisation potential of a language that features it. @InProceedings{SLE22p74, author = {Jeff Smits and Toine Hartman and Jesper Cockx}, title = {Optimising First-Class Pattern Matching}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {74--83}, doi = {10.1145/3567512.3567519}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Steimann, Friedrich |
SLE '22: "The Semantics of Plurals ..."
The Semantics of Plurals
Friedrich Steimann and Marius Freitag (Fernuniversität in Hagen, Germany) Inside many software languages lives an expression language that caters for the computation of single values from single values. These languages' fixation on single-valuedness is often at odds with their application domains, in which many values, or plurals, regularly occur in the places of single. While the classical mathematical means of dealing with plurals is the set, in computing, other representations have evolved, notably strings and the much lesser known bunches. We review bunch theory in the context of expression languages including non-recursive functions, and show how giving bunches set semantics suggests that evaluating bunch functions amounts to computing with relations. We maintain that the ensuing seamless integration of relations in expression languages that otherwise know only functions makes a worthwhile contribution in a field in which the difference between modeling, with its preference for relations, and programming, with its preference for functions, is increasingly considered accidental. @InProceedings{SLE22p36, author = {Friedrich Steimann and Marius Freitag}, title = {The Semantics of Plurals}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {36--54}, doi = {10.1145/3567512.3567516}, year = {2022}, } Publisher's Version |
|
Steuwer, Michel |
SLE '22: "Collection Skeletons: Declarative ..."
Collection Skeletons: Declarative Abstractions for Data Collections
Björn Franke, Zhibo Li, Magnus Morton, and Michel Steuwer (University of Edinburgh, UK; Huawei, UK) Modern programming languages provide programmers with rich abstractions for data collections as part of their standard libraries, e.g. Containers in the C++ STL, the Java Collections Framework, or the Scala Collections API. Typically, these collections frameworks are organised as hierarchies that provide programmers with common abstract data types (ADTs) like lists, queues, and stacks. While convenient, this approach introduces problems which ultimately affect application performance due to users over-specifying collection data types limiting implementation flexibility. In this paper, we develop Collection Skeletons which provide a novel, declarative approach to data collections. Using our framework, programmers explicitly select properties for their collections, thereby truly decoupling specification from implementation. By making collection properties explicit immediate benefits materialise in form of reduced risk of over-specification and increased implementation flexibility. We have prototyped our declarative abstractions for collections as a C++ library, and demonstrate that benchmark applications rewritten to use Collection Skeletons incur little or no overhead. In fact, for several benchmarks, we observe performance speedups (on average between 2.57 to 2.93, and up to 16.37) and also enhanced performance portability across three different hardware platforms. @InProceedings{SLE22p189, author = {Björn Franke and Zhibo Li and Magnus Morton and Michel Steuwer}, title = {Collection Skeletons: Declarative Abstractions for Data Collections}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {189--201}, doi = {10.1145/3567512.3567528}, year = {2022}, } Publisher's Version |
|
Sunyé, Gerson |
SLE '22: "From Coverage Computation ..."
From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages
Faezeh Khorram, Erwan Bousse, Antonio Garmendia, Jean-Marie Mottu, Gerson Sunyé, and Manuel Wimmer (IMT Atlantique, France; Nantes Université, France; École Centrale Nantes, France; JKU Linz, Austria) To test a system efficiently, we need to know how good are the defined test cases and to localize detected faults in the system. Measuring test coverage can address both concerns as it is a popular metric for test quality evaluation and, at the same time, is the foundation of advanced fault localization techniques. However, for Domain-Specific Languages (DSLs), coverage metrics and associated tools are usually manually defined for each DSL representing costly, error-prone, and non-reusable work. To address this problem, we propose a generic coverage computation and fault localization framework for DSLs. Considering a test suite executed on a model conforming to a DSL, we compute a coverage matrix based on three ingredients: the DSL specification, the coverage rules, and the model's execution trace. Using the test execution result and the computed coverage matrix, the framework calculates the suspiciousness-based ranking of the model's elements based on existing spectrum-based techniques to help the user in localizing the model's faults. We provide a tool atop the Eclipse GEMOC Studio and evaluate our approach using four different DSLs, with 297 test cases for 21 models in total. Results show that we can successfully create meaningful coverage matrices for all investigated DSLs and models. The applied fault localization techniques are capable of identifying the defects injected in the models based on the provided coverage measurements, thus demonstrating the usefulness of the automatically computed measurements. 
@InProceedings{SLE22p235, author = {Faezeh Khorram and Erwan Bousse and Antonio Garmendia and Jean-Marie Mottu and Gerson Sunyé and Manuel Wimmer}, title = {From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {235--248}, doi = {10.1145/3567512.3567532}, year = {2022}, } Publisher's Version Info Artifacts Functional SLE '22: "Partial Loading of Repository-Based ..." Partial Loading of Repository-Based Models through Static Analysis Sorour Jahanbin, Dimitris Kolovos, Simos Gerasimou, and Gerson Sunyé (University of York, UK; University of Nantes, France) As the size of software and system models grows, scalability issues in the current generation of model management languages (e.g. transformation, validation) and their supporting tooling become more prominent. To address this challenge, execution engines of model management programs need to become more efficient in their use of system resources. This paper presents an approach for partial loading of large models that reside in graph-database-backed model repositories. This approach leverages sophisticated static analysis of model management programs and auto-generation of graph (Cypher) queries to load only relevant model elements instead of naively loading the entire models into memory. Our experimental evaluation shows that our approach enables model management programs to process larger models, faster, and with a reduced memory footprint compared to the state of the art. @InProceedings{SLE22p266, author = {Sorour Jahanbin and Dimitris Kolovos and Simos Gerasimou and Gerson Sunyé}, title = {Partial Loading of Repository-Based Models through Static Analysis}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {266--278}, doi = {10.1145/3567512.3567535}, year = {2022}, } Publisher's Version |
|
Susarla, Partha |
SLE '22: "Property-Based Testing: Climbing ..."
Property-Based Testing: Climbing the Stairway to Verification
Zilin Chen, Christine Rizkallah, Liam O'Connor, Partha Susarla, Gerwin Klein, Gernot Heiser, and Gabriele Keller (UNSW, Australia; University of Melbourne, Australia; University of Edinburgh, UK; Independent, Australia; Proofcraft, Australia; Utrecht University, Netherlands) Property-based testing (PBT) is a powerful tool that is widely available in modern programming languages. It has been used to reduce formal software verification effort. We demonstrate how PBT can be used in conjunction with formal verification to incrementally gain greater assurance in code correctness by integrating PBT into the verification framework of Cogent---a programming language equipped with a certifying compiler for developing high-assurance systems components. Specifically, for PBT and formal verification to work in tandem, we structure the tests to mirror the refinement proof that we used in Cogent's verification framework: The expected behaviour of the system under test is captured by a functional correctness specification, which mimics the formal specification of the system, and we test the refinement relation between the implementation and the specification. We exhibit the additional benefits that this mutualism brings to developers and demonstrate the techniques we used in this style of PBT, by studying two concrete examples. @InProceedings{SLE22p84, author = {Zilin Chen and Christine Rizkallah and Liam O'Connor and Partha Susarla and Gerwin Klein and Gernot Heiser and Gabriele Keller}, title = {Property-Based Testing: Climbing the Stairway to Verification}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {84--97}, doi = {10.1145/3567512.3567520}, year = {2022}, } Publisher's Version Artifacts Reusable |
|
Tanabe, Yudai |
SLE '22: "BatakJava: An Object-Oriented ..."
BatakJava: An Object-Oriented Programming Language with Versions
Luthfan Anshar Lubis, Yudai Tanabe, Tomoyuki Aotani, and Hidehiko Masuhara (Tokyo Institute of Technology, Japan; Mamezou, Japan) Programming with versions is a recent proposal that supports multiple versions of software components in a program. Though it would provide greater freedom for the programmer, the concept is only realized as a simple core calculus, called λVL, where a value consists of λ-terms with multiple versions. We explore a design space of programming with versions in the presence of data structures and module systems, and propose BatakJava, an object-oriented programming language in which multiple versions of a class can be used in a program. This paper presents BatakJava’s language design, its core semantics with subject reduction, an implementation as a source-to-Java translator, and a case study to understand how we can exploit multiple versions in BatakJava for developing an application program with an evolving library. @InProceedings{SLE22p222, author = {Luthfan Anshar Lubis and Yudai Tanabe and Tomoyuki Aotani and Hidehiko Masuhara}, title = {BatakJava: An Object-Oriented Programming Language with Versions}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {222--234}, doi = {10.1145/3567512.3567531}, year = {2022}, } Publisher's Version Artifacts Reusable |
|
Trotta, Danilo |
SLE '22: "Neural Language Models and ..."
Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE
Vincent Bertram, Miriam Boß, Evgeny Kusmenko, Imke Helene Nachmann, Bernhard Rumpe, Danilo Trotta, and Louis Wachtmeister (RWTH Aachen University, Germany) Systems engineering, in particular in the automotive domain, needs to cope with the massively increasing numbers of requirements that arise during the development process. The language in which requirements are written is mostly informal and highly individual. This hinders automated processing of requirements as well as the linking of requirements to models. Introducing formal requirement notations in existing projects leads to the challenge of translating masses of requirements and the necessity of training for requirements engineers. In this paper, we derive domain-specific language constructs helping us to avoid ambiguities in requirements and increase the level of formality. The main contribution is the adoption and evaluation of few-shot learning with large pretrained language models for the automated translation of informal requirements to structured languages such as a requirement DSL. @InProceedings{SLE22p260, author = {Vincent Bertram and Miriam Boß and Evgeny Kusmenko and Imke Helene Nachmann and Bernhard Rumpe and Danilo Trotta and Louis Wachtmeister}, title = {Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {260--265}, doi = {10.1145/3567512.3567534}, year = {2022}, } Publisher's Version |
|
Turcotte, Alexi |
SLE '22: "signatr: A Data-Driven Fuzzing ..."
signatr: A Data-Driven Fuzzing Tool for R
Alexi Turcotte, Pierre Donat-Bouillud, Filip Křikava, and Jan Vitek (Northeastern University, USA; Czech Technical University in Prague, Czechia) The fast-and-loose, permissive semantics of dynamic programming languages limit the power of static analyses. For that reason, soundness is often traded for precision through dynamic program analysis. Dynamic analysis is only as good as the available runnable code, and relying solely on test suites is fraught as they do not cover the full gamut of possible behaviors. Fuzzing is an approach for automatically exercising code, and could be used to obtain more runnable code. However, the shape of user-defined data in dynamic languages is difficult to intuit, limiting a fuzzer's reach. We propose a feedback-driven blackbox fuzzing approach which draws inputs from a database of values recorded from existing code. We implement this approach in a tool called signatr for the R language. We present the insights of its design and implementation, and assess signatr's ability to uncover new behaviors by fuzzing 4,829 R functions from 100 R packages, revealing 1,195,184 new signatures. @InProceedings{SLE22p216, author = {Alexi Turcotte and Pierre Donat-Bouillud and Filip Křikava and Jan Vitek}, title = {signatr: A Data-Driven Fuzzing Tool for R}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {216--221}, doi = {10.1145/3567512.3567530}, year = {2022}, } Publisher's Version Artifacts Reusable |
|
Van Binsbergen, L. Thomas |
SLE '22: "iCoLa: A Compositional Meta-language ..."
iCoLa: A Compositional Meta-language with Support for Incremental Language Development
Damian Frölich and L. Thomas van Binsbergen (University of Amsterdam, Netherlands) Programming languages providing high-level abstractions can increase programmers’ productivity and program safety. Language-oriented programming is a paradigm in which domain-specific languages are developed to solve problems within specific domains with (high-level) abstractions relevant to those domains. However, language development involves complex design and engineering processes. These processes can be simplified by reusing (parts of) existing languages and by offering language-parametric tooling. In this paper we present iCoLa, a meta-language supporting incremental (meta-)programming based on reusable components. In our implementation of iCoLa, languages are first-class citizens, providing the full power of the host-language (Haskell) to compose and manipulate languages. We demonstrate iCoLa through the construction of the Imp, SIMPLE, and MiniJava languages via the composition and restriction of language fragments and demonstrate the variability of our approach through the construction of several languages using a fixed-set of operators. @InProceedings{SLE22p202, author = {Damian Frölich and L. Thomas van Binsbergen}, title = {iCoLa: A Compositional Meta-language with Support for Incremental Language Development}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {202--215}, doi = {10.1145/3567512.3567529}, year = {2022}, } Publisher's Version SLE '22: "A Language-Parametric Approach ..." A Language-Parametric Approach to Exploratory Programming Environments
L. Thomas van Binsbergen, Damian Frölich, Mauricio Verano Merino, Joey Lai, Pierre Jeanjean, Tijs van der Storm, Benoit Combemale, and Olivier Barais (University of Amsterdam, Netherlands; Vrije Universiteit Amsterdam, Netherlands; Inria, France; University of Rennes, France; CNRS, France; IRISA, France; CWI, Netherlands; University of Groningen, Netherlands) Exploratory programming is a software development style in which code is a medium for prototyping ideas and solutions, and in which even the end-goal can evolve over time. Exploratory programming is valuable in various contexts such as programming education, data science, and end-user programming. However, there is a lack of appropriate tooling and language design principles to support exploratory programming. This paper presents a host language- and object language-independent protocol for exploratory programming akin to the Language Server Protocol. The protocol serves as a basis to develop novel (or extend existing) programming environments for exploratory programming such as computational notebooks and command-line REPLs. An architecture is presented on top of which prototype environments can be developed with relative ease, because existing (language) components can be reused. Our prototypes demonstrate that the proposed protocol is sufficiently expressive to support exploratory programming scenarios as encountered in literature within the software engineering, human-computer interaction and data science domains. @InProceedings{SLE22p175, author = {L. Thomas van Binsbergen and Damian Frölich and Mauricio Verano Merino and Joey Lai and Pierre Jeanjean and Tijs van der Storm and Benoit Combemale and Olivier Barais}, title = {A Language-Parametric Approach to Exploratory Programming Environments}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {175--188}, doi = {10.1145/3567512.3567527}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Van der Storm, Tijs |
SLE '22: "Gradual Grammars: Syntax in ..."
Gradual Grammars: Syntax in Levels and Locales
Tijs van der Storm and Felienne Hermans (CWI, Netherlands; University of Groningen, Netherlands; Vrije Universiteit Amsterdam, Netherlands) Programming language implementations are often one-size-fits-all. Irrespective of the ethnographic background or proficiency of their users, they offer a single, canonical syntax for all language users. Whereas professional software developers might be willing to learn a programming language all in one go, this might be a significant barrier for non-technical users, such as children who learn to program, or domain experts using domain-specific languages (DSLs). Parser tools, however, do not offer sufficient support for graduality or internationalization, leading (worst case) to maintaining multiple parsers, for each target class of users. In this paper we present Fabric, a grammar formalism that supports: 1) the gradual extension with (and deprecation of) syntactic constructs in consecutive levels ("vertical"), and, orthogonally, 2) the internationalization of syntax by translating keywords and shuffling sentence order ("horizontal"). This is done in such a way that downstream language processors (compilers, interpreters, type checkers etc.) are affected as little as possible. We discuss the design of Fabric and its implementation on top of the LARK parser generator, and how Fabric can be embedded in the Rascal language workbench. A case study on the gradual programming language Hedy shows that language levels can be represented and internationalized concisely, with hardly any duplication. We evaluate the Fabric embedding using the Rebel2 DSL, by translating it to Dutch, and "untranslating" its concrete syntax trees, to reuse its existing compiler. Fabric thus provides a principled approach to gradual syntax definition in levels and locales. 
@InProceedings{SLE22p134, author = {Tijs van der Storm and Felienne Hermans}, title = {Gradual Grammars: Syntax in Levels and Locales}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {134--147}, doi = {10.1145/3567512.3567524}, year = {2022}, } Publisher's Version Artifacts Reusable SLE '22: "A Language-Parametric Approach ..." A Language-Parametric Approach to Exploratory Programming Environments L. Thomas van Binsbergen, Damian Frölich, Mauricio Verano Merino, Joey Lai, Pierre Jeanjean, Tijs van der Storm, Benoit Combemale, and Olivier Barais (University of Amsterdam, Netherlands; Vrije Universiteit Amsterdam, Netherlands; Inria, France; University of Rennes, France; CNRS, France; IRISA, France; CWI, Netherlands; University of Groningen, Netherlands) Exploratory programming is a software development style in which code is a medium for prototyping ideas and solutions, and in which even the end-goal can evolve over time. Exploratory programming is valuable in various contexts such as programming education, data science, and end-user programming. However, there is a lack of appropriate tooling and language design principles to support exploratory programming. This paper presents a host language- and object language-independent protocol for exploratory programming akin to the Language Server Protocol. The protocol serves as a basis to develop novel (or extend existing) programming environments for exploratory programming such as computational notebooks and command-line REPLs. An architecture is presented on top of which prototype environments can be developed with relative ease, because existing (language) components can be reused. Our prototypes demonstrate that the proposed protocol is sufficiently expressive to support exploratory programming scenarios as encountered in literature within the software engineering, human-computer interaction and data science domains.
@InProceedings{SLE22p175, author = {L. Thomas van Binsbergen and Damian Frölich and Mauricio Verano Merino and Joey Lai and Pierre Jeanjean and Tijs van der Storm and Benoit Combemale and Olivier Barais}, title = {A Language-Parametric Approach to Exploratory Programming Environments}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {175--188}, doi = {10.1145/3567512.3567527}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Van Wijk, Koen |
SLE '22: "Workbench for Creating Block-Based ..."
Workbench for Creating Block-Based Environments
Mauricio Verano Merino and Koen van Wijk (Vrije Universiteit Amsterdam, Netherlands; ICT, Netherlands) Block-based environments are visual-programming environments that allow users to create programs by dragging and dropping blocks that resemble jigsaw puzzle pieces. These environments have proven to lower the entry barrier of programming for end-users. Besides using block-based environments for programming, they can also help edit popular semi-structured data languages such as JSON and YAML. However, creating new block-based environments is still challenging; developers can develop them in an ad-hoc way or using context-free grammars in a language workbench. Given the visual nature of block-based environments, both options are valid; however, developers have some limitations when describing them. In this paper, we present Blocklybench, which is a meta-block-based environment for describing block-based environments for both programming and semi-structured data languages. This tool allows developers to express the specific elements of block-based environments using the blocks notation. To evaluate Blocklybench, we present three case studies. Our results show that Blocklybench allows developers to describe block-based specific aspects of language constructs such as layout, color, block connections, and code generators. @InProceedings{SLE22p61, author = {Mauricio Verano Merino and Koen van Wijk}, title = {Workbench for Creating Block-Based Environments}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {61--73}, doi = {10.1145/3567512.3567518}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Verano Merino, Mauricio |
SLE '22: "Workbench for Creating Block-Based ..."
Workbench for Creating Block-Based Environments
Mauricio Verano Merino and Koen van Wijk (Vrije Universiteit Amsterdam, Netherlands; ICT, Netherlands) Block-based environments are visual-programming environments that allow users to create programs by dragging and dropping blocks that resemble jigsaw puzzle pieces. These environments have proven to lower the entry barrier of programming for end-users. Besides using block-based environments for programming, they can also help edit popular semi-structured data languages such as JSON and YAML. However, creating new block-based environments is still challenging; developers can develop them in an ad-hoc way or using context-free grammars in a language workbench. Given the visual nature of block-based environments, both options are valid; however, developers have some limitations when describing them. In this paper, we present Blocklybench, which is a meta-block-based environment for describing block-based environments for both programming and semi-structured data languages. This tool allows developers to express the specific elements of block-based environments using the blocks notation. To evaluate Blocklybench, we present three case studies. Our results show that Blocklybench allows developers to describe block-based specific aspects of language constructs such as layout, color, block connections, and code generators. @InProceedings{SLE22p61, author = {Mauricio Verano Merino and Koen van Wijk}, title = {Workbench for Creating Block-Based Environments}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {61--73}, doi = {10.1145/3567512.3567518}, year = {2022}, } Publisher's Version Artifacts Functional SLE '22: "A Language-Parametric Approach ..." A Language-Parametric Approach to Exploratory Programming Environments
L. Thomas van Binsbergen, Damian Frölich, Mauricio Verano Merino, Joey Lai, Pierre Jeanjean, Tijs van der Storm, Benoit Combemale, and Olivier Barais (University of Amsterdam, Netherlands; Vrije Universiteit Amsterdam, Netherlands; Inria, France; University of Rennes, France; CNRS, France; IRISA, France; CWI, Netherlands; University of Groningen, Netherlands) Exploratory programming is a software development style in which code is a medium for prototyping ideas and solutions, and in which even the end-goal can evolve over time. Exploratory programming is valuable in various contexts such as programming education, data science, and end-user programming. However, there is a lack of appropriate tooling and language design principles to support exploratory programming. This paper presents a host language- and object language-independent protocol for exploratory programming akin to the Language Server Protocol. The protocol serves as a basis to develop novel (or extend existing) programming environments for exploratory programming such as computational notebooks and command-line REPLs. An architecture is presented on top of which prototype environments can be developed with relative ease, because existing (language) components can be reused. Our prototypes demonstrate that the proposed protocol is sufficiently expressive to support exploratory programming scenarios as encountered in literature within the software engineering, human-computer interaction and data science domains. @InProceedings{SLE22p175, author = {L. Thomas van Binsbergen and Damian Frölich and Mauricio Verano Merino and Joey Lai and Pierre Jeanjean and Tijs van der Storm and Benoit Combemale and Olivier Barais}, title = {A Language-Parametric Approach to Exploratory Programming Environments}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {175--188}, doi = {10.1145/3567512.3567527}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Vitek, Jan |
SLE '22: "signatr: A Data-Driven Fuzzing ..."
signatr: A Data-Driven Fuzzing Tool for R
Alexi Turcotte, Pierre Donat-Bouillud, Filip Křikava, and Jan Vitek (Northeastern University, USA; Czech Technical University in Prague, Czechia) The fast-and-loose, permissive semantics of dynamic programming languages limit the power of static analyses. For that reason, soundness is often traded for precision through dynamic program analysis. Dynamic analysis is only as good as the available runnable code, and relying solely on test suites is fraught as they do not cover the full gamut of possible behaviors. Fuzzing is an approach for automatically exercising code, and could be used to obtain more runnable code. However, the shape of user-defined data in dynamic languages is difficult to intuit, limiting a fuzzer's reach. We propose a feedback-driven blackbox fuzzing approach which draws inputs from a database of values recorded from existing code. We implement this approach in a tool called signatr for the R language. We present the insights of its design and implementation, and assess signatr's ability to uncover new behaviors by fuzzing 4,829 R functions from 100 R packages, revealing 1,195,184 new signatures. @InProceedings{SLE22p216, author = {Alexi Turcotte and Pierre Donat-Bouillud and Filip Křikava and Jan Vitek}, title = {signatr: A Data-Driven Fuzzing Tool for R}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {216--221}, doi = {10.1145/3567512.3567530}, year = {2022}, } Publisher's Version Artifacts Reusable |
|
Wachsmuth, Guido |
SLE '22: "A Multi-target, Multi-paradigm ..."
A Multi-target, Multi-paradigm DSL Compiler for Algorithmic Graph Processing
Houda Boukham, Guido Wachsmuth, Martijn Dwars, and Dalila Chiadmi (Ecole Mohammadia d'Ingénieurs, Morocco; Oracle Labs, Morocco; Oracle Labs, Switzerland) Domain-specific language compilers need to close the gap between the domain abstractions of the language and the low-level concepts of the target platform. This can be challenging to achieve for compilers targeting multiple platforms with potentially very different computing paradigms. In this paper, we present a multi-target, multi-paradigm DSL compiler for algorithmic graph processing. Our approach centers around an intermediate representation and reusable, composable transformations to be shared between the different compiler targets. These transformations embrace abstractions that align closely with the concepts of a particular target platform, and disallow abstractions that are semantically more distant. We report on our experience implementing the compiler and highlight some of the challenges and requirements for applying language workbenches in industrial use cases. @InProceedings{SLE22p2, author = {Houda Boukham and Guido Wachsmuth and Martijn Dwars and Dalila Chiadmi}, title = {A Multi-target, Multi-paradigm DSL Compiler for Algorithmic Graph Processing}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {2--15}, doi = {10.1145/3567512.3567513}, year = {2022}, } Publisher's Version |
|
Wachtmeister, Louis |
SLE '22: "Neural Language Models and ..."
Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE
Vincent Bertram, Miriam Boß, Evgeny Kusmenko, Imke Helene Nachmann, Bernhard Rumpe, Danilo Trotta, and Louis Wachtmeister (RWTH Aachen University, Germany) Systems engineering, in particular in the automotive domain, needs to cope with the massively increasing numbers of requirements that arise during the development process. The language in which requirements are written is mostly informal and highly individual. This hinders automated processing of requirements as well as the linking of requirements to models. Introducing formal requirement notations in existing projects leads to the challenge of translating masses of requirements and the necessity of training for requirements engineers. In this paper, we derive domain-specific language constructs helping us to avoid ambiguities in requirements and increase the level of formality. The main contribution is the adoption and evaluation of few-shot learning with large pretrained language models for the automated translation of informal requirements to structured languages such as a requirement DSL. @InProceedings{SLE22p260, author = {Vincent Bertram and Miriam Boß and Evgeny Kusmenko and Imke Helene Nachmann and Bernhard Rumpe and Danilo Trotta and Louis Wachtmeister}, title = {Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {260--265}, doi = {10.1145/3567512.3567534}, year = {2022}, } Publisher's Version |
|
Warmer, Jos |
SLE '22: "Freon: An Open Web Native ..."
Freon: An Open Web Native Language Workbench
Jos Warmer and Anneke Kleppe (Independent, Netherlands) Freon (formerly called ProjectIt) is a language workbench that generates a set of tools to support a given domain specific modeling language (DSL). The most outstanding tool is a web-based projectional editor, but also included are a scoper, typer, validator, parser, unparser, and a JSON exporter/importer. Because DSLs have (sometimes very) different requirements, we do not assume Freon to be the one tool that can meet all these requirements. Instead the architecture of the generated tool-set supports language designers to extend and adapt it in several different ways. In this paper we do not focus on the functionality of Freon itself, or on any of the generated tools, but on the flexibility that the chosen architecture delivers. @InProceedings{SLE22p30, author = {Jos Warmer and Anneke Kleppe}, title = {Freon: An Open Web Native Language Workbench}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {30--35}, doi = {10.1145/3567512.3567515}, year = {2022}, } Publisher's Version Info |
|
Wimmer, Manuel |
SLE '22: "From Coverage Computation ..."
From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages
Faezeh Khorram, Erwan Bousse, Antonio Garmendia, Jean-Marie Mottu, Gerson Sunyé, and Manuel Wimmer (IMT Atlantique, France; Nantes Université, France; École Centrale Nantes, France; JKU Linz, Austria) To test a system efficiently, we need to know how good the defined test cases are, and to localize detected faults in the system. Measuring test coverage can address both concerns as it is a popular metric for test quality evaluation and, at the same time, is the foundation of advanced fault localization techniques. However, for Domain-Specific Languages (DSLs), coverage metrics and associated tools are usually manually defined for each DSL, representing costly, error-prone, and non-reusable work. To address this problem, we propose a generic coverage computation and fault localization framework for DSLs. Considering a test suite executed on a model conforming to a DSL, we compute a coverage matrix based on three ingredients: the DSL specification, the coverage rules, and the model's execution trace. Using the test execution result and the computed coverage matrix, the framework calculates the suspiciousness-based ranking of the model's elements based on existing spectrum-based techniques to help the user localize the model's faults. We provide a tool atop the Eclipse GEMOC Studio and evaluate our approach using four different DSLs, with 297 test cases for 21 models in total. Results show that we can successfully create meaningful coverage matrices for all investigated DSLs and models. The applied fault localization techniques are capable of identifying the defects injected in the models based on the provided coverage measurements, thus demonstrating the usefulness of the automatically computed measurements. 
@InProceedings{SLE22p235, author = {Faezeh Khorram and Erwan Bousse and Antonio Garmendia and Jean-Marie Mottu and Gerson Sunyé and Manuel Wimmer}, title = {From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {235--248}, doi = {10.1145/3567512.3567532}, year = {2022}, } Publisher's Version Info Artifacts Functional |
|
Yamazaki, Tetsuro |
SLE '22: "Yet Another Generating Method ..."
Yet Another Generating Method of Fluent Interfaces Supporting Flat- and Sub-chaining Styles
Tetsuro Yamazaki, Tomoki Nakamaru, and Shigeru Chiba (University of Tokyo, Japan) Researchers discovered methods to generate fluent interfaces equipped with static checking to verify their calling conventions. This static checking is done by carefully designing classes and method signatures to make type checking perform a calculation equivalent to syntax checking. In this paper, we propose a method to generate a fluent interface with syntax checking, which accepts both styles of method chaining: flat-chaining style and sub-chaining style. Supporting both styles is worthwhile because it allows programmers to split out parts of their method chains for readability. Our method is based on grammar rewriting so that we can inspect the acceptable grammar. In conclusion, our method succeeds in generating a fluent interface when the input grammar is LL(1) and there is no non-terminal symbol that generates either only an empty string or nothing. @InProceedings{SLE22p249, author = {Tetsuro Yamazaki and Tomoki Nakamaru and Shigeru Chiba}, title = {Yet Another Generating Method of Fluent Interfaces Supporting Flat- and Sub-chaining Styles}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {249--259}, doi = {10.1145/3567512.3567533}, year = {2022}, } Publisher's Version |
|
Zwaan, Aron |
SLE '22: "Specializing Scope Graph Resolution ..."
Specializing Scope Graph Resolution Queries
Aron Zwaan (Delft University of Technology, Netherlands) To warrant programmer productivity, type checker results should be correct and available quickly. Correctness can be provided when a type checker implementation corresponds to a declarative type system specification. Statix is a type system specification language which achieves this by automatically deriving type checker implementations from declarative typing rules. A key feature of Statix is that it uses scope graphs for declarative specification of name resolution. However, compared to hand-written type checkers, type checkers derived from Statix specifications have sub-optimal run time performance. In this paper, we identify and resolve a performance bottleneck in the Statix solver, namely part of the name resolution algorithm, using partial evaluation. To this end, we introduce a tailored procedural intermediate query resolution language, and provide a specializer that translates declarative queries to this language. Evaluating this specializer by comparing type checking run time performance on three benchmarks (Apache Commons CSV, IO, and Lang3) shows that our specializer improves query resolution time by up to 7.7x, which reduces the total type checking run time by 38-48%. @InProceedings{SLE22p121, author = {Aron Zwaan}, title = {Specializing Scope Graph Resolution Queries}, booktitle = {Proc.\ SLE}, publisher = {ACM}, pages = {121--133}, doi = {10.1145/3567512.3567523}, year = {2022}, } Publisher's Version Artifacts Reusable |
78 authors