SPLASH Workshop/Symposium Events 2022
2022 ACM SIGPLAN International Conference on Systems, Programming, Languages, and Applications: Software for Humanity (SPLASH Events 2022)

15th ACM SIGPLAN International Conference on Software Language Engineering (SLE 2022), December 6–7, 2022, Auckland, New Zealand

SLE 2022 – Proceedings


15th ACM SIGPLAN International Conference on Software Language Engineering (SLE 2022)


Title Page

Welcome from the Chairs
Welcome to the 15th ACM SIGPLAN International Conference on Software Language Engineering (SLE), co-located with the ACM SIGPLAN conference on Systems, Programming, Languages, and Applications (SPLASH) in Auckland, a vibrant port city in northern New Zealand, from December 5th to December 10th, 2022. Like its predecessors, this edition of the SLE conference, SLE 2022, is devoted to the principles of software languages: their design, their implementation, and their evolution. As such, SLE brings together researchers united by their common interest in the creation, capture, and tooling of software languages.

SLE 2022 Organization


People Do Not Want to Learn a New Language But a New Library (Keynote)
Shigeru Chiba
(University of Tokyo, Japan)
One day, a student raised a question: "I spent many years learning a programming language. Why do you try to develop yet another language? I don't wanna learn any more languages. One is enough!" My answer was: well, don't you hate learning a new library, too? People seem to accept learning a new library as necessary work, although they might not be happy to learn a new language (they might not be very happy to learn a new library either, but they seem much happier). However, a modern library is something we should regard as a programming language. During this talk, I will survey the technology around language-like libraries, which are often called embedded domain-specific languages. Then I will present my vision of where we, programming-language researchers, should go in further study.
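The idea of a "language-like library" can be made concrete with a small sketch (not from the talk; all names here are illustrative): an embedded DSL for arithmetic expressions, where operator overloading lets host-language syntax double as the DSL's syntax, so learning the library is, in effect, learning a small language.

```python
# Illustrative sketch: a tiny embedded DSL for arithmetic expressions.
# Host-language operators build an AST instead of computing values directly.

class Expr:
    def __add__(self, other): return Add(self, lit(other))
    def __mul__(self, other): return Mul(self, lit(other))

class Lit(Expr):
    def __init__(self, value): self.value = value
    def eval(self, env): return self.value

class Var(Expr):
    def __init__(self, name): self.name = name
    def eval(self, env): return env[self.name]

class Add(Expr):
    def __init__(self, l, r): self.l, self.r = l, r
    def eval(self, env): return self.l.eval(env) + self.r.eval(env)

class Mul(Expr):
    def __init__(self, l, r): self.l, self.r = l, r
    def eval(self, env): return self.l.eval(env) * self.r.eval(env)

def lit(x):
    # Wrap plain host-language values so they can appear in DSL terms.
    return x if isinstance(x, Expr) else Lit(x)

# "Programs" in the embedded DSL are ordinary host-language expressions:
x = Var("x")
expr = x * 2 + 1              # builds Add(Mul(Var("x"), Lit(2)), Lit(1))
print(expr.eval({"x": 20}))   # evaluates the AST under an environment: 41
```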

Publisher's Version


A Multi-target, Multi-paradigm DSL Compiler for Algorithmic Graph Processing
Houda Boukham, Guido Wachsmuth, Martijn Dwars, and Dalila Chiadmi
(Ecole Mohammadia d'Ingénieurs, Morocco; Oracle Labs, Morocco; Oracle Labs, Switzerland)
Domain-specific language compilers need to close the gap between the domain abstractions of the language and the low-level concepts of the target platform. This can be challenging to achieve for compilers targeting multiple platforms with potentially very different computing paradigms. In this paper, we present a multi-target, multi-paradigm DSL compiler for algorithmic graph processing. Our approach centers around an intermediate representation and reusable, composable transformations to be shared between the different compiler targets. These transformations embrace abstractions that align closely with the concepts of a particular target platform, and disallow abstractions that are semantically more distant. We report on our experience implementing the compiler and highlight some of the challenges and requirements for applying language workbenches in industrial use cases.

Publisher's Version
Lang-n-Prove: A DSL for Language Proofs
Matteo Cimini
(University of Massachusetts Lowell, USA)
Proofs of language properties often follow a schema that does not apply just to one language but, rather, applies to many languages of a certain class.
In this paper, we present Lang-n-Prove, a domain-specific language for expressing theorems and proofs in such a way that they apply to many languages. The main characteristic of Lang-n-Prove is that it contains linguistic features that are specific to the domain of language design.
We have used Lang-n-Prove to express the theorems and proofs of canonical forms lemmas, the progress theorem, and the type preservation theorem for a restricted class of functional languages.
We have applied our Lang-n-Prove proofs to several functional languages, including languages with polymorphism, exceptions, recursive types, list operations, and other common types and operators. Our tool has generated the proof code in Abella that machine-checks the type safety of all these languages, when the correct code for substitution lemmas is provided.

Publisher's Version
Freon: An Open Web Native Language Workbench
Jos Warmer and Anneke Kleppe
(Independent, Netherlands)
Freon (formerly called ProjectIt) is a language workbench that generates a set of tools to support a given domain-specific modeling language (DSL). The most prominent tool is a web-based projectional editor, but also included are a scoper, typer, validator, parser, unparser, and a JSON exporter/importer. Because DSLs have (sometimes very) different requirements, we do not assume Freon to be the one tool that can meet all of them. Instead, the architecture of the generated tool set supports language designers in extending and adapting it in several different ways. In this paper we focus not on the functionality of Freon itself, or on any of the generated tools, but on the flexibility that the chosen architecture delivers.

Publisher's Version Info
The Semantics of Plurals
Friedrich Steimann and Marius Freitag
(Fernuniversität in Hagen, Germany)
Inside many software languages lives an expression language that caters for the computation of single values from single values. These languages' fixation on single-valuedness is often at odds with their application domains, in which many values, or plurals, regularly occur in the places of single ones. While the classical mathematical means of dealing with plurals is the set, other representations have evolved in computing, notably strings and the much lesser-known bunches. We review bunch theory in the context of expression languages including non-recursive functions, and show how giving bunches set semantics suggests that evaluating bunch functions amounts to computing with relations. We maintain that the ensuing seamless integration of relations into expression languages that otherwise know only functions makes a worthwhile contribution in a field in which the difference between modeling, with its preference for relations, and programming, with its preference for functions, is increasingly considered accidental.
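The connection between set-valued bunches and relations can be sketched in a few lines (this is an illustrative reading, not the paper's formalism; `bunch`, `lift`, and `as_bunch` are hypothetical names): modelling a bunch as a set and lifting functions pointwise makes a bunch-valued function behave like the image of a relation.

```python
# Illustrative sketch: bunches as frozensets. Applying a lifted function to a
# bunch applies it to each element; a bunch-valued function is thus a relation.

def bunch(*xs):
    return frozenset(xs)

def as_bunch(v):
    # Treat a single value as a one-element bunch.
    return v if isinstance(v, frozenset) else frozenset([v])

def lift(f):
    """Lift a single-valued function to operate pointwise on bunches."""
    def lifted(b):
        if not b:
            return frozenset()
        return frozenset().union(*(as_bunch(f(x)) for x in b))
    return lifted

inc = lift(lambda x: x + 1)
print(sorted(inc(bunch(1, 2, 3))))   # pointwise application: [2, 3, 4]

# A function returning a bunch relates each input to several outputs,
# i.e. it denotes a relation rather than a function on single values.
pm = lift(lambda x: bunch(x, -x))    # "plus or minus"
print(sorted(pm(bunch(1, 2))))       # [-2, -1, 1, 2]
```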

Publisher's Version
Reflection as a Tool to Debug Objects
Steven Costiou, Vincent Aranega, and Marcus Denker
(University of Lille, France; Inria, France; CNRS, France; Centrale Lille, France; UMR 9189 CRIStAL, France)
In this paper, we share our experience with using reflection as a systematic tool to build advanced debuggers. We illustrate the usage and combination of reflection techniques for the implementation of object-centric debugging. Object-centric debugging is a technique for object-oriented systems that scopes debugging operations to specific objects. The implementation of this technique is not straightforward, as there is, to the best of our knowledge, no description in the literature of how to build such a debugger.
We describe an implementation of object-centric breakpoints. We built these breakpoints with Pharo, a highly reflective system, based on the combination of different classical reflection techniques: proxy, anonymous subclasses, and sub-method partial behavioral reflection. Because this implementation is based on common reflective techniques, it is applicable to other reflective languages and systems for which a set of identified primitives are available.

Publisher's Version
Workbench for Creating Block-Based Environments
Mauricio Verano Merino and Koen van Wijk
(Vrije Universiteit Amsterdam, Netherlands; ICT, Netherlands)
Block-based environments are visual-programming environments that allow users to create programs by dragging and dropping blocks that resemble jigsaw puzzle pieces. These environments have proven to lower the entry barrier of programming for end-users. Besides their use for programming, block-based environments can also help edit popular semi-structured data languages such as JSON and YAML. However, creating new block-based environments is still challenging; developers can develop them in an ad hoc way or using context-free grammars in a language workbench. Given the visual nature of block-based environments, both options are valid; however, developers face limitations when describing them. In this paper, we present Blocklybench, a meta-block-based environment for describing block-based environments for both programming and semi-structured data languages. This tool allows developers to express the specific elements of block-based environments using the blocks notation. To evaluate Blocklybench, we present three case studies. Our results show that Blocklybench allows developers to describe aspects of language constructs that are specific to block-based environments, such as layout, color, block connections, and code generators.

Publisher's Version Published Artifact Artifacts Available Artifacts Functional
Optimising First-Class Pattern Matching
Jeff Smits, Toine Hartman, and Jesper Cockx
(Delft University of Technology, Netherlands; Independent, Netherlands)
Pattern matching is a high-level notation for programs to analyse the shape of data, and can be optimised to efficient low-level instructions. The Stratego language uses first-class pattern matching, a powerful form of pattern matching that traditional optimisation techniques do not apply to directly.
In this paper, we investigate how to optimise programs that use first-class pattern matching. Concretely, we show how to map first-class pattern matching to a form close to traditional pattern matching, on which standard optimisations can be applied.
Through benchmarks, we demonstrate the positive effect of these optimisations on the run-time performance of Stratego programs. We conclude that the expressive power of first-class pattern matching does not hamper the optimisation potential of a language that features it.
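The gist of treating patterns as first-class values, and of lowering them to a form amenable to classical pattern-match optimisation, can be sketched as follows (an illustrative reconstruction, not the paper's Stratego machinery; the uppercase-variable convention and all names are hypothetical):

```python
# Illustrative sketch: patterns as first-class values (data that can be
# stored in lists and passed around), plus a simple "lowering" that groups
# rules by head constructor, the shape classical optimisations work on.

def match(pattern, term):
    """Return a bindings dict if pattern matches term, else None."""
    if isinstance(pattern, str) and pattern.isupper():   # pattern variable
        return {pattern: term}
    if (isinstance(pattern, tuple) and isinstance(term, tuple)
            and pattern[0] == term[0] and len(pattern) == len(term)):
        bindings = {}
        for p, t in zip(pattern[1:], term[1:]):
            sub = match(p, t)
            if sub is None:
                return None
            bindings.update(sub)
        return bindings
    return {} if pattern == term else None

# First-class style: rules are ordinary values, tried in order.
rules = [
    (("succ", ("succ", "N")), lambda b: ("plus2", b["N"])),
    (("succ", "N"),           lambda b: ("plus1", b["N"])),
    (("zero",),               lambda b: ("zero",)),
]

def apply_first(rules, term):
    for pattern, action in rules:
        b = match(pattern, term)
        if b is not None:
            return action(b)
    return None

# Lowered form: dispatch on the head symbol first, as compiled traditional
# pattern matching would, then try only the rules for that constructor.
def lower(rules):
    table = {}
    for pattern, action in rules:
        head = pattern[0] if isinstance(pattern, tuple) else None
        table.setdefault(head, []).append((pattern, action))
    def run(term):
        head = term[0] if isinstance(term, tuple) else None
        return apply_first(table.get(head, []), term)
    return run

fast = lower(rules)
term = ("succ", ("succ", ("zero",)))
print(apply_first(rules, term))   # ('plus2', ('zero',))
print(fast(term))                 # same result via head-symbol dispatch
```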

Publisher's Version Published Artifact Artifacts Available Artifacts Functional
Property-Based Testing: Climbing the Stairway to Verification
Zilin Chen, Christine Rizkallah, Liam O'Connor, Partha Susarla, Gerwin Klein, Gernot Heiser, and Gabriele Keller
(UNSW, Australia; University of Melbourne, Australia; University of Edinburgh, UK; Independent, Australia; Proofcraft, Australia; Utrecht University, Netherlands)
Property-based testing (PBT) is a powerful tool that is widely available in modern programming languages. It has been used to reduce formal software verification effort. We demonstrate how PBT can be used in conjunction with formal verification to incrementally gain greater assurance in code correctness by integrating PBT into the verification framework of Cogent---a programming language equipped with a certifying compiler for developing high-assurance systems components. Specifically, for PBT and formal verification to work in tandem, we structure the tests to mirror the refinement proof that we used in Cogent's verification framework: The expected behaviour of the system under test is captured by a functional correctness specification, which mimics the formal specification of the system, and we test the refinement relation between the implementation and the specification. We exhibit the additional benefits that this mutualism brings to developers and demonstrate the techniques we used in this style of PBT, by studying two concrete examples.
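The refinement-testing structure described here can be illustrated with a small randomised sketch (not from the paper; the bounded-queue example and all names are hypothetical): a functional specification serves as the model, an abstraction function maps concrete state back to the model, and the tested property is that both agree after every operation.

```python
# Illustrative sketch: property-based testing of a refinement relation
# between an implementation and a functional specification, using the
# standard library's random module as a minimal stand-in for a PBT tool.
import random

def spec_enqueue(queue, x, capacity):
    """Functional specification: a bounded queue as a plain list."""
    return queue if len(queue) >= capacity else queue + [x]

class RingBuffer:
    """'Implementation' under test: a fixed-size ring buffer."""
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.start, self.size, self.capacity = 0, 0, capacity

    def enqueue(self, x):
        if self.size < self.capacity:
            self.data[(self.start + self.size) % self.capacity] = x
            self.size += 1

    def to_list(self):
        # Abstraction function: concrete state -> specification state.
        return [self.data[(self.start + i) % self.capacity]
                for i in range(self.size)]

def check_refinement(trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        cap = rng.randint(1, 5)
        impl, model = RingBuffer(cap), []
        for _ in range(rng.randint(0, 10)):
            x = rng.randint(0, 99)
            impl.enqueue(x)
            model = spec_enqueue(model, x, cap)
            # Refinement property: the abstracted implementation state
            # must equal the state computed by the specification.
            assert impl.to_list() == model
    return True

print(check_refinement())   # True if no counterexample is found
```

A dedicated PBT library would add shrinking and smarter input generation, but the test structure, implementation versus abstracted specification, is the same.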

Publisher's Version Published Artifact Artifacts Available Artifacts Reusable
Selective Traceability for Rule-Based Model-to-Model Transformations
Qurat ul ain Ali, Dimitris Kolovos, and Konstantinos Barmpis
(University of York, UK)
Model-to-model (M2M) transformation is a key ingredient in a typical Model-Driven Engineering workflow and there are several tailored high-level interpreted languages for capturing and executing such transformations. While these languages enable the specification of concise transformations through task-specific constructs (rules/mappings, bindings), their use can pose scalability challenges when it comes to very large models. In this paper, we present an architecture for optimising the execution of model-to-model transformations written in such a language, by leveraging static analysis and automated program rewriting techniques. We demonstrate how static analysis and dependency information between rules can be used to reduce the size of the transformation trace and to optimise certain classes of transformations. Finally, we detail the performance benefits that can be delivered by this form of optimisation, through a series of benchmarks performed with an existing transformation language (Epsilon Transformation Language - ETL) and EMF-based models. Our experiments have shown considerable performance improvements compared to the existing ETL execution engine, without sacrificing any features of the language.

Publisher's Version
Partial Parsing for Structured Editors
Tom Beckmann, Patrick Rein, Toni Mattis, and Robert Hirschfeld
(University of Potsdam, Germany; Hasso Plattner Institute, Germany)
Creating structured editors, which maintain a valid syntax tree at all times rather than allowing free-form editing of program text, is typically a time-consuming task. Recent work has investigated the use of existing general-purpose language grammars as a basis for automatically generating structured editors, thus considerably reducing the effort required. However, in these generated editors, input occurs through menu- and mouse-based interaction, rather than via the keyboard entry that is familiar to most users.
In this paper we introduce modifications to a parser of general-purpose programming language grammars to support keyboard-centric interactions with generated structured editors. Specifically, we describe a system we call partial parsing to autocomplete language structures, removing the need for a menu of language constructs in favor of keyboard-based disambiguation. We demonstrate our system's applicability and performance for use in interactive, generated structured editors. Our system thus constitutes a step towards making structured editors generated from language grammars usable with more efficient and familiar keyboard-centric interactions.

Publisher's Version
Specializing Scope Graph Resolution Queries
Aron Zwaan
(Delft University of Technology, Netherlands)
To keep programmers productive, type checker results should be correct and available quickly. Correctness can be provided when a type checker implementation corresponds to a declarative type system specification. Statix is a type system specification language which achieves this by automatically deriving type checker implementations from declarative typing rules. A key feature of Statix is that it uses scope graphs for declarative specification of name resolution. However, compared to hand-written type checkers, type checkers derived from Statix specifications have sub-optimal run time performance.
In this paper, we identify and resolve a performance bottleneck in the Statix solver, namely part of the name resolution algorithm, using partial evaluation. To this end, we introduce a tailored procedural intermediate query resolution language, and provide a specializer that translates declarative queries to this language.
Evaluating this specializer by comparing type checking run time on three benchmarks (Apache Commons CSV, IO, and Lang3) shows that it improves query resolution time by up to 7.7x, which reduces the total type checking run time by 38–48%.

Publisher's Version Artifacts Reusable
Gradual Grammars: Syntax in Levels and Locales
Tijs van der Storm and Felienne Hermans
(CWI, Netherlands; University of Groningen, Netherlands; Vrije Universiteit Amsterdam, Netherlands)
Programming language implementations are often one-size-fits-all. Irrespective of the ethnographic background or proficiency of their users, they offer a single, canonical syntax for all language users. Whereas professional software developers might be willing to learn a programming language all in one go, this might be a significant barrier for non-technical users, such as children learning to program, or domain experts using domain-specific languages (DSLs). Parser tools, however, do not offer sufficient support for graduality or internationalization, leading, in the worst case, to maintaining a separate parser for each target class of users.
In this paper we present Fabric, a grammar formalism that supports: 1) the gradual extension with (and deprecation of) syntactic constructs in consecutive levels ("vertical"), and, orthogonally, 2) the internationalization of syntax by translating keywords and shuffling sentence order ("horizontal"). This is done in such a way that downstream language processors (compilers, interpreters, type checkers etc.) are affected as little as possible. We discuss the design of Fabric and its implementation on top of the LARK parser generator, and how Fabric can be embedded in the Rascal language workbench. A case study on the gradual programming language Hedy shows that language levels can be represented and internationalized concisely, with hardly any duplication. We evaluate the Fabric embedding using the Rebel2 DSL, by translating it to Dutch, and "untranslating" its concrete syntax trees, to reuse its existing compiler. Fabric thus provides a principled approach to gradual syntax definition in levels and locales.

Publisher's Version Published Artifact Artifacts Available Artifacts Reusable
Property Probes: Source Code Based Exploration of Program Analysis Results
Anton Risberg Alaküla, Görel Hedin, Niklas Fors, and Adrian Pop
(Lund University, Sweden; Linköping University, Sweden)
We present property probes, a mechanism for helping a developer interactively explore partial program analysis results in terms of the source program, and as the program is edited. A node locator data structure is introduced that maps between source code spans and program representation nodes, and that helps identify probed nodes in a robust way, after modifications to the source code. We have developed a client-server based tool supporting property probes, and argue that it is very helpful in debugging and understanding program analyses. We have evaluated our tool on several languages and analyses, including a full Java compiler and a tool for intraprocedural dataflow analysis. Our performance results show that the probe overhead is negligible even when analyzing large projects.

Publisher's Version Published Artifact Artifacts Available Artifacts Reusable
jGuard: Programming Misuse-Resilient APIs
Simon Binder, Krishna Narasimhan, Svenja Kernig, and Mira Mezini
(TU Darmstadt, Germany)
APIs provide access to valuable features, but studies have shown that they are hard to use correctly. Misuses of these APIs can be quite costly. Even though documentation and usage manuals exist, developers find it hard to integrate these in practice. Several static and dynamic analysis tools exist to detect and mitigate API misuses. But it is natural to wonder if APIs can be made more difficult to misuse by capturing the knowledge of domain experts (e.g., API designers). Approaches like CogniCrypt have made inroads in this direction by offering API specification languages like CrySL, which are then consumed by static analysis tools. But studies have shown that developers do not enjoy installing new tools into their pipeline. In this paper, we present jGuard, an extension to Java that allows API designers to directly encode their specifications while implementing their APIs. Code written in jGuard is then compiled to regular Java with the checks encoded as exceptions, thereby making sure the API user does not need to install any new tooling. Our evaluation shows that jGuard can be used to express the most commonly occurring misuses in practice, matches the accuracy of state-of-the-art API misuse detection tools, and introduces negligible performance overhead.

Publisher's Version Published Artifact Artifacts Functional
A Language-Parametric Approach to Exploratory Programming Environments
L. Thomas van Binsbergen, Damian Frölich, Mauricio Verano Merino, Joey Lai, Pierre Jeanjean, Tijs van der Storm, Benoit Combemale, and Olivier Barais
(University of Amsterdam, Netherlands; Vrije Universiteit Amsterdam, Netherlands; Inria, France; University of Rennes, France; CNRS, France; IRISA, France; CWI, Netherlands; University of Groningen, Netherlands)
Exploratory programming is a software development style in which code is a medium for prototyping ideas and solutions, and in which even the end-goal can evolve over time. Exploratory programming is valuable in various contexts such as programming education, data science, and end-user programming. However, there is a lack of appropriate tooling and language design principles to support exploratory programming. This paper presents a host language- and object language-independent protocol for exploratory programming akin to the Language Server Protocol. The protocol serves as a basis to develop novel (or extend existing) programming environments for exploratory programming such as computational notebooks and command-line REPLs. An architecture is presented on top of which prototype environments can be developed with relative ease, because existing (language) components can be reused. Our prototypes demonstrate that the proposed protocol is sufficiently expressive to support exploratory programming scenarios as encountered in literature within the software engineering, human-computer interaction and data science domains.

Publisher's Version Artifacts Functional
Collection Skeletons: Declarative Abstractions for Data Collections
Björn Franke, Zhibo Li, Magnus Morton, and Michel Steuwer
(University of Edinburgh, UK; Huawei, UK)
Modern programming languages provide programmers with rich abstractions for data collections as part of their standard libraries, e.g. Containers in the C++ STL, the Java Collections Framework, or the Scala Collections API. Typically, these collections frameworks are organised as hierarchies that provide programmers with common abstract data types (ADTs) like lists, queues, and stacks. While convenient, this approach introduces problems that ultimately affect application performance: users over-specify collection data types, which limits implementation flexibility. In this paper, we develop Collection Skeletons, which provide a novel, declarative approach to data collections. Using our framework, programmers explicitly select properties for their collections, thereby truly decoupling specification from implementation. By making collection properties explicit, immediate benefits materialise in the form of a reduced risk of over-specification and increased implementation flexibility. We have prototyped our declarative abstractions for collections as a C++ library, and demonstrate that benchmark applications rewritten to use Collection Skeletons incur little or no overhead. In fact, for several benchmarks we observe performance speedups (on average between 2.57 and 2.93, and up to 16.37) as well as enhanced performance portability across three different hardware platforms.

Publisher's Version
iCoLa: A Compositional Meta-language with Support for Incremental Language Development
Damian Frölich and L. Thomas van Binsbergen
(University of Amsterdam, Netherlands)
Programming languages providing high-level abstractions can increase programmers’ productivity and program safety. Language-oriented programming is a paradigm in which domain-specific languages are developed to solve problems within specific domains with (high-level) abstractions relevant to those domains. However, language development involves complex design and engineering processes. These processes can be simplified by reusing (parts of) existing languages and by offering language-parametric tooling.
In this paper we present iCoLa, a meta-language supporting incremental (meta-)programming based on reusable components. In our implementation of iCoLa, languages are first-class citizens, providing the full power of the host language (Haskell) to compose and manipulate languages. We demonstrate iCoLa through the construction of the Imp, SIMPLE, and MiniJava languages via the composition and restriction of language fragments, and demonstrate the variability of our approach through the construction of several languages using a fixed set of operators.

Publisher's Version
signatr: A Data-Driven Fuzzing Tool for R
Alexi Turcotte, Pierre Donat-Bouillud, Filip Křikava, and Jan Vitek
(Northeastern University, USA; Czech Technical University in Prague, Czechia)
The fast-and-loose, permissive semantics of dynamic programming languages limit the power of static analyses. For that reason, soundness is often traded for precision through dynamic program analysis. Dynamic analysis is only as good as the available runnable code, and relying solely on test suites is fraught as they do not cover the full gamut of possible behaviors. Fuzzing is an approach for automatically exercising code, and could be used to obtain more runnable code. However, the shape of user-defined data in dynamic languages is difficult to intuit, limiting a fuzzer's reach.
We propose a feedback-driven blackbox fuzzing approach which draws inputs from a database of values recorded from existing code. We implement this approach in a tool called signatr for the R language. We present the insights of its design and implementation, and assess signatr's ability to uncover new behaviors by fuzzing 4,829 R functions from 100 R packages, revealing 1,195,184 new signatures.

Publisher's Version Published Artifact Artifacts Available Artifacts Reusable
BatakJava: An Object-Oriented Programming Language with Versions
Luthfan Anshar Lubis, Yudai Tanabe, Tomoyuki Aotani, and Hidehiko Masuhara
(Tokyo Institute of Technology, Japan; Mamezou, Japan)
Programming with versions is a recent proposal that supports multiple versions of software components in a program. Though it would provide greater freedom for the programmer, the concept is only realized as a simple core calculus, called λVL, where a value consists of λ-terms with multiple versions. We explore a design space of programming with versions in the presence of data structures and module systems, and propose BatakJava, an object-oriented programming language in which multiple versions of a class can be used in a program. This paper presents BatakJava’s language design, its core semantics with subject reduction, an implementation as a source-to-Java translator, and a case study to understand how we can exploit multiple versions in BatakJava for developing an application program with an evolving library.

Publisher's Version Artifacts Reusable
From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages
Faezeh Khorram, Erwan Bousse, Antonio Garmendia, Jean-Marie Mottu, Gerson Sunyé, and Manuel Wimmer
(IMT Atlantique, France; Nantes Université, France; École Centrale Nantes, France; JKU Linz, Austria)
To test a system efficiently, we need to know how good the defined test cases are, and to localize detected faults in the system. Measuring test coverage can address both concerns, as it is a popular metric for test quality evaluation and, at the same time, the foundation of advanced fault localization techniques. However, for Domain-Specific Languages (DSLs), coverage metrics and associated tools are usually defined manually for each DSL, which is costly, error-prone, and non-reusable work.
To address this problem, we propose a generic coverage computation and fault localization framework for DSLs. Considering a test suite executed on a model conforming to a DSL, we compute a coverage matrix based on three ingredients: the DSL specification, the coverage rules, and the model's execution trace. Using the test execution result and the computed coverage matrix, the framework calculates the suspiciousness-based ranking of the model's elements based on existing spectrum-based techniques to help the user in localizing the model's faults. We provide a tool atop the Eclipse GEMOC Studio and evaluate our approach using four different DSLs, with 297 test cases for 21 models in total. Results show that we can successfully create meaningful coverage matrices for all investigated DSLs and models. The applied fault localization techniques are capable of identifying the defects injected in the models based on the provided coverage measurements, thus demonstrating the usefulness of the automatically computed measurements.

Publisher's Version Published Artifact Info Artifacts Available Artifacts Functional
Yet Another Generating Method of Fluent Interfaces Supporting Flat- and Sub-chaining Styles
Tetsuro Yamazaki, Tomoki Nakamaru, and Shigeru Chiba
(University of Tokyo, Japan)
Researchers have discovered methods to generate fluent interfaces equipped with static checking to verify their calling conventions. This static checking is done by carefully designing classes and method signatures so that type checking performs a calculation equivalent to syntax checking. In this paper, we propose a method to generate a fluent interface with syntax checking that accepts both styles of method chaining: the flat-chaining style and the sub-chaining style. Supporting both styles is worthwhile because it allows programmers to factor out parts of their method chains for readability. Our method is based on grammar rewriting, so that the acceptable grammar can be inspected. In conclusion, our method succeeds in generating such an interface when the input grammar is LL(1) and contains no non-terminal symbol that generates either only the empty string or nothing.
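The two chaining styles can be contrasted with a small sketch (not from the paper; the query-builder interface and all names are hypothetical): flat-chaining keeps every call in one uninterrupted chain, while sub-chaining nests a separately built chain as an argument.

```python
# Illustrative sketch: a hypothetical immutable fluent builder supporting
# both flat-chaining and sub-chaining styles for a small query language.

class Q:
    def __init__(self, terms=()):
        self.terms = tuple(terms)

    def term(self, t):
        # Each call returns a new builder, enabling method chaining.
        return Q(self.terms + (t,))

    def group(self, sub):
        # Sub-chaining: a separately built chain is nested as one unit.
        return Q(self.terms + ("(" + " ".join(sub.terms) + ")",))

    def build(self):
        return " ".join(self.terms)

# Flat-chaining style: one uninterrupted chain.
flat = Q().term("a").term("b").term("c").build()

# Sub-chaining style: part of the chain is factored out for readability.
sub = Q().term("a").group(Q().term("b").term("c")).build()

print(flat)   # a b c
print(sub)    # a (b c)
```

In a statically checked fluent interface, the return types of `term`, `group`, and `build` would be crafted so that the type checker rejects chains that violate the interface's grammar.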

Publisher's Version
Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE
Vincent Bertram, Miriam Boß, Evgeny Kusmenko, Imke Helene Nachmann, Bernhard Rumpe, Danilo Trotta, and Louis Wachtmeister
(RWTH Aachen University, Germany)
Systems engineering, in particular in the automotive domain, needs to cope with the massively increasing numbers of requirements that arise during the development process. The language in which requirements are written is mostly informal and highly individual. This hinders automated processing of requirements as well as the linking of requirements to models. Introducing formal requirement notations in existing projects leads to the challenge of translating masses of requirements and the necessity of training for requirements engineers. In this paper, we derive domain-specific language constructs helping us to avoid ambiguities in requirements and increase the level of formality. The main contribution is the adoption and evaluation of few-shot learning with large pretrained language models for the automated translation of informal requirements to structured languages such as a requirement DSL.

Publisher's Version
Partial Loading of Repository-Based Models through Static Analysis
Sorour Jahanbin, Dimitris Kolovos, Simos Gerasimou, and Gerson Sunyé
(University of York, UK; University of Nantes, France)
As the size of software and system models grows, scalability issues in the current generation of model management languages (e.g. transformation, validation) and their supporting tooling become more prominent. To address this challenge, execution engines of model management programs need to become more efficient in their use of system resources. This paper presents an approach for partial loading of large models that reside in graph-database-backed model repositories. This approach leverages sophisticated static analysis of model management programs and auto-generation of graph (Cypher) queries to load only relevant model elements instead of naively loading the entire models into memory. Our experimental evaluation shows that our approach enables model management programs to process larger models, faster, and with a reduced memory footprint compared to the state of the art.

Publisher's Version
