SPLASH Workshop/Symposium Events 2022
2022 ACM SIGPLAN International Conference on Systems, Programming, Languages, and Applications: Software for Humanity (SPLASH Events 2022)

18th ACM SIGPLAN International Symposium on Dynamic Languages (DLS 2022), December 7, 2022, Auckland, New Zealand

DLS 2022 – Proceedings



Title Page

Message from the Chairs
Welcome to the 18th edition of the Dynamic Language Symposium (DLS), co-located with SPLASH 2022.
DLS is the premier forum for researchers and practitioners to share research results and experience on all aspects of dynamic languages, by which we mean languages such as Clojure, Dart, Elixir, Erlang, JavaScript, Julia, Lisp, Lua, Perl, Python, Ruby, R, Racket, Scheme, Smalltalk, and more.


Execution vs. Parse-Based Language Servers: Tradeoffs and Opportunities for Language-Agnostic Tooling for Dynamic Languages
Stefan Marr, Humphrey Burchell, and Fabio Niephaus
(University of Kent, UK; Oracle Labs, Germany; University of Potsdam, Germany; Hasso Plattner Institute, Germany)
With the wide adoption of the language server protocol, the desire to have IDE-style tooling even for niche and research languages has exploded. The Truffle language framework serves this desire by offering language implementers an almost zero-effort way to provide IDE features. However, this existing approach needs to execute the code being worked on, ideally with full unit-test coverage, to capture much of the information needed for an IDE.
To capture information more reliably and avoid the need to execute the code being worked on, we propose a new parse-based design for language servers. Our solution provides a language-agnostic interface for structural information, with which we can support most common IDE features for dynamic languages.
Comparing the two approaches, we find that our new parse-based approach requires only a modest development effort for each language and has only minor tradeoffs for precision, for instance for code completion, compared to Truffle's execution-based approach.
Further, we show that less than 1,000 lines of code capture enough detail to provide much of the typical IDE functionality, an order of magnitude less code than ad hoc language servers require. We tested our approach with the custom parsers of Newspeak and SOM, as well as with SimpleLanguage's ANTLR grammar, without any changes to the grammar. Combining parse- and execution-based approaches has the potential to provide good and precise IDE tooling for a wide range of languages with only a small development effort. By itself, our approach would be a good addition to the many libraries implementing the language server protocol, enabling low-effort implementations of IDE features.
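To make the idea of a language-agnostic structural interface concrete, the following is a minimal illustrative sketch, not the paper's actual API: all names (StructuralProbe, Symbol, SymbolKind) are invented here. A language-specific parser feeds definitions into a shared structure, over which generic IDE features such as prefix-based code completion can be implemented without executing the program.

```python
# Hypothetical sketch of a language-agnostic structural interface for a
# parse-based language server. Names are invented for illustration.
from dataclasses import dataclass, field
from enum import Enum, auto

class SymbolKind(Enum):
    CLASS = auto()
    METHOD = auto()
    VARIABLE = auto()

@dataclass
class Symbol:
    name: str
    kind: SymbolKind
    line: int  # 1-based source line of the definition
    children: list["Symbol"] = field(default_factory=list)

class StructuralProbe:
    """Collects symbols while a language-specific parser walks its tree."""
    def __init__(self) -> None:
        self.symbols: list[Symbol] = []

    def add(self, sym: Symbol) -> None:
        self.symbols.append(sym)

    def completions(self, prefix: str) -> list[str]:
        # Prefix completion over all known symbols, ignoring scope -- the
        # kind of approximation a parse-based server can make without
        # running the code (hence minor precision tradeoffs).
        def walk(syms):
            for s in syms:
                yield s
                yield from walk(s.children)
        return sorted(s.name for s in walk(self.symbols)
                      if s.name.startswith(prefix))

# Usage: a parser for any language populates the probe.
probe = StructuralProbe()
cls = Symbol("Point", SymbolKind.CLASS, 1)
cls.children.append(Symbol("println", SymbolKind.METHOD, 2))
probe.add(cls)
probe.add(Symbol("printString", SymbolKind.METHOD, 10))
print(probe.completions("print"))  # ['printString', 'println']
```

The design choice sketched here mirrors the abstract's point: because only parser output is needed, supporting a new language means mapping its parse tree onto this small shared vocabulary rather than writing a full language server.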

Who You Gonna Call: Analyzing the Run-Time Call-Site Behavior of Ruby Applications
Sophie Kaleba, Octave Larose, Richard Jones, and Stefan Marr
(University of Kent, UK)
Applications written in dynamic languages are growing ever larger, and companies increasingly use multi-million-line codebases in production. At the same time, dynamic languages rely heavily on dynamic optimizations, particularly those that reduce the overhead of method calls. In this work, we study the call-site behavior of the Ruby benchmarks that are being used to guide the development of upcoming Ruby implementations such as TruffleRuby and YJIT. We study the interaction of call-site lookup caches, method splitting, and elimination of duplicate call-targets. We find that these optimizations are highly effective on small and large benchmarks alike, for methods and closures both, and help to open up opportunities for further optimizations such as inlining. However, we show that TruffleRuby’s splitting may be applied too aggressively to already-monomorphic call-sites, at a run-time cost. We also find three distinct patterns in the evolution of call-site behavior over time, which may help guide novel optimizations. We believe our results can support language implementers in optimizing runtime systems for large codebases built in dynamic languages.
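The call-site lookup caches studied in this abstract can be illustrated with a small sketch, written in plain Python for exposition rather than taken from the paper's implementation. Each call-site remembers which method it resolved for each receiver class; a site that only ever sees one class (monomorphic) stays on the fast path, while splitting such already-monomorphic sites, as the abstract notes, adds cost without benefit.

```python
# Illustrative sketch of a call-site lookup cache (inline cache) for a
# dynamic language; names and structure are invented for exposition.
class CallSite:
    """Caches receiver-class -> method mappings at one call-site."""
    def __init__(self, selector: str) -> None:
        self.selector = selector
        self.cache = {}    # receiver class -> resolved method
        self.hits = 0
        self.misses = 0

    def dispatch(self, receiver, *args):
        klass = type(receiver)
        method = self.cache.get(klass)
        if method is None:
            self.misses += 1                     # slow path: full lookup
            method = getattr(klass, self.selector)
            self.cache[klass] = method
        else:
            self.hits += 1                       # fast path: cached target
        return method(receiver, *args)

    @property
    def degree(self) -> int:
        # 1 => monomorphic call-site, 2+ => polymorphic
        return len(self.cache)

# Usage: two receiver classes make the same call-site polymorphic.
class Square:
    def area(self):
        return 4

class Circle:
    def area(self):
        return 3

site = CallSite("area")
site.dispatch(Square())   # miss: first Square seen here
site.dispatch(Square())   # hit: cached target reused
site.dispatch(Circle())   # miss: site becomes polymorphic
print(site.degree, site.hits, site.misses)  # 2 1 2
```

A site whose `degree` stays at 1 is the monomorphic case where, per the abstract, aggressive splitting can be wasted effort, since the cached target is already unique.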

