SPLASH Workshop/Symposium Events 2022
2022 ACM SIGPLAN International Conference on Systems, Programming, Languages, and Applications: Software for Humanity (SPLASH Events 2022)

14th ACM SIGPLAN International Workshop on Virtual Machines and Intermediate Languages (VMIL 2022), December 5, 2022, Auckland, New Zealand

VMIL 2022 – Proceedings




Title Page

Welcome from the Chairs
Welcome to the 14th ACM SIGPLAN International Workshop on Virtual Machines and Intermediate Languages (VMIL 2022). The workshop is co-located with SPLASH'22 and will be held as a hybrid event in Auckland, New Zealand. The workshop aims to advance the state of the art in the design and implementation of programming systems, with virtual machines---broadly defined---as its focus. VMIL attracted a sizable seven submissions this year, recovering from a submission low caused by COVID-19 last year.


Ease Virtual Machine Level Tooling with Language Level Ordinary Object Pointers
Pierre Misse-Chanabier and Théo Rogliano
(University of Lille, France; Inria, France; CNRS, France; Centrale Lille, France; UMR 9189 CRIStAL, France)
Virtual Machines (VMs) are programming language implementations. When building tools at the VM level, developers face an important abstraction gap. VMs supporting an object-oriented programming language often manipulate their memory using addresses, i.e., ordinary object pointers (OOPs), even though addresses are hidden in the language the VM supports. This discourages tooling at the VM level.
We propose language-level OOP (LLOOP) to reduce this abstraction gap. LLOOP combines language and VM knowledge at the VM level to ease VM tooling. We present our implementation in the Pharo language. Moreover, we created two tools that solve two real-world major bugs in the Pharo environment, both of which required VM-level support.
First, we investigate how to fix a meta error that was preventing a Pharo environment from opening. We repair the broken environment by tracking down and fixing, at the VM level, the language-level method responsible for the error. Second, we investigate a corrupted Pharo image. A few objects in the Pharo memory space were corrupted, i.e., the VM was not able to read and manipulate them. We are able to identify and remove the corrupted objects, fixing the Pharo environment.
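The abstraction gap the abstract describes can be illustrated with a toy model (this is a hypothetical sketch, not the paper's LLOOP implementation): the VM sees only raw addresses, while a language-level view maps an address back to a class and its field names, including the corrupted-object case.

```python
# Illustrative sketch only: a VM heap that addresses objects by raw
# "ordinary object pointers" (OOPs), plus a language-level view of an
# OOP in the spirit of the paper's LLOOP idea. All names are invented.

class VMObject:
    def __init__(self, class_name, fields):
        self.class_name = class_name
        self.fields = fields

class ToyHeap:
    """Maps raw integer addresses (OOPs) to objects, as a VM's memory does."""
    def __init__(self):
        self._memory = {}
        self._next_addr = 0x1000

    def allocate(self, class_name, fields):
        addr = self._next_addr
        self._next_addr += 0x10
        self._memory[addr] = VMObject(class_name, fields)
        return addr  # VM-level code sees only this opaque address

    def describe(self, oop):
        """Language-level view of a raw OOP: class and field names
        instead of an opaque address."""
        obj = self._memory.get(oop)
        if obj is None:
            return f"<corrupted OOP {hex(oop)}>"  # cf. the corrupted-image case
        return f"{obj.class_name}({', '.join(obj.fields)})"

heap = ToyHeap()
p = heap.allocate("Point", {"x": 1, "y": 2})
print(heap.describe(p))        # Point(x, y)
print(heap.describe(0xdead))   # <corrupted OOP 0xdead>
```

A tool working at this level can scan every address and report the ones the language-level view cannot decode, which is the essence of the corrupted-object repair described above.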

Inlining-Benefit Prediction with Interprocedural Partial Escape Analysis
Matthew Edwin Weingarten, Theodoros Theodoridis, and Aleksandar Prokopec
(ETH Zurich, Switzerland; Oracle Labs, Switzerland)
Inlining is the primary facilitating mechanism for intraprocedural Partial Escape Analysis (PEA), which allows object allocations to be removed on a branch-by-branch basis and is critical for performance in object-oriented languages. Prior work used interprocedural Escape Analysis to make inlining decisions, but it discarded control-flow sensitivity when crossing procedure boundaries and did not weigh other metrics to model the cost-benefit of inlining, resulting in unpredictable inlining decisions and suboptimal performance. Our work addresses these issues and introduces a novel Interprocedural Partial Escape Analysis algorithm (IPEA) to predict inlining benefits and improve the cost-benefit model of an existing optimization-driven inliner. We evaluate an implementation of IPEA in GraalVM Native Image on the industry-standard benchmark suites DaCapo, ScalaBench, and Renaissance. Across 36 benchmarks, IPEA achieves a geometric mean runtime improvement of 1.79%; 6 benchmarks improve by over 5% (geometric mean 9.10%, up to 24.62%), while code size and compilation times are also reduced compared to existing approaches.
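The branch-by-branch allocation removal that PEA enables can be sketched in a few lines (an illustrative source-level analogy, not the compiler transformation itself; all names are hypothetical): an object allocated on every call but escaping on only one branch can have its allocation sunk into the escaping branch and scalar-replaced elsewhere.

```python
# Sketch of the branch sensitivity Partial Escape Analysis exploits.
# `before` allocates a Point unconditionally even though it escapes only
# on the rare branch; the PEA-style rewrite `after` materializes the
# object only where it escapes and keeps scalars on the common path.

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

ESCAPED = []  # stand-in for any location through which an object escapes

def before(x, y, rare):
    p = Point(x, y)          # allocated on every call
    if rare:
        ESCAPED.append(p)    # p escapes only here
        return 0
    return p.x + p.y         # p does not escape on this path

def after(x, y, rare):
    if rare:
        p = Point(x, y)      # allocation sunk into the escaping branch
        ESCAPED.append(p)
        return 0
    return x + y             # scalar-replaced: no allocation at all

assert before(1, 2, False) == after(1, 2, False) == 3
```

Inlining matters here because the escaping use often sits in a callee: only after inlining can the analysis see both branches and sink the allocation, which is why IPEA uses interprocedural escape information to predict that benefit before deciding to inline.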

Profile Guided Offline Optimization of Hidden Class Graphs for JavaScript VMs in Embedded Systems
Tomoharu Ugawa, Stefan Marr, and Richard Jones
(University of Tokyo, Japan; University of Kent, UK)
JavaScript is increasingly used for the Internet of Things (IoT) on embedded systems. However, JavaScript's memory footprint is a challenge, because normal JavaScript virtual machines (VMs) do not fit into the small memory of IoT devices. In part this is because a significant amount of memory is used by hidden classes, which are used to represent JavaScript's dynamic objects efficiently.
In this research, we optimize the hidden class graph to minimize the memory used by hidden classes. Our solution collects the hidden class graph and related information for an application in a profiling run and optimizes the graph offline. We reduce the number of hidden classes by avoiding the introduction of intermediate ones, for instance when properties are added one after another. Our optimizations allow the VM to assign the most likely final hidden class to an object at its creation. They also minimize the re-allocation of storage for property values and reduce the polymorphism of inline caches.
We implemented these optimizations in a JavaScript VM, eJSVM, and found that offline optimization can eliminate 61.9% of the hidden classes on average. It also improves execution speed by minimizing the number of hidden class transitions for an object and reducing inline cache misses.
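The intermediate-class problem the abstract targets can be modeled in a few lines (a toy model, not eJSVM's implementation): adding properties one at a time walks through a chain of hidden classes, whereas knowing the final shape from a profile lets the VM assign a single class at object creation.

```python
# Toy model of hidden-class transitions (names invented for illustration).
# Building an object property by property creates intermediate hidden
# classes; profile-guided creation with the final shape known needs none.

class HiddenClass:
    _registry = {}  # shared shape table, keyed by property tuple

    def __init__(self, props):
        self.props = props

    @classmethod
    def for_props(cls, props):
        key = tuple(props)
        if key not in cls._registry:
            cls._registry[key] = cls(key)
        return cls._registry[key]

def build_incrementally(names):
    """Property-by-property construction: one hidden class per step."""
    shapes, props = [], []
    for name in names:
        props.append(name)
        shapes.append(HiddenClass.for_props(props))
    return shapes

# {x} -> {x,y} -> {x,y,z}: three classes, two of them intermediate
chain = build_incrementally(["x", "y", "z"])
print(len({id(s) for s in chain}))  # 3

# With the final shape known up front, only one class is ever needed,
# and it is shared with objects built the slow way.
final = HiddenClass.for_props(["x", "y", "z"])
assert final is chain[-1]
```

Eliminating the intermediates both shrinks the shape table (the memory saving) and removes transitions on the hot path, which is where the inline-cache and speed benefits reported above come from.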

Improving Vectorization Heuristics in a Dynamic Compiler with Machine Learning Models
Raphael Mosaner, Gergö Barany, David Leopoldseder, and Hanspeter Mössenböck
(JKU Linz, Austria; Oracle Labs, Austria)
Optimizing compilers rely on many hand-crafted heuristics to guide the optimization process. However, the interactions between different optimizations make their design a difficult task. We propose using machine learning models either to replace such heuristics or to support their development process, for example by identifying important code features. Especially in static compilation, machine learning has been shown to outperform hand-crafted heuristics. We applied our approach in a state-of-the-art dynamic compiler, the GraalVM compiler. Our models predict an unroll factor for vectorized loops, for which the GraalVM compiler developers have not been able to design satisfactory heuristics. In the process, we identified features to describe vectorized loops and empirically evaluated the impact of different training data, features, and model parameters on the accuracy of the learned models. When deployed in the GraalVM dynamic compiler, our models produce significant speedups of 8-11% on average. Furthermore, the large speedups unveiled a performance bug in the compiler, which was fixed after our report. Our work shows that machine learning can improve a dynamic compiler directly, by replacing existing vectorization heuristics, or indirectly, by helping compiler developers design better hand-crafted heuristics.
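The heuristic-replacement idea can be sketched minimally (a hypothetical illustration; the paper's loop features and models are far richer than this): a fixed hand-written rule maps loop features to an unroll factor, while a trivial learned predictor instead generalizes from observed (features, best factor) pairs.

```python
# Minimal sketch of replacing a hand-crafted unroll-factor heuristic
# with a learned predictor. Feature names and training pairs are
# invented for illustration, not taken from the paper.

TRAINING_DATA = [
    # (trip_count_estimate, body_size, has_reduction) -> best unroll factor
    ((1000, 4,  False), 8),
    ((1000, 32, False), 2),
    ((16,   4,  True),  4),
    ((8,    64, False), 1),
]

def hand_crafted(trip_count, body_size, has_reduction):
    # the kind of fixed rule a learned model would replace
    return 4 if body_size <= 16 else 1

def learned(trip_count, body_size, has_reduction):
    # a 1-nearest-neighbour "model" over the training set, the simplest
    # possible stand-in for the paper's ML models
    def dist(features):
        tc, bs, red = features
        return (abs(tc - trip_count) + abs(bs - body_size)
                + (red != has_reduction) * 100)
    return min(TRAINING_DATA, key=lambda pair: dist(pair[0]))[1]

# For a long, small-bodied loop the learned model picks a larger factor
# than the fixed rule, because the training data rewarded it there.
print(hand_crafted(1000, 4, False), learned(1000, 4, False))
```

In a real dynamic compiler the predictor must also be cheap to evaluate at compile time, which is one reason the abstract emphasizes studying which features and model parameters actually matter.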

