SPLASH Workshops 2017
2017 ACM SIGPLAN International Conference on Systems, Programming, Languages, and Applications: Software for Humanity (SPLASH Workshops 2017)

9th ACM SIGPLAN International Workshop on Virtual Machines and Intermediate Languages (VMIL 2017), October 24, 2017, Vancouver, BC, Canada

VMIL 2017 – Proceedings



Frontmatter

Title Page


Message from the Chairs
It is our great pleasure to welcome you to the 9th ACM Workshop on Virtual Machines and Intermediate Languages (VMIL). This year’s workshop continues the tradition of having an engaging program designed to stimulate interesting discussions. In all, we have five presentations: three from the authors of accepted papers and two from our invited speakers.
As in years past, this workshop focuses on novel ideas in programming language implementation and optimization: modular approaches, extensible virtual machines, and reusable runtime components. VMIL also investigates programming language mechanisms and dynamic tooling facilities that are currently implemented as code transformations or in libraries but are worthwhile candidates for integration with the run-time environment. VMIL’s areas of interest include how such mechanisms can be elegantly (and reusably) expressed at the intermediate-language level (e.g., in bytecode), how their implementations can be optimized, and how virtual machine architectures might be shaped to facilitate such implementation efforts. Examples of such mechanisms are concurrency constructs (e.g., actors, capsules, processes, software transactional memory), transactions, and development tools (profilers, runtime verification).
We thank the authors and the invited speakers for providing the content of the program. We also thank the program committee for their valuable service. We hope that you will find the workshop thought-provoking and that it will provide you with an opportunity to share ideas and make new connections with other researchers and practitioners from around the world. Have a great time, and we will see you again at the next VMIL workshop.
Matthias Grimmer, VMIL Program Co-Chair, Oracle Labs
Adam Welc, VMIL Program Co-Chair, Uber Technologies

Papers

Cross-ISA Debugging in Meta-circular VMs
Christos Kotselidis, Andy Nisbet, Foivos S. Zakkak, and Nikos Foutris
(University of Manchester, UK)
Extending current Virtual Machine implementations to new Instruction Set Architectures entails significant programming and debugging effort. Meta-circular VMs add another level of complexity because they must compile themselves with the same compiler that is being extended. Low-level debugging tools are therefore vital for reducing both development time and the number of bugs introduced.
In this paper we describe our experiences in extending Maxine VM to the ARMv7 architecture. During that process, we developed a QEMU-based toolchain which enables us to debug a wide range of VM features in an automated way. The presented toolchain has been integrated with the JUnit testing framework of Maxine VM and is capable of executing everything from simple assembly instructions to fully JIT-compiled code. Furthermore, it is fully open-sourced and can be adapted to other VMs. Finally, we describe a compiler-assisted methodology that helps us identify, at runtime, faulty methods that generate no stack traces, in an automatic and fast manner.

Accelerate JavaScript Applications by Cross-Compiling to WebAssembly
Micha Reiser and Luc Bläser
(University of Applied Sciences Rapperswil, Switzerland)
Although the performance of today's JavaScript engines is sufficient for most web applications, faster and more predictable runtimes are desirable for performance-critical web code. Therefore, we present Speedy.js, a cross-compiler that translates JavaScript/TypeScript to WebAssembly, a new standard for native execution supported by all major browsers. Speedy.js imposes only minimal restrictions on the JavaScript code, namely that the performance-critical functions are wrapped in TypeScript and use only a performance-optimal subset of the JavaScript language. With this approach, we manage to make compute-intensive web code up to four times faster, while reducing runtime fluctuations by half.

Fusing Method Handle Graphs for Efficient Dynamic JVM Language Implementations
Shijie Xu, David Bremner, and Daniel Heidinga
(University of New Brunswick, Canada; IBM, Canada)
A Method Handle (MH) in JSR 292 (Supporting Dynamically Typed Languages on the JVM) is a typed, directly executable reference to an underlying method, constructor, or field, with optional method type transformations. Multiple connected MHs make up a Method Handle Graph (MHG), which transfers an invocation at a dynamic call site to real method implementations at runtime. Despite the benefits that MHGs bring to dynamic JVM language implementations, they challenge existing JVM optimization because (a) larger MHGs at call sites incur higher graph traversal costs at runtime, and (b) JIT expenses, including profiling and compilation of individual MHs, increase with the number of MHs. This paper proposes dynamic graph fusion, which compiles an MHG into an equivalent but simpler MHG (e.g., fewer MHs and edges), as well as related optimization opportunities (e.g., selection policy and inline caching). Graph fusion dynamically fuses the bytecodes of internal MHs on hot paths, then substitutes these internal MHs with an instance of the newly generated bytecodes at program runtime. The implementation consists of a template system and GraphJIT. The former emits source bytecodes for individual MHs, while the latter is a JIT compiler that fuses source bytecodes from templates at the bytecode level (i.e., both source code and target code are bytecodes). With the JRuby Micro-Indy benchmark from the Computer Language Benchmarks Game and the JavaScript Octane benchmark on Nashorn, our results show that (a) the technique reduces execution time of the Micro-Indy and Octane benchmarks by 6.28% and 7.73% on average; (b) it speeds up a typical MHG’s execution by 31.53% using Ahead-Of-Time (AOT) compilation; and (c) it reduces the number of MH JIT compilations by 52.1%.
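For readers unfamiliar with JSR 292, the following minimal sketch (not taken from the paper) illustrates how a small Method Handle Graph arises in plain Java: each `java.lang.invoke` adapter combinator wraps an underlying handle, so a chained call site traverses several MH nodes before reaching the real method. The class and variable names here are illustrative only.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

// Illustrative example: three chained MHs form a tiny Method Handle Graph.
public class MhGraphDemo {
    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();

        // Leaf node: a direct handle to Math.max(int, int).
        MethodHandle max = lookup.findStatic(Math.class, "max",
                MethodType.methodType(int.class, int.class, int.class));

        // Adapter node 1: bind the second argument to the constant 10.
        MethodHandle maxWith10 = MethodHandles.insertArguments(max, 1, 10);

        // Adapter node 2: filter the remaining argument through Math.abs.
        MethodHandle abs = lookup.findStatic(Math.class, "abs",
                MethodType.methodType(int.class, int.class));
        MethodHandle graph = MethodHandles.filterArguments(maxWith10, 0, abs);

        // Each invocation traverses the graph: abs -> insertArguments -> max.
        System.out.println((int) graph.invokeExact(-42)); // prints 42
    }
}
```

Dynamic language runtimes build far larger graphs of such adapters at `invokedynamic` call sites, which is where the per-node traversal and JIT costs described in the abstract come from.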

