Workshop VMIL 2022 – Author Index
Barany, Gergö
VMIL '22: "Improving Vectorization Heuristics ..."
Improving Vectorization Heuristics in a Dynamic Compiler with Machine Learning Models
Raphael Mosaner, Gergö Barany, David Leopoldseder, and Hanspeter Mössenböck (JKU Linz, Austria; Oracle Labs, Austria)

Optimizing compilers rely on many hand-crafted heuristics to guide the optimization process. However, the interactions between different optimizations make their design a difficult task. We propose using machine learning models either to replace such heuristics or to support their development process, for example by identifying important code features. Especially in static compilation, machine learning has been shown to outperform hand-crafted heuristics. We applied our approach in a state-of-the-art dynamic compiler, the GraalVM compiler. Our models predict an unroll factor for vectorized loops, for which the GraalVM compiler developers have not been able to design satisfactory heuristics. In the process, we identified features to describe vectorized loops and empirically evaluated the impact of different training data, features, or model parameters on the accuracy of the learned models. When deployed in the GraalVM dynamic compiler, our models produce significant speedups of 8–11% on average. Furthermore, large speedups unveiled a performance bug in the compiler, which was fixed after our report. Our work shows that machine learning can be used to improve a dynamic compiler directly, by replacing existing vectorization heuristics, or indirectly, by helping compiler developers design better hand-crafted heuristics.

@InProceedings{VMIL22p36,
  author    = {Raphael Mosaner and Gergö Barany and David Leopoldseder and Hanspeter Mössenböck},
  title     = {Improving Vectorization Heuristics in a Dynamic Compiler with Machine Learning Models},
  booktitle = {Proc.\ VMIL},
  publisher = {ACM},
  pages     = {36--47},
  doi       = {10.1145/3563838.3567679},
  year      = {2022},
}
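To illustrate the quantity the models predict: an unroll factor determines how many copies of the loop body are emitted per iteration of the transformed loop. The following scalar JavaScript sketch (illustrative only; the paper's loops are GraalVM vector loops, and `sumRolled`/`sumUnrolled4` are hypothetical names) shows what a factor of 4 means for a reduction loop:

```javascript
// Original loop: one element per iteration.
function sumRolled(a) {
  let s = 0;
  for (let i = 0; i < a.length; i++) s += a[i];
  return s;
}

// Unrolled by a factor of 4: four independent accumulators in the main
// body, plus a scalar epilogue for the leftover elements.
function sumUnrolled4(a) {
  let s0 = 0, s1 = 0, s2 = 0, s3 = 0, i = 0;
  for (; i + 3 < a.length; i += 4) {
    s0 += a[i]; s1 += a[i + 1]; s2 += a[i + 2]; s3 += a[i + 3];
  }
  let s = s0 + s1 + s2 + s3;
  for (; i < a.length; i++) s += a[i]; // epilogue
  return s;
}

console.log(sumUnrolled4([1, 2, 3, 4, 5, 6, 7])); // prints 28
```

Picking this factor per loop is the decision the learned models make in place of a hand-crafted heuristic; too small a factor wastes instruction-level parallelism, too large a factor bloats code and register pressure.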

Jones, Richard
VMIL '22: "Profile Guided Offline Optimization ..."
Profile Guided Offline Optimization of Hidden Class Graphs for JavaScript VMs in Embedded Systems
Tomoharu Ugawa, Stefan Marr, and Richard Jones (University of Tokyo, Japan; University of Kent, UK)

JavaScript is increasingly used for the Internet of Things (IoT) on embedded systems. However, JavaScript's memory footprint is a challenge, because normal JavaScript virtual machines (VMs) do not fit into the small memory of IoT devices. In part this is because a significant amount of memory is used by hidden classes, which are used to represent JavaScript's dynamic objects efficiently. In this research, we optimize the hidden class graph to minimize its memory use. Our solution collects the hidden class graph and related information for an application in a profiling run, and optimizes the graph offline. We reduce the number of hidden classes by avoiding the introduction of intermediate ones, for instance when properties are added one after another. Our optimizations allow the VM to assign the most likely final hidden class to an object at its creation. They also minimize re-allocation of storage for property values, and reduce the polymorphism of inline caches. We implemented these optimizations in a JavaScript VM, eJSVM, and found that offline optimization can eliminate 61.9% of the hidden classes on average. It also improves execution speed by minimizing the number of hidden class transitions for an object and by reducing inline cache misses.

@InProceedings{VMIL22p25,
  author    = {Tomoharu Ugawa and Stefan Marr and Richard Jones},
  title     = {Profile Guided Offline Optimization of Hidden Class Graphs for JavaScript VMs in Embedded Systems},
  booktitle = {Proc.\ VMIL},
  publisher = {ACM},
  pages     = {25--35},
  doi       = {10.1145/3563838.3567678},
  year      = {2022},
}
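The problem the paper targets can be sketched in a few lines. This is a toy model (not eJSVM's actual data structures; `classFor` is a hypothetical name): hidden classes are keyed by the ordered list of property names, so building an object one property at a time materializes an intermediate class per step.

```javascript
// Toy hidden-class table: one class per distinct ordered property list.
const classes = new Map(); // shape key -> hidden class id

function classFor(propNames) {
  const key = propNames.join(",");
  if (!classes.has(key)) classes.set(key, classes.size);
  return classes.get(key);
}

// Incremental construction `{}` -> `{x}` -> `{x, y}` touches three
// classes, of which only the last is the object's final shape.
classFor([]);
classFor(["x"]);
classFor(["x", "y"]);
console.log(classes.size); // prints 3
```

The offline optimization described above uses profiles to assign the most likely final class (here `{x, y}`) at allocation, so the two transient shapes need never be created and no transitions are taken on the hot path.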

Leopoldseder, David
VMIL '22: "Improving Vectorization Heuristics ..."
Improving Vectorization Heuristics in a Dynamic Compiler with Machine Learning Models

Marr, Stefan
VMIL '22: "Profile Guided Offline Optimization ..."
Profile Guided Offline Optimization of Hidden Class Graphs for JavaScript VMs in Embedded Systems

Misse-Chanabier, Pierre
VMIL '22: "Ease Virtual Machine Level ..."
Ease Virtual Machine Level Tooling with Language Level Ordinary Object Pointers
Pierre Misse-Chanabier and Théo Rogliano (University of Lille, France; Inria, France; CNRS, France; Centrale Lille, France; UMR 9189 CRIStAL, France)

Virtual Machines (VMs) are programming language implementations. When tooling at the VM level, developers face an important abstraction gap. VMs supporting an object-oriented programming language often manipulate their memory using addresses, i.e., ordinary object pointers (OOPs), even though addresses are hidden in the language the VM supports. This discourages tooling at the VM level. We propose language-level OOPs (LLOOPs) to reduce this abstraction gap. LLOOPs combine language and VM knowledge at the VM level to ease VM tooling. We present our implementation in the Pharo language. Moreover, we created two tools that solve two major real-world bugs in the Pharo environment which required VM-level support. First, we investigate how to fix a meta error that was preventing a Pharo environment from opening. We repair the broken environment by tracking and fixing, at the VM level, the language-level method responsible for the error. Second, we investigate a corrupted Pharo image. A few objects in the Pharo memory space were corrupted, i.e., the VM was not able to read and manipulate them. We are able to identify and remove the corrupted objects, fixing the Pharo environment.

@InProceedings{VMIL22p1,
  author    = {Pierre Misse-Chanabier and Théo Rogliano},
  title     = {Ease Virtual Machine Level Tooling with Language Level Ordinary Object Pointers},
  booktitle = {Proc.\ VMIL},
  publisher = {ACM},
  pages     = {1--12},
  doi       = {10.1145/3563838.3567676},
  year      = {2022},
}

Mosaner, Raphael
VMIL '22: "Improving Vectorization Heuristics ..."
Improving Vectorization Heuristics in a Dynamic Compiler with Machine Learning Models

Mössenböck, Hanspeter
VMIL '22: "Improving Vectorization Heuristics ..."
Improving Vectorization Heuristics in a Dynamic Compiler with Machine Learning Models

Prokopec, Aleksandar
VMIL '22: "Inlining-Benefit Prediction ..."
Inlining-Benefit Prediction with Interprocedural Partial Escape Analysis
Matthew Edwin Weingarten, Theodoros Theodoridis, and Aleksandar Prokopec (ETH Zurich, Switzerland; Oracle Labs, Switzerland)

Inlining is the primary facilitating mechanism for intraprocedural Partial Escape Analysis (PEA), which allows for the removal of object allocations on a branch-by-branch basis and is critical for performance in object-oriented languages. Prior work used interprocedural escape analysis to make inlining decisions, but it discarded control-flow sensitivity when crossing procedure boundaries and did not weigh other metrics to model the cost-benefit of inlining, resulting in unpredictable inlining decisions and suboptimal performance. Our work addresses these issues and introduces a novel Interprocedural Partial Escape Analysis algorithm (IPEA) to predict inlining benefits and improve the cost-benefit model of an existing optimization-driven inliner. We evaluate the implementation of IPEA in GraalVM Native Image on the industry-standard benchmark suites DaCapo, ScalaBench, and Renaissance. Out of 36 benchmarks, with a geometric mean runtime improvement of 1.79%, 6 benchmarks achieve an improvement of over 5%, with a geomean of 9.10% and up to 24.62%, while also reducing code size and compilation times compared to existing approaches.

@InProceedings{VMIL22p13,
  author    = {Matthew Edwin Weingarten and Theodoros Theodoridis and Aleksandar Prokopec},
  title     = {Inlining-Benefit Prediction with Interprocedural Partial Escape Analysis},
  booktitle = {Proc.\ VMIL},
  publisher = {ACM},
  pages     = {13--24},
  doi       = {10.1145/3563838.3567677},
  year      = {2022},
}
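Why inlining enables branch-by-branch allocation removal can be seen in a small sketch. This JavaScript example is purely illustrative (the paper targets GraalVM Native Image, and `distance`, `beforePEA`, `afterPEA` are hypothetical names): the object escapes only into a callee on one branch, so after inlining the callee, PEA can sink the allocation into that branch and then eliminate it entirely.

```javascript
function distance(p) { return Math.hypot(p.x, p.y); }

// Before optimization: `p` is allocated on every call, but it is only
// used (and only "escapes" into distance()) when `log` is true.
function beforePEA(x, y, log) {
  const p = { x, y };
  if (log) console.log(distance(p));
  return x + y;
}

// What the optimized code behaves like: inlining distance() exposes the
// fields, PEA sinks the allocation into the rare branch, and scalar
// replacement removes it altogether.
function afterPEA(x, y, log) {
  if (log) console.log(Math.hypot(x, y));
  return x + y;
}
```

Deciding whether inlining `distance` pays off here is exactly the benefit IPEA tries to predict interprocedurally, instead of inlining first and hoping PEA finds something afterwards.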

Rogliano, Théo
VMIL '22: "Ease Virtual Machine Level ..."
Ease Virtual Machine Level Tooling with Language Level Ordinary Object Pointers

Theodoridis, Theodoros
VMIL '22: "Inlining-Benefit Prediction ..."
Inlining-Benefit Prediction with Interprocedural Partial Escape Analysis

Ugawa, Tomoharu
VMIL '22: "Profile Guided Offline Optimization ..."
Profile Guided Offline Optimization of Hidden Class Graphs for JavaScript VMs in Embedded Systems

Weingarten, Matthew Edwin
VMIL '22: "Inlining-Benefit Prediction ..."
Inlining-Benefit Prediction with Interprocedural Partial Escape Analysis
12 authors