DLS 2015 – Author Index

Åkerblom, Beatrice
Beatrice Åkerblom and Tobias Wrigstad (Stockholm University, Sweden; Uppsala University, Sweden)
Following the increased popularity of dynamic languages and their growing use in critical software, there have been many proposals to retrofit static type systems onto these languages to improve bug detection and performance. A key question for any type system is whether its types should be structural, for more expressiveness, or nominal, to carry more meaning for the programmer. For retrofitted type systems, the current trend is towards structural types. This paper examines to what extent this extra expressiveness is needed and how the potential polymorphism in dynamic code is used in practice. We study polymorphism in 36 real-world open-source Python programs and approximate to what extent nominal and structural types could be used to type these programs. The study is based on traces collected from multiple runs of the programs and an analysis of the polymorphic degrees of targets at more than 7 million call sites. Our results show that while polymorphism is used in all programs, the programs are to a great extent monomorphic. The polymorphism found is evenly distributed across libraries and program-specific code and occurs both during program start-up and during normal execution. Most programs contain a few "megamorphic" call sites where receiver types vary widely. The non-monomorphic parts of the programs can to some extent be typed with nominal or structural types, but neither approach can type the programs in their entirety.
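
A small Python sketch (mine, not the paper's tracing infrastructure) illustrates what "polymorphic degree" means at a call site: the number of distinct receiver types observed there across runs.

```python
class Circle:
    def area(self):
        return 3.14159

class Square:
    def area(self):
        return 1.0

observed = {}   # call-site id -> set of receiver type names

def record(site, receiver):
    # A trace-based study like the paper's logs the receiver type at each
    # call site over multiple runs; the size of the set is the degree.
    observed.setdefault(site, set()).add(type(receiver).__name__)
    return receiver

def total_area(shapes):
    # Monomorphic if only one receiver type ever reaches this call site,
    # polymorphic for a few types, "megamorphic" once many unrelated types do.
    return sum(record("total_area:s.area", s).area() for s in shapes)

print(total_area([Circle(), Square(), Circle()]))
print(observed)   # {'total_area:s.area': {'Circle', 'Square'}} -> degree 2
```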

Alcocer, Juan Pablo Sandoval
![]() Juan Pablo Sandoval Alcocer and Alexandre Bergel (University of Chile, Chile) Little is known about how software performance evolves across software revisions. The severity of this situation is high since (i) most performance variations seem to happen accidentally and (ii) addressing a performance regression is challenging, especially when functional code is stacked on it. This paper reports an empirical study on the performance evolution of 19 applications, totaling over 19 MLOC. It took 52 days to run our 49 benchmarks. By relating performance variation with source code revisions, we found out that: (i) 1 out of every 3 application revisions introduces a performance variation, (ii) performance variations may be classified into 9 patterns, (iii) the most prominent cause of performance regression involves loops and collections. We carefully describe the patterns we identified, and detail how we addressed the numerous challenges we faced to complete our experiment. ![]() |
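
As an illustration only (the functions below are hypothetical, not taken from the studied applications), the kind of loop-and-collection regression the study identifies as the most prominent pattern can be as small as dropping a set in favour of a list inside a hot loop.

```python
# Hypothetical before/after revision illustrating a loop/collection regression:
# a membership test inside a loop against a list is O(n) per iteration,
# so the loop as a whole degrades to O(n*m).

def new_items_v1(incoming, seen):
    seen_set = set(seen)                                  # O(1) lookups in the loop
    return [x for x in incoming if x not in seen_set]

def new_items_v2(incoming, seen):
    # A later revision that drops the set builds in a quadratic slowdown.
    return [x for x in incoming if x not in seen]         # list lookup is O(n)

if __name__ == "__main__":
    import timeit
    seen = list(range(3_000))
    incoming = list(range(1_500, 4_500))
    print(timeit.timeit(lambda: new_items_v1(incoming, seen), number=5))
    print(timeit.timeit(lambda: new_items_v2(incoming, seen), number=5))
```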

Bergel, Alexandre
Juan Pablo Sandoval Alcocer and Alexandre Bergel (University of Chile, Chile); abstract under Alcocer, Juan Pablo Sandoval.

Bolz, Carl Friedrich
Tobias Pape, Tim Felgentreff, Robert Hirschfeld, Anton Gulenko, and Carl Friedrich Bolz (HPI, Germany; TU Berlin, Germany; King's College London, UK)
Storage strategies have been proposed as a run-time optimization for the PyPy Python implementation and have shown promising results for optimizing execution speed and memory requirements. However, it remained unclear whether the approach works equally well in other dynamic languages. Furthermore, while PyPy is based on RPython, a language for writing VMs with reusable components such as a tracing just-in-time compiler and garbage collection, the design of storage strategies was not itself generalized to be reusable across languages implemented with that same toolchain. In this paper, we present a general design and implementation for storage strategies and show how they can be reused across different RPython-based languages. We evaluate the performance of our implementation for RSqueak, an RPython-based VM for Squeak/Smalltalk, and show that storage strategies may indeed offer performance benefits for certain workloads in other dynamic programming languages. We furthermore evaluate the generality of our implementation by applying it to Topaz, a Ruby VM, and Pycket, a Racket implementation.
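
A minimal sketch of the storage-strategy idea in plain Python, assuming nothing about the RPython API (class and method names here are illustrative): a collection starts with a compact, type-specialised representation and generalises itself the first time an element of another type is stored.

```python
from array import array

class StrategyList:
    def __init__(self):
        self._ints = array("l")     # unboxed int storage (the "int strategy")
        self._objects = None        # generic fallback storage

    def append(self, value):
        if self._objects is None:
            if isinstance(value, int):
                self._ints.append(value)          # stay on the int strategy
                return
            # Switch strategy: copy unboxed ints into a generic object list.
            self._objects = list(self._ints)
        self._objects.append(value)

    def __getitem__(self, i):
        return self._objects[i] if self._objects is not None else self._ints[i]

xs = StrategyList()
xs.append(1)
xs.append(2)                    # still compact int storage
xs.append("three")              # triggers a one-time switch to object storage
print(xs[0], xs[2])
```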

Byrd, William E.
Steven Lyde, William E. Byrd, and Matthew Might (University of Utah, USA)
We demonstrate how to map a control-flow analysis for a higher-order language (dynamic languages are typically higher-order) into a pointer analysis for a first-order language, such as C. This allows us to use existing pointer analysis tools to perform a control-flow analysis, exploiting their technical advancements and the engineering effort that went into developing them. We compare the results of two recent parallel pointer analysis tools with a parallel control-flow analysis tool. While it has long been known that control-flow analysis of higher-order languages and pointer analysis of first-order languages are very similar, we demonstrate that these two analyses are even more similar than previously thought. We present the first mapping between a higher-order control-flow analysis and a pointer analysis.
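
A sketch of the intuition behind such a mapping (my own encoding, not the paper's): after closure conversion, a higher-order call becomes an indirect call through a "code pointer" field, so asking which functions can be called at a site is exactly a points-to query on that field.

```python
# Once higher-order functions are closure-converted, control flow is first-order.

def make_adder_code(env, y):        # first-order code for `lambda y: x + y`
    return env["x"] + y

def make_adder(x):
    # A closure is a record pairing a code pointer with its environment.
    return {"code": make_adder_code, "env": {"x": x}}

def apply_closure(clo, arg):
    # Control-flow question: which functions can `clo["code"]` refer to here?
    # That is precisely a points-to query on the "code" field.
    return clo["code"](clo["env"], arg)

inc = make_adder(1)
print(apply_closure(inc, 41))   # 42
```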

Cassou, Damien
Camille Teruel, Stéphane Ducasse, Damien Cassou, and Marcus Denker (INRIA, France; University of Lille, France)
Reflection is a powerful programming language feature that enables language extensions, generic code, dynamic analyses, development tools, and more. However, uncontrolled reflection breaks object encapsulation and considerably increases the attack surface of programs; malicious libraries, for example, can use reflection to attack their client applications. To bring reflection and object encapsulation back together, we use dynamic object ownership to design an access control policy for reflective operations. This policy grants objects full reflective power over the objects they own but only limited reflective power over other objects. Code can still use advanced reflective operations, but reflection can no longer be used as an attack vector.
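
A library-level Python sketch of the ownership check (illustrative only; the paper's policy is enforced inside the language's reflective layer rather than by a helper like this):

```python
OWNER = {}   # object id -> owning object

def new_owned(owner, obj):
    OWNER[id(obj)] = owner
    return obj

def reflective_get(requester, target, field):
    # Full reflective power only over objects the requester owns;
    # otherwise access is restricted to the public surface.
    if OWNER.get(id(target)) is requester or not field.startswith("_"):
        return getattr(target, field)
    raise PermissionError(f"{requester!r} does not own {target!r}")

class Account:
    def __init__(self, balance):
        self._balance = balance

class Bank:
    pass

bank, mallory = Bank(), object()
acct = new_owned(bank, Account(100))
print(reflective_get(bank, acct, "_balance"))       # owner: allowed
try:
    reflective_get(mallory, acct, "_balance")       # non-owner: blocked
except PermissionError as e:
    print("blocked:", e)
```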

Denker, Marcus
Camille Teruel, Stéphane Ducasse, Damien Cassou, and Marcus Denker (INRIA, France; University of Lille, France); abstract under Cassou, Damien.

Ducasse, Stéphane
Camille Teruel, Stéphane Ducasse, Damien Cassou, and Marcus Denker (INRIA, France; University of Lille, France); abstract under Cassou, Damien.

Ernst, Erik
Erik Ernst, Anders Møller, Mathias Schwarz, and Fabio Strocco (Google, Denmark; Aarhus University, Denmark)
Unlike traditional static type checking, the type system in the Dart programming language is unsound by design, even for fully annotated programs. The rationale has been that this allows compile-time detection of likely errors and enables code completion in integrated development environments, without being restrictive on programmers. Despite the unsoundness, judicious use of type annotations can ensure useful properties of the runtime behavior of Dart programs. We present a formal model of a core of Dart with a focus on its type system, which allows us to elucidate the causes of unsoundness. Our main contribution is a characterization of message-safe programs and a theorem stating that such programs never encounter 'message not understood' errors at runtime. Message safety is less restrictive than traditional type soundness, and we argue that it forms a natural intermediate point between dynamically typed and statically typed Dart programs.
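
A loose Python analogue of the failure mode that message safety rules out (Python simply does not check annotations, whereas Dart's checker accepts some ill-typed programs by design; in both cases the error surfaces at run time as the dynamic counterpart of 'message not understood'):

```python
class Duck:
    def quack(self) -> str:
        return "quack"

def greet(d: Duck) -> str:
    return d.quack()          # statically, this "message" looks fine

animal: Duck = "not a duck"   # unsound: the annotation is not enforced

try:
    print(greet(animal))
except AttributeError as e:   # str has no quack(): message not understood
    print("runtime error:", e)
```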

Feeley, Marc
Marc Feeley (Université de Montréal, Canada)
Task migration allows a running program to continue its execution in a different destination environment. Increasingly, execution environments are defined by combinations of cultural and technological constraints that affect the choice of host language, libraries, and tools. A compiler supporting multiple target environments and task migration must be able to marshal continuations and then unmarshal and resume them, ideally even when the language of the destination environment is different. In this paper, we propose a compilation approach based on a virtual machine that strikes a balance between implementation portability and efficiency. We explain its implementation within a Scheme compiler targeting JavaScript, PHP, Python, Ruby, and Java, some of the most popular host languages for web applications. As our experiments show, this approach compares well with other Scheme compilers targeting high-level languages in terms of execution speed, sometimes being up to three orders of magnitude faster.
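
A sketch of the underlying idea, not the compiler's actual continuation representation: if a suspended computation is captured as plain data, it can be marshalled, shipped to a different host environment, unmarshalled, and resumed.

```python
import pickle

def sum_to(n, i=0, acc=0, suspend_at=None):
    """Run until done, or suspend and return the continuation as plain data."""
    while i < n:
        if i == suspend_at:
            return {"n": n, "i": i, "acc": acc}   # continuation captured as data
        acc, i = acc + i, i + 1
    return acc

cont = sum_to(10, suspend_at=5)      # suspends halfway through the loop
wire = pickle.dumps(cont)            # marshal the continuation

# ... on the destination environment (possibly a different host language) ...
k = pickle.loads(wire)
print(sum_to(k["n"], k["i"], k["acc"]))   # resumes and prints 45
```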

Felgentreff, Tim
Tobias Pape, Tim Felgentreff, Robert Hirschfeld, Anton Gulenko, and Carl Friedrich Bolz (HPI, Germany; TU Berlin, Germany; King's College London, UK); abstract under Bolz, Carl Friedrich.

Fischer, Lars
Lars Fischer and Stefan Hanenberg (University of Duisburg-Essen, Germany)
Recent empirical studies that compared static and dynamic type systems with respect to API usability showed a positive impact of static type systems on developer productivity in most cases. Nevertheless, it is unclear how large this effect is in comparison to other factors. One obvious factor in programming is tooling: it is commonly accepted that modern IDEs have a large positive impact on developers, although it is not clear which parts of modern IDEs are responsible for it. One possible, and for most developers obvious, candidate is code completion. This paper describes a 2x2 randomized trial that compares JavaScript and Microsoft's statically typed alternative TypeScript, each with and without code completion in MS Visual Studio. While the experiment shows, in line with previous experiments, a large positive effect of the statically typed language TypeScript, the effect of code completion is marginal and only approaches statistical significance. This seems to indicate that the effect of static type systems is larger than often assumed, at least in comparison to code completion.

Grimmer, Matthias
Matthias Grimmer, Chris Seaton, Roland Schatz, Thomas Würthinger, and Hanspeter Mössenböck (JKU Linz, Austria; Oracle Labs, UK; Oracle Labs, Austria; Oracle Labs, Switzerland)
Programmers combine different programming languages because it allows them to use the most suitable language for a given problem, to gradually migrate existing projects from one language to another, or to reuse existing source code. However, existing cross-language mechanisms suffer from complex interfaces, insufficient flexibility, or poor performance. We present the TruffleVM, a multi-language runtime that allows different language implementations to be composed seamlessly. It reduces the amount of required boilerplate code to a minimum by allowing programmers to access foreign functions or objects using the notation of the host language. We compose language implementations that translate source code to an intermediate representation (IR), which is executed on top of a shared runtime system. Language implementations use language-independent messages that the runtime resolves at their first execution by transforming them into efficient foreign-language-specific operations. The TruffleVM avoids conversion or marshaling of foreign objects at the language boundary and allows the dynamic compiler to perform its optimizations across language boundaries, which guarantees high performance. This paper presents an implementation of our ideas based on the Truffle system and its guest language implementations JavaScript, Ruby, and C.
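
A sketch of message-based interoperability (the message and class names below are mine, not the Truffle API): the host sends a language-independent message such as READ, and each guest language resolves it to its own access operation.

```python
class RubyObject:
    def __init__(self, ivars):
        self.ivars = ivars
    def resolve(self, message, name):
        if message == "READ":
            return self.ivars["@" + name]        # Ruby instance-variable lookup

class CStruct:
    def __init__(self, fields):
        self.fields = fields
    def resolve(self, message, name):
        if message == "READ":
            return self.fields[name]             # C struct member access

def read_member(obj, name):
    # The host program uses one notation; resolution is per foreign language.
    return obj.resolve("READ", name)

print(read_member(RubyObject({"@size": 3}), "size"))
print(read_member(CStruct({"size": 7}), "size"))
```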

Gulenko, Anton
Tobias Pape, Tim Felgentreff, Robert Hirschfeld, Anton Gulenko, and Carl Friedrich Bolz (HPI, Germany; TU Berlin, Germany; King's College London, UK); abstract under Bolz, Carl Friedrich.

Hanenberg, Stefan
Lars Fischer and Stefan Hanenberg (University of Duisburg-Essen, Germany); abstract under Fischer, Lars.

Hardekopf, Ben
Madhukar N. Kedlaya, Behnam Robatmili, and Ben Hardekopf (University of California at Santa Barbara, USA; Qualcomm Research, USA)
Modern JavaScript engines optimize hot functions using a JIT compiler together with type information gathered by an online profiler. However, the profiler's information can be unsound, and when unexpected types are encountered the engine must recover using an expensive mechanism called deoptimization. In this paper we describe a method to significantly reduce the number of deoptimizations observed by client-side JavaScript engines by using ahead-of-time profiling on the server side. Unlike previous work on ahead-of-time profiling for statically typed languages such as Java, our technique must operate on a dynamically typed language, which significantly changes the insights and methods required to make the technique effective. We implement our proposed technique using the SpiderMonkey JavaScript engine, and we evaluate our implementation using three different kinds of benchmarks: the industry-standard Octane benchmark suite, a set of JavaScript physics engines, and a set of real-world websites from the Membench50 benchmark suite. We show that ahead-of-time profiling provides significant performance benefits over the baseline SpiderMonkey engine.
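
A sketch of the idea with engine internals abstracted away (function names are illustrative, not SpiderMonkey's): a server-side run records the types seen at each site, and the client seeds its profile with them so the optimiser does not specialise on a too-narrow type and later deoptimise.

```python
def profile_run(f, inputs):
    seen = {}
    for x in inputs:
        seen.setdefault(f.__name__, set()).add(type(x).__name__)
        f(x)                                    # execute the training run
    return seen                                 # shipped ahead of time to clients

def choose_specialisation(profile, fname):
    types = profile.get(fname, set())
    # Specialise only if the ahead-of-time profile says the site is monomorphic;
    # otherwise compile the generic version and avoid a later deoptimisation.
    return types.pop() if len(types) == 1 else "generic"

def add(x):
    return x + x

aot_profile = profile_run(add, [1, 2, 3.5])        # ints and floats observed
print(choose_specialisation(aot_profile, "add"))   # -> "generic"
```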

Hirschfeld, Robert
Tobias Pape, Tim Felgentreff, Robert Hirschfeld, Anton Gulenko, and Carl Friedrich Bolz (HPI, Germany; TU Berlin, Germany; King's College London, UK); abstract under Bolz, Carl Friedrich.

Homer, Michael
Michael Homer, Timothy Jones, and James Noble (Victoria University of Wellington, New Zealand)
Method names with multiple separate parts are a feature of many dynamic languages derived from Smalltalk. Generalising the syntax of method names to allow parts to be repeated, optional, or alternatives means that a single definition can respond to a whole family of method requests. We show how generalised method names can support flexible APIs for domain-specific languages, complex initialisation tasks, and control structures defined in libraries. We describe how we have extended Grace to support generalised method names, and we prove that such an extension can be integrated into a gradually typed language while preserving type soundness.
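
A Grace-inspired sketch in Python (the regex-based matcher is my own illustration, not the paper's mechanism): one definition whose name pattern has optional and repeatable parts answers a whole family of requests.

```python
import re

class MultiPartMethod:
    """One definition whose name parts may repeat or be optional."""
    def __init__(self, name_pattern):
        # e.g. "if then (elseif then)* (else)?" expressed as a regex over parts
        self.pattern = re.compile(name_pattern)

    def responds_to(self, request_parts):
        return self.pattern.fullmatch(" ".join(request_parts)) is not None

if_then = MultiPartMethod(r"if then( elseif then)*( else)?")

print(if_then.responds_to(["if", "then"]))                            # True
print(if_then.responds_to(["if", "then", "elseif", "then", "else"]))  # True
print(if_then.responds_to(["if", "else"]))                            # False
```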

Ierusalimschy, Roberto
André Murbach Maidl, Fabio Mascarenhas, and Roberto Ierusalimschy (PUC-Rio, Brazil; Federal University of Rio de Janeiro, Brazil)
Programmers often migrate from a dynamically typed to a statically typed language when their simple scripts evolve into complex programs. Optional type systems are one way of having both static and dynamic typing in the same language while keeping its dynamically typed semantics. This makes evolving a program from dynamic to static typing a matter of describing the implied types it is already using and adding annotations to make those types explicit. Designing an optional type system for an existing dynamically typed language is challenging, as its types should feel natural to programmers who are already familiar with the language. In this work, we give a formal description of Typed Lua, an optional type system for Lua, with a focus on two of its novel features: incremental evolution of imperative record and object types that is both lightweight and type-safe, and projection types, a combination of flow typing, functions that return multiple values, and multiple assignment. While our type system is tailored to the features and idioms of Lua, its features can be adapted to other imperative scripting languages.
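
An illustration in Python of the Lua idiom that projection types are designed to capture (a hypothetical example, not from the paper): two returned values whose types are linked, so checking one component refines the type of the other on each control-flow branch.

```python
from typing import Optional, Tuple

def find(haystack: str, needle: str) -> Tuple[bool, Optional[int]]:
    i = haystack.find(needle)
    return (i >= 0, i if i >= 0 else None)

ok, index = find("banana", "nan")
if ok:
    # A type system with projection types can tie `index` to `ok`:
    # on this branch it is known to be an int ...
    print(index + 1)
else:
    # ... and on this branch it is known to be nil/None.
    print("not found")
```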

Jones, Timothy
Michael Homer, Timothy Jones, and James Noble (Victoria University of Wellington, New Zealand); abstract under Homer, Michael.

Kedlaya, Madhukar N.
Madhukar N. Kedlaya, Behnam Robatmili, and Ben Hardekopf (University of California at Santa Barbara, USA; Qualcomm Research, USA); abstract under Hardekopf, Ben.

Leopoldseder, David
David Leopoldseder, Lukas Stadler, Christian Wimmer, and Hanspeter Mössenböck (JKU Linz, Austria; Oracle Labs, Austria; Oracle Labs, USA)
We present an approach to cross-compile Java bytecode to JavaScript, building on existing Java optimizing compiler technology. Static analysis determines which Java classes and methods are reachable. These are then translated to JavaScript using a re-configured Java just-in-time compiler with a new back end that generates JavaScript instead of machine code. Standard compiler optimizations such as method inlining and global value numbering, as well as advanced optimizations such as escape analysis, lead to compact and optimized JavaScript code. The compiler IR is unstructured, so structured control flow needs to be reconstructed before code generation is possible; we present the details of our control-flow reconstruction algorithm. Our system is based on Graal, an open-source optimizing compiler for the Java HotSpot VM and other VMs. The modular and VM-independent architecture of Graal allows us to reuse the intermediate representation, the bytecode parser, and the high-level optimizations. Our custom back end first performs control-flow reconstruction and then JavaScript code generation. The generated JavaScript undergoes a set of optimizations to increase readability and performance. Static analysis is also performed on the Graal intermediate representation. Medium-sized Java benchmarks such as SPECjbb2005 run with acceptable performance on the V8 JavaScript VM.
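
An illustrative sketch of the fallback scheme for turning an unstructured control-flow graph into structured code (not Graal's reconstruction algorithm): a single dispatch loop over a block label, which corresponds to emitting a while/switch in the generated JavaScript.

```python
def run_cfg(blocks, entry, state):
    # Each block computes on the state and names its successor; `None` exits.
    label = entry
    while label is not None:
        label, state = blocks[label](state)
    return state

blocks = {
    "start": lambda s: ("loop" if s["n"] > 0 else "exit", s),
    "loop":  lambda s: ("start", {"n": s["n"] - 1, "sum": s["sum"] + s["n"]}),
    "exit":  lambda s: (None, s),
}

print(run_cfg(blocks, "start", {"n": 4, "sum": 0}))   # {'n': 0, 'sum': 10}
```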

Lyde, Steven
Steven Lyde, William E. Byrd, and Matthew Might (University of Utah, USA); abstract under Byrd, William E.

Maidl, André Murbach
André Murbach Maidl, Fabio Mascarenhas, and Roberto Ierusalimschy (PUC-Rio, Brazil; Federal University of Rio de Janeiro, Brazil); abstract under Ierusalimschy, Roberto.

Mascarenhas, Fabio
André Murbach Maidl, Fabio Mascarenhas, and Roberto Ierusalimschy (PUC-Rio, Brazil; Federal University of Rio de Janeiro, Brazil); abstract under Ierusalimschy, Roberto.

Might, Matthew
Steven Lyde, William E. Byrd, and Matthew Might (University of Utah, USA); abstract under Byrd, William E.

Møller, Anders
Erik Ernst, Anders Møller, Mathias Schwarz, and Fabio Strocco (Google, Denmark; Aarhus University, Denmark); abstract under Ernst, Erik.

Mössenböck, Hanspeter
Matthias Grimmer, Chris Seaton, Roland Schatz, Thomas Würthinger, and Hanspeter Mössenböck (JKU Linz, Austria; Oracle Labs, UK; Oracle Labs, Austria; Oracle Labs, Switzerland); abstract under Grimmer, Matthias.
David Leopoldseder, Lukas Stadler, Christian Wimmer, and Hanspeter Mössenböck (JKU Linz, Austria; Oracle Labs, Austria; Oracle Labs, USA); abstract under Leopoldseder, David.

Noble, James
Michael Homer, Timothy Jones, and James Noble (Victoria University of Wellington, New Zealand); abstract under Homer, Michael.

Pape, Tobias
Tobias Pape, Tim Felgentreff, Robert Hirschfeld, Anton Gulenko, and Carl Friedrich Bolz (HPI, Germany; TU Berlin, Germany; King's College London, UK); abstract under Bolz, Carl Friedrich.

Robatmili, Behnam
Madhukar N. Kedlaya, Behnam Robatmili, and Ben Hardekopf (University of California at Santa Barbara, USA; Qualcomm Research, USA); abstract under Hardekopf, Ben.

Schatz, Roland
Matthias Grimmer, Chris Seaton, Roland Schatz, Thomas Würthinger, and Hanspeter Mössenböck (JKU Linz, Austria; Oracle Labs, UK; Oracle Labs, Austria; Oracle Labs, Switzerland); abstract under Grimmer, Matthias.

Schwarz, Mathias
Erik Ernst, Anders Møller, Mathias Schwarz, and Fabio Strocco (Google, Denmark; Aarhus University, Denmark); abstract under Ernst, Erik.

Seaton, Chris
Matthias Grimmer, Chris Seaton, Roland Schatz, Thomas Würthinger, and Hanspeter Mössenböck (JKU Linz, Austria; Oracle Labs, UK; Oracle Labs, Austria; Oracle Labs, Switzerland); abstract under Grimmer, Matthias.

Stadler, Lukas
David Leopoldseder, Lukas Stadler, Christian Wimmer, and Hanspeter Mössenböck (JKU Linz, Austria; Oracle Labs, Austria; Oracle Labs, USA); abstract under Leopoldseder, David.

Strocco, Fabio
Erik Ernst, Anders Møller, Mathias Schwarz, and Fabio Strocco (Google, Denmark; Aarhus University, Denmark); abstract under Ernst, Erik.

Tabareau, Nicolas
Éric Tanter and Nicolas Tabareau (University of Chile, Chile; INRIA, France)
Expressive static typing disciplines are a powerful way to achieve high-quality software, but the adoption cost of such techniques should not be underestimated. Just as gradual typing allows a smooth transition from dynamically typed to statically typed programs, it seems desirable to support a gradual path to certified programming. We explore gradual certified programming in Coq, providing the possibility to postpone the proofs of selected properties and to check "at runtime" whether the properties actually hold. Casts can be integrated with the implicit coercion mechanism of Coq to support implicit cast insertion à la gradual typing. Additionally, when extracting Coq functions to mainstream languages, our encoding of casts supports lifting assumed properties into runtime checks. Much to our surprise, it is not necessary to extend Coq in any way to support gradual certified programming: a simple mix of type classes and axioms makes it possible to bring gradual certified programming to Coq in a straightforward manner.
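
A Python analogue of the runtime-checked casts (the paper's actual mechanism uses Coq type classes and axioms; the names below are hypothetical): a property whose proof is postponed becomes a check that runs when the extracted code executes.

```python
def cast(prop, value, message="postponed property failed at runtime"):
    # `prop` stands in for a proposition whose proof was deferred.
    if not prop(value):
        raise AssertionError(message)
    return value

def safe_div(x: int, y: int) -> int:
    y = cast(lambda d: d != 0, y, "divisor must be non-zero")
    return x // y

print(safe_div(10, 2))       # 5
try:
    safe_div(1, 0)           # the missing proof shows up as a runtime check
except AssertionError as e:
    print("check failed:", e)
```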

Tanter, Éric
Éric Tanter and Nicolas Tabareau (University of Chile, Chile; INRIA, France); abstract under Tabareau, Nicolas.

Teruel, Camille
Camille Teruel, Stéphane Ducasse, Damien Cassou, and Marcus Denker (INRIA, France; University of Lille, France); abstract under Cassou, Damien.

Wimmer, Christian
David Leopoldseder, Lukas Stadler, Christian Wimmer, and Hanspeter Mössenböck (JKU Linz, Austria; Oracle Labs, Austria; Oracle Labs, USA); abstract under Leopoldseder, David.

Wrigstad, Tobias
Beatrice Åkerblom and Tobias Wrigstad (Stockholm University, Sweden; Uppsala University, Sweden); abstract under Åkerblom, Beatrice.

Würthinger, Thomas
Matthias Grimmer, Chris Seaton, Roland Schatz, Thomas Würthinger, and Hanspeter Mössenböck (JKU Linz, Austria; Oracle Labs, UK; Oracle Labs, Austria; Oracle Labs, Switzerland); abstract under Grimmer, Matthias.
43 authors