Let a Thousand Flowers Bloom: An Algebraic Representation for Edge Graphs
Jack Liell-Cock and Tom Schrijvers (University of Oxford, United Kingdom; KU Leuven, Belgium)
Context: Edge graphs are graphs whose edges are labelled with identifiers, and nodes can have multiple edges between them. They are used to model a wide range of systems, including networks with distances or degrees of connection and complex relational data.
Inquiry: Unfortunately, the homogeneity of this graph structure prevents an effective representation in (functional) programs. Either their interface is riddled with partial functions, or the representations are computationally inefficient to process.
Approach: We present a novel data type for edge graphs, based on total and recursive definitions, that prevents usage errors from partial APIs and promotes structurally recursive computations. We follow an algebraic approach and provide a set of primitive constructors and combinators, along with equational laws that identify semantically equivalent constructions.
Knowledge: This algebra translates directly into an implementation using algebraic data types, and its homomorphisms give rise to functions for manipulating and transforming these edge graphs.
Grounding: We exploit the fact that many common graph algorithms are such homomorphisms to implement them in our framework.
Importance: In giving a theoretical grounding for the edge graph data type, we can formalise properties such as soundness and completeness of the representation while also minimising usage errors and maximising re-usability.
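The algebra itself, its primitive constructors, combinators, and equational laws, is defined in the article. Purely as a rough illustration of the general approach (an algebraic data type whose folds play the role of homomorphisms), here is a minimal Java (21) sketch; the constructor names are hypothetical and should not be read as the paper's actual primitives.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.BinaryOperator;
import java.util.function.Function;

// Hypothetical constructors for illustration only; the paper defines its own
// primitives, combinators, and equational laws for edge graphs.
sealed interface EdgeGraph permits Empty, Edge, Overlay {}

record Empty() implements EdgeGraph {}                                     // no nodes, no edges
record Edge(String from, String label, String to) implements EdgeGraph {}  // one labelled edge
record Overlay(EdgeGraph left, EdgeGraph right) implements EdgeGraph {}    // union of two graphs

final class EdgeGraphs {
    // A fold: interpret each constructor in a target domain A. Functions of
    // this shape are the structure-preserving maps (homomorphisms) the abstract refers to.
    static <A> A fold(EdgeGraph g, A empty, Function<Edge, A> edge, BinaryOperator<A> overlay) {
        return switch (g) {   // exhaustive thanks to the sealed interface
            case Empty e   -> empty;
            case Edge ed   -> edge.apply(ed);
            case Overlay o -> overlay.apply(fold(o.left(), empty, edge, overlay),
                                            fold(o.right(), empty, edge, overlay));
        };
    }

    // Example fold: collect all edge labels.
    static Set<String> labels(EdgeGraph g) {
        return fold(g, Set.<String>of(), ed -> Set.of(ed.label()),
                    (a, b) -> { var s = new HashSet<>(a); s.addAll(b); return s; });
    }
}
```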
@Article{ProgrammingJournalVolume24p9,
author = {Jack Liell-Cock and Tom Schrijvers},
title = {Let a Thousand Flowers Bloom: An Algebraic Representation for Edge Graphs},
journal = {The Art, Science, and Engineering of Programming},
volume = {8},
number = {},
articleno = {9},
numpages = {28},
doi = {10.22152/programming-journal.org/2024/8/9},
year = {2024},
}
Scheduling Garbage Collection for Energy Efficiency on Asymmetric Multicore Processors
Marina Shimchenko, Erik Österlund, and Tobias Wrigstad (Uppsala University, Sweden; Oracle, Sweden)
The growing concern for energy efficiency in the Information and Communication Technology (ICT) sector has prompted the exploration of resource management techniques. While hardware architectures, such as single-ISA asymmetric multicore processors (AMP), offer potential energy savings, there is still untapped potential for software optimizations. This paper aims to bridge this gap by investigating the scheduling of garbage collection (GC) activities on a heterogeneous architecture with both performance cores ("p-cores") and energy cores ("e-cores") to achieve energy savings.
Our study focuses on the concurrent ZGC collector in the context of Java Virtual Machines (JVM), as the energy aspect is not well studied in the context of latency-sensitive Java workloads. By comparing the energy efficiency, performance, latency, and memory utilization of executing GC on p-cores versus e-cores, we present compelling findings.
We demonstrate that scheduling GC work on e-cores overall leads to approximately 3% energy savings without degrading performance or mean latency, while requiring no additional effort from developers. The overall energy reduction can increase to 5.3±0.0225% by tuning the number of e-cores (still without changing the program).
Our findings highlight the practicality and benefits of scheduling GC on e-cores, showcasing the potential for energy savings in heterogeneous architectures running Java workloads while meeting critical latency requirements. Our research contributes to the ongoing efforts toward achieving a more sustainable and efficient ICT sector.
@Article{ProgrammingJournalVolume24p10,
author = {Marina Shimchenko and Erik Österlund and Tobias Wrigstad},
title = {Scheduling Garbage Collection for Energy Efficiency on Asymmetric Multicore Processors},
journal = {The Art, Science, and Engineering of Programming},
volume = {8},
number = {},
articleno = {10},
numpages = {33},
doi = {10.22152/programming-journal.org/2024/8/10},
year = {2024},
}
Reactive Programming without Functions
Bjarno Oeyen, Joeri De Koster, and Wolfgang De Meuter (Vrije Universiteit Brussel, Belgium)
Context
Reactive programming (RP) is a declarative programming paradigm suitable for expressing the handling of events. It enables programmers to create applications that react automatically to changes over time. Whenever a time-varying signal changes, e.g. in response to values produced by an event stream (e.g., sensor data, user input, …), the program state is updated automatically in tandem with that change. This makes RP well-suited for building interactive applications and reactive (soft real-time) systems.
Inquiry
RP language implementations are often built on top of an existing (host) language as an Embedded Domain-Specific Language (EDSL). This results in application code in which reactive and non-reactive code are inherently entangled. Using a mechanism known as lifting, one usually has access to the full feature set of the (non-reactive) host language in the RP program. However, lifting is also dangerous. First, host code expressed in a Turing-complete language may diverge, resulting in unresponsive programs, i.e. reactive programs that are not actually reactive. Second, the bidirectional integration of reactive and non-reactive code results in a paradigmatic mismatch that, when unchecked, leads to faulty behaviour in programs.
Approach
We propose a new reactive programming language that has been meticulously designed to be reactive-only. We start with a simple (first-order) model for reactivity, based on reactors (i.e. uninstantiated descriptions of signals and their dependencies) and deployments (i.e. instances of reactors) that consist of signals. The language has no notion of functions and thus, unlike other RP languages, no lifting either. We extend this simple model incrementally with additional features found in other programming languages, RP or otherwise. These features include stateful reactors (which allow for time-based accumulation), signals with dynamic dependencies by means of conditionals and polymorphic deployments, recursively defined reactors, and (anonymous) reactors with lexical scope.
Knowledge
In our description of these language features, we not only describe their syntax and semantics, but also how each feature relates to the problems that exist in (EDSL) RP languages. That is, by starting from a reactive-only model, we identify which reactive features (features that, in other RP languages, are typically expressed in non-reactive code) affect the reactive guarantees that can be enforced by the language.
Grounding
We ground our arguments in an analysis of the effect that each feature has on our language: e.g., how signals are updated, how they are created, and how dependencies between signals can be affected. When applicable, we draw parallels with other languages: similarities shared with other RP languages are highlighted and thoroughly analysed, and, where relevant, the same is done for non-reactive languages.
Importance
Our language shows how a purely reactive programming language is able to express the same kinds of programs as other RP languages that require the use of (unchecked) functions. By considering reactive programs as a collection of pure (reactive-only) reactors, we aim to improve how reactive programming is understood by both language designers and its users.
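The paper defines the language's actual syntax and semantics; purely as a reading aid for the reactor/deployment vocabulary above, here is a small Java sketch (hypothetical names, integer-valued signals, no dynamic dependencies) in which a reactor is an uninstantiated description of signals and their dependencies and a deployment instantiates it into signals that update automatically.

```java
import java.util.*;
import java.util.function.BinaryOperator;

// Illustrative only: a tiny first-order reactor/deployment flavour of the model,
// with none of the paper's actual syntax or semantics.
final class Reactor {
    // A reactor is an uninstantiated description: named signals plus the
    // dependencies (and combining operation) between them.
    record Node(String name, List<String> deps, BinaryOperator<Integer> combine) {}
    final List<Node> nodes = new ArrayList<>();   // registered in dependency order

    Reactor source(String name) { nodes.add(new Node(name, List.of(), null)); return this; }
    Reactor derived(String name, String a, String b, BinaryOperator<Integer> f) {
        nodes.add(new Node(name, List.of(a, b), f)); return this;
    }
    Deployment deploy() { return new Deployment(this); }  // instantiate into signals
}

final class Deployment {
    private final Reactor reactor;
    private final Map<String, Integer> values = new HashMap<>();
    Deployment(Reactor r) { this.reactor = r; }

    // Push a new value into a source signal; derived signals update automatically.
    void push(String source, int value) {
        values.put(source, value);
        for (Reactor.Node n : reactor.nodes) {
            if (n.deps().size() == 2 && values.containsKey(n.deps().get(0))
                                     && values.containsKey(n.deps().get(1))) {
                values.put(n.name(), n.combine().apply(values.get(n.deps().get(0)),
                                                       values.get(n.deps().get(1))));
            }
        }
    }
    Integer value(String signal) { return values.get(signal); }
}

class Demo {
    public static void main(String[] args) {
        Deployment d = new Reactor()
            .source("a").source("b")
            .derived("sum", "a", "b", Integer::sum)
            .deploy();
        d.push("a", 1);
        d.push("b", 2);
        System.out.println(d.value("sum"));  // 3
    }
}
```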
@Article{ProgrammingJournalVolume24p11,
author = {Bjarno Oeyen and Joeri De Koster and Wolfgang De Meuter},
title = {Reactive Programming without Functions},
journal = {The Art, Science, and Engineering of Programming},
volume = {8},
number = {},
articleno = {11},
numpages = {30},
doi = {10.22152/programming-journal.org/2024/8/11},
year = {2024},
}
Privacy-Respecting Type Error Telemetry at Scale
Ben Greenman, Alan Jeffrey, Shriram Krishnamurthi, and Mitesh Shah (Brown University, USA; University of Utah, USA; Roblox, USA)
Context: Roblox Studio lets millions of creators build interactive experiences by programming in a variant of Lua called Luau. The creators form a broad group, ranging from novices writing their first script to professional developers; thus, Luau must support a wide audience. As part of its efforts to support all kinds of programmers, Luau includes an optional, gradual type system and goes to great lengths to minimize false positive errors.
Inquiry: Since Luau is currently being used by many creators, we want to collect data to improve the language and, in particular, the type system. The standard way to collect data is to deploy client-side telemetry; however, we cannot scrape personal data or proprietary information, which means we cannot collect source code fragments, error messages, or even filepaths. The research questions are thus about how to conduct telemetry that is not invasive and obtain insights from it about type errors.
Approach: We designed and implemented a pseudonymized, randomly-sampling telemetry system for Luau. Telemetry records include a timestamp, a session id, a reason for sending, and a numeric summary of the most recent type analyses. This information lets us study type errors over time without revealing private data. We deployed the system in Roblox Studio during Spring 2023 and collected over 1.5 million telemetry records from over 340,000 sessions.
Knowledge: We present several findings about Luau, all of which suggest that telemetry is an effective way to study type error pragmatics. One of the less-surprising findings is that opt-in gradual types are unpopular: there is a 100x gap between the number of untyped Luau sessions and the number of typed ones. One surprise is that the strict mode for type analysis is overly conservative about interactions with data assets. A reassuring finding is that type analysis rarely hits its internal limits on problem size.
Grounding: Our findings are supported by a dataset of over 1.5 million telemetry records. The data and scripts for analyzing it are available in an artifact.
Importance: Beyond the immediate benefits to Luau, our findings about types and type errors have implications for adoption and ergonomics in other gradual languages such as TypeScript, Elixir, and Typed Racket. Our telemetry design is of broad interest, as it reports on type errors without revealing sensitive information.
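The Approach paragraph lists exactly what a record contains. As a sketch of what such a privacy-respecting record and its sampling gate could look like (field names, types, and the sampling rate are hypothetical, not Luau's), consider:

```java
import java.time.Instant;
import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical shape of a pseudonymized telemetry record, following the fields
// listed in the abstract: timestamp, session id, reason, numeric summaries.
record TypeAnalysisSummary(int filesAnalyzed, int typedFiles, int typeErrors,
                           boolean strictMode, boolean hitInternalLimits) {}

record TelemetryRecord(Instant timestamp, UUID sessionId, String reason,
                       TypeAnalysisSummary summary) {
    // Deliberately carries no source code, error messages, or file paths.
}

final class Telemetry {
    private static final double SAMPLE_RATE = 0.01;  // hypothetical sampling rate

    // Random sampling: only a fraction of sessions ever report anything.
    static boolean sampled() {
        return ThreadLocalRandom.current().nextDouble() < SAMPLE_RATE;
    }

    static TelemetryRecord makeRecord(UUID sessionId, String reason, TypeAnalysisSummary s) {
        return new TelemetryRecord(Instant.now(), sessionId, reason, s);
    }
}
```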
Broadening the View of Live Programmers: Integrating a Cross-Cutting Perspective on Run-Time Behavior into a Live Programming Environment
Patrick Rein, Christian Flach, Stefan Ramson, Eva Krebs, and Robert Hirschfeld (Hasso Plattner Institute - University of Potsdam, Germany)
Live programming provides feedback on run-time behavior by visualizing concrete values of expressions close to the source code. When using such a local perspective on run-time behavior, programmers have to mentally reconstruct the control flow if they want to understand the relation between observed values. As this requires complete and correct knowledge of all relevant code, this reconstruction is impractical for larger programs as well as in the case of unexpected program behavior. In contrast, cross-cutting perspectives on run-time behavior can visualize the actual control flow directly. At the same time, cross-cutting perspectives are often difficult to navigate due to the large number of run-time events.
We propose to integrate cross-cutting perspectives into live programming environments based on local perspectives so that the two complement each other: the cross-cutting perspective provides an overview of the run-time behavior; the local perspective provides detailed feedback as well as points of interest for navigating the cross-cutting perspective. We present a cross-cutting perspective prototype in the form of a call tree browser integrated into the Babylonian/S live programming environment. In an exploratory user study, we observed that programmers found the tool useful for debugging, code comprehension, and navigation. Finally, we discuss how our prototype illustrates that the features of live programming environments may serve as the basis for other powerful dynamic development tools.
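As a rough sketch of the data structure behind such an integration (hypothetical names, not Babylonian/S code), a recorded call tree can be searched from a probed method so that the local perspective yields points of interest into the cross-cutting one:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a recorded call tree whose nodes can be looked up from a
// probed method, linking the local perspective (probes) to the cross-cutting one.
final class CallTreeNode {
    final String method;                    // which method this call executed
    final List<Object> observedArguments;   // concrete values recorded at run time
    final List<CallTreeNode> children = new ArrayList<>();

    CallTreeNode(String method, List<Object> observedArguments) {
        this.method = method;
        this.observedArguments = observedArguments;
    }

    // Collect every call of the method a probe is attached to; these calls act
    // as points of interest for navigating the call tree browser.
    void findCallsOf(String probedMethod, List<CallTreeNode> out) {
        if (method.equals(probedMethod)) out.add(this);
        for (CallTreeNode child : children) child.findCallsOf(probedMethod, out);
    }
}
```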
@Article{ProgrammingJournalVolume24p13,
author = {Patrick Rein and Christian Flach and Stefan Ramson and Eva Krebs and Robert Hirschfeld},
title = {Broadening the View of Live Programmers: Integrating a Cross-Cutting Perspective on Run-Time Behavior into a Live Programming Environment},
journal = {The Art, Science, and Engineering of Programming},
volume = {8},
number = {},
articleno = {13},
numpages = {38},
doi = {10.22152/programming-journal.org/2024/8/13},
year = {2024},
}
Arrays in Practice: An Empirical Study of Array Access Patterns on the JVM
Beatrice Åkerblom and Elias Castegren (Stockholm University, Sweden; Uppsala University, Sweden)
The array is a data structure used in a wide range of programs. Its compact storage and constant-time random access make it highly efficient, but arbitrary indexing complicates the analysis of code containing array accesses. Such analyses are important for compiler optimisations such as bounds-check elimination.
The aim of this work is to gain a better understanding of how arrays are used in real-world programs. While previous work has applied static analyses to understand how arrays are accessed and used, we take a dynamic approach. We empirically examine various characteristics of array usage by instrumenting programs to log all array accesses, allowing for analysis of array sizes, element types, from where arrays are accessed, and to which extent sequences of array accesses form recognizable patterns. The programs in the study were collected from the Renaissance benchmark suite, all running on the Java Virtual Machine.
We account for the characteristics displayed by the arrays investigated, finding that most arrays have a small size and are accessed by only one or two classes and by a single thread. On average over the benchmarks, 69.8% of the access patterns consist of uncomplicated traversals. Most of the instrumented classes (over 95%) do not use arrays directly at all. These results come from tracing data covering 3,803,043,390 array accesses made across 168,686 classes.
While our analysis has only been applied to the Renaissance benchmark suite, the methodology can be applied to any program running on the Java Virtual Machine. This study, and the methodology in general, can inform future runtime implementations and compiler optimisations.
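As a sketch of the kind of dynamic analysis described above (the paper instruments JVM programs; the API and the pattern definition here are simplified, hypothetical stand-ins), one can log the index of every access to an array and then check whether the sequence forms a simple traversal:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical per-array access log, not the paper's instrumentation.
final class ArrayAccessLog {
    private final List<Integer> indices = new ArrayList<>();

    // Called from instrumented read/write sites, e.g. around a[i].
    int onRead(int[] a, int i) { indices.add(i); return a[i]; }
    void onWrite(int[] a, int i, int v) { indices.add(i); a[i] = v; }

    // "Uncomplicated traversal" here: consecutive indices with a constant step
    // of +1 or -1 (the paper's pattern definitions are more detailed).
    boolean isSimpleTraversal() {
        if (indices.size() < 2) return true;
        int step = indices.get(1) - indices.get(0);
        if (Math.abs(step) != 1) return false;
        for (int k = 2; k < indices.size(); k++) {
            if (indices.get(k) - indices.get(k - 1) != step) return false;
        }
        return true;
    }
}
```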
@Article{ProgrammingJournalVolume24p14,
author = {Beatrice Åkerblom and Elias Castegren},
title = {Arrays in Practice: An Empirical Study of Array Access Patterns on the JVM},
journal = {The Art, Science, and Engineering of Programming},
volume = {8},
number = {},
articleno = {14},
numpages = {31},
doi = {10.22152/programming-journal.org/2024/8/14},
year = {2024},
}
Collective Allocator Abstraction to Control Object Spatial Locality in C++
Takato Hideshima, Shigeyuki Sato, and Tomoharu Ugawa (University of Tokyo, Japan; University of Electro-Communications, Japan)
Disaggregated memory is promising for improving memory utilization in computer clusters in which memory demands vary significantly across computer nodes, leaving memory under-utilized. It allows applications with high memory demands to use memory in other computer nodes. However, disaggregated memory is not easy to use for implementing data structures in C++ because the C++ standard does not provide an adequate abstraction to use it efficiently in a high-level, modular manner. Because accessing remote memory involves high latency, disaggregated memory is often used as a far-memory system, which forms a kind of swap memory where part of local memory is used as a cache area, while the remaining memory is not subject to swapping. To pursue performance, programmers have to be aware of this nonuniform memory view and place data appropriately to minimize swapping.
In this work, we model the address space of memory-disaggregated systems as the far-memory model, present the collective allocator abstraction, which enables us to specify object placement aware of memory address subspaces, and apply it to programming aware of the far-memory model. The far-memory model provides a view of the nonuniform memory space while hiding the details: the virtual address space is divided into two subspaces, one subject to swapping and the other not, and the swapping subspace is further divided into even-sized pages, which are the units of swapping. The collective allocator abstraction forms an allocator as a collection of sub-allocators, each of which owns a distinct subspace, where every allocation is done via a sub-allocator. It enables us to control object placement at allocation time by selecting an appropriate sub-allocator according to different criteria, such as subspace characteristics and object collocation. This greatly facilitates implementing container data structures aware of the far-memory model.
We develop an allocator based on the collective allocator abstraction by extending the C++ standard allocator for container data structures on the far-memory model, and we experimentally demonstrate that it facilitates implementing containers equipped with object placement strategies aware of spatial locality under the far-memory model in a high-level, modular manner. More specifically, we have successfully implemented B-trees and skip lists with the combined use of two placement strategies. The modifications to the original implementations are fairly modest: the added code is mostly due to specifying object placement, while deletions and modifications amount to at most 1.2% and 3.2% of the lines of the original code, respectively. We have experimentally confirmed that the modified implementations indeed achieve data layouts that suppress swapping.
We forecast that the collective allocator abstraction would be a key to high-level integration with different memory hardware technologies because it straightforwardly accommodates new interfaces for subspaces.
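The paper's interface is an extension of the C++ standard allocator; the following Java-style interface sketch (hypothetical names) only illustrates the shape of the abstraction, namely that one allocator is a collection of sub-allocators, each owning a distinct subspace, and that the caller selects a sub-allocator, and thus a placement, at allocation time.

```java
import java.nio.ByteBuffer;

// Conceptual sketch only; it does not reproduce the paper's C++ allocator API.
interface SubAllocator {
    ByteBuffer allocate(int bytes);          // allocate inside this sub-allocator's subspace
}

interface CollectiveAllocator {
    SubAllocator pinned();                   // subspace never subject to swapping
    SubAllocator swappable(long pageHint);   // swapping subspace; hint collocates objects on a page
}

// Usage idea: a container could place hot index nodes via pinned() and bulk
// data via swappable(pageOfParent) to keep related objects on the same page.
```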
@Article{ProgrammingJournalVolume24p15,
author = {Takato Hideshima and Shigeyuki Sato and Tomoharu Ugawa},
title = {Collective Allocator Abstraction to Control Object Spatial Locality in C++},
journal = {The Art, Science, and Engineering of Programming},
volume = {8},
number = {},
articleno = {15},
numpages = {28},
doi = {10.22152/programming-journal.org/2024/8/15},
year = {2024},
}
LiveRec: Prototyping Probes by Framing Debug Protocols
Jean-Baptiste Döderlein, Riemer van Rozen, and Tijs van der Storm (ENS Rennes, France; CWI, Netherlands; University of Groningen, Netherlands)
Context. In the first part of his 2012 presentation "Inventing on Principle", Bret Victor gives a demo of a live code editor for Javascript which shows the dynamic history of values of variables in real time. This form of live programming has become known as "probes". Probes provide the programmer with permanent and continuous insight into the dynamic evolution of function or method variables, thus improving feedback and developer experience.
Inquiry. Although Victor shows a working prototype of live probes in the context of Javascript, he does not discuss strategies for implementing them. Later work provides an implementation approach, but this requires a programming language to be implemented on top of the GraalVM runtime. In this paper we present LiveRec, a generic approach for implementing probes which can be applied in the context of many programming languages, without requiring the modification of compilers or run-time systems.
Approach. LiveRec is based on reusing existing debug protocols to implement probes. Methods or functions are compiled after every code change and executed inside the debugger. During execution, the evolution of all local variables in the current stack frame is recorded and communicated back to the editor or IDE for display to the user.
Knowledge. It turns out that mainstream debug protocols are rich enough for implementing live probes. Step-wise execution, code hot swapping, and stack frame inspection provide the right granularity and sufficient information to realize live probes, without modifying compilers or language runtimes. Furthermore, the recently proposed Debug Adapter Protocol (DAP) provides an even more generic approach to implementing live probes, but, in some cases, at the cost of a significant performance penalty.
Grounding. We have applied LiveRec to implement probes using stack recording natively for Java through the Java Debug Interface (JDI), and through the DAP for Java, Python, C, and Javascript, all requiring just modest amounts of configuration code. We evaluate the run-time performance of all four probe prototypes, decomposed into compile-after-change, hot swap, single-step overhead, and stack recording overhead. Our initial results show that live probes on top of native debug APIs can be performant enough for interactive use. In the case of DAP, however, performance highly depends on the characteristics of the programming language implementation and its associated debugging infrastructure.
Importance. Live programming improves the programmer experience by providing immediate feedback about a program's execution and eliminating disruptive edit-compile-restart sequences. Probes are one way to shorten the programmer feedback loop at the level of functions and methods. Although probes are not new and have been implemented in (prototype) systems, LiveRec's approach of building live probes on top of existing and generic debug protocols promises a path towards probes for a host of mainstream programming languages, with reasonable effort.
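As a rough sketch of the stack-recording ingredient (not LiveRec's actual code), the following Java fragment uses the Java Debug Interface, which the paper also uses for its native Java probes, to single-step a suspended thread and log every visible local variable at each step. It assumes a VirtualMachine and a suspended ThreadReference have already been obtained and requires the jdk.jdi module.

```java
import com.sun.jdi.*;
import com.sun.jdi.event.*;
import com.sun.jdi.request.*;

// Sketch: single-step a suspended thread and record visible locals per step.
final class StackRecorder {
    static void recordSteps(VirtualMachine vm, ThreadReference thread)
            throws InterruptedException, IncompatibleThreadStateException,
                   AbsentInformationException {
        EventRequestManager erm = vm.eventRequestManager();
        StepRequest step = erm.createStepRequest(
                thread, StepRequest.STEP_LINE, StepRequest.STEP_OVER);
        step.enable();
        vm.resume();

        EventQueue queue = vm.eventQueue();
        boolean done = false;
        while (!done) {
            EventSet events = queue.remove();
            for (Event event : events) {
                if (event instanceof StepEvent se) {
                    StackFrame frame = se.thread().frame(0);
                    int line = se.location().lineNumber();
                    // Record name -> value for every local visible at this line;
                    // this is the data a probe would display next to the code.
                    for (LocalVariable var : frame.visibleVariables()) {
                        System.out.printf("line %d: %s = %s%n",
                                line, var.name(), frame.getValue(var));
                    }
                } else if (event instanceof VMDisconnectEvent) {
                    done = true;
                }
            }
            if (!done) events.resume();   // keep stepping until the target disconnects
        }
    }
}
```

A real implementation would bound the stepping to the probed method and stream the recorded values back to the editor instead of printing them.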
@Article{ProgrammingJournalVolume24p16,
author = {Jean-Baptiste Döderlein and Riemer van Rozen and Tijs van der Storm},
title = {LiveRec: Prototyping Probes by Framing Debug Protocols},
journal = {The Art, Science, and Engineering of Programming},
volume = {8},
number = {},
articleno = {16},
numpages = {36},
doi = {10.22152/programming-journal.org/2024/8/16},
year = {2024},
}