PLDI 2024 Co-Located Events

10th ACM SIGPLAN International Workshop on Libraries, Languages and Compilers for Array Programming (ARRAY 2024), June 25, 2024, Copenhagen, Denmark

ARRAY 2024 – Proceedings


10th ACM SIGPLAN International Workshop on Libraries, Languages and Compilers for Array Programming (ARRAY 2024)


Title Page

Welcome from the Chairs
Welcome to the 2024 edition of the ACM SIGPLAN International Workshop on Libraries, Languages and Compilers for Array Programming, co-located with PLDI 2024.

ARRAY 2024 Organization


Points for Free: Embedding Pointful Array Programming in Python
Jakub Bachurski and Alan Mycroft
(University of Cambridge, United Kingdom)
Multidimensional array operations are ubiquitous in machine learning. The dominant ecosystem in this field is centred around Python and NumPy, where programs are expressed with elaborate and error-prone calls in the point-free array programming model. Such code is difficult to statically analyse and maintain. Various other array programming paradigms offer to solve these problems, in particular the pointful style of Dex. However, only limited approaches – based on Einstein summation – have been embedded in Python. We introduce Ein, a pointful array DSL embedded in Python. We also describe a novel connection between pointful and point-free array programming. Thanks to this connection, Ein generates performant and type-safe calls to NumPy with potential for further optimisations. Ein reconciles the readability of comprehension-style definitions with the capabilities of existing array frameworks.

Publisher's Version
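The distinction the abstract draws between pointful and point-free array programming can be illustrated in plain Python/NumPy. The sketch below is not Ein's actual API; it merely contrasts a pointful definition (each output element defined by its indices, as in comprehension-style or Dex-like code) with the equivalent point-free NumPy call.

```python
import numpy as np

# Pointful style: define result[i, j] directly in terms of its indices.
# (Illustrative only; this is NOT the Ein DSL's syntax.)
def matmul_pointful(a, b):
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    return np.array([[sum(a[i, t] * b[t, j] for t in range(k))
                      for j in range(m)]
                     for i in range(n)])

# Point-free style: the same computation as a single combinator-like
# NumPy call, with no explicit element indices in user code.
def matmul_pointfree(a, b):
    return np.einsum('ik,kj->ij', a, b)

a = np.arange(6, dtype=float).reshape(2, 3)
b = np.arange(12, dtype=float).reshape(3, 4)
assert np.allclose(matmul_pointful(a, b), matmul_pointfree(a, b))
```

A DSL in the spirit of Ein would let the programmer write the pointful form while generating calls like the point-free one, keeping comprehension-style readability without the interpretation overhead of Python loops.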
Work Assisting: Linking Task-Parallel Work Stealing with Data-Parallel Self Scheduling
Ivo Gabe de Wolff and Gabriele Keller
(Utrecht University, Netherlands)
We present work assisting, a novel scheduling strategy for mixing data parallelism (loop parallelism) with task parallelism, where threads share their current data-parallel activity in a shared array to let other threads assist. In contrast to most existing work in this space, our algorithm aims at preserving the structure of data parallelism instead of implementing all parallelism as task parallelism. This enables the use of self-scheduling for data parallelism, as required by certain data-parallel algorithms, and only exploits data parallelism if task parallelism is not sufficient. It provides full flexibility: neither the number of threads for a data-parallel loop nor the distribution over threads need be fixed before the loop starts. We present benchmarks to demonstrate that our scheduling algorithm, depending on the problem, behaves similarly to, or outperforms, schedulers based purely on task parallelism.

Publisher's Version
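The core mechanism described above can be sketched in a much-simplified form: a data-parallel loop whose iterations are claimed dynamically (self-scheduling), published in a shared slot so that any idle thread may join and assist. This toy Python version is not the authors' scheduler; names such as `SelfScheduledLoop` and `assist` are illustrative assumptions.

```python
import threading

class SelfScheduledLoop:
    """Simplified sketch of self-scheduling for a data-parallel loop:
    iterations are claimed in chunks under a lock, so any number of
    threads may join ("assist") while the loop is in progress."""
    def __init__(self, n, body, chunk=64):
        self.n, self.body, self.chunk = n, body, chunk
        self._next = 0
        self._lock = threading.Lock()

    def assist(self):
        while True:
            with self._lock:           # claim the next chunk atomically
                start = self._next
                end = min(self.n, start + self.chunk)
                self._next = end
            if start >= end:           # loop exhausted
                return
            for i in range(start, end):
                self.body(i)

def worker(slot):
    # An idle thread inspects the shared "current activity" slot and
    # assists whatever data-parallel loop it finds there.
    loop = slot[0]
    if loop is not None:
        loop.assist()

out = [0] * 1000
loop = SelfScheduledLoop(1000, lambda i: out.__setitem__(i, i * i))
slot = [loop]                          # shared current-activity slot
threads = [threading.Thread(target=worker, args=(slot,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert out == [i * i for i in range(1000)]
```

Note how neither the number of assisting threads nor the distribution of iterations over them is fixed before the loop starts, which is the flexibility the abstract emphasises.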
Shray: An Owner-Compute Distributed Shared-Memory System
Stefan Schrijvers, Thomas Koopman, and Sven-Bodo Scholz
(Radboud University, Netherlands)
In this paper, we propose a new library for storing arrays in a distributed fashion on distributed memory systems. From a programmer's perspective, these arrays behave for arbitrary reads as if they were allocated in shared memory. When it comes to writes into these arrays, the programmer has to ensure that all writes are restricted to a fixed range of addresses that is "owned" by the node executing the writing operation. We show how this design, despite the owner-compute restriction, can aid programmer productivity by enabling straightforward parallelisations of typical array-manipulating codes. Furthermore, we describe an open-source implementation of the proposed library named Shray. Using the programming interface of Shray, we compare possible hand-parallelised codes of example applications with implementations in other DSM/PGAS systems, demonstrating the programming style enabled by Shray and providing some initial performance figures.

Publisher's Version
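The owner-compute rule described above can be modelled in a few lines: arbitrary reads succeed as if the array were in shared memory, while writes are only accepted within the index range the executing node owns. This is a toy single-process model for illustration; Shray itself is a library over real distributed memory, and the class and method names here are assumptions, not Shray's API.

```python
class OwnedArray:
    """Toy model of an owner-compute distributed array: reads are
    allowed anywhere, writes only within the range this "node" owns."""
    def __init__(self, data, owned_start, owned_end):
        self._data = data
        self._owned = range(owned_start, owned_end)

    def __getitem__(self, i):
        # Arbitrary reads behave as if the array were in shared memory.
        return self._data[i]

    def __setitem__(self, i, value):
        # Writes must stay within the owned address range.
        if i not in self._owned:
            raise IndexError(f"index {i} is not owned by this node")
        self._data[i] = value

data = list(range(8))
node0 = OwnedArray(data, 0, 4)   # this "node" owns indices 0..3
node0[2] = 42                    # accepted: owned write
assert node0[6] == 6             # accepted: read outside the owned range
try:
    node0[6] = 0                 # rejected: write outside the owned range
except IndexError:
    pass
```

In a real DSM/PGAS setting the reads outside the owned range would be served transparently from remote nodes, which is what gives the shared-memory illusion while keeping the write discipline simple to reason about.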
