Workshop ARRAY 2024 – Author Index
Bachurski, Jakub

ARRAY '24: Points for Free: Embedding Pointful Array Programming in Python
Jakub Bachurski and Alan Mycroft (University of Cambridge, United Kingdom)

Multidimensional array operations are ubiquitous in machine learning. The dominant ecosystem in this field is centred around Python and NumPy, where programs are expressed with elaborate and error-prone calls in the point-free array programming model. Such code is difficult to statically analyse and maintain. Various other array programming paradigms offer to solve these problems, in particular the pointful style of Dex. However, only limited approaches – based on Einstein summation – have been embedded in Python. We introduce Ein, a pointful array DSL embedded in Python. We also describe a novel connection between pointful and point-free array programming. Thanks to this connection, Ein generates performant and type-safe calls to NumPy with potential for further optimisations. Ein reconciles the readability of comprehension-style definitions with the capabilities of existing array frameworks.

@InProceedings{ARRAY24p1,
  author    = {Jakub Bachurski and Alan Mycroft},
  title     = {Points for Free: Embedding Pointful Array Programming in Python},
  booktitle = {Proc.\ ARRAY},
  publisher = {ACM},
  pages     = {1--12},
  doi       = {10.1145/3652586.3663312},
  year      = {2024},
}

Publisher's Version
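The contrast between the two styles the paper connects can be sketched in plain Python (an illustration of the general idea only; Ein's actual API is not shown here):

```python
# Point-free style: whole-array combinators, no index variables appear.
def matmul_point_free(a, b):
    bt = list(zip(*b))  # transpose b
    return [[sum(x * y for x, y in zip(row, col)) for col in bt]
            for row in a]

# Pointful style: the result is defined element-wise with explicit
# index variables, in the spirit of comprehension-style definitions.
def matmul_pointful(a, b):
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

print(matmul_pointful([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Both functions compute the same matrix product; the pointful version makes the index structure explicit, which is the readability benefit the paper aims to retain while still emitting efficient point-free NumPy calls.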
De Wolff, Ivo Gabe

ARRAY '24: Work Assisting: Linking Task-Parallel Work Stealing with Data-Parallel Self Scheduling
Ivo Gabe de Wolff and Gabriele Keller (Utrecht University, Netherlands)

We present work assisting, a novel scheduling strategy for mixing data parallelism (loop parallelism) with task parallelism, where threads share their current data-parallel activity in a shared array to let other threads assist. In contrast to most existing work in this space, our algorithm aims at preserving the structure of data parallelism instead of implementing all parallelism as task parallelism. This enables the use of self-scheduling for data parallelism, as required by certain data-parallel algorithms, and only exploits data parallelism if task parallelism is not sufficient. It provides full flexibility: neither the number of threads for a data-parallel loop nor the distribution over threads needs to be fixed before the loop starts. We present benchmarks to demonstrate that our scheduling algorithm, depending on the problem, behaves similarly to, or outperforms, schedulers based purely on task parallelism.

@InProceedings{ARRAY24p13,
  author    = {Ivo Gabe de Wolff and Gabriele Keller},
  title     = {Work Assisting: Linking Task-Parallel Work Stealing with Data-Parallel Self Scheduling},
  booktitle = {Proc.\ ARRAY},
  publisher = {ACM},
  pages     = {13--24},
  doi       = {10.1145/3652586.3663313},
  year      = {2024},
}

Publisher's Version
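The data-parallel self-scheduling ingredient can be sketched in Python (a minimal illustration under our own simplifications, not the authors' implementation): threads repeatedly claim the next chunk of a shared loop from a shared counter, so any thread that finishes early keeps assisting with the remaining iterations.

```python
import threading

def self_scheduled_map(f, xs, num_threads=4, chunk=2):
    """Apply f to every element of xs; iterations are claimed dynamically."""
    out = [None] * len(xs)
    next_index = [0]          # shared loop counter (boxed so workers can mutate it)
    lock = threading.Lock()   # stands in for an atomic fetch-and-add

    def worker():
        while True:
            with lock:        # claim the next chunk of iterations
                start = next_index[0]
                next_index[0] += chunk
            if start >= len(xs):
                return        # loop exhausted; nothing left to assist with
            for i in range(start, min(start + chunk, len(xs))):
                out[i] = f(xs[i])

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out
```

Because chunks are claimed at run time rather than pre-assigned, neither the number of participating threads nor the distribution of iterations is fixed before the loop starts, which is the flexibility the abstract describes.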
Keller, Gabriele

ARRAY '24: Work Assisting: Linking Task-Parallel Work Stealing with Data-Parallel Self Scheduling
Ivo Gabe de Wolff and Gabriele Keller (Utrecht University, Netherlands)

We present work assisting, a novel scheduling strategy for mixing data parallelism (loop parallelism) with task parallelism, where threads share their current data-parallel activity in a shared array to let other threads assist. In contrast to most existing work in this space, our algorithm aims at preserving the structure of data parallelism instead of implementing all parallelism as task parallelism. This enables the use of self-scheduling for data parallelism, as required by certain data-parallel algorithms, and only exploits data parallelism if task parallelism is not sufficient. It provides full flexibility: neither the number of threads for a data-parallel loop nor the distribution over threads needs to be fixed before the loop starts. We present benchmarks to demonstrate that our scheduling algorithm, depending on the problem, behaves similarly to, or outperforms, schedulers based purely on task parallelism.

@InProceedings{ARRAY24p13,
  author    = {Ivo Gabe de Wolff and Gabriele Keller},
  title     = {Work Assisting: Linking Task-Parallel Work Stealing with Data-Parallel Self Scheduling},
  booktitle = {Proc.\ ARRAY},
  publisher = {ACM},
  pages     = {13--24},
  doi       = {10.1145/3652586.3663313},
  year      = {2024},
}

Publisher's Version
Koopman, Thomas

ARRAY '24: Shray: An Owner-Compute Distributed Shared-Memory System
Stefan Schrijvers, Thomas Koopman, and Sven-Bodo Scholz (Radboud University, Netherlands)

In this paper, we propose a new library for storing arrays in a distributed fashion on distributed-memory systems. From a programmer's perspective, these arrays behave for arbitrary reads as if they were allocated in shared memory. When it comes to writes into these arrays, the programmer has to ensure that all writes are restricted to a fixed range of addresses that are "owned" by the node executing the writing operation. We show how this design, despite the owner-compute restriction, can aid programmer productivity by enabling straightforward parallelisations of typical array-manipulating codes. Furthermore, we delineate an open-source implementation of the proposed library named Shray. Using the programming interface of Shray, we compare possible hand-parallelised codes of example applications with implementations in other DSM/PGAS systems, demonstrating the programming style enabled by Shray and providing some initial performance figures.

@InProceedings{ARRAY24p25,
  author    = {Stefan Schrijvers and Thomas Koopman and Sven-Bodo Scholz},
  title     = {Shray: An Owner-Compute Distributed Shared-Memory System},
  booktitle = {Proc.\ ARRAY},
  publisher = {ACM},
  pages     = {25--37},
  doi       = {10.1145/3652586.3663314},
  year      = {2024},
}

Publisher's Version
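The owner-compute rule can be illustrated with a small Python sketch (our illustration only; the `OwnerComputeArray` name and its methods are hypothetical and do not reflect Shray's actual interface): any index may be read as if the array were in shared memory, but a write outside the node's owned range is rejected.

```python
class OwnerComputeArray:
    """Simulates one node's view of a block-distributed array."""

    def __init__(self, size, num_nodes, my_node):
        self.data = [0] * size
        per_node = -(-size // num_nodes)          # ceiling division
        self.lo = my_node * per_node              # first owned index
        self.hi = min(size, self.lo + per_node)   # one past last owned index

    def read(self, i):
        # Arbitrary reads behave as if the array were in shared memory.
        return self.data[i]

    def write(self, i, value):
        # Writes are restricted to the range owned by this node.
        if not (self.lo <= i < self.hi):
            raise ValueError(f"index {i} is not owned by this node")
        self.data[i] = value
```

In a real distributed setting each node would hold only its own block and fetch remote data on demand; the sketch keeps everything local and only models the write restriction.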
Mycroft, Alan

ARRAY '24: Points for Free: Embedding Pointful Array Programming in Python
Jakub Bachurski and Alan Mycroft (University of Cambridge, United Kingdom)

Multidimensional array operations are ubiquitous in machine learning. The dominant ecosystem in this field is centred around Python and NumPy, where programs are expressed with elaborate and error-prone calls in the point-free array programming model. Such code is difficult to statically analyse and maintain. Various other array programming paradigms offer to solve these problems, in particular the pointful style of Dex. However, only limited approaches – based on Einstein summation – have been embedded in Python. We introduce Ein, a pointful array DSL embedded in Python. We also describe a novel connection between pointful and point-free array programming. Thanks to this connection, Ein generates performant and type-safe calls to NumPy with potential for further optimisations. Ein reconciles the readability of comprehension-style definitions with the capabilities of existing array frameworks.

@InProceedings{ARRAY24p1,
  author    = {Jakub Bachurski and Alan Mycroft},
  title     = {Points for Free: Embedding Pointful Array Programming in Python},
  booktitle = {Proc.\ ARRAY},
  publisher = {ACM},
  pages     = {1--12},
  doi       = {10.1145/3652586.3663312},
  year      = {2024},
}

Publisher's Version
Scholz, Sven-Bodo

ARRAY '24: Shray: An Owner-Compute Distributed Shared-Memory System
Stefan Schrijvers, Thomas Koopman, and Sven-Bodo Scholz (Radboud University, Netherlands)

In this paper, we propose a new library for storing arrays in a distributed fashion on distributed-memory systems. From a programmer's perspective, these arrays behave for arbitrary reads as if they were allocated in shared memory. When it comes to writes into these arrays, the programmer has to ensure that all writes are restricted to a fixed range of addresses that are "owned" by the node executing the writing operation. We show how this design, despite the owner-compute restriction, can aid programmer productivity by enabling straightforward parallelisations of typical array-manipulating codes. Furthermore, we delineate an open-source implementation of the proposed library named Shray. Using the programming interface of Shray, we compare possible hand-parallelised codes of example applications with implementations in other DSM/PGAS systems, demonstrating the programming style enabled by Shray and providing some initial performance figures.

@InProceedings{ARRAY24p25,
  author    = {Stefan Schrijvers and Thomas Koopman and Sven-Bodo Scholz},
  title     = {Shray: An Owner-Compute Distributed Shared-Memory System},
  booktitle = {Proc.\ ARRAY},
  publisher = {ACM},
  pages     = {25--37},
  doi       = {10.1145/3652586.3663314},
  year      = {2024},
}

Publisher's Version
Schrijvers, Stefan

ARRAY '24: Shray: An Owner-Compute Distributed Shared-Memory System
Stefan Schrijvers, Thomas Koopman, and Sven-Bodo Scholz (Radboud University, Netherlands)

In this paper, we propose a new library for storing arrays in a distributed fashion on distributed-memory systems. From a programmer's perspective, these arrays behave for arbitrary reads as if they were allocated in shared memory. When it comes to writes into these arrays, the programmer has to ensure that all writes are restricted to a fixed range of addresses that are "owned" by the node executing the writing operation. We show how this design, despite the owner-compute restriction, can aid programmer productivity by enabling straightforward parallelisations of typical array-manipulating codes. Furthermore, we delineate an open-source implementation of the proposed library named Shray. Using the programming interface of Shray, we compare possible hand-parallelised codes of example applications with implementations in other DSM/PGAS systems, demonstrating the programming style enabled by Shray and providing some initial performance figures.

@InProceedings{ARRAY24p25,
  author    = {Stefan Schrijvers and Thomas Koopman and Sven-Bodo Scholz},
  title     = {Shray: An Owner-Compute Distributed Shared-Memory System},
  booktitle = {Proc.\ ARRAY},
  publisher = {ACM},
  pages     = {25--37},
  doi       = {10.1145/3652586.3663314},
  year      = {2024},
}

Publisher's Version
7 authors