Workshop Haskell 2024 – Author Index 

Augustsson, Lennart 
Haskell '24: "MicroHs: A Small Compiler ..."
MicroHs: A Small Compiler for Haskell
Lennart Augustsson (Epic Games, Sweden) MicroHs is a compiler for Haskell 2010. It translates Haskell to SKI-style combinators via λ-calculus. The runtime system is quite small with few dependencies.
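The core idea of compiling Haskell to SKI combinators can be sketched with classic bracket abstraction. This is an illustrative toy, not MicroHs's actual implementation, which is far more elaborate:

```haskell
-- Bracket abstraction: turn lambda calculus into S/K/I combinator
-- terms, then run them by graph-style reduction. (Illustrative only.)
data Expr = Var String | App Expr Expr | Lam String Expr
          | S | K | I
  deriving (Eq, Show)

-- `abstract x e` builds a lambda-free term equivalent to \x -> e,
-- assuming e itself is already lambda-free.
abstract :: String -> Expr -> Expr
abstract x (Var y) | x == y = I
abstract x (App f a)        = App (App S (abstract x f)) (abstract x a)
abstract _ e                = App K e

-- Compile bottom-up, eliminating every lambda.
compile :: Expr -> Expr
compile (Lam x b) = abstract x (compile b)
compile (App f a) = App (compile f) (compile a)
compile e         = e

-- Normal-order reduction of the resulting combinator term.
reduce :: Expr -> Expr
reduce (App (App (App S f) g) x) = reduce (App (App f x) (App g x))
reduce (App (App K a) _)         = reduce a
reduce (App I a)                 = reduce a
reduce (App f a)
  | f' /= f   = reduce (App f' a)
  | otherwise = App f (reduce a)
  where f' = reduce f
reduce e = e
```

For example, compiling `\x -> \y -> x` yields `S (K K) I`, and applying it to two arguments reduces back to the first one.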

Bahr, Patrick 
Haskell '24: "Calculating Compilers Effectively ..."
Calculating Compilers Effectively
Zac Garby, Graham Hutton, and Patrick Bahr (University of Nottingham, United Kingdom; IT University of Copenhagen, Denmark) Much work in the area of compiler calculation has focused on pure languages. While this simplifies the reasoning, it reduces the applicability. In this article, we show how an existing compiler calculation methodology can be naturally extended to languages with side effects. We achieve this by exploiting an algebraic approach to effects, which keeps the reasoning simple and provides flexibility in how effects are interpreted. To make the ideas accessible, we only use elementary functional programming techniques. Artifacts Available
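The pure setting such calculations start from is the textbook derivation of a stack-machine compiler from an evaluator, governed by the correctness equation `exec (comp e) s == eval e : s`. A minimal sketch of that pure baseline (the paper's contribution is extending this methodology to effectful languages):

```haskell
-- The standard pure setting for compiler calculation: an expression
-- language, an evaluator, and a stack-machine compiler one calculates
-- from the specification  exec (comp e) s == eval e : s.
data Expr = Val Int | Add Expr Expr

eval :: Expr -> Int
eval (Val n)   = n
eval (Add x y) = eval x + eval y

data Op = PUSH Int | ADD
type Stack = [Int]

comp :: Expr -> [Op]
comp (Val n)   = [PUSH n]
comp (Add x y) = comp x ++ comp y ++ [ADD]

exec :: [Op] -> Stack -> Stack
exec []             s           = s
exec (PUSH n : ops) s           = exec ops (n : s)
exec (ADD : ops)    (m : n : s) = exec ops (n + m : s)
exec _              s           = s  -- underflow: unreachable for compiled code
```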

Bunkenburg, Niels 
Haskell '24: "Making a Curry Interpreter ..."
Making a Curry Interpreter using Effects and Handlers
Niels Bunkenburg and Nicolas Wu (Kiel University, Germany; Imperial College London, United Kingdom) Despite their intended use for prototyping or as a first step towards giving semantics to a new programming language, interpreters are often monolithic and difficult to extend. Higher-order effects and handlers promise modular composition of handlers and semantics; in practice, however, they have been mostly applied to toy examples. We present an approach that successfully combines algebraic, scoped, and latent effects into a handler pipeline for interpreting the functional logic programming language Curry. We show which effects make up Curry, discuss the infrastructure needed to combine effects of different classes, and shed light on how these effects interact in order to lessen the barrier to entry for using effects in practical projects.
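The algebraic-effects style the paper builds on can be illustrated with a free monad over a signature functor, handling Curry's hallmark effect, nondeterministic choice. This toy covers only the algebraic layer, not the scoped and latent effects of the actual pipeline:

```haskell
-- Free monad over a signature functor: the standard vehicle for
-- algebraic effects and handlers. (Illustrative only.)
data Free f a = Pure a | Op (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a) = Pure (g a)
  fmap g (Op op)  = Op (fmap (fmap g) op)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g <*> x = fmap g x
  Op op  <*> x = Op (fmap (<*> x) op)

instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Op op  >>= k = Op (fmap (>>= k) op)

-- Nondeterministic choice, as in Curry's `x ? y`, plus failure.
data Choice k = Fail | Or k k

instance Functor Choice where
  fmap _ Fail     = Fail
  fmap g (Or a b) = Or (g a) (g b)

(?) :: Free Choice a -> Free Choice a -> Free Choice a
p ? q = Op (Or p q)

-- A handler giving choice its "collect all results" semantics.
runChoice :: Free Choice a -> [a]
runChoice (Pure a)      = [a]
runChoice (Op Fail)     = []
runChoice (Op (Or p q)) = runChoice p ++ runChoice q
```

Swapping `runChoice` for a handler that returns only the first result changes the search semantics without touching the program, which is the modularity the abstract refers to.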

Gale, Michael B. 
Haskell '24: "Functional Reactive Programming, ..."
Functional Reactive Programming, Rearranged
Finnbar Keating and Michael B. Gale (University of Warwick, United Kingdom; GitHub, United Kingdom) At present, the most efficient implementations of Arrowized Functional Reactive Programming (AFRP), such as Scalable FRP (SFRP), do not support the loop combinator. This prevents us from expressing fundamental programs in such implementations, which limits AFRP’s use in domains where both performance and expressivity are required. We introduce Oxbow, which extends SFRP with support for the loop combinator by leveraging an improved variant of the rearrange technique. In benchmarks, Oxbow performs at least 2.2x better than Yampa, an AFRP implementation that supports the loop combinator.
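What the loop combinator buys can be seen in a Yampa-style signal function where `loop` feeds part of the output back as input, relying on lazy value recursion to tie the knot (a textbook sketch; SFRP and Oxbow achieve this with different machinery):

```haskell
-- A minimal AFRP-style stream function with a loop combinator.
newtype SF a b = SF { stepSF :: a -> (b, SF a b) }

arrSF :: (a -> b) -> SF a b
arrSF f = SF $ \a -> (f a, arrSF f)

composeSF :: SF a b -> SF b c -> SF a c
composeSF (SF f) (SF g) = SF $ \a ->
  let (b, sf) = f a
      (c, sg) = g b
  in (c, composeSF sf sg)

secondSF :: SF a b -> SF (c, a) (c, b)
secondSF (SF f) = SF $ \(c, a) ->
  let (b, sf) = f a in ((c, b), secondSF sf)

-- One-step delay: emits its initial value, then previous inputs.
iPre :: a -> SF a a
iPre a0 = SF $ \a -> (a0, iPre a)

-- The loop combinator: the second output component is fed back
-- as the second input component, via a lazy recursive binding.
loopSF :: SF (a, c) (b, c) -> SF a b
loopSF (SF f) = SF $ \a ->
  let ((b, c), sf) = f (a, c)
  in (b, loopSF sf)

runSF :: SF a b -> [a] -> [b]
runSF _      []       = []
runSF (SF f) (a : as) = let (b, sf) = f a in b : runSF sf as

-- A counter: the feedback path must contain a delay (iPre) so the
-- knot is productive.
counter :: SF () Int
counter = loopSF (composeSF (arrSF (\((), n) -> (n, n + 1)))
                            (secondSF (iPre 0)))
```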

Garby, Zac 
Haskell '24: "Calculating Compilers Effectively ..."
Calculating Compilers Effectively
Zac Garby, Graham Hutton, and Patrick Bahr (University of Nottingham, United Kingdom; IT University of Copenhagen, Denmark) Much work in the area of compiler calculation has focused on pure languages. While this simplifies the reasoning, it reduces the applicability. In this article, we show how an existing compiler calculation methodology can be naturally extended to languages with side effects. We achieve this by exploiting an algebraic approach to effects, which keeps the reasoning simple and provides flexibility in how effects are interpreted. To make the ideas accessible, we only use elementary functional programming techniques. Artifacts Available

Hammersberg, Samuel 
Haskell '24: "Welcome to the Parti(tioning) ..."
Welcome to the Parti(tioning) (Functional Pearl): Using Rewrite Rules and Specialisation to Partition Haskell Programs
Robert Krook and Samuel Hammersberg (Chalmers University of Technology - Gothenburg University, Sweden; Gothenburg University, Sweden) Writing distributed applications is hard, as the programmer needs to describe the communication protocol between the different endpoints. If this is not done correctly, we can introduce bugs such as deadlocks and data races. Tierless and choreographic programming models aim to make this easier by describing the interactions of every endpoint in a single compilation unit. When such a program is compiled, ideally, a single endpoint is projected and the code for the other endpoints is removed. This leads to smaller binaries with fewer dependencies, and is called program partitioning. In this pearl, we show how we can use rewrite rules and specialisation to get GHC to partition our Haskell programs (almost) for free, if they are written using the Haste App or HasChor framework. As an example of why partitioning is useful, we show how an example application can be more easily built and deployed after being partitioned.
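The two GHC mechanisms the pearl combines look like this in miniature. The rule and the specialised function below are generic stand-ins, not the actual Haste App or HasChor partitioning rules:

```haskell
-- A rewrite rule: an equation GHC applies during optimisation.
-- The pearl's rules are in the same spirit, rewriting the code of
-- remote endpoints away at compile time.
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}

-- Specialisation: ask GHC to generate a monomorphic copy of an
-- overloaded function, which is what lets endpoint-specific code
-- be pruned from the final binary.
genericSum :: Num a => [a] -> a
genericSum = foldr (+) 0
{-# SPECIALISE genericSum :: [Int] -> Int #-}
```

Both pragmas are semantics-preserving: the rule's two sides compute the same list, so enabling or disabling it only affects the generated code.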

Hutton, Graham 
Haskell '24: "Calculating Compilers Effectively ..."
Calculating Compilers Effectively
Zac Garby, Graham Hutton, and Patrick Bahr (University of Nottingham, United Kingdom; IT University of Copenhagen, Denmark) Much work in the area of compiler calculation has focused on pure languages. While this simplifies the reasoning, it reduces the applicability. In this article, we show how an existing compiler calculation methodology can be naturally extended to languages with side effects. We achieve this by exploiting an algebraic approach to effects, which keeps the reasoning simple and provides flexibility in how effects are interpreted. To make the ideas accessible, we only use elementary functional programming techniques. Artifacts Available

Iwasaki, Hideya 
Haskell '24: "Controlling Computation Granularity ..."
Controlling Computation Granularity through Fusion in Improving Floating-Point Numbers
Momoka Saito, Hideya Iwasaki, Hideyuki Kawabata, and Tsuneyasu Komiya (University of Electro-Communications, Japan; Meiji University, Japan; Hiroshima City University, Japan) Improving Floating-Point Numbers (IFN) is a numerical computation library for Haskell. It allows the user to directly specify the accuracy of the result, making accurate computation easy. The library's computation process is based on adaptive control of accuracies, which propagates the demands for more accurate values from an expression to its appropriate subexpressions. However, despite its unique features, programs utilizing the IFN library often encounter efficiency issues regarding memory consumption and execution time due to its fine granularity of computations. This paper presents a computational granularity control mechanism based on fusion transformation to resolve these problems and proposes two fusion strategies: maximal fusion and chain fusion. We have implemented a fusion system that automatically applies these strategies through program transformation; its effectiveness was confirmed on numerical computation programs.
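The demand-propagation scheme the abstract describes can be illustrated with a textbook construction (not the IFN library's actual API): represent a number as a function from a requested accuracy to an approximation, with each operation asking its subexpressions for more accuracy than it must deliver:

```haskell
import Data.Ratio ((%))

-- A value as a function from requested accuracy n to a rational
-- within 2^-n of the true value. (Textbook construction, not IFN's
-- API; IFN additionally controls the granularity of these demands.)
type R = Int -> Rational

fromRat :: Rational -> R
fromRat q _ = q

-- To be within 2^-n of x + y, demand 2^-(n+1) from each operand:
-- the demand for accuracy propagates to the subexpressions.
addR :: R -> R -> R
addR x y n = x (n + 1) + y (n + 1)

-- Query the approximation at accuracy 2^-n.
approx :: Int -> R -> Rational
approx n x = x n
```

Every `addR` node spawns two finer-grained demands, which is exactly the fine computation granularity whose cost the paper's fusion strategies reduce.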

Kawabata, Hideyuki 
Haskell '24: "Controlling Computation Granularity ..."
Controlling Computation Granularity through Fusion in Improving Floating-Point Numbers
Momoka Saito, Hideya Iwasaki, Hideyuki Kawabata, and Tsuneyasu Komiya (University of Electro-Communications, Japan; Meiji University, Japan; Hiroshima City University, Japan) Improving Floating-Point Numbers (IFN) is a numerical computation library for Haskell. It allows the user to directly specify the accuracy of the result, making accurate computation easy. The library's computation process is based on adaptive control of accuracies, which propagates the demands for more accurate values from an expression to its appropriate subexpressions. However, despite its unique features, programs utilizing the IFN library often encounter efficiency issues regarding memory consumption and execution time due to its fine granularity of computations. This paper presents a computational granularity control mechanism based on fusion transformation to resolve these problems and proposes two fusion strategies: maximal fusion and chain fusion. We have implemented a fusion system that automatically applies these strategies through program transformation; its effectiveness was confirmed on numerical computation programs.

Keating, Finnbar 
Haskell '24: "Functional Reactive Programming, ..."
Functional Reactive Programming, Rearranged
Finnbar Keating and Michael B. Gale (University of Warwick, United Kingdom; GitHub, United Kingdom) At present, the most efficient implementations of Arrowized Functional Reactive Programming (AFRP), such as Scalable FRP (SFRP), do not support the loop combinator. This prevents us from expressing fundamental programs in such implementations, which limits AFRP’s use in domains where both performance and expressivity are required. We introduce Oxbow, which extends SFRP with support for the loop combinator by leveraging an improved variant of the rearrange technique. In benchmarks, Oxbow performs at least 2.2x better than Yampa, an AFRP implementation that supports the loop combinator.

Komiya, Tsuneyasu 
Haskell '24: "Controlling Computation Granularity ..."
Controlling Computation Granularity through Fusion in Improving Floating-Point Numbers
Momoka Saito, Hideya Iwasaki, Hideyuki Kawabata, and Tsuneyasu Komiya (University of Electro-Communications, Japan; Meiji University, Japan; Hiroshima City University, Japan) Improving Floating-Point Numbers (IFN) is a numerical computation library for Haskell. It allows the user to directly specify the accuracy of the result, making accurate computation easy. The library's computation process is based on adaptive control of accuracies, which propagates the demands for more accurate values from an expression to its appropriate subexpressions. However, despite its unique features, programs utilizing the IFN library often encounter efficiency issues regarding memory consumption and execution time due to its fine granularity of computations. This paper presents a computational granularity control mechanism based on fusion transformation to resolve these problems and proposes two fusion strategies: maximal fusion and chain fusion. We have implemented a fusion system that automatically applies these strategies through program transformation; its effectiveness was confirmed on numerical computation programs.

Krook, Robert 
Haskell '24: "Welcome to the Parti(tioning) ..."
Welcome to the Parti(tioning) (Functional Pearl): Using Rewrite Rules and Specialisation to Partition Haskell Programs
Robert Krook and Samuel Hammersberg (Chalmers University of Technology - Gothenburg University, Sweden; Gothenburg University, Sweden) Writing distributed applications is hard, as the programmer needs to describe the communication protocol between the different endpoints. If this is not done correctly, we can introduce bugs such as deadlocks and data races. Tierless and choreographic programming models aim to make this easier by describing the interactions of every endpoint in a single compilation unit. When such a program is compiled, ideally, a single endpoint is projected and the code for the other endpoints is removed. This leads to smaller binaries with fewer dependencies, and is called program partitioning. In this pearl, we show how we can use rewrite rules and specialisation to get GHC to partition our Haskell programs (almost) for free, if they are written using the Haste App or HasChor framework. As an example of why partitioning is useful, we show how an example application can be more easily built and deployed after being partitioned.

Marques, Rodrigo 
Haskell '24: "Haskelite: A Tracing Interpreter ..."
Haskelite: A Tracing Interpreter Based on a Pattern-Matching Calculus
Pedro Vasconcelos and Rodrigo Marques (University of Porto, Portugal) Many Haskell textbooks explain the evaluation of pure functional programs as a process of stepwise rewriting using equations. However, usual implementation techniques perform program transformations that make producing the corresponding evaluation traces difficult. This paper presents a tracing interpreter for a subset of Haskell based on the pattern-matching calculus of Kahl. We start from a big-step semantics in the style of Launchbury and develop a small-step semantics in the style of Sestoft's machines. This machine is used in the implementation of a step-by-step educational interpreter. We also discuss some implementation decisions and present illustrative examples. Preprint Artifacts Available
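The "stepwise rewriting" style of evaluation the paper wants to surface can be sketched for a toy expression language: a single-step function plus a driver that collects every intermediate form (Haskelite does this for a real Haskell subset via a pattern-matching calculus):

```haskell
-- A toy small-step rewriter producing a textbook-style trace.
data E = Lit Int | Add E E deriving Eq

instance Show E where
  show (Lit n)   = show n
  show (Add a b) = "(" ++ show a ++ " + " ++ show b ++ ")"

-- One rewriting step, reducing the leftmost redex; Nothing when
-- the expression is already a value.
step :: E -> Maybe E
step (Add (Lit m) (Lit n)) = Just (Lit (m + n))
step (Add (Lit m) b)       = Add (Lit m) <$> step b
step (Add a b)             = (\a' -> Add a' b) <$> step a
step (Lit _)               = Nothing

-- The full trace: the expression and every intermediate rewrite.
steps :: E -> [E]
steps e = e : maybe [] steps (step e)
```

For `(1 + 2) + 3` this yields the trace `(1 + 2) + 3`, `3 + 3`, `6`, mirroring how textbooks present equational evaluation.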

Ramsay, Craig 
Haskell '24: "Cloaca: A Concurrent Hardware ..."
Cloaca: A Concurrent Hardware Garbage Collector for Non-strict Functional Languages
Craig Ramsay and Robert Stewart (Heriot-Watt University, United Kingdom) Most functional language runtime systems context switch between executing user code and a non-concurrent garbage collector (GC), exposing GC latency in overall wall-clock time. Recent concurrent software-based GCs reduce these latencies, but wall-clock times are instead increased by their synchronisation and write barrier overheads, by as much as 21%. This GC overhead is exacerbated further for pure non-strict languages like Haskell, due to the abundance of allocations for storing immutable data structures and closures. This paper presents Cloaca, an FPGA-based hardware implementation of a concurrent, hybrid GC for a pure non-strict functional language. It combines mark-and-sweep tracing with one-bit reference counting. It traces live heap data using hardware-level synchronisation and write barriers, without damaging graph reduction performance. To ensure the correctness of Cloaca, three invariants of its Haskell-based implementation are verified with property-based testing. Despite GHC running on an Intel i7 CPU operating at a 25x higher clock frequency than Cloaca, Cloaca takes, on average, 4.1% of GHC's GC wall-clock time across 14 of 15 benchmarks. Artifacts Available

Reinders, Jaro 
Haskell '24: "Higher Order Patterns for ..."
Higher Order Patterns for Rewrite Rules
Jaro Reinders (Delft University of Technology, Netherlands) GHC’s rewrite rules enable programmers to write local program transformation rules for their own functions. The most notable use case is fusion optimization, which merges multiple traversals of a data structure into one and avoids allocating intermediate structures. However, GHC’s rewrite rules were too limited to express a rewrite rule that is necessary for proper fusion of the concatMap function in a variant of fusion called stream fusion. We introduce higher order patterns as a simple extension of GHC’s rewrite rules. Higher order patterns substantially broaden the expressiveness of rewrite rules that involve local variables. In particular, they enable the rewriting of concatMap such that it can be properly optimized. We implement a stream fusion framework to replace the existing fusion framework in GHC and evaluate it on GHC’s benchmark suite. Our stream fusion framework with higher order patterns shows an average speedup of 7% compared to our stream fusion framework without it. However, our stream fusion framework is not yet able to match the performance of GHC’s existing fusion framework. Additionally, we show that our higher order patterns can be used to simplify GHC’s existing mechanism for rolling back failed attempts at fusion and, at the same time, solve a problem that prevented proper optimization in one example program. However, evaluating it on GHC’s benchmark suite shows a small regression in performance overall.
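Stream fusion, the setting of the paper, can be sketched in miniature. Real frameworks hide the stream state existentially and rely on rules like `stream (unstream s) = s` to cancel conversions; `concatMap` is hard precisely because its inner stream's stepper mentions the bound element, which is where higher order patterns come in. The state type is exposed here for simplicity:

```haskell
-- Stream fusion in miniature: lists as explicit step-by-step streams,
-- so that pipelines traverse the data once with no intermediate lists.
data Step s a = Done | Yield a s

data Stream s a = Stream (s -> Step s a) s

stream :: [a] -> Stream [a] a
stream = flip Stream
  where
    -- Stream is applied as: Stream stepper initialState
stream' :: [a] -> Stream [a] a
stream' xs = Stream step xs
  where
    step []       = Done
    step (y : ys) = Yield y ys

unstream :: Stream s a -> [a]
unstream (Stream step s0) = go s0
  where
    go s = case step s of
      Done       -> []
      Yield a s' -> a : go s'

mapS :: (a -> b) -> Stream s a -> Stream s b
mapS f (Stream step s0) = Stream step' s0
  where
    step' s = case step s of
      Done       -> Done
      Yield a s' -> Yield (f a) s'

-- A fused pipeline: one traversal, no intermediate list.
mapMap :: (b -> c) -> (a -> b) -> [a] -> [c]
mapMap f g = unstream . mapS f . mapS g . stream'
```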

Saito, Momoka 
Haskell '24: "Controlling Computation Granularity ..."
Controlling Computation Granularity through Fusion in Improving Floating-Point Numbers
Momoka Saito, Hideya Iwasaki, Hideyuki Kawabata, and Tsuneyasu Komiya (University of Electro-Communications, Japan; Meiji University, Japan; Hiroshima City University, Japan) Improving Floating-Point Numbers (IFN) is a numerical computation library for Haskell. It allows the user to directly specify the accuracy of the result, making accurate computation easy. The library's computation process is based on adaptive control of accuracies, which propagates the demands for more accurate values from an expression to its appropriate subexpressions. However, despite its unique features, programs utilizing the IFN library often encounter efficiency issues regarding memory consumption and execution time due to its fine granularity of computations. This paper presents a computational granularity control mechanism based on fusion transformation to resolve these problems and proposes two fusion strategies: maximal fusion and chain fusion. We have implemented a fusion system that automatically applies these strategies through program transformation; its effectiveness was confirmed on numerical computation programs.

Stewart, Robert 
Haskell '24: "Cloaca: A Concurrent Hardware ..."
Cloaca: A Concurrent Hardware Garbage Collector for Non-strict Functional Languages
Craig Ramsay and Robert Stewart (Heriot-Watt University, United Kingdom) Most functional language runtime systems context switch between executing user code and a non-concurrent garbage collector (GC), exposing GC latency in overall wall-clock time. Recent concurrent software-based GCs reduce these latencies, but wall-clock times are instead increased by their synchronisation and write barrier overheads, by as much as 21%. This GC overhead is exacerbated further for pure non-strict languages like Haskell, due to the abundance of allocations for storing immutable data structures and closures. This paper presents Cloaca, an FPGA-based hardware implementation of a concurrent, hybrid GC for a pure non-strict functional language. It combines mark-and-sweep tracing with one-bit reference counting. It traces live heap data using hardware-level synchronisation and write barriers, without damaging graph reduction performance. To ensure the correctness of Cloaca, three invariants of its Haskell-based implementation are verified with property-based testing. Despite GHC running on an Intel i7 CPU operating at a 25x higher clock frequency than Cloaca, Cloaca takes, on average, 4.1% of GHC's GC wall-clock time across 14 of 15 benchmarks. Artifacts Available

Van Brügge, Jan 
Haskell '24: "Liquid Amortization: Proving ..."
Liquid Amortization: Proving Amortized Complexity with LiquidHaskell (Functional Pearl)
Jan van Brügge (Heriot-Watt University, United Kingdom) Formal reasoning about the time complexity of algorithms and data structures is usually done in interactive theorem provers like Isabelle/HOL. This includes reasoning about amortized time complexity, which considers the worst-case performance over a series of operations. However, most programs are not written within a theorem prover and thus use the data structures of the production language. To verify correctness, it is necessary to translate the data structures from the production language into the language of the prover. Such a translation step could introduce errors, for example due to a mismatch in features between the two languages. We show how to prove amortized complexity of data structures directly in Haskell using LiquidHaskell. Besides skipping the translation step, our approach can also provide a didactic advantage: learners do not have to learn an additional language for proofs and can focus on the new concepts only. For this paper, we do not assume prior knowledge of amortized complexity, as we explain the concepts and apply them in our first case study, a simple stack with multi-pop. Moving to more complicated (and useful) data structures, we show that the same technique works for binomial heaps, which can be used to implement a priority queue. We also prove amortized complexity bounds for Claessen’s version of the finger tree, a sequence-like data structure with constant-time cons/uncons on either end. Finally, we discuss the current limitations of LiquidHaskell that made certain versions of the data structures infeasible. Preprint
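The first case study, a stack with multi-pop, can be sketched in plain Haskell with the amortized argument in comments. In the paper the argument is carried by LiquidHaskell refinement-type annotations rather than prose:

```haskell
-- Stack with multi-pop and the classic potential-function argument.
-- Potential Phi = number of elements on the stack.
newtype Stack a = Stack [a] deriving (Eq, Show)

-- Actual cost 1, Phi increases by 1: amortized cost 2 = O(1).
push :: a -> Stack a -> Stack a
push x (Stack xs) = Stack (x : xs)

-- Pop up to n elements. Actual cost min n (length xs), but Phi
-- decreases by exactly that amount, so the amortized cost is O(1):
-- each popped element was "paid for" by the push that added it.
multipop :: Int -> Stack a -> ([a], Stack a)
multipop n (Stack xs) =
  let (popped, rest) = splitAt n xs
  in (popped, Stack rest)
```

The same bookkeeping, expressed as refinements over an explicit cost, is what LiquidHaskell then checks mechanically.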

Vasconcelos, Pedro 
Haskell '24: "Haskelite: A Tracing Interpreter ..."
Haskelite: A Tracing Interpreter Based on a Pattern-Matching Calculus
Pedro Vasconcelos and Rodrigo Marques (University of Porto, Portugal) Many Haskell textbooks explain the evaluation of pure functional programs as a process of stepwise rewriting using equations. However, usual implementation techniques perform program transformations that make producing the corresponding evaluation traces difficult. This paper presents a tracing interpreter for a subset of Haskell based on the pattern-matching calculus of Kahl. We start from a big-step semantics in the style of Launchbury and develop a small-step semantics in the style of Sestoft's machines. This machine is used in the implementation of a step-by-step educational interpreter. We also discuss some implementation decisions and present illustrative examples. Preprint Artifacts Available

Wu, Nicolas 
Haskell '24: "Making a Curry Interpreter ..."
Making a Curry Interpreter using Effects and Handlers
Niels Bunkenburg and Nicolas Wu (Kiel University, Germany; Imperial College London, United Kingdom) Despite their intended use for prototyping or as a first step towards giving semantics to a new programming language, interpreters are often monolithic and difficult to extend. Higher-order effects and handlers promise modular composition of handlers and semantics; in practice, however, they have been mostly applied to toy examples. We present an approach that successfully combines algebraic, scoped, and latent effects into a handler pipeline for interpreting the functional logic programming language Curry. We show which effects make up Curry, discuss the infrastructure needed to combine effects of different classes, and shed light on how these effects interact in order to lessen the barrier to entry for using effects in practical projects.
20 authors