Workshop LCTES 2022 – Author Index |
Asavoae, Mihail |
LCTES '22: "A Memory Interference Analysis ..."
A Memory Interference Analysis using a Formal Timing Analyzer (WIP)
Mihail Asavoae, Oumaima Matoussi, Asmae Bouachtala, Hai-Dang Vu, and Mathieu Jan (University of Paris-Saclay, France; CEA LIST, France) Safety-critical applications require well-defined and documented timing behavior. These requirements shape the design and implementation of a timing analyzer based on a formal Instruction-Set Architecture (ISA) semantics and formal micro-architecture models. In this paper, we present the key elements of such a timing analyzer and show how to systematically combine the formal components to address timing properties such as memory interference. We also report preliminary experiments on the memory interference analysis of multi-threaded applications in a multicore context. @InProceedings{LCTES22p146, author = {Mihail Asavoae and Oumaima Matoussi and Asmae Bouachtala and Hai-Dang Vu and Mathieu Jan}, title = {A Memory Interference Analysis using a Formal Timing Analyzer (WIP)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {146--150}, doi = {10.1145/3519941.3535077}, year = {2022}, } Publisher's Version |
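To illustrate the kind of property such an analyzer evaluates, here is a minimal Python sketch (not the authors' tool; the instruction format and bus model are invented) that composes per-core instruction streams with a shared-bus timing model and counts the cycles each core loses to another core's memory traffic:

    # Illustrative sketch: per-core instruction streams contend for one
    # shared memory bus; "stalls" accumulates interference cycles per core.
    def run(cores, bus_latency=3):
        time, bus_free_at, stalls = 0, 0, [0] * len(cores)
        pcs = [0] * len(cores)
        while any(pc < len(prog) for pc, prog in zip(pcs, cores)):
            for cid, prog in enumerate(cores):
                if pcs[cid] >= len(prog):
                    continue
                op = prog[pcs[cid]]
                if op == "mem":
                    if bus_free_at > time:                    # another core holds the bus
                        stalls[cid] += bus_free_at - time     # interference cycles
                    bus_free_at = max(bus_free_at, time) + bus_latency
                pcs[cid] += 1
            time += 1
        return stalls

    print(run([["mem", "alu", "mem"], ["mem", "mem", "alu"]]))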
|
Ayers, Hudson |
LCTES '22: "Tighten Rust’s Belt: Shrinking ..."
Tighten Rust’s Belt: Shrinking Embedded Rust Binaries
Hudson Ayers, Evan Laufer, Paul Mure, Jaehyeon Park, Eduardo Rodelo, Thea Rossman, Andrey Pronin, Philip Levis, and Johnathan Van Why (Stanford University, USA; Google, USA) Rust is a promising programming language for embedded software, providing low-level primitives and performance similar to C/C++ alongside type safety, memory safety, and modern high-level language features. We find naive use of Rust leads to binaries much larger than their C equivalents. For flash-constrained embedded microcontrollers, this is prohibitive. We find four major causes of this growth: monomorphization, inefficient derivations, implicit data structures, and missing compiler optimizations. We present a set of embedded Rust programming principles which reduce Rust binary sizes. We apply these principles to an industrial Rust firmware application, reducing size by 76kB (19%), and an open source Rust OS kernel binary, reducing size by 23kB (26%). We explore compiler optimizations that could further shrink embedded Rust. @InProceedings{LCTES22p121, author = {Hudson Ayers and Evan Laufer and Paul Mure and Jaehyeon Park and Eduardo Rodelo and Thea Rossman and Andrey Pronin and Philip Levis and Johnathan Van Why}, title = {Tighten Rust’s Belt: Shrinking Embedded Rust Binaries}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {121--132}, doi = {10.1145/3519941.3535075}, year = {2022}, } Publisher's Version |
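As a loose illustration of one of the named causes, monomorphization, here is a hypothetical Python sketch that attributes code size to generic functions by grouping demangled symbols under their pre-instantiation name; the symbols and sizes below are invented, whereas the paper's analysis works on real binaries:

    # Illustrative sketch: group monomorphized copies of a Rust generic by
    # stripping the <...> instantiation arguments from demangled symbols.
    from collections import defaultdict

    symbols = [                                   # (symbol, size) pairs, invented
        ("core::fmt::Formatter::pad", 312),
        ("alloc::vec::Vec<u8>::push", 96),
        ("alloc::vec::Vec<u32>::push", 96),
        ("alloc::vec::Vec<MyMsg>::push", 104),
    ]

    def generic_root(sym):
        out, depth = [], 0
        for ch in sym:
            if ch == "<":
                depth += 1
            elif ch == ">":
                depth -= 1
            elif depth == 0:
                out.append(ch)
        return "".join(out)

    by_root = defaultdict(int)
    for sym, size in symbols:
        by_root[generic_root(sym)] += size        # total size of all copies
    for root, size in sorted(by_root.items(), key=lambda kv: -kv[1]):
        print(f"{size:6d}  {root}")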
|
Baudart, Guillaume |
LCTES '22: "JAX Based Parallel Inference ..."
JAX Based Parallel Inference for Reactive Probabilistic Programming
Guillaume Baudart, Louis Mandel, and Reyyan Tekin (DI ENS, France; ENS, France; PSL University, France; CNRS, France; Inria, France; IBM Research, USA) ProbZelus is a synchronous probabilistic language for the design of reactive probabilistic models in interaction with an environment. Reactive inference methods continuously learn distributions over the unobserved parameters of the model from statistical observations. Unfortunately, this inference problem is in general intractable. Monte Carlo inference techniques thus rely on many independent executions to compute accurate approximations. These methods are expensive but can be parallelized. We propose to use JAX to parallelize the ProbZelus reactive inference engine. JAX is a recent library that compiles Python code for execution on massively parallel architectures such as GPUs or TPUs. In this paper, we describe a new reactive inference engine implemented in JAX and the new associated JAX backend for ProbZelus. We show on existing benchmarks that our new parallel implementation outperforms the original sequential implementation for a large number of particles. @InProceedings{LCTES22p26, author = {Guillaume Baudart and Louis Mandel and Reyyan Tekin}, title = {JAX Based Parallel Inference for Reactive Probabilistic Programming}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {26--36}, doi = {10.1145/3519941.3535066}, year = {2022}, } Publisher's Version Artifacts Reusable Results Reproduced |
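A minimal sketch of the underlying idea, assuming only standard JAX primitives and a toy model rather than the ProbZelus engine: all particles of one inference step run in parallel via jax.vmap, then resampling keeps particles in proportion to their weights.

    # Minimal sketch (not the ProbZelus engine): one vmapped particle-filter
    # step with a toy transition and likelihood model.
    import jax
    import jax.numpy as jnp

    def step(state, obs, noise):
        new_state = state + noise                      # toy transition model
        logw = -0.5 * (obs - new_state) ** 2           # toy Gaussian likelihood
        return new_state, logw

    def particle_step(key, states, obs):
        k1, k2 = jax.random.split(key)
        noise = jax.random.normal(k1, states.shape)
        new_states, logw = jax.vmap(step)(states, jnp.full_like(states, obs), noise)
        idx = jax.random.categorical(k2, logw, shape=states.shape)   # resample
        return new_states[idx]

    key = jax.random.PRNGKey(0)
    states = jnp.zeros(1024)                           # 1024 particles, in parallel
    states = jax.jit(particle_step)(key, states, 0.7)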
|
Bhagyanath, Anoop |
LCTES '22: "Code Generation Criteria for ..."
Code Generation Criteria for Buffered Exposed Datapath Architectures from Dataflow Graphs
Klaus Schneider, Anoop Bhagyanath, and Julius Roob (University of Kaiserslautern, Germany) Many novel processor architectures expose their processing units (PUs) and internal datapaths to the compiler. To avoid unnecessary synchronization of PUs, the datapaths are often buffered, which results in buffered exposed datapath (BED) architectures. This paper suggests a code generation technique for BED architectures from dataflow graphs that are used as intermediate program representations. Inspired by results on queue layouts in graph drawing, we determine constraints on the node and edge orderings of the dataflow graphs that ensure the first-in-first-out behavior of the buffers. Formalizing these constraints in propositional logic enables SAT solvers to compute optimal PU allocations. Moreover, future code generation techniques may develop heuristics based on the code generation criteria of this paper. @InProceedings{LCTES22p133, author = {Klaus Schneider and Anoop Bhagyanath and Julius Roob}, title = {Code Generation Criteria for Buffered Exposed Datapath Architectures from Dataflow Graphs}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {133--145}, doi = {10.1145/3519941.3535076}, year = {2022}, } Publisher's Version Info Artifacts Functional Results Reproduced |
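The queue-layout-style FIFO constraint can be stated concretely: under a given node order, two dataflow edges mapped to the same buffer must not nest, or the buffer's first-in-first-out order would be violated. The Python checker below is an illustrative rendering of that criterion (the paper encodes it symbolically for a SAT solver; the graph here is invented):

    # Illustrative FIFO check: nested edges in the same queue break FIFO order.
    from itertools import combinations

    def fifo_ok(node_order, edge_to_buffer):
        pos = {n: i for i, n in enumerate(node_order)}
        for (e1, b1), (e2, b2) in combinations(edge_to_buffer.items(), 2):
            if b1 != b2:
                continue
            (a, b), (c, d) = sorted((pos[e1[0]], pos[e1[1]])), sorted((pos[e2[0]], pos[e2[1]]))
            if a < c and d < b or c < a and b < d:   # one edge nests inside the other
                return False
        return True

    edges = {("n0", "n3"): "q0", ("n1", "n2"): "q0"}   # nested edges on one queue
    print(fifo_ok(["n0", "n1", "n2", "n3"], edges))    # -> False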
|
Bouachtala, Asmae |
LCTES '22: "A Memory Interference Analysis ..."
A Memory Interference Analysis using a Formal Timing Analyzer (WIP)
Mihail Asavoae, Oumaima Matoussi, Asmae Bouachtala, Hai-Dang Vu, and Mathieu Jan (University of Paris-Saclay, France; CEA LIST, France) Safety-critical applications require well-defined and documented timing behavior. These requirements shape the design and implementation of a timing analyzer based on a formal Instruction-Set Architecture (ISA) semantics and formal micro-architecture models. In this paper, we present the key elements of such a timing analyzer and show how to systematically combine the formal components to address timing properties such as memory interference. We also report preliminary experiments on the memory interference analysis of multi-threaded applications in a multicore context. @InProceedings{LCTES22p146, author = {Mihail Asavoae and Oumaima Matoussi and Asmae Bouachtala and Hai-Dang Vu and Mathieu Jan}, title = {A Memory Interference Analysis using a Formal Timing Analyzer (WIP)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {146--150}, doi = {10.1145/3519941.3535077}, year = {2022}, } Publisher's Version |
|
Bradbury, Alex |
LCTES '22: "Trace-and-Brace (TAB): Bespoke ..."
Trace-and-Brace (TAB): Bespoke Software Countermeasures against Soft Errors
Yousun Ko, Alex Bradbury, Bernd Burgstaller, and Robert Mullins (Yonsei University, South Korea; lowRISC, UK; University of Cambridge, UK) Lower voltage levels and higher clock frequencies together with a tight hardware area budget make modern processors more susceptible to soft errors. Existing generic software countermeasures against soft errors are application-agnostic or applied uniformly to the application code, which results in insufficient coverage of errors and excessive performance and code size overhead. In this paper, we observe that the degree and the types of vulnerabilities against soft errors vary by application and operational scenario. We propose a software method, Trace-and-Brace (TAB), that diagnoses the types and locations of vulnerabilities in software and allows accurate and precise application of countermeasures. TAB consists of two phases. The trace phase performs exhaustive fault injection on program traces to diagnose vulnerable code regions and their vulnerability types (i.e., control-flow, data-flow, and memory data corruption). TAB provides an enhanced and comprehensive set of countermeasures that the user employs in the brace phase through high-level program annotations on vulnerable code regions. Bespoke countermeasures avoid the protection overhead for robust code and focus on vulnerable regions instead. We implemented TAB within Clang and LLVM, and evaluated it on the RISC-V architecture. TAB detected 99.82% of data-flow errors and 10.94% more control-flow errors (96.61% in total) than the state-of-the-art approach, while TAB’s generated programs run 1.15x faster. TAB provides selective memory area protection, which induced only 16% performance overhead to detect 99.99% of soft errors in memory. @InProceedings{LCTES22p73, author = {Yousun Ko and Alex Bradbury and Bernd Burgstaller and Robert Mullins}, title = {Trace-and-Brace (TAB): Bespoke Software Countermeasures against Soft Errors}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {73--85}, doi = {10.1145/3519941.3535070}, year = {2022}, } Publisher's Version |
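The trace phase's idea can be sketched on a toy register machine: inject a single bit flip at every point of a trace and classify whether the final result is corrupted or the fault is masked. This is a hypothetical miniature, not TAB's LLVM-based implementation:

    # Illustrative exhaustive fault injection on a toy two-instruction trace.
    def execute(regs, trace):
        for op, dst, a, b in trace:
            regs[dst] = regs[a] + regs[b] if op == "add" else regs[a] ^ regs[b]
        return regs["r0"]

    trace = [("add", "r0", "r1", "r2"), ("xor", "r0", "r0", "r3")]
    golden = execute({"r0": 0, "r1": 1, "r2": 2, "r3": 4}, trace)

    masked = corrupted = 0
    for point in range(len(trace)):
        for reg in ("r0", "r1", "r2", "r3"):
            for bit in range(8):
                regs = {"r0": 0, "r1": 1, "r2": 2, "r3": 4}
                execute(regs, trace[:point])           # run up to the fault point
                regs[reg] ^= 1 << bit                  # inject a single bit flip
                result = execute(regs, trace[point:])  # resume execution
                if result == golden:
                    masked += 1                        # fault had no visible effect
                else:
                    corrupted += 1                     # data corruption detected
    print(masked, corrupted)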
|
Burgstaller, Bernd |
LCTES '22: "Trace-and-Brace (TAB): Bespoke ..."
Trace-and-Brace (TAB): Bespoke Software Countermeasures against Soft Errors
Yousun Ko, Alex Bradbury, Bernd Burgstaller, and Robert Mullins (Yonsei University, South Korea; lowRISC, UK; University of Cambridge, UK) Lower voltage levels and higher clock frequencies together with a tight hardware area budget make modern processors more susceptible to soft errors. Existing generic software countermeasures against soft errors are application-agnostic or applied uniformly to the application code, which results in insufficient coverage of errors and excessive performance and code size overhead. In this paper, we observe that the degree and the types of vulnerabilities against soft errors vary by application and operational scenario. We propose a software method, Trace-and-Brace (TAB), that diagnoses the types and locations of vulnerabilities in software and allows accurate and precise application of countermeasures. TAB consists of two phases. The trace phase performs exhaustive fault injection on program traces to diagnose vulnerable code regions and their vulnerability types (i.e., control-flow, data-flow, and memory data corruption). TAB provides an enhanced and comprehensive set of countermeasures that the user employs in the brace phase through high-level program annotations on vulnerable code regions. Bespoke countermeasures avoid the protection overhead for robust code and focus on vulnerable regions instead. We implemented TAB within Clang and LLVM, and evaluated it on the RISC-V architecture. TAB detected 99.82% of data-flow errors and 10.94% more control-flow errors (96.61% in total) than the state-of-the-art approach, while TAB’s generated programs run 1.15x faster. TAB provides selective memory area protection, which induced only 16% performance overhead to detect 99.99% of soft errors in memory. @InProceedings{LCTES22p73, author = {Yousun Ko and Alex Bradbury and Bernd Burgstaller and Robert Mullins}, title = {Trace-and-Brace (TAB): Bespoke Software Countermeasures against Soft Errors}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {73--85}, doi = {10.1145/3519941.3535070}, year = {2022}, } Publisher's Version |
|
Challapalle, Nagadastagiri |
LCTES '22: "ISKEVA: In-SSD Key-Value Database ..."
ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications
Yi Zheng, Joshua Fixelle, Nagadastagiri Challapalle, Pingyi Huo, Zhaoyan Shen, Zili Shao, Mircea Stan, and Vijaykrishnan Narayanan (Pennsylvania State University, USA; University of Virginia, USA; Shandong University, China; Chinese University of Hong Kong, China) Key-value databases are widely used to store the features and metadata generated by neural-network-based video processing platforms. Due to the large volumes of video data, these databases use solid state drives (SSDs) as the primary data storage platform, and user query-based filtering and retrieval operations incur a large volume of data movement between the SSD and the host processor. In this paper, we present an in-SSD key-value database which uses the embedded CPU core and DRAM memory on the SSD to support various queries with predicates and significantly reduce the data movement between the SSD and the host processor. We augment the SSD flash translation layer with key-value database functions and auxiliary data structures to support user queries using the embedded core and DRAM memory on the SSD. The proposed key-value store prototype on the Cosmos plus OpenSSD board reduces data movement between the host processor and the SSD by 14.57x, achieves an application-level speedup of 1.16x, and reduces energy consumption by 56% across different types of user queries. @InProceedings{LCTES22p50, author = {Yi Zheng and Joshua Fixelle and Nagadastagiri Challapalle and Pingyi Huo and Zhaoyan Shen and Zili Shao and Mircea Stan and Vijaykrishnan Narayanan}, title = {ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {50--60}, doi = {10.1145/3519941.3535068}, year = {2022}, } Publisher's Version |
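The source of the data-movement saving can be sketched with two mock query paths: one ships every record to the host for filtering, the other evaluates the predicate on the device and ships only matches. The record format and sizes below are invented, not the Cosmos OpenSSD firmware:

    # Illustrative near-data filtering: count bytes crossing the host interface.
    RECORDS = [{"id": i, "label": "car" if i % 10 == 0 else "bg"} for i in range(10000)]
    REC_BYTES = 64

    def host_side_query(pred):
        moved = len(RECORDS) * REC_BYTES            # every record crosses the bus
        return moved, [r for r in RECORDS if pred(r)]

    def in_ssd_query(pred):
        hits = [r for r in RECORDS if pred(r)]      # filtered by the embedded core
        return len(hits) * REC_BYTES, hits          # only matches cross the bus

    pred = lambda r: r["label"] == "car"
    host_moved, _ = host_side_query(pred)
    ssd_moved, _ = in_ssd_query(pred)
    print(f"data movement reduced {host_moved / ssd_moved:.1f}x")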
|
Chen, Zizhan |
LCTES '22: "An Old Friend Is Better Than ..."
An Old Friend Is Better Than Two New Ones: Dual-Screen Android
Zizhan Chen, Siqi Shang, Qihong Wu, Jin Xue, Zhaoyan Shen, and Zili Shao (Chinese University of Hong Kong, China; Shandong University, China) Dual-screen foldable Android smartphones such as the Microsoft Surface Duo are emerging. However, due to its internal design, the Android framework cannot support combined mode, in which two screens are integrated into one, without modification, and thus requires new display management. Android introduces a new dual-screen display method, the presentation class, to handle this problem; however, existing apps need to be redesigned based on the new class. In this paper, we propose Dual-Screen Android (DSA), a semantics-aware display scheme for dual-screen foldable Android smartphones. DSA is transparent to apps, thus requiring no changes to existing apps, and incurs minimal modification to the Android framework. Specifically, inside Android, DSA duplicates and maps single-screen views provided by apps to dual screens and maps input events back correspondingly, thus remaining transparent to apps. DSA also opens the door to storing other hardware state (e.g., screen brightness and screen-touch events), which can be utilized to bridge other software/hardware semantic gaps for further system optimization. To demonstrate this, we design an effective Q-learning energy optimization scheme within DSA that controls screen brightness based on predicted user behavior. We have implemented DSA based on Android 10 with real hardware and conducted a series of experiments. Experimental results show that, based on DSA and without any modifications, existing apps can directly run and fully exploit dual screens with negligible time and memory overheads, and that our energy optimization scheme effectively reduces energy consumption. We have released the open-source code of DSA for public access. @InProceedings{LCTES22p86, author = {Zizhan Chen and Siqi Shang and Qihong Wu and Jin Xue and Zhaoyan Shen and Zili Shao}, title = {An Old Friend Is Better Than Two New Ones: Dual-Screen Android}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {86--98}, doi = {10.1145/3519941.3535071}, year = {2022}, } Publisher's Version Artifacts Functional |
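A minimal tabular Q-learning loop in the spirit of the described energy scheme might look as follows; the states, actions, and reward shaping are invented for illustration and are not DSA's actual formulation:

    # Minimal Q-learning sketch: learn a brightness level per ambient-light
    # state, trading energy (high brightness) against user comfort.
    import random
    from collections import defaultdict

    ACTIONS = [0.25, 0.5, 0.75, 1.0]                 # brightness levels
    Q = defaultdict(float)
    alpha, gamma, eps = 0.1, 0.9, 0.1

    def reward(state, action):
        # Penalize energy use and discomfort (too dim for the ambient light).
        return -action - 2.0 * max(0.0, state - action)

    for episode in range(5000):
        state = random.choice([0.2, 0.5, 0.9])       # sensed ambient level
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda x: Q[(state, x)])
        r = reward(state, a)
        nxt = random.choice([0.2, 0.5, 0.9])
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])

    for s in (0.2, 0.5, 0.9):                        # learned policy per state
        print(s, max(ACTIONS, key=lambda x: Q[(s, x)]))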
|
Ding, Chen |
LCTES '22: "Cache-Coherent CLAM (WIP) ..."
Cache-Coherent CLAM (WIP)
Chen Ding, Benjamin Reber, and Dorin Patru (University of Rochester, USA; Rochester Institute of Technology, USA) Traditional caches are automatic and cannot be controlled directly by software. A recent design called CLAM manages a cache using leases and lets a program specify these leases. The lease cache is mostly controlled by software. This paper extends CLAM to support multiple cores with private caches. It presents the hardware extensions to support cache coherence for data-race free (DRF) programs. CLAM can use either inclusive or exclusive caching for shared data. Its performance can be improved by two programming techniques: cache draining and reference privatization. @InProceedings{LCTES22p111, author = {Chen Ding and Benjamin Reber and Dorin Patru}, title = {Cache-Coherent CLAM (WIP)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {111--115}, doi = {10.1145/3519941.3535073}, year = {2022}, } Publisher's Version |
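A toy sketch of lease semantics: software grants each accessed block a lease, the number of subsequent accesses for which it stays cached, and a block whose lease expires is evicted. The lease values below are arbitrary; CLAM derives them from program analysis:

    # Toy lease-cache simulator: leases are counted in accesses, not cycles.
    def simulate(trace, leases, capacity=2):
        cache, hits = {}, 0                     # block -> remaining lease
        for block in trace:
            cache = {b: l - 1 for b, l in cache.items() if l > 1}   # age leases
            if block in cache:
                hits += 1
            lease = leases.get(block, 1)
            if len(cache) < capacity or block in cache:
                cache[block] = lease            # (re)grant the block its lease
        return hits

    trace = ["A", "B", "A", "C", "A", "B"]
    print(simulate(trace, {"A": 3, "B": 2, "C": 1}))   # hit count under leases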
|
Dubach, Christophe |
LCTES '22: "Optimizing Data Reshaping ..."
Optimizing Data Reshaping Operations in Functional IRs for High-Level Synthesis
Christof Schlaak, Tzung-Han Juang, and Christophe Dubach (University of Edinburgh, UK; McGill University, Canada) FPGAs (Field Programmable Gate Arrays) have become the substrate of choice to implement accelerators. They deliver high performance with low power consumption, while offering the flexibility of being re-programmable. But they are notoriously hard to program directly using HDLs (Hardware Description Languages). Traditional HLS (High-Level Synthesis) methods address some of these issues but are far from perfect. Programmers are still required to write hardware-specific code, and existing HLS tools often produce sub-optimal designs. Modern approaches based on multi-level functional IRs (Intermediate Representations), such as Aetherling, have demonstrated the advantages of generating high-performance designs in a predictable way. A functional IR makes optimizations via rewrite rules simple to express and abstracts away the hardware details. However, as we will see in this paper, functional approaches bring their own set of challenges in producing high-performance hardware. In particular, data reshaping operations, such as transposition, introduce overheads that hurt performance or even prevent the generation of synthesizable hardware designs altogether. This paper presents an approach with rewrite rules to solve this fundamental issue and produce efficient FPGA designs from functional IRs. Using rewrites, it is possible to generate high-performance designs for matrix multiplication and 2D convolution. The paper also evaluates the performance impact of the optimizations and shows that without them, low-performance designs are produced, or even worse, it is impossible to synthesize the designs at all. @InProceedings{LCTES22p61, author = {Christof Schlaak and Tzung-Han Juang and Christophe Dubach}, title = {Optimizing Data Reshaping Operations in Functional IRs for High-Level Synthesis}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {61--72}, doi = {10.1145/3519941.3535069}, year = {2022}, } Publisher's Version Artifacts Functional Results Reproduced |
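A tiny example of the rewrite-rule style on an invented expression-tree IR: eliminating a redundant double transposition, the kind of data-reshaping overhead the paper's rules remove before hardware generation:

    # Illustrative rewrite on a toy functional IR (not the paper's IR):
    # expr: ("transpose", e) | ("map", f, e) | ("input", name)
    def rewrite(expr):
        if expr[0] == "transpose":
            inner = rewrite(expr[1])
            if inner[0] == "transpose":
                return inner[1]                   # transpose(transpose(x)) -> x
            return ("transpose", inner)
        if expr[0] == "map":
            return ("map", expr[1], rewrite(expr[2]))
        return expr

    e = ("map", "relu", ("transpose", ("transpose", ("input", "x"))))
    print(rewrite(e))                             # ('map', 'relu', ('input', 'x'))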
|
Fixelle, Joshua |
LCTES '22: "ISKEVA: In-SSD Key-Value Database ..."
ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications
Yi Zheng, Joshua Fixelle, Nagadastagiri Challapalle, Pingyi Huo, Zhaoyan Shen, Zili Shao, Mircea Stan, and Vijaykrishnan Narayanan (Pennsylvania State University, USA; University of Virginia, USA; Shandong University, China; Chinese University of Hong Kong, China) Key-value databases are widely used to store the features and metadata generated by neural-network-based video processing platforms. Due to the large volumes of video data, these databases use solid state drives (SSDs) as the primary data storage platform, and user query-based filtering and retrieval operations incur a large volume of data movement between the SSD and the host processor. In this paper, we present an in-SSD key-value database which uses the embedded CPU core and DRAM memory on the SSD to support various queries with predicates and significantly reduce the data movement between the SSD and the host processor. We augment the SSD flash translation layer with key-value database functions and auxiliary data structures to support user queries using the embedded core and DRAM memory on the SSD. The proposed key-value store prototype on the Cosmos plus OpenSSD board reduces data movement between the host processor and the SSD by 14.57x, achieves an application-level speedup of 1.16x, and reduces energy consumption by 56% across different types of user queries. @InProceedings{LCTES22p50, author = {Yi Zheng and Joshua Fixelle and Nagadastagiri Challapalle and Pingyi Huo and Zhaoyan Shen and Zili Shao and Mircea Stan and Vijaykrishnan Narayanan}, title = {ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {50--60}, doi = {10.1145/3519941.3535068}, year = {2022}, } Publisher's Version |
|
Fukuhara, Junji |
LCTES '22: "Automated Kernel Fusion for ..."
Automated Kernel Fusion for GPU Based on Code Motion
Junji Fukuhara and Munehiro Takimoto (Tokyo University of Science, Japan) Applications implemented for GPUs are important in various fields. A GPU has many parallel computing cores and high arithmetic throughput, enabling GPU applications to run efficiently. However, the throughput of GPU memory is low, and global memory is the costliest to access. The input and output of GPU kernels must be stored in global memory, which can be a bottleneck for some applications. Kernel fusion-based methods can effectively suppress access to global memory when kernels share some data. These methods combine two or more kernels into one, improving performance by reducing expensive data communication with global memory. However, traditional kernel fusion-based methods miss many fusion opportunities because they focus only on data dependency between candidate kernels and do not consider their control flows. This paper proposes a novel kernel fusion-based method, called kernel fusion based on code motion (KFCM). KFCM exposes the fusibility of kernels by moving the kernels along control flows and then combines them. In the process of exposing fusibility, KFCM may duplicate some kernels. However, KFCM does not increase the number of kernels executed on each execution path. Thus, KFCM increases the opportunity of combining kernels without negative gain. The experimental results show that KFCM achieves 1.60x and 1.35x speedup compared with Clang's O3 option and a traditional method, respectively, in the best case. @InProceedings{LCTES22p151, author = {Junji Fukuhara and Munehiro Takimoto}, title = {Automated Kernel Fusion for GPU Based on Code Motion}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {151--161}, doi = {10.1145/3519941.3535078}, year = {2022}, } Publisher's Version |
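KFCM's key move can be caricatured on an invented CFG representation: a kernel that appears at the head of both branch arms can be hoisted above the branch, making it adjacent to a preceding kernel so the pair can be combined into one launch. The sketch below is illustrative, not the paper's algorithm:

    # Illustrative code motion + fusion on a toy CFG of kernel launches.
    def hoist_common_kernel(cfg):
        # cfg: {"entry": [...kernels...], "then": [...], "else": [...]}
        if cfg["then"] and cfg["else"] and cfg["then"][0] == cfg["else"][0]:
            k = cfg["then"].pop(0)
            cfg["else"].pop(0)
            cfg["entry"].append(k)           # move the kernel along both paths
        return cfg

    def fuse_adjacent(kernels):
        out = []
        for k in kernels:                    # combine a block's launches into one
            if out:
                out[-1] = out[-1] + "+" + k
            else:
                out.append(k)
        return out

    cfg = {"entry": ["A"], "then": ["B", "C"], "else": ["B", "D"]}
    cfg = hoist_common_kernel(cfg)
    cfg["entry"] = fuse_adjacent(cfg["entry"])
    print(cfg)   # {'entry': ['A+B'], 'then': ['C'], 'else': ['D']}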
|
Ge, Tianao |
LCTES '22: "RollBin: Reducing Code-Size ..."
RollBin: Reducing Code-Size via Loop Rerolling at Binary Level
Tianao Ge, Zewei Mo, Kan Wu, Xianwei Zhang, and Yutong Lu (Sun Yat-sen University, China) Code size is an increasing concern on resource-constrained systems, ranging from embedded devices to cloud servers. To address the issue, lowering memory occupancy has become a priority in developing and deploying applications, and accordingly compiler-based optimizations have been proposed to reduce program footprint. However, prior art generally deals with source code or intermediate representations, and is thus of limited use in real scenarios where only binary files are provided. To fill the gap, this paper presents RollBin, a novel code-size optimization that rerolls loops at the binary level. RollBin first locates unrolled loops in binary files, and then probes to decide the unrolling factor by identifying regular memory address patterns. To reconstruct the iterations, we propose a customized data dependency analysis that tackles the challenges brought by shuffled instructions and loop-carried dependencies. Next, the recognized iterations are rolled up through instruction removal and update, which essentially reverts the normal unrolling procedure. The evaluations on standard SPEC2006/2017 and MiBench demonstrate that RollBin effectively shrinks code size by 1.7% and 2.2% on average (up to 7.8%), which outperforms the state of the art by 31% and 38%, respectively. In addition, use cases on representative real-world applications demonstrate that RollBin is applicable in practice. @InProceedings{LCTES22p99, author = {Tianao Ge and Zewei Mo and Kan Wu and Xianwei Zhang and Yutong Lu}, title = {RollBin: Reducing Code-Size via Loop Rerolling at Binary Level}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {99--110}, doi = {10.1145/3519941.3535072}, year = {2022}, } Publisher's Version |
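The unrolling-factor probe can be illustrated in a few lines: loads off the same base register whose offsets advance by a constant stride within one loop body suggest the body is a sequence of unrolled copies. The instruction encoding here is invented:

    # Illustrative unroll-factor probe from regular address patterns.
    def probe_unroll_factor(load_offsets):
        deltas = [b - a for a, b in zip(load_offsets, load_offsets[1:])]
        if deltas and all(d == deltas[0] for d in deltas):
            return len(load_offsets)     # candidate factor: one load per copy
        return 1

    # Four loads at 0(r1), 8(r1), 16(r1), 24(r1) inside one loop body:
    print(probe_unroll_factor([0, 8, 16, 24]))   # -> 4, reroll with stride 8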
|
Huo, Pingyi |
LCTES '22: "ISKEVA: In-SSD Key-Value Database ..."
ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications
Yi Zheng, Joshua Fixelle, Nagadastagiri Challapalle, Pingyi Huo, Zhaoyan Shen, Zili Shao, Mircea Stan, and Vijaykrishnan Narayanan (Pennsylvania State University, USA; University of Virginia, USA; Shandong University, China; Chinese University of Hong Kong, China) Key-value databases are widely used to store the features and metadata generated by neural-network-based video processing platforms. Due to the large volumes of video data, these databases use solid state drives (SSDs) as the primary data storage platform, and user query-based filtering and retrieval operations incur a large volume of data movement between the SSD and the host processor. In this paper, we present an in-SSD key-value database which uses the embedded CPU core and DRAM memory on the SSD to support various queries with predicates and significantly reduce the data movement between the SSD and the host processor. We augment the SSD flash translation layer with key-value database functions and auxiliary data structures to support user queries using the embedded core and DRAM memory on the SSD. The proposed key-value store prototype on the Cosmos plus OpenSSD board reduces data movement between the host processor and the SSD by 14.57x, achieves an application-level speedup of 1.16x, and reduces energy consumption by 56% across different types of user queries. @InProceedings{LCTES22p50, author = {Yi Zheng and Joshua Fixelle and Nagadastagiri Challapalle and Pingyi Huo and Zhaoyan Shen and Zili Shao and Mircea Stan and Vijaykrishnan Narayanan}, title = {ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {50--60}, doi = {10.1145/3519941.3535068}, year = {2022}, } Publisher's Version |
|
Jan, Mathieu |
LCTES '22: "A Memory Interference Analysis ..."
A Memory Interference Analysis using a Formal Timing Analyzer (WIP)
Mihail Asavoae, Oumaima Matoussi, Asmae Bouachtala, Hai-Dang Vu, and Mathieu Jan (University of Paris-Saclay, France; CEA LIST, France) Safety-critical applications require well-defined and documented timing behavior. These requirements shape the design and implementation of a timing analyzer based on a formal Instruction-Set Architecture (ISA) semantics and formal micro-architecture models. In this paper, we present the key elements of such a timing analyzer and show how to systematically combine the formal components to address timing properties such as memory interference. We also report preliminary experiments on the memory interference analysis of multi-threaded applications in a multicore context. @InProceedings{LCTES22p146, author = {Mihail Asavoae and Oumaima Matoussi and Asmae Bouachtala and Hai-Dang Vu and Mathieu Jan}, title = {A Memory Interference Analysis using a Formal Timing Analyzer (WIP)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {146--150}, doi = {10.1145/3519941.3535077}, year = {2022}, } Publisher's Version |
|
Juang, Tzung-Han |
LCTES '22: "Optimizing Data Reshaping ..."
Optimizing Data Reshaping Operations in Functional IRs for High-Level Synthesis
Christof Schlaak, Tzung-Han Juang, and Christophe Dubach (University of Edinburgh, UK; McGill University, Canada) FPGAs (Field Programmable Gate Arrays) have become the substrate of choice to implement accelerators. They deliver high performance with low power consumption, while offering the flexibility of being re-programmable. But they are notoriously hard to program directly using HDLs (Hardware Description Languages). Traditional HLS (High-Level Synthesis) methods address some of these issues but are far from perfect. Programmers are still required to write hardware-specific code, and existing HLS tools often produce sub-optimal designs. Modern approaches based on multi-level functional IRs (Intermediate Representations), such as Aetherling, have demonstrated the advantages of generating high-performance designs in a predictable way. A functional IR makes optimizations via rewrite rules simple to express and abstracts away the hardware details. However, as we will see in this paper, functional approaches bring their own set of challenges in producing high-performance hardware. In particular, data reshaping operations, such as transposition, introduce overheads that hurt performance or even prevent the generation of synthesizable hardware designs altogether. This paper presents an approach with rewrite rules to solve this fundamental issue and produce efficient FPGA designs from functional IRs. Using rewrites, it is possible to generate high-performance designs for matrix multiplication and 2D convolution. The paper also evaluates the performance impact of the optimizations and shows that without them, low-performance designs are produced, or even worse, it is impossible to synthesize the designs at all. @InProceedings{LCTES22p61, author = {Christof Schlaak and Tzung-Han Juang and Christophe Dubach}, title = {Optimizing Data Reshaping Operations in Functional IRs for High-Level Synthesis}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {61--72}, doi = {10.1145/3519941.3535069}, year = {2022}, } Publisher's Version Artifacts Functional Results Reproduced |
|
Ko, Yousun |
LCTES '22: "Trace-and-Brace (TAB): Bespoke ..."
Trace-and-Brace (TAB): Bespoke Software Countermeasures against Soft Errors
Yousun Ko, Alex Bradbury, Bernd Burgstaller, and Robert Mullins (Yonsei University, South Korea; lowRISC, UK; University of Cambridge, UK) Lower voltage levels and higher clock frequencies together with a tight hardware area budget make modern processors more susceptible to soft errors. Existing generic software countermeasures against soft errors are application-agnostic or applied uniformly to the application code, which results in insufficient coverage of errors and excessive performance and code size overhead. In this paper, we observe that the degree and the types of vulnerabilities against soft errors vary by application and operational scenario. We propose a software method, Trace-and-Brace (TAB), that diagnoses the types and locations of vulnerabilities in software and allows accurate and precise application of countermeasures. TAB consists of two phases. The trace phase performs exhaustive fault injection on program traces to diagnose vulnerable code regions and their vulnerability types (i.e., control-flow, data-flow, and memory data corruption). TAB provides an enhanced and comprehensive set of countermeasures that the user employs in the brace phase through high-level program annotations on vulnerable code regions. Bespoke countermeasures avoid the protection overhead for robust code and focus on vulnerable regions instead. We implemented TAB within Clang and LLVM, and evaluated it on the RISC-V architecture. TAB detected 99.82% of data-flow errors and 10.94% more control-flow errors (96.61% in total) than the state-of-the-art approach, while TAB’s generated programs run 1.15x faster. TAB provides selective memory area protection, which induced only 16% performance overhead to detect 99.99% of soft errors in memory. @InProceedings{LCTES22p73, author = {Yousun Ko and Alex Bradbury and Bernd Burgstaller and Robert Mullins}, title = {Trace-and-Brace (TAB): Bespoke Software Countermeasures against Soft Errors}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {73--85}, doi = {10.1145/3519941.3535070}, year = {2022}, } Publisher's Version |
|
Laufer, Evan |
LCTES '22: "Tighten Rust’s Belt: Shrinking ..."
Tighten Rust’s Belt: Shrinking Embedded Rust Binaries
Hudson Ayers, Evan Laufer, Paul Mure, Jaehyeon Park, Eduardo Rodelo, Thea Rossman, Andrey Pronin, Philip Levis, and Johnathan Van Why (Stanford University, USA; Google, USA) Rust is a promising programming language for embedded software, providing low-level primitives and performance similar to C/C++ alongside type safety, memory safety, and modern high-level language features. We find naive use of Rust leads to binaries much larger than their C equivalents. For flash-constrained embedded microcontrollers, this is prohibitive. We find four major causes of this growth: monomorphization, inefficient derivations, implicit data structures, and missing compiler optimizations. We present a set of embedded Rust programming principles which reduce Rust binary sizes. We apply these principles to an industrial Rust firmware application, reducing size by 76kB (19%), and an open source Rust OS kernel binary, reducing size by 23kB (26%). We explore compiler optimizations that could further shrink embedded Rust. @InProceedings{LCTES22p121, author = {Hudson Ayers and Evan Laufer and Paul Mure and Jaehyeon Park and Eduardo Rodelo and Thea Rossman and Andrey Pronin and Philip Levis and Johnathan Van Why}, title = {Tighten Rust’s Belt: Shrinking Embedded Rust Binaries}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {121--132}, doi = {10.1145/3519941.3535075}, year = {2022}, } Publisher's Version |
|
Lee, Kyungwoo |
LCTES '22: "Scalable Size Inliner for ..."
Scalable Size Inliner for Mobile Applications (WIP)
Kyungwoo Lee, Manman Ren, and Shane Nay (Meta, USA) Inlining is critical for both performance and size in mobile apps. When building large mobile apps, ThinLTO, a scalable link-time optimization, is imperative for achieving both optimal size and build scalability. However, inlining with ThinLTO is not tuned to reduce code size, because each module inliner works independently without modeling the size cost across modules, and functions are often not eligible for import due to private references, which appear in Objective-C or Swift for iOS. This paper extends the bitcode summary to perform a global inlining analysis that finds inline candidates for saving code size. Using this summary information, a pre-inliner eagerly inlines the candidates that are proven to shrink the size. When the inline candidates are not eligible for import, a pre-merger combines their bitcode modules to remove inline restrictions. Our work reduces the size of real-world mobile apps compared to the MinSize (-Oz) optimization level. We reduced code size by 2.8% for SocialApp and 4.0% for ChatApp. @InProceedings{LCTES22p116, author = {Kyungwoo Lee and Manman Ren and Shane Nay}, title = {Scalable Size Inliner for Mobile Applications (WIP)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {116--120}, doi = {10.1145/3519941.3535074}, year = {2022}, } Publisher's Version |
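The size trade-off behind such decisions can be written down as a simple model (the cost model and numbers are invented for illustration, not LLVM's): inlining pays off when the removed call overhead, plus the out-of-line body if it becomes dead, outweighs the duplicated callee bodies.

    # Illustrative size model for a size-driven inlining decision.
    def size_delta(callee_size, call_overhead, num_callsites, body_droppable):
        added = num_callsites * callee_size          # body duplicated per call site
        saved = num_callsites * call_overhead        # call/ret sequences removed
        if body_droppable:                           # e.g. callee becomes dead
            saved += callee_size
        return added - saved                         # negative => smaller binary

    # A tiny callee called from three sites, whose symbol can then be dropped:
    print(size_delta(callee_size=8, call_overhead=6, num_callsites=3,
                     body_droppable=True))           # -> -2, worth inlining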
|
Levis, Philip |
LCTES '22: "Tighten Rust’s Belt: Shrinking ..."
Tighten Rust’s Belt: Shrinking Embedded Rust Binaries
Hudson Ayers, Evan Laufer, Paul Mure, Jaehyeon Park, Eduardo Rodelo, Thea Rossman, Andrey Pronin, Philip Levis, and Johnathan Van Why (Stanford University, USA; Google, USA) Rust is a promising programming language for embedded software, providing low-level primitives and performance similar to C/C++ alongside type safety, memory safety, and modern high-level language features. We find naive use of Rust leads to binaries much larger than their C equivalents. For flash-constrained embedded microcontrollers, this is prohibitive. We find four major causes of this growth: monomorphization, inefficient derivations, implicit data structures, and missing compiler optimizations. We present a set of embedded Rust programming principles which reduce Rust binary sizes. We apply these principles to an industrial Rust firmware application, reducing size by 76kB (19%), and an open source Rust OS kernel binary, reducing size by 23kB (26%). We explore compiler optimizations that could further shrink embedded Rust. @InProceedings{LCTES22p121, author = {Hudson Ayers and Evan Laufer and Paul Mure and Jaehyeon Park and Eduardo Rodelo and Thea Rossman and Andrey Pronin and Philip Levis and Johnathan Van Why}, title = {Tighten Rust’s Belt: Shrinking Embedded Rust Binaries}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {121--132}, doi = {10.1145/3519941.3535075}, year = {2022}, } Publisher's Version |
|
Liu, Fengyun |
LCTES '22: "Implicit State Machines ..."
Implicit State Machines
Fengyun Liu and Aleksandar Prokopec (Oracle Labs, Switzerland) Finite-state machines (FSM) are a simple yet powerful abstraction widely used for modeling, programming and verifying real-time and reactive systems that control modern factories, power plants, transportation systems and medical equipment. However, finite-state machines are traditionally either encoded indirectly in an imperative language, such as C or Verilog, or embedded as an imperative extension of a declarative language, such as Lustre. Given the widely accepted advantage of declarative programming, can we have a declarative design of finite-state machines to facilitate the design, construction, and verification of embedded programs? By sticking to the design principle of declarativeness, we show that a novel abstraction emerges, implicit state machines, which is declarative in nature and at the same time supports recursive composition. Given its simplicity and universality, we believe it may serve as a new foundation for programming embedded systems. @InProceedings{LCTES22p13, author = {Fengyun Liu and Aleksandar Prokopec}, title = {Implicit State Machines}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {13--25}, doi = {10.1145/3519941.3535065}, year = {2022}, } Publisher's Version Artifacts Reusable Results Reproduced |
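A loose Python rendering of the flavor of the abstraction (the paper defines its own calculus, not Python): a machine is a function from an input stream to an output stream with its state kept implicit, so machines compose like ordinary functions:

    # Illustrative stream-function machines with implicit state.
    def toggle(inputs):
        state = False
        for press in inputs:
            state = state ^ press          # implicit state update
            yield state

    def edge(inputs):
        prev = False
        for x in inputs:
            yield x and not prev           # rising-edge detector
            prev = x

    # Composition is just function composition on streams:
    buttons = [True, False, True, True, False]
    print(list(toggle(edge(buttons))))     # machines nest without wiring state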
|
Lu, Yutong |
LCTES '22: "RollBin: Reducing Code-Size ..."
RollBin: Reducing Code-Size via Loop Rerolling at Binary Level
Tianao Ge, Zewei Mo, Kan Wu, Xianwei Zhang, and Yutong Lu (Sun Yat-sen University, China) Code size is an increasing concern on resource-constrained systems, ranging from embedded devices to cloud servers. To address the issue, lowering memory occupancy has become a priority in developing and deploying applications, and accordingly compiler-based optimizations have been proposed to reduce program footprint. However, prior art generally deals with source code or intermediate representations, and is thus of limited use in real scenarios where only binary files are provided. To fill the gap, this paper presents RollBin, a novel code-size optimization that rerolls loops at the binary level. RollBin first locates unrolled loops in binary files, and then probes to decide the unrolling factor by identifying regular memory address patterns. To reconstruct the iterations, we propose a customized data dependency analysis that tackles the challenges brought by shuffled instructions and loop-carried dependencies. Next, the recognized iterations are rolled up through instruction removal and update, which essentially reverts the normal unrolling procedure. The evaluations on standard SPEC2006/2017 and MiBench demonstrate that RollBin effectively shrinks code size by 1.7% and 2.2% on average (up to 7.8%), which outperforms the state of the art by 31% and 38%, respectively. In addition, use cases on representative real-world applications demonstrate that RollBin is applicable in practice. @InProceedings{LCTES22p99, author = {Tianao Ge and Zewei Mo and Kan Wu and Xianwei Zhang and Yutong Lu}, title = {RollBin: Reducing Code-Size via Loop Rerolling at Binary Level}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {99--110}, doi = {10.1145/3519941.3535072}, year = {2022}, } Publisher's Version |
|
Mandel, Louis |
LCTES '22: "JAX Based Parallel Inference ..."
JAX Based Parallel Inference for Reactive Probabilistic Programming
Guillaume Baudart, Louis Mandel, and Reyyan Tekin (DI ENS, France; ENS, France; PSL University, France; CNRS, France; Inria, France; IBM Research, USA) ProbZelus is a synchronous probabilistic language for the design of reactive probabilistic models in interaction with an environment. Reactive inference methods continuously learn distributions over the unobserved parameters of the model from statistical observations. Unfortunately, this inference problem is in general intractable. Monte Carlo inference techniques thus rely on many independent executions to compute accurate approximations. These methods are expensive but can be parallelized. We propose to use JAX to parallelize the ProbZelus reactive inference engine. JAX is a recent library that compiles Python code for execution on massively parallel architectures such as GPUs or TPUs. In this paper, we describe a new reactive inference engine implemented in JAX and the new associated JAX backend for ProbZelus. We show on existing benchmarks that our new parallel implementation outperforms the original sequential implementation for a large number of particles. @InProceedings{LCTES22p26, author = {Guillaume Baudart and Louis Mandel and Reyyan Tekin}, title = {JAX Based Parallel Inference for Reactive Probabilistic Programming}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {26--36}, doi = {10.1145/3519941.3535066}, year = {2022}, } Publisher's Version Artifacts Reusable Results Reproduced |
|
Matoussi, Oumaima |
LCTES '22: "A Memory Interference Analysis ..."
A Memory Interference Analysis using a Formal Timing Analyzer (WIP)
Mihail Asavoae, Oumaima Matoussi, Asmae Bouachtala, Hai-Dang Vu, and Mathieu Jan (University of Paris-Saclay, France; CEA LIST, France) Safety-critical applications require well-defined and documented timing behavior. These requirements shape the design and implementation of a timing analyzer based on a formal Instruction-Set Architecture (ISA) semantics and formal micro-architecture models. In this paper, we present the key elements of such a timing analyzer and show how to systematically combine the formal components to address timing properties such as memory interference. We also report preliminary experiments on the memory interference analysis of multi-threaded applications in a multicore context. @InProceedings{LCTES22p146, author = {Mihail Asavoae and Oumaima Matoussi and Asmae Bouachtala and Hai-Dang Vu and Mathieu Jan}, title = {A Memory Interference Analysis using a Formal Timing Analyzer (WIP)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {146--150}, doi = {10.1145/3519941.3535077}, year = {2022}, } Publisher's Version |
|
Mo, Zewei |
LCTES '22: "RollBin: Reducing Code-Size ..."
RollBin: Reducing Code-Size via Loop Rerolling at Binary Level
Tianao Ge, Zewei Mo, Kan Wu, Xianwei Zhang, and Yutong Lu (Sun Yat-sen University, China) Code size is an increasing concern on resource-constrained systems, ranging from embedded devices to cloud servers. To address the issue, lowering memory occupancy has become a priority in developing and deploying applications, and accordingly compiler-based optimizations have been proposed to reduce program footprint. However, prior art generally deals with source code or intermediate representations, and is thus of limited use in real scenarios where only binary files are provided. To fill the gap, this paper presents RollBin, a novel code-size optimization that rerolls loops at the binary level. RollBin first locates unrolled loops in binary files, and then probes to decide the unrolling factor by identifying regular memory address patterns. To reconstruct the iterations, we propose a customized data dependency analysis that tackles the challenges brought by shuffled instructions and loop-carried dependencies. Next, the recognized iterations are rolled up through instruction removal and update, which essentially reverts the normal unrolling procedure. The evaluations on standard SPEC2006/2017 and MiBench demonstrate that RollBin effectively shrinks code size by 1.7% and 2.2% on average (up to 7.8%), which outperforms the state of the art by 31% and 38%, respectively. In addition, use cases on representative real-world applications demonstrate that RollBin is applicable in practice. @InProceedings{LCTES22p99, author = {Tianao Ge and Zewei Mo and Kan Wu and Xianwei Zhang and Yutong Lu}, title = {RollBin: Reducing Code-Size via Loop Rerolling at Binary Level}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {99--110}, doi = {10.1145/3519941.3535072}, year = {2022}, } Publisher's Version |
|
Mullins, Robert |
LCTES '22: "Trace-and-Brace (TAB): Bespoke ..."
Trace-and-Brace (TAB): Bespoke Software Countermeasures against Soft Errors
Yousun Ko, Alex Bradbury, Bernd Burgstaller, and Robert Mullins (Yonsei University, South Korea; lowRISC, UK; University of Cambridge, UK) Lower voltage levels and higher clock frequencies together with a tight hardware area budget make modern processors more susceptible to soft errors. Existing generic software countermeasures against soft errors are application-agnostic or applied uniformly to the application code, which results in insufficient coverage of errors and excessive performance and code size overhead. In this paper, we observe that the degree and the types of vulnerabilities against soft errors vary by application and operational scenario. We propose a software method, Trace-and-Brace (TAB), that diagnoses the types and locations of vulnerabilities in software and allows accurate and precise application of countermeasures. TAB consists of two phases. The trace phase performs exhaustive fault injection on program traces to diagnose vulnerable code regions and their vulnerability types (i.e., control-flow, data-flow, and memory data corruption). TAB provides an enhanced and comprehensive set of countermeasures that the user employs in the brace phase through high-level program annotations on vulnerable code regions. Bespoke countermeasures avoid the protection overhead for robust code and focus on vulnerable regions instead. We implemented TAB within Clang and LLVM, and evaluated it on the RISC-V architecture. TAB detected 99.82% of data-flow errors and 10.94% more control-flow errors (96.61% in total) than the state-of-the-art approach, while TAB’s generated programs run 1.15x faster. TAB provides selective memory area protection, which induced only 16% performance overhead to detect 99.99% of soft errors in memory. @InProceedings{LCTES22p73, author = {Yousun Ko and Alex Bradbury and Bernd Burgstaller and Robert Mullins}, title = {Trace-and-Brace (TAB): Bespoke Software Countermeasures against Soft Errors}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {73--85}, doi = {10.1145/3519941.3535070}, year = {2022}, } Publisher's Version |
|
Mure, Paul |
LCTES '22: "Tighten Rust’s Belt: Shrinking ..."
Tighten Rust’s Belt: Shrinking Embedded Rust Binaries
Hudson Ayers, Evan Laufer, Paul Mure, Jaehyeon Park, Eduardo Rodelo, Thea Rossman, Andrey Pronin, Philip Levis, and Johnathan Van Why (Stanford University, USA; Google, USA) Rust is a promising programming language for embedded software, providing low-level primitives and performance similar to C/C++ alongside type safety, memory safety, and modern high-level language features. We find naive use of Rust leads to binaries much larger than their C equivalents. For flash-constrained embedded microcontrollers, this is prohibitive. We find four major causes of this growth: monomorphization, inefficient derivations, implicit data structures, and missing compiler optimizations. We present a set of embedded Rust programming principles which reduce Rust binary sizes. We apply these principles to an industrial Rust firmware application, reducing size by 76kB (19%), and an open source Rust OS kernel binary, reducing size by 23kB (26%). We explore compiler optimizations that could further shrink embedded Rust. @InProceedings{LCTES22p121, author = {Hudson Ayers and Evan Laufer and Paul Mure and Jaehyeon Park and Eduardo Rodelo and Thea Rossman and Andrey Pronin and Philip Levis and Johnathan Van Why}, title = {Tighten Rust’s Belt: Shrinking Embedded Rust Binaries}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {121--132}, doi = {10.1145/3519941.3535075}, year = {2022}, } Publisher's Version |
|
Narayanan, Vijaykrishnan |
LCTES '22: "ISKEVA: In-SSD Key-Value Database ..."
ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications
Yi Zheng, Joshua Fixelle, Nagadastagiri Challapalle, Pingyi Huo, Zhaoyan Shen, Zili Shao, Mircea Stan, and Vijaykrishnan Narayanan (Pennsylvania State University, USA; University of Virginia, USA; Shandong University, China; Chinese University of Hong Kong, China) Key-value databases are widely used to store the features and metadata generated by neural-network-based video processing platforms. Due to the large volumes of video data, these databases use solid state drives (SSDs) as the primary data storage platform, and user query-based filtering and retrieval operations incur a large volume of data movement between the SSD and the host processor. In this paper, we present an in-SSD key-value database which uses the embedded CPU core and DRAM memory on the SSD to support various queries with predicates and significantly reduce the data movement between the SSD and the host processor. We augment the SSD flash translation layer with key-value database functions and auxiliary data structures to support user queries using the embedded core and DRAM memory on the SSD. The proposed key-value store prototype on the Cosmos plus OpenSSD board reduces data movement between the host processor and the SSD by 14.57x, achieves an application-level speedup of 1.16x, and reduces energy consumption by 56% across different types of user queries. @InProceedings{LCTES22p50, author = {Yi Zheng and Joshua Fixelle and Nagadastagiri Challapalle and Pingyi Huo and Zhaoyan Shen and Zili Shao and Mircea Stan and Vijaykrishnan Narayanan}, title = {ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {50--60}, doi = {10.1145/3519941.3535068}, year = {2022}, } Publisher's Version |
|
Nay, Shane |
LCTES '22: "Scalable Size Inliner for ..."
Scalable Size Inliner for Mobile Applications (WIP)
Kyungwoo Lee, Manman Ren, and Shane Nay (Meta, USA) Inlining is critical for both performance and size in mobile apps. When building large mobile apps, ThinLTO, a scalable link-time optimization, is imperative for achieving both optimal size and build scalability. However, inlining with ThinLTO is not tuned to reduce code size, because each module inliner works independently without modeling the size cost across modules, and functions are often not eligible for import due to private references, which appear in Objective-C or Swift for iOS. This paper extends the bitcode summary to perform a global inlining analysis that finds inline candidates for saving code size. Using this summary information, a pre-inliner eagerly inlines the candidates that are proven to shrink the size. When the inline candidates are not eligible for import, a pre-merger combines their bitcode modules to remove inline restrictions. Our work reduces the size of real-world mobile apps compared to the MinSize (-Oz) optimization level. We reduced code size by 2.8% for SocialApp and 4.0% for ChatApp. @InProceedings{LCTES22p116, author = {Kyungwoo Lee and Manman Ren and Shane Nay}, title = {Scalable Size Inliner for Mobile Applications (WIP)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {116--120}, doi = {10.1145/3519941.3535074}, year = {2022}, } Publisher's Version |
|
Park, Jaehyeon |
LCTES '22: "Tighten Rust’s Belt: Shrinking ..."
Tighten Rust’s Belt: Shrinking Embedded Rust Binaries
Hudson Ayers, Evan Laufer, Paul Mure, Jaehyeon Park, Eduardo Rodelo, Thea Rossman, Andrey Pronin, Philip Levis, and Johnathan Van Why (Stanford University, USA; Google, USA) Rust is a promising programming language for embedded software, providing low-level primitives and performance similar to C/C++ alongside type safety, memory safety, and modern high-level language features. We find naive use of Rust leads to binaries much larger than their C equivalents. For flash-constrained embedded microcontrollers, this is prohibitive. We find four major causes of this growth: monomorphization, inefficient derivations, implicit data structures, and missing compiler optimizations. We present a set of embedded Rust programming principles which reduce Rust binary sizes. We apply these principles to an industrial Rust firmware application, reducing size by 76kB (19%), and an open source Rust OS kernel binary, reducing size by 23kB (26%). We explore compiler optimizations that could further shrink embedded Rust. @InProceedings{LCTES22p121, author = {Hudson Ayers and Evan Laufer and Paul Mure and Jaehyeon Park and Eduardo Rodelo and Thea Rossman and Andrey Pronin and Philip Levis and Johnathan Van Why}, title = {Tighten Rust’s Belt: Shrinking Embedded Rust Binaries}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {121--132}, doi = {10.1145/3519941.3535075}, year = {2022}, } Publisher's Version |
|
Patru, Dorin |
LCTES '22: "Cache-Coherent CLAM (WIP) ..."
Cache-Coherent CLAM (WIP)
Chen Ding, Benjamin Reber, and Dorin Patru (University of Rochester, USA; Rochester Institute of Technology, USA) Traditional caches are automatic and cannot be controlled directly by software. A recent design called CLAM manages a cache using leases and lets a program specify these leases. The lease cache is mostly controlled by software. This paper extends CLAM to support multiple cores with private caches. It presents the hardware extensions to support cache coherence for data-race free (DRF) programs. CLAM can use either inclusive or exclusive caching for shared data. Its performance can be improved by two programming techniques: cache draining and reference privatization. @InProceedings{LCTES22p111, author = {Chen Ding and Benjamin Reber and Dorin Patru}, title = {Cache-Coherent CLAM (WIP)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {111--115}, doi = {10.1145/3519941.3535073}, year = {2022}, } Publisher's Version |
|
Pimentel, Andy D. |
LCTES '22: "TCPS: A Task and Cache-Aware ..."
TCPS: A Task and Cache-Aware Partitioned Scheduler for Hard Real-Time Multi-core Systems
Yixian Shen, Jun Xiao, and Andy D. Pimentel (University of Amsterdam, Netherlands) Shared caches in multi-core processors seriously complicate the timing verification of real-time software tasks due to the task interference occurring in the shared caches. Explicitly calculating the amount of cache interference among tasks and cache partitioning are two major approaches to enhance the schedulability performance in the context of multi-core processors with shared caches. The former approach suffers from pessimistic cache interference estimations that subsequently result in suboptimal schedulability performance, whereas the latter approach may increase the execution time of tasks due to a lower cache usage, also degrading the schedulability performance. In this paper, we propose a heuristic partitioned scheduler, called TCPS, for real-time non-preemptive multi-core systems with partitioned caches. To achieve a high degree of schedulability, TCPS combines the benefits of partitioned scheduling, relieving the computing resources from contention, and cache partitioning, mitigating cache interference, in conjunction with exploiting task characteristics. A series of comprehensive experiments were performed to evaluate the schedulability performance of TCPS and compare it against a variety of global and partitioned scheduling approaches. Our results show that TCPS outperforms all of these scheduling techniques in terms of schedulability, and yields a more effective cache usage and more stable load balancing. @InProceedings{LCTES22p37, author = {Yixian Shen and Jun Xiao and Andy D. Pimentel}, title = {TCPS: A Task and Cache-Aware Partitioned Scheduler for Hard Real-Time Multi-core Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {37--49}, doi = {10.1145/3519941.3535067}, year = {2022}, } Publisher's Version |
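A deliberately simplified sketch of task-and-cache-aware partitioning: tasks are placed first-fit-decreasing onto cores, admitting a task only if both the core's utilization budget and its remaining cache partitions suffice. The admission test here is plain utilization, whereas TCPS uses a real non-preemptive schedulability analysis:

    # Illustrative partitioner: each task claims a core plus cache partitions.
    def partition(tasks, num_cores, parts_per_core):
        # tasks: (name, utilization, cache_parts_needed)
        cores = [{"util": 0.0, "parts": parts_per_core, "tasks": []}
                 for _ in range(num_cores)]
        for name, util, parts in sorted(tasks, key=lambda t: -t[1]):
            for c in cores:                      # first fit on both resources
                if c["util"] + util <= 1.0 and c["parts"] >= parts:
                    c["util"] += util
                    c["parts"] -= parts
                    c["tasks"].append(name)
                    break
            else:
                return None                      # unschedulable by this heuristic
        return cores

    print(partition([("t1", 0.6, 2), ("t2", 0.5, 2), ("t3", 0.3, 1)], 2, 4))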
|
Prokopec, Aleksandar |
LCTES '22: "Implicit State Machines ..."
Implicit State Machines
Fengyun Liu and Aleksandar Prokopec (Oracle Labs, Switzerland) Finite-state machines (FSM) are a simple yet powerful abstraction widely used for modeling, programming, and verifying real-time and reactive systems that control modern factories, power plants, transportation systems, and medical equipment. However, finite-state machines have traditionally been either encoded indirectly in an imperative language, such as C or Verilog, or embedded as an imperative extension of a declarative language, such as Lustre. Given the widely accepted advantages of declarative programming, can we have a declarative design of finite-state machines to facilitate the design, construction, and verification of embedded programs? By sticking to the design principle of declarativeness, we show that a novel abstraction emerges, implicit state machines, which is declarative in nature and at the same time supports recursive composition. Given its simplicity and universality, we believe it may serve as a new foundation for programming embedded systems. @InProceedings{LCTES22p13, author = {Fengyun Liu and Aleksandar Prokopec}, title = {Implicit State Machines}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {13--25}, doi = {10.1145/3519941.3535065}, year = {2022}, } Publisher's Version Artifacts Reusable Results Reproduced |
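One way to picture a declarative, recursively composable state machine is as a pure step function (state, input) -> (state, output) plus a composition combinator. The Python sketch below is our own toy encoding of that idea, not the paper's formalism.

```python
# Toy declarative encoding of state machines as pure step functions
# (state, input) -> (state, output), with a combinator that composes two
# machines into one. Our own illustration of declarative, recursively
# composable FSMs; not the paper's formalism.

def counter(state, tick):
    """Counts ticks; emits True on every third tick."""
    state = (state + 1) % 3 if tick else state
    return state, state == 0 and tick

def latch(state, set_bit):
    """Stays high once set."""
    state = state or set_bit
    return state, state

def compose(m1, m2):
    """Feed m1's output into m2; the pair of states is the new state."""
    def machine(state, inp):
        s1, s2 = state
        s1, mid = m1(s1, inp)
        s2, out = m2(s2, mid)
        return (s1, s2), out
    return machine

fsm = compose(counter, latch)
state = (0, False)
for tick in [True] * 5:
    state, out = fsm(state, tick)
    print(out)   # goes high after the third tick and stays high
```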
|
Pronin, Andrey |
LCTES '22: "Tighten Rust’s Belt: Shrinking ..."
Tighten Rust’s Belt: Shrinking Embedded Rust Binaries
Hudson Ayers, Evan Laufer, Paul Mure, Jaehyeon Park, Eduardo Rodelo, Thea Rossman, Andrey Pronin, Philip Levis, and Johnathan Van Why (Stanford University, USA; Google, USA) Rust is a promising programming language for embedded software, providing low-level primitives and performance similar to C/C++ alongside type safety, memory safety, and modern high-level language features. We find naive use of Rust leads to binaries much larger than their C equivalents. For flash-constrained embedded microcontrollers, this is prohibitive. We find four major causes of this growth: monomorphization, inefficient derivations, implicit data structures, and missing compiler optimizations. We present a set of embedded Rust programming principles which reduce Rust binary sizes. We apply these principles to an industrial Rust firmware application, reducing size by 76kB (19%), and an open source Rust OS kernel binary, reducing size by 23kB (26%). We explore compiler optimizations that could further shrink embedded Rust. @InProceedings{LCTES22p121, author = {Hudson Ayers and Evan Laufer and Paul Mure and Jaehyeon Park and Eduardo Rodelo and Thea Rossman and Andrey Pronin and Philip Levis and Johnathan Van Why}, title = {Tighten Rust’s Belt: Shrinking Embedded Rust Binaries}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {121--132}, doi = {10.1145/3519941.3535075}, year = {2022}, } Publisher's Version |
|
Reber, Benjamin |
LCTES '22: "Cache-Coherent CLAM (WIP) ..."
Cache-Coherent CLAM (WIP)
Chen Ding, Benjamin Reber, and Dorin Patru (University of Rochester, USA; Rochester Institute of Technology, USA) Traditional caches are automatic and cannot be controlled directly by software. A recent design called CLAM manages a cache using leases and lets a program specify these leases. The lease cache is mostly controlled by software. This paper extends CLAM to support multiple cores with private caches. It presents the hardware extensions to support cache coherence for data-race free (DRF) programs. CLAM can use either inclusive or exclusive caching for shared data. Its performance can be improved by two programming techniques: cache draining and reference privatization. @InProceedings{LCTES22p111, author = {Chen Ding and Benjamin Reber and Dorin Patru}, title = {Cache-Coherent CLAM (WIP)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {111--115}, doi = {10.1145/3519941.3535073}, year = {2022}, } Publisher's Version |
|
Ren, Manman |
LCTES '22: "Scalable Size Inliner for ..."
Scalable Size Inliner for Mobile Applications (WIP)
Kyungwoo Lee, Manman Ren, and Shane Nay (Meta, USA) Inlining is critical for both performance and size in mobile apps. When building large mobile apps, ThinLTO, a scalable link-time optimization, is imperative for achieving both optimal size and build scalability. However, inlining with ThinLTO is not tuned to reduce code size: each module's inliner works independently, without modeling the size cost across modules, and functions are often not eligible for importing due to private references, which are common in Objective-C and Swift code for iOS. This paper extends the bitcode summary to perform a global inlining analysis that finds inline candidates for saving code size. Using this summary information, a pre-inliner eagerly inlines the candidates that are proven to shrink the size. When the inline candidates are not eligible for importing, a pre-merger combines their bitcode modules to remove the inlining restrictions. Our work improves the size of real-world mobile apps compared to the MinSize (-Oz) optimization level: we reduced code size by 2.8% for SocialApp and 4.0% for ChatApp. @InProceedings{LCTES22p116, author = {Kyungwoo Lee and Manman Ren and Shane Nay}, title = {Scalable Size Inliner for Mobile Applications (WIP)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {116--120}, doi = {10.1145/3519941.3535074}, year = {2022}, } Publisher's Version |
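The underlying size trade-off is simple to state: inlining duplicates the callee body at every call site but removes per-site call overhead and, if the callee becomes dead, its out-of-line copy. The sketch below captures that arithmetic with a hypothetical cost model; the paper's actual summary-based analysis is more involved.

```python
# Hypothetical size-cost model for size-driven inlining: inline a callee
# into all of its call sites only if the estimated total size shrinks.
# Not the paper's analysis; the numbers below are made up.

def size_delta(body_size, call_overhead, n_sites, dead_after=True):
    added = n_sites * body_size                  # duplicated bodies
    removed = n_sites * call_overhead + (body_size if dead_after else 0)
    return added - removed                       # negative: binary shrinks

# A tiny function called from two sites shrinks the binary when inlined...
print(size_delta(body_size=8, call_overhead=6, n_sites=2))    # -4
# ...while a large function called from five sites grows it.
print(size_delta(body_size=40, call_overhead=6, n_sites=5))   # 130
```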
|
Rodelo, Eduardo |
LCTES '22: "Tighten Rust’s Belt: Shrinking ..."
Tighten Rust’s Belt: Shrinking Embedded Rust Binaries
Hudson Ayers, Evan Laufer, Paul Mure, Jaehyeon Park, Eduardo Rodelo, Thea Rossman, Andrey Pronin, Philip Levis, and Johnathan Van Why (Stanford University, USA; Google, USA) Rust is a promising programming language for embedded software, providing low-level primitives and performance similar to C/C++ alongside type safety, memory safety, and modern high-level language features. We find naive use of Rust leads to binaries much larger than their C equivalents. For flash-constrained embedded microcontrollers, this is prohibitive. We find four major causes of this growth: monomorphization, inefficient derivations, implicit data structures, and missing compiler optimizations. We present a set of embedded Rust programming principles which reduce Rust binary sizes. We apply these principles to an industrial Rust firmware application, reducing size by 76kB (19%), and an open source Rust OS kernel binary, reducing size by 23kB (26%). We explore compiler optimizations that could further shrink embedded Rust. @InProceedings{LCTES22p121, author = {Hudson Ayers and Evan Laufer and Paul Mure and Jaehyeon Park and Eduardo Rodelo and Thea Rossman and Andrey Pronin and Philip Levis and Johnathan Van Why}, title = {Tighten Rust’s Belt: Shrinking Embedded Rust Binaries}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {121--132}, doi = {10.1145/3519941.3535075}, year = {2022}, } Publisher's Version |
|
Roob, Julius |
LCTES '22: "Code Generation Criteria for ..."
Code Generation Criteria for Buffered Exposed Datapath Architectures from Dataflow Graphs
Klaus Schneider, Anoop Bhagyanath, and Julius Roob (University of Kaiserslautern, Germany) Many novel processor architectures expose their processing units (PUs) and internal datapaths to the compiler. To avoid unnecessary synchronization of PUs, the datapaths are often buffered, which results in buffered exposed datapath (BED) architectures. This paper suggests a code generation technique for BED architectures from dataflow graphs that are used as intermediate program representations. Inspired by results on queue layouts in graph drawing, we determine constraints on the node and edge orderings of the dataflow graphs that ensure the first-in-first-out behavior of the buffers. Formalizing these constraints in propositional logic enables SAT solvers to compute optimal PU allocations. Moreover, future code generation techniques may develop heuristics based on the code generation criteria of this paper. @InProceedings{LCTES22p133, author = {Klaus Schneider and Anoop Bhagyanath and Julius Roob}, title = {Code Generation Criteria for Buffered Exposed Datapath Architectures from Dataflow Graphs}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {133--145}, doi = {10.1145/3519941.3535076}, year = {2022}, } Publisher's Version Info Artifacts Functional Results Reproduced |
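The FIFO criterion behind these constraints can be stated compactly: values enter a buffer in producer order and must leave in consumer order, so no two edges mapped to the same buffer may nest in the chosen node order. The brute-force Python sketch below checks that condition and finds a minimal buffer count for toy graphs; the paper instead encodes the constraints in propositional logic for a SAT solver.

```python
# Brute-force illustration of the FIFO criterion on toy dataflow graphs:
# two edges assigned to the same buffer must not nest in the node order.
# The paper encodes such constraints for a SAT solver; this exhaustive
# search only illustrates the criterion itself.
from itertools import product

def fifo_ok(edges, pos):
    ivals = sorted((pos[u], pos[v]) for u, v in edges)
    return not any(a < c and d < b                  # (c, d) nested in (a, b)
                   for i, (a, b) in enumerate(ivals)
                   for (c, d) in ivals[i + 1:])

def min_buffers(nodes, edges):
    pos = {n: i for i, n in enumerate(nodes)}       # fixed topological order
    for k in range(1, len(edges) + 1):
        for assign in product(range(k), repeat=len(edges)):
            groups = [[e for e, q in zip(edges, assign) if q == b]
                      for b in range(k)]
            if all(fifo_ok(g, pos) for g in groups):
                return k

nodes = ["n1", "n2", "n3", "n4"]
print(min_buffers(nodes, [("n1", "n4"), ("n2", "n3")]))  # nested   -> 2
print(min_buffers(nodes, [("n1", "n3"), ("n2", "n4")]))  # crossing -> 1
```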
|
Rossman, Thea |
LCTES '22: "Tighten Rust’s Belt: Shrinking ..."
Tighten Rust’s Belt: Shrinking Embedded Rust Binaries
Hudson Ayers, Evan Laufer, Paul Mure, Jaehyeon Park, Eduardo Rodelo, Thea Rossman, Andrey Pronin, Philip Levis, and Johnathan Van Why (Stanford University, USA; Google, USA) Rust is a promising programming language for embedded software, providing low-level primitives and performance similar to C/C++ alongside type safety, memory safety, and modern high-level language features. We find naive use of Rust leads to binaries much larger than their C equivalents. For flash-constrained embedded microcontrollers, this is prohibitive. We find four major causes of this growth: monomorphization, inefficient derivations, implicit data structures, and missing compiler optimizations. We present a set of embedded Rust programming principles which reduce Rust binary sizes. We apply these principles to an industrial Rust firmware application, reducing size by 76kB (19%), and an open source Rust OS kernel binary, reducing size by 23kB (26%). We explore compiler optimizations that could further shrink embedded Rust. @InProceedings{LCTES22p121, author = {Hudson Ayers and Evan Laufer and Paul Mure and Jaehyeon Park and Eduardo Rodelo and Thea Rossman and Andrey Pronin and Philip Levis and Johnathan Van Why}, title = {Tighten Rust’s Belt: Shrinking Embedded Rust Binaries}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {121--132}, doi = {10.1145/3519941.3535075}, year = {2022}, } Publisher's Version |
|
Schlaak, Christof |
LCTES '22: "Optimizing Data Reshaping ..."
Optimizing Data Reshaping Operations in Functional IRs for High-Level Synthesis
Christof Schlaak, Tzung-Han Juang, and Christophe Dubach (University of Edinburgh, UK; McGill University, Canada) FPGAs (Field Programmable Gate Arrays) have become the substrate of choice to implement accelerators. They deliver high performance with low power consumption, while offering the flexibility of being re-programmable. But they are notoriously hard to program directly using HDLs (Hardware Description Languages). Traditional HLS (High-Level Synthesis) methods address some of these issues but are far from perfect. Programmers are still required to write hardware-specific code, and existing HLS tools often produce sub-optimal designs. Modern approaches based on multi-level functional IRs (Intermediate Representations), such as Aetherling, have demonstrated the advantages of generating high-performance designs in a predictable way. A functional IR makes optimizations simple to express via rewrite rules and abstracts away the hardware details. However, as we will see in this paper, functional approaches bring their own set of challenges for producing high-performance hardware. In particular, data reshaping operations, such as transposition, introduce overheads that hurt performance or even prevent the generation of synthesizable hardware designs altogether. This paper presents an approach based on rewrite rules to solve this fundamental issue and produce efficient FPGA designs from functional IRs. Using rewrites, it is possible to generate high-performance designs for matrix multiplication and 2D convolution. The paper also evaluates the performance impact of the optimizations and shows that without them, low-performance designs are produced or, even worse, the designs cannot be synthesized at all. @InProceedings{LCTES22p61, author = {Christof Schlaak and Tzung-Han Juang and Christophe Dubach}, title = {Optimizing Data Reshaping Operations in Functional IRs for High-Level Synthesis}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {61--72}, doi = {10.1145/3519941.3535069}, year = {2022}, } Publisher's Version Artifacts Functional Results Reproduced |
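The flavor of rewrite-driven optimization is easy to demonstrate on a toy expression tree: local rules cancel a double transpose and fuse nested maps. The Python sketch below is our own miniature IR, not the IR or rule set used in the paper.

```python
# Flavor of rewrite-driven optimization on a functional IR: expressions are
# trees, and local rules rewrite data-reshaping patterns away, e.g.
#   transpose(transpose(x)) -> x,   map(f, map(g, x)) -> map(f . g, x).
# Our own toy IR, not the IR or rule set used in the paper.

def rewrite(expr):
    if not isinstance(expr, tuple):
        return expr
    expr = tuple(rewrite(e) for e in expr)      # rewrite children first
    if expr[0] == "transpose" and isinstance(expr[1], tuple) \
            and expr[1][0] == "transpose":
        return expr[1][1]                       # double transpose cancels
    if expr[0] == "map" and isinstance(expr[2], tuple) and expr[2][0] == "map":
        f, (_, g, x) = expr[1], expr[2]
        return ("map", ("compose", f, g), x)    # fuse the two traversals
    return expr

prog = ("map", "f", ("map", "g", ("transpose", ("transpose", "input"))))
print(rewrite(prog))   # ('map', ('compose', 'f', 'g'), 'input')
```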
|
Schneider, Klaus |
LCTES '22: "Code Generation Criteria for ..."
Code Generation Criteria for Buffered Exposed Datapath Architectures from Dataflow Graphs
Klaus Schneider, Anoop Bhagyanath, and Julius Roob (University of Kaiserslautern, Germany) Many novel processor architectures expose their processing units (PUs) and internal datapaths to the compiler. To avoid an unnecessary synchronization of PUs, the datapaths are often buffered which results in buffered exposed datapath (BED) architectures. This paper suggests a code generation technique for BED architectures from dataflow graphs that are used as intermediate program representations. Inspired by results on queue layouts in graph drawing, we determine in this paper constraints for the node and edge orderings of the dataflow graphs to ensure the first-in-first-out behavior of the buffers. Formalizing these constraints in propositional logic enables SAT solvers to compute optimal PU allocations. Moreover, future code generation techniques may develop heuristics based on the code generation criteria of this paper. @InProceedings{LCTES22p133, author = {Klaus Schneider and Anoop Bhagyanath and Julius Roob}, title = {Code Generation Criteria for Buffered Exposed Datapath Architectures from Dataflow Graphs}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {133--145}, doi = {10.1145/3519941.3535076}, year = {2022}, } Publisher's Version Info Artifacts Functional Results Reproduced |
|
Shang, Siqi |
LCTES '22: "An Old Friend Is Better Than ..."
An Old Friend Is Better Than Two New Ones: Dual-Screen Android
Zizhan Chen, Siqi Shang, Qihong Wu, Jin Xue, Zhaoyan Shen, and Zili Shao (Chinese University of Hong Kong, China; Shandong University, China) Dual-screen foldable Android smartphones such as the Microsoft Surface Duo are emerging. However, due to its internal design, the Android framework cannot support a combined mode, in which the two screens are integrated into one, without modification, and thus requires new display management. Android introduces a new dual-screen display method, called the presentation class, to handle this problem; however, existing apps need to be redesigned around the new class. In this paper, we propose Dual-Screen Android (DSA), a semantics-aware display scheme for dual-screen foldable Android smartphones. DSA is transparent to apps, thus requiring no changes to existing apps, and incurs minimal modification to the Android framework. Specifically, inside Android, DSA duplicates and maps the single-screen views provided by apps to the dual screens and maps input events back correspondingly, thus remaining transparent to apps. DSA also opens the door to storing other hardware states (e.g., screen brightness and screen-touch events), which can be utilized to bridge other software/hardware semantic gaps for further system optimization. To demonstrate this, we design an effective Q-learning energy optimization scheme within DSA that controls screen brightness based on predicted user behavior. We have implemented DSA on Android 10 with real hardware and conducted a series of experiments. Experimental results show that, with DSA and without any modifications, existing apps can directly run on and fully exploit dual screens with negligible time and memory overheads, and our energy optimization scheme effectively reduces energy consumption. We have released the open-source code of DSA for public access. @InProceedings{LCTES22p86, author = {Zizhan Chen and Siqi Shang and Qihong Wu and Jin Xue and Zhaoyan Shen and Zili Shao}, title = {An Old Friend Is Better Than Two New Ones: Dual-Screen Android}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {86--98}, doi = {10.1145/3519941.3535071}, year = {2022}, } Publisher's Version Artifacts Functional |
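For a sense of what such a Q-learning brightness controller might look like, here is a tabular sketch in Python; the states, actions, and reward shaping are hypothetical stand-ins, not DSA's actual design.

```python
# Sketch of a tabular Q-learning brightness controller of the kind the
# abstract mentions: states are coarse usage contexts, actions are
# brightness levels, and the reward trades energy against user annoyance.
# States, actions, and reward shaping are hypothetical, not DSA's design.
import random

STATES = ["idle", "reading", "video"]
ACTIONS = [0.2, 0.5, 1.0]                     # brightness levels
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def reward(state, brightness):
    energy_cost = brightness                  # brighter -> more energy
    override_penalty = 2.0 if (state == "video" and brightness < 1.0) else 0.0
    return -energy_cost - override_penalty    # hypothetical reward shaping

state = random.choice(STATES)
for _ in range(5000):
    a = (random.choice(ACTIONS) if random.random() < epsilon
         else max(ACTIONS, key=lambda x: Q[(state, x)]))
    r = reward(state, a)
    nxt = random.choice(STATES)               # stand-in for observed behavior
    Q[(state, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
                              - Q[(state, a)])
    state = nxt

for s in STATES:                              # learned brightness per context
    print(s, max(ACTIONS, key=lambda a: Q[(s, a)]))
```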
|
Shao, Zili |
LCTES '22: "Co-mining: A Processing-in-Memory ..."
Co-mining: A Processing-in-Memory Assisted Framework for Memory-Intensive PoW Acceleration
Tianyu Wang, Zhaoyan Shen, and Zili Shao (Chinese University of Hong Kong, China; Shandong University, China) Recently, integrated HBM (High Bandwidth Memory) and PIM (Processing in Memory) technology, such as Samsung's function-in-memory DRAM, has opened a new door for memory-intensive PoW acceleration by jointly exploiting the GPU, PIM, and HBM. In this paper, we propose Co-Mining, the first GPU/PIM framework to accelerate memory-intensive PoW by fully exploiting HBM-PIM's bandwidth and scheduling mining tasks on both the GPU and PIM in a coordinated manner. Specifically, we first design a linear programming model to intelligently guide the GPU/PIM task scheduling. An extended finite-state-machine model is designed for the GPU memory controller to switch the PIM working mode (compute/memory mode) accordingly. Finally, considering the speed difference between intra- and inter-channel memory accesses, a hybrid memory access method is proposed to minimize inter-channel data movement. We evaluate Co-Mining on Samsung's HBM2-based function-in-memory architecture. The experimental results show that it achieves up to a 38.5% hashrate improvement compared with directly integrating PIM into GPU-based PoW acceleration. @InProceedings{LCTES22p1, author = {Tianyu Wang and Zhaoyan Shen and Zili Shao}, title = {Co-mining: A Processing-in-Memory Assisted Framework for Memory-Intensive PoW Acceleration}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {1--12}, doi = {10.1145/3519941.3535064}, year = {2022}, } Publisher's Version Artifacts Functional Results Reproduced LCTES '22: "An Old Friend Is Better Than ..." An Old Friend Is Better Than Two New Ones: Dual-Screen Android Zizhan Chen, Siqi Shang, Qihong Wu, Jin Xue, Zhaoyan Shen, and Zili Shao (Chinese University of Hong Kong, China; Shandong University, China) Dual-screen foldable Android smartphones such as the Microsoft Surface Duo are emerging. However, due to its internal design, the Android framework cannot support a combined mode, in which the two screens are integrated into one, without modification, and thus requires new display management. Android introduces a new dual-screen display method, called the presentation class, to handle this problem; however, existing apps need to be redesigned around the new class. In this paper, we propose Dual-Screen Android (DSA), a semantics-aware display scheme for dual-screen foldable Android smartphones. DSA is transparent to apps, thus requiring no changes to existing apps, and incurs minimal modification to the Android framework. Specifically, inside Android, DSA duplicates and maps the single-screen views provided by apps to the dual screens and maps input events back correspondingly, thus remaining transparent to apps. DSA also opens the door to storing other hardware states (e.g., screen brightness and screen-touch events), which can be utilized to bridge other software/hardware semantic gaps for further system optimization. To demonstrate this, we design an effective Q-learning energy optimization scheme within DSA that controls screen brightness based on predicted user behavior. We have implemented DSA on Android 10 with real hardware and conducted a series of experiments. Experimental results show that, with DSA and without any modifications, existing apps can directly run on and fully exploit dual screens with negligible time and memory overheads, and our energy optimization scheme effectively reduces energy consumption. We have released the open-source code of DSA for public access. @InProceedings{LCTES22p86, author = {Zizhan Chen and Siqi Shang and Qihong Wu and Jin Xue and Zhaoyan Shen and Zili Shao}, title = {An Old Friend Is Better Than Two New Ones: Dual-Screen Android}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {86--98}, doi = {10.1145/3519941.3535071}, year = {2022}, } Publisher's Version Artifacts Functional LCTES '22: "ISKEVA: In-SSD Key-Value Database ..." ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications Yi Zheng, Joshua Fixelle, Nagadastagiri Challapalle, Pingyi Huo, Zhaoyan Shen, Zili Shao, Mircea Stan, and Vijaykrishnan Narayanan (Pennsylvania State University, USA; University of Virginia, USA; Shandong University, China; Chinese University of Hong Kong, China) Key-value databases are widely used to store the features and metadata generated by neural-network-based video processing platforms. Due to the large volumes of video data, these databases use solid-state drives (SSDs) as the primary storage platform, and user query-based filtering and retrieval operations incur a large volume of data movement between the SSD and the host processor. In this paper, we present an in-SSD key-value database that uses the embedded CPU core and DRAM on the SSD to support various queries with predicates and significantly reduce the data movement between the SSD and the host processor. We augment the SSD flash translation layer with key-value database functions and auxiliary data structures to support user queries using the embedded core and DRAM on the SSD. The proposed key-value store prototype on the Cosmos plus OpenSSD board reduces data movement between the host processor and the SSD by 14.57x, achieves an application-level speedup of 1.16x, and reduces energy consumption by 56% across different types of user queries. @InProceedings{LCTES22p50, author = {Yi Zheng and Joshua Fixelle and Nagadastagiri Challapalle and Pingyi Huo and Zhaoyan Shen and Zili Shao and Mircea Stan and Vijaykrishnan Narayanan}, title = {ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {50--60}, doi = {10.1145/3519941.3535068}, year = {2022}, } Publisher's Version |
|
Shen, Yixian |
LCTES '22: "TCPS: A Task and Cache-Aware ..."
TCPS: A Task and Cache-Aware Partitioned Scheduler for Hard Real-Time Multi-core Systems
Yixian Shen, Jun Xiao, and Andy D. Pimentel (University of Amsterdam, Netherlands) Shared caches in multi-core processors seriously complicate the timing verification of real-time software tasks due to the task interference occurring in the shared caches. Explicitly calculating the amount of cache interference among tasks and cache partitioning are two major approaches to enhance the schedulability performance in the context of multi-core processors with shared caches. The former approach suffers from pessimistic cache interference estimations that subsequently result in suboptimal schedulability performance, whereas the latter approach may increase the execution time of tasks due to lower cache usage, also degrading the schedulability performance. In this paper, we propose a heuristic partitioned scheduler, called TCPS, for real-time non-preemptive multi-core systems with partitioned caches. To achieve a high degree of schedulability, TCPS combines the benefits of partitioned scheduling, which relieves the computing resources of contention, and cache partitioning, which mitigates cache interference, in conjunction with exploiting task characteristics. A series of comprehensive experiments was performed to evaluate the schedulability performance of TCPS and compare it against a variety of global and partitioned scheduling approaches. Our results show that TCPS outperforms all of these scheduling techniques in terms of schedulability, and yields more effective cache usage and more stable load balancing. @InProceedings{LCTES22p37, author = {Yixian Shen and Jun Xiao and Andy D. Pimentel}, title = {TCPS: A Task and Cache-Aware Partitioned Scheduler for Hard Real-Time Multi-core Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {37--49}, doi = {10.1145/3519941.3535067}, year = {2022}, } Publisher's Version |
|
Shen, Zhaoyan |
LCTES '22: "Co-mining: A Processing-in-Memory ..."
Co-mining: A Processing-in-Memory Assisted Framework for Memory-Intensive PoW Acceleration
Tianyu Wang, Zhaoyan Shen, and Zili Shao (Chinese University of Hong Kong, China; Shandong University, China) Recently, integrated HBM (High Bandwidth Memory) and PIM (Processing in Memory) technology, such as Samsung's function-in-memory DRAM, has opened a new door for memory-intensive PoW acceleration by jointly exploiting the GPU, PIM, and HBM. In this paper, we propose Co-Mining, the first GPU/PIM framework to accelerate memory-intensive PoW by fully exploiting HBM-PIM's bandwidth and scheduling mining tasks on both the GPU and PIM in a coordinated manner. Specifically, we first design a linear programming model to intelligently guide the GPU/PIM task scheduling. An extended finite-state-machine model is designed for the GPU memory controller to switch the PIM working mode (compute/memory mode) accordingly. Finally, considering the speed difference between intra- and inter-channel memory accesses, a hybrid memory access method is proposed to minimize inter-channel data movement. We evaluate Co-Mining on Samsung's HBM2-based function-in-memory architecture. The experimental results show that it achieves up to a 38.5% hashrate improvement compared with directly integrating PIM into GPU-based PoW acceleration. @InProceedings{LCTES22p1, author = {Tianyu Wang and Zhaoyan Shen and Zili Shao}, title = {Co-mining: A Processing-in-Memory Assisted Framework for Memory-Intensive PoW Acceleration}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {1--12}, doi = {10.1145/3519941.3535064}, year = {2022}, } Publisher's Version Artifacts Functional Results Reproduced LCTES '22: "An Old Friend Is Better Than ..." An Old Friend Is Better Than Two New Ones: Dual-Screen Android Zizhan Chen, Siqi Shang, Qihong Wu, Jin Xue, Zhaoyan Shen, and Zili Shao (Chinese University of Hong Kong, China; Shandong University, China) Dual-screen foldable Android smartphones such as the Microsoft Surface Duo are emerging. However, due to its internal design, the Android framework cannot support a combined mode, in which the two screens are integrated into one, without modification, and thus requires new display management. Android introduces a new dual-screen display method, called the presentation class, to handle this problem; however, existing apps need to be redesigned around the new class. In this paper, we propose Dual-Screen Android (DSA), a semantics-aware display scheme for dual-screen foldable Android smartphones. DSA is transparent to apps, thus requiring no changes to existing apps, and incurs minimal modification to the Android framework. Specifically, inside Android, DSA duplicates and maps the single-screen views provided by apps to the dual screens and maps input events back correspondingly, thus remaining transparent to apps. DSA also opens the door to storing other hardware states (e.g., screen brightness and screen-touch events), which can be utilized to bridge other software/hardware semantic gaps for further system optimization. To demonstrate this, we design an effective Q-learning energy optimization scheme within DSA that controls screen brightness based on predicted user behavior. We have implemented DSA on Android 10 with real hardware and conducted a series of experiments. Experimental results show that, with DSA and without any modifications, existing apps can directly run on and fully exploit dual screens with negligible time and memory overheads, and our energy optimization scheme effectively reduces energy consumption. We have released the open-source code of DSA for public access. @InProceedings{LCTES22p86, author = {Zizhan Chen and Siqi Shang and Qihong Wu and Jin Xue and Zhaoyan Shen and Zili Shao}, title = {An Old Friend Is Better Than Two New Ones: Dual-Screen Android}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {86--98}, doi = {10.1145/3519941.3535071}, year = {2022}, } Publisher's Version Artifacts Functional LCTES '22: "ISKEVA: In-SSD Key-Value Database ..." ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications Yi Zheng, Joshua Fixelle, Nagadastagiri Challapalle, Pingyi Huo, Zhaoyan Shen, Zili Shao, Mircea Stan, and Vijaykrishnan Narayanan (Pennsylvania State University, USA; University of Virginia, USA; Shandong University, China; Chinese University of Hong Kong, China) Key-value databases are widely used to store the features and metadata generated by neural-network-based video processing platforms. Due to the large volumes of video data, these databases use solid-state drives (SSDs) as the primary storage platform, and user query-based filtering and retrieval operations incur a large volume of data movement between the SSD and the host processor. In this paper, we present an in-SSD key-value database that uses the embedded CPU core and DRAM on the SSD to support various queries with predicates and significantly reduce the data movement between the SSD and the host processor. We augment the SSD flash translation layer with key-value database functions and auxiliary data structures to support user queries using the embedded core and DRAM on the SSD. The proposed key-value store prototype on the Cosmos plus OpenSSD board reduces data movement between the host processor and the SSD by 14.57x, achieves an application-level speedup of 1.16x, and reduces energy consumption by 56% across different types of user queries. @InProceedings{LCTES22p50, author = {Yi Zheng and Joshua Fixelle and Nagadastagiri Challapalle and Pingyi Huo and Zhaoyan Shen and Zili Shao and Mircea Stan and Vijaykrishnan Narayanan}, title = {ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {50--60}, doi = {10.1145/3519941.3535068}, year = {2022}, } Publisher's Version |
|
Stan, Mircea |
LCTES '22: "ISKEVA: In-SSD Key-Value Database ..."
ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications
Yi Zheng, Joshua Fixelle, Nagadastagiri Challapalle, Pingyi Huo, Zhaoyan Shen, Zili Shao, Mircea Stan, and Vijaykrishnan Narayanan (Pennsylvania State University, USA; University of Virginia, USA; Shandong University, China; Chinese University of Hong Kong, China) Key-value databases are widely used to store the features and metadata generated by neural-network-based video processing platforms. Due to the large volumes of video data, these databases use solid-state drives (SSDs) as the primary storage platform, and user query-based filtering and retrieval operations incur a large volume of data movement between the SSD and the host processor. In this paper, we present an in-SSD key-value database that uses the embedded CPU core and DRAM on the SSD to support various queries with predicates and significantly reduce the data movement between the SSD and the host processor. We augment the SSD flash translation layer with key-value database functions and auxiliary data structures to support user queries using the embedded core and DRAM on the SSD. The proposed key-value store prototype on the Cosmos plus OpenSSD board reduces data movement between the host processor and the SSD by 14.57x, achieves an application-level speedup of 1.16x, and reduces energy consumption by 56% across different types of user queries. @InProceedings{LCTES22p50, author = {Yi Zheng and Joshua Fixelle and Nagadastagiri Challapalle and Pingyi Huo and Zhaoyan Shen and Zili Shao and Mircea Stan and Vijaykrishnan Narayanan}, title = {ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {50--60}, doi = {10.1145/3519941.3535068}, year = {2022}, } Publisher's Version |
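The payoff of in-storage query processing is easiest to see as a data-movement count: evaluating the predicate next to the data means only matching values cross the SSD-host boundary. The toy Python model below contrasts the two placements; record sizes and selectivity are made up.

```python
# Toy model of the core idea: evaluate query predicates next to the data so
# that only matching records cross the SSD-host boundary. Record sizes and
# selectivity are hypothetical; this is not the ISKEVA engine.

records = [{"key": i, "label": "car" if i % 10 == 0 else "other",
            "feature": bytes(512)} for i in range(10_000)]

def host_side(db, predicate):
    moved = sum(512 for _ in db)                 # every value ships to host
    return moved, [r for r in db if predicate(r)]

def in_ssd(db, predicate):
    matches = [r for r in db if predicate(r)]    # runs on the embedded core
    moved = 512 * len(matches)                   # only matches ship to host
    return moved, matches

pred = lambda r: r["label"] == "car"
m1, _ = host_side(records, pred)
m2, _ = in_ssd(records, pred)
print(f"host-side: {m1} bytes, in-SSD: {m2} bytes, saving {m1 / m2:.1f}x")
```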
|
Takimoto, Munehiro |
LCTES '22: "Automated Kernel Fusion for ..."
Automated Kernel Fusion for GPU Based on Code Motion
Junji Fukuhara and Munehiro Takimoto (Tokyo University of Science, Japan) GPU applications are important in various fields. GPUs have many parallel computing cores and high arithmetic throughput, enabling GPU applications to run efficiently. However, the throughput of GPU memory is low, and global memory is the costliest to access. The inputs and outputs of GPU kernels must be stored in global memory, which can be a bottleneck for some applications. Kernel fusion-based methods can effectively suppress accesses to global memory when kernels share data. These methods combine two or more kernels into one, improving performance by reducing expensive data communication with global memory. However, traditional kernel fusion-based methods miss many fusion opportunities because they focus only on data dependencies between candidate kernels and do not consider their control flows. This paper proposes a novel kernel fusion-based method, called kernel fusion based on code motion (KFCM). KFCM exposes the fusibility of kernels by moving the kernels along control flows and then combines them. In the process of exposing fusibility, KFCM may duplicate some kernels; however, it does not increase the number of kernels executed on each execution path. Thus, KFCM increases the opportunities for combining kernels without negative gain. The experimental results show that, in the best case, KFCM achieves 1.60x and 1.35x speedups compared with Clang's O3 option and a traditional method, respectively. @InProceedings{LCTES22p151, author = {Junji Fukuhara and Munehiro Takimoto}, title = {Automated Kernel Fusion for GPU Based on Code Motion}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {151--161}, doi = {10.1145/3519941.3535078}, year = {2022}, } Publisher's Version |
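The code-motion idea can be sketched in a few lines: a kernel placed after a branch join cannot fuse with a kernel inside the branch, but duplicating it into both arms exposes the fusion without adding launches on any single path. In the Python sketch below, NumPy functions stand in for GPU kernels; this illustrates the shape of the transformation, not KFCM's analysis.

```python
# Shape of the code-motion transformation: kernel B after a branch join
# cannot fuse with kernel A inside the branch; moving (duplicating) B into
# both arms exposes the A;B fusion on one arm while keeping one kernel per
# path on the other. NumPy stands in for GPU kernels; not KFCM itself.
import numpy as np

def kernel_a(x): return x * 2.0            # stand-ins for GPU kernels
def kernel_b(x): return np.sqrt(x)
def fused_ab(x): return np.sqrt(x * 2.0)   # one launch, no intermediate

def before(cond, x):
    y = kernel_a(x) if cond else x         # A runs only on one path
    return kernel_b(y)                     # B after the join blocks fusion

def after(cond, x):
    if cond:
        return fused_ab(x)    # moving B into the branch exposes A;B fusion
    return kernel_b(x)        # duplicated B: still one kernel on this path

x = np.arange(8, dtype=np.float32)
print(np.allclose(before(True, x), after(True, x)),
      np.allclose(before(False, x), after(False, x)))   # True True
```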
|
Tekin, Reyyan |
LCTES '22: "JAX Based Parallel Inference ..."
JAX Based Parallel Inference for Reactive Probabilistic Programming
Guillaume Baudart, Louis Mandel, and Reyyan Tekin (DI ENS, France; ENS, France; PSL University, France; CNRS, France; Inria, France; IBM Research, USA) ProbZelus is a synchronous probabilistic language for the design of reactive probabilistic models in interaction with an environment. Reactive inference methods continuously learn distributions over the unobserved parameters of the model from statistical observations. Unfortunately, this inference problem is in general intractable. Monte Carlo inference techniques thus rely on many independent executions to compute accurate approximations. These methods are expensive but can be parallelized. We propose to use JAX to parallelize ProbZelus reactive inference engine. JAX is a recent library to compile Python code which can then be executed on massively parallel architectures such as GPUs or TPUs. In this paper, we describe a new reactive inference engine implemented in JAX and the new associated JAX backend for ProbZelus. We show on existing benchmarks that our new parallel implementation outperforms the original sequential implementation for a high number of particles. @InProceedings{LCTES22p26, author = {Guillaume Baudart and Louis Mandel and Reyyan Tekin}, title = {JAX Based Parallel Inference for Reactive Probabilistic Programming}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {26--36}, doi = {10.1145/3519941.3535066}, year = {2022}, } Publisher's Version Artifacts Reusable Results Reproduced |
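A minimal sketch of the parallelization idea: one step of a bootstrap particle filter in which propagation and weighting are vectorized over particles with jax.vmap and compiled with jax.jit. The state-space model below is a toy of our own, not the ProbZelus engine.

```python
# One step of a bootstrap particle filter, vectorized over particles with
# jax.vmap and compiled with jax.jit so all particles advance in parallel
# on a GPU/TPU. Toy state-space model; not the ProbZelus inference engine.
import jax
import jax.numpy as jnp

N = 1024                                   # number of particles

def propagate(particle, noise):
    return 0.9 * particle + noise          # toy linear dynamics

def loglik(particle, obs):
    return -0.5 * (obs - particle) ** 2    # Gaussian observation model

@jax.jit
def step(key, particles, obs):
    k1, k2 = jax.random.split(key)
    noise = 0.1 * jax.random.normal(k1, (N,))
    particles = jax.vmap(propagate)(particles, noise)     # parallel move
    logw = jax.vmap(loglik, in_axes=(0, None))(particles, obs)
    idx = jax.random.categorical(k2, logw, shape=(N,))    # resampling
    return particles[idx]

key = jax.random.PRNGKey(0)
particles = jax.random.normal(key, (N,))
for obs in [0.5, 0.7, 0.6]:
    key, sub = jax.random.split(key)
    particles = step(sub, particles, jnp.asarray(obs))
print(float(particles.mean()))             # posterior mean estimate
```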
|
Van Why, Johnathan |
LCTES '22: "Tighten Rust’s Belt: Shrinking ..."
Tighten Rust’s Belt: Shrinking Embedded Rust Binaries
Hudson Ayers, Evan Laufer, Paul Mure, Jaehyeon Park, Eduardo Rodelo, Thea Rossman, Andrey Pronin, Philip Levis, and Johnathan Van Why (Stanford University, USA; Google, USA) Rust is a promising programming language for embedded software, providing low-level primitives and performance similar to C/C++ alongside type safety, memory safety, and modern high-level language features. We find naive use of Rust leads to binaries much larger than their C equivalents. For flash-constrained embedded microcontrollers, this is prohibitive. We find four major causes of this growth: monomorphization, inefficient derivations, implicit data structures, and missing compiler optimizations. We present a set of embedded Rust programming principles which reduce Rust binary sizes. We apply these principles to an industrial Rust firmware application, reducing size by 76kB (19%), and an open source Rust OS kernel binary, reducing size by 23kB (26%). We explore compiler optimizations that could further shrink embedded Rust. @InProceedings{LCTES22p121, author = {Hudson Ayers and Evan Laufer and Paul Mure and Jaehyeon Park and Eduardo Rodelo and Thea Rossman and Andrey Pronin and Philip Levis and Johnathan Van Why}, title = {Tighten Rust’s Belt: Shrinking Embedded Rust Binaries}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {121--132}, doi = {10.1145/3519941.3535075}, year = {2022}, } Publisher's Version |
|
Vu, Hai-Dang |
LCTES '22: "A Memory Interference Analysis ..."
A Memory Interference Analysis using a Formal Timing Analyzer (WIP)
Mihail Asavoae, Oumaima Matoussi, Asmae Bouachtala, Hai-Dang Vu, and Mathieu Jan (University of Paris-Saclay, France; CEA LIST, France) Safety-critical applications require well-defined and documented timing behavior. These requirements shape the design and implementation of a timing analyzer based on a formal Instruction-Set Architecture (ISA) semantics and formal micro-architecture models. In this paper we present the key elements of such a timing analyzer and how to systematically combine the formal components to address timing properties such as evaluating memory interferences. We also report preliminary experiments of memory interference analysis of multi-threaded applications in a multicore context. @InProceedings{LCTES22p146, author = {Mihail Asavoae and Oumaima Matoussi and Asmae Bouachtala and Hai-Dang Vu and Mathieu Jan}, title = {A Memory Interference Analysis using a Formal Timing Analyzer (WIP)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {146--150}, doi = {10.1145/3519941.3535077}, year = {2022}, } Publisher's Version |
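A deliberately crude version of the kind of bound such an analysis produces: assume every memory access of the task under analysis can be delayed by one pending request from each co-running core. The model and the numbers in the Python sketch below are hypothetical, not the paper's formal ISA/micro-architecture analysis.

```python
# Back-of-the-envelope memory interference bound: in the worst case, each
# memory access of the analyzed task is delayed by one pending request from
# every co-running core. Hypothetical model and numbers for illustration.

def wcet_bound(exec_cycles, n_accesses, mem_latency, n_cores):
    isolation = exec_cycles + n_accesses * mem_latency
    interference = n_accesses * (n_cores - 1) * mem_latency
    return isolation, isolation + interference

iso, shared = wcet_bound(exec_cycles=10_000, n_accesses=400,
                         mem_latency=30, n_cores=4)
print(iso, shared)        # 22000 cycles alone vs 58000 with 3 interferers
```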
|
Wang, Tianyu |
LCTES '22: "Co-mining: A Processing-in-Memory ..."
Co-mining: A Processing-in-Memory Assisted Framework for Memory-Intensive PoW Acceleration
Tianyu Wang, Zhaoyan Shen, and Zili Shao (Chinese University of Hong Kong, China; Shandong University, China) Recently, integrated HBM (High Bandwidth Memory) and PIM (Processing in Memory) technology, such as Samsung's function-in-memory DRAM, has opened a new door for memory-intensive PoW acceleration by jointly exploiting the GPU, PIM, and HBM. In this paper, we propose Co-Mining, the first GPU/PIM framework to accelerate memory-intensive PoW by fully exploiting HBM-PIM's bandwidth and scheduling mining tasks on both the GPU and PIM in a coordinated manner. Specifically, we first design a linear programming model to intelligently guide the GPU/PIM task scheduling. An extended finite-state-machine model is designed for the GPU memory controller to switch the PIM working mode (compute/memory mode) accordingly. Finally, considering the speed difference between intra- and inter-channel memory accesses, a hybrid memory access method is proposed to minimize inter-channel data movement. We evaluate Co-Mining on Samsung's HBM2-based function-in-memory architecture. The experimental results show that it achieves up to a 38.5% hashrate improvement compared with directly integrating PIM into GPU-based PoW acceleration. @InProceedings{LCTES22p1, author = {Tianyu Wang and Zhaoyan Shen and Zili Shao}, title = {Co-mining: A Processing-in-Memory Assisted Framework for Memory-Intensive PoW Acceleration}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {1--12}, doi = {10.1145/3519941.3535064}, year = {2022}, } Publisher's Version Artifacts Functional Results Reproduced |
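The scheduling question has a natural linear-programming shape: split mining work between the GPU and PIM to maximize total hashrate subject to the HBM bandwidth they share. The sketch below solves such a toy LP with SciPy's linprog; every coefficient is invented for illustration, and the paper's model is not reproduced here.

```python
# Toy LP with the flavor of GPU/PIM work splitting: choose rates
# x = (x_gpu, x_pim) to maximize total hashrate subject to the shared HBM
# bandwidth both engines compete for. All coefficients are made up.
from scipy.optimize import linprog

hashrate = [-1.0, -0.6]        # negated: linprog minimizes (MH/s per unit)
bandwidth = [[8.0, 3.0]]       # GB/s of HBM traffic per unit of work
budget = [100.0]               # total HBM bandwidth (GB/s)
bounds = [(0, 10), (0, 12)]    # per-engine capacity limits

res = linprog(c=hashrate, A_ub=bandwidth, b_ub=budget, bounds=bounds)
x_gpu, x_pim = res.x
print(f"GPU share {x_gpu:.2f}, PIM share {x_pim:.2f}, "
      f"hashrate {-res.fun:.2f} MH/s")   # optimum: (8, 12) -> 15.20 MH/s
```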
|
Wu, Kan |
LCTES '22: "RollBin: Reducing Code-Size ..."
RollBin: Reducing Code-Size via Loop Rerolling at Binary Level
Tianao Ge, Zewei Mo, Kan Wu, Xianwei Zhang, and Yutong Lu (Sun Yat-sen University, China) Code size is an increasing concern on resource-constrained systems, ranging from embedded devices to cloud servers. To address the issue, lowering memory occupancy has become a priority in developing and deploying applications, and accordingly compiler-based optimizations have been proposed to reduce program footprint. However, prior art generally deals with source code or intermediate representations, and is thus very limited in real scenarios where only binary files are provided. To fill this gap, this paper presents RollBin, a novel code-size optimization that rerolls loops at the binary level. RollBin first locates the unrolled loops in binary files, and then probes for the unrolling factor by identifying regular memory address patterns. To reconstruct the iterations, we propose a customized data dependency analysis that tackles the challenges posed by shuffled instructions and loop-carried dependencies. Next, the recognized iterations are rolled up through instruction removal and update, essentially reverting the normal unrolling procedure. Evaluations on standard SPEC2006/2017 and MiBench demonstrate that RollBin effectively shrinks code size by 1.7% and 2.2% on average (up to 7.8%), respectively outperforming the state of the art by 31% and 38%. In addition, case studies of representative real-world applications show that RollBin is applicable in practice. @InProceedings{LCTES22p99, author = {Tianao Ge and Zewei Mo and Kan Wu and Xianwei Zhang and Yutong Lu}, title = {RollBin: Reducing Code-Size via Loop Rerolling at Binary Level}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {99--110}, doi = {10.1145/3519941.3535072}, year = {2022}, } Publisher's Version |
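The address-pattern probe can be illustrated in miniature: a run of loads whose displacements step by a constant stride suggests an unrolled body, and the run length gives a candidate unroll factor. The Python sketch below works on symbolic displacements, not on real binaries, and is not RollBin's analysis.

```python
# The address-pattern idea in miniature: a run of loads whose displacements
# step by a constant stride suggests an unrolled loop body, and the run
# length gives a candidate unroll factor; rerolling replaces the copies
# with one indexed copy. A toy over symbolic displacements, not RollBin.

def guess_unroll_factor(displacements):
    stride = displacements[1] - displacements[0]
    n = 1
    while n < len(displacements) and \
            displacements[n] - displacements[n - 1] == stride:
        n += 1
    return n, stride

# Four unrolled copies: load [r1+0], [r1+4], [r1+8], [r1+12]
factor, stride = guess_unroll_factor([0, 4, 8, 12])
print(f"unroll factor {factor}, stride {stride}")
print(f"rerolled: for i in range({factor}): load [r1 + {stride}*i]")
```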
|
Wu, Qihong |
LCTES '22: "An Old Friend Is Better Than ..."
An Old Friend Is Better Than Two New Ones: Dual-Screen Android
Zizhan Chen, Siqi Shang, Qihong Wu, Jin Xue, Zhaoyan Shen, and Zili Shao (Chinese University of Hong Kong, China; Shandong University, China) Dual-screen foldable Android smartphones such as the Microsoft Surface Duo are emerging. However, due to its internal design, the Android framework cannot support a combined mode, in which the two screens are integrated into one, without modification, and thus requires new display management. Android introduces a new dual-screen display method, called the presentation class, to handle this problem; however, existing apps need to be redesigned around the new class. In this paper, we propose Dual-Screen Android (DSA), a semantics-aware display scheme for dual-screen foldable Android smartphones. DSA is transparent to apps, thus requiring no changes to existing apps, and incurs minimal modification to the Android framework. Specifically, inside Android, DSA duplicates and maps the single-screen views provided by apps to the dual screens and maps input events back correspondingly, thus remaining transparent to apps. DSA also opens the door to storing other hardware states (e.g., screen brightness and screen-touch events), which can be utilized to bridge other software/hardware semantic gaps for further system optimization. To demonstrate this, we design an effective Q-learning energy optimization scheme within DSA that controls screen brightness based on predicted user behavior. We have implemented DSA on Android 10 with real hardware and conducted a series of experiments. Experimental results show that, with DSA and without any modifications, existing apps can directly run on and fully exploit dual screens with negligible time and memory overheads, and our energy optimization scheme effectively reduces energy consumption. We have released the open-source code of DSA for public access. @InProceedings{LCTES22p86, author = {Zizhan Chen and Siqi Shang and Qihong Wu and Jin Xue and Zhaoyan Shen and Zili Shao}, title = {An Old Friend Is Better Than Two New Ones: Dual-Screen Android}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {86--98}, doi = {10.1145/3519941.3535071}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Xiao, Jun |
LCTES '22: "TCPS: A Task and Cache-Aware ..."
TCPS: A Task and Cache-Aware Partitioned Scheduler for Hard Real-Time Multi-core Systems
Yixian Shen, Jun Xiao, and Andy D. Pimentel (University of Amsterdam, Netherlands) Shared caches in multi-core processors seriously complicate the timing verification of real-time software tasks due to the task interference occurring in the shared caches. Explicitly calculating the amount of cache interference among tasks and cache partitioning are two major approaches to enhance the schedulability performance in the context of multi-core processors with shared caches. The former approach suffers from pessimistic cache interference estimations that subsequently result in suboptimal schedulability performance, whereas the latter approach may increase the execution time of tasks due to lower cache usage, also degrading the schedulability performance. In this paper, we propose a heuristic partitioned scheduler, called TCPS, for real-time non-preemptive multi-core systems with partitioned caches. To achieve a high degree of schedulability, TCPS combines the benefits of partitioned scheduling, which relieves the computing resources of contention, and cache partitioning, which mitigates cache interference, in conjunction with exploiting task characteristics. A series of comprehensive experiments was performed to evaluate the schedulability performance of TCPS and compare it against a variety of global and partitioned scheduling approaches. Our results show that TCPS outperforms all of these scheduling techniques in terms of schedulability, and yields more effective cache usage and more stable load balancing. @InProceedings{LCTES22p37, author = {Yixian Shen and Jun Xiao and Andy D. Pimentel}, title = {TCPS: A Task and Cache-Aware Partitioned Scheduler for Hard Real-Time Multi-core Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {37--49}, doi = {10.1145/3519941.3535067}, year = {2022}, } Publisher's Version |
|
Xue, Jin |
LCTES '22: "An Old Friend Is Better Than ..."
An Old Friend Is Better Than Two New Ones: Dual-Screen Android
Zizhan Chen, Siqi Shang, Qihong Wu, Jin Xue, Zhaoyan Shen, and Zili Shao (Chinese University of Hong Kong, China; Shandong University, China) Dual-screen foldable Android smartphones such as the Microsoft Surface Duo are emerging. However, due to its internal design, the Android framework cannot support a combined mode, in which the two screens are integrated into one, without modification, and thus requires new display management. Android introduces a new dual-screen display method, called the presentation class, to handle this problem; however, existing apps need to be redesigned around the new class. In this paper, we propose Dual-Screen Android (DSA), a semantics-aware display scheme for dual-screen foldable Android smartphones. DSA is transparent to apps, thus requiring no changes to existing apps, and incurs minimal modification to the Android framework. Specifically, inside Android, DSA duplicates and maps the single-screen views provided by apps to the dual screens and maps input events back correspondingly, thus remaining transparent to apps. DSA also opens the door to storing other hardware states (e.g., screen brightness and screen-touch events), which can be utilized to bridge other software/hardware semantic gaps for further system optimization. To demonstrate this, we design an effective Q-learning energy optimization scheme within DSA that controls screen brightness based on predicted user behavior. We have implemented DSA on Android 10 with real hardware and conducted a series of experiments. Experimental results show that, with DSA and without any modifications, existing apps can directly run on and fully exploit dual screens with negligible time and memory overheads, and our energy optimization scheme effectively reduces energy consumption. We have released the open-source code of DSA for public access. @InProceedings{LCTES22p86, author = {Zizhan Chen and Siqi Shang and Qihong Wu and Jin Xue and Zhaoyan Shen and Zili Shao}, title = {An Old Friend Is Better Than Two New Ones: Dual-Screen Android}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {86--98}, doi = {10.1145/3519941.3535071}, year = {2022}, } Publisher's Version Artifacts Functional |
|
Zhang, Xianwei |
LCTES '22: "RollBin: Reducing Code-Size ..."
RollBin: Reducing Code-Size via Loop Rerolling at Binary Level
Tianao Ge, Zewei Mo, Kan Wu, Xianwei Zhang, and Yutong Lu (Sun Yat-sen University, China) Code size is an increasing concern on resource-constrained systems, ranging from embedded devices to cloud servers. To address the issue, lowering memory occupancy has become a priority in developing and deploying applications, and accordingly compiler-based optimizations have been proposed to reduce program footprint. However, prior art generally deals with source code or intermediate representations, and is thus very limited in real scenarios where only binary files are provided. To fill this gap, this paper presents RollBin, a novel code-size optimization that rerolls loops at the binary level. RollBin first locates the unrolled loops in binary files, and then probes for the unrolling factor by identifying regular memory address patterns. To reconstruct the iterations, we propose a customized data dependency analysis that tackles the challenges posed by shuffled instructions and loop-carried dependencies. Next, the recognized iterations are rolled up through instruction removal and update, essentially reverting the normal unrolling procedure. Evaluations on standard SPEC2006/2017 and MiBench demonstrate that RollBin effectively shrinks code size by 1.7% and 2.2% on average (up to 7.8%), respectively outperforming the state of the art by 31% and 38%. In addition, case studies of representative real-world applications show that RollBin is applicable in practice. @InProceedings{LCTES22p99, author = {Tianao Ge and Zewei Mo and Kan Wu and Xianwei Zhang and Yutong Lu}, title = {RollBin: Reducing Code-Size via Loop Rerolling at Binary Level}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {99--110}, doi = {10.1145/3519941.3535072}, year = {2022}, } Publisher's Version |
|
Zheng, Yi |
LCTES '22: "ISKEVA: In-SSD Key-Value Database ..."
ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications
Yi Zheng, Joshua Fixelle, Nagadastagiri Challapalle, Pingyi Huo, Zhaoyan Shen, Zili Shao, Mircea Stan, and Vijaykrishnan Narayanan (Pennsylvania State University, USA; University of Virginia, USA; Shandong University, China; Chinese University of Hong Kong, China) Key-value databases are widely used to store the features and metadata generated by neural-network-based video processing platforms. Due to the large volumes of video data, these databases use solid-state drives (SSDs) as the primary storage platform, and user query-based filtering and retrieval operations incur a large volume of data movement between the SSD and the host processor. In this paper, we present an in-SSD key-value database that uses the embedded CPU core and DRAM on the SSD to support various queries with predicates and significantly reduce the data movement between the SSD and the host processor. We augment the SSD flash translation layer with key-value database functions and auxiliary data structures to support user queries using the embedded core and DRAM on the SSD. The proposed key-value store prototype on the Cosmos plus OpenSSD board reduces data movement between the host processor and the SSD by 14.57x, achieves an application-level speedup of 1.16x, and reduces energy consumption by 56% across different types of user queries. @InProceedings{LCTES22p50, author = {Yi Zheng and Joshua Fixelle and Nagadastagiri Challapalle and Pingyi Huo and Zhaoyan Shen and Zili Shao and Mircea Stan and Vijaykrishnan Narayanan}, title = {ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {50--60}, doi = {10.1145/3519941.3535068}, year = {2022}, } Publisher's Version |
58 authors