Workshop LCTES 2021 – Author Index

Ahn, Jung Ho
LCTES '21: "MaPHeA: A Lightweight Memory ..."
MaPHeA: A Lightweight Memory Hierarchy-Aware Profile-Guided Heap Allocation Framework
Deok-Jae Oh, Yaebin Moon, Eojin Lee, Tae Jun Ham, Yongjun Park, Jae W. Lee, and Jung Ho Ahn (Seoul National University, South Korea; Samsung Electronics, South Korea; Hanyang University, South Korea) Hardware performance monitoring units (PMUs) are a standard feature in modern microprocessors for high-performance computing (HPC) and embedded systems, providing a rich set of microarchitectural event samplers. Recently, many profile-guided optimization (PGO) frameworks have exploited them to achieve much lower profiling overhead than conventional instrumentation-based frameworks. However, existing PGO frameworks mostly focus on optimizing the layout of binaries and do not utilize the rich information the PMU provides about data access behavior across the memory hierarchy. We therefore propose MaPHeA, a lightweight Memory hierarchy-aware Profile-guided Heap Allocation framework applicable to both HPC and embedded systems. MaPHeA improves application performance by guiding and applying the optimized allocation of dynamically allocated heap objects, with very low profiling overhead and without additional user intervention. To demonstrate the effectiveness of MaPHeA, we apply it to optimizing heap object allocation in an emerging DRAM-NVM heterogeneous memory system (HMS) and to selective huge-page utilization. In an HMS, by identifying frequently accessed heap objects and placing them in the fast DRAM region, MaPHeA improves the performance of memory-intensive graph-processing and Redis workloads by 56.0% on average over the default configuration that uses DRAM as a hardware-managed cache of slow NVM. Also, by identifying large heap objects that cause frequent TLB misses and allocating them to huge pages, MaPHeA increases the performance of the read and update operations of Redis by 10.6% over the transparent huge-page implementation of Linux. @InProceedings{LCTES21p24, author = {Deok-Jae Oh and Yaebin Moon and Eojin Lee and Tae Jun Ham and Yongjun Park and Jae W. Lee and Jung Ho Ahn}, title = {MaPHeA: A Lightweight Memory Hierarchy-Aware Profile-Guided Heap Allocation Framework}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {24--36}, doi = {10.1145/3461648.3463844}, year = {2021}, } Publisher's Version Artifacts Functional Results Reproduced
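To make the guided-allocation idea concrete, here is a minimal sketch of profile-guided heap placement; the profile values, thresholds, and names are invented for illustration and are not MaPHeA's actual interface.

```python
# Minimal sketch of profile-guided heap placement (illustrative only; the
# profile numbers, thresholds, and names are invented, not MaPHeA's API).

profile = {  # allocation site -> (accesses per KB, total size in bytes)
    "graph_edges": (120.0, 512 << 20),  # hot and large
    "log_buffer":  (0.2,   64 << 20),   # cold
}

def placement(site, hot_threshold=1.0, huge_page_bytes=2 << 20):
    """Map an allocation site to a memory tier and a page-size policy."""
    rate, size = profile[site]
    tier = "DRAM" if rate >= hot_threshold else "NVM"      # hot objects to fast DRAM
    pages = "huge" if size >= huge_page_bytes else "base"  # big objects to huge pages
    return tier, pages

for site in profile:
    print(site, placement(site))  # graph_edges -> DRAM/huge, log_buffer -> NVM/huge
```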
Alexandrov, Alexey
LCTES '21: "Break Dancing: Low Overhead, ..."
Break Dancing: Low Overhead, Architecture Neutral Software Branch Tracing
Gabriel Marin, Alexey Alexandrov, and Tipp Moseley (Google, USA) Sampling-based Feedback Directed Optimization (FDO) methods like AutoFDO and BOLT, which employ profiles collected in live production environments, are commonly used in datacenter applications to attain significant performance benefits without the toil of maintaining representative load tests. Sampled profiles rely on hardware facilities like Intel’s Last Branch Record (LBR), which are not currently available even on popular CPUs from ARM or AMD. Since not all architectures include a hardware LBR feature, we present an architecture neutral approach to collecting LBR-like data. We use sampling and limited program tracing to capture LBR-like data from optimized and unmodified application binaries. Since the implementation is in user space, we can collect arbitrarily long LBR buffers, and by varying the sampling rate, we can adjust the runtime overhead to arbitrarily low values. We target runtime overheads of <2% when the profiler is on and zero when it is off, which amortizes to a negligible fleet-wide collection cost given the size of a modern production fleet. We implemented a profiler that uses this method of software branch tracing, and we analyzed its overhead and the similarity of the data it collects to the Intel LBR hardware using the SPEC2006 benchmarks. Results demonstrate profile quality and optimization efficacy at parity with LBR-based AutoFDO, with the target profiling overhead achievable even without any advanced tuning. @InProceedings{LCTES21p122, author = {Gabriel Marin and Alexey Alexandrov and Tipp Moseley}, title = {Break Dancing: Low Overhead, Architecture Neutral Software Branch Tracing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {122--133}, doi = {10.1145/3461648.3463853}, year = {2021}, } Publisher's Version
Cai, Xuyi
LCTES '21: "Optimus: Towards Optimal Layer-Fusion ..."
Optimus: Towards Optimal Layer-Fusion on Deep Learning Processors
Xuyi Cai, Ying Wang, and Lei Zhang (Institute of Computing Technology at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China) Neural network layer fusion has been proposed to parallelize the inference of neural layers and thus significantly reduce feature-induced memory accesses. However, how to fuse the neural layers is still a challenging issue that heavily depends on both the network architecture and the specific DNN processor configuration. This work formalizes the layer fusion problem for DNN processors, proves that prior fusion solutions cannot guarantee memory-level optimality, and presents a novel neural network fusion framework, Optimus. Optimus includes an accurate memory cost model to evaluate fusion schemes and a Computing-Graph (CG) based layer fusion algorithm, which generates high-efficiency layer-fusion schemes for arbitrary network architectures on DNN processors. The proposed off-line and on-line graph-based fusion algorithms reduce off-chip memory traffic by 10.1%-72.2% and achieve 1.71x-3.94x energy efficiency over state-of-the-art baselines on DNN workloads, bringing a significant power-efficiency boost to DNN processors of different architectures and dataflows. @InProceedings{LCTES21p67, author = {Xuyi Cai and Ying Wang and Lei Zhang}, title = {Optimus: Towards Optimal Layer-Fusion on Deep Learning Processors}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {67--79}, doi = {10.1145/3461648.3463848}, year = {2021}, } Publisher's Version Artifacts Reusable Results Reproduced
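The trade-off that such a memory cost model captures can be shown with back-of-the-envelope arithmetic; the two-layer model and all numbers below are invented for illustration and are not Optimus's actual cost model.

```python
# Off-chip traffic with and without fusing two layers (illustrative only).

def traffic_unfused(in_bytes, mid_bytes, out_bytes):
    # Layer 1 reads the input and writes the intermediate feature map
    # off-chip; layer 2 reads it back and writes the output.
    return in_bytes + 2 * mid_bytes + out_bytes

def traffic_fused(in_bytes, mid_bytes, out_bytes, buffer_bytes):
    # Fusion keeps the intermediate on-chip when it fits in the local buffer.
    if mid_bytes <= buffer_bytes:
        return in_bytes + out_bytes
    return traffic_unfused(in_bytes, mid_bytes, out_bytes)

MiB = 1 << 20
print(traffic_unfused(1 * MiB, 4 * MiB, 1 * MiB) // MiB)         # 10 (MiB)
print(traffic_fused(1 * MiB, 4 * MiB, 1 * MiB, 8 * MiB) // MiB)  # 2 (MiB)
```

Whether fusing pays off thus depends on both the network (feature-map sizes) and the processor (on-chip buffer capacity), which is why a per-configuration cost model is needed.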
Chen, Weiwei
LCTES '21: "CHaNAS: Coordinated Search ..."
CHaNAS: Coordinated Search for Network Architecture and Scheduling Policy
Weiwei Chen, Ying Wang, Gangliang Lin, Chengsi Gao, Cheng Liu, and Lei Zhang (Institute of Computing Technology at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China) The efficiency of an automatically designed DNN solution for a given deep learning task on the target hardware is mainly decided by the neural network architecture and the schedule mapping strategy, two closely coupled choices that together determine how fully the underlying hardware's advantages are exploited. Prior hardware-aware Neural Architecture Search (NAS) methods mostly ignore the impact of different scheduling policies (e.g., graph-level optimization, loop transformations, parallelization, etc.) on the network candidates being evaluated in the search process. Thus, they may miss the true-optimal architecture, which can only be discovered by trying out different scheduling policies. This work proposes a NAS framework (CHaNAS) that searches for not only the network architecture but also the dedicated scheduling policy, as the optimal co-design solution on target hardware that fully exploits the advantages of the underlying hardware. We propose a block-based pre-scheduling methodology to reduce the co-design search space and enable the automatic generation of the optimal co-design, including the network architecture and the tensor programs that practice the scheduling policy. We evaluate CHaNAS on ImageNet on different hardware back-ends against the state-of-the-art hardware-aware search method MobileNet-v3. Experimental results show that the co-design solutions obtained by CHaNAS deliver up to 1.6x, 1.9x, and 1.7x performance boosts on an NVIDIA P100 GPU, an Intel Xeon 8163 CPU, and a Samsung Note 10 Mobile, respectively, over baselines of the same accuracy level. @InProceedings{LCTES21p42, author = {Weiwei Chen and Ying Wang and Gangliang Lin and Chengsi Gao and Cheng Liu and Lei Zhang}, title = {CHaNAS: Coordinated Search for Network Architecture and Scheduling Policy}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {42--53}, doi = {10.1145/3461648.3463846}, year = {2021}, } Publisher's Version
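The pitfall of evaluating candidates under a single default schedule can be seen in a toy joint search; all names, accuracies, and latencies below are invented.

```python
# Toy co-search over (architecture, schedule) pairs (illustrative only).

accuracy = {"netA": 76.1, "netB": 76.0}   # both meet the accuracy target
latency = {("netA", "tile8"): 9.0, ("netA", "tile16"): 7.5,
           ("netB", "tile8"): 6.8, ("netB", "tile16"): 8.2}

def best_fixed(schedule):
    """NAS under one default schedule: fastest network under that schedule."""
    return min((lat, arch) for (arch, s), lat in latency.items() if s == schedule)

def co_search():
    """Joint search: fastest (network, schedule) pair overall."""
    return min((lat, arch, s) for (arch, s), lat in latency.items())

print(best_fixed("tile16"))  # (7.5, 'netA') -- misses the true optimum
print(co_search())           # (6.8, 'netB', 'tile8')
```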
Chen, Zhaoyun
LCTES '21: "Automatic Mapping and Code ..."
Automatic Mapping and Code Optimization for OpenCL Kernels on FT-Matrix Architecture (WIP Paper)
Xiaolei Zhao, Mei Wen, Zhaoyun Chen, Yang Shi, and Chunyuan Zhang (National University of Defense Technology, China) FT-Matrix is a typical vector-SIMD architecture that refines the cooperation between scalar and vector units. The architecture is widely used in digital signal processing, high-performance computing, and artificial intelligence, among other fields. FT-Matrix currently adopts a C vector extension as its main programming model, improving SIMD utilization by providing an explicit vector-extension API. However, this makes it difficult to efficiently port the parallel programs (OpenCL, CUDA) that users have already adopted. This paper proposes an automatic mapping and code optimization method for OpenCL kernels on the FT-Matrix architecture. The proposed approach addresses these challenges by means of work-item coalescing, slicing and rotation, and instruction-level code optimization. Preliminary results show that our method achieves high performance and good hardware utilization for OpenCL kernels while decreasing the difficulty of programming FT-Matrix. @InProceedings{LCTES21p37, author = {Xiaolei Zhao and Mei Wen and Zhaoyun Chen and Yang Shi and Chunyuan Zhang}, title = {Automatic Mapping and Code Optimization for OpenCL Kernels on FT-Matrix Architecture (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {37--41}, doi = {10.1145/3461648.3463845}, year = {2021}, } Publisher's Version
Cheng, Albert Mo Kim
LCTES '21: "ARINC 653-Inspired Regularity-Based ..."
ARINC 653-Inspired Regularity-Based Resource Partitioning on Xen
Pavan Kumar Paluri, Guangli Dai, and Albert Mo Kim Cheng (University of Houston, USA) Cloud-native applications make up a significant share of today's world wide web, and the majority implicitly require soft real-time guarantees when hosted on servers at data centers across the globe. With the rapid development of cloud computing and virtualization techniques, many applications have been moved onto cloud and edge platforms that require efficient virtualization: a set of applications must be executed on a Virtual Machine (VM), and multiple VMs must be temporally and spatially scheduled on a set of CPUs. Designed to leverage the cloud infrastructure model, many of these cloud-native applications, such as media servers, strongly demand low data latency and high compute-resource availability, both of which must be predictable. However, state-of-the-art VM schedulers fail to satisfy these requirements simultaneously. The scheduling of cloud-native applications on VMs and the scheduling of VMs on physical resources (CPUs) collectively need to be real-time in nature, as specified by the Hierarchical Real-Time Scheduling (HiRTS) framework. Conforming to this framework, the Regularity-based Resource Partitioning (RRP) model introduces the concept of regularity to provide a near-ideal resource supply to all VMs. In this paper, we make the theoretically superior RRP model ready for prime time by implementing its associated resource partitioning algorithms for the first time on the popular open-source x86 hypervisor Xen, yielding RRP-Xen. The paper also compares and contrasts the real-time performance of RRP-Xen against contemporary Xen schedulers such as Credit and RTDS. Our contributions include: (1) a novel implementation of the RRP model on the x86 Xen hypervisor, providing a test bed for future researchers; (2) the first multi-core ARINC 653 VM scheduler prototype on Xen; and (3) numerous experiments and theoretical analyses of the real-time performance of RRP-Xen under a stringent workload environment. @InProceedings{LCTES21p134, author = {Pavan Kumar Paluri and Guangli Dai and Albert Mo Kim Cheng}, title = {ARINC 653-Inspired Regularity-Based Resource Partitioning on Xen}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {134--145}, doi = {10.1145/3461648.3463854}, year = {2021}, } Publisher's Version Info Artifacts Functional
Chimdyalwar, Bharti
LCTES '21: "Selective Path-Sensitive Interval ..."
Selective Path-Sensitive Interval Analysis (WIP Paper)
Bharti Chimdyalwar and Shrawan Kumar (TCS Research, India) The K-limited path-sensitive interval domain is an abstract domain that has been proposed for precise and scalable analysis of large software systems. The domain maintains variables’ value ranges as intervals along a configurable number K of path subsets at each program point, which implicitly captures correlations among variables. When the number of paths at a join point exceeds K, the set of paths is partitioned into K subsets arbitrarily, which loses precision needed to verify program properties. To address this problem, we propose selective merging of paths: identify and merge paths in such a way that the computed intervals help verify more properties. Our selective path-sensitive approach is based on knowledge of the variables whose values influence the verification outcomes of program properties. We evaluated our approach on industrial automotive applications as well as academic benchmarks, and we show the benefit of selective path merging over arbitrary path selection by verifying 40% more properties. @InProceedings{LCTES21p146, author = {Bharti Chimdyalwar and Shrawan Kumar}, title = {Selective Path-Sensitive Interval Analysis (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {146--150}, doi = {10.1145/3461648.3463855}, year = {2021}, } Publisher's Version
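A small sketch illustrates K-limited interval maps and why the choice of which paths to merge matters; the greedy heuristic below is an invented simplification, not the paper's algorithm.

```python
# K-limited path-sensitive intervals (illustrative sketch). Each map holds
# one path subset's intervals; at most K maps survive per program point.

K = 2

def join(a, b):
    """Convex hull of two intervals (lo, hi)."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def merge_maps(m1, m2):
    return {v: join(m1[v], m2[v]) for v in m1}

def limit_paths(maps, property_vars):
    """Selective merging: merge the pair that loses the least precision on
    the variables that decide the properties being verified."""
    while len(maps) > K:
        best = None
        for i in range(len(maps)):
            for j in range(i + 1, len(maps)):
                merged = merge_maps(maps[i], maps[j])
                loss = sum((merged[v][1] - merged[v][0]) -
                           max(maps[i][v][1] - maps[i][v][0],
                               maps[j][v][1] - maps[j][v][0])
                           for v in property_vars)
                if best is None or loss < best[0]:
                    best = (loss, i, j, merged)
        _, i, j, merged = best
        maps = [m for k, m in enumerate(maps) if k not in (i, j)] + [merged]
    return maps

# x decides an assertion; t does not. The two maps agreeing on x get merged,
# so x's precision survives (arbitrary merging could widen x to (0, 8)).
paths = [{"x": (0, 0), "t": (1, 5)},
         {"x": (0, 0), "t": (9, 9)},
         {"x": (7, 8), "t": (2, 2)}]
print(limit_paths(paths, property_vars=["x"]))
```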
Cole, Murray
LCTES '21: "HyFM: Function Merging for ..."
HyFM: Function Merging for Free
Rodrigo C. O. Rocha, Pavlos Petoumenos, Zheng Wang, Murray Cole, Kim Hazelwood, and Hugh Leather (University of Edinburgh, UK; University of Manchester, UK; University of Leeds, UK; Facebook AI Research, USA) Function merging is an important optimization for reducing code size. It merges multiple functions into a single one, eliminating duplicate code among them. The existing state-of-the-art relies on a well-known sequence alignment algorithm to identify duplicate code across whole functions. However, this algorithm is quadratic in time and space in the number of instructions, which leads to very high time overheads and prohibitive levels of memory usage even for medium-sized benchmarks; for larger programs, it becomes impractical. This is made worse by an overly eager merging approach: all selected pairs of functions are merged, and only then does this approach estimate the potential benefit from merging and decide whether to replace the original functions with the merged one. Given that most pairs are unprofitable, a significant amount of time is wasted producing merged functions that are simply thrown away. In this paper, we propose HyFM, a novel function merging technique that delivers similar levels of code size reduction for significantly lower time overhead and memory usage. Unlike the state-of-the-art, our alignment strategy works at the block level. Since basic blocks are usually much shorter than functions, even a quadratic alignment is acceptable. However, we also propose a linear algorithm for aligning blocks of the same size at a much lower cost. We extend this strategy with a multi-tier profitability analysis that bails out early from unprofitable merging attempts. By aligning individual pairs of blocks, we are able to decide their alignment’s profitability separately and before actually generating code. Experimental results on SPEC 2006 and 2017 show that HyFM needs orders of magnitude less memory, using up to 48 MB or 5.6 MB, depending on the variant used, while the state-of-the-art requires 32 GB in the worst case. HyFM also runs over 4.5× faster, while still achieving comparable code size reduction. Combined with the speedup of later compilation stages due to the reduced number of functions, HyFM contributes to a reduced end-to-end compilation time. @InProceedings{LCTES21p110, author = {Rodrigo C. O. Rocha and Pavlos Petoumenos and Zheng Wang and Murray Cole and Kim Hazelwood and Hugh Leather}, title = {HyFM: Function Merging for Free}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {110--121}, doi = {10.1145/3461648.3463852}, year = {2021}, } Publisher's Version Artifacts Functional Results Reproduced
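The two key ingredients, linear alignment of same-size blocks and an early profitability bail-out, fit in a few lines; this sketch uses plain opcode strings and an invented threshold, not the paper's LLVM implementation.

```python
# Block-level merging sketch (illustrative only).

def align_same_size(block_a, block_b):
    """Linear-time alignment for equal-length blocks: pair instructions
    position by position and count the matches."""
    return sum(1 for a, b in zip(block_a, block_b) if a == b)

def profitable(block_a, block_b, threshold=0.5):
    """Bail out early, per block pair and before generating any code."""
    if len(block_a) != len(block_b):
        return False  # a real implementation would fall back to full alignment
    return align_same_size(block_a, block_b) / len(block_a) >= threshold

f = ["load", "add", "mul", "store"]
g = ["load", "sub", "mul", "store"]
print(profitable(f, g))  # True: 3 of the 4 instructions align
```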
Dai, Guangli
LCTES '21: "ARINC 653-Inspired Regularity-Based ..."
ARINC 653-Inspired Regularity-Based Resource Partitioning on Xen
Pavan Kumar Paluri, Guangli Dai, and Albert Mo Kim Cheng. See the full entry under Cheng, Albert Mo Kim.
Dietrich, Christian
LCTES '21: "Data-Flow-Sensitive Fault-Space ..."
Data-Flow-Sensitive Fault-Space Pruning for the Injection of Transient Hardware Faults
Oskar Pusz, Christian Dietrich, and Daniel Lohmann (Leibniz Universität Hannover, Germany) In the domain of safety-critical systems, fault-injection campaigns at the ISA level have become a widespread approach to systematically assess the resilience of a system with respect to transient hardware faults. However, experimentally injecting all possible faults to achieve full fault-space coverage is infeasible in practice. Hence, pruning techniques, such as def/use pruning, are commonly applied to reduce the campaign size by grouping injections that surely provoke the same erroneous behavior. We describe data-flow pruning (DFP), a new data-flow-sensitive fault-space pruning method that extends def/use pruning by also considering the instructions’ semantics when deriving fault-equivalence sets. By tracking the information flow for each bit individually across the respective instructions and considering their fault-masking capability, DFP has to plan fewer pilot injections, as it derives larger fault-equivalence sets. Like def/use pruning, DFP is precise and complete, and it can be used as a direct replacement or alternative in existing software-based fault-injection tools. Our prototypical implementation so far considers local fault equivalence for five types of instructions. In our experimental evaluation, this already reduces the number of necessary injections by up to 18 percent compared to def/use pruning. @InProceedings{LCTES21p97, author = {Oskar Pusz and Christian Dietrich and Daniel Lohmann}, title = {Data-Flow-Sensitive Fault-Space Pruning for the Injection of Transient Hardware Faults}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {97--109}, doi = {10.1145/3461648.3463851}, year = {2021}, } Publisher's Version Artifacts Reusable Results Reproduced
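For contrast, here is the baseline def/use pruning that DFP refines, on an invented toy trace: every injection between a register's definition and its next use is fault-equivalent, so one pilot injection per interval suffices. DFP would additionally merge intervals whose faults the instructions' semantics mask (e.g., a flipped bit that is subsequently ANDed with zero).

```python
# Toy def/use pruning (illustrative only; the trace format is invented).

trace = [  # (cycle, op, register)
    (0, "def", "r1"),
    (1, "nop", None),
    (2, "nop", None),
    (3, "use", "r1"),
    (4, "def", "r1"),
    (5, "use", "r1"),
]

def defuse_intervals(trace, reg):
    """Fault-equivalence intervals [def_cycle, use_cycle) for one register."""
    intervals, start = [], None
    for cycle, op, r in trace:
        if r != reg:
            continue
        if op == "def":
            start = cycle
        elif op == "use" and start is not None:
            intervals.append((start, cycle))
            start = None
    return intervals

# Six cycles would naively need six injections per bit; two pilots suffice:
print(defuse_intervals(trace, "r1"))  # [(0, 3), (4, 5)]
```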
Feng, Dan
LCTES '21: "Better Atomic Writes by Exposing ..."
Better Atomic Writes by Exposing the Flash Out-of-Band Area to File Systems
Hongwei Qin, Dan Feng, Wei Tong, Yutong Zhao, Sheng Qiu, Fei Liu, and Shu Li (Huazhong University of Science and Technology, China; Alibaba Group, China) File systems for mobile devices usually preserve data consistency by ordering I/Os. However, maintaining I/O ordering prevents applications from fully exploiting device parallelism and thus degrades storage performance. In this paper, we propose NBStack to eliminate ordered I/Os without compromising data consistency. First, we augment the existing block interface to expose the Flash out-of-band area to file systems. Second, we build an enhanced block device prototype that supports the new interface. Third, we develop NBFS, a Linux file system that leverages the new block interface to achieve atomic writes without enforcing I/O orderings. Experimental results show that NBStack doubles the performance of F2FS while providing strong consistency and durability guarantees. If applications are willing to trade off durability, NBStack can improve performance even more aggressively. @InProceedings{LCTES21p12, author = {Hongwei Qin and Dan Feng and Wei Tong and Yutong Zhao and Sheng Qiu and Fei Liu and Shu Li}, title = {Better Atomic Writes by Exposing the Flash Out-of-Band Area to File Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {12--23}, doi = {10.1145/3461648.3463843}, year = {2021}, } Publisher's Version
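Why the out-of-band (OOB) area removes the need for ordered journal writes can be shown with a toy model: data and OOB metadata are programmed in the same page operation, so a commit record never lands without its data. Everything below (the OOB fields, the recovery rule) is invented for illustration, not NBStack's on-disk format.

```python
# Toy atomic write via the per-page OOB area (illustrative only).

flash = {}  # page_no -> (data, oob): one page program writes both together

def atomic_write(pages, txid):
    for i, (page_no, data) in enumerate(pages):
        oob = {"tx": txid, "seq": i, "commit": i == len(pages) - 1}
        flash[page_no] = (data, oob)  # no ordering between page writes needed

def recover():
    """Keep a transaction iff its commit-marked page and all earlier pages
    (seq 0..n) survived; otherwise discard its pages."""
    by_tx = {}
    for _, (_, oob) in flash.items():
        by_tx.setdefault(oob["tx"], []).append(oob)
    committed = {tx for tx, oobs in by_tx.items()
                 if any(o["commit"] for o in oobs)
                 and len(oobs) == max(o["seq"] for o in oobs) + 1}
    return {p: d for p, (d, oob) in flash.items() if oob["tx"] in committed}

atomic_write([(0, b"A"), (1, b"B")], txid=7)
print(recover())  # {0: b'A', 1: b'B'}
```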
Gao, Chengsi
LCTES '21: "CHaNAS: Coordinated Search ..."
CHaNAS: Coordinated Search for Network Architecture and Scheduling Policy
Weiwei Chen, Ying Wang, Gangliang Lin, Chengsi Gao, Cheng Liu, and Lei Zhang. See the full entry under Chen, Weiwei.
Ghanta, Sandesh
LCTES '21: "Robust I/O-Compute Concurrency ..."
Robust I/O-Compute Concurrency for Machine Learning Pipelines in Constrained Cyber-Physical Devices
Jayaraj Poroor, Akash Lal, and Sandesh Ghanta (Amrita University, India; JIFFY.ai, India; Microsoft Research, India; Amazon, India) Cyber-physical systems have numerous industrial and commercial applications. Such systems are often built using low-resource devices that gather and process data, using machine-learning (ML) models, to make intelligent decisions and provide value to users. Programming such low-resource devices with an impoverished system runtime is often challenging. This paper presents a new domain-specific language called PiCon for programming ML pipelines on low-resource devices. PiCon allows safe I/O-compute concurrency, ruling out a large class of errors, while providing a simple, sequential coding abstraction to the programmer. PiCon compiles to C code and easily interfaces with existing C/C++ code. Furthermore, the generated code does not rely on multi-threading support or dynamic memory allocation, dramatically reducing its footprint on the device. We present experience porting two real-world ML applications, demonstrating simplified programmability in addition to several safe-by-construction guarantees. @InProceedings{LCTES21p1, author = {Jayaraj Poroor and Akash Lal and Sandesh Ghanta}, title = {Robust I/O-Compute Concurrency for Machine Learning Pipelines in Constrained Cyber-Physical Devices}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {1--11}, doi = {10.1145/3461648.3463842}, year = {2021}, } Publisher's Version
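A sequential-looking but thread-free model of this kind can be approximated with a coroutine: the code reads top to bottom, yet runs as a resumable state machine driven by the I/O runtime. This Python sketch is only an analogy for the abstraction (PiCon itself generates C with no threads and no dynamic allocation); the event names are invented.

```python
# Sequential-looking ML pipeline as a resumable state machine (illustrative).

def pipeline():
    while True:
        sample = yield ("read_sensor",)     # I/O request; resumes with data
        result = sum(sample) / len(sample)  # compute step (stand-in for a model)
        yield ("send", result)              # I/O request; fire and forget

pm = pipeline()
assert next(pm) == ("read_sensor",)  # the runtime performs the read, then...
print(pm.send([1.0, 2.0, 3.0]))      # ...resumes the compute: ('send', 2.0)
```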
Ham, Tae Jun
LCTES '21: "MaPHeA: A Lightweight Memory ..."
MaPHeA: A Lightweight Memory Hierarchy-Aware Profile-Guided Heap Allocation Framework
Deok-Jae Oh, Yaebin Moon, Eojin Lee, Tae Jun Ham, Yongjun Park, Jae W. Lee, and Jung Ho Ahn. See the full entry under Ahn, Jung Ho.
Hazelwood, Kim
LCTES '21: "HyFM: Function Merging for ..."
HyFM: Function Merging for Free
Rodrigo C. O. Rocha, Pavlos Petoumenos, Zheng Wang, Murray Cole, Kim Hazelwood, and Hugh Leather. See the full entry under Cole, Murray.
Hu, Alan J.
LCTES '21: "Cache Abstraction for Data ..."
Cache Abstraction for Data Race Detection in Heterogeneous Systems with Non-coherent Accelerators
May Young, Alan J. Hu, and Guy G. F. Lemieux (University of British Columbia, Canada) Embedded systems are becoming increasingly complex and heterogeneous, featuring multiple processor cores (which might themselves be heterogeneous) as well as specialized hardware accelerators, all accessing shared memory. Many accelerators are non-coherent (i.e., do not support hardware cache coherence) because this reduces hardware complexity, cost, and power consumption, while potentially offering superior performance. The disadvantage of non-coherence, however, is that the software must explicitly synchronize between accelerators and processors, and this synchronization is notoriously error-prone. We propose an analysis technique to find data races in software for heterogeneous systems that include non-coherent accelerators. Our approach builds on classical results for data race detection, but the challenge turns out to be analyzing cache behavior rather than the behavior of the non-coherent accelerators. Accordingly, our central contribution is a novel, sound (data-race-preserving) abstraction of cache behavior. We prove the abstraction sound, and then, to demonstrate its precision, we implement it in a simple dynamic race detector for a system with a processor and a massively parallel accelerator provided by a commercial FPGA-based accelerator vendor. On eleven software examples provided by the vendor, the tool had zero false positives and detected previously unknown data races in 2 of the 11 examples. @InProceedings{LCTES21p151, author = {May Young and Alan J. Hu and Guy G. F. Lemieux}, title = {Cache Abstraction for Data Race Detection in Heterogeneous Systems with Non-coherent Accelerators}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {151--162}, doi = {10.1145/3461648.3463856}, year = {2021}, } Publisher's Version
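One pattern such a detector must catch can be modeled in a few lines: on a non-coherent system, a device write must be flushed and synchronized before the host reads the same address. The event format below is invented and far simpler than the paper's cache abstraction, which soundly models cache-line behavior.

```python
# Toy race check for one non-coherence pattern (illustrative only).

def find_races(events):
    unflushed, unsynced, races = set(), set(), []
    for who, op, addr in events:
        if who == "dev" and op == "write":
            unflushed.add(addr)
        elif who == "dev" and op == "flush" and addr in unflushed:
            unflushed.discard(addr)
            unsynced.add(addr)
        elif who == "sync":          # e.g., interrupt or doorbell
            unsynced.clear()         # flushed writes are now visible to the CPU
        elif who == "cpu" and op == "read":
            if addr in unflushed or addr in unsynced:
                races.append(addr)   # the read may observe stale data
    return races

ok  = [("dev", "write", 0x1000), ("dev", "flush", 0x1000),
       ("sync", None, None), ("cpu", "read", 0x1000)]
bad = [("dev", "write", 0x1000), ("cpu", "read", 0x1000)]
print(find_races(ok), find_races(bad))  # [] [4096]
```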
Kumar, Shrawan
LCTES '21: "Selective Path-Sensitive Interval ..."
Selective Path-Sensitive Interval Analysis (WIP Paper)
Bharti Chimdyalwar and Shrawan Kumar. See the full entry under Chimdyalwar, Bharti.
Lal, Akash
LCTES '21: "Robust I/O-Compute Concurrency ..."
Robust I/O-Compute Concurrency for Machine Learning Pipelines in Constrained Cyber-Physical Devices
Jayaraj Poroor, Akash Lal, and Sandesh Ghanta. See the full entry under Ghanta, Sandesh.
Leather, Hugh
LCTES '21: "HyFM: Function Merging for ..."
HyFM: Function Merging for Free
Rodrigo C. O. Rocha, Pavlos Petoumenos, Zheng Wang, Murray Cole, Kim Hazelwood, and Hugh Leather. See the full entry under Cole, Murray.
Lee, Eojin
LCTES '21: "MaPHeA: A Lightweight Memory ..."
MaPHeA: A Lightweight Memory Hierarchy-Aware Profile-Guided Heap Allocation Framework
Deok-Jae Oh, Yaebin Moon, Eojin Lee, Tae Jun Ham, Yongjun Park, Jae W. Lee, and Jung Ho Ahn. See the full entry under Ahn, Jung Ho.
Lee, Jae W.
LCTES '21: "MaPHeA: A Lightweight Memory ..."
MaPHeA: A Lightweight Memory Hierarchy-Aware Profile-Guided Heap Allocation Framework
Deok-Jae Oh, Yaebin Moon, Eojin Lee, Tae Jun Ham, Yongjun Park, Jae W. Lee, and Jung Ho Ahn. See the full entry under Ahn, Jung Ho.
Lemieux, Guy G. F.
LCTES '21: "Cache Abstraction for Data ..."
Cache Abstraction for Data Race Detection in Heterogeneous Systems with Non-coherent Accelerators
May Young, Alan J. Hu, and Guy G. F. Lemieux. See the full entry under Hu, Alan J.
Li, Shu
LCTES '21: "Better Atomic Writes by Exposing ..."
Better Atomic Writes by Exposing the Flash Out-of-Band Area to File Systems
Hongwei Qin, Dan Feng, Wei Tong, Yutong Zhao, Sheng Qiu, Fei Liu, and Shu Li. See the full entry under Feng, Dan.
Lin, Gangliang
LCTES '21: "CHaNAS: Coordinated Search ..."
CHaNAS: Coordinated Search for Network Architecture and Scheduling Policy
Weiwei Chen, Ying Wang, Gangliang Lin, Chengsi Gao, Cheng Liu, and Lei Zhang. See the full entry under Chen, Weiwei.
Liu, Cheng
LCTES '21: "CHaNAS: Coordinated Search ..."
CHaNAS: Coordinated Search for Network Architecture and Scheduling Policy
Weiwei Chen, Ying Wang, Gangliang Lin, Chengsi Gao, Cheng Liu, and Lei Zhang. See the full entry under Chen, Weiwei.
Liu, Fei
LCTES '21: "Better Atomic Writes by Exposing ..."
Better Atomic Writes by Exposing the Flash Out-of-Band Area to File Systems
Hongwei Qin, Dan Feng, Wei Tong, Yutong Zhao, Sheng Qiu, Fei Liu, and Shu Li. See the full entry under Feng, Dan.
Lohmann, Daniel
LCTES '21: "Data-Flow-Sensitive Fault-Space ..."
Data-Flow-Sensitive Fault-Space Pruning for the Injection of Transient Hardware Faults
Oskar Pusz, Christian Dietrich, and Daniel Lohmann. See the full entry under Dietrich, Christian.
Marin, Gabriel
LCTES '21: "Break Dancing: Low Overhead, ..."
Break Dancing: Low Overhead, Architecture Neutral Software Branch Tracing
Gabriel Marin, Alexey Alexandrov, and Tipp Moseley (Google, USA) Sampling-based Feedback Directed Optimization (FDO) methods like AutoFDO and BOLT, which employ profiles collected in live production environments, are commonly used in datacenter applications to attain significant performance benefits without the toil of maintaining representative load tests. Sampled profiles rely on hardware facilities like Intel’s Last Branch Record (LBR), which are not currently available even on popular CPUs from ARM or AMD. Since not all architectures include a hardware LBR feature, we present an architecture-neutral approach to collecting LBR-like data. We use sampling and limited program tracing to capture LBR-like data from optimized and unmodified application binaries. Since the implementation is in user space, we can collect arbitrarily long LBR buffers, and by varying the sampling rate, we can adjust the runtime overhead to arbitrarily low values. We target runtime overheads of <2% when the profiler is on and zero when it’s off. This amortizes to a negligible fleet-wide collection cost given the size of a modern production fleet. We implemented a profiler that uses this method of software branch tracing. We also analyzed its overhead and the similarity of the data it collects to the Intel LBR hardware using the SPEC2006 benchmarks. Results demonstrate profile quality and optimization efficacy at parity with LBR-based AutoFDO, with the target profiling overhead achievable even without implementing any advanced tuning. @InProceedings{LCTES21p122, author = {Gabriel Marin and Alexey Alexandrov and Tipp Moseley}, title = {Break Dancing: Low Overhead, Architecture Neutral Software Branch Tracing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {122--133}, doi = {10.1145/3461648.3463853}, year = {2021}, } Publisher's Version |
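For readers unfamiliar with LBR data, the following sketch shows the kind of record such a software tracer accumulates: a ring buffer of from/to pairs for taken branches. The layout is modelled on what hardware LBR exposes; the sampling and limited single-step tracing machinery that fills the buffer is the paper's contribution and is not reproduced here.

/* Schematic of an LBR-like software branch buffer; not the paper's code. */
#include <stdint.h>

#define LBR_DEPTH 32   /* software buffer; can be made arbitrarily deep */

struct branch_record {
    uint64_t from;     /* address of the taken branch */
    uint64_t to;       /* its target address          */
};

struct soft_lbr {
    struct branch_record rec[LBR_DEPTH];
    unsigned head;     /* next slot to overwrite */
};

/* Called by the tracer for every taken branch it observes in a sampled,
 * briefly traced window of execution. */
static inline void lbr_push(struct soft_lbr *b, uint64_t from, uint64_t to)
{
    b->rec[b->head] = (struct branch_record){ from, to };
    b->head = (b->head + 1) % LBR_DEPTH;
}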
|
Monniaux, David |
LCTES '21: "Simple, Light, Yet Formally ..."
Simple, Light, Yet Formally Verified, Global Common Subexpression Elimination and Loop-Invariant Code Motion
David Monniaux and Cyril Six (Verimag, France; Université Grenoble Alpes, France; CNRS, France; Kalray, France) We present an approach for implementing a formally certified loop-invariant code motion optimization by composing an unrolling pass with a formally certified yet efficient global common subexpression elimination. This approach is lightweight: each pass comes with a simple and independent proof of correctness. Experiments show that the approach significantly narrows the performance gap between the CompCert certified compiler and state-of-the-art optimizing compilers. Our static analysis employs an efficient yet verified hashed set structure, resulting in fast compilation. @InProceedings{LCTES21p85, author = {David Monniaux and Cyril Six}, title = {Simple, Light, Yet Formally Verified, Global Common Subexpression Elimination and Loop-Invariant Code Motion}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {85--96}, doi = {10.1145/3461648.3463850}, year = {2021}, } Publisher's Version Artifacts Reusable Results Reproduced |
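A source-level analogue of the composition may help: peeling (unrolling) the first loop iteration makes the invariant expression available on entry to the loop, after which an ordinary global CSE pass removes the recomputation, which amounts to loop-invariant code motion. This is hand-written C for illustration, not CompCert output.

/* Before: a*b is recomputed on every iteration. */
long sum_before(long a, long b, const long *v, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a * b + v[i];
    return s;
}

/* After peeling the first iteration and applying CSE: a*b is available
 * on entry to the loop body, so the loop reuses t instead of recomputing. */
long sum_after(long a, long b, const long *v, int n)
{
    long s = 0;
    if (n > 0) {
        long t = a * b;      /* computed once, in the peeled iteration */
        s += t + v[0];
        for (int i = 1; i < n; i++)
            s += t + v[i];
    }
    return s;
}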
|
Moon, Yaebin |
LCTES '21: "MaPHeA: A Lightweight Memory ..."
MaPHeA: A Lightweight Memory Hierarchy-Aware Profile-Guided Heap Allocation Framework
Deok-Jae Oh, Yaebin Moon, Eojin Lee, Tae Jun Ham, Yongjun Park, Jae W. Lee, and Jung Ho Ahn (Seoul National University, South Korea; Samsung Electronics, South Korea; Hanyang University, South Korea) Hardware performance monitoring units (PMUs) are a standard feature in modern microprocessors for high-performance computing (HPC) and embedded systems, providing a rich set of microarchitectural event samplers. Recently, many profile-guided optimization (PGO) frameworks have exploited them to achieve much lower profiling overhead than conventional instrumentation-based frameworks. However, existing PGO frameworks mostly focus on optimizing the layout of binaries and do not utilize the rich information provided by the PMU about data access behaviors over the memory hierarchy. Thus, we propose MaPHeA, a lightweight Memory hierarchy-aware Profile-guided Heap Allocation framework applicable to both HPC and embedded systems. MaPHeA improves application performance by guiding and applying the optimized allocation of dynamically allocated heap objects with very low profiling overhead and without additional user intervention. To demonstrate the effectiveness of MaPHeA, we apply it to optimizing heap object allocation in an emerging DRAM-NVM heterogeneous memory system (HMS) and to selective huge-page utilization. In an HMS, by identifying frequently accessed heap objects and placing them in the fast DRAM region, MaPHeA improves the performance of memory-intensive graph-processing and Redis workloads by 56.0% on average over the default configuration that uses DRAM as a hardware-managed cache of slow NVM. Also, by identifying large heap objects that cause frequent TLB misses and allocating them to huge pages, MaPHeA increases the performance of read and update operations of Redis by 10.6% over the transparent huge-page implementation of Linux. @InProceedings{LCTES21p24, author = {Deok-Jae Oh and Yaebin Moon and Eojin Lee and Tae Jun Ham and Yongjun Park and Jae W. Lee and Jung Ho Ahn}, title = {MaPHeA: A Lightweight Memory Hierarchy-Aware Profile-Guided Heap Allocation Framework}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {24--36}, doi = {10.1145/3461648.3463844}, year = {2021}, } Publisher's Version Artifacts Functional Results Reproduced |
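The huge-page half of such guidance can be illustrated with stock Linux APIs: once profiling flags a large, TLB-miss-heavy object, its allocation can be backed by transparent huge pages via madvise(MADV_HUGEPAGE). This is a sketch of the generic Linux mechanism, not MaPHeA's actual allocator.

/* Minimal sketch: back a profiled "hot" heap object with transparent
 * huge pages. Linux-specific; error handling kept minimal. */
#include <stddef.h>
#include <sys/mman.h>

void *alloc_hot_object(size_t size)
{
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return NULL;
    /* Ask the kernel to back this range with huge pages; a harmless
     * hint if THP is unavailable. */
    madvise(p, size, MADV_HUGEPAGE);
    return p;
}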
|
Moseley, Tipp |
LCTES '21: "Break Dancing: Low Overhead, ..."
Break Dancing: Low Overhead, Architecture Neutral Software Branch Tracing
Gabriel Marin, Alexey Alexandrov, and Tipp Moseley (Google, USA) Sampling-based Feedback Directed Optimization (FDO) methods like AutoFDO and BOLT, which employ profiles collected in live production environments, are commonly used in datacenter applications to attain significant performance benefits without the toil of maintaining representative load tests. Sampled profiles rely on hardware facilities like Intel’s Last Branch Record (LBR), which are not currently available even on popular CPUs from ARM or AMD. Since not all architectures include a hardware LBR feature, we present an architecture-neutral approach to collecting LBR-like data. We use sampling and limited program tracing to capture LBR-like data from optimized and unmodified application binaries. Since the implementation is in user space, we can collect arbitrarily long LBR buffers, and by varying the sampling rate, we can adjust the runtime overhead to arbitrarily low values. We target runtime overheads of <2% when the profiler is on and zero when it’s off. This amortizes to a negligible fleet-wide collection cost given the size of a modern production fleet. We implemented a profiler that uses this method of software branch tracing. We also analyzed its overhead and the similarity of the data it collects to the Intel LBR hardware using the SPEC2006 benchmarks. Results demonstrate profile quality and optimization efficacy at parity with LBR-based AutoFDO, with the target profiling overhead achievable even without implementing any advanced tuning. @InProceedings{LCTES21p122, author = {Gabriel Marin and Alexey Alexandrov and Tipp Moseley}, title = {Break Dancing: Low Overhead, Architecture Neutral Software Branch Tracing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {122--133}, doi = {10.1145/3461648.3463853}, year = {2021}, } Publisher's Version |
|
Nanayakkara, Suranga |
LCTES '21: "WasmAndroid: A Cross-Platform ..."
WasmAndroid: A Cross-Platform Runtime for Native Programming Languages on Android (WIP Paper)
Elliott Wen, Gerald Weber, and Suranga Nanayakkara (University of Auckland, New Zealand) Open-source hardware such as RISC-V has been gaining substantial momentum. Recently, such platforms have begun to embrace Google's Android operating system to leverage its software ecosystem. Despite the encouraging progress, a challenging issue arises: a majority of Android applications are written in native languages and need to be recompiled to target new hardware platforms. Unfortunately, this recompilation process does not scale with the explosion of new hardware platforms. To address this issue, we present WasmAndroid, a high-performance cross-platform runtime for native programming languages on Android. WasmAndroid only requires developers to compile their source code to WebAssembly, an efficient and portable bytecode format that can be executed everywhere without additional reconfiguration. WasmAndroid can also transpile existing application binaries to WebAssembly when source code is not available. WebAssembly's language model is very different from that of C/C++, and this mismatch leads to many unique implementation challenges. In this paper, we provide workable solutions and conduct a preliminary system evaluation. We show that WasmAndroid provides acceptable performance when executing native applications in a cross-platform manner. @InProceedings{LCTES21p80, author = {Elliott Wen and Gerald Weber and Suranga Nanayakkara}, title = {WasmAndroid: A Cross-Platform Runtime for Native Programming Languages on Android (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {80--84}, doi = {10.1145/3461648.3463849}, year = {2021}, } Publisher's Version |
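The portability argument rests on WebAssembly being a compact stack-machine bytecode: only the engine is architecture-specific. The toy interpreter below executes two WebAssembly-style opcodes to make that concrete; WasmAndroid's real engine is, of course, far more involved.

/* Toy stack-machine dispatch loop for two WebAssembly-style opcodes.
 * Illustration of portable bytecode execution, not WasmAndroid's engine;
 * no bounds checking, as befits a sketch. */
#include <stdint.h>
#include <stdio.h>

enum { OP_I32_CONST, OP_I32_ADD, OP_END };

int32_t run(const int32_t *code)
{
    int32_t stack[64];
    int sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case OP_I32_CONST: stack[sp++] = code[pc++]; break;
        case OP_I32_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_END:       return stack[sp - 1];
        }
    }
}

int main(void)
{
    const int32_t prog[] = { OP_I32_CONST, 2, OP_I32_CONST, 40,
                             OP_I32_ADD, OP_END };
    printf("%d\n", run(prog));   /* prints 42 on any host CPU */
    return 0;
}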
|
Oh, Deok-Jae |
LCTES '21: "MaPHeA: A Lightweight Memory ..."
MaPHeA: A Lightweight Memory Hierarchy-Aware Profile-Guided Heap Allocation Framework
Deok-Jae Oh, Yaebin Moon, Eojin Lee, Tae Jun Ham, Yongjun Park, Jae W. Lee, and Jung Ho Ahn (Seoul National University, South Korea; Samsung Electronics, South Korea; Hanyang University, South Korea) Hardware performance monitoring units (PMUs) are a standard feature in modern microprocessors for high-performance computing (HPC) and embedded systems, providing a rich set of microarchitectural event samplers. Recently, many profile-guided optimization (PGO) frameworks have exploited them to achieve much lower profiling overhead than conventional instrumentation-based frameworks. However, existing PGO frameworks mostly focus on optimizing the layout of binaries and do not utilize the rich information provided by the PMU about data access behaviors over the memory hierarchy. Thus, we propose MaPHeA, a lightweight Memory hierarchy-aware Profile-guided Heap Allocation framework applicable to both HPC and embedded systems. MaPHeA improves application performance by guiding and applying the optimized allocation of dynamically allocated heap objects with very low profiling overhead and without additional user intervention. To demonstrate the effectiveness of MaPHeA, we apply it to optimizing heap object allocation in an emerging DRAM-NVM heterogeneous memory system (HMS) and to selective huge-page utilization. In an HMS, by identifying frequently accessed heap objects and placing them in the fast DRAM region, MaPHeA improves the performance of memory-intensive graph-processing and Redis workloads by 56.0% on average over the default configuration that uses DRAM as a hardware-managed cache of slow NVM. Also, by identifying large heap objects that cause frequent TLB misses and allocating them to huge pages, MaPHeA increases the performance of read and update operations of Redis by 10.6% over the transparent huge-page implementation of Linux. @InProceedings{LCTES21p24, author = {Deok-Jae Oh and Yaebin Moon and Eojin Lee and Tae Jun Ham and Yongjun Park and Jae W. Lee and Jung Ho Ahn}, title = {MaPHeA: A Lightweight Memory Hierarchy-Aware Profile-Guided Heap Allocation Framework}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {24--36}, doi = {10.1145/3461648.3463844}, year = {2021}, } Publisher's Version Artifacts Functional Results Reproduced |
|
Paluri, Pavan Kumar |
LCTES '21: "ARINC 653-Inspired Regularity-Based ..."
ARINC 653-Inspired Regularity-Based Resource Partitioning on Xen
Pavan Kumar Paluri, Guangli Dai, and Albert Mo Kim Cheng (University of Houston, USA) A multitude of cloud-native applications take up a significant share of today's World Wide Web, the majority of which implicitly require soft real-time guarantees when hosted on servers at various data centers across the globe. With the rapid development of cloud computing and virtualization techniques, many applications have been moved onto cloud and edge platforms that require efficient virtualization techniques. This means a set of applications must be executed on a Virtual Machine (VM) and multiple VMs must be temporally and spatially scheduled on a set of CPUs. Designed to leverage the cloud infrastructure model, many of these cloud-native applications, such as media servers, strongly demand low data latency and high compute-resource availability, both of which must be predictable. However, state-of-the-art VM schedulers fail to satisfy these requirements simultaneously. The scheduling of cloud-native applications on VMs and the scheduling of VMs on physical resources (CPUs) collectively need to be real-time in nature, as specified by the Hierarchical Real-Time Scheduling (HiRTS) framework. Conforming to the specifications of this framework, the Regularity-based Resource Partitioning (RRP) model has been proposed, which introduces the concept of regularity to provide a near-ideal resource supply to all VMs. In this paper, we make the theoretically superior RRP model ready for prime time by implementing its associated resource partitioning algorithms for the first time on the popular open-source x86 hypervisor Xen, i.e., RRP-Xen. This paper also compares and contrasts the real-time performance of RRP-Xen against contemporary Xen schedulers such as Credit and RTDS. Our contributions include: (1) a novel implementation of the RRP model on Xen's x86-based hypervisor, thereby providing a test-bed for future researchers; (2) the first-ever multi-core ARINC 653 VM scheduler prototype on Xen; and (3) numerous experiments and theoretical analysis to determine the real-time performance of RRP-Xen under a stringent workload environment. @InProceedings{LCTES21p134, author = {Pavan Kumar Paluri and Guangli Dai and Albert Mo Kim Cheng}, title = {ARINC 653-Inspired Regularity-Based Resource Partitioning on Xen}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {134--145}, doi = {10.1145/3461648.3463854}, year = {2021}, } Publisher's Version Info Artifacts Functional |
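For readers new to ARINC 653-style temporal partitioning, the sketch below shows the table-driven dispatch shape: a fixed major frame divided into minor frames, each statically assigned to one partition (VM). RRP-Xen's regularity-based schedule construction is more sophisticated; here partition 0 simply receives half of the slots at evenly spaced (regular) intervals.

/* Generic illustration of a table-driven cyclic partition schedule;
 * not RRP-Xen's algorithm. */
#include <stdio.h>

#define MINOR_FRAMES 8

/* partition id dispatched in each minor frame of the major frame */
static const int table[MINOR_FRAMES] = { 0, 1, 0, 2, 0, 1, 0, 2 };

int main(void)
{
    for (int tick = 0; tick < 2 * MINOR_FRAMES; tick++) {
        int slot = tick % MINOR_FRAMES;
        printf("tick %2d: run partition %d\n", tick, table[slot]);
        /* a hypervisor would context-switch VCPUs here */
    }
    return 0;
}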
|
Park, Yongjun |
LCTES '21: "MaPHeA: A Lightweight Memory ..."
MaPHeA: A Lightweight Memory Hierarchy-Aware Profile-Guided Heap Allocation Framework
Deok-Jae Oh, Yaebin Moon, Eojin Lee, Tae Jun Ham, Yongjun Park, Jae W. Lee, and Jung Ho Ahn (Seoul National University, South Korea; Samsung Electronics, South Korea; Hanyang University, South Korea) Hardware performance monitoring units (PMUs) are a standard feature in modern microprocessors for high-performance computing (HPC) and embedded systems, providing a rich set of microarchitectural event samplers. Recently, many profile-guided optimization (PGO) frameworks have exploited them to achieve much lower profiling overhead than conventional instrumentation-based frameworks. However, existing PGO frameworks mostly focus on optimizing the layout of binaries and do not utilize the rich information provided by the PMU about data access behaviors over the memory hierarchy. Thus, we propose MaPHeA, a lightweight Memory hierarchy-aware Profile-guided Heap Allocation framework applicable to both HPC and embedded systems. MaPHeA improves application performance by guiding and applying the optimized allocation of dynamically allocated heap objects with very low profiling overhead and without additional user intervention. To demonstrate the effectiveness of MaPHeA, we apply it to optimizing heap object allocation in an emerging DRAM-NVM heterogeneous memory system (HMS) and to selective huge-page utilization. In an HMS, by identifying frequently accessed heap objects and placing them in the fast DRAM region, MaPHeA improves the performance of memory-intensive graph-processing and Redis workloads by 56.0% on average over the default configuration that uses DRAM as a hardware-managed cache of slow NVM. Also, by identifying large heap objects that cause frequent TLB misses and allocating them to huge pages, MaPHeA increases the performance of read and update operations of Redis by 10.6% over the transparent huge-page implementation of Linux. @InProceedings{LCTES21p24, author = {Deok-Jae Oh and Yaebin Moon and Eojin Lee and Tae Jun Ham and Yongjun Park and Jae W. Lee and Jung Ho Ahn}, title = {MaPHeA: A Lightweight Memory Hierarchy-Aware Profile-Guided Heap Allocation Framework}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {24--36}, doi = {10.1145/3461648.3463844}, year = {2021}, } Publisher's Version Artifacts Functional Results Reproduced |
|
Petoumenos, Pavlos |
LCTES '21: "HyFM: Function Merging for ..."
HyFM: Function Merging for Free
Rodrigo C. O. Rocha, Pavlos Petoumenos, Zheng Wang, Murray Cole, Kim Hazelwood, and Hugh Leather (University of Edinburgh, UK; University of Manchester, UK; University of Leeds, UK; Facebook AI Research, USA) Function merging is an important optimization for reducing code size. It merges multiple functions into a single one, eliminating duplicate code among them. The existing state-of-the-art relies on a well-known sequence alignment algorithm to identify duplicate code across whole functions. However, this algorithm is quadratic in time and space in the number of instructions. This leads to very high time overheads and prohibitive levels of memory usage even for medium-sized benchmarks. For larger programs, it becomes impractical. This is made worse by an overly eager merging approach: all selected pairs of functions are merged, and only then does the approach estimate the potential benefit from merging and decide whether to replace the original functions with the merged one. Given that most pairs are unprofitable, a significant amount of time is wasted producing merged functions that are simply thrown away. In this paper, we propose HyFM, a novel function merging technique that delivers similar levels of code size reduction for significantly lower time overhead and memory usage. Unlike the state-of-the-art, our alignment strategy works at the block level. Since basic blocks are usually much shorter than functions, even a quadratic alignment is acceptable. However, we also propose a linear algorithm for aligning blocks of the same size at a much lower cost. We extend this strategy with a multi-tier profitability analysis that bails out early from unprofitable merging attempts. By aligning individual pairs of blocks, we are able to decide their alignment’s profitability separately and before actually generating code. Experimental results on SPEC 2006 and 2017 show that HyFM needs orders of magnitude less memory, using up to 48 MB or 5.6 MB, depending on the variant used, while the state-of-the-art requires 32 GB in the worst case. HyFM also runs over 4.5× faster, while still achieving comparable code size reduction. Combined with the speedup of later compilation stages due to the reduced number of functions, HyFM contributes to a reduced end-to-end compilation time. @InProceedings{LCTES21p110, author = {Rodrigo C. O. Rocha and Pavlos Petoumenos and Zheng Wang and Murray Cole and Kim Hazelwood and Hugh Leather}, title = {HyFM: Function Merging for Free}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {110--121}, doi = {10.1145/3461648.3463852}, year = {2021}, } Publisher's Version Artifacts Functional Results Reproduced |
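The cheap alignment that makes this approach pay off can be sketched in a few lines: two same-length blocks are aligned position by position in O(n), and merging is abandoned as soon as the match count can no longer reach a profitability threshold. Opcodes are modelled as plain integers; this illustrates the idea only and is not HyFM's code.

/* O(n) alignment of two same-size blocks with an early bail-out,
 * in the spirit of the linear alignment plus profitability tiers. */
#include <stdbool.h>
#include <stddef.h>

/* Returns true if merging looks profitable: at least min_match of the
 * instruction slots hold the same opcode. */
bool profitable_alignment(const int *blockA, const int *blockB,
                          size_t n, size_t min_match)
{
    size_t match = 0;
    for (size_t i = 0; i < n; i++) {
        if (blockA[i] == blockB[i])
            match++;
        else if (match + (n - i - 1) < min_match)
            return false;    /* bail out early: threshold unreachable */
    }
    return match >= min_match;
}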
|
Poroor, Jayaraj |
LCTES '21: "Robust I/O-Compute Concurrency ..."
Robust I/O-Compute Concurrency for Machine Learning Pipelines in Constrained Cyber-Physical Devices
Jayaraj Poroor, Akash Lal, and Sandesh Ghanta (Amrita University, India; JIFFY.ai, India; Microsoft Research, India; Amazon, India) Cyber-physical systems have numerous industrial and commercial applications. Such systems are often built using low-resource devices that gather and process data, using machine-learning (ML) models, to make intelligent decisions and provide value to users. Programming such low-resource devices with an impoverished system runtime is often challenging. This paper presents a new domain-specific language called PiCon for programming ML pipelines in low-resource devices. PiCon allows safe I/O-compute concurrency, ruling out a large class of errors, while providing a simple, sequential coding abstraction to the programmer. PiCon compiles to C code and easily interfaces with existing C/C++ code. Furthermore, the generated code does not rely on multi-threading support or dynamic memory allocation, dramatically reducing its footprint on the device. We present our experience porting two real-world ML applications, which demonstrates simplified programmability in addition to several safe-by-construction guarantees. @InProceedings{LCTES21p1, author = {Jayaraj Poroor and Akash Lal and Sandesh Ghanta}, title = {Robust I/O-Compute Concurrency for Machine Learning Pipelines in Constrained Cyber-Physical Devices}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {1--11}, doi = {10.1145/3461648.3463842}, year = {2021}, } Publisher's Version |
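One plausible shape for the generated C, sketched under the assumption of a state-machine lowering (the paper's actual code generator may differ): the sequential read-infer-send pipeline becomes an explicit state machine driven by a non-blocking step function, so I/O and compute can overlap without threads or dynamic allocation. All names below are illustrative.

/* Hypothetical lowering of a sequential ML pipeline to a state machine;
 * not PiCon's actual output. */
#include <stdbool.h>

enum state { WAIT_SENSOR, RUN_MODEL, SEND_RESULT };

struct task {
    enum state st;
    float sample, score;
};

/* One non-blocking step; returns true while the current round continues. */
bool step(struct task *t)
{
    switch (t->st) {
    case WAIT_SENSOR:
        /* poll hardware; stay in this state until data is ready */
        t->sample = 0.5f;             /* stand-in for a sensor read */
        t->st = RUN_MODEL;
        return true;
    case RUN_MODEL:
        t->score = t->sample * 2.0f;  /* stand-in for ML inference */
        t->st = SEND_RESULT;
        return true;
    case SEND_RESULT:
        /* enqueue result on a non-blocking channel */
        t->st = WAIT_SENSOR;
        return false;                 /* one pipeline round completed */
    }
    return false;
}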
|
Pusz, Oskar |
LCTES '21: "Data-Flow-Sensitive Fault-Space ..."
Data-Flow-Sensitive Fault-Space Pruning for the Injection of Transient Hardware Faults
Oskar Pusz, Christian Dietrich, and Daniel Lohmann (Leibniz Universität Hannover, Germany) In the domain of safety-critical systems, fault-injection campaigns at the ISA level have become a widespread approach to systematically assess the resilience of a system with respect to transient hardware faults. However, experimentally injecting all possible faults to achieve full fault-space coverage is infeasible in practice. Hence, pruning techniques, such as def/use pruning, are commonly applied to reduce the campaign size by grouping injections that are certain to provoke the same erroneous behavior. We describe data-flow pruning, a new data-flow-sensitive fault-space pruning method that extends def/use pruning by also considering the instructions’ semantics when deriving fault-equivalence sets. By tracking the information flow for each bit individually across the respective instructions and considering their fault-masking capability, data-flow pruning (DFP) has to plan fewer pilot injections, as it derives larger fault-equivalence sets. Like def/use pruning, DFP is precise and complete, and it can be used as a direct replacement or alternative in existing software-based fault-injection tools. Our prototypical implementation so far considers local fault equivalence for five types of instructions. In our experimental evaluation, this already reduces the number of necessary injections by up to 18 percent compared to def/use pruning. @InProceedings{LCTES21p97, author = {Oskar Pusz and Christian Dietrich and Daniel Lohmann}, title = {Data-Flow-Sensitive Fault-Space Pruning for the Injection of Transient Hardware Faults}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {97--109}, doi = {10.1145/3461648.3463851}, year = {2021}, } Publisher's Version Artifacts Reusable Results Reproduced |
|
Qin, Hongwei |
LCTES '21: "Better Atomic Writes by Exposing ..."
Better Atomic Writes by Exposing the Flash Out-of-Band Area to File Systems
Hongwei Qin, Dan Feng, Wei Tong, Yutong Zhao, Sheng Qiu, Fei Liu, and Shu Li (Huazhong University of Science and Technology, China; Alibaba Group, China) File systems for mobile devices usually preserve data consistency through ordered I/Os. However, maintaining I/O ordering prevents applications from fully exploiting device parallelism and thus degrades storage performance. In this paper, we propose NBStack to eliminate ordered I/Os without compromising data consistency. First, we augment the existing block interface to expose the Flash out-of-band area to file systems. Second, we build an enhanced block device prototype that supports the new interface. Third, we develop NBFS, a Linux file system that leverages the new block interface to achieve atomic writes without enforcing I/O ordering. Experimental results show that NBStack doubles the performance of F2FS while providing strong consistency and durability guarantees. If applications are willing to trade off durability, NBStack can improve performance even further. @InProceedings{LCTES21p12, author = {Hongwei Qin and Dan Feng and Wei Tong and Yutong Zhao and Sheng Qiu and Fei Liu and Shu Li}, title = {Better Atomic Writes by Exposing the Flash Out-of-Band Area to File Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {12--23}, doi = {10.1145/3461648.3463843}, year = {2021}, } Publisher's Version |
|
Qiu, Sheng |
LCTES '21: "Better Atomic Writes by Exposing ..."
Better Atomic Writes by Exposing the Flash Out-of-Band Area to File Systems
Hongwei Qin, Dan Feng, Wei Tong, Yutong Zhao, Sheng Qiu, Fei Liu, and Shu Li (Huazhong University of Science and Technology, China; Alibaba Group, China) File systems for mobile devices usually preserve data consistency through ordered I/Os. However, maintaining I/O ordering prevents applications from fully exploiting device parallelism and thus degrades storage performance. In this paper, we propose NBStack to eliminate ordered I/Os without compromising data consistency. First, we augment the existing block interface to expose the Flash out-of-band area to file systems. Second, we build an enhanced block device prototype that supports the new interface. Third, we develop NBFS, a Linux file system that leverages the new block interface to achieve atomic writes without enforcing I/O ordering. Experimental results show that NBStack doubles the performance of F2FS while providing strong consistency and durability guarantees. If applications are willing to trade off durability, NBStack can improve performance even further. @InProceedings{LCTES21p12, author = {Hongwei Qin and Dan Feng and Wei Tong and Yutong Zhao and Sheng Qiu and Fei Liu and Shu Li}, title = {Better Atomic Writes by Exposing the Flash Out-of-Band Area to File Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {12--23}, doi = {10.1145/3461648.3463843}, year = {2021}, } Publisher's Version |
|
Rocha, Rodrigo C. O. |
LCTES '21: "HyFM: Function Merging for ..."
HyFM: Function Merging for Free
Rodrigo C. O. Rocha, Pavlos Petoumenos, Zheng Wang, Murray Cole, Kim Hazelwood, and Hugh Leather (University of Edinburgh, UK; University of Manchester, UK; University of Leeds, UK; Facebook AI Research, USA) Function merging is an important optimization for reducing code size. It merges multiple functions into a single one, eliminating duplicate code among them. The existing state-of-the-art relies on a well-known sequence alignment algorithm to identify duplicate code across whole functions. However, this algorithm is quadratic in time and space in the number of instructions. This leads to very high time overheads and prohibitive levels of memory usage even for medium-sized benchmarks. For larger programs, it becomes impractical. This is made worse by an overly eager merging approach: all selected pairs of functions are merged, and only then does the approach estimate the potential benefit from merging and decide whether to replace the original functions with the merged one. Given that most pairs are unprofitable, a significant amount of time is wasted producing merged functions that are simply thrown away. In this paper, we propose HyFM, a novel function merging technique that delivers similar levels of code size reduction for significantly lower time overhead and memory usage. Unlike the state-of-the-art, our alignment strategy works at the block level. Since basic blocks are usually much shorter than functions, even a quadratic alignment is acceptable. However, we also propose a linear algorithm for aligning blocks of the same size at a much lower cost. We extend this strategy with a multi-tier profitability analysis that bails out early from unprofitable merging attempts. By aligning individual pairs of blocks, we are able to decide their alignment’s profitability separately and before actually generating code. Experimental results on SPEC 2006 and 2017 show that HyFM needs orders of magnitude less memory, using up to 48 MB or 5.6 MB, depending on the variant used, while the state-of-the-art requires 32 GB in the worst case. HyFM also runs over 4.5× faster, while still achieving comparable code size reduction. Combined with the speedup of later compilation stages due to the reduced number of functions, HyFM contributes to a reduced end-to-end compilation time. @InProceedings{LCTES21p110, author = {Rodrigo C. O. Rocha and Pavlos Petoumenos and Zheng Wang and Murray Cole and Kim Hazelwood and Hugh Leather}, title = {HyFM: Function Merging for Free}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {110--121}, doi = {10.1145/3461648.3463852}, year = {2021}, } Publisher's Version Artifacts Functional Results Reproduced |
|
Schröder-Preikschat, Wolfgang |
LCTES '21: "Annotate Once – Analyze ..."
Annotate Once – Analyze Anywhere: Context-Aware WCET Analysis by User-Defined Abstractions
Simon Schuster, Peter Wägemann, Peter Ulbrich, and Wolfgang Schröder-Preikschat (University of Erlangen-Nuremberg, Germany; TU Dortmund, Germany) The widespread adoption of cyber-physical systems in the safety-critical (hard real-time) domain is accompanied by a rising degree of code reuse, up to actual software product lines spanning different hardware platforms. Nevertheless, the dominant tools for static worst-case execution-time (WCET) analysis operate on individual, specific system instances at the binary level and further depend on machine-code–level annotations for precise analysis. Thus, this timing verification is neither portable nor reusable. PragMetis addresses this schism by providing an expressive source-level annotation language that enables expressing context dependence at the library level using user-defined abstractions. These abstractions allow users to generically annotate context-dependent flow facts down to the granularity of individual loop contexts. We then use control-flow–relation graphs to transfer these facts to the machine-code level for specific instances, even in the presence of certain compiler optimizations, thus achieving portability. Our evaluation results based on TACLeBench confirm that PragMetis's powerful expressions yield more accurate WCET bounds. @InProceedings{LCTES21p54, author = {Simon Schuster and Peter Wägemann and Peter Ulbrich and Wolfgang Schröder-Preikschat}, title = {Annotate Once – Analyze Anywhere: Context-Aware WCET Analysis by User-Defined Abstractions}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {54--66}, doi = {10.1145/3461648.3463847}, year = {2021}, } Publisher's Version Artifacts Reusable Results Reproduced |
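The flavor of source-level flow-fact annotation involved can be seen in the TACLeBench pragma style the evaluation builds on: a loop bound is attached in the source and carried down to the analysis. PragMetis's own syntax and its context-dependent, user-defined abstractions are richer; the constant bound below is only an illustrative example.

/* TACLeBench-style loop-bound annotation for WCET analysis. The pragma
 * is a flow fact for the analyzer; the compiler ignores it. */
int count_nonzero(const int *buf, int len)   /* len <= 64 by contract */
{
    int n = 0;
    _Pragma("loopbound min 0 max 64")        /* flow fact: at most 64 trips */
    for (int i = 0; i < len; i++)
        if (buf[i] != 0)
            n++;
    return n;
}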
|
Schuster, Simon |
LCTES '21: "Annotate Once – Analyze ..."
Annotate Once – Analyze Anywhere: Context-Aware WCET Analysis by User-Defined Abstractions
Simon Schuster, Peter Wägemann, Peter Ulbrich, and Wolfgang Schröder-Preikschat (University of Erlangen-Nuremberg, Germany; TU Dortmund, Germany) The widespread adoption of cyber-physical systems in the safety-critical (hard real-time) domain is accompanied by a rising degree of code reuse, up to actual software product lines spanning different hardware platforms. Nevertheless, the dominant tools for static worst-case execution-time (WCET) analysis operate on individual, specific system instances at the binary level and further depend on machine-code–level annotations for precise analysis. Thus, this timing verification is neither portable nor reusable. PragMetis addresses this schism by providing an expressive source-level annotation language that enables expressing context dependence at the library level using user-defined abstractions. These abstractions allow users to generically annotate context-dependent flow facts down to the granularity of individual loop contexts. We then use control-flow–relation graphs to transfer these facts to the machine-code level for specific instances, even in the presence of certain compiler optimizations, thus achieving portability. Our evaluation results based on TACLeBench confirm that PragMetis's powerful expressions yield more accurate WCET bounds. @InProceedings{LCTES21p54, author = {Simon Schuster and Peter Wägemann and Peter Ulbrich and Wolfgang Schröder-Preikschat}, title = {Annotate Once – Analyze Anywhere: Context-Aware WCET Analysis by User-Defined Abstractions}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {54--66}, doi = {10.1145/3461648.3463847}, year = {2021}, } Publisher's Version Artifacts Reusable Results Reproduced |
|
Shi, Yang |
LCTES '21: "Automatic Mapping and Code ..."
Automatic Mapping and Code Optimization for OpenCL Kernels on FT-Matrix Architecture (WIP Paper)
Xiaolei Zhao, Mei Wen, Zhaoyun Chen, Yang Shi, and Chunyuan Zhang (National University of Defense Technology, China) FT-Matrix is a typical vector-SIMD architecture that refines the cooperation between scalar and vector units. The architecture is widely used in digital signal processing, high-performance computing, and artificial intelligence, among other fields. FT-Matrix currently adopts a C vector extension as its main programming model, improving SIMD utilization by providing an explicit vector-extension API. However, this makes it difficult to efficiently port the parallel programs (OpenCL, CUDA) that users have already adopted. This paper proposes an automatic mapping and code optimization method for OpenCL kernels on the FT-Matrix architecture. The proposed approach addresses these challenges by means of work-item coalescing, slicing and rotation, and instruction-level code optimization. Preliminary results show that our method achieves high performance and good hardware utilization for OpenCL kernels while lowering the difficulty of programming FT-Matrix. @InProceedings{LCTES21p37, author = {Xiaolei Zhao and Mei Wen and Zhaoyun Chen and Yang Shi and Chunyuan Zhang}, title = {Automatic Mapping and Code Optimization for OpenCL Kernels on FT-Matrix Architecture (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {37--41}, doi = {10.1145/3461648.3463845}, year = {2021}, } Publisher's Version |
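Work-item coalescing, the first transformation named above, is the standard way to map an NDRange kernel onto a CPU-like core: the kernel body is wrapped in an explicit loop over the work-items of a group, yielding a vectorizable C loop. The sketch below shows the general transformation, not FT-Matrix's generated code.

/* OpenCL source (conceptually):
 *   __kernel void vadd(__global const float *a, __global const float *b,
 *                      __global float *c) {
 *       int i = get_global_id(0);
 *       c[i] = a[i] + b[i];
 *   }
 */

/* Coalesced form generated for one work-group of local_size items: */
void vadd_coalesced(const float *a, const float *b, float *c,
                    int group_id, int local_size)
{
    int base = group_id * local_size;
    for (int lid = 0; lid < local_size; lid++) {  /* work-item loop */
        int i = base + lid;                       /* get_global_id(0) */
        c[i] = a[i] + b[i];                       /* original kernel body */
    }
}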
|
Six, Cyril |
LCTES '21: "Simple, Light, Yet Formally ..."
Simple, Light, Yet Formally Verified, Global Common Subexpression Elimination and Loop-Invariant Code Motion
David Monniaux and Cyril Six (Verimag, France; Université Grenoble Alpes, France; CNRS, France; Kalray, France) We present an approach for implementing a formally certified loop-invariant code motion optimization by composing an unrolling pass with a formally certified yet efficient global common subexpression elimination. This approach is lightweight: each pass comes with a simple and independent proof of correctness. Experiments show that the approach significantly narrows the performance gap between the CompCert certified compiler and state-of-the-art optimizing compilers. Our static analysis employs an efficient yet verified hashed set structure, resulting in fast compilation. @InProceedings{LCTES21p85, author = {David Monniaux and Cyril Six}, title = {Simple, Light, Yet Formally Verified, Global Common Subexpression Elimination and Loop-Invariant Code Motion}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {85--96}, doi = {10.1145/3461648.3463850}, year = {2021}, } Publisher's Version Artifacts Reusable Results Reproduced |
|
Tong, Wei |
LCTES '21: "Better Atomic Writes by Exposing ..."
Better Atomic Writes by Exposing the Flash Out-of-Band Area to File Systems
Hongwei Qin, Dan Feng, Wei Tong, Yutong Zhao, Sheng Qiu, Fei Liu, and Shu Li (Huazhong University of Science and Technology, China; Alibaba Group, China) File systems for mobile devices usually preserve data consistency through ordered I/Os. However, maintaining I/O ordering prevents applications from fully exploiting device parallelism and thus degrades storage performance. In this paper, we propose NBStack to eliminate ordered I/Os without compromising data consistency. First, we augment the existing block interface to expose the Flash out-of-band area to file systems. Second, we build an enhanced block device prototype that supports the new interface. Third, we develop NBFS, a Linux file system that leverages the new block interface to achieve atomic writes without enforcing I/O ordering. Experimental results show that NBStack doubles the performance of F2FS while providing strong consistency and durability guarantees. If applications are willing to trade off durability, NBStack can improve performance even further. @InProceedings{LCTES21p12, author = {Hongwei Qin and Dan Feng and Wei Tong and Yutong Zhao and Sheng Qiu and Fei Liu and Shu Li}, title = {Better Atomic Writes by Exposing the Flash Out-of-Band Area to File Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {12--23}, doi = {10.1145/3461648.3463843}, year = {2021}, } Publisher's Version |
|
Ulbrich, Peter |
LCTES '21: "Annotate Once – Analyze ..."
Annotate Once – Analyze Anywhere: Context-Aware WCET Analysis by User-Defined Abstractions
Simon Schuster, Peter Wägemann, Peter Ulbrich, and Wolfgang Schröder-Preikschat (University of Erlangen-Nuremberg, Germany; TU Dortmund, Germany) The widespread adoption of cyber-physical systems in the safety-critical (hard real-time) domain is accompanied by a rising degree of code reuse, up to actual software product lines spanning different hardware platforms. Nevertheless, the dominant tools for static worst-case execution-time (WCET) analysis operate on individual, specific system instances at the binary level and further depend on machine-code–level annotations for precise analysis. Thus, this timing verification is neither portable nor reusable. PragMetis addresses this schism by providing an expressive source-level annotation language that enables expressing context dependence at the library level using user-defined abstractions. These abstractions allow users to generically annotate context-dependent flow facts down to the granularity of individual loop contexts. We then use control-flow–relation graphs to transfer these facts to the machine-code level for specific instances, even in the presence of certain compiler optimizations, thus achieving portability. Our evaluation results based on TACLeBench confirm that PragMetis's powerful expressions yield more accurate WCET bounds. @InProceedings{LCTES21p54, author = {Simon Schuster and Peter Wägemann and Peter Ulbrich and Wolfgang Schröder-Preikschat}, title = {Annotate Once – Analyze Anywhere: Context-Aware WCET Analysis by User-Defined Abstractions}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {54--66}, doi = {10.1145/3461648.3463847}, year = {2021}, } Publisher's Version Artifacts Reusable Results Reproduced |
|
Wägemann, Peter |
LCTES '21: "Annotate Once – Analyze ..."
Annotate Once – Analyze Anywhere: Context-Aware WCET Analysis by User-Defined Abstractions
Simon Schuster, Peter Wägemann, Peter Ulbrich, and Wolfgang Schröder-Preikschat (University of Erlangen-Nuremberg, Germany; TU Dortmund, Germany) The widespread adoption of cyber-physical systems in the safety-critical (hard real-time) domain is accompanied by a rising degree of code reuse, up to actual software product lines spanning different hardware platforms. Nevertheless, the dominant tools for static worst-case execution-time (WCET) analysis operate on individual, specific system instances at the binary level and further depend on machine-code–level annotations for precise analysis. Thus, this timing verification is neither portable nor reusable. PragMetis addresses this schism by providing an expressive source-level annotation language that enables expressing context dependence at the library level using user-defined abstractions. These abstractions allow users to generically annotate context-dependent flow facts down to the granularity of individual loop contexts. We then use control-flow–relation graphs to transfer these facts to the machine-code level for specific instances, even in the presence of certain compiler optimizations, thus achieving portability. Our evaluation results based on TACLeBench confirm that PragMetis's powerful expressions yield more accurate WCET bounds. @InProceedings{LCTES21p54, author = {Simon Schuster and Peter Wägemann and Peter Ulbrich and Wolfgang Schröder-Preikschat}, title = {Annotate Once – Analyze Anywhere: Context-Aware WCET Analysis by User-Defined Abstractions}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {54--66}, doi = {10.1145/3461648.3463847}, year = {2021}, } Publisher's Version Artifacts Reusable Results Reproduced |
|
Wang, Ying |
LCTES '21: "Optimus: Towards Optimal Layer-Fusion ..."
Optimus: Towards Optimal Layer-Fusion on Deep Learning Processors
Xuyi Cai, Ying Wang, and Lei Zhang (Institute of Computing Technology at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China) Neural network layer fusion has been proposed to parallelize the inference of neural layers and thus significantly reduce the feature-induced memory accesses. However, how to fuse the neural layers is still a challenging issue that heavily depends on both the network architecture and the specific DNN processor configuration. This work formalizes the layer fusion problem for DNN processors, proves that prior fusion solutions cannot guarantee memory-level optimality, and presents a novel neural network fusion framework, Optimus. Optimus includes an accurate memory cost model to evaluate fusion schemes and a Computing-Graph (CG) based layer fusion algorithm, which generates high-efficiency layer-fusion schemes for arbitrary network architectures on DNN processors. The proposed off-line and on-line graph-based fusion algorithms reduce off-chip memory traffic by 10.1%-72.2% and obtain 1.71x-3.94x higher energy efficiency over SOTA baselines on DNN workloads, and they bring a significant power-efficiency boost to DNN processors of different architectures and dataflows. @InProceedings{LCTES21p67, author = {Xuyi Cai and Ying Wang and Lei Zhang}, title = {Optimus: Towards Optimal Layer-Fusion on Deep Learning Processors}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {67--79}, doi = {10.1145/3461648.3463848}, year = {2021}, } Publisher's Version Artifacts Reusable Results Reproduced LCTES '21: "CHaNAS: Coordinated Search ..." CHaNAS: Coordinated Search for Network Architecture and Scheduling Policy Weiwei Chen, Ying Wang, Gangliang Lin, Chengsi Gao, Cheng Liu, and Lei Zhang (Institute of Computing Technology at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China) The efficiency of an automatically designed DNN solution for a given deep learning task on target hardware is mainly decided by the neural network architecture and the schedule mapping strategy; the two are closely coupled with each other and must be co-optimized to fully exploit the advantages of the underlying hardware. Prior hardware-aware Neural Architecture Search (NAS) methods mostly ignore the impact of different scheduling policies (e.g., graph-level optimization, loop transformations, parallelization, etc.) on the network candidates being evaluated in the search process. Thus, they may miss the truly optimal architecture that can only be discovered by trying out different scheduling policies. This work proposes a NAS framework (CHaNAS) that searches for not only the network architecture but also the dedicated scheduling policy, as the optimal co-design solution on target hardware that fully exploits the advantages of the underlying hardware. We propose a block-based pre-scheduling methodology to reduce the co-design search space and to enable the automatic generation of the optimal co-design, including the network architecture and the tensor programs that implement the scheduling policy. We evaluate CHaNAS on ImageNet on different hardware back-ends against the state-of-the-art hardware-aware search method MobileNet-v3. Experimental results show that the co-design solutions obtained by CHaNAS achieve up to 1.6x, 1.9x, and 1.7x performance boosts on an NVIDIA P100 GPU, an Intel Xeon 8163 CPU, and a Samsung Note 10 Mobile, respectively, over baselines of the same-level accuracy. @InProceedings{LCTES21p42, author = {Weiwei Chen and Ying Wang and Gangliang Lin and Chengsi Gao and Cheng Liu and Lei Zhang}, title = {CHaNAS: Coordinated Search for Network Architecture and Scheduling Policy}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {42--53}, doi = {10.1145/3461648.3463846}, year = {2021}, } Publisher's Version |
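The memory cost model that drives layer fusion reduces to simple traffic accounting: fusing two layers keeps the intermediate feature map on chip, saving one off-chip write and one read of that map, provided it fits in the on-chip buffer. The numbers below are invented for illustration and are not from the Optimus paper.

/* Back-of-envelope fusion traffic accounting; illustrative values only. */
#include <stdio.h>

int main(void)
{
    long in  = 224L * 224 * 64;    /* input feature map, elements  */
    long mid = 224L * 224 * 64;    /* intermediate map, elements   */
    long out = 224L * 224 * 128;   /* output feature map, elements */

    long unfused = in + mid /*write*/ + mid /*read*/ + out;
    long fused   = in + out;       /* mid never leaves the chip    */

    printf("traffic unfused=%ld fused=%ld saved=%.1f%%\n",
           unfused, fused, 100.0 * (unfused - fused) / unfused);
    return 0;
}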
|
Wang, Zheng |
LCTES '21: "HyFM: Function Merging for ..."
HyFM: Function Merging for Free
Rodrigo C. O. Rocha, Pavlos Petoumenos, Zheng Wang, Murray Cole, Kim Hazelwood, and Hugh Leather (University of Edinburgh, UK; University of Manchester, UK; University of Leeds, UK; Facebook AI Research, USA) Function merging is an important optimization for reducing code size. It merges multiple functions into a single one, eliminating duplicate code among them. The existing state-of-the-art relies on a well-known sequence alignment algorithm to identify duplicate code across whole functions. However, this algorithm is quadratic in time and space in the number of instructions. This leads to very high time overheads and prohibitive levels of memory usage even for medium-sized benchmarks. For larger programs, it becomes impractical. This is made worse by an overly eager merging approach: all selected pairs of functions are merged, and only then does the approach estimate the potential benefit from merging and decide whether to replace the original functions with the merged one. Given that most pairs are unprofitable, a significant amount of time is wasted producing merged functions that are simply thrown away. In this paper, we propose HyFM, a novel function merging technique that delivers similar levels of code size reduction for significantly lower time overhead and memory usage. Unlike the state-of-the-art, our alignment strategy works at the block level. Since basic blocks are usually much shorter than functions, even a quadratic alignment is acceptable. However, we also propose a linear algorithm for aligning blocks of the same size at a much lower cost. We extend this strategy with a multi-tier profitability analysis that bails out early from unprofitable merging attempts. By aligning individual pairs of blocks, we are able to decide their alignment’s profitability separately and before actually generating code. Experimental results on SPEC 2006 and 2017 show that HyFM needs orders of magnitude less memory, using up to 48 MB or 5.6 MB, depending on the variant used, while the state-of-the-art requires 32 GB in the worst case. HyFM also runs over 4.5× faster, while still achieving comparable code size reduction. Combined with the speedup of later compilation stages due to the reduced number of functions, HyFM contributes to a reduced end-to-end compilation time. @InProceedings{LCTES21p110, author = {Rodrigo C. O. Rocha and Pavlos Petoumenos and Zheng Wang and Murray Cole and Kim Hazelwood and Hugh Leather}, title = {HyFM: Function Merging for Free}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {110--121}, doi = {10.1145/3461648.3463852}, year = {2021}, } Publisher's Version Artifacts Functional Results Reproduced |
|
Weber, Gerald |
LCTES '21: "WasmAndroid: A Cross-Platform ..."
WasmAndroid: A Cross-Platform Runtime for Native Programming Languages on Android (WIP Paper)
Elliott Wen, Gerald Weber, and Suranga Nanayakkara (University of Auckland, New Zealand) Open-source hardware such as RISC-V has been gaining substantial momentum. Recently, such platforms have begun to embrace Google's Android operating system to leverage its software ecosystem. Despite the encouraging progress, a challenging issue arises: a majority of Android applications are written in native languages and need to be recompiled to target new hardware platforms. Unfortunately, this recompilation process does not scale with the explosion of new hardware platforms. To address this issue, we present WasmAndroid, a high-performance cross-platform runtime for native programming languages on Android. WasmAndroid only requires developers to compile their source code to WebAssembly, an efficient and portable bytecode format that can be executed everywhere without additional reconfiguration. WasmAndroid can also transpile existing application binaries to WebAssembly when source code is not available. WebAssembly's language model is very different from that of C/C++, and this mismatch leads to many unique implementation challenges. In this paper, we provide workable solutions and conduct a preliminary system evaluation. We show that WasmAndroid provides acceptable performance when executing native applications in a cross-platform manner. @InProceedings{LCTES21p80, author = {Elliott Wen and Gerald Weber and Suranga Nanayakkara}, title = {WasmAndroid: A Cross-Platform Runtime for Native Programming Languages on Android (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {80--84}, doi = {10.1145/3461648.3463849}, year = {2021}, } Publisher's Version |
|
Wen, Elliott |
LCTES '21: "WasmAndroid: A Cross-Platform ..."
WasmAndroid: A Cross-Platform Runtime for Native Programming Languages on Android (WIP Paper)
Elliott Wen, Gerald Weber, and Suranga Nanayakkara (University of Auckland, New Zealand) Open-source hardware such as RISC-V has been gaining substantial momentum. Recently, such platforms have begun to embrace Google's Android operating system to leverage its software ecosystem. Despite the encouraging progress, a challenging issue arises: a majority of Android applications are written in native languages and need to be recompiled to target new hardware platforms. Unfortunately, this recompilation process does not scale with the explosion of new hardware platforms. To address this issue, we present WasmAndroid, a high-performance cross-platform runtime for native programming languages on Android. WasmAndroid only requires developers to compile their source code to WebAssembly, an efficient and portable bytecode format that can be executed everywhere without additional reconfiguration. WasmAndroid can also transpile existing application binaries to WebAssembly when source code is not available. WebAssembly's language model is very different from that of C/C++, and this mismatch leads to many unique implementation challenges. In this paper, we provide workable solutions and conduct a preliminary system evaluation. We show that WasmAndroid provides acceptable performance when executing native applications in a cross-platform manner. @InProceedings{LCTES21p80, author = {Elliott Wen and Gerald Weber and Suranga Nanayakkara}, title = {WasmAndroid: A Cross-Platform Runtime for Native Programming Languages on Android (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {80--84}, doi = {10.1145/3461648.3463849}, year = {2021}, } Publisher's Version |
|
Wen, Mei |
LCTES '21: "Automatic Mapping and Code ..."
Automatic Mapping and Code Optimization for OpenCL Kernels on FT-Matrix Architecture (WIP Paper)
Xiaolei Zhao, Mei Wen, Zhaoyun Chen, Yang Shi, and Chunyuan Zhang (National University of Defense Technology, China) FT-Matrix is a typical vector-SIMD architecture that refines the cooperation between scalar and vector units. The architecture is widely used in digital signal processing, high-performance computing, and artificial intelligence, among other fields. FT-Matrix currently adopts a C vector extension as its main programming model, improving SIMD utilization by providing an explicit vector-extension API. However, this makes it difficult to efficiently port the parallel programs (OpenCL, CUDA) that users have already adopted. This paper proposes an automatic mapping and code optimization method for OpenCL kernels on the FT-Matrix architecture. The proposed approach addresses these challenges by means of work-item coalescing, slicing and rotation, and instruction-level code optimization. Preliminary results show that our method achieves high performance and good hardware utilization for OpenCL kernels while lowering the difficulty of programming FT-Matrix. @InProceedings{LCTES21p37, author = {Xiaolei Zhao and Mei Wen and Zhaoyun Chen and Yang Shi and Chunyuan Zhang}, title = {Automatic Mapping and Code Optimization for OpenCL Kernels on FT-Matrix Architecture (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {37--41}, doi = {10.1145/3461648.3463845}, year = {2021}, } Publisher's Version |
|
Young, May |
LCTES '21: "Cache Abstraction for Data ..."
Cache Abstraction for Data Race Detection in Heterogeneous Systems with Non-coherent Accelerators
May Young, Alan J. Hu, and Guy G. F. Lemieux (University of British Columbia, Canada) Embedded systems are becoming increasingly complex and heterogeneous, featuring multiple processor cores (which might themselves be heterogeneous) as well as specialized hardware accelerators, all accessing shared memory. Many accelerators are non-coherent (i.e., do not support hardware cache coherence) because it reduces hardware complexity, cost, and power consumption, while potentially offering superior performance. However, the disadvantage of non-coherence is that the software must explicitly synchronize between accelerators and processors, and this synchronization is notoriously error-prone. We propose an analysis technique to find data races in software for heterogeneous systems that include non-coherent accelerators. Our approach builds on classical results for data race detection, but the challenge turns out to be analyzing cache behavior rather than the behavior of the non-coherent accelerators. Accordingly, our central contribution is a novel, sound (data-race-preserving) abstraction of cache behavior. We prove our abstraction sound, and then to demonstrate the precision of our abstraction, we implement it in a simple dynamic race detector for a system with a processor and a massively parallel accelerator provided by a commercial FPGA-based accelerator vendor. On eleven software examples provided by the vendor, the tool had zero false positives and was able to detect previously unknown data races in 2 of the 11 examples. @InProceedings{LCTES21p151, author = {May Young and Alan J. Hu and Guy G. F. Lemieux}, title = {Cache Abstraction for Data Race Detection in Heterogeneous Systems with Non-coherent Accelerators}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {151--162}, doi = {10.1145/3461648.3463856}, year = {2021}, } Publisher's Version |
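The hazard the abstraction captures can be reduced to one comparison: on a non-coherent accelerator, a read may legally return a stale cached copy, so any read of a location written since the reader's last cache invalidation is suspect. The toy checker below models a single address and one accelerator; the paper's abstraction tracks this soundly per cache line across all synchronization events, which this sketch does not attempt.

/* Toy staleness check for one address and one non-coherent accelerator;
 * an illustration of the hazard, not the paper's detector. */
#include <stdbool.h>
#include <stdio.h>

struct loc {
    unsigned long last_write;      /* logical time of last CPU write       */
    unsigned long last_invalidate; /* accel's last invalidate of this line */
};

bool racy_read(const struct loc *l)
{
    /* If the CPU wrote after the accelerator last invalidated its cache,
     * the accelerator may still observe the stale copy: report a race. */
    return l->last_write > l->last_invalidate;
}

int main(void)
{
    struct loc x = { .last_write = 10, .last_invalidate = 5 };
    printf("race: %s\n", racy_read(&x) ? "yes" : "no");  /* prints: yes */
    return 0;
}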
|
Zhang, Chunyuan |
LCTES '21: "Automatic Mapping and Code ..."
Automatic Mapping and Code Optimization for OpenCL Kernels on FT-Matrix Architecture (WIP Paper)
Xiaolei Zhao, Mei Wen, Zhaoyun Chen, Yang Shi, and Chunyuan Zhang (National University of Defense Technology, China) FT-Matrix is a typical vector-SIMD architecture that refines the cooperation between scalar and vector units. The architecture is widely used in digital signal processing, high-performance computing, and artificial intelligence, among other fields. FT-Matrix currently adopts a C vector extension as its main programming model, improving SIMD utilization by providing an explicit vector-extension API. However, this makes it difficult to efficiently port the parallel programs (OpenCL, CUDA) that users have already adopted. This paper proposes an automatic mapping and code optimization method for OpenCL kernels on the FT-Matrix architecture. The proposed approach addresses these challenges by means of work-item coalescing, slicing and rotation, and instruction-level code optimization. Preliminary results show that our method achieves high performance and good hardware utilization for OpenCL kernels while lowering the difficulty of programming FT-Matrix. @InProceedings{LCTES21p37, author = {Xiaolei Zhao and Mei Wen and Zhaoyun Chen and Yang Shi and Chunyuan Zhang}, title = {Automatic Mapping and Code Optimization for OpenCL Kernels on FT-Matrix Architecture (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {37--41}, doi = {10.1145/3461648.3463845}, year = {2021}, } Publisher's Version |
|
Zhang, Lei |
LCTES '21: "Optimus: Towards Optimal Layer-Fusion ..."
Optimus: Towards Optimal Layer-Fusion on Deep Learning Processors
Xuyi Cai, Ying Wang, and Lei Zhang (Institute of Computing Technology at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China) Neural network layer fusion has been proposed to parallelize the inference of neural layers and thus significantly reduce the feature-induced memory accesses. However, how to fuse the neural layers is still a challenging issue that heavily depends on both the network architecture and the specific DNN processor configuration. This work formalizes the layer fusion problem for DNN processors, proves that prior fusion solutions cannot guarantee memory-level optimality, and presents a novel neural network fusion framework, Optimus. Optimus includes an accurate memory cost model to evaluate fusion schemes and a Computing-Graph (CG) based layer fusion algorithm, which generates high-efficiency layer-fusion schemes for arbitrary network architectures on DNN processors. The proposed off-line and on-line graph-based fusion algorithms reduce off-chip memory traffic by 10.1%-72.2% and obtain 1.71x-3.94x higher energy efficiency over SOTA baselines on DNN workloads, and they bring a significant power-efficiency boost to DNN processors of different architectures and dataflows. @InProceedings{LCTES21p67, author = {Xuyi Cai and Ying Wang and Lei Zhang}, title = {Optimus: Towards Optimal Layer-Fusion on Deep Learning Processors}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {67--79}, doi = {10.1145/3461648.3463848}, year = {2021}, } Publisher's Version Artifacts Reusable Results Reproduced LCTES '21: "CHaNAS: Coordinated Search ..." CHaNAS: Coordinated Search for Network Architecture and Scheduling Policy Weiwei Chen, Ying Wang, Gangliang Lin, Chengsi Gao, Cheng Liu, and Lei Zhang (Institute of Computing Technology at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China) The efficiency of an automatically designed DNN solution for a given deep learning task on target hardware is mainly decided by the neural network architecture and the schedule mapping strategy; the two are closely coupled with each other and must be co-optimized to fully exploit the advantages of the underlying hardware. Prior hardware-aware Neural Architecture Search (NAS) methods mostly ignore the impact of different scheduling policies (e.g., graph-level optimization, loop transformations, parallelization, etc.) on the network candidates being evaluated in the search process. Thus, they may miss the truly optimal architecture that can only be discovered by trying out different scheduling policies. This work proposes a NAS framework (CHaNAS) that searches for not only the network architecture but also the dedicated scheduling policy, as the optimal co-design solution on target hardware that fully exploits the advantages of the underlying hardware. We propose a block-based pre-scheduling methodology to reduce the co-design search space and to enable the automatic generation of the optimal co-design, including the network architecture and the tensor programs that implement the scheduling policy. We evaluate CHaNAS on ImageNet on different hardware back-ends against the state-of-the-art hardware-aware search method MobileNet-v3. Experimental results show that the co-design solutions obtained by CHaNAS achieve up to 1.6x, 1.9x, and 1.7x performance boosts on an NVIDIA P100 GPU, an Intel Xeon 8163 CPU, and a Samsung Note 10 Mobile, respectively, over baselines of the same-level accuracy. @InProceedings{LCTES21p42, author = {Weiwei Chen and Ying Wang and Gangliang Lin and Chengsi Gao and Cheng Liu and Lei Zhang}, title = {CHaNAS: Coordinated Search for Network Architecture and Scheduling Policy}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {42--53}, doi = {10.1145/3461648.3463846}, year = {2021}, } Publisher's Version |
|
Zhao, Xiaolei |
LCTES '21: "Automatic Mapping and Code ..."
Automatic Mapping and Code Optimization for OpenCL Kernels on FT-Matrix Architecture (WIP Paper)
Xiaolei Zhao, Mei Wen, Zhaoyun Chen, Yang Shi, and Chunyuan Zhang (National University of Defense Technology, China) FT-Matrix is a typical vector-SIMD architecture that refines the cooperation between scalar and vector units. The architecture is widely used in digital signal processing, high-performance computing, and artificial intelligence, among other fields. FT-Matrix currently adopts a C vector extension as its main programming model, improving SIMD utilization by providing an explicit vector-extension API. However, this makes it difficult to efficiently port the parallel programs (OpenCL, CUDA) that users have already adopted. This paper proposes an automatic mapping and code optimization method for OpenCL kernels on the FT-Matrix architecture. The proposed approach addresses these challenges by means of work-item coalescing, slicing and rotation, and instruction-level code optimization. Preliminary results show that our method achieves high performance and good hardware utilization for OpenCL kernels while lowering the difficulty of programming FT-Matrix. @InProceedings{LCTES21p37, author = {Xiaolei Zhao and Mei Wen and Zhaoyun Chen and Yang Shi and Chunyuan Zhang}, title = {Automatic Mapping and Code Optimization for OpenCL Kernels on FT-Matrix Architecture (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {37--41}, doi = {10.1145/3461648.3463845}, year = {2021}, } Publisher's Version |
|
Zhao, Yutong |
LCTES '21: "Better Atomic Writes by Exposing ..."
Better Atomic Writes by Exposing the Flash Out-of-Band Area to File Systems
Hongwei Qin, Dan Feng, Wei Tong, Yutong Zhao, Sheng Qiu, Fei Liu, and Shu Li (Huazhong University of Science and Technology, China; Alibaba Group, China) File systems for mobile devices usually preserve data consistency by ordered I/Os. However, maintaining I/O ordering prevents applications from fully exploiting device parallelism and thus degrades the storage performance. In this paper, we propose NBStack to eliminate ordered I/Os without compromising data consistency. First, we augment the existing block interface to expose the Flash out-of-band area to file systems. Second, we build an enhanced block device prototype that supports the new interface. Third, we develop NBFS, a Linux file system, that leverages the new block interface to achieve atomic writes without enforcing I/O orderings. Experimental results show that NBStack doubles the performance of F2FS while providing strong consistency and durability guarantees. If applications are willing to trade-off durability, NBStack can further aggressively improve performance. @InProceedings{LCTES21p12, author = {Hongwei Qin and Dan Feng and Wei Tong and Yutong Zhao and Sheng Qiu and Fei Liu and Shu Li}, title = {Better Atomic Writes by Exposing the Flash Out-of-Band Area to File Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {12--23}, doi = {10.1145/3461648.3463843}, year = {2021}, } Publisher's Version |
58 authors