LCTES 2019 – Author Index
Ahmed, Saad |
LCTES '19: "The Betrayal of Constant Power ..."
The Betrayal of Constant Power × Time: Finding the Missing Joules of Transiently-Powered Computers
Saad Ahmed, Abu Bakar, Naveed Anwar Bhatti, Muhammad Hamad Alizai, Junaid Haroon Siddiqui, and Luca Mottola (Lahore University of Management Sciences, Pakistan; RISE SICS, Sweden; Politecnico di Milano, Italy)
Transiently-powered computers (TPCs) lay the basis for a battery-less Internet of Things, using energy harvesting and small capacitors to power their operation. This power supply is characterized by extreme variations in supply voltage, as capacitors charge when harvesting energy and discharge when computing. We experimentally find that these variations cause marked fluctuations in clock speed and power consumption, which determine energy efficiency. We demonstrate that it is possible to accurately model and concretely capitalize on these fluctuations. We derive an energy model as a function of supply voltage and develop EPIC, a compile-time energy analysis tool. We use EPIC to substitute for the constant-power assumption in existing analysis techniques, giving programmers accurate information on the worst-case energy consumption of programs. When using EPIC with existing TPC system support, run-time energy efficiency drastically improves, eventually leading up to a 350% speedup in the time to complete a fixed workload. Further, when using EPIC with existing debugging tools, programmers avoid unnecessary program changes that hurt energy efficiency.
@InProceedings{LCTES19p97, author = {Saad Ahmed and Abu Bakar and Naveed Anwar Bhatti and Muhammad Hamad Alizai and Junaid Haroon Siddiqui and Luca Mottola}, title = {The Betrayal of Constant Power × Time: Finding the Missing Joules of Transiently-Powered Computers}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {97--109}, doi = {10.1145/3316482.3326348}, year = {2019}, }
LCTES '19: "Efficient Intermittent Computing ..."
Efficient Intermittent Computing with Differential Checkpointing
Saad Ahmed, Naveed Anwar Bhatti, Muhammad Hamad Alizai, Junaid Haroon Siddiqui, and Luca Mottola (Lahore University of Management Sciences, Pakistan; RISE SICS, Sweden; Politecnico di Milano, Italy)
Embedded devices running on ambient energy perform computations intermittently, depending upon energy availability. System support ensures forward progress of programs through state checkpointing in non-volatile memory. Checkpointing is, however, expensive in energy and adds to execution times. To reduce this overhead, we present DICE, a system design that efficiently achieves differential checkpointing in intermittent computing. Distinctive traits of DICE are its software-only nature and its ability to operate only in volatile main memory to determine differentials. DICE works with arbitrary programs using automatic code instrumentation, thus requiring no programmer intervention, and can be integrated with both reactive (Hibernus) and proactive (MementOS, HarvOS) checkpointing systems. By reducing the cost of checkpoints, performance markedly improves. For example, using DICE, Hibernus requires one order of magnitude less time to complete a fixed workload in real-world settings.
@InProceedings{LCTES19p70, author = {Saad Ahmed and Naveed Anwar Bhatti and Muhammad Hamad Alizai and Junaid Haroon Siddiqui and Luca Mottola}, title = {Efficient Intermittent Computing with Differential Checkpointing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {70--81}, doi = {10.1145/3316482.3326357}, year = {2019}, } |
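The central point of the EPIC paper, that energy is the integral of a voltage-dependent power draw rather than a constant power multiplied by time, can be illustrated with a small numerical sketch. The capacitor size, brown-out threshold, and power model below are invented for illustration; they are not the paper's measured coefficients.

# Illustrative sketch (not EPIC itself): energy of a code burst on a
# capacitor-powered MCU, comparing "constant power x time" with a
# voltage-dependent power model P(V). All constants are made up.
C = 100e-6                      # capacitor size in farads (assumed)
V0, V_off = 3.0, 1.8            # start voltage and brown-out threshold (assumed)

def power(v):
    # Hypothetical voltage-dependent draw: a higher supply voltage gives
    # a higher clock speed and a higher power draw.
    return 2e-3 * v * v         # watts

# Integrate the discharge dV/dt = -P(V)/(C*V) until brown-out.
dt, t, v, energy = 1e-4, 0.0, V0, 0.0
while v > V_off:
    p = power(v)
    energy += p * dt
    v -= p / (C * v) * dt
    t += dt

constant_estimate = power(V0) * t   # what "P x t" at the initial voltage would predict
print(f"burst length {t*1e3:.1f} ms, voltage-aware energy {energy*1e3:.3f} mJ, "
      f"constant-power estimate {constant_estimate*1e3:.3f} mJ")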
|
Alizai, Muhammad Hamad |
LCTES '19: "The Betrayal of Constant Power ..."
The Betrayal of Constant Power × Time: Finding the Missing Joules of Transiently-Powered Computers
Saad Ahmed, Abu Bakar, Naveed Anwar Bhatti, Muhammad Hamad Alizai, Junaid Haroon Siddiqui, and Luca Mottola. Proc. LCTES, ACM, pages 97--109, doi: 10.1145/3316482.3326348, 2019. Abstract and full citation appear under Ahmed, Saad.
LCTES '19: "On Intermittence Bugs in the ..."
On Intermittence Bugs in the Battery-Less Internet of Things (WIP Paper)
Andrea Maioli, Luca Mottola, Muhammad Hamad Alizai, and Junaid Haroon Siddiqui (Politecnico di Milano, Italy; RISE SICS, Sweden; Lahore University of Management Sciences, Pakistan)
The resource-constrained devices of the battery-less Internet of Things are powered off energy harvesting and compute intermittently, as energy is available. Forward progress of programs is ensured by creating persistent state. Mixed-volatile platforms are thus an asset, as they map slices of the address space onto non-volatile memory. However, these platforms also possibly introduce intermittence bugs, where intermittent and continuous executions differ. Our ongoing work on intermittence bugs includes (i) an analysis that demonstrates their presence in settings that current literature overlooks; (ii) the design of efficient testing techniques to check their presence in arbitrary code, which would be otherwise prohibitive given the sheer number of different executions to check; and (iii) the implementation of an offline tool called ScEpTIC that implements these techniques. ScEpTIC finds the same bugs as a brute-force approach, but is six orders of magnitude faster.
@InProceedings{LCTES19p203, author = {Andrea Maioli and Luca Mottola and Muhammad Hamad Alizai and Junaid Haroon Siddiqui}, title = {On Intermittence Bugs in the Battery-Less Internet of Things (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {203--207}, doi = {10.1145/3316482.3326346}, year = {2019}, }
LCTES '19: "Efficient Intermittent Computing ..."
Efficient Intermittent Computing with Differential Checkpointing
Saad Ahmed, Naveed Anwar Bhatti, Muhammad Hamad Alizai, Junaid Haroon Siddiqui, and Luca Mottola. Proc. LCTES, ACM, pages 70--81, doi: 10.1145/3316482.3326357, 2019. Abstract and full citation appear under Ahmed, Saad. |
|
Bakar, Abu |
LCTES '19: "The Betrayal of Constant Power ..."
The Betrayal of Constant Power × Time: Finding the Missing Joules of Transiently-Powered Computers
Saad Ahmed, Abu Bakar, Naveed Anwar Bhatti, Muhammad Hamad Alizai, Junaid Haroon Siddiqui, and Luca Mottola. Proc. LCTES, ACM, pages 97--109, doi: 10.1145/3316482.3326348, 2019. Abstract and full citation appear under Ahmed, Saad. |
|
Becker, Martin |
LCTES '19: "Imprecision in WCET Estimates ..."
Imprecision in WCET Estimates Due to Library Calls and How to Reduce It (WIP Paper)
Martin Becker, Samarjit Chakraborty, Ravindra Metta, and R. Venkatesh (TU Munich, Germany; TCS Research, India) One of the main difficulties in estimating the Worst Case Execution Time (WCET) at the binary level is that machine instructions do not allow inferring call contexts as precisely as source code, since compiler optimizations obfuscate control flow and type information. On the other hand, WCET estimation at source code level can be precise in tracking call contexts, but it is pessimistic for functions that are not available as source code. In this paper we propose approaches to join binary-level and source-level analyses, to get the best out of both. We present the arising problems in detail, evaluate the approaches qualitatively, and highlight their trade-offs. @InProceedings{LCTES19p208, author = {Martin Becker and Samarjit Chakraborty and Ravindra Metta and R. Venkatesh}, title = {Imprecision in WCET Estimates Due to Library Calls and How to Reduce It (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {208--212}, doi = {10.1145/3316482.3326353}, year = {2019}, } Publisher's Version |
|
Bhatti, Naveed Anwar |
LCTES '19: "The Betrayal of Constant Power ..."
The Betrayal of Constant Power × Time: Finding the Missing Joules of Transiently-Powered Computers
Saad Ahmed, Abu Bakar, Naveed Anwar Bhatti, Muhammad Hamad Alizai, Junaid Haroon Siddiqui, and Luca Mottola. Proc. LCTES, ACM, pages 97--109, doi: 10.1145/3316482.3326348, 2019. Abstract and full citation appear under Ahmed, Saad.
LCTES '19: "Efficient Intermittent Computing ..."
Efficient Intermittent Computing with Differential Checkpointing
Saad Ahmed, Naveed Anwar Bhatti, Muhammad Hamad Alizai, Junaid Haroon Siddiqui, and Luca Mottola. Proc. LCTES, ACM, pages 70--81, doi: 10.1145/3316482.3326357, 2019. Abstract and full citation appear under Ahmed, Saad. |
|
Brihadiswarn, Gunavaran |
LCTES '19: "Crash Recoverable ARMv8-Oriented ..."
Crash Recoverable ARMv8-Oriented B+-Tree for Byte-Addressable Persistent Memory
Chundong Wang, Sudipta Chattopadhyay, and Gunavaran Brihadiswarn (Singapore University of Technology and Design, Singapore; University of Moratuwa, Sri Lanka) The byte-addressable non-volatile memory (NVM) promises persistent memory. Concretely, ARM processors have incorporated architectural supports to utilize NVM. In this paper, we consider tailoring the important B+-tree for NVM operated by a 64-bit ARMv8 processor. We first conduct an empirical study of performance overheads in writing and reading data for a B+-tree with an ARMv8 processor, including the time cost of cache line flushes and memory fences for crash consistency as well as the execution time of binary search compared to that of linear search. We hence identify the key weaknesses in the design of B+-tree with ARMv8 architecture. Accordingly, we develop a new B+-tree variant, namely, crash recoverable ARMv8-oriented B+-tree (Crab-tree). To insert and delete data at runtime, Crab-tree selectively chooses one of two strategies, i.e., copy on write and shifting in place, depending on which one causes less consistency cost to performance. Crab-tree regulates a strict execution order in both strategies and recovers the tree structure in case of crashes. We have evaluated Crab-tree in Raspberry Pi 3 Model B+ with emulated NVM. Experiments show that Crab-tree significantly outperforms state-of-the-art B+-trees designed for persistent memory by up to 2.6x and 3.2x in write and read performances, respectively, with both consistency and scalability achieved. @InProceedings{LCTES19p33, author = {Chundong Wang and Sudipta Chattopadhyay and Gunavaran Brihadiswarn}, title = {Crash Recoverable ARMv8-Oriented B+-Tree for Byte-Addressable Persistent Memory}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {33--44}, doi = {10.1145/3316482.3326358}, year = {2019}, } Publisher's Version |
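As a rough illustration of the choice Crab-tree makes between copy-on-write and shifting in place, the sketch below compares how many cache-line flushes each strategy would incur for one insertion into a sorted node, under a deliberately simplified cost model (node size, key size, and the flush-counting rule are assumptions, not the paper's exact model):

# Simplified cost comparison (assumed model): inserting one key into a
# sorted B+-tree node stored in persistent memory, where crash
# consistency requires flushing every modified cache line.
CACHE_LINE = 64      # bytes
KEY_SIZE   = 8       # bytes per key (assumed)

def flushes_shift_in_place(node_keys, insert_pos):
    # Shifting moves every key at or after insert_pos by one slot;
    # all cache lines covering the shifted region must be flushed.
    moved_bytes = (node_keys - insert_pos + 1) * KEY_SIZE
    return -(-moved_bytes // CACHE_LINE)          # ceiling division

def flushes_copy_on_write(node_keys):
    # Copy-on-write rewrites the whole node elsewhere, then flips a
    # pointer: flush the new node plus one line for the pointer update.
    node_bytes = (node_keys + 1) * KEY_SIZE
    return -(-node_bytes // CACHE_LINE) + 1

for pos in (0, 16, 30):
    sip = flushes_shift_in_place(31, pos)
    cow = flushes_copy_on_write(31)
    print(f"insert at slot {pos:2d}: shift-in-place {sip} flushes, "
          f"copy-on-write {cow} flushes -> pick "
          f"{'shift' if sip <= cow else 'CoW'}")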
|
Burns, Alan |
LCTES '19: "From Java to Real-Time Java: ..."
From Java to Real-Time Java: A Model-Driven Methodology with Automated Toolchain (Invited Paper)
Wanli Chang, Shuai Zhao, Ran Wei, Andy Wellings, and Alan Burns (University of York, UK) Real-time systems are receiving increasing attention with the emerging application scenarios that are safety-critical, complex in functionality, high on timing-related performance requirements, and cost-sensitive, such as autonomous vehicles. Development of real-time systems is error-prone and highly dependent on the sophisticated domain expertise, making it a costly process. There is a trend of the existing software without the real-time notion being re-developed to realise real-time features, e.g., in the big data technology. This paper utilises the principles of model-driven engineering (MDE) and proposes the first methodology that automatically converts standard time-sharing Java applications to real-time Java applications. It opens up a new research direction on development automation of real-time programming languages and inspires many research questions that can be jointly investigated by the embedded systems, programming languages as well as MDE communities. @InProceedings{LCTES19p123, author = {Wanli Chang and Shuai Zhao and Ran Wei and Andy Wellings and Alan Burns}, title = {From Java to Real-Time Java: A Model-Driven Methodology with Automated Toolchain (Invited Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {123--134}, doi = {10.1145/3316482.3326360}, year = {2019}, } Publisher's Version |
|
Cai, Haipeng |
LCTES '19: "An Empirical Comparison between ..."
An Empirical Comparison between Monkey Testing and Human Testing (WIP Paper)
Mostafa Mohammed, Haipeng Cai, and Na Meng (Virginia Tech, USA; Washington State University, USA) Android app testing is challenging and time-consuming because fully testing all feasible execution paths is difficult. Nowadays apps are usually tested in two ways: human testing or automated testing. Prior work compared different automated tools. However, some fundamental questions are still unexplored, including (1) how automated testing behaves differently from human testing, and (2) whether automated testing can fully or partially substitute human testing. This paper presents our study to explore the open questions. Monkey has been considered one of the best automated testing tools due to its usability, reliability, and competitive coverage metrics, so we applied Monkey to five Android apps and collected their dynamic event traces. Meanwhile, we recruited eight users to manually test the same apps and gathered the traces. By comparing the collected data, we revealed that i.) on average, the two methods generated similar numbers of unique events; ii.) Monkey created more system events while humans created more UI events; iii.) Monkey could mimic human behaviors when apps have UIs full of clickable widgets to trigger logically independent events; and iv.) Monkey was insufficient to test apps that require information comprehension and problem-solving skills. Our research sheds light on future research that combines human expertise with the agility of Monkey testing. @InProceedings{LCTES19p188, author = {Mostafa Mohammed and Haipeng Cai and Na Meng}, title = {An Empirical Comparison between Monkey Testing and Human Testing (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {188--192}, doi = {10.1145/3316482.3326342}, year = {2019}, } Publisher's Version |
|
Campbell, David |
LCTES '19: "PANDORA: A Parallelizing Approximation-Discovery ..."
PANDORA: A Parallelizing Approximation-Discovery Framework (WIP Paper)
Greg Stitt and David Campbell (University of Florida, USA) In this paper, we introduce PANDORA---a framework that complements existing parallelizing compilers by automatically discovering application- and architecture-specialized approximations. We demonstrate that PANDORA creates approximations that extract massive amounts of parallelism from inherently sequential code by eliminating loop-carried dependencies---a long-time goal of the compiler research community. Compared to exact parallel baselines, preliminary results show speedups ranging from 2.3x to 81x with acceptable error for many usage scenarios. @InProceedings{LCTES19p198, author = {Greg Stitt and David Campbell}, title = {PANDORA: A Parallelizing Approximation-Discovery Framework (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {198--202}, doi = {10.1145/3316482.3326345}, year = {2019}, } Publisher's Version |
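The kind of transformation PANDORA automates, trading a loop-carried dependency for an approximation that exposes parallelism, can be pictured with a deliberately small example. The moving-average kernel and the window-truncation approximation below are our own toy case, not one of the paper's benchmarks:

# Toy example of approximation-enabled parallelism (not from PANDORA):
# an exponential moving average y[i] = a*x[i] + (1-a)*y[i-1] carries a
# dependency across iterations. Because (1-a)^k decays quickly, each
# output can be approximated from a short window of recent inputs,
# which removes the dependency and lets iterations run independently.
from concurrent.futures import ThreadPoolExecutor

A = 0.5
def exact_ema(xs):
    y, out = 0.0, []
    for x in xs:
        y = A * x + (1 - A) * y      # loop-carried dependency on y
        out.append(y)
    return out

def approx_ema_at(xs, i, window=16):
    lo = max(0, i - window + 1)
    y = 0.0
    for x in xs[lo:i + 1]:           # independent of every other output
        y = A * x + (1 - A) * y
    return y

xs = [float((7 * i) % 13) for i in range(10_000)]
with ThreadPoolExecutor() as pool:   # iterations are now embarrassingly parallel
    approx = list(pool.map(lambda i: approx_ema_at(xs, i), range(len(xs))))
err = max(abs(a - b) for a, b in zip(exact_ema(xs), approx))
print(f"max absolute error of the approximation: {err:.2e}")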
|
Castrillon, Jeronimo |
LCTES '19: "Optimizing Tensor Contractions ..."
Optimizing Tensor Contractions for Embedded Devices with Racetrack Memory Scratch-Pads
Asif Ali Khan, Norman A. Rink, Fazal Hameed, and Jeronimo Castrillon (TU Dresden, Germany) Tensor contraction is a fundamental operation in many algorithms with a plethora of applications ranging from quantum chemistry over fluid dynamics and image processing to machine learning. The performance of tensor computations critically depends on the efficient utilization of on-chip memories. In the context of low-power embedded devices, efficient management of the memory space becomes even more crucial, in order to meet energy constraints. This work aims at investigating strategies for performance- and energy-efficient tensor contractions on embedded systems, using racetrack memory (RTM)-based scratch-pad memory (SPM). Compiler optimizations such as the loop access order and data layout transformations paired with architectural optimizations such as prefetching and preshifting are employed to reduce the shifting overhead in RTMs. Experimental results demonstrate that the proposed optimizations improve the SPM performance and energy consumption by 24% and 74% respectively compared to an iso-capacity SRAM. @InProceedings{LCTES19p5, author = {Asif Ali Khan and Norman A. Rink and Fazal Hameed and Jeronimo Castrillon}, title = {Optimizing Tensor Contractions for Embedded Devices with Racetrack Memory Scratch-Pads}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {5--18}, doi = {10.1145/3316482.3326351}, year = {2019}, } Publisher's Version |
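For readers unfamiliar with the operation, a tensor contraction is a generalized matrix multiply, and the loop order in the sketch below is exactly the kind of knob that loop-access-order and data-layout transformations tune; on a racetrack-memory scratchpad the resulting access pattern determines the shift count. Sizes and values are arbitrary:

# A tensor contraction C[i,j] = sum over k,m of A[i,k,m] * B[k,j,m],
# written as an explicit loop nest. Reordering the i/j/k/m loops (or
# transposing the operand layouts) changes the access pattern, which on
# a racetrack-memory scratchpad translates into more or fewer port shifts.
I, J, K, M = 4, 3, 5, 2
A = [[[(i + k + m) % 7 for m in range(M)] for k in range(K)] for i in range(I)]
B = [[[(k * j + m) % 5 for m in range(M)] for j in range(J)] for k in range(K)]
C = [[0 for _ in range(J)] for _ in range(I)]

for i in range(I):           # loop order i-j-k-m: A is walked contiguously,
    for j in range(J):       # while B is re-read J times; swapping loops
        for k in range(K):   # changes which operand pays the non-sequential
            for m in range(M):   # accesses.
                C[i][j] += A[i][k][m] * B[k][j][m]
print(C)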
|
Chakraborty, Samarjit |
LCTES '19: "Imprecision in WCET Estimates ..."
Imprecision in WCET Estimates Due to Library Calls and How to Reduce It (WIP Paper)
Martin Becker, Samarjit Chakraborty, Ravindra Metta, and R. Venkatesh. Proc. LCTES, ACM, pages 208--212, doi: 10.1145/3316482.3326353, 2019. Abstract and full citation appear under Becker, Martin. |
|
Chang, Wanli |
LCTES '19: "From Java to Real-Time Java: ..."
From Java to Real-Time Java: A Model-Driven Methodology with Automated Toolchain (Invited Paper)
Wanli Chang, Shuai Zhao, Ran Wei, Andy Wellings, and Alan Burns. Proc. LCTES, ACM, pages 123--134, doi: 10.1145/3316482.3326360, 2019. Abstract and full citation appear under Burns, Alan. |
|
Chattopadhyay, Sudipta |
LCTES '19: "Crash Recoverable ARMv8-Oriented ..."
Crash Recoverable ARMv8-Oriented B+-Tree for Byte-Addressable Persistent Memory
Chundong Wang, Sudipta Chattopadhyay, and Gunavaran Brihadiswarn. Proc. LCTES, ACM, pages 33--44, doi: 10.1145/3316482.3326358, 2019. Abstract and full citation appear under Brihadiswarn, Gunavaran. |
|
Chen, Shuo-Han |
LCTES '19: "1+1>2: Variation-Aware ..."
1+1>2: Variation-Aware Lifetime Enhancement for Embedded 3D NAND Flash Systems
Yejia Di, Liang Shi, Shuo-Han Chen, Chun Jason Xue, and Edwin H.-M. Sha (East China Normal University, China; Chongqing University, China; Academia Sinica, Taiwan; City University of Hong Kong, China) Three-dimensional (3D) NAND flash has been developed to boost the storage capacity by stacking memory cells vertically. One critical characteristic of 3D NAND flash is its large endurance variation. With this characteristic, the lifetime will be determined by the unit with the worst endurance. However, few works can exploit the variations with acceptable overhead for lifetime improvement. In this paper, a variation-aware lifetime improvement framework is proposed. The basic idea is motivated by an observation that there is an elegant matching between unit endurance and wearing variations when wear leveling and implicit compression are applied together. To achieve the matching goal, the framework is designed from three-type-unit levels, including cell, line, and block, respectively. Series of evaluations are conducted, and the evaluation results show that the lifetime improvement is encouraging, better than that of the combination with the state-of-the-art schemes. @InProceedings{LCTES19p45, author = {Yejia Di and Liang Shi and Shuo-Han Chen and Chun Jason Xue and Edwin H.-M. Sha}, title = {1+1>2: Variation-Aware Lifetime Enhancement for Embedded 3D NAND Flash Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {45--56}, doi = {10.1145/3316482.3326359}, year = {2019}, } Publisher's Version |
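A minimal sketch of the intuition behind variation-aware wear leveling, namely directing more writes to blocks with higher estimated endurance so that all blocks exhaust their endurance together, using made-up endurance figures (this is not the paper's framework, which additionally couples wear leveling with implicit compression):

# Toy variation-aware allocator (illustration only): each 3D NAND block
# has a different estimated endurance; always write to the block with
# the largest remaining fraction of its own endurance, so the weakest
# blocks are not the first to die.
blocks = {"b0": 3000, "b1": 1500, "b2": 800, "b3": 2500}   # assumed P/E-cycle endurance
used = {b: 0 for b in blocks}

def pick_block():
    return max(blocks, key=lambda b: (blocks[b] - used[b]) / blocks[b])

for _ in range(5000):                 # simulate 5000 block writes
    used[pick_block()] += 1

for b in blocks:
    print(f"{b}: {used[b]:4d} writes, {used[b] / blocks[b]:.0%} of its endurance consumed")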
|
Chiang, Nicholas |
LCTES '19: "Automating the Generation ..."
Automating the Generation of Hardware Component Knowledge Bases
Luke Hsiao, Sen Wu, Nicholas Chiang, Christopher Ré, and Philip Levis (Stanford University, USA; Gunn High School, USA) Hardware component databases are critical resources in designing embedded systems. Since generating these databases requires hundreds of thousands of hours of manual data entry, they are proprietary, limited in the data they provide, and have many random data entry errors. We present a machine-learning based approach for automating the generation of component databases directly from datasheets. Extracting data directly from datasheets is challenging because: (1) the data is relational in nature and relies on non-local context, (2) the documents are filled with technical jargon, and (3) the datasheets are PDFs, a format that decouples visual locality from locality in the document. The proposed approach uses a rich data model and weak supervision to address these challenges. We evaluate the approach on datasheets of three classes of hardware components and achieve an average quality of 75 F1 points which is comparable to existing human-curated knowledge bases. We perform two applications studies that demonstrate the extraction of multiple data modalities such as numerical properties and images. We show how different sources of supervision such as heuristics and human labels have distinct advantages which can be utilized together within a single methodology to automatically generate hardware component knowledge bases. @InProceedings{LCTES19p163, author = {Luke Hsiao and Sen Wu and Nicholas Chiang and Christopher Ré and Philip Levis}, title = {Automating the Generation of Hardware Component Knowledge Bases}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {163--176}, doi = {10.1145/3316482.3326344}, year = {2019}, } Publisher's Version Artifacts Reusable Results Replicated |
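The weak-supervision ingredient can be pictured as a set of small labeling heuristics that vote on candidate attribute values extracted from datasheet text. The snippet below is a generic sketch of that pattern; the heuristics and the candidate are invented, and it is not the authors' code:

# Generic weak-supervision sketch: several cheap heuristics label a
# candidate "maximum storage temperature" value pulled from datasheet
# text; a majority vote stands in for the learned model that combines
# labeling sources in practice.
import re

def lf_has_unit(cand):          # +1 if the value carries a Celsius unit
    return 1 if re.search(r"°\s*C", cand["text"]) else 0

def lf_plausible_range(cand):   # +1 if the number is physically plausible
    return 1 if -65 <= cand["value"] <= 200 else -1

def lf_near_keyword(cand):      # +1 if "storage temperature" appears nearby
    return 1 if "storage temperature" in cand["context"].lower() else 0

def label(cand):
    votes = lf_has_unit(cand) + lf_plausible_range(cand) + lf_near_keyword(cand)
    return "accept" if votes >= 2 else "reject"

candidate = {
    "text": "150 °C",
    "value": 150,
    "context": "Storage temperature range: -65 to 150 °C",
}
print(label(candidate))         # -> accept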
|
Dadzie, Thomas Haywood |
LCTES '19: "SA-SPM: An Efficient Compiler ..."
SA-SPM: An Efficient Compiler for Security Aware Scratchpad Memory (Invited Paper)
Thomas Haywood Dadzie, Jiwon Lee, Jihye Kim, and Hyunok Oh (Hanyang University, South Korea; Kookmin University, South Korea) Scratchpad memories (SPM) are often used to boost the performance of application-specific embedded systems. In embedded systems, main memories are vulnerable to external attacks such as bus snooping or memory extraction. Therefore it is desirable to guarantee the security of data in a main memory. In software-managed SPM, it is possible to provide security in main memory by performing software-assisted encryption. In this paper, we present an efficient compiler for security-aware scratchpad memory (SA-SPM), which ensures the security of main memories in SPM-based embedded systems. Our compiler is the first approach to support full encryption of memory regions (i.e. stack, heap, code, and static variables) in an SPM-based system. Furthermore, to reduce the energy consumption and improve the lifetime of a non-volatile main memory by decreasing the number of bit flips, we propose a new dual encryption scheme for an SPM-based system. Our experimental results show that the proposed dual encryption scheme reduces the number of bit flips by 31.8% compared with whole encryption. @InProceedings{LCTES19p57, author = {Thomas Haywood Dadzie and Jiwon Lee and Jihye Kim and Hyunok Oh}, title = {SA-SPM: An Efficient Compiler for Security Aware Scratchpad Memory (Invited Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {57--69}, doi = {10.1145/3316482.3326347}, year = {2019}, } |
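Why bit flips matter for a non-volatile main memory, and why the choice of ciphertext can change how many occur, can be seen in a very small sketch. The XOR mask below is a stand-in used only to count Hamming distances; it is not the encryption used by SA-SPM:

# Toy illustration (not the SA-SPM scheme): writing a new ciphertext over
# an old one costs energy per bit that actually flips in the NVM cells,
# so between two encodings of the same plaintext the one closer in
# Hamming distance to the old contents is cheaper to store.
def bit_flips(old: bytes, new: bytes) -> int:
    return sum(bin(a ^ b).count("1") for a, b in zip(old, new))

def xor_mask(data: bytes, mask: int) -> bytes:     # stand-in "encryption"
    return bytes(b ^ mask for b in data)

old_ciphertext = xor_mask(b"sensor reading 41", 0x5A)
plaintext      = b"sensor reading 42"
option_a = xor_mask(plaintext, 0x5A)   # same mask as before
option_b = xor_mask(plaintext, 0xA5)   # alternative mask

print("flips if re-encrypted with mask 0x5A:", bit_flips(old_ciphertext, option_a))
print("flips if re-encrypted with mask 0xA5:", bit_flips(old_ciphertext, option_b))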
|
Daruwalla, Kyle |
LCTES '19: "BitBench: A Benchmark for ..."
BitBench: A Benchmark for Bitstream Computing
Kyle Daruwalla, Heng Zhuo, Carly Schulz, and Mikko Lipasti (University of Wisconsin-Madison, USA) With the recent increase in ultra-low power applications, researchers are investigating alternative architectures that can operate on streaming input data. These target use cases require complex algorithms that must be evaluated under a real-time deadline, but also satisfy the strict available power budget. Stochastic computing (SC) is an example of an alternative paradigm where the data is represented as single bitstreams, allowing designers to implement operations such as multiplication using a simple AND gate. Consequently, the resulting design is both low area and low power. Similarly, traditional digital filters can take advantage of streaming inputs to effectively choose coefficients, resulting in a low cost implementation. In this work, we construct six key algorithms to characterize bitstream computing. We present these algorithms as a new benchmark suite: BitBench. @InProceedings{LCTES19p177, author = {Kyle Daruwalla and Heng Zhuo and Carly Schulz and Mikko Lipasti}, title = {BitBench: A Benchmark for Bitstream Computing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {177--187}, doi = {10.1145/3316482.3326355}, year = {2019}, } Publisher's Version Artifacts Functional Results Replicated |
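The AND-gate multiplication mentioned in the abstract is easy to reproduce: if two independent bitstreams carry values p and q as their densities of 1-bits, their bitwise AND carries approximately p*q. A short sketch with arbitrarily chosen values:

# Stochastic-computing multiply: encode values in [0, 1] as the density
# of 1s in a random bitstream; a bitwise AND of two independent streams
# then encodes (approximately) the product of the two values.
import random

random.seed(7)
N = 100_000                                   # bitstream length (longer = more accurate)

def encode(p):    return [1 if random.random() < p else 0 for _ in range(N)]
def decode(bits): return sum(bits) / len(bits)

a, b = 0.6, 0.3
product_stream = [x & y for x, y in zip(encode(a), encode(b))]   # the "AND gate"
print(f"expected {a * b:.3f}, stochastic estimate {decode(product_stream):.3f}")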
|
Das, Sourav |
LCTES '19: "SHAKTI-MS: A RISC-V Processor ..."
SHAKTI-MS: A RISC-V Processor for Memory Safety in C
Sourav Das, R. Harikrishnan Unnithan, Arjun Menon, Chester Rebeiro, and Kamakoti Veezhinathan (IIT Madras, India; BITS Pilani, India) In this era of IoT devices, security is very often traded off for smaller device footprint and low power consumption. Considering the exponentially growing security threats of IoT and cyber-physical systems, it is important that these devices have built-in features that enhance security. In this paper, we present Shakti-MS, a lightweight RISC-V processor with built-in support for both temporal and spatial memory protection. At run time, Shakti-MS can detect and stymie memory misuse in C and C++ programs, with minimum runtime overheads. The solution uses a novel implementation of fat-pointers to efficiently detect misuse of pointers at runtime. Our proposal is to use stack-based cookies for crafting fat-pointers instead of having object-based identifiers. We store the fat-pointer on the stack, which eliminates the use of shadow memory space, or any table to store the pointer metadata. This reduces the storage overheads by a great extent. The cookie also helps to preserve control flow of the program by ensuring that the return address never gets modified by vulnerabilities like buffer overflows. Shakti-MS introduces new instructions in the microprocessor hardware, and also a modified compiler that automatically inserts these new instructions to enable memory protection. This co-design approach is intended to reduce runtime and area overheads, and also provides an end-to-end solution. The hardware has an area overhead of 700 LUTs on a Xilinx Virtex Ultrascale FPGA and 4100 cells on an open 55nm technology node. The clock frequency of the processor is not affected by the security extensions, while there is a marginal increase in the code size by 11% with an average runtime overhead of 13%. @InProceedings{LCTES19p19, author = {Sourav Das and R. Harikrishnan Unnithan and Arjun Menon and Chester Rebeiro and Kamakoti Veezhinathan}, title = {SHAKTI-MS: A RISC-V Processor for Memory Safety in C}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {19--32}, doi = {10.1145/3316482.3326356}, year = {2019}, } Publisher's Version Artifacts Functional Results Replicated |
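The fat-pointer idea, carrying a base, a bound, and a validity cookie with every pointer so that each dereference can be checked, can be mimicked at a high level in software. The sketch below is only a conceptual model of such checks; SHAKTI-MS itself realizes them through new processor instructions and compiler-inserted code:

# Conceptual model of a fat pointer with bounds and a liveness cookie
# (a software analogue of the checks SHAKTI-MS enforces in hardware).
class MemorySafetyError(Exception):
    pass

class FatPointer:
    def __init__(self, buffer, base, length, cookie):
        self.buffer, self.base, self.length, self.cookie = buffer, base, length, cookie

    def load(self, offset, live_cookies):
        if self.cookie not in live_cookies:              # temporal safety: freed frame
            raise MemorySafetyError("use-after-free detected")
        if not 0 <= offset < self.length:                # spatial safety: bounds check
            raise MemorySafetyError("out-of-bounds access detected")
        return self.buffer[self.base + offset]

heap = [0] * 64
live = {0xBEEF}                      # cookies of allocations / frames still alive
p = FatPointer(heap, base=8, length=4, cookie=0xBEEF)

print(p.load(3, live))               # in bounds, cookie valid: fine
try:
    p.load(4, live)                  # one past the end: trapped
except MemorySafetyError as e:
    print("trapped:", e)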
|
De Silva, Himeshi |
LCTES '19: "ApproxSymate: Path Sensitive ..."
ApproxSymate: Path Sensitive Program Approximation using Symbolic Execution
Himeshi De Silva, Andrew E. Santosa, Nhut-Minh Ho, and Weng-Fai Wong (National University of Singapore, Singapore) Approximate computing, a technique that forgoes quantifiable output accuracy in favor of performance gains, is useful for improving the energy efficiency of error-resilient software, especially in the embedded setting. The identification of program components that can tolerate error plays a crucial role in balancing the energy vs. accuracy trade off in approximate computing. Manual analysis for approximability is not scalable and therefore automated tools which employ static or dynamic analysis have been proposed. However, static techniques are often coarse in their approximations while dynamic efforts incur high overhead. In this work we present ApproxSymate, a framework for automatically identifying program approximations using symbolic execution. ApproxSymate first statically computes symbolic error expressions for program components and then uses a dynamic sensitivity analysis to compute their approximability. A unique feature of this tool is that it explores the previously not considered dimension of program path for approximation which enables safer transformations. Our evaluation shows that ApproxSymate averages about 96% accuracy in identifying the same approximations found in manually annotated benchmarks, outperforming existing automated techniques. @InProceedings{LCTES19p148, author = {Himeshi De Silva and Andrew E. Santosa and Nhut-Minh Ho and Weng-Fai Wong}, title = {ApproxSymate: Path Sensitive Program Approximation using Symbolic Execution}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {148--162}, doi = {10.1145/3316482.3326341}, year = {2019}, } Publisher's Version Artifacts Functional Results Replicated |
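The dynamic sensitivity step can be pictured as perturbing one program component, observing how much the output changes, and marking components with low sensitivity as approximable. The program and threshold below are a generic numeric sketch of that idea, not ApproxSymate's symbolic machinery:

# Generic sensitivity probe (illustration only): estimate how sensitive a
# program's output is to error injected at each intermediate value, then
# rank the intermediates by approximability.
def program(x, noise=(0.0, 0.0)):
    t1 = x * x + noise[0]          # candidate component 1
    t2 = 0.001 * t1 + noise[1]     # candidate component 2 (small weight downstream)
    return t2 + 10.0

x, eps = 3.0, 0.5
baseline = program(x)
for i, name in enumerate(["t1", "t2"]):
    noise = [0.0, 0.0]; noise[i] = eps
    sensitivity = abs(program(x, tuple(noise)) - baseline) / eps
    print(f"{name}: sensitivity {sensitivity:.3f} "
          f"({'approximable' if sensitivity < 0.1 else 'keep exact'})")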
|
Di, Yejia |
LCTES '19: "1+1>2: Variation-Aware ..."
1+1>2: Variation-Aware Lifetime Enhancement for Embedded 3D NAND Flash Systems
Yejia Di, Liang Shi, Shuo-Han Chen, Chun Jason Xue, and Edwin H.-M. Sha. Proc. LCTES, ACM, pages 45--56, doi: 10.1145/3316482.3326359, 2019. Abstract and full citation appear under Chen, Shuo-Han. |
|
Gupta, Rajesh K. |
LCTES '19: "New Models and Methods for ..."
New Models and Methods for Programming Cyber-Physical Systems (Keynote)
Rajesh K. Gupta, Jason Koh, and Dezhi Hong (University of California at San Diego, USA) Emerging cyber-physical systems are distributed systems in constant interaction with their physical environments through sensing and actuation at network edges. Over the past decade, the embedded and control systems community have vigorously pursued a vision of coupled feedback-controlled systems with a broad range of real-life applications from transportation, smart buildings to human health. These efforts have continued to push intelligent processing to edge and near-edge devices, provide new capabilities for improved sensing with high quality timing information, establish limits on the quality of time and its impact on the stability of control algorithms etc. It is now time to put these capabilities to use through the emerging “stack” of capabilities, software and systems for emerging applications such as interactive spaces, buildings, smart cities etc. In this talk I will review our efforts related to pushing intelligent processing to edge or near-edge devices, our strategies to lighten the computational and memory demands of recognition tasks, and strategies to ensure high quality of timing information. I will focus on detailing our vision of how we can treat physical spaces and built environments as consisting of sensing, actuation, processing and communication resources that are dynamically discovered and put to use through emerging meta-data schema and methods. The talk represents ongoing work under the CONIX center (conix.io) and BRICK schema consortium (brickschema.org) @InProceedings{LCTES19p1, author = {Rajesh K. Gupta and Jason Koh and Dezhi Hong}, title = {New Models and Methods for Programming Cyber-Physical Systems (Keynote)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {1--3}, doi = {10.1145/3316482.3338093}, year = {2019}, } Publisher's Version |
|
Hameed, Fazal |
LCTES '19: "Optimizing Tensor Contractions ..."
Optimizing Tensor Contractions for Embedded Devices with Racetrack Memory Scratch-Pads
Asif Ali Khan, Norman A. Rink, Fazal Hameed, and Jeronimo Castrillon. Proc. LCTES, ACM, pages 5--18, doi: 10.1145/3316482.3326351, 2019. Abstract and full citation appear under Castrillon, Jeronimo. |
|
Hong, Dezhi |
LCTES '19: "New Models and Methods for ..."
New Models and Methods for Programming Cyber-Physical Systems (Keynote)
Rajesh K. Gupta, Jason Koh, and Dezhi Hong. Proc. LCTES, ACM, pages 1--3, doi: 10.1145/3316482.3338093, 2019. Abstract and full citation appear under Gupta, Rajesh K. |
|
Ho, Nhut-Minh |
LCTES '19: "ApproxSymate: Path Sensitive ..."
ApproxSymate: Path Sensitive Program Approximation using Symbolic Execution
Himeshi De Silva, Andrew E. Santosa, Nhut-Minh Ho, and Weng-Fai Wong. Proc. LCTES, ACM, pages 148--162, doi: 10.1145/3316482.3326341, 2019. Abstract and full citation appear under De Silva, Himeshi. |
|
Hsiao, Luke |
LCTES '19: "Automating the Generation ..."
Automating the Generation of Hardware Component Knowledge Bases
Luke Hsiao, Sen Wu, Nicholas Chiang, Christopher Ré, and Philip Levis. Proc. LCTES, ACM, pages 163--176, doi: 10.1145/3316482.3326344, 2019. Abstract and full citation appear under Chiang, Nicholas. |
|
Kang, Seokwon |
LCTES '19: "A Compiler-Based Approach ..."
A Compiler-Based Approach for GPGPU Performance Calibration using TLP Modulation (WIP Paper)
Yongseung Yu, Seokwon Kang, and Yongjun Park (Hanyang University, South Korea) Modern GPUs are the most successful accelerators as they provide outstanding performance gain by using CUDA or OpenCL programming models. For maximum performance, programmers typically try to maximize the number of thread blocks of target programs, and GPUs also generally attempt to allocate the maximum number of thread blocks to their GPU cores. However, many recent studies have pointed out that simply allocating the maximum number of thread blocks to GPU cores does not always guarantee the best performance, and identifying proper number of thread blocks per GPU core is a major challenge. Despite these studies, most existing architectural techniques cannot be directly applied to current GPU hardware, and the optimal number of thread blocks can vary significantly depending on the target GPU and application characteristics. To solve these problems, this study proposes a just-in-time thread block number adjustment system using CUDA binary modification upon an LLVM compiler framework, referred to as the CTA-Limiter, in order to dynamically maximize GPU performance on real GPUs without reprogramming. The framework gradually reduces the number of concurrent thread blocks of target CUDA workloads using extra shared memory allocation, and compares the execution time with the previous version to automatically identify the optimal number of co-running thread blocks per GPU Core. The results showed meaningful performance improvements, averaging at 30%, 40%, and 44%, in GTX 960, GTX 1050, and GTX 1080 Ti, respectively. @InProceedings{LCTES19p193, author = {Yongseung Yu and Seokwon Kang and Yongjun Park}, title = {A Compiler-Based Approach for GPGPU Performance Calibration using TLP Modulation (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {193--197}, doi = {10.1145/3316482.3326343}, year = {2019}, } Publisher's Version |
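The lever the CTA-Limiter pulls, extra shared-memory allocation that caps how many thread blocks co-reside on a GPU core, follows from how residency is computed from per-block resource usage. The hardware limits below are illustrative assumptions, not the figures of any particular GPU:

# Illustrative occupancy arithmetic (not CTA-Limiter itself): the number
# of thread blocks resident on one GPU core (SM) is capped by several
# resources; inflating a kernel's shared-memory footprint lowers that cap
# without touching the kernel's logic.
SM_SHARED_MEM   = 96 * 1024    # bytes of shared memory per SM (assumed)
SM_MAX_BLOCKS   = 32           # hardware limit on resident blocks (assumed)
SM_MAX_THREADS  = 2048         # resident-thread limit (assumed)
THREADS_PER_BLK = 128

def resident_blocks(shared_bytes_per_block):
    by_smem    = SM_SHARED_MEM // max(shared_bytes_per_block, 1)
    by_threads = SM_MAX_THREADS // THREADS_PER_BLK
    return min(SM_MAX_BLOCKS, by_smem, by_threads)

for extra in (0, 8 * 1024, 16 * 1024, 32 * 1024):
    blk = resident_blocks(2 * 1024 + extra)   # kernel needs 2 KiB, plus padding
    print(f"extra shared memory {extra // 1024:2d} KiB -> {blk} co-resident blocks")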
|
Khan, Asif Ali |
LCTES '19: "Optimizing Tensor Contractions ..."
Optimizing Tensor Contractions for Embedded Devices with Racetrack Memory Scratch-Pads
Asif Ali Khan, Norman A. Rink, Fazal Hameed, and Jeronimo Castrillon. Proc. LCTES, ACM, pages 5--18, doi: 10.1145/3316482.3326351, 2019. Abstract and full citation appear under Castrillon, Jeronimo. |
|
Kim, Jihye |
LCTES '19: "SA-SPM: An Efficient Compiler ..."
SA-SPM: An Efficient Compiler for Security Aware Scratchpad Memory (Invited Paper)
Thomas Haywood Dadzie, Jiwon Lee, Jihye Kim, and Hyunok Oh. Proc. LCTES, ACM, pages 57--69, doi: 10.1145/3316482.3326347, 2019. Abstract and full citation appear under Dadzie, Thomas Haywood. |
|
Koh, Jason |
LCTES '19: "New Models and Methods for ..."
New Models and Methods for Programming Cyber-Physical Systems (Keynote)
Rajesh K. Gupta, Jason Koh, and Dezhi Hong. Proc. LCTES, ACM, pages 1--3, doi: 10.1145/3316482.3338093, 2019. Abstract and full citation appear under Gupta, Rajesh K. |
|
Kulkarni, Aditi |
LCTES '19: "SPECTRUM: A Software Defined ..."
SPECTRUM: A Software Defined Predictable Many-Core Architecture for LTE Baseband Processing
Vanchinathan Venkataramani, Aditi Kulkarni, Tulika Mitra, and Li-Shiuan Peh (National University of Singapore, Singapore) Wireless communication standards such as Long Term Evolution (LTE) are rapidly changing to support the high data rate of wireless devices. The physical layer baseband processing has strict real-time deadlines, especially in the next-generation applications enabled by the 5G standard. Existing base station transceivers utilize customized Digital Signal Processing (DSP) cores or fixed-function hardware accelerators for physical layer baseband processing. However, these approaches incur significant non-recurring engineering costs and are inflexible to newer standards or updates. Software programmable processors offer more adaptability. However, it is challenging to sustain guaranteed worst-case latency and throughput at reasonably low power on shared-memory many-core architectures featuring inherently unpredictable design choices, such as caches and networks-on-chip. We propose SPECTRUM, a predictable software-defined many-core architecture that exploits the massive parallelism of LTE baseband processing. The focus is on designing scalable, lightweight hardware that can be programmed and defined by sophisticated software mechanisms. SPECTRUM employs hundreds of lightweight in-order cores augmented with custom instructions that provide predictable timing, a purely software-scheduled on-chip network that orchestrates the communication to avoid any contention, and per-core software-controlled scratchpad memory with deterministic access latency. Compared to a many-core architecture like Skylake-SP (average power 215W), which drops 14% of packets at high traffic load, 256-core SPECTRUM by definition has a zero packet drop rate at a significantly lower average power of 24W. SPECTRUM consumes 2.11x lower power than a C66x DSP cores+accelerator platform in baseband processing. SPECTRUM is also well-positioned to support future 5G workloads. @InProceedings{LCTES19p82, author = {Vanchinathan Venkataramani and Aditi Kulkarni and Tulika Mitra and Li-Shiuan Peh}, title = {SPECTRUM: A Software Defined Predictable Many-Core Architecture for LTE Baseband Processing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {82--96}, doi = {10.1145/3316482.3326352}, year = {2019}, } Publisher's Version |
|
Lee, Jiwon |
LCTES '19: "SA-SPM: An Efficient Compiler ..."
SA-SPM: An Efficient Compiler for Security Aware Scratchpad Memory (Invited Paper)
Thomas Haywood Dadzie, Jiwon Lee, Jihye Kim, and Hyunok Oh (Hanyang University, South Korea; Kookmin University, South Korea) Scratchpad memories (SPM) are often used to boost the performance of application-specific embedded systems. In embedded systems, main memories are vulnerable to external attacks such as bus snooping or memory extraction. Therefore, it is desirable to guarantee the security of data in a main memory. In software-managed SPM, it is possible to provide security in main memory by performing software-assisted encryption. In this paper, we present an efficient compiler for security-aware scratchpad memory (SA-SPM), which ensures the security of main memories in SPM-based embedded systems. Our compiler is the first approach to support full encryption of memory regions (i.e., stack, heap, code, and static variables) in an SPM-based system. Furthermore, to reduce the energy consumption and improve the lifetime of a non-volatile main memory by decreasing the number of bit flips, we propose a new dual encryption scheme for an SPM-based system. Our experimental results show that the proposed dual encryption scheme reduces the number of bit flips by 31.8% compared with whole-memory encryption. @InProceedings{LCTES19p57, author = {Thomas Haywood Dadzie and Jiwon Lee and Jihye Kim and Hyunok Oh}, title = {SA-SPM: An Efficient Compiler for Security Aware Scratchpad Memory (Invited Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {57--69}, doi = {10.1145/3316482.3326347}, year = {2019}, } Publisher's Version |
|
Levis, Philip |
LCTES '19: "Automating the Generation ..."
Automating the Generation of Hardware Component Knowledge Bases
Luke Hsiao, Sen Wu, Nicholas Chiang, Christopher Ré, and Philip Levis (Stanford University, USA; Gunn High School, USA) Hardware component databases are critical resources in designing embedded systems. Since generating these databases requires hundreds of thousands of hours of manual data entry, they are proprietary, limited in the data they provide, and have many random data entry errors. We present a machine-learning-based approach for automating the generation of component databases directly from datasheets. Extracting data directly from datasheets is challenging because: (1) the data is relational in nature and relies on non-local context, (2) the documents are filled with technical jargon, and (3) the datasheets are PDFs, a format that decouples visual locality from locality in the document. The proposed approach uses a rich data model and weak supervision to address these challenges. We evaluate the approach on datasheets of three classes of hardware components and achieve an average quality of 75 F1 points, which is comparable to existing human-curated knowledge bases. We perform two application studies that demonstrate the extraction of multiple data modalities such as numerical properties and images. We show how different sources of supervision, such as heuristics and human labels, have distinct advantages that can be utilized together within a single methodology to automatically generate hardware component knowledge bases. @InProceedings{LCTES19p163, author = {Luke Hsiao and Sen Wu and Nicholas Chiang and Christopher Ré and Philip Levis}, title = {Automating the Generation of Hardware Component Knowledge Bases}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {163--176}, doi = {10.1145/3316482.3326344}, year = {2019}, } Publisher's Version Artifacts Reusable Results Replicated |
|
Lipasti, Mikko |
LCTES '19: "BitBench: A Benchmark for ..."
BitBench: A Benchmark for Bitstream Computing
Kyle Daruwalla, Heng Zhuo, Carly Schulz, and Mikko Lipasti (University of Wisconsin-Madison, USA) With the recent increase in ultra-low power applications, researchers are investigating alternative architectures that can operate on streaming input data. These target use cases require complex algorithms that must be evaluated under a real-time deadline, but also satisfy the strict available power budget. Stochastic computing (SC) is an example of an alternative paradigm where the data is represented as single bitstreams, allowing designers to implement operations such as multiplication using a simple AND gate. Consequently, the resulting design is both low area and low power. Similarly, traditional digital filters can take advantage of streaming inputs to effectively choose coefficients, resulting in a low cost implementation. In this work, we construct six key algorithms to characterize bitstream computing. We present these algorithms as a new benchmark suite: BitBench. @InProceedings{LCTES19p177, author = {Kyle Daruwalla and Heng Zhuo and Carly Schulz and Mikko Lipasti}, title = {BitBench: A Benchmark for Bitstream Computing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {177--187}, doi = {10.1145/3316482.3326355}, year = {2019}, } Publisher's Version Artifacts Functional Results Replicated |
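To make the bitstream encoding concrete, the small C program below multiplies two values in [0,1] by AND-ing their stochastic bitstreams and counting ones; the stream length and random-number source are arbitrary choices for this sketch, not BitBench parameters:

/* Sketch of stochastic-computing multiplication, the operation the abstract
 * describes: values in [0,1] are encoded as random bitstreams and their
 * product is the bitwise AND of the streams. Stream length and RNG are
 * illustrative assumptions, not taken from BitBench. */
#include <stdio.h>
#include <stdlib.h>

#define STREAM_LEN 4096   /* longer streams -> lower estimation variance */

/* Encode value p (0..1) as a bitstream: bit i is 1 with probability p. */
static void encode(unsigned char *s, double p) {
    for (int i = 0; i < STREAM_LEN; i++)
        s[i] = ((double)rand() / RAND_MAX) < p;
}

int main(void) {
    unsigned char a[STREAM_LEN], b[STREAM_LEN];
    encode(a, 0.6);
    encode(b, 0.5);
    int ones = 0;
    for (int i = 0; i < STREAM_LEN; i++)
        ones += a[i] & b[i];              /* the "AND gate" applied per bit pair */
    printf("approx 0.6 * 0.5 = %f\n", (double)ones / STREAM_LEN);
    return 0;
}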
|
Li, Xinyi |
LCTES '19: "IA-Graph Based Inter-App Conflicts ..."
IA-Graph Based Inter-App Conflicts Detection in Open IoT Systems
Xinyi Li, Lei Zhang, and Xipeng Shen (Chang'an University, China; North Carolina State University, USA) This paper tackles the problem of detecting potential conflicts among independently developed apps that are to be installed into an open Internet of Things (IoT) environment. It provides a new set of definitions and categorizations of the conflicts to more precisely characterize the nature of the problem, and employs a graph representation (named IA Graph) for formally representing IoT controls and inter-app interplays. It provides an efficient conflicts detection algorithm implemented on a SmartThings compiler and shows significantly improved efficacy over prior solutions. @InProceedings{LCTES19p135, author = {Xinyi Li and Lei Zhang and Xipeng Shen}, title = {IA-Graph Based Inter-App Conflicts Detection in Open IoT Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {135--147}, doi = {10.1145/3316482.3326350}, year = {2019}, } Publisher's Version |
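For readers unfamiliar with the conflict classes involved, here is a deliberately tiny C sketch of one of them: two independently installed apps driving the same device attribute to different values. The data layout and the pairwise check are illustrative assumptions, not the paper's IA-Graph algorithm:

/* Toy sketch of one inter-app conflict category: two apps commanding the same
 * device attribute to different values. Structures and the conflict rule are
 * simplified assumptions for illustration only. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *app;
    const char *device_attr;   /* e.g. "thermostat.heatingSetpoint" */
    const char *value;         /* value the app commands            */
} actuation_t;

int main(void) {
    actuation_t acts[] = {
        { "ComfortApp", "thermostat.heatingSetpoint", "72" },
        { "EcoApp",     "thermostat.heatingSetpoint", "65" },
        { "LightApp",   "light.switch",               "on" },
    };
    int n = (int)(sizeof acts / sizeof acts[0]);
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (strcmp(acts[i].device_attr, acts[j].device_attr) == 0 &&
                strcmp(acts[i].value, acts[j].value) != 0)
                printf("conflict: %s and %s both drive %s\n",
                       acts[i].app, acts[j].app, acts[i].device_attr);
    return 0;
}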
|
Maioli, Andrea |
LCTES '19: "On Intermittence Bugs in the ..."
On Intermittence Bugs in the Battery-Less Internet of Things (WIP Paper)
Andrea Maioli, Luca Mottola, Muhammad Hamad Alizai, and Junaid Haroon Siddiqui (Politecnico di Milano, Italy; RISE SICS, Sweden; Lahore University of Management Sciences, Pakistan) The resource-constrained devices of the battery-less Internet of Things are powered by energy harvesting and compute intermittently, as energy is available. Forward progress of programs is ensured by creating persistent state. Mixed-volatile platforms are thus an asset, as they map slices of the address space onto non-volatile memory. However, these platforms also possibly introduce intermittence bugs, where intermittent and continuous executions differ. Our ongoing work on intermittence bugs includes (i) an analysis that demonstrates their presence in settings that current literature overlooks; (ii) the design of efficient testing techniques to check their presence in arbitrary code, which would otherwise be prohibitive given the sheer number of different executions to check; and (iii) the implementation of an offline tool called ScEpTIC that implements these techniques. ScEpTIC finds the same bugs as a brute-force approach, but is six orders of magnitude faster. @InProceedings{LCTES19p203, author = {Andrea Maioli and Luca Mottola and Muhammad Hamad Alizai and Junaid Haroon Siddiqui}, title = {On Intermittence Bugs in the Battery-Less Internet of Things (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {203--207}, doi = {10.1145/3316482.3326346}, year = {2019}, } Publisher's Version |
|
Meng, Na |
LCTES '19: "An Empirical Comparison between ..."
An Empirical Comparison between Monkey Testing and Human Testing (WIP Paper)
Mostafa Mohammed, Haipeng Cai, and Na Meng (Virginia Tech, USA; Washington State University, USA) Android app testing is challenging and time-consuming because fully testing all feasible execution paths is difficult. Nowadays apps are usually tested in two ways: human testing or automated testing. Prior work compared different automated tools. However, some fundamental questions are still unexplored, including (1) how automated testing behaves differently from human testing, and (2) whether automated testing can fully or partially substitute human testing. This paper presents our study to explore the open questions. Monkey has been considered one of the best automated testing tools due to its usability, reliability, and competitive coverage metrics, so we applied Monkey to five Android apps and collected their dynamic event traces. Meanwhile, we recruited eight users to manually test the same apps and gathered the traces. By comparing the collected data, we revealed that i.) on average, the two methods generated similar numbers of unique events; ii.) Monkey created more system events while humans created more UI events; iii.) Monkey could mimic human behaviors when apps have UIs full of clickable widgets to trigger logically independent events; and iv.) Monkey was insufficient to test apps that require information comprehension and problem-solving skills. Our research sheds light on future research that combines human expertise with the agility of Monkey testing. @InProceedings{LCTES19p188, author = {Mostafa Mohammed and Haipeng Cai and Na Meng}, title = {An Empirical Comparison between Monkey Testing and Human Testing (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {188--192}, doi = {10.1145/3316482.3326342}, year = {2019}, } Publisher's Version |
|
Menon, Arjun |
LCTES '19: "SHAKTI-MS: A RISC-V Processor ..."
SHAKTI-MS: A RISC-V Processor for Memory Safety in C
Sourav Das, R. Harikrishnan Unnithan, Arjun Menon, Chester Rebeiro, and Kamakoti Veezhinathan (IIT Madras, India; BITS Pilani, India) In this era of IoT devices, security is very often traded off for smaller device footprint and low power consumption. Considering the exponentially growing security threats of IoT and cyber-physical systems, it is important that these devices have built-in features that enhance security. In this paper, we present Shakti-MS, a lightweight RISC-V processor with built-in support for both temporal and spatial memory protection. At run time, Shakti-MS can detect and stymie memory misuse in C and C++ programs, with minimum runtime overheads. The solution uses a novel implementation of fat-pointers to efficiently detect misuse of pointers at runtime. Our proposal is to use stack-based cookies for crafting fat-pointers instead of having object-based identifiers. We store the fat-pointer on the stack, which eliminates the use of shadow memory space, or any table to store the pointer metadata. This reduces the storage overheads by a great extent. The cookie also helps to preserve control flow of the program by ensuring that the return address never gets modified by vulnerabilities like buffer overflows. Shakti-MS introduces new instructions in the microprocessor hardware, and also a modified compiler that automatically inserts these new instructions to enable memory protection. This co-design approach is intended to reduce runtime and area overheads, and also provides an end-to-end solution. The hardware has an area overhead of 700 LUTs on a Xilinx Virtex Ultrascale FPGA and 4100 cells on an open 55nm technology node. The clock frequency of the processor is not affected by the security extensions, while there is a marginal increase in the code size by 11% with an average runtime overhead of 13%. @InProceedings{LCTES19p19, author = {Sourav Das and R. Harikrishnan Unnithan and Arjun Menon and Chester Rebeiro and Kamakoti Veezhinathan}, title = {SHAKTI-MS: A RISC-V Processor for Memory Safety in C}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {19--32}, doi = {10.1145/3316482.3326356}, year = {2019}, } Publisher's Version Artifacts Functional Results Replicated |
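The stack-cookie fat-pointer idea can be pictured with the plain-C model below, which attaches a base, a bound, and a frame cookie to each pointer and checks them on every dereference; in Shakti-MS these checks are issued as new hardware instructions inserted by the compiler, so the struct layout and the check shown here are illustrative assumptions only:

/* Illustrative C model of a fat pointer guarded by a stack cookie. The real
 * scheme uses new RISC-V instructions and compiler-inserted checks; field
 * layout and cookie handling below are assumptions for exposition. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    char     *base;    /* start of the referenced object        */
    size_t    size;    /* object size in bytes (spatial bound)  */
    uint64_t  cookie;  /* value tied to the owning stack frame  */
} fatptr_t;

/* Spatial check (offset within bounds) + temporal check (cookie still live). */
static char *deref(fatptr_t p, size_t off, uint64_t live_cookie) {
    if (off >= p.size || p.cookie != live_cookie) {
        fprintf(stderr, "memory-safety violation\n");
        exit(1);
    }
    return p.base + off;
}

int main(void) {
    uint64_t frame_cookie = 0x5eed5eed5eed5eedULL;  /* per-frame value (assumed) */
    char buf[16];
    fatptr_t p = { buf, sizeof buf, frame_cookie };
    *deref(p, 3, frame_cookie) = 'x';    /* in bounds: allowed      */
    *deref(p, 16, frame_cookie) = 'y';   /* overflow: detected here */
    return 0;
}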
|
Metta, Ravindra |
LCTES '19: "Imprecision in WCET Estimates ..."
Imprecision in WCET Estimates Due to Library Calls and How to Reduce It (WIP Paper)
Martin Becker, Samarjit Chakraborty, Ravindra Metta, and R. Venkatesh (TU Munich, Germany; TCS Research, India) One of the main difficulties in estimating the Worst Case Execution Time (WCET) at the binary level is that machine instructions do not allow inferring call contexts as precisely as source code, since compiler optimizations obfuscate control flow and type information. On the other hand, WCET estimation at source code level can be precise in tracking call contexts, but it is pessimistic for functions that are not available as source code. In this paper we propose approaches to join binary-level and source-level analyses, to get the best out of both. We present the arising problems in detail, evaluate the approaches qualitatively, and highlight their trade-offs. @InProceedings{LCTES19p208, author = {Martin Becker and Samarjit Chakraborty and Ravindra Metta and R. Venkatesh}, title = {Imprecision in WCET Estimates Due to Library Calls and How to Reduce It (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {208--212}, doi = {10.1145/3316482.3326353}, year = {2019}, } Publisher's Version |
|
Mitra, Tulika |
LCTES '19: "SPECTRUM: A Software Defined ..."
SPECTRUM: A Software Defined Predictable Many-Core Architecture for LTE Baseband Processing
Vanchinathan Venkataramani, Aditi Kulkarni, Tulika Mitra, and Li-Shiuan Peh (National University of Singapore, Singapore) Wireless communication standards such as Long Term Evolution (LTE) are rapidly changing to support the high data rate of wireless devices. The physical layer baseband processing has strict real-time deadlines, especially in the next-generation applications enabled by the 5G standard. Existing base station transceivers utilize customized Digital Signal Processing (DSP) cores or fixed-function hardware accelerators for physical layer baseband processing. However, these approaches incur significant non-recurring engineering costs and are inflexible to newer standards or updates. Software programmable processors offer more adaptability. However, it is challenging to sustain guaranteed worst-case latency and throughput at reasonably low power on shared-memory many-core architectures featuring inherently unpredictable design choices, such as caches and networks-on-chip. We propose SPECTRUM, a predictable software-defined many-core architecture that exploits the massive parallelism of LTE baseband processing. The focus is on designing scalable, lightweight hardware that can be programmed and defined by sophisticated software mechanisms. SPECTRUM employs hundreds of lightweight in-order cores augmented with custom instructions that provide predictable timing, a purely software-scheduled on-chip network that orchestrates the communication to avoid any contention, and per-core software-controlled scratchpad memory with deterministic access latency. Compared to a many-core architecture like Skylake-SP (average power 215W), which drops 14% of packets at high traffic load, 256-core SPECTRUM by definition has a zero packet drop rate at a significantly lower average power of 24W. SPECTRUM consumes 2.11x lower power than a C66x DSP cores+accelerator platform in baseband processing. SPECTRUM is also well-positioned to support future 5G workloads. @InProceedings{LCTES19p82, author = {Vanchinathan Venkataramani and Aditi Kulkarni and Tulika Mitra and Li-Shiuan Peh}, title = {SPECTRUM: A Software Defined Predictable Many-Core Architecture for LTE Baseband Processing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {82--96}, doi = {10.1145/3316482.3326352}, year = {2019}, } Publisher's Version |
|
Mohammed, Mostafa |
LCTES '19: "An Empirical Comparison between ..."
An Empirical Comparison between Monkey Testing and Human Testing (WIP Paper)
Mostafa Mohammed, Haipeng Cai, and Na Meng (Virginia Tech, USA; Washington State University, USA) Android app testing is challenging and time-consuming because fully testing all feasible execution paths is difficult. Nowadays apps are usually tested in two ways: human testing or automated testing. Prior work compared different automated tools. However, some fundamental questions are still unexplored, including (1) how automated testing behaves differently from human testing, and (2) whether automated testing can fully or partially substitute human testing. This paper presents our study to explore the open questions. Monkey has been considered one of the best automated testing tools due to its usability, reliability, and competitive coverage metrics, so we applied Monkey to five Android apps and collected their dynamic event traces. Meanwhile, we recruited eight users to manually test the same apps and gathered the traces. By comparing the collected data, we revealed that i.) on average, the two methods generated similar numbers of unique events; ii.) Monkey created more system events while humans created more UI events; iii.) Monkey could mimic human behaviors when apps have UIs full of clickable widgets to trigger logically independent events; and iv.) Monkey was insufficient to test apps that require information comprehension and problem-solving skills. Our research sheds light on future research that combines human expertise with the agility of Monkey testing. @InProceedings{LCTES19p188, author = {Mostafa Mohammed and Haipeng Cai and Na Meng}, title = {An Empirical Comparison between Monkey Testing and Human Testing (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {188--192}, doi = {10.1145/3316482.3326342}, year = {2019}, } Publisher's Version |
|
Mottola, Luca |
LCTES '19: "The Betrayal of Constant Power ..."
The Betrayal of Constant Power × Time: Finding the Missing Joules of Transiently-Powered Computers
Saad Ahmed, Abu Bakar, Naveed Anwar Bhatti, Muhammad Hamad Alizai, Junaid Haroon Siddiqui, and Luca Mottola (Lahore University of Management Sciences, Pakistan; RISE SICS, Sweden; Politecnico di Milano, Italy) Transiently-powered computers (TPCs) lay the basis for a battery-less Internet of Things, using energy harvesting and small capacitors to power their operation. This power supply is characterized by extreme variations in supply voltage, as capacitors charge when harvesting energy and discharge when computing. We experimentally find that these variations cause marked fluctuations in clock speed and power consumption, which determine energy efficiency. We demonstrate that it is possible to accurately model and concretely capitalize on these fluctuations. We derive an energy model as a function of supply voltage and develop EPIC, a compile-time energy analysis tool. We use EPIC to substitute for the constant power assumption in existing analysis techniques, giving programmers accurate information on worst-case energy consumption of programs. When using EPIC with existing TPC system support, run-time energy efficiency drastically improves, eventually leading up to a 350% speedup in the time to complete a fixed workload. Further, when using EPIC with existing debugging tools, programmers avoid unnecessary program changes that hurt energy efficiency. @InProceedings{LCTES19p97, author = {Saad Ahmed and Abu Bakar and Naveed Anwar Bhatti and Muhammad Hamad Alizai and Junaid Haroon Siddiqui and Luca Mottola}, title = {The Betrayal of Constant Power × Time: Finding the Missing Joules of Transiently-Powered Computers}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {97--109}, doi = {10.1145/3316482.3326348}, year = {2019}, } Publisher's Version LCTES '19: "On Intermittence Bugs in the ..." On Intermittence Bugs in the Battery-Less Internet of Things (WIP Paper) Andrea Maioli, Luca Mottola, Muhammad Hamad Alizai, and Junaid Haroon Siddiqui (Politecnico di Milano, Italy; RISE SICS, Sweden; Lahore University of Management Sciences, Pakistan) The resource-constrained devices of the battery-less Internet of Things are powered by energy harvesting and compute intermittently, as energy is available. Forward progress of programs is ensured by creating persistent state. Mixed-volatile platforms are thus an asset, as they map slices of the address space onto non-volatile memory. However, these platforms also possibly introduce intermittence bugs, where intermittent and continuous executions differ. Our ongoing work on intermittence bugs includes (i) an analysis that demonstrates their presence in settings that current literature overlooks; (ii) the design of efficient testing techniques to check their presence in arbitrary code, which would otherwise be prohibitive given the sheer number of different executions to check; and (iii) the implementation of an offline tool called ScEpTIC that implements these techniques. ScEpTIC finds the same bugs as a brute-force approach, but is six orders of magnitude faster. @InProceedings{LCTES19p203, author = {Andrea Maioli and Luca Mottola and Muhammad Hamad Alizai and Junaid Haroon Siddiqui}, title = {On Intermittence Bugs in the Battery-Less Internet of Things (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {203--207}, doi = {10.1145/3316482.3326346}, year = {2019}, } Publisher's Version LCTES '19: "Efficient Intermittent Computing ..."
Efficient Intermittent Computing with Differential Checkpointing Saad Ahmed, Naveed Anwar Bhatti, Muhammad Hamad Alizai, Junaid Haroon Siddiqui, and Luca Mottola (Lahore University of Management Sciences, Pakistan; RISE SICS, Sweden; Politecnico di Milano, Italy) Embedded devices running on ambient energy perform computations intermittently, depending upon energy availability. System support ensures forward progress of programs through state checkpointing in non-volatile memory. Checkpointing is, however, expensive in energy and adds to execution times. To reduce this overhead, we present DICE, a system design that efficiently achieves differential checkpointing in intermittent computing. Distinctive traits of DICE are its software-only nature and its ability to operate only in volatile main memory to determine differentials. DICE works with arbitrary programs using automatic code instrumentation, thus requiring no programmer intervention, and can be integrated with both reactive (Hibernus) and proactive (MementOS, HarvOS) checkpointing systems. By reducing the cost of checkpoints, performance markedly improves. For example, using DICE, Hibernus requires an order of magnitude less time to complete a fixed workload in real-world settings. @InProceedings{LCTES19p70, author = {Saad Ahmed and Naveed Anwar Bhatti and Muhammad Hamad Alizai and Junaid Haroon Siddiqui and Luca Mottola}, title = {Efficient Intermittent Computing with Differential Checkpointing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {70--81}, doi = {10.1145/3316482.3326357}, year = {2019}, } Publisher's Version |
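A rough sketch of what determining differentials in volatile memory can look like is given below: the live RAM image is compared word by word against a shadow copy of the last checkpoint, and only changed words are persisted. The buffer sizes and the NVM write helper are hypothetical, not DICE's actual mechanism:

/* Hedged sketch of differential checkpointing: persist only the words of RAM
 * that changed since the last checkpoint. The simulated RAM/NVM arrays and
 * nvm_write_word() helper are hypothetical, not DICE's implementation. */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

#define RAM_WORDS 1024

static uint32_t ram_image[RAM_WORDS];   /* live volatile state              */
static uint32_t shadow[RAM_WORDS];      /* state as of the last checkpoint  */
static uint32_t nvm[RAM_WORDS];         /* stand-in for non-volatile memory */

static void nvm_write_word(size_t i, uint32_t v) { nvm[i] = v; }  /* costly in reality */

/* Returns how many words were actually written to NVM. */
static size_t checkpoint_differential(void) {
    size_t written = 0;
    for (size_t i = 0; i < RAM_WORDS; i++)
        if (ram_image[i] != shadow[i]) {     /* differential found */
            nvm_write_word(i, ram_image[i]);
            shadow[i] = ram_image[i];
            written++;
        }
    return written;
}

int main(void) {
    ram_image[3] = 42; ram_image[700] = 7;   /* program modified only a little state */
    printf("words persisted: %zu of %d\n", checkpoint_differential(), RAM_WORDS);
    return 0;
}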
|
Oh, Hyunok |
LCTES '19: "SA-SPM: An Efficient Compiler ..."
SA-SPM: An Efficient Compiler for Security Aware Scratchpad Memory (Invited Paper)
Thomas Haywood Dadzie, Jiwon Lee, Jihye Kim, and Hyunok Oh (Hanyang University, South Korea; Kookmin University, South Korea) Scratchpad memories (SPM) are often used to boost the performance of application-specific embedded systems. In embedded systems, main memories are vulnerable to external attacks such as bus snooping or memory extraction. Therefore, it is desirable to guarantee the security of data in a main memory. In software-managed SPM, it is possible to provide security in main memory by performing software-assisted encryption. In this paper, we present an efficient compiler for security-aware scratchpad memory (SA-SPM), which ensures the security of main memories in SPM-based embedded systems. Our compiler is the first approach to support full encryption of memory regions (i.e., stack, heap, code, and static variables) in an SPM-based system. Furthermore, to reduce the energy consumption and improve the lifetime of a non-volatile main memory by decreasing the number of bit flips, we propose a new dual encryption scheme for an SPM-based system. Our experimental results show that the proposed dual encryption scheme reduces the number of bit flips by 31.8% compared with whole-memory encryption. @InProceedings{LCTES19p57, author = {Thomas Haywood Dadzie and Jiwon Lee and Jihye Kim and Hyunok Oh}, title = {SA-SPM: An Efficient Compiler for Security Aware Scratchpad Memory (Invited Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {57--69}, doi = {10.1145/3316482.3326347}, year = {2019}, } Publisher's Version |
|
Park, Yongjun |
LCTES '19: "A Compiler-Based Approach ..."
A Compiler-Based Approach for GPGPU Performance Calibration using TLP Modulation (WIP Paper)
Yongseung Yu, Seokwon Kang, and Yongjun Park (Hanyang University, South Korea) Modern GPUs are the most successful accelerators as they provide outstanding performance gains by using CUDA or OpenCL programming models. For maximum performance, programmers typically try to maximize the number of thread blocks of target programs, and GPUs also generally attempt to allocate the maximum number of thread blocks to their GPU cores. However, many recent studies have pointed out that simply allocating the maximum number of thread blocks to GPU cores does not always guarantee the best performance, and identifying the proper number of thread blocks per GPU core is a major challenge. Despite these studies, most existing architectural techniques cannot be directly applied to current GPU hardware, and the optimal number of thread blocks can vary significantly depending on the target GPU and application characteristics. To solve these problems, this study proposes a just-in-time thread block number adjustment system using CUDA binary modification upon an LLVM compiler framework, referred to as the CTA-Limiter, in order to dynamically maximize GPU performance on real GPUs without reprogramming. The framework gradually reduces the number of concurrent thread blocks of target CUDA workloads using extra shared memory allocation, and compares the execution time with the previous version to automatically identify the optimal number of co-running thread blocks per GPU core. The results showed meaningful performance improvements, averaging 30%, 40%, and 44% on the GTX 960, GTX 1050, and GTX 1080 Ti, respectively. @InProceedings{LCTES19p193, author = {Yongseung Yu and Seokwon Kang and Yongjun Park}, title = {A Compiler-Based Approach for GPGPU Performance Calibration using TLP Modulation (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {193--197}, doi = {10.1145/3316482.3326343}, year = {2019}, } Publisher's Version |
|
Peh, Li-Shiuan |
LCTES '19: "SPECTRUM: A Software Defined ..."
SPECTRUM: A Software Defined Predictable Many-Core Architecture for LTE Baseband Processing
Vanchinathan Venkataramani, Aditi Kulkarni, Tulika Mitra, and Li-Shiuan Peh (National University of Singapore, Singapore) Wireless communication standards such as Long Term Evolution (LTE) are rapidly changing to support the high data rate of wireless devices. The physical layer baseband processing has strict real-time deadlines, especially in the next-generation applications enabled by the 5G standard. Existing base station transceivers utilize customized Digital Signal Processing (DSP) cores or fixed-function hardware accelerators for physical layer baseband processing. However, these approaches incur significant non-recurring engineering costs and are inflexible to newer standards or updates. Software programmable processors offer more adaptability. However, it is challenging to sustain guaranteed worst-case latency and throughput at reasonably low power on shared-memory many-core architectures featuring inherently unpredictable design choices, such as caches and networks-on-chip. We propose SPECTRUM, a predictable software-defined many-core architecture that exploits the massive parallelism of LTE baseband processing. The focus is on designing scalable, lightweight hardware that can be programmed and defined by sophisticated software mechanisms. SPECTRUM employs hundreds of lightweight in-order cores augmented with custom instructions that provide predictable timing, a purely software-scheduled on-chip network that orchestrates the communication to avoid any contention, and per-core software-controlled scratchpad memory with deterministic access latency. Compared to a many-core architecture like Skylake-SP (average power 215W), which drops 14% of packets at high traffic load, 256-core SPECTRUM by definition has a zero packet drop rate at a significantly lower average power of 24W. SPECTRUM consumes 2.11x lower power than a C66x DSP cores+accelerator platform in baseband processing. SPECTRUM is also well-positioned to support future 5G workloads. @InProceedings{LCTES19p82, author = {Vanchinathan Venkataramani and Aditi Kulkarni and Tulika Mitra and Li-Shiuan Peh}, title = {SPECTRUM: A Software Defined Predictable Many-Core Architecture for LTE Baseband Processing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {82--96}, doi = {10.1145/3316482.3326352}, year = {2019}, } Publisher's Version |
|
Rebeiro, Chester |
LCTES '19: "SHAKTI-MS: A RISC-V Processor ..."
SHAKTI-MS: A RISC-V Processor for Memory Safety in C
Sourav Das, R. Harikrishnan Unnithan, Arjun Menon, Chester Rebeiro, and Kamakoti Veezhinathan (IIT Madras, India; BITS Pilani, India) In this era of IoT devices, security is very often traded off for smaller device footprint and low power consumption. Considering the exponentially growing security threats of IoT and cyber-physical systems, it is important that these devices have built-in features that enhance security. In this paper, we present Shakti-MS, a lightweight RISC-V processor with built-in support for both temporal and spatial memory protection. At run time, Shakti-MS can detect and stymie memory misuse in C and C++ programs, with minimum runtime overheads. The solution uses a novel implementation of fat-pointers to efficiently detect misuse of pointers at runtime. Our proposal is to use stack-based cookies for crafting fat-pointers instead of having object-based identifiers. We store the fat-pointer on the stack, which eliminates the use of shadow memory space, or any table to store the pointer metadata. This reduces the storage overheads by a great extent. The cookie also helps to preserve control flow of the program by ensuring that the return address never gets modified by vulnerabilities like buffer overflows. Shakti-MS introduces new instructions in the microprocessor hardware, and also a modified compiler that automatically inserts these new instructions to enable memory protection. This co-design approach is intended to reduce runtime and area overheads, and also provides an end-to-end solution. The hardware has an area overhead of 700 LUTs on a Xilinx Virtex Ultrascale FPGA and 4100 cells on an open 55nm technology node. The clock frequency of the processor is not affected by the security extensions, while there is a marginal increase in the code size by 11% with an average runtime overhead of 13%. @InProceedings{LCTES19p19, author = {Sourav Das and R. Harikrishnan Unnithan and Arjun Menon and Chester Rebeiro and Kamakoti Veezhinathan}, title = {SHAKTI-MS: A RISC-V Processor for Memory Safety in C}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {19--32}, doi = {10.1145/3316482.3326356}, year = {2019}, } Publisher's Version Artifacts Functional Results Replicated |
|
Ré, Christopher |
LCTES '19: "Automating the Generation ..."
Automating the Generation of Hardware Component Knowledge Bases
Luke Hsiao, Sen Wu, Nicholas Chiang, Christopher Ré, and Philip Levis (Stanford University, USA; Gunn High School, USA) Hardware component databases are critical resources in designing embedded systems. Since generating these databases requires hundreds of thousands of hours of manual data entry, they are proprietary, limited in the data they provide, and have many random data entry errors. We present a machine-learning-based approach for automating the generation of component databases directly from datasheets. Extracting data directly from datasheets is challenging because: (1) the data is relational in nature and relies on non-local context, (2) the documents are filled with technical jargon, and (3) the datasheets are PDFs, a format that decouples visual locality from locality in the document. The proposed approach uses a rich data model and weak supervision to address these challenges. We evaluate the approach on datasheets of three classes of hardware components and achieve an average quality of 75 F1 points, which is comparable to existing human-curated knowledge bases. We perform two application studies that demonstrate the extraction of multiple data modalities such as numerical properties and images. We show how different sources of supervision, such as heuristics and human labels, have distinct advantages that can be utilized together within a single methodology to automatically generate hardware component knowledge bases. @InProceedings{LCTES19p163, author = {Luke Hsiao and Sen Wu and Nicholas Chiang and Christopher Ré and Philip Levis}, title = {Automating the Generation of Hardware Component Knowledge Bases}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {163--176}, doi = {10.1145/3316482.3326344}, year = {2019}, } Publisher's Version Artifacts Reusable Results Replicated |
|
Rink, Norman A. |
LCTES '19: "Optimizing Tensor Contractions ..."
Optimizing Tensor Contractions for Embedded Devices with Racetrack Memory Scratch-Pads
Asif Ali Khan, Norman A. Rink, Fazal Hameed, and Jeronimo Castrillon (TU Dresden, Germany) Tensor contraction is a fundamental operation in many algorithms with a plethora of applications ranging from quantum chemistry over fluid dynamics and image processing to machine learning. The performance of tensor computations critically depends on the efficient utilization of on-chip memories. In the context of low-power embedded devices, efficient management of the memory space becomes even more crucial, in order to meet energy constraints. This work aims at investigating strategies for performance- and energy-efficient tensor contractions on embedded systems, using racetrack memory (RTM)-based scratch-pad memory (SPM). Compiler optimizations such as the loop access order and data layout transformations paired with architectural optimizations such as prefetching and preshifting are employed to reduce the shifting overhead in RTMs. Experimental results demonstrate that the proposed optimizations improve the SPM performance and energy consumption by 24% and 74% respectively compared to an iso-capacity SRAM. @InProceedings{LCTES19p5, author = {Asif Ali Khan and Norman A. Rink and Fazal Hameed and Jeronimo Castrillon}, title = {Optimizing Tensor Contractions for Embedded Devices with Racetrack Memory Scratch-Pads}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {5--18}, doi = {10.1145/3316482.3326351}, year = {2019}, } Publisher's Version |
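To give a feel for the loop-order choices such a compiler weighs, the small C example below writes a two-dimensional contraction (a matrix multiply) in i-k-j order so that the innermost loop makes unit-stride accesses, which on a racetrack-memory scratch-pad keeps port shifts small; the sizes and the chosen order are illustrative assumptions, not results from the paper:

/* Illustration of loop-order choice for a 2-D contraction (matrix multiply).
 * The i-k-j order gives unit-stride inner-loop accesses to B and C, the kind
 * of sequential access pattern that reduces shift overhead on racetrack-memory
 * scratch-pads. Sizes and the chosen order are assumptions for exposition. */
#include <stdio.h>

#define N 4

static double A[N][N], B[N][N], C[N][N];

static void contract(void) {
    for (int i = 0; i < N; i++)
        for (int k = 0; k < N; k++) {
            double a = A[i][k];            /* reused across the inner loop */
            for (int j = 0; j < N; j++)
                C[i][j] += a * B[k][j];    /* unit-stride over B and C     */
        }
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) { A[i][j] = i + j; B[i][j] = i - j; }
    contract();
    printf("C[0][0] = %g\n", C[0][0]);
    return 0;
}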
|
Santosa, Andrew E. |
LCTES '19: "ApproxSymate: Path Sensitive ..."
ApproxSymate: Path Sensitive Program Approximation using Symbolic Execution
Himeshi De Silva, Andrew E. Santosa, Nhut-Minh Ho, and Weng-Fai Wong (National University of Singapore, Singapore) Approximate computing, a technique that forgoes quantifiable output accuracy in favor of performance gains, is useful for improving the energy efficiency of error-resilient software, especially in the embedded setting. The identification of program components that can tolerate error plays a crucial role in balancing the energy vs. accuracy trade off in approximate computing. Manual analysis for approximability is not scalable and therefore automated tools which employ static or dynamic analysis have been proposed. However, static techniques are often coarse in their approximations while dynamic efforts incur high overhead. In this work we present ApproxSymate, a framework for automatically identifying program approximations using symbolic execution. ApproxSymate first statically computes symbolic error expressions for program components and then uses a dynamic sensitivity analysis to compute their approximability. A unique feature of this tool is that it explores the previously not considered dimension of program path for approximation which enables safer transformations. Our evaluation shows that ApproxSymate averages about 96% accuracy in identifying the same approximations found in manually annotated benchmarks, outperforming existing automated techniques. @InProceedings{LCTES19p148, author = {Himeshi De Silva and Andrew E. Santosa and Nhut-Minh Ho and Weng-Fai Wong}, title = {ApproxSymate: Path Sensitive Program Approximation using Symbolic Execution}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {148--162}, doi = {10.1145/3316482.3326341}, year = {2019}, } Publisher's Version Artifacts Functional Results Replicated |
|
Schulz, Carly |
LCTES '19: "BitBench: A Benchmark for ..."
BitBench: A Benchmark for Bitstream Computing
Kyle Daruwalla, Heng Zhuo, Carly Schulz, and Mikko Lipasti (University of Wisconsin-Madison, USA) With the recent increase in ultra-low power applications, researchers are investigating alternative architectures that can operate on streaming input data. These target use cases require complex algorithms that must be evaluated under a real-time deadline, but also satisfy the strict available power budget. Stochastic computing (SC) is an example of an alternative paradigm where the data is represented as single bitstreams, allowing designers to implement operations such as multiplication using a simple AND gate. Consequently, the resulting design is both low area and low power. Similarly, traditional digital filters can take advantage of streaming inputs to effectively choose coefficients, resulting in a low cost implementation. In this work, we construct six key algorithms to characterize bitstream computing. We present these algorithms as a new benchmark suite: BitBench. @InProceedings{LCTES19p177, author = {Kyle Daruwalla and Heng Zhuo and Carly Schulz and Mikko Lipasti}, title = {BitBench: A Benchmark for Bitstream Computing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {177--187}, doi = {10.1145/3316482.3326355}, year = {2019}, } Publisher's Version Artifacts Functional Results Replicated |
|
Sha, Edwin H.-M. |
LCTES '19: "1+1>2: Variation-Aware ..."
1+1>2: Variation-Aware Lifetime Enhancement for Embedded 3D NAND Flash Systems
Yejia Di, Liang Shi, Shuo-Han Chen, Chun Jason Xue, and Edwin H.-M. Sha (East China Normal University, China; Chongqing University, China; Academia Sinica, Taiwan; City University of Hong Kong, China) Three-dimensional (3D) NAND flash has been developed to boost the storage capacity by stacking memory cells vertically. One critical characteristic of 3D NAND flash is its large endurance variation. With this characteristic, the lifetime will be determined by the unit with the worst endurance. However, few works can exploit the variations with acceptable overhead for lifetime improvement. In this paper, a variation-aware lifetime improvement framework is proposed. The basic idea is motivated by an observation that there is an elegant matching between unit endurance and wearing variations when wear leveling and implicit compression are applied together. To achieve the matching goal, the framework is designed at three unit levels: cell, line, and block. A series of evaluations is conducted, and the results show that the lifetime improvement is encouraging, better than that achieved by combining state-of-the-art schemes. @InProceedings{LCTES19p45, author = {Yejia Di and Liang Shi and Shuo-Han Chen and Chun Jason Xue and Edwin H.-M. Sha}, title = {1+1>2: Variation-Aware Lifetime Enhancement for Embedded 3D NAND Flash Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {45--56}, doi = {10.1145/3316482.3326359}, year = {2019}, } Publisher's Version |
|
Shen, Xipeng |
LCTES '19: "IA-Graph Based Inter-App Conflicts ..."
IA-Graph Based Inter-App Conflicts Detection in Open IoT Systems
Xinyi Li, Lei Zhang, and Xipeng Shen (Chang'an University, China; North Carolina State University, USA) This paper tackles the problem of detecting potential conflicts among independently developed apps that are to be installed into an open Internet of Things (IoT) environment. It provides a new set of definitions and categorizations of the conflicts to more precisely characterize the nature of the problem, and employs a graph representation (named IA Graph) for formally representing IoT controls and inter-app interplays. It provides an efficient conflicts detection algorithm implemented on a SmartThings compiler and shows significantly improved efficacy over prior solutions. @InProceedings{LCTES19p135, author = {Xinyi Li and Lei Zhang and Xipeng Shen}, title = {IA-Graph Based Inter-App Conflicts Detection in Open IoT Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {135--147}, doi = {10.1145/3316482.3326350}, year = {2019}, } Publisher's Version |
|
Shi, Liang |
LCTES '19: "1+1>2: Variation-Aware ..."
1+1>2: Variation-Aware Lifetime Enhancement for Embedded 3D NAND Flash Systems
Yejia Di, Liang Shi, Shuo-Han Chen, Chun Jason Xue, and Edwin H.-M. Sha (East China Normal University, China; Chongqing University, China; Academia Sinica, Taiwan; City University of Hong Kong, China) Three-dimensional (3D) NAND flash has been developed to boost the storage capacity by stacking memory cells vertically. One critical characteristic of 3D NAND flash is its large endurance variation. With this characteristic, the lifetime will be determined by the unit with the worst endurance. However, few works can exploit the variations with acceptable overhead for lifetime improvement. In this paper, a variation-aware lifetime improvement framework is proposed. The basic idea is motivated by an observation that there is an elegant matching between unit endurance and wearing variations when wear leveling and implicit compression are applied together. To achieve the matching goal, the framework is designed at three unit levels: cell, line, and block. A series of evaluations is conducted, and the results show that the lifetime improvement is encouraging, better than that achieved by combining state-of-the-art schemes. @InProceedings{LCTES19p45, author = {Yejia Di and Liang Shi and Shuo-Han Chen and Chun Jason Xue and Edwin H.-M. Sha}, title = {1+1>2: Variation-Aware Lifetime Enhancement for Embedded 3D NAND Flash Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {45--56}, doi = {10.1145/3316482.3326359}, year = {2019}, } Publisher's Version |
|
Siddiqui, Junaid Haroon |
LCTES '19: "The Betrayal of Constant Power ..."
The Betrayal of Constant Power × Time: Finding the Missing Joules of Transiently-Powered Computers
Saad Ahmed, Abu Bakar, Naveed Anwar Bhatti, Muhammad Hamad Alizai, Junaid Haroon Siddiqui, and Luca Mottola (Lahore University of Management Sciences, Pakistan; RISE SICS, Sweden; Politecnico di Milano, Italy) Transiently-powered computers (TPCs) lay the basis for a battery-less Internet of Things, using energy harvesting and small capacitors to power their operation. This power supply is characterized by extreme variations in supply voltage, as capacitors charge when harvesting energy and discharge when computing. We experimentally find that these variations cause marked fluctuations in clock speed and power consumption, which determine energy efficiency. We demonstrate that it is possible to accurately model and concretely capitalize on these fluctuations. We derive an energy model as a function of supply voltage and develop EPIC, a compile-time energy analysis tool. We use EPIC to substitute for the constant power assumption in existing analysis techniques, giving programmers accurate information on worst-case energy consumption of programs. When using EPIC with existing TPC system support, run-time energy efficiency drastically improves, eventually leading up to a 350% speedup in the time to complete a fixed workload. Further, when using EPIC with existing debugging tools, programmers avoid unnecessary program changes that hurt energy efficiency. @InProceedings{LCTES19p97, author = {Saad Ahmed and Abu Bakar and Naveed Anwar Bhatti and Muhammad Hamad Alizai and Junaid Haroon Siddiqui and Luca Mottola}, title = {The Betrayal of Constant Power × Time: Finding the Missing Joules of Transiently-Powered Computers}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {97--109}, doi = {10.1145/3316482.3326348}, year = {2019}, } Publisher's Version LCTES '19: "On Intermittence Bugs in the ..." On Intermittence Bugs in the Battery-Less Internet of Things (WIP Paper) Andrea Maioli, Luca Mottola, Muhammad Hamad Alizai, and Junaid Haroon Siddiqui (Politecnico di Milano, Italy; RISE SICS, Sweden; Lahore University of Management Sciences, Pakistan) The resource-constrained devices of the battery-less Internet of Things are powered by energy harvesting and compute intermittently, as energy is available. Forward progress of programs is ensured by creating persistent state. Mixed-volatile platforms are thus an asset, as they map slices of the address space onto non-volatile memory. However, these platforms also possibly introduce intermittence bugs, where intermittent and continuous executions differ. Our ongoing work on intermittence bugs includes (i) an analysis that demonstrates their presence in settings that current literature overlooks; (ii) the design of efficient testing techniques to check their presence in arbitrary code, which would otherwise be prohibitive given the sheer number of different executions to check; and (iii) the implementation of an offline tool called ScEpTIC that implements these techniques. ScEpTIC finds the same bugs as a brute-force approach, but is six orders of magnitude faster. @InProceedings{LCTES19p203, author = {Andrea Maioli and Luca Mottola and Muhammad Hamad Alizai and Junaid Haroon Siddiqui}, title = {On Intermittence Bugs in the Battery-Less Internet of Things (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {203--207}, doi = {10.1145/3316482.3326346}, year = {2019}, } Publisher's Version LCTES '19: "Efficient Intermittent Computing ..."
Efficient Intermittent Computing with Differential Checkpointing Saad Ahmed, Naveed Anwar Bhatti, Muhammad Hamad Alizai, Junaid Haroon Siddiqui, and Luca Mottola (Lahore University of Management Sciences, Pakistan; RISE SICS, Sweden; Politecnico di Milano, Italy) Embedded devices running on ambient energy perform computations intermittently, depending upon energy availability. System support ensures forward progress of programs through state checkpointing in non-volatile memory. Checkpointing is, however, expensive in energy and adds to execution times. To reduce this overhead, we present DICE, a system design that efficiently achieves differential checkpointing in intermittent computing. Distinctive traits of DICE are its software-only nature and its ability to operate only in volatile main memory to determine differentials. DICE works with arbitrary programs using automatic code instrumentation, thus requiring no programmer intervention, and can be integrated with both reactive (Hibernus) and proactive (MementOS, HarvOS) checkpointing systems. By reducing the cost of checkpoints, performance markedly improves. For example, using DICE, Hibernus requires an order of magnitude less time to complete a fixed workload in real-world settings. @InProceedings{LCTES19p70, author = {Saad Ahmed and Naveed Anwar Bhatti and Muhammad Hamad Alizai and Junaid Haroon Siddiqui and Luca Mottola}, title = {Efficient Intermittent Computing with Differential Checkpointing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {70--81}, doi = {10.1145/3316482.3326357}, year = {2019}, } Publisher's Version |
|
Smith, Aaron |
LCTES '19: "Raising Binaries to LLVM IR ..."
Raising Binaries to LLVM IR with MCTOLL (WIP Paper)
S. Bharadwaj Yadavalli and Aaron Smith (Microsoft, USA) The need to analyze and execute binaries from legacy ISAs on new or different ISAs has been addressed in a variety of ways over the past few decades. Solutions using complementary static and dynamic binary translation techniques have been deployed in most real-world situations. As new ISAs are designed and legacy ISAs re-examined, the need for binary translation infrastructure re-emerges, and that infrastructure needs to be re-engineered all over again. Work is in progress with the goal of making such re-engineering efforts easier by using some of the software tools that would in any case be developed or available for a new or existing ISA. To that end, this paper presents a static binary raiser that translates binaries to LLVM IR. Native binaries for a new ISA are generated from the raised LLVM IR using the LLVM compiler backend. This technique enables development of a single raiser per legacy ISA, irrespective of the new target ISA. The result of such a raiser can then leverage compiler back-ends of new ISAs, thus simplifying the development of a binary translator for the new ISA. This work leverages the existing LLVM infrastructure to implement a static raiser that currently supports raising x64 and Arm32 binaries to LLVM IR. The raiser is built as an LLVM tool (similar to llvm-objdump or clang) and does not have any dependencies outside of those needed to build LLVM. This paper describes the phases of the raiser and gives the current status and limitations. @InProceedings{LCTES19p213, author = {S. Bharadwaj Yadavalli and Aaron Smith}, title = {Raising Binaries to LLVM IR with MCTOLL (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {213--218}, doi = {10.1145/3316482.3326354}, year = {2019}, } Publisher's Version Artifacts Functional |
|
Stitt, Greg |
LCTES '19: "PANDORA: A Parallelizing Approximation-Discovery ..."
PANDORA: A Parallelizing Approximation-Discovery Framework (WIP Paper)
Greg Stitt and David Campbell (University of Florida, USA) In this paper, we introduce PANDORA---a framework that complements existing parallelizing compilers by automatically discovering application- and architecture-specialized approximations. We demonstrate that PANDORA creates approximations that extract massive amounts of parallelism from inherently sequential code by eliminating loop-carried dependencies---a long-time goal of the compiler research community. Compared to exact parallel baselines, preliminary results show speedups ranging from 2.3x to 81x with acceptable error for many usage scenarios. @InProceedings{LCTES19p198, author = {Greg Stitt and David Campbell}, title = {PANDORA: A Parallelizing Approximation-Discovery Framework (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {198--202}, doi = {10.1145/3316482.3326345}, year = {2019}, } Publisher's Version |
|
Su, Xuesong |
LCTES '19: "WCET-Aware Hyper-Block Construction ..."
WCET-Aware Hyper-Block Construction for Clustered VLIW Processors
Xuesong Su, Hui Wu, and Jingling Xue (UNSW, Australia) Hyper-blocks can significantly improve instruction level parallelism on a wide range of super-scalar and VLIW processors. However, most hyper-block construction approaches aim at minimizing the average-case execution time of a program. In real-time embedded systems, minimizing the worst-case execution time (WCET) of a program is the primary goal of an optimizing compiler. We investigate the hyper-block construction problem for a program executed on a clustered VLIW processor such that the WCET of the program is minimized, and propose a novel heuristic approach considering tail duplications. Our approach is underpinned by a novel priority scheme and a precise tail duplication cost model for computing the WCET of a program. We have implemented our approach in Trimaran 4.0, and compared it with the state-of-the-art approach by using a set of 8 benchmark suites. The experimental results show that our approach achieves the maximum WCET improvement of 20.37% and the average WCET improvement of 11.59%, respectively. @InProceedings{LCTES19p110, author = {Xuesong Su and Hui Wu and Jingling Xue}, title = {WCET-Aware Hyper-Block Construction for Clustered VLIW Processors}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {110--122}, doi = {10.1145/3316482.3326349}, year = {2019}, } Publisher's Version |
|
Unnithan, R. Harikrishnan |
LCTES '19: "SHAKTI-MS: A RISC-V Processor ..."
SHAKTI-MS: A RISC-V Processor for Memory Safety in C
Sourav Das, R. Harikrishnan Unnithan, Arjun Menon, Chester Rebeiro, and Kamakoti Veezhinathan (IIT Madras, India; BITS Pilani, India) In this era of IoT devices, security is very often traded off for smaller device footprint and low power consumption. Considering the exponentially growing security threats of IoT and cyber-physical systems, it is important that these devices have built-in features that enhance security. In this paper, we present Shakti-MS, a lightweight RISC-V processor with built-in support for both temporal and spatial memory protection. At run time, Shakti-MS can detect and stymie memory misuse in C and C++ programs, with minimum runtime overheads. The solution uses a novel implementation of fat-pointers to efficiently detect misuse of pointers at runtime. Our proposal is to use stack-based cookies for crafting fat-pointers instead of having object-based identifiers. We store the fat-pointer on the stack, which eliminates the use of shadow memory space, or any table to store the pointer metadata. This reduces the storage overheads by a great extent. The cookie also helps to preserve control flow of the program by ensuring that the return address never gets modified by vulnerabilities like buffer overflows. Shakti-MS introduces new instructions in the microprocessor hardware, and also a modified compiler that automatically inserts these new instructions to enable memory protection. This co-design approach is intended to reduce runtime and area overheads, and also provides an end-to-end solution. The hardware has an area overhead of 700 LUTs on a Xilinx Virtex Ultrascale FPGA and 4100 cells on an open 55nm technology node. The clock frequency of the processor is not affected by the security extensions, while there is a marginal increase in the code size by 11% with an average runtime overhead of 13%. @InProceedings{LCTES19p19, author = {Sourav Das and R. Harikrishnan Unnithan and Arjun Menon and Chester Rebeiro and Kamakoti Veezhinathan}, title = {SHAKTI-MS: A RISC-V Processor for Memory Safety in C}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {19--32}, doi = {10.1145/3316482.3326356}, year = {2019}, } Publisher's Version Artifacts Functional Results Replicated |
|
Veezhinathan, Kamakoti |
LCTES '19: "SHAKTI-MS: A RISC-V Processor ..."
SHAKTI-MS: A RISC-V Processor for Memory Safety in C
Sourav Das, R. Harikrishnan Unnithan, Arjun Menon, Chester Rebeiro, and Kamakoti Veezhinathan (IIT Madras, India; BITS Pilani, India) In this era of IoT devices, security is very often traded off for smaller device footprint and low power consumption. Considering the exponentially growing security threats of IoT and cyber-physical systems, it is important that these devices have built-in features that enhance security. In this paper, we present Shakti-MS, a lightweight RISC-V processor with built-in support for both temporal and spatial memory protection. At run time, Shakti-MS can detect and stymie memory misuse in C and C++ programs, with minimum runtime overheads. The solution uses a novel implementation of fat-pointers to efficiently detect misuse of pointers at runtime. Our proposal is to use stack-based cookies for crafting fat-pointers instead of having object-based identifiers. We store the fat-pointer on the stack, which eliminates the use of shadow memory space, or any table to store the pointer metadata. This reduces the storage overheads by a great extent. The cookie also helps to preserve control flow of the program by ensuring that the return address never gets modified by vulnerabilities like buffer overflows. Shakti-MS introduces new instructions in the microprocessor hardware, and also a modified compiler that automatically inserts these new instructions to enable memory protection. This co-design approach is intended to reduce runtime and area overheads, and also provides an end-to-end solution. The hardware has an area overhead of 700 LUTs on a Xilinx Virtex Ultrascale FPGA and 4100 cells on an open 55nm technology node. The clock frequency of the processor is not affected by the security extensions, while there is a marginal increase in the code size by 11% with an average runtime overhead of 13%. @InProceedings{LCTES19p19, author = {Sourav Das and R. Harikrishnan Unnithan and Arjun Menon and Chester Rebeiro and Kamakoti Veezhinathan}, title = {SHAKTI-MS: A RISC-V Processor for Memory Safety in C}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {19--32}, doi = {10.1145/3316482.3326356}, year = {2019}, } Publisher's Version Artifacts Functional Results Replicated |
|
Venkataramani, Vanchinathan |
LCTES '19: "SPECTRUM: A Software Defined ..."
SPECTRUM: A Software Defined Predictable Many-Core Architecture for LTE Baseband Processing
Vanchinathan Venkataramani, Aditi Kulkarni, Tulika Mitra, and Li-Shiuan Peh (National University of Singapore, Singapore) Wireless communication standards such as Long Term Evolution (LTE) are rapidly changing to support the high data rate of wireless devices. The physical layer baseband processing has strict real-time deadlines, especially in the next-generation applications enabled by the 5G standard. Existing base station transceivers utilize customized Digital Signal Processing (DSP) cores or fixed-function hardware accelerators for physical layer baseband processing. However, these approaches incur significant non-recurring engineering costs and are inflexible to newer standards or updates. Software programmable processors offer more adaptability. However, it is challenging to sustain guaranteed worst-case latency and throughput at reasonably low power on shared-memory many-core architectures featuring inherently unpredictable design choices, such as caches and networks-on-chip. We propose SPECTRUM, a predictable software defined many-core architecture that exploits the massive parallelism of LTE baseband processing. The focus is on designing scalable, lightweight hardware that can be programmed and defined by sophisticated software mechanisms. SPECTRUM employs hundreds of lightweight in-order cores augmented with custom instructions that provide predictable timing, a purely software-scheduled on-chip network that orchestrates the communication to avoid any contention, and per-core software-controlled scratchpad memory with deterministic access latency. Compared to a many-core architecture like Skylake-SP (average power 215W) that drops 14% of packets at high traffic load, 256-core SPECTRUM by definition has a zero packet drop rate at a significantly lower average power of 24W. SPECTRUM consumes 2.11x lower power than a C66x DSP cores+accelerator platform in baseband processing. SPECTRUM is also well-positioned to support future 5G workloads. @InProceedings{LCTES19p82, author = {Vanchinathan Venkataramani and Aditi Kulkarni and Tulika Mitra and Li-Shiuan Peh}, title = {SPECTRUM: A Software Defined Predictable Many-Core Architecture for LTE Baseband Processing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {82--96}, doi = {10.1145/3316482.3326352}, year = {2019}, } Publisher's Version |
|
Venkatesh, R. |
LCTES '19: "Imprecision in WCET Estimates ..."
Imprecision in WCET Estimates Due to Library Calls and How to Reduce It (WIP Paper)
Martin Becker, Samarjit Chakraborty, Ravindra Metta, and R. Venkatesh (TU Munich, Germany; TCS Research, India) One of the main difficulties in estimating the Worst Case Execution Time (WCET) at the binary level is that machine instructions do not allow inferring call contexts as precisely as source code, since compiler optimizations obfuscate control flow and type information. On the other hand, WCET estimation at source code level can be precise in tracking call contexts, but it is pessimistic for functions that are not available as source code. In this paper we propose approaches to join binary-level and source-level analyses, to get the best out of both. We present the arising problems in detail, evaluate the approaches qualitatively, and highlight their trade-offs. @InProceedings{LCTES19p208, author = {Martin Becker and Samarjit Chakraborty and Ravindra Metta and R. Venkatesh}, title = {Imprecision in WCET Estimates Due to Library Calls and How to Reduce It (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {208--212}, doi = {10.1145/3316482.3326353}, year = {2019}, } Publisher's Version |
|
Wang, Chundong |
LCTES '19: "Crash Recoverable ARMv8-Oriented ..."
Crash Recoverable ARMv8-Oriented B+-Tree for Byte-Addressable Persistent Memory
Chundong Wang, Sudipta Chattopadhyay, and Gunavaran Brihadiswarn (Singapore University of Technology and Design, Singapore; University of Moratuwa, Sri Lanka) Byte-addressable non-volatile memory (NVM) promises persistent memory. Concretely, ARM processors have incorporated architectural support to utilize NVM. In this paper, we consider tailoring the important B+-tree for NVM operated by a 64-bit ARMv8 processor. We first conduct an empirical study of performance overheads in writing and reading data for a B+-tree with an ARMv8 processor, including the time cost of cache line flushes and memory fences for crash consistency as well as the execution time of binary search compared to that of linear search. We hence identify the key weaknesses in the design of B+-trees on the ARMv8 architecture. Accordingly, we develop a new B+-tree variant, namely, the crash recoverable ARMv8-oriented B+-tree (Crab-tree). To insert and delete data at runtime, Crab-tree selectively chooses one of two strategies, i.e., copy on write and shifting in place, depending on which one incurs the lower consistency cost. Crab-tree regulates a strict execution order in both strategies and recovers the tree structure in case of crashes. We have evaluated Crab-tree on a Raspberry Pi 3 Model B+ with emulated NVM. Experiments show that Crab-tree significantly outperforms state-of-the-art B+-trees designed for persistent memory by up to 2.6x and 3.2x in write and read performance, respectively, with both consistency and scalability achieved. @InProceedings{LCTES19p33, author = {Chundong Wang and Sudipta Chattopadhyay and Gunavaran Brihadiswarn}, title = {Crash Recoverable ARMv8-Oriented B+-Tree for Byte-Addressable Persistent Memory}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {33--44}, doi = {10.1145/3316482.3326358}, year = {2019}, } Publisher's Version |
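As a concrete illustration of the copy-on-write strategy and of the flush-plus-fence persist step mentioned in the abstract, here is a minimal C sketch for an AArch64 (ARMv8) toolchain. The persist primitives, node layout, and pre-allocated node are simplifying assumptions rather than Crab-tree's actual code, and the right cache-maintenance instruction (dc cvac vs. dc cvap) depends on the core and on how the NVM is attached.

```c
#include <stddef.h>
#include <stdint.h>

#define FANOUT 16
#define CACHE_LINE 64

typedef struct {
    uint64_t num;            /* number of live keys */
    uint64_t keys[FANOUT];
    uint64_t vals[FANOUT];
} leaf_node;

/* Hypothetical persist primitives: clean each cache line of the range, then
 * order the cleans before any later store (AArch64 inline assembly). */
static inline void flush_line(const void *p) {
    __asm__ volatile("dc cvac, %0" :: "r"(p) : "memory");
}
static inline void fence(void) {
    __asm__ volatile("dsb ish" ::: "memory");
}
static void persist(const void *p, size_t len) {
    for (size_t off = 0; off < len; off += CACHE_LINE)
        flush_line((const char *)p + off);
    fence();
}

/* Copy-on-write insert: build a fully persisted copy of the node that
 * includes the new entry, then atomically repoint the parent slot to it.
 * A crash before the pointer switch leaves the old node intact, so no
 * partially shifted state is ever visible.  The caller guarantees the node
 * is not full (a full node would be split) and supplies a free NVM node. */
leaf_node *leaf_insert_cow(leaf_node **slot, leaf_node *fresh,
                           uint64_t key, uint64_t val) {
    leaf_node *old = *slot;
    uint64_t i = 0, j = 0;
    while (i < old->num && old->keys[i] < key) {
        fresh->keys[j] = old->keys[i];
        fresh->vals[j] = old->vals[i];
        i++; j++;
    }
    fresh->keys[j] = key;
    fresh->vals[j] = val;
    j++;
    for (; i < old->num; i++, j++) {
        fresh->keys[j] = old->keys[i];
        fresh->vals[j] = old->vals[i];
    }
    fresh->num = j;
    persist(fresh, sizeof *fresh);   /* new node becomes durable first        */

    *slot = fresh;                   /* 8-byte aligned store, atomic on ARMv8 */
    persist(slot, sizeof *slot);
    return old;                      /* old node can now be reclaimed         */
}
```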
|
Weast, Jack |
LCTES '19: "An Open, Transparent, Industry-Driven ..."
An Open, Transparent, Industry-Driven Approach to AV Safety (Keynote)
Jack Weast (Intel, USA) At Intel and Mobileye, saving lives drives us. But in the world of automated driving, we believe safety is not merely an impact of AD, but the bedrock on which we will build this industry. And so we have proposed Responsibility-Sensitive Safety (RSS), a formal model to define what it means to drive safely: a formulation of the implicit traffic rules that enable human-like negotiation on roads that will contain a mix of machine- and human-driven vehicles. We intend this open, industry-driven model to drive industry, academic, and government discussion; let’s come together as an industry and use RSS as a starting point to clarify safety today, to enable the autonomous tomorrow. @InProceedings{LCTES19p4, author = {Jack Weast}, title = {An Open, Transparent, Industry-Driven Approach to AV Safety (Keynote)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {4--4}, doi = {10.1145/3316482.3338094}, year = {2019}, } Publisher's Version |
|
Wei, Ran |
LCTES '19: "From Java to Real-Time Java: ..."
From Java to Real-Time Java: A Model-Driven Methodology with Automated Toolchain (Invited Paper)
Wanli Chang, Shuai Zhao, Ran Wei, Andy Wellings, and Alan Burns (University of York, UK) Real-time systems are receiving increasing attention with emerging application scenarios that are safety-critical, complex in functionality, demanding in timing-related performance, and cost-sensitive, such as autonomous vehicles. Development of real-time systems is error-prone and highly dependent on sophisticated domain expertise, making it a costly process. There is a trend of existing software that was written without real-time support being re-developed to realise real-time features, e.g., in big data technology. This paper utilises the principles of model-driven engineering (MDE) and proposes the first methodology that automatically converts standard time-sharing Java applications to real-time Java applications. It opens up a new research direction on development automation of real-time programming languages and inspires many research questions that can be jointly investigated by the embedded systems, programming languages, and MDE communities. @InProceedings{LCTES19p123, author = {Wanli Chang and Shuai Zhao and Ran Wei and Andy Wellings and Alan Burns}, title = {From Java to Real-Time Java: A Model-Driven Methodology with Automated Toolchain (Invited Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {123--134}, doi = {10.1145/3316482.3326360}, year = {2019}, } Publisher's Version |
|
Wellings, Andy |
LCTES '19: "From Java to Real-Time Java: ..."
From Java to Real-Time Java: A Model-Driven Methodology with Automated Toolchain (Invited Paper)
Wanli Chang, Shuai Zhao, Ran Wei, Andy Wellings, and Alan Burns (University of York, UK) Real-time systems are receiving increasing attention with emerging application scenarios that are safety-critical, complex in functionality, demanding in timing-related performance, and cost-sensitive, such as autonomous vehicles. Development of real-time systems is error-prone and highly dependent on sophisticated domain expertise, making it a costly process. There is a trend of existing software that was written without real-time support being re-developed to realise real-time features, e.g., in big data technology. This paper utilises the principles of model-driven engineering (MDE) and proposes the first methodology that automatically converts standard time-sharing Java applications to real-time Java applications. It opens up a new research direction on development automation of real-time programming languages and inspires many research questions that can be jointly investigated by the embedded systems, programming languages, and MDE communities. @InProceedings{LCTES19p123, author = {Wanli Chang and Shuai Zhao and Ran Wei and Andy Wellings and Alan Burns}, title = {From Java to Real-Time Java: A Model-Driven Methodology with Automated Toolchain (Invited Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {123--134}, doi = {10.1145/3316482.3326360}, year = {2019}, } Publisher's Version |
|
Wong, Weng-Fai |
LCTES '19: "ApproxSymate: Path Sensitive ..."
ApproxSymate: Path Sensitive Program Approximation using Symbolic Execution
Himeshi De Silva, Andrew E. Santosa, Nhut-Minh Ho, and Weng-Fai Wong (National University of Singapore, Singapore) Approximate computing, a technique that forgoes quantifiable output accuracy in favor of performance gains, is useful for improving the energy efficiency of error-resilient software, especially in the embedded setting. The identification of program components that can tolerate error plays a crucial role in balancing the energy vs. accuracy trade off in approximate computing. Manual analysis for approximability is not scalable and therefore automated tools which employ static or dynamic analysis have been proposed. However, static techniques are often coarse in their approximations while dynamic efforts incur high overhead. In this work we present ApproxSymate, a framework for automatically identifying program approximations using symbolic execution. ApproxSymate first statically computes symbolic error expressions for program components and then uses a dynamic sensitivity analysis to compute their approximability. A unique feature of this tool is that it explores the previously not considered dimension of program path for approximation which enables safer transformations. Our evaluation shows that ApproxSymate averages about 96% accuracy in identifying the same approximations found in manually annotated benchmarks, outperforming existing automated techniques. @InProceedings{LCTES19p148, author = {Himeshi De Silva and Andrew E. Santosa and Nhut-Minh Ho and Weng-Fai Wong}, title = {ApproxSymate: Path Sensitive Program Approximation using Symbolic Execution}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {148--162}, doi = {10.1145/3316482.3326341}, year = {2019}, } Publisher's Version Artifacts Functional Results Replicated |
|
Wu, Hui |
LCTES '19: "WCET-Aware Hyper-Block Construction ..."
WCET-Aware Hyper-Block Construction for Clustered VLIW Processors
Xuesong Su, Hui Wu, and Jingling Xue (UNSW, Australia) Hyper-blocks can significantly improve instruction-level parallelism on a wide range of super-scalar and VLIW processors. However, most hyper-block construction approaches aim at minimizing the average-case execution time of a program. In real-time embedded systems, minimizing the worst-case execution time (WCET) of a program is the primary goal of an optimizing compiler. We investigate the hyper-block construction problem for a program executed on a clustered VLIW processor such that the WCET of the program is minimized, and propose a novel heuristic approach considering tail duplications. Our approach is underpinned by a novel priority scheme and a precise tail duplication cost model for computing the WCET of a program. We have implemented our approach in Trimaran 4.0, and compared it with the state-of-the-art approach by using a set of 8 benchmark suites. The experimental results show that our approach achieves a maximum WCET improvement of 20.37% and an average WCET improvement of 11.59%. @InProceedings{LCTES19p110, author = {Xuesong Su and Hui Wu and Jingling Xue}, title = {WCET-Aware Hyper-Block Construction for Clustered VLIW Processors}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {110--122}, doi = {10.1145/3316482.3326349}, year = {2019}, } Publisher's Version |
|
Wu, Sen |
LCTES '19: "Automating the Generation ..."
Automating the Generation of Hardware Component Knowledge Bases
Luke Hsiao, Sen Wu, Nicholas Chiang, Christopher Ré, and Philip Levis (Stanford University, USA; Gunn High School, USA) Hardware component databases are critical resources in designing embedded systems. Since generating these databases requires hundreds of thousands of hours of manual data entry, they are proprietary, limited in the data they provide, and have many random data entry errors. We present a machine-learning-based approach for automating the generation of component databases directly from datasheets. Extracting data directly from datasheets is challenging because: (1) the data is relational in nature and relies on non-local context, (2) the documents are filled with technical jargon, and (3) the datasheets are PDFs, a format that decouples visual locality from locality in the document. The proposed approach uses a rich data model and weak supervision to address these challenges. We evaluate the approach on datasheets of three classes of hardware components and achieve an average quality of 75 F1 points, which is comparable to existing human-curated knowledge bases. We perform two application studies that demonstrate the extraction of multiple data modalities such as numerical properties and images. We show how different sources of supervision such as heuristics and human labels have distinct advantages that can be utilized together within a single methodology to automatically generate hardware component knowledge bases. @InProceedings{LCTES19p163, author = {Luke Hsiao and Sen Wu and Nicholas Chiang and Christopher Ré and Philip Levis}, title = {Automating the Generation of Hardware Component Knowledge Bases}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {163--176}, doi = {10.1145/3316482.3326344}, year = {2019}, } Publisher's Version Artifacts Reusable Results Replicated |
|
Xue, Chun Jason |
LCTES '19: "1+1>2: Variation-Aware ..."
1+1>2: Variation-Aware Lifetime Enhancement for Embedded 3D NAND Flash Systems
Yejia Di, Liang Shi, Shuo-Han Chen, Chun Jason Xue, and Edwin H.-M. Sha (East China Normal University, China; Chongqing University, China; Academia Sinica, Taiwan; City University of Hong Kong, China) Three-dimensional (3D) NAND flash has been developed to boost the storage capacity by stacking memory cells vertically. One critical characteristic of 3D NAND flash is its large endurance variation. With this characteristic, the lifetime will be determined by the unit with the worst endurance. However, few works can exploit the variations with acceptable overhead for lifetime improvement. In this paper, a variation-aware lifetime improvement framework is proposed. The basic idea is motivated by an observation that there is an elegant matching between unit endurance and wearing variations when wear leveling and implicit compression are applied together. To achieve the matching goal, the framework is designed from three-type-unit levels, including cell, line, and block, respectively. Series of evaluations are conducted, and the evaluation results show that the lifetime improvement is encouraging, better than that of the combination with the state-of-the-art schemes. @InProceedings{LCTES19p45, author = {Yejia Di and Liang Shi and Shuo-Han Chen and Chun Jason Xue and Edwin H.-M. Sha}, title = {1+1>2: Variation-Aware Lifetime Enhancement for Embedded 3D NAND Flash Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {45--56}, doi = {10.1145/3316482.3326359}, year = {2019}, } Publisher's Version |
|
Xue, Jingling |
LCTES '19: "WCET-Aware Hyper-Block Construction ..."
WCET-Aware Hyper-Block Construction for Clustered VLIW Processors
Xuesong Su, Hui Wu, and Jingling Xue (UNSW, Australia) Hyper-blocks can significantly improve instruction-level parallelism on a wide range of super-scalar and VLIW processors. However, most hyper-block construction approaches aim at minimizing the average-case execution time of a program. In real-time embedded systems, minimizing the worst-case execution time (WCET) of a program is the primary goal of an optimizing compiler. We investigate the hyper-block construction problem for a program executed on a clustered VLIW processor such that the WCET of the program is minimized, and propose a novel heuristic approach considering tail duplications. Our approach is underpinned by a novel priority scheme and a precise tail duplication cost model for computing the WCET of a program. We have implemented our approach in Trimaran 4.0, and compared it with the state-of-the-art approach by using a set of 8 benchmark suites. The experimental results show that our approach achieves a maximum WCET improvement of 20.37% and an average WCET improvement of 11.59%. @InProceedings{LCTES19p110, author = {Xuesong Su and Hui Wu and Jingling Xue}, title = {WCET-Aware Hyper-Block Construction for Clustered VLIW Processors}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {110--122}, doi = {10.1145/3316482.3326349}, year = {2019}, } Publisher's Version |
|
Yadavalli, S. Bharadwaj |
LCTES '19: "Raising Binaries to LLVM IR ..."
Raising Binaries to LLVM IR with MCTOLL (WIP Paper)
S. Bharadwaj Yadavalli and Aaron Smith (Microsoft, USA) The need to analyze and execute binaries from legacy ISAs on new or different ISAs has been addressed in a variety of ways over the past few decades. Solutions using complementary static and dynamic binary translation techniques have been deployed in most real-world situations. As new ISAs are designed and legacy ISAs re-examined, the need for binary translation infrastructure re-emerges, and the infrastructure needs to be re-engineered all over again. Work is in progress with the goal of making such re-engineering efforts easier by using software tools that would in any case be developed or available for a new or existing ISA. To that end, this paper presents a static binary raiser that translates binaries to LLVM IR. Native binaries for a new ISA are generated from the raised LLVM IR using the LLVM compiler backend. This technique enables development of a single raiser per legacy ISA, irrespective of the new target ISA. The result of such a raiser can then leverage the compiler back-ends of new ISAs, thus simplifying the development of a binary translator for the new ISA. This work leverages the existing LLVM infrastructure to implement a static raiser that currently supports raising x64 and Arm32 binaries to LLVM IR. The raiser is built as an LLVM tool, similar to llvm-objdump or clang, and does not have any dependencies outside of those needed to build LLVM. This paper describes the phases of the raiser and gives the current status and limitations. @InProceedings{LCTES19p213, author = {S. Bharadwaj Yadavalli and Aaron Smith}, title = {Raising Binaries to LLVM IR with MCTOLL (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {213--218}, doi = {10.1145/3316482.3326354}, year = {2019}, } Publisher's Version Artifacts Functional |
|
Yu, Yongseung |
LCTES '19: "A Compiler-Based Approach ..."
A Compiler-Based Approach for GPGPU Performance Calibration using TLP Modulation (WIP Paper)
Yongseung Yu, Seokwon Kang, and Yongjun Park (Hanyang University, South Korea) Modern GPUs are the most successful accelerators as they provide outstanding performance gains by using CUDA or OpenCL programming models. For maximum performance, programmers typically try to maximize the number of thread blocks of target programs, and GPUs also generally attempt to allocate the maximum number of thread blocks to their GPU cores. However, many recent studies have pointed out that simply allocating the maximum number of thread blocks to GPU cores does not always guarantee the best performance, and identifying the proper number of thread blocks per GPU core is a major challenge. Despite these studies, most existing architectural techniques cannot be directly applied to current GPU hardware, and the optimal number of thread blocks can vary significantly depending on the target GPU and application characteristics. To solve these problems, this study proposes a just-in-time thread block number adjustment system using CUDA binary modification upon an LLVM compiler framework, referred to as the CTA-Limiter, in order to dynamically maximize GPU performance on real GPUs without reprogramming. The framework gradually reduces the number of concurrent thread blocks of target CUDA workloads using extra shared memory allocation, and compares the execution time with the previous version to automatically identify the optimal number of co-running thread blocks per GPU core. The results showed meaningful performance improvements, averaging 30%, 40%, and 44% on GTX 960, GTX 1050, and GTX 1080 Ti, respectively. @InProceedings{LCTES19p193, author = {Yongseung Yu and Seokwon Kang and Yongjun Park}, title = {A Compiler-Based Approach for GPGPU Performance Calibration using TLP Modulation (WIP Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {193--197}, doi = {10.1145/3316482.3326343}, year = {2019}, } Publisher's Version |
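The TLP-modulation mechanism rests on a simple piece of occupancy arithmetic: requesting extra, otherwise unused shared memory per thread block lowers how many blocks an SM can keep resident. The sketch below only illustrates that arithmetic with placeholder capacities; the actual CTA-Limiter rewrites the CUDA binary to request the extra shared memory and then times each configuration to pick the fastest one.

```c
#include <stdio.h>

/* Placeholder per-SM limits; real values come from a device query and also
 * involve registers, warps, and shared-memory allocation granularity. */
enum { SMEM_PER_SM = 96 * 1024, MAX_BLOCKS_PER_SM = 32 };

/* How many thread blocks fit on one SM given their shared-memory footprint. */
static int resident_blocks(int smem_per_block) {
    int by_smem = smem_per_block ? SMEM_PER_SM / smem_per_block : MAX_BLOCKS_PER_SM;
    return by_smem < MAX_BLOCKS_PER_SM ? by_smem : MAX_BLOCKS_PER_SM;
}

/* Smallest extra request that caps residency at `target` blocks per SM. */
static int padding_for(int static_smem, int target) {
    int needed = SMEM_PER_SM / target;
    return needed > static_smem ? needed - static_smem : 0;
}

int main(void) {
    int static_smem = 4 * 1024;   /* shared memory the kernel already uses */
    for (int target = resident_blocks(static_smem); target >= 1; target /= 2) {
        int pad = padding_for(static_smem, target);
        printf("target %2d blocks/SM -> request %6d extra bytes (resident: %d)\n",
               target, pad, resident_blocks(static_smem + pad));
    }
    return 0;
}
```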
|
Zhang, Lei |
LCTES '19: "IA-Graph Based Inter-App Conflicts ..."
IA-Graph Based Inter-App Conflicts Detection in Open IoT Systems
Xinyi Li, Lei Zhang, and Xipeng Shen (Chang'an University, China; North Carolina State University, USA) This paper tackles the problem of detecting potential conflicts among independently developed apps that are to be installed into an open Internet of Things (IoT) environment. It provides a new set of definitions and categorizations of the conflicts to more precisely characterize the nature of the problem, and employs a graph representation (named IA Graph) for formally representing IoT controls and inter-app interplays. It provides an efficient conflicts detection algorithm implemented on a SmartThings compiler and shows significantly improved efficacy over prior solutions. @InProceedings{LCTES19p135, author = {Xinyi Li and Lei Zhang and Xipeng Shen}, title = {IA-Graph Based Inter-App Conflicts Detection in Open IoT Systems}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {135--147}, doi = {10.1145/3316482.3326350}, year = {2019}, } Publisher's Version |
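For illustration only, the simplest class of conflict the paper formalises, two independently installed apps driving the same device attribute to different values, can be written as a pairwise check. The apps, devices, and record layout below are made up for the example; IA-Graph itself captures much richer control relations (triggers, conditions, and indirect interplay) than this direct-conflict test.

```c
#include <stdio.h>
#include <string.h>

/* Toy record: app X commands attribute A of device D to value V. */
typedef struct {
    const char *app, *device, *attribute, *value;
} command;

/* Two different apps conflict directly if they set the same device
 * attribute to different values. */
static int direct_conflict(const command *a, const command *b) {
    return strcmp(a->app, b->app) != 0 &&
           strcmp(a->device, b->device) == 0 &&
           strcmp(a->attribute, b->attribute) == 0 &&
           strcmp(a->value, b->value) != 0;
}

int main(void) {
    command cmds[] = {
        { "VacationLighting", "porch_light", "switch", "on"  },
        { "EnergySaver",      "porch_light", "switch", "off" },
        { "EnergySaver",      "thermostat",  "mode",   "eco" },
    };
    int n = (int)(sizeof cmds / sizeof cmds[0]);
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (direct_conflict(&cmds[i], &cmds[j]))
                printf("conflict: %s vs %s on %s.%s\n",
                       cmds[i].app, cmds[j].app,
                       cmds[i].device, cmds[i].attribute);
    return 0;
}
```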
|
Zhao, Shuai |
LCTES '19: "From Java to Real-Time Java: ..."
From Java to Real-Time Java: A Model-Driven Methodology with Automated Toolchain (Invited Paper)
Wanli Chang, Shuai Zhao, Ran Wei, Andy Wellings, and Alan Burns (University of York, UK) Real-time systems are receiving increasing attention with emerging application scenarios that are safety-critical, complex in functionality, demanding in timing-related performance, and cost-sensitive, such as autonomous vehicles. Development of real-time systems is error-prone and highly dependent on sophisticated domain expertise, making it a costly process. There is a trend of existing software that was written without real-time support being re-developed to realise real-time features, e.g., in big data technology. This paper utilises the principles of model-driven engineering (MDE) and proposes the first methodology that automatically converts standard time-sharing Java applications to real-time Java applications. It opens up a new research direction on development automation of real-time programming languages and inspires many research questions that can be jointly investigated by the embedded systems, programming languages, and MDE communities. @InProceedings{LCTES19p123, author = {Wanli Chang and Shuai Zhao and Ran Wei and Andy Wellings and Alan Burns}, title = {From Java to Real-Time Java: A Model-Driven Methodology with Automated Toolchain (Invited Paper)}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {123--134}, doi = {10.1145/3316482.3326360}, year = {2019}, } Publisher's Version |
|
Zhuo, Heng |
LCTES '19: "BitBench: A Benchmark for ..."
BitBench: A Benchmark for Bitstream Computing
Kyle Daruwalla, Heng Zhuo, Carly Schulz, and Mikko Lipasti (University of Wisconsin-Madison, USA) With the recent increase in ultra-low power applications, researchers are investigating alternative architectures that can operate on streaming input data. These target use cases require complex algorithms that must be evaluated under a real-time deadline, but also satisfy the strict available power budget. Stochastic computing (SC) is an example of an alternative paradigm where the data is represented as single bitstreams, allowing designers to implement operations such as multiplication using a simple AND gate. Consequently, the resulting design is both low area and low power. Similarly, traditional digital filters can take advantage of streaming inputs to effectively choose coefficients, resulting in a low cost implementation. In this work, we construct six key algorithms to characterize bitstream computing. We present these algorithms as a new benchmark suite: BitBench. @InProceedings{LCTES19p177, author = {Kyle Daruwalla and Heng Zhuo and Carly Schulz and Mikko Lipasti}, title = {BitBench: A Benchmark for Bitstream Computing}, booktitle = {Proc.\ LCTES}, publisher = {ACM}, pages = {177--187}, doi = {10.1145/3316482.3326355}, year = {2019}, } Publisher's Version Artifacts Functional Results Replicated |
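The "multiply with a single AND gate" property of stochastic computing mentioned in the abstract is easy to see in a few lines of C. This is a generic unipolar-encoding demo rather than code from BitBench, and it uses rand() where a real design would use hardware stream generators such as LFSRs with comparators.

```c
#include <stdio.h>
#include <stdlib.h>

#define N 100000   /* bitstream length: longer streams give lower variance */

/* One bit of a unipolar stochastic stream encoding probability p. */
static int sc_bit(double p) {
    return ((double)rand() / RAND_MAX) < p;
}

int main(void) {
    double a = 0.6, b = 0.5;   /* operands encoded as bit probabilities */
    long ones = 0;
    for (long i = 0; i < N; i++)
        ones += sc_bit(a) & sc_bit(b);   /* the AND gate is the multiplier */
    printf("exact a*b = %.3f, stochastic estimate = %.3f\n",
           a * b, (double)ones / N);
    return 0;
}
```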
74 authors