Workshop AISTA 2022 – Author Index
Cai, Yifeng
AISTA '22: "TEESlice: Slicing DNN Models ..."
TEESlice: Slicing DNN Models for Secure and Efficient Deployment
Ziqi Zhang, Lucien K. L. Ng, Bingyan Liu, Yifeng Cai, Ding Li, Yao Guo, and Xiangqun Chen (Peking University, China; Chinese University of Hong Kong, China)

Providing machine learning services is becoming a profitable business for IT companies; AI-related business is estimated to bring trillions of dollars to the global economy. When selling machine learning services, companies must consider two important aspects: the security of the DNN model and the inference latency. DNN models are expensive to train and represent precious intellectual property, and inference latency matters because modern DNN models are usually deployed for time-sensitive tasks, where latency directly affects the user's experience. Existing solutions cannot achieve a good balance between these two factors. To solve this problem, we propose TEESlice, which provides a strong security guarantee and low service latency at the same time. TEESlice utilizes two kinds of specialized hardware: Trusted Execution Environments (TEEs) and existing AI accelerators. When a company wants to deploy a private DNN model on a user's device, TEESlice extracts the private information into model slices. The slices are attached to a public, privacy-excluded backbone to form a hybrid model whose performance is similar to that of the original model. When deploying the hybrid model, the lightweight privacy-related slices are secured by the TEE and the public backbone runs on the AI accelerators. The TEE provides a strong guarantee of model privacy, and the accelerators reduce the computation latency of the heavy model backbone. Experimental results show that TEESlice achieves more than a 10x throughput improvement with the same level of security as placing the whole model inside the TEE. If the model provider additionally wants to verify the correctness of the accelerator's computation, TEESlice still achieves a 3-4x performance improvement.

@InProceedings{AISTA22p1,
  author    = {Ziqi Zhang and Lucien K. L. Ng and Bingyan Liu and Yifeng Cai and Ding Li and Yao Guo and Xiangqun Chen},
  title     = {TEESlice: Slicing DNN Models for Secure and Efficient Deployment},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {1--8},
  doi       = {10.1145/3536168.3543299},
  year      = {2022},
}

Publisher's Version
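To make the hybrid-deployment scheme in the abstract concrete, below is a minimal, hypothetical Python (PyTorch) sketch: a public, privacy-excluded backbone that would run on an AI accelerator, and a lightweight private slice whose parameters would stay inside a TEE. The class names and toy layer shapes are invented for illustration; this is not the paper's implementation, and the TEE boundary is only mimicked by keeping the slice in a separate module.

import torch
import torch.nn as nn

class PublicBackbone(nn.Module):
    # Privacy-excluded backbone; in TEESlice's design this heavy part
    # would run on the AI accelerator.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.features(x)

class PrivateSlice(nn.Module):
    # Lightweight privacy-related slice; in TEESlice's design its
    # parameters would never leave the TEE.
    def __init__(self, num_classes=10):
        super().__init__()
        self.head = nn.Linear(16, num_classes)

    def forward(self, h):
        return self.head(h)

def hybrid_inference(x, backbone, slice_in_tee):
    h = backbone(x)          # heavy computation, offloaded to the accelerator
    return slice_in_tee(h)   # private parameters stay on the protected side

logits = hybrid_inference(torch.randn(1, 3, 32, 32), PublicBackbone(), PrivateSlice())
print(logits.shape)  # torch.Size([1, 10])

The point of the split is that only the small head incurs TEE overhead, while the bulk of the FLOPs run at accelerator speed, which is consistent with the throughput gains the abstract reports.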
Chen, Xiangqun
AISTA '22: TEESlice: Slicing DNN Models for Secure and Efficient Deployment
(abstract and citation under Cai, Yifeng)
Guo, Yao
AISTA '22: TEESlice: Slicing DNN Models for Secure and Efficient Deployment
(abstract and citation under Cai, Yifeng)
Li, Ding
AISTA '22: TEESlice: Slicing DNN Models for Secure and Efficient Deployment
(abstract and citation under Cai, Yifeng)
Liu, Bingyan
AISTA '22: TEESlice: Slicing DNN Models for Secure and Efficient Deployment
(abstract and citation under Cai, Yifeng)
Ng, Lucien K. L.
AISTA '22: TEESlice: Slicing DNN Models for Secure and Efficient Deployment
(abstract and citation under Cai, Yifeng)
Picus, Oskar
AISTA '22: "Bugsby: A Tool Support for ..."
Bugsby: A Tool Support for Bug Triage Automation
Oskar Picus and Camelia Serban (Babeș-Bolyai University, Romania)

Within the software development life cycle, tackling issues has become an essential activity that plays an important role in maintaining software quality. Once a bug report has been submitted, it is important to triage it, establishing its severity and whether it duplicates an existing report. Several tools have been developed for tracking software issues, but few, if any, address bug triage automatically. Manually assessing these reports is time-consuming and increases an organization's cost of building and maintaining its software systems. Aiming to overcome these limitations of existing systems, in this paper we propose a new tool, Bugsby: an open-source web application serving as an issue tracking system that implements several natural language processing-based features to automatically analyze a bug report. An empirical validation of the proposed tool was performed on five projects for severity prediction, duplicate retrieval, and offensive language detection.

@InProceedings{AISTA22p17,
  author    = {Oskar Picus and Camelia Serban},
  title     = {Bugsby: A Tool Support for Bug Triage Automation},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {17--20},
  doi       = {10.1145/3536168.3543301},
  year      = {2022},
}

Publisher's Version
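As a rough illustration of one of the NLP-based triage features mentioned above (duplicate retrieval), the sketch below ranks existing bug reports by TF-IDF cosine similarity to a newly submitted one. This is a generic scikit-learn baseline, not Bugsby's actual pipeline, and the sample reports are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

existing_reports = [
    "App crashes when opening settings page",
    "Login button unresponsive on mobile",
    "Crash on settings screen after update",
]
new_report = "Application crashes while opening the settings menu"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(existing_reports + [new_report])

# Similarity of the new report (last row) against every existing report.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best = scores.argmax()
print(f"Most similar: {existing_reports[best]!r} (score={scores[best]:.2f})")

A real triage system would additionally threshold the score before flagging a duplicate, and would use trained classifiers for the severity and offensive-language features.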
Serban, Camelia
AISTA '22: Bugsby: A Tool Support for Bug Triage Automation
(abstract and citation under Picus, Oskar)
Tiutin, Cristina-Maria
AISTA '22: "Test Case Prioritization Based ..."
Test Case Prioritization Based on Neural Networks Classification
Cristina-Maria Tiutin and Andreea Vescan (Babeș-Bolyai University, Romania)

Regression testing focuses on validating modified software, in order to detect whether new errors were introduced into previously tested code and to provide confidence that the modifications are correct. Rerunning all test cases would be time-consuming; test case prioritization instead plans an execution order of the test cases so as to achieve the regression testing goals early in the testing phase. In this paper, we propose a Test Case Prioritization based on Neural Networks Classification (TCP-NNC) approach to be used in the test case prioritization strategy. The approach incorporates, among other factors, the associations between requirements, tests, and discovered faults, on which an artificial neural network is trained so that it can predict priorities for new test cases. The proposal is evaluated through experiments on both a real and a synthetic dataset, considering two different sets of features with different neural network architectures. The metrics observed include accuracy, precision, and recall, and the results suggest that the proposed method is feasible and effective. Among the proposed models, the one with the Adam optimizer and a three-layer architecture performs best. Statistical tests are also used to compare the proposed models from several perspectives: NN architecture, optimizer, number of features used, dataset used, and validation method.

@InProceedings{AISTA22p9,
  author    = {Cristina-Maria Tiutin and Andreea Vescan},
  title     = {Test Case Prioritization Based on Neural Networks Classification},
  booktitle = {Proc.\ AISTA},
  publisher = {ACM},
  pages     = {9--16},
  doi       = {10.1145/3536168.3543300},
  year      = {2022},
}

Publisher's Version
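To illustrate the classification setup described above, here is a small hypothetical scikit-learn sketch that trains a network with three hidden layers using the Adam optimizer, echoing the best-performing configuration the abstract mentions, to predict a priority class for new test cases. The features and labels are invented toy data, not the paper's datasets (which encode requirement-test-fault associations).

import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy features per test case (invented for illustration):
# [#linked requirements, #past faults found, execution time in minutes]
X = np.array([[3, 5, 1.2], [1, 0, 0.4], [2, 3, 2.0], [0, 0, 0.3], [4, 6, 1.5]])
y = np.array([2, 0, 1, 0, 2])   # priority classes: 0=low, 1=medium, 2=high

# Three hidden layers, trained with Adam.
clf = MLPClassifier(hidden_layer_sizes=(16, 16, 16), solver="adam",
                    max_iter=2000, random_state=0).fit(X, y)

new_tests = np.array([[2, 4, 1.0]])
print(clf.predict(new_tests))   # predicted priority for a new test case

Sorting test cases by the predicted class (high priority first) would then yield the execution order that the prioritization strategy consumes.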
Vescan, Andreea
AISTA '22: Test Case Prioritization Based on Neural Networks Classification
(abstract and citation under Tiutin, Cristina-Maria)
Zhang, Ziqi
AISTA '22: TEESlice: Slicing DNN Models for Secure and Efficient Deployment
(abstract and citation under Cai, Yifeng)
11 authors