Congratulations! Our paper has been accepted by IEEE TIFS!
Authors: Chenkai Zeng, Debiao He, Qi Feng, Xiaolin Yang, and Qingcai Luo
 
Title: SecureGPT: A Framework for Multi-Party Privacy-Preserving Transformer Inference in GPT
 
Journal: IEEE Transactions on Information Forensics and Security
 
Abstract: Generative Pretrained Transformer (GPT) is an advanced natural language processing (NLP) model that excels at understanding and generating human language. As GPT is increasingly utilized, more and more cloud inference services for pre-trained generative models are being offered. However, when users upload their data to cloud servers to use these inference services, ensuring the privacy and security of their data becomes a challenge. Thus, in this work, we present SecureGPT, a framework for multi-party privacy-preserving transformer inference in GPT, and design a series of building blocks for it, including M2A (conversion of multiplicative shares to additive shares), truncation, division, softmax, and GELU protocols. Specifically, we follow the work of SecureNLP and further explore the M2A protocol for nonlinear functions such as GELU and softmax. We also design multi-party private protocols for GPT's transformer sub-layers. Finally, we prove the security of our framework in the semi-honest adversary model with all-but-one corruptions. We evaluate the runtime of our framework under different party settings, and our implementation achieves up to a 100× improvement over state-of-the-art works.
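
For readers unfamiliar with M2A conversion, the toy Python sketch below illustrates the share semantics in a two-party setting. It is a local simulation only and is not the paper's actual protocol: the ring modulus Q, the trusted dealer, and all names are assumptions made purely for illustration; the real protocol replaces the dealer with cryptographic machinery.

```python
import secrets

Q = 2**64  # ring modulus; an assumed parameter for this illustration

def m2a_trusted_dealer(m0, m1):
    """Toy multiplicative-to-additive (M2A) conversion for two parties.

    P0 holds m0 and P1 holds m1, so the secret is x = m0 * m1 (mod Q).
    A trusted dealer hands out correlated randomness; the parties then
    open masked values and end up with additive shares a0, a1 such that
    a0 + a1 = m0 * m1 (mod Q).  This local simulation only shows the
    arithmetic, not a secure protocol.
    """
    # Dealer: random u for P0, random v for P1, additive shares of u*v.
    u = secrets.randbelow(Q)
    v = secrets.randbelow(Q)
    w0 = secrets.randbelow(Q)
    w1 = (u * v - w0) % Q          # w0 + w1 = u * v (mod Q)

    # Online phase: each party opens its masked multiplicative share.
    d = (m0 - u) % Q               # opened by P0; u hides m0 from P1
    e = (m1 - v) % Q               # opened by P1; v hides m1 from P0

    # Local computation of additive shares:
    # m0*m1 = (d+u)(e+v) = d*e + u*e + d*v + u*v
    a0 = (d * e + u * e + w0) % Q  # computed by P0 from d, e, u, w0
    a1 = (d * v + w1) % Q          # computed by P1 from d, e, v, w1
    return a0, a1

# Sanity check: the additive shares reconstruct the product.
m0 = secrets.randbelow(Q)
m1 = secrets.randbelow(Q)
a0, a1 = m2a_trusted_dealer(m0, m1)
assert (a0 + a1) % Q == (m0 * m1) % Q
```

Additive shares of this form are what let nonlinear layers such as GELU and softmax be evaluated jointly by the parties, which is the role the M2A protocol plays in the framework described above.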