Journal Articles (2 found)
1. Kernel-based adversarial attacks and defenses on support vector classification
Authors: Wanman Li, Xiaozhang Liu, Anli Yan, Jie Yang. Digital Communications and Networks (SCIE, CSCD), 2022, Issue 4, pp. 492-497.
While malicious samples are widely found in many application fields of machine learning, suitable countermeasures have been investigated in the field of adversarial machine learning. Due to the importance and popularity of Support Vector Machines (SVMs), we first describe the evasion attack against SVM classification and then propose a defense strategy in this paper. The evasion attack utilizes the classification surface of the SVM to iteratively find the minimal perturbations that mislead the nonlinear classifier. Specifically, we propose a vulnerability function to measure the vulnerability of SVM classifiers. Using this vulnerability function, we put forward an effective defense strategy, based on kernel optimization of SVMs with the Gaussian kernel, against the evasion attack. Our defense method is verified to be very effective on benchmark datasets, and the SVM classifier becomes more robust after applying our kernel optimization scheme.
Keywords: Adversarial machine learning; Support vector machines; Evasion attack; Vulnerability function; Kernel optimization
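The abstract describes the attack only at a high level; the sketch below illustrates the general idea of a gradient-guided evasion attack on a Gaussian-kernel SVM, iteratively stepping a sample toward the classification surface until the label flips. The toy dataset, step size `eta`, iteration budget, and stopping rule are illustrative assumptions, not the authors' settings, and the paper's vulnerability function and kernel-optimization defense are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Train a nonlinear SVM with a Gaussian (RBF) kernel on a toy dataset.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, y)

def decision_gradient(clf, x):
    """Gradient of f(x) = sum_i a_i y_i K(x_i, x) + b for the Gaussian
    kernel K(x_i, x) = exp(-gamma * ||x_i - x||^2)."""
    sv = clf.support_vectors_          # support vectors x_i
    coef = clf.dual_coef_.ravel()      # a_i * y_i
    diff = sv - x                      # rows are (x_i - x)
    k = np.exp(-clf.gamma * np.sum(diff ** 2, axis=1))
    # df/dx = sum_i coef_i * K_i * 2 * gamma * (x_i - x)
    return (coef * k) @ (2.0 * clf.gamma * diff)

def evasion_attack(clf, x0, eta=0.05, max_iter=500):
    """Take small steps along the decision-function gradient toward the
    classification surface, stopping as soon as the predicted label flips."""
    x = x0.copy()
    side = np.sign(clf.decision_function([x0])[0])
    for _ in range(max_iter):
        if np.sign(clf.decision_function([x])[0]) != side:
            break                      # crossed the surface: classifier misled
        g = decision_gradient(clf, x)
        x -= side * eta * g / (np.linalg.norm(g) + 1e-12)
    return x

x_adv = evasion_attack(clf, X[0])
print("clean label:", clf.predict([X[0]])[0],
      "-> adversarial label:", clf.predict([x_adv])[0])
```

Normalizing the gradient keeps each step length fixed at `eta`, so the accumulated perturbation is roughly proportional to the number of steps needed to reach the surface, which is one plausible way to approximate the minimal perturbation the abstract refers to.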
2. An Explanatory Strategy for Reducing the Risk of Privacy Leaks
Authors: Mingting Liu, Xiaozhang Liu, Anli Yan, Xiulai Li, Gengquan Xie, Xin Tang. Journal of Information Hiding and Privacy Protection, 2021, Issue 4, pp. 181-192.
As machine learning moves into high-risk and sensitive applications such as medical care, autonomous driving, and financial planning, how to interpret the predictions of a black-box model becomes the key to whether people can trust machine learning decisions. Interpretability relies on providing users with additional information or explanations to improve model transparency and help users understand model decisions. However, this information inevitably puts the dataset or model at risk of privacy leaks. We propose a strategy to reduce model privacy leakage for instance-based interpretability techniques. The procedure is as follows. First, the user inputs data into the model; the model computes the prediction confidence of the user's data and returns the prediction results. Meanwhile, the model obtains the prediction confidence of the interpretation dataset. Finally, the interpretation-set sample with the smallest Euclidean distance between its confidence and that of the prediction data is selected as the explainable data. Experimental results show that the Euclidean distance between the confidence of the interpretation data and the confidence of the prediction data selected by this method is very small, which shows that the model's prediction on the interpreted data is very similar to its prediction on the user data. Finally, we demonstrate the accuracy of the explanatory data: we measure the degree to which the real labels of the interpreted data match its predicted labels, as well as the applicability to other network models. The results show that the interpretation method has high accuracy and wide applicability.
Keywords: Machine learning model; data privacy risks; machine learning explanatory strategies
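As a rough illustration of the selection step described in the abstract, the sketch below picks from an interpretation set the sample whose prediction-confidence vector lies closest, in Euclidean distance, to that of the user's input. The classifier (`LogisticRegression`), synthetic data, and the helper name `select_explainable` are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A stand-in black-box classifier and a held-out "interpretation set".
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           n_classes=3, random_state=0)
X_train, X_interp, y_train, y_interp = train_test_split(X, y, test_size=0.3,
                                                        random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def select_explainable(model, x_user, X_interp):
    """Return the interpretation-set sample whose prediction-confidence
    vector is closest (Euclidean distance) to that of the user's input."""
    conf_user = model.predict_proba(x_user.reshape(1, -1))   # (1, n_classes)
    conf_interp = model.predict_proba(X_interp)              # (n, n_classes)
    dists = np.linalg.norm(conf_interp - conf_user, axis=1)
    idx = int(np.argmin(dists))
    return X_interp[idx], dists[idx]

x_user = X_train[0]
x_expl, d = select_explainable(model, x_user, X_interp)
print("user prediction:", model.predict([x_user])[0],
      "| explainable prediction:", model.predict([x_expl])[0],
      "| confidence distance:", round(float(d), 4))
```

Because the explanation is drawn from a disclosed interpretation set rather than from the user's own record, the model behaves almost identically on both samples while nothing private about the user's input is released, which appears to be the source of the privacy reduction claimed above.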