Journal Articles
3 articles found
1. Ship Detection and Recognition Based on Improved YOLOv7 (Cited by: 3)
Authors: Wei Wu, Xiulai Li, Zhuhua Hu, xiaozhang liu. Computers, Materials & Continua (SCIE, EI), 2023, No. 7, pp. 489-498 (10 pages)
In this paper, an advanced YOLOv7 model is proposed to tackle the challenges associated with ship detection and recognition tasks, such as the irregular shapes and varying sizes of ships. The improved model replaces the fixed anchor boxes utilized in conventional YOLOv7 models with a set of more suitable anchor boxes specifically designed based on the size distribution of ships in the dataset. This paper also introduces a novel multi-scale feature fusion module, which comprises Path Aggregation Network (PAN) modules, enabling the efficient capture of ship features across different scales. Furthermore, data preprocessing is enhanced through the application of data augmentation techniques, including random rotation, scaling, and cropping, which serve to bolster data diversity and robustness. The distribution of positive and negative samples in the dataset is balanced using random sampling, ensuring a more accurate representation of real-world scenarios. Comprehensive experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art approaches in terms of both detection accuracy and robustness, highlighting the potential of the improved YOLOv7 model for practical applications in the maritime domain.
Keywords: ship position prediction, target detection, YOLOv7, data augmentation techniques
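The abstract gives no implementation details, but the dataset-specific anchor boxes it mentions are commonly derived by clustering bounding-box sizes. The sketch below illustrates that idea under stated assumptions: the synthetic ship box sizes, the cluster count of 9, and the function name `fit_anchor_boxes` are all illustrative, not the paper's code.

```python
# Hypothetical sketch: deriving dataset-specific anchor boxes by k-means
# clustering of ship bounding-box (width, height) pairs. The data below
# is synthetic; a real pipeline would load boxes from the annotations.
import numpy as np
from sklearn.cluster import KMeans

def fit_anchor_boxes(box_sizes: np.ndarray, n_anchors: int = 9) -> np.ndarray:
    """Cluster (width, height) pairs into n_anchors anchor boxes."""
    km = KMeans(n_clusters=n_anchors, n_init=10, random_state=0)
    km.fit(box_sizes)
    anchors = km.cluster_centers_
    # Sort anchors by area so they can be assigned to detection scales
    # from small to large.
    return anchors[np.argsort(anchors.prod(axis=1))]

# Synthetic ship boxes (pixels): elongated shapes at varied scales.
rng = np.random.default_rng(0)
widths = rng.uniform(20, 300, size=500)
heights = widths * rng.uniform(0.2, 0.6, size=500)  # ships tend to be wide and low
anchors = fit_anchor_boxes(np.column_stack([widths, heights]))
print(anchors.round(1))
```

Clustering on the actual label distribution, rather than reusing COCO-derived defaults, is what lets the anchors reflect the elongated aspect ratios typical of ships.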
2. Kernel-based adversarial attacks and defenses on support vector classification (Cited by: 1)
Authors: Wanman Li, xiaozhang liu, Anli Yan, Jie Yang. Digital Communications and Networks (SCIE, CSCD), 2022, No. 4, pp. 492-497 (6 pages)
While malicious samples are widely found in many application fields of machine learning, suitable countermeasures have been investigated in the field of adversarial machine learning. Due to the importance and popularity of Support Vector Machines (SVMs), we first describe the evasion attack against SVM classification and then propose a defense strategy in this paper. The evasion attack utilizes the classification surface of the SVM to iteratively find the minimal perturbations that mislead the nonlinear classifier. Specifically, we propose what is called a vulnerability function to measure the vulnerability of SVM classifiers. Utilizing this vulnerability function, we put forward an effective defense strategy based on kernel optimization of SVMs with a Gaussian kernel against the evasion attack. Our defense method is verified to be very effective on benchmark datasets, and the SVM classifier becomes more robust after using our kernel optimization scheme.
Keywords: adversarial machine learning, support vector machines, evasion attack, vulnerability function, kernel optimization
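As a rough illustration of the evasion attack the abstract describes (iteratively perturbing a sample using the classification surface of a Gaussian-kernel SVM), the sketch below steps a point along the gradient of the decision function until its predicted class flips. The step size, iteration budget, and toy dataset are assumptions; the paper's exact perturbation rule and its vulnerability function are not reproduced here.

```python
# Illustrative gradient-based evasion attack on an RBF-kernel SVM.
# For f(x) = sum_i c_i * exp(-gamma * ||sv_i - x||^2) + b, the gradient is
# grad f(x) = 2 * gamma * sum_i c_i * k_i * (sv_i - x), with c_i = alpha_i * y_i.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_moons

def decision_gradient(clf: SVC, x: np.ndarray) -> np.ndarray:
    """Gradient of the RBF-SVM decision function at x."""
    sv = clf.support_vectors_
    coef = clf.dual_coef_.ravel()          # alpha_i * y_i per support vector
    gamma = clf.gamma                      # numeric: set explicitly below
    diff = sv - x                          # shape (n_sv, d)
    k = np.exp(-gamma * np.sum(diff**2, axis=1))
    return 2.0 * gamma * (coef * k) @ diff

def evade(clf: SVC, x: np.ndarray, step: float = 0.05, max_iter: int = 200):
    """Push x across the decision surface with normalized gradient steps."""
    x_adv = x.copy()
    sign = np.sign(clf.decision_function(x_adv.reshape(1, -1))[0])
    for _ in range(max_iter):
        g = decision_gradient(clf, x_adv)
        # Move against the current class: decrease f if positive, increase if negative.
        x_adv -= sign * step * g / (np.linalg.norm(g) + 1e-12)
        if np.sign(clf.decision_function(x_adv.reshape(1, -1))[0]) != sign:
            break
    return x_adv

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)
x_adv = evade(clf, X[0])
print(clf.predict([X[0]])[0], "->", clf.predict([x_adv])[0])
```

The kernel-optimization defense in the paper would then tune the Gaussian kernel so that such minimal perturbations become larger, i.e. the classifier's vulnerability (as measured by its vulnerability function) decreases.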
3. An Explanatory Strategy for Reducing the Risk of Privacy Leaks
Authors: Mingting liu, xiaozhang liu, Anli Yan, Xiulai Li, Gengquan Xie, Xin Tang. Journal of Information Hiding and Privacy Protection, 2021, No. 4, pp. 181-192 (12 pages)
As machine learning moves into high-risk and sensitive applications such as medical care, autonomous driving, and financial planning, how to interpret the predictions of a black-box model becomes the key to whether people can trust machine learning decisions. Interpretability relies on providing users with additional information or explanations to improve model transparency and help users understand model decisions. However, this information inevitably exposes the dataset or the model to the risk of privacy leaks. We propose a strategy to reduce model privacy leakage for instance-based interpretability techniques. The procedure is as follows. First, the user inputs data into the model, which computes the prediction confidence of the user's data and returns the prediction result; meanwhile, the model computes the prediction confidence of each example in the interpretation dataset. Finally, the interpretation-set example whose confidence has the smallest Euclidean distance to the confidence of the user's data is selected as the explanation. Experimental results show that this Euclidean distance is very small, indicating that the model's predictions on the explanation data closely match its predictions on the user's data. Finally, we evaluate the accuracy of the explanation data by measuring how well its true labels match its predicted labels, as well as its applicability across network models. The results show that the interpretation method has high accuracy and wide applicability.
Keywords: machine learning models, data privacy risks, machine learning explanatory strategies
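The selection step the abstract describes, returning the interpretation-set example whose prediction-confidence vector lies closest in Euclidean distance to that of the user's input, can be sketched as follows. A minimal sketch under stated assumptions: the classifier, the synthetic data, and the train/interpretation split are illustrative, not the paper's setup.

```python
# Sketch of confidence-matched explanation selection: the model's
# predict_proba output serves as the "prediction confidence" vector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

def select_explanation(model, x_user: np.ndarray, X_interp: np.ndarray):
    """Return the interpretation example with the closest confidence vector."""
    conf_user = model.predict_proba(x_user.reshape(1, -1))[0]
    conf_interp = model.predict_proba(X_interp)        # shape (n, n_classes)
    dists = np.linalg.norm(conf_interp - conf_user, axis=1)
    idx = int(np.argmin(dists))
    return X_interp[idx], dists[idx]

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X[:200], y[:200])
X_interp = X[200:290]        # held-out interpretation set
x_user = X[290]              # a new user query
explanation, dist = select_explanation(model, x_user, X_interp)
print(f"closest confidence distance: {dist:.4f}")
```

Because only the interpretation-set example, and not the user's own data or the model internals, is exposed, the distance in confidence space is what keeps the explanation faithful while limiting leakage.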