Funding: Supported by the Key R&D Project of Hainan Province (Grant Nos. ZDYF2022GXJS348 and ZDYF2022SHFZ039).
Abstract: In this paper, an advanced YOLOv7 model is proposed to tackle the challenges associated with ship detection and recognition tasks, such as the irregular shapes and varying sizes of ships. The improved model replaces the fixed anchor boxes used in conventional YOLOv7 models with a set of more suitable anchor boxes designed specifically around the size distribution of ships in the dataset. This paper also introduces a novel multi-scale feature fusion module, built from Path Aggregation Network (PAN) modules, enabling the efficient capture of ship features across different scales. Furthermore, data preprocessing is enhanced through data augmentation techniques, including random rotation, scaling, and cropping, which bolster data diversity and robustness. The distribution of positive and negative samples in the dataset is balanced using random sampling, ensuring a more accurate representation of real-world scenarios. Comprehensive experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art approaches in both detection accuracy and robustness, highlighting the potential of the improved YOLOv7 model for practical applications in the maritime domain.
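The abstract does not spell out the anchor-design procedure. The sketch below shows one common way to derive dataset-specific anchor boxes from the size distribution of ground-truth ship boxes, via k-means clustering; the function name, anchor count, and the stand-in box sizes are our own assumptions, not the authors' code.

```python
# A minimal sketch, assuming scikit-learn is available; not the authors' code.
import numpy as np
from sklearn.cluster import KMeans

def compute_anchor_boxes(box_sizes, num_anchors=9):
    """Cluster the (width, height) pairs of ground-truth ship boxes into
    num_anchors representative anchor shapes, sorted by area as YOLO-style
    detectors expect."""
    kmeans = KMeans(n_clusters=num_anchors, n_init=10, random_state=0)
    kmeans.fit(box_sizes)
    anchors = kmeans.cluster_centers_
    return anchors[np.argsort(anchors.prod(axis=1))]  # small to large anchors

# Hypothetical usage: in practice box_sizes would come from the dataset labels.
box_sizes = np.abs(np.random.randn(500, 2)) * 40 + 10  # stand-in (w, h) pairs
print(compute_anchor_boxes(box_sizes))
```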
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61966011.
Abstract: While malicious samples are widely found in many application fields of machine learning, suitable countermeasures have been investigated in the field of adversarial machine learning. Due to the importance and popularity of Support Vector Machines (SVMs), we first describe the evasion attack against SVM classification and then propose a defense strategy in this paper. The evasion attack exploits the classification surface of the SVM to iteratively find the minimal perturbations that mislead the nonlinear classifier. Specifically, we propose a vulnerability function to measure the vulnerability of SVM classifiers. Using this vulnerability function, we put forward an effective defense strategy against the evasion attack, based on kernel optimization of SVMs with the Gaussian kernel. Our defense method is verified to be very effective on benchmark datasets, and the SVM classifier becomes more robust after applying our kernel optimization scheme.
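As a rough illustration of the evasion attack described above, the following sketch performs gradient descent on the decision function of a fitted RBF-kernel SVM until the predicted label flips. The step size, stopping rule, and all names are our own assumptions rather than the paper's exact algorithm.

```python
# A minimal sketch, assuming scikit-learn; not the paper's exact algorithm.
import numpy as np
from sklearn.svm import SVC

def evade(clf, x, gamma, step=0.05, max_iter=200):
    """Perturb x along the gradient of the RBF-SVM decision function
    f(x) = sum_i c_i * exp(-gamma * ||x - x_i||^2) + b
    until the predicted side of the classification surface flips."""
    x_adv = x.astype(float).copy()
    side = np.sign(clf.decision_function(x_adv[None])[0])  # current side of f
    for _ in range(max_iter):
        diff = x_adv - clf.support_vectors_              # (n_sv, d)
        k = np.exp(-gamma * np.sum(diff**2, axis=1))     # kernel row K(x_i, x)
        grad = (clf.dual_coef_[0] * k) @ (-2.0 * gamma * diff)  # grad of f at x
        x_adv -= step * side * grad / (np.linalg.norm(grad) + 1e-12)
        if np.sign(clf.decision_function(x_adv[None])[0]) != side:
            break  # label flipped: evasion succeeded
    return x_adv

# Hypothetical usage on toy two-class data:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)
x_adv = evade(clf, X[0], gamma=0.5)
print(clf.predict([X[0]]), "->", clf.predict([x_adv]))
```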
Funding: This work is supported by the National Natural Science Foundation of China (Grant No. 61966011), the Hainan University Education and Teaching Reform Research Project (Grant No. HDJWJG01), the Key Research and Development Program of Hainan Province (Grant No. ZDYF2020033), the Young Talents' Science and Technology Innovation Project of the Hainan Association for Science and Technology (Grant No. QCXM202007), and the Hainan Provincial Natural Science Foundation of China (Grant Nos. 621RC612 and 2019RC107).
Abstract: As machine learning moves into high-risk and sensitive applications such as medical care, autonomous driving, and financial planning, how to interpret the predictions of a black-box model becomes key to whether people can trust machine learning decisions. Interpretability relies on providing users with additional information or explanations to improve model transparency and help users understand model decisions. However, this information inevitably exposes the dataset or the model to the risk of privacy leaks. We propose a strategy to reduce model privacy leakage for instance-based interpretability techniques. The procedure is as follows. First, the user inputs data into the model, which computes the prediction confidence of the user-provided data and returns the prediction results. Meanwhile, the model obtains the prediction confidence of each sample in the interpretation dataset. Finally, the interpretation sample whose confidence has the smallest Euclidean distance to the confidence of the prediction data is selected as the explainable data. Experimental results show that the Euclidean distance between the confidence of the interpretation data selected by this method and the confidence of the prediction data is very small, indicating that the model's prediction on the interpreted data closely matches its prediction on the user data. Finally, we evaluate the accuracy of the explanatory data by measuring how well the real labels of the interpreted data match their predicted labels, as well as the method's applicability across network models. The results show that the interpretation method has high accuracy and wide applicability.
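The selection step described above can be sketched directly: compute the model's confidence vector for the user input and for every interpretation sample, then return the interpretation sample whose confidence vector is nearest in Euclidean distance. The `predict_proba` interface and all names here are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming a scikit-learn-style predict_proba interface.
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_explanation(model, user_x, interp_X):
    """Return the interpretation sample whose prediction-confidence vector is
    closest in Euclidean distance to that of the user's input."""
    user_conf = model.predict_proba(user_x.reshape(1, -1))[0]  # user confidence
    interp_conf = model.predict_proba(interp_X)   # interpretation-set confidences
    dists = np.linalg.norm(interp_conf - user_conf, axis=1)  # Euclidean distances
    idx = int(np.argmin(dists))
    return interp_X[idx], dists[idx]

# Hypothetical usage with a toy classifier and random stand-in data:
rng = np.random.default_rng(1)
X, y = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)
model = LogisticRegression().fit(X, y)
explanation, dist = select_explanation(model, X[0], X[100:])
print("closest confidence distance:", dist)
```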