Funding: This work was supported by the "Human Resources Program in Energy Technology" of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090).
Abstract: With the rapid development of Artificial Intelligence (AI) and Deep Learning (DL), it has become difficult to maintain the security and robustness of these techniques and algorithms owing to the emergence of adversarial sampling. AI and DL models are sensitive to such samples: fake inputs cause them to produce inconsistent results. Adversarial attacks that have been implemented successfully in real-world scenarios highlight their practical relevance even further. In this regard, minor modifications of input images, known as "adversarial attacks," can alter a model's performance dramatically. Recently, such attacks and the corresponding defensive strategies have been receiving considerable attention from machine learning and security researchers. Doctors use various technologies to examine patient abnormalities, including Wireless Capsule Endoscopy (WCE). However, with WCE it is very difficult for doctors to detect an abnormality within the images, since inspecting them and deciding on an abnormality takes considerable time. As a result, generating a patient's test report can take weeks, which is tiring and strenuous for patients. Researchers have therefore adopted computerized technologies, which are better suited to the classification and detection of such abnormalities. As far as classification is concerned, adversarial attacks corrupt the classified images. Nowadays, machine learning is the mainstream defensive approach against these attacks. Hence, this research exposes the attacks by perturbing the datasets with noise, including salt-and-pepper noise and the Fast Gradient Sign Method (FGSM), and then shows how machine learning algorithms handle these perturbations to defend against the attacks. Results obtained on WCE images vulnerable to adversarial attack reach 96.30% accuracy, demonstrating that the proposed defensive model is robust compared with competing existing methods.
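As a rough illustration of the two perturbations the abstract names, the sketch below applies salt-and-pepper noise and a one-step FGSM attack to an input image. The PyTorch classifier, the noise density, and the epsilon value are hypothetical placeholders, not details taken from the paper.

```python
import numpy as np
import torch
import torch.nn.functional as F

def add_salt_and_pepper(image, density=0.05):
    """Corrupt a random fraction of pixels to black (pepper) or white (salt).

    `image` is a float array in [0, 1]; `density` is the total noise fraction
    (an assumed value, not one reported in the paper).
    """
    noisy = image.copy()
    u = np.random.rand(*image.shape)
    noisy[u < density / 2] = 0.0          # pepper
    noisy[u > 1.0 - density / 2] = 1.0    # salt
    return noisy

def fgsm_attack(model, x, y, epsilon=0.01):
    """One-step FGSM: move the input along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # maximally increases the loss locally
    return x_adv.clamp(0.0, 1.0).detach()
```

A defense in the spirit of the abstract would then train or evaluate the classifier on a mix of clean and perturbed copies of the WCE images produced by these two functions.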
Funding: the National Technical Innovation Project Essential Project Cultivate Project (Grant No. 706928) and the Natural Science Fund of Jiangsu Province (Grant No. BK2007103).
Abstract: By drawing on the ideas of non-linear modeling and adaptive model matching, an improved Nagao filter is proposed. In addition, a noise-positioning technique based on a simplified pulse coupled neural network (PCNN) is put forward. Combining these two methods yields a new approach for restoring images corrupted by salt-and-pepper noise. Experiments show that this method is preferable to other popular ones and still works well when the noise density fluctuates severely.
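For orientation, here is a minimal sketch of the detect-then-restore pipeline the abstract describes. Note the substitutions: the extreme-value test below is a simple stand-in for the paper's simplified-PCNN noise positioning, and the masked local median is a stand-in for the improved Nagao filter; the window size is an assumption.

```python
import numpy as np

def detect_noise(image):
    """Flag pixels at the gray-level extremes as suspected salt-and-pepper
    noise (a simple stand-in for the simplified-PCNN noise positioning)."""
    return (image == 0) | (image == 255)

def restore(image, noise_mask, window=3):
    """Replace only flagged pixels with the median of their uncorrupted
    neighbors (a stand-in for the improved Nagao filter)."""
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    padded_mask = np.pad(noise_mask, pad, mode="reflect")
    out = image.astype(float)             # astype returns a copy
    for i, j in zip(*np.nonzero(noise_mask)):
        block = padded[i:i + window, j:j + window]
        good = block[~padded_mask[i:i + window, j:j + window]]
        if good.size:                     # skip if every neighbor is noisy
            out[i, j] = np.median(good)
    return out.astype(image.dtype)

# Usage: restored = restore(img, detect_noise(img))
```

Restoring only the flagged pixels is what lets this family of methods stay effective as the noise density fluctuates: uncorrupted pixels pass through untouched rather than being blurred by a filter applied everywhere.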