Journal articles: 2 results found
1. Protecting artificial intelligence IPs: a survey of watermarking and fingerprinting for machine learning (Cited: 2)
Authors: Francesco Regazzoni, Paolo Palmieri, Fethulah Smailbegovic, Rosario Cammarota, Ilia Polian. CAAI Transactions on Intelligence Technology (EI), 2021, Issue 2, pp. 180-191 (12 pages).
Abstract: Artificial intelligence (AI) algorithms achieve outstanding results in many application domains such as computer vision and natural language processing. The performance of AI models is the outcome of complex and costly model architecture design and training processes. Hence, it is paramount for model owners to protect their AI models from piracy: model cloning, illegitimate distribution and use. IP protection mechanisms have been applied to AI models, and in particular to deep neural networks, to verify model ownership. State-of-the-art AI model ownership protection techniques are surveyed, and their pros and cons are reported. The majority of previous works focus on watermarking, while more advanced methods such as fingerprinting and attestation are promising but not yet explored in depth. The study concludes by discussing possible research directions in the area.
Keywords: artificial, computer, networks
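The abstract above concerns watermarking for model ownership verification. As an illustrative aside only (not a scheme taken from the surveyed paper), the sketch below shows one common flavour of watermarking, a trigger set: the owner trains the model on a secret set of input-label pairs and later claims ownership if a suspect model reproduces those labels. It assumes PyTorch; the toy model, trigger-set size and 0.9 verification threshold are arbitrary placeholders.

```python
# Illustrative sketch only: trigger-set ("backdoor") watermarking of a small
# classifier, assuming PyTorch. Sizes and the 0.9 threshold are arbitrary
# choices for this example, not values from the surveyed literature.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy task: 20-dimensional inputs, 5 classes.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))

# Ordinary training data (random here; a real dataset in practice).
x_train = torch.randn(512, 20)
y_train = torch.randint(0, 5, (512,))

# Secret trigger set: random inputs with owner-chosen labels, kept private.
trigger_inputs = torch.randn(32, 20)
trigger_labels = torch.randint(0, 5, (32,))

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Embed the watermark by training on the union of normal and trigger data.
for _ in range(1000):
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train) \
         + loss_fn(model(trigger_inputs), trigger_labels)
    loss.backward()
    opt.step()

def verify_ownership(suspect_model, threshold=0.9):
    """Claim ownership if the suspect model reproduces the secret labels."""
    with torch.no_grad():
        preds = suspect_model(trigger_inputs).argmax(dim=1)
    accuracy = (preds == trigger_labels).float().mean().item()
    return accuracy >= threshold, accuracy

# The watermarked model should score high on its own trigger set, while an
# independently trained model is expected to match the secret labels only by chance.
print(verify_ownership(model))
```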
2. Special Section on Attacking and Protecting Artificial Intelligence
Authors: Shivam Bhasin, Siddharth Garg, Francesco Regazzoni. CAAI Transactions on Intelligence Technology (EI), 2021, Issue 1, pp. 1-2 (2 pages).
Abstract: Modern Artificial Intelligence (AI) systems largely rely on advanced algorithms, including machine learning techniques such as deep learning. The research community has invested significant effort in understanding these algorithms, optimally tuning them, and improving their performance, but it has mostly neglected the security facet of the problem. Recent attacks and exploits have demonstrated that machine learning-based algorithms are susceptible to attacks targeting computer systems, including backdoors, hardware Trojans and fault attacks, and are also susceptible to a range of attacks specifically targeting them, such as adversarial input perturbations.
Keywords: tuning, hardware, neglected
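The editorial above mentions adversarial input perturbations among the attacks that specifically target machine learning models. As a rough illustration (not drawn from the special section itself), the sketch below applies a one-step fast-gradient-sign (FGSM-style) perturbation to a toy classifier; it assumes PyTorch, and the model, label and epsilon value are placeholders.

```python
# Illustrative sketch only: a one-step fast-gradient-sign (FGSM-style)
# adversarial perturbation against a toy classifier, assuming PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)   # a single benign input
y = torch.tensor([3])                        # its assumed true label

# Gradient of the loss with respect to the input, not the weights.
loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.1                                # perturbation budget
x_adv = x + epsilon * x.grad.sign()          # one-step FGSM perturbation

with torch.no_grad():
    # With a trained model the adversarial prediction typically flips;
    # with this untrained toy model the flip is not guaranteed.
    print("clean prediction:", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```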