Abstract
Research on, and accurate determination of, the subjective culpability of developers in AI-related crimes helps guard against the risks of artificial intelligence technology and promotes its development. When a developer designs an intelligent robot whose primary purpose is to commit criminal acts, the developer's subjective culpability for any seriously harmful consequences caused by the robot should be found to be direct intent. When a developer designs an intelligent robot whose primary purpose is non-criminal conduct, and the robot causes seriously harmful consequences, the developer's subjective culpability should be found to be criminal negligence only where the developer has breached a duty of care and the criminal law expressly so provides. The standard for finding a developer's criminal negligence should be determined according to the robot's degree of "intelligence," divided into three types: "direct negligence," "management negligence," and "supervisory negligence."
Source
《比较法研究》
CSSCI
Peking University Core Journals
2019, No. 4, pp. 101-110 (10 pages)
Journal of Comparative Law
Keywords
Artificial Intelligence
developer
criminal intent
criminal negligence
identification criteria