
人工智能体的刑事风险及其归责 (Cited by: 6)

The Criminal Risk of Artificial Intelligence and Liability Fixation
Abstract: While artificial intelligence is driving the intelligent transformation of human society, the criminal risk of infringing human legal interests that derives from its uncertainty (autonomy) is a "real problem" that urgently needs to be taken seriously, not a fabricated, alarmist "pseudo-problem." For the criminal risk of artificial intelligence agents being abused to commit crimes, the algorithm designers, product manufacturers, and users (managers) behind the agent should, under existing principles of criminal imputation, be held criminally liable for intentional or negligent offenses according to their specific modes of conduct and forms of subjective fault. For the criminal risk of artificial intelligence agents escaping human control and infringing legal interests, the affirmative imputation scheme that grants such agents the status of subjects of criminal responsibility suffers from numerous misconceptions in both its supporting grounds and its path of argumentation. Regulating the criminal risk of artificial intelligence agents through criminal punishment lacks suitability. Drawing on the "theory of scientific-technological social defense," a security-measure mechanism should be constructed under which judicial organs, on the basis of professional technical opinions, apply to artificial intelligence agents that seriously infringe human interests in rem security measures consisting of technical danger-elimination measures. This avoids the limitation that punishment-based regulation presupposes blameworthiness, reserves the necessary legal space for the development of artificial intelligence, and promotes the healthy and orderly development of artificial intelligence technology on the premise of fully protecting human interests and well-being.
Authors: Liu Renwen (刘仁文); Cao Bo (曹波)
Source: Jiangxi Social Sciences (《江西社会科学》, CSSCI, Peking University Core Journal), 2021, No. 8, pp. 143-155, 256 (14 pages)
Funding: National Social Science Fund of China key project "刑法的立体分析与关系刑法学研究" (19AFX007).