Abstract
The difficulty in the legal governance of artificial intelligence (AI) lies in the fact that AI technologies possess intelligent attributes relatively independent of natural persons. Since current AI technologies have not moved beyond the category of intelligent tools, legal regulation should focus not on the AI technologies themselves but on the conduct of the legal subjects behind them. Criminal law needs to grasp the algorithm as the essence and core of AI technology, and to construct a path of criminal imputation for AI-related legal subjects with algorithm security as the link. In designing concrete schemes, it is necessary to clarify the function and position of criminal law in AI governance, avoiding the pitfalls of becoming overly entangled in proving technical logic or of conflating the functions of criminal law with those of the antecedent (non-criminal) laws.
Authors
Zeng Yuexing (曾粤兴); Gao Zhengxu (高正旭)
Source
Governance Studies (《治理研究》)
CSSCI; Peking University Core Journals (北大核心)
2022, No. 3, pp. 113-123, 128 (12 pages in total)
Funding
Youth Project of the National Social Science Fund of China, "Research on the Exculpation Mechanism for Minor Offenses" (Grant No. 21CFX069).
Keywords
artificial intelligence
technological risk
algorithm security
legal system
criminal imputation