Abstract
The employment of AI systems presents challenges for liability rules. This paper identifies these challenges and evaluates how liability rules should be adapted in response. The paper discusses the gaps in liability that arise when AI systems are unpredictable or act (semi-)autonomously. It considers the problems in proving fault and causality when errors in AI systems are difficult for producers to foresee and the monitoring duties of users are difficult to define. From an economic perspective, the paper considers what liability rules would minimise the costs of harm related to AI. Based on the analysis of risks and optimal liability rules, the paper evaluates the recently published EU proposals for a Product Liability Directive and for an AI Liability Directive.
Authors
Miriam Buiten; Alexandre de Streel; Martin Peitz
Translated by ZHANG Taolue and CHEN Hunan
Source
Journal of Shanghai University of Political Science & Law (The Rule of Law Forum) 《上海政法学院学报(法治论丛)》
2024, No. 4, pp. 124-152 (29 pages)