
何以透明,以何透明:人工智能法透明度规则之构建 (Cited by: 1)

Why and How to Achieve Transparency:Establishing Rules for Artificial Intelligence Law
Abstract: 透明度作为人工智能风险治理的一项基本要求,其现实基础在于人工智能应用引发的信息不对称以及这种不对称所蕴含的对个体受保护利益或社会秩序的侵害风险。当下,信息不对称主要存在于人工智能系统分别作为产品、交互性媒介及自动化决定系统而影响现实世界等三类场景之中。相应地,透明度规则的主体内容应为人工智能系统提供者、运营者为满足相对人透明度需求而负有的信息披露义务。考虑到技术保密性对于产业发展的关键作用,且对外披露亦不是增加系统安全性的有效方式,对人工智能技术应以不强制披露为原则。诸项透明度规则既可归于“人工智能风险治理”主题之下,同时又分别属于个人信息保护法、消费者权益保护法、产品质量法、合同法、程序法乃至网络安全法规则,彰显出人工智能法的领域法特征。 Transparency, a basic requirement of AI risk governance, stems from the information asymmetry caused by AI applications and the potential risks that such asymmetry poses to individual rights or social order. At present, information asymmetries are prevalent in three key scenarios in which AI systems affect the real world: as products, as interactive media, and as automated decision-making systems. Accordingly, the core of transparency regulation should be the obligation of AI system providers and operators to disclose information that meets the transparency needs of the parties concerned. Recognizing the importance of technological confidentiality for industrial development, and the limited effectiveness of external disclosure in enhancing system security, a principle of non-compulsory disclosure should guide AI technology practices. The transparency rules can be classified under the framework of "AI risk governance" while also belonging, respectively, to the law of personal information protection, consumer rights, product quality, contracts, legal procedure, and cybersecurity, underscoring the cross-field character of artificial intelligence law.
Author: 刘文杰 Liu Wenjie
Source: 《比较法研究》 Journal of Comparative Law (PKU Core), 2024, No. 2, pp. 120-134 (15 pages)
Keywords: 透明度 transparency; 可解释性 interpretability; 生成式人工智能 generative AI; 自动化决定 automated decision-making; 算法黑箱 algorithm black box
Related Literature
Co-cited literature: 455
Jointly cited literature: 16
Citing literature: 1
