The Value of Algorithmic Interpretability and Its Path to Rule of Law (cited by 1)
Abstract: With the deepening iteration of autonomous learning algorithms, automated decision-making has become widely embedded in the decision systems of human society. However, owing to its technical specialization and black-box nature, automated algorithmic decision-making threatens the procedural legitimacy and accountability mechanisms on which the construction of the human legal order depends, and ultimately poses a fundamental challenge to human dignity. Algorithmic interpretability is the key concept for bringing algorithmic systems under the constraints of social norm systems, and the degree to which it is realized is crucial for maintaining the order of the rule of law and protecting the rights and interests of those subject to algorithmic decisions. The main institutional means currently available for achieving algorithmic interpretability are setting proportionate transparency requirements that open the black box to varying degrees, building multi-party collaborative review mechanisms that assign responsibility to identifiable parties, and institutionalizing direct human oversight to keep a "human in the loop." These measures ultimately aim to preserve the inherent good of algorithmic technology.
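The abstract stays at the level of legal institutions, so the following is only an illustrative aside, not anything from the paper: a minimal Python sketch of what "opening the black box to varying degrees" and keeping a "human in the loop" can look like in practice. All feature names, weights, and thresholds are hypothetical; a simple linear scorer exposes its per-feature contributions as an explanation, and borderline scores are routed to a human reviewer instead of being decided automatically.

```python
# Illustrative sketch only: a toy decision function whose inner workings are
# simple enough to explain, plus a confidence gate that routes borderline
# cases to a human reviewer ("human in the loop").
# All feature names, weights, and thresholds are hypothetical.

from dataclasses import dataclass
from math import exp


@dataclass
class Decision:
    approved: bool            # automated outcome (False whenever review is required)
    score: float              # model score squashed into [0, 1]
    explanation: dict         # per-feature contribution to the score
    needs_human_review: bool  # True when the model is not confident enough


# Hypothetical linear model: one weight per feature plus a bias term.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.4}
BIAS = -0.2
REVIEW_BAND = (0.4, 0.6)  # scores in this band go to a human reviewer


def decide(features: dict) -> Decision:
    # Per-feature contributions form the "explanation": they show how much
    # each input pushed the score up or down, i.e. a partially opened black box.
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    score = 1.0 / (1.0 + exp(-logit))  # logistic squash to a probability-like score

    needs_review = REVIEW_BAND[0] <= score <= REVIEW_BAND[1]
    return Decision(
        approved=score > 0.5 and not needs_review,
        score=score,
        explanation=contributions,
        needs_human_review=needs_review,
    )


if __name__ == "__main__":
    d = decide({"income": 1.2, "debt_ratio": 0.5, "years_employed": 0.3})
    print(f"score={d.score:.2f}, approved={d.approved}, review={d.needs_human_review}")
    for feature, contribution in sorted(d.explanation.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {contribution:+.2f}")
```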
Author: Wang Haiyan (王海燕), Southwest University of Political Science and Law, Chongqing 401120
Source: Chongqing Social Sciences (《重庆社会科学》, PKU Core Journal), 2024, No. 1, pp. 120-135 (16 pages)
Funding: Major Project of the National Social Science Fund of China, "Research on the System Construction and Implementation Measures of National Security Rule of Law in the New Era" (20&ZD190)
Keywords: algorithmic decision-making; algorithmic interpretability; due process; accountability; human dignity