
Practice and Problems of Artificial Intelligence in Judging Criminal Proof Standards

Abstract: Artificial intelligence plays an important role in the construction of smart courts and has been applied effectively in the field of criminal justice. Its main application modes are the evidence-assisted judgment mode, the expert robot mode, and the intelligent case-handling assistance mode. In judicial practice, however, AI's judgment of the criminal proof standard has given rise to many problems: rigid judgment methods arising from AI's own technical limitations, security risks caused by algorithmic black boxes, restriction of judicial discretion caused by built-in algorithmic bias and judicial staff's excessive reliance on the technology, and the question of who bears responsibility when a judgment accident occurs. To address these problems, a specialized algorithm supervision body can be established to deal with the black-box problem; legislation can confirm the auxiliary status of artificial intelligence, establish the subject status of judicial personnel, and clarify who bears responsibility for accidents; and judicial personnel's theoretical literacy regarding artificial intelligence should be strengthened. Only in this way can healthy soil be provided for the growth of artificial intelligence in the field of criminal justice.
Author: 文黎 (Wen Li)
Source: Open Journal of Legal Science (《法学(汉斯)》), 2021, No. 1, pp. 48-53 (7 pages)