Abstract
At present, the integration of technology and the judiciary is in full swing, and algorithms, as the cornerstone of the smart court, are deeply involved in judicial activities. However, several problems cannot be ignored: the algorithm-assisted decision-making process is conflated with the judge's formation of inner conviction, the black-box nature of algorithms conflicts with judicial openness, and the accountability mechanism for algorithmic decision results remains unclear. Starting from the application needs of the judicial field, an algorithmic interpretability mechanism is proposed as a possible solution; its institutional construction depends on an accurate and clear definition of the overall approach and guiding principles. Accordingly, the construction of an algorithmic interpretability mechanism should take the establishment of a scenario-based regulatory concept and the formation of a trust mechanism as its overall approach, and follow the principles of openness, supportiveness, relativity, and hierarchy to build a scientific and reasonable interpretability mechanism.
Authors
XU Yinchen; TAO Huaichuan (School of Artificial Intelligence Law, Southwest University of Political Science and Law, Chongqing 401120, China)
Source
Journal of Sichuan Vocational and Technical College, 2022, No. 3, pp. 61-66 (6 pages)
Funding
Academic Rising Star Research Innovation Project of the School of Artificial Intelligence Law, Southwest University of Political Science and Law, "Construction of an Algorithmic Interpretability Mechanism in Intelligent Case-Handling Assistance Systems" (XSXX202015).
Keywords
algorithm
explainable mechanism
scenario-based regulation
algorithmic legal regulation
principle of algorithmic interpretability