Abstract
With the rapid development of artificial intelligence, automated decision-making (ADM) algorithms have gradually entered the public domain and increasingly affect social welfare and individual rights and interests. Meanwhile, risks such as algorithmic discrimination, algorithmic bias, and algorithmic monopoly keep emerging, giving rise to a concrete demand for algorithm governance. Faced with the asymmetry of information and technology between ADM deployers and users, traditional legal resources fall short in protecting users' rights, which establishes the necessity of the right to algorithm explanation. As an important means of algorithm governance, the right to explanation builds "moderate transparency" into the algorithmic black box, corrects the information asymmetry between developers and users, and rebalances the distorted allocation of risk between the two sides. It has thus become an indispensable institutional arrangement for regulating ADM deployers and safeguarding users' interests, and research on the right to explanation has become a shared focus of academia and judicial practice at home and abroad. Under current law, however, the right to algorithm explanation suffers from overly narrow eligible parties, an insufficient scope of protection, and rights whose content remains to be clarified. In response, this paper deconstructs the right to explanation and reconstructs it from the perspective of governance across the whole algorithm development workflow combined with a hierarchical and classified explanation framework. Workflow-wide governance moderately extends the subjects and objects of the right, while the hierarchical and classified framework, together with a case-by-case perspective, clarifies the content and boundary of the right. This design accommodates both the individuality and the generality of algorithms, balances the efficiency of explanation against the protection of users' rights, fully protects the interests of all parties in ADM, and empowers the development of the digital economy.
Authors
CONG Yingnan (丛颖男)
WANG Zhaoyu (王兆毓)
ZHU Jinqing (朱金清)
Business School, China University of Political Science and Law, Beijing 100088, China; School of Law, Tsinghua University, Beijing 100084, China; Beijing Bytedance Network Technology Co., Ltd., Beijing 100043, China
Source
Computer Science (《计算机科学》)
CSCD
Peking University Core Journal (北大核心)
2023, No. 7, pp. 347-354 (8 pages)
Funding
Beijing Municipal Education Reform Project "Research on the Training Model for Innovative Talents in Law and Business Big Data Analysis" (京教函[2020]427号)
Emerging Discipline Cultivation and Construction Program of China University of Political Science and Law.
Keywords
Right to explanation
Algorithm governance
Automated decision-making
Personal information protection