Abstract
The information asymmetry behind the "algorithm black box" creates social risks. The principle of transparency is the prerequisite for regulating algorithms and building relationships of trust, and the development of intelligent industries likewise calls for its establishment. However, owing to obstacles such as conflicts of interest, technical characteristics, and institutional costs, artificial intelligence should not be subject to the traditional requirement of complete, full disclosure; instead, a limited and reasonable standard of transparency must be established. The effective implementation of this standard depends on the synergy of two paradigms, the regulation of conduct and the protection of private rights, exercising whole-process control through ex ante prevention, interim restraint, and ex post relief, combined with contextual analysis of elements such as subject, object, degree, and conditions, so as to achieve balance and coordination among intelligent technological innovation, overall economic benefits, and the social public interest.
Author
季冬梅
JI Dong-mei (Law School, Capital University of Economics and Business, Beijing 100070, China)
Source
《科学学研究》 (Studies in Science of Science)
CSSCI; CSCD; Peking University Core Journals (北大核心)
2022, No. 4, pp. 611-618, 757 (9 pages)
Funding
Supported by the Special Fund for Basic Scientific Research of Beijing Municipal Universities, Capital University of Economics and Business (XRZ2021024)
Keywords
artificial intelligence
transparency principle
paradigm selection
elements analysis
contextual framework