Abstract
Artificial intelligence may create risks of harm that the law cannot permit. A rational review shows that these risks, whether driven by profit or arising from technological development itself, diminish progressively in both scope and frequency. Filtered through technical scrutiny and laboratory testing, the harms of artificial intelligence are not uncontrollable. These risks can be addressed systematically: they will not spiral beyond control or expand without limit. Through technical governance, moral restraint, compulsory registration and insurance systems, and legislative supervision, the harm risks of artificial intelligence can be systematically resolved and effectively handled.
Authors
LI Maojiu; ZHANG Jing (Criminal Justice School, Zhongnan University of Economics and Law, Wuhan 430073, China; School of Journalism and Cultural Communication, Zhongnan University of Economics and Law, Wuhan 430073, China)
Source
Journal of Dalian University of Technology (Social Sciences)
CSSCI
Peking University Core Journals
2021, No. 3, pp. 15-23 (9 pages)
Funding
National Social Science Fund of China general project "Research on the Construction of an Integrated Framework for Data Journalism Workflows under the Fourth Paradigm" (15BXW012).
Keywords
artificial intelligence
harm risks
institutional guarantee
governance system