Abstract
Artificial intelligence makes decisions in place of human beings that would otherwise be made by humans, which means that artificial intelligence faces choices of social ethics; a regulatory system for artificial intelligence should therefore be constructed with ethics as its guide. To date, artificial intelligence remains a product designed and manufactured by human beings, and its various risks stem from product defects, so the Product Quality Law of the People's Republic of China should be used to regulate the range of problems raised by artificial intelligence, including ethical risks. The decision-making process and results of artificial intelligence must be subject to the judgment of human morality, yet artificial intelligence does not reason with the thinking unique to humans; quality standards for artificial intelligence should therefore incorporate social ethics standards. The ethical defects of artificial intelligence are distinctive and should constitute an independent type of product defect. Where artificial intelligence causes infringement, the designer should also be included among the subjects of product liability, so as to urge designers to guard against the emergence of product risks.
Authors
张安毅
李博文
ZHANG Anyi;LI Bowen
Source
《成都行政学院学报》
2022, No. 2, pp. 78-85, 119 (9 pages in total)
Journal of Chengdu Administration Institute
Funding
A phased result of the Henan Province Philosophy and Social Science Planning Project "Research on the Patent System Oriented to the National Artificial Intelligence Strategy" (2019BFX004).
Keywords
artificial intelligence
intelligent products
ethical risks