Abstract
As the performance of machine autonomous decision-making continues to improve, algorithmic decision-making has come to intervene in, and increasingly dominate, human affairs. At the same time, the discrimination and bias inherent in algorithmic logic are becoming ever more visible, and the "de-humanization" or "de-human-centering" of autonomous machine decision-making is revolutionizing and reshaping the relationship between humans and machines. Where notions of social justice are absent or cannot be built into a system, artificial intelligence merely repackages society's entrenched prejudices in a technical cloak and may shape new dark sides of society; left unconstrained and unsupervised, it is bound to inflict lasting harm on human society. In this sense, artificial intelligence is not a "moral decision maker", and even if it one day acquires a human-like capacity for moral judgment, its existential value still cannot be placed on a par with the value of human life. In the future, artificial intelligence will possess ever greater autonomous decision-making capability yet remain unable to bear the corresponding moral responsibility. Human society needs a robot charter: a common, internationally recognized ethical and legal framework to guide and constrain the design, production, and use of "autonomous" artificial intelligence systems.
Author
DONG Qingling (董青岭)
School of International Relations, University of International Business and Economics, Beijing 100029, China
Source
Journal of Yunmeng (《云梦学刊》)
2018, No. 5, pp. 39-44 (6 pages)
Keywords
artificial intelligence
algorithmic decision-making
moral hazard