Abstract
The social risks caused by artificial intelligence (AI) are rooted in the injustice of AI society. To understand the social justice risks of AI more comprehensively, the social form of the AI era must be analyzed from a Marxist perspective. From the perspective of “production justice”, an AI society is one of deeply automated production, which may give rise to production justice risks such as the downward aggregation of labor, the weakening of labor capacity, and the “dividualization” of labor. From the perspective of “distributive justice”, an AI society is one of great material abundance but severely unequal distribution across individuals, space, and time. From the perspective of “cognitive justice”, an AI society combines the virtual and the real, which may lead to cognitive justice risks such as the deprivation of rational cognition, of self-control, and of autonomous choice. From the perspective of “development justice”, the contradictions and tensions between AI and human society may produce development justice problems such as competition for energy, imbalance between rights and responsibilities, and passive resistance, all of which can weaken society’s motivation to pursue justice. The fundamental cause of the social justice risks of an AI society lies in the contradiction between the world’s limited resources and the unlimited demands of human beings and AI; the core inducement lies in the injustice inherent in human society itself; and the greatest obstacle is that existing governance measures can hardly act directly on the responsible agents in the field of AI. Accordingly, it is necessary to reasonably set energy consumption standards and quotas for AI development, to address the injustices of traditional society, and to penetrate AI’s “black box of responsibility” with human development as the goal.
Source
《治理研究》 (Governance Studies)
PKU Core Journal (北大核心)
2023, No. 3, pp. 118-129, 160 (13 pages)
Funding
Major Project of the National Social Science Fund of China, “Research on the Prevention of Ethical Risks of Artificial Intelligence” (No. 20&ZD041).
Keywords
artificial intelligence
production justice
distributive justice
cognitive justice
development justice