
Social Justice Risks in the Age of AI: What Kind of Society? What Are the Risks?
Abstract: The social risks posed by artificial intelligence (AI) are rooted in the injustice of AI society. To understand AI's social justice risks more comprehensively, the social form of the AI era must be analyzed through a Marxist lens. From the perspective of "production justice," AI society is a society of deeply automated production, which may generate production justice risks such as the downward clustering of labor, the weakening of labor capacity, and the "dividualization" of labor. From the perspective of "distributive justice," AI society is one of great material abundance but severely unequal distribution across individuals, space, and time. From the perspective of "cognitive justice," AI society fuses the virtual and the real, which may generate cognitive justice risks such as the deprivation of rational cognition, of self-control, and of autonomous choice. From the perspective of "development justice," the contradictions and tensions between AI and human society may generate development justice problems such as competition for energy, the imbalance of rights and responsibilities, and passive resistance, all of which can weaken society's motivation to pursue justice. The fundamental cause of AI society's justice risks lies in the contradiction between the world's finite resources and the unlimited demands of both human beings and AI; the core inducement lies in the injustice inherent in human society itself; and the greatest obstacle is that existing governance instruments cannot act directly on the responsible agents in the AI field. Accordingly, it is necessary to set reasonable standards and proportions for the energy consumption of AI development, to tackle the injustices of traditional society, and to penetrate AI's "black box of responsibility" with human development as the end.
Author: Li Meng (李猛)
Source: Governance Studies (《治理研究》, PKU Core journal), 2023, No. 3, pp. 118-129, 160 (13 pages)
Funding: Major Project of the National Social Science Fund of China, "Research on the Prevention of Ethical Risks of Artificial Intelligence" (Grant No. 20&ZD041)
Keywords: artificial intelligence; production justice; distributive justice; cognitive justice; development justice
