Abstract
The military application of artificial intelligence technology has drawn various ethical criticisms, for example: that deploying such weapons would exploit gaps in the existing international arms-control regime and thereby facilitate their undesired global proliferation, or that it would lower the threshold for killing and thus threaten the lives of more noncombatants. These criticisms, however, often apply a double standard when evaluating traditional military tactical platforms against tactical platforms using artificial intelligence, and deliberately exaggerate the latter's overall significance for changing the global security situation. To truly bring the ethical consequences of the military use of artificial intelligence into accord with humanity's existing value system, the key measure is not to crudely ban the research and development of such equipment, but rather to endow it with "ethical reasoning" capability in the human sense. In that R&D process, how to solve the "isotropy" problem will also become a research focus.
Source
Frontiers (《学术前沿》), CSSCI
2016, No. 7, pp. 34-53 and 69 (21 pages)
Funding
Major Project of the National Social Science Fund of China, "Contemporary Epistemological Research Based on the Philosophy of Information Technology" (Project No. 15ZDB020)
Keywords
military robotics (technology)
artificial intelligence
ethics
isotropy
mental state attribution