
Autonomous Driving Ethics Algorithm from a Jurisprudential Perspective (法理学视域下的自动驾驶道德算法)
Abstract: While autonomous vehicles (AVs) liberate humans from driving, they also create dilemmas of moral decision-making: who should program the moral algorithm, and what kind of moral algorithm should be programmed? Personalized moral algorithms would drift toward complete egoism and raise society's overall expected fatalities, so the algorithm should be set uniformly by the government. Within a mandatory moral algorithm, the arguments supporting a passenger-priority rule are unconvincing: as the owners and greatest beneficiaries of AVs, passengers cannot, without valid justification, claim an absolutely privileged position in the allocation of risk. From the standpoint of human nature, consequentialism prevails over deontology. Since not all goods can be had at once, neither the utilitarian principle of minimizing overall harm nor the Rawlsian maximin principle within consequentialism can resolve the moral dilemma beyond controversy. From a practical standpoint, however, the overall-harm-minimization principle cannot avoid the drawbacks of value dogmatism, whereas the maximin principle responds appropriately to the risks of autonomous driving and is more desirable in terms of both moral intuition and algorithmic feasibility.
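To make the contrast drawn in the abstract concrete, the following is a minimal, hypothetical sketch of how the two consequentialist decision rules it compares might be expressed as code: a utilitarian rule that minimizes total expected harm, and a Rawlsian maximin rule that minimizes the harm borne by the worst-off person. The scenarios, harm scores, and function names are illustrative assumptions, not the author's model.

```python
# Sketch of the two decision rules compared in the abstract.
# Each "option" lists the expected harm (0 = unharmed, 1 = fatal)
# imposed on every affected person if the vehicle takes that action.

def utilitarian_choice(options):
    """Minimize total expected harm, regardless of how it is distributed."""
    return min(options, key=lambda name: sum(options[name]))

def maximin_choice(options):
    """Rawlsian maximin: pick the option whose largest individual harm is smallest,
    i.e. make the worst-off person as well off as possible."""
    return min(options, key=lambda name: max(options[name]))

if __name__ == "__main__":
    # Hypothetical case where the two rules diverge: minimizing the sum of harm
    # tolerates a very bad outcome for one person, while maximin refuses to.
    scenario = {
        "option_a": [0.95, 0.0],  # total harm 0.95, worst-off person 0.95
        "option_b": [0.5, 0.5],   # total harm 1.0,  worst-off person 0.5
    }
    print("utilitarian:", utilitarian_choice(scenario))  # -> option_a
    print("maximin:    ", maximin_choice(scenario))      # -> option_b
```

In this illustrative case the utilitarian rule accepts a near-fatal risk for one person because the aggregate harm is lower, while the maximin rule spreads the risk so that no single person faces the worst outcome, which is the intuition behind the paper's claim that maximin better responds to risk allocation among passengers and bystanders.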
Author: 李志慧
Affiliation: 华东政法大学 (East China University of Political Science and Law)
Source: Dispute Settlement (《争议解决》), 2022, No. 4, pp. 783-791 (9 pages)