Abstract
While autonomous vehicles (AVs) free humans from driving, they also create an ethical dilemma for decision-making: who should program the moral algorithm, and what kind of moral algorithm should be programmed? Personalized moral algorithms would slide into complete egoism and raise society's overall expected death toll, so the algorithm should be set uniformly by the government. Within a mandatory moral algorithm, the arguments offered for a passenger-priority rule are unconvincing: as the owners and greatest beneficiaries of AVs, passengers have no valid ground for claiming an absolute advantage in the allocation of risk. From the standpoint of human nature, consequentialism prevails over deontology. Since no rule secures every good at once, neither the utilitarian principle of minimizing overall harm nor the Rawlsian maximin principle resolves the moral dilemma without controversy. From a practical standpoint, however, the overall-harm-minimization principle cannot escape the drawbacks of value dogmatism, whereas the maximin principle responds appropriately to the risks of autonomous driving and is preferable in terms of both moral sense and algorithmic feasibility.
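The abstract contrasts the two decision rules without spelling them out, so the following minimal Python sketch (not part of the original article; the option names, risk figures, and function names are all hypothetical) illustrates how the overall-harm-minimization rule and the Rawlsian maximin rule can rank the same maneuvers differently.

```python
# Illustrative sketch only: each candidate maneuver is described by the list of
# harm probabilities it imposes on the individuals involved (hypothetical numbers).

def utilitarian_choice(options):
    """Pick the maneuver that minimizes total expected harm (overall-harm minimization)."""
    return min(options, key=lambda risks: sum(risks))

def maximin_choice(options):
    """Pick the maneuver that minimizes the worst individual risk (Rawlsian maximin)."""
    return min(options, key=lambda risks: max(risks))

if __name__ == "__main__":
    options = [
        (0.9, 0.05, 0.05),  # swerve: one person bears almost all of the risk
        (0.4, 0.4, 0.4),    # brake: risk is spread evenly but sums to more
    ]
    print("utilitarian picks:", utilitarian_choice(options))  # lower total harm: 1.0 vs 1.2
    print("maximin picks:", maximin_choice(options))          # lower worst-case risk: 0.4 vs 0.9
```

The divergence in the toy example mirrors the abstract's point: the utilitarian rule accepts concentrating risk on one person when that lowers the total, while the maximin rule protects the worst-off individual even at a higher aggregate cost.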
Source
《争议解决》
2022, No. 4, pp. 783-791 (9 pages)
Dispute Settlement