Abstract
The problem of necessity in conflicts between lives tests humanity's conceptions of justice and freedom, and the development of artificial intelligence further intensifies the tension between technological progress and ethical norms. This article follows a logic of minimization, emphasizing the prevention and mitigation of fundamental harms, with the aim of reducing them as far as possible. Killing under necessity in a life-versus-life conflict can, in certain cases, exclude punishability. Autonomous driving technology cannot avoid the question of how to program for such conflicts. Only some of the rules applicable to human drivers can be applied to computer programs. The program must establish priority rules so that, in an emergency, the self-driving car can decide according to a predetermined order of priorities. Where rules compete, different decision makers will make different priority arrangements. Only through such arrangements can the legal risks of software developers and car manufacturers be minimized.
Author
Wang Yu (Guanghua Law School, Zhejiang University, Hangzhou 310008)
Source
Zhejiang Social Sciences (CSSCI; Peking University Core Journal), 2019, No. 9, pp. 70-80 and 157 (12 pages in total)
Funding
Supported by the Artificial Intelligence and Law special research fund of Zhejiang University
Keywords
life versus life
necessity
autonomous driving
priority of rules