Human agency has become increasingly limited in complex systems with increasingly automated decision-making capabilities. For instance, human occupants are passengers and do not have direct vehicle control in fully automated cars (i.e., driverless cars). An interesting question is whether users are responsible for the accidents of these cars. Normative ethical and legal analyses frequently argue that individuals should not bear responsibility for harm beyond their control. Here, we consider human judgment of responsibility for accidents involving fully automated cars through three studies with seven experiments (N = 2668). We compared the responsibility attributed to the occupants in three conditions: an owner in his private fully automated car, a passenger in a driverless robotaxi, and a passenger in a conventional taxi, where none of these three occupants has direct control over the involved vehicle that causes identical pedestrian injury. In contrast to normative analyses, we show that the occupants of driverless cars (private cars and robotaxis) are attributed more responsibility than conventional taxi passengers. This discrepancy is robust across different contexts (e.g., participants from China vs. the Republic of Korea, first- vs. third-person perspectives, and occupant presence vs. absence). Furthermore, we observe that this is not due to the perception that these occupants have greater control over driving, but because they are expected, to a greater degree, to foresee the potential consequences of using driverless cars. Our findings suggest that when driverless vehicles (private cars and taxis) cause harm, their users may face more social pressure, which public discourse and legal regulations should manage appropriately.
Artificial intelligence (AI) is developing rapidly and is being used in several medical capacities, including assisting in diagnosis and treatment decisions. This raises the conceptual and practical problem of how to distribute responsibility when AI-assisted diagnosis and treatment have been used and patients are harmed in the process. Regulations on this issue have not yet been established. It would be beneficial to settle responsibility attribution before biomedical AI technologies and their ethical guidelines develop further. In general, human doctors acting as supervisors need to bear responsibility for their clinical decisions. However, human doctors should not bear responsibility for the behavior of an AI doctor that is practicing medicine independently. According to the degree of fault, which spans internal institutional ethics, the AI bidding process in procurement, and the medical process itself, clinical institutions are required to bear corresponding responsibility. AI manufacturers are responsible for creating accurate algorithms, ensuring network security, and protecting patient privacy. However, the AI itself should not be subjected to legal evaluation, since there is no need for it to bear responsibility; corresponding responsibility should be borne by the employer, in this case the medical institution.
Funding: supported by the National Natural Science Foundation of China (Grant No. 72071143).
Funding: supported by the National Natural Science Foundation of China under the project "Survey on Ethical Awareness and Perception of Chinese Medical Researchers" (Grant No. L1824002).