Abstract
The more powerful technology becomes, the more it magnifies design errors and human failures. An angry man who has only his fists cannot hurt very many people. But the same man with a machine gun can kill hundreds in just a few minutes. Emerging technologies under the name of "artificial intelligence" (AI) are likely to provide many new opportunities to observe this "fault magnification" phenomenon. As society contemplates deploying AI in self-driving cars, in surgical robots, in police activities, in managing critical infrastructure, and in weapon systems, it is creating situations in which errors committed by human users or errors in the software could have catastrophic consequences. Are these consequences inevitable? In the wake of the Three Mile Island nuclear power plant failure, Perrow [1] published his book "Normal Accidents", in which he argued that in any sufficiently complex system, with sufficiently many feedback loops, catastrophic accidents are "normal"; that is, they cannot be avoided.