
Robust artificial intelligence and robust human organizations (Cited by: 1)

Abstract: The more powerful technology becomes, the more it magnifies design errors and human failures. An angry man who has only his fists cannot hurt very many people. But the same man with a machine gun can kill hundreds in just a few minutes. Emerging technologies under the name of "artificial intelligence" (AI) are likely to provide many new opportunities to observe this "fault magnification" phenomenon. As society contemplates deploying AI in self-driving cars, in surgical robots, in police activities, in managing critical infrastructure, and in weapon systems, it is creating situations in which errors committed by human users or errors in the software could have catastrophic consequences. Are these consequences inevitable? In the wake of the Three Mile Island nuclear power plant failure, Perrow [1] published his book "Normal Accidents", in which he argued that in any sufficiently complex system, with sufficiently many feedback loops, catastrophic accidents are "normal" -- that is, they cannot be avoided.
Source: Frontiers of Computer Science (SCIE, EI, CSCD), 2019, No. 1, pp. 1-3 (3 pages).