In recent decades, social scientists have debated declining levels of trust in American institutions. At the same time, many American institutions are coming under scrutiny for their use of artificial intelligence (AI) systems. This paper analyzes the results of a survey experiment over a nationally representative sample to gauge the effect that the use of AI has on the American public's trust in their social institutions, including government, private corporations, police precincts, and hospitals. We find that artificial intelligence systems were associated with significant trust penalties when used by American police precincts, companies, and hospitals. These penalties were especially strong for American police precincts and, in most cases, were notably stronger than the trust penalties associated with the use of smartphone apps, implicit bias training, machine learning, and mindfulness training. Americans' trust in institutions tends to be negatively affected by the use of new tools. While there are significant variations in trust across different pairings of institutions and tools, generally speaking, institutions that use AI suffer the greatest loss of trust. American government agencies are a notable exception, receiving a small but puzzling boost in trust when associated with the use of AI systems.
Funding: Supported by the National Science Foundation (Nos. IIS-1927227 and CCF-2208664).