
Fiduciary Responsibility: Facilitating Public Trust in Automated Decision Making

Abstract: Automated decision-making systems are being increasingly deployed and affect the public in a multitude of positive and negative ways. Governmental and private institutions use these systems to process information according to certain human-devised rules in order to address social problems or organizational challenges. Both research and real-world experience indicate that the public lacks trust in automated decision-making systems and in the institutions that deploy them. The recreancy theorem argues that the public is more likely to trust and support decisions made or influenced by automated decision-making systems if the institutions that administer them meet their fiduciary responsibility. However, the public is often never informed of how these systems operate or how the resultant institutional decisions are made. A "black box" effect of automated decision-making systems reduces the public's perceptions of integrity and trustworthiness. Consequently, the institutions administering these systems are less able to assess whether the decisions are just. The result is that the public loses the capacity to identify, challenge, and rectify unfairness, as well as the costs associated with the loss of public goods or benefits. The current position paper defines and explains the role of fiduciary responsibility within an automated decision-making system. We formulate an automated decision-making system as a data science lifecycle (DSL) and examine the implications of fiduciary responsibility within the context of the DSL. Fiduciary responsibility within DSLs provides a methodology for addressing the public's lack of trust in automated decision-making systems and in the institutions that employ them to make decisions affecting the public. We posit that fiduciary responsibility manifests in several contexts of a DSL, each of which requires its own mitigation of sources of mistrust. To instantiate fiduciary responsibility, a Los Angeles Police Department (LAPD) predictive policing case study is examined. We examine the LAPD's development and deployment of predictive policing technology and identify several ways in which the LAPD failed to meet its fiduciary responsibility.
Source: Journal of Social Computing (社会计算(英文)), EI, 2022, Issue 4, pp. 345-362 (18 pages)
Funding: Supported by the National Science Foundation and the National Geospatial-Intelligence Agency (No. 1830254), and by the National Science Foundation (No. 1934884).