Abstract
Artificial intelligence (AI) has further improved the automation of information systems; however, its large-scale application has exposed new issues such as data security, privacy protection, and fairness and ethics. To address these issues and promote the transition of AI from usable systems to trusted systems, a trusted AI governance framework, T-DACM, was proposed. It improves the trustworthiness of AI at four levels: data, algorithm, computation, and management, with different components designed to address specific problems such as data security, model security, privacy protection, the model black box, fairness, traceability, and accountability. A T-DACM practice case provides the industry with a demonstration of trusted AI governance and offers a reference for subsequent product development based on the trusted AI governance framework.
Authors
XIA Zhengxun, TANG Jianfei, LUO Shengmei, ZHANG Yan (Transwarp Information Technology (Shanghai) Co., Ltd., Shanghai 200233, China)
Source
《大数据》 (Big Data Research), 2022, No. 4, pp. 145-164 (20 pages)
Keywords
trusted AI
governance framework
AI ethics and fairness
AI interpretability
AI supervision