The risk of bias is widely noticed throughout the entire process of generative artificial intelligence (generative AI) systems. To protect the rights of the public and improve the effectiveness of AI regulations, feasible measures to address the bias problem in the context of big data should be proposed as soon as possible. Since bias originates in every part and various aspects of the AI product lifecycle, laws and technical measures should consider each of these layers and take the different causes of bias into account, from data training and modeling to application design. The Interim Measures for the Administration of Generative AI Services (the Interim Measures), formulated by the Office of the Central Cyberspace Affairs Commission (CAC) and other departments, have taken the initiative to govern AI. However, they lack specific details on issues such as how to prevent the risk of bias and reduce the effect of bias in decision-making. The Interim Measures also fail to take the causes of bias into account, and several of their principles must be further interpreted. Meanwhile, regulations on generative AI at the global level are still in their early stages. By forming a governance framework, this paper could provide the community with useful experience and play a leading role. The framework includes at least three parts: first, determining the realm of governance and unifying related concepts; second, developing measures for the different layers to identify the causes and specific aspects of bias; third, identifying parties with the skills to take responsibility for detecting bias intrusions and proposing a program for the allocation of liabilities among large-scale platform developers.
Objective: To evaluate, by meta-analysis, the effect of patent foramen ovale (PFO) on postoperative stroke in patients undergoing non-cardiac surgery. Methods: PubMed, Embase, Web of Science, CINAHL, the Cochrane Library, the Chinese Biomedical Literature Database, the Wanfang Data Knowledge Service Platform, and the China Journal Full-text Database were searched. Studies assessing the strength of association between PFO and postoperative stroke were included. The primary outcome was the incidence of postoperative stroke; secondary outcomes were mortality, myocardial infarction, and readmission within 30 days after surgery. Studies meeting the inclusion criteria underwent quality appraisal and data extraction, and the meta-analysis was performed with RevMan 5.4. Results: Eight retrospective cohort studies with a total of 21,142,237 patients were included. The meta-analysis showed that PFO was associated with postoperative stroke and with 30-day readmission in non-cardiac surgery patients. Differences in all-cause mortality and myocardial infarction between the PFO and non-PFO groups were not statistically significant (P>0.05). Conclusion: PFO increases the risk of postoperative stroke in patients undergoing non-cardiac surgery.
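The pooling step behind a RevMan analysis of binary outcomes can be sketched as inverse-variance weighting of per-study log odds ratios. The sketch below uses hypothetical 2x2 tables, not data from the eight reviewed cohorts, and fixed-effect pooling is only one of the models RevMan offers:

```python
import math

def pooled_or_fixed(studies):
    """Inverse-variance fixed-effect pooling of odds ratios.

    Each study is a 2x2 table (a, b, c, d):
      a = strokes in PFO group,     b = no stroke in PFO group,
      c = strokes in non-PFO group, d = no stroke in non-PFO group.
    """
    num = den = 0.0
    for a, b, c, d in studies:
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d  # variance of log(OR)
        w = 1.0 / var                        # inverse-variance weight
        num += w * log_or
        den += w
    pooled_log = num / den
    se = math.sqrt(1.0 / den)
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * se),  # 95% CI lower bound
            math.exp(pooled_log + 1.96 * se))  # 95% CI upper bound

# Hypothetical 2x2 tables for illustration only
studies = [(12, 488, 30, 4970), (8, 192, 40, 9960)]
or_, lo, hi = pooled_or_fixed(studies)
print(f"pooled OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An association is reported when the pooled confidence interval excludes 1; with heterogeneous retrospective cohorts, a random-effects model would typically also be examined.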
The development of blockchain is at a nascent stage. Current research on blockchain mainly focuses on single technologies and fails to reflect the correlations among the integrated technologies, owing to a lack of real-world application. In this paper, according to functional classification, we divide blockchain technology into five layers: the data layer, the network layer, the consensus layer, the contract layer, and the application layer. For each layer, we elaborate on its technical principles and the latest research status. We also provide empirical cases of blockchain applications. This paper summarizes the general functional modules of the blockchain to support the rapid implementation of blockchain applications. In the end, we investigate the challenges faced by blockchain technology and present research prospects.
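The five-layer classification above can be sketched as a simple bottom-up taxonomy. The one-line layer summaries here are an illustrative paraphrase of commonly cited responsibilities, not text from the survey itself:

```python
# The five functional layers of blockchain technology, bottom-up.
# Summaries are illustrative paraphrases, not quotations from the paper.
LAYERS = [
    ("data",        "blocks, hash chains, Merkle trees, digital signatures"),
    ("network",     "P2P topology, message propagation, node discovery"),
    ("consensus",   "agreement protocols such as PoW, PoS, and PBFT"),
    ("contract",    "scripts and smart contracts executing on-chain logic"),
    ("application", "decentralized applications built on the layers below"),
]

def describe(layer_name: str) -> str:
    """Return a one-line summary of a layer, or raise KeyError if unknown."""
    for name, summary in LAYERS:
        if name == layer_name:
            return f"{name} layer: {summary}"
    raise KeyError(layer_name)

print(describe("consensus"))
```

Treating each layer as a named module with a narrow responsibility mirrors the paper's goal of reusable functional modules that speed up application development.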
Funding: Supported by the National Natural Science Foundation of China (61762049, 61862033, 61902162) and the Natural Science Foundation of Jiangxi Province (20202BABL202025, 20202BABL202026, 20202BAB202015).