Abstract
Generative artificial intelligence has further improved the efficiency of human information production and communication, but it also raises challenges such as the difficulty of attributing responsibility among actors and information disorder, drawing global attention to the governance and regulation of AI. In 2023, on the basis of the principles of "attaching equal importance to development and security" and "combining innovation with law-based governance", China further clarified the roles of the government, service providers, and service users, as well as the delineation of their responsibilities in risk events such as information disorder. Starting from the three industry levels of technology research and development, industry adoption, and public service, China has built a comprehensive governance framework oriented by technical ethics, industry norms, and action guidelines, placing its governance of generative AI at the global forefront. Faced with a technology whose operating mechanisms are complex, whose risk patterns are changeable, and whose potential impacts are unknown, an ex-ante regulatory model based solely on risk prevention can hardly achieve the desired preventive effect; a more flexible and inclusive approach of dynamic supervision is needed to realize the inclusive development of generative AI.
Summary
Prior to the appearance of generative artificial intelligence (GAI), specialized models, as the underlying architecture of traditional AI, were typically confined to a specific subdomain and could not be generalized to other domains or used to solve comprehensive problems. Large language models (LLMs) are the application of large-scale pre-training models in the field of natural language processing and the key technique behind GAI. Built on LLMs, GAI adopts the development paradigm of "pre-training + fine-tuning", which allows it to adjust its output precisely according to application scenarios and user feedback, and to generate content beyond the subjects or fields of the original data. At present, GAI has demonstrated strong capabilities in education, medical care, scientific research, art, and other fields, but it also brings a series of risks and challenges that harm the rights of individuals and even threaten national security.
An important issue in GAI governance is how to reasonably allocate responsibility among the actors involved and improve the identification of GAI-related liability. In the diverse ecosystem constructed by "large model + application", the complex and changeable relationships between upstream large models and downstream enterprises mainly take two forms: the direct dependency mode, i.e., Model as a Service, and the indirect dependency mode, i.e., "model + service". The application risk of large models is therefore embedded not only in the quality of technology development but also in the complex interaction of various actors along the industrial and value chains, which makes it impossible to discuss responsibility attribution and behavioral norms apart from specific application scenarios. Information disorder is another important risk of GAI. Along the two dimensions of "falseness" and "harm", information disorder can be further divided into three categories: mis-information, dis-information, and mal-information. From the perspective of machine intelligence, technical defects of GAI such as "undifferentiated learning" and "hallucination" are important causes of mis-information.
In response to these two challenges, China remained highly active in AI governance in 2023, exhibiting the characteristics of both an entrepreneurial state and a regulatory state. On the basis of the principles of "attaching equal importance to development and security" and "combining innovation with law-based governance", the world's first standalone regulation on GAI, the "Administrative Measures for Generative Artificial Intelligence Services (Draft for Public Comments)", was promulgated. China's governance of GAI starts from the three levels of technology research and development, industry adoption, and public service, and develops a comprehensive governance framework oriented by technical ethics, industry norms, and action guidelines. This framework re-examines the roles and responsibilities of actors at different levels and distinguishes their risk-prevention obligations from their legal liabilities, building a hierarchical governance pattern that promotes the healthy and sustainable development of GAI.
Authors
ZHAO Yu; CAO Lingxiao (College of Media and International Culture, Zhejiang University, Hangzhou 310058, China)
Fund
Major Project of the National Social Science Fund of China (21&ZD318).
Keywords
artificial intelligence governance
generative artificial intelligence
large model
risk regulation
information disorder
dynamic supervision
inclusive development