
Legal Regulation of Generative Large Model Training Data: The Principle of Proportionality as an Analytical Perspective
Abstract: Generative artificial intelligence large models, which rely on vast amounts of training data, are developing rapidly. However, in the collection, processing, and output of training data, potential risks such as data non-compliance, data bias, and data leakage threaten not only the development of the technology itself but also the interests of affected groups in society, and therefore call for legal regulation. First, with the aim of balancing the development of artificial intelligence against the protection of human rights and interests, the risks arising in the training-data process of generative large models are analyzed, and a risk-governance framework is constructed on the basic principles of proportionality: the sources and purposes of data acquisition are restricted, and the interests of all rights holders are fully balanced. Second, data content and algorithmic technology are subjected to legitimacy controls so that harm to all parties is kept to a minimum. Finally, the risk of data leakage is reduced through three approaches: enforcing the principle of data-collection minimization, strengthening the use of compliant data, and clarifying the responsibilities and obligations of the relevant parties.
Authors: ZHONG Haiyan (钟海燕); HUANG Yunkang (黄运康), School of Law, Guangxi Normal University, Guilin, Guangxi 541006, China
Source: Information Security and Communications Privacy (《信息安全与通信保密》), 2024, No. 7, pp. 99-108 (10 pages)
Funding: Guangxi Local Rule of Law and Local Governance Research Project, "Research on Legal Issues Concerning the 'Author' of Artificial Intelligence Creations" (GXFZY202303)
Keywords: generative large model; training data; data risk governance; principle of proportionality