Manufacture and Dissemination of Fake Information vs. Protection of National Security and People's Rights and Interests in the AIGC Era
Abstract  With the impressive debut of ChatGPT, the idea of artificial intelligence-generated content (AIGC) assisting or even replacing human authors in content creation is becoming a reality. ChatGPT attracted hundreds of millions of users within a short time, and a wide range of AIGC applications rapidly went online and became popular, marking the advent of a new era in content production. Artificial intelligence technology has passed through three major stages since the 1950s: the shift from machine learning to deep learning, the introduction of Transformer models, and the arrival of the foundation model era. As the technology has advanced, the number of parameters in AI models has grown continuously, from 117 million to 1.5 billion and then to 175 billion, culminating in the birth of ChatGPT. At the same time, a variety of remarkable AI systems have emerged that can be flexibly applied in creative fields such as writing, music arrangement, painting, and video production.

However, while foundation models are advancing the cognitive capabilities of intelligent agents, they also pose risks and challenges to humans. The training data and objectives of large language models may be ambiguous and uncertain, so the models may exhibit misleading and biased behavior; and once such models become easy to manipulate, they are likely to be exploited for malicious purposes, producing fake news and online fraud and, more seriously, undermining social stability and endangering national security. Because AI technology is easily accessible to the public and extremely powerful, malicious actors can misuse it to fabricate AI text and image rumours, to commit AI video, audio and scene fraud, and even to imitate the voice and behavior of real humans, making it difficult for people to distinguish AI-generated content from actual facts and genuine opinions. In addition, the unchecked deployment of anthropomorphic AI in online "water armies" (coordinated paid-posting campaigns) not only disturbs normal public cognition but also disrupts the existing social order. Owing to the power of AIGC, fabricating fake information has become easier and simpler, and its content is becoming more diverse and realistic. Serious cases of rumour-mongering and fraud have already occurred worldwide, greatly jeopardizing the legitimate rights and interests of citizens and threatening national security.

In view of this, after a brief introduction to the mechanism of AIGC, this study analyses several cases and proposes the following suggestions and measures. (1) At the technical level, research investment should be increased to improve the effectiveness of deepfake detection, for example by designing tit-for-tat "AI of justice" systems to counter malicious or criminal AI. (2) At the legislative level, relevant laws and regulations should be improved to clarify the boundary between crime and non-crime; for example, the responsibility and penalty provisions of the newly promulgated "Interim Measures for the Management of Generative Artificial Intelligence Services" should be applied in combination with China's existing laws and regulations to clarify the responsibilities of the actors involved in manufacturing, using and disseminating AI-generated false information. (3) At the law-enforcement level, specialized institutions should be established to handle emergencies and to strictly enforce laws and regulations in combating AI crimes. (4) In terms of publicity and education, knowledge of AIGC should be popularized to improve citizens' ability to recognize AI-generated content and their alertness to it, so that an effective early-warning mechanism and a hazard-handling mechanism can be established to avoid and reduce damage to people's interests and to safeguard national security.
Authors  Lu Jianping (陆建平); Dang Ziqiang (党自强) (College of Media and International Culture, Zhejiang University, Hangzhou 310058, China; College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, China)
Source  Journal of Zhejiang University (Humanities and Social Sciences), PKU Core journal, 2024, No. 5, pp. 42-58 (17 pages)
Funding  Supported by a special project of the Fundamental Research Funds for the Central Universities under the "Key Areas Research Funding Program" (fourth batch, 2023).
Keywords  AIGC; ChatGPT; deepfake; fake information; national security; people's rights and interests