Abstract
The safety governance of AI foundation models is a major issue confronting the law-based development of artificial intelligence. As subjects of safety governance, AI foundation model platforms' capacity to identify, assess, and mitigate the potential risks of AI systems is crucial, and they are positioned to adjust and optimize their own models. AI foundation model platforms at home and abroad have actively engaged in safety risk governance, yet they still face challenges such as governance structures that lack authority and representativeness, overly abstract ethical norms, insufficient computing power for testing, difficulties in risk assessment, and platforms' self-interested preferences. It is necessary, based on the practical realities of safety governance for Chinese AI foundation model platforms, to improve pressure-driven, motivation-guaranteeing, and capability-enhancing mechanisms, so as to better leverage the platforms' unique advantages and proactive role in safety governance and to support the healthy and orderly development of AI technology and its applications.
Source
Law and Economy (《财经法学》)
2024, No. 5, pp. 3-22 (20 pages)
Funding
Supported by the Chinese Academy of Social Sciences "Dengfeng Strategy" discipline development funding program (DF2023XXJC07).
Keywords
foundation model
foundation model platform
artificial intelligence safety
artificial intelligence risk
artificial intelligence governance