Abstract
Artificial intelligence agents consist of control, perception, and action components. On the control side, although large models serve as the agent's "intelligence engine", they still suffer from "machine hallucinations", and the content they generate faces timeliness and reliability risks. Algorithmic biases in large models may also exacerbate bias in the agent's decision-making. On the perception side, agents' multimodal perception capabilities heighten the risk of personal privacy infringement, posing challenges to personal information protection regimes. Interactions within multi-agent systems may give rise to unpredictable, complex, and dynamic systemic security risks. On the action side, the interactive learning mode of embodied agents may lead to comprehensive and invasive privacy risks. The embedded and intermediated deployment of agents will deeply affect human subjectivity, and their highly customized deployment also raises AI alignment challenges. Given the "agent as a service" character of the industry chain, a modular governance framework spanning foundation models to foundation agents should be established. For specific high-risk scenarios, precision governance mechanisms should be explored. In view of the ecosystem nature of AI agents, interactive governance should be actively promoted.
Source
Oriental Law (《东方法学》)
Peking University Core Journal
2024, No. 3, pp. 129-142 (14 pages)
Funding
Interim research result of a Beijing Education Science Planning Project (Grant No. 3030-0014).
Keywords
artificial intelligence agents
general artificial intelligence
AI governance
modular governance
large models
precision governance