Journal Articles: 2 results found
1. Future of Education with Neuro-Symbolic AI Agents in Self-Improving Adaptive Instructional Systems
Authors: Richard Jiarui Tong, Xiangen Hu. Frontiers of Digital Education, 2024, No. 2, pp. 198-212 (15 pages)
Abstract: This paper proposes a novel approach to using artificial intelligence (AI), particularly large language models (LLMs) and other foundation models (FMs), in an educational environment. It emphasizes the integration of teams of teachable and self-learning LLM agents that use a neuro-symbolic cognitive architecture (NSCA) to provide dynamic, personalized support to learners and educators within self-improving adaptive instructional systems (SIAIS). These systems host the agents and support dynamic sessions of engagement workflow. We have developed the never-ending open learning adaptive framework (NEOLAF), an LLM-based neuro-symbolic architecture for self-learning AI agents, and the open learning adaptive framework (OLAF), the underlying platform that hosts the agents, manages agent sessions, and supports agent workflows and integration. NEOLAF and OLAF serve as concrete examples to illustrate this advanced AI implementation approach. We also discuss our proof-of-concept testing of the NEOLAF agent in developing math problem-solving capabilities, and the evaluation of the deployed interactive agent in the learning environment.
Keywords: large language models (LLMs); neuro-symbolic cognitive architecture (NSCA); adaptive instructional systems (AIS); open learning adaptive framework (OLAF); never-ending open learning adaptive framework (NEOLAF); artificial intelligence in education (AIED); intelligent tutoring system (ITS); LLM agent
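The abstract describes OLAF as the hosting layer (agent sessions, engagement workflows) and NEOLAF as the agent architecture, but publishes no API. The following minimal Python sketch only illustrates that hosting pattern as described; every class, method, and identifier is a hypothetical stand-in, not the paper's actual interface.

```python
# Hypothetical sketch of the hosting pattern described in the abstract:
# a platform that hosts agents, manages per-learner sessions, and routes
# messages. None of these names come from the paper's actual codebase.
from dataclasses import dataclass, field

@dataclass
class AgentSession:
    learner_id: str
    transcript: list = field(default_factory=list)  # (speaker, text) pairs

class NeoLafAgent:
    """Stand-in for a teachable, self-learning LLM agent."""
    def respond(self, session: AgentSession, message: str) -> str:
        session.transcript.append(("learner", message))
        reply = f"(LLM reply to: {message})"  # placeholder for a real LLM call
        session.transcript.append(("agent", reply))
        return reply

class OlafPlatform:
    """Stand-in host: creates sessions and routes messages to an agent."""
    def __init__(self, agent: NeoLafAgent):
        self.agent = agent
        self.sessions: dict[str, AgentSession] = {}

    def open_session(self, learner_id: str) -> AgentSession:
        # Reuse an existing session for the learner, or start a new one.
        return self.sessions.setdefault(learner_id, AgentSession(learner_id))

platform = OlafPlatform(NeoLafAgent())
session = platform.open_session("learner-42")
print(platform.agent.respond(session, "Help me factor x^2 - 5x + 6."))
```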
2. A survey of practical adversarial example attacks (cited by: 1)
Authors: Lu Sun, Mingtian Tan, Zhe Zhou. Cybersecurity, 2018, No. 1, pp. 213-221 (9 pages)
Abstract: Adversarial examples revealed the weakness of machine learning techniques in terms of robustness, and moreover inspired adversaries to exploit that weakness to attack systems employing machine learning. Existing research has covered methodologies for adversarial example generation, the root cause of the existence of adversarial examples, and some defense schemes. However, practical attacks against real-world systems did not appear until recently, mainly because of the difficulty of injecting an artificially generated example into the model behind the hosting system without breaking its integrity. Recent case studies against face recognition systems and road sign recognition systems finally bridged the gap between theoretical adversarial example generation methodologies and practical attack schemes against real systems. To guide future research on defending against adversarial examples in the real world, we formalize the threat model for practical attacks with adversarial examples, and analyze the restrictions and key procedures for launching real-world adversarial example attacks.
Keywords: AI systems security; adversarial examples; attacks
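The abstract refers to adversarial example generation methodologies without naming one. As a rough illustration, here is a minimal sketch of the fast gradient sign method (FGSM), a canonical generation technique from this literature; it assumes a PyTorch classifier and inputs normalized to [0, 1], and the function name is my own, not the survey's.

```python
# Minimal FGSM-style sketch: perturb the input in the direction that
# increases the model's loss. Assumes inputs are scaled to [0, 1].
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Return x perturbed by epsilon * sign(grad of loss w.r.t. x)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Gradient-sign step, then clamp back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Practical attacks of the kind the survey analyzes add further constraints on top of this step, e.g. the perturbation must survive printing, camera capture, and the target system's preprocessing, which is why injecting such examples without breaking system integrity is the hard part.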