Abstract: This paper proposes a novel approach to using artificial intelligence (AI), particularly large language models (LLMs) and other foundation models (FMs), in an educational environment. It emphasizes the integration of teams of teachable and self-learning LLM agents that use a neuro-symbolic cognitive architecture (NSCA) to provide dynamic, personalized support to learners and educators within self-improving adaptive instructional systems (SIAIS). These systems host the agents and support dynamic sessions of engagement workflow. We have developed the never-ending open learning adaptive framework (NEOLAF), an LLM-based neuro-symbolic architecture for self-learning AI agents, and the open learning adaptive framework (OLAF), the underlying platform that hosts the agents, manages agent sessions, and supports agent workflows and integration. NEOLAF and OLAF serve as concrete examples to illustrate the proposed AI implementation approach. We also discuss our proof-of-concept testing of the NEOLAF agent's development of math problem-solving capabilities, and the evaluation of the deployed interactive agent in the learning environment.
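The abstract does not include implementation details, but the neuro-symbolic pattern it describes (a neural component proposing answers, a symbolic component verifying them, and verified results fed back into long-term memory so the agent improves over time) can be loosely illustrated. In the sketch below, all names (`Agent`, `propose`, `verify`, `solve`) are hypothetical and the arithmetic "proposer" stands in for an LLM call; this is not the NEOLAF/OLAF API.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Long-term store of verified solutions (the "self-learning" memory).
    memory: dict = field(default_factory=dict)

    def propose(self, problem: str) -> str:
        # Stand-in for the neural step: an LLM call would go here.
        # A remembered, verified answer is reused directly.
        if problem in self.memory:
            return self.memory[problem]
        a, op, b = problem.split()
        return str(int(a) + int(b)) if op == "+" else "unknown"

    def verify(self, problem: str, answer: str) -> bool:
        # Stand-in for the symbolic step: check the answer deterministically.
        a, _, b = problem.split()
        return answer == str(int(a) + int(b))

    def solve(self, problem: str) -> str:
        answer = self.propose(problem)
        if self.verify(problem, answer):
            # Only verified results are retained, so memory stays trustworthy.
            self.memory[problem] = answer
        return answer

agent = Agent()
print(agent.solve("2 + 3"))  # verified and remembered
```

The key design point this toy captures is the division of labor: the neural side is fallible and creative, the symbolic side is a strict checker, and only answers that pass verification are committed to memory.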
Funding: partially sponsored by Shanghai Sailing Program No. 18YF1402200.
Abstract: Adversarial examples revealed the weakness of machine learning techniques in terms of robustness, and in turn inspired adversaries to exploit that weakness to attack systems that employ machine learning. Existing research covers methodologies for adversarial example generation, the root cause of the existence of adversarial examples, and some defense schemes. However, practical attacks against real-world systems did not appear until recently, mainly because of the difficulty of injecting an artificially generated example into the model behind the hosting system without breaking its integrity. Recent case studies against face recognition systems and road sign recognition systems have finally bridged the gap between theoretical adversarial example generation methodologies and practical attack schemes against real systems. To guide future research on defending against adversarial examples in the real world, we formalize the threat model for practical attacks with adversarial examples, and analyze the restrictions and key procedures for launching real-world adversarial example attacks.
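As a concrete instance of the generation methodologies the abstract refers to, the well-known fast gradient sign method (FGSM) perturbs an input by a small step in the direction of the sign of the loss gradient. The sketch below applies FGSM to a toy logistic-regression model in pure NumPy; the weights and inputs are made up for illustration, not drawn from the paper.

```python
import numpy as np

# Toy logistic-regression "model" with fixed, illustrative weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability of class 1 under the toy model.
    return sigmoid(x @ w + b)

def fgsm(x, y, eps):
    """FGSM: shift x by eps in the direction that increases the
    binary cross-entropy loss for true label y."""
    p = predict(x)
    # Gradient of the loss w.r.t. x through the sigmoid: (p - y) * w.
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, 0.2, -0.1])
y = 1.0  # true label: class 1
x_adv = fgsm(x, y, eps=0.3)
print(predict(x), predict(x_adv))  # confidence in class 1 drops
```

Note that each coordinate moves by exactly eps, so the perturbation is bounded in the max-norm; attacks on real systems, as the abstract stresses, additionally have to survive the injection path into the hosting system, which this sketch does not model.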