Funding: This work was partly supported by the United States Air Force Office of Scientific Research (AFOSR) contract FA9550-22-1-0268 awarded to KHA, https://www.afrl.af.mil/AFOSR/. The contract is entitled “Investigating Improving Safety of Autonomous Exploring Intelligent Agents with Human-in-the-Loop Reinforcement Learning.” The work was also supported in part by Jackson State University.
Abstract: Intrinsic motivation helps autonomous exploring agents traverse a larger portion of their environments. However, simulations of different learning environments in previous research show that after millions of timesteps of successful training, an intrinsically motivated agent may learn to act in ways unintended by the designer. This potential for unintended actions by autonomous exploring agents poses threats to the environment and to humans if the agents are operated in the real world. We investigated this topic by using the Unity Machine Learning Agents Toolkit (ML-Agents) implementation of the Proximal Policy Optimization (PPO) algorithm with the Intrinsic Curiosity Module (ICM) to train autonomous exploring agents in three learning environments. We demonstrate that ICM, although designed to assist agent navigation in environments with sparse reward generation, can increasingly serve as a tool for purposely training a misbehaving agent in significantly fewer than 1 million timesteps. We present the following achievements: 1) experiments designed to cause agents to act undesirably, 2) a metric for gauging how well an agent achieves its goal without collisions, and 3) validation of PPO best practices. We then used optimized methods to improve the agent's performance and reduce collisions within the same environments. These achievements further our understanding of how monitoring training statistics during reinforcement learning can guide human intervention to improve agent safety and performance.
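For readers unfamiliar with the curiosity signal used in this abstract, the sketch below illustrates the general form of the ICM intrinsic reward: the prediction error of a forward model in a learned feature space. This is an illustrative re-implementation of the standard ICM formulation under assumed network sizes and an assumed scale factor `eta`; it is not the Unity ML-Agents code used in the study.

```python
# Minimal sketch of the Intrinsic Curiosity Module (ICM) reward signal.
# Illustrative only: layer sizes and the scale factor `eta` are assumptions,
# not the configuration used by the authors.
import torch
import torch.nn as nn


class ICM(nn.Module):
    def __init__(self, obs_dim: int, action_dim: int, feat_dim: int = 64, eta: float = 0.01):
        super().__init__()
        self.eta = eta
        # Encoder phi: maps raw observations to a learned feature space.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU())
        # Forward model: predicts phi(s_{t+1}) from phi(s_t) and a_t.
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + action_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # Inverse model: predicts a_t from phi(s_t) and phi(s_{t+1});
        # its loss keeps the features focused on controllable aspects of the state.
        self.inverse_model = nn.Linear(2 * feat_dim, action_dim)

    def intrinsic_reward(self, obs, action, next_obs):
        phi, phi_next = self.encoder(obs), self.encoder(next_obs)
        phi_next_pred = self.forward_model(torch.cat([phi, action], dim=-1))
        # Curiosity reward = scaled forward-model prediction error.
        return 0.5 * self.eta * (phi_next_pred - phi_next).pow(2).sum(dim=-1)
```

In PPO-based training such as the setup described above, this intrinsic reward is typically added to the extrinsic (environment) reward at each timestep, so the agent is pushed toward states its forward model predicts poorly.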
Abstract: This research is framed within affective computing, which emphasizes the importance of emotions in human cognition (decision making, perception, interaction, and human intelligence). Applying this approach to a pedagogical agent is essential for enhancing the effectiveness of the teaching-learning process in an intelligent learning system. This work focuses on the design of the inference engine that gives life to the interface, where the latter is represented by a pedagogical agent. The inference engine is based on an affective-motivational model, implemented using an artificial intelligence technique called fuzzy cognitive maps.
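As background on the technique named in this abstract, the sketch below shows a generic fuzzy cognitive map (FCM) update: concepts hold activation values, a weight matrix encodes causal influences between concepts, and activations are iterated through a squashing function. The concepts and weights here are hypothetical placeholders, not the authors' affective-motivational model.

```python
# Minimal sketch of a generic fuzzy cognitive map (FCM) update.
# Concepts, weights, and the steepness parameter `lam` are hypothetical
# examples, not the paper's model.
import numpy as np


def fcm_step(activations: np.ndarray, weights: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """One synchronous FCM update: A(t+1) = sigmoid(W^T A(t) + A(t))."""
    net = weights.T @ activations + activations
    return 1.0 / (1.0 + np.exp(-lam * net))


# Hypothetical concepts: 0 = frustration, 1 = motivation, 2 = tutor help offered.
# W[i, j] is the causal influence of concept i on concept j.
W = np.array([
    [0.0, -0.6,  0.8],   # frustration lowers motivation, triggers help
    [-0.3, 0.0,  0.0],   # motivation dampens frustration
    [-0.4, 0.5,  0.0],   # help reduces frustration and raises motivation
])

A = np.array([0.9, 0.2, 0.0])   # current estimate of the learner's state
for _ in range(5):              # iterate a few steps toward a stable pattern
    A = fcm_step(A, W)
print(A)                        # inferred activation levels after convergence
```

In an inference engine of the kind described above, the converged activations would be read off to decide the pedagogical agent's response (for example, whether to offer help or encouragement).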