Autonomous agents can explore the environment around them when equipped with advanced hardware and software systems that help intelligent agents minimize collisions. These systems are developed under the term Artificial Intelligence (AI) safety. AI safety is essential for providing reliable service to consumers in various fields such as the military, education, healthcare, and automotive industries. This paper presents the design of an AI safety algorithm for safe autonomous navigation using Reinforcement Learning (RL). The Machine Learning Agents Toolkit (ML-Agents) was used to train the agent with a Proximal Policy Optimization algorithm combined with an Intrinsic Curiosity Module (PPO+ICM). This training aims to improve AI safety and to minimize or prevent any mistakes by the intelligent agent that could cause dangerous collisions. Four experiments were executed to validate the results of our research. The designed algorithm was tested in a virtual environment with four different models, and the four cases were compared to identify the best-performing model for improving AI safety. The designed algorithm enabled the intelligent agent to perform the required task safely using RL. A goal collision ratio of 64% was achieved, and collision incidents were reduced from 134 to 52 in the virtual environment within 30 min.
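To make the PPO+ICM setup concrete, the following is a minimal, self-contained sketch of the curiosity mechanism the abstract refers to: the ICM rewards the agent for transitions its forward model predicts poorly, and this intrinsic bonus is added to the environment's extrinsic reward before it is handed to PPO. The dimensions, the linear encoder/forward model, and the scaling factor `eta` are all illustrative assumptions (a real ICM learns these networks by gradient descent, as in ML-Agents); this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper): 4-D state, 2 discrete actions,
# 8-D feature space.
STATE_DIM, ACTION_DIM, FEAT_DIM = 4, 2, 8

# Random linear stand-ins for the ICM's learned feature encoder and forward
# model; a real ICM trains both networks jointly with the policy.
W_enc = rng.normal(size=(STATE_DIM, FEAT_DIM))
W_fwd = rng.normal(size=(FEAT_DIM + ACTION_DIM, FEAT_DIM))

def encode(state):
    """Map a raw state into the ICM feature space."""
    return state @ W_enc

def intrinsic_reward(state, action_onehot, next_state, eta=0.5):
    """Curiosity bonus: forward-model prediction error in feature space."""
    phi, phi_next = encode(state), encode(next_state)
    pred_next = np.concatenate([phi, action_onehot]) @ W_fwd
    return eta * 0.5 * float(np.sum((pred_next - phi_next) ** 2))

# One illustrative transition.
s = rng.normal(size=STATE_DIM)
a = np.array([1.0, 0.0])          # one-hot action
s_next = rng.normal(size=STATE_DIM)

r_int = intrinsic_reward(s, a, s_next)
r_ext = 1.0                        # assumed extrinsic reward for this step
r_total = r_ext + r_int            # total reward passed to the PPO update
```

Because the bonus shrinks as the forward model improves on familiar transitions, the agent is pushed toward unexplored parts of the environment, which is what makes PPO+ICM useful for training navigation policies under sparse goal rewards.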
Funding: This work was supported by the United States Air Force Office of Scientific Research (AFOSR) contract FA9550-22-1-0268 awarded to KHA (https://www.afrl.af.mil/AFOSR/), entitled "Investigating Improving Safety of Autonomous Exploring Intelligent Agents with Human-in-the-Loop Reinforcement Learning," and in part by Jackson State University.