Abstract
As human beings, we coordinate movements and interact with the environment through sensory information and motor adaptation in daily life. Many characteristics of these interactions can be studied using optimization-based models, which assume that precise knowledge of both the sensorimotor system and its environment is available to the central nervous system (CNS). However, both static and dynamic uncertainties inevitably occur in everyday movements. When these uncertainties are taken into account, previously developed models based on optimization theory may fail to explain how the CNS can still coordinate human movements that remain robust to the uncertainties. To address this problem, this paper presents a novel computational mechanism for sensorimotor control from the perspective of robust adaptive dynamic programming (RADP). Sharing essential features of reinforcement learning, which was originally observed in mammals, the RADP model of sensorimotor control suggests that, instead of identifying the dynamics of both the motor system and the environment, the CNS iteratively computes a robust optimal control policy from real-time sensory data. An online learning algorithm is provided in this paper, with rigorous convergence and stability analysis, and is then applied to simulate several experiments reported in the literature. By comparing the numerical results with the experimentally observed data, the authors show that the proposed model reproduces movement trajectories consistent with experimental observations. In addition, the RADP theory provides a unified framework that connects the optimality and robustness properties of the sensorimotor system.
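The abstract does not spell out the learning algorithm itself; as a rough illustration of the underlying idea of iteratively computing an optimal control policy rather than identifying the full dynamics, the following is a sketch of classical policy iteration for a continuous-time linear-quadratic regulator (Kleinman's algorithm), the model-based iteration that ADP-style methods approximate from measured data. The double-integrator plant and all gains below are illustrative assumptions, not taken from the paper.

```python
# Sketch: policy iteration for continuous-time LQR (Kleinman's algorithm).
# ADP/RADP schemes perform this same evaluate/improve loop, but estimate
# each policy's cost matrix P from real-time trajectory data instead of
# using the model matrices A and B directly.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # illustrative double-integrator dynamics
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                            # state cost weight
R = np.array([[1.0]])                    # control cost weight

K = np.array([[1.0, 1.0]])               # any initial stabilizing gain
for _ in range(15):
    Ac = A - B @ K
    # Policy evaluation: solve the Lyapunov equation
    #   Ac' P + P Ac = -(Q + K' R K)
    P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B' P
    K = np.linalg.solve(R, B.T @ P)

# The iterates converge to the optimal gain from the algebraic Riccati equation.
P_opt = solve_continuous_are(A, B, Q, R)
K_opt = np.linalg.solve(R, B.T @ P_opt)
print(np.allclose(K, K_opt))  # True
```

Each pass evaluates the current policy's cost and then improves the gain; convergence to the Riccati solution is quadratic, which is why a small, fixed number of iterations suffices here.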
Funding
Supported in part by the US National Science Foundation under Grants ECCS-1101401 and ECCS-1230040.