Journal Articles
2 articles found
User Role Discovery and Optimization Method Based on K-means++ and Reinforcement Learning in Mobile Applications (Cited by: 1)
Authors: Yuanbang Li, Wengang Zhou, Chi Xu, Yuchun Shi. Computer Modeling in Engineering & Sciences (SCIE/EI), 2022, Issue 6, pp. 1365-1386 (22 pages)
With the widespread use of mobile phones, users can share their location and activity anytime, anywhere, in the form of check-in data. These data reflect user features. Long-term stability and a set of user-shared features can be abstracted as user roles, which are closely related to users' social background, occupation, and living habits. This study makes four main contributions to the literature. First, user feature models from different views are constructed for each user from an analysis of the check-in data. Second, the K-means++ algorithm is used to discover user roles from user features. Third, a reinforcement learning algorithm is proposed to strengthen the clustering effect of user roles and improve the stability of the clustering results. Finally, experiments verify the validity of the method. The results show that the method improves the clustering effect by a factor of 1.5∼2 and the stability of the clustering results by a factor of about 2∼3. This work is the first to apply reinforcement learning to the optimization of user roles in mobile applications, enhancing the clustering effect and improving the stability of automatic user role discovery.
Keywords: user role discovery, user role optimization, K-means++, reinforcement learning
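The role-discovery step the abstract names rests on K-means++ seeding, which spreads the initial centroids so that repeated runs cluster more stably. Below is a minimal numpy sketch of that seeding plus plain Lloyd iterations; the synthetic "user feature" data, the number of roles, and all names are illustrative assumptions, and the paper's feature construction and reinforcement-learning refinement are not shown.

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """K-means++ seeding: first centre uniform at random, each later
    centre drawn with probability proportional to its squared distance
    from the nearest centre chosen so far."""
    centres = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centres], axis=0)
        centres.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centres)

def kmeans(X, k, iters=50, seed=0):
    """K-means++ seeding followed by standard Lloyd updates."""
    rng = np.random.default_rng(seed)
    C = kmeans_pp_init(X, k, rng)
    for _ in range(iters):
        # assign each point to its nearest centre
        labels = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        # move each centre to the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                C[j] = X[labels == j].mean(axis=0)
    return labels, C

# Synthetic stand-in for per-user check-in feature vectors:
# three tight groups of users with similar behaviour.
rng = np.random.default_rng(0)
blobs = np.concatenate(
    [rng.normal(loc, 0.05, size=(30, 2)) for loc in ([0, 0], [5, 5], [0, 5])]
)
labels, centres = kmeans(blobs, k=3)
```

Each of the 90 synthetic users ends up with one of three role labels; the spread-out seeding is what makes reruns land on essentially the same partition.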
Coach-assisted multi-agent reinforcement learning framework for unexpected crashed agents (Cited by: 2)
Authors: Jian ZHAO, Youpeng ZHAO, Weixun WANG, Mingyu YANG, Xunhan HU, Wengang ZHOU, Jianye HAO, Houqiang LI. Frontiers of Information Technology & Electronic Engineering (SCIE/EI/CSCD), 2022, Issue 7, pp. 1032-1042 (11 pages)
Multi-agent reinforcement learning is difficult to apply in practice, partially because of the gap between simulated and real-world scenarios. One reason for the gap is that simulated systems always assume that agents can work normally all the time, while in practice one or more agents may unexpectedly "crash" during the coordination process due to inevitable hardware or software failures. Such crashes destroy the cooperation among agents and lead to performance degradation. In this work, we present a formal conceptualization of a cooperative multi-agent reinforcement learning system with unexpected crashes. To enhance the robustness of the system to crashes, we propose a coach-assisted multi-agent reinforcement learning framework that introduces a virtual coach agent to adjust the crash rate during training. We design three coaching strategies (fixed crash rate, curriculum learning, and adaptive crash rate) and a re-sampling strategy for the coach agent. To our knowledge, this work is the first to study unexpected crashes in a multi-agent system. Extensive experiments on grid-world and StarCraft II micromanagement tasks demonstrate the efficacy of the adaptive strategy compared with the fixed-crash-rate and curriculum-learning strategies. An ablation study further illustrates the effectiveness of the re-sampling strategy.
Keywords: multi-agent system, reinforcement learning, unexpected crashed agents
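The adaptive strategy in the abstract has the coach raise the crash rate as the team learns to cope. The toy loop below sketches only that control idea: the rollout is faked, and every threshold, step size, and cap is an illustrative assumption, not a number from the paper; the re-sampling strategy is not shown.

```python
import random

def train_with_coach(episodes=200, n_agents=4, seed=0):
    """Toy sketch of a coach-adjusted crash rate: the virtual coach
    increases crash probability once a running success estimate shows
    the team handles the current difficulty (all numbers illustrative)."""
    rng = random.Random(seed)
    crash_rate, success_ema = 0.0, 0.0
    history = []
    for _ in range(episodes):
        # each agent independently survives or crashes this episode
        alive = [rng.random() >= crash_rate for _ in range(n_agents)]
        # stand-in for a real MARL rollout: more surviving agents
        # make a successful episode more likely
        success = rng.random() < 0.5 + 0.5 * sum(alive) / n_agents
        success_ema = 0.95 * success_ema + 0.05 * success
        # adaptive coaching: harder training (more crashes) once the
        # team succeeds reliably, up to a cap
        if success_ema > 0.8 and crash_rate < 0.5:
            crash_rate += 0.01
        history.append(crash_rate)
    return history

history = train_with_coach()
```

Because difficulty only ratchets up when performance warrants it, the recorded crash rate is non-decreasing and stays below the cap, mirroring a curriculum the coach paces automatically.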