Journal Articles: 4 results found
1. Training a Quantum Neural Network to Solve the Contextual Multi-Armed Bandit Problem
Authors: Wei Hu, James Hu. Natural Science, 2019, Issue 1, pp. 17-27 (11 pages).
Artificial intelligence has permeated all aspects of our lives today. However, to make AI behave like real AI, the critical bottleneck lies in the speed of computing. Quantum computers employ the peculiar and unique properties of quantum states such as superposition, entanglement, and interference to process information in ways that classical computers cannot. As a new paradigm of computation, quantum computers are capable of performing tasks intractable for classical processors, thus providing a quantum leap in AI research and making the development of real AI a possibility. In this regard, quantum machine learning not only enhances the classical machine learning approach but, more importantly, provides an avenue to explore new machine learning models that have no classical counterparts. The qubit-based quantum computers cannot naturally represent the continuous variables commonly used in machine learning, since the measurement outputs of qubit-based circuits are generally discrete. Therefore, a continuous-variable (CV) quantum architecture based on a photonic quantum computing model is selected for our study. In this work, we employ machine learning and optimization to create photonic quantum circuits that can solve the contextual multi-armed bandit problem, a problem in the domain of reinforcement learning, which demonstrates that quantum reinforcement learning algorithms can be learned by a quantum device.
Keywords: Continuous-variable quantum computers; Quantum machine learning; Quantum reinforcement learning; Contextual multi-armed bandit problem
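For readers unfamiliar with the task, the sketch below shows a minimal classical contextual multi-armed bandit training loop. The environment, reward table, and softmax-linear policy are illustrative assumptions; they stand in for the photonic CV circuit that the paper actually optimizes.

```python
import numpy as np

# Minimal contextual multi-armed bandit loop (illustrative; the paper
# replaces the softmax-linear policy below with a photonic CV circuit).
rng = np.random.default_rng(0)
n_contexts, n_arms = 4, 3
# Hypothetical per-context Bernoulli reward probabilities for each arm.
reward_prob = rng.uniform(0.1, 0.9, size=(n_contexts, n_arms))

W = np.zeros((n_contexts, n_arms))  # one softmax-policy row per context

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for _ in range(5000):
    ctx = rng.integers(n_contexts)             # environment draws a context
    probs = softmax(W[ctx])
    arm = rng.choice(n_arms, p=probs)          # policy samples an arm
    reward = float(rng.random() < reward_prob[ctx, arm])
    grad = -probs                              # REINFORCE-style update:
    grad[arm] += 1.0                           # gradient of log pi(arm | ctx)
    W[ctx] += lr * reward * grad

print("learned arms:", W.argmax(axis=1))
print("optimal arms:", reward_prob.argmax(axis=1))
```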
2. Strict greedy design paradigm applied to the stochastic multi-armed bandit problem
Author: Joey Hong. 《机床与液压》 (Machine Tool & Hydraulics), PKU Core Journal, 2015, Issue 6, pp. 1-6 (6 pages).
The process of making decisions is something humans do inherently and routinely, to the extent that it appears commonplace. However, in order to achieve good overall performance, decisions must take into account both the outcomes of past decisions and the opportunities of future ones. Reinforcement learning, which is fundamental to sequential decision-making, consists of the following components: (1) a set of decision epochs; (2) a set of environment states; (3) a set of available actions to transition between states; (4) state-action dependent immediate rewards for each action. At each decision epoch, the environment state provides the decision maker with a set of available actions from which to choose. As a result of selecting a particular action in that state, the environment generates an immediate reward for the decision maker and shifts to a different state and decision epoch. The ultimate goal for the decision maker is to maximize the total reward over a sequence of time steps. This paper focuses on an archetypal example of reinforcement learning, the stochastic multi-armed bandit problem. After introducing the dilemma, I briefly cover the most common methods used to solve it, namely the UCB and ε_n-greedy algorithms. I also introduce my own greedy implementation, the strict-greedy algorithm, which more tightly follows the greedy pattern in algorithm design, and show that it performs comparably to the two accepted algorithms.
Keywords: Greedy algorithms; Allocation strategy; Stochastic multi-armed bandit problem
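Both baselines named in the abstract are standard; a compact sketch of UCB1 and ε_n-greedy on assumed Bernoulli arms follows. The paper's strict-greedy algorithm is its own contribution and is not reproduced here.

```python
import numpy as np

# Sketches of the two baseline bandit algorithms named in the abstract.
# The arm means are illustrative assumptions.
rng = np.random.default_rng(1)
means = np.array([0.3, 0.5, 0.7])   # hypothetical Bernoulli arm means
K, T = len(means), 10000

def pull(a):
    return float(rng.random() < means[a])

def ucb1():
    counts = np.ones(K)
    sums = np.array([pull(a) for a in range(K)])   # play each arm once
    for t in range(K + 1, T + 1):
        scores = sums / counts + np.sqrt(2 * np.log(t) / counts)
        a = int(np.argmax(scores))
        counts[a] += 1
        sums[a] += pull(a)
    return sums.sum()

def eps_n_greedy(c=5.0, d=0.2):
    counts = np.ones(K)
    sums = np.array([pull(a) for a in range(K)])
    for n in range(K + 1, T + 1):
        eps = min(1.0, c * K / (d * d * n))        # decaying exploration
        if rng.random() < eps:
            a = int(rng.integers(K))               # explore uniformly
        else:
            a = int(np.argmax(sums / counts))      # exploit best estimate
        counts[a] += 1
        sums[a] += pull(a)
    return sums.sum()

print("UCB1 total reward:        ", ucb1())
print("eps_n-greedy total reward:", eps_n_greedy())
```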
3. Optimal index shooting policy for layered missile defense system (cited by 1)
Authors: LI Longyue, FAN Chengli, XING Qinghua, XU Hailong, ZHAO Huizhen. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2020, Issue 1, pp. 118-129 (12 pages).
In order to cope with the increasing threat of ballistic missiles (BMs) within a shorter reaction time, the shooting policy of the layered defense system needs to be optimized. The main decision-making problem of shooting optimization is how to choose the next BM to shoot at, according to the previous engagements and their results, so as to maximize the expected return from BMs killed or minimize the cost of BM penetration. Motivated by this, this study aims to determine an optimal shooting policy for a two-layer missile defense (TLMD) system. This paper considers a scenario in which the TLMD system wishes to shoot at a collection of BMs one at a time, and to maximize the return obtained from BMs killed before the system's demise. To provide a policy analysis tool, this paper develops a general model for shooting decision-making in which the shooting engagements are described as a discounted-reward Markov decision process. The resulting index shooting policy is a strategy that effectively balances the shooting returns against the risk that the defense mission fails. The numerical results show that the index policy outperforms a range of competitors, especially in mean return and mean number of BMs killed.
Keywords: Gittins index; Shooting policy; Layered missile defense; Multi-armed bandit problem; Markov decision process
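As a rough illustration of how an index policy prioritizes targets, the toy sketch below assumes each BM has a fixed kill value and a constant per-shot kill probability; under that constant-hazard assumption the per-target index reduces to the myopic expected return v times p. The paper's TLMD model is richer (two defense layers, system-demise risk, discounting), so this is a simplification, not the authors' policy.

```python
import numpy as np

# Toy index shooting policy. Values v and kill probabilities p are
# illustrative assumptions; with a constant per-shot kill probability the
# Gittins-style index of a target reduces to v[i] * p[i].
rng = np.random.default_rng(2)
v = np.array([10.0, 6.0, 3.0])   # assumed kill values per BM
p = np.array([0.3, 0.6, 0.9])    # assumed per-shot kill probabilities
alive = np.ones(len(v), dtype=bool)

total_return, shots = 0.0, 0
while alive.any() and shots < 50:
    index = np.where(alive, v * p, -np.inf)  # index of each live target
    i = int(np.argmax(index))                # engage highest-index target
    shots += 1
    if rng.random() < p[i]:                  # the shot kills the BM
        total_return += v[i]
        alive[i] = False

print(f"shots fired: {shots}, total return: {total_return}")
```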
4. Data poisoning attack methods targeting the LinUCB algorithm
Authors: JIANG Weilong (姜伟龙), HE Kun (何琨). 《中国科学:信息科学》 (SCIENTIA SINICA Informationis), CSCD, PKU Core Journal, 2024, Issue 7, pp. 1569-1587 (19 pages).
The LinUCB algorithm is a classic algorithm for solving the contextual multi-armed bandit problem and is widely applied in scenarios such as news delivery, product recommendation, and medical resource allocation. Research on the security of this algorithm is still relatively weak, which calls on researchers to further study attacks against it so that targeted, or even broadly applicable, defenses can be devised. This paper proposes two offline data poisoning attack schemes against the LinUCB algorithm that work by injecting fake data: the TCA scheme (target context attack) and the OCA scheme (optimized context attack). The former generates poisoned data based on the similarity between the training data and the target context; the latter formulates an optimization problem and constructs the poisoned data by solving it, and is an optimized version of the former. Experiments show that adding only a small amount of poisoned data as the attack cost suffices to achieve a 100% attack success rate against the attack target.
Keywords: Contextual multi-armed bandit; LinUCB algorithm; Data poisoning attack; White-box attack; Optimization problem
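For reference, a minimal sketch of the disjoint LinUCB algorithm (Li et al., 2010) that the attacks target is given below. The simulated contexts and reward weights are illustrative assumptions, and the TCA/OCA poisoning schemes themselves are not reproduced.

```python
import numpy as np

# Minimal disjoint LinUCB (Li et al., 2010). Contexts and true arm
# weights below are simulated, illustrative assumptions.
rng = np.random.default_rng(3)
d, K, T, alpha = 5, 4, 2000, 1.0
theta_true = rng.normal(size=(K, d))           # hypothetical true weights

A = np.stack([np.eye(d) for _ in range(K)])    # per-arm Gram matrices
b = np.zeros((K, d))                           # per-arm reward sums

for t in range(T):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)                     # observed context feature
    scores = np.empty(K)
    for a in range(K):
        A_inv = np.linalg.inv(A[a])
        theta_hat = A_inv @ b[a]               # ridge estimate for arm a
        # UCB score: estimated reward plus exploration bonus.
        scores[a] = theta_hat @ x + alpha * np.sqrt(x @ A_inv @ x)
    a = int(np.argmax(scores))
    r = theta_true[a] @ x + 0.1 * rng.normal() # noisy linear reward
    A[a] += np.outer(x, x)                     # rank-one update
    b[a] += r * x

print("estimated weights, arm 0:", np.linalg.inv(A[0]) @ b[0])
print("true weights, arm 0:     ", theta_true[0])
```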