Funding: Linghui Meng was supported in part by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA27030300); Haifeng Zhang was supported in part by the National Natural Science Foundation of China (No. 62206289).
Abstract: Offline reinforcement learning leverages previously collected offline datasets to learn optimal policies without needing access to the real environment. Such a paradigm is also desirable for multi-agent reinforcement learning (MARL) tasks, given the combinatorially increased interactions among agents and with the environment. However, in MARL, the paradigm of offline pre-training with online fine-tuning has not been studied, nor are datasets or benchmarks for offline MARL research available. In this paper, we facilitate this research by providing large-scale datasets and using them to examine the usage of the decision transformer in the context of MARL. We investigate the generalization of MARL offline pre-training in three aspects: 1) between single agents and multiple agents, 2) from offline pre-training to online fine-tuning, and 3) to multiple downstream tasks, with few-shot and zero-shot capabilities. We start by introducing the first offline MARL dataset with diverse quality levels based on the StarCraft II environment, and then propose the novel architecture of the multi-agent decision transformer (MADT) for effective offline learning. MADT leverages the transformer's sequence-modelling ability and integrates it seamlessly with both offline and online MARL tasks. A significant benefit of MADT is that it learns generalizable policies that can transfer between different types of agents under different task scenarios. On the StarCraft II offline dataset, MADT outperforms state-of-the-art offline reinforcement learning (RL) baselines, including BCQ and CQL. When applied to online tasks, the pre-trained MADT significantly improves sample efficiency and enjoys strong performance in both few-shot and zero-shot cases. To the best of our knowledge, this is the first work that studies and demonstrates the effectiveness of offline pre-trained models in terms of sample efficiency and generalizability enhancements for MARL.
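The abstract does not spell out the MADT architecture. As a rough illustration of the decision-transformer idea it builds on, the sketch below (plain PyTorch, a single agent's trajectory, hypothetical dimensions) tokenizes each timestep as a (return-to-go, observation, action) triple and predicts the next action with a causally masked transformer; it is a minimal sketch under these assumptions, not the paper's implementation.

```python
# Minimal decision-transformer-style policy sketch (NOT the paper's exact MADT
# architecture, which the abstract does not specify). Each timestep is encoded
# as a (return-to-go, observation, action) token triple; a causal transformer
# predicts action logits at the observation positions.
import torch
import torch.nn as nn

class DecisionTransformerSketch(nn.Module):
    def __init__(self, obs_dim, act_dim, embed_dim=128, n_layers=3,
                 n_heads=4, max_len=64):
        super().__init__()
        self.embed_rtg = nn.Linear(1, embed_dim)        # return-to-go token
        self.embed_obs = nn.Linear(obs_dim, embed_dim)  # observation token
        self.embed_act = nn.Linear(act_dim, embed_dim)  # previous-action token
        self.pos_emb = nn.Embedding(3 * max_len, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.act_head = nn.Linear(embed_dim, act_dim)   # next-action logits

    def forward(self, rtg, obs, act):
        # rtg: (B, T, 1), obs: (B, T, obs_dim), act: (B, T, act_dim)
        B, T, _ = obs.shape
        # Interleave tokens per timestep as (R, s, a) -> sequence length 3T.
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_obs(obs), self.embed_act(act)],
            dim=2,
        ).reshape(B, 3 * T, -1)
        tokens = tokens + self.pos_emb(torch.arange(3 * T, device=obs.device))
        # Causal mask: each token attends only to earlier tokens.
        mask = torch.triu(
            torch.full((3 * T, 3 * T), float("-inf"), device=obs.device),
            diagonal=1,
        )
        h = self.encoder(tokens, mask=mask)
        # Predict the action from the hidden state at each observation token.
        return self.act_head(h[:, 1::3])

# Usage: one forward pass with random data for a trajectory of 8 timesteps.
model = DecisionTransformerSketch(obs_dim=32, act_dim=10)
logits = model(torch.randn(2, 8, 1), torch.randn(2, 8, 32), torch.randn(2, 8, 10))
print(logits.shape)  # torch.Size([2, 8, 10])
```

A multi-agent variant along these lines would share the transformer across agents and condition on per-agent observations, which is one plausible reading of how a single pre-trained model could transfer between different types of agents.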
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11565014, 11775190, 11375093, and 11775035), the Natural Science Foundation of Jiangxi Province (Grant No. 20171BAB201015), the Science and Technology of Jiangxi Province (Grant No. 20171BAB212006), and the Education Bureau of Jiangxi Province (Grant No. JJ160503).
Abstract: This study theoretically investigates quantum coherence in mechanical oscillators and its transfer between the cavity and mechanical modes of an optomechanical system comprising an optical cavity and two mechanical oscillators, both simultaneously coupled to the cavity with different optomechanical coupling strengths. The quantum coherence transfer between the optical and mechanical modes is found to depend strongly on the relative magnitude of the two optomechanical couplings. The laser power, the decay rates of the cavity and mechanical oscillators, the environmental temperature, and the frequency of the mechanical oscillator are observed to significantly influence the investigated quantum coherences. Moreover, quantum coherence generation in the optomechanical system is restricted by the system's stability condition, which helps sustain high and stable quantum coherence in the system.
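The abstract does not state the model Hamiltonian. For context, a standard form for a single driven cavity mode coupled to two mechanical oscillators is given below; the couplings g_1, g_2, drive amplitude E, and laser frequency ω_L are generic textbook notation, not quoted from the paper.

```latex
% Generic single-cavity, two-oscillator optomechanical Hamiltonian (illustrative).
H = \hbar\omega_c\,\hat{a}^\dagger\hat{a}
  + \sum_{j=1,2}\hbar\omega_{m,j}\,\hat{b}_j^\dagger\hat{b}_j
  + \sum_{j=1,2}\hbar g_j\,\hat{a}^\dagger\hat{a}\,(\hat{b}_j+\hat{b}_j^\dagger)
  + i\hbar E\,(\hat{a}^\dagger e^{-i\omega_L t}-\hat{a}\,e^{i\omega_L t})
```

In this notation, the relative magnitude of g_1 and g_2 is the quantity the abstract identifies as controlling the coherence transfer between the optical and mechanical modes.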