Abstract: An experiment on producing a high-density polyethylene (HDPE) nanocomposite filled with 4 wt.% talc is presented. Acting as a filler and reinforcing agent in the HDPE, talc powder with a particle size of around 5 μm was surface-treated with an aluminum diethylene glycol dinitrate coupling agent before being added to the HDPE. Analyses of the reinforced HDPE nanocomposite show significant improvement in its mechanical properties, including tensile strength (>26 MPa), elongation at break (<1.1%), flexural strength (>22 MPa), and friction coefficient (<0.11). The results demonstrate that, after surface treatment, talc is a promising filler and reinforcing agent for HDPE nanocomposites.
Funding: supported by the National Key R&D Program of China (2018AAA0101400), the National Natural Science Foundation of China (62173251, 61921004, U1713209), the Natural Science Foundation of Jiangsu Province of China (BK20202006), and the Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control.
Abstract: In this paper, the reinforcement learning method for cooperative multi-agent systems (MAS) with an incremental number of agents is studied. Existing multi-agent reinforcement learning approaches deal with a MAS with a fixed number of agents and can learn well-performing policies. However, if the number of agents increases, the previously learned policies may not perform well in the new scenario. The new agents need to learn from scratch to find optimal policies alongside the others, which may slow down the learning of the whole team. To solve this problem, we propose a new algorithm that takes full advantage of the historical knowledge learned before and transfers it from the previous agents to the new agents. Since the previous agents have been trained well in the source environment, they are treated as teacher agents in the target environment; correspondingly, the new agents are called student agents. To enable the student agents to learn from the teacher agents, we first modify the input nodes of the teacher agents' networks to adapt to the current environment. Then, the teacher agents take the observations of the student agents as input and output advised actions and values as supervising information. Finally, the student agents combine the reward from the environment with the supervising information from the teacher agents and learn optimal policies with modified loss functions. By taking full advantage of the teacher agents' knowledge, the search space for the student agents is reduced significantly, which accelerates the learning speed of the whole system. The proposed algorithm is verified in several multi-agent simulation environments, and its efficiency is demonstrated by the experimental results.
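The modified student loss described in the abstract can be sketched as a TD term plus a teacher-advice term. The squared-error form and the weighting `alpha` are illustrative assumptions, not the paper's exact formulation:

```python
def student_loss(q_student, td_target, teacher_value, alpha=0.5):
    """Combine the environment's TD error with a penalty for deviating
    from the teacher agent's advised value (alpha weights the advice)."""
    td_error = (td_target - q_student) ** 2
    advice_error = (teacher_value - q_student) ** 2
    return td_error + alpha * advice_error
```

In schemes of this kind, `alpha` is typically annealed toward zero so the student eventually relies on the environment reward alone.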
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62106283 and 72001214), which provided funds for conducting the experiments, and the Natural Science Foundation of Shaanxi Province (Grant No. 2020JQ-484).
Abstract: Ground-to-air confrontation task assignment is large in scale and must handle many concurrent task assignments and random events. When existing task assignment methods are applied to ground-to-air confrontation, efficiency in dealing with complex tasks is low and interactive conflicts arise in multiagent systems. To reduce task assignment conflicts, this study proposes a multiagent architecture based on one general agent with multiple narrow agents (OGMN). Considering the slow speed of traditional dynamic task assignment algorithms, this paper proposes the proximal policy optimization for task assignment of general and narrow agents (PPO-TAGNA) algorithm. Based on the idea of the optimal assignment strategy and combined with the training framework of deep reinforcement learning (DRL), the algorithm adds a multihead attention mechanism and a staged reward mechanism to the bilateral-band-clipping PPO algorithm to address low training efficiency. Finally, simulation experiments are carried out on a digital battlefield. The OGMN-based multiagent architecture combined with the PPO-TAGNA algorithm obtains higher rewards faster and achieves a higher win ratio. By analyzing agent behavior, the efficiency, superiority, and rational resource utilization of this method are verified.
Abstract: Robot learning in unstructured environments has proved to be an extremely challenging problem, mainly because of the many uncertainties always present in the real world. Human beings, on the other hand, seem to cope very well with uncertain and unpredictable environments, often relying on perception-based information. Furthermore, humans can also use perception to guide their learning toward those parts of the perception-action space that are actually relevant to the task. Therefore, we conducted research aimed at improving robot learning through the incorporation of both perception-based and measurement-based information. To this end, a fuzzy reinforcement learning (FRL) agent is proposed in this paper. Based on a neuro-fuzzy architecture, different kinds of information can be incorporated into the FRL agent to initialise its action network, critic network, and evaluation feedback module so as to accelerate its learning. By making use of the global optimisation capability of genetic algorithms (GAs), a GA-based FRL (GAFRL) agent is presented to solve the local-minima problem in traditional actor-critic reinforcement learning. Conversely, with the prediction capability of the critic network, GAs can perform a more effective global search. Different GAFRL agents are constructed and verified using the simulation model of a physical biped robot. The simulation analysis shows that the learning rate for dynamic balance can be improved by incorporating perception-based information on biped balancing and walking evaluation. The biped robot can find application in ocean exploration, detection, and sea rescue, as well as military maritime activity.
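As a toy illustration of the GA-based global search that GAFRL relies on to escape local minima, the sketch below evolves a flat parameter vector against a fitness function. The population size, mutation scale, and truncation-selection scheme are illustrative assumptions, not the paper's configuration:

```python
import random

def ga_search(fitness, dim, pop_size=20, generations=30, sigma=0.1, seed=0):
    """Truncation-selection GA: keep the fitter half of the population,
    refill it with Gaussian-mutated children of the survivors."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1.0, 1.0) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 2]
        children = [[w + rng.gauss(0.0, sigma) for w in parent]
                    for parent in elite]
        population = elite + children
    return max(population, key=fitness)

# Maximize a toy fitness whose optimum is at (0.5, 0.5, 0.5).
best = ga_search(lambda w: -sum((x - 0.5) ** 2 for x in w), dim=3)
```

In the GAFRL setting the chromosome would encode the action-network weights and the critic's value prediction would serve as the fitness signal.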
Abstract: Single-agent reinforcement learning (RL) is commonly used to learn how to play computer games, in which the agent makes one move before the next in a sequential decision process. Recently, single agents have also been employed in the design of molecules and drugs. While a single agent is a good fit for computer games, it has limitations in molecule design: its sequential learning makes it impossible to modify or improve previous steps while working on the current one. In this paper, we propose applying the multi-agent RL approach to molecule research, which can optimize all sites of a molecule simultaneously. To demonstrate the validity of our approach, we chose the chemical compound Favipiravir and explored its local chemical space. Favipiravir is a broad-spectrum inhibitor of viral RNA polymerase and is one of the compounds currently being used in SARS-CoV-2 (COVID-19) clinical trials. Our experiments revealed the collaborative learning of a team of deep RL agents, as well as the learning of each individual agent, in the exploration of Favipiravir. In particular, our multi-agent system discovered not only molecules near Favipiravir in chemical space but also the learnability of each site in the string representation of Favipiravir, critical information for understanding the underlying mechanism that supports machine learning of molecules.
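A minimal sketch of the per-site idea: one tabular agent per character position acts simultaneously, and a shared team reward is credited back to each site's chosen symbol. The target string, alphabet, and Monte-Carlo credit scheme are toy assumptions standing in for the paper's deep RL agents and chemical scoring:

```python
import random

TARGET = "FAVI"  # toy stand-in for the string representation of a molecule
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def team_reward(s):
    # shared team reward: number of sites matching the (unknown) target
    return sum(a == b for a, b in zip(s, TARGET))

def learn_sites(episodes=3000, seed=1):
    rng = random.Random(seed)
    # one value table per site: average team reward observed per symbol
    totals = [{c: 0.0 for c in ALPHABET} for _ in TARGET]
    counts = [{c: 0 for c in ALPHABET} for _ in TARGET]
    for _ in range(episodes):
        chosen = [rng.choice(ALPHABET) for _ in TARGET]  # all sites act at once
        r = team_reward("".join(chosen))
        for site, c in enumerate(chosen):
            totals[site][c] += r
            counts[site][c] += 1
    best = []
    for site in range(len(TARGET)):
        avg = {c: totals[site][c] / max(counts[site][c], 1) for c in ALPHABET}
        best.append(max(avg, key=avg.get))
    return "".join(best)
```

Because every site is optimized in parallel from the same team reward, no site has to wait for earlier positions to be finalized, which is the advantage over sequential single-agent generation that the abstract highlights.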
Funding: supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-00155885, Artificial Intelligence Convergence Innovation Human Resources Development (Hanyang University ERICA)), by the National Natural Science Foundation of China under Grant No. 61971264, and by the National Natural Science Foundation of China/Research Grants Council Collaborative Research Scheme under Grant No. 62261160390.
Abstract: Due to the fading characteristics of wireless channels and the burstiness of data traffic, how to deal with congestion in ad-hoc networks with effective algorithms remains open and challenging. In this paper, we focus on enabling congestion control to minimize network transmission delays through flexible power control. To effectively solve the congestion problem, we propose a distributed cross-layer scheduling algorithm empowered by graph-based multi-agent deep reinforcement learning. The transmit power is adaptively adjusted in real time by our algorithm based only on local information (i.e., channel state information and queue length) and local communication (i.e., information exchanged with neighbors). Moreover, the training complexity of the algorithm is low due to regional cooperation based on the graph attention network. In the evaluation, we show that our algorithm can reduce the transmission delay of data flows under severe signal interference and drastically changing channel states, and we demonstrate its adaptability and stability in different topologies. The method is general and can be extended to various types of topologies.
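The local decision each node faces can be caricatured as follows: raise transmit power when the local queue builds up, back off when neighbors report heavy interference. The thresholds and step size below are invented for illustration; the paper learns such a policy with a graph-attention network rather than hand-coding it:

```python
def adjust_power(power, queue_len, neighbor_interference,
                 p_max=1.0, step=0.1):
    """One local control step using only local observations
    (queue length) and neighbor-exchanged information (interference)."""
    if queue_len > 0 and neighbor_interference < 0.5:
        return min(power + step, p_max)   # drain the queue faster
    return max(power - step, 0.0)         # back off to reduce interference
```

The appeal of the learned version is that the trade-off point between draining queues and polluting neighbors is discovered from experience instead of fixed thresholds.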
Abstract: The purpose of this study is to investigate the effect of the concentration of silane coupling solution on the tensile strength of basalt fiber and the interfacial properties of basalt-fiber-reinforced polymer composites. The surface treatment of basalt fibers was carried out using an aqueous alcohol solution method. Basalt fibers were surface-treated with 3-methacryloxypropyl trimethoxysilane at 0.5 wt.%, 1 wt.%, 2 wt.%, 4 wt.%, and 10 wt.%. Basalt monofilament tensile tests were carried out to investigate the variation in strength with the concentration of the silane coupling agent, and the microdroplet test was performed to examine the effect of that concentration on the interfacial strength of the composites. A film formed on the surface of the basalt fiber treated with the silane coupling agent solution. The tensile strength of the basalt fiber increased because the damaged fiber surface was repaired by the film of silane coupling agent. The film was effective not only for surface protection of the basalt fiber but also for improving the strength of the fiber-matrix interface. However, surface treatment with a high-concentration silane coupling agent solution has an adverse effect on the mechanical properties of the composite, because it degrades the interfacial strength of the composite.
Funding: sponsored by the Ministerial Level Foundation (70302).
Abstract: Multi-agent reinforcement learning algorithms are studied. A prediction-based multi-agent reinforcement learning algorithm is presented for a multi-robot cooperation task. A multi-robot cooperation experiment based on a multi-agent inverted pendulum is conducted to test the efficiency of the new algorithm, and the experimental results show that the new algorithm achieves the cooperation strategy much faster than the primitive multi-agent reinforcement learning algorithm.
Funding: supported by the Key Laboratory of Information System Requirement (No. LHZZ202202), the Natural Science Foundation of Xinjiang Uyghur Autonomous Region (2023D01C55), and the Scientific Research Program of the Higher Education Institutions of Xinjiang (XJEDU2023P127).
Abstract: In recent years, with the continuous development of deep learning and knowledge graph reasoning methods, more and more researchers have shown great interest in inferring missing facts through reasoning. By searching paths on the knowledge graph and making fact and link predictions based on these paths, deep-learning-based reinforcement learning (RL) agents can achieve good performance and interpretability. Therefore, knowledge reasoning methods based on deep reinforcement learning have rapidly emerged in recent years and become a hot research topic. However, even in a small and fixed knowledge graph reasoning action space, there are still a large number of invalid actions, which often interrupt an RL agent's walk and significantly decrease the success rate of path mining. In order to improve the success rate of RL agents in the early stages of path search, this article proposes a knowledge reasoning method based on a deep transfer reinforcement learning path (DTRLpath). Before supervised pre-training and retraining, a pre-task of searching for valid actions in a single step is added. The RL agent is first trained on the pre-task to improve its ability to find valid actions; the trained agent is then transferred to the target reasoning task for path-search training, which improves its success rate in finding target-task paths. Finally, comparative experimental results on the FB15K-237 and NELL-995 datasets show that the proposed method significantly improves the success rate of path search and outperforms similar methods on most reasoning tasks.
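The single-step pre-task can be illustrated on a toy graph: the agent is rewarded for picking an edge that actually exists from its current entity, so it learns to avoid invalid actions before any multi-hop path search begins. The graph, reward values, and tabular update are illustrative assumptions, not the paper's network-based setup:

```python
import random

# Toy knowledge graph: entity -> entities reachable in one valid step.
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
ACTIONS = ["A", "B", "C", "D"]

def pretrain_valid_actions(episodes=2000, seed=0):
    """Single-step pre-task: +1 for choosing a valid edge, -1 otherwise."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in GRAPH for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(list(GRAPH))
        a = rng.choice(ACTIONS)
        reward = 1.0 if a in GRAPH[s] else -1.0
        q[(s, a)] += 0.2 * (reward - q[(s, a)])  # incremental value update
    return q

q = pretrain_valid_actions()
```

After pre-training, the values transferred to the path-search task already separate valid from invalid moves, which is the effect the abstract credits for the improved early success rate.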
Funding: this work was supported by the National Natural Science Foundation of China (Grant Nos. 22175139 and 22105156).
Abstract: Weak interface interaction and the solid-solid phase transition have long been a conundrum for 1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane (HMX)-based polymer-bonded explosives (PBX). A two-step strategy, involving pretreatment of HMX to endow the surface with -OH groups via polyalcohol bonding-agent modification followed by in situ coating with a nitrate-ester-containing polymer, was proposed to address the problem. Two types of energetic polyether, glycidyl azide polymer (GAP) and nitrate-modified GAP (GNP), were grafted onto HMX crystals through an isocyanate addition reaction bridged by a neutral polymeric bonding agent (NPBA) layer. The morphology and structure of the HMX-based composites were characterized in detail, and the core-shell structure was validated. The grafted polymers obviously enhanced the adhesion force between HMX crystals and the fluoropolymer (F2314) binder. Due to the interfacial reinforcement among the components, the two HMX-based composites exhibited a remarkable increase in phase-transition peak temperature of 10.2°C and 19.6°C, respectively, with no more than 1.5% shell content. Furthermore, the impact and friction sensitivity of the composites decreased significantly as a result of the barrier produced by the grafted polymers. These findings improve the prospects for interface design of energetic composites aimed at solving weak-interface and safety concerns.
Funding: supported by the National Social Science Foundation of China (15ZDA034, 14BZZ028), the Beijing Social Science Foundation (16JDGLA036), and the JKF Program of the People's Public Security University of China (2016JKF01318).
Abstract: Cooperative multi-agent reinforcement learning (MARL) is an important topic in the field of artificial intelligence, in which distributed constraint optimization (DCOP) algorithms have been widely used to coordinate the actions of multiple agents. However, dense communication among agents limits the practicability of DCOP algorithms. In this paper, we propose a novel DCOP algorithm that addresses the communication problem of previous DCOP algorithms by reducing constraints. The contributions of this paper are threefold: (1) it is proved that removing constraints can effectively reduce the communication burden of DCOP algorithms; (2) a criterion is provided to identify insignificant constraints whose elimination does not greatly affect the performance of the whole system; (3) a constraint-reduced DCOP algorithm is proposed that adopts a variant of the spectral clustering algorithm to detect and eliminate insignificant constraints. Our algorithm reduces the communication burden of the benchmark DCOP algorithm while keeping its overall performance unaffected. The performance of the constraint-reduced DCOP algorithm is evaluated on four configurations of cooperative sensor networks, and the effectiveness of the communication reduction is verified by comparisons between the constraint-reduced DCOP and the benchmark DCOP.
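The effect of constraint elimination on communication can be sketched directly: dropping edges below an importance threshold shrinks the per-round message count, which in a typical DCOP is proportional to the number of constraint edges. The weights and fixed threshold here are illustrative; the paper identifies insignificant constraints with a spectral-clustering variant rather than a cutoff:

```python
def reduce_constraints(weights, threshold):
    """weights: dict mapping an (agent_i, agent_j) edge to its importance.
    Keep only the significant constraints."""
    return {edge: w for edge, w in weights.items() if w >= threshold}

def messages_per_round(edges):
    # Two messages per constraint edge in a typical DCOP iteration.
    return 2 * len(edges)

weights = {(0, 1): 0.9, (1, 2): 0.05, (2, 3): 0.8, (0, 3): 0.02}
kept = reduce_constraints(weights, threshold=0.1)
```

Halving the edges halves the per-round traffic, which is the communication saving the abstract claims while the retained significant constraints preserve solution quality.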
Funding: supported by the Key Research and Development Program of Hubei Province (2022BCA082 and 2022BCA077).
Abstract: Due to the low water-cement ratio of ultra-high-performance concrete (UHPC), fluidity and shrinkage cracking are key aspects determining the performance and durability of this type of concrete. In this study, the effects of different types of cementitious materials, chemical shrinkage-reducing agents (SRA), and steel fiber (SF) were assessed. Compared with M2-UHPC and M3-UHPC, M1-UHPC was found to have better fluidity and shrinkage-cracking performance. Moreover, different SRA incorporation methods and dosages, and different SF types and aspect ratios, were examined. The incorporation of SRA and SF led to a decrease in the fluidity of UHPC: an internal SRA content of 1% (NSRA-1%), an external SRA content of 1% (WSRA-1%), STS-0.22, and STE-0.7 decreased the fluidity of UHPC by 3.3%, 8.3%, 9.2%, and 25%, respectively. However, SRA and SF improved the shrinkage-cracking performance of UHPC: NSRA-1% and STE-0.7 reduced the shrinkage of UHPC by 40% and 60%, respectively, and increased the crack resistance by 338% and 175%, respectively. In addition, the addition of SF made the microstructure of UHPC more compact, and the 28-d compressive strength and flexural strength increased by 26.9% and 19.9%, respectively.