Funding: the National Natural Science Foundation of China (No. 51673059), the Science and Technology Planning Project of Henan Province (No. 212102210636), and the Opening Project of Jiangxi Province Key Laboratory of Polymer Micro/Nano Manufacturing and Devices (East China University of Technology).
Abstract: Polyamide 6 (PA6) was employed as a charring agent for an intumescent flame retardant (IFR) to improve the flame retardancy of ethylene-vinyl acetate copolymer (EVA). Different processing procedures were used to regulate the localization of the IFR in the EVA matrix, and blends in which the IFR was dispersed either in the PA6 phase or in the EVA phase were prepared. The effect of IFR localization on the flame retardancy of EVA was investigated. The limiting oxygen index (LOI), vertical burning (UL 94), and cone calorimeter test (CCT) results showed that the localization of the IFR in the EVA matrix had a remarkable influence on the flame retardancy. Compared with EVA/IFR, only a weak improvement in flame retardancy was observed for the EVA/PA6/IFR blend with the IFR localized in the PA6 phase. When the IFR was shifted from the PA6 phase to the EVA matrix, the flame retardancy increased markedly: the LOI rose from 27.8% to 32.7%, the UL 94 vertical rating improved from V-2 to V-0, and the peak heat release rate decreased by approximately 41.36%. The continuous and compact intumescent char layer formed in the blends with the IFR localized in the EVA matrix is considered responsible for this excellent flame retardancy.
Abstract: In this paper, a local-learning algorithm for multi-agent systems is presented, based on the fact that an individual agent performs local perception and local interaction within a group environment. In individual learning, an agent adopts a greedy strategy to maximize its reward when interacting with the environment. In group learning, local interaction takes place between each pair of agents. A local-learning algorithm that chooses and modifies agents' actions is proposed to improve the traditional Q-learning algorithm, covering zero-sum games and general-sum games with a unique equilibrium or multiple equilibria. The local-learning algorithm is proved to be convergent, and its computational complexity is lower than that of Nash-Q. Additionally, grid-game tests indicate that with this local-learning algorithm the local behaviors of agents can spread to the global level.
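The abstract does not give the update rule itself, so the following is only a minimal, hypothetical sketch of the general idea rather than the paper's algorithm: each agent keeps an ordinary Q-table and acts greedily with exploration (individual learning), while a pairwise "local interaction" step lets two agents reconcile their chosen actions before execution (group learning). The state encoding, interaction rule, and all parameter values here are illustrative assumptions.

```python
import random
from collections import defaultdict


class LocalQAgent:
    """Illustrative Q-learning agent; the pairwise interaction rule below is a
    simplifying assumption, not the update described in the paper."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # maps (state, action) -> value
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def greedy(self, state):
        # Individual learning: greedy choice w.r.t. the agent's own Q-values.
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return self.greedy(state)

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


def local_interaction(agent_i, agent_j, state, a_i, a_j):
    """Hypothetical pairwise step: if both agents picked the same move (e.g.
    they would collide in a grid game), the agent with the lower Q-value for
    that move yields and takes its second-best action instead."""
    if a_i != a_j:
        return a_i, a_j
    if agent_i.q[(state, a_i)] >= agent_j.q[(state, a_j)]:
        alt = max((a for a in agent_j.actions if a != a_j),
                  key=lambda a: agent_j.q[(state, a)])
        return a_i, alt
    alt = max((a for a in agent_i.actions if a != a_i),
              key=lambda a: agent_i.q[(state, a)])
    return alt, a_j
```

In a grid-game loop one would call `act` for each agent, pass the tentative actions of every pair through `local_interaction`, execute the reconciled actions, and then call `update` with the observed rewards.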
Abstract: Skilled individual agents are the firm basis of a strong soccer team. The skills available to Everest 2002 agents include kicking, dribbling, forwarding, ball interception, and tackling. These intermediate subgoals are implemented by combining local optimization, which seeks to determine the optimal primitive action from a local perspective, with adversarial consideration, which takes into account opponents and the limitations imposed by the simulation environment. The Everest 2002 RoboCup simulation team, built on 11 skilled agents and an on-line coach, won 2nd place in the RoboCup 2002 simulation league.
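The abstract only names the two ingredients, so as a rough illustration (not Everest 2002's actual code) the sketch below scores a small set of candidate kick directions by a local objective (progress toward the opponent goal) and discounts candidates whose trajectory passes close to an opponent, a crude form of adversarial consideration. All geometry, weights, and thresholds are invented for the example.

```python
import math


def score_kick(ball, goal, opponents, direction, w_adv=0.5):
    """Score one candidate kick direction: reward progress toward the goal,
    penalize directions that pass near an opponent (illustrative heuristic)."""
    # Local objective: alignment of the kick direction with the goal direction.
    to_goal = math.atan2(goal[1] - ball[1], goal[0] - ball[0])
    progress = math.cos(direction - to_goal)

    # Adversarial term: how close the kick line passes to any opponent ahead.
    penalty = 0.0
    for ox, oy in opponents:
        dx, dy = ox - ball[0], oy - ball[1]
        along = dx * math.cos(direction) + dy * math.sin(direction)
        if along > 0:  # opponent lies ahead along the kick direction
            perp = abs(-dx * math.sin(direction) + dy * math.cos(direction))
            penalty = max(penalty, 1.0 / (1.0 + perp))
    return progress - w_adv * penalty


def choose_kick(ball, goal, opponents, n_candidates=36):
    """Local optimization: pick the best of n_candidates evenly spaced directions."""
    candidates = [2 * math.pi * k / n_candidates for k in range(n_candidates)]
    return max(candidates, key=lambda d: score_kick(ball, goal, opponents, d))


if __name__ == "__main__":
    best = choose_kick(ball=(0.0, 0.0), goal=(52.5, 0.0),
                       opponents=[(10.0, 1.0), (20.0, -5.0)])
    print(f"chosen kick direction: {math.degrees(best):.1f} deg")
```

The same pattern, enumerating a small candidate set, scoring each candidate locally, and penalizing options the opponents can exploit, carries over to dribbling and interception skills.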