Journal Articles
1,186 articles found
Service Function Chain Deployment Algorithm Based on Multi-Agent Deep Reinforcement Learning
1
Authors: Wanwei Huang, Qiancheng Zhang, Tao Liu, Yaoli Xu, Dalei Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 4875-4893 (19 pages)
Aiming at the rapid growth of network services, which leads to long service request processing times and high deployment costs when deploying network function virtualization service function chains (SFCs) under 5G networks, this paper proposes a multi-agent deep deterministic policy gradient optimization algorithm for SFC deployment (MADDPG-SD). Initially, an optimization model that enhances the request acceptance rate while minimizing latency and deployment cost is constructed for the network resource-constrained case. Subsequently, we model the dynamic problem as a Markov decision process (MDP), facilitating adaptation to the evolving states of network resources. Finally, by allocating SFCs to different agents and adopting a collaborative deployment strategy, each agent aims to maximize the request acceptance rate or minimize latency and costs. These agents learn strategies from historical data of virtual network functions in SFCs to guide server node selection, and achieve approximately optimal SFC deployment strategies through a cooperative framework of centralized training and distributed execution. Experimental simulation results indicate that the proposed method, while simultaneously meeting performance requirements and resource capacity constraints, effectively increases the request acceptance rate compared to the comparative algorithms, reducing end-to-end latency by 4.942% and deployment cost by 8.045%.
Keywords: network function virtualization, service function chain, Markov decision process, multi-agent reinforcement learning
Discrete decision model and multi-agent simulation of the Liang Zong two-chain hierarchical organization in a complex project
2
Authors: MAI Qiang, ZHAO Yueqiang, AN Shi. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2018, Issue 2, pp. 311-320 (10 pages)
Different from the organization structure of complex projects in Western countries, the Liang Zong hierarchical organization structure of complex projects in China has two different chains, the chief-engineer chain and the general-director chain, to handle the trade-off between technical and management decisions. However, previous works on organization search have mainly focused on the single-chain hierarchical organization, in which all decisions are regarded as homogeneous; the heterogeneity and interdependency between technical decisions and management decisions have been neglected. A two-chain hierarchical organization structure mapped from a real complex project is constructed. Then, a discrete decision model for a Liang Zong two-chain hierarchical organization in an NK model framework is proposed. This model proves that this kind of organization structure can greatly reduce the search space and that the search process should reach a final stable state more quickly. For a more complicated decision mechanism, a multi-agent simulation based on the above NK model is used to explore the effect of the two-chain organization structure on the speed, stability, and performance of the search process. The results provide three insights into how, compared with the single-chain hierarchical organization, the two-chain organization can improve the search process: it can reduce the number of iterations efficiently; the search is more stable because the search space is a smoother hill-like fitness landscape; and, in general, the search performance can be improved. However, when the organization structure is very complicated, the performance of a two-chain organization is inferior to that of a single-chain organization. These findings about the efficiency of the unique Chinese-style organization structure can be used to guide organization design for complex projects.
Keywords: complex project, two-chain hierarchical organization, discrete decision model, multi-agent simulation
Heterogeneous Network Selection Optimization Algorithm Based on a Markov Decision Model (Cited: 7)
3
Authors: Jianli Xie, Wenjuan Gao, Cuiran Li. China Communications (SCIE, CSCD), 2020, Issue 2, pp. 40-53 (14 pages)
A network selection optimization algorithm based on the Markov decision process (MDP) is proposed so that mobile terminals can always connect to the best wireless network in a heterogeneous network environment. Considering the different types of service requirements, the MDP model and its reward function are constructed based on the quality of service (QoS) attribute parameters of the mobile users, and the network attribute weights are calculated using the analytic hierarchy process (AHP). The network handoff decision condition is designed according to the different types of user services and the time-varying characteristics of the network, and the MDP model is solved using the genetic algorithm and simulated annealing (GA-SA); thus, users can seamlessly switch to the network with the best long-term expected reward value. Simulation results show that the proposed algorithm has good convergence performance, and can guarantee that users with different service types obtain satisfactory expected total reward values and low numbers of network handoffs.
Keywords: heterogeneous wireless networks, Markov decision process, reward function, genetic algorithm, simulated annealing
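The abstract above computes network attribute weights with the analytic hierarchy process (AHP). As an illustrative sketch (not the paper's code), the standard principal-eigenvector method for deriving weights from a pairwise-comparison matrix looks like this:

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive criteria weights from an AHP pairwise-comparison matrix
    using the principal-eigenvector method, normalized to sum to 1."""
    M = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(M)
    # Take the eigenvector of the largest (real) eigenvalue.
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()
```

For a perfectly consistent 2x2 matrix [[1, 2], [0.5, 1]] (criterion 1 judged twice as important as criterion 2), this yields weights of roughly [0.667, 0.333].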
Modeling and Design of Real-Time Pricing Systems Based on Markov Decision Processes (Cited: 4)
4
Authors: Koichi Kobayashi, Ichiro Maruta, Kazunori Sakurama, Shun-ichi Azuma. Applied Mathematics, 2014, Issue 10, pp. 1485-1495 (11 pages)
A real-time pricing system of electricity is a system that charges different electricity prices for different hours of the day and for different days, and is effective for reducing the peak and flattening the load curve. In this paper, using a Markov decision process (MDP), we propose a modeling method and an optimal control method for real-time pricing systems. First, the outline of real-time pricing systems is explained. Next, a model of a set of customers is derived as a multi-agent MDP. Furthermore, the optimal control problem is formulated and reduced to a quadratic programming problem. Finally, a numerical simulation is presented.
Keywords: Markov decision process, optimal control, real-time pricing system
Variance minimization for continuous-time Markov decision processes: two approaches (Cited: 1)
5
Authors: ZHU Quan-xin. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2010, Issue 4, pp. 400-410 (11 pages)
This paper studies the limit average variance criterion for continuous-time Markov decision processes in Polish spaces. Based on two approaches, this paper proves not only the existence of solutions to the variance minimization optimality equation and the existence of a variance minimal policy that is canonical, but also the existence of solutions to the two variance minimization optimality inequalities and the existence of a variance minimal policy which may not be canonical. An example is given to illustrate all of our conditions.
Keywords: continuous-time Markov decision process, Polish space, variance minimization, optimality equation, optimality inequality
Reliability Analysis of Satellite Turntable System under Multiple Operation Modes Based on Multi-Valued Decision Diagrams
6
Authors: Peng Zhang, Zhijie Zhou, Yao Ding, Dao Zhao, Yijun Zhang. Journal of Beijing Institute of Technology (EI, CAS), 2023, Issue 1, pp. 52-68 (17 pages)
As a payload support system deployed on satellites, the turntable system is often switched among different working modes during on-orbit operation, which can involve large state changes. In each mode, the missions to be completed are different, consecutive, and non-overlapping, so the turntable system can be considered a phased-mission system (PMS). Reliability analysis for PMS has been widely studied; however, the system mode cycle characteristic has not been taken into account before. In this paper, a reliability analysis method for the satellite turntable system is proposed that considers its multiple operation modes and mode cycle characteristic. Firstly, the multi-valued decision diagram (MDD) manipulation rules between two adjacent mission cycles are proposed. On this basis, MDD models for the turntable system in different states are established, and the reliability is calculated using the continuous-time Markov chain (CTMC) method. Finally, a comparative study is carried out to show the effectiveness of the proposed method.
Keywords: phased-mission systems, multi-valued decision diagrams, continuous-time Markov chains (CTMC), reliability analysis, satellite turntable system
Multi-Agent Deep Reinforcement Learning for Cross-Layer Scheduling in Mobile Ad-Hoc Networks
7
Authors: Xinxing Zheng, Yu Zhao, Joohyun Lee, Wei Chen. China Communications (SCIE, CSCD), 2023, Issue 8, pp. 78-88 (11 pages)
Due to the fading characteristics of wireless channels and the burstiness of data traffic, how to deal with congestion in ad-hoc networks with effective algorithms is still open and challenging. In this paper, we focus on enabling congestion control to minimize network transmission delays through flexible power control. To effectively solve the congestion problem, we propose a distributed cross-layer scheduling algorithm empowered by graph-based multi-agent deep reinforcement learning. The transmit power is adaptively adjusted in real time by our algorithm based only on local information (i.e., channel state information and queue length) and local communication (i.e., information exchanged with neighbors). Moreover, the training complexity of the algorithm is low due to regional cooperation based on the graph attention network. In the evaluation, we show that our algorithm can reduce the transmission delay of data flows under severe signal interference and drastically changing channel states, and we demonstrate its adaptability and stability in different topologies. The method is general and can be extended to various types of topologies.
Keywords: ad-hoc network, cross-layer scheduling, multi-agent deep reinforcement learning, interference elimination, power control, queue scheduling, actor-critic methods, Markov decision process
Robust analysis of discounted Markov decision processes with uncertain transition probabilities (Cited: 2)
8
Authors: LOU Zhen-kai, HOU Fu-jun, LOU Xu-ming. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2020, Issue 4, pp. 417-436 (20 pages)
Optimal policies in Markov decision problems may be quite sensitive with regard to transition probabilities. In practice, some transition probabilities may be uncertain. The goals of the present study are to find the robust range for a certain optimal policy and to obtain value intervals of exact transition probabilities. Our research yields powerful contributions for Markov decision processes (MDPs) with uncertain transition probabilities. We first propose a method for estimating unknown transition probabilities based on maximum likelihood. Since the estimation may be far from accurate, and the highest expected total reward of the MDP may be sensitive to these transition probabilities, we analyze the robustness of an optimal policy and propose an approach for robust analysis. After giving the definition of a robust optimal policy with uncertain transition probabilities represented as sets of numbers, we formulate a model to obtain the optimal policy. Finally, we define the value intervals of the exact transition probabilities and construct models to determine the lower and upper bounds. Numerical examples are given to show the practicability of our methods.
Keywords: Markov decision processes, uncertain transition probabilities, robustness and sensitivity, robust optimal policy, value interval
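The paper above first estimates unknown transition probabilities by maximum likelihood before analyzing robustness. Under the usual count-and-normalize formulation (a sketch under assumed data shapes, not the authors' implementation), the MLE from observed (s, a, s') triples is just the empirical frequency:

```python
import numpy as np

def estimate_transitions(triples, n_states, n_actions, smoothing=0.0):
    """Maximum-likelihood estimate of P(s' | s, a) from observed
    (s, a, s') transition triples: count, then normalize each (a, s) row.
    `smoothing` is a hypothetical additive prior, not from the paper;
    leave it at 0.0 for the plain MLE."""
    counts = np.full((n_actions, n_states, n_states), float(smoothing))
    for s, a, s_next in triples:
        counts[a, s, s_next] += 1.0
    totals = counts.sum(axis=2, keepdims=True)
    # Rows with no observations stay all-zero instead of dividing by zero.
    return np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
```

The sensitivity concern the abstract raises is visible here: rows with few counts produce unreliable estimates, which motivates treating them as uncertainty sets rather than point values.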
Consensus protocol for heterogeneous multi-agent systems: A Markov chain approach (Cited: 1)
9
Authors: 朱善迎 (Zhu Shanying), 陈彩莲 (Chen Cailian), 关新平 (Guan Xinping). Chinese Physics B (SCIE, EI, CAS, CSCD), 2013, Issue 1, pp. 569-573 (5 pages)
This paper deals with the consensus problem for heterogeneous multi-agent systems. Different from most existing consensus protocols, we consider the consensus seeking of two types of agents, namely, active agents and passive agents. The objective is to directly control the active agents such that the states of all the agents achieve consensus. In order to obtain a computational approach, we subtly introduce an appropriate Markov chain to cast the heterogeneous systems into a unified framework. Such a framework is helpful for tackling the constraints from passive agents. Furthermore, a sufficient and necessary condition is established to guarantee consensus in heterogeneous multi-agent systems. Finally, simulation results are provided to verify the theoretical analysis and the effectiveness of the proposed protocol.
Keywords: heterogeneous multi-agent system, consensus protocol, Markov chain
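The Markov chain framing used above rests on a standard observation: a consensus iteration with a row-stochastic weight matrix is exactly a Markov transition update. A toy illustration (hypothetical weights, not the paper's protocol or its active/passive split):

```python
import numpy as np

def run_consensus(x0, W, steps=50):
    """Iterate x <- W x with a row-stochastic weight matrix W (each row is a
    probability distribution, i.e., a Markov transition matrix). For a
    primitive W, all agent states converge to a common consensus value."""
    x = np.asarray(x0, dtype=float)
    W = np.asarray(W, dtype=float)
    for _ in range(steps):
        x = W @ x
    return x
```

With W = [[0.5, 0.5], [0.5, 0.5]] and initial states [0, 2], both agents reach the average value 1 after a single step.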
Variance Optimization for Continuous-Time Markov Decision Processes
10
Authors: Yaqing Fu. Open Journal of Statistics, 2019, Issue 2, pp. 181-195 (15 pages)
This paper considers the variance optimization problem of average reward in continuous-time Markov decision processes (MDPs). It is assumed that the state space is countable and the action space is a Borel measurable space. The main purpose of this paper is to find the policy with minimal variance in the deterministic stationary policy space. Unlike the traditional Markov decision process, the cost function in the variance criterion is affected by future actions. To this end, we convert the variance minimization problem into a standard MDP by introducing a concept called pseudo-variance. Further, by giving a policy iteration algorithm for the pseudo-variance optimization problem, the optimal policy of the original variance optimization problem is derived, and a sufficient condition for the variance optimal policy is given. Finally, we use an example to illustrate the conclusion of this paper.
Keywords: continuous-time Markov decision process, variance optimality of average reward, optimal policy of variance, policy iteration
Adaptive Strategies for Accelerating the Convergence of Average Cost Markov Decision Processes Using a Moving Average Digital Filter
11
Authors: Edilson F. Arruda, Fabrício Ourique. American Journal of Operations Research, 2013, Issue 6, pp. 514-520 (7 pages)
This paper proposes a technique to accelerate the convergence of the value iteration algorithm applied to discrete average cost Markov decision processes. An adaptive partial information value iteration algorithm is proposed that updates an increasingly accurate approximate version of the original problem, with a view to saving computations at the early iterations, when one is typically far from the optimal solution. The proposed algorithm is compared to classical value iteration for a broad set of adaptive parameters, and the results suggest that significant computational savings can be obtained while ensuring robust performance with respect to the parameters.
Keywords: average cost Markov decision processes, value iteration, computational effort, gradient
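The baseline being accelerated above is classical value iteration. For reference, a minimal discounted version (the discounted analogue as a sketch, not the paper's average-cost variant or its adaptive moving-average filter) can be written as:

```python
import numpy as np

def value_iteration(P, r, gamma=0.95, tol=1e-8, max_iter=10_000):
    """Classical value iteration for a finite discounted MDP.

    P: transition tensor of shape (A, S, S), P[a, s, s'] = Pr(s' | s, a).
    r: reward matrix of shape (S, A).
    Returns the value function V (shape (S,)) and a greedy policy (shape (S,)).
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Q[s, a] = r[s, a] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = r + gamma * np.einsum("asn,n->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V, Q.argmax(axis=1)
```

Each sweep costs O(A * S^2), which is why schemes like the paper's partial-information updates try to cheapen the early iterations.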
Conditional Value-at-Risk for Random Immediate Reward Variables in Markov Decision Processes
12
Authors: Masayuki Kageyama, Takayuki Fujii, Koji Kanefuji, Hiroe Tsubaki. American Journal of Computational Mathematics, 2011, Issue 3, pp. 183-188 (6 pages)
We consider risk minimization problems for Markov decision processes. From the standpoint of making the risk of the random reward variable at each time as small as possible, a risk measure is introduced using conditional value-at-risk for random immediate reward variables in Markov decision processes, under whose risk measure criteria the risk-optimal policies are characterized by the optimality equations for the discounted or average case. As an application, inventory models are considered.
Keywords: Markov decision processes, conditional value-at-risk, risk optimal policy, inventory model
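Conditional value-at-risk, the risk measure used above, has a simple empirical form: the mean of the worst (1 - alpha) fraction of outcomes. A sketch on loss samples (illustrative only; the paper applies CVaR to immediate reward variables inside MDP optimality equations):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical CVaR at level alpha: the mean of the losses at or above
    the empirical alpha-quantile (the value-at-risk)."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)          # value-at-risk threshold
    return losses[losses >= var].mean()       # average of the tail beyond it
```

For losses 1 through 100 at alpha = 0.95, the tail is {96, ..., 100}, so the CVaR is 98 while the plain mean is only 50.5, which is why CVaR penalizes rare large losses far more than an expectation-based criterion.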
Seeking for Passenger under Dynamic Prices: A Markov Decision Process Approach
13
Authors: Qianrong Shen. Journal of Computer and Communications, 2021, Issue 12, pp. 80-97 (18 pages)
In recent years, ride-on-demand (RoD) services such as Uber and Didi have become increasingly popular. Different from traditional taxi services, RoD services adopt dynamic pricing mechanisms to manipulate the supply and demand on the road, and such mechanisms improve service capacity and quality. Seeking-route recommendation has been widely studied for taxi services. In RoD services, the dynamic price is a new and accurate indicator that represents the supply and demand condition, but it has rarely been studied as a source of clues for drivers seeking passengers. In this paper, we propose to incorporate the impact of dynamic prices as a key factor in recommending seeking routes to drivers. We first show the importance and need to do so by analyzing real service data. We then design a Markov Decision Process (MDP) model based on passenger order and car GPS trajectory datasets, and take dynamic prices into account in designing rewards. Results show that our model not only guides drivers to locations with higher prices, but also significantly improves driver revenue: compared with driver revenue before using the model, the maximum increase can reach 28%.
Keywords: ride-on-demand service, Markov decision process, dynamic pricing, taxi services, route recommendation
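The MDP model above scores seeking routes by expected long-run revenue under price-aware rewards. For a finite state space, the expected discounted revenue of a fixed seeking policy has a closed form, V = (I - gamma * P_pi)^(-1) r_pi. A generic sketch (assumed array shapes, where the rewards r would encode the dynamic prices; not the paper's implementation):

```python
import numpy as np

def evaluate_policy(P, r, policy, gamma=0.9):
    """Exact evaluation of a fixed policy in a finite discounted MDP:
    solves (I - gamma * P_pi) V = r_pi.

    P: (A, S, S) transition tensor; r: (S, A) per-step rewards
    (e.g., expected fare scaled by the dynamic price); policy: (S,) ints."""
    S = P.shape[1]
    idx = np.arange(S)
    P_pi = P[policy, idx, :]   # (S, S): row s is P[policy[s], s, :]
    r_pi = r[idx, policy]      # (S,): reward of the action the policy picks
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
```

Comparing V across candidate policies is one way to see, as the paper reports, how much a price-aware seeking strategy raises expected revenue over a price-blind one.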
Moving Target Defense Decision Optimization Based on Markov Differential Games (Cited: 1)
14
Authors: 胡春娇 (Hu Chunjiao), 陈瑛 (Chen Ying), 王高才 (Wang Gaocai). Application Research of Computers (计算机应用研究) (CSCD, Peking University Core), 2023, Issue 9, pp. 2832-2837 (6 pages)
As network attack and defense evolve toward real-time, continuous, and dynamically high-frequency confrontation, traditional discrete multi-stage attack-defense game models can no longer meet practical needs. Moreover, nodes in traditional attack-defense models have a single state, which makes it difficult to accurately describe how node types evolve in real network confrontations. This paper improves the node epidemic dynamics model and applies it to network attack-defense confrontation, using it to describe the evolution of nodes in different states and the transition relationships between node states during the confrontation. In constructing the moving-target Markov differential game defense model, a differential game is used for analysis within each stage, while a Markov decision process describes the state transitions between stages; through equilibrium analysis and solution, a defense decision optimization algorithm is designed. Finally, simulation experiments verify the feasibility and effectiveness of the model and the optimized strategy.
Keywords: moving target, defense decision optimization, Markov differential game, game model
A dynamical neural network approach for distributionally robust chance-constrained Markov decision process (Cited: 1)
15
Authors: Tian Xia, Jia Liu, Zhiping Chen. Science China Mathematics (SCIE, CSCD), 2024, Issue 6, pp. 1395-1418 (24 pages)
In this paper, we study the distributionally robust joint chance-constrained Markov decision process. Utilizing the logarithmic transformation technique, we derive its deterministic reformulation with bi-convex terms under the moment-based uncertainty set. To cope with the non-convexity and improve the robustness of the solution, we propose a dynamical neural network approach to solve the reformulated optimization problem. Numerical results on a machine replacement problem demonstrate the efficiency of the proposed dynamical neural network approach when compared with the sequential convex approximation approach.
Keywords: Markov decision process, chance constraints, distributionally robust optimization, moment-based ambiguity set, dynamical neural network
Distributed optimization of electricity-gas-heat integrated energy system with multi-agent deep reinforcement learning (Cited: 3)
16
Authors: Lei Dong, Jing Wei, Hao Lin, Xinying Wang. Global Energy Interconnection (EI, CAS, CSCD), 2022, Issue 6, pp. 604-617 (14 pages)
The coordinated optimization problem of the electricity-gas-heat integrated energy system (IES) has the characteristics of strong coupling, non-convexity, and nonlinearity. The centralized optimization method has a high cost of communication and complex modeling. Meanwhile, the traditional numerical iterative solution cannot deal with uncertainty or solution efficiency, which makes it difficult to apply online. For the coordinated optimization problem of the electricity-gas-heat IES in this study, we constructed a model for the distributed IES with a dynamic distribution factor and transformed the centralized optimization problem into a distributed optimization problem in a multi-agent reinforcement learning environment using the multi-agent deep deterministic policy gradient. Introducing the dynamic distribution factor allows the system to consider the impact of changes in real-time supply and demand on system optimization, dynamically coordinating different energy sources for complementary utilization and effectively improving the system economy. Compared with centralized optimization, the distributed model with multiple decision centers can achieve similar results while easing the pressure on system communication. The proposed method considers the dual uncertainty of renewable energy and load during training. Compared with the traditional iterative solution method, it can better cope with uncertainty and realize real-time decision making, which is conducive to online application. Finally, we verify the effectiveness of the proposed method using an example of an IES coupled with three energy hub agents.
Keywords: integrated energy system, multi-agent system, distributed optimization, multi-agent deep deterministic policy gradient, real-time optimization decision
Research on virtual entity decision model for LVC tactical confrontation of army units (Cited: 1)
17
Authors: GAO Ang, GUO Qisheng, DONG Zhiming, TANG Zaijiang, ZHANG Ziwei, FENG Qiqi. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2022, Issue 5, pp. 1249-1267 (19 pages)
To meet the requirements of live-virtual-constructive (LVC) tactical confrontation (TC) on the virtual entity (VE) decision model, namely graded combat capability, diversified actions, real-time decision-making, and generalization against the enemy, the confrontation process is modeled as a zero-sum stochastic game (ZSG). By introducing the theory of the dynamic relative power potential field, the problem of reward sparsity in the model can be solved. By reward shaping, the problem of credit assignment between agents can be solved. Based on the idea of meta-learning, an extensible multi-agent deep reinforcement learning (EMADRL) framework and solving method is proposed to improve the effectiveness and efficiency of model solving. Experiments show that the model meets the requirements well and the algorithm learning efficiency is high.
Keywords: live-virtual-constructive (LVC), army unit, tactical confrontation (TC), intelligent decision model, multi-agent deep reinforcement learning
Offline Pre-trained Multi-agent Decision Transformer (Cited: 2)
18
Authors: Linghui Meng, Muning Wen, Chenyang Le, Xiyun Li, Dengpeng Xing, Weinan Zhang, Ying Wen, Haifeng Zhang, Jun Wang, Yaodong Yang, Bo Xu. Machine Intelligence Research (EI, CSCD), 2023, Issue 2, pp. 233-248 (16 pages)
Offline reinforcement learning leverages previously collected offline datasets to learn optimal policies with no need to access the real environment. Such a paradigm is also desirable for multi-agent reinforcement learning (MARL) tasks, given the combinatorially increased interactions among agents and with the environment. However, in MARL, the paradigm of offline pre-training with online fine-tuning has not been studied, and neither datasets nor benchmarks for offline MARL research are available. In this paper, we facilitate the research by providing large-scale datasets and using them to examine the usage of the decision transformer in the context of MARL. We investigate the generalization of MARL offline pre-training in the following three aspects: 1) between single agents and multiple agents, 2) from offline pre-training to online fine-tuning, and 3) to multiple downstream tasks with few-shot and zero-shot capabilities. We start by introducing the first offline MARL dataset with diverse quality levels based on the StarCraft II environment, and then propose the novel architecture of the multi-agent decision transformer (MADT) for effective offline learning. MADT leverages the transformer's sequence-modelling ability and integrates it seamlessly with both offline and online MARL tasks. A significant benefit of MADT is that it learns generalizable policies that can transfer between different types of agents under different task scenarios. On the StarCraft II offline dataset, MADT outperforms the state-of-the-art offline reinforcement learning (RL) baselines, including BCQ and CQL. When applied to online tasks, the pre-trained MADT significantly improves sample efficiency and enjoys strong performance in both few-shot and zero-shot cases. To the best of our knowledge, this is the first work that studies and demonstrates the effectiveness of offline pre-trained models in terms of sample efficiency and generalizability enhancements for MARL.
Keywords: pre-training model, multi-agent reinforcement learning (MARL), decision making, transformer, offline reinforcement learning
Probabilistic Analysis and Multicriteria Decision for Machine Assignment Problem with General Service Times
19
Authors: Wang, Jing. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 1994, Issue 1, pp. 53-61 (9 pages)
In this paper we carry out a probabilistic analysis of a machine repair system with a general service-time distribution by means of generalized Markov renewal processes. Some formulas for the steady-state performance measures, such as the distribution of queue sizes, average queue length, and degree of repairman utilization, are then derived. Finally, the machine repair model and a multiple-criteria decision-making method are applied to study the machine assignment problem with a general service-time distribution, in order to determine the optimum number of machines serviced by one repairman.
Keywords: machine assignment problem, queueing model, multicriteria decision, Markov processes
Human Centered Design in Multi-agent Virtual Environment (Cited: 1)
20
Authors: WANG Zhao-hui, ZHENG Guo-lei, ZHU Xin-xiong. Computer Aided Drafting, Design and Manufacturing, 2007, Issue 2, pp. 49-57 (9 pages)
The efficient and reliable human-centered design of products and processes is a major goal in manufacturing industries, because numerous human factors must be taken into account during the entire life cycle of products. A multi-agent intelligent design system is presented for manufacturing process simulation and products' ergonomic analysis. In the virtual design environment, the virtual human with high-level intelligence performs task operations autonomously and shows the optimum posture configuration with ergonomic assessment results in real time. These functions are realized by an intelligent agent architecture based on a modern approach derived from fuzzy multi-objective decision-making theory. A case study is presented to demonstrate the feasibility of the suggested methodology.
Keywords: virtual human, fuzzy multi-objective decision making, multi-agent system, autonomous behavior