

Policy Planning of the Web Services Composition Based on the Markov Decision Process
Abstract: To deal with the non-determinism of business logic and quality of service (QoS) in Web services, together with temporal and time-window constraints, this paper proposes a method based on the Markov decision process (MDP) for solving the optimal policy planning problem of Web services composition (WSC). The composition is first described as a task network, a directed acyclic graph in which each node stands for a task realized by a corresponding Web service and each edge represents a temporal (ordering) constraint between tasks; every task has a duration, and its execution must satisfy time-window constraints. On this basis, a formal MDP model of the composition is built, and the optimal policy of the WSC is obtained by planning on this model. Finally, some future research directions are discussed.
Authors: 曾伟, 胡垚
Source: Computer Engineering & Science (《计算机工程与科学》), CSCD, Peking University Core Journals, 2009, No. 3, pp. 153-155 (3 pages)
Funding: Open Fund of the Key Laboratory of Image Information Processing and Intelligent Control, Ministry of Education (200709)
Keywords: Web services composition; Markov decision process; time window; policy planning
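
The abstract describes a two-step method: encode the composition as a DAG task network whose tasks have durations and time windows, then build an MDP over it and plan an optimal service-invocation policy. The sketch below illustrates that idea with finite-horizon backward induction (a standard MDP solution method); the task network, candidate services, QoS numbers, reward, and discrete time grid are hypothetical placeholders, since the paper's concrete model is not given in this record.

```python
import itertools

# --- Hypothetical task network: a DAG of tasks with precedence constraints. ---
# Each task can be realized by candidate Web services with assumed QoS values.
TASKS = {
    "search": {"prereq": [],         "window": (0, 4)},
    "book":   {"prereq": ["search"], "window": (1, 8)},
    "pay":    {"prereq": ["book"],   "window": (2, 10)},
}
SERVICES = {  # task -> [(service name, success probability, duration, cost)]
    "search": [("s1", 0.95, 1, 1.0), ("s2", 0.80, 1, 0.4)],
    "book":   [("b1", 0.90, 2, 2.0)],
    "pay":    [("p1", 0.99, 1, 0.5), ("p2", 0.85, 1, 0.2)],
}
HORIZON = 10        # discretized planning horizon (time steps)
GOAL_REWARD = 10.0  # reward for completing every task within its window


def ready(done):
    """Tasks whose predecessors are all completed and that are not yet done."""
    return [t for t, spec in TASKS.items()
            if t not in done and all(p in done for p in spec["prereq"])]


def value_iteration():
    """Finite-horizon backward induction over states (completed tasks, time)."""
    all_sets = [frozenset(c) for r in range(len(TASKS) + 1)
                for c in itertools.combinations(TASKS, r)]
    V = {(s, HORIZON): (GOAL_REWARD if len(s) == len(TASKS) else 0.0)
         for s in all_sets}
    policy = {}
    for t in range(HORIZON - 1, -1, -1):
        for done in all_sets:
            if len(done) == len(TASKS):        # goal state: every task finished
                V[(done, t)] = GOAL_REWARD
                continue
            best, best_action = 0.0, None      # doing nothing yields value 0
            for task in ready(done):
                lo, hi = TASKS[task]["window"]
                for name, p_ok, dur, cost in SERVICES[task]:
                    end = t + dur
                    if t < lo or end > hi or end > HORIZON:
                        continue               # violates time window or horizon
                    # Success completes the task; failure only consumes time.
                    q = (-cost + p_ok * V[(done | {task}, end)]
                         + (1 - p_ok) * V[(done, end)])
                    if q > best:
                        best, best_action = q, (task, name)
            V[(done, t)] = best
            policy[(done, t)] = best_action
    return V, policy


if __name__ == "__main__":
    V, policy = value_iteration()
    print("Expected value at the initial state:", V[(frozenset(), 0)])
    print("Best first invocation:", policy[(frozenset(), 0)])
```

Backward induction works here because the horizon is finite and time only moves forward, so each state is visited once; the paper's actual MDP may use a different state encoding, reward structure, or solution algorithm.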

