Funding: Supported by the National Natural Science Foundation of China (No. 62171051).
Abstract: With the proportion of intelligent services in the industrial internet of things (IIoT) rising rapidly, their data dependency and decomposability increase the difficulty of scheduling computing resources. In this paper, we propose an intelligent service computing framework. In the framework, we take the long-term reward of its key participants, the edge service providers, as the optimization goal, which is related to service delay and computing cost. Considering the different update frequencies of data deployment and service offloading, double-timescale reinforcement learning is utilized in the framework. In the small-timescale strategy, the frequent concurrency of services and the differences in service time blur the relationship between reward and action. To solve this fuzzy reward problem, a reward mapping-based reinforcement learning (RMRL) algorithm is proposed, which enables the agent to learn the relationship between reward and action more clearly. The large-timescale strategy adopts an improved Monte Carlo tree search (MCTS) algorithm to improve the learning speed. The simulation results show that the strategy is superior to popular reinforcement learning algorithms such as double Q-learning (DDQN) and dueling Q-learning (dueling-DQN) in learning speed, and the reward is also increased by 14%.
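The abstract above does not give the concrete reward-mapping rule, so the following is only a minimal Python sketch of the general idea it hints at: when concurrent services finish at different times, each delayed reward is mapped back to the (state, action) pair that launched that service before an ordinary Q-learning update is applied. All names (RewardMappingAgent, launch, on_service_done) are hypothetical and not taken from the paper.

```python
import random
from collections import defaultdict

# Hedged sketch: not the paper's RMRL algorithm, only an illustration of
# "mapping" a delayed service reward back to the action that launched it
# before running an ordinary Q-learning update.

class RewardMappingAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)          # Q[(state, action)]
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.pending = {}                    # service_id -> (state, action)

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def launch(self, service_id, state, action):
        # Remember which decision each concurrent service belongs to.
        self.pending[service_id] = (state, action)

    def on_service_done(self, service_id, reward, next_state):
        # Reward mapping: credit the reward to the originating (state, action),
        # even though other services may have finished in between.
        state, action = self.pending.pop(service_id)
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

A full implementation would also cover the large-timescale MCTS component, which this sketch deliberately omits.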
Abstract: We are entering a new era of enterprise computing that is characterized by an emphasis on broadband convergence, knowledge sharing, and calm services. Some people refer to this as the 'ubiquitous network' business model because its focus is on a high degree of connectivity between a company and its customers, suppliers, and channel partners. Moreover, immediate access to ideas, goods, and services will be of greater value than the traditional model of permanent and ponderous possession. This paper illustrates how ubiquitous computing technology can be combined with legacy computer-based information systems, and offers thoughts on relevant issues of ubiquitous commerce. We also propose a model for assessing levels of ubiquitous computing services.
Funding: Supported by the NSC under Grant No. 102-2410-H-130-038.
Abstract: This study presents a clear evolution of computing and its key applications. Cloud computing services evolved from distributed, grid, and utility computing. Key companies such as Salesforce, Amazon, Google, and Microsoft play important roles in cloud computing. Dramatic changes in the technology environment have created new challenges for current information technologies. This study discusses four significant challenges for cloud computing services, including the next-generation Internet, data synchronization, cloud security, and competitive advantages. It also discusses how managers can learn about the future of cloud computing services.
Abstract: The purpose of this paper is to provide a better understanding of cloud computing and to suggest relevant research paths in this growing field. We also go through the future benefits of cloud computing and the challenges that may lie ahead. Cloud, performance, cloud computing, architecture, scale-up, and big data are all terms used in this context. Cloud computing offers a wide range of architectural configurations, including the number of processors, memory, and nodes. Cloud computing has already changed the way we store, process, and access data, and it is expected to continue to have a significant impact on the future of information technology. Cloud computing enables organizations to scale their IT resources up or down quickly and easily, without the need for costly hardware upgrades, which helps them respond more quickly to changing business needs and market conditions. By moving IT resources to the cloud, organizations can reduce their IT infrastructure costs and improve their operational efficiency. Cloud computing also allows organizations to pay only for the resources they use, rather than investing in expensive hardware and software licenses. Cloud providers invest heavily in security and compliance measures, which can help protect organizations from cyber threats and ensure regulatory compliance. Cloud computing provides a scalable platform for AI and machine learning applications, enabling organizations to build and deploy these technologies more easily and cost-effectively. Depending on the configuration, a task or application and its input can take up to 20 times longer or cost 10 times more than optimal. The ready adaptability of cloud products has resulted in a paradigm change: previously, an application was optimized for a specific cluster; in the cloud, the architectural configuration is tuned for the workload. The evolution of cloud computing from the era of mainframes and dumb terminals has been significant, but there are still many advancements to come. Looking towards the future, IT leaders and the companies they serve will face increasingly complex challenges to stay competitive in a constantly evolving cloud computing landscape, and it will be crucial to remain compliant with existing regulations as well as new regulations that may emerge. It is safe to say that the next decade of cloud computing will be just as dramatic as the last: many internet services are becoming cloud-based, and even large enterprises will struggle to fund physical infrastructure. Cloud computing is widely used in business innovation; because of its agility and adaptability, cloud technology enables new ways of working, operating, and running a business. The service enables users to access files and applications stored in the cloud from anywhere, removing the requirement for users to always be physically close to the actual hardware. Cloud computing makes these resources available from anywhere because they are kept on a network of hosted computers that carry data over the internet. Cloud computing has proven to be advantageous to both consumers and corporations; to be more specific, the cloud has altered our way of life. Overall, cloud computing is likely to continue to play a significant role in the future of IT, enabling organizations to become more agile, efficient, and innovative in the face of rapid technological change, and this is likely to drive further innovation in AI and machine learning in the coming years.
Funding: Supported by the National Natural Science Foundation of China (Nos. 61502043 and 61132001), the Beijing Natural Science Foundation (No. 4162042), and the Beijing Talents Fund (No. 2015000020124G082).
Abstract: With the growing popularity of data-intensive services on the Internet, the traditional process-centric model for business processes meets challenges due to its lack of ability to describe data semantics and dependencies, resulting in inflexibility in the design and implementation of the processes. This paper proposes a novel data-aware business process model which is able to describe both explicit control flow and implicit data flow. A data model with dependencies formulated in Linear-time Temporal Logic (LTL) is presented, and their satisfiability is validated by an automaton-based model checking algorithm. Data dependencies are fully considered in the modeling phase, which helps to improve the efficiency and reliability of programming during the development phase. Finally, a prototype system based on jBPM for data-aware workflow is designed using this model, and has been deployed to the Beijing Kingfore heating management system to validate the flexibility, efficacy, and convenience of our approach for massive coding and large-scale system management in practice.
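The paper validates LTL-formulated data dependencies with an automaton-based model checker; as a much simpler stand-in, the sketch below evaluates one response-style dependency ("every produce of a data item is eventually followed by a consume") over a finite execution trace. The event names and the dependency itself are invented for illustration and are not taken from the paper.

```python
# Hedged illustration: a single response-pattern dependency, roughly
# G(trigger -> F response), checked over a finite trace. A real checker would
# translate arbitrary LTL formulas into automata instead.

def always_eventually(trace, trigger, response):
    """True if every occurrence of `trigger` is followed by some `response`."""
    last_response = -1
    for i, event in enumerate(trace):
        if event == response:
            last_response = i
    return all(i <= last_response for i, event in enumerate(trace) if event == trigger)

trace = ["produce(order)", "check(stock)", "consume(order)", "produce(order)"]
print(always_eventually(trace, "produce(order)", "consume(order)"))  # False: last produce never consumed
```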
Funding: Supported by the High Technology Research and Development Programme of China (No. 2003AA1Z2070) and the National Natural Science Foundation of China (No. 90412013).
Abstract: Under the virtualization idea of large-scale decomposition and sharing, the loosely coupled network interconnection of computing components and storage components, which are tightly coupled in a traditional server, achieves application-level distribution of computing capacity, storage capacity, and service capacity according to need. Under this new server model, the segregation and protection of user space and system space, as well as the security monitoring of virtual resources, are key factors of the ultimate security guarantee. This article presents a large-scale, extensible distributed intrusion detection system for the virtual computing environment based on virtual machines. The system supports security monitoring management of global resources and provides a uniform view of security attacks in the virtual computing environment, thereby protecting user applications and system security in the capacity services domain.
Abstract: After a comprehensive literature review and analysis, a unified cloud computing framework is proposed, which comprises MapReduce, a virtual machine, the Hadoop distributed file system (HDFS), HBase, Hadoop, and virtualization. This study also compares Microsoft, Trend Micro, and the proposed unified cloud computing architecture to show that the proposed unified framework of the cloud computing service model is comprehensive and appropriate for the current complexities of businesses. The findings of this study can contribute to the knowledge of academics and practitioners to understand, assess, and analyze a cloud computing service application.
Funding: Supported by the Ministry of Science and Technology (MOST), Taiwan, R.O.C. (104-2410-H-327-024-).
Abstract: The advantages of a cloud computing service are cost advantages, availability, scalability, flexibility, reduced time to market, and dynamic access to computing resources. Enterprises can improve the successful adoption rate of cloud computing services if they understand the critical factors. To find the critical factors, this study first reviewed the literature and established a three-layer hierarchical factor table for adopting a cloud computing service based on the Technology-Organization-Environment framework. Then, a hybrid method that combines two multi-criteria decision-making tools, the Fuzzy Analytic Network Process method and the concept of VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) acceptable advantage, was used to objectively identify critical factors for the adoption of a cloud computing service, replacing the subjective decision of the authors. The results of this study determined five critical factors, namely data access security, information transmission security, senior management support, fallback cloud management, and employee acceptance. Finally, the paper presents the findings and implications of the study.
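The abstract names the method but not its formulas; the sketch below shows the classical crisp VIKOR ranking step that the "acceptable advantage" test is built on, with made-up weights and scores. It does not reproduce the paper's fuzzy ANP weighting or its actual factor data.

```python
# Hedged sketch of crisp VIKOR: S (group utility), R (individual regret) and
# Q (compromise index) for benefit criteria. Ties on a criterion or identical
# S/R extremes are not handled; all numbers are illustrative only.

def vikor(scores, weights, v=0.5):
    m, n = len(scores), len(scores[0])
    best = [max(row[j] for row in scores) for j in range(n)]
    worst = [min(row[j] for row in scores) for j in range(n)]
    S, R = [], []
    for row in scores:
        terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j]) for j in range(n)]
        S.append(sum(terms))
        R.append(max(terms))
    s_star, s_minus, r_star, r_minus = min(S), max(S), min(R), max(R)
    return [v * (S[i] - s_star) / (s_minus - s_star)
            + (1 - v) * (R[i] - r_star) / (r_minus - r_star) for i in range(m)]

# Three candidate factors scored on two criteria (higher is better).
Q = vikor(scores=[[0.9, 0.7], [0.6, 0.9], [0.5, 0.4]], weights=[0.6, 0.4])
print(Q)   # lower Q ranks higher; the acceptable-advantage test compares Q gaps to 1/(m-1)
```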
Abstract: This paper aims to present a role-based interaction model for dynamic service composition in Grid environments. Assigning roles to a service means associating with it capabilities that describe all the operations the service intends to perform. When all of the services can be recognized by their roles, the appropriate services can be selected. Based on the interaction policy, a role-based interaction model not only facilitates access control, but also offers a flexible interaction mechanism for adapting service-oriented applications. This interaction model adopts a programmable reactive tuple space to facilitate context-dependent coordination.
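As an illustration of the selection idea only (roles as capability sets that must cover a request), here is a small hedged sketch; the role names, operations, and services are invented, and the tuple-space coordination layer is not modeled.

```python
# Hedged illustration of role-based selection: a service qualifies if the
# capability set of its role covers every operation the request needs.

roles = {
    "storage": {"put", "get", "delete"},
    "transcode": {"encode", "decode"},
}
services = {"svc-a": "storage", "svc-b": "transcode", "svc-c": "storage"}

def select(required_ops):
    return [s for s, role in services.items() if required_ops <= roles[role]]

print(select({"put", "get"}))   # ['svc-a', 'svc-c']
```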
Funding: Supported by the National Key Basic Research and Development (973) Program (Nos. 2012CB315801, 2011CB302805, 2010CB328105, and 2009CB320504), the National Natural Science Foundation of China (Nos. 60932003, 61020106002, and 61161140320), and the Intel Research Council under the project titled "Security Vulnerability Analysis based on Cloud Platform with Intel IA Architecture".
Abstract: The energy consumption in large-scale data centers is attracting more and more attention today, with increasing data center energy costs making enhanced performance very expensive. This is becoming a bottleneck to further developments in terms of both scale and performance of cloud computing. Thus, the reduction of the energy consumption of data centers is becoming a key research topic in green IT and green computing. The web servers providing cloud service computing run at various speeds for different scenarios. By shifting among these states using speed scaling, the energy consumption becomes proportional to the workload, which is termed energy proportionality. This study uses stochastic service decision nets to investigate energy-efficient speed scaling on web servers. This model combines stochastic Petri nets with Markov decision process models, which enables it to dynamically optimize the speed scaling strategy and perform performance evaluations. The model is graphical and intuitive enough to characterize complicated system behavior and decisions. The model is service-oriented, using typical service patterns to reduce the complex model to a simpler one with a smaller state space. Performance and reward equivalence analyses substantially reduce the system behavior sub-net. The model gives the optimal strategy and evaluates performance and energy metrics more concisely.
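The stochastic-service-decision-net model itself cannot be reconstructed from the abstract, so the sketch below substitutes a plain two-state Markov decision process that captures the same trade-off: picking a server speed that balances power against response time under a changing workload. All rates, power figures, and weights are invented.

```python
# Hedged toy sketch: a small MDP for speed scaling, standing in for the
# stochastic-service-decision-net model described above. Numbers are made up.

states = ["low", "high"]                      # workload level
speeds = [1.0, 1.5, 2.0]                      # available service rates
arrival = {"low": 0.4, "high": 0.9}           # request arrival rate per state
power = {1.0: 10.0, 1.5: 20.0, 2.0: 35.0}     # power cost per speed
P = {"low": {"low": 0.7, "high": 0.3},        # workload transition probabilities
     "high": {"low": 0.4, "high": 0.6}}
gamma, delay_weight = 0.9, 50.0

def cost(state, speed):
    lam = arrival[state]
    if speed <= lam:                          # unstable queue: huge penalty
        return 1e6
    resp = 1.0 / (speed - lam)                # M/M/1 mean response time
    return power[speed] + delay_weight * resp

V = {s: 0.0 for s in states}
for _ in range(200):                          # value iteration
    V = {s: min(cost(s, a) + gamma * sum(P[s][s2] * V[s2] for s2 in states)
                for a in speeds)
         for s in states}

policy = {s: min(speeds, key=lambda a: cost(s, a)
                 + gamma * sum(P[s][s2] * V[s2] for s2 in states))
          for s in states}
print(policy)   # e.g. run slower under low load, faster under high load
```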
Funding: Supported in part by the National Key Basic Research and Development (973) Program of China (Nos. 2011CB302805 and 2013CB228206), the National High-Tech Research and Development (863) Program of China (No. 2013BAH19F01), and the National Natural Science Foundation of China (No. 61233016).
Abstract: Video streaming services are increasingly being deployed on the cloud. Cloud computing offers better stability and lower prices than traditional IT facilities. Huge storage capacity is essential for video streaming services. As more cloud providers appear, there are more cloud platforms to choose from. A better choice is to use more than one data center, which is called multi-cloud. In this paper, a closed-loop approach is proposed for optimizing Quality of Service (QoS) and cost. Modules for monitoring and controlling data centers are required, as well as application-level feedback such as that from video streaming services. An algorithm is proposed to help a video service manager choose cloud providers and data centers in a multi-cloud environment. Performance under different video service workloads is evaluated. Compared with using only one cloud provider, dynamically deploying services in a multi-cloud environment is better in terms of both cost and QoS. If cloud service costs differ among data centers, the algorithm helps make choices that lower the cost while keeping QoS high.
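The paper's closed-loop algorithm is not spelled out in the abstract; the sketch below only illustrates the core selection decision it implies: among the data centers that still meet a latency bound, choose the cheapest one. Provider names, prices, and latencies are invented.

```python
# Hedged sketch of multi-cloud data center selection under a QoS (latency)
# bound. The monitoring/feedback loop described in the paper is not modeled.

datacenters = [
    {"name": "dc-a", "provider": "cloud-1", "price": 0.12, "latency_ms": 35},
    {"name": "dc-b", "provider": "cloud-1", "price": 0.09, "latency_ms": 80},
    {"name": "dc-c", "provider": "cloud-2", "price": 0.10, "latency_ms": 50},
]

def place(channel_qos_ms):
    feasible = [dc for dc in datacenters if dc["latency_ms"] <= channel_qos_ms]
    if not feasible:                  # QoS cannot be met: fall back to best latency
        return min(datacenters, key=lambda dc: dc["latency_ms"])
    return min(feasible, key=lambda dc: dc["price"])

print(place(60)["name"])   # -> "dc-c": cheapest data center within the 60 ms bound
```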
Funding: Supported by the National Natural Science Foundation of China (No. 61272454).
Abstract: Conventional resource provisioning algorithms focus on how to maximize resource utilization while meeting a fixed response time constraint written in the service level agreement (SLA). Unfortunately, the expected response time is highly variable and usually longer than the SLA value, which leads to poor resource utilization and unnecessary server migration. We develop a framework for customer-driven dynamic resource allocation in cloud computing, termed CDSMS (customer-driven service management system), whose contributions are twofold. First, it reduces the total number of migrations by dynamically adjusting the response time parameters according to customers' profiles. Second, it automatically chooses the best resource provisioning algorithm for different scenarios to improve resource utilization. Finally, we perform a series of experiments on a real cloud computing platform. Experimental results show that CDSMS provides a satisfactory solution for predicting the expected response time and the interval between two tasks, and reduces the total resource usage cost.
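The abstract describes adjusting response-time parameters per customer profile to avoid needless migrations; the sketch below is a hedged guess at what such a rule could look like, with invented profile names, scaling factors, and violation counts (the real CDSMS logic and its algorithm-selection part are not shown).

```python
# Hedged sketch: widen or tighten the response-time threshold per customer
# profile so that short, tolerable violations do not trigger a migration.

def effective_threshold(sla_response_ms, profile):
    """Scale the contractual SLA value by how delay-tolerant the customer is."""
    tolerance = {"strict": 1.0, "normal": 1.2, "relaxed": 1.5}[profile]
    return sla_response_ms * tolerance

def should_migrate(observed_ms, sla_response_ms, profile, consecutive_violations):
    # Only migrate after repeated violations of the customer-adjusted threshold.
    return (observed_ms > effective_threshold(sla_response_ms, profile)
            and consecutive_violations >= 3)

print(should_migrate(260, 200, "relaxed", 5))   # False: 260 <= 200 * 1.5
print(should_migrate(320, 200, "normal", 5))    # True: 320 > 240 and 5 >= 3
```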
Funding: Supported by the National Natural Science Foundation of China (Nos. 61502209 and 61502207), the Postdoc Funds of China and Jiangsu Province (Nos. 2015M580396 and 1501023A), and the Jiangsu University Foundation (No. 5503000049).
Abstract: This study, based on the theory of equivalence relations, proposes a novel multilevel index model for decentralized service repositories to eliminate redundant information and improve the time efficiency of the service retrieval process in the service repository architecture. An efficient resource discovery algorithm based on Distributed Hash Tables is presented to enable efficient and effective retrieval services among different distributed repositories. The performance of the proposed model and the supporting algorithms has been evaluated in a distributed environment. Experimental results validate the effectiveness of our proposed indexing model and search algorithm.
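As a stand-in for the DHT-based resource discovery layer, the following sketch shows a generic consistent-hashing lookup that maps a service key to a repository node; it illustrates the underlying mechanism only and is not the paper's multilevel index or its algorithm.

```python
# Hedged sketch: a consistent-hashing ring of repository nodes; a service key
# is served by the first node clockwise from its hash. Names are invented.

import hashlib
from bisect import bisect_right

def h(value, space=2**16):
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % space

class Ring:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def lookup(self, service_key):
        keys = [k for k, _ in self.ring]
        idx = bisect_right(keys, h(service_key)) % len(self.ring)
        return self.ring[idx][1]

ring = Ring(["repo-1", "repo-2", "repo-3"])
print(ring.lookup("payment-service/v2"))   # repository responsible for this service key
```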
Funding: This work was supported by the National High Technology Research and Development Program of China (No. 2012AA011205), the National Natural Science Foundation of China (Grant Nos. 61361136002, 61321064, and 91118007), the Shanghai Knowledge Service Platform Project (No. ZF1213), and the Shanghai Minhang Talent Project.
Abstract: The Orc language is a concurrency calculus proposed to study orchestration patterns in service-oriented computing. Its special features, such as high concurrency and asynchronism, make it a brilliant subject for studying web applications that rely on web services. The conventional semantics for Orc does not contain the execution status of services, so a program cannot determine whether a service has terminated normally or halted with a failure after it published some results. This means that this kind of failure cannot be captured by the fault handler. Furthermore, such a semantic model cannot establish an order saying that a program is better if it fails less often. This paper employs UTP methods to propose a denotational semantic model for Orc that contains execution status information. A failure handling semantics is defined to recover a failed execution back to normal. A refinement order is defined to compare two systems based on their execution failures. Based on this order, a system that introduces a failure recovery mechanism is considered better than one without. An extended operational semantics is also proposed and proven to be equivalent to the denotational semantics.