Funding: Supported by the National Natural Science Foundation of China (90204008) and the Science Council of Wuhan (20001001004).
Abstract: There are two kinds of dispatching policies in a content-aware web server cluster: the segregation dispatching policy and the mixture dispatching policy. Traditional scheduling algorithms all adopt the mixture dispatching policy. They do not consider that serving dynamic requests tends to slow down the serving of static requests, nor that different requests have different resource demands, so they cannot use the cluster's resources reasonably and effectively. This paper uses stochastic reward nets (SRN) to model and analyze the two dispatching policies, and uses the stochastic Petri net package (SPNP) to simulate the models. Both the simulation results and practical tests show that the segregation dispatching policy is better than the mixture dispatching policy. The principle of the segregation dispatching policy can guide the design of efficient scheduling algorithms.
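To make the contrast concrete: the segregation policy routes static and dynamic requests to disjoint server pools, while the mixture policy sends every request to one shared pool. A minimal Python sketch of the segregation idea, assuming an illustrative classify_request() helper and plain round-robin inside each pool (neither is taken from the paper, whose analysis uses SRN models rather than a concrete dispatcher):

    from itertools import cycle

    def classify_request(request):
        # Illustrative assumption: treat common file suffixes as static,
        # everything else as dynamic (CGI, servlets, database-backed pages).
        static_suffixes = (".html", ".htm", ".jpg", ".png", ".css", ".js")
        return "static" if request["url"].endswith(static_suffixes) else "dynamic"

    class SegregationDispatcher:
        """Route static and dynamic requests to disjoint server pools."""

        def __init__(self, static_servers, dynamic_servers):
            # Round-robin within each pool; dynamic work can no longer
            # delay static file serving because the pools never mix.
            self._static = cycle(static_servers)
            self._dynamic = cycle(dynamic_servers)

        def dispatch(self, request):
            if classify_request(request) == "static":
                return next(self._static)
            return next(self._dynamic)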
Funding: This work was supported by the National "863" Program of China (No. 2003AA148010) and the National Torch Project of China (No. 2001EB001233).
Abstract: Distributed architectures support increased load on popular web sites by dispatching client requests transparently among multiple servers in a cluster. The Packet Single-Rewriting technique and the client-address hashing algorithm of ONE-IP technology, which preserve application-session affinity, are analyzed, and an improved request dispatching algorithm that is simple, effective, and supports dynamic load balancing is proposed. In this algorithm, the dispatcher determines which server node will process a request by applying a hash function to the client IP address and comparing the result with the node's assigned identifier subset; it adjusts the size of each subset according to the performance and current load of each server, so as to utilize all servers' resources effectively. Simulation shows that the improved algorithm performs better than the original one.
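A minimal Python sketch of this dispatching idea, under stated assumptions: the identifier space is the integers 0..HASH_SPACE-1, each server owns one contiguous subset whose size is rescaled from a reported spare-capacity figure, and MD5 stands in for the hash function. The constants, the contiguous layout and the capacity metric are illustrative, not the paper's exact formulation:

    import hashlib

    HASH_SPACE = 1024  # illustrative size of the identifier space

    def hash_client_ip(ip):
        # Map the client IP into the identifier space (assumed hash function).
        digest = hashlib.md5(ip.encode()).digest()
        return int.from_bytes(digest[:4], "big") % HASH_SPACE

    def assign_subsets(capacities):
        """Split the identifier space into contiguous subsets proportional
        to each server's spare capacity (more capacity -> larger subset)."""
        total = sum(capacities.values())
        subsets, start = {}, 0
        for server, cap in capacities.items():
            size = round(HASH_SPACE * cap / total)
            subsets[server] = range(start, min(start + size, HASH_SPACE))
            start += size
        return subsets

    def dispatch(ip, subsets):
        h = hash_client_ip(ip)
        for server, ids in subsets.items():
            if h in ids:
                return server
        return next(iter(subsets))  # fallback for rounding gaps

    # Example: server B reports twice the spare capacity of A, so it owns
    # roughly two thirds of the identifier space and receives that share
    # of new clients; a given client IP keeps hashing to the same server
    # while the subsets are unchanged, which is what preserves sessions.
    subsets = assign_subsets({"A": 1.0, "B": 2.0})
    target = dispatch("192.168.0.7", subsets)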
Funding: The National Natural Science Foundation of China (No. 61402333, 61402242) and the Natural Science Foundation of Tianjin (No. 15JCQNJC00400).
Abstract: An approach for web server cluster (WSC) reliability and degradation process analysis is proposed. The reliability process is modeled as a non-homogeneous Markov process (NHMP) composed of several non-homogeneous Poisson processes (NHPPs). The arrival rate of each NHPP corresponds to the system software failure rate, which is expressed using Cox's proportional hazards model (PHM) in terms of the cumulative and instantaneous load of the software. The cumulative load refers to the software's cumulative execution time, and the instantaneous load denotes the rate at which user requests arrive at a server. The result of the reliability analysis is a time-varying reliability and degradation process over the WSC lifetime. Finally, an evaluation experiment shows the effectiveness of the proposed approach.
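In generic Cox PHM notation (not the paper's exact symbols), the failure rate of the software on server i can be sketched with two covariates, x_{i,1}(t) for the cumulative execution time and x_{i,2}(t) for the instantaneous request arrival rate:

    \lambda_i(t) = \lambda_0(t)\,\exp\big(\beta_1 x_{i,1}(t) + \beta_2 x_{i,2}(t)\big)

Each covariate scales a common baseline failure rate \lambda_0(t) multiplicatively, so the NHPP arrival rate rises both with accumulated wear and with the current request load, which is what makes the resulting reliability and degradation process time-varying.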
Funding: Supported by the National Natural Science Foundation of China (60175015).
Abstract: Request distribution is a key technology for Web cluster servers. This paper presents a throughput-driven scheduling algorithm (TDSA). The algorithm uses the throughput of the cluster back-ends to evaluate their load and employs a neural network model to predict the future load, so that the scheduling system features a self-learning capability and good adaptability to load changes. Moreover, it separates static requests from dynamic requests to make full use of the CPU resources, and takes the locality of requests into account to improve the cache hit ratio. Experimental results from the WebBench^TM testing tool show better performance for a Web cluster server with TDSA than with traditional scheduling algorithms.
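A highly simplified Python sketch of the dispatch step: recent throughput samples stand in for back-end load, predict_load() is a placeholder for the paper's neural-network predictor, and requests are split into static and dynamic classes. All names, the moving-average predictor and the omission of the cache-locality term are illustrative assumptions:

    def predict_load(history):
        # Stand-in for the neural-network predictor: a moving average
        # of the most recent throughput samples.
        if not history:
            return 0.0
        window = history[-3:]
        return sum(window) / len(window)

    def classify(request):
        # Separate cheap static file requests from CPU-heavy dynamic ones.
        return "static" if request["url"].endswith((".html", ".jpg", ".css", ".js")) else "dynamic"

    def choose_backend(request, backends, throughput_history):
        # Restrict the choice to the pool serving this request class, so
        # dynamic work does not crowd out static serving.
        pool = [b for b in backends if b["type"] == classify(request)]
        # Pick the back-end whose predicted throughput (used here as the
        # load indicator) is lowest; request locality is ignored in this sketch.
        return min(pool, key=lambda b: predict_load(throughput_history[b["name"]]))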
Funding: Supported by the National Natural Science Foundation of China (61073063, 61173029, 61272182 and 61173030), the Ocean Public Welfare Scientific Research Project of the State Oceanic Administration of China (201105033), and the National Digital Ocean Key Laboratory Open Fund Project (KLDO201306).
Abstract: Aiming at the load imbalance and poor scalability of single-tier Web server clusters, an efficient load balancing approach is proposed for constructing an N-hierarchical (multi-tier) Web server cluster. In each layer, multiple load balancers are set to receive user requests simultaneously, and different load balancing algorithms are used to construct a highly scalable Web cluster system. At the same time, an improved load balancing algorithm is proposed, which dynamically calculates weights according to the utilization of server resources and reasonably distributes the load to each server according to the servers' load status. The experimental results show that the proposed approach can greatly decrease the load imbalance among the Web servers and reduce the response time of the entire Web cluster system.
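A minimal Python sketch of the dynamic-weighting idea: each server reports CPU, memory and connection utilization, the weight is the inverse of a combined utilization score, and requests are spread in proportion to the weights. The choice of metrics, the coefficients and the weighted random selection are illustrative assumptions rather than the paper's exact algorithm:

    import random

    def compute_weight(util, coeffs=(0.5, 0.3, 0.2)):
        """Combine CPU, memory and connection utilization (each in [0, 1])
        into one score; lightly loaded servers get larger weights."""
        score = sum(c * u for c, u in zip(coeffs, util))
        return max(1.0 - score, 0.01)  # floor so a busy server still gets some traffic

    def pick_server(server_utils):
        # server_utils maps server name -> (cpu, mem, conn) utilization.
        weights = {s: compute_weight(u) for s, u in server_utils.items()}
        names = list(weights)
        return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

    # Example: the nearly idle server receives most of the new requests.
    target = pick_server({"web1": (0.9, 0.8, 0.7), "web2": (0.2, 0.3, 0.1)})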