Funding: This research was supported by the National Natural Science Foundation of China (No. 71571127), the funding of the V.C. & V.R. Key Lab of Sichuan Province (SCVCVR2019.05VS), and the Sichuan Science and Technology Program (Nos. 2020YFS0318, 2019YFS0155, 2019YFS0146, 2020YFG0430, 2020YFS0307).
Abstract: This paper investigates a multi-component repairable system with a double threshold control policy. The system is composed of n identical and independent components which operate simultaneously at the beginning, and it is down when the number of operating components decreases to k−1 (k≤n). When the number of failed components is less than the value L, the repairman repairs them at a low repair rate. The high repair rate is activated as soon as L failed components are present, and it continues until the number of failed components drops to the value N−1. Applying the matrix-analytic method, the Laplace transform technique and the properties of the phase-type distribution, various performance measures, including the availability, the rate of occurrence of failures and the reliability, are derived in the transient and stationary regimes. Further, numerical examples are reported to show the behaviour of the system.
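As a rough companion to this abstract, the sketch below builds the generator of one plausible continuous-time Markov chain for such a k-out-of-n system with a hysteresis (double threshold) repair rate and computes the steady-state availability by a direct linear solve. The state encoding, all parameter values (n, k, L, N, the failure rate and the two repair rates) and the numerical approach are illustrative assumptions, not the matrix-analytic and Laplace-transform derivation used in the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's derivation): CTMC of an n-component
# machine-repair system with a double-threshold (hysteresis) repair rate.
# All parameter values below are assumptions chosen only for demonstration.

n, k = 6, 4                  # system is up while at least k of n components work
L, N = 4, 2                  # switch to high rate at L failures, back to low at N-1
lam = 0.5                    # failure rate of each operating component (assumed)
mu_low, mu_high = 1.0, 3.0   # low / high repair rates (assumed)

# State = (number of failed components f, repair mode m), m = 0 low, m = 1 high.
states = [(f, m) for f in range(n + 1) for m in (0, 1)]
idx = {s: i for i, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))

for (f, m), i in idx.items():
    up = n - f
    # Failures: each of the `up` operating components fails at rate lam.
    if f < n:
        m_next = 1 if (f + 1 >= L or m == 1) else 0   # high rate kicks in at L
        Q[i, idx[(f + 1, m_next)]] += up * lam
    # Repairs: the single repairman works at the rate of the current mode.
    if f > 0:
        mu = mu_high if m == 1 else mu_low
        m_next = 0 if (f - 1 <= N - 1) else m          # high rate ends at N-1
        Q[i, idx[(f - 1, m_next)]] += mu
    Q[i, i] = -Q[i].sum()

# Stationary distribution: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(states))])
b = np.zeros(len(states) + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

availability = sum(p for (f, _), p in zip(states, pi) if n - f >= k)
print(f"steady-state availability ≈ {availability:.4f}")
```

Adding the repair mode to the state is what captures the hysteresis: the repair rate depends not only on how many components are failed but also on whether the high-rate phase triggered at L has already ended at N−1.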
Abstract: With the simultaneous rise of energy costs and demand for cloud computing, efficient control of data centers becomes crucial. In the data center control problem, one needs to plan at every time step how many servers to switch on or off in order to meet stochastic job arrivals while trying to minimize electricity consumption. This problem becomes particularly challenging when servers can be of various types and jobs from different classes can only be served by certain types of server, as is often the case in real data centers. We model this problem as a robust Markov decision process (i.e., the transition function is not assumed to be known precisely). We give sufficient conditions (which seem to be reasonable and satisfied in practice) guaranteeing that an optimal threshold policy exists. This property can then be exploited in the design of an efficient solving method, which we provide. Finally, we present some experimental results demonstrating the practicability of our approach and compare it with a previous related approach based on model predictive control.
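For intuition about what a threshold policy means in this setting, the sketch below simulates a single server type whose number of powered-on machines is a non-decreasing step function of the current job backlog. The thresholds, arrival and service numbers, and cost weights are hypothetical, and the code does not implement the paper's robust MDP formulation or its solving method.

```python
import numpy as np

# Minimal sketch under stated assumptions (NOT the paper's robust-MDP solver):
# it only illustrates a threshold policy for one server type, where the number
# of powered-on servers is a step function of the backlog.  All numbers below
# are hypothetical.

def threshold_policy(backlog, thresholds):
    """thresholds[i] = smallest backlog at which i+1 servers are kept on."""
    return sum(backlog >= t for t in thresholds)

def simulate(steps=10_000, arrival_rate=3.0, jobs_per_server=2,
             energy_cost=1.0, delay_cost=0.1, seed=0):
    rng = np.random.default_rng(seed)
    thresholds = [1, 4, 8, 14]                    # hypothetical switching points
    backlog, total_cost = 0, 0.0
    for _ in range(steps):
        backlog += rng.poisson(arrival_rate)      # stochastic job arrivals
        servers = threshold_policy(backlog, thresholds)
        backlog = max(0, backlog - servers * jobs_per_server)   # jobs served
        total_cost += energy_cost * servers + delay_cost * backlog
    return total_cost / steps

print(f"average cost per step ≈ {simulate():.2f}")
```

The appeal of such a policy, and the reason the existence result matters, is that the entire control rule collapses to a short vector of thresholds that can be stored and evaluated cheaply at every time step.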
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11831010 and 61961160732) and the Shandong Provincial Natural Science Foundation (Grant No. ZR2019ZD42).
Abstract: This paper is about optimal pricing control under a Markov chain model. The objective is to dynamically adjust the product price over time to maximize a discounted reward function. It is shown that the optimal control policy is of threshold type. Closed-form solutions are obtained. A numerical example is also provided to illustrate our results.
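As a rough illustration of what a threshold-type pricing policy looks like, the sketch below sets up a hypothetical Markov-modulated demand model and runs value iteration on the discounted reward. The demand levels, admissible prices, transition drift and reward function are assumptions for demonstration only and are unrelated to the paper's closed-form analysis.

```python
import numpy as np

# Illustrative sketch only (the paper derives closed-form results analytically;
# the model below is a hypothetical discretisation used to show a threshold-type
# pricing policy).  A demand state evolves as a Markov chain; in each state we
# post a price, earn price * demand level, and maximise the discounted reward.

states = 5                                  # demand levels 0..4 (assumed)
prices = np.array([1.0, 2.0, 3.0])          # admissible prices (assumed)
beta = 0.95                                 # discount factor (assumed)

def transition(price_idx):
    """Assumed demand dynamics: a higher price drifts demand downward."""
    P = np.zeros((states, states))
    down = 0.3 + 0.2 * price_idx
    for s in range(states):
        P[s, max(s - 1, 0)] += down
        P[s, min(s + 1, states - 1)] += 1.0 - down
    return P

# One-step reward: posted price times the current demand level (assumed).
reward = np.array([[p * s for p in prices] for s in range(states)])

V = np.zeros(states)
for _ in range(500):                        # value iteration
    Q = np.stack([reward[:, a] + beta * transition(a) @ V
                  for a in range(len(prices))], axis=1)
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)
# A monotone pattern across demand states indicates a threshold-type policy.
print("price index chosen in each demand state:", policy)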