Funding: The research has been supported by a grant from the NSFC (Grant No. 61702063), a grant from the Fundamental Science and Frontier Technology Research Projects of Chongqing (cstc2017jcyjAX0089), and the China Scholarship Council (201708505099).
Abstract: Container-based virtualization techniques are becoming an alternative to traditional virtual machines due to their lower overhead and better scalability. As one of the most widely used open-source container orchestration systems, Kubernetes provides a built-in mechanism, the horizontal pod autoscaler (HPA), for dynamic resource provisioning. By default, HPA scales pods based only on CPU utilization, a single performance metric, and may therefore create more pods than actually needed. Through extensive measurements of a containerized n-tier application benchmark, RUBBoS, we find that excessive pods consume more CPU and memory and can even deteriorate application response times because of interference. Furthermore, a Kubernetes service does not balance incoming requests between old pods and new pods created by HPA, due to stateful HTTP. In this paper, we propose a bi-metric approach to scaling pods that takes into account both CPU utilization and the utilization of the thread pool, an important soft resource in Httpd and Tomcat. Our approach collects the CPU and memory utilization of pods. Meanwhile, it uses ELBA, a milli-bottleneck detector, to calculate the queue lengths of Httpd and Tomcat pods and then estimate the utilization of their thread pools. Based on the utilization of both CPU and thread pools, our approach scales up fewer replicas of Httpd and Tomcat pods, contributing to a reduction in hardware resource utilization. At the same time, our approach leverages the preStop hook along with liveness and readiness probes to relieve the load imbalance between old and new Tomcat pods. On the containerized RUBBoS benchmark, our experimental results show that the proposed approach not only reduces CPU and memory usage by as much as 14% and 24%, respectively, compared with HPA, but also relieves the load imbalance, reducing the average response time of requests by as much as 80%. Our approach also demonstrates that it is better to scale pods by multiple metrics rather than a single one.
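To make the bi-metric idea concrete, the sketch below applies the standard HPA proportional rule, desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), to both CPU utilization and thread-pool utilization (estimated here as busy worker threads divided by pool size). The abstract does not spell out how the two metrics are combined; this sketch assumes a conservative rule in which a tier grows only when both metrics exceed their targets, i.e., it takes the smaller of the two per-metric proposals. All function names, targets, and numbers are illustrative assumptions, not the paper's implementation.

```python
import math


def hpa_proposal(current_replicas: int, current_util: float, target_util: float) -> int:
    """Standard HPA proportional rule:
    desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)."""
    return math.ceil(current_replicas * current_util / target_util)


def bi_metric_replicas(current_replicas: int,
                       cpu_util: float, cpu_target: float,
                       busy_threads: int, pool_size: int, pool_target: float) -> int:
    """Sketch of a bi-metric decision: estimate thread-pool utilization as
    busy threads / pool size (e.g. the Httpd or Tomcat worker pool), compute a
    per-metric proposal for CPU and for the pool, and scale up only as far as
    the less demanding of the two (an assumed combination rule)."""
    pool_util = busy_threads / pool_size
    return min(hpa_proposal(current_replicas, cpu_util, cpu_target),
               hpa_proposal(current_replicas, pool_util, pool_target))


if __name__ == "__main__":
    # Two Tomcat pods: CPU is at 80% against a 50% target, so a CPU-only HPA
    # would propose ceil(2 * 0.80 / 0.50) = 4 replicas; the worker pool is only
    # 60% busy against a 70% target, so the bi-metric rule keeps 2 replicas.
    print(hpa_proposal(2, 0.80, 0.50))                        # -> 4 (CPU-only HPA)
    print(bi_metric_replicas(2, 0.80, 0.50, 60, 100, 0.70))   # -> 2 (bi-metric)
```

In the example, CPU utilization alone would drive the tier from two pods to four, while the lightly loaded thread pool indicates that the extra pods would only add interference; scaling on both metrics avoids the over-provisioning the abstract attributes to the default CPU-only HPA.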