Abstract: Expenditure on wells constitutes a significant part of the operational costs of a petroleum enterprise, and most of that cost results from drilling. This has prompted drilling departments to look continuously for ways to reduce their drilling costs and to operate as efficiently as possible. A system called the Drilling Comprehensive Information Management and Application System (DCIMAS) is developed and presented here, with the aim of collecting, storing and making full use of the valuable well data and information relating to all drilling activities and operations. The DCIMAS comprises three main parts: a data collection and transmission system, a data warehouse (DW) management system, and an integrated platform of core applications. With the support of the application platform, the DW management system is introduced, whereby operation data are captured at well sites and transmitted electronically to a data warehouse via transmission equipment and ETL (extract, transform and load) tools. With the quality of the data guaranteed, the central task is to make the best use of the operation data and information for drilling analysis and to provide further information to guide later production stages. Applications have been developed and integrated on a uniform platform that interfaces directly with the different layers of the multi-tier DW. With the system in place, engineers in every department spend less time on data handling and more time applying technology to their actual work.
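The ETL pipeline this abstract describes can be illustrated with a minimal sketch. This is not the DCIMAS implementation, whose schema is not published here; the field names (well_id, depth_m, rop_mph) and the SQLite warehouse table are hypothetical stand-ins for a well-site export and the multi-tier DW.

```python
# Minimal sketch of an extract-transform-load (ETL) step of the kind the
# DCIMAS pipeline describes. All field names and the table schema are
# hypothetical; the real system's schema is not given in the abstract.
import csv
import sqlite3

def extract(path):
    """Read raw well-site operation records from a CSV export."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    """Normalize types and drop records that fail basic quality checks."""
    for row in rows:
        try:
            depth = float(row["depth_m"])
            rop = float(row["rop_mph"])  # rate of penetration, m/h
        except (KeyError, ValueError):
            continue  # guard data quality before the record reaches the DW
        if depth >= 0:
            yield (row["well_id"], depth, rop)

def load(records, db_path="warehouse.db"):
    """Append cleaned records to a warehouse fact table."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS drilling_ops"
                "(well_id TEXT, depth_m REAL, rop_mph REAL)")
    con.executemany("INSERT INTO drilling_ops VALUES (?, ?, ?)", records)
    con.commit()
    con.close()

load(transform(extract("wellsite_export.csv")))
```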
Funding: Supported by the National Natural Science Foundation of China (Grant No. 661403234) and the Shandong Provincial Science and Technology Development Plan of China (Grant No. 2014GGX106009).
Abstract: Current mathematical models for the storage assignment problem are generally based on the traveling salesman problem (TSP), which has been widely applied to the conventional automated storage and retrieval system (AS/RS). However, these models do not suit multi-tier shuttle warehousing systems (MSWS), because the characteristics of parallel retrieval across multiple tiers and progressive vertical movement break the assumptions of the TSP. In this study, a two-stage open queuing network model, in which the shuttles and a lift are regarded as servers at different stages, is proposed to analyze system performance in terms of the shuttle waiting period (SWP) and the lift idle period (LIP) during the transaction cycle time. A mean arrival time difference matrix for pairwise stock keeping units (SKUs) is presented to determine the mean waiting time and queue length, so that the storage assignment problem can be optimized on the basis of SKU correlation. The decomposition method is applied to analyze the interactions among outbound task time, SWP, and LIP. An ant colony clustering algorithm is designed to determine storage partitions from the clustered items, and goods are assigned to storage according to the rearranged permutation and combination of storage partitions in a 2D plane, derived from the analysis results of the queuing network model and three basic principles. The storage assignment method and its overall optimization algorithm, as applied in an MSWS, are verified through a practical engineering project in the tobacco industry. The results show that the total SWP and LIP can be reduced effectively, improving the utilization rates of all devices and increasing the throughput of the distribution center.
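The mean arrival time difference matrix lends itself to a short sketch. The paper's exact definition is not reproduced in the abstract, so this assumes entry (i, j) is the absolute difference of the mean arrival times of SKUs i and j over an outbound-task log; small off-diagonal entries would then flag correlated SKUs worth co-locating.

```python
# Hedged sketch of the pairwise mean-arrival-time-difference matrix for
# SKUs, under the assumed definition above (timestamps in seconds).
import numpy as np

def mean_arrival_diff_matrix(arrivals):
    """arrivals: dict mapping SKU id -> list of arrival timestamps."""
    skus = sorted(arrivals)
    means = np.array([np.mean(arrivals[s]) for s in skus])
    # Broadcast to get all pairwise absolute differences at once.
    return skus, np.abs(means[:, None] - means[None, :])

skus, diff = mean_arrival_diff_matrix({
    "SKU-A": [10.0, 40.0, 70.0],
    "SKU-B": [12.0, 41.0, 69.0],   # arrives close in time to SKU-A
    "SKU-C": [300.0, 650.0],
})
print(skus)
print(diff)  # small off-diagonal values suggest candidates for co-location
```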
Funding: Grant-in-Aid for Scientific Research (A) provided by the Japan Society for the Promotion of Science under Grant No. 16H01825.
Abstract: A seismic assessment of two multi-tier pagodas by numerical analysis is presented herein. The Changu Narayan temple and the Kumbeshwar temple in Nepal are used as the case studies. Both pagodas are built of brick masonry in earthen mortar, with timber columns and crossbeams. The Changu Narayan temple is a two-tier pagoda and was seriously damaged during the 2015 Gorkha earthquake; the Kumbeshwar temple is a five-tier pagoda whose top tier collapsed in the same earthquake. The seismic assessment was carried out using finite element (FE) analysis. FE models were prepared, and dynamic identification tests and penetrometer tests were conducted. Pushover analysis and nonlinear dynamic analysis were performed as part of the assessment, with the main shock of the 2015 Gorkha earthquake used as the input accelerogram. The computed behavior of the two pagodas was compared with the collapse mechanisms and damage patterns observed in the actual structures, and the comparison suggested structural features common to multi-tier pagodas. This study aims to provide a better understanding of the seismic behavior of multi-tier pagoda-type structures and offers suggestions for their effective analysis.
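The FE models themselves cannot be reproduced from the abstract. As a toy illustration of the kind of time-history computation that underlies nonlinear dynamic analysis, the sketch below integrates a linear single-degree-of-freedom oscillator under a synthetic accelerogram with the Newmark average-acceleration method; the mass, stiffness, damping, and ground motion are all assumed values.

```python
# Toy illustration only: a linear single-degree-of-freedom oscillator under
# a synthetic ground accelerogram, integrated with the Newmark
# average-acceleration method (beta=1/4, gamma=1/2).
import numpy as np

m, k, zeta = 1.0, 400.0, 0.05          # mass, stiffness, damping ratio (assumed)
wn = np.sqrt(k / m)
c = 2.0 * zeta * wn * m
dt, n = 0.01, 1000
t = np.arange(n) * dt
ag = 0.3 * 9.81 * np.sin(2.0 * np.pi * t) * np.exp(-0.5 * t)  # synthetic record

beta, gamma = 0.25, 0.5
u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
p = -m * ag                            # effective load from base excitation
a[0] = (p[0] - c * v[0] - k * u[0]) / m
keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
for i in range(n - 1):
    # Effective load carries forward the displacement, velocity and
    # acceleration history of the previous step.
    dp = (p[i+1]
          + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt) + (1/(2*beta) - 1) * a[i])
          + c * (gamma / (beta * dt) * u[i] + (gamma/beta - 1) * v[i]
                 + dt * (gamma/(2*beta) - 1) * a[i]))
    u[i+1] = dp / keff
    v[i+1] = (gamma / (beta * dt)) * (u[i+1] - u[i]) \
             + (1 - gamma/beta) * v[i] + dt * (1 - gamma/(2*beta)) * a[i]
    a[i+1] = (p[i+1] - c * v[i+1] - k * u[i+1]) / m  # enforce equilibrium
print(f"peak displacement: {np.abs(u).max():.4f} m")
```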
Abstract: Complex multi-tier applications deployed in cloud computing environments can experience rapid changes in their workloads. To ensure market readiness of such applications, adequate resources need to be provisioned so that the applications can meet the demands of specified workload levels while ensuring that service level agreements are met. Multi-tier cloud applications can have complex deployment configurations with load balancers, web servers, application servers and database servers, and complex dependencies may exist between servers in the various tiers. To support provisioning and capacity planning decisions, performance testing approaches with synthetic workloads are used; the accuracy of such an approach is determined by how closely the generated synthetic workloads mimic realistic workloads. Since multi-tier applications can have varied deployment configurations and characteristic workloads, a generic performance testing methodology is needed that allows the performance of applications to be modeled accurately. We propose a methodology for performance testing of complex multi-tier applications. The workloads of multi-tier cloud applications are captured in two different models: a benchmark application model and a workload model. An architecture model captures the deployment configurations of multi-tier applications. We propose a rapid deployment prototyping methodology that can help in choosing the best and most cost-effective deployments for multi-tier applications that meet the specified performance requirements. We also describe a system bottleneck detection approach based on experimental evaluation of multi-tier applications.
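One piece of this methodology, generating a synthetic workload from a workload model, can be sketched briefly. The request mix and Poisson arrival rate below are illustrative assumptions, not the paper's benchmark definitions.

```python
# Minimal sketch of a synthetic workload generator: events follow a Poisson
# process (exponential inter-arrivals) with a weighted request-type mix.
import random

def synthetic_workload(duration_s, rate_per_s, mix):
    """Yield (timestamp, request_type) events over duration_s seconds."""
    types, weights = zip(*mix.items())
    t = 0.0
    while True:
        t += random.expovariate(rate_per_s)   # Poisson inter-arrival time
        if t > duration_s:
            return
        yield t, random.choices(types, weights=weights)[0]

# e.g. a read-heavy mix at 50 req/s for one minute (assumed parameters)
for ts, req in synthetic_workload(60.0, 50.0,
                                  {"browse": 0.7, "search": 0.2, "checkout": 0.1}):
    pass  # in a real test harness, dispatch req against the deployed tiers
```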
Abstract: Data temperature is a response to the ever-growing amount of data. These data have to be stored, but it has been observed that only a small portion of the data is accessed frequently at any one time. This leads to the concept of hot and cold data: cold data can be migrated away from high-performance nodes to free up performance for higher-priority data. Existing studies classify hot and cold data primarily on the basis of data age and usage frequency. We present this as a limitation of current implementations of data temperature, because age automatically gives all new data priority and usage is purely reactive. We propose new variables and conditions that support smarter decisions about which data are hot or cold and give users greater control over data location and movement. We identify new metadata variables and user-defined variables that extend the current data temperature value, and we establish rules and conditions that limit unnecessary movement of data, helping to prevent wasted input/output (I/O) costs. We also propose a hybrid algorithm that combines the existing variables with the new variables and conditions into a single data temperature. The proposed system provides higher accuracy, increases performance, and gives users greater control over the optimal positioning of data within multi-tiered storage solutions.
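A hedged sketch of such a hybrid temperature score follows: it combines the existing variables (age, access frequency) with a user-defined priority, and adds a hysteresis rule so data only migrate when the score clears a threshold by a margin. The weights and thresholds are illustrative assumptions, not the paper's values.

```python
# Hybrid "data temperature" sketch under the assumptions stated above.
import time

def temperature(meta, w_freq=0.5, w_age=0.3, w_user=0.2, horizon_s=30 * 86400):
    """meta: dict with last_access (epoch s), accesses_per_day,
    and user_priority in [0, 1]."""
    recency = max(0.0, 1.0 - (time.time() - meta["last_access"]) / horizon_s)
    freq = min(1.0, meta["accesses_per_day"] / 100.0)  # saturate at 100/day
    return w_freq * freq + w_age * recency + w_user * meta["user_priority"]

def should_migrate(meta, on_hot_tier, hot_threshold=0.6, hysteresis=0.1):
    """Move data only when the score clears the threshold by a margin,
    limiting ping-pong migrations and the wasted I/O they cause."""
    score = temperature(meta)
    if on_hot_tier:
        return score < hot_threshold - hysteresis   # demote to cold tier
    return score > hot_threshold + hysteresis       # promote to hot tier
```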
Funding: This work is partially supported by the National Key Research and Development Program of China under Grant No. 2016YFB1000205, the State Key Program of National Natural Science Foundation of China under Grant No. 61432002, the National Natural Science Foundation of China-Guangdong Joint Fund under Grant No. U1701263, the National Natural Science Foundation of China under Grant Nos. 61702365, 61672379, and 61772112, the Natural Science Foundation of Tianjin under Grant Nos. 17JCQNJC00700 and 17JCYBJC15500, and the Special Program of Artificial Intelligence of Tianjin Municipal Science and Technology Commission under Grant No. 17ZXRGGX00150.
Abstract: With the multi-tier pricing scheme provided by most cloud service providers (CSPs), cloud users typically select a high enough transmission service level to ensure quality of service (QoS), due to the severe penalty for missing the transmission deadline. This leads to the so-called over-provisioning problem, which increases the transmission cost of the cloud user. Given that cloud users may not be aware of their traffic demand before accessing the network, the over-provisioning problem becomes more serious. In this paper, we investigate how to reduce the transmission cost from the perspective of cloud users, especially when they are not aware of their traffic demand before the transmission deadline. The key idea is to split a long-term transmission request into several short ones. By selecting the most suitable transmission service level for each short-term request, a cost-efficient inter-datacenter transmission service level selection framework is obtained. We further formulate the transmission service level selection problem as a linear programming problem and solve it in an online fashion with Lyapunov optimization. We evaluate the proposed approach with real traffic data. The experimental results show that our method can reduce the transmission cost by up to 65.04%.
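The drift-plus-penalty pattern behind Lyapunov optimization can be sketched for this setting: a virtual queue tracks the unsent backlog, and each slot the service level minimizing V*cost − Q*rate is chosen, so a growing backlog pushes the choice toward faster (costlier) tiers. The tier table and parameter V are hypothetical; the paper's actual linear program is not reproduced here.

```python
# Online, Lyapunov-style (drift-plus-penalty) service level selection.
# Tier table and V are illustrative assumptions.
TIERS = [  # (rate in GB per slot, cost per slot) -- hypothetical pricing
    (1.0, 1.0),
    (5.0, 4.0),
    (20.0, 15.0),
]

def run(arrivals, V=10.0):
    """arrivals: data (GB) arriving each slot; returns total cost paid."""
    Q, total_cost = 0.0, 0.0     # Q is the virtual backlog queue
    for a in arrivals:
        Q += a
        # Drift-plus-penalty: pick the tier minimizing V*cost - Q*rate,
        # so a growing backlog Q steers the choice toward faster tiers.
        rate, cost = min(TIERS, key=lambda rc: V * rc[1] - Q * rc[0])
        Q -= min(Q, rate)        # serve as much backlog as the tier allows
        total_cost += cost       # the slot's tier price is paid regardless
    return total_cost

print(run([2.0, 0.5, 8.0, 1.0, 12.0, 0.0, 3.0]))
```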
Abstract: Resource allocation for multi-tier web applications in virtualization environments is one of the most important problems in autonomic computing. On one hand, the more resources that are provisioned to a multi-tier web application, the easier it is to meet service level objectives (SLOs). On the other hand, the virtual machines that host the multi-tier web application need to be consolidated as much as possible in order to maintain high resource utilization. This paper presents an adaptive resource controller that consists of a feedback utilization controller and an auto-regressive and moving average (ARMA)-based model estimator. It can meet application-level quality of service (QoS) goals while achieving high resource utilization. To evaluate the proposed controllers, simulations are performed on a testbed that emulates a virtual data center using Xen virtual machines. Experimental results indicate that the controllers can improve CPU utilization and strike the best trade-off between resource utilization and performance for multi-tier web applications.
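A minimal sketch of a feedback utilization controller in this spirit appears below; the ARMA model estimator is omitted, and the gain and allocation bounds are illustrative assumptions rather than the paper's design.

```python
# Integral-style feedback controller: adjust a VM's CPU allocation so that
# measured utilization tracks a target. Gains and bounds are assumed.
def make_controller(target_util=0.7, gain=0.5, lo=0.1, hi=4.0):
    alloc = 1.0  # current CPU allocation, in cores
    def step(measured_util):
        nonlocal alloc
        # Above-target utilization grants more CPU; below-target reclaims
        # it so the host can consolidate other VMs.
        error = measured_util - target_util
        alloc = min(hi, max(lo, alloc * (1.0 + gain * error)))
        return alloc
    return step

ctrl = make_controller()
for util in [0.95, 0.88, 0.74, 0.69, 0.71]:   # sampled CPU utilizations
    print(f"util={util:.2f} -> allocate {ctrl(util):.2f} cores")
```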