Expenditure on wells constitutes a significant part of the operational costs of a petroleum enterprise, and most of this cost results from drilling. This has prompted drilling departments to continuously look for ways to reduce their drilling costs and be as efficient as possible. A system called the Drilling Comprehensive Information Management and Application System (DCIMAS) is developed and presented here, with the aim of collecting, storing and making full use of the valuable well data and information relating to all drilling activities and operations. The DCIMAS comprises three main parts: a data collection and transmission system, a data warehouse (DW) management system, and an integrated platform of core applications. With the support of the application platform, the DW management system is introduced, whereby operation data are captured at well sites and transmitted electronically to a data warehouse via transmission equipment and ETL (extract, transform and load) tools. With the high quality of the data guaranteed, our central task is to make the best use of the operation data and information for drilling analysis and to provide further information to guide later production stages. Applications have been developed and integrated on a uniform platform that interfaces directly with the different layers of the multi-tier DW. With the system in place, engineers in every department now spend less time on data handling and more time applying technology to their real work.
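To make the ETL path from well site to warehouse concrete, the following minimal sketch extracts hypothetical rig records from a CSV feed, drops incomplete rows in the transform step, and loads the result into an in-memory SQLite database standing in for the DW. The schema, field names and validation rule are assumptions for illustration; the abstract does not disclose DCIMAS internals.

```python
import csv
import sqlite3
from io import StringIO

# Hypothetical well-site operation records, as they might arrive from a rig site.
RAW = """well_id,depth_m,rop_m_per_h,timestamp
W-101,1520.5,12.3,2023-05-01T08:00:00
W-101,1532.8,11.9,2023-05-01T09:00:00
W-102,,8.4,2023-05-01T08:00:00
"""

def extract(stream):
    """Extract: parse raw CSV rows from the transmission feed."""
    return list(csv.DictReader(stream))

def transform(rows):
    """Transform: drop incomplete records and cast numeric fields."""
    clean = []
    for r in rows:
        if not r["depth_m"]:          # reject records missing mandatory fields
            continue
        clean.append((r["well_id"], float(r["depth_m"]),
                      float(r["rop_m_per_h"]), r["timestamp"]))
    return clean

def load(conn, rows):
    """Load: insert validated records into the warehouse fact table."""
    conn.execute("""CREATE TABLE IF NOT EXISTS drilling_ops
                    (well_id TEXT, depth_m REAL, rop REAL, ts TEXT)""")
    conn.executemany("INSERT INTO drilling_ops VALUES (?,?,?,?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(conn, transform(extract(StringIO(RAW))))
print(conn.execute("SELECT COUNT(*) FROM drilling_ops").fetchone()[0])  # -> 2
```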
A seismic assessment of two multi-tier pagodas by numerical analysis is presented herein. The Changu Narayan temple and the Kumbeshwar temple in Nepal are used as the case studies. Both pagodas are built of brick masonry in earthen mortar, with timber columns and crossbeams. The Changu Narayan temple is a two-tier pagoda and was seriously damaged during the 2015 Gorkha earthquake. The Kumbeshwar temple is a five-tier pagoda, and its top tier collapsed in the same earthquake. A seismic assessment was carried out using finite element (FE) analysis: the FE models were prepared, and dynamic identification tests and penetrometer tests were conducted. Pushover analysis and nonlinear dynamic analysis were performed as part of the assessment, with the main shock of the 2015 Gorkha earthquake used as the input accelerogram. The behavior of the two pagodas was compared against the collapse mechanisms and damage patterns observed in the actual structures, and the comparison suggested common structural features of multi-tier pagodas. This study is dedicated to providing a better understanding of the seismic behavior of multi-tier pagoda-type structures and provides suggestions for their effective analysis. (Funding: Grant-in-Aid for Scientific Research (A) from the Japan Society for the Promotion of Science, Grant No. 16H01825.)
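As a hedged illustration of the nonlinear dynamic analysis step, the sketch below integrates an elastic-perfectly-plastic single-degree-of-freedom oscillator under base excitation using the central-difference scheme. The mass, stiffness, damping, yield force and the synthetic 2 Hz pulse are assumed stand-ins, not the temples' FE models or the Gorkha record.

```python
import math

# Elastic-perfectly-plastic SDOF under base excitation, central-difference scheme.
m, k, c, fy = 1000.0, 4.0e5, 800.0, 2.0e3   # kg, N/m, N*s/m, N (assumed values)
dt, n = 0.002, 5000
ag = [0.3 * 9.81 * math.sin(2 * math.pi * 2.0 * i * dt) if i * dt < 2.0 else 0.0
      for i in range(n)]                      # synthetic 2 Hz pulse, not the real record

u_prev, u, up = 0.0, 0.0, 0.0                 # displacement history and plastic offset
peak = 0.0
a0 = m / dt**2 + c / (2 * dt)
b0 = 2 * m / dt**2
c0 = m / dt**2 - c / (2 * dt)
for i in range(n):
    f_trial = k * (u - up)                    # elastic predictor
    if abs(f_trial) > fy:                     # yielding: cap force, update plastic offset
        f = math.copysign(fy, f_trial)
        up = u - f / k
    else:
        f = f_trial
    p = -m * ag[i]                            # effective earthquake force
    u_next = (p - f + b0 * u - c0 * u_prev) / a0
    u_prev, u = u, u_next
    peak = max(peak, abs(u))
print(f"peak displacement: {peak * 1000:.1f} mm")
```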
The current mathematical models for the storage assignment problem are generally established based on the traveling salesman problem (TSP), which has been widely applied in the conventional automated storage and retrieval system (AS/RS). However, these models do not match multi-tier shuttle warehousing systems (MSWS), because the characteristics of parallel retrieval in multiple tiers and progressive vertical movement destroy the foundation of the TSP. In this study, a two-stage open queuing network model, in which shuttles and a lift are regarded as servers at different stages, is proposed to analyze system performance in terms of the shuttle waiting period (SWP) and lift idle period (LIP) during the transaction cycle time. A mean arrival time difference matrix for pairwise stock keeping units (SKUs) is presented to determine the mean waiting time and queue length, so that the storage assignment problem can be optimized on the basis of SKU correlation. The decomposition method is applied to analyze the interactions among outbound task time, SWP, and LIP. An ant colony clustering algorithm is designed to determine storage partitions by clustering items, and goods are then assigned to storage according to the rearranging permutation and combination of storage partitions in a 2D plane, derived from the analysis results of the queuing network model and three basic principles. The storage assignment method and its entire optimization algorithm as applied in an MSWS are verified through a practical engineering project in the tobacco industry. The application results show that the total SWP and LIP can be reduced effectively, improving the utilization rates of all devices and increasing the throughput of the distribution center. (Funding: National Natural Science Foundation of China, Grant No. 661403234; Shandong Provincial Science and Technology Development Plan of China, Grant No. 2014GGX106009.)
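The two-stage server view lends itself to a quick back-of-the-envelope check. The sketch below decomposes the network Jackson-style into independent M/M/1 stations, with per-tier shuttles feeding a single lift, and reports simple proxies for SWP and LIP; all arrival and service rates are assumed rather than taken from the cited project.

```python
# Two-stage open queuing network, Jackson-style decomposition:
# stage 1 = shuttles on each tier, stage 2 = the lift. Parameters are assumed.
lam = 120 / 3600.0          # retrieval requests per second arriving at the system
tiers = 8                   # parallel shuttle tiers; arrivals split evenly
mu_shuttle = 1 / 20.0       # shuttle service rate (one tote every 20 s)
mu_lift = 1 / 6.0           # lift service rate (one tote every 6 s)

def mm1_wait(arrival, service):
    """Mean time in queue for an M/M/1 station: Wq = rho / (mu - lambda)."""
    rho = arrival / service
    assert rho < 1, "station is unstable"
    return rho / (service - arrival)

wq_shuttle = mm1_wait(lam / tiers, mu_shuttle)   # shuttle waiting period (SWP proxy)
rho_lift = lam / mu_lift
lift_idle = 1 - rho_lift                          # fraction of time the lift idles (LIP proxy)
print(f"mean shuttle queue wait: {wq_shuttle:.1f} s, lift idle fraction: {lift_idle:.2%}")
```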
Complex multi-tier applications deployed in cloud computing environments can experience rapid changes in their workloads. To ensure market readiness of such applications, adequate resources need to be provisioned so that the applications can meet the demands of specified workload levels while also meeting service level agreements. Multi-tier cloud applications can have complex deployment configurations with load balancers, web servers, application servers and database servers, and complex dependencies may exist between servers in the various tiers. To support provisioning and capacity planning decisions, performance testing approaches with synthetic workloads are used; the accuracy of such an approach is determined by how closely the generated synthetic workloads mimic realistic workloads. Since multi-tier applications can have varied deployment configurations and characteristic workloads, there is a need for a generic performance testing methodology that allows the performance of applications to be modeled accurately. We propose a methodology for performance testing of complex multi-tier applications. The workloads of multi-tier cloud applications are captured in two different models: a benchmark application model and a workload model. An architecture model captures the deployment configurations of multi-tier applications. We propose a rapid deployment prototyping methodology that can help in choosing the best and most cost-effective deployments for multi-tier applications that meet the specified performance requirements. We also describe a system bottleneck detection approach based on experimental evaluation of multi-tier applications.
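A minimal illustration of the workload-model idea: the sketch below generates synthetic user sessions from a request mix and exponentially distributed think times. The URLs, mix weights and think-time scale are hypothetical placeholders, not the paper's benchmark models.

```python
import random

# Hypothetical workload model: a request mix and exponentially distributed
# think times, standing in for the paper's benchmark/workload models.
REQUEST_MIX = {"/browse": 0.6, "/search": 0.3, "/checkout": 0.1}
MEAN_THINK_S = 4.0

def synth_session(n_requests, rng):
    """Generate one synthetic user session as (think_time, url) pairs."""
    urls, weights = zip(*REQUEST_MIX.items())
    return [(rng.expovariate(1 / MEAN_THINK_S), rng.choices(urls, weights)[0])
            for _ in range(n_requests)]

rng = random.Random(42)
for think, url in synth_session(5, rng):
    print(f"wait {think:4.1f}s then GET {url}")
```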
Data temperature is a response to the ever-growing amount of data. These data have to be stored, but it has been observed that only a small portion of the data is accessed more frequently at any one time. This leads to the concept of hot and cold data: cold data can be migrated away from high-performance nodes to free up performance for higher-priority data. Existing studies classify hot and cold data primarily on the basis of data age and usage frequency. We present this as a limitation of current implementations of data temperature, because age automatically assumes that all new data have priority, and usage is purely reactive. We propose new variables and conditions that enable smarter decisions about what constitutes hot or cold data and allow greater user control over data location and movement. We identify new metadata variables and user-defined variables to extend the current data temperature value. We further establish rules and conditions for limiting unnecessary movement of the data, which helps to prevent wasted input/output (I/O) costs. We also propose a hybrid algorithm that combines the existing variables with the new variables and conditions into a single data temperature. The proposed system provides higher accuracy, increases performance, and gives greater user control for optimal positioning of data within multi-tiered storage solutions.
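One way such a hybrid temperature could look is sketched below: age and access frequency are blended with a user-defined weight and a pin flag into one score, and hysteresis thresholds suppress churn between tiers. The weights, decay scale and thresholds are assumptions, not the paper's calibrated values.

```python
# Hypothetical hybrid data-temperature score: blends age and access frequency
# with a user-defined weight and a pin flag; thresholds add hysteresis so files
# near the boundary do not ping-pong between tiers (wasted I/O).
HOT, COLD = 0.6, 0.4   # promote above HOT, demote below COLD, otherwise stay put

def temperature(age_days, accesses_per_day, user_weight=0.0, pinned=False):
    if pinned:                                 # user override: always hot
        return 1.0
    freq = min(accesses_per_day / 10.0, 1.0)   # saturates at 10 hits/day
    recency = 1.0 / (1.0 + age_days / 30.0)    # decays on a ~30-day scale
    return min(0.5 * freq + 0.3 * recency + 0.2 * user_weight, 1.0)

def placement(current_tier, score):
    """Apply hysteresis: only move data when the score clears a threshold."""
    if score > HOT and current_tier != "ssd":
        return "ssd"
    if score < COLD and current_tier != "hdd":
        return "hdd"
    return current_tier

print(placement("hdd", temperature(age_days=2, accesses_per_day=8)))    # -> "ssd"
print(placement("ssd", temperature(age_days=400, accesses_per_day=0)))  # -> "hdd"
```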
Given the characteristics of the guaranteed handover (GH) algorithm, the finite capacity of a single system makes the blocking probability (PB) of the GH algorithm increase rapidly under high traffic load. Consequently, when large amounts of multimedia services are transmitted via a single low earth orbit (LEO) satellite system, its PB is much higher. To solve this problem, a novel handover scheme based on multi-tier optimal layer selection is proposed. The scheme fully exploits the characteristics of a double-tier satellite network, constituted by LEO satellites combined with medium earth orbit (MEO) satellites, and of the multimedia traffic carried by such a network; it can therefore augment the system capacity and effectively reduce the traffic load on the LEO layer that performs the GH algorithm. The detailed processes are also presented. The simulation and numerical results show that the approach, integrated with the GH algorithm, achieves a significant improvement in PB and practicality compared to a single LEO-layer network.
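The capacity argument can be illustrated with the classical Erlang-B formula, a standard proxy for blocking probability under finite capacity. In the sketch below, offloading part of an LEO spot beam's traffic to an MEO overlay lowers PB at both tiers; the channel counts and traffic figures are illustrative, not the paper's traffic model.

```python
# Erlang-B blocking probability via the standard recurrence:
# B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1)) for offered traffic A Erlangs.
def erlang_b(traffic_erlangs, channels):
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

offered = 40.0  # offered traffic in a busy LEO spot beam (Erlangs, assumed)
print(f"single LEO tier, 45 ch : PB = {erlang_b(offered, 45):.3%}")
# Route part of the load to an MEO overlay: each tier then carries less traffic.
print(f"LEO keeps 30 E, 45 ch  : PB = {erlang_b(30.0, 45):.3%}")
print(f"MEO absorbs 10 E, 20 ch: PB = {erlang_b(10.0, 20):.3%}")
```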
With the multi-tier pricing scheme provided by most cloud service providers (CSPs), cloud users typically select a high enough transmission service level to ensure the quality of service (QoS), due to the severe penalty for missing the transmission deadline. This leads to the so-called over-provisioning problem, which increases the transmission cost of the cloud user. Given that cloud users may not be aware of their traffic demand before accessing the network, the over-provisioning problem becomes even more serious. In this paper, we investigate how to reduce the transmission cost from the perspective of cloud users, especially when they are not aware of their traffic demand before the transmission deadline. The key idea is to split a long-term transmission request into several short ones. By selecting the most suitable transmission service level for each short-term request, a cost-efficient inter-datacenter transmission service level selection framework is obtained. We further formulate the transmission service level selection problem as a linear programming problem and solve it in an online fashion with Lyapunov optimization. We evaluate the proposed approach with real traffic data. The experimental results show that our method can reduce the transmission cost by up to 65.04%. (Funding: National Key Research and Development Program of China, Grant No. 2016YFB1000205; State Key Program of the National Natural Science Foundation of China, Grant No. 61432002; NSFC-Guangdong Joint Fund, Grant No. U1701263; National Natural Science Foundation of China, Grant Nos. 61702365, 61672379 and 61772112; Natural Science Foundation of Tianjin, Grant Nos. 17JCQNJC00700 and 17JCYBJC15500; Special Program of Artificial Intelligence of the Tianjin Municipal Science and Technology Commission, Grant No. 17ZXRGGX00150.)
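The online selection rule can be sketched with a drift-plus-penalty loop in the spirit of Lyapunov optimization: in each short slot, pick the service level minimizing V*price - Q*rate, where Q is the backlog of unsent data and V trades cost against backlog. The tier table, V and the arrival process are assumptions; the paper's exact LP formulation is not reproduced here.

```python
import random

# Drift-plus-penalty sketch of per-slot service-level selection (Lyapunov style).
# TIERS maps a service level to (rate GB/slot, price $/slot); values are assumed.
TIERS = {"bronze": (1.0, 1.0), "silver": (2.5, 3.0), "gold": (5.0, 7.5)}
V = 2.0                      # cost/backlog trade-off knob; larger V favors low cost

rng = random.Random(0)
Q, total_cost = 0.0, 0.0     # backlog of data awaiting transfer (GB)
for slot in range(1000):
    arrivals = rng.uniform(0.0, 3.0)           # demand unknown in advance
    # Choose the level minimizing V*price - Q*rate (drift-plus-penalty rule).
    level = min(TIERS, key=lambda t: V * TIERS[t][1] - Q * TIERS[t][0])
    rate, price = TIERS[level]
    sent = min(Q + arrivals, rate)
    total_cost += price                        # pay for the level reserved this slot
    Q = Q + arrivals - sent
print(f"final backlog {Q:.1f} GB, total cost ${total_cost:.0f}")
```

When the backlog is small the rule sits on the cheap tier, and a growing Q automatically pushes it to faster tiers, which is the intuition behind deadline-safe cost reduction.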
Resource allocation for multi-tier web applications in virtualization environments is one of the most important problems in autonomic computing. On one hand, the more resources that are provisioned to a multi-tier web application, the easier it is to meet service level objectives (SLOs). On the other hand, the virtual machines hosting the application need to be consolidated as much as possible in order to maintain high resource utilization. This paper presents an adaptive resource controller consisting of a feedback utilization controller and an auto-regressive moving average (ARMA)-based model estimator. It can meet application-level quality of service (QoS) goals while achieving high resource utilization. To evaluate the proposed controllers, simulations are performed on a testbed that emulates a virtual data center using Xen virtual machines. Experimental results indicate that the controllers can improve CPU utilization and strike the best trade-off between resource utilization and performance for multi-tier web applications.
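A stripped-down version of such a controller is sketched below: a smoothed demand estimator (an EWMA standing in for the paper's ARMA model) feeds a feedback rule that resizes a VM's CPU cap so that measured utilization tracks a set point. The target, smoothing factor and simulated demand are assumptions.

```python
import random

# Minimal feedback utilization controller: adjust a VM's CPU cap so measured
# utilization tracks a set point, using an EWMA demand predictor in place of
# the ARMA estimator described in the paper.
TARGET = 0.75                 # desired utilization of the allocated capacity
cap, demand_est = 1.0, 0.5    # CPU cores allocated; smoothed demand estimate
rng = random.Random(1)

for step in range(20):
    demand = max(0.1, 0.6 + 0.3 * rng.uniform(-1, 1))  # actual CPU needed (cores)
    used = min(demand, cap)                             # cannot use more than the cap
    util = used / cap
    demand_est = 0.7 * demand_est + 0.3 * used          # EWMA update
    cap = max(demand_est / TARGET, 0.1)                 # size cap so util -> TARGET
    if step % 5 == 0:
        print(f"step {step:2d}: demand {demand:.2f}, cap {cap:.2f}, util {util:.0%}")
```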
Due to the unprecedented rate of transformation in the wireless communication industry, coverage, network power and throughput need to be prioritised as preconditions. In Heterogeneous Networks (HetNets), the inclusion of low-power nodes such as Femto and Pico cells creates a Multi-Tier (M-Tier) network, which is regarded as the most significant strategy for enhancing coverage, throughput and 4G Long Term Evolution (LTE) capability. This work focuses on an Energy Efficiency (EE)-based Carrier Aggregation (CA) scheme for M-Tier 3D Heterogeneous Networks, targeted at streaming real-time bulky data such as images. First, an M-Tier 3D HetNets scheme was built to investigate the Signal to Noise plus Interference Ratio (SNIR) by assessing the combined Pico-tier and Femto-tier interference. Next, the channel allocation scheme is scrutinised so as to estimate the throughput of the multiple tiers. Additionally, using the CA technique, the energy efficiency of the M-Tier 3D Heterogeneous Network (HetNet) was evaluated in relation to energy metrics and throughput under LTE and Wireless Fidelity (Wi-Fi) coexistence. The simulation is carried out in a MATLAB setting, and the outcomes reveal a strong impact on EE; results are reported in terms of EE, transmission time, throughput, packet success rate, convergence probability and coverage region. The simulation analysis shows that improving device output reduces interference among small-cell base stations while increasing EE. The outcomes aid in the effective design of M-Tier 3D HetNets that enhance EE by employing Multi-Stream Carrier Aggregation (MSCA).
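For intuition about the SNIR computation across tiers, the sketch below evaluates a user's SINR from a serving pico cell against macro-, pico- and femto-tier interferers under a simple log-distance path-loss model. Transmit powers, distances, the path-loss exponent and the noise floor are assumed values, not the paper's MATLAB setup.

```python
import math

# SINR at a user served by a small cell, with cross-tier interference from
# macro, pico and femto transmitters. All parameters are assumed values.
def rx_power_dbm(tx_dbm, dist_m, exponent=3.5):
    """Log-distance path loss with a 1 m reference distance."""
    return tx_dbm - 10 * exponent * math.log10(max(dist_m, 1.0))

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

serving = rx_power_dbm(30, 40)                      # serving pico cell: 30 dBm at 40 m
interferers = [rx_power_dbm(46, 300),               # macro tier
               rx_power_dbm(30, 120),               # neighboring pico
               rx_power_dbm(20, 60)]                # nearby femto
noise_mw = dbm_to_mw(-96)                           # thermal noise floor (approx.)
sinr = dbm_to_mw(serving) / (sum(map(dbm_to_mw, interferers)) + noise_mw)
print(f"SINR = {10 * math.log10(sinr):.1f} dB")
```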
Currently, many enterprises have established independent application systems such as CAD, CAPP and CAE. Enterprise information integration connects these information islands and thereby forms a uniform, enterprise-wide information environment. This paper first discusses the main research contents of enterprise integration. The author then introduces an Internet-based, configurable and open information integration framework, and presents a multi-tier integration architecture based on reusable components. Finally, a development case of the enterprise integration framework is introduced.
The cloud computing environment is attracting growing interest as a new trend in data management. Data replication has been widely applied to improve data access in distributed systems such as Grids and Clouds. However, due to the finite storage capacity of each site, copies that are useful for future jobs can be wastefully deleted and replaced with less valuable ones. It is therefore desirable to have an appropriate replication strategy that can dynamically store the replicas while satisfying quality of service (QoS) requirements and storage capacity constraints. In this paper, we present a dynamic replication algorithm named the hierarchical data replication strategy (HDRS). HDRS consists of replica creation, which can adaptively increase replicas based on an exponential growth or decay rate; replica placement, according to the access load and a labeling technique; and replica replacement, based on the future value of a file. We evaluate different dynamic data replication methods using CloudSim simulation. Experiments demonstrate that HDRS can reduce response time and bandwidth usage compared with other algorithms: HDRS can determine a popular file and replicate it to the best site, which avoids useless replications and decreases access latency by balancing the load across sites.
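The exponential growth-or-decay rule for replica creation can be sketched as follows: while a file's access count is rising, the replica count is multiplied by e^GROWTH; while it is falling, by e^-DECAY, clamped to a maximum. The rates, cap and access trace are assumptions, not HDRS's tuned parameters.

```python
import math

# Sketch of HDRS-style replica creation: grow the replica count exponentially
# while a file's popularity rises, decay it when access tails off.
GROWTH, DECAY = 0.8, 0.5     # per-interval growth/decay rates (assumed)
MAX_REPLICAS = 16

def next_replica_count(current, accesses_now, accesses_before):
    if accesses_now > accesses_before:        # popularity rising: exponential growth
        target = current * math.exp(GROWTH)
    else:                                     # popularity falling: exponential decay
        target = current * math.exp(-DECAY)
    return max(1, min(MAX_REPLICAS, round(target)))

replicas, prev = 1, 0
for acc in [5, 12, 30, 25, 9, 2, 1]:          # accesses observed per interval
    replicas = next_replica_count(replicas, acc, prev)
    prev = acc
    print(f"accesses {acc:3d} -> replicas {replicas}")
```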