Cloud Datacenter Network (CDN) providers usually have the option to scale their network structures to provide far more resource capacity, though such scaling may come with exponential costs that contradict their utility objectives. Beyond the cost of the physical assets and network resources, scaling also imposes additional load on the electricity grid to feed the added nodes with the energy required to run and cool them, which carries extra costs too. Thus, CDN providers who utilize their resources better can offer their services at lower price units than those who simply choose to scale. Resource utilization is a quite challenging process; indeed, clients of CDNs tend to exaggerate their true resource requirements when they lease resources. Service providers are committed to their clients through Service Level Agreements (SLAs); therefore, any amendment to the resource allocations must first be approved by the clients. In this work, we propose deploying a Stackelberg leadership framework to formulate a negotiation game between cloud service providers and their client tenants, through which the providers seek to retrieve leased but unused resources from their clients. Cooperation is not expected from the clients, who may ask high price units to return their extra resources to the provider's premises. Hence, to motivate cooperation in such a non-cooperative game, we developed an incentive-compatible pricing model for the returned resources as an extension of Vickrey auctions. Moreover, we also propose a behavior belief function that shapes the negotiation and compensation for each client. Compared with other benchmark models, the assessment results show that the proposed models provide timely negotiation schemes, allowing for better resource utilization rates, higher utilities, and grid-friendly CDNs. Funding: The Deanship of Scientific Research at Hashemite University partially funded this work; the Deanship of Scientific Research at the Northern Border University, Arar, KSA, funded this research through project number "NBU-FFR-2024-1580-08".
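Since the paper's exact pricing rule is not reproduced above, the sketch below only illustrates the classic reverse (procurement) second-price idea that Vickrey-style mechanisms extend: tenants ask a per-unit price for returning unused resources, the provider buys back from the lowest asker but pays the second-lowest ask, which makes truthful asking a dominant strategy. The names (`ClientBid`, `run_reverse_vickrey`) and the figures are hypothetical, not the paper's model.

```python
# Illustrative sketch of a reverse second-price (Vickrey-style) buy-back;
# the paper's extended mechanism and behavior belief function are not reproduced here.
from dataclasses import dataclass

@dataclass
class ClientBid:
    client_id: str
    unused_units: int      # resource units the tenant could return
    ask_per_unit: float    # price the tenant asks per returned unit

def run_reverse_vickrey(bids, units_needed):
    """Buy back up to `units_needed` units from the lowest asker,
    paying the second-lowest ask per unit (Vickrey pricing)."""
    ranked = sorted(bids, key=lambda b: b.ask_per_unit)
    if len(ranked) < 2:
        return None  # a second price needs at least two bidders
    winner, runner_up = ranked[0], ranked[1]
    units = min(units_needed, winner.unused_units)
    return {"winner": winner.client_id,
            "units_returned": units,
            "price_per_unit": runner_up.ask_per_unit}  # second price, not the winner's own ask

bids = [ClientBid("tenant-A", 40, 1.2),
        ClientBid("tenant-B", 25, 0.9),
        ClientBid("tenant-C", 60, 1.5)]
print(run_reverse_vickrey(bids, units_needed=30))
# tenant-B returns 25 units and is paid tenant-A's ask of 1.2 per unit
```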
Science data are very important resources for innovative research in all scientific disciplines. The Ministry of Science and Technology (MOST) of China has launched a comprehensive platform program to support scientific innovation, and the agricultural science database construction and sharing project is one of the activities under this program supported by MOST. This paper briefly describes the achievements of the Agricultural Science Data Center Project. Funding: Supported by the Ministry of Science and Technology "National Science and Technology Platform Program" (2005DKA31800).
PDM (product data management) is a software- and database-based technology that integrates product-related information and processes. This alone, however, is not enough to handle the complexity of PDM in enterprises, so a mechanism to coordinate all kinds of information and processes is needed. This paper introduces a novel approach to implementing intelligent monitoring of PDM based on a MAS (multi-agent system), which carries out the management of information and processes through an MC (monitor center). The paper first presents the architecture of the whole system, then defines the structure of the MC and its interoperation mode.
This research presents a step-by-step guideline for traffic data collection standards set by the Institute of Transportation Engineers (ITE) and the American Association of State Highway and Transportation Officials (AASHTO). This study reviews manual and automatic methods of traffic counting and provides detailed information on traffic volume and vehicle classification studies. This research also provides a detailed analysis of the Delaware Department of Transportation (DelDOT) Transportation Management Center (TMC) website and compares it to selected Department of Transportation websites of other states such as Vermont, Connecticut, New Jersey, Pennsylvania, California, Texas, and Virginia. The purpose of the comparison is to analyze the data sources, user friendliness, accessibility, types of data available, presentation formats, and style for each state to determine how they compare to DelDOT-TMC. Although there were some similarities, the findings suggest that two major differences are present. The overall results revealed that DelDOT-TMC provides limited traffic and roadway weather data, and presentation formats, to the public as compared to the other states. Further, a unitless variable, called the Capacity Factor (Q), has been developed within this study to represent this relative comparison. This study shows that DelDOT-TMC performs well within the group of selected states and better than selected states of similar size and most selected states of larger size; only Virginia performs better than DelDOT-TMC. DelDOT-TMC does not perform as well as selected neighboring states; however, it performs in an acceptable range relative to neighboring states.
Client software on mobile devices that can remotely launch data mining tasks and display their results adds significant value for nomadic users and organizations that need to analyze data stored in repositories far from the site where the users work, allowing them to generate knowledge regardless of their physical location. This paper presents new data analysis methods and new ways to detect people's work locations via mobile computing technology. A growing number of applications, content, and data can be accessed from a wide range of devices, so it becomes necessary to introduce centralized mobile device management (MDM). MDM is a KDE software package working with enterprise systems using mobile devices. The paper discusses the design of the system in detail.
The brokering approach can be successfully used to overcome the crucial problem of searching among the enormous amounts of data (raw and/or processed) produced and stored in different information systems. In this paper, the authors describe the DMS (Data Management System) developed by INGV (Istituto Nazionale di Geofisica e Vulcanologia) to support the GEOSS (Global Earth Observation System of Systems) brokering system adopted for the ARCA (Arctic Present Climate Change and Past Extreme Events) project. The DMS includes heterogeneous data that contribute to the ARCA objective (www.arcaproject.it), focusing on multi-parametric and multi-disciplinary studies of the mechanism(s) behind the release of large volumes of cold and fresh water from the melting of ice caps. The DMS is accessible directly at www.arca.rm.ingv.it, or through the IADC (Italian Arctic Data Center) at http://arcticnode.dta.cnr.it/iadc/gi-portal/index.jsp, which interoperates with the GEOSS brokering system (http://www.geoportal.org), making the search for a specific data set and its URL easy and fast.
The interest in selecting an appropriate cloud data center is increasing rapidly due to the popularity and continuous growth of the cloud computing sector. Cloud data center selection is made harder by ever-increasing user requests and the number of data centers required to execute them. The cloud service broker policy defines how a cloud data center is selected, which is an NP-hard problem requiring an efficient and high-quality solution. The differential evolution algorithm is a metaheuristic characterized by its speed and robustness, and it is well suited to selecting an appropriate cloud data center. This paper presents a cloud service broker policy based on a modified differential evolution algorithm for selecting the most appropriate data center in a cloud computing environment. The differential evolution algorithm is modified using a proposed new mutation technique that enhances performance and provides an appropriate selection of data centers. The superiority of the proposed policy in selecting the most suitable data center is evaluated using the CloudAnalyst simulator, and the results are compared with state-of-the-art cloud service broker policies. Funding: This work was supported by Universiti Sains Malaysia under external grant number 304/PNAV/650958/U154.
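The paper's modified mutation operator is not described above, so the sketch below only illustrates the standard DE/rand/1/bin step that such a broker policy builds on, together with a toy decoding of a real-valued vector into a data center choice. The response-time figures, the `decode` encoding, and the function names are hypothetical assumptions, not the paper's model.

```python
# Minimal sketch of one DE/rand/1/bin generation applied to data center selection;
# the paper's own modified mutation technique is not reproduced here.
import random

def decode(vec):
    """Interpret a real-valued vector as a data center choice: highest score wins."""
    return max(range(len(vec)), key=lambda d: vec[d])

def de_generation(population, cost, F=0.5, CR=0.9):
    """One generation: DE/rand/1 mutation, binomial crossover, greedy selection."""
    dim = len(population[0])
    next_pop = []
    for i, target in enumerate(population):
        r1, r2, r3 = random.sample([p for j, p in enumerate(population) if j != i], 3)
        mutant = [r1[d] + F * (r2[d] - r3[d]) for d in range(dim)]            # mutation
        j_rand = random.randrange(dim)
        trial = [mutant[d] if (random.random() < CR or d == j_rand) else target[d]
                 for d in range(dim)]                                          # crossover
        next_pop.append(trial if cost(trial) <= cost(target) else target)      # selection
    return next_pop

# Toy broker objective (hypothetical numbers): choose the data center with the
# lowest estimated response time.
response_time = [12.0, 7.5, 9.3, 15.1]   # ms per candidate data center
cost = lambda vec: response_time[decode(vec)]

population = [[random.uniform(0.0, 1.0) for _ in response_time] for _ in range(12)]
for _ in range(40):
    population = de_generation(population, cost)
best = min(population, key=cost)
print("chosen data center:", decode(best), "expected response time:", cost(best))
```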
In a data center network (DCN), load balancing is required when servers transfer data over the same path; this is necessary to avoid congestion. Load balancing is challenged by dynamically shifting demands and complex routing control. Because of the distributed nature of a traditional network, previous research on load balancing has mostly focused on improving the performance of the local network, so the load has not been optimally balanced across the entire network. In this paper, we propose a novel dynamic load-balancing algorithm for the fat-tree topology. The algorithm avoids congestion to the greatest possible extent by searching for non-conflicting paths in a centralized way. We implement the algorithm in the popular software-defined networking architecture and evaluate its performance on the Mininet platform. The results show that our algorithm achieves higher bisection bandwidth than the traditional equal-cost multi-path load-balancing algorithm and thus avoids congestion more effectively. Funding: Supported by the National Basic Research Program of China (973 Program) (2012CB315903), the Key Science and Technology Innovation Team Project of Zhejiang Province (2011R50010-05), the National Science and Technology Support Program (2014BAH24F01), the 863 Program of China (2012AA01A507), and the National Natural Science Foundation of China (61379118 and 61103200); sponsored by the Research Fund of ZTE Corporation.
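The exact path-search procedure is not given above; the following is a minimal sketch of the centralized idea it describes: a controller that tracks per-link load and, among a flow's equal-cost fat-tree paths, prefers one whose links carry no existing flows, falling back to the least-loaded path. The `CentralBalancer` class, link labels, and candidate paths are hypothetical.

```python
# Sketch of centralized non-conflicting path selection for equal-cost paths;
# the paper's actual fat-tree search and SDN implementation are not reproduced.
from collections import defaultdict

class CentralBalancer:
    def __init__(self):
        self.link_load = defaultdict(int)  # link -> number of flows placed on it

    def place_flow(self, candidate_paths):
        """candidate_paths: list of paths, each a list of (node_a, node_b) links."""
        def load(path):
            return sum(self.link_load[link] for link in path)
        # Prefer a path with no loaded link (non-conflicting); otherwise least-loaded.
        conflict_free = [p for p in candidate_paths if load(p) == 0]
        chosen = conflict_free[0] if conflict_free else min(candidate_paths, key=load)
        for link in chosen:
            self.link_load[link] += 1
        return chosen

balancer = CentralBalancer()
# Two equal-cost core paths between the same edge switches (hypothetical labels).
paths = [[("e0", "a0"), ("a0", "c0"), ("c0", "a2"), ("a2", "e2")],
         [("e0", "a1"), ("a1", "c1"), ("c1", "a3"), ("a3", "e2")]]
print(balancer.place_flow(paths))   # first flow takes the first free path
print(balancer.place_flow(paths))   # second flow avoids the now-loaded links
```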
Taking a digital library service system as an example, Objects Served Relationship Management (OSRM) in complex systems is first proposed as a new concept, and its connotation is explained. The significance and construction of OSRM are analyzed. The paper clarifies the fundamental fact that the interests of the Objects Served (OS) (e.g., publishers and readers) and of the server (e.g., a digital library serving publishers and readers) will never be completely identical, even though OS and servers share many common benefits. The valuable information that should be used by the OS and their server is often hidden behind them. Thus, finding, managing, and controlling the relationships among the OS and their servers is necessary and important for the common benefit of all of them. The three dimensions of OSRM in a digital library system and its overall framework are proposed, and different strategies for different cases in the digital library's multidimensional framework are analyzed.
This article explores, through a case study, measures of energy efficiency in data processing centers. The analysis of this case demonstrates how design criteria could improve the rate of consumption in IT centers, currently the second most contaminating industry on the planet, responsible for 2% of CO2 emissions and surpassed only by the aeronautical industry. The present and future situation of IT center energy consumption and the associated environmental effects is analyzed, and the article also looks at how state-of-the-art technology, correctly implemented, could ensure significant rationalization of data processing center energy consumption. The article examines optimization techniques, specific problems, and case studies.
In the cloud age, heterogeneous application modes on large-scale infrastructures bring challenges of resource utilization and manageability to data centers. Many resource and runtime management systems have been developed or have evolved to address these challenges and related problems from different perspectives. This paper tries to identify the main motivations, key concerns, common features, and representative solutions of such systems through a survey and analysis. A typical kind of these systems is generalized as the consolidated cluster system, whose design goal is identified as reducing overall costs under a quality-of-service premise. A survey of this kind of system is given, and the critical issues addressed by such systems are summarized as resource consolidation and runtime coordination. These two issues are analyzed and classified according to the design styles and external characteristics abstracted from the surveyed work. Five representative consolidated cluster systems from both academia and industry are illustrated and compared in detail based on the analysis and classifications. We hope this survey and analysis will be conducive to both the design and implementation and the technology selection of this kind of system, in response to the constantly emerging challenges of infrastructure and application management in data centers.
The Crossrail project currently under construction in Central London has been described as "The Big Dig on Steroids", an obvious reference to the Central Artery/Tunnel project in Boston completed in 2007. To address the multiple demands for timely construction performance monitoring, Crossrail envisioned the underground construction information management system (UCIMS) to monitor construction progress and structural health along the entire route, with a network of geotechnical instruments (i.e. slope inclinometers, extensometers, piezometers, etc.) and tunnel boring machine (TBM) position information. The UCIMS is a geospatially referenced relational database developed using an open-source geographic information system (GIS) that gave all stakeholders near-immediate feedback on construction performance. The purpose of this article is to provide a brief history of geotechnical and structural monitoring software, to describe the structure and operation of the UCIMS, and to demonstrate how the functionality afforded by this system provided the requisite feedback to the stakeholders. Examples are given of how the data management and visualization concepts incorporated into the UCIMS advanced the geotechnical construction industry.
Invoice tax affairs sharing, which relies on an invoice tax information management system, is dedicated to providing internal and external customers with specialized, centralized, and shareable management services for invoice tax affairs, with the aims of avoiding tax risk, optimizing the organizational structure, standardizing processes, improving efficiency, reducing operating costs, and supporting tax planning.