Funding: supported in part by the following funding agencies of China: National Natural Science Foundation under Grants 61602050 and U1534201, and National Key Research and Development Program of China under Grant 2016QY01W0200.
Abstract: A virtual data center applies the cloud computing paradigm to the data center. As one of its most important challenges, the virtual data center embedding problem has attracted much attention from researchers. In data centers, energy is a critical concern: data center energy consumption has grown dozens of times over in the last decade. In this paper, we study the cost-aware multi-domain virtual data center embedding problem. To solve it, we first formulate an energy consumption model, comprising models for virtual machine nodes and virtual switch nodes, to quantify the energy consumed during the embedding process. Based on this model, we present a heuristic algorithm for cost-aware multi-domain virtual data center embedding. The algorithm consists of two steps: inter-domain embedding and intra-domain embedding. Inter-domain embedding divides a virtual data center request into several slices and selects an appropriate single data center for each slice; intra-domain embedding then maps each slice within the selected data center. We first propose an inter-domain embedding algorithm based on label propagation to select the appropriate data centers, and then a cost-aware embedding algorithm to perform the intra-domain embedding. Extensive simulation results show that our algorithm effectively reduces energy consumption while maintaining the embedding success ratio.
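The abstract does not spell out the label-propagation rule used for inter-domain slicing; as a rough illustration of the idea only, the sketch below spreads fixed data-center labels from seed nodes of a virtual request graph until the labeling stabilizes (the graph, seed choice, and tie-break rule are all hypothetical, not the paper's algorithm):

```python
from collections import Counter

def propagate_domains(adj, seeds, max_rounds=20):
    """Seeded label propagation: seed labels stay fixed; every other node
    repeatedly adopts the most frequent label among its labelled neighbours
    (synchronous updates, ties broken toward the smallest label name)."""
    labels = dict(seeds)
    for _ in range(max_rounds):
        new = dict(labels)
        for v in adj:
            if v in seeds:
                continue  # seeds never change
            counts = Counter(labels[u] for u in adj[v] if u in labels)
            if counts:
                top = max(counts.values())
                new[v] = min(l for l in counts if counts[l] == top)
        if new == labels:
            break  # converged
        labels = new
    return labels

# Six virtual nodes; two candidate data centers seeded at nodes 0 and 5.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
slices = propagate_domains(adj, seeds={0: "dcA", 5: "dcB"})
```

Each resulting label group would then form one slice to be embedded inside a single data center.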
Abstract: From the viewpoint of systems science, this article takes the Xiaosha River artificial wetland, currently under planning and construction, as its object of study, with the completed and operating Xinxuehe artificial wetland project as a reference. Virtual data for the quantity and quality of inflow and the quality of outflow of the Xiaosha River artificial wetland are constructed from the operating experience, forecasting model, and theoretical methods of the reference project, together with a comparative analysis of the similarities and differences between the two projects. These virtual data are then used to study the building of a BP neural network forecasting model for the Xiaosha River artificial wetland.
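As a minimal illustration of the kind of BP (back-propagation) network the abstract mentions (the wetland model's actual inputs, architecture, and training setup are not given here, so the data and sizes below are toy assumptions), a one-hidden-layer network can be trained on scaled inflow/outflow pairs:

```python
import math
import random

def train_bp(samples, n_hidden=4, lr=0.5, epochs=3000, seed=0):
    """Tiny one-hidden-layer BP network trained by online gradient descent.
    Inputs and target are assumed scaled to [0, 1]."""
    rng = random.Random(seed)
    n_in = len(samples[0][0])
    w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(n_hidden + 1)]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, t in samples:
            xi = x + [1.0]                       # input plus bias
            h = [sig(sum(w * v for w, v in zip(row, xi))) for row in w1]
            hi = h + [1.0]                       # hidden plus bias
            y = sig(sum(w * v for w, v in zip(w2, hi)))
            dy = (y - t) * y * (1.0 - y)         # output-layer delta
            dh = [dy * w2[j] * h[j] * (1.0 - h[j]) for j in range(n_hidden)]
            for j in range(n_hidden + 1):        # update output weights
                w2[j] -= lr * dy * hi[j]
            for j in range(n_hidden):            # update hidden weights
                for i in range(n_in + 1):
                    w1[j][i] -= lr * dh[j] * xi[i]
    def predict(x):
        h = [sig(sum(w * v for w, v in zip(row, x + [1.0]))) for row in w1]
        return sig(sum(w * v for w, v in zip(w2, h + [1.0])))
    return predict

# Toy scaled data: an effluent-quality index as a smooth function of inflow load.
samples = [([x / 4.0], 0.2 + 0.5 * (x / 4.0)) for x in range(5)]
predict = train_bp(samples)
```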
Abstract: The natural mortality coefficient (M) was estimated from fish abundance (N) and catch (C) data using a Virtual Population Analysis (VPA) model. Monte Carlo simulations were used to evaluate the impact of different error distributions in the simulated data on the estimates of M. Among the four error structures (normal, lognormal, Poisson and gamma), normally distributed errors produced the most viable estimates of M, with the lowest relative estimation errors (REEs) and median mean absolute deviations (MADs) for the ratio of the true to the estimated M. In contrast, the lognormal distribution had the largest REE value. Errors with different coefficients of variation (CV) were added to N and C. In general, when the CVs in the data were below 10%, reliable estimates of M were obtained. For the normal and lognormal distributions, the estimates of M were more sensitive to the CVs in N than in C; when only C had error, the estimates were close to the true values. For the Poisson and gamma distributions, the opposite held: the estimates were more sensitive to the CVs in C than in N, with the largest REE in the scenario with error only in C. Two scenarios with high and low fishing mortality coefficients (F) were generated, and the simulations showed that the method performed better under low F. The method was also applied to published data for the anchovy (Engraulis japonicus) of the Yellow Sea. Viable estimates of M were obtained for young age groups; the large uncertainties in N and C observed for older Yellow Sea anchovy introduced large variation in the corresponding estimates of M.
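The abstract does not reproduce the VPA equations; a common VPA building block is Pope's backward cohort approximation, N_t = N_{t+1}·e^M + C_t·e^{M/2}, from which M can be estimated by matching reconstructed to observed abundance. The sketch below (a generic grid search on noise-free data, with all numbers illustrative rather than from the paper) recovers a known M:

```python
import math

def cohort_abundance(catches, m, n_final):
    """Pope's backward cohort approximation:
    N_t = N_{t+1} * exp(M) + C_t * exp(M / 2)."""
    n = [0.0] * (len(catches) + 1)
    n[-1] = n_final
    for t in range(len(catches) - 1, -1, -1):
        n[t] = n[t + 1] * math.exp(m) + catches[t] * math.exp(m / 2)
    return n[:-1]  # abundance at the start of each age

def estimate_m(catches, observed_n, n_final, grid=None):
    """Grid-search the M that best reproduces observed abundance
    (least-squares fit over a coarse grid of candidate values)."""
    grid = grid or [i / 100 for i in range(1, 101)]
    def sse(m):
        pred = cohort_abundance(catches, m, n_final)
        return sum((p - o) ** 2 for p, o in zip(pred, observed_n))
    return min(grid, key=sse)

# Simulate a cohort with true M = 0.30, then recover it from N and C.
true_m = 0.30
catches = [120.0, 90.0, 60.0, 35.0]
n_true = cohort_abundance(catches, true_m, n_final=200.0)
m_hat = estimate_m(catches, n_true, n_final=200.0)
```

Adding noise with chosen CVs to `catches` and `n_true` before estimation mirrors the Monte Carlo design described above.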
Abstract: This paper addresses the problem of selecting a route for every pair of communicating nodes in a virtual circuit data network so as to minimize the average delay encountered by messages. The problem has previously been modeled as a network of M/M/1 queues. A genetic algorithm to solve this problem is presented, and extensive computational results across a variety of networks are reported. These results indicate that the proposed procedure outperforms other methods in the literature and is effective over a wide range of traffic loads.
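As a toy sketch of the approach (the paper's actual encoding, genetic operators, and delay normalization are not given in the abstract, so everything below is an assumption), a chromosome can pick one candidate route per node pair and the fitness can sum M/M/1 link delays λ/(C−λ):

```python
import random

def mm1_delay(assignment, routes, demands, capacity):
    """Average-delay objective for a network of M/M/1 queues:
    sum over links of lambda / (C - lambda), infinite if any link is overloaded."""
    load = {l: 0.0 for l in capacity}
    for k, r in enumerate(assignment):
        for link in routes[k][r]:
            load[link] += demands[k]
    total = 0.0
    for l, lam in load.items():
        if lam >= capacity[l]:
            return float("inf")  # infeasible routing
        total += lam / (capacity[l] - lam)
    return total

def ga_route_selection(routes, demands, capacity, pop=30, gens=60, seed=1):
    """Toy GA: elitist selection, one-point crossover, point mutation.
    A chromosome is a list choosing one candidate route per demand."""
    rng = random.Random(seed)
    n = len(routes)
    fit = lambda ind: mm1_delay(ind, routes, demands, capacity)
    popn = [[rng.randrange(len(routes[k])) for k in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fit)
        elite = popn[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n) if n > 1 else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # mutation
                k = rng.randrange(n)
                child[k] = rng.randrange(len(routes[k]))
            children.append(child)
        popn = elite + children
    return min(popn, key=fit)

# Two demands, each with two candidate routes; link "a" is shared.
routes = [[["a"], ["b"]], [["a"], ["c"]]]
demands = [0.6, 0.6]
capacity = {"a": 1.0, "b": 1.0, "c": 1.0}
best = ga_route_selection(routes, demands, capacity)
```

Here the GA must avoid putting both demands on link "a" (load 1.2 > capacity 1.0); any assignment that splits them achieves the optimal delay of 2 × 0.6/0.4 = 3.0.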
Abstract: Digital transformation has been a cornerstone of business innovation in the last decade, and these innovations have dramatically changed the definition and boundaries of enterprise business applications. The introduction of new products and services, version management of existing ones, management of customer and partner connections, multi-channel service delivery (web, social media, etc.), mergers and acquisitions of new businesses, and the adoption of new innovations and technologies all drive data growth in business applications. These datasets live in separate share-nothing business applications at different locations and in various forms. To make sense of this information and derive insight, it is therefore essential to break the data silos, streamline data retrieval, and simplify information access across the entire organization. The information access framework must support just-in-time processing to bring in data from multiple sources, be fast and powerful enough to transform and process huge amounts of data quickly, and be agile enough to accommodate new data sources as user needs evolve. This paper discusses SAP HANA Smart Data Access, a data-virtualization technology that enables unified access to heterogeneous data across the organization and real-time analysis of large data volumes on the SAP HANA in-memory platform.
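SAP HANA Smart Data Access itself is configured through SQL (remote sources and virtual tables); purely to illustrate the data-virtualization idea in language-neutral terms, the Python sketch below exposes heterogeneous sources behind one query point with on-demand fetching (all names are hypothetical and this is not the HANA API):

```python
class VirtualTable:
    """Toy data-virtualization facade: adapters for heterogeneous sources
    register a fetch callback, and queries federate across all of them
    just-in-time, without copying data into a central store up front."""

    def __init__(self):
        self.sources = {}

    def register(self, name, fetch):
        # fetch: a zero-argument callable returning a list of dict rows
        self.sources[name] = fetch

    def query(self, predicate):
        # Pull rows from every registered source, filter centrally.
        return [row for fetch in self.sources.values()
                for row in fetch() if predicate(row)]

vt = VirtualTable()
vt.register("erp", lambda: [{"id": 1, "region": "EU", "amount": 120}])
vt.register("crm", lambda: [{"id": 2, "region": "US", "amount": 80}])
rows = vt.query(lambda r: r["amount"] > 100)
```

In a real deployment the predicate would be pushed down to the source systems rather than applied after fetching, which is precisely the optimization a production federation layer provides.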
Abstract: A remote data monitoring system based on virtual instruments typically applies data sharing, acquisition, and remote transmission technology over the Internet. It can perform concurrent data acquisition and processing for multiple users and tasks, and build a personalized virtual testing environment that serves more people with fewer instruments. In this paper, we elaborate on the design and implementation of such an information sharing platform through a typical example: building a multi-user concurrent virtual testing environment with the virtual instrumentation software LabVIEW.
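LabVIEW programs are graphical, so the paper's implementation cannot be quoted as text; the fan-out pattern behind "one acquisition, many users" can nevertheless be sketched in Python with a single producer thread and one queue per subscriber (names and structure are illustrative only):

```python
import queue
import threading

class SharedAcquisition:
    """One acquisition loop serving many consumers: each subscriber gets
    its own queue, and every acquired sample is fanned out to all of them,
    so users never contend for the instrument itself."""

    def __init__(self):
        self.subs = []
        self.lock = threading.Lock()

    def subscribe(self):
        q = queue.Queue()
        with self.lock:
            self.subs.append(q)
        return q

    def run(self, samples):
        for s in samples:
            with self.lock:
                for q in self.subs:
                    q.put(s)  # deliver the sample to every subscriber

acq = SharedAcquisition()
a, b = acq.subscribe(), acq.subscribe()
threading.Thread(target=acq.run, args=([1, 2, 3],)).start()
got_a = [a.get(timeout=1) for _ in range(3)]
got_b = [b.get(timeout=1) for _ in range(3)]
```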
Abstract: A virtual community for foreign language learning is a comprehensive learning environment that integrates vocabulary database construction with vocabulary retrieval, using virtual reality technology to construct the language environment for learning. Such a community can improve the sense of linguistic authenticity in foreign language learning and raise the quality of foreign language teaching. This paper proposes a method of building a virtual community for foreign language learning based on data mining. A data acquisition and feature preprocessing model for building the semantic vocabulary is constructed, the linguistic environment characteristics of the semantic vocabulary data are analyzed, and a semantic ontology structure model is obtained. Fuzzy clustering is used for vocabulary clustering and comprehensive retrieval in the community, improving the performance of vocabulary classification; adaptive semantic information fusion realizes vocabulary data mining; and information retrieval and access scheduling for the community are implemented on the basis of the mining results. Simulation results show that the method achieves good vocabulary retrieval accuracy and improves the efficiency of foreign language learning.
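The abstract names fuzzy clustering without specifying the variant; standard fuzzy c-means is one common choice, sketched below on toy 2-D "vocabulary feature" points (the initialization scheme and data are illustrative assumptions, not the paper's):

```python
def fuzzy_cmeans(points, c=2, m=2.0, iters=50):
    """Standard fuzzy c-means: alternate between the membership update
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)) and a weighted-mean center
    update with weights u^m. Points are tuples of equal dimension."""
    # Deterministic init: spread initial centers over the point list.
    centers = [points[i * (len(points) - 1) // (c - 1)] for i in range(c)]
    dim = len(points[0])

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) + 1e-12

    u = []
    for _ in range(iters):
        # Membership update (using squared distances, exponent 1/(m-1),
        # which equals the textbook 2/(m-1) exponent on plain distances).
        u = []
        for x in points:
            d = [dist2(x, ctr) for ctr in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (1.0 / (m - 1)) for j in range(c))
                      for i in range(c)])
        # Center update: weighted mean with weights u^m.
        centers = []
        for i in range(c):
            w = [row[i] ** m for row in u]
            tot = sum(w)
            centers.append(tuple(sum(wk * p[k] for wk, p in zip(w, points)) / tot
                                 for k in range(dim)))
    return centers, u

# Two well-separated blobs of "vocabulary feature" points.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
          (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
centers, u = fuzzy_cmeans(points)
labels = [max(range(2), key=lambda i: row[i]) for row in u]
```

Unlike hard k-means, each word keeps a graded membership in every cluster, which suits retrieval where a term can belong to several semantic groups at once.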
Abstract: Cognitive computing and artificial intelligence (AI) have changed how organizations analyze and use data for decision-making. By harnessing cutting-edge algorithms and machine learning, cognitive computing solutions can translate enormous amounts of data into valuable insights, empowering enterprises to make sound decisions quickly and efficiently. This article explores cognitive computing and AI in decision-making, emphasizing their role in converting untapped data into valuable knowledge. It details the advantages of utilizing these technologies, such as greater productivity, accuracy, and efficiency. By understanding their capabilities and possibilities, businesses can use cognitive computing and AI to gain a competitive edge in today's data-driven world [1].
Funding: supported by the Collaboration Research on Key Techniques of Future Network between China, Japan and Korea (2010DFB13470).
Abstract: This paper proposes a virtual router cluster system based on the separation of the control plane and the data plane, examined from multiple perspectives such as architecture, key technologies, application scenarios, and standardization. To some extent, the virtual cluster simplifies network topology and management, achieves automatic configuration, and conserves IP addresses. It offers a low-cost way to expand the port density of aggregation equipment.
Abstract: Today, we are living in the era of "big data", where massive amounts of data are used for quantitative decision-making and communication management. With the continuous penetration of big-data-based intelligent technology into all fields of human life, the enormous commercial value inherent in the data industry has become a crucial force driving the aggregation of new industries. For the publishing industry, introducing big data and related intelligent technologies, such as data intelligence analysis and scenario services, into its structure and value system has become an effective path to expanding and reshaping the demand space of publishing products, content decisions, the workflow chain, and marketing direction. Through the integration and reconstruction of big data, cloud computing, artificial intelligence, and other related technologies, a generalized publishing industry pattern dominated by virtual interaction is expected to form in the future.