Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 61033012, No. 611003177, and No. 61070181, and the Fundamental Research Funds for the Central Universities under Grant No. 1600-852016 and No. DUT12JR07.
Abstract: This paper proposes a PCA and KPCA self-fusion based MSTAR SAR automatic target recognition algorithm. The algorithm combines the linear features extracted by principal component analysis (PCA) with the nonlinear features extracted by kernel principal component analysis (KPCA), and then applies an adaptive feature fusion algorithm based on the weighted maximum margin criterion (WMMC) to fuse the two feature sets for better performance. A linear regression classifier is used in the experiments. The experimental results indicate that the proposed self-fusion algorithm achieves a higher recognition rate than traditional PCA and KPCA feature fusion algorithms.
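For orientation, a minimal sketch of the extract-then-fuse pipeline, using scikit-learn's PCA and KernelPCA. The fixed weight `alpha` is a hypothetical stand-in for the paper's adaptive WMMC-derived weighting, and `RidgeClassifier` stands in for the linear regression classifier; neither is the paper's exact method.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.linear_model import RidgeClassifier

def fuse_features(X_train, X_test, n_components=50, alpha=0.5):
    """Extract linear (PCA) and nonlinear (KPCA) features and fuse them by
    weighted concatenation; `alpha` is a hypothetical fixed fusion weight
    standing in for the adaptive WMMC-based weighting of the paper."""
    pca = PCA(n_components=n_components).fit(X_train)
    kpca = KernelPCA(n_components=n_components, kernel="rbf").fit(X_train)
    fuse = lambda X: np.hstack([alpha * pca.transform(X),
                                (1 - alpha) * kpca.transform(X)])
    return fuse(X_train), fuse(X_test)

# Usage with flattened SAR image chips X_train/X_test and labels y_train/y_test:
# f_train, f_test = fuse_features(X_train, X_test)
# clf = RidgeClassifier().fit(f_train, y_train)   # stand-in linear classifier
# accuracy = clf.score(f_test, y_test)
```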
Funding: Supported by the National Natural Science Foundation of China (No. 41971076, No. 42171128).
Abstract: The moisture content of subgrade soil in seasonally frozen regions is often higher than its optimum value, leading to a decline in mechanical properties and a reduction in subgrade bearing capacity. Electro-osmosis has shown promise as a technology for controlling subgrade moisture, but significant heterogeneity has also been observed in treated soil. This study investigates the impact of electro-osmosis on soil stiffness through a series of bender element tests on compacted clay. The effects of dry density and supply voltage on the performance of electro-osmosis treatment and on the layered structure and anisotropy of the soil were analyzed. The results show that electro-osmosis treatment increased the shear wave velocity of the soil by 140% compared to untreated saturated soil and by 70% compared to soil at optimum water content. Layered compaction produced a layered soil structure, and electro-osmosis had a more prominent impact on soil near the cathode, making this layering more pronounced. Electro-osmosis was also found to enhance soil anisotropy, particularly near the anode. Increasing the dry density and voltage levels can help improve soil uniformity. These findings provide insights into the potential use of electro-osmosis in improving soil stiffness, which could benefit various engineering applications.
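For context, interpreting stiffness from a bender element test rests on two standard relations: the shear wave velocity is the tip-to-tip distance divided by the first-arrival travel time, and the small-strain shear modulus is G0 = ρVs². A minimal sketch follows; the density and velocity values are hypothetical, not the paper's data.

```python
def shear_wave_velocity(tip_to_tip_m, travel_time_s):
    """Shear wave velocity from a bender element test: tip-to-tip
    distance between the elements over the first-arrival travel time."""
    return tip_to_tip_m / travel_time_s

def small_strain_shear_modulus(density_kg_m3, vs_m_s):
    """Small-strain shear modulus G0 = rho * Vs**2, in Pa."""
    return density_kg_m3 * vs_m_s ** 2

# A 140% increase in Vs (factor 2.4) raises G0 by 2.4**2 = 5.76x,
# since G0 scales with the square of the shear wave velocity.
vs_before, vs_after = 150.0, 150.0 * 2.4      # hypothetical values, m/s
print(small_strain_shear_modulus(1800.0, vs_after) /
      small_strain_shear_modulus(1800.0, vs_before))   # -> 5.76
```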
Funding: Supported by the National Key Basic Research and Development (973) Program (Nos. 2012CB315801, 2011CB302805, 2010CB328105, and 2009CB320504), the National Natural Science Foundation of China (Nos. 60932003, 61020106002, and 61161140320), and the Intel Research Council project "Security Vulnerability Analysis based on Cloud Platform with Intel IA Architecture".
Abstract: The energy consumption of large-scale data centers is attracting more and more attention, as rising energy costs make enhanced performance very expensive. This is becoming a bottleneck to further developments in both the scale and the performance of cloud computing. Thus, reducing the energy consumption of data centers has become a key research topic in green IT and green computing. The web servers providing cloud services can run at various speeds for different scenarios. By shifting among these speeds (speed scaling), the energy consumption can be made proportional to the workload, a property termed energy proportionality. This study uses stochastic service decision nets, which combine stochastic Petri nets with Markov decision process models, to investigate energy-efficient speed scaling on web servers. This enables the model to dynamically optimize the speed scaling strategy and to make performance evaluations. The model is graphical and intuitive enough to characterize complicated system behavior and decisions. It is service-oriented, using typical service patterns to reduce the complex model to a simple one with a smaller state space. Performance- and reward-equivalence analyses substantially reduce the system behavior sub-net. The model gives the optimal strategy and evaluates performance and energy metrics concisely.
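A minimal value-iteration sketch of energy-aware speed scaling as a discrete-time Markov decision process, abstracting away the stochastic-Petri-net layer of the paper's model. The queue capacity, arrival probability, service model, and cubic power cost below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

N, GAMMA = 10, 0.95                    # queue capacity, discount factor
SPEEDS = [0.5, 1.0, 2.0]               # available server speeds
P_ARR = 0.6                            # job arrival probability per slot

def p_dep(speed):
    return min(0.9, 0.45 * speed)      # service-completion probability

def q_value(V, q, s):
    """One-step cost plus discounted expected next-state value."""
    c = q + s ** 3                     # holding cost + assumed cubic power cost
    d = p_dep(s) if q > 0 else 0.0
    ev = 0.0
    for arr, pa in ((1, P_ARR), (0, 1 - P_ARR)):
        for dep, pd in ((1, d), (0, 1 - d)):
            ev += pa * pd * V[min(N, max(0, q + arr - dep))]
    return c + GAMMA * ev

V = np.zeros(N + 1)
for _ in range(1000):                  # value iteration to convergence
    V = np.array([min(q_value(V, q, s) for s in SPEEDS) for q in range(N + 1)])

policy = [min(SPEEDS, key=lambda s: q_value(V, q, s)) for q in range(N + 1)]
print(policy)                          # optimal speed per queue length
```

The resulting policy is the intuition behind energy proportionality: the server idles at low speed when the queue is short and ramps up only as the backlog grows.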
Funding: Supported by the National Natural Science Foundation of China (No. 61472199) and the Tsinghua University Initiative Scientific Research Program (No. 20121087999).
Abstract: The rapid advancement of distributed computing systems enables complex services in remote computing clusters. Massive applications with large-scale and disparate characteristics also place high requirements on computing systems. Cloud computing provides a series of novel approaches to meet these new trends and demands. However, some scalability issues have to be addressed in the request scheduling process, and few studies have been conducted to solve these problems. Thus, this study investigates the scalability of the request scheduling process in cloud computing. We provide a theoretical definition of the scalability of this process. By modeling the scheduling server as a stochastic preemptive priority queue, we conduct a comprehensive theoretical and numerical analysis of the scalability metric under different structures and various environment configurations. The comparison and conclusions are expected to shed light on the future design and deployment of the request scheduling process in cloud computing.
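For a feel of the queueing model involved, here is the textbook closed form for per-class mean response times in an M/M/1 queue with preemptive-resume priorities and a common service rate; this is a standard result that such a scalability analysis could build on, not the paper's specific model or parameters.

```python
def mean_response_times(arrival_rates, mu):
    """Mean response time per class in a preemptive-resume M/M/1 queue.
    arrival_rates[k] is the rate of class k (class 0 = highest priority);
    mu is the common service rate. Classic result: E[T_k] =
    (1/mu) / ((1 - R_{k-1}) * (1 - R_k)) with R_k the load of classes <= k."""
    times, load_above = [], 0.0
    for lam in arrival_rates:
        load = load_above + lam / mu
        assert load < 1.0, "queue must be stable"
        times.append((1.0 / mu) / ((1.0 - load_above) * (1.0 - load)))
        load_above = load
    return times

# High-priority requests are insulated from low-priority load, which
# only inflates the response time of the low-priority class.
print(mean_response_times([0.3, 0.4], mu=1.0))   # ~[1.43, 4.76]
```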
Abstract: Content-centric network (CCN) is a new Internet architecture in which content is treated as the primitive of communication. In CCN, routers are equipped with content stores at the content level, which act as caches for frequently requested content. Based on this design, the Internet can provide content distribution services without any application-layer support. In addition, as caches are integrated into routers, the overall performance of CCN is deeply affected by caching efficiency. In this paper, our aim is to gain insight into how caches should be designed to maintain high performance in a cost-efficient way. We model the two-layer cache hierarchy composed of CCN routers as a two-dimensional discrete-time Markov chain, and develop an efficient algorithm to calculate the hit ratios of these caches. Simulations validate the accuracy of our modeling method and convey meaningful information that can help us better understand the caching mechanism of CCN.
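As a concrete companion to the analysis, below is a Monte Carlo estimate of hit ratios in a two-layer LRU cache hierarchy under Zipf-distributed requests: a simulation stand-in for the paper's two-dimensional Markov-chain method. The catalog size, cache sizes, Zipf exponent, and cache-everywhere insertion policy are illustrative assumptions.

```python
import random
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()

    def access(self, key):
        """Return True on a hit; on a miss, cache the key (evicting LRU)."""
        if key in self.store:
            self.store.move_to_end(key)
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)    # evict least-recently used
        self.store[key] = True
        return False

def simulate(n_items=1000, edge_cap=50, core_cap=200, n_reqs=200_000, alpha=0.8):
    weights = [1.0 / (i + 1) ** alpha for i in range(n_items)]  # Zipf popularity
    edge, core = LRUCache(edge_cap), LRUCache(core_cap)
    hits = [0, 0]
    for item in random.choices(range(n_items), weights=weights, k=n_reqs):
        if edge.access(item):
            hits[0] += 1
        elif core.access(item):          # edge miss falls through to the core
            hits[1] += 1
    return hits[0] / n_reqs, hits[1] / n_reqs

print(simulate())   # (edge-layer hit ratio, core-layer hit ratio)
```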
Funding: Supported by the National Key Basic Research and Development (973) Program of China (No. 2010CB328105), the National Natural Science Foundation of China (Nos. 61020106002, 61071065, and 11171368), the China Postdoctoral Science Foundation (No. 2013M540952), the Tsinghua University Initiative Scientific Research Program (No. 20121087999), and SGCC research and development projects.
Abstract: The technology of Ultra-High Voltage (UHV) transmission requires higher dependability from the electric power grid. Power Grid Communication Networking (PGCN), the fundamental information infrastructure, serves data transmission including control signals, protection signals, and common data services. Dependability is a necessary requirement to ensure that services are delivered timely and accurately. Dependability analysis aims to predict operation status and provide suitable strategies to eliminate potential dangers. Because the dependability of the PGCN may be affected by the external environment, device quality, implementation strategies, and so on, its scale explosion and structural complexity make dependability analysis challenging. In this paper, based on the observed interdependency between the power grid and the PGCN, we propose an electricity-services-based dependability analysis model of the PGCN. The model includes methods for analyzing its dependability and procedures for designing dependable strategies. We discuss a deterministic analysis method based on matrix analysis and a stochastic analysis model based on stochastic Petri nets.
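To give the matrix-analysis flavor a concrete shape, here is a minimal sketch: the steady-state availability of a two-state repairable component is computed from its continuous-time Markov chain generator, and components whose services depend on each other in series are combined by multiplication. The failure/repair rates and the series structure are illustrative assumptions, not the paper's PGCN model.

```python
import numpy as np

def steady_state(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1 for a CTMC generator matrix Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])     # stack normalization constraint
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def availability(fail_rate, repair_rate):
    """Long-run probability that a repairable up/down component is up."""
    Q = np.array([[-fail_rate,  fail_rate],
                  [repair_rate, -repair_rate]])
    return steady_state(Q)[0]            # index 0 = the "up" state

# A service needing both a communication link and a relay, in series:
link, relay = availability(0.01, 1.0), availability(0.02, 0.5)
print(link * relay)                      # series dependability estimate
```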
Funding: The authors gratefully acknowledge the anonymous reviewers for their constructive comments. This work was supported in part by the National Basic Research Program of China (973) (Grant Nos. 2010CB328105, 2011CB302703) and the National Natural Science Foundation of China (Grant Nos. 60932003, 61071065, 61020106002).
Abstract: Performance evaluation plays a crucial role in the design of network systems. Many theoretical tools, including queueing theory, effective bandwidth, and network calculus, have been proposed to provide modeling mechanisms and results. While these theories have been widely adopted for performance evaluation, each has its own limitations. As network systems have become more complex and harder to describe, with much uncertainty and randomness, some compromise is often necessary and helpful to make their performance evaluation tractable. Stochastic network calculus (SNC) is such a theoretical tool. While SNC is a relatively new theory, it is gaining increasing interest and popularity. In the current SNC literature, much attention has been paid to the development of the theory itself. In addition, researchers have also started applying SNC to the performance analysis of various types of systems in recent years. The aim of this paper is to provide a tutorial on this new theoretical tool. Specifically, various SNC traffic models and SNC server models are reviewed. The focus is on how to apply SNC, for which four critical steps are formalized and discussed. In addition, a list of SNC application topics/areas with large research potential is presented.
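For intuition, the deterministic special case of the network-calculus recipe is a one-liner: a token-bucket arrival curve α(t) = b + rt and a rate-latency service curve β(t) = R·max(t − T, 0) yield the classic delay bound D = T + b/R when r ≤ R. SNC replaces these fixed curves with probabilistic envelopes, but the four steps (model the traffic, model the server, compose, derive bounds) mirror this calculation; the numbers below are illustrative.

```python
def delay_bound(burst_b, rate_r, service_R, latency_T):
    """Classic deterministic network-calculus delay bound for a
    token-bucket flow (burst b, rate r) through a rate-latency
    server (rate R, latency T): D = T + b/R, valid for r <= R."""
    assert rate_r <= service_R, "bound requires arrival rate <= service rate"
    return latency_T + burst_b / service_R

# One 1500-byte burst at 1 Mb/s through a 10 Mb/s, 1 ms-latency server:
print(delay_bound(burst_b=1500 * 8, rate_r=1e6, service_R=10e6, latency_T=0.001))
```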
Funding: This work was supported in part by the National Basic Research Program of China (973) (Nos. 2010CB328105, 2009CB320504) and the National Natural Science Foundation of China (NSFC) (Grant No. 60932003). We would like to thank the anonymous reviewers for their suggestions that helped us improve this paper.
Abstract: Enterprises build private clouds to provide IT resources for geographically distributed subsidiaries or product divisions. Public cloud providers like Amazon lease their platforms to enterprise users; thus, enterprises can also rent a number of virtual machines (VMs) from the data centers in the service provider's networks. Unfortunately, the network cannot always guarantee stable connectivity for clients to access the VMs or low-latency transfer among data centers. Both latency and bandwidth are usually unstable; affected by background traffic, the network status can be volatile. To reduce the latency uncertainty of client accesses, enterprises should consider the network status when they deploy data centers or rent virtual data centers from cloud providers. In this paper, we first develop a data center deployment and assignment scheme for an enterprise to meet its users' requirements under uncertain network status. To accommodate changes in network status and users' demands, a VM-migration-based redeployment scheme is adopted. These two schemes work jointly and lay out a framework to help enterprises make better use of private or public clouds.
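A toy version of the deployment-and-assignment step: choose k data-center sites and assign each user group to its best site, scoring a site by a pessimistic latency estimate (mean plus a multiple of the standard deviation) to reflect network uncertainty. The sites, latency statistics, exhaustive search, and scoring rule are illustrative assumptions, not the paper's scheme.

```python
import itertools

def choose_sites(lat_mean, lat_std, k, kappa=1.0):
    """Pick k sites minimizing total pessimistic latency over user groups.
    lat_mean[g][s], lat_std[g][s]: latency stats from group g to site s."""
    groups = range(len(lat_mean))
    sites = range(len(lat_mean[0]))
    best = None
    for subset in itertools.combinations(sites, k):  # exhaustive; fine for small inputs
        cost = sum(min(lat_mean[g][s] + kappa * lat_std[g][s] for s in subset)
                   for g in groups)
        if best is None or cost < best[0]:
            best = (cost, subset)
    return best

# Hypothetical latency statistics (ms) for 3 user groups and 3 candidate sites:
lat_mean = [[20, 50, 80], [70, 30, 40], [90, 60, 25]]
lat_std  = [[ 5, 20, 10], [15,  5, 20], [10, 15,  5]]
print(choose_sites(lat_mean, lat_std, k=2))   # (total cost, chosen sites)
```

A redeployment scheme in the paper's spirit would rerun this selection as the statistics drift and migrate VMs toward the newly chosen sites.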
Abstract: Hierarchical abstraction is a scalable strategy for dealing with large networks. Existing visualization methods aggregate network nodes into hierarchies based on either node attributes or network topology, each of which has its own advantages, but very few previous systems combine the best of both worlds. This paper presents OnionGraph, an integrated framework for the exploratory visual analysis of heterogeneous multivariate networks. OnionGraph allows nodes to be aggregated based on node attributes, topology, or a hierarchical combination of both. These aggregations can be split, merged, and filtered under the focus+context interaction model, or automatically traversed by an information-theoretic navigation method. Node aggregations that contain subsets of nodes are displayed with an onion metaphor, indicating the level and detail of the abstraction. We have evaluated the OnionGraph tool in three real-world cases. Performance experiments demonstrate that, on a commodity desktop, our method scales to million-node networks while preserving interactivity for analysis.
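A minimal sketch of the attribute-based aggregation mode: collapse nodes sharing an attribute value into one super-node and accumulate edge multiplicities between groups. The example graph and the "role" attribute are hypothetical, and this omits OnionGraph's topology-based and combined modes.

```python
import networkx as nx
from collections import defaultdict

def aggregate_by_attribute(G, attr):
    """Build a super-graph whose nodes are attribute values; each super-edge
    weight counts the original edges between the two groups."""
    groups = defaultdict(list)
    for node, data in G.nodes(data=True):
        groups[data[attr]].append(node)
    H = nx.Graph()
    for value, members in groups.items():
        H.add_node(value, size=len(members))     # super-node size = group size
    for u, v in G.edges():
        gu, gv = G.nodes[u][attr], G.nodes[v][attr]
        if gu != gv:
            w = H.get_edge_data(gu, gv, {}).get("weight", 0)
            H.add_edge(gu, gv, weight=w + 1)
    return H

G = nx.Graph()
G.add_nodes_from([(1, {"role": "server"}), (2, {"role": "client"}),
                  (3, {"role": "client"}), (4, {"role": "server"})])
G.add_edges_from([(1, 2), (1, 3), (2, 3), (3, 4)])
H = aggregate_by_attribute(G, "role")
print(H.nodes(data=True), H.edges(data=True))
```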
Funding: Supported by the National Natural Science Foundation of China (Grant No. 90412012).
Abstract: A peer-to-peer hierarchical replica location mechanism (PRLM) was designed for data grids to provide better load balancing and scalability. The global replica indexes of the PRLM are organized based on an evenly distributed Chord (ED-Chord) structure, while locality is exploited to optimize queries on the local replica indexes of virtual organizations. The ED-Chord protocol collects node identifier information in a distributed way and assigns optimal identifiers to new nodes so that they are more uniformly distributed over the entire identifier space. Theoretical analysis and simulations show that the PRLM provides good performance, scalability, and load balancing for replica location in data grids.
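A minimal Chord-style ring sketch for orientation: keys and nodes are hashed into the same identifier space, and a key is stored at its successor node. The ED-Chord idea of placing new nodes so that identifiers even out is approximated here by inserting at the midpoint of the largest gap; the paper's actual distributed identifier-collection protocol is not modeled, and the identifier-space size and names are illustrative.

```python
import hashlib
from bisect import bisect_right

M = 16                                   # identifier space of size 2**M (assumed)

def chord_id(name):
    """Hash a node or key name onto the identifier circle."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << M)

def successor(ring, key_id):
    """ring: sorted list of node ids; a key belongs to its clockwise successor."""
    i = bisect_right(ring, key_id)
    return ring[i % len(ring)]           # wrap around the circle

def even_join(ring):
    """ED-Chord-flavored join: place the new node at the midpoint of the
    largest identifier gap, evening out the ring (a simplification)."""
    gaps = [((ring[(i + 1) % len(ring)] - ring[i]) % (1 << M), i)
            for i in range(len(ring))]
    width, i = max(gaps)
    return sorted(ring + [(ring[i] + width // 2) % (1 << M)])

ring = sorted(chord_id(f"node-{i}") for i in range(4))
ring = even_join(ring)                   # new node placed to balance load
print(successor(ring, chord_id("replica-index:fileA")))
```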