In this study, we introduce our newly developed measurement-fed-perception self-adaption Low-cost UAV Coordinated Carbon Observation Network (LUCCN) prototype. The LUCCN primarily consists of two categories of instruments: ground-based and UAV-based in-situ measurement. We use the GMP343, a low-cost non-dispersive infrared sensor, in both the ground-based and UAV-based instruments. The first integrated measurement campaign took place in Shenzhen, China, on 4 May 2023. During the campaign, we found that LUCCN's UAV component presented significant data-collecting advantages over its ground-based counterpart owing to the relatively high altitudes of the point emission sources, which was especially obvious at a gas power plant in Shenzhen. The emission flux was calculated by a cross-sectional flux (CSF) method, the results of which differed from the Open-Data Inventory for Anthropogenic Carbon dioxide (ODIAC). The CSF result was slightly larger than the others because of the low sampling rate over the whole emission cross section. The LUCCN system will be applied in future carbon monitoring campaigns to increase the spatiotemporal coverage of carbon emission information, especially in scenarios involving the detection of smaller-scale, rapidly varying sources and sinks.
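The cross-sectional flux idea can be sketched numerically: a UAV samples a vertical "curtain" downwind of the source, and the flux is the integral of the CO2 enhancement times the wind component normal to that plane. This is a minimal illustration under assumed values, not the authors' implementation; the grid, wind speeds, and the ppm-to-mass conversion constant are all invented for the example.

```python
# Approximate mass of CO2 per cubic metre per ppm of mole fraction near the
# surface (assumed constant here; real retrievals account for T and p).
PPM_TO_G_PER_M3 = 1.8e-3

def csf_flux(enhancements_ppm, wind_normal_ms, cell_area_m2):
    # Discrete form of Q = integral of (delta_c * u_perp) dA over the curtain:
    # one enhancement and one normal wind speed per grid cell.
    return sum(dc * PPM_TO_G_PER_M3 * u * cell_area_m2
               for dc, u in zip(enhancements_ppm, wind_normal_ms))

# Toy curtain: three cells of 100 m^2 each, downwind of a stack.
q = csf_flux([2.0, 5.0, 1.5], [3.0, 3.2, 2.8], 100.0)
print(round(q, 2))  # 4.72 grams of CO2 per second
```

A coarse flight pattern undersamples the plume cross section, which is exactly the kind of bias the abstract attributes to the slightly high CSF result.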
With the rapid growth of network bandwidth, traffic identification is currently an important challenge for network management and security. In recent years, packet sampling has been widely used in most network management systems. In this paper, in order to improve the accuracy of network traffic identification, sampled NetFlow data are applied to traffic identification, and the impact of packet sampling on the accuracy of the identification method is studied. The study covers feature selection, a metric correlation analysis of application behavior, and a traffic identification algorithm. Theoretical analysis and experimental results show that the significance of behavior characteristics becomes lower in a packet sampling environment. Meanwhile, the correlation analysis yields different trends for different features. However, as long as the number of flows meets the statistical requirement, the feature selection and the correlation degree are independent of the sampling ratio. At high sampling ratios, where less effective information is retained, identification accuracy is much lower than with unsampled packets. Finally, to improve identification accuracy, we propose a Deep Belief Networks Application Identification (DBNAI) method, which achieves better classification performance than other state-of-the-art methods.
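Why sampling degrades per-flow behavior features while leaving some aggregates usable can be shown with a toy example. The sketch below applies deterministic 1-in-N systematic sampling to a synthetic flow; the flow and the sampler are assumptions for illustration, not the NetFlow configuration used in the paper.

```python
def systematic_sample(packets, n):
    # Keep every n-th packet: deterministic 1-in-n systematic sampling.
    return packets[::n]

# Synthetic flow: packet sizes cycle through 40, 740 and 1440 bytes.
flow = [40 + (i % 3) * 700 for i in range(3000)]
sampled = systematic_sample(flow, 100)

full_mean = sum(flow) / len(flow)
samp_mean = sum(sampled) / len(sampled)
# The mean packet size happens to survive sampling here, but the per-flow
# packet count drops 100x, starving short flows of statistical support --
# the "flow number meets the statistical requirement" condition above.
print(len(flow), len(sampled), full_mean, samp_mean)
```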
The revolution of information technology within the past twenty years has dramatically changed the picture of our economy. Numerous new possibilities of communication have allowed competitive advantages for many companies, and even advantageous macroeconomic consequences, to emerge at the national and international level. Through newly developed information technologies, the knowledge base of market participants improves with a concurrent reduction of information-obtaining costs. As a result, considerable competitive advantages develop for those companies acting in E-commerce networks. These advantages of the latest development lead to macroeconomic effects at the national level if the effectiveness- and efficiency-increasing possibilities are used more strongly than in other countries. Positive international effects arise since allocation efficiency is increased through intensified competition between market participants in various countries. This in turn leads to an increase in worldwide prosperity. This causal chain, however, is not yet realistic to its full extent, as such increased transparency of information is not necessarily accepted by all market participants. Otherwise a considerable productivity increase would already have occurred in industrial countries. Overall the question arises whether the changes in the competition situation make single enterprises technically more effective while concurrently deteriorating the efficiency of the entire market through informational asymmetries. To answer these and further questions, and to measure the effectiveness and efficiency of various E-commerce networks, an interdisciplinary analysis platform is to be developed.
With the help of this platform, it should be possible to examine single-firm and macroeconomic questions, reveal temporal connections, and analyse aspects of business management and the national economy, information management, employment policy and fiscal policy. For this, various part-models for the individual knowledge disciplines have to be generated and brought together in the platform. The platform allows various users to make the right decisions (effectiveness) with the help of the developed models and to competently estimate the effects (efficiency). Currently, models of the individual knowledge disciplines (business management, economics, computer science) are being developed within the research project EEE.con. This project deals with questions of Supply Chain Management (SCM) and E-Procurement, with the implementation of inter-organisational information systems, as well as with various market, competition and organisation models. The department of economics and computer science of Prof. Dr.-Ing. habil. W. Dangelmaier particularly deals with the development of an agent-controlled SCM communication model, which is part of the E-commerce analysis platform. Both are described in this paper. Furthermore, a unified modelling language is decided upon within this project in order to allow a prototypic implementation of the analysis tool and to ease the work with other project participants and external parties.
The global Internet is a complex network of interconnected autonomous systems (ASes). Understanding Internet inter-domain path information is crucial for understanding, managing, and improving the Internet. Path information can also help protect user privacy and security. However, due to the complicated and heterogeneous structure of the Internet, path information is not publicly available, and obtaining it is challenging due to the limited measurement probes and collectors. Therefore, inferring Internet inter-domain paths from the limited data is a supplementary approach to measuring them. The purpose of this survey is to provide an overview of techniques for inferring Internet inter-domain paths from 2005 to 2023 and to present the main lessons from these studies. To this end, we summarize the inter-domain path inference techniques based on the granularity of the paths; for each method, we describe the data sources, the key ideas, the advantages, and the limitations. To help readers understand the path inference techniques, we also summarize the background techniques for path inference, such as techniques to measure the Internet, infer AS relationships, resolve aliases, and map IP addresses to ASes. A case study of the existing techniques is also presented to show the real-world applications of inter-domain path inference. Additionally, we discuss the challenges and opportunities in inferring Internet inter-domain paths, the drawbacks of the state-of-the-art techniques, and future directions.
Configuration errors have proved to be a main cause of network interruptions and anomalies. Many researchers have paid attention to configuration analysis and provisioning, but few works focus on understanding configuration evolution. In this paper, we uncover the configuration evolution of an operational IP backbone based on weekly reports gathered from January 2006 to January 2013. We find that rate limiting and launching routes for new customers are configured most frequently. In addition, we conduct an analysis of network failures and find that link failures are their main cause. We suggest configuring redundant links for links that are prone to breaking down. Finally, based on the analysis results, we illustrate how to provide semi-automated configuration for rate limiting and adding customers.
This paper introduces the development of technology and related measures for city power networks in China, looks forward to the prospects of development in the forthcoming decade, and outlines the technical principles to be adopted.
With the advent of large-scale and high-speed IPv6 network technology, effective multi-point traffic sampling is becoming a necessity. A distributed multi-point traffic sampling method that provides an accurate and efficient solution for measuring IPv6 traffic is proposed. The method samples IPv6 traffic based on an analysis of the bit randomness of each byte in the packet header. It offers a way to consistently select the same subset of packets at each measurement point, which satisfies the requirement of distributed multi-point measurement. Finally, using real IPv6 traffic traces, it is shown that the sampled traffic data have good uniformity, satisfy the requirement of sampling randomness, and correctly reflect the packet size distribution of the full packet trace.
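The key property, that every measurement point independently picks the same packet subset, is what hash-based "trajectory sampling" provides. The sketch below uses a hash over invariant header bytes as a stand-in for the paper's header-byte-randomness analysis; the chosen fields and the sampling ratio are assumptions.

```python
import hashlib

def select(header_bytes, ratio=1 / 16):
    # Hash invariant header fields and keep the packet if the hash falls in
    # the lowest `ratio` fraction of the hash space. Any measurement point
    # running this same function selects exactly the same packets.
    h = int.from_bytes(hashlib.sha256(header_bytes).digest()[:2], "big")
    return h < int(65536 * ratio)

pkt = b"\x60\x00\x00\x00 src,dst,flowlabel"  # stand-in for IPv6 header fields
print(select(pkt) == select(pkt))  # True: the decision is deterministic
```

A uniform hash also gives the sampled subset the statistical uniformity the abstract verifies on real traces.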
By analyzing the effect of cross traffic (CT) on packet delay, an improved path capacity measurement method, the pcapminp algorithm, is proposed. With this method, path capacity is measured by filtering probe samples based on the measured minimum packet-pair delay. The measurability of the minimum packet-pair delay is also analyzed by simulation. The results show that, compared with pathrate, pcapminp has similar accuracy when the CT load is light, but is more accurate under heavy CT load. When the CT load reaches 90%, the pcapminp algorithm has only a 5% measurement error, which is 10% lower than that of the pathrate algorithm. At all CT load levels, the probe cost of pcapminp is two orders of magnitude smaller than that of pathrate, and the measurement duration is one order of magnitude shorter.
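The minimum-delay filtering idea fits in a few lines. This is a hedged sketch of the packet-pair principle behind pcapminp, not the published algorithm; the dispersion samples below are invented.

```python
def capacity_bps(packet_bits, dispersions_s):
    # A back-to-back probe pair is serialized by the narrow link, so the
    # smallest observed dispersion approximates L/C; larger dispersions are
    # samples inflated by queuing behind cross traffic and are filtered out
    # simply by taking the minimum.
    return packet_bits / min(dispersions_s)

# 1500-byte probes (12000 bits); two samples hit cross-traffic queues.
samples = [0.0012, 0.0010, 0.0031, 0.0010]
print(capacity_bps(12000, samples))  # about 1.2e7 bit/s
```

Under heavy cross traffic the clean (minimum) samples become rare, which is why the minimum's measurability had to be verified by simulation.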
The main objective of the present study is the development of a new algorithm that can adapt to complex and changeable environments. An artificial fish swarm algorithm is developed which relies on a wireless sensor network (WSN) in a hydrodynamic background. The nodes of this algorithm are viscous fluids and artificial fish, while related 'events' are directly connected to the food available in the related virtual environment. The results show that the total processing time of the data by the source node is 6.661 ms, of which the processing time of crosstalk data is 3.789 ms, accounting for 56.89%. The total processing time of the data by the relay node is 15.492 ms, of which the system scheduling and the Carrier Sense Multiple Access (CSMA) rollback time of the forwarding is 8.922 ms, accounting for 57.59%. The total time for the data processing of the receiving node is 11.835 ms, of which the processing time of crosstalk data is 3.791 ms, accounting for 32.02%, and the serial data processing time is 4.542 ms, accounting for 38.36%. Crosstalk packets occupy a certain amount of system overhead in the internal communication of nodes, which is one of the causes of node-level congestion. We show that optimizing the crosstalk phenomenon can alleviate the internal congestion of nodes to some extent.
Network measurement is an important and widely studied approach to understanding network behaviors. Both the Transmission Control Protocol (TCP) and the Internet Control Message Protocol (ICMP) are applied in network measurement, yet the differences between the results measured with these two protocols have been less investigated. In this paper, to compare TCP and ICMP when they are used to measure host connectivity, RTT, and packet loss rate, two groups of comparison programs were designed, and after careful evaluation of the program parameters, extensive comparison experiments were executed on the Internet. The experimental results show significant differences in the host connectivity measured using TCP versus ICMP; in general, the accuracy of connectivity measured using TCP is 20%-30% higher than that measured using ICMP. The case of RTT and packet loss rate is more complicated, as both are related to path loads and destination host loads, though commonly the RTT and packet loss rate measured using TCP or ICMP are very close. Based on the experimental results, advice is also given on protocol selection for conducting accurate connectivity, RTT, and packet loss rate measurements.
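A minimal TCP-based probe can be written with standard sockets; one practical appeal of TCP probing is that ICMP echo normally requires raw sockets and elevated privileges. This is a sketch rather than the comparison programs used in the paper, and the host and port are placeholders.

```python
import socket
import time

def tcp_rtt(host, port=80, timeout=2.0):
    """Measure one TCP handshake RTT in seconds; return None if the
    host/port is unreachable (which doubles as a crude connectivity test)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None
```

Repeating such probes and comparing them with ICMP echo times from the same vantage point reproduces, in miniature, the head-to-head protocol comparison the paper reports.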
In this paper, an active network measurement platform is proposed which combines hardware and software. Its innovation lies in pairing high-performance hardware with software that is easy to program, thereby retaining software flexibility. By improving the precision of packet timestamps with programmable hardware, it controls packet sending more accurately and supports microsecond packet intervals. We have implemented a model on the NetMagic platform and conducted experiments to analyze the accuracy differences among user-space, kernel, and hardware timestamps.
The Distributed Network Performance Measurement System provides functions to derive performance indices of networks and services, which are significant for a Network Management System. To make these two systems cooperate, we realize a cross-system invocation platform using Web Services, a mechanism that allows two systems to exchange data over the Internet through published interfaces [1]. There are several mature Web Service frameworks, such as Apache Axis2 and Apache CXF. In this paper we choose Apache Axis2 to achieve the objective that the Network Management System can invoke the network performance measurement functions via Web Services.
A new solution combining a GPS network with high-precision EDM distance measurements is proposed; using a GPS network alone, without distance measurements, is inadvisable. Three schemes (a terrestrial network, a GPS network, and a combination network) are discussed in detail for the horizontal control network design of Xiangjiaba Dam in terms of precision, reliability, coordinates, and cost.
Accurate link quality estimation is a fundamental building block in quality-aware multi-hop routing. In an inherently lossy, unreliable and dynamic medium such as wireless, accurate estimation becomes very challenging. Over the years, ETX has been widely used as a reliable link quality estimation metric. More recently, however, it has been established that ETX performance gets significantly worse under heavy traffic loads. We examine the ETX metric's behavior in detail with respect to the MAC layer and UDP data, and identify the causes of its unreliability. Motivated by the observations made in our analysis, we present the design and implementation of our link quality measurement metric xDDR, a variation of ETX. This article extends xDDR to support network mobility. Our experiments show that xDDR substantially outperforms minimum hop count, ETX and HETX in terms of end-to-end packet delivery ratio in static as well as mobile scenarios.
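For reference, ETX for a link is the expected number of transmissions needed for a successful delivery plus acknowledgment, computed from the forward and reverse probe delivery ratios. A minimal sketch follows; the delivery ratios are invented for the example.

```python
def etx(df, dr):
    # df: forward delivery ratio (data direction), dr: reverse delivery
    # ratio (ACK direction), both estimated from periodic broadcast probes
    # over a measurement window. ETX = 1 / (df * dr).
    if df <= 0 or dr <= 0:
        return float("inf")
    return 1.0 / (df * dr)

# A route's metric is the sum of its links' ETX values.
path = [(0.9, 0.8), (1.0, 1.0)]
print(sum(etx(df, dr) for df, dr in path))  # about 2.39 expected transmissions
```

Because the probe-based ratios are measured under low load, ETX can misrepresent a link once heavy traffic changes its loss behavior, the weakness the article's xDDR metric targets.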
In order to classify the Internet traffic of different applications more quickly, two open Internet traffic traces, the Auckland II and UNIBS traces, are employed as study objects. The eight earliest packets with non-zero payload sizes are selected from each flow, and their payload sizes are used as early-stage flow features. Such features can be easily and rapidly extracted at the early flow stage, which makes them outstanding. The behavior patterns of different Internet applications are analyzed by visualizing the early-stage packet size values. The analysis shows that most Internet applications exhibit their own early packet size behavior patterns, so early packet sizes are assumed to carry enough information for effective traffic identification. Three classical machine learning classifiers, namely the naive Bayesian classifier, naive Bayesian trees, and radial basis function neural networks, are used to validate this assumption. The experimental results show that early-stage packet sizes can be used as features for traffic identification.
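The feature idea, an 8-dimensional vector of early payload sizes per flow, can be demonstrated with a deliberately simple stand-in classifier. The paper uses naive Bayes, naive Bayesian trees and RBF networks; the nearest-centroid rule, the centroids, and the sample flows below are all invented for illustration.

```python
def classify(flow_sizes, centroids):
    # Nearest-centroid stand-in: assign the flow to the application whose
    # typical early-payload-size vector is closest in squared distance.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda app: dist(flow_sizes, centroids[app]))

# Assumed "typical" first-eight payload sizes per application (bytes).
centroids = {
    "http": [300, 1460, 1460, 1460, 1460, 800, 400, 200],
    "dns":  [40, 120, 0, 0, 0, 0, 0, 0],
}
print(classify([280, 1400, 1460, 1460, 1200, 700, 350, 180], centroids))  # http
```

The point is only that the 8-value vector separates application behavior patterns well enough that even a trivial distance rule works on clean examples.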
Traffic classification research has long suffered from the difficulty of collecting accurate samples with ground truth. A model named Traffic Labeller (TL) is proposed to solve this problem. The TL system captures all user socket calls and their corresponding application process information in user mode on a Windows host. Once a send call has been captured, its 5-tuple {source IP, destination IP, source port, destination port, transport layer protocol}, associated with its application information, is sent to an intermediate NDIS driver in kernel mode. The intermediate driver then writes application type information into the TOS field of the IP packets that match the 5-tuple. In this way, each IP packet sent from the Windows host carries its application information. Therefore, traffic samples collected on the network are labelled with accurate application information and can be used for training effective traffic classification models.
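The core data structure is a map from a flow's 5-tuple to an application tag that gets stamped into the TOS byte of matching packets. The sketch below mimics that flow-table lookup in plain Python; the tag values, field names and packet dicts are illustrative stand-ins, not the NDIS driver's actual interface.

```python
# Assumed application-to-tag assignment (the paper's encoding may differ).
APP_TAGS = {"chrome.exe": 0x10, "skype.exe": 0x20}

def mark(packet, flow_table):
    # Stamp the application tag into the TOS field of packets whose 5-tuple
    # matches an entry recorded when the socket send call was captured.
    key = (packet["src"], packet["dst"], packet["sport"],
           packet["dport"], packet["proto"])
    if key in flow_table:
        packet["tos"] = flow_table[key]
    return packet

flows = {("10.0.0.2", "1.2.3.4", 5555, 80, "tcp"): APP_TAGS["chrome.exe"]}
pkt = mark({"src": "10.0.0.2", "dst": "1.2.3.4", "sport": 5555,
            "dport": 80, "proto": "tcp", "tos": 0}, flows)
print(hex(pkt["tos"]))  # 0x10
```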
A traffic matrix is an abstract representation of the traffic volume flowing between sets of source and destination pairs. It is a key input to network operations management, planning, provisioning and traffic engineering, and it is also important in the context of OpenFlow-based networks. Because even good measurement systems can suffer from errors and data collection systems can fail, missing values are common. Existing matrix completion methods do not consider traffic characteristics and only provide finite precision. To address this problem, this paper proposes a novel approach based on compressive sensing and traffic self-similarity to reconstruct missing traffic flow data. First, we analyze real-world traffic matrices, which all exhibit low-rank structure, temporal smoothness and spatial self-similarity. Then, we propose the Self-Similarity and Temporal Compressive Sensing (SSTCS) algorithm to reconstruct the missing traffic data. Extensive experiments with real-world traffic matrices show that SSTCS can significantly reduce data reconstruction errors and achieve satisfactory accuracy compared with existing solutions. Typically, SSTCS can successfully reconstruct the traffic matrix with less than 32% error even when as much as 98% of the data is missing.
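The temporal-smoothness property SSTCS exploits can be illustrated with the simplest possible baseline: linear interpolation of one flow's time series across its missing samples. This sketch is a stand-in only; SSTCS itself combines low rank, temporal smoothness and spatial self-similarity via compressive sensing, which this baseline does not attempt.

```python
def fill_missing(series):
    """Fill None entries by linear interpolation over time (edges are held
    at the nearest known value). Assumes at least one known sample."""
    out = list(series)
    known = [i for i, v in enumerate(out) if v is not None]
    for i, v in enumerate(out):
        if v is None:
            left = max((k for k in known if k < i), default=None)
            right = min((k for k in known if k > i), default=None)
            if left is None:
                out[i] = out[right]
            elif right is None:
                out[i] = out[left]
            else:
                w = (i - left) / (right - left)
                out[i] = out[left] * (1 - w) + out[right] * w
    return out

print(fill_missing([10.0, None, None, 40.0]))  # about [10.0, 20.0, 30.0, 40.0]
```

Such per-flow interpolation breaks down at the 98%-missing regime quoted above, which is precisely where structure across flows (low rank, self-similarity) becomes necessary.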
The technique of IP traceback may effectively block DoS (Denial of Service) attacks and meet the requirements of computer forensics, but its accuracy depends on the condition that each node in the Internet supports IP packet marking or deploys detection agents. So far, this requirement is not satisfied. On the basis of traditional traceroute, this paper investigates the efficiency of path discovery methods with respect to the size and order of probe packets and the length of paths. It points out that the size of the padding in probe packets has only a slight effect on discovery latency, and that the latency with bulk send-receive is much smaller than with traditional traceroute. Moreover, the packet loss rate with monotonically increasing TTL (Time To Live) is less than with monotonically decreasing TTL. Lastly, the OS (Operating System) passive fingerprint is used as a heuristic to predict the length of the discovered path so as to reduce disturbance to network traffic.
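The OS-fingerprint heuristic for predicting path length rests on the fact that operating systems start TTL at a few well-known initial values, so the gap between the assumed initial TTL and the TTL observed at the receiver estimates the hop count. A sketch under that assumption (the initial-TTL set and example values are illustrative):

```python
def hop_estimate(received_ttl):
    # Common initial TTLs are assumed to be 64, 128 or 255. The smallest
    # initial value not below the received TTL is the likely origin, and
    # the difference is the path length in hops, so probing can start near
    # the predicted length instead of sweeping every TTL.
    initial = min(t for t in (64, 128, 255) if t >= received_ttl)
    return initial - received_ttl

print(hop_estimate(52))   # 12 hops from a TTL=64 sender (e.g. Linux)
print(hop_estimate(115))  # 13 hops from a TTL=128 sender (e.g. Windows)
```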
The influence of the clock resolution of the bandwidth estimator on the accuracy and stability of the packet pair algorithm is analyzed. A mathematical model is established to reveal the relationship between the result deviation coefficient and the packet size, clock resolution and real bandwidth of the measured route. A bandwidth self-adapting packet pair algorithm is presented, based on this model, to reduce the estimation error resulting from the clock resolution and to improve the accuracy and stability of measurement by adjusting the deviation coefficient. Experimental results have verified the validity and stability of the algorithm.
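The clock-resolution effect can be made concrete: the estimator only sees the packet-pair gap rounded to whole clock ticks, so the reported bandwidth L/Δ inherits a quantization error that shrinks as the true gap grows relative to the tick, which is why enlarging the packet helps. All numbers below are invented for illustration; this models the effect, not the paper's algorithm.

```python
def measured_bandwidth(packet_bits, true_bw_bps, clock_res_s):
    # The true dispersion L/C is quantized to whole clock ticks (at least
    # one) before the estimator computes L / dispersion.
    true_gap = packet_bits / true_bw_bps
    ticks = max(1, round(true_gap / clock_res_s))
    return packet_bits / (ticks * clock_res_s)

# 100 Mbit/s link, 1500-byte probe (12000 bits): the true gap is 120 us.
print(measured_bandwidth(12000, 1e8, 1e-6))  # fine 1 us clock: about 1e8
print(measured_bandwidth(12000, 1e8, 1e-3))  # coarse 1 ms clock: about 1.2e7
```

With the coarse clock the 120 us gap rounds to a full millisecond, so the estimate collapses by nearly an order of magnitude; making the probe large enough that the gap spans many ticks restores accuracy.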
Funding (LUCCN study): supported by the National Key Research and Development Plan (Grant No. 2021YFB3901000), the Chinese Academy of Sciences Project for Young Scientists in Basic Research (YSBR-037), the International Partnership Program of the Chinese Academy of Sciences (060GJHZ2022070MI), the MOST-ESA Dragon-5 Programme for Monitoring Greenhouse Gases from Space (ID.59355), and the Finland–China Mobility Cooperation Project funded by the Academy of Finland (No. 348596).
Funding (traffic identification study): supported by the Key Scientific and Technological Research Projects in Henan Province (Grant No. 192102210125), the Key Scientific Research Projects of Colleges and Universities in Henan Province (23A520054), and the Open Foundation of the State Key Laboratory of Networking and Switching Technology (Beijing University of Posts and Telecommunications) (SKLNST-2020-2-01).
Funding (inter-domain path survey): supported by the China Postdoctoral Science Foundation (2023TQ0089), the National Natural Science Foundation of China (Nos. 62072465, 62172155), and the Science and Technology Innovation Program of Hunan Province (Nos. 2022RC3061, 2023RC3027).
Funding: supported by the National Natural Science Foundation of China under Grant Nos. 61602105 and 61572123, the China Postdoctoral Science Foundation under Grant No. 2016M601323, the Fundamental Research Funds for the Central Universities Project under Grant No. N150403007, and the CERNET Innovation Project under Grant No. NGII20160126.
Abstract: Configuration errors have proved to be the main reasons for network interruptions and anomalies. Many researchers have devoted attention to configuration analysis and provisioning, but few works focus on understanding configuration evolution. In this paper, we uncover the configuration evolution of an operational IP backbone based on weekly reports gathered from January 2006 to January 2013. We find that rate limiting and launching routes for new customers are configured most frequently. In addition, we conduct an analysis of network failures and find that link failures are their main cause. We suggest configuring redundant links for those links that are prone to failure. Finally, based on the analysis results, we illustrate how to provide semi-automated configuration for rate limiting and adding customers.
Abstract: This paper introduces the development of technology and related measures for city power networks in China, looks forward to the prospects of development in the forthcoming decade, and outlines the technical principles to be adopted.
Funding: This project was supported by the National Natural Science Foundation of China (60572147, 60132030).
Abstract: With the advent of large-scale and high-speed IPv6 network technology, effective multi-point traffic sampling is becoming a necessity. A distributed multi-point traffic sampling method that provides an accurate and efficient solution for measuring IPv6 traffic is proposed. The method samples IPv6 traffic based on an analysis of the bit randomness of each byte in the packet header. It offers a way to consistently select the same subset of packets at each measurement point, which satisfies the requirement of distributed multi-point measurement. Finally, using real IPv6 traffic traces, it is shown that the sampled traffic data have good uniformity, satisfy the requirement of sampling randomness, and correctly reflect the packet size distribution of the full packet trace.
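The consistent multi-point selection described in this abstract can be sketched with a hash-based filter: hashing invariant header bytes gives every measurement point the same keep/drop decision for the same packet. The function below is only an illustrative sketch of that idea; the paper's actual selection works on the bit randomness of individual header bytes, not necessarily on a cryptographic hash.

```python
import hashlib

def select_packet(header_bytes: bytes, sample_ratio: float = 0.01) -> bool:
    """Deterministic sampling decision for one packet.

    Hashing invariant header bytes means every measurement point that
    sees the same packet makes the same decision, which is the core
    requirement of distributed multi-point sampling.
    """
    digest = hashlib.sha1(header_bytes).digest()
    # Map the first 4 digest bytes to a number in [0, 1).
    value = int.from_bytes(digest[:4], "big") / 2**32
    return value < sample_ratio
```

Because the decision depends only on the packet's own bytes, no coordination between measurement points is needed, and the fraction of kept packets converges to `sample_ratio` over many packets.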
Funding: Projects 60473031 and 60673155 supported by the National Natural Science Foundation of China; Project 2005AA121560 supported by the High-Tech Research and Development Program of China.
Abstract: By analyzing the effect of cross traffic (CT) on packet delay, an improved path capacity measurement method, the pcapminp algorithm, is proposed. With this method, path capacity is measured by filtering probe samples based on the measured minimum packet-pair delay. The measurability of the minimum packet-pair delay is also analyzed by simulation. The results show that, compared with pathrate, both algorithms have similar accuracy when the CT load is light, but under heavy CT load pcapminp is more accurate. When the CT load reaches 90%, the pcapminp algorithm has only 5% measurement error, which is 10% lower than that of the pathrate algorithm. At any CT load level, the probe cost of pcapminp is two orders of magnitude smaller than that of pathrate, and the measurement duration is one order of magnitude shorter.
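The core filtering step of such a minimum-delay approach can be sketched as follows. This is a hypothetical simplification of pcapminp: probes whose pair delay is close to the observed minimum are assumed to have crossed the path with little cross-traffic interference, and capacity is taken as packet size over the median dispersion of the retained probes (the paper's exact filtering rule may differ).

```python
def path_capacity(samples, packet_size_bits, tolerance=0.05):
    """Estimate path capacity (bit/s) from packet-pair probes.

    samples: list of (pair_delay_s, dispersion_s) tuples, one per probe.
    Probes whose delay exceeds the minimum observed delay by more than
    `tolerance` are discarded as cross-traffic-distorted; capacity is
    the packet size divided by the median dispersion of the survivors.
    """
    min_delay = min(delay for delay, _ in samples)
    kept = sorted(disp for delay, disp in samples
                  if delay <= min_delay * (1 + tolerance))
    median_disp = kept[len(kept) // 2]
    return packet_size_bits / median_disp
```

The intuition is that cross traffic can only queue probes and enlarge both delay and dispersion, so the minimum-delay probes are the least distorted ones.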
Funding: financially supported by the Natural Science Foundation of Heilongjiang Province of China (Grant No. LH2019F042).
Abstract: The main objective of the present study is the development of a new algorithm that can adapt to complex and changeable environments. An artificial fish swarm algorithm is developed which relies on a wireless sensor network (WSN) in a hydrodynamic background. The nodes of this algorithm are viscous fluids and artificial fish, while related 'events' are directly connected to the food available in the virtual environment. The results show that the total processing time of data at the source node is 6.661 ms, of which the processing time of crosstalk data is 3.789 ms, accounting for 56.89%. The total processing time at the relay node is 15.492 ms, of which system scheduling and the Carrier Sense Multiple Access (CSMA) rollback time of forwarding is 8.922 ms, accounting for 57.59%. The total data processing time at the receiving node is 11.835 ms, of which the processing time of crosstalk data is 3.791 ms, accounting for 32.02%, and the serial data processing time is 4.542 ms, accounting for 38.36%. Crosstalk packets occupy a certain amount of system overhead in the internal communication of nodes, which is one of the causes of node-level congestion. We show that optimizing the crosstalk phenomenon can alleviate the internal congestion of nodes to some extent.
Funding: This work was financially supported by the National Natural Science Foundation of China under Grants 60273070 and 60403031, and the National High-Technology (863) Program under Grant 2005AA121560.
Abstract: Network measurement is an important approach to understanding network behaviors and has been widely studied. Both the Transmission Control Protocol (TCP) and the Internet Control Message Protocol (ICMP) are applied in network measurement, while the differences between the results measured with these two protocols form an important but less investigated topic. In this paper, to compare the differences between TCP and ICMP when measuring host connectivity, RTT, and packet loss rate, two groups of comparison programs were designed, and after careful evaluation of the program parameters, extensive comparison experiments were executed on the Internet. The experimental results show significant differences between host connectivity measured using TCP and ICMP; in general, the accuracy of connectivity measured using TCP is 20%-30% higher than that measured using ICMP. The case of RTT and packet loss rate is more complicated, as both are related to path loads and destination host loads, but commonly the RTT and packet loss rate measured using TCP or ICMP are very close. Based on the experimental results, advice is also given on protocol selection for conducting accurate connectivity, RTT, and packet loss rate measurements.
Funding: Supported by the National High Technology Research and Development Programme of China (No. 2007AA01Z416) and the "New Start" Academic Research Projects of Beijing Union University (No. ZK201204).
Abstract: In this paper, an active network measurement platform is proposed that combines hardware and software. Its innovation lies in pairing high-performance hardware with easily programmable software, thereby retaining software flexibility. By improving packet timestamp precision with programmable hardware, it controls packet sending more accurately and supports microsecond packet intervals. We have implemented a model on the NetMagic platform and conducted experiments to analyze the accuracy differences among user-space, kernel, and hardware timestamps.
Abstract: The Distributed Network Performance Measurement System provides functions to derive performance indices of networks and services, which are significant for a Network Management System. To make these two systems cooperate, we realize a cross-system invocation platform using Web Services, a mechanism that allows two systems to exchange data over the Internet through published interfaces [1]. There are several mature Web Service frameworks, such as Apache Axis2 and Apache CXF. In this paper we choose Apache Axis2 to achieve the objective that the Network Management System can invoke the network performance measurement functions via Web Services.
Funding: Supported by the National 973 Program of China (No. 2003CB716705) and the International Cooperative Fund of the European Union (No. EVGI-CT-2002-00061).
Abstract: A new solution combining a GPS network with high-precision EDM distance measurements is proposed; using a GPS network alone without distance measurements is inadvisable. Three schemes, a terrestrial network, a GPS network, and a combination network, are discussed in detail for the horizontal control network design of the Xiangjiaba Dam with respect to precision, reliability, coordinates, and cost.
Abstract: Accurate link quality estimation is a fundamental building block in quality-aware multi-hop routing. In an inherently lossy, unreliable, and dynamic medium such as wireless, accurate estimation becomes very challenging. Over the years, ETX has been widely used as a reliable link quality estimation metric. However, it has more recently been established that ETX performance gets significantly worse under heavy traffic loads. We examine the ETX metric's behavior in detail with respect to the MAC layer and UDP data, and identify the causes of its unreliability. Motivated by the observations made in our analysis, we present the design and implementation of our link quality measurement metric xDDR, a variation of ETX. This article extends xDDR to support network mobility. Our experiments show that xDDR substantially outperforms minimum hop count, ETX, and HETX in terms of end-to-end packet delivery ratio in static as well as mobile scenarios.
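ETX itself has a simple closed form: with forward delivery ratio df and reverse delivery ratio dr, a data packet is successfully acknowledged with probability df·dr, so the expected number of transmissions is 1/(df·dr), and a route's metric is the sum over its links. A minimal sketch:

```python
def etx(df: float, dr: float) -> float:
    """Expected transmission count of one link.

    df: forward delivery ratio (data packet reaches the neighbour),
    dr: reverse delivery ratio (the ACK makes it back).  A transmission
    succeeds with probability df * dr, so the expected number of
    attempts until success is 1 / (df * dr).
    """
    return 1.0 / (df * dr)

def route_etx(links):
    """Route metric: sum of per-link ETX values along the path."""
    return sum(etx(df, dr) for df, dr in links)
```

A perfect link has ETX 1, while a link that loses half its data packets has ETX 2, which is why the metric steers routing away from lossy links even at the cost of extra hops.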
Funding: The Program for New Century Excellent Talents in University (No. NCET-11-0565); the Fundamental Research Funds for the Central Universities (Nos. K13JB00160, 2012JBZ010, 2011JBM217); the Ph.D. Programs Foundation of the Ministry of Education of China (No. 20120009120010); the Program for Innovative Research Team in University of the Ministry of Education of China (No. IRT201206); and the Natural Science Foundation of Shandong Province (Nos. ZR2012FM010, ZR2011FZ001).
Abstract: In order to classify the Internet traffic of different Internet applications more quickly, two open Internet traffic traces, the Auckland II and UNIBS traces, are employed as study objects. The eight earliest packets with non-zero flow payload sizes are selected, and their payload sizes are used as the early-stage flow features. Such features can be easily and rapidly extracted at the early flow stage, which makes them outstanding. The behavior patterns of different Internet applications are analyzed by visualizing the early-stage packet size values. The analysis results show that most Internet applications exhibit their own early packet size behavior patterns. Early packet sizes are assumed to carry enough information for effective traffic identification. Three classical machine learning classifiers, i.e., the naive Bayesian classifier, naive Bayesian trees, and radial basis function neural networks, are used to validate this assumption. The experimental results show that early-stage packet sizes can be used as features for traffic identification.
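The feature extraction step described above is straightforward to sketch: take the payload sizes of the first eight packets that actually carry data, zero-padding short flows. The nearest-centroid classifier below is only an illustrative placeholder for the naive Bayes, NB-tree, and RBF classifiers used in the paper, and the centroid values in the usage test are invented for the example.

```python
def early_flow_features(payload_sizes, n=8):
    """Payload sizes of the first n non-empty packets of a flow.

    These values are available at the very start of a flow, which is
    what makes them usable for *early* traffic identification.
    Flows shorter than n data packets are zero-padded.
    """
    sizes = [s for s in payload_sizes if s > 0][:n]
    return sizes + [0] * (n - len(sizes))

def classify(features, centroids):
    """Toy stand-in classifier: pick the class whose mean early-packet
    size vector is closest in squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(features, centroids[label]))
```

In practice the centroids (or the trained classifier replacing them) would be learned from labelled flows of each application.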
Funding: This research was partially supported by the National Basic Research Program of China (973 Program) under Grant No. 2011CB302605; the National High Technology Research and Development Program of China (863 Program) under Grant No. 2012AA012502; the National Key Technology Research and Development Program of China under Grant No. 2012BAH37B00; the Program for New Century Excellent Talents in University under Grant No. NCET-10-0863; the National Natural Science Foundation of China under Grants No. 61173078, No. 61203105, No. 61173079, No. 61070130, No. 60903176; and the Provincial Natural Science Foundation of Shandong under Grants No. ZR2012FM010, No. ZR2011FZ001, No. ZR2010FM047, No. ZR2010FQ028, No. ZR2012FQ016.
Abstract: Traffic classification research has suffered from the difficulty of collecting accurate samples with ground truth. A model named Traffic Labeller (TL) is proposed to solve this problem. The TL system captures all user socket calls and their corresponding application process information in user mode on a Windows host. Once a data-sending call has been captured, its 5-tuple {source IP, destination IP, source port, destination port, transport layer protocol}, associated with its application information, is sent to an intermediate NDIS driver in kernel mode. The intermediate driver then writes the application type information into the TOS field of the IP packets that match the 5-tuple. In this way, each IP packet sent from the Windows host carries its application information. Therefore, traffic samples collected on the network are labelled with accurate application information and can be used for training effective traffic classification models.
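The driver-level marking described above can be approximated in user space by setting the TOS byte on a socket, which makes outgoing IP packets from that socket carry the label on the wire. The sketch below uses the standard IP_TOS socket option; the paper's implementation instead does the marking inside an NDIS intermediate driver, and the application-to-identifier mapping here is invented for the example.

```python
import socket

def mark_app_id(sock: socket.socket, app_id: int) -> int:
    """Write a one-byte application identifier into the IP TOS field of
    every packet this socket sends, then read the option back.

    A user-space approximation of TL's kernel-mode marking; the mapping
    from application to app_id is hypothetical.
    """
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, app_id & 0xFF)
    return sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
```

A collector on the network can then recover the application label of each packet simply by reading its TOS byte, which is exactly how TL's samples get their ground truth.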
Funding: This work is supported by the Prospective Research Project on Future Networks of the Jiangsu Future Networks Innovation Institute under Grant No. BY2013095-1-05, the National Basic Research Program of China (973) under Grant No. 2012CB315805, and the National Natural Science Foundation of China under Grant No. 61173167.
Abstract: A traffic matrix is an abstract representation of the traffic volume flowing between sets of source and destination pairs. It is a key input parameter of network operations management, planning, provisioning, and traffic engineering, and it is also important in the context of OpenFlow-based networks. Because even good measurement systems can suffer from errors and data collection systems can fail, missing values are common. Existing matrix completion methods do not consider traffic characteristics and only provide finite precision. To address this problem, this paper proposes a novel approach based on compressive sensing and traffic self-similarity to reconstruct missing traffic flow data. First, we analyze real-world traffic matrices, which all exhibit a low-rank structure, temporal smoothness, and spatial self-similarity. Then, we propose the Self-Similarity and Temporal Compressive Sensing (SSTCS) algorithm to reconstruct the missing traffic data. Extensive experiments with real-world traffic matrices show that our proposed SSTCS can significantly reduce data reconstruction errors and achieve satisfactory accuracy compared with existing solutions. Typically, SSTCS can successfully reconstruct the traffic matrix with less than 32% error when as much as 98% of the data is missing.
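Of the three structural properties SSTCS exploits, temporal smoothness is the easiest to illustrate in isolation: within one origin-destination flow, a missing sample can be filled from its temporal neighbours. The sketch below does this by linear interpolation; it is not the SSTCS algorithm (which combines all three priors inside a compressive-sensing formulation), only a minimal illustration of the temporal-smoothness idea.

```python
def fill_missing(series):
    """Fill missing samples (None) in one flow's time series by linear
    interpolation, holding the edges flat.  Illustrates the temporal-
    smoothness prior only, not the full SSTCS recovery.
    """
    known = [i for i, v in enumerate(series) if v is not None]
    if not known:
        raise ValueError("no observed samples to interpolate from")
    out = list(series)
    for i in range(known[0]):                    # before first observation
        out[i] = series[known[0]]
    for i in range(known[-1] + 1, len(series)):  # after last observation
        out[i] = series[known[-1]]
    for left, right in zip(known, known[1:]):    # interior gaps
        step = (series[right] - series[left]) / (right - left)
        for i in range(left + 1, right):
            out[i] = series[left] + step * (i - left)
    return out
```

The low-rank and spatial self-similarity priors add the cross-flow information that lets SSTCS keep working even when 98% of the matrix is missing, which pure per-flow interpolation cannot do.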
Abstract: The technique of IP traceback may effectively block DoS (Denial of Service) attacks and meet the requirements of computer forensics, but its accuracy depends on the condition that each node in the Internet supports IP packet marking or deploys detection agents. So far, this requirement is not satisfied. On the basis of traditional traceroute, this paper investigates the efficiency of path discovery methods with respect to the size and order of probe packets and the length of paths. It points out that the size of the padding in probe packets has only a slight effect on discovery latency, and that the latency with bulk sending and receiving is much smaller than with traditional traceroute. Moreover, the packet loss rate with monotonically increasing TTL (Time To Live) values is lower than with monotonically decreasing TTL values. Lastly, the OS (Operating System) passive fingerprint is used as a heuristic to predict the length of the discovered path so as to reduce disturbance to network traffic.
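The passive-fingerprint heuristic in the last step rests on a simple observation: operating systems send packets with a small set of default initial TTLs (commonly 64, 128, or 255), so the hop count to a host can be estimated from the TTL seen in one of its replies, and the probe TTLs can then be bounded accordingly. A sketch, assuming those three common defaults:

```python
OS_INITIAL_TTLS = (64, 128, 255)  # common OS defaults (passive fingerprint)

def estimate_hops(received_ttl: int) -> int:
    """Estimate path length as the smallest plausible initial TTL
    minus the TTL observed in the reply."""
    for initial in OS_INITIAL_TTLS:
        if received_ttl <= initial:
            return initial - received_ttl
    raise ValueError("TTL above any known OS default")

def probe_ttls(received_ttl: int, margin: int = 3):
    """Monotonically increasing TTL plan for bulk path discovery,
    bounded by the predicted path length plus a safety margin."""
    return list(range(1, estimate_hops(received_ttl) + margin + 1))
```

Bounding the probe plan this way avoids sending probes far beyond the destination, which is the traffic disturbance the heuristic is meant to reduce.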
Funding: This work is supported by the 973 Project (National Keystone Foundation Research Project, No. G199903271), the National Natural Science Foundation of China (No. 90104022), and the National High Technology Development Program of China (No. 2001AA112120, No. 2002AA104550).
Abstract: The influence of the clock resolution of a bandwidth estimator on the accuracy and stability of the packet-pair algorithm is analyzed. A mathematical model has been established to reveal the relationship between the result deviation coefficient and the packet size, clock resolution, and real bandwidth of the measured route. A bandwidth self-adapting packet-pair algorithm is presented based on this model to reduce the estimation error resulting from the clock resolution and to improve the accuracy and stability of measurement by adjusting the deviation coefficient. Experimental results have verified the validity and stability of the algorithm.
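The effect the deviation model captures can be reproduced in a few lines: the packet-pair dispersion is quantized to the estimator's clock resolution, so a coarse clock inflates the dispersion and deflates the bandwidth estimate. The sketch below assumes ceiling quantization and a simple relative-deviation definition; the exact model in the paper may differ.

```python
import math

def measured_bandwidth(true_bw_bps, packet_bits, clock_res_s):
    """Bandwidth estimate when the packet-pair dispersion can only be
    read off a clock with resolution clock_res_s (ceiling quantization
    assumed)."""
    true_disp = packet_bits / true_bw_bps
    quantized = math.ceil(true_disp / clock_res_s) * clock_res_s
    return packet_bits / quantized

def deviation_coefficient(true_bw_bps, packet_bits, clock_res_s):
    """Relative underestimation caused by clock quantization."""
    m = measured_bandwidth(true_bw_bps, packet_bits, clock_res_s)
    return (true_bw_bps - m) / true_bw_bps
```

For example, on a 100 Mbit/s path, 12000-bit packets produce a 0.12 ms dispersion; a 1 ms clock rounds that up to 1 ms and reports only 12 Mbit/s, while a sub-microsecond clock is essentially exact. This is the dependence on packet size, clock resolution, and real bandwidth that the deviation-coefficient model quantifies.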